An underrated and little understood virtue in our culture.
And a nice summary with many good, non-obvious and practical points. I've done a lot of what you describe in the section on process, and can testify to its effectiveness.
I'd be curious to hear any examples you have of integrity-maintaining ways of playing a role (ones which are non-obvious, and where a simpler high-integrity approach might naively conclude that one simply shouldn't play the role).
I'm curious, what countries have and haven't seen substantial focus on hand hygiene?
We have that here in Canada.
Also I somehow keep not giving holidays proper respect.
I thought you were an advocate of the Sabbath? 😉
"Free Day", while perhaps not the best option overall, has the merit that these days involving freeing the part of you that communicatess through your gut (and through what you feel like doing). During much of our working (and non-working) week, that part is overridden by our mind's sense of what we have to do.
By contrast, in OP's Recovery Days this part is either:
(a) doing the most basic recharging before it can do things it positively feels like and enjoys, or
(b) overridden or hijacked by addictive behaviours that it doesn't find as roundly rewarding as Free Day activities.
Addiction can also be seen as a lack of freedom.
I agree about the names. 'Rest' days are particularly confusing, since recovery days involve a lot of rest. A main characteristic of 'rest' days instead seems to be doing what you feel like and following your gut.
Yes, it seems more reasonable to treat it as evidence of an upper bound. Still weak evidence IMO, due to the self-reporting of perceived symptoms.
They say they haven't accounted for sampling bias, though, which makes me doubt the methodology overall, as sampling bias could be huge over 90 day timespans.
Yes, the article doesn't describe the exact methodology, but they could well be deriving the percentages from people who choose to self-report how they're doing after 30 and 90 days. These would be far more likely to be people who still feel unwell.
As a separate point, and I'm skirting around using the word "hypochondria" here, asking people if they still feel unwell or have symptoms a month or three after first contracting covid is going to get some fairly subjective answers. All in all I don't think this particular study tells us much about the likelihood of covid causing permanent damage.
That plus it's a more intelligent than average community with shared knowledge and norms of rationality. This is why I personally value LessWrong and am glad it's making something of a comeback.
These aren't letters from charities, asking for your money for themselves (even if they then spend some or most or all of it on others). If you get a stock letter signed by the president of Charity X, who you don't know, saying they hope your family is well, that's quite different.
Yep - we were thinking Dec 31st, but we've now decided to make it Jan 31st as some student EA groups have said they'd like to share it in their newsletters after students return from the holidays.
I think it's possible to send versions of these emails which aren't annoying. I've sent a bunch myself and people haven't seemed to find them annoying.
I disagree - I know Peter was genuinely interested in hearing back from people.
I'd like to draw your attention to this year's Effective Altruism Survey, which was recently released and which Peter Hurford linked to on LessWrong Main. As he says there:
This is a survey of all EAs to learn about the movement and how it can improve. The data collected in the survey is used to help EA groups improve and grow EA. Data is also used to populate the map of EAs, create new EA meetup groups, and create EA Profiles and the EA Donation Registry.
If you are an EA or otherwise familiar with the community, we hope you will take it using this link. All results will be anonymised and made publicly available to members of the EA community. As an added bonus, one random survey taker will be selected to win a $250 donation to their favorite charity.
For reference, here are the results from last year's survey, along with Peter's analysis of them. This includes a link to a Github repository including the raw data, with names and email addresses removed.
Notable findings included:
- The top three sources people in our sample first heard about EA from were LessWrong, friends, and Giving What We Can. LessWrong, GiveWell, and personal contact were cited as the top three reasons people continued to get more involved in EA. (Keep in mind that EAs in our sample might not be representative of all EAs, as discussed in .)
- 66.9% of the EAs in our sample were from the United States, the United Kingdom, and Australia, but we have EAs in many countries. You can see the public location responses visualized on the Map of EAs!
- The Bay Area had the most EAs in our sample, followed by London and then Oxford. New York and Washington DC have surprisingly many EAs and may have flown under the radar.
- The EAs in our sample donated over $5.23 million in total in 2013. The median 2013 donation was $450.
- 238 EAs in our sample donated 1% of their income or more, and 84 gave 10% of their income. You can see the past and planned donations that people have chosen to make public on the EA Donation Registry.
- The top three charities donated to by EAs in our sample were GiveWell's three picks for 2013: AMF, SCI, and GiveDirectly. MIRI was the fourth largest donation target, followed by unrestricted donations to GiveWell.
- Poverty was the most popular cause among EAs in our sample, followed by metacharity and then rationality.
- 33.1% of EAs in our sample were either vegan or vegetarian.
- 34.1% of EAs in our sample who indicated a career said they were aiming to earn to give.
You're conflating something here. The statement refers only to "what is true", not to your situation; each pronoun refers only to "what is true".
In that case saying "Owning up to the truth doesn't make the truth any worse" is correct, but doesn't settle the issue at hand as much as people tend to think it does. We don't just care about whether someone owning up to the truth makes the truth itself worse, which it obviously doesn't. We also care about whether it makes their or other people's situation worse, which it sometimes does.
I like the name it sounds like you may be moving to - "guesstimate".
Out of interest, do you think you'd use this, Owen?
And a friend requests an article comparing IQ and conscientiousness as predictors for different things.
I asked for a good general guide to IQ (and in particular its objectivity and importance) on the LW FB group a while back. I got a bunch of answers, including these standouts:
http://www.psych.utoronto.ca/users/reingold/courses/intelligence/cache/1198gottfred.html
http://www.newscientist.com/data/doc/article/dn19554/instant_expert_13_-_intelligence.pdf
But there's still plenty of room for improvement on those so I'd be curious to hear others' suggestions.
I've been looking for this all my life without even knowing it. (Well, at least for half a year.)
That being said, what I'm not interested in as my sole aim is to be maximally effective at doing good. I'm more interested in expressing my values in as large and impactful a way as possible - and in allowing others to do the same. This happens to coincide with doing lots and lots of good, but it definitely doesn't mean that I would begin to sacrifice my other values (eg fun, peace, expression) to maximize good.
It's interesting to ask to what extent this is true of everyone - I think we've discussed this before, Matt.
Your version and phrasing of what you're interested in is particular to you, but we could broaden the question out to ask how far people have moved away from having primarily self-centred drives which overwhelm others when significant self-sacrifice is on the table. I think some people have gone a long way in moving away from that, but I'm sceptical that any single human being goes the full distance. Most EAs plausibly don't make any significant self-sacrifices if measured in terms of their happiness significantly dipping.* The people I know who have gone the furthest may be Joey and Kate Savoie, with whom I've talked about these issues a lot.
* Which doesn't mean they haven't done a lot of good! If people can donate 5% or 10% or 20% of their income without becoming significantly less happy then that's great, and convincing people to do that is a low hanging fruit that we should prioritise, rather than focusing our energies on then squeezing out extra sacrifices that start to really eat into their happiness. The good consequences of people donating are what we really care about after all, not the level of sacrifice they themselves are making.
People's expectation clock starts running from the time they hit send. More importantly, deadlines related to the email content really set the agenda for how often to check your email.
Then change people's expectations, including those of the deadlines appropriate for tasks communicated by emails that people may not see for a while! (Partly a tongue-in-cheek answer - I know this may not be feasible, and you make a fair point.)
As far as I know, nobody who identifies with EA routinely makes individual decisions between personal purchases and donating. [ ... ] Very few people, if any, cut personal spending to the point where they have to worry about, e.g., electricity bills.
I do know - indeed, live with :S - a couple.
Effective altruism =/= utilitarianism
Here's the thread on this at the EA Forum: Effective Altruism and Utilitarianism
Potentially worth actually doing - what'd be the next step in terms of making that a possibility?
Relevant: a bunch of us are coordinating improvements to the identical EA Forum codebase at https://github.com/tog22/eaforum and https://github.com/tog22/eaforum/issues
Thanks, fixed, now points to http://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/
For my part, I'm interested in the connection to GiveWell's powerful advocacy of "cluster thinking". I'll think about this some more and post thoughts if I have time.
http://www.moneysavingexpert.com/ is the best way to learn about these.
Shop for Charity is much better - 5%+ directly to GiveWell-recommended charities, plus browser plugins people have made that apply this every time you buy from Amazon.
Did you edit your original comment?
Not that I recall
Some people offer arguments - eg http://philpapers.org/archive/SINTEA-3.pdf - and for some people it's a basic belief or value not based on argument.
This is a good solution when marginal money has roughly equal utility to Alice and Bob, but suffers otherwise.
If C doesn't want A to play music so loud, but it's A's right to do so, why should A oblige? What is in it for A?
Some (myself included) would say that A should oblige if doing so would increase total utility, even if there's nothing in it for A self-interestedly. (I'm assuming your saying A had a right to play loud music wasn't meant to exclude this.)
"Tit-for-tat is a better strategy than Cooperate-Bot."
Can you use this premise in an explicit argument that expected reciprocation should be a factor in your decision to be nice toward others? How big a factor, relative to others (e.g. what maximises utility)? If there's an easy link to such an argument, all the better!
What if people don't believe in 'duty' - eg certain sorts of consequentialists?
Upvotes/downvotes on LW might take care of the quality worry.
How about moral realist consequentialism? Or a moral realist deontology with defeasible rules like a prohibition on murder? These can certainly be coherent. I'm not sure what you require for them to be non-arbitrary, but one case for consequentialism's being non-arbitrary would be that it is based on a direct acquaintance with or perception of the badness of pain and goodness of happiness. (I find this case plausible.) For a paper on this, see http://philpapers.org/archive/SINTEA-3.pdf
Are you good to do these posts in the future? If not, is anyone else?
I largely agree with the post. Saying Robertson's thought experiment was off limits and he was fantasising about beheading and raping atheists is silly. I think many people's reaction was explained by their being frustrated with his faulty assumption that all atheists are necessarily (implicitly or explicitly) nihilists of the sort who'd say there's nothing wrong with murder.
One amendment I'd make to the post is that many error theorists and non-cognitivists wouldn't be on board with what the murderer is saying in the thought experiment. For example, they could be quasi-realists. I say this as someone who personally leans moral realist.
The latest from Scott:
I'm fine with anyone who wants reposting things for comments on LW, except for posts where I specifically say otherwise or tag them with "things i will regret writing"
In this thread some have also argued for not posting the most hot-button political writings.
Would anyone be up for doing this? Ataxerxes started with "Extremism in Thought Experiments is No Vice"
On fragmentation, I find Raemon's comment fairly convincing:
2) Maybe it'll split the comments? Sure, but the comments there are already huge and unwieldy (possibly more-than-dunbar's number worth of commenters) so I'm actually fine with that. Discussion over there is already pretty split up among comment threads in a hard to follow fashion.
To be clear, I don't have the time to do it personally, I'd just do it for any posts I'd particularly enjoy reading discussion on or discussing. So if someone else feels it's a good idea and Scott's cool with it, their doing it would be the best way to make it happen.
I would be more in favour of pushing SSC to have up/downvotes
That doesn't look like a goer given Scott's response that I quoted.
I would certainly be against linking every single post here given that some of them would be decidedly off topic.
Noting that it may be best to exclude some posts as off topic.
I'm not sure those topics are outside the norms of LW, aside from the puns. Cf. this discussion: http://lesswrong.com/r/discussion/lw/lj4/what_topics_are_appropriate_for_lesswrong/
There's discussion of this on the LW Facebook group: https://www.facebook.com/groups/144017955332/permalink/10155300261480333/
It includes this comment from Scott:
I've unofficially polled readers about upvotes for comments and there's been what looks like a strong consensus against it on some of the grounds Benjamin brings up. I'm willing to listen to other proposals for changing the comments, although if it's not do-able via an easy WordPress plugin someone else will have to do it for me.
SCI used them in some previous years.
Yes, LBTL actually doesn't have any GiveWell charities this year, and also charges the charities a 10% fee plus thousands up front; we don't take any cut. We're officially partnered with SCI on this and are their preferred venue.
Very sad. I enjoyed his books - I'd particularly recommend Small Gods for LessWrongers (it's also the one I enjoyed most in general).
Has anyone seen anything on how he died?