Posts

Why effective altruists should do Charity Science’s Christmas fundraiser 2015-12-01T23:59:14.688Z · score: 3 (6 votes)
What Makes the New Atheists So Charitable? 2015-10-29T01:50:45.618Z · score: -5 (11 votes)
SSC Discussion: No Time Like The Present For AI Safety Work 2015-06-05T02:34:28.645Z · score: 6 (7 votes)
SSC discussion: "bicameral reasoning", epistemology, and scope insensitivity 2015-05-27T05:08:37.621Z · score: 6 (9 votes)
SSC discussion: growth mindset 2015-04-11T15:13:59.432Z · score: 7 (8 votes)
Slate Star Codex: alternative comment threads on LessWrong? 2015-03-27T21:05:43.039Z · score: 28 (33 votes)
Join a major effective altruist fundraiser: get sponsored to eat for under $2.50 a day 2015-03-16T22:55:40.580Z · score: 8 (14 votes)
What topics are appropriate for LessWrong? 2015-01-12T18:58:16.791Z · score: 8 (11 votes)
Productivity poll: how frequently do you think you *should* check email? 2015-01-10T16:36:57.680Z · score: 2 (7 votes)
The new GiveWell recommendations are out: here's a summary of the charities 2014-12-01T21:20:23.819Z · score: 18 (19 votes)
Shop for Charity: how to earn proven charities 5% of your Amazon spending in commission 2014-11-24T08:29:32.996Z · score: 12 (13 votes)
Introducing an EA Donation Registry, covering existential risk donors 2014-10-21T14:10:19.784Z · score: 9 (10 votes)
Introducing Effective Altruist Profiles 2014-10-03T23:01:48.913Z · score: 18 (19 votes)
2014 Survey of Effective Altruists 2014-05-05T02:32:28.735Z · score: 27 (28 votes)
Email tone and status: !s, friendliness, 'please', etc. 2014-05-03T19:05:43.491Z · score: 1 (14 votes)
Jobs and internships available at the Centre for Effective Altruism: new 'EA outreach' roles added 2014-02-21T11:50:45.044Z · score: 5 (6 votes)
Jobs and internships available at the Centre for Effective Altruism 2014-02-07T12:16:19.212Z · score: 19 (20 votes)

Comments

Comment by tog on Stupid Questions December 2016 · 2016-12-23T20:21:28.770Z · score: 1 (1 votes) · LW · GW

That, plus it's a more intelligent than average community with shared knowledge and norms of rationality. This is why I personally value LessWrong and am glad it's making something of a comeback.

Comment by tog on Why effective altruists should do Charity Science’s Christmas fundraiser · 2015-12-11T06:11:45.968Z · score: 1 (1 votes) · LW · GW

These aren't letters from charities, asking for your money for themselves (even if they then spend some or most or all of it on others). If you get a stock letter signed by the president of Charity X, who you don't know, saying they hope your family is well, that's quite different.

Comment by tog on Take the EA survey, help the EA movement grow and potentially win $250 to your favorite charity · 2015-12-08T16:42:44.044Z · score: 2 (2 votes) · LW · GW

Yep - we were thinking Dec 31st, but we've now decided to make it Jan 31st as some student EA groups have said they'd like to share it in their newsletters after students return from the holidays.

Comment by tog on Why effective altruists should do Charity Science’s Christmas fundraiser · 2015-12-08T16:40:39.155Z · score: 0 (0 votes) · LW · GW

I think it's possible to send versions of these emails which aren't annoying. I've sent a bunch myself and people haven't seemed to find them annoying.

Comment by tog on Why effective altruists should do Charity Science’s Christmas fundraiser · 2015-12-08T16:39:40.357Z · score: 0 (0 votes) · LW · GW

I disagree - I know Peter was genuinely interested in hearing back from people.

Comment by tog on Take the EA survey, help the EA movement grow and potentially win $250 to your favorite charity · 2015-12-01T17:43:22.561Z · score: 3 (3 votes) · LW · GW

For reference, here are the results from last year's survey, along with Peter's analysis of them. This includes a link to a GitHub repository containing the raw data, with names and email addresses removed.

Notable findings included:

  • The top three sources from which people in our sample first heard about EA were LessWrong, friends, and Giving What We Can. LessWrong, GiveWell, and personal contact were cited as the top three reasons people continued to get more involved in EA. (Keep in mind that the EAs in our sample may not be representative of all EAs overall.)
  • 66.9% of the EAs in our sample were from the United States, the United Kingdom, and Australia, but we have EAs in many countries. You can see the public location responses visualized on the Map of EAs!
  • The Bay Area had the most EAs in our sample, followed by London and then Oxford. New York and Washington DC had surprisingly many EAs and may have flown under the radar.
  • The EAs in our sample donated over $5.23 million in total in 2013. The median 2013 donation was $450.
  • 238 EAs in our sample donated 1% of their income or more, and 84 EAs in our sample gave 10% of their income. You can see the past and planned donations that people have chosen to make public on the EA Donation Registry.
  • The top three charities donated to by EAs in our sample were GiveWell's three picks for 2013: AMF, SCI, and GiveDirectly. MIRI was the fourth largest donation target, followed by unrestricted donations to GiveWell.
  • Poverty was the most popular cause among EAs in our sample, followed by metacharity and then rationality.
  • 33.1% of EAs in our sample were either vegan or vegetarian.
  • 34.1% of EAs in our sample who indicated a career said they were aiming to earn to give.

Comment by tog on Open thread, Nov. 30 - Dec. 06, 2015 · 2015-11-30T08:50:02.713Z · score: 7 (7 votes) · LW · GW

I'd like to draw your attention to this year's Effective Altruism Survey, which was recently released and which Peter Hurford linked to in LessWrong Main. As he says there:

This is a survey of all EAs to learn about the movement and how it can improve. The data collected in the survey is used to help EA groups improve and grow EA. Data is also used to populate the map of EAs, create new EA meetup groups, and create EA Profiles and the EA Donation Registry.

If you are an EA or otherwise familiar with the community, we hope you will take it using this link. All results will be anonymised and made publicly available to members of the EA community. As an added bonus, one random survey taker will be selected to win a $250 donation to their favorite charity.

Take the EA Survey

Comment by tog on Take the EA survey, help the EA movement grow and potentially win $250 to your favorite charity · 2015-11-30T02:30:24.331Z · score: 6 (6 votes) · LW · GW

For reference, here are the results from last year's survey, along with Peter's analysis of them. This includes a link to a GitHub repository containing the raw data, with names and email addresses removed.

Notable findings included:

  • The top three sources from which people in our sample first heard about EA were LessWrong, friends, and Giving What We Can. LessWrong, GiveWell, and personal contact were cited as the top three reasons people continued to get more involved in EA. (Keep in mind that the EAs in our sample may not be representative of all EAs overall.)
  • 66.9% of the EAs in our sample were from the United States, the United Kingdom, and Australia, but we have EAs in many countries. You can see the public location responses visualized on the Map of EAs!
  • The Bay Area had the most EAs in our sample, followed by London and then Oxford. New York and Washington DC had surprisingly many EAs and may have flown under the radar.
  • The EAs in our sample donated over $5.23 million in total in 2013. The median 2013 donation was $450.
  • 238 EAs in our sample donated 1% of their income or more, and 84 EAs in our sample gave 10% of their income. You can see the past and planned donations that people have chosen to make public on the EA Donation Registry.
  • The top three charities donated to by EAs in our sample were GiveWell's three picks for 2013: AMF, SCI, and GiveDirectly. MIRI was the fourth largest donation target, followed by unrestricted donations to GiveWell.
  • Poverty was the most popular cause among EAs in our sample, followed by metacharity and then rationality.
  • 33.1% of EAs in our sample were either vegan or vegetarian.
  • 34.1% of EAs in our sample who indicated a career said they were aiming to earn to give.

Comment by tog on You Can Face Reality · 2015-10-02T15:24:55.822Z · score: 1 (1 votes) · LW · GW

You're conflating something here. The statement only refers to "what is true", not your situation; each pronoun refers only to "what is true".

In that case, saying "Owning up to the truth doesn't make the truth any worse" is correct, but it doesn't settle the issue at hand as much as people tend to think it does. We don't just care about whether someone owning up to the truth makes the truth itself worse, which it obviously doesn't. We also care about whether it makes their or other people's situation worse, which it sometimes does.

Comment by tog on Graphical Assumption Modeling · 2015-08-19T15:15:15.431Z · score: 0 (0 votes) · LW · GW

I like the name it sounds like you may be moving to: "guesstimate".

Comment by tog on Graphical Assumption Modeling · 2015-08-19T15:12:48.300Z · score: 0 (0 votes) · LW · GW

Do you think you'd use this, out of interest, Owen?

Comment by tog on Yvain's most important articles · 2015-08-18T05:12:47.964Z · score: 0 (0 votes) · LW · GW

And a friend requests an article comparing IQ and conscientiousness as predictors of different outcomes.

Comment by tog on Yvain's most important articles · 2015-08-18T05:12:20.554Z · score: 1 (1 votes) · LW · GW

I asked for a good general guide to IQ (and in particular its objectivity and importance) on the LW FB group a while back. I got a bunch of answers, including these standouts:

http://www.psych.utoronto.ca/users/reingold/courses/intelligence/cache/1198gottfred.html

http://www.newscientist.com/data/doc/article/dn19554/instant_expert_13_-_intelligence.pdf

But there's still plenty of room for improvement on those, so I'd be curious to hear others' suggestions.

Comment by tog on Yvain's most important articles · 2015-08-18T05:10:01.297Z · score: 1 (1 votes) · LW · GW

I've been looking for this all my life without even knowing it. (Well, at least for half a year.)

Comment by tog on Effective Altruism from XYZ perspective · 2015-07-15T20:16:07.797Z · score: 0 (0 votes) · LW · GW

That being said, what I'm not interested in as my sole aim is to be maximally effective at doing good. I'm more interested in expressing my values in as large and impactful a way as possible - and in allowing others to do the same. This happens to coincide with doing lots and lots of good, but it definitely doesn't mean that I would begin to sacrifice my other values (e.g. fun, peace, expression) to maximize good.

It's interesting to ask to what extent this is true of everyone - I think we've discussed this before, Matt.

Your version and phrasing of what you're interested in is particular to you, but we could broaden the question out to ask how far people have moved away from having primarily self-centred drives which overwhelm others when significant self-sacrifice is on the table. I think some people have gone a long way in moving away from that, but I'm sceptical that any single human being goes the full distance. Most EAs plausibly don't make any significant self-sacrifices, if measured in terms of their happiness significantly dipping.* The people I know who have gone the furthest may be Joey and Kate Savoie, with whom I've talked about these issues a lot.

* Which doesn't mean they haven't done a lot of good! If people can donate 5% or 10% or 20% of their income without becoming significantly less happy then that's great, and convincing people to do that is low-hanging fruit that we should prioritise, rather than focusing our energies on then squeezing out extra sacrifices that start to really eat into their happiness. The good consequences of people donating are what we really care about, after all, not the level of sacrifice they themselves are making.

Comment by tog on Productivity poll: how frequently do you think you *should* check email? · 2015-07-15T08:14:12.588Z · score: 0 (0 votes) · LW · GW

People's expectation clock starts running from the time they hit send. More importantly, deadlines related to the email content really set the agenda for how often to check your email.

Then change people's expectations, including those of the deadlines appropriate for tasks communicated by emails that people may not see for a while! (Partly a tongue-in-cheek answer - I know this may not be feasible, and you make a fair point.)

Comment by tog on Effective Altruism from XYZ perspective · 2015-07-13T01:27:27.153Z · score: 1 (1 votes) · LW · GW

As far as I know, nobody who identifies with EA routinely makes individual decisions between personal purchases and donating. [ ... ] Very few people, if any, cut personal spending to the point where they have to worry about, e.g., electricity bills.

I do know - indeed, live with :S - a couple.

Comment by tog on Effective Altruism from XYZ perspective · 2015-07-12T16:44:17.967Z · score: 0 (0 votes) · LW · GW

Effective altruism ≠ utilitarianism

Here's the thread on this at the EA Forum: Effective Altruism and Utilitarianism

Comment by tog on 'Effective Altruism' as utilitarian equivocation. · 2015-07-12T16:43:34.994Z · score: 0 (0 votes) · LW · GW

Here's the thread on this at the EA Forum: Effective Altruism and Utilitarianism

Comment by tog on Lesswrong, Effective Altruism Forum and Slate Star Codex: Harm Reduction · 2015-06-19T07:11:52.304Z · score: 1 (1 votes) · LW · GW

Potentially worth actually doing - what'd be the next step in terms of making that a possibility?

Relevant: a bunch of us are coordinating improvements to the identical EA Forum codebase at https://github.com/tog22/eaforum and https://github.com/tog22/eaforum/issues

Comment by tog on SSC Discussion: No Time Like The Present For AI Safety Work · 2015-06-05T17:00:50.820Z · score: 0 (0 votes) · LW · GW

Thanks, fixed, now points to http://slatestarcodex.com/2015/05/29/no-time-like-the-present-for-ai-safety-work/

Comment by tog on SSC discussion: "bicameral reasoning", epistemology, and scope insensitivity · 2015-05-27T05:11:09.228Z · score: 1 (1 votes) · LW · GW

For my part, I'm interested in the connection to GiveWell's powerful advocacy of "cluster thinking". I'll think about this some more and post thoughts if I have time.

Comment by tog on Effective effective altruism: Get $400 off your next charity donation · 2015-04-23T17:29:58.320Z · score: 0 (0 votes) · LW · GW

http://www.moneysavingexpert.com/ is the best way to learn about these.

Comment by tog on Effective effective altruism: Get $400 off your next charity donation · 2015-04-20T22:52:55.218Z · score: 0 (0 votes) · LW · GW

Shop for Charity is much better - 5%+ goes directly to GiveWell-recommended charities, plus there are browser plugins people have made that apply this every time you buy from Amazon.

Comment by tog on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-20T22:47:33.593Z · score: 0 (0 votes) · LW · GW

Did you edit your original comment?

Not that I recall.

Comment by tog on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-15T15:07:08.890Z · score: 0 (0 votes) · LW · GW

Some people offer arguments - e.g. http://philpapers.org/archive/SINTEA-3.pdf - and for some people it's a basic belief or value not based on argument.

Comment by tog on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-11T16:20:40.655Z · score: 0 (0 votes) · LW · GW

This is a good solution when marginal money has roughly equal utility to Alice and Bob, but suffers otherwise.

Comment by tog on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-11T16:18:14.893Z · score: 1 (1 votes) · LW · GW

If C doesn't want A to play music so loud, but it's A's right to do so, why should A oblige? What is in it for A?

Some (myself included) would say that A should oblige if doing so would increase total utility, even if there's nothing in it for A self-interestedly. (I'm assuming your saying A had a right to play loud music wasn't meant to exclude this.)

Comment by tog on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-11T16:14:13.635Z · score: 0 (0 votes) · LW · GW

"Tit-for-tat is a better strategy than Cooperate-Bot."

Can you use this premise in an explicit argument that expected reciprocation should be a factor in your decision to be nice toward others? How big a factor, relative to others (e.g. what maximises utility)? If there's an easy link to such an argument, all the better!

Comment by tog on What level of compassion do you consider normal, expected, mandatory etc. ? · 2015-04-11T16:10:54.895Z · score: 0 (0 votes) · LW · GW

What if people don't believe in 'duty' - e.g. certain sorts of consequentialists?

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-29T20:49:45.797Z · score: 1 (1 votes) · LW · GW

Upvotes/downvotes on LW might take care of the quality worry.

Comment by tog on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-03-29T06:50:31.600Z · score: 2 (2 votes) · LW · GW

How about moral realist consequentialism? Or a moral realist deontology with defeasible rules like a prohibition on murdering? These can certainly be coherent. I'm not sure what you require for them to be non-arbitrary, but one case for consequentialism's being non-arbitrary would be that it is based on a direct acquaintance with or perception of the badness of pain and goodness of happiness. (I find this case plausible.) For a paper on this, see http://philpapers.org/archive/SINTEA-3.pdf

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-29T01:18:12.358Z · score: 0 (0 votes) · LW · GW

Are you good to do these posts in the future? If not, is anyone else?

Comment by tog on Discussion of Slate Star Codex: "Extremism in Thought Experiments is No Vice" · 2015-03-28T15:17:18.093Z · score: 4 (6 votes) · LW · GW

I largely agree with the post. Saying that Robertson's thought experiment was off-limits and that he was fantasising about beheading and raping atheists is silly. I think many people's reaction was explained by their frustration with his faulty assumption that all atheists are necessarily (implicitly or explicitly) nihilists of the sort who'd say there's nothing wrong with murder.

One amendment I'd make to the post is that many error theorists and non-cognitivists wouldn't be on board with what the murderer is saying in the thought experiment. For example, they could be quasi-realists. I say this as someone who personally leans moral realist.

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-28T15:06:28.656Z · score: 1 (1 votes) · LW · GW

The latest from Scott:

I'm fine with anyone who wants reposting things for comments on LW, except for posts where I specifically say otherwise or tag them with "things i will regret writing"

In this thread some have also argued for not posting the most hot-button political writings.

Would anyone be up for doing this? Ataxerxes started with "Extremism in Thought Experiments is No Vice".

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-28T15:03:11.811Z · score: 1 (1 votes) · LW · GW

On fragmentation, I find Raemon's comment fairly convincing:

2) Maybe it'll split the comments? Sure, but the comments there are already huge and unwieldy (possibly more-than-dunbar's number worth of commenters) so I'm actually fine with that. Discussion over there is already pretty split up among comment threads in a hard to follow fashion.

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-28T02:43:09.055Z · score: 2 (2 votes) · LW · GW

To be clear, I don't have the time to do it personally; I'd just do it for any posts I'd particularly enjoy reading discussion on or discussing. So if someone else feels it's a good idea and Scott's cool with it, their doing it would be the best way to make it happen.

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-28T01:33:03.271Z · score: 2 (2 votes) · LW · GW

I would be more in favour of pushing SSC to have up/downvotes

That doesn't look like a goer given Scott's response that I quoted.

I would certainly be against linking every single post here given that some of them would be decisively off topic.

Noting that it may be best to exclude some posts as off topic.

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-27T22:01:56.386Z · score: 4 (6 votes) · LW · GW

I'm not sure those topics are outside the norms of LW, apart from the puns. Cf. this discussion: http://lesswrong.com/r/discussion/lw/lj4/what_topics_are_appropriate_for_lesswrong/

Comment by tog on Slate Star Codex: alternative comment threads on LessWrong? · 2015-03-27T22:00:30.657Z · score: 4 (4 votes) · LW · GW

There's discussion of this on the LW Facebook group: https://www.facebook.com/groups/144017955332/permalink/10155300261480333/

It includes this comment from Scott:

I've unofficially polled readers about upvotes for comments and there's been what looks like a strong consensus against it on some of the grounds Benjamin brings up. I'm willing to listen to other proposals for changing the comments, although if it's not do-able via an easy WordPress plugin someone else will have to do it for me.

Comment by tog on Join a major effective altruist fundraiser: get sponsored to eat for under $2.50 a day · 2015-03-19T23:05:23.284Z · score: 2 (2 votes) · LW · GW

SCI used them in some previous years.

Comment by tog on Join a major effective altruist fundraiser: get sponsored to eat for under $2.50 a day · 2015-03-19T16:14:50.973Z · score: 2 (2 votes) · LW · GW

Yes, LBTL actually doesn't have any GiveWell charities this year, and also charges the charities a 10% fee plus thousands up front; we don't take any cut. We're officially partnered with SCI on this and are their preferred venue.

Comment by tog on [LINK] Terry Pratchett is dead · 2015-03-12T17:49:02.885Z · score: 4 (4 votes) · LW · GW

Very sad. I enjoyed his books - I'd particularly recommend Small Gods for LessWrongers (it's also the one I enjoyed most in general).

Has anyone seen anything on how he died?

Comment by tog on Open thread, Mar. 2 - Mar. 8, 2015 · 2015-03-04T16:37:44.942Z · score: 5 (5 votes) · LW · GW

What gets more viewership, an unpromoted post in main or a discussion post? Also, are there any LessWrong traffic stats available?

Comment by tog on Announcing LessWrong Digest · 2015-02-25T10:17:51.564Z · score: 1 (1 votes) · LW · GW

Great job! Evan is also creating an effective altruism digest: https://www.facebook.com/groups/dotimpact/permalink/415596685274654/

Comment by tog on [QUESTION]: LessWrong web traffic data? · 2015-02-16T12:15:21.873Z · score: 0 (0 votes) · LW · GW

Did you ever find the answer to this?

Comment by tog on What topics are appropriate for LessWrong? · 2015-01-27T08:16:56.848Z · score: 0 (0 votes) · LW · GW

That seems quite a bit more restrictive than what currently gets posted, no? (I ask because I don't follow the site that closely.)

Comment by tog on 2015 Repository Reruns - Boring Advice Repository · 2015-01-17T15:36:35.393Z · score: 2 (2 votes) · LW · GW

Make a will. It's worth it, and too easy to put off. Here's a will-writing guide I wrote, including free ways to do so (it also covers how to leave money to charity, but is a complete guide in its own right).

Comment by tog on 2015 Repository Reruns - Boring Advice Repository · 2015-01-17T15:35:23.034Z · score: 0 (0 votes) · LW · GW

If you give to charity, use the recommendations at GiveWell.org. (Familiar and boring to most people here I know, but new people might see this thread!)

Comment by tog on Open thread, Jan. 12 - Jan. 18, 2015 · 2015-01-15T10:15:59.989Z · score: 1 (1 votes) · LW · GW

Amusing product you could use with this - the Pavlok, which gives you electric shocks (http://pavlok.com/).

There was also a Kickstarter device that drew your blood as a penalty, but Kickstarter banned it.