Open letter to young EAs

post by Leif Wenar · 2024-10-11T19:49:10.818Z · LW · GW · 10 comments

In March I published a critique of EA in Wired. The responses I got, especially from young EAs, prompted me to write an open letter to my young friends in EA. 

https://docs.google.com/document/d/1ytfQOfmjWTDiGdjuyNBfWJ_g-644y_Pd/edit?usp=sharing&rtpof=true&sd=true

Thanks for reading.

10 comments

Comments sorted by top scores.

comment by abstractapplic · 2024-10-11T22:20:05.334Z · LW(p) · GW(p)

As with the last article, I think this is almost entirely incoherent/wrong; as with the last article, I'm strong-upvoting it anyway because I think it makes ~1.5 good and important points I've not seen made anywhere else, and they're worth the chaff. (I'd go into more detail but I don't want anyone leaning on my summary instead of reading it themselves.)

. . . is there a reason this is a link to a google doc, instead of a copypaste?[1]

  1. ^ and add footnotes.

Replies from: johnswentworth, SaidAchmiz
comment by johnswentworth · 2024-10-11T23:44:04.642Z · LW(p) · GW(p)

Maybe list the 1.5 good points anyway? I read through the first section, concluded that this guy has no idea how credit assignment works (admittedly credit assignment is hard, but he's failing even at an easy case), then saw him saying that GiveWell's claims should only be considered credible after assessment by academic "experts" without actually laying out the object-level critique (which was supposedly in a Wired article), and at that point I concluded that this was not worth my time. So if there are 1.5 good points in there, it would be useful to surface them for people (like me) who are not going to dig through that much junk to find the gems.

Replies from: Viliam
comment by Viliam · 2024-10-13T00:32:59.870Z · LW(p) · GW(p)

I liked the point (as I understood it) that effective altruism has a problem with good actions that also produce bad outcomes. Imagine two charities: the first one saves the lives of a thousand people; the second one murders ten people and saves the lives of two thousand. Which one is an EA supposed to donate to?

Replies from: martin-randall, ic-rainbow
comment by Martin Randall (martin-randall) · 2024-10-14T21:36:42.732Z · LW(p) · GW(p)

If I knew a charity had murdered ten people I would report the charity to the appropriate authorities. I wouldn't donate to the charity because that would make me an accessory to murder.

comment by IC Rainbow (ic-rainbow) · 2024-10-13T08:28:46.613Z · LW(p) · GW(p)

The first one. Why?

Do you have a more concrete example? Preferably one drawn from actual EA causes.

Replies from: Viliam
comment by Viliam · 2024-10-14T17:37:18.178Z · LW(p) · GW(p)

Some people have suggested overthrowing bad governments (or something similar) as a possible EA cause, but that is nowhere near mainstream EA opinion, as far as I know...

The problem is rather philosophical. The idea behind EA is that if people die as a result of your inaction, that kinda makes you responsible for their deaths. (As opposed to the mainstream idea that while saving people's lives is nice, not giving a fuck about them dying is merely... neutral.) But if you accept this, then even deciding to donate to cause X instead of cause Y kinda makes you responsible for the people who die as a result of you not donating to Y. So in some sense, all EAs are already choosing the second option, inevitably; they are just hypocritical about it.

If choosing charity X over charity Y, and thereby saving a thousand people while letting ten other people die, is considered a good choice, why is killing ten people to save a thousand more considered bad?

The mainstream answer is that killing is bad, but letting die is... kinda not bad, or at least not comparably bad. But EAs reject the mainstream answer, so what is their answer?

.

There is a practical objection: whoever promises to save a thousand people by killing ten usually ends up killing ten or more people without actually saving the thousand. Therefore such statements cannot be taken at face value. But that's avoiding the hypothetical. Suppose that someone proposes to kill ten people to save millions, and after you apply all your skepticism and the outside view, you conclude that killing the ten people will actually only save about a thousand people on average. Should you do it?

Imagine a vaccine against malaria that would make everyone perfectly immune, but as a side effect of vaccination ten people would die. Would EAs support it?

Replies from: martin-randall
comment by Martin Randall (martin-randall) · 2024-10-14T21:30:36.544Z · LW(p) · GW(p)

The medical profession supports medical treatments that save lives but very occasionally have lethal side effects. I defer to their judgement but it makes sense to me.

comment by Said Achmiz (SaidAchmiz) · 2024-10-11T23:59:59.585Z · LW(p) · GW(p)

I concur with @johnswentworth’s comment; I read approximately as far as he did, and came to the same conclusion. I would also like to see the “~1.5 good and important points” listed!

comment by nc · 2024-10-11T21:43:01.875Z · LW(p) · GW(p)

Your general argument rings true to my ears - except the part about AI safety. It is very hard to interact with AI safety without entering the x-risk sphere, as shown by this piece of research by the Cosmos Institute, where the x-risk sphere accounts for almost two-thirds of total funding (I have some doubts about the accounting). Your argument about Mustafa Suleyman strikes me as a "just-so" story - I do wish it were replicable, but I would be surprised, particularly given AI safety's sense of urgency.

I'm here because there truly is no better place, and I mean that in both a praiseworthy and an upsetting sense. If you think it's misguided, then we, being on the same side, need to show the strength of our alternative, don't we?

comment by ZY (AliceZ) · 2024-10-11T22:35:04.415Z · LW(p) · GW(p)

I also had similar feelings about the simplicity part, and about how theory (the idealized situation) and execution can be very different. I also agree on the conflict part (and, to me, many different types of conflicts). And I super, super strongly support the section on The humans behind the numbers.
(These thoughts still persist after taking intro-to-EA courses.)

I think EA's big overall intentions are good, and I am happy/energized to see how passionate people are, at least compared to no altruism at all; but the details/execution are not quite there for me.