Increasing Demandingness in EA

post by jefftk (jkaufman) · 2022-04-29T01:20:01.507Z · LW · GW · 22 comments

In thinking about what it means to lead a good life, people often struggle with the question of how much is enough: how much does our morality demand of us? People have given a wide range of answers, but effective altruism's historical answer has been "give 10%". Yes, it's better if you donate a larger fraction, switch to a job where you can earn more, or put your career to use directly, but if you're giving 10% to effective charity you're doing your share: you've met the bar to consider yourself an EA, and we're happy to have you on board.

I say "historically", because it feels like this is changing; I think EAs would generally still agree with my paragraph above, but while in 2014 it would have been uncontroversial now I think some would disagree and others would have to think for a while.

EA started out as a funding-constrained movement. Whether you looked at global poverty, existential risk, animal advocacy, or movement building, many excellent people were working as volunteers or well below what they could earn because there just wasn't the money to offer competitive pay. Every year GiveWell's total room for more funding was a multiple of their money moved. In this environment, the importance of donations was clear.

EA has been pretty successful in raising money, however, and the primary constraint has shifted from money to people. In 2015, 80k made a strong case for focusing on what people can do directly, not mediated by donations, and this case is even stronger today. Personally, I've found this pretty convincing, though in 2017 I decided to return to earning to give because it still seemed like the best fit for me.

What this means, however, is that we are now trying to build a different sort of movement than we were ten years ago. While people who've dedicated their careers to the most critical things have made up the core of the movement all along, the ratio of impact has changed.

Imagine you have a group of people donating 10% to the typical mix of EA causes. You are given the option to convince one of them to start working on one of 80k's priority areas, but in doing so N others will get discouraged and stop donating. This is a bit of a false dilemma, since ideally these would not be in conflict, but let's stick with this for a bit because I think it is illustrative. In 2012 I would have put a pretty low number for N, perhaps ~3, partly because we were low on money, but also because we were starting a movement. In 2015 I would have put N at ~30: a factor of 6 because of the difference between 10% and the most that people in typical earning to give roles can generally donate (~60%) and a factor of 5 because of the considerations in Why you should focus more on talent gaps, not funding gaps. With the large recent increases in EA-influenced spending I'd roughly put N at ~300 [1], though I'd be interested in better estimates.
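As a rough illustration, here is the arithmetic behind the 2015 estimate, spelled out; the inputs are just the ballpark figures above, not precise measurements:

    # Rough reconstruction of the 2015 estimate of N ~= 30; all inputs are
    # the ballpark figures from the paragraph above.
    typical_donation_fraction = 0.10  # the "10% and you're doing your part" baseline
    max_etg_donation_fraction = 0.60  # most that typical earning-to-give roles can donate
    donation_factor = max_etg_donation_fraction / typical_donation_fraction  # ~6x
    talent_vs_funding_factor = 5      # from "talent gaps, not funding gaps"

    n_2015 = donation_factor * talent_vs_funding_factor
    print(f"2015 estimate: N ~= {n_2015:.0f}")  # ~30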

Unfortunately, a norm of "10% and you're doing your part" combines very poorly with the reality that devoting 100% of someone's career can have ~300x the impact of donating 10%. This makes EA feel much more demanding than it used to: instead of saying "look at the impact you can have by donating 10%", we're now generally saying "look at the impact you can have by building your entire career around work on an important problem."

(This has not applied evenly. People who were already planning to make EA central to their career are generally experiencing EA as less demanding: pay in EA organizations has gone up, there is less stress around fundraising, and there is less of a focus on frugality or other forms of personal sacrifice. In some cases these changes mean that if someone does decide to shift their career it is less of a sacrifice than it would've been, though that does depend on how the field they enter is funded.)

While not everyone is motivated by a sense that they should be doing their part (see: excited vs. obligatory altruism), I do think this is a major motivation for many people. Figuring out how to encourage people who would thrive in an EA-motivated career to go in that direction, without discouraging and losing people for whom that would be too large a sacrifice, seems really important, and I don't see how to solve it.

Inspired by conversations with Alex Gordon-Brown, Denise Melchin, and others.


[1] I expect people working in EA movement building have estimates of (a) the value of a GWWC pledge and (b) the value of a similar person going into an 80k priority area; N is essentially the ratio of these. I did a small amount of looking, however, and didn't see public estimates. I guessed ~$10k/y for (a) and ~$3M/y for (b), giving N=~300. Part of why I have (b) this high is that I think it's now difficult to turn additional money into good work on the most important areas. If you would give a much higher number for (b), my guess is that you are imagining someone much stronger than the typical person donating 10%.
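The same arithmetic for the current estimate, using the guessed values for (a) and (b) above:

    # Footnote [1] arithmetic; (a) and (b) are the guessed annual values above.
    value_of_pledge = 10_000          # (a): rough value of a GWWC pledge, $/yr
    value_of_direct_work = 3_000_000  # (b): rough value of priority-area work, $/yr

    n_now = value_of_direct_work / value_of_pledge
    print(f"current estimate: N ~= {n_now:.0f}")  # ~300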

22 comments

Comments sorted by top scores.

comment by Alexander (alexander-1) · 2022-04-29T07:01:05.951Z · LW(p) · GW(p)

Premise: people are fundamentally motivated by the "status" rewarded to them by those around them.

I have experienced the phenomenon of demandingness described in your post, and you've elucidated it brilliantly. I regularly attend in-person EA events, and I can see status being rewarded according to impact, which is very different from how it's typically rewarded in broader society. (This is not necessarily a bad thing.) The status hierarchy in EA communities goes something like this:

  • People who've dedicated their careers to effective causes. Or philosophers at Oxford.
  • People who facilitate people who've dedicated their careers to effective causes, e.g. research analysts.
  • People who donate 99% of their income to effective causes.
  • People who donate 98% of their income to effective causes.
  • ...
  • People who donate 1% of their income to effective causes.
  • People who donate their time and money to ineffective causes.
  • People who don't donate.
  • People who think altruism is bad.

This hierarchy is very "visible" within the in-person circles I frequent, being enforced by a few core members. I recently convinced a non-EA friend to tag along, and following the event, they said, "I felt incredibly unwelcome". Within 5 minutes, one of the organisers asked my friend, "What charities do you donate to?" My friend said, "I volunteer at a local charity, and my SO works in sexual health awareness." Following a bit of back-and-forth debate, the EA organiser looked disappointed and said "I'm confused", then turned his back on my friend. [This is my vague recollection of what happened, it's not an exact description, and my friend had pre-existing anti-EA biases.]

Upholding the core principles of EA is necessary. Without upholding particular principles at the expense of the rest, the organisation ceases to exist. However, the thing about optimisation and effectiveness is that if we're naively and greedily maximising, we're probably doing it wrong. If we are pushing people away from the cause by rewarding them with low status as soon as we meet them, we will not win many allies.

If we reward low status to people who don't donate as much as others, we might cause these people to halt their donations, quit our game, and instead play a different game in which they are rewarded with relatively more status.

I don't know how to solve this problem either, and I think it is hard. We can only do so much to "design" culture and influence how status is rewarded within communities. Culture is mostly a thing that just happens due to many agents interacting in a world.

I watched an interview with Toby Ord a while back, and during the Q&A session, the interviewer asked Ord:

Given your analysis of existential risks, do you think people should be donating purely to long-term causes?

Ord's response was fantastic. He said:

No. I do think this is very important, and there is a strong case to be made that this is the central issue of our time. And potentially the most cost-effective as well. Effective Altruism would be much the worse if it specialised completely in one area. Having a breadth of causes that people are interested in, united by their interest in effectiveness, is central to the community's success. [...] We want to be careful not to get into criticising each other for supporting the second-best thing.

Extending this logic, let's not get into criticising people for doing good. We can argue and debate how we can do good better, but let's not attack people for doing whatever good they can and are willing to do.

I have seen snide comments about Planned Parenthood floating around rationalist and EA communities, and I find them distasteful. Yeah, sure, donating to malaria prevention saves more lives. But again, the thing about optimisation is that if we are pushing people away from our cause by being parochial, then we're probably doing a lousy job at optimising.

Replies from: quanticle
comment by quanticle · 2022-05-02T00:01:31.267Z · LW(p) · GW(p)

Following a bit of back-and-forth debate, the EA organiser looked disappointed and said "I'm confused", then turned his back on my friend.

I don't like analogizing EA to a religious movement, but I think such an analogy is appropriate in this instance. If I went to a Christian gathering, accompanying a devout friend, and someone came up to me and asked, "Oh, I haven't seen you before, which church do you attend?" I would reply, "Oh, I'm not Christian." Then if, after a bit of discussion, that person chose to turn and walk away, I wouldn't be offended. In fact, them turning and walking away is one of the better outcomes. Far better than them attempting to continue proselytizing at me for the rest of the event.

In this case, the organizer encountered a person who was clearly not bought into EA, ascertained that they were not bought into EA after a short discussion, and then chose to walk away. While that's not the friendliest response, it's hardly the worst thing in the world.

Replies from: alexander-1
comment by Alexander (alexander-1) · 2022-05-03T11:27:17.130Z · LW(p) · GW(p)

I agree. I don't think this kind of behaviour is the worst thing in the world. I just think it is instrumentally irrational.

comment by JBlack · 2022-04-30T02:40:01.995Z · LW(p) · GW(p)

I currently have a salary of around $80k/year.

If you believe that I could instead provide the same benefits to civilization by being directly employed by some well-run EA organization as $2.4M/year in donations, then I will happily do this for only $1M/year. Everyone will be very much better off.

Does this sound like a good deal? If not, then how does this square with the N ~300 estimate?
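(One way to reconstruct where a figure like $2.4M/year comes from under the post's estimates; this is my reading of the comment, not a derivation the commenter gives:)

    # Hypothetical reconstruction of the $2.4M figure using the post's N ~= 300.
    salary = 80_000           # commenter's stated salary, $/yr
    donation = 0.10 * salary  # the 10%-pledge baseline: $8k/yr
    n = 300                   # post's estimate: one direct worker ~ 300 typical 10% donors
    direct_work_equivalent = n * donation
    print(f"${direct_work_equivalent:,.0f}/yr")  # $2,400,000/yr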

Replies from: AnnaSalamon, AnnaSalamon, thomas-kwa
comment by AnnaSalamon · 2022-04-30T21:59:20.928Z · LW(p) · GW(p)

Some components of my own models, here:

  1. I think most of the better-funded EA organizations would not prefer most LWers working there for $1M/yr, nor for a more typical salary, nor for free.

    (Even though many of these same LW-ers are useful many other places.)
     
  2. I think many of the better-funded EA organizations would prefer (being able to continue employing at least their most useful staff members) to (receiving an annual donation equal to 30x what that staff member could make in industry).
     
  3. If a typical LWer somehow really decided, deep in themselves, to try to do good with all their heart and all their mind and creativity… or to do as much of this as was compatible with still working no more than 40 hrs/week and having a family and a life… I suspect this would be quite considerably more useful than donating 10% of their salary to some already-funded-to-near-saturation EA organization.  (Since the latter effect is often small.)  (Though some organizations are not that well-funded!  So this varies by organization IMO.)

2 and 3 are as far as I can get toward agreeing with the OP's estimated factor of 300. It doesn't get me all the way there (well, I guess it might for the mean person, but certainly not for the median; plus there are assumptions implicit in trying to use a multiplier here that I don't buy or can't stomach). But it makes me sort of empathize with how people can utter sentences like those.

In terms of what to make of this:

Sometimes people jam 1 and 2 together, to get a perspective like “most people are useless compared to those who work at EA organizations.”  I think this is not quite right, because “scaling an existing EA organization’s impact” is not at all the only way to do good, and my guess is that the same people may be considerably better at other ways to do good than they are at adding productivity to an [organization that already has as many staff as it knows how to use].

One possible alternate perspective:

“Many of the better funded EA organizations don’t much know how to turn additional money, or additional skilled people, into doing their work faster/more/better.  So look for some other way to do good and don’t listen too much to them for how to do it.  Rely on your own geeky ideas, smart outside friends who've done interesting things before, common sense and feedback loops and experimentation and writing out your models and looking for implications/inconsistencies, etc. in place of expecting EA to have a lot of pre-found opportunities that require only your following of their instructions.”

comment by AnnaSalamon · 2022-04-30T20:19:12.633Z · LW(p) · GW(p)

Great use of logic to try to force us to have models, and to make those models explicit!

comment by Thomas Kwa (thomas-kwa) · 2022-04-30T19:20:46.058Z · LW(p) · GW(p)

From the perspective of the EA org, there are hires for whom this would be a good decision (I've heard >$1M pay numbers thrown around for critical software engineering roles that are disconnected from EA strategy, or for Terence Tao). But it's not obviously good in every case. Here's some of the reasoning I've heard:

  • People often do better work if they're altruistically motivated than if they're mercenaries-- there's a "human alignment problem". When you underpay, you don't attract top talent. When you overpay, you attract more top talent but also more mercenaries. The optimum seems to be somewhere around top industry pay (in other industries, employees often provide the companies far more value than their salary, and the equilibrium for companies is to match median industry pay adjusting a bit for their circumstances). EA orgs are currently transitioning away from the underfunded nonprofit regime, but I think the equilibrium is still lower than top industry pay in many cases (e.g. when EA work is more interesting or saliently meaningful than industry work, and top talent differentially seeks out interesting or saliently meaningful work). Due to the factors below, I don't see the optimum being substantially more than industry.
  • People (even altruists) don't like being paid less than someone else for more impact. Your slightly more talented or harder-working colleague might demand to be paid $1.2 million. If not, this sets up weird dynamics where selfish people are paid 5x more than altruists.
  • People (even altruists) don't like getting pay cuts, and often expect pay raises. Paying someone $1M often raises their expectations so they expect $1M * 1.04^n in year n until they retire. This can sometimes be fixed with workplace culture.

edit: the below thing is wrong

The last two factors are especially large because EA has much more human capital than financial capital (edit: as measured by valuation) -- I would guess something like a 5x ratio. If we paid everyone at EA orgs 41% of what they're worth, and they spent it selfishly, this would kill >30% of the surplus from employing all the EAs and force EA funders (who are invested in high-risk, high-EV companies like FTX) to de-risk in order to pay consistent salaries.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2022-04-30T19:30:42.978Z · LW(p) · GW(p)

because EA has much more human capital than financial capital

Is this a typo? It seems in direct contradiction with the OP's claim that EA is people-bottlenecked and not funding-bottlenecked, which I otherwise took you to be agreeing with.

Replies from: thomas-kwa
comment by Thomas Kwa (thomas-kwa) · 2022-04-30T20:24:43.162Z · LW(p) · GW(p)

I mean this in a narrow sense (edited to clarify) based on marginal valuations: I'd much rather delete 1% of EA money than 1% of EA human capital. So we can think of human capital as being worth more than money. I think there might be problems with this framing, but the core point applies: even though there are far fewer people than money (when using the conversion ratio implied by industry salary), the counterfactual value of people adds up to more than money. So paying everyone 40% of their counterfactual value would substantially deplete EA financial capital.

I think this is equivalent to saying that the marginal trade we're making is much worse than the average trade (where trade = buying labor with money).

Replies from: AnnaSalamon
comment by AnnaSalamon · 2022-04-30T20:58:37.829Z · LW(p) · GW(p)

I could still be missing something, but I think this doesn't make sense. If the marginal numbers are as you say and if EA organizations started paying everyone 40% of their counterfactual value, the sum of “EA financial capital” would go down, and so the counterfactual value-in-“EA”-dollars of marginal people would also go down, and so the numbers would probably work out with lower valuations per person in dollars. Similarly, if “supply and demand” works for finding good people to work at EA organizations (which it might? I’m honestly unsure), the number of EA people would go up, which would also reduce the counterfactual value-in-dollars of marginal EA people.

More simply, it seems a bit weird to start with “money is not very useful on the margin, compared to people” and get from there to “because of how useless money is compared to people, if we spend money to get more people, this’ll be a worse deal than you’d think.”

Although, I was missing something / confused about something prior to reading your reply: it does seem likely to me on reflection that losing all of EA's dollars, but keeping the people, would leave us in a much better position than losing all of EA's people (except a few very wealthy donors, say) but losing its dollars. So in that sense it seems likely to me that EA has much more value-from-human-capital than value-from-financial-capital.

Replies from: jkaufman, thomas-kwa
comment by jefftk (jkaufman) · 2022-05-01T10:40:30.445Z · LW(p) · GW(p)

nit: I think "but losing its dollars" should be "but keeping its dollars"

comment by Thomas Kwa (thomas-kwa) · 2022-04-30T21:55:23.456Z · LW(p) · GW(p)

Thanks, I agree with this comment.

comment by JustisMills · 2022-04-30T17:58:57.319Z · LW(p) · GW(p)

I wrote a reply to this from a more-peripheral-EA perspective on the EA forum here:

https://forum.effectivealtruism.org/posts/YeudcYiArwWrg77Ng/notes-from-a-pledger

comment by Dan Weinand (dan-weinand) · 2022-04-29T19:12:41.476Z · LW(p) · GW(p)

I'm surprised that you think direct work has such a high impact multiplier relative to one's normal salary. The footnote seems to suggest that someone who could earn a $100K salary while earning to give could instead provide $3M in impact per year through direct work.


I think GiveWell still estimates it can save a life for ~$6K on the margin, which is ~50 QALYs.

(1 life / $6K) × (50 QALY / life) × ($3 million / EA-year) ≈ 25K QALY per EA-year

This both seems like a very high figure and seems to imply that 66K EAs would be sufficient to do good equivalent to totally eliminating the burden of all disease (I'm ignoring decreasing marginal returns). That seems like an optimistic figure to me, unless you're very optimistic about X-risk charities being effective? I'd be curious to hear how you got to the ~$3 million figure intuitively.
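(Spelling out the same back-of-envelope numbers, including the global disease burden figure the 66K headcount seems to assume:)

    # Back-of-envelope arithmetic from the paragraph above; all inputs are
    # the commenter's rough figures, not precise estimates.
    cost_per_life = 6_000           # rough GiveWell cost to save a life, $
    qaly_per_life = 50              # assumed QALYs per life saved
    impact_per_ea_year = 3_000_000  # footnote value of direct work, $/yr-equivalent

    qaly_per_ea_year = impact_per_ea_year / cost_per_life * qaly_per_life
    print(f"{qaly_per_ea_year:,.0f} QALY per EA-year")  # ~25,000

    # Implied headcount to offset a global disease burden of ~1.65B DALYs/yr
    # (the figure the 66K number seems to assume), ignoring diminishing returns.
    eas_needed = 1.65e9 / qaly_per_ea_year
    print(f"~{eas_needed:,.0f} EAs")  # ~66,000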

I would guess something closer to 5-10X impact relative to industry salary, rather than a 30X impact.

Replies from: jkaufman, T3t
comment by jefftk (jkaufman) · 2022-04-30T01:17:11.676Z · LW(p) · GW(p)

GiveWell still estimates it can save a life for ~$6K on the margin

Not really? In 2021 they announced (a) that they were lowering their funding bar from 8x GiveDirectly to 5x, essentially because they are pessimistic long-term about finding things that meet the 8x bar, and (b) that even with the lower bar they still had more money than they could cost-effectively direct right now. This is really great news, but it means that the marginal impact of money is now much lower.

seems to imply that 66K EAs would be sufficient to ... I would guess something closer to 5-10X impact relative to industry salary

Then instead of 66K EAs you need 300K EAs. Is that much more plausible? I think arguments in the form of your "seems to imply..." don't work very well.

unless you're very optimistic about X-risk charities being effective?

I don't know if I would say very optimistic, but I do think work here is extremely important (more).

Replies from: dan-weinand
comment by Dan Weinand (dan-weinand) · 2022-05-02T20:55:10.908Z · LW(p) · GW(p)

Fair point that GiveWell has updated their RFMF and increased their estimated cost per QALY. 

I do think that 300K EAs doing something equivalent to eliminating the global disease burden is substantially more plausible than 66K doing so. This seems trivially true since more people can do more than fewer people. I agree that it still sounds ambitious, but saying that ~3X the people involved in the Manhattan project could eliminate the disease burden certainly sounds easier than doing the same with half the Manhattan project's workforce size.

This is getting into nits, but ruling out all arguments of the form 'this seems to imply' seems really strong? Like, it naively seems to limit me to only discussing implications that the argument maker explicitly acknowledges. I'm probably mis-interpreting you here though, since that seems really silly! This is usually what I'm trying to say when I ask about implications; I note something odd to see if the oddness is implied or if I misinterpreted something.

Agreed that X-risk is very important and also hard to quantify.

comment by RobertM (T3t) · 2022-04-30T01:13:34.745Z · LW(p) · GW(p)

My guess is that it's something like "the impact of mitigating x-risks is probably orders of magnitude greater than public health interventions" (which might be what you meant by "unless you're very optimistic about X-risk charities being effective").

Replies from: dan-weinand
comment by Dan Weinand (dan-weinand) · 2022-05-02T20:59:14.793Z · LW(p) · GW(p)

Agreed, although it feels like in that case we should be comparing 'donating to X-risk organizations' vs 'working at X-risk organizations'. I think that by default I would assume that the money vs talent trade-off is similar at global health and X-risk organizations though.

comment by Victor Novikov (ZT5) · 2022-04-29T07:38:05.703Z · LW(p) · GW(p)

While not everyone is motivated by a sense that they should be doing their part (see: excited vs. obligatory altruism), I do think this is a major motivation for many people.

Nate Soares suggests dropping the idea of "should" in his Replacing Guilt series.

I'm not sure I disagree with you, though. I just don't like the idea of "demandingness". Though I suppose community norms and standards create a form of pseudo-demandingness, anyway.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2022-04-29T11:57:45.644Z · LW(p) · GW(p)

Nate's blog is down, but here's an archived copy: https://web.archive.org/web/20151108052120/https://mindingourway.com/not-because-you-should/

Main part:

This is a big part of where guilt-free effective altruism comes from, I think: instead of forcing yourself to give to charities sporadically when the guilt overcomes you, promise yourself that you won't give sporadically due to guilt, and then listen to the part of you that says "but then when will I help others!?" Don't force yourself to be an altruist — instead, commit to never forcing yourself, and then work with the part of you that protests, and become an altruist if and only if you want to help.

I think this is probably a good post for many people, but it's not a good post for me or likely others with obligation-derived EA motivation. I participate in EA because I think it's the right thing to do. If I didn't, there are lots of things I'd be excited to do instead.

Replies from: steve2152
comment by DPiepgrass · 2022-05-01T04:47:49.688Z · LW(p) · GW(p)

You are given the option to convince one of them to start working on one of 80k's priority areas, but in doing so N others will get discouraged and stop donating. This is a bit of a false dilemma [...]. In 2012 I would have put a pretty low number for N, perhaps ~3

I find this paragraph very confusing.