Is “Earning to Give” a Bad Framework?
post by clans · 2023-01-16T05:35:21.061Z · LW · GW · 4 comments
This is a link post for https://locationtbd.home.blog/2022/12/25/is-earning-to-give-a-bad-framework/
Contents
Replaceability and Labor Quality
Utilitarian Traps
A Useful Baseline
Final Thoughts
4 comments
Among effective altruists, there are two main camps regarding the most effective way to do good with your career. One is to direct your work specifically at an EA-aligned organization, something like a specific charity or a company producing a product that directly saves or improves lives. The other is simply to seek a high-paying career, make as much money as possible, and donate a significant fraction of that money to an effective giving organization like GiveWell.
The latter is an idea born straight out of utilitarian ethics, championed by William MacAskill, who wrote in 2013:
“… while researching ethical career choice, I concluded that it’s in fact better to earn a lot of money and donate a good chunk of it to the most cost-effective charities—a path that I call “earning to give.” … you don’t have to be a billionaire. By making as much money as we can and donating to the best causes, we can each save hundreds of lives… In general, the charitable sector is people-rich but money-poor. Adding another person to the labor pool just isn’t as valuable as providing more money so that more workers can be hired. You might feel less directly involved because you haven’t dedicated every hour of your day to charity, but you’ll have made a much bigger difference.”
MacAskill has since clarified that he does not advocate “earning to give” as the best framework for Effective Altruism (EA), but rather as just one of the ways an average person can pursue making the world a measurably better place with their career. I think that’s a worthwhile clarification to note. I’ve only recently begun earning money in my career and have been reading the discussions on this issue [LW · GW]. I wanted to poke at this “earning to give” concept myself a bit. The issue is even more at the forefront now that the FTX scandal has broken with SBF at the helm, who gave away a lot of money that he probably shouldn’t have had.
While I think “earning to give” is certainly not a framework antithetical to the tenets of EA – simply to do a maximal amount of good in the world – I am not convinced that it is an advisable thing to advertise without caveats. I’ll cover my specific thoughts below, but overall, my impression is that, compared to direct work, “earning to give” is often not the path that effects the most good. It is also vulnerable to “utilitarian traps” and is far from a golden rule for the EA community.
I will emphasize right away that these are my rather nitpicky views on the matter, and I am blurring my personal values onto EA values. I don’t seek to invalidate anyone’s personal perspective but to interrogate the “earning to give” concept. I’ve thrown in counterpoints to emphasize this.
Replaceability and Labor Quality
The concept of replaceability is something that MacAskill has stressed repeatedly in his writings. The idea is that if you are contemplating doing some good somewhere, the amount of good you do is not the total good you produce but rather the difference between that and the good that would have happened if you had chosen inaction. It’s an important concept, and the opportunity cost it invokes is the reason for the “earning to give” sentiment in the first place. If your options as a skilled worker are to work for some cost-effective humanitarian organization for $50k/year or to work in tech for $100k/year, you should always choose the latter: not only can you effect more good by donating a whole worker’s salary, you were unlikely to do much more good than any other humanitarian worker anyway.
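To make that counterfactual arithmetic concrete, here is a minimal sketch in Python using the illustrative salaries above. The cost-effectiveness rate and the output figures are hypothetical placeholders for illustration, not claims about any real charity or job.

```python
# Hypothetical cost-effectiveness of a top charity: one life per $5,000 donated.
LIVES_SAVED_PER_DOLLAR = 1 / 5_000

def counterfactual_impact(good_if_you_act: float, good_if_you_dont: float) -> float:
    """Impact is the *difference* you make, not the total good of the role."""
    return good_if_you_act - good_if_you_dont

# Earning to give: take the $100k tech job and donate $50k/year.
etg_good = 50_000 * LIVES_SAVED_PER_DOLLAR   # good added via donations
etg_baseline = 0                             # assume you'd otherwise donate nothing
print(counterfactual_impact(etg_good, etg_baseline))  # 10.0 lives/year

# Direct work: take the $50k humanitarian job. If a nearly-as-good hire would
# have filled the seat anyway, your counterfactual impact is only the margin
# by which you outperform that replacement.
your_output = 12.0         # hypothetical lives/year you would enable
replacement_output = 11.0  # hypothetical lives/year the next-best hire would enable
print(counterfactual_impact(your_output, replacement_output))  # 1.0 lives/year
```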
This is sound logic, but I disagree with the conclusion. First of all, there’s the minor issue that the market is valuing your labor at $100k/year. Provided that there is overlap in the skills demanded at both jobs, you would be depriving the humanitarian organization of your valuable labor, so it ends up being a wash for said organization [1].
My main point of contention is that the replaceability issue cuts both ways. Say you are a highly skilled worker motivated to do work of high social good: if you choose not to work directly on projects with high social value, you’ll either be replaced by someone potentially less skilled and motivated, or by someone like you who would have been doing direct work anyway but now has to do yours as well. Or no one replaces you and the work simply does not get done.
Furthermore, anyone who has had a job knows that highly motivated, self-directed individuals, as people engaged in EA often are, can be orders of magnitude more valuable than regular workers, no matter the work. Even in direct work where this is not immediately the case, you are likely to come to understand the constraints and difficulties of a given job of high social value, and you have a high potential to come up with a better, more effective way of doing things.
I know EA-aligned funds have had a hard time finding funding gaps in humanitarian efforts in the past, though it seems this is less the case now that the movement’s reach has expanded. Regardless of funding gaps in charities, it makes sense to pursue massive direct wealth transfers if you’re ultra-rich, à la The Giving Pledge. My intuition is that, for the average person, someone else is far more likely to replace the funding you would provide than the creativity and skill you could. Execution is hard.
Counterpoint: There is no shortage of highly skilled workers employed in high-social-value work, and you would only do as much good as the average worker. It is more effective to pursue as much wealth transfer as you can individually contribute.
Utilitarian Traps
It would be very tough to dispute the immense amount of good done by the charities EA engages with, like the Against Malaria Foundation or Helen Keller International – that is, they save lives quite efficiently. There is a monetary cost to saving a life, so “earning to give” and donating to these organizations is an effective way to leverage a high-paying career to save lives. Someone must buy the supplements and nets, after all.
However, the ”earning to give” framework is susceptible to so-called “utilitarian traps,” or something like Pascal’s Mugging. It is a common occurrence, at least in my experience, to come across people who subscribe to EA and donate some amount of their earnings primarily or solely to Longtermist funds – that is, funds that support work to reduce humanity’s existential risk, like CSER or MIRI. This is rational in their view because the number of potential lives in the future is so vast that enormous efforts must be made to promote both their existence and their quality of life. I’ve read Toby Ord’s The Precipice, and I agree that these risks are significant and that it is important to plan for them accordingly (I’ve worked on these long-shot risks myself). But I am skeptical that it is advisable for these issues to take priority over the hundreds of people who are dying in the present day, every day. To me, it borders on elitism, or missing the forest for the trees, or something like that.
Direct work is also subject to this trap, of course, but to a far lesser extent – in order to do direct work in an existential-risk area, one must acquire the specific skill set to do so. “Earning to give,” on the other hand, exposes every Effective Altruist to this trap. In my view, it is of utmost importance that EA remains immediately effective at saving lives and alleviating suffering in the present moment, not merely efficient and predicated upon lives that could be saved.
Counterpoint: Only a small fraction of those in EA will subscribe to hardcore utilitarianism and fall into this “trap.” Existential risk mitigation probably deserves some effort, anyway.
A Useful Baseline
At its very core, “earning to give” is effective because the money transferred from an individual to an organization promoting work of high social value can buy something: labor, vaccines, power, equipment, supplies, research, etc. But quality of life has risen and global extreme poverty rates have fallen from 60% to 10% since 1950 not because of massive wealth transfers but because of technology and investment in infrastructure (which is technology). Countries like China, where the bulk of this has occurred, are still incredibly unequal societies. Improvements in medicine, logistics, energy, and communications have pulled most of the global population out of living conditions that are appalling by today’s standards. People didn’t get better at sharing the pie; the pie just got 1,000x bigger. The end game isn’t that everyone has more money, it’s that money itself becomes more effective.
“Earning to give,” by contrast, plays a zero-sum game. The idea is to take money from one area and put it into another while being agnostic about where that money comes from (as long as the harm it does can be outweighed by the dollar amount donated). I disagree with this teleology and don’t think the net-positive ends justify the means. And empirically, it is not the ideal framework for creating the structural change required for massive increases in quality of life.
Instead, the emphasis should be on innovation – not necessarily on what the market wants, but on work of high social value. It is more effective to treat this work as paramount over earning capacity; any excess money to donate is a useful bonus. I’ve been referring to “direct work” a lot, and to clarify, I mean not necessarily humanitarian work, but rather any work that does high social good. The website 80,000 Hours defines it:
“Social impact” or “making a difference” is (tentatively) about promoting total expected wellbeing — considered impartially, over the long term — without sacrificing anything that might be of comparable moral importance.
There are careers that do this to varying degrees, but generally anything that has a multiplicative effect on productivity falls into this category (given that policy is properly set up to avoid exploitation, of course). In my view, “direct work” with high social value could be anything from a battery engineer to a nurse to a competent government worker. It’s work solving for social impact instead of simply whatever the market demands of skilled labor. Of course, sites like 80,000 Hours already do a good job emphasizing this.
Counterpoint: There are plenty of structural inequalities in the world that can be solved simply by throwing money at them – investment in infrastructure and medicine distribution, in particular.
Final Thoughts
It’s entirely possible that my perspective on the need for funding versus talent in the charity or humanitarian sector is warped by the success of the EA movement so far. That’s a very good thing. It’s also entirely possible that I have a lot to learn about the specific needs of humanitarian efforts, having only recently begun to seriously research and experiment with contributions firsthand. Regardless, I believe that an EA-aligned career tooled for “earning to give” is often inferior to direct work of perceived high social value and that the EA community should be biased towards encouraging talented, motivated individuals to pursue the latter.
4 comments
Comments sorted by top scores.
comment by JBlack · 2023-01-17T03:21:14.511Z · LW(p) · GW(p)
"Earning to give" seems to me just another application of comparative advantage.
If somebody values my COBOL and reverse-engineering skills at $150k/yr, this says nothing whatsoever about how much my direct contribution of labour would be worth to an effective altruism organisation. It is very unlikely that I would be the best person for any particular job there.
If I worked directly for an EA organisation at lower than market rate for a very different job: (1) I am almost certainly worse for that job than many other people, (2) my company gets someone worse for my current job, (3) the EA organisation doesn't get any money from me, (4) I live on a lot less money and am much more constrained in where I can work. In what way is this better for anyone?
You don't need utilitarianism to decide whether this makes sense, it's a strict Pareto improvement to stay with my current job and donate versus work directly for EA.
comment by Noosphere89 (sharmake-farah) · 2023-01-17T03:30:55.753Z · LW(p) · GW(p)
Yeah, this. Even assuming a more deontological framework, I don't understand why earning to give is so vilified, IMO.
comment by Dagon · 2023-01-16T16:40:56.984Z · LW(p) · GW(p)
[note: I don't identify as EA, because I am a utility monster. I am somewhat altruistic in that I do care about the quantity and distribution of human experiences, and I aim to be effective, but I am not Utilitarian in my aggregation function. Discount my opinion as appropriate. ]
I suspect you should split your uncertainty between "earning to give vs direct action" and "Near-term EA vs Longtermist causes". Skepticism is justified on both dimensions, but as the numbers get bigger and probabilities smaller, the risk of delegating the mechanisms of spending (which ETG is) seems to be higher. In other words, ETG seems great if you have a comparative advantage in something that pays well, and you're focused on measurable, clear improvements. And it seems ... less great if you're unsure of the mechanisms by which the money is solving problems.
The problem is that direct action is EVEN HARDER to evaluate. If your comparative advantage is in something related to malaria prevention, that's probably more valuable than ETG. If it's something more prosaic, ETG is a more direct path to impact.