A Public Choice Take on Effective Altruism

post by vaishnav92 · 2024-12-15T16:58:50.683Z · LW · GW · 4 comments

This is a link post for https://www.optimaloutliers.com/p/effective-altruism-neglectedness

At the heart of Effective Altruism is a commitment to doing "as much good as possible," or maximizing counterfactual impact. EAs break counterfactual impact down into three components: scale (how big is the problem?), tractability (how solvable is the problem?), and neglectedness (how neglected is the problem?).
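As a rough sketch (this is the decomposition popularized by 80,000 Hours; the exact factoring varies between sources), the marginal value of extra resources directed at a cause can be written as a product of three ratios:

    good done per extra dollar =
        (good done / % of problem solved)              [scale]
      × (% of problem solved / % increase in resources) [tractability]
      × (% increase in resources / extra dollar)        [neglectedness]

The middle terms cancel, so the product is just "good done per extra dollar" split into the three factors named above.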

The "bad" kind of neglectedness is when something is neglected precisely because it's not tractable. The "good" kind is when a problem is unsolved or a task undone either because there is no market or commercial incentive to solve it (e.g. animal welfare) or because the commercial incentive is small relative to the social benefits of solving it (e.g. pandemic preparedness).

EA thinks on the margin. The central question it asks is: holding most of the external world constant, how can I use my time and resources to have the highest personal or organizational impact? But as EA gets exported from the grungiest and geekiest corners of Berkeley and Oxford into the "real" world, thinking on the margin presents a practical problem, especially with respect to neglectedness.

This might be a good time to differentiate between 'ea', the philosophy of effective altruism, and 'EA', the Effective Altruism movement, largely funded by Dustin Moskovitz via Open Philanthropy. I see the latter as just one instantiation of the former idea.

Notice that EA, or any social movement that is at some level cause-neutral and cost-effective, has to factor in neglectedness. If you must allocate scarce resources between problems to do the most good, you can't remain indifferent to how others are allocating their resources, since you care about maximizing impact per dollar spent.

Any movement, by definition, also has to self-perpetuate to be successful and accomplish its goals. But past a particular size, any movement that imbues its adherents with a desire to work on neglected problems will become inimical to its own growth.

The following might drive home the intuition: if we lived in a world in which the EA movement were 10x its size, would shrimp welfare as a cause be more neglected or less, relative to how neglected it seems today? So if someone came along having internalized the 'ea' message of doing the most good with their resources, they would, all else equal, be less enticed by EA cause areas - because a larger EA renders a cause less neglected soon after declaring it a 'top priority'.

This tension manifests clearly in the current oversubscription problem in EA jobs. Operations roles at EA organizations that pay well under $100,000 receive thousands of applications, with extensive selection processes spanning 3-6 months. On the bright side (for EA), this is a marker of success. When a job gets tagged as "EA", it confers credibility and status - as one of the "highest impact jobs" out there. This is basically the thesis of EA come true: aligning incentives such that the gap between optimizing for status and optimizing for impact is as narrow as possible.

There is only one problem: the more successful it gets, the less likely it is that these jobs are the most impactful jobs out there. Some in EA defend this with a canonical line about power laws - that since these jobs are so much higher impact than everything else, they're just not worried about oversubscription, and that the marginal value of an additional applicant does not diminish even with thousands of applicants.

But this seems implausible for most roles with bounded autonomy, even in exceptionally impactful organizations. The exceptions are high-leverage roles like leading organizations or specialized technical positions. For a marketing manager or operations coordinator, it's hard to make the case that, from a pool of 2,000 qualified applicants, the delta between the best and second-best candidate justifies this insistence on working for an EA organization.

This points to a deeper challenge that EA faces through the lens of public choice theory. EA is not just a handful of grantmakers trying to allocate resources but also the social and intellectual capital of the movement - the people who generate ideas and execute projects. For example, a substantial portion of EA's intellectual capital is now building careers in AI safety. 

If you build a career in Area X, you will naturally be slower to update downward on X's relative importance. You'll see more arguments for X's significance, develop deeper understanding of X's complexities, and be better positioned to articulate why X matters. Even with purely altruistic motives, you might think: "I understand X deeply now, so I need to make sure others appreciate its importance."

This creates a form of intellectual and institutional lock-in. When EA identifies a cause area and invests in it, it's not just allocating money - it's creating careers, expertise, and institutional infrastructure. Any movement sufficiently large and invested in specific causes will face pressure to maintain these structures, potentially at the expense of pure cause neutrality.

One might argue for a distinction between grantmaking organizations at the highest level of EA – which strive for cause-neutrality – and the organizations they fund that work on specific problems. But this is likely a distinction without a difference. The same institutional forces that make it hard for individual EA professionals to remain purely cause-neutral affect the movement's central institutions through network effects, shared discourse, and the need to maintain stable organizational structures.

One potential solution is to transform EA into a movement that primarily focuses on raising and allocating capital, rather than providing subsidized labor to "important causes." Under this model, EA would leverage market mechanisms and incentives to achieve its goals, with movement-building efforts centered on earning to give. 

While some might object that ambitious EA projects require high-trust, value-aligned teams since impact can't be tracked purely through metrics, this argument deserves more scrutiny. Yes, corporations at the highest level have a clearer optimization target in profits, but at each lower level of the hierarchy they face the same challenges of incentive alignment and Goodharting that EA organizations do. Despite this, good companies manage to build effective hierarchies and get important things done. EA could similarly harness incentives and competitive dynamics to its advantage.

4 comments


comment by Jonas Hallgren · 2024-12-15T19:50:36.852Z · LW(p) · GW(p)

Good post. Did you also cross-post to the forum? Also, do you have any thoughts on what to do differently in order to enable more exploration and less lock-in?

Replies from: vaishnav92
comment by vaishnav92 · 2024-12-15T21:53:59.431Z · LW(p) · GW(p)

I just did. 

I'm not sure I have one that folks within EA would find palatable. The solution, in my mind, is for Effective Altruism to become a movement that mostly focuses on raising and allocating capital - one that uses markets to get things done downstream of that. I think EA should get out of the business of providing subsidized labor to the "most important causes". Instead, allocate capital and use incentives and markets to get what you want. This would mean all movement-building efforts focus on earning to give. If you want someone smart to found a charity, pay to incentivize that.

One response I anticipate from EAs is that ambitious projects often require teams that have high trust (or in EA parlance, are value aligned) since impact can't often be tracked purely through metrics and incentives. I'm not sure I buy this. It's true that corporations, at the highest level, have something far more legible that the leadership team can optimize for. But at each lower level of the hierarchy, corporations also face the same problems of Goodharting and incentive alignment. They don't always make the best decisions, but good companies do manage to do this well enough at most levels to get important things done. What makes me even more suspicious is that people don't even want to try this.

Replies from: Jonas Hallgren
comment by Jonas Hallgren · 2024-12-16T09:43:24.403Z · LW(p) · GW(p)

I guess the solution that you're more generally pointing at here is something like ensuring a split between the incentives of the people within specific fields and those of EA itself as a movement. Almost a bit like making that part of EA only be global priorities research and something like market allocation?

I have this feeling that there might be other ways to go about doing this, with programs or incentives for making people more open to taking any type of impactful job? Something like having recurring reflection periods or other types of workshops/programs?

Replies from: vaishnav92
comment by vaishnav92 · 2024-12-16T16:11:06.092Z · LW(p) · GW(p)

I don't think it's great to tell most people to keep switching fields based on updated impact calculations. There are advantages to building focused careers - increasing returns to effort within the same domain. The exception would be founder types and some generalist-type talent. I'm not sure why we start with the premise that EA has to channel people into specific career paths based on impact calculations. It has a distortionary effect on the price of labor. Just as I'd prefer tax dollars being channeled into direct cash payments as welfare, I'd prefer if EAs made as much money as possible and donated it, so they can pay for whoever is best qualified to do what needs to be done.