How Democratic Is Effective Altruism — Really?
post by B Jacobs (Bob Jacobs) · 2025-04-25T16:02:42.915Z · LW · GW
This is a link post for https://bobjacobs.substack.com/p/how-democratic-is-effective-altruism
Introduction
Effective Altruism (EA) is a social movement that aims to use reason and evidence to help others as much as possible. It encourages people to ask not just “how to do good”, but how to do the most good. This has led members to support things like global health interventions, existential risk reduction, and animal welfare.
I used to be closely involved in the movement, and I still think many of its ideas are worth defending. But as the movement has grown, so have certain structural problems: increasing reliance on large donors, pushback on dissent, and systems that concentrate influence in subtle but significant ways. This post is about those concerns — not to denigrate the movement, but to explore how it might better live up to its own stated values.
EA and conformity
One of the clearest places where these structural issues show up is in how the movement handles conformity and internal disagreement. Carla Zoe Cremer, a former EA insider, has become an outspoken critic of how EA functions internally. Back in 2020, she warned about “value-alignment [EA · GW]”, her term for extreme intellectual conformity within EA:
value-alignment means to agree on a fundamental level. It means to agree with the most broadly accepted values, methodologies, axioms, diet, donation schemes, memes and prioritisations of EA.
Her concerns deepened in 2021, when she co-authored a peer-reviewed paper with Luke Kemp on existential risk studies — one of EA’s flagship cause areas. The paper argued for more diverse voices and warned that the field had become overly “techno-utopian” and was in need of democratic reform.
The reception was… rough. According to her [EA · GW]:
It has been the most emotionally draining paper we have ever written. We lost sleep, time, friends, collaborators, and mentors because we disagreed on: whether this work should be published, whether potential EA funders would decide against funding us and the institutions we're affiliated with, and whether the authors whose work we critique would be upset.
While many in the community responded constructively, others reportedly sought to suppress the paper — not on academic grounds, but out of fear that it might alienate funders. The clear implication here is that critique is encouraged, as long as it doesn’t threaten the financial or ideological foundations of the movement. In response, Cremer laid out a series of concrete reforms [EA · GW] to tackle this problem:
diversify funding sources by breaking up big funding bodies and by reducing each orgs’ reliance on EA funding and tech billionaire funding, it needs to produce academically credible work, set up whistle-blower protection, actively fund critical work, allow for bottom-up control over how funding is distributed, diversify academic fields represented in EA, make the leaders' forum and funding decisions transparent, stop glorifying individual thought-leaders, stop classifying everything as info hazards...amongst other structural changes.
She reached out to MacAskill and other high-profile Effective Altruists (EAs) with these concerns [EA(p) · GW(p)]. While they acknowledged the issues, Cremer didn’t think talking to them achieved much:
I was entirely unsuccessful in inspiring EAs to implement any of my suggestions. EAs patted themselves on the back for running an essay competition on critiques against EA, left 253 comments on my and Luke Kemp’s paper, and kept everything that actually could have made a difference just as it was.
EA and criticism
This story might surprise you if you’ve heard that EA is great at receiving criticism. I think this reputation is partially earned, since the EA community does indeed engage with a large number of critiques. The EA Forum, for example, has given “Criticism of effective altruism [? · GW]” its own tag. At the time of writing, this tag has 490 posts [? · GW] on it. Not bad.
Not only does EA allow criticism, it sometimes rewards it monetarily. In 2022 there was the EA criticism contest [EA · GW], where people could submit their criticisms of EA and the best ones would receive prize money. A total of $120,000 was awarded to 31 of the contest’s 341 entries [EA · GW]. At first glance, this seems like strong evidence that EA rewards critiques, but things become a little more complicated when we look at who the winners and losers were.
Read the rest of the post here