A critique of effective altruism

post by benkuhn · 2013-12-02T16:53:35.360Z · LW · GW · Legacy · 153 comments

Contents

  How to read this post
  Abstract
  Philosophical difficulties
  Poor cause choices
  Non-obviousness
  Efficient markets for giving
  Inconsistent attitude towards rigor
  Poor psychological understanding
  Historical analogues
  Monoculture
  Community problems
  Movement building issues
  Conclusion
  Are these problems solvable?
  Acknowledgments

I recently ran across Nick Bostrom’s idea of subjecting your strongest beliefs to a hypothetical apostasy in which you try to muster the strongest arguments you can against them. As you might have figured out, I believe strongly in effective altruism—the idea of applying evidence and reason to finding the best ways to improve the world. As such, I thought it would be productive to write a hypothetical apostasy on the effective altruism movement.

(EDIT: As per the comments of Vaniver, Carl Shulman, and others, this didn't quite come out as a hypothetical apostasy. I originally wrote it with that in mind, but decided that a focus on more plausible, more moderate criticisms would be more productive.)

How to read this post

(EDIT: the following two paragraphs were written before I softened the tone of the piece. They're less relevant to the more moderate version that I actually published.)

Hopefully this is clear, but as a disclaimer: this piece is written in a fairly critical tone. This was part of an attempt to get “in character”. This tone does not indicate my current mental state with regard to the effective altruism movement. I agree, to varying extents, with some of the critiques I present here, but I’m not about to give up on effective altruism or stop cooperating with the EA movement. The apostasy is purely hypothetical.

Also, because of the nature of a hypothetical apostasy, I’d guess that for effective altruist readers, the critical tone of this piece may be especially likely to trigger defensive rationalization. Please read through with this in mind. (A good way to counteract this effect might be, for instance, to imagine that you’re not an effective altruist, but your friend is, and it’s them reading through it: how should they update their beliefs?)

(End less relevant paragraphs.)

Finally, if you’ve never heard of effective altruism before, I don’t recommend making this piece your first impression of it! You’re going to get a very skewed view because I don’t bother to mention all the things that are awesome about the EA movement.

Abstract

Effective altruism is, to my knowledge, the first time that a substantially useful set of ethics and frameworks to analyze one’s effect on the world has gained a broad enough appeal to resemble a social movement. (I’d say these principles are something like altruism, maximization, egalitarianism, and consequentialism; together they imply many improvements over the social default for trying to do good in the world—earning to give as opposed to doing direct charity work, working in the developing world rather than locally, using evidence and feedback to analyze effectiveness, etc.) Unfortunately, as a movement effective altruism is failing to use these principles to acquire correct nontrivial beliefs about how to improve the world.

By way of clarification, consider a distinction between two senses of the word “trying” I used above. Let’s call them “actually trying” and “pretending to try”. Pretending to try to improve the world is something like responding to social pressure to improve the world by querying your brain for a thing which improves the world, taking the first search result and rolling with it. For example, for a while I thought that I would try to improve the world by developing computerized methods of checking informally-written proofs, thus allowing more scalable teaching of higher math, democratizing education, etc. Coincidentally, computer programming and higher math happened to be the two things that I was best at. This is pretending to try. Actually trying is looking at the things that improve the world, figuring out which one maximizes utility, and then doing that thing. For instance, I now run an effective altruist student organization at Harvard because I realized that even though I’m a comparatively bad leader and don’t enjoy it very much, it’s still very high-impact if I work hard enough at it. This isn’t to say that I’m actually trying yet, but I’ve gotten closer.

Using this distinction between pretending and actually trying, I would summarize a lot of effective altruism as “pretending to actually try”. As a social group, effective altruists have successfully noticed the pretending/actually-trying distinction. But they seem to have stopped there, assuming that knowing the difference between fake trying and actually trying translates into ability to actually try. Empirically, it most certainly doesn’t. A lot of effective altruists still end up satisficing—finding actions that are on their face acceptable under core EA standards and then picking those which seem appealing because of other essentially random factors. This is more likely to converge on good actions than what society does by default, because the principles are better than society’s default principles. Nevertheless, it fails to make much progress over what is directly obvious from the core EA principles. As a result, although “doing effective altruism” feels like truth-seeking, it often ends up being just a more credible way to pretend to try.

Below I introduce various ways in which effective altruists have failed to go beyond the social-satisficing algorithm of establishing some credibly acceptable alternatives and then picking among them based on essentially random preferences. I exhibit other areas where the norms of effective altruism fail to guard against motivated cognition. Both of these phenomena add what I call “epistemic inertia” to the effective-altruist consensus: effective altruists become more subject to pressures on their beliefs other than those from a truth-seeking process, meaning that the EA consensus becomes less able to update on new evidence or arguments and preventing the movement from moving forward. I argue that this stems from effective altruists’ reluctance to think through issues of the form “being a successful social movement” rather than “correctly applying utilitarianism individually”. This could potentially be solved by introducing an additional principle of effective altruism—e.g. “group self-awareness”—but it may be too late to add new things to effective altruism’s DNA.

Philosophical difficulties

There is currently wide disagreement among effective altruists on the correct framework for population ethics. This is crucially important for determining the best way to improve the world: different population ethics can lead to drastically different choices (or at least so we would expect a priori), and if the EA movement can’t converge on at least their instrumental goals, it will quickly fragment and lose its power. Yet there has been little progress towards discovering the correct population ethics (or, from a moral anti-realist standpoint, constructing arguments that will lead to convergence on a particular population ethics), or even determining which ethics lead to which interventions being better.

Poor cause choices

Many effective altruists donate to GiveWell’s top charities. All three of these charities work in global health. Is that because GiveWell knows that global health is the highest-leverage cause? No. It’s because global health was the only cause area with enough data to say anything very useful about it. There’s little reason to suppose that this correlates with being particularly high-leverage—on the contrary, heuristic but less rigorous arguments for causes like existential risk prevention, vegetarian advocacy and open borders suggest that these could be even more efficient.

Furthermore, our current “best known intervention” is likely to change (in a more cost-effective direction) in the future. There are two competing effects here: we might discover better interventions to donate to than the ones we currently think are best, but we also might run out of opportunities for the current best known intervention, and have to switch to the second. So far we seem to be in a regime where the first effect dominates, and there’s no evidence that we’ll reach a tipping point very soon, especially given how new the field of effective charity research is.

Given these considerations, it’s quite surprising that effective altruists are donating to global health causes now. Even for those looking to use their donations to set an example, a donor-advised fund would have many of the benefits and none of the downsides. And anyway, donating when you believe it’s not (except for example-setting) the best possible course of action, in order to make a point about figuring out the best possible course of action and then doing that thing, seems perverse.

Non-obviousness

Effective altruists often express surprise that the idea of effective altruism only came about so recently. For instance, my student group recently hosted Elie Hassenfeld for a talk in which he made remarks to that effect, and I’ve heard other people working for EA organizations express the same sentiment. But no one seems to be actually worried about this—just smug that they’ve figured out something that no one else had.

The “market” for ideas is at least somewhat efficient: most simple, obvious and correct things get thought of fairly quickly after it’s possible to think them. If a meme as simple as effective altruism hasn’t taken root yet, we should at least try to understand why before throwing our weight behind it. The absence of such attempts—in other words, the fact that non-obviousness doesn’t make effective altruists worried that they’re missing something—is a strong indicator against the “effective altruists are actually trying” hypothesis.

Efficient markets for giving

It’s often claimed that “nonprofits are not a market for doing good; they’re a market for warm fuzzies”. This is used as justification for why it’s possible to do immense amounts of good by donating. However, while it’s certainly true that most donors aren’t explicitly trying to purchase utility, there’s still a lot of money that is.

The Gates Foundation is an example of such an organization. They’re effectiveness-minded and have $60 billion behind them. 80,000 Hours has already noted that they’ve probably saved over 6 million lives with their vaccine programs alone—given that they’ve spent a relatively small part of their endowment, they must be getting a much better exchange rate than our current best guesses.

So why not just donate to the Gates Foundation? Effective altruists need a better account of the “market inefficiencies” that they’re exploiting that Gates isn’t. Why didn’t the Gates Foundation fund the Against Malaria Foundation, GiveWell’s top charity, when it’s in one of their main research areas? It seems implausible that the answer is simple incompetence or the like.

A general rule of markets is that if you don’t know what your edge is, you’re the sucker. Many effective altruists, when asked what their edge is, give some answer along the lines of “actually being strategic/thinking about utility/caring about results”, and stop thinking there. This isn’t a compelling case: as mentioned before, it’s not clear why no one else is doing these things.

Inconsistent attitude towards rigor

Effective altruists insist on extraordinary rigor in their charity recommendations—cf. for instance GiveWell’s work. Yet for many ancillary problems—donating now vs. later, choosing a career, and deciding how “meta” to go (between direct work, earning to give, doing advocacy, and donating to advocacy), to name a few—they seem happy to choose between the not-obviously-wrong alternatives based on intuition and gut feelings.

Poor psychological understanding

John Sturm suggests, and I agree, that many of these issues are psychological in nature:

I think a lot of these problems take root in a commitment-level issue:

I, for instance, am thrilled about changing my mentality towards charity, not my mentality towards having kids. My first guess is that - from an EA and overall ethical perspective - it would be a big mistake for me to have kids (even after taking into account the normal EA excuses about doing things for myself). At least right now, though, I just don’t care that I’m ignoring my ethics and EA; I want to have kids and that’s that.

This is a case in which I’m not “being lazy” so much as just not trying at all. But when someone asks me about it, it’s easier for me to give some EA excuse (like that having kids will make me happier and more productive) that I don’t think is true - and then I look like I’m being a lazy or careless altruist rather than not being one at all.

The model I’m building is this: there are many different areas in life where I could apply EA. In some of them, I’m wholeheartedly willing. In some of them, I’m not willing at all. Then there are two kinds of areas where it looks like I’m being a lazy EA: those where I’m willing and want to be a better EA… and those where I’m not willing but I’m just pretending (to myself or others or both).

The point of this: when we ask someone to be a less lazy EA, we are (1) helping them do a better job at something they want to do, and (2) trying to make them either do more than they want to or admit they are “bad”.

In general, most effective altruists respond to deep conflicts between effective altruism and other goals in one of the following ways:

  1. Unconsciously resolve the cognitive dissonance with motivated reasoning: “it’s clearly my comparative advantage to spread effective altruism through poetry!”
  2. Deliberately and knowingly use motivated reasoning: “dear Facebook group, what are the best utilitarian arguments in favor of becoming an EA poet?”
  3. Take the easiest “honest” way out: “I wouldn’t be psychologically able to do effective altruism if it forced me to go into finance instead of writing poetry, so I’ll become an effective altruist poet instead”.

The third is debatably defensible—though, for a community that purports to put stock in rationality and self-improvement, effective altruists have shown surprisingly little interest in self-modification to have more altruistic intentions. This seems obviously worthy of further work.

Furthermore, EA norms do not proscribe even the first two, leading to a group culture in which people don’t notice when they’re engaging in a certain amount of motivated cognition. This is quite toxic to the movement’s ability to converge on the truth. (As before, effective altruists are still better than the general population at this; the core EA principles are strong enough to make people notice the motivated cognition that most obviously runs afoul of them. But that’s not nearly good enough.)

Historical analogues

With the partial exception of GiveWell’s history of philanthropy project, there’s been no research into good historical outside views. Although there are no direct precursors of effective altruism (worrying in its own right; see above), there is one notably similar movement: communism, where the idea of “from each according to his ability, to each according to his needs” originated. Communism is also notable for its various abject failures. Effective altruists need to be more worried about how they will avoid failures of a similar class—and in general they need to be more aware of the pitfalls, as well as the benefits, of being an increasingly large social movement.

Aaron Tucker elaborates better than I could:

In particular, Communism/Socialism was a movement that was started by philosophers, then continued by technocrats, where they thought reason and planning could make the world much better, and that if they coordinated to take action to fix everything, they could eliminate poverty, disease, etc.

Marx totally got the “actually trying vs. pretending to try” distinction AFAICT (“Philosophers have only explained the world, but the real problem is to change it” is a quote of his), and he really strongly rails against people who unreflectively try to fix things in ways that make sense to the culture they’re starting from—the problem isn’t that the bourgeoisie aren’t trying to help people, it’s that the only conception of help that the bourgeoisie have is one that’s mostly epiphenomenal to actually improving the lives of the proletariat—giving them nice bourgeois things like education and voting rights, but not doing anything to improve the material condition of their life, or fix the problems of why they don’t have those in the first place, and don’t just make them themselves.

So if Marx got the pretend/actually try distinction, and his followers took over countries, and they had a ton of awesome technocrats, it seems like it’s the perfect EA thing, and it totally didn’t work.

Monoculture

Effective altruists are not very diverse. The vast majority are white, “upper-middle-class”, intellectually and philosophically inclined, from a developed country, etc. (and I think it skews significantly male as well, though I’m less sure of this). And as much as the multiple-perspectives argument for diversity is hackneyed by this point, it seems quite germane, especially when considering e.g. global health interventions, whose beneficiaries are culturally very foreign to us.

Effective altruists are not very humanistically aware either. EA came out of analytic philosophy and spread from there to math and computer science. As such, its members are too hasty to dismiss many arguments as moral-relativist postmodernist fluff, e.g. that effective altruists are promoting cultural imperialism by forcing a Westernized conception of “the good” onto people they’re trying to help. Even if EAs are quite confident that the utilitarian/reductionist/rationalist worldview is correct, the outside view is that really engaging with a greater diversity of opinions is very helpful.

Community problems

The discourse around effective altruism in e.g. the Facebook group used to be of fairly high quality. But as the movement grows, the traditional venues of discussion are getting inundated with new people who haven’t absorbed the norms of discussion or standards of proof yet. If this is not rectified quickly, the EA community will cease to be useful at all: there will be no venue in which a group truth-seeking process can operate. Yet nobody seems to be aware of the magnitude of this problem. There have been some half-hearted attempts to fix it, but nothing much has come of them.

Movement building issues

The whole point of having an effective altruism “movement” is that it’ll be bigger than the sum of its parts. Being organized as a movement should turn effective altruism into the kind of large, semi-monolithic actor that can actually get big stuff done, not just make marginal contributions.

But in practice, large movements and truth-seeking hardly ever go together. As movements grow, they get more “epistemic inertia”: it becomes much harder for them to update on evidence. This is because they have to rely on social methods to propagate their memes rather than truth-seeking behavior. But people who have been drawn to EA by social pressure rather than truth-seeking take much longer to change their beliefs, so once the movement reaches a critical mass of them, it will become difficult for it to update on new evidence. As described above, this is already happening to effective altruism with the ever-less-useful Facebook group.

Conclusion

I’ve presented several areas in which the effective altruism movement fails to converge on truth through a combination of the following effects:

  1. Effective altruists “stop thinking” too early and satisfice for “doesn’t obviously conflict with EA principles” rather than optimizing for “increases utility”. (For instance, they choose donations poorly due to this effect.)
  2. Effective altruism puts strong demands on its practitioners, and EA group norms do not appropriately guard against motivated cognition to avoid them. (For example, this often causes people to choose bad careers.)
  3. Effective altruists don’t notice important areas to look into, specifically issues related to “being a successful movement” rather than “correctly implementing utilitarianism”. (For instance, they ignore issues around group epistemology, historical precedents for the movement, movement diversity, etc.)

These problems are worrying on their own, but the lack of awareness of them is the real problem. The monoculture is worrying, but the lackadaisical attitude towards it is worse. The lack of rigor is unfortunate, but the fact that people haven’t noticed it is the real problem.

Either effective altruists don’t yet realize that they’re subject to the failure modes of any large movement, or they don’t feel motivated to do the boring legwork of e.g. engaging with viewpoints that their inside view says are annoying but that the outside view says are useful in expectation. Either way, this bespeaks worrying things about the movement’s staying power.

More importantly, it also indicates an epistemic failure on the part of effective altruists. The fact that no one else within EA has done a substantial critique yet is a huge red flag. If effective altruists aren’t aware of strong critiques of the EA movement, why aren’t they looking for them? This suggests that, contrary to the emphasis on rationality within the movement, many effective altruists’ beliefs are based on social, rather than truth-seeking, behavior.

If it doesn’t solve these problems, effective-altruism-the-movement won’t help me achieve any more good than I could individually. All it will do is add epistemic inertia, as it takes more effort to shift the EA consensus than to update my individual beliefs.

Are these problems solvable?

It seems to me that the third issue above (lack of self-awareness as a social movement) subsumes the other two: if effective altruism as a movement were sufficiently introspective, it could probably notice and solve the other two problems, as well as future ones that will undoubtedly crop up.

Hence, I propose an additional principle of effective altruism. In addition to being altruistic, maximizing, egalitarian, and consequentialist, we should be self-aware: we should think carefully about the issues associated with being a successful movement, in order to make sure that we can move beyond the obvious applications of EA principles and come up with non-trivially better ways to improve the world.

Acknowledgments

Thanks to Nick Bostrom for coining the idea of a hypothetical apostasy, and to Will Eden for mentioning it recently.

Thanks to Michael Vassar, Aaron Tucker and Andrew Rettek for inspiring various of these points.

Thanks to Aaron Tucker and John Sturm for reading an advance draft of this post and giving valuable feedback.

Cross-posted from http://www.benkuhn.net/ea-critique since I want outside perspectives, and also LW's comments are nicer than mine.

153 comments

Comments sorted by top scores.

comment by CarlShulman · 2013-12-01T23:29:34.303Z · LW(p) · GW(p)

Disclaimer: I like and support the EA movement.

I agree with Vaniver that it would be good to give more time to arguments that the EA movement is going to do large net harm. You touch on this a bit with the discussion of Communism and moral disagreement within the movement, but one could go further. Some speculative ways in which the EA movement could have bad consequences:

  • The EA movement, driven by short-term QALYs, pulls effort away from affecting science and policy in rich countries with long-term impacts to brief alleviation of problems for poor humans and animals
  • AMF-style interventions increase population growth and lower average world income and education, which leads to fumbling of long-run trajectories or existential risk
  • The EA movement screws up population ethics and the valuation of different minds in such a way that it doesn't just fail to find good interventions, but pursues actively terrible ones (e.g. making things much worse by trading off human and ant conditions wrongly)
  • Even if the movement mostly does not turn towards promoting bad things, it turns out to be easier to screw things up than to help, and foolish proponents of conflicting sub-ideologies collectively make things worse for everyone, PD style; you see this in animal activists enthused about increasing poverty to reduce meat consumption, or poverty activists happy to create huge deadweight GDP losses as long as resources are transferred to the poor.
  • Something like explicit hedonistic utilitarianism becomes an official ideology somewhere, in the style of Communist states (even though the members don't really embrace it in full on every matter, they nominally endorse it as universal and call their contrary sentiments weakness of will): the doctrine implies that all sentient beings should be killed and replaced by some kind of simulated orgasm-neurons and efficient caretaker robots (or otherwise sacrifice much potential value in the name of a cramped conception of value), and society is pushed in this direction by a tragedy of the commons; also, see Robin Hanson
  • Misallocating a huge mass of idealists' human capital to donation for easily measurable things and away from more effective things elsewhere, sabotages more effective do-gooding for a net worsening of the world
  • The EA movement gets into politics and can't clearly evaluate various policies with huge upside and downside potential because of ideological blinders, and winds up with a massive net downside
  • The EA movement finds extremely important issues, and then turns the public off from them with its fanaticism, warts, or fumbling, so that it would have been better to have left those issues to other institutions
Replies from: benkuhn, Davidmanheim
comment by benkuhn · 2013-12-02T00:02:33.267Z · LW(p) · GW(p)

Hmm. I didn't interpret a hypothetical apostasy as the fiercest critique, but rather the best critique--i.e. weight the arguments not by "badness if true" but by something like badness times plausibility.

But you may be right that I unconsciously biased myself towards arguments that were easier to solve by tweaking the EA movement's direction. For instance, I should probably have included a section about measurability bias, which does seem plausibly quite bad.

Replies from: joaolkf, Vaniver, Ratcourse, christopherj
comment by joaolkf · 2013-12-02T02:35:29.145Z · LW(p) · GW(p)

I don't have time to explain it now, so I will state the following with the hope that merely stating it will be useful as a data point. I think Carl's critique was more compelling, more relevant if true (which you agree with), and also not that much less likely to be true than yours. Certainly, considering how destructive they would be if true, and the fact that they are almost as likely to be true as yours, I think Carl's is the best critique.

In fact, the 2nd main reason I don’t direct most of my efforts to what most of the EA movement is doing is because I do think some weaker versions of Carl’s points are true. (The 1st is simply that I’m much better at finding out if his points are true and other more abstract things than at doing EA stuff).

comment by Vaniver · 2013-12-02T00:23:57.643Z · LW(p) · GW(p)

For instance, I should probably have included a section about measurability bias, which does seem plausibly quite bad.

This does show up in the poor cause choices section, and I'm not sure it deserves a section of its own (though I do suspect it's the most serious reason for poor cause selection, beyond the underlying population ethics being bad).

comment by Ratcourse · 2013-12-02T19:33:16.203Z · LW(p) · GW(p)

"Hmm. I didn't interpret a hypothetical apostasy as the fiercest critique, but rather the best critique--i.e. weight the arguments not by "badness if true" but by something like badness times plausibility."

See http://www.amirrorclear.net/academic/papers/risk.pdf. Plausibility depends on your current model/arguments/evidence. If the badness times probability of these being wrong dwarfs the former, you must account for it.

comment by christopherj · 2013-12-02T18:45:51.092Z · LW(p) · GW(p)

Hmm. I didn't interpret a hypothetical apostasy as the fiercest critique, but rather the best critique--i.e. weight the arguments not by "badness if true" but by something like badness times plausibility.

Odds are if someone benefits from doing a hypothetical apostasy, then they can't be trusted to be accurate in terms of plausibility. You'd want at least to get the worst case scenario for plausibility, or simply neglect plausibility and later make sure that the things you feel are "very implausible" are in fact very implausible.

I'm slightly suspicious of the whole hypothetical apostasy -- it feels like proofreading, but I find it almost impossible to thoroughly proof myself. Wouldn't it be easier and better to find well-qualified critics, if these exist, and leave hypothetical apostasy for when decent critics can't be found? Although I suppose that it would already have been implied by hypothetical apostasy, as it would be a lazy apostate who didn't research support for his position.

The Elitist Philanthropy of So-Called Effective Altruism

Enterprise Is the Most “Effective Altruism”

I suppose a problem with other critics is that their values likely differ from yours.

Replies from: benkuhn
comment by benkuhn · 2013-12-02T18:53:40.624Z · LW(p) · GW(p)

Yes, I don't consider either the CEO of a GiveWell competitor or a couple of theologians to be well-qualified to critique effective altruism. Part of my motivation in writing this was specifically the abysmal quality of such critiques.

I think that e.g. Michael Vassar is a much more qualified outside critic (outside in the sense of not associating with the EA movement) and indeed several of my arguments here were inspired by him (as filtered through my ability to interpret his sometimes oracular remarks, so he can feel free to disown the results, though he hasn't yet). Some of what I'm doing is making these outside critiques more visible to effective altruists--although arguably a true outsider would be able to make them more forcefully through lack of bias, Vassar understandably would rather spend his time on other stuff, so the best workable option is writing them up myself.

Replies from: christopherj
comment by christopherj · 2013-12-07T04:32:01.816Z · LW(p) · GW(p)

I didn't mean that you can just take other people's critiques as sound or unbiased, but I can guarantee you that the GiveWell competitor won't share your bias.

In theory, you're even his intended audience (liking EA but not 100% convinced), which means that if he's doing his job right the arguments would be tailored to you. (Though I suspect tailoring an argument for rationalists might require different skills than tailoring it for other types of groups.)

comment by Davidmanheim · 2013-12-03T17:44:07.829Z · LW(p) · GW(p)

Many of these issues seem related to Arrow's impossibility theorem; if groups have genuinely different values, and we optimize for one set not another, ants get tiny apartments and people starve, or we destroy the world economy because we discount too much, etc.

To clarify, I think LessWrong thinks most issues are simple, because we know little about them; we want to just fix it. As an example, poverty isn't solved for good reasons; it's hard to balance incentives and growth, and deal with heterogeneity, there exist absolute limits on current wealth and the ability to move it around, and the competing priorities of nations and individuals. It's not unsolved because people are too stupid to give money to feed-the-poor charities. We underestimate the rest of the world because we're really good at one thing, and think everyone is stupid for not being good at it - and even if we're right, we're not good at (understanding) many other things, and some of those things matter for fixing these problems.

Replies from: homunq
comment by homunq · 2013-12-21T19:23:09.761Z · LW(p) · GW(p)

Note: Arrow's Impossibility Theorem is not actually a serious philosophical hurdle for a utilitarian (though related issues such as the Gibbard-Satterthwaite theorem may be). That is to say: it is absolutely trivial to create a social utility function which meets all of Arrow's "impossible" criteria, if you simply allow cardinal instead of just ordinal utility. (Arrow's theorem is based on a restriction to ordinal cases.)
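
A minimal sketch of the cardinal construction, assuming each of the $n$ individuals reports a cardinal utility function $u_i$ (notation here is illustrative): rank social alternatives by the utilitarian sum

$$W(x) = \sum_{i=1}^{n} u_i(x).$$

This rule is non-dictatorial, satisfies unanimity (if every $u_i(x) > u_i(y)$ then $W(x) > W(y)$), and compares any two alternatives using only the utilities assigned to those two alternatives, which is the cardinal analogue of independence of irrelevant alternatives. The impossibility only bites because Arrow's framework restricts the inputs to ordinal rankings.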

Replies from: Davidmanheim
comment by Davidmanheim · 2014-01-13T17:32:12.322Z · LW(p) · GW(p)

Thank you for the clarification; despite this, cardinal utility is difficult because it assumes that we care about different preferences the same amount, or definably different amounts.

Unless there is a commodity that can adequately represent preferences (like money) and a fair redistribution mechanism, we still have problems maximizing overall welfare.

Replies from: homunq
comment by homunq · 2014-03-16T12:43:09.439Z · LW(p) · GW(p)

No argument here. It's hard to build a good social welfare function in theory (ie, even if you can assume away information limitations), and harder in practice (with people actively manipulating it). My point was that it is a mistake to think that Arrow showed it was impossible.

(Also: I appreciate the "thank you", but it would feel more sincere if it came with an upvote.)

Replies from: Davidmanheim
comment by Davidmanheim · 2015-02-10T06:04:00.990Z · LW(p) · GW(p)

I had upvoted you. Also, I used Arrow as a shorthand for that class of theorem, since they all show that a class of group decision problem is unsolvable - mostly because I can never remember how to spell Satterthewaite.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-12-02T16:55:32.222Z · LW(p) · GW(p)

I had the same sense of "This is the kind of criticism where you say 'we need two Stalins'" as one of the commenters. That doesn't mean it's correct, and I, like some others, particularly liked the phrase "pretending to actually try". It also seems to me self-evident that this is a huge step forward and a huge improvement over merely pretending to try. Much of what is said here is correct, but none of it is the kind of criticism which would kill EA if it were correct. For that you would have to cross over into alleging things which are false.

From my perspective, by far the most obvious criticism of EA is to take the focus on global poverty at face value and then remark that from the perspective of 100,000,000 years later it is unlikely that the most critical point in this part of history will have been the distribution of enough malaria nets. Since our descendants will reliably think this was not the most utility-impactful intervention 100,000,000 years later, we should go ahead and update now, etc. And indeed I regard the non-x-risk parts of EA as being important only insofar as they raise visibility and eventually get more people involved in, as I would put it, the actual plot.

Replies from: None, benkuhn, Eugine_Nier, Jonathan_Graehl
comment by [deleted] · 2013-12-03T00:17:11.682Z · LW(p) · GW(p)

Excuse me, but this sounds to me like a terrible argument. If the far future goes right, our descendants will despise us as completely ignorant barbarians and won't give a crap what we did or didn't do. If it goes wrong (ie: rocks fall, everyone dies), then all those purported descendants aren't a minus on our humane-ness ledger, they're a zero: potential people don't count (since they're infinite in number and don't exist, after all).

Besides, I damn well do care how people lived 5000 years ago, and I would certainly hope that my great-to-the-Nth-grandchildren will care how I live today. This should especially matter to someone whose idea of the right future involves being around to meet those descendants, in which case the preservation of lives ought to matter quite a lot.

God knows you have an x-risk fetish, but other than FAI (which carries actual benefits aside from averting highly improbable extinction events) you've never actually justified it. There has always been some small risk we could all be wiped out by a random disaster. The world has been overdue for certain natural disasters for millennia now, and we just don't really have a way to prevent any of them. Space colonization would help, but there are vast and systematic reasons why we can't do space colonization right now.

Except, of course, the artificial ones: nuclear winter, global warming, blah blah blah. Those, however, like all artificial problems, are deeply tied in with the human systems generating them, and they need much more systematic solutions than "donate to this anti-global-warming charity to meliorate the impact or reduce the risk of climate change killing everyone everywhere". But rather like the Silicon Valley start-up community, there's a nasty assumption that problems too large for 9 guys in a basement simply don't exist.

You seem to suffer a bias where you simply say, "people are fools and the world is insane" and thus write off any notion of doing something about it, modulo your MIRI/CFAR work.

Replies from: michaeldello, John_Maxwell_IV, Lumifer
comment by michaeldello · 2016-07-11T07:32:56.485Z · LW(p) · GW(p)

I think future humans are definitely worthy of consideration. Consider placing a time bomb in a childcare centre for 6-year-old kids, set to go off in 10 years. Even though the children who will be blown up don't yet exist, this is still a bad thing to do, because it robs those kids of their future happiness and experience.

If you subscribe to the block model of the universe, then time is just another dimension, and future beings exist in the same way that someone in the room over who you can't see also exists.

Replies from: None
comment by [deleted] · 2016-07-15T23:40:15.811Z · LW(p) · GW(p)

Even though the children who will be blown up don't yet exist, this is still a bad thing to do, because it robs those kids of their future happiness and experience.

Well, it's definitely a bad thing to do because it kills the children. I dunno if I'd follow that next inference ;-).

If you subscribe to the block model of the universe, then time is just another dimension,

Luckily, I don't. It works well for general relativity at the large scale, but doesn't yet seem to integrate well with the smallest scales of possible causality at the quantum level. I think that a model which ontically elides the distinction between past, present, and future as "merely epistemic" is quite possibly mistaken and requires additional justification.

I realize this makes me a naive realist about time, but on the other hand, I just don't see which predictions a "block model" actually makes about causality that account for both the success of general relativity and my very real ability to make interventions such as bombing or not bombing (personally, I'd prefer not bombing, there's too many damn bombs lately) a day-care. You might say "you've already made the choice and carried out the bombing in the future", but then you have to explain what the fundamental physical units of information are and how they integrate with relativity to form time as we know it in such a way that there can be no counterfactuals, even if only from some privileged informational reference frame.

In fact, the lack of privileged reference frames seems like an immediate issue: how can there be a "god's eye view" where complete information about past, present, and future exist together without violating relativity by privileging some reference frame? Relativity seems configured to allow loosely-coupled causal systems to "run themselves", so to speak, in parallel, without some universal simulator needing a global clock, so that synchronization only happens at the speed-of-light causality-propagation rate.

comment by John_Maxwell (John_Maxwell_IV) · 2013-12-13T17:36:58.283Z · LW(p) · GW(p)

Nick Bostrom has written some essays arguing for the prioritization of existential risk reduction over other causes, e.g. this one and this one.

I agree with your last paragraph.

comment by Lumifer · 2013-12-03T00:50:05.474Z · LW(p) · GW(p)

I damn well do care how people lived 5000 years ago

Do you, now?

And how does that caring manifest itself?

Replies from: Armok_GoB, None, Dias, gjm
comment by Armok_GoB · 2013-12-03T02:28:57.825Z · LW(p) · GW(p)

Presumably by staying on the lookout for opportunities to get their hands on a time machine.

comment by [deleted] · 2013-12-03T01:11:17.027Z · LW(p) · GW(p)

Hand me a time machine and you'll find out!

Replies from: Lumifer
comment by Lumifer · 2013-12-03T01:14:54.911Z · LW(p) · GW(p)

Go look for blue Public Call Police Boxes :-P

comment by Dias · 2013-12-03T03:09:28.670Z · LW(p) · GW(p)

I feel guilty for not living in ways that would be approved of by our ancestors.

comment by gjm · 2013-12-05T13:38:00.614Z · LW(p) · GW(p)

If I'm correctly understanding the subtext of that question ("if it doesn't affect what you actually do besides talking, it's meaningless to say you care about it") then I respectfully disagree.

I am quite happy to say that A cares about B if, e.g., A's happiness is greatly affected by B. If it happens that A is able to have substantial effect on B, then (1) we may actually be more interested in the question "what if anything does A do about B?", which could also be expressed as "does A care about B?", and (2) if the answer is that A doesn't do anything about B, then we might well doubt A's claims that her happiness is greatly affected by B. But in cases like this one -- where, so far as we know, there is and could be nothing whatever that A can do to affect B -- I suggest that "cares about" should be taken to mean something like "has her happiness affected by", and that asking what A does about B is simply a wrong response.

(Note 1. I am aware that I may be quite wrong about the subtext of the question. If an answer along the lines of "It manifests itself as changes in my emotional state when I discover new things about the lives of people 5000 years ago or when I imagine different ways their lives might have been" would have satisfied you, then the above is aimed not at you but at a hypothetical version of you who meant something else by the question.)

(Note 2. You might say that caring about something you can't influence is pointless and irrelevant. That might be correct, though I'm not entirely convinced, but in any case "how does that caring manifest itself?" seems like a strange thing to say to make that point.)

comment by benkuhn · 2013-12-02T17:55:34.384Z · LW(p) · GW(p)

Judging by the overwhelmingly favorable response, it certainly came out as we-need-two-Stalins criticism, whether or not I "intended" it that way. (One of the less expected side effects of this post was to cause me to update towards devoting more time to things that, unlike writing, don't give me a constant dribble of social reinforcement.)

I think my criticism includes yours, in the following sense: if we solve the "we fail to converge on truth because too much satisficing" problem, we will presumably stop saying things like "but global poverty could totally be the best thing for the far future!" (which has been argued) and start to find the things that are actually the best thing for the far future without privileging certain hypotheses.

Replies from: Lumifer
comment by Lumifer · 2013-12-02T18:03:03.417Z · LW(p) · GW(p)

start to find the things that are actually the best thing for the far future

I have strong doubts about your (not personal but generic) ability to evaluate the far-future consequences of most anything.

Replies from: maia
comment by maia · 2013-12-02T18:39:17.275Z · LW(p) · GW(p)

This is my main problem with the idea that we should have a far-future focus. I just have no idea at all how to get a grip on far-future predictions, and so it seems absurdly unlikely that my predictions will be correct, making it therefore also absurdly unlikely that I (or even most people) will be able to make a difference except in a very few cases by pure luck.

Replies from: atucker
comment by atucker · 2013-12-02T19:44:26.162Z · LW(p) · GW(p)

It seems easier to evaluate "is trying to be relevant" than "has XYZ important long-term consequence". For instance, investing in asteroid detection may not be the most important long-term thing, but it's at least plausibly related to x-risk (and would be confusing for it to be actively harmful), whereas third-world health has confusing long-term repercussions, but is definitely not directly related to x-risk.

Even if third world health is important to x-risk through secondary effects, it still seems that any effect on x-risk it has will necessarily be mediated through some object-level x-risk intervention. It doesn't matter what started the chain of events that leads to decreased asteroid risk, but it has to go through some relatively small family of interventions that deal with it on an object level.

Insofar as current society isn't involved in object-level x-risk interventions, it seems weird to think that bringing third-world living standards closer to our own will lead to more involvement in x-risk intervention without there being some sort of wider-spread availability of object-level x-risk intervention.

(Not that I care particularly much about asteroids, but it's a particularly easy example to think about.)

Replies from: satt, Strange7, gjm
comment by satt · 2013-12-03T02:58:41.030Z · LW(p) · GW(p)

investing in asteroid detection may not be the most important long-term thing, but it's at least plausibly related to x-risk (and would be confusing for it to be actively harmful), whereas third-world health has confusing long-term repercussions, but is definitely not directly related to x-risk.

I'm inclined to agree. A possible counterargument does come to mind, but I don't know how seriously to take it:

  1. Global pandemics are an existential risk. (Even if they don't kill everyone, they might serve as civilizational defeaters that prevent us from escaping Earth or the solar system before something terminal obliterates humanity.)

  2. Such a pandemic is much more likely to emerge and become a threat in less developed countries, because of worse general health and other conditions more conducive to disease transmission.

  3. Funding health improvements in less developed countries would improve their level of general health and impede disease transmission.

  4. From the above, investing in the health of less developed countries may well be related to x-risk.

  5. Optional: asteroid detection, meanwhile, is mostly a solved problem.

Point 4 seems to follow from points 1-3. To me point 2 seems plausible; point 3 seems qualitatively correct, but I don't know whether it's quantitatively strong enough for the argument's conclusion to follow; and point 1 feels a bit strained. (I don't care so much about point 5 because you were just using asteroids as an easy example.)

comment by Strange7 · 2013-12-14T08:49:20.959Z · LW(p) · GW(p)

Any given asteroid will either be detected and deflected in time, or not. There is, to my understanding at least, no mediocre level of asteroid impact risk management which makes the situation worse, in the sense of outright increasing the chance of an extinction event. More resources could be invested for further marginal improvements, with no obvious upper bound.

Poverty and disease are more complicated problems. Incautious use of antibiotics leads to disease-resistant strains, or you give a man a fish and he spends the day figuring out how to ask you for another instead of repairing his net. Sufficient resources need to be committed to solve the problem completely, or it just becomes even more of a mess. Once it's solved, it tends to stay solved, and then there are more resources available for everything else because the population of healthy, adequately-capitalized humans has increased.

In a situation like that, my preferred strategy is to focus on the end-in-sight problem first, and compare the various bottomless pits afterward.

Replies from: michaeldello
comment by michaeldello · 2016-07-11T01:34:42.721Z · LW(p) · GW(p)

I would have to disagree that there is no mediocre way to make asteroid risk worse through poor impact risk management, but perhaps it depends on what we mean by this. If we're strictly talking about the risk of some unmitigated asteroid hitting Earth, there is indeed likely nothing we can do to increase this risk. However, a poorly conceived detection, characterisation and deflection process could deflect an otherwise harmless asteroid into Earth. Further, developing deflection techniques could make it easier for people with malicious intent to deflect an otherwise harmless asteroid into Earth on purpose. Given how low the natural risk of a catastrophic asteroid impact is, I would argue that the chances of a man-made asteroid impact (either on purpose or by accident) are much higher than the chances of a natural one occurring in the next 100 years.

comment by gjm · 2013-12-05T13:44:21.670Z · LW(p) · GW(p)

Yes, most x-risk reduction will have to come about through explicit work on x-risk reduction at some point.

It could still easily be the case that working on improving the living standards of the world's poorest people is an effective route to x-risk reduction. In practice, scarcely anyone is going to work on x-risk as long as their own life is precarious, and scarcely anyone is going to do useful work on x-risk reduction if they are living somewhere that doesn't have the resources to do serious scientific or engineering work. So interventions that aim, in the longish term, to bring the whole world up to something like current affluent-West living standards seem likely to produce a much larger population of people who might be interested in reducing x-risk and better conditions for them to do such work in.

Replies from: atucker
comment by atucker · 2013-12-05T17:12:31.497Z · LW(p) · GW(p)

See the point about why it's weird to think that new affluent populations will work more on x-risk if current affluent populations don't do so at a particularly high rate.

Also, it's easier to move specific people to a country than it is to raise the standard of living of entire countries. If you're doing raising-living-standards as an x-risk strategy, are you sure you shouldn't be spending money on locating people interested in x-risk instead?

Replies from: gjm
comment by gjm · 2013-12-05T17:53:28.333Z · LW(p) · GW(p)

I quite agree that if all you care about is x-risk then trying to address that by raising everyone's living standards is using a nuclear warhead to crack a nut. I was addressing the following thing you said:

it seems weird to think that bringing third-world living standards closer to our own will lead to more involvement in x-risk intervention without there being some sort of wider-spread availability of object-level x-risk intervention.

which I think is clearly wrong: bringing everyone's living standards up will increase the pool of people who have the motive and opportunity to work on x-risk, and since the number of people working on x-risk isn't zero that number will likely increase (say, by 2x) if the size of that pool increases (say, by 2x) as a result of making everyone better off.

I wasn't claiming (because it would be nuts) that the way to get the most x-risk bang per buck is to reduce poverty and disease in the poorest parts of the world. It surely isn't, by a large factor. But you seemed to be saying it would have zero x-risk impact (beyond effects like reducing pandemic risk by reducing overall disease levels). That's all I was disagreeing with.

comment by Eugine_Nier · 2013-12-03T01:30:20.647Z · LW(p) · GW(p)

This logic suffers from an "infinity discontinuity" problem:

Consider a hypothetical paperclip maximizer. It has some resources, it has to choose between using them to make paperclips or using them to develop more efficient ways of gathering resources. A basic positive feedback calculation means the latter will lead to more paperclips in the long run. But if it keeps using that logic, it will keep developing more and more efficient ways of gathering resources and never actually get around to making paperclips.

Replies from: Gurkenglas, ialdabaoth, Strange7
comment by Gurkenglas · 2013-12-03T01:41:24.230Z · LW(p) · GW(p)

In this situation, a maximizer can't work anyway because there is no maximum.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-12-03T01:45:43.397Z · LW(p) · GW(p)

Well, there are states that are better than others.

comment by ialdabaoth · 2013-12-03T01:35:25.022Z · LW(p) · GW(p)

Consider a hypothetical paperclip maximizer. It has some resources, it has to choose between using them to make paperclips or using them to develop more efficient ways of gathering resources. A basic positive feedback calculation means the latter will lead to more paperclips in the long run. But if it keeps using that logic, it will keep developing more and more efficient ways of gathering resources and never actually get around to making paperclips.

Can't this be solved through exponential discounting? If paperclips made later are discounted more than paperclips made sooner, then we can settle on a stable strategy for when to optimize vs. when to execute, based on our estimations of optimization returns at each stage being exponential, super-exponential, or sub-exponential.
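
A minimal sketch of that calculation, assuming a per-period discount factor $\delta \in (0,1)$ and a resource growth factor $g$ from one more period of reinvestment (symbols here are illustrative): with $R$ resources in hand and one paperclip per unit of resource,

$$V_{\text{convert now}} = R, \qquad V_{\text{wait one period}} = \delta\, g R,$$

so the maximizer keeps reinvesting only while $\delta g > 1$. With sub-exponential returns, $g$ eventually falls below $1/\delta$ and it cashes out into paperclips; with sustained super-exponential returns, the inequality never flips and the original problem remains even under discounting.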

Replies from: Gurkenglas, owencb, Eugine_Nier
comment by Gurkenglas · 2013-12-03T19:39:47.343Z · LW(p) · GW(p)

Finding a problem with the simple algorithm that usually gives you a good outcome doesn't mean you get to choose a new utility function.

Replies from: Gurkenglas
comment by Gurkenglas · 2013-12-03T20:00:56.930Z · LW(p) · GW(p)

Clarifying anti-tldr edit time! If you got the above, no need to read on. (I wanted this to be an edit, but apparently I fail at clicking buttons)

The simple algorithm is the greedy decision-finding method "Choose that action which leads to one-time-tick-into-future self having the best possible range of outcomes available via further actions", which you think could handle this problem if only the utility function employed exponential discounting (whether it actually could is irrelevant, since I adress another point).

But your utility function is part of the territory, and the utility function that you use for calculating your actions is part of the map; it is rather suspicious that you want to tweak your map towards a version that is more convenient to your calculations.

comment by owencb · 2013-12-03T19:33:04.130Z · LW(p) · GW(p)

There are questions about why we should discount at all, or if we are going to, how to choose an appropriate rate.

But even setting those aside: this isn't any more of a solution than the version without discounting. They're similarly reliant on empirical facts about the world (the rate of resource growth); they just give differing answers about how fast that rate needs to be before you should wait rather than cash out.

comment by Eugine_Nier · 2013-12-03T01:49:24.383Z · LW(p) · GW(p)

Yes, but Eliezer doesn't believe in discounting terminal values.

Replies from: ialdabaoth
comment by ialdabaoth · 2013-12-03T02:02:23.026Z · LW(p) · GW(p)

So, let's be clear - are we talking about what works, or what we think Eliezer is dumb for believing?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2013-12-05T05:56:38.207Z · LW(p) · GW(p)

Well, first I'm not a consequentialist.

However, the linked post has a point: why should we value future lives less?

comment by Strange7 · 2013-12-14T06:50:58.759Z · LW(p) · GW(p)

Unless, or rather until, it hits diminishing returns on resource-gathering. Maybe an ocean, maybe a galaxy, maybe proton decay. With the accessible resources fully captured, it has to decide how much of that budget to convert directly into paperclips, how much to risk on an expedition across the potential barrier, and how much to burn gathering and analyzing information to make the decision. How many in-hand birds will you trade for a chance of capturing two birds currently in Andromeda?

comment by Jonathan_Graehl · 2013-12-02T23:15:26.216Z · LW(p) · GW(p)

from the perspective of 100,000,000 years later it is unlikely that the most critical point in this part of history will have been the distribution of enough malaria nets

I read this as presuming that generating/saving more humans is a worse use of smart/rich people's attention and resources than developing future-good theory+technology (or maybe it's only making more malaria-net-charity-recipients and their descendants that isn't a good investment toward those future-good things, but that's not likely to figure, since we can save quite a few lives at a very favorable ratio).

I wonder if you meant that it's a worse use because we have more people alive now than is optimal for future good, or because we only want more smart people, or something else.

Replies from: MugaSofer
comment by MugaSofer · 2013-12-23T02:23:18.567Z · LW(p) · GW(p)

I don't think he's saying that saving net-recipients is bad, or pointless. So I doubt either of those suggestions is correct.

comment by Vaniver · 2013-12-01T22:39:42.801Z · LW(p) · GW(p)

Cross-posted from http://www.benkuhn.net/ea-critique since I want outside perspectives, and also LW's comments are nicer than mine.

They are! I wish I had realized you cross-posted this here before I commented there. So also cross-posting my comment:


First, good on you for attempting a serious critique of your views. I hope you don’t mind if I’m a little unkind in responding to your critique, as that makes it easier and more direct.

Second, the cynical bit: to steal Yvain’s great phrase, this post strikes me as the “we need two Stalins!” sort of apostasy that lands you a cushy professorship. (The pretending to try vs. actually trying distinction seems relevant here.) The conclusion- “we need to be sufficiently introspective”- looks self-serving from the outside. Would being introspective happen to be something you consider a comparative advantage? Is the usefulness of the Facebook group how intellectually stimulating and rigorous you find the conversations, or how many dollars are donated as a result of its existence?

Third, the helpful bit: instead of saying “this is what I think would make EA slightly less bad,” consider an alternative prompt: ten years from now, you look back at your EA advocacy as a huge waste of your time. Why?

(Think about that for a while; my answer to that question can wait. These sorts of ‘pre-mortems’ are very useful in all sorts of situations, especially because it’s often possible to figure out information now which suggests the likelihood of a plan succeeding or failing, or it’s possible to build in safeguards against particular kinds of failures. Here, I’m focusing on the “EA was a bad idea to begin with” sorts of failures, not the "EA's implementation disappointed me, because other people weren't good enough," a la a common response to communism's failures.)

  1. Philosophical differences might be lethal. It could be the case that there isn’t a convincing population ethics, and EAers can’t agree on which causes to promote, and so GiveWell turns into a slightly more effective version of Charity Navigator. (Note this actually showed up in Charity Navigator’s recent screed: “we don’t tell people which causes to value, just which charities spend money frivolously.”)

  2. It might turn out that utilitarianism fails, for example, because of various measurement problems, which could be swept under the rug until someone actually tried to launch a broad utilitarian project, when their impracticality became undeniable. (Compare to, say, communists ignoring problems of information cost or incentives.)

  3. Consider each of the four principles. It’s unlikely that maximization will fail individually- if you know that one charity can add 50 human QALYs with your donation, and another charity can add 20 human QALYs with your donation, you’ll go with the first. Gathering the data is costly, but analysts are cheap if you’re directing enough donations. But it could fail socially, as in http://xkcd.com/871/ - any criticism of another person’s inefficiency might turn them off charity, or you. EA might be the hated hipsters of the charity world. (I personally don’t expect that this is a negative on net, because of the huge quality difference between charitable investments- if you have half as many donations used ten times as well, you’ve come out ahead- but it could turn out that way.)

  4. Similarly, consequentialism seems unlikely to fail, but what consequences we care about might be significantly different. (Maximizing fuzzies and maximizing QALYs looks different, but the first seems like it could be more effective charity than the second!)

  5. Egalitarianism might fail. The most plausible hole here seems to be the existential risk / control the singularity arguments, where it turns out that malaria just doesn’t matter much in the grand scheme of things.

  6. Altruism might fail. It might be the case that people don’t actually care about other people anywhere near the level that they care about themselves, and the only people that do are too odd to build a broad, successful movement. (Dipping back into cynical, I must say that I found the quoted story about kids amusing. “My professed beliefs are so convincing, but somehow I don’t feel an urge to commit genetic suicide to benefit unrelated people. It’s almost like that’s been bred into me somehow.”) Trying looks sexy, but actually trying is way costlier and not necessarily sexier than pretending to try, so it’s not clear to me why someone wouldn’t pretend to try. (Cynically again: if you do drop out of EA because you landed a spouse and now it just seems so much less important than your domestic life, it’s unlikely you’ll consider past EA advocacy as a waste if it helped you land that spouse, but likely you’ll consider future EA advocacy a waste.)

Replies from: benkuhn, pianoforte611
comment by benkuhn · 2013-12-01T23:22:15.729Z · LW(p) · GW(p)

Thanks for cross-posting! You didn't realize because I didn't think to cross-post until after you had commented there. (Sorry for being unclear.) I've added a link to this cross-post to the text on benkuhn.net for people who want to comment.

First, good on you for attempting a serious critique of your views. I hope you don’t mind if I’m a little unkind in responding to your critique, as that makes it easier and more direct.

Go ahead! Obviously this is important enough that Crocker's Rules apply.

Second, the cynical bit: to steal Yvain’s great phrase, this post strikes me as the “we need two Stalins!” sort of apostasy that lands you a cushy professorship. (The pretending to try vs. actually trying distinction seems relevant here.) The conclusion- “we need to be sufficiently introspective”- looks self-serving from the outside. Would being introspective happen to be something you consider a comparative advantage? Is the usefulness of the Facebook group how intellectually stimulating and rigorous you find the conversations, or how many dollars are donated as a result of its existence?

You've correctly detected that I didn't spend as much time on the conclusion as the criticisms. I actually debated not proposing any solutions, but decided against it, for a couple reasons:

  1. The solution is essentially "we need to actually care about these problems I just listed" but phrased more nicely. I think any solution to the problems I listed involves actually caring about them more than we currently do.
  2. The end of this post is the best place I could think of to propose a solution that would actually get people's attention.
  3. I didn't want to end without saying anything constructive.

Incidentally, I don't actually consider being thoughtful about social dynamics a comparative advantage. I think we need more, like, sociologists or something--people who are actually familiar with the pitfalls of being a movement.

Third, the helpful bit: instead of saying "this is what I think would make EA slightly less bad," consider an alternative prompt: ten years from now, you look back at your EA advocacy as a huge waste of your time. Why?

I see now that it's not obvious from the finished product, but this was actually the prompt I started with. I removed most of the doom-mongering (of the form "these problems are so bad that they are going to sink EA as a movement") because I found it less plausible than the actual criticisms and wanted to maximize the chance that this post would be taken seriously by effective altruists. But I stand by these criticisms as the things that I think are most likely to torpedo EA right now. I'm less concerned about one of the principles failing than I am that the principles won't be enough--that people won't apply them properly because of failures of epistemology.

Replies from: Vaniver
comment by Vaniver · 2013-12-02T01:29:06.879Z · LW(p) · GW(p)

Incidentally, I don't actually consider being thoughtful about social dynamics a comparative advantage. I think we need more, like, sociologists or something--people who are actually familiar with the pitfalls of being a movement.

That deflates that criticism. For the object-level social dynamics problem, I think that people will not actually care about those problems unless they are incentivised to care about those problems, and it's not clear to me that that is possible to do.

What does the person who EA is easy for look like? My first guess is a person who gets warm fuzzies from rigor. But then that suggests they'll overconsume rigor and underconsume altruism.

I'm less concerned about one of the principles failing than I am that the principles won't be enough--that people won't apply them properly because of failures of epistemology.

Is epistemology the real failing, here? This may just be the communism analogy, but I'm not seeing how the incentive structure of EA is lined up with actually getting things done rather than pretending to actually get things done. Do you have a good model of the incentive structure of EA?

I see now that it's not obvious from the finished product, but this was actually the prompt I started with. I removed most of the doom-mongering (of the form "these problems are so bad that they are going to sink EA as a movement") because I found it less plausible than the actual criticisms and wanted to maximize the chance that this post would be taken seriously by effective altruists.

Interesting. The critique you've written strikes me as more "nudging" than "apostasy," and while nudging is probably more effective at improving EA, keeping those concepts separate seems useful. (The rest of this comment is mostly meta-level discussion of nudging vs. apostasy, and can be ignored by anyone interested in just the object-level discussion.)

I interpreted the idea of apostasy along the lines of Avoiding Your Belief's Real Weak Points. Suppose you knew that EA being a good idea was conditional on there being a workable population ethics, and you were uncertain if a workable population ethics existed. Then you would say "well, the real weak spot of EA is population ethics, because if that fails, then the whole edifice comes crashing down." This way, everyone who isn't on board with EA because they're pessimistic about population ethics says "aha, Ben gets it," and possibly people in EA say "hm, maybe we should take the population ethics problem more seriously." This also fits Bostrom's idea- you could tell your past self "look, past Ben, you're not taking this population ethics problem seriously, and if you do, you'll realize that it's impossible and EA is wasted effort." (And maybe another EAer reads your argument and is motivated to find that workable population ethics.)

I think there's a moderately strong argument for sorting beliefs by badness-if-true rather than badness-if-true times plausibility because it's far easier to subconsciously nudge your estimate of plausibility than your estimate of badness-if-true. I want to say there's an article by Yvain or Kaj Sotala somewhere about "I hear criticisms of utilitarianism and think 'oh, that's just uninteresting engineering, someone else will solve that problem' but when I look at other moral theories I think 'but they don't have an answer for X!' and think that sinks their theory, even though its proponents see X as just uninteresting engineering," which seems to me a good example of what differing plausibility assumptions look like in practice. Part of the benefit of this exercise seems to be listing out all of the questions whose answers could actually kill your theory/plan/etc., and then looking at them together and saying "what is the probability that none of these answers go against my theory?"

Now, it probably is the case that the total probability is small. (This is a belief you picked because you hold it strongly and you've thought about it a long time, not one picked at random!) But the probability may be much higher than it seems at first, because you may have dismissed an unpleasant possibility without fully considering it. (It also may be that by seriously considering one of these questions, you're able to adjust EA so that the question no longer has the chance of killing EA.)

As an example, let's switch causes to cryonics. My example of cryonics apostasy is "actually, freezing dead people is probably worthless; we should put all of our effort into making it legal to freeze live people once they get a diagnosis of a terminal condition or a degenerative neurological condition" and my example of cryonics nudging is "we probably ought to have higher fees / do more advertising and outreach." The first is much more painful to hear, and that pain is both what makes it apostasy and what makes it useful to actually consider. If it's true, the sooner you know the better.

Replies from: Will_Sawin, Jiro, benkuhn, MichaelVassar, benkuhn
comment by Will_Sawin · 2013-12-02T01:54:17.035Z · LW(p) · GW(p)

Arguably trying for apostasy, failing due to motivated cognition, and producing only nudging is a good strategy that should be applied more broadly.

Replies from: Vaniver
comment by Vaniver · 2013-12-02T02:30:24.066Z · LW(p) · GW(p)

Arguably trying for apostasy, failing due to motivated cognition, and producing only nudging is a good strategy that should be applied more broadly.

A good strategy for what ends?

Replies from: Will_Sawin
comment by Will_Sawin · 2013-12-02T21:10:12.722Z · LW(p) · GW(p)

Finding good nudges!

comment by Jiro · 2013-12-02T15:56:30.719Z · LW(p) · GW(p)

I think there's a moderately strong argument for sorting beliefs by badness-if-true rather than badness-if-true times plausibility

This seems to encourage Pascal's mugging. In fact, it's even worse than Pascal's mugging; in Pascal's mugging, at least the possible damage has to be large enough that the expected value is large even after considering its small probability. Here, the amount of possible damage just has to be large, and it doesn't even matter that the plausibility is small.

(If you think plausibility can't be substituted for probability here, then replace "Pascal's mugging" with "problems greatly resembling Pascal's mugging").
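
A toy illustration, with invented numbers and causes, of how the two sort orders come apart:

    # Sorting purely by badness-if-true puts an astronomically implausible
    # catastrophe first, even though its expected badness is tiny.

    risks = [
        # (belief that could be wrong, badness_if_true, plausibility)
        ("population ethics is unworkable",   100.0, 0.10),
        ("donations crowd out other giving",   30.0, 0.30),
        ("Pascal-style catastrophe scenario",   1e9, 1e-12),
    ]

    by_badness  = sorted(risks, key=lambda r: r[1],        reverse=True)
    by_expected = sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

    print([name for name, *_ in by_badness])   # Pascal-style scenario ranks first
    print([name for name, *_ in by_expected])  # it drops to last (expected badness 0.001)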

Replies from: Vaniver
comment by Vaniver · 2013-12-02T20:32:04.500Z · LW(p) · GW(p)

This seems to encourage Pascal's mugging.

This is one reason why I think the argument is only moderately strong.

Replies from: Strange7
comment by Strange7 · 2013-12-14T09:42:29.569Z · LW(p) · GW(p)

Maybe include plausibility, but put some effort into coming up with pessimistic estimates?

comment by benkuhn · 2013-12-02T18:47:55.786Z · LW(p) · GW(p)

Re your meta point (sorry for taking a while to respond): I now agree with you that this should not be called a "(hypothetical) apostasy" as such. Evidence which updated me in that direction includes:

  1. Your argument
  2. Referencing a "hypothetical apostasy" seems to have already led to some degradation of the meaning of the term; cf. Diego's calling his counter-argument also an apostasy. (Though this may be a language barrier thing?)
  3. This article got a far more positive response than my verbal anticipations expected (though possibly not than System 1 predicted).

Thanks for calling this out. Should I edit with a disclaimer, do you think?

Replies from: Vaniver
comment by Vaniver · 2013-12-02T19:41:11.998Z · LW(p) · GW(p)

sorry for taking a while to respond

No problem!

I now agree with you

That's what I like to hear! :P

Should I edit with a disclaimer, do you think?

Probably. If you want to do the minimal change, I would rewrite the "how to read this" section to basically be just its last paragraph, with a link to something that you think is a better introduction to EA, and maybe a footnote explaining that you originally wrote this as a response to the apostasy challenge but thought the moderate critique was better.

If you want to do the maximal change, I would do the minimal change and also post the "doom-mongering" parts you deleted, probably as a separate article. (Here, the disclaimer is necessary, though it could be worded so that it isn't.)

comment by MichaelVassar · 2013-12-04T17:14:02.319Z · LW(p) · GW(p)

I think that this is an effective list of real weak spots. If these problems can't be fixed, EA won't do much good.

comment by benkuhn · 2013-12-02T03:28:48.140Z · LW(p) · GW(p)

That deflates that criticism. For the object-level social dynamics problem, I think that people will not actually care about those problems unless they are incentivised to care about those problems, and it's not clear to me that that is possible to do.

Is epistemology the real failing, here? This may just be the communism analogy, but I'm not seeing how the incentive structure of EA is lined up with actually getting things done rather than pretending to actually get things done. Do you have a good model of the incentive structure of EA?

I don't think EA has to worry about incentive structure in the same way that communism does, because EA doesn't want to take over countries (well, if it does, that's a different issue). Fundamentally we rely on people deciding to do EA on their own, and thus having at least some sort of motivation (or, like, coherent extrapolated motivation) to actually try. (Unless you're arguing that EA is primarily people who are doing it entirely for the social feedback from people and not at all out of a desire to actually implement utilitarianism. This may be true; if it is, it's a separate problem from incentives.)

The problem is more that this motivation gets co-opted by social-reward-seeking systems and we aren't aware of that when it happens. One way to fix this is to fix incentives, it's true, but another way is to fix the underlying problem of responding to social incentives when you intended to actually implement utilitarianism. Since the reason EA started was to fix the latter problem (e.g. people responding to social incentives by donating to the Charity for Rare Diseases in Cute Puppies), I think that that route is likely to be a better solution, and involve fewer epicycles (of the form where we have to consciously fix incentives again whenever we discover other problems).

I'm also not entirely sure this makes sense, though, because as I mentioned, social dynamics isn't a comparative advantage of mine :P

(Responding to the meta-point separately because yay threading.)

Replies from: CarlShulman, ColonelMustard, Vaniver, MichaelVassar, atucker
comment by CarlShulman · 2013-12-02T04:37:56.891Z · LW(p) · GW(p)

I don't think EA has to worry about incentive structure in the same way that communism does, because EA doesn't want to take over countries (well, if it does, that's a different issue)

GiveWell is moving into politics and advocacy, there are 80k people in politics, and GWWC principals like Toby Ord do a lot of advocacy with government and international organizations, and have looked at aid advocacy groups.

Replies from: Strange7
comment by Strange7 · 2013-12-14T09:50:12.465Z · LW(p) · GW(p)

In a more general sense, telling some large, ideologically-cohesive group of people to take as much of their money as they can stand to part with and throw it all at some project, and expecting them to obey, seems like an intrinsically political act.

comment by ColonelMustard · 2013-12-02T04:57:57.944Z · LW(p) · GW(p)

EA doesn't want to take over countries

"Take over countries" is such an ugly phrase. I prefer "country optimisation".

comment by Vaniver · 2013-12-02T04:01:26.805Z · LW(p) · GW(p)

Unless you're arguing that EA is primarily people who are doing it entirely for the social feedback from people and not at all out of a desire to actually implement utilitarianism. This may be true; if it is, it's a separate problem from incentives.

I think that the EA system will be both more robust and more effective if it is designed with the assumption that the people in it do not share the system's utility function, but that win-win trades are possible between the system and the people inside it.

comment by MichaelVassar · 2013-12-04T17:15:20.691Z · LW(p) · GW(p)

I think that attempting effectiveness points towards a strong attractor of taking over countries.

comment by atucker · 2013-12-02T06:22:04.276Z · LW(p) · GW(p)

Social feedback is an incentive, and the bigger the community gets the more social feedback is possible.

Insofar as Utilitarianism is weird, negative social feedback is a major reason to avoid acting on it, and so early EAs must have been very strongly motivated to implement utilitarianism in order to overcome it. As the community gets bigger, it is less weird and there is more positive support, and so it's less of a social feedback hit.

This is partially good, because it makes it easier to "get into" trying to implement utilitarianism, but it's also bad because it means that newer EAs need to care about utilitarianism relatively less.

It seems that saying that incentives don't matter as long as you remove social-approval-seeking ignores the question of why the remaining incentives would actually push people towards actually trying.

It's also unclear what's left of the incentives holding the community together after you remove the social incentives. Yes, talking to each other probably does make it easier to implement utilitarian goals, but at the same time it seems that the accomplishment of utilitarian goals is not in itself a sufficiently powerful incentive, otherwise there wouldn't be effectiveness problems to begin with. If it were, then EAs would just be incentivized to effectively pursue utilitarian goals.

comment by pianoforte611 · 2013-12-03T19:11:43.154Z · LW(p) · GW(p)

I've been thinking about point 6. I think it's actually quite obvious in hindsight. People really only care about themselves and people close to them, either due to personal connections, physical closeness (like being a neighbor), or similar characteristics. Altruism starts off with the assumption that all lives have equal value, which doesn't reflect the values that people actually have. Charity has a signaling purpose (it allows you to signal that you are the kind of person who cares) and a selfish purpose (it makes you feel good), but it's not really about helping the people most in need.

In the drowning child example, I think the response that an honest non-utilitarian would give is that there is no moral obligation to save the drowning child if you aren't connected to them in any way.

Moreover, this might not be a bad thing: capitalism works partly because of selfishness, and while I realize that is probably motivated cognition speaking, I think it's worth considering that it might be okay that people value lives unconnected to them much less.

Replies from: Strange7
comment by Strange7 · 2013-12-14T09:28:28.078Z · LW(p) · GW(p)

Even in the absence of a moral obligation, saving a drowning child to whom you are not otherwise connected (or getting first aid training in anticipation of such an opportunity, etc.) might still be a very worthwhile investment, with the right follow-up. In addition to broader reputational effects, there's the possibility of a debt of gratitude from the child and any associated parents or guardians which would broaden and diversify your social circle, without giving those approached an opportunity to resent the intrusion.

comment by lukeprog · 2013-12-01T23:39:01.130Z · LW(p) · GW(p)

Good work! Though, this is much weaker than my model of a hypothetical apostasy, which is informed by my actual deconversion from Christianity, which involved writing a thoroughly withering critique of theism and Christianity, not a "here's how Christianity could be tweaked and improved."

If I were to write a hypothetical apostasy for EA, I might take the communism part further and try to argue that enacting global policies on the basis of unpopular philosophical views was likely to be disastrous. Or maybe that real-world utilitarianism is so far from intuitive human values (which have lots of emotional deontological principles and so on) that using it in the real world would cause the humans to develop all kinds of pathologies. Or something more damning than what you've written. But if you published such a thing then you'd have lots more people misunderstand it and be angry at you, too. :)

Edit: I see that Carl has said this better than I did.

Replies from: benkuhn
comment by benkuhn · 2013-12-02T00:06:27.267Z · LW(p) · GW(p)

Since you delegated to Carl, I responded to him.

comment by Nick_Beckstead · 2013-12-02T18:02:19.963Z · LW(p) · GW(p)

I'd like to see more critical discussion of effective altruism of the type in this post. I particularly enjoyed the idea of "pretending to actually try." People doing sloppy thinking and then making up EA-sounding justifications for their actions is a big issue.

As Will MacAskill said in a Facebook comment, I do think that a lot of smart people in the EA movement are aware of the issues you're bringing up and have chosen to focus on other things. Big picture, I find claims like "your thing has problem X so you need to spend more resources on fixing X" more compelling when you point to things we've been spending time on and say that we should have done less of those things and more of the thing you think we should have been doing. E.g., I currently spend a lot of my time on research, advocacy, and trying to help improve 80,000 Hours, and I'd be pretty hesitant to switch to writing blogposts criticizing mistakes that people in the EA community commonly make, though I've considered doing so and agree this would help address some of the issues you've identified. But I would welcome more of that kind of thing.

I disagree with your perspective that the effective altruism movement has underinvested in research into population ethics. I wrote a PhD thesis which heavily featured population ethics and aimed at drawing out big-picture takeaways for issues like existential risk. I wouldn't say I settled all the issues, but I think we'd make more progress as a movement if we did less philosophy and more basic fact-finding of the kind that goes into GiveWell shallow cause overviews.

Disclosure: I am a Trustee for the Centre for Effective Altruism and I formerly worked at GiveWell as a summer research analyst.

Replies from: benkuhn
comment by benkuhn · 2013-12-02T18:39:18.128Z · LW(p) · GW(p)

The main thing that I personally think we don't need as much of is donations to object-level charities (e.g. GiveWell's top picks). It's unclear to me how much this can be funged into more self-reflection for the general person, but for instance I am sacrificing potential donations right now in order to write this post and respond to criticism...

I think in general, a case that "X is bad so we need more of fixing X" without specific recommendations can also be useful in that it leaves the resource allocation up to individual people. For instance, you decided that your current plans are better than spending more time on social-movement introspection, but (hopefully) not everyone who reads this post will come to the same conclusion.

I think "writing blogposts criticizing mistakes that people in the EA community commonly make" is a moderate strawman of what I'd actually like to see, in that it gets us closer to being a successful movement but clearly won't be sufficient on its own.

Why do you think basic fact-finding would be particularly helpful? Seems to me that if we can't come to nontrivial conclusions already, the kind of facts we're likely to find won't help very much.

Replies from: jkaufman, Nick_Beckstead
comment by jefftk (jkaufman) · 2013-12-03T02:27:53.145Z · LW(p) · GW(p)

we don't need as much of is donations to object-level charities

These donations are useful for establishing credibility as a real movement and not just "people talking on the internet".

Replies from: benkuhn, Pablo_Stafforini
comment by benkuhn · 2013-12-03T06:35:58.467Z · LW(p) · GW(p)

Yes, I'm well aware. I never said they were useless, just that IMO the marginal value is lower than that of resources spent elsewhere.

comment by Pablo (Pablo_Stafforini) · 2014-03-27T11:48:00.930Z · LW(p) · GW(p)

Also, as Ben notes,

Given these considerations, it’s quite surprising that effective altruists are donating to global health causes now. Even for those looking to use their donations to set an example, a donor-advised fund would have many of the benefits and none of the downsides.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-03-28T13:29:01.163Z · LW(p) · GW(p)

Even for those looking to use their donations to set an example, a donor-advised fund would have many of the benefits and none of the downsides.

Still not so sure. Legibility and inferential distance are major constraints here. When trying to explain earning to give it's much easier if the "give" part is something obviously good. Donor-advised funds combined with an intention to choose effective charities aren't "obviously good" in the same way as a donation to a charity.

comment by Nick_Beckstead · 2013-12-02T20:36:40.076Z · LW(p) · GW(p)

The main thing that I personally think we don't need as much of is donations to object-level charities (e.g. GiveWell's top picks). It's unclear to me how much this can be funged into more self-reflection for the general person, but for instance I am sacrificing potential donations right now in order to write this post and respond to criticism...

I am substantially less enthusiastic about donations to object-level charities (for their own sake) than I am for opportunities for us to learn and expand our influence. So I'm pretty on board here.

I think "writing blogposts criticizing mistakes that people in the EA community commonly make" is a moderate strawman of what I'd actually like to see, in that it gets us closer to being a successful movement but clearly won't be sufficient on its own.

That was my first pass at how I'd try to start increasing the "self-awareness" of the movement. I would be interested in hearing more specifics about what you'd like to see happen.

Why do you think basic fact-finding would be particularly helpful? Seems to me that if we can't come to nontrivial conclusions already, the kind of facts we're likely to find won't help very much.

A few reasons. One is that the model for research having an impact is: you do research --> you find valuable information --> people recognize your valuable information --> people act differently. I have become increasingly pessimistic about people's ability to recognize good research on issues like population ethics. But I believe people can recognize good research on stuff like shallow cause overviews.

Another consideration is our learning and development. I think the above consideration applies to us, not just to other people. If it's easier for us to tell if we're making progress, we'll learn how to learn about these issues more quickly.

I believe that a lot of the more theoretical stuff needs to happen at some point. There can be a reasonable division of labor, but I think many of us would be better off loading up on the theoretical side after we had a stronger command of the basics. By "the basics" I mean stuff like "who is working on synthetic biology?" in contrast with stuff like "what's the right theory of population ethics?".

You might have a look at this conversation I had with Holden Karnofsky, Paul Christiano, Rob Wiblin, and Carl Shulman. I agree with a lot of what Holden says.

comment by MichaelVassar · 2013-12-02T16:39:29.310Z · LW(p) · GW(p)

This is MUCH better than I expected from the title. I strongly agree with essentially the entire post, and many of my qualms about EA are the result of my bringing these points up with, e.g. Nick Beckstead and not seeing them addressed or even acknowledged.

Replies from: Nick_Beckstead
comment by Nick_Beckstead · 2013-12-02T16:49:24.602Z · LW(p) · GW(p)

I would love to hear about your qualms with the EA movement if you ever want to have a conversation about the issue.

Edited: When I first read this, I thought you were saying you hadn't brought these problems up with me, but re-reading it, it sounds like you tried to raise these criticisms with me. This post has a Vassar-y feel to it, but this is mostly criticism I wouldn't say I'd heard from you, and I would have guessed your criticisms would be different. In any case, I would still be interested in hearing more from you about your criticisms of EA.

Replies from: MichaelVassar
comment by MichaelVassar · 2013-12-10T17:30:32.116Z · LW(p) · GW(p)

I spent many hours explaining a sub-set of these criticisms to you in Dolores Park soon after we first met, but it strongly seemed to me that that time was wasted. I appreciate that you want to be lawful in your approach to reason, and thus to engage with disagreement, but my impression was that you do not actually engage with disagreement, you merely want to engage with disagreement; basically, I felt that you believe in your belief in rational inquiry, but that you don't actually believe in rational inquiry.

I may, of course, be wrong, and I'm not sure how people should respond in such a situation. It strongly seems to me that a) leftist movements tend to collapse in schism, b) rightist movements tend to converge on generic xenophobic authoritarianism regardless of their associated theory. I'd rather we avoid both of those situations, but the first seems like an inevitable result of not accommodating belief in belief, while the second seems like an inevitable result of accommodating it. My instinct is that the best option is to not accommodate belief in belief and to keep a movement small enough that schism can be avoided. The worst thing for an epistemic standard is not the person who ignores or denies it, but the person who tries to mostly follow it when doing so feels right or is convenient while not acknowledging that they aren't following it when it feels weird or inconvenient, as that leads to a community of people with such standards engaging in double-think WRT whether their standards call for weird or inconvenient behavior. OTOH, my best guess is that about 50 people is as far as you can get with my proposed approach.

Replies from: Nick_Beckstead, joaolkf
comment by Nick_Beckstead · 2013-12-12T11:42:49.097Z · LW(p) · GW(p)

What I mostly remember from that conversation was disagreeing about the likely consequences of "actually trying". You thought elite people in the EA cluster who actually tried had a high probability of much more extreme achievements than I did. I see how that fits into this post, but I didn't know you had loads of other criticism about EA, and I probably would have had a pretty different conversation with you if I did.

Fair enough regarding how you want to spend your time. I think you're mistaken about how open I am to changing my mind about things in the face of arguments, and I hope that you reconsider. I believe that if you consulted with people you trust who know me much better than you, you'd find they have different opinions about me than you do. There are multiple cases where detailed engagement with criticism has substantially changed my operations.

comment by joaolkf · 2014-01-03T23:27:23.059Z · LW(p) · GW(p)

I'm partially unsure if I should be commenting here, since I do not know Nick that well, or the other matters that could be involved in this discussion. Those two points notwithstanding, it seems to me not only that your impression of him is mistaken, but that the truth lies in the exact opposite direction. If you check his posts and other writings, it seems he has the remarkable habit of taking into consideration many opposing views (e.g.: his thesis) and also putting a whole lot more weight on others' opinions (e.g.: common sense as a prior) than the average here at LW. I would gather others must have, at worst, a different opinion of him than the one you presented; otherwise he wouldn't be in the positions he's in right now, both at FHI and GWWC. That's my two cents. Not even sure if he would agree with all of it, but I would imagine some data points wouldn't do harm.

comment by [deleted] · 2013-12-03T00:28:52.922Z · LW(p) · GW(p)

As a practicing socialist, I found the comparison to Communism illuminating and somewhat disturbing.

You've already listed some of the major, obvious aspects in which the Effective Altruism movement resembles Communism. Let me add another: failure to take account of local information and preferences.

Information: Communism (or as the socialists say: state capitalism, or as the dictionaries say: state socialism -- centrally planned economies!) failed horrifically at the Economic Calculation Problem because no central planning system composed of humans can take account of all the localized, personal information inherent in real lives. Markets, on the other hand, can take advantage of this information, even if they're not always good at it (see for a chuckle: "Markets are Efficient iff P=NP"). Effective altruism, being centrally planned, suffers this problem.

Preferences: the other major failure of Communist central planning was its foolish claim that the entirety of society had a single, uniform set of valuations over economic inputs and outputs which was determined by the planning committee in the Politburo. The result was, of course, that the system produced vast amounts of things the Politburo thought were Very Important (such as weapons, to Smash the Evil Capitalists), and vast amounts of capital inputs (that sometimes sat uselessly because nobody really wanted them), but very, very small amounts of things most people actually preferred (like meat for household consumption).

Given, as you've mentioned, the overwhelmingly uniform and centrally-planned nature of the Effective Altruism movement, you should expect to suffer exactly the same systematic problems as Communism. My best recommendation to fix the problem is to come up with an optimization metric for Doing Good that doesn't involve your movement's having to personally know and plan all the facts and all the values of each altruistic intervention from the top down. Find a metric by which you can encourage the philanthrophic/charitable system to optimize itself from the bottom up, and then unleash it!

Replies from: Lumifer
comment by Lumifer · 2013-12-03T00:50:46.496Z · LW(p) · GW(p)

Effective altruism, being centrally planned

Hold on a second. This is news to me.

What is it about EA being centrally planned?

Replies from: atucker, None, fubarobfusco, Eugine_Nier
comment by atucker · 2013-12-05T08:06:37.981Z · LW(p) · GW(p)

My guess is that Eli is referring to the fact that the EA community seems to largely donate where GiveWell says to donate, and that a lot of the discourse is centered around a system of trying to figure out all of the effects of a particular intervention, weigh it against all other factors, and then come up with a plan of what to do. Said plan is incredibly sensitive to you being right about the prioritization, the facts of the situation, etc., in a way that will cause you to predictably fail to do as well as you could, due to factors like a lack of on-the-ground feedback suggesting other important areas, misunderstanding people's values, errors in reasoning, and a lack of diversity in your attempts, so that if one of the parts fails nothing gets accomplished.

I tend to think that global health is relatively non-controversial as a broad goal (nobody wants malaria! like, actually nobody) that doesn't suffer from the "we're figuring out what other people value" problem as much as other things, but I also think that that's almost certainly not the most important thing for people to be dealing with now to the exclusion of all else, and lots of people in the EA community seem to hold similar views.

I also think that GiveWell is much better at handling that type of issue than people in the EA community are, but that the community (at least the Facebook group) is somewhat slow to catch up.

comment by [deleted] · 2013-12-11T18:09:43.905Z · LW(p) · GW(p)

As a first approximation to my thinking of a week ago, I was thinking of things like this. Many acts of "charity" consist of trying to manage the lives of the unfortunate for them, and evidence is emerging that, well, the unfortunate know their own needs better than we do: we should empower them and leave them "free to optimize", so to speak.

Not that malaria relief or anything is a bad cause, but I generally have more "feeling" regarding poverty myself, since combating poverty over the middle term (longer than a year, shorter than a generation, let's say) tends to result in the individual beneficiaries becoming able to solve a lot of their other problems, and has generational knock-on effects (such as: reduced poverty leads to better nutrition and better building materials, meaning healthier, smarter children over time, meaning people can do more to solve their remaining issues, etc.).

And then I was also definitely thinking about people trying to "do maximum good" through existential-risk reduction donations (including MIRI, but not just MIRI), and how these donations tend to be... dubiously effective. Sure, we're not dead yet, but very few organizations can evidentially demonstrate that they're actively reducing the probability that we all die. That is, if I want to be less-probably dead next year than this year, I don't know to whom to donate.

EDIT: Regarding the latter paragraph, I wish to note that I did give MIRI $72 this past year, this being calculated as the equivalent price of several Harry Potter novels for which the author deserved payment. If I become convinced that MIRI/FHI are actually effective in ensuring both that AI doesn't kill us all off, and that they can do better than throw the human species into a permanent Medieval Stasis (i.e., that they can "save the world"), resulting in the much-lauded futuristic utopia they use for their recruiting pitches, I will donate larger sums quite willingly. I also want to actually engage in the scientific/philosophical problems involved myself, just to be damn sure. So don't think I'm being insulting here; I'm just pointing out that "we're the only ones thinking about AI risk and other x-risk" (which is mostly true: almost all popular consideration of AI risk past the level of Terminator movies has been brought on by MIRI/FHI propagandizing) is not really very good evidence for "we're effectively reducing the odds of AI being a problem and increasing the odds of a universe tiled in awesomeness".

Replies from: Lumifer
comment by Lumifer · 2013-12-11T18:21:59.528Z · LW(p) · GW(p)

should empower them and leave them "free to optimize"

Yes, but the (currently prevalent) alternative is not central planning, but rather the proliferation of a variety of different "let-us-manage-your-lifestyle" organizations.

very few organizations can evidentially demonstrate that they're actively reducing the probability that we all die.

Actually, I can't think of any. But still, what does this all have to do with central planning?

Replies from: None
comment by [deleted] · 2013-12-11T18:25:33.315Z · LW(p) · GW(p)

Would you like me to amend from "central" planning to "external" planning? As in, organizations who attempt to plan people's lives in an interfering sort of way? Sorry, I just want to check if we're about to get into a massive argument about vocabulary or whether there's some place we are actually talking about the same thing.

Replies from: Nornagest, Lumifer
comment by Nornagest · 2013-12-11T18:46:46.208Z · LW(p) · GW(p)

Interesting; I hadn't previously thought much about the analogy between (macro) economic planning and (micro) goods-and-services-oriented charity, and it probably does deserve some thought.

Still, the analogy isn't exact. If we're talking about basic necessities, things like food and clothes, then the argument seems strong: people's exact needs will differ in ways that aren't easy to predict, and direct distribution of goods will therefore incur inefficiencies that cash transfers won't. I'm pretty sure that GiveWell and its various peers know about these pitfalls, as evidenced by GiveDirectly's consistently high ranking. But I can also think of situations where there are information, infrastructure, or availability problems to overcome -- market defects, in other words -- that cash won't do much for in the medium term, and it's plausible to me that many of the EA community's traditional beneficiaries do work in this space.

As to existential risk... well, that's a completely different approach. To borrow a phrase from GiveWell's blog, existential risk reduction is an extreme charity-as-investment strategy, and there's very little decent analysis covering it. I don't entirely trust MIRI's in-house estimates, but I couldn't point you to anything better, either.

Replies from: None
comment by [deleted] · 2013-12-11T19:01:40.521Z · LW(p) · GW(p)

I'm pretty sure that GiveWell and its various peers know about these pitfalls, as evidenced by GiveDirectly's consistently high ranking.

Well, you just raised my opinion of GiveWell.

comment by Lumifer · 2013-12-11T18:30:39.792Z · LW(p) · GW(p)

I guess it's mostly a terminology thing. I associate "central planning" with things like the USSR and it was jarring to see an offhand reference to EA being centrally planned.

If we redefine things in terms of external management/control vs. just providing resources without strings attached, I don't know if we disagree much.

Replies from: None
comment by [deleted] · 2013-12-11T19:03:37.078Z · LW(p) · GW(p)

In that case, I think I could spend part of the evening hammering out what precisely our differences are, or I could get off LessWrong and do my actual job.

Currently choosing the latter.

comment by fubarobfusco · 2013-12-03T16:45:04.353Z · LW(p) · GW(p)

This seems like a noncentral use of "centrally planned", meaning something like "there exists a highly influential opinion leader" ... or else a noncentral use of "EA", meaning something like "give all your money to GiveWell and let them sort it out".

Replies from: Lumifer
comment by Lumifer · 2013-12-03T17:45:34.365Z · LW(p) · GW(p)

Given that the context is comparison to communism, your explanation doesn't look likely. But I'm sure Eli can explain his meaning if he wants to.

comment by Eugine_Nier · 2013-12-14T06:42:01.119Z · LW(p) · GW(p)

It's centrally planned in the sense that the optimizer behind it is a committee/bureaucracy, as opposed to say a market. Of course, a market in charity tries to optimize warm fuzzies so I don't know of a better solution.

Edit: Or rather the problem is that effective charity is a credence good.

comment by John_Maxwell (John_Maxwell_IV) · 2013-12-02T06:06:47.105Z · LW(p) · GW(p)

Tangential: has there been discussion on LW of the EA implications of having kids? Personally, I would expect that having kids would at least be positive expected utility, since they would inherit a good number of your genes/memes and be more likely than a person randomly chosen from the population to become effective altruists. But the opportunity costs seem really high.

I'm also curious how people feel about increasing fertility among reasonably smart people in general.

Replies from: Adele_L, Pablo_Stafforini
comment by Adele_L · 2013-12-02T06:21:03.714Z · LW(p) · GW(p)

Yes.

comment by Pablo (Pablo_Stafforini) · 2014-03-27T12:02:34.049Z · LW(p) · GW(p)

There are much cheaper and faster ways to increase the total number of EAs than procreation. Suppose some EA donor is considering spending half a million dollars in EA movement-building. Would anyone in his right mind think that the most effective thing this person can do is to pay that sum of money to an EA who wants to have a child but couldn't otherwise afford it? (Having a child costs about USD 0.5 million, according to this source. If you think this estimate is inaccurate, just rerun the argument above with your chosen figure.)

Replies from: John_Maxwell_IV, Lumifer
comment by John_Maxwell (John_Maxwell_IV) · 2014-03-27T21:04:04.581Z · LW(p) · GW(p)

If EAs never procreate then in the long run all of the genes that would cause one to become an EA will get selected out of the population. So yes, right now there is fairly low-hanging fruit in terms of EA movement-building that does not involve having children. But in the long run it's definitely something we want EAs to be doing.

comment by Lumifer · 2014-03-27T15:15:14.525Z · LW(p) · GW(p)

create new EAs

Create? That's interesting terminology.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2014-03-28T11:37:45.092Z · LW(p) · GW(p)

I changed it to

increase the total number of EAs

comment by passive_fist · 2013-12-01T23:54:46.532Z · LW(p) · GW(p)

I think a lot of these criticisms are very valid. Many of them are stuff I had been thinking about but your post does a really good job of explaining them better than I ever could.

I guess I have a somewhat unique take on the whole EA thing, since I'm one of the few (probably) non-white people here. I'd be happy to elaborate if you wish.

Replies from: benkuhn
comment by benkuhn · 2013-12-02T00:07:56.542Z · LW(p) · GW(p)

Yes, I'd love to hear you elaborate. More perspectives are great! (I'd also be happy to talk 1:1 if you'd prefer that to a public forum.)

Replies from: passive_fist
comment by passive_fist · 2013-12-02T00:54:54.964Z · LW(p) · GW(p)

Well, I honestly can't tell if my perspective is because I'm not white, or if it's just because I'm an anomaly. But this paragraph is very worrying to me:

Effective altruists are not very humanistically aware either. EA came out of analytic philosophy and spread from there to math and computer science. As such, they are too hasty to dismiss many arguments as moral-relativist postmodernist fluff, e.g. that effective altruists are promoting cultural imperialism by forcing a Westernized conception of “the good” onto people they’re trying to help.

There seems to be this meme that white people are not aware of other cultures and are blinded to the realities of the world because they are rich and coddled and so on. It seems to be fashionable to point this out at any possible opportunity.

Regardless of the validity of that statement, my concern is that promoting multiculturalism has the danger of backlash, i.e., by encouraging too much tolerance of other cultures, you alienate a large fraction of people (this goes back to your 'community problems' point). I've been seeing this go on over the past decade in the West.

Replies from: Ishaan, Viliam_Bur, benkuhn, John_Maxwell_IV
comment by Ishaan · 2013-12-02T09:22:31.344Z · LW(p) · GW(p)

It has nothing to do with white people - it has to do with cross cultural misunderstandings in general. People just use the word "white" frequently because of certain implicit assumptions about the racial / cultural background of the audience.

Anyway, let me give you an example of when this sort of thing actually happens: In India, there used to be religious figures called Devadasis. They are analogous to nuns in one sense - they get "married" to divinity, and never take a human spouse. Unlike nuns, they are trained in music and dancing. In medieval India, music, dancing, and sexual services were all lumped under the same general category...as in, there was a large overlap between dancers, musicians, and sex workers, and this was widely recognized. (This is not really true today, but if you watch really old Indian movies you can see remnants of this association). We can presume that many of the Devadasis engaged in sex work. It should be noted that they also had a high social status, which allows us to further infer that the sex work probably didn't involve intense coercion and probably wasn't driven by extremely harsh economic pressures.

You can guess where this is going. The actual closest Western analogue to this phenomenon is the "courtesan". However, the West had left courtesans behind in the Renaissance era, and at the time of occupation they were in the Victorian era, and the closest cultural analogue that came to mind was "prostitution", which implies exploitation of women, low social status, etc.

To quote one of Eliezer's stories, "it wasn't prudery. It was a memory of disaster"... well, actually in this case it probably was prudery too... but I'm sure the humanitarian concerns were more salient. The British experience of sex work was negative, and the fact that the devadasi "marriages" were child marriages must have made it even more horrifying.

Of course, despite all the social reforms and laws that would-be humanitarians enacted, Devadasis continued to exist...except now they were primarily prostitutes, low status, criminal, and exploitable...and the whole thing continues to be a horrid affair to this day.

So I'd say the real problem is not the imposition of a Western conception of "good" onto others...in fact, I think humans share the "humanist" values of good and evil across cultures. (Although as far as I can tell, what constitutes conservative / traditional morality does seem to be culturally variable.)

The problem is that without cultural knowledge, you might easily misjudge good and evil because of incomplete information, even when both cultures are using the same basic metrics of good and evil...or you might just pick the wrong way of going about making improvements.

Replies from: benkuhn
comment by benkuhn · 2013-12-02T15:42:56.597Z · LW(p) · GW(p)

Misjudging "the good" was essentially what I think the postmodernist-fluffy critics mean when they raise this objection. Thanks for the example!

comment by Viliam_Bur · 2013-12-02T11:56:20.916Z · LW(p) · GW(p)

Well, I honestly can't tell if my perspective is because I'm not white, or if it's just because I'm an anomaly.

You could also write your perspective without having to worry about why exactly you have it.

I'd appreciate if you wrote more, specifically more details, so I have less to guess about what exactly you wanted to say. And I am curious about it. (In case this is a cultural thing, e.g. you feel it is impolite to express your opinions on a topic unless other people are constantly encouraging you to tell more, then you have my explicit encouragement to express your opinions fully, whether here or anywhere else on LW. And of course that includes the cases where you disagree with me.)

comment by benkuhn · 2013-12-02T08:26:11.818Z · LW(p) · GW(p)

There seems to be this meme that white people are not aware of other cultures and are blinded to the realities of the world because they are rich and coddled and so on. It seems to be fashionable to point this out at any possible opportunity.

My impression is that in this particular community, emphasizing multiculturalism without some obvious instrumental benefit is if anything an anti-applause-light.

my concern is that promoting multiculturalism has the danger of backlash, i.e., by encouraging too much tolerance of other cultures, you alienate a large fraction of people (this goes back to your 'community problems' point). I've been seeing this go on over the past decade in the West.

I'd like to echo John Maxwell IV in asking for examples. Specifically if there's a way you see EA becoming more humanistically aware in a way that is instrumentally useful to the object-level goal of doing good, but harmful to the meta-goal of growing the movement because it alienates a large fraction of their potential audience (and this is worse than their increased capacity to do good). I can't come up with things that seem plausible to me, though this may be my brain being silly again.

comment by John_Maxwell (John_Maxwell_IV) · 2013-12-02T02:35:01.871Z · LW(p) · GW(p)

I've been seeing this go on over the past decade in the West.

Could you give an example or two?

comment by jefftk (jkaufman) · 2013-12-03T02:22:33.845Z · LW(p) · GW(p)

Effective altruists often express surprise that the idea of effective altruism only came about so recently. For instance, my student group recently hosted Elie Hassenfeld for a talk in which he made remarks to that effect, and I’ve heard other people working for EA organizations express the same sentiment. But no one seems to be actually worried about this—just smug that they’ve figured out something that no one else had.

I do think this is worrying, or at least worth looking into. This is part of why I've been looking into the history of earning to give (I, II, III).

comment by Brian_Tomasik · 2013-12-02T21:48:17.730Z · LW(p) · GW(p)

Thanks, Ben. I agree with about half of the points and disagree with the other half. I think some of the claims, e.g., that other people haven't raised these issues, are untrue.

Effective altruists often express surprise that the idea of effective altruism only came about so recently. [...] But no one seems to be actually worried about this—just smug that they’ve figured out something that no one else had.

Honestly, I think this is one of EA's bigger oversights -- not that people haven't noticed that EA is recent, but that people don't realize that it's not recent. The components of EA have each existed for millennia, and movements combining several of them are also ancient. This particular combination may be a little different from past movements, but no more so than past movements have been from each other.

On the whole, I think EAs vastly overestimate their value relative to everyone else in the world who is also doing really important work but doesn't happen to be part of our social circles. I agree that EAs (myself included) would do well to explore more different perspectives on the world beyond the boundaries of their community.

comment by EHeller · 2013-12-07T04:34:40.305Z · LW(p) · GW(p)

I recently ran across Nick Bostrom’s idea of subjecting your strongest beliefs to a hypothetical apostasy in which you try to muster the strongest arguments you can against them.

This is generally known as playing the devil's advocate, and it's an idea that long predates Nick Bostrom.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-12-07T08:05:26.235Z · LW(p) · GW(p)

Playing the devil's advocate is when Alice is arguing for some position, and Bob is arguing against it, even though he does not actually disagree with Alice (perhaps because he wants to help Alice strengthen her arguments, clarify her views, etc.).

Hypothetical apostasy is when Alice plays her own devil's advocate, in essence, with no Bob involved.

Replies from: homunq
comment by homunq · 2013-12-21T13:01:43.148Z · LW(p) · GW(p)

... And that is not a new idea either. "Allow me to play the devil's advocate for a moment" is a thing people say even when they are expressing support before and after that moment.

comment by Davidmanheim · 2013-12-03T17:31:46.301Z · LW(p) · GW(p)

(Edit, later: This is related to the top-level replies by CarlShulman and V_V, but I think it's a more general issue, or at least a more general way of putting the same issues.)

I'm wondering about a different effect: over-quantification and false precision leading to bad choices in optimization, as more effort goes into the charities that look most efficient at maximizing utility.

If we have metrics, and we optimize for them, anything that our metrics distort or exclude will be excluded from our conversation to an exaggerated degree. For instance, if we agree that maximizing human health is important, and use evidence showing that something like fighting disease or hunger has a huge positive effect on human health, we can easily optimize our way into significant population growth, followed by a crash due to later resource constraints or food-production volatility, killing billions. (It is immaterial whether this describes reality; the phenomenon of myopic optimization still stands.)

Given that we advocate optimizing, are we, as rationalists, likely to fall prey to this sort of behavior when we pick metrics? If we don't understand the system more fully, the answer is probably yes; there will always be unanticipated side-effects in incompletely understood systems, by definition, and the more optimized a system becomes, the less stable it is to shocks.

More diversity of investment in lower-priority goals and alternative ideals, meaning less optimization, as currently occurs, seems likely to mitigate these problems.
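
A toy sketch of this failure mode, with entirely made-up numbers (the functions below are illustrative assumptions, not models of any real intervention): the allocation that maximizes a measurable proxy can score worse on a "true" objective that also weights a factor the proxy omits.

```python
# Toy illustration only: a one-dimensional "effort allocation" problem with invented numbers.
def proxy_score(effort_on_metric):
    """The measurable metric (e.g. short-run health gains): rises the harder we push on it."""
    return 10 * effort_on_metric

def true_score(effort_on_metric):
    """Long-run value also depends on a factor the metric ignores (e.g. systemic stability)."""
    stability = 1 - effort_on_metric  # whatever effort is not spent chasing the metric
    return 10 * effort_on_metric * (0.2 + stability)

allocations = [i / 10 for i in range(11)]  # fraction of effort aimed at the metric
best_by_proxy = max(allocations, key=proxy_score)
best_by_truth = max(allocations, key=true_score)
print("proxy-optimal allocation:", best_by_proxy, "-> true score", round(true_score(best_by_proxy), 2))
print("true-optimal allocation: ", best_by_truth, "-> true score", round(true_score(best_by_truth), 2))
```

The point is purely structural: the harder the proxy is optimized, the more the omitted factor dominates the gap between measured and actual value.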

Replies from: homunq
comment by homunq · 2013-12-21T19:29:23.346Z · LW(p) · GW(p)

I think you've done better than CarlShulman and V_V at expressing what I see as the most fundamental problem with EA: the fact that it is biased towards the easily and short-term measurable, while (it seems to me) the most effective interventions are often neither.

In other words: how do you avoid the pathologies of No Child Left Behind, where "reform" becomes synonymous with optimizing to a flawed (and ultimately costly) metric?

The original post touches on this issue, but not at all deeply.

comment by DylanEvans · 2013-12-12T18:31:34.352Z · LW(p) · GW(p)

To my mind, the worst thing about the EA movement is its delusions of grandeur. Both individually and collectively, the EA people I have met display a staggering and quite sickening sense of their own self-importance. They think they are going to change the world, and yet they have almost nothing to show for their efforts except self-congratulatory rhetoric. It would be funny if it weren't so revolting.

Replies from: homunq
comment by homunq · 2013-12-21T19:15:16.091Z · LW(p) · GW(p)

Upvoted because I think this is a real issue, though I'm far from sure whether I'd put it at "worst".

comment by V_V · 2013-12-02T15:00:16.355Z · LW(p) · GW(p)

A few thoughts (disclaimer: I do NOT endorse effective altruism):

  • The main reason most people donate to charities may be to signal status to others, or to "purchase warm fuzzies" (a form of status signalling to one's own ego).
    Effective altruists claim to really care about doing good with their donations, but theirs could be just a form of status signalling targeted at communities where memes such as consequentialism, utilitarianism, and "rationality" are well received, and/or similarly a way to "purchase warm fuzzies" for somebody wishing to maintain a self-image as a utilitarian/"rationalist".
    To this end, effective altruism doesn't have to be actually effective; it could just superficially pretend to be.

  • Effective altruism is based on a form of total utilitarianism, and is thus subject to the standard problems of this moral philosophy:

  • Interpersonal utility comparison: metrics such as the QALY are supposed to be interpersonally comparable proxies for utility, but they are highly debatable.
  • The repugnant conclusion: optimizing for cumulative QALYs may lead to a world where the majority of the population live lives only barely worth living. Note that this isn't merely a theoretical concern: as Carl Shulman pointed out, GiveWell's top-ranked charities might well be already pushing in that direction.
  • Difficulties in distinguishing supererogatory actions from morally required actions, as your example of the person questioning their own desire to have kids displays.

  • Even if you assume that optimizing cumulative QALYs is the proper goal of charitable donation, there are still problems of measurement and of incentives for all the actors involved, much like the problems that plagued Communism:

  • Estimating the expected marginal QALYs per dollar of a charitable donation is difficult. Any method would have to rely on a number of relatively strong assumptions (see the sketch after this list). Charities have an incentive to find and exploit any loophole in the evaluation methods, as per Campbell's law, Goodhart's law, and the Lucas critique.
  • Individual donors can't plausibly estimate the expected marginal QALYs/$ of charities; they have to rely on meta-charities like GiveWell. But how do you estimate the performance of GiveWell? Given that estimation is costly, GiveWell has no incentive to become any better; it actually has an incentive to become worse. Even if the people currently running GiveWell are honest and competent, they might fall victim to greed or self-serving biases that could make them overestimate their own performance, especially since they lack any independent form of evaluation or model to compare with. Or the honest and competent people could be replaced by less honest and less competent people. Or GiveWell as a whole could be driven out of business and replaced by a competitor that spends less on estimation quality and more on PR. The whole industry has a real possibility of becoming a Market for Lemons.
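
To illustrate how much such an estimate leans on its assumptions, here is a toy sketch; every parameter range below is invented for illustration and not taken from any real charity's data.

```python
# Toy cost-effectiveness estimate: all ranges are invented, purely to show assumption-sensitivity.
import random

def cost_per_qaly(cost_per_treatment, qalys_per_treatment, adherence):
    """Dollars per QALY for a hypothetical intervention."""
    return cost_per_treatment / (qalys_per_treatment * adherence)

random.seed(0)
estimates = []
for _ in range(10_000):
    cost = random.uniform(5, 15)           # $ per treatment delivered (assumed range)
    effect = random.uniform(0.005, 0.02)   # QALYs gained per adherent treatment (assumed range)
    adherence = random.uniform(0.4, 0.9)   # fraction of treatments actually used (assumed range)
    estimates.append(cost_per_qaly(cost, effect, adherence))

estimates.sort()
print("median $/QALY:", round(estimates[len(estimates) // 2]))
print("10th-90th percentile:", round(estimates[1000]), "to", round(estimates[9000]))
```

Even in this cartoon model, the 10th-90th percentile spread is several-fold, which is the sense in which a single headline figure hides a lot of modelling choices.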
Replies from: benkuhn, weeatquince, RobbBB
comment by benkuhn · 2013-12-02T15:32:31.979Z · LW(p) · GW(p)

The main reason most people donate to charities may be to signal status to others, or to "purchase warm fuzzies" (a form of status signalling to one's own ego).

Effective altruists claim to really care about doing good with their donations, but theirs could be just a form of status signalling targeted at communities where memes such as consequentialism, utilitarianism, and "rationality" are well received, and/or similarly a way to "purchase warm fuzzies" for somebody wishing to maintain a self-image as a utilitarian/"rationalist".

To this end, effective altruism doesn't have to be actually effective; it could just superficially pretend to be.

Yes, I think there are people for whom this is true. However, the best way to get such people to actually do good is to make "pretending to actually do good" and "actually doing good" equivalently costly, by calling them out when they do the latter (EDIT: former).

I personally want effective altruism to actually do good, not just satisfy people's social desires (though, as Diego points out, this is also important). If the point of the EA movement turns out to be helping people signal to a particular consequentialist set, then my hypothetical apostasy will become an actual apostasy, so I'm still going to list this as a critique.

Individual donors can't plausibly estimate the expected marginal QALYs/$ of charities; they have to rely on meta-charities like GiveWell. But how do you estimate the performance of GiveWell? Given that estimation is costly, GiveWell has no incentive to become any better; it actually has an incentive to become worse. Even if the people currently running GiveWell are honest and competent, they might fall victim to greed or self-serving biases that could make them overestimate their own performance, especially since they lack any independent form of evaluation or model to compare with. Or the honest and competent people could be replaced by less honest and less competent people. Or GiveWell as a whole could be driven out of business and replaced by a competitor that spends less on estimation quality and more on PR. The whole industry has a real possibility of becoming a Market for Lemons.

GiveWell spends a lot of time making estimating their performance easier (nearly everything possible is transparent, "mistakes" tab prominently displayed on the website, etc.). And I know some people take their raw material (conversations, etc.) and come to fairly different conclusions based on different values. GiveWell also solicits external reviews.

I think this is as good of an incentive structure as we're going to get (EDIT: not actually--as Carl Shulman points out, more competitors would be better, but without a lot of extra effort, it's hard to beat). Fundamentally, it seems like anything altruistic we do is going to have to rely on at least a few "heroic" people who are responding to a desire to actually do good rather than social signalling.

Everything else you said, I agree with. Is that the totality of your reasons for not endorsing EA? If not, I'd like to hear the others (by PM if you like).

Replies from: CarlShulman, Vaniver, V_V
comment by CarlShulman · 2013-12-02T19:13:03.433Z · LW(p) · GW(p)

GiveWell spends a lot of time making estimating their performance easier (nearly everything possible is transparent, "mistakes" tab prominently displayed on the website, etc.). And I know some people take their raw material (conversations, etc.) and come to fairly different conclusions based on different values. GiveWell also solicits external reviews.

I think this is as good of an incentive structure as we're going to get

I think it would be better with more competitors in the same space keeping each other honest.

Replies from: benkuhn, Eugine_Nier
comment by benkuhn · 2013-12-02T19:40:33.726Z · LW(p) · GW(p)

Ah, good point. Weakened.

comment by Eugine_Nier · 2013-12-03T01:21:16.624Z · LW(p) · GW(p)

I think it would be better with more competitors in the same space keeping each other honest.

Not necessarily; a lot of competitors might result in competition on providing plausible fuzzies rather than honesty.

comment by Vaniver · 2013-12-02T17:08:17.326Z · LW(p) · GW(p)

However, the best way to get such people to actually do good is to make "pretending to actually do good" and "actually doing good" equivalently costly, by calling them out when they do the latter.

I'm not sure what you mean by the last clause. Do you mean "calling them out when they do the former"? Or do you mean "making the primary way to pretend to actually do good such that it actually does good"?

Replies from: benkuhn
comment by benkuhn · 2013-12-02T19:39:52.598Z · LW(p) · GW(p)

I meant "former". Sorry for the confusion.

comment by V_V · 2013-12-02T17:03:32.560Z · LW(p) · GW(p)

GiveWell spends a lot of time making estimating their performance easier (nearly everything possible is transparent, "mistakes" tab prominently displayed on the website, etc.). And I know some people take their raw material (conversations, etc.) and come to fairly different conclusions based on different values. GiveWell also solicits external reviews.

This is nice to hear. Still, you have to trust them to report their own shortcomings accurately. And if more and more people join EA for status reasons, GiveWell and related organizations may become less incentivized to achieve high performance.

Everything else you said, I agree with. Is that the totality of your reasons for not endorsing EA? If not, I'd like to hear the others (by PM if you like).

Mostly, these are the reasons I can think of. Maybe I could also add that donations to people in impoverished communities might create market distortions with difficult-to-assess results, but I suppose that this could be lumped into the estimation-difficulties category of objections.

comment by weeatquince · 2013-12-02T23:27:40.240Z · LW(p) · GW(p)

Effective altruism is based on a form of total utilitarianism

This is not true (and incidentally is a pet peeve of mine). I know plenty of EAs who are not utilitarian EAs. Most EAs I know would dispute this (at least in conversation on the EA facebook group there appears to be a consensus that EA ≠ utilitarianism).

I am curious as to what makes you (/anyone) think this. Could you enlighten me?

I do NOT endorse effective altruism

This statement also interests me. What do you mean that you do not endorse EA?

  • Are you referring to the idea of applying reason/rationality to doing good?
  • Are you saying that you do not support the movement or the people in it?
  • Do you simply mean that advocating EA just happens to be a thing you have never done?
  • Are you not altruistic/ethical?
Replies from: V_V
comment by V_V · 2013-12-03T16:30:27.970Z · LW(p) · GW(p)

This is not true (and incidentally is a pet peeve of mine). I know plenty of EAs who are not utilitarian EAs. Most EAs I know would dispute this (at least in conversation on the EA facebook group there appears to be a consensus that EA ≠ utilitarianism).

Effective altruism is not the same as utilitarianism, but it is certainly based on it. What else would you call trying to maximize a numeric measure of cumulative good?

What do you mean that you do not endorse EA?

I think I've already responded in the parent comment.

Replies from: weeatquince
comment by weeatquince · 2013-12-07T17:48:38.284Z · LW(p) · GW(p)

Effective altruism is not the same as utilitarianism, but it is certainly based on it. What else would you call trying to maximize a numeric measure of cumulative good?

This is incorrect. Effective altruism is applying rationality to doing good (http://en.wikipedia.org/wiki/Effective_altruism). It is not always maximizing. For example, you could be an EA and not believe you should ever actively cause harm (i.e. you would not kill one person to save five). It does require quantifying things, but only as much as making any other rational decision requires quantifying things.

I think I've already responded in the parent comment.

No you have not. You have expressed criticisms of things EAs do. The OP expressed lots of criticisms too but still actively endorses EA. I ask mainly because I agree with many of your criticisms, but I still actively endorse EA. And I wonder at what point on the path we differ.

Replies from: V_V
comment by V_V · 2013-12-07T20:23:36.685Z · LW(p) · GW(p)

It is not always maximizing. For example, you could be an EA and not believe you should ever actively cause harm (i.e. you would not kill one person to save five). It does require quantifying things, but only as much as making any other rational decision requires quantifying things.

Fair enough. I think it could be said that while the philosophy behind EA is rooted in total utilitarianism, people who practice EA can further constrain it within a deontological moral system. (I suppose that this is true even of people who explicitly proclaim themselves utilitarians.)

No you have not. You have expressed criticisms of things EAs do. The OP expressed lots of criticisms too but still actively endorses EA. I ask mainly because I agree with many of your criticisms, but I still actively endorse EA. And I wonder at what point on the path we differ.

I wonder that too. If you agree with many of my criticisms, why do you still endorse EA?

Replies from: EALE
comment by EALE · 2013-12-09T12:21:28.982Z · LW(p) · GW(p)

The term "EA" is undoubtedly based on a form of total utilitarianism. Whatever the term means today, and whatever Wikipedia says (which, incidentally, weeatquince helped to write, though I can't remember if he wrote the part he is referring to), the motivation behind the creation of the term was the need for a much more palatable and slightly broader term for total utilitarianism.

comment by Rob Bensinger (RobbBB) · 2013-12-02T17:51:56.421Z · LW(p) · GW(p)

(a form of status signalling to one's own ego)

I don't understand what this means. How does one signal to one's 'ego'? What information is being conveyed, and to whom?

Effective altruists claim to really care about doing good with their donations, but theirs could be just a form of status signalling

These could both be true at different explanatory levels. What are we taking to be the site of 'really caring'? The person's conscious desires? The person's conscious volition and decision-making? The person's actions and results?

Difficulties in distinguishing supererogatory actions from morally required actions

What's the import of the distinction? Presumably we should treat actions as obligatory when that makes the world a better place, and as non-obligatory but praiseworthy when that makes the world a better place. Does there need to be a fact of the matter about how mad morality will be at you if you don't help people?

Take two otherwise as-identical-as-possible worlds, and make everything obligatory in one world, nothing obligatory in the other. What physical or psychological change distinguishes the two?

Replies from: V_V, TheAncientGeek
comment by V_V · 2013-12-02T18:07:52.488Z · LW(p) · GW(p)

I don't understand what this means. How does one signal to one's 'ego'? What information is being conveyed, and to whom?

I'm talking about self-deception, essentially. A perfectly rational agent would not be able to do that, but people aren't perfectly rational agents; they are capable of self-deception, sometimes deliberate and sometimes unconscious. Wishful thinking and Confirmation bias are instances of this.

These could both be true at different explanatory levels. What are we taking to be the site of 'really caring'? The person's conscious desires? The person's conscious volition and decision-making? The person's actions and results?

Consider Revealed preferences. Are someone's actions more consistent with their stated goals or with status seeking and signalling?

What's the import of the distinction? Presumably we should treat actions as obligatory when that makes the world a better place, and as non-obligatory but praiseworthy when that makes the world a better place.

I'm not sure I can follow you here. This looks like circular reasoning.

Replies from: benkuhn
comment by benkuhn · 2013-12-02T20:50:23.846Z · LW(p) · GW(p)

I'm not sure I can follow you here. This looks like circular reasoning.

I'm not sure what RobbBB meant, but something like this, perhaps:

Utilitarianism doesn't have fundamental concepts of "obligatory" or "supererogatory", only "more good" and "less good". A utilitarian saying "X is obligatory but Y is supererogatory" unpacks to "I'm going to be more annoyed at you/moralize more/cooperate with you less if you fail to do X than if you fail to do Y". A utilitarian can pick a strategy for which things to get annoyed/moralize/be uncooperative about according to which strategy maximizes utility.

comment by TheAncientGeek · 2013-12-02T18:00:19.161Z · LW(p) · GW(p)

How does one signal to one's 'ego'? What information is being conveyed, and to whom?

Praising and blaming oneself seems a ubiquitous feature of life to me...but then I am starting from an observation, not from a theory of how egos work.

comment by Ishaan · 2013-12-02T08:35:45.753Z · LW(p) · GW(p)

Philosophical difficulties

The main insight of the EA movement is to pick some criterion and go with it (rather than the "warm fuzzies" heuristic that most people use).

What criteria you use is up to you and your preferences.

Poor cause choices

or marketable cause choices. Uncontroversial cause choices. The act of giving a recommendation is also outcome-focused...you have to think about what percentage of your audience will actually be moved to act as a result of your announcement. Effective Altruism for a meta-charity also means Effective Advertising, which means picking a cause that everyone is convinced about, and not just a small splinter of your target audience.

Non-obviousness

Consolidating an old idea under a catchy brand, using the momentum of the internet or the advantage of being the most recent innovator, does not mean you've invented the idea. EAs stand on the shoulders of many people who have tried to do the same thing before them. Notions such as "QALYs", for example, have been around for 50 years.

Inconsistent attitude towards rigor

People don't usually base life choices like that off of pure altruism. Personal preferences are not entirely altruistic.

Poor psychological understanding

Fourth way: My preference function includes altruistic behavior, but is not entirely altruistic. Morality is only a subset of my preferences. Engaging in Effective Altruism is a way to maximize some of my preferences (the morality related ones, specifically) but that's just a sub-goal of a larger attempt to maximize all of my preferences.

...and I think I pretty much agree with all your other points

comment by Desrtopa · 2013-12-02T04:19:22.823Z · LW(p) · GW(p)

The “market” for ideas is at least somewhat efficient: most simple, obvious and correct things get thought of fairly quickly after it’s possible to think them.

This may be tautological depending on how you define your terms (if people don't think of an idea quickly after it's possible to do so, it wasn't obvious).

If it's defined in such a way that it could possibly be false, of course, it very much begs for further evidence.

Replies from: benkuhn
comment by benkuhn · 2013-12-02T08:28:19.711Z · LW(p) · GW(p)

The most compelling example I'm familiar with is the large number of mathematical breakthroughs that are made nearly simultaneously by multiple people. See e.g. here:

Merton believed that it is multiple discoveries, rather than unique ones, that represent the common pattern in science.

Replies from: Desrtopa
comment by Desrtopa · 2013-12-02T16:56:14.606Z · LW(p) · GW(p)

This is certainly sometimes the case. On the other hand, while it often happens in heavily competitive fields where people all around the world are employed to do full-time research, it may be rather different when it comes to producing ideas or answering questions where nobody is being employed to address them at all, and anyone who does so is doing it on their own initiative.

comment by joaolkf · 2013-12-02T03:05:37.407Z · LW(p) · GW(p)

One of the plausible ways EA could be worsening rather than improving resource allocation is if, in fact, resources do more good in the very rich countries than in the very poor ones. I do not believe the EA movement has properly assessed this question. More often than not, it is simply assumed as evident that resources are better in the hands of those who lack them, which would make sense under an egalitarian ethics, but not under utilitarianism. I know there are texts, articles, etc. on this question, but I do not think they are nearly enough given how important the question is.

Prima facie, the most extreme cases of moving resources from the rich to the poor (i.e. from the USA or UK to Africa) seem right. Nevertheless, analyzing this same question for less extreme cases could reveal some problems with this prior, seemingly evident assumption. For if it were the case that resources would create more utility in, say, the UK rather than in Brazil, then it might be the case that spending resources to bring African countries closer to Brazil is counterproductive. Consider these worlds, which could be created with the same amount of resources:

A: Most African countries become as developed as South Africa.
B: All developed countries become similar to the Scandinavian countries on average, or probably even more developed.

Now consider these actions:

P: spending $x of resources on developed countries
Q: spending $x on developing (but not miserable) ones

If P is better than Q (which does not seem implausible at all), then it would seem the long-term net benefit of creating world B (modulo ceiling effects on development) is far greater than that of creating A. In other words, and as an example: since resources are much better spent in the UK than in South Africa, they might also be better spent in the UK than in Niger, even if this is not true in the short term.

(Note: I wrote parts of this in a discussion about improving resource allocation, but I believe it is much better placed here.)

Replies from: CarlShulman
comment by CarlShulman · 2013-12-02T05:00:00.830Z · LW(p) · GW(p)

I think the standard long-run argument here would be that it's much cheaper to influence conditions in the very poor countries: a given chunk of altruistic resources will be larger relative to poor countries and smaller relative to rich countries, and the cost difference could outweigh a difference in desirability of the outcomes purchased.

And, of course, in the short term the marginal utility of income is many times higher in poor countries.

comment by diegocaleiro · 2013-12-02T07:37:53.573Z · LW(p) · GW(p)

I have written a lengthy response that deals with only one of the points in the critique above, the suggestion that, as a whole, the Effective Altruist movement is pretending to really try, here: http://lesswrong.com/r/discussion/lw/j8v/in_praise_of_tribes_that_pretend_to_try/

My main argument is that pretending to try is quite likely *a good thing*, in the grand scheme of EA.

Disclaimer: I support the EA movement.

comment by TsviBT · 2013-12-01T23:07:42.179Z · LW(p) · GW(p)

"I only regret that I have but one upvote to give for this post."

comment by Philip_W · 2014-02-12T22:07:07.522Z · LW(p) · GW(p)

Concerning historical analogues: From what I understand about their behaviour, it seems like the Rotary Club pattern-matches some of the ideas of Effective Altruism, specifically the earning-to-give and community-building aspects. They have a million members who give on average over $100/yr to charities picked out by Rotary Club International or local groups. This means that in the past decade, their movement has collected one billion dollars towards the elimination of polio. Some noticeable differences include:

  1. I can't find any mention of Rotary spending on charity effectiveness research.^1
  2. They have a relatively monolithic structure. The polio-elimination charity was founded by Rotarians, charitable goals are suggested to Rotary and picked by Rotarians, etc.
  3. Relatively low expectations for people within the group. Rotarians tend to be first-world upper or middle class, so $100 is likely to be closer to 0.1% of their income than the 10% commonly prescribed by EA.
  4. Relatively high barrier to entry. To become a Rotarian, you have to be asked by a Rotarian, and you have to be vetted. Any old fool can call themselves an effective altruist and nobody will challenge them on it.
  5. Allegedly, nepotism. Rotarians allegedly form a network and are willing to give other Rotarians preferential treatment and/or employment. I've heard some earning-to-give effective altruists speak of evolving to do the same thing, but we currently don't have the network.
  6. They started as businessmen; EA started with philosophers and students. That gives us a significant disadvantage when combined with (5), because we aren't capable of helping each other or funding significant endeavours, and we won't be for some time.

(1) Note that for each year by which Rotary advanced the annihilation of polio, they saved 1,000 lives and improved 200,000 more. Highballing at 100,000 life-equivalents saved, that would put them at $10,000 per life saved. That's a factor of 3-4 worse than GiveWell charities, though I'm not confident the current "skimming the margins" tactic would work when you've got a billion dollars to distribute.

comment by Evan_Gaensbauer · 2015-01-09T10:09:12.766Z · LW(p) · GW(p)

I wish to write a one-year retrospective and/or report on this post. I'll contact Ben Kuhn to run this idea by him, to see if he would be interested. If he's too busy, as I expect he might be, I at least hope to seek his blessing to extend the mission of critiquing effective altruism.

There are also other critiques, like this one. Additionally, there have been counters and defenses in response to this post over the previous year. Further, there has been more discussion of these ideas in venues specific to effective altruism, e.g., on the effective altruism forum. In my retrospective report and assessment of progress, I will take into account points from relevant and well-argued responses to this article, vis-a-vis the specific criticisms.

I don't believe I'm necessarily the best person to write this. However, I want to see it done, nobody else has tried, and I'm eager to collaborate with others. I'll write a rough outline and draft for this retrospective assessment. Comment below, or privately message me, if you'd be interested in reading it, and I will give you access to the Google Doc. This comment is the first action taken toward writing this assessment, so I expect it will take me at least a week to generate a first draft.

Replies from: riceissa, Richard_Kennaway
comment by riceissa · 2016-07-25T19:38:39.183Z · LW(p) · GW(p)

Hi Evan, did you ever write this post?

comment by Richard_Kennaway · 2016-01-27T13:54:08.322Z · LW(p) · GW(p)

I wish to write a one-year retrospective and/or report on this post.

Ping!

comment by EALE · 2013-12-09T13:18:05.016Z · LW(p) · GW(p)

"for a community that purports to put stock in rationality and self-improvement, effective altruists have shown surprisingly little interest in self-modification to have more altruistic intentions. This seems obviously worthy of further work." I would love to see more work done on this. However, I understand "wanting to have more altruistic intentions" as part of a broader class of "wanting to act according to my ultimate/rational/long-term desires rather than my immediate desires", and this doesn't seem niche enough for members of our community to make good progress on (I hope I am wrong), although CFAR's work on Propagating Urges is a start (I just found that particular session relatively useless).

I'd also love to see more work done on historical analogues and more attention given to "Diffusion of Innovations" (h/t Jonas Muller).

On non-obviousness, the arc of history seems to me to bend somewhat towards EA, and it is unsurprising that a society's moral circle would expand and allow more demanding obligations as its own circumstances become more cushy and its awareness of and power over the wider world increase. In other words, we've only just reached the point in history where a large group of people are sufficiently well-off, informed, and powerful to be able to (perhaps even need to...think Maslow's Hierarchy of Needs) think about morality on such a massive scale, and EA is pretty superfluous until we need to think about morality on such a huge scale. (I would love to hear some thoughts/research on this last paragraph as I was considering developing it into a blog post.)

comment by Benjamin_Todd · 2013-12-03T20:00:55.609Z · LW(p) · GW(p)

Hi Ben,

Thanks for the post. I think this is an important discussion, though I'm also sympathetic to Nick's comment that a significant amount of extra self-reflection is not the most important thing for EA's success.

I just wanted to flag that I think there are attempts to deal with some of these issues, and explain why I think some of these issues are not a problem.

Philosophical difficulties

Effective altruism was founded by philosophers, so I think there's enough effort going into this, including population ethics. (See Nick's comment)

Poor cause choices

There's a lot being done on this front:

  • GiveWell is running Labs, and Holden has said he expects to find better donation opportunities in the next few years outside of global health
  • CEA is an advocate of further cause prioritisation research, and is about to hire Owen Cotton-Barratt, to work full-time on it.
  • 80k is about to release a list of recommended causes, which will not have global health at the top.

Non-obviousness

I think the more useful framing of this problem is 'what's the competitive advantage that has let us come up with these ideas rather than anyone else?' I think more work on this question would be useful. This also deals with the efficient markets problem. If you don't have an answer to this question, I agree you should be worried.

I've thought about it in the context of 80k, and have some ideas (unfortunately I haven't had time to write about them publicly). I now think the bigger priority is just to try out 80k and see how well it works. More generally, we try to take our disagreements with elite common sense very seriously.

I don't think recency is a problem. It seems reasonable that EA could only develop after we had things like the internet, good quality trial data of different interventions, and Singer's pond argument (which required a certain level of global inequality and globalization), which are all relatively recent.

Inconsistent attitude toward rigor

I think this is mainly because people use the best analysis that's out there, and the best analysis for charity is currently much more in-depth than it is for these other issues. We're trying to make progress on the other issues at 80k and CEA.

Poor psychological understanding

My impression is that people at CEA have worried about these problems quite a bit. At 80k, we try to work on this problem by highlighting members who are really trying rather than rationalising what they want, which we hope will encourage good norms. We'll also consider calling people out, but it can be a delicate issue!

Monoculture

I'm worried about this, but it's difficult to change. All we can do is try to make an active effort to reach out to new groups.

Community problems

I don't see the decline in quality of the FB group as a problem. EA was started by some of the smartest, most well meaning people I have ever met. It's going to be almost impossible to avoid a decline in quality of discussion as the circle is widened.

I'll also push back against equating the community with the FB group. There are efforts by other EA groups to build better venues for the community e.g. the EA Summit by Leverage. We don't even need a good FB group so long as there are other ways for people to form projects (e.g. speak to 80k's careers coaches) and get good information (read GiveWell's research).

Replies from: benkuhn
comment by benkuhn · 2013-12-05T01:17:17.937Z · LW(p) · GW(p)

Hi Ben,

Thanks for responding. I've responded to points below.

Poor cause choices

There's a lot being done on this front:

  • GiveWell is running Labs, and Holden has said he expects to find better donation opportunities in the next few years outside of global health
  • CEA is an advocate of further cause prioritisation research, and is about to hire Owen Cotton-Barratt, to work full-time on it.
  • 80k is about to release a list of recommended causes, which will not have global health at the top.

The point of this argument wasn't that organizations aren't working on it. In fact, the existence of this research strengthens my point, which was that people are donating now anyway, despite the fact that it looks like we know very little, and that the attitude towards giving now vs. later seems to be "well, there's a good case for either one" rather than "we really need to figure this out, because we may be pouring money down the drain". That is evidence that people stop thinking at the level of "doesn't obviously conflict with EA principles".

Inconsistent attitude toward rigor

I think this is mainly because people use the best analysis that's out there, and the best analysis for charity is currently much more in-depth than it is for these other issues. We're trying to make progress on the other issues at 80k and CEA.

Again, the issue isn't that nobody is trying to solve these; it's that most people are way more worried about the charity-analysis issue than about ancillary issues that are just as important. If our knowledge of, e.g., the cost-effectiveness of global health interventions were as limited as our knowledge elsewhere, would people be donating to global health charities? I doubt it.

Poor psychological understanding

My impression is that people at CEA have worried about these problems quite a bit. At 80k, we try to work on this problem by highlighting members who are really trying rather than rationalising what they want, which we hope will encourage good norms. We'll also consider calling people out, but it can be a delicate issue!

I've been following 80k and have not noticed this phenomenon. Can you give some examples?

Monoculture

I'm worried about this, but it's difficult to change. All we can do is try to make an active effort to reach out to new groups.

This is definitely not all we can do (unless you take a tautologically broad interpretation of "make an active effort to reach out"). For instance, if a substantial fraction of effective altruists were raging sexists, it would be wise to fix our group norms before going "hey women! there's this thing called effective altruism!"

Even supposing it is all we can do, is there anything we're actually doing about it?

EA was started by some of the smartest, most well meaning people I have ever met. It's going to be almost impossible to avoid a decline in quality of discussion as the circle is widened.

The point of the critique was not to list easily avoidable problems, but to list bad problems. If a decline in the quality of people is inevitable, then we had better find some solutions to the problems it brings (e.g. epistemic inertia), or the decline of EA is inevitable too.

Replies from: Benjamin_Todd
comment by Benjamin_Todd · 2013-12-05T14:59:46.694Z · LW(p) · GW(p)

Read the responses to "poor cause choices" and "inconsistent attitude toward rigor" as: "while some EAs might be donating without enough thought, lots of others are investing most of their resources in doing more research."

How to fix the monoculture problem is something we often think about at 80k. We haven't come up with great solutions yet, though.

I also argued that the decline in the FB group is not obviously important. And if it's difficult to avoid, but many movements started by a small group of smart people nevertheless go on to achieve a lot, that's also evidence that it's not important.

comment by Josh You (scrafty) · 2013-12-03T01:50:22.731Z · LW(p) · GW(p)

Another possible critique is that the philosophical arguments for ethical egoism are (I think) at least fairly plausible. The extent to which this is a critique of EA is debatable (since people within the movement state that it's compatible with non-utilitarian ethical theories and that it appeals to people who want to donate for self-interested reasons), but it's something that merits consideration.