Against EA PR

post by ozymandias · 2017-09-21T01:23:00.502Z · LW · GW · 6 comments

Scott Alexander recently wrote about weird effective altruism. Many people (mostly, but not entirely, people who aren't effective altruists) offered the opinion that weird effective altruists should be banned from EA, or at least shouldn't be allowed to give talks at EA Global or have blog posts written about them. Weird effective altruist causes are (sort of by definition) off-putting to most people; therefore, if you want people to donate to global poverty relief, you should kick out all of the people concerned about farmed animal welfare/AI risk/wild animal welfare/psychedelics research/suffering in fundamental physics, lest we scare the normies.

There are many reasonable critiques of this point of view, including that it's not remotely clear that any of those claims are more frightening to normal people than "it is morally obligatory to make personal sacrifices in order to help poor, faraway black people." But ultimately I reject the entire premise.

I'd like to be clear about what I'm not saying in this post. I am not saying all "weird effective altruism" causes are effective; I believe some are and some aren't. I think many effective altruists are not taking seriously enough the difficulty of figuring out how effective highly speculative causes are, and that unless we seriously address this we're going to waste potentially millions of dollars on boondoggles. And I suspect a lot of weird effective altruism tends to over-explore certain cause areas (for example, things you think of if you read too many science fiction novels) and under-explore other cause areas (for example, boring things). I don't intend this post to be a whole-hearted defense of weird effective altruism, but simply a criticism of a single narrow argument too often wielded against it.

So the question arises: why is effective altruism a thing at all?

Most people care about charity effectiveness, at least a little bit. They look up their charities on Charity Navigator before donating; they object to money being spent on big CEO salaries or on overhead instead of on services; they circulate criticisms of the Susan G. Komen Foundation and PETA. And yet not only do most social programs not work; for the vast majority of programs we simply haven't collected the information to see whether they work or not. This isn't a "no one cares about starving Africans" thing; the state of the evidence on warm-and-fuzzy American medical and educational interventions is equally poor.

Part of the problem is that while people care about effectiveness some, they don't care about effectiveness that much. They are willing to google a charity to see whether it is an outright scam, but they're not willing to read academic papers to see if the charity's intervention works. They're definitely not going to put in the time to separate intuitive but misleading measures of effectiveness (CEO pay) from actually good measures of effectiveness (randomized controlled trials).

The other part of the problem is that all charity advertisements are a hellhole of epistemic doom and despair.

Let's pick on Feeding America. Not because it's an unusually bad charity (it's not), but because it's large and typical.

Looking at their webpage, I immediately find out that 1 in 8 Americans struggles with hunger. That sounds awful! After clicking through several pages, I find that the source is this document, in which 1 in 8 households (not individuals) are food insecure. You can click through to the document to read the full operationalization of "food insecure" (it's on pages 3-4). Food-insecure households include, for instance, a household that sometimes worries about whether it will run out of food, feeds its children only a few kinds of low-cost food to avoid running out, and sometimes can't afford to eat balanced meals. While this household is obviously experiencing a good deal of suffering, and Feeding America can help it, it's not exactly what the average person thinks of when they hear the word "hunger." This is actively misleading.
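
To see why the household-versus-individual distinction matters, here's a minimal sketch with invented numbers (the real figures are in the USDA document linked above, not here):

```python
# Toy numbers only: illustrating why "1 in 8 households" need not equal
# "1 in 8 individuals". Every figure below is invented, not USDA data.
total_households = 1000
food_insecure_households = 125                # exactly 1 in 8 households

avg_size_insecure = 2.2                       # assumed household sizes
avg_size_secure = 2.6

insecure_people = food_insecure_households * avg_size_insecure
secure_people = (total_households - food_insecure_households) * avg_size_secure
share = insecure_people / (insecure_people + secure_people)

print(f"Share of individuals in food-insecure households: {share:.1%}")
# Under these assumptions: ~10.8%, closer to 1 in 9 than 1 in 8.
```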

I click through to Our Work, where I learn that Feeding America provided four billion meals last year. What percentage of people who would otherwise have gone hungry did they feed? 10%? 50%? 99%? How many of their meals went to people who would otherwise have gone hungry, versus people who could have found some other way to get enough to eat? Feeding America provides no insight into these important questions.
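
For a sense of scale, here's a back-of-envelope sketch. Only the four-billion-meals figure comes from Feeding America; everything else is an assumption I'm making up for illustration:

```python
# Back-of-envelope sketch. The four billion meals figure is Feeding
# America's; everything else is an assumption made for illustration.
meals_provided = 4_000_000_000
meals_per_person_year = 3 * 365               # assume three meals a day

person_years_of_food = meals_provided / meals_per_person_year
print(f"{person_years_of_food:,.0f} person-years of food")  # ~3.65 million

# Placeholder only: suppose 40 million people are food insecure.
food_insecure_people = 40_000_000
coverage = person_years_of_food / food_insecure_people
print(f"Coverage if every meal reached the target population: {coverage:.1%}")
# ~9.1% at best, and this still says nothing about the counterfactual:
# how many recipients would otherwise have gone hungry.
```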

"98% of all donations raised go directly to helping people in need": according to Charity Navigator, this refers to program expenses, with 1.1% of their income being spent on fundraising and 0.3% on administrative expenses. Would increasing the percentage spent on fundraising allow them to help more people by raising more money? Would increased administrative expenses, say, reduce the amount of food waste by hiring someone to improve their distribution practices? We simply don't have enough information to know.
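
To see why a higher fundraising share could be good rather than bad, consider a sketch with invented numbers, including a made-up 5x return on fundraising spending:

```python
# Invented numbers: a charity raising $100M at its current 1.1%
# fundraising share, versus a hypothetical 5% share.
budget = 100_000_000
admin_share = 0.003

for fundraising_share in (0.011, 0.05):
    # Assume (hypothetically) each fundraising dollar raises $5 more.
    extra_raised = budget * fundraising_share * 5
    program_dollars = (budget + extra_raised) * (1 - fundraising_share - admin_share)
    print(f"fundraising {fundraising_share:.1%}: ${program_dollars / 1e6:.1f}M to programs")

# Output: $104.0M vs $118.4M. Under the assumed 5x return, the "worse"
# overhead ratio sends more total dollars to programs; whether that holds
# in reality depends entirely on the true return, which we don't know.
```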

In short, Feeding America is misleading about the scope of the problem they're dealing with and does not provide the necessary information to assess their effectiveness in dealing with it.

Again, I am not picking on Feeding America because it is bad. The reason charity is a total epistemic hellhole is that all charities are like this. The Against Malaria Foundation, a beloved effective altruist charity, explains on its homepage that 100% of donations go to buy nets (because presumably, in a perfect world, AMF employees would not need to earn a salary to pay for such luxuries as "homes" and "food") and entirely omits the fact that most nets will not actually prevent any cases of malaria.
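
To unpack that last claim: cases averted per net is a fraction well below one, so simple arithmetic says most individual nets avert no cases. A sketch with placeholder numbers, not AMF's actual figures:

```python
# Placeholder figures, not AMF's actual numbers: unpacking the claim
# that most individual nets never avert a case of malaria.
nets_distributed = 1_000_000
cases_averted = 150_000        # hypothetical: 0.15 cases averted per net

share_averting = min(cases_averted / nets_distributed, 1.0)
print(f"At most {share_averting:.0%} of nets avert a case; "
      f"at least {1 - share_averting:.0%} avert none.")
# The aggregate intervention can still be extremely cost-effective;
# the point is that "100% of donations buy nets" tells you nothing
# about how many cases those nets actually prevent.
```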

Of course, I'm being unfair here. The purpose of a charity's website is not to tell the complete and unvarnished truth; it's to get people to donate. How many people have actually read a GiveWell charity report all the way through without their eyes glazing over by the time they get to "Niger, Burundi, Malawi, and Liberia Prevalence and Intensity Studies"? If the charity's website presented a proper cost-effectiveness assessment rather than a bunch of oversimplified bullet points, everyone would get bored, decide to catch up on Game of Thrones instead, and no meals or mosquito nets would be bought at all.

And the harm here seems pretty small. So maybe "100% of public donations go to buy nets" means "we got some people to allocate money towards paying our employees instead of towards nets because you're an idiot who thinks nonprofit employees can survive on nothing more than the satisfaction of doing good." So maybe "struggles with hunger" means "at least one member of the family has missed one meal in the past year due to not having enough money and also the children do not eat enough vegetables" instead of "is hungry most of the time." It's not like they're outright lying, and it's for a good cause. Would you rather people spend that money on a new pair of shoes instead?

But the fact of the matter is that the Red Cross makes the same calculation about disaster relief, and the American Cancer Society makes the same calculation about cancer treatment, and the Smithsonian makes the same calculation about preserving priceless historical artifacts. And that means that it's extraordinarily difficult to figure out really basic questions about charities you might want to donate to, like:

  • How much does the problem the charity is trying to solve affect people's lives?

  • How many people does the problem the charity is trying to solve affect?

  • Does this charity actually help with the problem it is trying to solve?

  • If I donate to this charity, will the money go to really important programs that have a big effect on people's lives, or do they already have enough money for all of that, so that my donation would go to something that doesn't actually do much good?

  • Is this charity better than other charities I might donate to?

Which is the reason effective altruism is possible at all.

As far as I'm aware, effective altruist charity evaluators are the only people trying to answer these sorts of questions for the general public (although presumably some big foundations, like the Gates Foundation, are trying to answer them for themselves). This is our thing. This is the value we add over a Salvation Army bell-ringer who happens to have some fliers for Idealist.

I don't care about effective altruists' personal honesty. Lie to your parents about your dating life, shade the truth on your resume, compliment your friend's hat that vaguely resembles a dead opossum, whatever. Hell, if you're working for a top charity that isn't explicitly effective-altruism-branded, do the epistemic hellhole thing. Everyone else is, and you might as well try to grab some of the charity budget for things that actually work.

But when you are speaking as an effective altruist: don't get complicated, don't get clever. Just say what you think the best cause area or charity or career is. Every time you think to yourself, "well, I think AI risk is more important, but it'll turn people off, so I should probably say the Against Malaria Foundation," the effective altruism movement takes one more step towards being the same as any other group of charitably minded nerds.

I go pretty far on this. A lot of introductory effective altruism material uses global poverty examples, even articles written by people who, I know perfectly fucking well, only donate to MIRI. I think people should generally either use examples from the cause they actually think is most effective, or use an equal number of existential risk, animal welfare, and global poverty examples, in order to reflect the disagreement in the effective altruist community.

I'm not saying you should pay literally zero attention to public relations. There are lots of things you can do to be more persuasive that don't involve misleading people. You can show people pictures of sad animals or happy African children. You can wear professional clothes offline or write with proper grammar online. You can be kind and respectful and try to see things from other people's points of view. But you must abjure all attempts to persuade people by doing anything other than giving them your best assessment of all the evidence, including all the nuance and all the caveats, even if it might turn them off.

6 comments

Comments sorted by top scores.

comment by multiarmedmindset · 2017-09-21T18:04:52.384Z · LW(p) · GW(p)

A year ago, when I was a normal EA, I found it useful to keep pointing out the ways we could help people, and how easily. I could also play up the kind of concern for effectiveness you describe here. I didn't change a whole lot of minds, but I could see people updating, and friends would definitely be more likely to care about a charity's impact after these conversations.

Then I became a weird EA and shortly thereafter tried to convince people of the risk from AI. I could see the inferential distance staring me in the face. It's easy to see why people want to keep talking about global poverty. I think there's a way to rescue this, though. I now talk about the principles of "actually trying to do the most good." Then, when giving examples, I use curing blindness in the third world vs. training seeing-eye dogs. I think this is still quite honest. I do think curing blindness is more important than 1/50th of a seeing-eye dog. I will also give my true opinions if asked.

I'm curious to know if you think this satisfies your goal of integrity.

comment by aNeopuritan · 2017-09-23T17:17:36.351Z · LW(p) · GW(p)

I was aware of disagreement between the priorities of existential risk and global poverty, but are there now people who consider animal welfare more important than both of the others? How many?

Replies from: bwest
comment by bwest · 2017-09-24T17:22:25.180Z · LW(p) · GW(p)

Approximately 10% of respondents to the EA survey said animal welfare was the most important cause: http://effective-altruism.com/ea/1e5/ea_survey_2017_series_cause_area_preferences/

comment by bwest · 2017-09-24T17:18:56.109Z · LW(p) · GW(p)

Thanks for the interesting post!

It seems to me that there are two types of simplification:

  1. Simplification for pedagogical purposes (e.g. "imagine this as a point mass moving on a frictionless plane…")

  2. Simplification for no good reason (e.g. "balls rolling on a surface will eventually stop because of their natural motion")

I agree that overhead ratios are a simplification of the second form: they are scarcely simpler than metrics like QALY/dollar, yet much less informative.
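
To make that concrete, here's a toy calculation (all numbers invented):

```python
# Toy comparison, all numbers invented: two charities where the overhead
# ratio and the cost-effectiveness metric point in opposite directions.
charities = {
    # name:      (program share, dollars per QALY of its program)
    "Charity A": (0.98, 5000),
    "Charity B": (0.80, 50),
}
for name, (program_share, dollars_per_qaly) in charities.items():
    qalys_per_dollar = program_share / dollars_per_qaly
    print(f"{name}: overhead {1 - program_share:.0%}, "
          f"{qalys_per_dollar:.5f} QALYs per donated dollar")
# A's 2% overhead looks better, but B buys roughly 80x the QALYs per
# donated dollar. The two metrics are similarly simple to state; only
# one tracks what donors actually care about.
```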

I disagree, though, that QALY/dollar is pointless simplification. It is definitely the case that we need to consider flow-through effects, how our decisions affect the unborn, etc., but for both practical and pedagogical reasons we might say something like "let us suppose that we only care about human beings living right now." This seems very analogous to a physicist talking about point masses or an economist talking about perfect competition.

I'm curious if you disagree that these simplifications are useful? Or do you just think we should do a better job of calling out that they are simplifications?

comment by AndHisHorse · 2017-09-23T16:17:30.593Z · LW(p) · GW(p)

I find myself more likely to trust political entities (i.e., public entities which I am reasonably confident will tell the technical truth most of the time, but which I do not greatly trust to give a comprehensive account of the facts or events) when they make statements which clearly do not benefit their public image (e.g., when an entity supports a proposition unpopular among its target audience). Does this seem like a relatively common view? Is it likely that EA's continuing to be weird will prevent it from losing trust (even as it costs it common ground)?

Replies from: Zvi
comment by Zvi · 2017-09-24T19:38:15.674Z · LW(p) · GW(p)

There is a benefit to having a reputation for not trying too hard to maximize one's reputation, especially a reputation for telling the truth and/or standing up for what you believe in even against your interests. This benefit works for everyone, but especially for those trying to be trusted on technical matters.

How much having that reputation for the thing correlates with actually doing the thing is unclear. There are certainly important cases where stating your true beliefs hurts your truth-telling reputation.

(Note: I strongly endorse not letting these considerations alter our behavior much, regardless of the answers to these questions.)