On Leif Wenar's Absurdly Unconvincing Critique Of Effective Altruism
post by omnizoid · 2024-04-04T19:01:00.332Z · LW · GW · 2 comments
Crossposted here, on my blog.
Leif Wenar recently published a critique of effective altruism that seems to be getting a lot of hype. I don’t know why. There were a few different arguments in the piece: some terrible and others even worse. Stranger still, he doesn’t object much to EA as a whole—he just points to random downsides of EA and is snarky. If I accepted every claim in his piece, I’d come away with the belief that some EA charities are bad in a bunch of random ways, but with nothing that imperils my core belief in the goodness of the effective altruism movement or, indeed, in the charities that Wenar critiques.
I’m not going to quote Wenar’s entire article, as it’s quite long and mostly irrelevant. It contains, at various points, bizarre evidence-free speculation about the motivations of effective altruists. He writes, for instance, “Ord, it seemed, wanted to be the hero—the hero by being smart—just as I had. Behind his glazed eyes, the hero is thinking, “They’re trying to stop me.””
I’m sure this is rooted in Ord’s poor relationship with his mother!
At another point, he mistakes MacAskill’s statement that there’s been a lot of aid in poor countries and that things have gotten better for the claim that aid is responsible for the entirety of the improvement. These strange status games about credit and reward and heroism demonstrate a surprising moral shallowness, caring more about whether people take credit for doing things than what is done. He says, for instance, after quoting MacAskill saying it’s possible to save a life for a few thousand dollars:
But let’s picture that person you’ve supposedly rescued from death in MacAskill’s account—say it’s a young Malawian boy. Do you really deserve all the credit for “saving his life”? Didn’t the people who first developed the bed nets also “make a difference” in preventing his malaria?
Well, as a philosopher, Wenar should know that two things can both cause something else. If there’s a 9-judge panel evaluating an issue, and one side wins on a 5-4, each judge caused the victory, in the relevant, counterfactual sense—had they not acted, the victory wouldn’t have occurred. MacAskill wasn't talking about apportioning blame or brownie points—just describing one’s opportunity to do enormous amounts of good. Would Wenar object to the claim that it would be important to vote if you knew your candidate would be better and that your vote would change the election, on the grounds that you don’t deserve all the credit for it—other voters get some too?
Wenar’s article also repeats the old objection that Sam Bankman-Fried used EA principles to commit fraud, so EA must be bad, ignoring, of course, the myriad responses that have been given to this objection. Alex Strasser has addressed this at length [EA · GW], as have I (albeit at less length than Strasser). Pointing out that people have committed fraud in the name of EA is no more an objection to EA than it would be an objection to some charity to note that it happened to receive funds from Al Capone. Obviously one should not commit fraud and should take common-sense norms seriously, as EA leaders have implored repeatedly [EA · GW] for years.
The article takes random stabs at specific claims that have been made by EAs. Yet strangely, despite the obvious cherry-picking, where Wenar is attempting to target the most errant claims ever made by EAs, every one of his objections to those random out-of-context quotes ends up being wrong. For instance, he claims that MacAskill’s source for the claim that by “giving $3,000 to a lobbying group called Clean Air Task Force (CATF),” “you can reduce carbon emissions by a massive 3,000 metric tons per year,” is “one of Ord’s research assistants—a recent PhD with no obvious experience in climate, energy, or policy—who wrote a report on climate charities.” Apparently writing a nearly 500-page report on existential risks from climate change, in close collaboration with climate change researchers, and a 174-page report about climate charities doesn’t give one any “obvious experience in climate, energy, or policy.”
The article contains almost every objection anyone has given to EA, each with its own associated hyperlink, each misleadingly phrased. Most of them are just links to pieces describing downsides of some type of aid, claiming that EAs have never considered these downsides when often they’ve considered them quite explicitly. It exhibits a thin veneer of deep wisdom, making claims like “aid was much more complex than “pills improve lives.”” Well, pills either do or don’t improve lives, and if they do, that seems good and worth knowing about! Now, maybe other things improve lives more, in which case we should do those things instead—but then you’re comparing costs and benefits, which is pretty much just what EAs do with aid.
At other points, Wenar obviously misunderstands what EAs are claiming. For instance, he quotes MacAskill saying “I want to be clear on what [“altruism”] means. As I use the term, altruism simply means improving the lives of others,” before saying:
No competent philosopher could have written that sentence. Their flesh would have melted off and the bones dissolved before their fingers hit the keyboard. What “altruism” really means, of course, is acting on a selfless concern for the well-being of others—the why and the how are part of the concept. But for MacAskill, a totally selfish person could be an “altruist” if they improve others’ lives without meaning to. Even Sweeney Todd could be an altruist by MacAskill’s definition, as he improves the lives of the many Londoners who love his meat pies, made from the Londoners he’s killed.
No competent reader or philosopher could have written that paragraph. If one reads the surrounding context, it’s obvious that MacAskill is not intending to do a conceptual analysis of the word altruism—he’s describing the way he uses it when he talks about effective altruism. MacAskill says:
As the phrase suggests, effective altruism has two parts, and I want to be clear on what each part means. As I use the term, altruism simply means improving the lives of others. Many people believe that altruism should denote sacrifice, but if you can do good while maintaining a comfortable life for yourself, that’s a bonus, and I’m very happy to call that altruism. The second part is effectiveness, by which I mean doing the most good with whatever resources you have.
Here, MacAskill is clearly not trying to define exactly what the term means in general—a famously difficult task for any word. He’s just explaining what effective altruism is about: doing good well. That’s what he’s advising people to do. One could figure this out by, for example, looking at the title of MacAskill’s book—Doing Good Better—or reading the surrounding context.
A lot of the article is like this—Wenar getting confused about some point and then claiming that the person who made it is an idiot or a liar or a fraud.
Much of the rest of the article, however, consists of just listing random downsides of some aid charities, claiming falsely that these downsides aren’t taken into account by effective altruists. I’m reminded of Scott Alexander’s piece steelmanning hitting oneself with a baseball bat for five hours:
“It’s a great way to increase your pain tolerance so that the little things in life don’t bother you as much.”
“It builds character!”
“Every hour you’re hitting yourself on the head with a bat is an hour you’re not out on the street, doing drugs and committing crime.”
“It increases the demand for bats, which stimulates the lumber industry, which means we’ll have surplus lumber available in case of a disaster.”
“It improves strength and hand-eye coordination.”
“It may not literally drive out demons, but it’s a powerful social reminder of our shared commitment for demons to be driven out.”
“It’s one of the few things that everyone, rich or poor, black or white, man or woman, all do together, which means it crosses boundaries and builds a shared identity.”
“It binds us to our forefathers, who hit their own heads with bats eight hours a day.”
“If we stopped forcing everyone to do it, better-informed rich people would probably be the first to abandon the practice. And then they would have fewer concussions than poor people, which would promote inequality.”
“It creates jobs for bat-makers, bat-sellers, and the overseers who watch us to make sure we bang for a full eight hours.”
“Sometimes people collapse of exhaustion after only six hours, and that’s the first sign that they have a serious disease, and then they’re able to get diagnosed and treated. If we didn’t make them bang bats into their heads for eight hours, it would take much longer to catch their condition.”
“Chesterton’s fence!”
Finding random downsides to things is easy. What distinguishes serious people raising serious critiques—you know, the people who work day in and day out weighing up the costs and benefits of aid, writing detailed reports that Wenar lies about—from unserious hacks is that they actually look in detail at comparisons of the costs and benefits, rather than going on Google Scholar, finding a few hyperlinks for downsides to certain aid programs, and declaring the serious researchers who spend their time analyzing these things errant. Wenar says, for instance:
In a subsection of GiveWell’s analysis of the charity, you’ll find reports of armed men attacking locations where the vaccination money is kept—including one report of a bandit who killed two people and kidnapped two children while looking for the charity’s money. You might think that GiveWell would immediately insist on independent investigations into how often those kinds of incidents happen. Yet even the deaths it already knows about appear nowhere in its calculations on the effects of the charity.
But we only have a report of this happening once. Demanding a detailed statistical investigation on the basis of a single incident is a bit like declaring, after one bank robbery, that before supporting banks one should do a full cost-benefit analysis of banking. This is not serious—it’s just throwing up uncertainty so that those who don’t want to give can have a veneer of plausible deniability.
Wenar lists a lot of random downsides to aid. It’s true that there’s disagreement about the net effect of aid. But the well-targeted aid done by EA organizations generates virtually no controversy among serious scholars. As Karnofsky notes, “We believe that the most prominent people known as “aid critics” do not give significant arguments against the sorts of activities our top charities focus on.”
Take, for instance, his claim that “Studies find that when charities hire health workers away from their government jobs, this can increase infant mortality.” But the evidence that GiveWell relies on comes from high-quality randomized controlled trials. It’s easy to point to random downsides to something—the question is whether the upsides outweigh them. And we know they do, based on the randomized controlled trials gathered by GiveWell, which look at a wide variety of aggregate outcomes. The study Wenar cites is totally general—it just notes that aid programs sometimes hire workers who could provide other services, and that this might be bad.
And these downsides aren’t enough to undermine the generally positive effect of aid. As Tarp and Mekasha write, in a detailed meta-analysis of the impact of aid on economic growth:
The new and updated results show that the earlier reported positive evidence of aid’s impact is robust to the inclusion of more recent studies and this holds for different time horizons as well. The authenticity of the observed effect is also confirmed by results from funnel plots, regression-based tests, and a cumulative meta-analysis for publication bias.
Now, growth isn’t everything, but it’s a decent indicator of how well things are going. And as one of my professors noted, when one compares the harms of aid to the benefits of, for instance, smallpox eradication, the harms are nearly undetectable. There is debate about whether aid at the margin does more harm than good, but its total effect is clearly positive. As MacAskill notes:
Indeed, even those regarded as aid sceptics are very positive about global health. Here’s a quote from Angus Deaton, from the same book that Temkin relies so heavily on:
Health campaigns, known as “vertical health programs,” have been effective in saving millions of lives. Other vertical initiatives include the successful campaign to eliminate smallpox throughout the world; the campaign against river blindness jointly mounted by the World Bank, the Carter Center, WHO, and Merck; and the ongoing—but as yet incomplete—attempt to eliminate polio (Deaton 2013, pp. 104–5).
Wenar elsewhere says “aid coming into a poor country can increase deadly attacks by armed insurgents.” The study behind this claim is hilariously unconvincing—it describes a few attacks in the Philippines that occurred because “insurgents try to sabotage the program because its success would weaken their support in the population.” In other words, insurgents in the Philippines occasionally targeted aid programs because the programs were so effective that they feared they’d weaken their base of popular support. So that’s why it’s bad to give people antimalarial bednets that demonstrably save lives.
Wenar elsewhere says “GiveWell has said nothing even as more and more scientific studies have been published on the possible harms of bed nets used for fishing.” But GiveWell has looked into this and concluded the claims are unconvincing. The reason they’re not concerned is that it’s not a huge problem. As Piper writes, in an article titled “Bednets are one of our best tools against malaria — but myths about their misuse threaten to obscure that”:
But here’s the thing: The math on bednet effectiveness takes such uses into account. Studies that groups like GiveWell rely upon are conducted by distributing malaria nets and then measuring the resulting fall in mortality rates, so those mortality figures don’t assume perfect use.
Additionally, malaria distribution organizations like the Against Malaria Foundation survey households to make sure nets are still being used. They don’t just ask people whether the nets are in use — people might lie — but go in and check. They’ve found that 80 percent to 90 percent of nets are used as intended, hanging over beds, half a year after first deployment. This isn’t surprising, as people are highly motivated not to die of malaria and won’t put nets to secondary uses lightly.
Bednets would work even better if no one was ever desperate enough to use them for fishing, but no estimates of their effectiveness assume such perfect use. Our figures for the effectiveness of bednets all reflect their effectiveness under real-world conditions.
There’s not much evidence that unapproved uses are doing harm
What about harm to fisheries from people fishing with nets? Researchers have only recently started looking into this. No one has measured detrimental effects yet, though they could emerge later.
…
The insecticide in anti-malarial bednets also does not have negative effects on humans, because the dosages involved are so low. It’s unclear whether there are any harmful effects from fishing with nets. (And, it’s worth noting, there is one oft-forgotten positive effect from the use of bednets for fishing: People are fed.)
Dylan Matthews adds, in an article debunking a similar claim made by Marc Andreessen:
That mosquito nets are dangerous to people would be news to basically any public health professional who’s ever studied them. A systematic review by the Cochrane Collaboration, probably the most respected reviewer of evidence on medical issues, found that across five different randomized studies, insecticide-treated nets reduce child mortality from all causes by 17 percent, and save 5.6 lives for every 1,000 children protected by nets. That implies that the 282 million nets distributed in 2022 alone saved about 1.58 million lives. In one year.
…
Bednets and fishing nets
Andreessen’s objection is rooted in something that’s been true of bednets for decades: sometimes, people use them as fishing nets instead.
This has occasionally popped up as an objection to bednet programs, notably in a 2015 New York Times article. One related argument is that the diversion of nets toward fishing means they’re not as effective an anti-malaria program as they initially appear.
That’s simply a misunderstanding of how the research on bednets works. The scientists who study these programs, and the charities that operate them, are well aware that some share of people who get the nets don’t use them for their intended purpose.
The Against Malaria Foundation, for instance, a charity that funds net distribution in poor countries, conducts extensive “post-distribution monitoring,” sending surveyors into villages that get the nets and having them count up the nets they find hanging in people’s houses, compared to the number previously distributed. When conducted six to 11 months after distribution, they find that about 68 percent of nets are hanging up as they’re supposed to; the percent gradually falls over the years, and by the third year the nets have lost much of their effectiveness.
So does this mean that bednets are only 68 percent as effective as previously estimated? No. Studies of bednet programs do not assume full takeup, because that would be a dumb thing to assume. Instead, they evaluate programs where some villages or households randomly get free bednets, and compare outcomes (like mortality or malaria cases) between the treated people who got the nets and untreated people who didn’t.
For instance, take a 2003 paper evaluating a randomized trial of net distribution in Kenya (this was one of the papers included in the Cochrane review). The researchers’ own surveys show that about 66 percent of nets were used as intended. The researchers did not exclude the one-third of households not using the nets from the study. Instead, they simply compared death rates and other metrics in the villages randomized to receive nets to those metrics in villages randomized to not get them. That comparison already bakes in the fact that a third of households who received the nets weren’t using them.
So estimates like “bednets reduce child mortality by 17 percent” are already assuming that not everybody is using the nets as intended. This just isn’t a problem for the impact estimates.
But is it a problem for fisheries? Andreessen cites one recent article to make this case. It’s not clear to me he actually read it.
The authors start by acknowledging that bednets have saved millions of lives, and even that the use of nets for fishing makes sense for many people. It’s a free way to get food you need to survive in regions often reliant on subsistence farming. Moreover, the authors note that “The worldwide collapse of tropical inland freshwater fisheries is well documented and occurred before the scale-up of ITNs.” At worst, you can accuse nets of making an existing problem worse.
The bigger question the authors raise is that insecticides are toxic. That’s, of course, the point: They’re meant to kill mosquitoes. The question, then, is whether they are toxic to fish or humans when used for fishing. The authors’ conclusion is maybe, but we have no research indicating one way or another. “To our knowledge there is currently a complete lack of data to assess the potential risks associated with pyrethroid insecticide leaching from ITNs,” the authors conclude. They are not sure if the amount leaching from nets is enough to be toxic to fish; they’re not fully sure that the insecticide leaches into the water at all, though they suspect it does. Even less clear is how these insecticides might affect humans who then eat fish that might be exposed to them.
I could keep going through the piece, claim by claim, refuting the false claims—for instance, that GiveWell has no data supporting deworming—though GiveWell has already done that. But Wenar’s piece isn’t really about that—he doesn’t really care to defend, in any detail, any of the specific harms. They’re not what his argument is about—they’re just things he plucked from Google Scholar after five minutes of searching. His broad point is just that there are downsides EA hasn’t considered, a claim that’s easier to support when you ignore that EA studies are built to take downsides into account, and the many examples of EAs explicitly considering them.
Everything has downsides. The world is about tradeoffs. For every speculative second-order downside to bednets, there are speculative second-order upsides from hundreds fewer children dying daily. Wenar’s piece is a recipe for complacency, for us throwing up our hands and saying “the world is complicated, nothing to see here.” He seems to think we should have an explicit bias against aid, writing:
Call the first the “dearest test.” When you have some big call to make, sit down with a person very dear to you—a parent, partner, child, or friend—and look them in the eyes. Say that you’re making a decision that will affect the lives of many people, to the point that some strangers might be hurt. Say that you believe that the lives of these strangers are just as valuable as anyone else’s. Then tell your dearest, “I believe in my decisions, enough that I’d still make them even if one of the people who could be hurt was you.”
Perhaps Wenar should have applied the “dearest test” before writing the article. He should have looked in the eyes of his loved ones, standing in for the extra people who might die as a result of people opposing giving aid to effective charities, and said, “I believe in my decisions, enough that I’d still make them even if one of the people who could be hurt was you.”
I agree you should apply this test, but only if you’re also willing to look the person in the eye if you don’t act, and say, “I believe in my decision not to act, so that if you were a starving child, or a child who might get malaria, I’d do nothing and watch you die.” If you’re going to make people feel extremely distraught about potential risks, they should feel equally distraught about lost benefits, about the kids who die because of western apathy.
Making people imagine that the potential victims are their families would make them less likely to act. Most people wouldn’t donate if the beneficiaries were random strangers and the only people who could be harmed would be their close families. So Wenar’s approach is an excuse for complacency—for not acting, for regarding the possible speculative harms of aid to be far more salient than the demonstrable lives saved. As Richard Chappell [EA · GW] says:
The overwhelming thrust of Wenar's article -- from the opening jab about asking EAs "how many people they’ve killed", to the conditional I bolded above -- seems to be to frame charitable giving as a morally risky endeavor, in contrast to the implicit safety of just doing nothing and letting people die.
I think that's a terrible frame. It's philosophically mistaken: letting people die from preventable causes is not a morally safe or innocent alternative (as is precisely the central lesson of Singer's famous article). And it seems practically dangerous to publicly promote this bad moral frame, as he is doing here. The most predictable consequence is to discourage people from doing "riskily good" things like giving to charity. Since he seems to grant that aid is overall good and admirable, it seems like by his own lights he should regard his own article as harmful. It's weird.
This is, I think, the entire point of Wenar’s article. He wants to make it so that every time you consider doing aid, you panic a little bit, even if it’s been vetted extensively, even if there have been a hundred randomized control trials showing how great the intervention is. He wants you not to act because of potential downsides, or at least to very seriously consider not doing it, no matter how good the evidence is for its effectiveness, because there might be downsides. That’s a terrible view. When children are dying and we have high-quality evidence that we can avert their death, pointing to random speculative, second-order harms is not enough to justify inaction in the face of avertable suffering and high-quality data.
Acting may be risky, but not acting is much riskier. The mountain of child corpses, of children who coughed till their throats were raw and ran fevers of 105 degrees, is a moral emergency that demands action. Effective altruists are doing something about it—saving as many lives annually as would be saved by stopping AIDS, all gun violence, and melanoma, or by preventing a 9/11 every year. Not doing anything because there are risks involved is just assenting to status quo bias, where poor children die because no one cares enough to do anything. If you’re going to regard acting as morally risky, you should regard it as similarly risky to do nothing while children die by the millions.
comment by localdeity · 2024-04-04T23:40:38.708Z · LW(p) · GW(p)
A friend of mine mentioned the article, and here's what I wrote.
Some of it seems pretty unfair. The early anecdote about charity work in Bali seems to be used to criticize the EA ethos, when "rich westerners flying to poor countries to do manual labor (and possibly post on social media about how virtuous they're being)" is the classic case of something EAs consider to be an ineffective and wasteful charity. (Though perhaps EAs might not go so far as to expect it to be net harmful.) I don't think most EAs would agree to the "bet the Earth on a 51% chance" scheme. As the author says, "SBF consistently made terrible choices" even according to SBF's own goals, so I don't think one can point to his bad outcome as evidence that his goals were bad, except perhaps via a psychological argument that having his grand goals and being a powerful player led him to think "good-for-me-now and good-for-everyone-always started to merge into one" (and subsequent self-serving bias), which would be a general argument against having grand goals and being a powerful player.
But the thing of "lots of aid money ends up in the hands of corrupt middlemen and oppressive rulers and might make the whole thing net negative" is a good point; the specific things that went wrong are good to know about; and the thing of GiveWell not taking seriously and honestly the harm caused by the aid (which, given how they operate, would indeed mean publicly writing up calculations) is a very good point. They should take "tracking the negative consequences" into their routine practice; e.g. there exists N such that >$N being given to a cause justifies having a person go and investigate what's happening.
I hadn't checked any of the specific claims about GiveWell's charities going wrong, or about what they have or haven't written about the downsides; I basically took the author's word on that.
comment by FlorianH (florian-habermacher) · 2024-04-04T22:30:40.714Z · LW(p) · GW(p)
The irony in Wenar's piece is: in all he does, he just outs himself as... an EA himself :-). He clearly thinks it's important to think through net impact and to do the things that have great overall impact. Sad he caricatures the existing EA ecosystem in such an uncompelling and disrespectful way.
Fully agree with your take of him being "absurdly" unconvincing here. I guess nothing is too blatant to be printed in this world, as long as the writer makes bold & enraging enough claims on a popular scapegoat and has a Prof title from a famous uni.
I can only imagine (or hope) that the traction the article got, which you mention (though I have not seen it myself), is mainly limited to the usual suspects for whom EA, quasi by definition, is simply all stupid, if not outright evil.