REVISED: A drowning child is hard to find
post by Benquo · 2020-01-31T18:07:05.348Z · LW · GW · 35 comments
This is a link post for http://benjaminrosshoffman.com/drowning-children-rare/
Substantial revisions to clarify the post's core claim, including but not limited to this summary at the end:
Summary
- Effective Altruism claims that there is a large funding gap for cheap well-understood developing-world interventions.
- Even the most aggressive plausible construal of this claim implies an annual funding gap that could be covered completely by existing major institutional donors.
- If this is true, it implies opportunities for comparatively cheap experiments (relative to the endowments of major donors in the space) with extremely high information value.
- Such experiments have not happened either because they are impossible, or because the relevant institutional donors think they have better things to do with their money.
- Neither scenario suggests that small donors should try to fill this funding gap. If they trust big donors, they should just give to the big donors. If they don't, why should they believe a story clearly meant to extract money from them?
Original linkpost [LW · GW]
35 comments
Comments sorted by top scores.
comment by Scott Alexander (Yvain) · 2020-02-01T02:01:12.755Z · LW(p) · GW(p)
I'm confused by your math.
You say 10M people die per year of preventable diseases, and the marginal cost of saving a life is (presumed to be) $5K.
The Gates Foundation and OpenPhil combined have about $50B. So if marginal cost = average cost, their money combined is enough to save everyone for one year.
But marginal cost certainly doesn't equal average cost; average cost is probably orders of magnitude higher. Also, Gates and OpenPhil might want to do something other than prevent all diseases for one year, then leave the world to rot after that.
I agree a "grand experiment" would be neat. But are you sure it's this easy? Suppose we want to try eliminating malaria in Madagascar (chosen because it's an island so it seems like an especially good test case). It has 6K malaria deaths yearly, so if we use the 5K per life number, that should cost $30 million. But given the marginal vs. average consideration, the real number should probably be much higher, maybe $50K per person. Now the price tag is $300M/year. But that's still an abstraction. AFAIK OpenPhil doesn't directly employ any epidemiologists, aid workers, or Africans. So who do you pay the $300M to? Is there some charity that is willing to move all their operations to Madagascar and concentrate entirely on that one island for a few years? Do the people who work at that charity speak Malagasay? Do they have families who might want to live somewhere other than Madagascar? Do they already have competent scientists who can measure their data well? If not, can you hire enough good scientists, at scale, to measure an entire country's worth of data? Are there scientists willing to switch to doing that for enough money? Do you have somebody working for you who can find them and convince them to join your cause? Is the Madagascar government going to let thousands of foreign aid workers descend on them and use them as a test case? Does OpenPhil employ someone who can talk with the Madagascar government and ask them? Does that person speak Malagasay? If the experiment goes terribly, does that mean we're bad at treating malaria, or that we were bad at transferring our entire malaria-treating apparatus to Madagascar and scaling it up by orders of magnitude on short notice? What if it went badly because there are swamps in Madagascar that the local environmental board won't let anyone clear, and there's nothing at all like that in most malarial countries? I feel like just saying "run a grand experiment" ignores all of these considerations. I agree there's *some* amount of money that lets you hire/train/bribe everyone you need to make this happen, but by that point maybe this experiment costs $1B/year, which is the kind of money that even OpenPhil and Gates need to worry about. My best guess is that they're both boggled by the amount of work it would take to make something like this happen.
(I think there was something like a grand experiment to eliminate malaria on the island of Zanzibar, and it mostly worked, with transmission rates down 94%, but it involved a lot of things other than bednets because it turned out most of the difficulty involved battering down the problems that remain after you pick the low-hanging fruit. I don't know if anyone has tried to learn anything from this.)
I'm not sure it's fair to say that if these numbers are accurate then charities "are hoarding money at the price of millions of preventable deaths". After all, that's basically true of any possible number. If lives cost $500,000 to save, then Gates would still be "hoarding money" if he didn't spend his $50 billion saving 100,000 people. Gates certainly isn't optimizing for saving exactly as many people as he can right now. So either there's no such person as Bill Gates and we're just being bamboozled into believing that there is, or Gates is trying to do things other than simultaneously throwing all of his money at the shortest-term cause possible without any infrastructure to receive it.
I think the EA movement already tries really hard to push the message that it's mostly talent-constrained and not funding-constrained, and it already tries really hard to convince people to donate to smaller causes where they might have an information advantage. But the estimate that you can save a life for $5000 remains probably true (with normal caveats about uncertainty) and is a really important message to get people thinking about ethics and how they want to contribute.
Replies from: Zack_M_Davis, Benquo, pktechgirl, Benquo, gjm
↑ comment by Zack_M_Davis · 2020-02-01T08:07:04.620Z · LW(p) · GW(p)
But the estimate that you can save a life for $5000 remains probably true (with normal caveats about uncertainty) and is a really important message to get people thinking about ethics and how they want to contribute.
I mean, the $5K estimate is at least plausible. (I certainly don't know how to come up with a better estimate than the people at GiveWell, who I have every reason to believe are very smart and hard-working and well-intentioned.)
But I'm a little worried that by not being loud enough with the caveats, the EA movement's "discourse algorithm" (the collective generalization of "cognitive algorithm" [LW · GW]) might be accidentally running a distributed motte-and-bailey, where the bailey is "You are literally responsible for the death of another human being if you don't donate $5000" and the motte is "The $5000 estimate is plausible, and it's a really important message to get people thinking about ethics and how they want to contribute."
$5K is at least a nontrivial amount of money even for upper-middle–class people in rich countries. It takes more than 12 days at my dayjob for me to acquire that much money—it would be many more days for someone not lucky enough to have a cushy San Francisco software engineer dayjob. When I spend twelve days of my life paying for something for me or my friends, I expect to receive the thing I paid for: if I don't get it, I'm going to seek recourse from the seller. If, when challenged on not delivering the goods, the seller retreats to, "Well, that price was just an estimate, and the estimate was probably true as far as I knew at the time—and besides, it was a really important message to get you thinking about the value of my product," I would be pretty upset!
To be sure, there are significant disanalogies between buying a product and donating to charity, but insofar as those disanalogies lead to charities being much less constrained to actually accomplish the thing they claim to than businesses are (because all criticism can be deflected with, "But we're trying really hard and it's an important message"), that's not a point in favor of encouraging scrupulous idealists to pledge their lives to the top-rated charities rather than trying to optimize the local environment that they can actually get empirical feedback about.
To be clear, the picture I'm painting is an incredibly gloomy one. On the spherical-cow Econ 101 view of the world, altruists should just be able to straightforwardly turn money into utilons. Could our civilization's information-processing institutions [LW · GW] really be that broken, that inadequate, for even that not to be true? Really?!
I can't claim to know. Not for certain.
You'll have to think it through for yourself.
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2020-02-01T08:12:13.175Z · LW(p) · GW(p)
I'm a little worried that by not being loud enough with the caveats, the EA movement's "discourse algorithm" (the collective generalization of "cognitive algorithm" [LW · GW]) might be accidentally running a distributed motte-and-bailey, where the bailey is "You are literally responsible for the death of another human being if you don't donate $5000" and the motte is "The $5000 estimate is plausible, and it's a really important message to get people thinking about ethics and how they want to contribute."
I initially wrote a comment engaging with this, since I thought that was one of the primary things Ben was trying to talk about in the post, but then Oli persuaded me Ben was just arguing that the cost-effectiveness estimates were false / a lie, so I removed the comment. I'd appreciate an explicit comment on how much this is one of the primary things Ben is trying to say with the essay.
↑ comment by Benquo · 2020-02-02T15:52:54.601Z · LW(p) · GW(p)
On the object level, I agree that such interventions can't scale at stated levels of marginal cost effectiveness. That's actually one of the main points I wanted to communicate ("such experiments ... are impossible"), so while I'm glad you get it, I'm a bit frustrated that you're thinking of it as a counterargument. It seems really, REALLY difficult to communicate a disjunctive argument - rather than an object-level claim - as primary content.
Where I think we disagree is that I think that in practice it's extremely common for EAs to elide the distinction between average and marginal cost, and to imply that if Good Ventures were to fully fund everything near this level of cost-effectiveness, there's a realistic prospect of Good Ventures running out of money in the next couple decades. This is not true, at least because - as you point out - there are limits to how much anyone can scale up programmatic interventions.
Replies from: Yvain
↑ comment by Scott Alexander (Yvain) · 2020-02-02T20:28:39.113Z · LW(p) · GW(p)
I don't hear EAs doing this (except when quoting this post), so maybe that was the source of my confusion.
I agree Good Ventures could saturate the $5000/life tier, bringing marginal cost up to $10000 per life (or whatever). But then we'd be having this same discussion about saving lives for $10000/life. So it seems like either:
1. Good Ventures donates all of its money, tomorrow, to stopping these diseases right now, and ends up driving the marginal cost of saving a life to some higher number and having no money left for other causes or the future, or
2. Good Ventures spends some of its money on stopping diseases, helps drive the marginal cost of saving a life up to some number N, but keeps money for other causes and the future, and for more complicated reasons like not wanting to take over charities, even though it could spend the remaining money on short-term disease-curing at $N/life.
(1) seems dumb. (2) seems like what it's doing now, at N = $5000 (with usual caveats).
It still seems accurate to say that you or I, if we wanted to, could currently donate $5000 (with usual caveats) and save a life. It also seems correct to say, once you've convinced people of this surprising fact, that they can probably do even better by taking that money/energy and devoting it to causes other than immediate-life-saving, the same way Good Ventures is.
I agree that if someone said "since saving one life costs $5000, and there are 10M people threatened by these diseases in the world, EA can save every life for $50B", they would be wrong. Is your concern only that someone is saying this? If so, it seems like we don't disagree, though I would be interested in seeing you link such a claim being made by anyone except the occasional confused newbie.
I'm kind of concerned about this because I feel like I've heard people reference your post as proving that EA is fraudulent and we need to throw it out and replace it with something nondeceptive (no, I hypocritically can't link this, it's mostly been in personal conversations), but I can't figure out how to interpret your argument as anything other than "if people worked really hard to misinterpret certain claims, then joined them together in an unlikely way, it's possible a few of them could end up confused in a way that doesn't really affect the bigger picture."
Replies from: Benquo
↑ comment by Benquo · 2020-02-09T16:26:51.982Z · LW(p) · GW(p)
How many lives do you think can be saved for between $5k and $10k? The smaller the number, the more "~$5k per life saved" looks like an impact certificate you're buying from Good Ventures at a price assessed by GiveWell, rather than a serious claim that for an extra $5k you can cause a life to be saved through the intervention you funded.
The larger the number, the more the marginal cost looks like the average costs for large numbers of lives saved (and therefore the "why don't they do an experiment at scale?" argument holds).
Claims that you can make the world different in well-specified ways through giving (e.g. more lives saved by the intervention you funded) imply the latter scenario, and substantively conflict with the former one.
Do you disagree with this model? If so, how?
↑ comment by Elizabeth (pktechgirl) · 2020-02-01T02:28:50.472Z · LW(p) · GW(p)
But marginal cost certainly doesn't equal average cost; average cost is probably orders of magnitude higher.
I believe this is Ben's point: That CEA and GiveWell disingenuously imply that the average price is low when their actions imply they don't believe this.
Replies from: habryka4, Benquo
↑ comment by habryka (habryka4) · 2020-02-01T04:02:11.459Z · LW(p) · GW(p)
I don't think Ben is implying that CEA and GiveWell are claiming that the average price is low. Here is what I understand to be his argument:
- What you actually mean by marginal price is something like "the price I would have to pay to cause a marginal life to be saved, right now"
- GiveWell and the Gates Foundation have already pledged billions of dollars towards saving marginal lives with the most cost-effective interventions
- This means that, if I am trying to understand how much of a difference a counterfactual additional dollar would make, the relevant question is "what difference would my money make, after GiveWell and the Gates Foundation have spent their already pledged $50B+ on saving marginal lives with the most cost-effective intervention"
- He then argues that the world does not look like it actually has $50B of life-saving opportunities for $5k a piece lying around
- As such, as an independent donor trying to assess the marginal cost of saving a life, I should estimate it as much higher than $5000, since we should expect the marginal cost of saving a life to go up with investment, and we already have $50B of investment into this area (a toy illustration of this is sketched in the code after this comment)
- Maybe GiveWell and the Gates Foundation state that they have done some weird things to commit themselves to not take some of the opportunities for saving lives at $5k a piece, but he argues that both (I am least clear on this part of the argument, both in my understanding of Benquo, and in my understanding of what the correct game theory here is):
- Doing so is pretty similar to extortion and you should ignore it
- They are most likely lying about that, and have shown in the past that they will just fund opportunities at that level, and their overall messaging sure seems to communicate that they will take those opportunities
I think Ben is straightforwardly arguing that the marginal cost of saving a life, taking into account some basic game theory and economics, must be much higher than $5k.
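As a toy illustration of that summary (a sketch only; the cost curve and its parameters are hypothetical assumptions introduced here, not figures from habryka or GiveWell):

```python
# Toy illustration of the diminishing-returns point: if the cheapest life-saving
# opportunities are used up first, the cost faced by a donor giving *after* tens of
# billions in committed funding can be far above the headline $5k figure.
def marginal_cost_per_life(cumulative_funding_billions: float) -> float:
    """Hypothetical curve: $5k at zero funding, doubling for every $10B already spent."""
    return 5_000 * 2 ** (cumulative_funding_billions / 10)

print(marginal_cost_per_life(0))    # 5000.0   -- the headline figure, before the big funders act
print(marginal_cost_per_life(50))   # 160000.0 -- what a donor "behind" $50B of pledges would face
```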
Replies from: Benito
↑ comment by Ben Pace (Benito) · 2020-02-01T05:36:13.535Z · LW(p) · GW(p)
He then argues that the world does not look like it actually has $50B of life-saving opportunities for $5k a piece lying around
This point seems like it's doing a lot of the work, and I'm honestly uncertain. I can imagine it going either way - for example I can imagine the average life saved being very cheap when you're taking advantage of things at scale.
So it seems like a crux whether the Gates Foundation's cost-effectiveness is comparable to $5k per life saved (GiveWell's suggested cost-effectiveness estimate). If it seems much higher, then something is going wrong. Oli and I looked into two cases; here's some partial work:
- Gates Foundation spent $1.3 billion on malaria in 2012. For that to beat GiveWell's cost-effectiveness estimates, it would have to beat $5k per life saved, which would mean saving 260k people. This is not obviously implausible, given that ~500k people died of malaria that year. Overall this would mean they'd have to have reduced malaria by around 30%, which seems massive but not implausible. (A rough version of this arithmetic is sketched in the code after this list.)
- Measles has stayed roughly constant since 2010, at around 300-400k deaths per year. It seems like Gates might have put 100s of millions in, which means that for Gates' spending to match GiveWell's cost-effectiveness estimates, measles cases would have had to counterfactually double in that time period, or something like this, which seems somewhat unlikely.
- However, I think that Gates was trying to 'kill off' measles, which has large returns in the long term, so it's not obvious they shouldn't spend a lot of money on high variance bets to maximise coverage of measles vaccines.
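A rough reconstruction of the malaria arithmetic above, using the comment's approximate figures (reading the ~30% as a reduction relative to counterfactual deaths is an assumption made here to make the numbers line up):

```python
# All figures are the comment's own approximations, not authoritative data.
gates_malaria_spend_2012 = 1.3e9          # ~$1.3B spent on malaria in 2012
benchmark_cost_per_life = 5_000           # GiveWell-style figure
observed_malaria_deaths_2012 = 500_000    # ~500k deaths that year

lives_needed_to_match = gates_malaria_spend_2012 / benchmark_cost_per_life
print(lives_needed_to_match)              # 260000.0

# If those 260k lives were in fact saved, counterfactual deaths would have been ~760k,
# so the spending would have cut malaria deaths by roughly a third.
counterfactual_deaths = observed_malaria_deaths_2012 + lives_needed_to_match
print(lives_needed_to_match / counterfactual_deaths)   # ~0.34
```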
↑ comment by Thrasymachus · 2020-02-01T08:32:59.868Z · LW(p) · GW(p)
The malaria story has fair face validity if one observes the wider time series (e.g.). Further, the typical EA 'picks' for net distribution are generally seen as filling around the edges of the mega-distributors.
FWIW: I think this discussion would be clearer if framed in last-dollar terms.
If Gates et al. are doing something like last-dollar optimisation, trying to save as many lives as they can by allocating across opportunities both now and in the future, leaving the best marginal interventions available right now on the table would imply they expect to exhaust their last dollar on more cost-effective interventions in the future.
This implies the right-now marginal price should be higher than the (expected) last-dollar cost per life (if not, they should be reallocating some of the 'last dollars' to interventions right now). Yet this in turn does not imply we should see $50Bn of lifesaving at the marginal price lying around right now. So it seems we can explain Gates et al. not availing themselves of the (non-existent) opportunity to (say) halve communicable diseases for $2Bn a year worldwide (extrapolating from the right-now marginal prices) without the right-now marginal price being lied about or manipulated. (Obviously, even if we forecast the Gates et al. last-dollar EV to be higher than the current marginal price, we might venture alternative explanations of this discrepancy besides them screwing us.)
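A minimal sketch of the last-dollar decision rule being described, with purely hypothetical numbers (not a claim about how Gates et al. actually model this):

```python
# A patient funder should fund an opportunity now only if its cost per life beats
# the cost per life it expects to face when spending its final ("last") dollar.
def fund_now(cost_per_life_now: float, expected_last_dollar_cost_per_life: float) -> bool:
    return cost_per_life_now < expected_last_dollar_cost_per_life

print(fund_now(5_000, 3_000))    # False: expects even better opportunities later, so it holds
print(fund_now(5_000, 20_000))   # True: the current $5k opportunity beats its expected last dollar
```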
↑ comment by Benquo · 2020-02-02T16:02:44.181Z · LW(p) · GW(p)
This is close, but I'm trying to say something slightly more complicated. GiveWell and CEA have, considered as a system, communicated somewhat ambiguously, and I keep seeing people construing the communications from that cluster in ways that don't add up. This is a predictable effect of GiveWell's and CEA's behavior, but my point isn't whether we should be mad at those orgs - my point is that the claims don't add up the way people keep assuming they do.
↑ comment by Benquo · 2020-02-02T16:28:49.833Z · LW(p) · GW(p)
But the estimate that you can save a life for $5000 remains probably true (with normal caveats about uncertainty) and is a really important message to get people thinking about ethics and how they want to contribute.
GiveWell seems not to think this is true:
GiveWell's general position is that you can't take cost-effectiveness estimates literally. It might be confusing that GiveWell nevertheless attempts to estimate cost-effectiveness with a great degree of precision, but Holden's on the record as saying that donors need to adjust for publication bias.
If you look at those detailed cost-effectiveness estimates, you'll find that GiveWell is usually compressing a variety of outcomes into a single metric. The amount of money it takes to literally prevent a death from malaria is higher than the amount of money it takes to do the "equivalent" of saving a life if you count indirect effects. (Nevertheless, the last time I checked, CEA reported the number as though it were literally the price for averting a death from malaria, so I can see why you'd be confused.)
Replies from: Yvain
↑ comment by Scott Alexander (Yvain) · 2020-02-02T20:32:47.590Z · LW(p) · GW(p)
I've read this. I interpret them as saying there are fundamental problems of uncertainty with saying any number, not that the number $5000 is wrong. There is a complicated and meta-uncertain probability distribution with its peak at $5000. This seems like the same thing we mean by many other estimates, like "Biden has a 40% chance of winning the Democratic primary". GiveWell is being unusually diligent in discussing the ways their number is uncertain and meta-uncertain, but it would be wrong (isolated demand for rigor) to retreat from a best estimate to total ignorance because of this.
Replies from: Benquo
↑ comment by Benquo · 2020-02-09T16:31:47.682Z · LW(p) · GW(p)
OK but (1) what about the fact that to a large extent they're not actually talking about saving lives if you look into the details of the cost-effectiveness estimate? (2) GiveWell's analysis does not account for the kind of publication bias end users of GiveWell's recommendations should expect, so yes this does analytically imply that we should adjust the $5k figure substantially upwards, based on some kind of model of what kinds of effectiveness claims get promoted to our attention.
↑ comment by gjm · 2020-02-01T12:37:04.472Z · LW(p) · GW(p)
Arguments very similar to this have been made by several people over at Ben's blog, and so far as I can make out his response has just been to dismiss them and reiterate his claim that if the numbers were as EA organizations claim then obviously they should be spending approximately all the money they have to make a big one-time reduction in communicable diseases etc.
It's also apparent from comments there that an earlier version of the post made approximately the same argument but based it on a claim that the number of cases of "communicable, maternal, neonatal and nutritional" diseases is declining at a rate of 30% per year, from which Ben evidently derived some total cost of fixing all such diseases ever to compare with e.g. the total resources of the Gates Foundation. That's a comparison that makes some sense. But after fixing that error (which, all credit to Ben, he did promptly when it was pointed out), he doesn't seem to have appreciably changed his conclusion. He's instead switched to this very-dodgy-looking comparison of annual disease-treating cost with total EA resources, left in place his conclusion that EA organizations don't really believe there are huge numbers of lives to be saved at low cost, and left in place his final conclusion that we should be spending money on ourselves and those around us rather than giving to EA causes.
Maybe I'm wrong, but all this looks to me less like the response I'd expect from someone who's simply trying to figure out what's true, and more like the response I'd expect from someone who's first decided to argue against EA, and then gone looking for arguments that might work.
Replies from: Benquo, Zack_M_Davis
↑ comment by Benquo · 2020-02-09T16:23:51.447Z · LW(p) · GW(p)
It seems to me like people keep switching between the "shallow diminishing returns" and "steep diminishing returns" stories, combining claims that only make sense in one scenario with claims that only make sense in the other, instead of taking the disjunction seriously and trying to do some actual accounting. So I keep trying to explain the disjunction.
Replies from: gjm
↑ comment by gjm · 2020-02-10T03:07:51.038Z · LW(p) · GW(p)
Could you give an example or two? I don't mean of one person assuming shallow diminishing returns and another assuming steep diminishing returns -- obviously different people may have different opinions -- but of a single person doing the sort of combination you describe.
The actual article doesn't, so far as I can see, at all focus on any such cases; it doesn't say "look, here are some bogus arguments people make that assume two different incompatible things"; rather, it says "EA organizations say you should give money to EA causes because that way you can do a lot of good per unit money, but they are lying to you and you should do other things with your money instead". (Not an actual quotation, of course, but I think a fair paraphrase.)
So I don't understand how your defence here makes any sense as a defence of the actual article.
A couple of other points, while I have your attention.
----
The article says this:
My former employer GiveWell in particular stands out, since it publishes such cost-per-life-saved numbers, and yet recommended to Good Ventures that it not fully fund GiveWell's top charities; they were worried that Good Ventures would be saving more than their "fair share" of lives.
All credit to you, once again, for linking to what GiveWell actually wrote. But ... it seems to me that, while indeed they did use the words "fair share", your description of their reasons doesn't at all match what they say. Let me quote from it:
Over the past couple of weeks, we’ve had many internal discussions about how to reconcile the goals of (a) recommending as much giving as possible from Good Ventures to top charities, which we consider outstanding giving opportunities; (b) preserving long-run incentives for individuals to support these charities as well. The proposals that have come up mostly fit into one of three broad categories:
... and then the three categories are "funging", "matching", and "splitting", and it's in explaining what they mean by "splitting" that they use the words "fair share". But the goal here, as they say it, is not at all to have everyone save a "fair share" of lives. They give some reasons for favouring "splitting" (tentatively and corrigibly) and those reasons have nothing to do with "fair shares". Also, they never, btw, talk about a fair share of lives saved but of funding provided, and while of course those things are closely connected they are not intensionally equivalent and there is an enormous difference between "we favour an approach that can be summarized as 'donors consider the landscape of donors and try to estimate their share of the funding gap, and give that much'" and "it would be bad if anyone saved more than their fair share of lives".
Could you explain why you chose to describe GiveWell's position by saying 'they were worried that Good Ventures would be saving more than their "fair share" of lives'? Do you actually think that is an accurate description of GiveWell's position?
----
A key step in your argument -- though it seems like it's simply taken the place of other entirely different key steps, with the exact same conclusion allegedly following from it, which as I mentioned above seems rather fishy -- goes like this. "If one could do a great deal of good as efficiently as the numbers commonly thrown about imply, then it would be possible to run an experiment that would verify the effectiveness of the interventions, by e.g. completely eliminating malaria in one country. No one is running such an experiment, which shows that they really know those numbers aren't real. On the other hand, if there's only a smallish amount of such good to be done that efficiently, then EA organizations should be spending all their money on doing it, instead of whatever else they're doing. But they aren't, which again shows that they really know those numbers aren't real. Either way, what they say is dishonest PR and you should do something else with your money."
It looks to me as if basically every step in this argument is wrong. Maybe this is because I'm misunderstanding what you're saying, or failing to see how the logic works. Let me lay out the things that look wrong to me; perhaps you can clarify.
- The "great deal of good" branch: running experiments.
- It doesn't at all follow from "there is an enormous amount of good to be done at a rate of $5k per life-equivalent" that there are nice conclusive experiments like reducing malaria deaths to zero in one country for one year and measuring the cost. Many malaria deaths in a given year may be from infections in earlier years; even if a large fraction of malaria can be prevented at $5k per life-equivalent, the marginal cost will surely increase a lot as you get to the hardest cases; eliminating all malaria deaths somewhere will probably require multiple different kinds of intervention, and any given organization has expertise only in a subset of them, and coordination is hard.
- You might want (genuinely, or for rhetorical purposes, or both) EA charities' money to be spent on running nice conclusive experiments, but that is no guarantee that that's actually the most effective thing for them to be doing.
- Still less is it a guarantee that they will see that it is. (It could be that running such an experiment is the best thing they could do because it would convince lots of people and open the floodgates for lots of donations, but that for one reason or another they don't realise this.) So even if (1) there are nice conclusive experiments they could run and (2) that would actually be the best use of their money, that's not enough to get from "they aren't running the experiments" to "they know the results would be bad" or anything like that. They might just have an inaccurate model of what the consequences of the experiments would be. But, for the avoidance of doubt, I think #1 and #2 are both extremely doubtful too.
- It's not perfectly clear to me who is supposed to be running these experiments. In order to get to your conclusion that EA organizations like GiveWell are dishonest, it needs to be those organizations that could run them but don't. But ... I don't think that's how it works? GiveWell doesn't have any expertise in running malaria-net experiments. An organization like AMF could maybe run them (but see above: most likely it would actually take lots of different organizations working together to get the sort of clear-cut answers you want) but it isn't AMF that's making the cost-per-life-equivalent claims you object to, and GiveWell doesn't have the power to force AMF to burn a large fraction of its resources on running an experiment that (for whatever reason) it doesn't see as the best use of those resources. (You mention the Gates Foundation as well, but they don't seem actually relevant here.)
- The "smallish amount of good" branch: what follows?
- If I understand your argument here correctly (which I may well not; for whatever reason, I find all your comments on this point hard to understand), you reckon that if there's (say) $100M worth of $5k-per-life-equivalent good to do, then GiveWell should just get Good Ventures to do it and move on.
- As you know, they have given some reasons for not doing that (the reasons I think you mischaracterized in terms of 'saving more than their "fair share" of lives').
- I think your position is: what they're doing is deliberately not saving lives in order to keep having an attractive $5k-per-life-equivalent figure to dangle in front of donors, which means that if you give $5k in the hope of doing one life-equivalent of good then you're likely actually just reducing the amount GiveWell will get Good Ventures to contribute by $5k, so even if the marginal cost really is $5k per life-equivalent then you aren't actually getting that life-equivalent because of GiveWell's policies. (I'm not at all sure I'm understanding you right on this point, though.)
- Whether or not it's your position, I think it's a wrong position unless what GiveWell have said about this is outright lies. When discussing the "splitting" approach they end up preferring, they say this: 'But they [sc. incentives for individual donors] are neutral, provided that the “fair share” is chosen in a principled way rather than as a response to the projected behavior of the other funder.' (Emphasis mine.) And: 'we’ve chosen 50% largely because we don’t want to engineer – or appear to be engineering – the figure around how much we project that individuals will give this year (which would create the problematic incentives associated with “funging” approaches).'
- Incidentally, they also say this: 'For the highest-value giving opportunities, we want to recommend that Good Ventures funds 100%. It is more important to us to ensure these opportunities are funded than to set incentives appropriately.' So for those "highest-value" cases, at least, they are doing exactly what you complain they are not doing.
- A separate consideration: the most effective things for a large organization to fund may not be the same things that are most effective for individual donors to fund. E.g., there may be long-term research projects that only make sense if future support is guaranteed. I think the Gates Foundation does quite a bit of this sort of thing, which is another reason why I think you're wrong to bring them in as (implicitly) an example of an organization that obviously would be giving billions for malaria nets if they were really as effective as the likes of GiveWell say they are.
- Suppose it turns out that the widely-touted figures for what it costs to do one life-equivalent of good are, in fact, somewhat too low. Maybe the right figure is $15k/life instead of $5k/life, or something like that. And suppose it turns out that GiveWell and similar organizations know this and are publicizing smaller numbers because they think it will produce more donations. Does it follow that we can't do a lot of good without a better and more detailed model of the relevant bit of the world than we can realistically obtain, and that we should all abandon EA and switch to "taking care of yourself and your friends and the people around you and your community and trying specific concrete things that might have specific concrete benefits"? I don't see that it does: to make EA a bad "investment" it seems to me that it has to be much wronger than you've given any reason to think it is likely to be. (Jeff K has said something similar in comments to the original article, but you didn't respond.)
↑ comment by Benquo · 2020-02-13T15:08:49.707Z · LW(p) · GW(p)
It doesn’t at all follow from “there is an enormous amount of good to be done at a rate of $5k per life-equivalent” that there are nice conclusive experiments like reducing malaria deaths to zero in one country for one year and measuring the cost. Many malaria deaths in a given year may be from infections in earlier years; even if a large fraction of malaria can be prevented at $5k per life-equivalent, the marginal cost will surely increase a lot as you get to the hardest cases; eliminating all malaria deaths somewhere will probably require multiple different kinds of intervention, and any given organization has expertise only in a subset of them, and coordination is hard.
It would be helpful if you actually described the specific quantitative scenario you have in mind here, instead of simply asserting that one exists. What proportion of malaria deaths do you think are from infection in prior years? (Bednets disproportionately save the lives of young children.) How many years does that mean we should expect such an experiment would need to be funded? What percentage of malaria deaths do you think can be prevented at ~$5000 per life saved? What's the implied maximum effect size at that cost (and at $10k per life saved) in a well-defined area like Madagascar, and what would be the total cost of running such an experiment?
Replies from: gjm
↑ comment by gjm · 2020-02-13T16:02:52.240Z · LW(p) · GW(p)
I think you have the burden of proof in the wrong place. You are claiming that if there's a lot of good to be done at $5k then there must be experiments that are obviously worth pouring a lot of resources into. I'm simply saying that that's far from clear, for the reasons I gave. If it turns out that actually further details of the situation are such as to mean that there must be good experiments to do, then your argument needs to appeal to those further details and explain how they lead to that conclusion.
I am not making any specific claim about what fraction of malaria deaths are from infection in prior years, or what proportion can be prevented at ~$5k per life-equivalent, etc. To whatever extent those are relevant to the correctness of your claim that EA organizations would be running the sort of experiments you propose if they really believed their numbers, your argument for that claim should already be in terms of those figures.
Replies from: Benquo
↑ comment by Benquo · 2020-02-17T16:04:37.219Z · LW(p) · GW(p)
As you point out, you're making entirely nonspecific claims. This is a waste of everyone's time; please stop doing so here.
Replies from: gjm
↑ comment by gjm · 2020-02-18T15:45:13.042Z · LW(p) · GW(p)
I'm pointing out what seem to me to be large and important holes in your argument.
To an objection of the form "You have given no good reason to think Y follows from X", it is not reasonable to respond with "You need to give a specific example of how you can have X and not Y, with realistic numbers in it".
I claim that you have given no reason to think that if there's a lot of good to be done at $5k per life-equivalent then there is necessarily an experiment that it's feasible for (say) GiveWell to conduct that would do something like eliminating all malaria deaths in Madagascar for a year. You've just said that obviously there must be.
I reject any norms that say that in that situation anyone saying that your reasoning has gaps in it is obliged to show concrete counterexamples.
However, because I'm an obliging sort of chap, let's have a go at constructing one and see what happens. (But, for the avoidance of doubt, I am not conceding that if my specific counterexample turns out not to work then it means your claim is right and mine is wrong. Of course it's possible that you know ahead of time that I can't construct a working counterexample, on account of having a better understanding than mine of the situation -- but, again, in that case communicating that better understanding should be part of your argument.) I'll look at Madagascar since that's the country you mentioned specifically.
[EDITED to add:] Although the foregoing paragraph talks about "constructing a counterexample", in fact what I did in the following paragraphs is just to make some guesses about numbers and see where they lead; I wasn't trying to pick numbers that are maximally persuasive or anything.
So, first of all let's find some numbers. Madagascar has a population of about 26 million. Malaria is the 7th most common cause of death there. If I'm reading the stats correctly, about 10% of the population has malaria and they have about 6k deaths per year. Essentially the entire population is considered at risk. At present Madagascar gets about $50M/year of malaria-fighting from the rest of the world. Insecticide-treated bed nets allegedly reduce the risk of getting malaria by ~70% compared with not having them; it's not clear to me how that's defined, but let's suppose it's per year. The statistics I've seen differ somewhat in their estimates of what fraction of the Madagascan population has access to bed nets; e.g., in this document from the WHO plot E on page 85 seems to show only ~5% of the population with access to either bed nets or indoor spraying; the table on page 117 says 6%; but then another table on page 122 estimates ~80% of households have at least one net and ~44% have at least one per two people. I guess maybe most Madagascan households have a great many people? These figures are much lower in Madagascar than in most of Africa; I don't know why. It seems reasonable to guess that bed net charities expect it to be more expensive, more difficult or less effective in Madagascar than in the other places where they have distributed more nets, but again even if this is correct I don't know what the underlying reasons are. I observe that several African countries have a lot more malaria deaths per unit population; e.g., Niger has slightly fewer people than Madagascar but nearly 3x as many malaria deaths. (And also about 3x as many people with malaria.) So maybe bed net distribution focuses on those countries?
So, my first observation is that this is all consistent with the possibility that the number of lives saveable in Madagascar at ~$5k/life is zero, because of some combination of { lower prevalence of malaria, higher cost of distributing nets, lower effectiveness of nets } there compared with, say, Niger or the DRC. This seems like the simplest explanation of the fact that Madagascar has surprisingly few bed nets per person, and it seems consistent with the fact that, while it certainly has a severe malaria problem, it has substantially less malaria per person than many other African countries. Let's make a handwavy guess that the effectiveness per dollar of bednets in Madagascar is half what it is in the countries with the best effectiveness-per-dollar opportunities, which conditional on that $5k/life-equivalent figure would mean $10k/life-equivalent.
Now, as to fatality: evidently the huge majority of people with malaria do not die in any given year. (~2.5M cases, ~6k deaths.) Malaria is a serious disease even when it doesn't kill you. Back of envelope: suppose deaths from malaria in Madagascar cost 40 QALYs each (life expectancy in Madagascar is ~66y, many malaria deaths are of young children but not all, there's a lot of other disease in Madagascar and I guess quality of life is often poor, handwave handwave; 40 QALYs seems like the right ballpark) and suppose having malaria but not dying costs 0.05 QALYs per year (it puts you completely out of action some of the time, makes you feel ill a lot more of the time, causes mental distress, sometimes does lasting organ damage, etc.; again I'm making handwavy estimates). Then every year Madagascar loses ~125k QALYs to nonfatal malaria and ~240k QALYs to fatal malaria. Those numbers are super-inexact and all I'm really comfortable concluding here is that the two are comparable. I guess (though I don't know) that bednets are somewhere around equally effective in keeping adults and children from getting malaria, and that there isn't any correlation between preventability-by-bednet and severity in any particular case; so I expect the benefits of bednets in death-reduction and other-illness-reduction to, again, be comparable. I believe death, when it occurs, is commonly soon after infection, but the other effects commonly persist for a long time. I'm going to guess that 3/4 of the effects of a change in bednet use happen within ~ a year, with a long tail for the rest.
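The hand-wavy QALY arithmetic above, made explicit (all inputs are the comment's own guesses):

```python
# Rough QALY burden for malaria in Madagascar, using the comment's guessed inputs.
annual_cases = 2_500_000
annual_deaths = 6_000
qalys_per_death = 40                  # guess: mostly young children, ~66y life expectancy
qalys_per_nonfatal_case_year = 0.05   # guess

print(annual_deaths * qalys_per_death)               # 240000   QALYs/year lost to fatal malaria
print(annual_cases * qalys_per_nonfatal_case_year)   # 125000.0 QALYs/year lost to nonfatal malaria
```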
So, let's put that together a bit. Most of the population is not currently protected by bednets. If they suddenly were then we might expect a ~70% reduction in new malaria cases that year, for those protected by the nets. Best case, that might mean a ~70% reduction in malaria deaths that year; presumably the actual figure is a bit less because some malaria deaths happen longer after infection. Call it 60%. Reduction in malaria harm that year would be more like 50%. Cost would be $10k per life-equivalent saved. Total cost somewhere on the order of $50M, a substantial fraction of e.g. AMF's total assets.
Another way to estimate the cost: GiveWell estimates that AMF's bednet distribution costs somewhere around $4.50 per net. So one net per person in Madagascar is $100M or so.
But that's only ~60% of the deaths; you wanted a nice clear-cut experiment that got rid of all the malaria deaths in Madagascar for one year. And indeed cutting deaths by 60% would not necessarily be conclusive, because the annual variation in malaria cases in Madagascar seems to be large and so is the uncertainty in counting those cases. In the 2010-2017 period the point estimates in the document I linked above have been as low as ~2200 and as high as ~7300; the error bars each year go from just barely above zero to nearly twice the point estimate. (These uncertainties are much larger, incidentally, than in many other African countries with similar malaria rates, which seems consistent with there being something about Madagascar that makes treatment and/or measurement harder than other African countries.)
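Putting the cost figures and the detectability worry above into one rough sketch (all inputs are the comment's approximate numbers; the comparisons are illustrative, not authoritative):

```python
# Cost estimates (the comment's rough figures).
population = 26_000_000
cost_per_net = 4.50                        # GiveWell's rough estimate of AMF's cost per net
print(population * cost_per_net)           # ~117000000.0: "one net per person ... is $100M or so"

deaths_now = 6_000
deaths_averted = deaths_now * 0.60         # assumed ~60% one-year reduction
assumed_cost_per_life_equiv = 10_000       # ~2x the best-case $5k figure
print(deaths_averted * assumed_cost_per_life_equiv)   # 36000000.0, same order as the "~$50M" guess

# Detectability: a 60% cut leaves ~2,400 deaths, inside the 2,200-7,300 range of recent
# point estimates, so the effect could easily be lost in the year-to-year noise.
print(deaths_now * (1 - 0.60))                       # 2400.0
print(2_200 <= deaths_now * (1 - 0.60) <= 7_300)     # True
```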
To get rid of all (or nearly all) the deaths in one year, presumably you need to eliminate infection that happens while people aren't sleeping under their bed nets, and to deal with whatever minority of people are unwilling or unable to use bed nets. Those seem like harder problems. I think countries that have eliminated malaria have done it by eliminating the mosquitoes that spread it, which is a great long-term solution if you can do it but much harder than distributing bed nets. So my best guess is that if you want to get rid of all the malaria, even for one year, you will have to spend an awful lot more per life-equivalent saved that year; I would be unsurprised by 10x as much, not that surprised by 100x, and not altogether astonished if it turned out that no one actually knows how to do it for any amount of money. It might still be worth it if the costs are large -- the future effects are large if you can eliminate malaria from a place permanently. (Which might be easier in Madagascar than in many other African countries, since it's an island.) But it puts the costs out of the range of "things existing EA charities could easily do to prove a point". And it's a Gates Foundation sort of project, not an AMF one, and indeed as I understand it the Gates Foundation is putting a lot of money into investigating ways to eliminate malaria.
Tentative conclusion: It's not at all obvious to me that this sort of experiment would be worthwhile. For "only" an amount of money comparable to the total assets of the Against Malaria Foundation, it looks like it might be possible to somewhat-more-than-halve malaria deaths in Madagascar for one year (and reduce ongoing malaria a bit in subsequent years). The expected benefits of doing this would be substantially less than those of distributing bed nets in the probably-more-cost-effective other places where organizations like AMF are currently putting them. Given how variable the prevalence of malaria is in Madagascar, and how uncertain the available estimates of that prevalence seem to be, it is not clear that doing this would be anything like conclusive evidence that bednet distribution is as effective as it's claimed to be. (All of the foregoing is conditional on the assumption that it is as effective as claimed.) To get such conclusive evidence, it would be necessary to do things radically different from, and probably far more expensive than, bednet distribution; organizations like AMF would have neither the expertise nor the resources to do that.
I am not very confident about any of the numbers above (other than "easy" ones like the population of Madagascar), and all my calculations are handwavy estimates (because there's little point doing anything more careful when the underlying numbers are so doubtful). But what those calculations suggest to me is that, whether or not doing the sort of experiment you propose would be a good idea, it doesn't seem to be an obviously good idea (since, in particular, my current best estimate is that it would not be a good idea). Therefore, unless I am shown compelling evidence pointing in a different direction, I cannot take seriously the claim that EA organizations that aren't doing such experiments show thereby that they don't believe that there is large scope for doing good at a price on the order of $5k per life-equivalent.
Replies from: Benquo
↑ comment by Benquo · 2020-03-22T19:03:45.846Z · LW(p) · GW(p)
You've given a lot of details specifically about Madagascar, but not actually responded to the substantive argument in the post. What global picture does this correspond to, under which the $5k per life saved figure is still true and meaningful? I don't see how the existence of somewhere for which no lives can be saved for $5k makes that claim any more plausible.
Replies from: gjm
↑ comment by gjm · 2020-03-22T22:54:36.676Z · LW(p) · GW(p)
Your claim, as I understood it -- which maybe I didn't, because you have been frustratingly vague about your own argument at the same time as demanding ever-increasing amounts of detail from anyone who questions it -- was that if the $5k-per-life-equivalent figure were real then there "should" be some experiment that could be done "in a well-defined area like Madagascar" that would be convincing enough to be a good use of the (large) resources it would cost.
I suggest that the scenario I described above is obviously consistent with a $5k-per-life-equivalent figure in the places where bednets are most effective per unit spent. I assume you picked Madagascar because (being isolated, fairly small, etc.) it would be a good place for an experiment.
If you think it is not credible that any global picture makes the $5k figure "true and meaningful" then it is up to you to give a good argument for that. So far, it seems to me that you have not done so; you have asserted that if it were true then EA organizations should be running large-scale experiments to prove it, but you haven't offered any credible calculations or anything to show that if the $5k figure were right then doing such experiments would be a good use of the available resources, and my back-of-envelope calculations above suggest that in the specific place you proposed, namely Madagascar, they quite likely wouldn't be.
Perhaps I'm wrong. I often am. But I think you need to provide more than handwaving here. Show us your detailed models and calculations that demonstrate that if the $5k figure is anywhere near right then EA organizations should be acting very differently from how they actually are acting. Stop making grand claims and then demanding that other people do the hard work of giving quantitative evidence that you're wrong, when you yourself haven't done the hard work of giving quantitative evidence that you're right.
Once again I say: what you are doing here is not what arguing in good faith usually looks like.
↑ comment by Benquo · 2020-02-13T15:02:46.882Z · LW(p) · GW(p)
Also, they never, btw, talk about a fair share of lives saved but of funding provided, and while of course those things are closely connected they are not intensionally equivalent and there is an enormous difference between “we favour an approach that can be summarized as ‘donors consider the landscape of donors and try to estimate their share of the funding gap, and give that much’” and “it would be bad if anyone saved more than their fair share of lives”.
In the context of a discussion about how much money to give to a specified set of nonprofits, where no other decisions are being discussed other than how to decide how much money to give, what is the difference?
Replies from: gjm
↑ comment by gjm · 2020-02-13T16:08:01.155Z · LW(p) · GW(p)
It's a bit like the difference between "Ben thinks Gareth is giving too much money to the Against Malaria Foundation" and "Ben thinks Gareth isn't letting enough babies die of malaria", in the context of a discussion about how individuals should allocate their money.
Replies from: Benquo
↑ comment by Benquo · 2020-02-13T14:58:43.126Z · LW(p) · GW(p)
Could you give an example or two? I don’t mean of one person assuming shallow diminishing returns and another assuming steep diminishing returns—obviously different people may have different opinions—but of a single person doing the sort of combination you describe.
I think Scott's doing that here [LW(p) · GW(p)], switching back and forth between a steep diminishing returns story (where Good Ventures is engaged in at the very least intertemporal funging as a matter of policy, so giving to one of their preferred charities doesn't have straightforward effects) and a claim that "you or I, if we wanted to, could currently donate $5000 (with usual caveats) and save a life."
The more general pattern is people making nonspecific claims that some number is "true." I'm claiming that if you try to make it true in some specific sense, you have to posit some weird stuff that should be strongly decision-relevant.
Replies from: gjm
↑ comment by gjm · 2020-02-13T16:22:56.608Z · LW(p) · GW(p)
So I assume you're objecting to his statement near the end that "the estimate that you can save a life for $5000 remains probably true (with normal caveats about uncertainty)", on the basis that he should actually say "you probably can't really save a life for $5000 because if you give that $5000 then the actual result will be that Good Ventures gives less in future because GiveWell will make sure of that to ensure that alleged $5000 opportunities continue to exist for PR reasons".
But I don't see the alleged switching back and forth. So far as I can see, Scott simply disagrees with you about the intertemporal funging thing, perhaps for the same reason as I think I do (namely, that GiveWell's actual statements about their recommendations to Good Ventures specifically claim that they are trying to make them in a way that doesn't involve intertemporal funging of a sort that messes up incentives in the way you say it does).
Where do you think Scott's comment assumes the "steep diminishing returns story"?
It does tell a steep-diminishing-returns story about the specific idea of trying to run the sort of experiment you propose. But part of his point is that that sort of experiment would likely be inefficient and impractical, unlike just continuing to do what AMF and similar charities are already doing with whatever funding is available to them. The diminishing returns are different in the two scenarios, and it could be that they are much steeper if you decide that your goal is to eliminate all malaria deaths on Madagascar than if your goal is to reduce malaria in all the areas where there's a lot of malaria that can be addressed via bed nets. It can simultaneously be true that (1) there are readily available opportunities to save more than 6k extra lives by distributing more bed nets, at a cost of $5k per life saved, and that (2) if instead you want to save specifically all 6k people who would otherwise have died from malaria in Madagascar this year, then it will cost hugely more than $5k per life. And also, relatedly, that (3) if instead of this vague "you" we start trying to be specific about who is going to do the thing, then in case 1 the answer is that AMF can save those lives by distributing bed nets, a specific thing that it knows how to do well, whereas in case 2 the answer is that there is no organization that has all the competences required to save all those lives at once, and that making it happen would require a tremendous feat of coordination.
Replies from: Benquo
↑ comment by Zack_M_Davis · 2020-02-01T16:35:19.964Z · LW(p) · GW(p)
You are entitled to your character assessment of Ben (Scott has argued that bias arguments have nowhere to go, while others, including Ben, contend that modeling motives is necessary [LW · GW]), but if you haven't already read the longer series that the present post was distilled from, it might be useful for better understanding where Ben is coming from: parts 1 2 3 4 5 6.
Replies from: gjm
↑ comment by gjm · 2020-02-01T20:27:55.687Z · LW(p) · GW(p)
For the avoidance of doubt, I didn't make a character assessment of Ben, I made an assessment of his arguments on this point. I think those arguments are the sort of arguments people make in bad faith, but that needn't mean Ben's making them in bad faith. But he might reasonably care what it looks like; if so, he might want to take a different approach in responding to his critics.
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2020-02-01T21:40:57.780Z · LW(p) · GW(p)
But ... that's at least a probabilistic character assessment, right? Like, if someone exhibits a disposition to behave in ways that are more often done by bad-faith actors than good-faith actors, that likelihood ratio favors the "bad-faith actor" hypothesis, and Bayesian reasoning says you should update yourself incrementally [LW · GW]. Right? What am I missing here?
Replies from: gjm
↑ comment by gjm · 2020-02-01T21:58:25.172Z · LW(p) · GW(p)
Making a character assessment of someone is a fundamentally different thing from saying something that is Bayesian evidence about their character, for the obvious reason that saying anything is Bayesian evidence about their character.
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2020-02-03T05:58:36.759Z · LW(p) · GW(p)
I agree that saying anything is, technically, Bayesian evidence about their character, but some statements are much more relevant [LW · GW] to someone's character than others. When you say someone's response doesn't look like what you'd expect to hear from someone trying to figure out what's true, that's not very different from just saying that you suspect they're not trying to figure out what's true. Why not cut out the indirection? (That was a rhetorical question; the answer is, "Because it's polite.")
Maybe I'm wrong, but this looks to me less like the response I'd expect from someone not making a character assessment, and more like the response I'd expect from someone who's trying to make a character assessment (which could be construed as a social attack, by the sort of people who do that thing) while maintaining plausible deniability that they're not making a character assessment (in order to avoid being socially attacked on grounds of having made a social attack, by the sort of people who do that thing).
Replies from: gjm
↑ comment by gjm · 2020-02-03T11:27:18.207Z · LW(p) · GW(p)
Your answer to your own rhetorical question is wrong, for two reasons. First, because you are confusing likelihoods with posteriors. Second, because you are confusing one-off actions with dispositions.
Likelihoods and posteriors
Yes, it's impolite to say "I think Ben isn't really trying to figure out what's true", and more polite to say "What Ben says is more like what someone says who isn't really trying to figure out what's true".
But it's also wrong to say "I think Ben isn't really trying to figure out what's true", if your actual epistemic state matches mine. Without reading the linked post, I would strongly expect that Ben is really trying to figure out what's true. On the other hand, if I had only the linked post and no other information about Ben, I would (as you obviously think I do) think that Ben is almost certainly arguing with an anti-EA bottom line already written.
But, in fact, I have both that post and other evidence that Ben is generally, let's say, truth-aligned. So what's actually going on? I don't know. So I followed the generally excellent procedure of reporting likelihood rather than posterior, and described how that post seems to me.
(I do also generally prefer to be polite, so probably the threshold for how confident I am that someone's being intellectually dishonest before saying so explicitly is higher than if my only concern was maximum-bandwidth communication. In this case, I don't believe my opinion is over where the threshold would be without concerns for politeness.)
One-off actions and dispositions
But, to be clear, I did intend to communicate that I think it genuinely possible (maybe even likely? Not sure) that on this particular occasion Ben has been arguing in bad faith.
But this is not a statement about Ben's character, it's a statement about his actions on one occasion. It is extremely common for people to do out-of-character things from time to time.
As you said above, of course "X acted in bad faith on this occasion" is evidence for "X is generally a bad-faith actor", which is a character judgement; but, as I said above, almost everything is evidence for or against almost everything, and furthermore almost everything is non-negligible evidence for or against almost everything related, and that is not good enough reason to abandon the distinctions between them.
Acting in bad faith on one occasion is not good enough evidence of a general disposition to act in bad faith for "X acted in bad faith here" to be in any way equivalent to "X is the sort of person who commonly acts in bad faith".
Clear thinking requires that we distinguish between likelihoods and posteriors. Clear thinking requires that we distinguish between one-off actions and general dispositions [LW · GW]. Your comment about "your character assessment of Ben" ignored both distinctions. I don't think you should do that.
Replies from: Zack_M_Davis
↑ comment by Zack_M_Davis · 2020-02-03T16:19:24.171Z · LW(p) · GW(p)
Oh, thanks for this explanation (strong-upvoted); you're right that distinguishing likelihoods and posteriors is really important. I also agree that single occasions only make for a very small update on character [LW · GW]. (If this sort of thing comes up again, maybe consider explicitly making the likelihood/posterior point up front? It wasn't clear to me that that's what you were getting at with the great-great-great-grandparent [LW(p) · GW(p)].)
comment by Rafael Harth (sil-ver) · 2020-01-31T19:26:31.322Z · LW(p) · GW(p)
(This is a general comment about the argument, not about the revisions.)
Neither scenario suggests that small donors should try to fill this funding gap. If they trust big donors, they should just give to the big donors. If they don't, why should they believe a story clearly meant to extract money from them?
Because some people are trustworthy and others aren't.
The reason why I believe the EA claims is pretty simple: I trust the people making them. The fact that there is a lot of altruistic value sort of lying on the sidewalks may be a priori surprising, but we have so much evidence that maximizing altruism is extremely rare that I don't see much of an argument left at this point. EY made this point [LW · GW] in Inadequate Equilibria:
Eliezer: Well, mostly I’m implying that maximizing altruism is incredibly rare, especially when you also require sufficiently precise reasoning that you aren’t limited to cases where the large-scale, convincing study has already been done; and then we’re demanding the executive ability to start a new project on top of that. But yes, I’m also saying that here on Earth we have much more horrible problems to worry about.