Heuristics and Biases in Charity
post by Kaj_Sotala · 2012-03-02T15:20:23.582Z · LW · GW · Legacy · 57 comments
Here on LW, we know that if you want to do the most good, you shouldn't diversify your charitable giving. If a specific charity makes the best use of your money, then you should assign your whole charitable budget to that organization. In the unlikely case that you're a millionaire and the recipient couldn't make full use of all your donations, then sure, diversify. But most people couldn't donate that much even if they wanted to. Also, if you're trying to buy yourself a warm fuzzy feeling, diversification will help. But then you're not trying to do the most good, you're trying to make yourself feel good, and you'd do well to have separate budgets for those two.
We also know about scope insensitivity: when three groups of subjects were asked how much they'd pay to save 2,000 / 20,000 / 200,000 migrating birds from drowning in oil, they answered $80, $78, and $88, respectively. "How much do I value it if 20,000 birds are saved from drowning in oil?" is a hard question, and we're unsure of what to compare it with. So we substitute an easier and clearer question: "how much emotion do I feel when I think about birds drowning in oil?" And that question doesn't take the number of birds into account, so the number gets mostly ignored.
So diversification and scope insensitivity are two biases that people have, and which affect charitable giving. What others are there?
According to Baron & Szymanska (2010), there are a number of heuristics involved in giving that lead to various biases. Diversification we are already familiar with. The others are Evaluability, Average vs. Marginal Benefit, Prominence, Parochialism, Identifiability, and Voluntary vs. Tax.
The general principle of Evaluability has been discussed on LW before, though not in a charitable context. It is directly related to scope insensitivity, since both arise from the difficulty of judging whether a charitable cause is a worthy one. Suppose that you need to choose between two charities, one dedicated to malaria prevention and the other to treating parasitic worm infections. Which one is a more worthy cause? Or should you instead donate to something else entirely?
Presuming that you don't happen to know about GiveWell's reports on the two charities and haven't studied the topic, you probably have no idea which one is better. But you still need to make a decision, so you look for something to base it on. And one type of information that's relatively easily available for many charities is their overhead: what percentage of their costs goes to administration, as opposed to actual work. So you might end up choosing the charity with the lowest administration costs, the one which spends the largest share of its money on actual charity work.
If you truly have no other information available, then this might really be the best you can do. But overhead is by itself a bad criterion. Suppose that charities A and B both receive $100. Charity A spends $10 on overhead and saves 9 human lives with the remaining $90. Charity B, on the other hand, allocates $25 toward its operating expenses, but manages to save 15 lives with the remaining $75. B is clearly better, but the overhead heuristic tells us to give to A.
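To make the arithmetic explicit, here is a minimal Python sketch of the example above (the function and all numbers are purely illustrative, not drawn from any real charity's books):

```python
# What matters is lives saved per dollar donated, with overhead already
# accounted for; the overhead ratio by itself tells you very little.

def lives_per_donated_dollar(total_donation, overhead, cost_per_life):
    """Lives saved per dollar given to the charity, net of overhead."""
    program_budget = total_donation - overhead
    return (program_budget / cost_per_life) / total_donation

# Charity A: $10 overhead, saves one life per $10 of program spending.
a = lives_per_donated_dollar(100, 10, 10)  # 9 lives / $100 = 0.09
# Charity B: $25 overhead, saves one life per $5 of program spending.
b = lives_per_donated_dollar(100, 25, 5)   # 15 lives / $100 = 0.15
assert b > a  # B wins despite an overhead ratio 2.5 times higher
```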
GoodIntents.org also provides a number of other reasons why you shouldn't use overhead as your criterion: the figure is easy to manipulate, and the pressure to keep administration costs low can cause organizations to understaff projects, or to favor programs that are inefficient but have low administration costs. Still, many donors base their decision on the easy-to-evaluate overhead figure rather than on some more meaningful measure.
Average vs. Marginal Benefit. Two charitable organizations provide figures about their effectiveness. Charity A claims to save one life for every 900 dollars donated. Charity B claims to save one life for every 1400 dollars donated. Charity A is clearly the correct choice - right?
Maybe. If Charity A is a large organization, it could be that they're unable to spend the extra money effectively. It could be that the most recent million dollars they've received in donations has actually been dragging down their average, and they currently need an extra $2000 for each additional life they save. In contrast, charity B might have just paid off most of its fixed costs, and can now leverage each additional $800 donation into a saved life for a while.
Information on the marginal benefit of a dollar is often hard to come by, especially since it's in the interest of many well-funded charities to hide this information. But it's still something to keep in mind.
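A toy model makes the average/marginal distinction concrete. The cost curves below are invented to match the figures above; real marginal-cost data would have to come from the charities themselves:

```python
# Average cost per life can hide a much worse marginal cost.

def average_cost(total_spent, total_lives):
    return total_spent / total_lives

def marginal_cost(cost_fn, lives):
    """Cost of saving one *additional* life at the current scale."""
    return cost_fn(lives + 1) - cost_fn(lives)

# Charity A: the first 1000 lives cost $900 each, later ones $2000 each.
cost_a = lambda n: 900 * min(n, 1000) + 2000 * max(n - 1000, 0)
# Charity B: $60,000 of fixed costs (already paid), then $800 per life.
cost_b = lambda n: 60_000 + 800 * n

print(average_cost(cost_a(1000), 1000))  # $900/life: A's advertised average
print(marginal_cost(cost_a, 1000))       # $2000 for the *next* life at A
print(average_cost(cost_b(100), 100))    # $1400/life: B's advertised average
print(marginal_cost(cost_b, 100))        # $800 for the next life at B
```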
Prominence. People tend to pay attention to a single prominent attribute, or to the attribute they view as most important. This can be an effective fast-and-frugal heuristic, but focusing on only one attribute to the exclusion of others may make it difficult or impossible to weigh tradeoffs. It may also cause people to ignore efficiency: if two safety programs differ in cost and in the number of lives saved, people tend to choose the one that saves more lives, even when the difference in lives is small and the difference in cost is large. As a result, they may pay large sums for only a small increase in the amount of good done, even though the extra money would have been better spent elsewhere.
Parochialism is an in-group bias in which people weigh the welfare of their own group more heavily than that of outsiders. In charity, this may show itself as Americans preferring to give to American charities, even if African ones save more lives per dollar. Whether this is truly a bias depends on whether one tries to carry out perfect utilitarianism: if not, preferring to help one's own group first is a question of values, not rationality. On the other hand, if one does strive for pure utilitarianism, then it should not matter where the recipients of aid are located.
It could also be that attempting to correct for parochialism would reduce the total amount of charitable giving, if there are many people whose altruism is limited purely to the in-group. Denied the chance to help the in-group, such people might choose not to donate at all.
On the other hand, if US citizens do experience a sense of commitment to tsunami victims in Hawaii, then it might be reasonable to presume that the same cognitive mechanism would affect their commitment to New Zealanders who suffered the same fate. If so, this suggests that parochialism results from cognitive biases. For instance, an American may have an easier time imagining daily life in Hawaii in detail than daily life in New Zealand, and this difference in vividness may affect the amount of empathy they experience.
For those who do want to reduce parochialism, there is some evidence that it is greater for harms of inaction than of action: people are reluctant to actively harm outsiders, but much more willing to simply not help them. If this can be made to seem like an inconsistency, people might feel a stronger obligation to help outsiders. Parochialism can also be reduced by encouraging people to think of outsiders as individuals rather than as members of an abstract group: "New Zealanders" might not attract as much empathy as imagining some specific happy family of New Zealanders, essentially no different from a family in any other country.
“Writing about his experiences in the Spanish Civil War, George Orwell tells this story. He had gone out to a spot near the Fascist trenches from which he thought he might snipe at someone. He waited a long time without any luck. None of the enemy made an appearance. Then, at last, some disturbance took place, much shouting and blowing of whistles followed, and a man jumped out of the trench and ran along the parapet in full view. He was half-dressed and was holding up his trousers with both hands as he ran. I refrained from shooting at him. I did not shoot partly because of that detail about the trousers. I had come here to shoot at ‘Fascists’; but a man holding up his trousers isn’t a ‘Fascist’, he is visibly a fellow-creature, similar to yourself, and you don’t feel like shooting at him.”
Identifiability. Aid recipients who are identifiable evoke more empathy than recipients who are not. In one "dictator game" study, where people could choose to give somebody else some amount of money, giving was higher when the recipient was identified by last name. Small et al. (2007) note that people often become entranced with specific, identifiable victims. In 1987, one child, "Baby Jessica", received over $700,000 in donations from the public when she fell into a well near her home in Texas. In 2003, £275,000 was quickly raised for the medical care of Ali Abbas, a wounded Iraqi boy. And in one case, more than $48,000 was contributed to save a dog stranded on a ship adrift in the Pacific Ocean near Hawaii.
From a simple utilitarian perspective, identifiability is a bias. By increasing altruism toward identifiable victims, it may reduce altruism toward unidentified ones, who are often the ones most in need of help. On the other hand, it could also increase overall altruism, by making people willing to incur greater personal costs to help the identifiable victims.
In fact, Small et al. found that teaching people about the identifiability effect makes them less likely to give to identifiable victims, but no more likely to give to statistical victims. So if you see a story about an identifiable victim and kill your impulse to give to them, or experience pleasure from never feeling that impulse in the first place, please take the money you would have donated to the victim if you hadn't known about the effect and actually give it to some worthier cause! The altruism chip jar is a great way of doing this.
Baron & Szymanska suggest an alternative way that might help in channeling the effects of identifiability to good ends: "Victims all have names. The fact that we are aware of one of them is an accident. We could make up names for the others, or even tell ourselves that our donation to some relief fund is going to help someone named Zhang." So if you know rationally that it'd be good to give to a "statistical" cause but are tempted to give to an "identifiable" cause instead, come up with some imaginary person who'd be helped by your "statistical" donation and think of how glad they'd be to receive your aid.
Voluntary vs. Tax. Finally, some people oppose government aid programs supported by taxes, often referred to as "forced charity". I'm inclined to consider this more of a value than a bias, but Baron & Szymanska argue that
In part, the bias against “forced charity” may arise from a belief in freedom, the belief that government should not force us to help others but should, more or less, provide us with services from which we all benefit and pay for collectively, such as roads, military defense, and protection of our property. (Some libertarians would not even go that far.) Insofar as this is true, it may represent a kind of cognitive inconsistency. Some people benefit very little from roads or property protection, so paying taxes for these things is a way of forcing them to sacrifice for the benefit of others. It is a matter of degree.
If we do accept that government aid programs are as morally good as private ones, then that suggests that contributions to political causes that support helpful programs could sometimes be more efficient than direct contributions to the programs themselves. Although the probability of having some effect through political action is very low, the benefits of a successful initiative are potentially very high. Thus the expected utility of donating to the right political campaign might be higher than the expected utility of donating to an actual charity.
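As a back-of-the-envelope sketch, with every number invented purely for illustration:

```python
# Direct giving: suppose $1000 reliably saves one life.
ev_direct = 1 / 1000  # expected lives saved per dollar

# Political giving: suppose each dollar buys a one-in-ten-million chance
# of enacting a program that would save 50,000 lives.
p_success_per_dollar = 1e-7
lives_if_enacted = 50_000
ev_political = p_success_per_dollar * lives_if_enacted  # 0.005 lives/dollar

print(ev_political > ev_direct)  # True, under these made-up assumptions
```

The conclusion is entirely driven by the assumed probability, which is far harder to estimate for a political campaign than for direct aid.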
References
Baron, J., & Szymanska, E. (2010). Heuristics and biases in charity. In D. Oppenheimer & C. Olivola (Eds.), The science of giving: Experimental approaches to the study of charity (pp. 215–236). New York: Taylor and Francis. http://www.sas.upenn.edu/~baron/papers/charity.pdf
Small, D. A., Loewenstein, G., & Slovic, P. (2007). Sympathy and callousness: The impact of deliberative thought on donations to identifiable and statistical victims. Organizational Behavior and Human Decision Processes, 102, 143–153. http://opim.wharton.upenn.edu/risk/library/J2007OBHDP_DAS_sympathy.pdf
Comments
comment by Scott Alexander (Yvain) · 2012-03-02T16:58:26.449Z · LW(p) · GW(p)
From a simple utilitarian perspective, identifiability is a bias. By increasing altruism toward identifiable victims, it may reduce altruism toward unidentified ones, who are often the ones most in need of help. On the other hand, it could also increase overall altruism, by making people willing to incur greater personal costs to help the identifiable victims.
So part of what I think is going on here is that giving to statistical charity is a slippery slope. There is no one number that it's consistent to give: if I give $10 to fight malaria, one could reasonably ask why I didn't give $100; if I give $100, why not $1000; and if $1000, why not every spare cent I make? Usually when we're on a slippery slope like this, we look for a Schelling point, but there are only two good Schelling points here: zero and every spare cent for the rest of your life. Since most people won't donate every spare cent, they stick to "zero". I first realized this when I thought about why I so liked Giving What We Can's philosophy of donating 10% of what you make; it's a powerful suggestion because it provides some number between 0 and 100 which you can reach and then feel good about yourself.
Then identifiable charity succeeds not just because it attaches a face to people, but also because it avoids the slippery slope. If we're told we need to donate to save "baby Jessica", it's very easy to donate exactly as much money as is necessary to help save baby Jessica and then stop. The same is true of natural disasters; if there's an earthquake in Haiti, that means we can donate money to Haiti today but not be under any consistency-related obligations to do so again until the next earthquake. If Haiti is just a horrible impoverished country, then there's no reason to donate now as opposed to any other time, and this is true for all possible "now"s.
Feedback appreciated as I've been planning to make a top-level post about this if I ever get time.
↑ comment by Oscar_Cunningham · 2012-03-02T18:34:59.438Z · LW(p) · GW(p)
There's a quote about this:
Perfect is the enemy of good.
Commonly attributed to Voltaire
↑ comment by Bugmaster · 2012-03-02T18:46:11.324Z · LW(p) · GW(p)
It's also a common Russian saying, FWIW. Maybe we ripped it off from Voltaire, though.
↑ comment by vi21maobk9vp · 2012-03-03T11:01:19.987Z · LW(p) · GW(p)
In Russian, it is even more blunt - "better is the enemy of good", without the superlative associated with "perfect".
↑ comment by Grognor · 2012-03-04T06:02:55.468Z · LW(p) · GW(p)
That was Voltaire's original phrasing. http://en.wikipedia.org/wiki/Perfect_is_the_enemy_of_good
↑ comment by Pablo (Pablo_Stafforini) · 2012-03-03T20:36:21.521Z · LW(p) · GW(p)
In Argentina, the Spanish version of this saying ("Lo mejor es enemigo de lo bueno") is often attributed to dictator Juan Perón.
However, let us not lose sight of Yvain's main point, which is not that this sort of slippery slope exists, but that the identifiability heuristic works in part because it avoids it.
↑ comment by Richard_Kennaway · 2012-03-03T17:05:34.434Z · LW(p) · GW(p)
Definitive Voltairean wording and source (although Voltaire himself attributes it to an unnamed "Italian sage"):
↑ comment by Will_Newsome · 2012-03-03T03:58:32.626Z · LW(p) · GW(p)
Upvoted simply because Less Wrong is seriously lacking in discussion of Schelling points and how they're critical components of the way humans think about practical problems.
↑ comment by Gust · 2012-03-02T18:31:52.295Z · LW(p) · GW(p)
I do think your hypothesis is plausible, but the reasoning it describes seems too complex. One would think like that only if one cares about being consistent and reflects on that, and only after one has decided that the "I've done my part of the job" excuse is not enough... and it seems improbable that most people think like that.
Also, it seems to me that "help Haiti just this once" is not the same scenario as "help just this person".
Worth testing, though. I guess if you set up a scenario like "Help poor kid X grow up well", a long term goal with kinda-hard-to-predict cost that most people wouldn't be willing to pay all at once, with a specific identifiable subject...
↑ comment by [deleted] · 2012-03-02T21:40:55.695Z · LW(p) · GW(p)
I do think your hypothesis is plausible, but the reasoning it describes seems too complex. One would think like that only if one cares about being consistent and reflects on that, and only after one has decided that the "I've done my part of the job" excuse is not enough... and it seems improbable that most people think like that.
The enormous line of research on cognitive dissonance (see the forced compliance paradigm in particular) indicates the importance of consistency, even when it isn't consciously recognized as such.
↑ comment by Kaj_Sotala · 2012-03-04T12:04:27.794Z · LW(p) · GW(p)
If we're told we need to donate to save "baby Jessica", it's very easy to donate exactly as much money as is necessary to help save baby Jessica and then stop.
I have the impression that identifiable cases tend to get far more money than what'd be needed to save them.
↑ comment by CronoDAS · 2012-03-06T07:25:28.788Z · LW(p) · GW(p)
Well, this is one data point. After the initial request for help was posted, the requested amount was reached in one day... and some people continued to donate even after the stated goal was reached.
These people also managed to raise much more money than they originally asked for...
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-03-02T22:04:46.824Z · LW(p) · GW(p)
It sucks how sometimes I notice a way in which I could be more effective, but don't do anything because I could've in theory done something a long time ago.
↑ comment by handoflixue · 2012-03-23T00:42:26.485Z · LW(p) · GW(p)
Very solid point, and I appreciate it - I immediately identify with it as one of the major reasons I tend not to engage in charitable giving myself, except for those rare occasions where a charity I support is requesting a specific (and small) amount...
↑ comment by multifoliaterose · 2012-03-04T02:51:43.327Z · LW(p) · GW(p)
So part of what I think is going on here is that giving to statistical charity is a slippery slope. There is no one number that it's consistent to give: if I give $10 to fight malaria, one could reasonably ask why I didn't give $100; if I give $100, why not $1000; and if $1000, why not every spare cent I make? Usually when we're on a slippery slope like this, we look for a Schelling point, but there are only two good Schelling points here: zero and every spare cent for the rest of your life. Since most people won't donate every spare cent, they stick to "zero". I first realized this when I thought about why I so liked Giving What We Can's philosophy of donating 10% of what you make; it's a powerful suggestion because it provides some number between 0 and 100 which you can reach and then feel good about yourself.
There's another option which I think may be better for some people (but I don't know because it hasn't been much explored). One can stagger one's donations over time (say, on a quarterly basis) and alter the amount that one gives according to how one feels about donating based on the feeling of past donations. It seems like this may maximize the amount that one gives locally subject to the constraint of avoiding moral burnout.
If one feels uncomfortable with the amount that one is donating because it's interfering with one's lifestyle one can taper off. On the flip side I've found that donating gives the same pleasure that buying something does: a sense of empowerment. Buying a new garment that one realistically isn't going to wear or a book that one realistically isn't going to read feels good, but probably not as good as donating. This is a pressure toward donating more.
↑ comment by handoflixue · 2012-03-23T00:44:53.017Z · LW(p) · GW(p)
"On the flip side I've found that donating gives the same pleasure that buying something does: a sense of empowerment."
Hmmm, useful to know. I may have to experiment with this one. I often end up buying stuff simply because the act of purchasing things makes me feel better, and I can't see any reason a small donation to charity wouldn't produce similar results...
↑ comment by MichaelVassar · 2012-03-02T23:33:42.120Z · LW(p) · GW(p)
This seems very plausible to me.
comment by Wei Dai (Wei_Dai) · 2012-03-02T23:16:58.694Z · LW(p) · GW(p)
Here on LW, we know that if you want to do the most good, you shouldn't diversify your charitable giving.
That may not be true when you're not sure what "doing good" means. For example, giving to multiple charities could be considered rational under Bostrom and Ord's Parliamentary Model of dealing with moral uncertainty.
↑ comment by steven0461 · 2012-03-03T23:24:54.678Z · LW(p) · GW(p)
I'm almost tempted to see this as a reductio ad absurdum of the Parliamentary Model.
Suppose you had $100 and were splitting it between the Exciting Futurism Institute and the Fuzzy Animals Foundation. Suppose you knew an anonymous benefactor had given $100 to the FAF earlier that year. Suppose you suddenly remember the benefactor was you! Does that mean you now give the $100 to EFI? That seems like bizarre behavior to me.
↑ comment by Wei Dai (Wei_Dai) · 2012-03-04T06:06:19.138Z · LW(p) · GW(p)
Why does it seem bizarre? I'm not getting the same feeling...
↑ comment by steven0461 · 2012-03-04T07:31:25.738Z · LW(p) · GW(p)
I guess it seems bizarre because you're changing your behavior in response to a piece of information that tells you nothing about moral philosophy and nothing about the consequences of the behavior. Or is the idea that there are good consequences from timeless cooperation between conflicting selves, or something? But I'm not seeing any gains from trade here, and cooperation isn't Bostrom and Ord's original justification, as far as I know. The original scenario is about an agent whole-heartedly committed to doing the right thing as defined by some procedure he doesn't know the outcome of. And what if you found out the earlier donation had been a pure behavioral tic of a sort that doesn't respond to cooperation? Would you still treat it as though it had been made by you, or would you treat it as though it had been made by something else? If the Parliamentary Model tells you to put 30% of your effort into saving puppies, is it good enough if 30% of your Everett copies put all their effort into it and 70% put none of their effort into it? If so, how much effort should you expend on research into what your parallel selves are currently up to? I'm very confused here, and I'm sure it's partly because I don't understand the parliamentary model, but I'm not convinced it's wholly because of that.
↑ comment by Wei Dai (Wei_Dai) · 2012-03-07T20:48:10.071Z · LW(p) · GW(p)
I guess you're right, the Parliamentary Model seems a better model for moral conflict than moral uncertainty. It doesn't affect my original point too much (that it's not necessarily irrational to diversify charitable giving), since we do have moral conflict as well as moral uncertainty, but we should probably keep thinking about how to deal with moral uncertainty.
↑ comment by steven0461 · 2012-03-07T21:40:12.943Z · LW(p) · GW(p)
I think if you apply this reasoning to moral conflict between different kinds of altruism, it becomes a restatement of "purchase fuzzies and utilons separately", except with more idealized assumptions about partial selves as rational strategists. It seems to me that if I'm the self that wants utilons, then "purchase fuzzies and utilons separately" is a more realistic strategy for me to use in that it gives up only what is needed to placate the other selves, rather than what the other selves could bargain for if they too were rational agents. With parliament-like approaches to moral conflict it sometimes feels to me as though I'm stuck in a room with a rabid gorilla and I'm advised to turn into half a gorilla to make the room's output more agenty, when what is really needed is some relatively small amount of gorilla food, or maybe a tranquilizer gun.
↑ comment by Wei Dai (Wei_Dai) · 2012-03-07T22:06:52.131Z · LW(p) · GW(p)
You may not be a typical person. Consider instead someone who's conflicted between egoism, utilitarianism, and deontology, where these moralities get more or less influence from moment to moment in a chaotic manner but maintain a sort of long-term balance of power. The Parliamentary Model could be a way for such a person to coordinate actions so that he doesn't work against himself.
↑ comment by Will_Newsome · 2012-03-04T06:25:00.054Z · LW(p) · GW(p)
On a related note, in a previous thread I think you said that certain axioms needed to derive Bayesian probability seemed sort of iffy to you. I was wondering, is it possible to connect Bayes' anthropic weirdness problems to any axioms in particular?
↑ comment by Wei Dai (Wei_Dai) · 2012-03-04T06:39:39.914Z · LW(p) · GW(p)
I wrote a post about that. Is it what you're looking for?
↑ comment by Will_Newsome · 2012-03-04T06:44:21.620Z · LW(p) · GW(p)
Wow, thanks! I'd never seen that post for some reason. (ETA: Apparently I had in fact seen it and remembered the comments, but not the post... scumbag brain.)
(This is another cool post from 2009 that I didn't see until a year ago.)
↑ comment by lukeprog · 2012-03-06T16:26:50.667Z · LW(p) · GW(p)
Which paper was linked in the first sentence of that post? The link is broken now.
↑ comment by Wei Dai (Wei_Dai) · 2012-03-06T18:22:58.139Z · LW(p) · GW(p)
Thanks, it's fixed now.
comment by novalis · 2012-03-04T20:24:09.821Z · LW(p) · GW(p)
Here on LW, we know that if you want to do the most good, you shouldn't diversify your charitable giving.
If this is so, then why is the Singularity Institute spinning off a separate rationality org? Shouldn't one of rationality or FAI be more important?
↑ comment by katydee · 2012-03-04T20:38:13.449Z · LW(p) · GW(p)
To an individual, perhaps; but there are almost certainly people out there who think rationality is important but don't think FAI is important, and thus would be willing to donate to the rationality group but not to SIAI.
↑ comment by fubarobfusco · 2012-03-04T21:53:26.835Z · LW(p) · GW(p)
While I like the idea of FAI, I'm unconvinced that AGI is an existential threat in the next two or three human generations; but I'm confident that raising the sanity waterline will be of help in dealing with any existential risks, including AGI. Moreover, people who have differing beliefs on x-risk should be able to agree that teaching rationality is of common interest to their concerns.
↑ comment by steven0461 · 2012-03-04T22:16:33.917Z · LW(p) · GW(p)
Diminishing returns from either individual activity may be important on that scale.
comment by Dmytry · 2012-03-02T21:27:40.171Z · LW(p) · GW(p)
I'm entirely unconvinced by the case against diversifying donations. If you assume that your algorithm for choosing charities might be faulty in an exploitable way, then the #1 charity may be sufficiently able and motivated to exploit you, since the reward is all of your money (and the money of everyone who reasons like you); each of the top 5 has only a fifth of that incentive.
Let's consider selfish actions, to engage our primarily selfish intelligence. Should you invest in one corporation, the one you deem most effective? The investment-to-payoff scenario matches that of charitable giving rather well, except that you are the beneficiary (and you do care not to invest in something that flops and goes bankrupt).
Of course it is the case that in investments, and in charitable giving, people diversify for entirely wrong reasons, and perhaps over-diversify. But then, the very same people, when told not to diversify, may well respond by donating less overall, for a lower expected benefit.
↑ comment by CarlShulman · 2012-07-25T04:29:18.629Z · LW(p) · GW(p)
Should you invest in one corporation, the one you deem most effective? The investment-to-payoff scenario matches that of charitable giving rather well, except that you are the beneficiary (and you do care not to invest in something that flops and goes bankrupt).
You have strong reason not to do this anyway because of risk aversion. This is like saying, "Should you serve butter or margarine to your guests? To get a better intuition, consider the selfish version, where you are yourself going to eat either pristine butter, or a container of margarine that has been poisoned with arsenic?"
If you assume that your algorithm for choosing charities might be faulty in an exploitable way, then the #1 charity may be sufficiently able and motivated to exploit you, since the reward is all of your money (and the money of everyone who reasons like you); each of the top 5 has only a fifth of that incentive.
I agree this is an issue, and that you should take manipulable signals as weaker evidence because of Goodhart's Law. But this effect doesn't automatically dominate. Selecting for good expected value with your best efforts incentivizes efforts to produce signals of value, through real as well as fakeable signals.
Note that GiveWell and friends do not follow your heuristic: the great majority of funds flow to the top charity. They take into account the possibility of faked data (to mess with CBA) in their evaluation process, valuing independent verification, defenses against publication bias, audits, and so forth. But in light of those efforts, they think that the benefits of incentivizing (and assisting) more effective and transparent charities outweigh the risk of incentivizing fakers who can defeat their strong countermeasures.
↑ comment by Douglas_Knight · 2012-03-03T05:50:11.207Z · LW(p) · GW(p)
Your first paragraph assumes that giving $5 to the top charity is of no more value than giving $1 to that charity.
If you don't believe me, come up with a formal model that doesn't assume that and see what it says. Just do the math.
↑ comment by Dmytry · 2012-03-03T07:46:41.020Z · LW(p) · GW(p)
Okay, here's the model: the expected utility of $1 to each of the chosen top 5 charities is nearly equal (due to inaccuracy in evaluating the utilities), and the charities are nearly linear (not super-linear). The expected utility of donating $x to charity i is x*a[i], and for the top 5 charities the a[i] values are very close to equal. [They are very close to equal precisely because of your inability to evaluate the utilities of donations to charities accurately.]
(for reasonable values of x; we already determined that a multi-billionaire needs to diversify)
Thus the combined utility of paying $100 to each of the top 5 charities is nearly equal to the utility of paying $500 to the top one. There is a slight loss, because the expected utility of the #1 charity is very slightly above that of #5.
At the same time, the strategic reasoning is as follows: the function I (and people like me) used for selecting the top charity (or even the top 5) may be exploitable. When the donation is split between the top 5, each has 1/5 the incentive to exploit, so the decision to split, while unable to affect anything about the contribution right now, affects the future payoff of exploitative strategies (and, if known beforehand, affects the past payoff estimates as well).
Of course the above reasoning does not work at all if you are grossly overconfident in your evaluations of charities and assume some giant differences between the expected utilities of the top 5, differences which you furthermore had detected correctly.
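In code, the tradeoff looks like this (with made-up a[i] values, under the linear model above):

```python
# Utility of donating $x to charity i is x * a[i]; for the top 5 the
# a[i] are nearly equal, because our evaluations are imprecise.
a = [1.00, 0.99, 0.99, 0.98, 0.98]  # hypothetical utilities per dollar

lump = 500 * a[0]                  # $500 to the top charity:   500.0
split = sum(100 * ai for ai in a)  # $100 to each of the top 5: 494.0

print(lump, split)  # splitting costs about 1% of the expected utility,
# while cutting each charity's incentive to game your ranking to 1/5
# of your budget instead of all of it.
```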
↑ comment by handoflixue · 2012-03-23T00:52:17.703Z · LW(p) · GW(p)
I think "exploit" is a bad way of looking at it , for the reasons that pengvado objects to. However, there's also the possibility that you're running an incorrect algorithm, or have otherwise made some fault in reasoning when selecting the Top #1 charity.
Also, if numerous people run the same algorithm, you're more likely to run into over-saturation issues with a "single charity" model (a thousand people all decide to donate $100 this month; suddenly Charity A has $100K, and can only efficiently use, say, $20K). I'd mostly see this coming up when a major influence (such as a news story) pushes a large number of people to donate suddenly, without any easy way to "cap" that influence (e.g. the news is unlikely to say "okay, Haiti disaster funding is good, stop now").
It's important to realize that if we have, say, a 50% chance of being wrong about each charity, and we're donating $100, we're still producing an expected $50 worth of charity regardless of how we split it. However, if we put all our eggs in one basket, we get either $100 or $0 worth of charity. With five different charities, we have a bell curve, with $100, $80, $60, $40, $20, and $0 as possibilities.
If charity is linear, it doesn't matter. However, I'd suspect that there are incentives favoring the bell curve, both because it minimizes the worst-case $0-benefit scenario, and simply out of an aesthetic/personal preference for less risky investments. (If nothing else, risk-averse individuals will probably donate more to a bell curve than to an "all or nothing" gambit.)
Obviously I'm simplifying with the idea of an "all or nothing" gambit for the most part (though a fraudulent charity really could be one!), but I think it illustrates why splitting donations really is beneficial even if "shut up and multiply" says they're approximately equal.
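A quick simulation of this argument, assuming (as in the example) that each charity independently has a 50% chance of being worthless:

```python
import random

def realized_value(n_charities, budget=100.0, p_good=0.5):
    """Dollars of 'real' charity produced when the budget is split n ways."""
    per_charity = budget / n_charities
    return sum(per_charity for _ in range(n_charities)
               if random.random() < p_good)

trials = 100_000
for n in (1, 5):
    results = [realized_value(n) for _ in range(trials)]
    mean = sum(results) / trials
    variance = sum((r - mean) ** 2 for r in results) / trials
    print(n, round(mean, 1), round(variance))
# 1 charity:   mean ~ $50, variance ~ 2500 (all or nothing)
# 5 charities: mean ~ $50, variance ~ 500 (the "bell curve")
```

The expected value is the same either way; splitting only buys a narrower spread of outcomes, which is exactly the risk-aversion point above.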
↑ comment by gwern · 2012-03-23T02:15:14.578Z · LW(p) · GW(p)
Also, if numerous people run the same algorithm, you're more likely to run into over-saturation issues with a "single charity" model (a thousand people all decide to donate $100 this month; suddenly Charity A has $100K, and can only efficiently use, say, $20K). I'd mostly see this coming up when a major influence (such as a news story) pushes a large number of people to donate suddenly, without any easy way to "cap" that influence (e.g. the news is unlikely to say "okay, Haiti disaster funding is good, stop now").
If 'numerous' people manage to actually select and overload the same charity, that charity probably has someone running a similar algorithm and will be smart enough to pass the money on to choice #2. (Funnily enough, charities can and do donate to other charities.)
↑ comment by handoflixue · 2012-03-23T18:40:47.050Z · LW(p) · GW(p)
"that charity probably has someone running a similar algorithm"
That does not follow, unless you're assuming a community of perfect rationalists.
I'm assuming here a community of average people, where Reporter Sara happened to run a personal piece about her favorite charity, Honest Bob's Second Hand Charity, which pulls in $50K/year. The story goes viral, and suddenly Honest Bob has a million dollars in donations, no clue how to best put it to use, and a genuine conviction that his charity is truly the best one out there.
Even if we assume a community of rational donors, that doesn't mean the charity is itself rational. If the charity won't rationally handle over-saturation (over-confidence in its own abilities, lack of knowledge about other charities, the overhead of redistributing, social repercussions, etc.), then the community has to handle it. The ideal would probably be a meta-organization: Honest Bob can only really handle $50K more, so everyone donates $100, $50K goes to Honest Bob, and the rest is split proportionally and refunded or invested in second-pick charities.
However, the meta-organization is just running the same splitting algorithm on a larger scale. You could just as easily have everyone donate $5 instead of $100, and Honest Bob now has his $50K without the overhead expenses of such a meta-organization.
So, unless you're dealing with a perfectly rational charity that can both recognize and respond to its own over-saturation point, splitting is still a rational tactic.
↑ comment by pengvado · 2012-03-03T20:48:07.515Z · LW(p) · GW(p)
If there's many charities competing to exploit the same ranking heuristic, then your proposal replaces an incentive of (probability p of stealing all of the donations) with (probability 5*p of stealing 1/5 of the donations). That doesn't look like an improvement to me.
↑ comment by handoflixue · 2012-03-23T00:59:40.134Z · LW(p) · GW(p)
http://lesswrong.com/lw/aid/heuristics_and_biases_in_charity/63gy - the second half addresses specifically why "probability 5p of losing 1/5" might be preferred to "probability p of losing all". In short, it produces a bell curve instead of an "all or nothing" gambit.
↑ comment by Dmytry · 2012-03-04T07:29:46.689Z · LW(p) · GW(p)
The effort put toward exploiting a ranking heuristic is not restricted to the set {0, whatever value is most convenient for your rationalization}. The effort-to-payoff curve flattens out at the high-effort end, where greater effort gets you nothing beyond being in the top 5.
It is clear you are rationalizing: 5p > 1 when p > 0.2 (which it can be, if one expends sufficiently more effort toward raising p than anyone else), and thus 5p can't possibly make sense as a probability.
comment by JoshuaFox · 2012-03-03T19:01:53.954Z · LW(p) · GW(p)
Excellent article.
A couple of minor points:
Giving $1 to a charity can serve the purpose of stating one's support and endorsement. This is an argument for getting lots of people to give that $1.
Giving "parochially" can help if you have better information as to the effectiveness of your donations, e.g. if money to help the poor is being handled by a neighbor you know well and trust. Of course, this consideration can be dominated by others like the greater effect of money to the extreme poor.
But your points are quite correct.
comment by MichaelVassar · 2012-03-02T23:35:17.857Z · LW(p) · GW(p)
To some extent, we may have heuristics in charity evaluation which support some of these behavioral patterns and which are adaptive but evolved along the grain of our natural cognitive biases in order to protect us from Pascal's Mugging and other types of exploitation.
Also, people mostly want to do things that others are doing, not to do the maximally good things.
comment by Pablo (Pablo_Stafforini) · 2014-03-04T13:55:05.146Z · LW(p) · GW(p)
Great essay!
The goodintents link is dead.
comment by Rhwawn · 2012-03-02T15:49:10.869Z · LW(p) · GW(p)
Downvoted, rather obvious.
↑ comment by CronoDAS · 2012-03-06T07:17:51.312Z · LW(p) · GW(p)
Everything is obvious once you've already mastered it.
↑ comment by wedrifid · 2012-03-06T11:00:09.336Z · LW(p) · GW(p)
Tried to think of a counterexample to this. I failed.
↑ comment by handoflixue · 2012-03-23T00:40:55.673Z · LW(p) · GW(p)
Then you clearly haven't mastered the fine art of counter-examples :)