Optimizing Fuzzies And Utilons: The Altruism Chip Jar

post by orthonormal · 2011-01-01T18:53:43.060Z · LW · GW · Legacy · 49 comments

Related: Purchase Fuzzies and Utilons Separately

We genuinely want to do good in the world; but also, we want to feel as if we're doing good, via heuristics that have been hammered into our brains over the course of our social evolution. The interaction between these impulses (in areas like scope insensitivity, refusal to quantify sacred values, etc.) can lead to massive diminution of charitable impact, and can also suck the fun out of the whole process. Even if it's much better to write a big check at the end of the year to the charity with the greatest expected impact than it is to take off work every Thursday afternoon and volunteer at the pet pound, it sure doesn't feel as rewarding. And of course, we're very good at finding excuses to stop doing costly things that don't feel rewarding, or at least to put them off.

But if there's one thing I've learned here, it's that lamenting our irrationality should wait until we've properly searched for a good hack. And I think I've found one.

Not just that, but I've tested it out for you already.

This summer, I had just gone through the usual experience of being asked for money for a nice but inefficient cause, turning the collectors down, and feeling a bit bad about it. I made a mental note to donate some money to a more efficient cause instead, but worried that I'd forget about it; it's too much work to make a bunch of small donations over the year (plus, if done by credit card, the fees take a bigger cut that way), and there's no way I'd remember that day's resolution at the end of the year.

Unless, that is, I found some way to keep track of it.

So I made up several jars with the names of charities I found efficient (SIAI and VillageReach) and kept a bunch of poker chips near them. Starting then, whenever I felt like doing a good deed (and especially if I'd passed up an opportunity to do a less efficient one), I'd take a chip of an appropriate value and toss it in the jar of my choice. I have to say, this gave me much more in the way of warm fuzzies than if I'd just waited and made up a number at the end of the year.

And now I've added up and made my contributions: $1,370 to SIAI and $566 to VillageReach.
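
(If you'd rather keep the tally digitally than with physical chips, here's a minimal sketch of the same bookkeeping in Python; the charity names and chip values below are just my running example.)

```python
from collections import defaultdict

# Each "chip" is a pledge of some dollar value to a charity's jar.
jars = defaultdict(int)

def toss_chip(charity, value):
    """Record a warm-fuzzy moment as a pledge to donate later."""
    jars[charity] += value

toss_chip("SIAI", 25)
toss_chip("VillageReach", 10)
toss_chip("SIAI", 5)

# At year's end, write one check per jar.
for charity, total in sorted(jars.items()):
    print(f"{charity}: ${total}")
```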

Let me know if you start trying this out, or if you have any suggested improvements on it. In any case, may your altruism be effective and full of fuzzies!

ADDED 12/26/13: I've continued to use this habit, and I still totally endorse it!

49 comments

Comments sorted by top scores.

comment by JoshuaFox · 2011-01-02T10:46:07.872Z · LW(p) · GW(p)

Once my workplace had a party/fair, allegedly to raise money for some charity.

I was slightly miffed at the low utilon-to-fuzzies ratio, and at the company taking credit for the employees' fundraising with no corporate matching.

So, when I was asked for money at the event (one-on-one, not in front of everyone), I wrote a check to my favorite charity, for about the same total as the entire fundraiser, right in front of the person asking for the money. I explained myself politely, and the requester (I think) took it as an impressive act of charity rather than as asociality. The check was in addition to my usual monthly donation.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-01-01T18:54:09.820Z · LW(p) · GW(p)

Moved to LW main and promoted.

Let me know if this seems like a bad idea for some reason, but when something gets 34 upvotes...

Replies from: JenniferRM, orthonormal, DanArmak
comment by JenniferRM · 2011-01-03T22:29:27.069Z · LW(p) · GW(p)

I think it might be helpful to set a numerical cutoff and perhaps an editorial policy (e.g., so that terribly edited content or bare links don't hit the front page) and then stick to that policy. Before seeing what you did here, I briefly thought that this sort of action would be good for the recent vegetarianism discussion, simply because it had more than 20 upvotes and raised apparently legitimate issues about socially calibrating oneself on pragmatic moral issues that require abstract thinking.

Of course, that discussion implicitly criticized SIAI as visible non-vegetarians. The implicit danger of promoting this but not that is that it can be seen as consistent with promoting things that build SIAI and LW organizationally, and thus clearly benefit you as the founder. Some readers might infer the existence of such a policy and think it is off mission relative to the "refining the fine art of human rationality" tagline.

The lack of one intervention can give the lie to an explanation for a second intervention if the explanation would justify things that weren't done. North Korea, for example, clearly had WMDs (and no oil reserves) but wasn't invaded, and the details of why it "didn't count" were never spelled out in plain English by policymakers, leaving people free to speculate.

Honestly, I think organizational development is instrumentally critical to the tagline mission, because institutions and F2F work can do a lot more a lot faster than mere blog posts, and there is plenty of good literature to this effect. But these sorts of issues can be very tricky to get right if the inferential distance between the readership and the mods gets too large. The connection between institutional growth and front-line effort isn't always obvious to everyone.

Having a bright-line editorial policy that can be seen to promote content that doesn't obviously build the organization is probably useful for visibly signaling good faith from above. Another approach might be to directly explain part of the rational basis for pursuing certain kinds of institutional growth, so that instead of justifying this with "34 votes" you could have justified it with "34 votes and efficaciously pro-social".

(Also, on a general note, the front page is pretty awesome right now, with more solid content and less meetup stuff than has been normal for a while. If someone is doing something to consciously bring about this state of affairs, they deserve credit.)

comment by orthonormal · 2011-01-01T19:40:33.957Z · LW(p) · GW(p)

Thanks! I'm fine with that.

I'd have tried to polish it up more if I'd expected it to get promoted, but in that case I might have put off writing it altogether.

comment by DanArmak · 2011-01-01T20:12:33.267Z · LW(p) · GW(p)

The post's upvote score now shows as zero (a dot), even when I add/remove my own vote.

Replies from: Sniffnoy, orthonormal
comment by Sniffnoy · 2011-01-01T20:53:08.840Z · LW(p) · GW(p)

Dot doesn't indicate 0; dot indicates "this is new, so we're not displaying a score".

Replies from: DanArmak
comment by DanArmak · 2011-01-01T21:31:58.295Z · LW(p) · GW(p)

Ah! Thanks for the explanation. And indeed now it's displaying a score.

comment by orthonormal · 2011-01-01T20:24:24.321Z · LW(p) · GW(p)

Are you using the old link to the post in the Discussion section or accessing it from the main page?

Replies from: DanArmak
comment by DanArmak · 2011-01-01T20:27:26.806Z · LW(p) · GW(p)

From the main page.

comment by orthonormal · 2010-12-31T22:08:39.572Z · LW(p) · GW(p)

Small note: the post originally had my second charity as the Stop TB Partnership (GiveWell's second-place charity) rather than VillageReach, essentially on the theory that if everyone following GiveWell donates only to the very top charity, then other charities have no incentive to become more transparent unless they can claim the top spot.

Then I went to actually make my donation, and my warm fuzzies were interrupted by the donation process. I switched back to VillageReach, whose donations are handled much more efficiently.

Yeah, this is one of those minor issues, but I think it's really important for my future willingness to donate that I have a good first experience and no nagging doubts about the process.

Replies from: diegocaleiro
comment by diegocaleiro · 2011-01-03T05:53:49.953Z · LW(p) · GW(p)

Just came up with a nice idea for a very good first experience:

Put actual money in the jar. When the year ends, make the donation by check or online.

Match the donation you just made with a donation to yourself: a yearly gift that your previous altruistic selves gave you to spend on happiness-increasing activities. That is, right after finishing your donation, open the jar, put the money in your pocket, and start thinking about how to invest it in happiness.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-01-01T04:23:24.120Z · LW(p) · GW(p)

Suggest turning this into LW main post. (You can Edit and re-save it there.)

Replies from: David_Gerard
comment by David_Gerard · 2011-01-01T17:09:47.335Z · LW(p) · GW(p)

I would like to know SIAI's official position on the Slate article and its suggestions to prospective donors.

Replies from: Eliezer_Yudkowsky, Nick_Tarleton
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-01-01T18:53:04.445Z · LW(p) · GW(p)

Landsburg is correct about what rational agents should do. (Period.)

Human altruists may have to resort to more complex tactics like http://lesswrong.com/lw/6z/purchase_fuzzies_and_utilons_separately/.

Replies from: timtyler
comment by timtyler · 2011-01-01T19:08:12.996Z · LW(p) · GW(p)

My analysis is here.

Landsburg doesn't seem to be thinking about risk aversion - which is what the whole concept of a diverse portfolio depends upon. Maybe risky propositions are less common for charities than for investments - since charities like to offer people a sure thing. However, there are certainly some "long-shot" charities out there.

Replies from: Eliezer_Yudkowsky, quanticle
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-01-01T21:16:06.816Z · LW(p) · GW(p)

Risk aversion would apply if you were an egoist trying to make sure you got at least some warm glow. It does not apply if you're an altruist trying to help people as much as they can be helped.

This is not a complicated issue.
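
A toy calculation makes the distinction concrete (a sketch; the probabilities and payoffs are invented): take two identical, independent projects, each a 50/50 bet to produce 10 units of good.

```python
from math import sqrt

# Each (probability, good) pair is one outcome of a donation strategy.
concentrated = [(0.5, 0), (0.5, 10)]        # all money on one project
split = [(0.25, 0), (0.5, 5), (0.25, 10)]   # half on each project

def expected(outcomes, u=lambda x: x):
    """Expected utility of a distribution over amounts of good done."""
    return sum(p * u(good) for p, good in outcomes)

# Linear (altruist) utility: both strategies give 5 expected units.
print(expected(concentrated), expected(split))              # 5.0 5.0

# Concave (warm-glow) utility: the hedge looks better.
print(expected(concentrated, sqrt), expected(split, sqrt))  # ~1.58 ~1.91
```

Expected good done is identical; only a concave utility function over one's own sense of having helped rewards the hedge.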

Replies from: Kevin, MichaelVassar
comment by Kevin · 2011-01-02T10:36:52.267Z · LW(p) · GW(p)

It does not apply if you're an altruist trying to help people as much as they can be helped.

First I would like to note that I don't disagree with you in practice, though I remain sorely tempted to donate to VillageReach.

If your goal is not to maximize altruism, but rather to ensure a certain minimum level of altruism given massive uncertainty about the effectiveness of charities, I could see it being reasonable to split donations.

Let's imagine there are two competing existential risk reduction charities. We could call them, say, the Singularity Foundation and the Future of Humanity Cooperative. Neither of them is rated by GiveWell because there are no real metrics for evaluating them. If your main concern is not to maximize altruism but to minimize the chance that you give all of your money to something practically useless, why not split? I think it's possible that timtyler means something like this by diversification, though of course I don't think that risk aversion trumps altruism.

Replies from: Eliezer_Yudkowsky, Caspian
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-01-02T10:39:38.882Z · LW(p) · GW(p)

If your goal is not to maximize altruism, but rather to ensure a certain minimum level of altruism given massive uncertainty about the effectiveness of charities

No one who cares about helping people cares about that.

Only people trying to ensure a satisficing level of warm glow care about that.

What part of "Steven Landsburg was simply correct about what a rational agent should do" is so hard for people to come to terms with? Not every mistake can be excused as an amazing clever strategy in disguise.

Replies from: David_Gerard
comment by David_Gerard · 2011-01-02T12:59:54.810Z · LW(p) · GW(p)

What part of "Steven Landsburg was simply correct about what a rational agent should do" is so hard for people to come to terms with? Not every mistake can be excused as an amazing clever strategy in disguise.

I think he's right in a hypothetical world of rational donors who don't interact. I think his strategy fails to win in this world, where most actors aren't rational and where donors do interact in great clumps.

Replies from: nshepperd
comment by nshepperd · 2011-01-02T13:25:12.059Z · LW(p) · GW(p)

Maybe I'm being dumb but I don't see how that's likely to happen. What mechanism is going to cause more net altruism to be created by diversifying in order to influence irrational donors?

Replies from: David_Gerard
comment by David_Gerard · 2011-01-02T13:25:56.855Z · LW(p) · GW(p)

Leading by example, for one. It's somewhat similar to voting with dollars.

There is also the Pareto-like structure where you do a funding drive by starting with large donors and using them to recruit the next level down - "I gave $50k, you can give $10k." This works so well in practice that it's pretty much a standard way to run a proper funding drive. Note that it works by turning donors into co-conspirators.

Replies from: nshepperd
comment by nshepperd · 2011-01-02T13:35:18.152Z · LW(p) · GW(p)

Right, but is that more effective when you spread your donations than when you concentrate them?

Well, I suppose donating to less effective causes might incentivize donors interested in those causes to start donating "at all", and might be worthwhile if they couldn't be convinced to adopt the better charity instead. Is that what you mean?

comment by Caspian · 2011-01-02T15:11:43.038Z · LW(p) · GW(p)

In your hypothetical, is the goal to ensure a minimum level of altruistic effectiveness in total from all donors, or a minimum level attributable to your individual donation?

The former is more selflessly altruistic, but I think you mean the latter.

comment by MichaelVassar · 2011-01-02T04:56:54.340Z · LW(p) · GW(p)

It's not complicated, yet in practice selecting for people who get this simple issue yielded Carl Shulman, while David Brin couldn't get it for some reason. So, complicated or not, it seems like a good high-ceiling indicator.

comment by quanticle · 2011-01-01T20:10:00.113Z · LW(p) · GW(p)

I submit that the concept of "risk aversion" doesn't really apply to charitable donations. Risk aversion applies to investments, where you have a desire to get your money back. When you give to charity, there is no such expectation.

Replies from: timtyler
comment by timtyler · 2011-01-01T20:21:08.486Z · LW(p) · GW(p)

Givers have the corresponding expectation of helping, though.

Just as some investors care about the chances of making money (rather than just expected gains) - I figure some givers are interested in the chances of helping.

Replies from: Vaniver
comment by Vaniver · 2011-01-02T00:24:51.999Z · LW(p) · GW(p)

Two issues are at play here:

  1. Risk of ruin. Diversifying is a good idea as an investor because if all of your investments fail, you are ruined. If you're a philanthropist, you aren't ruined if all your donations fail to do good things.

  2. Negative correlation of returns. Hedges are a good idea if you know two vehicles are likely to move in opposite directions; that way, by picking both, you'll go up in more situations than if you just picked one.

I'm not sure the second is a strong issue for charities.

Replies from: timtyler
comment by timtyler · 2011-01-02T09:22:17.476Z · LW(p) · GW(p)

Note that the risk of charitable assistance not helping can also matter to the giver when they would have been among those helped - for example, when considering asteroid deflection charities, or disease research charities.

As I understand it, the psychology of charitable giving means that this is quite often the case in practice - people tend to support charities whose work also happens to benefit them, their sick relatives, their pets, their gender - or whatever.

Replies from: Vaniver
comment by Vaniver · 2011-01-02T11:36:42.342Z · LW(p) · GW(p)

While true, this runs into the main caveat of the "only support one charity" advice: you have to be donating an amount small enough not to change the marginal value of a dollar. When that's true, dollars and utils are linearly related, meaning there's no direct benefit to diversification.

Also, for existential risk the negative correlation hedge doesn't matter. Hedging your bets only pays off when one bet succeeds and another bet fails, but with x-risks, if a single bet fails, all of the bets stop mattering. So you should figure out which x-risk gives you the strongest returns when you fight it, and devote all your resources to that until the marginal value drops below that of fighting another x-risk.

Now, there is a strong argument for diversifying due to ignorance: if you think that A will reduce risk by 5±1 and B will reduce risk by 4±1.5, then you should give 71% of your money to A and 29% to B.
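
(One calculation that reproduces those figures, offered as a sketch: read the ±s as standard deviations of independent normal estimates, and split in proportion to the probability that each charity is actually the better one. Whether such probability matching is the right response to uncertainty is just what the replies below dispute.)

```python
from math import erf, sqrt

def normal_cdf(x):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + erf(x / sqrt(2)))

mu_a, sd_a = 5.0, 1.0   # A reduces risk by 5 +/- 1
mu_b, sd_b = 4.0, 1.5   # B reduces risk by 4 +/- 1.5

# P(A's true impact exceeds B's): A - B ~ Normal(1.0, sqrt(3.25))
p_a_better = normal_cdf((mu_a - mu_b) / sqrt(sd_a**2 + sd_b**2))

print(f"{p_a_better:.0%} to A, {1 - p_a_better:.0%} to B")  # 71% to A, 29% to B
```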

Replies from: timtyler
comment by timtyler · 2011-01-03T13:05:24.640Z · LW(p) · GW(p)

To illustrate what I meant: if you are giving to charities that aim to cure a fatal disease that you happen to have, then you face an increased risk of ruin if your donations don't help, broadly similar to the risk that investors diversify their portfolios to guard against.

Of course, that is not selfless altruism, but it is still giving money to charities with the aim of helping them meet their goals, rather than for signalling purposes.

Replies from: Caspian
comment by Caspian · 2011-01-03T23:33:15.448Z · LW(p) · GW(p)

This still isn't enough to invalidate the argument against diversifying. I'm not fully convinced by that argument, but...

Suppose your money would be enough to increase charity A's chance of finding a cure from 50% to 50.08%, or charity B's chance from 50% to 50.06%, or, split between them, to increase A's to 50.04% and B's to 50.03%. I'm pretty sure you're better off giving it all to A, which increases the chance of at least one charity finding a cure from 75% to 75.04% (the split only gets you to about 75.035%).

Suppose both charities have diminishing returns so funding to increase chance beyond 55% is less effective. That's irrelevant to the situation where we aren't in that range.

Suppose charity A had a 30% chance of being either corrupt or of taking a completely useless approach to the cure, in which case it wouldn't find it no matter how much money was donated. So long as the 50%, 50.04%, and 50.08% figures have taken this into account, you don't need to consider it further.

Suppose one of the charities had already hit diminishing returns, and would have been able to increase from 1% chance of success to 2% with the amount of your donation, if they hadn't already had enough money for a 50% chance. That's irrelevant to the situation where we aren't in that range.

I only chose 50% to make the maths a bit easier; so long as neither is near 100%, similar arguments apply, though you need to make sure you're considering each charity's effect on the total chance of a cure, not its chance of discovering it itself.
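
A quick check of that arithmetic (a sketch assuming the two charities' chances of finding a cure are independent):

```python
def p_any_cure(p_a, p_b):
    """Chance that at least one of two independent charities finds a cure."""
    return 1 - (1 - p_a) * (1 - p_b)

print(p_any_cure(0.5000, 0.5000))  # baseline:      0.75
print(p_any_cure(0.5008, 0.5000))  # all to A:      ~0.7504
print(p_any_cure(0.5004, 0.5003))  # split A and B: ~0.75035, so A wins
```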

Replies from: Will_Sawin, timtyler
comment by Will_Sawin · 2011-01-04T11:47:35.741Z · LW(p) · GW(p)

If you consider the good produced to be -log(probability(no cure)), with logs base 2 so that 50% is one unit, 75% is two units, etc., then assuming independence you can just add the amounts from each charity.

This actually has an increasing-returns effect, which may partially or entirely mitigate a diminishing-returns one. Regardless, if your donation is still small, only the derivatives matter.
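
In code (a sketch using log base 2, so a 50% chance of a cure counts as one unit, matching the figures above):

```python
from math import log2

def units_of_good(p_cure):
    """'Units of good': -log2(P(no cure))."""
    return -log2(1 - p_cure)

print(units_of_good(0.50))  # 1.0
print(units_of_good(0.75))  # 2.0

# Independence makes the measure additive: two independent 50% charities
# give a 75% combined chance of a cure, i.e. 1 + 1 = 2 units.
combined = 1 - (1 - 0.5) * (1 - 0.5)
print(units_of_good(combined))  # 2.0
```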

comment by timtyler · 2011-01-04T21:00:56.520Z · LW(p) · GW(p)

Diversifying can pay off, even in relatively simple models, when you have inaccurate information. If you think charity A is best, but it ultimately turns out that that is because they spend 99% of their budget on marketing and advertising, then a portfolio with A, B, and C in it would have been highly likely to produce better results than giving everything to charity A.

Maybe you should obtain better information. However, in practice, assessing charities is poorly funded, there are controversies over which ones best support which goals, and getting better information on such topics is another way of spending money.

The bigger the chances of your information being inaccurate, the more it pays to hedge. Inaccurate estimates seem rather likely in the case of "risky" charities, where the benefit involves multiplying a hypothetical small probability by a hypothetical large payoff, and where it is challenging to measure efficacy.

Replies from: orthonormal
comment by orthonormal · 2011-01-04T21:21:40.084Z · LW(p) · GW(p)

I'm hitting the 'bozo button' for Tim in this conversation. The math has been explained to him several times over.

Replies from: timtyler
comment by timtyler · 2011-01-05T00:24:16.640Z · LW(p) · GW(p)

If you mean this, my comment would be that it proposes accounting for uncertainty by appropriately penalising the utilities associated with the charities you are unsure about. However, charities, especially bad charities, might well be trying to manipulate people's perceived confidence that they are sound, so those figures might be bad.

Replies from: Will_Sawin
comment by Will_Sawin · 2011-01-05T22:21:58.881Z · LW(p) · GW(p)

If perceived utility is negatively correlated (at the top end) with actual utility, as in your example, then your strategy is superior to putting it all in the perceived-best. However, if you expect this to be the case, then you should update your beliefs on perceived utility. If the figures might be bad, account for that in the figures!

If there is even a small correlation, putting it all in one is optimal.

comment by Nick_Tarleton · 2011-01-03T05:42:28.660Z · LW(p) · GW(p)

I don't see why SIAI should have an official position on this.

comment by lsparrish · 2010-12-31T22:29:02.110Z · LW(p) · GW(p)

Excellent hack!

If it turns out there is a good-deed-for-the-day effect in this context, perhaps one could use another set of jars for purchasing anti-fuzzies with selfish utility. For example, if you had one for payments into a cryonics annuity, you might then feel compelled to put equal or greater amounts into the charitable jars to balance it out.

Replies from: orthonormal
comment by orthonormal · 2010-12-31T23:03:57.102Z · LW(p) · GW(p)

Or better yet: books and good beer. The "selfish jar" needs to be something that pays out now or in the near term.

comment by bentarm · 2011-01-02T22:57:59.065Z · LW(p) · GW(p)

I've been vaguely thinking for a while that someone should make an electronic version of this (I think I might have seen it suggested somewhere else on LW as well?): a GiveWell iPhone app, so that when someone on the street asks you for money, you can immediately, there and then, donate it to a more worthy cause, while you're still feeling guilty about not giving money to the homeless person who was only going to spend it on alcohol anyway.

I doubt this would be too much work for someone with any experience, but I have no knowledge of writing iPhone apps and no Mac. Is there someone out there with the means and the will to actually do it?

comment by katydee · 2011-01-02T02:51:22.314Z · LW(p) · GW(p)

I like it-- this seems like a good approximation of the "instant donate-a-dollar button" that Marcello (I think) suggested as a potential iPhone app earlier. Does anyone know if progress has been made on such an app, by the way?

Replies from: LucasSloan
comment by LucasSloan · 2011-01-02T05:41:12.668Z · LW(p) · GW(p)

My father spent some time creating such an app, but I don't think it is useful for the purpose of diverting sudden altruistic impulses toward high-impact charities, largely because it requires you to manually input your credit card/PayPal info each time you make a donation, which is enough effort to reintroduce the original trivial inconvenience. If anyone knows how to fix that sort of problem, I could get the code for you.

Replies from: PhilGoetz, katydee
comment by PhilGoetz · 2011-01-04T02:17:16.364Z · LW(p) · GW(p)

I don't have to input my financial info when I use PayPal. I just log in with my username and password (which can be cached, for those who don't like even that inconvenience) and click on "Pay now".

comment by katydee · 2011-01-02T17:04:38.267Z · LW(p) · GW(p)

Yeah, I think the point was that the app would have some way to "preload" that information so you wouldn't have to reenter it every time. I'm not sure if that's viable from a coding perspective, but it seems like it would work to solve the main problem here.

comment by Marius · 2011-01-02T02:38:02.608Z · LW(p) · GW(p)

I think this is an excellent idea. Members of certain religious traditions do something similar by keeping literal charity boxes at home; this allows them to donate in the moment rather than let the moment pass. Yours obviously updates the idea: people keep much less of their money in physical form, so the change to chips may be very helpful.

I wanted to comment on "I do worry about doing my good deed for the day and having negative externalities flow from that, but I can't say I've seen it happening yet." I think this is always a real problem, and I have a few possible suggestions.

  1. When possible, try to donate later in the day rather than earlier. (But of course, you don't want to let the inspiration pass, so don't take this too far.)
  2. When you do donate earlier, you can then remind yourself of the early donation, and therefore of the need to find ways to be helpful to others. Alternatively, lsparrish and you talked about an anti-fuzzy jar for beer and the like. Perhaps the ties should go both ways: by rewarding yourself explicitly for your good behavior, you may come to see that good deed as "resolved" and avoid trying to compensate by mistreating others. I don't have data on that, but it would be interesting to look at (I may try this myself over the next few months).
  3. Make sure the chips only substitute for financial donations. If you let them substitute for helping people with your time and effort, there may be fewer positive consequences.
comment by JoshuaFox · 2011-01-02T11:08:31.470Z · LW(p) · GW(p)

Why not just make an automated monthly payment (through PayPal, your bank, or your credit card)? For tangibility (but more temptation to slip), write and snail-mail a check each month.

Replies from: wedrifid
comment by wedrifid · 2011-01-03T04:35:59.952Z · LW(p) · GW(p)

Why not just make an automated monthly payment (through PayPal, your bank, or your credit card)?

Because that is nearly the opposite of the kind of experience he was looking for.

Replies from: orthonormal
comment by orthonormal · 2011-01-03T23:38:17.451Z · LW(p) · GW(p)

What he said.