Optimal Philanthropy for Human Beings
post by lukeprog · 2011-07-25T07:27:20.053Z · LW · GW · Legacy · 86 comments
Summary: The psychology of charitable giving offers three pieces of advice to those who want to give to charity and those who want to receive it: Enjoy the happiness that giving brings, commit future income, and realize that requesting time increases the odds of getting money.
One Saturday morning in 2009, an unknown couple walked into a diner, ate their breakfast, and paid their tab. They also paid the tab for some strangers at another table.
For the next five hours, dozens of customers got in on the joy of giving and paid the favor forward.
This may sound like a movie, but it really happened.
But was it a fluke? Is the much-discussed link between happiness and charity real, or is it one of the 50 Great Myths of Popular Psychology invented to sell books that compete with The Secret?
Several studies suggest that giving does bring happiness. One study found that asking people to commit random acts of kindness can increase their happiness for weeks.1 And at the neurological level, giving money to charity activates the reward centers of the brain, the same ones activated by everything from cocaine to great art to an attractive face.2
Another study randomly assigned participants to spend money either on themselves or on others. As predicted, those who spent money helping others were happier at the end of the day.3
Other studies confirm that just as giving brings happiness, happiness brings giving. A 1972 study showed that people are more likely to help others if they have recently been put in a good mood by receiving a cookie or finding a dime left in a payphone.4 People are also more likely to help after they read something pleasant,5 or when they are made to feel competent at something.6
In fact, deriving happiness from giving may be a human universal.7 Data from 136 countries shows that spending money to help others is correlated with happiness.8
But correlation does not imply causation. To test for causation, researchers randomly assigned participants from two very different cultures (Canada and Uganda) to write about a time when they had spent money on themselves (personal spending) or on others (prosocial spending). Participants were asked to report their happiness levels before and after the writing exercise. As predicted, those who wrote (and thought) about a time when they had engaged in prosocial spending saw greater increases in happiness than those who wrote about a time when they spent money on themselves.
So does happiness run in a circular motion?
This, too, has been tested. In one study,9 researchers asked each subject to describe the last time they spent either $20 or $100 on themselves or on someone else. Next, researchers had each participant report their level of happiness, and then predict which future spending behavior ($5 or $20, on themselves or others) would make them happiest.
Subjects assigned to recall prosocial spending reported being happier than those assigned to recall personal spending. Moreover, this reported happiness predicted the future spending choice, but neither the purchase amount nor the purchasing target (oneself or others) did. So happiness and giving do seem to reinforce each other.
So, should charities remind people that donating will make them happy?
This, alas, has not been tested. But for now we might guess that just as people generally do things they believe will make them happier, they will probably give more if persuaded by the (ample) evidence that generosity brings happiness.
Lessons for optimal philanthropists: Read the studies showing that giving brings happiness. (Check the footnotes below.) Pick out an optimal charity in advance, notice when you're happy, and decide to give them money right then.
Lessons for optimal charities: Teach your donors how to be happy. Remind them that generosity begets happiness.
Precommitment
Ulysses did not get past the beautiful but dangerous Sirens with sheer willpower. Rather, he knew his weaknesses and precommitted to sailing past the Sirens: he had himself tied to his ship's mast.
We all know the power of precommitment. Though many gym memberships remain unused, people do spend more time at the gym if they purchase a gym membership than if they pay per visit.10 Can precommitment work for giving to charity, too?
Yes, it can. In one study, donors were asked to increase their monthly contributions either immediately or two months in the future. One year later, the increase in donations was 32% higher for the group asked to precommit, and donor cancellation rates were identical (and very low) in both groups.11
Does it matter whether a charitable person precommits to donate money they already have vs. money they don't have yet?
Apparently it does. In one experiment, participants were entered into a raffle, with a chance to win $25. Participants had to decide in advance whether to donate the money to United Way or receive it in cash. Nearly 40% of the participants opted to precommit the potential winnings to charity. In another experiment, researchers asked subjects to imagine they had just won the lottery. Then, some were asked to donate some of their 'winnings' immediately, while others were asked to donate their 'winnings' in two months. Surprisingly, those asked to donate current 'winnings' later actually gave less.12
This suggests that pledging to donate current earnings later may be less motivating than donating current earnings now, while pledging to donate future earnings later should work well. (Of course, money is fungible. The donated $100 might as well be from today's paycheck as from the next one. But charities should frame requests for precommitment in terms of future earnings, like Giving What We Can does.)
Precommitment seems to work best when it creates psychological distance between donors and their money.13 The United Way allows donors to give via paycheck donations; because donors never feel like they have that money, they never face the pain of parting with it.
The same principle may explain the success of affinity credit cards. Affinity cards allow consumers to precommit their reward points to benefit a chosen charity. Donors never experience the pain of parting with other things that reward points could otherwise purchase (flights, etc.). As an aspiring optimal philanthropist, I use an affinity card that gives 1%-10% cash back to the Singularity Institute (plus $50 per new card signup). As a lazy optimal philanthropist, I'm glad it took me only four minutes to sign up.
Lessons for optimal philanthropists: Precommit. Use paycheck deduction and affinity cards to give money. Pledge future earnings.
Lessons for optimal charities: Ask donors to precommit to donate future earnings. Offer an affinity card. Offer paycheck deduction donations if possible.
Time vs. Money
In one creative study, researchers asked subjects to read some information about a fictional non-profit, the "American Lung Cancer Association." Subjects were then told that this organization was holding a fundraising event. Half the subjects were asked how much time they would like to donate (a time-ask). The other subjects were not asked about volunteering their time. Next, both groups were asked how much money they would like to donate (a money-ask). Those who first got a time-ask gave more money when asked for money ($36.44 vs. $24.46). Asking donors for time resulted in them giving more money!
Researchers also conducted a field experiment by partnering with HopeLab, a Bay Area charity that aims to improve the quality of life for children with chronic illnesses. A researcher representing HopeLab visited college campuses and waited outside a classroom full of students. When the students emerged, the researcher asked them individually whether they were willing to take part in a 30-minute study in exchange for $10.
Those who agreed read an introduction to HopeLab. Then, a third of them were asked how much they would like to give time to HopeLab, another third were asked how much they would like to donate to HopeLab, and a control group was asked no questions. Finally, all groups were asked their impressions of HopeLab, along with 20 minutes of filler questions.
When exiting the study, participants encountered the researcher (representing HopeLab) next to a box labeled 'HopeLab Donations.' The researcher paid each participant with ten $1 bills and gave them a flyer with details about volunteering for HopeLab. Researchers tracked the amount donated and which participants volunteered during the next month.
Subjects in the time-ask-first condition were the most generous, donating $5.85 of their $10, compared to $4.42 for those in the no-ask condition and $3.07 for those in the money-ask-first condition. Subjects in the time-ask-first condition also volunteered the most (7% gave time, averaging 6.5 hours), compared to those in the money-ask-first condition and the no-ask condition (1.6% each).14
Why do we see this 'Time-Ask Effect'? Perhaps it is because thinking about spending time on something activates a mindset of emotional meaning and satisfaction, allowing a donor to connect emotionally with a charity, whereas thinking about spending money activates a purely instrumental mindset.15 Whatever the reason, asking for time before money may result in more of both.
Lessons for optimal philanthropists: Volunteer your time to an optimal charity. You may soon find yourself giving time and money.
Lessons for optimal charities: Ask supporters for time before you ask them for money.
Multiplying Your Impact
Optimal philanthropy is a new but obvious idea. Spreading the meme at this early stage is a fairly optimal act in itself.
Giving to optimal charities instead of average charities can multiply one person's impact 10, 100, or maybe 1000 times. Now multiply that change in impact by a hundred, thousand, or million people who have been persuaded by the simple math and equipped with the psychology of giving.16
That's a big impact.
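To make that multiplication concrete, here is a toy back-of-the-envelope sketch in Python. Every number in it is an illustrative assumption (the donor count, the effectiveness ratio), not an empirical estimate.

```python
# Toy model of the impact of spreading the optimal-philanthropy meme.
# All figures below are illustrative assumptions, not measured values.

baseline_good = 1.0         # good done by one average donor's giving (arbitrary units)
effectiveness_ratio = 100   # optimal vs. average charity (the 10x-1000x range above)
people_persuaded = 10_000   # donors persuaded to switch to optimal charities

# Each persuaded donor does (ratio - 1) extra units of good beyond their baseline.
extra_good = people_persuaded * baseline_good * (effectiveness_ratio - 1)
print(f"{extra_good:,.0f} extra donor-units of good")  # 990,000
```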
So, contact me at OptimalPhilanthropy@gmail.com and precommit some of your time to working with a network of people to spread the meme of optimal philanthropy. :)
Or if you haven't got time for email, sign up for an affinity card.
The world thanks you.
Notes
1 Lyubomirsky et al. (2004).
2 Harbaugh et al. (2007).
3 Dunn et al. (2008).
4 Isen & Levin (1972).
5 Aderman (1972).
6 Harris & Huang (1973); Kazdin & Bryan (1971).
7 On human universals, see Norenzayan & Heine (2005).
8 Aknin et al. (2010).
9 Anik et al. (2010).
10 Della Vigna & Malmendier (2006); Gourville & Soman (1998).
11 Breman (2006).
12 Meyvis et al. (2010).
13 Meyvis et al. (2010). See the work on construal level theory: Trope & Liberman (2003); Liberman et al. (2007).
14 Liu & Aaker (2008).
15 Liu (2010).
16 For overviews, see Oppenheimer & Olivola (2010); Andreoni (2006); Bekkers & Wiepking (2007); Small & Simonsohn (2008); Reed et al. (2007).
References
Aderman (1972). Elation, depression, and helping behavior. Journal of Personality and Social Psychology, 24: 91-101.
Aknin, Barrington-Leigh, Dunn, Helliwell, Biswas-Diener, Kemeza, Nyende, Ashton-James, & Norton (2010). Prosocial spending and well-being: cross-cultural evidence for a psychological universal? NBER Working Paper 16415. National Bureau of Economic Research.
Andreoni (2006). Philanthropy. In Kolm & Ythier (eds.), Handbook of the Economics of Giving, Altruism, and Reciprocity, Vol. 2 (pp. 1201-1269). North Holland.
Anik, Aknin, Norton, & Dunn (2010). Feeling good about giving: The benefits (and costs) of self-interested charitable behavior. In Oppenheimer & Olivola (eds.), The Science of Giving: Experimental Approaches to the Study of Charity (pp. 3-14). Psychology Press.
Armstrong, Carpenter, & Hojnacki (2006). Whose deaths matter? Mortality, advocacy, and attention to disease in the mass media. Journal of Health Politics, Policy and Law, 31: 729-772.
Bekkers & Wiepking (2007). Generosity and philanthropy: A literature review. Working paper.
Breman (2006). Give more tomorrow: A field experiment on intertemporal choice in charitable giving. Working paper, Stockholm School of Economics.
Della Vigna & Malmendier (2006). Paying not to go to the gym. American Economic Review, 96: 694–719.
Dunn, Aknin, & Norton (2008). Spending money on others promotes happiness. Science, 319: 1687-1688.
Eisensee & Stromberg (2007). News floods, news droughts, and U.S. disaster relief. Quarterly Journal of Economics, 122: 693-728.
Gourville & Soman (1998). Payment depreciation: The behavioral effects of temporally separating payments from consumption. Journal of Consumer Research, 25: 160-174.
Harbaugh, Mayr, & Burghart (2007). Neural responses to taxation and voluntary giving reveal motives for charitable donations. Science, 316: 1622-1625.
Harris & Huang (1973). Helping and the attribution process. Journal of Social Psychology, 90: 291-297.
Isen & Levin (1972). The effect of feeling good on helping: Cookies and kindness. Journal of Personality and Social Psychology, 21: 384-388.
Kazdin & Bryan (1971). Competence and volunteering. Journal of Experimental Social Psychology, 7: 87-97.
Liberman, Trope, & Stephan (2007). Psychological distance. In Kruglanski & Higgins (eds.), Social Psychology: Handbook of Basic Principles, 2nd edition. Guilford Press.
Liu & Aaker (2008). The happiness of giving: The time-ask effect. Journal of Consumer Research, 35: 543-547.
Liu (2010). The benefits of asking for time. In Oppenheimer & Olivola (eds.), The Science of Giving: Experimental Approaches to the Study of Charity (pp. 201-214). Psychology Press.
Lyubomirsky, Tkach, & Sheldon (2004). Pursuing sustained happiness through random acts of kindness and counting one's blessings: Tests of two six-week interventions. Unpublished data, Department of Psychology, University of California, Riverside.
Meyvis, Bennett, & Oppenheimer (2010). Precommitment to charity. In Oppenheimer & Olivola (eds.), The Science of Giving: Experimental Approaches to the Study of Charity (pp. 35-48). Psychology Press.
Norenzayan & Heine (2005). Psychological universals: What are they and how can we know? Psychological Bulletin, 131: 763-784.
Oppenheimer & Olivola, eds. (2010). The Science of Giving: Experimental Approaches to the Study of Charity. Psychology Press.
Reed, Aquino, & Levy (2007). Moral identity and judgments of charitable behaviors. Journal of Marketing, 71: 178-193.
Slovic (2007). 'If I look at the mass I will never act': Psychic numbing and genocide. Judgment and Decision Making, 2: 79-95.
Small & Simonsohn (2008). Friends of victims: Personal experience and prosocial behavior. Journal of Consumer Research, 35: 532-542.
Trope & Liberman (2003). Temporal construal. Psychological Review, 110: 403-421.
86 comments
comment by steven0461 · 2011-07-22T21:42:28.009Z · LW(p) · GW(p)
Good picture. Together, we can punch the sun!
I'd be hesitant to generalize from normal people's motivations for giving to those of optimal philanthropists.
Do you think advocating optimal philanthropy is likely to yield greater returns than more direct ways to reduce existential risk? I could see it going either way, and it's hard to figure out what calculations to do to find out.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-07-23T22:28:08.708Z · LW(p) · GW(p)
I decided to adopt "Together, we can punch the sun!" as a personal motto even before I scrolled back up and saw the relevant photo.
Now I just need to decide what it's a motto for.
↑ comment by lukeprog · 2011-07-22T21:47:31.589Z · LW(p) · GW(p)
I am also co-authoring a journal article and a popular pamphlet which make the case for x-risk reduction as the optimal philanthropic venture. :)
Three cheers for sun-punching.
↑ comment by timtyler · 2011-07-23T10:04:29.229Z · LW(p) · GW(p)
To give you something to argue against, consider the position that "saving the world" spreads because it acts as a superstimulus to do-gooders. There's no credible evidence that aiming at saving the world has any effect on the probability of the world ending. By contrast, "the end is nigh" placard syndrome is well known - and it diverts resources from other potentially-useful tasks.
↑ comment by Giles · 2011-08-29T04:16:45.858Z · LW(p) · GW(p)
X-risk reduction didn't really act as a superstimulus to me (I had to convince myself). To accept that x-risk reduction is a massive opportunity, I also needed to accept both that x-risk was a massive problem and that I was going to hold a non-mainstream worldview for the foreseeable future. So, there was more bad stuff to think about on this issue than good stuff - it was more ugh field than superstimulus.
That's just me though; n=1.
↑ comment by timtyler · 2011-08-29T07:28:33.103Z · LW(p) · GW(p)
Superstimuli do not have to be positive. Traditional religions spread by invoking eternal damnation. The End of Days groups spread their message by invoking eternal oblivion.
As for holding non-mainstream views, that too is typical cult phenomenon. Weird beliefs act as markers of group membership. They show which tribe you belong to, so the ingroup can identify you. Normally the more crazy and weird the beliefs, the harder the signal is to convincingly fake.
Without meaning to doubt your powers of introspection, people don't necessarily have to be aware of being influenced by superstimuli. Sometimes, if the stimulus becomes conscious, the effect is reduced. So, for example, lipstick can be overdone, and often works best at a subliminal level. In the case of the End of Days groups, the superstimulus is pretty obvious, but its effect on any particular individual may not be.
Anyway, you can look to the left and see large positive utility, to the right and see large negative utility - but then you have to draw your own conclusions about why you are seeing those things.
↑ comment by nazgulnarsil · 2011-07-25T07:54:56.893Z · LW(p) · GW(p)
An economist might say that when you punch something, you get less of it. I want more giant sources of negative entropy for my use :(
comment by Duk3 · 2011-07-26T03:55:48.893Z · LW(p) · GW(p)
I would like to see a thorough analysis of how someone raising funds can use the tricks from Cialdini's Influence to effectively contribute to charity. Even those without funds could use that sort of lesson to contribute meaningfully.
comment by [deleted] · 2011-07-25T06:46:57.369Z · LW(p) · GW(p)
Darn you, comment retraction mechanism.
↑ comment by jsalvatier · 2011-07-26T16:55:56.600Z · LW(p) · GW(p)
I think a Donor Advised Fund is what you're looking for. I recently set one up with Fidelity on Carl Shulman's advice.
comment by Scott Alexander (Yvain) · 2011-07-23T10:25:13.691Z · LW(p) · GW(p)
Why do we see this 'Time-Ask Effect'? Perhaps it is because thinking about spending time on something activates a mindset of emotional meaning and satisfaction, allowing a donor to connect emotionally with a charity, whereas thinking about spending money activates a purely instrumental mindset.15 Whatever the reason, asking for time before money may result in more of both.
I must be more cynical than you. I'd think that if people said "yes", they'd already committed themselves to the organization and so would go on to give money, and/or if they said no, they would feel unpleasantly non-altruistic and would give money to assuage their conscience. Did the studies show differences in the money-ask broken down by whether they said yes or no to the time-ask?
Also, is there any data on whether people feel happier only after donating to fuzzy charities like the local animal shelter, or whether they'll also feel happier donating to something very abstract like SIAI?
↑ comment by lukeprog · 2011-07-23T12:31:44.079Z · LW(p) · GW(p)
Did the studies show differences in the money-ask broken down by whether they said yes or no to the time-ask?
Yes. And the data suggesting the emotional hypothesis I gave are extensive and very detailed, but there's no way I can summarize them in a paragraph. The chapter on this in The Science of Giving is good.
comment by [deleted] · 2011-07-22T22:31:07.142Z · LW(p) · GW(p)
About giving making you happy: I don't understand the research. I looked at Dunn's paper, but don't get the claim. They report (p. 5) that for participants asked to spend $5 or $20 on others vs. themselves, the results are mean = .18 (SD = .62) vs. mean = -.19 (SD = .66), respectively. What's the scale they use or the real distribution? (I can't figure it out from the paper alone.) Isn't this a huge SD? I also looked at Amazon's preview of The Science of Giving, but it includes no numbers whatsoever.
I ask because Dunn and Anik report significant improvements even for small-scale charity, in the range of $5-50. I picked up similar advice from Wiseman's 59 Seconds, so I tried spending ~$35/month this year on charity (the largest amount I could afford as an undergrad). However, I noticed no persistent gains whatsoever. It did make me more loyal towards the projects I chose, but my happiness was unaffected. Similarly, random acts of kindness only make me happier while I do them, but not afterwards. So I'm interested in how much spread this research found, to see how likely it is that I'm just an outlier (or whether it generalizes to non-neurotypicals at all).
↑ comment by Unnamed · 2011-07-23T01:25:56.658Z · LW(p) · GW(p)
The Dunn article was published in Science, which means that most of the details are in the supplemental materials. Here's the relevant part:
Just before receiving their money, participants were asked to complete the Positive and Negative Affect Schedule (PANAS; S16), as well as reporting their happiness on the same single-item measure used in the previous studies. That evening, after spending their windfall, participants again completed the PANAS and a modified version of the single-item happiness measure (specifically, participants were asked to rate their overall happiness that day on a 5-point scale anchored with the words “not happy at all” to “extremely happy”). We standardized the 10 positive affect items of the PANAS and the single-item happiness measure to create reliable 11-item indices of happiness both pre-windfall (α = .81) and post-windfall (α = .87). A preliminary ANOVA on prewindfall happiness revealed no between-group differences, [F’s < 1], enabling the use of ANCOVA (with pre-windfall happiness as a covariate) in the main analyses of postwindfall happiness.
So happiness was measured with 11 items, 1 directly asking about happiness and 10 asking about positive emotions. Each item was rescaled so that the average of all subjects on that item was 0 and the standard deviation was 1. Then the 11 items were averaged together. Those who were instructed to give to others scored .37 points higher on that composite happiness measure at the end of the day than the spend on self group, controlling for scores on that composite happiness measure at the beginning of the day. Since the SD of that composite measure was about .64, that means that they were about .6 SD's happier, which is generally considered a "medium" effect size.
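For anyone who wants to see the arithmetic, here is a minimal sketch of that standardize-and-average procedure and the resulting effect size, in Python with simulated data. The data below are invented for illustration; the study's actual figures (a .37 difference on a composite with SD about .64) appear only in the final comment.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated end-of-day responses on 11 happiness items for two groups of 20.
# These are NOT the study's data; they just illustrate the procedure.
give_group = rng.normal(3.2, 1.0, size=(20, 11))  # spent windfall on others
self_group = rng.normal(2.8, 1.0, size=(20, 11))  # spent windfall on themselves

# Standardize each item across all subjects (mean 0, SD 1), as the paper does,
# then average the 11 standardized items into one composite score per subject.
all_items = np.vstack([give_group, self_group])
z = (all_items - all_items.mean(axis=0)) / all_items.std(axis=0)
composite = z.mean(axis=1)

# Effect size: group difference on the composite, divided by the composite's SD.
diff = composite[:20].mean() - composite[20:].mean()
d = diff / composite.std()
print(f"difference = {diff:.2f}, effect size = {d:.2f} SDs")
# In the study: difference ~ .37 with composite SD ~ .64, i.e. ~.6 SDs ("medium").
```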
↑ comment by taryneast · 2011-07-26T13:35:13.906Z · LW(p) · GW(p)
Similarly, random acts of kindness only make me happier while I do them, but not afterwards.
I'd count myself as non-neurotypical (and just one data point, of course), but... I agree that RAOK are short-lived - but that short-lived time is fun enough to keep me doing it every so often. I think it also helps to see that kind of thing as a sort of game. Making it fun makes me happy when I do it more frequently (though admittedly not very often).
As to giving to charities: I don't have a regular charity donation because that is boring. I do, however, randomly give a year's worth of donations to charities that strike my fancy (Sea Shepherds, SIAI, Methuselah Foundation and more).
Perhaps non-optimal from their perspective, but it increases my happiness. Perhaps I'm aiming more at warm-fuzzies than utilons, but it works for me.
One of the other commenters speaks of how to combine this effectively - i.e., setting aside cash monthly in a "charity account" and then being able to donate from it at will - which sounds like a good strategy for keeping it more fun, while still maintaining your pre-committed optimal give-rate.
As to giving in general: I realised a couple of years back that I was a bit of a tight-fist... the kind of person who never bought a round of drinks - and I have been actively working to change that behaviour pattern (e.g. by shouting lunch for my friends every so often, or buying a plate of chips at a meetup). Even though I haven't changed very much yet, I have actually noticed a marked increase in happiness - albeit fleeting... there's a nice warm-fuzzy you get from spreading largesse.
...but the long-term effects are that I can now consider myself not to be so tight-fisted. My definition of myself is changing to one for which I have far more respect. That alone is worth the effort (for me).
comment by Rain · 2011-07-27T18:34:01.047Z · LW(p) · GW(p)
What is the optimal time to donate to a charity?
As soon as funds become available? At set time intervals? After saving, accumulating interest, and waiting for a potentially larger than normal impact period?
↑ comment by [deleted] · 2011-07-28T22:07:03.327Z · LW(p) · GW(p)
To answer this, you would need optimal stopping time models of the utility obtained from giving vs. the utility obtained from other ventures with the money.
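A minimal sketch of the simplest such comparison, in Python. The constant-decay assumption for a charity's marginal effectiveness is invented for illustration; a serious model would add uncertainty, discounting, and actual stopping-time machinery.

```python
def better_to_wait(amount, market_return, effectiveness_decay, years):
    """Compare good done by donating now vs. investing first and donating later.

    Assumes the charity's marginal cost-effectiveness decays at a constant
    rate as its best opportunities get funded - a stylized assumption.
    """
    good_now = amount  # normalize today's effectiveness to 1 unit per dollar
    good_later = (amount * (1 + market_return) ** years      # grown donation...
                  * (1 - effectiveness_decay) ** years)      # ...worth less per dollar
    return good_later > good_now

# With 5% returns but 7%/year decay in marginal effectiveness, give now.
print(better_to_wait(1000, 0.05, 0.07, years=10))  # False -> donate now
```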
comment by MatthewBaker · 2011-07-24T09:17:33.747Z · LW(p) · GW(p)
This post made my day, which was already one of my best days of the summer, EVEN BETTER. Thank you. EDIT: I hope I get approved though, I only earn ~$10,000 a year and live at home/dorm.
comment by Kaj_Sotala · 2011-07-23T06:34:53.534Z · LW(p) · GW(p)
So, should charities remind people that donating will make them happy?
I worry that this might actually have the opposite effect.
There was a time when I helped people due to an explicit goal to feel good about myself, and less because I cared about them. Over time, it changed to me helping them because I wanted to. But before it did, I actually got very little satisfaction out of helping them, because I knew I was just doing it for selfish reasons. I remember complaining about this to someone.
comment by Eugine_Nier · 2011-07-24T05:15:37.839Z · LW(p) · GW(p)
I have seen a similar idea, presented in a much more cynical form: essentially, that people give to charity because it makes them feel less bad about doing bad things during the rest of their day.
↑ comment by MatthewBaker · 2011-07-24T09:28:35.425Z · LW(p) · GW(p)
Cynicism has its place alongside cautious optimism.
comment by Peter Wildeford (peter_hurford) · 2011-07-23T19:43:50.204Z · LW(p) · GW(p)
While donating may make people happier, is anything known about the donating habits of people who exhibit above-average happiness (controlling, of course, for how difficult it is to measure happiness)?
It would be interesting to see if it goes both ways.
comment by Alexei · 2011-07-22T22:46:09.028Z · LW(p) · GW(p)
I just signed up for the SI credit card. My current one gave me no benefits, so this was a no-brainer move, and it took less than 10 minutes.
↑ comment by Rain · 2011-07-22T22:51:50.997Z · LW(p) · GW(p)
It's also a way to visibly affiliate with the SingInst brand.
I wonder if they receive any money from the SingInst shirts and stuff on Zazzle.
I'm also curious whether, seeing the SingInst logo every time you pull out the card to pay for something, you'll get a twinge of guilt at realizing the potential utility tradeoffs.
comment by dugancm · 2011-07-23T02:21:09.165Z · LW(p) · GW(p)
So where can I find anecdotes about how awesome and fun it is to be saving the world through FAI research and how rewarding it is to see your work have a direct impact, so I have something vicariously available to imagine when you ask me to donate my time?
↑ comment by Armok_GoB · 2011-07-23T12:57:00.876Z · LW(p) · GW(p)
Is there any reason anecdotes you can just make up yourself would be less effective?
↑ comment by dugancm · 2011-07-23T17:56:44.751Z · LW(p) · GW(p)
I don't know what donating my time to SI would entail other than writing, so I find it difficult to imagine in a positive frame. I may be able to get around this by training myself on the five-second level to instead mentally contrast a charity's desired future outcomes with the present (or your favorite charity's desired future outcomes, when tempted to switch) when asked, but how many others in my position will do so?
↑ comment by Rain · 2011-07-23T02:28:52.133Z · LW(p) · GW(p)
Looks like there are a lot at the Rationality Boot Camp Blog.
comment by Sniffnoy · 2011-07-23T02:05:59.934Z · LW(p) · GW(p)
I think I would consider a charity saying "Give, it'll make you happy!" to be really suspicious...
↑ comment by DSimon · 2011-07-23T05:59:33.527Z · LW(p) · GW(p)
There are more diplomatic ways of putting it. For example: "Help those in need, it'll make you happy, studies X, Y, and Z prove it. You can become happier by donating to a worthy charity, and we humbly suggest our own..."
comment by taw · 2011-07-23T21:16:04.611Z · LW(p) · GW(p)
And why would giving away money to charities be a good idea? The returns on investment of almost all of them are extremely close to zero, and most people are horrible at identifying the exceptions.
For every effective cause - let's say polio eradication, Wikileaks, or whatever GiveWell considers good - there are thousands of charities that essentially waste your money. Especially SIAI.
You're trying to use methods of rationality to come up with the best way to appeal to emotions.
↑ comment by Hul-Gil · 2011-07-25T08:38:52.109Z · LW(p) · GW(p)
Especially SIAI.
I don't know if this a taboo subject or what, but I'm curious. What makes you include SIAI in this category? (If you'd rather not discuss it on LessWrong, you can e-mail me at mainline dot express at gmail.)
↑ comment by taw · 2011-07-25T23:02:28.274Z · LW(p) · GW(p)
Donating to SIAI is pure display of tribal affiliation, and such displays are a zero-sum game. They have nothing to show for it, and there's not even any real reason to think this reduces rather than increases existential risk.
If you really care about reducing existential risk, seed vaults and asteroid tracking are two obvious programs that both definitely work at decreasing the risk, and don't cost much.
↑ comment by [deleted] · 2011-07-26T11:47:08.911Z · LW(p) · GW(p)
Just weighing in here:
SIAI is an organization built around a particular set of theories about AI -- theories not all AI researchers share. If SIAI's theories are right, they are the most important organization in the world. If they're wrong, they're unimportant.
The field of AI has been littered with (metaphorical) corpses since the 1960's. If an AI researcher tells you any theory, you have a very, very strong prior for believing it is false -- especially if it concerns "general" intelligence or "human-level" intelligence. So, Eliezer is probably wrong just like everyone else. That's not a particular criticism of him; it still puts him in august company.
So my particular position is that I'm not giving to SIAI until I'm worth enough financially that I can ask a few hours of Eliezer's time, and get a better idea of whether the theories are correct.
What I don't like is the suggestion I get from your posts that somehow SIAI is the work of self-deluded charlatans. I know what charlatanism sounds like -- I've had dear friends get halo effects around their pet ideas. I know what it sounds like when someone is just trying to get me to support the team and is playing fast and loose with the facts. And at least some of the SIAI people don't do that at ALL. You have to admire the honesty, even if you're skeptical (as I am) that research can succeed in such isolation from mainstream science. Eliezer is a good person. This is an honest and thoughtful attempt to do what he says he wants to do -- I am very, very confident of that.
Offer these people the respect (or charity, if you will) of judging their ideas on the merits -- or, if you don't have time to look into the ideas, mark that as ignorance on your part. You seem to be saying "They must be wrong because they're weird." The thing is, they're working in a field where even the experts are a little weird, and where even the mainstream academics have been wrong about a lot. You've got to revise your "Don't believe weirdos" prediction down a little bit. The more I learn about the world, the more I realize that the non-weirdos don't have it all sewn up.
↑ comment by Vaniver · 2011-07-26T15:05:51.344Z · LW(p) · GW(p)
So my particular position is that I'm not giving to SIAI until I'm worth enough financially that I can ask a few hours of Eliezer's time, and get a better idea of whether the theories are correct.
I don't think this matches up with your rejection. Even if you were an expert in the fields Eliezer is working in, it sounds like that wouldn't give you the ability to give any of his ideas a positive seal of approval, since many people have worked on ideas for a long time without seeing what was wrong with them. It also seems like a few hours to hash out disagreements is a very low estimate. How long do you think Eliezer and Robin Hanson have spent debating their theories, while coming no closer to resolution?
The scenario you paint - that you get rich enough for Eliezer to wager a few hours of his time on reassuring you - does not sound like one designed to determine the correctness of the theories rather than to give you as much emotional satisfaction as possible.
I should make clear that I do not mean to condemn, but rather to provoke introspection; it is not clear to me that there is a reason to support SIAI or other charities beyond emotional satisfaction, and so it may be wise to pursue opportunities like this without being explicit that that is the compensation you expect from charities.
↑ comment by Dr_Manhattan · 2011-07-26T14:45:43.004Z · LW(p) · GW(p)
SIAI is an organization built around a particular set of theories about AI -- theories not all AI researchers share. If SIAI's theories are right, they are the most important organization in the world. If they're wrong, they're unimportant.
So my particular position is that I'm not giving to SIAI until I'm worth enough financially that I can ask a few hours of Eliezer's time, and get a better idea of whether the theories are correct.
There are really three separate things SIAI is working on in the AI area: one is decision theory suitable for controlling a self-modifying intelligent agent in a way that preserves the original goals. Another is deciding what those goals are (CEV). The third is actually implementing the agent design. They have published papers on the first two (CEV and decision theory), and you do not need Eliezer's time to evaluate the results; to me they seem very valuable, even if they are not ultimate solutions to the problem. Their AGI research, if any, remains unpublished (I believe on purpose).
Whether (or more likely, how much) these two successes contribute to x-risk reduction largely depends on the context, which is the possibility of imminent development of AGI. Perhaps Eliezer can be helpful here, though I'd prefer to get this data independently.
ETA. Personally I've given some money to SI, but it's largely based on previous successes and not on a clear agenda of future direction. I'm ok with this, but it's possibly sub-optimal for getting others to contribute (or getting me to contribute more).
↑ comment by multifoliaterose · 2011-07-26T14:17:56.652Z · LW(p) · GW(p)
SIAI is an organization built around a particular set of theories about AI -- theories not all AI researchers share. If SIAI's theories are right, they are the most important organization in the world. If they're wrong, they're unimportant.
This strikes me as a false dichotomy. It seems unlikely that the theories are all right or all wrong. Also, most important in the world vs. unimportant by what metric? They could be wrong about some crucial things and be unlikely to come around to more accurate views, but carry high utilitarian expected value on the possibility that they do.
I agree that taw has been unfairly critical of SIAI and that SIAI people may well be closer to the mark than mainstream AGI theorists (in fact I think this more likely than not).
↑ comment by [deleted] · 2011-07-26T16:09:07.086Z · LW(p) · GW(p)
The main claim that needs to be evaluated is "AI is an existential risk," and the various hypotheses that would imply that it is.
If the kind of AI that poses existential risk is vanishingly unlikely to be invented (which is what I tend to believe, but I'm not super-confident) then SIAI is working to no real purpose, and has about the same usefulness as a basic research organization that isn't making much progress. Pretty low priority.
↑ comment by komponisto · 2011-08-10T14:40:59.865Z · LW(p) · GW(p)
Are you considering other effects SIAI might have, besides those directly related to its primary purpose?
In my opinion, Eliezer's rationality outreach efforts alone are enough to justify its existence. (And I'm not sure they would be as effective without the motivation of this "secret agenda".)
↑ comment by wedrifid · 2011-07-25T23:09:55.279Z · LW(p) · GW(p)
Donating to SIAI is pure display of tribal affiliation
That just isn't true. It is partially a display of tribal affiliation.
They have nothing to show for it, and there's not even any real reason to think this reduces rather than increases existential risk.
Even if the SIAI outright increased existential risk that would not mean donations were purely displays of affiliation. It would mean that all those who donated partially for practical instrumental reasons were mistaken and making a poor choice. It would not make their act any more purely an affiliation symbol.
If I was to donate (more) to the SIAI it would be a mix of:
- Tribal affiliation.
- Reciprocation. (They gave me a free bootcamp and airplane ticket.)
- Actually not having a better idea of a way to not die.
↑ comment by taw · 2011-07-25T23:11:34.191Z · LW(p) · GW(p)
And the evidence that donating to SIAI does anything other than signal affiliation is...?
EDIT: Downvoting this post sort of confirms my point that it's all about signaling tribal affiliations.
↑ comment by wedrifid · 2011-07-26T08:24:43.669Z · LW(p) · GW(p)
EDIT: Downvoting this post sort of confirms my point that it's all about signaling tribal affiliations.
If people downvoting you is evidence that you are right then would people upvoting you have been evidence that you were wrong? Or does this kind of 'confirmation' not get conserved the way that evidence does?
↑ comment by wedrifid · 2011-07-26T01:55:08.929Z · LW(p) · GW(p)
And the evidence that donating to SIAI does anything other than signal affiliation is...?
... not required to refute your claim. It's a goal post shift. In fact I explicitly allowed for the SIAI being utterly useless or worse than useless in the comment to which you replied. The claim I rejected is this:
Donating to SIAI is pure display of tribal affiliation
For that to be true it would require that there is nobody who believes that the SIAI does something useful and whose donating behaviour is best modelled as at least somewhat influenced by the desire to achieve the overt goal.
You also require that there are no other causal influences behind the decision including forms of signalling other than tribal affiliation. I have already mentioned "reciprocation" as a non "tribal affiliation" motivating influence. Even if I decided that the SIAI were completely unworthy of my affiliation I would find it difficult to suppress the instinct to pay back at least some of what they gave me.
The SIAI has received anonymous donations. (The relevance should be obvious.)
↑ comment by taw · 2011-07-26T05:36:03.229Z · LW(p) · GW(p)
Beliefs based on little evidence that people outside the tribe find extremely weird are one of the main forms of signaling tribal affiliation. Taking the Jesus story seriously is how people signal belonging to one of the Christian tribes, and taking the unfriendly AI story seriously is how people signal belonging to the lesswrong tribe.
No goal posts are being shifted here. Donating to SIAI because one believes lesswrong tribal stories is signaling that you have these tribal-marker beliefs, and it still counts as pure 100% tribal-affiliation signaling.
My reference here would be a fund to build the world's largest Jesus statue. There seems to be a largest-Jesus contest ongoing; the record was broken twice in just a year, in Poland and then in Peru, and now some Croatian group is trying to outdo them both. People who donate to these efforts might honestly believe this is a good idea. The details of why they believe so are highly complex, but this is a tribal-marker belief and nothing more.
Virtually nobody who's not a local Catholic considers it such, just like virtually nobody who doesn't share the "lesswrongian meme complex" considers what SIAI is doing a particularly good idea. I'm sure these funds got plenty of anonymous donations from local Catholics, and maybe some small amount of money from off-tribal people (e.g. "screw religion, but a huge Jesus will be great for tourism here" / "friendly AI is almost certainly bullshit, but weirdos are worth funding by Pascal's wager"), but this doesn't really change anything.
tl;dr Actions signaling beliefs that correlate with tribal affiliation are actions signaling tribal affiliation, regardless of how conscious this is.
↑ comment by CuSithBell · 2011-07-26T06:05:34.864Z · LW(p) · GW(p)
tl;dr Actions signaling beliefs that correlate with tribal affiliation are actions [solely for] signaling tribal affiliation, regardless of how conscious this is.
(Edit based on context)
This statement is either false or useless.
↑ comment by handoflixue · 2011-07-25T23:25:12.409Z · LW(p) · GW(p)
They've published papers. Presumably if we didn't donate anything, they couldn't publish papers. They also hand out paychecks to Eliezer. Eliezer is a tribal leader, so we want him to succeed! Between those two, we have proof that they're doing more than just signalling affiliation.
The far better question is whether they're doing something useful with that money, and whether it would be better spent elsewhere. That, I do not feel qualified to answer. I think even GiveWell gave up on that one.
↑ comment by taryneast · 2011-07-26T13:41:44.825Z · LW(p) · GW(p)
seed vaults and asteroid tracking
Sounds interesting. Do you have links for charities of this sort that you recommend?
↑ comment by mstevens · 2011-07-26T14:29:36.833Z · LW(p) · GW(p)
I'm a big fan of the very loosely related http://longnow.org/ although their major direct project is building a very nice clock.
They definitely try to promote the kind of thinking that will result in things like seed vaults, though.
(I'm a member)
My personal estimate is that better environmental and energy policies would reduce existential risk, but I haven't seen any appealing organisations in this area.
↑ comment by MatthewBaker · 2011-07-24T09:27:48.334Z · LW(p) · GW(p)
Um... The return on SIAI so far is well worth it for me :). Can you give me specific examples of how you consider SIAI to waste money? Spreading knowledge of cryonics alone is worth it from an altruistic standpoint and FAI theory development from a selfish one.
↑ comment by taw · 2011-07-25T23:09:29.013Z · LW(p) · GW(p)
So it's just an awfully convenient coincidence that the best charity for displaying tribal affiliations to the lesswrong crowd and the best charity for saving the world just happen to be the same one? What a one in a billion chance! The outside view says they're nothing like that, and they have zero to show for it as a counterargument.
If you absolutely positively have to spend money on existential risk (not that I'm claiming this is a good idea, but if you have to), asteroids are known to cause mass extinctions, with a chance of roughly 1:50,000,000 per year. That's 1:500,000 per century - not really negligible. And you can make some real difference by supporting asteroid tracking programs.
↑ comment by bgaesop · 2011-07-26T00:59:00.611Z · LW(p) · GW(p)
So it's just an awfully convenient coincidence that the best charity for displaying tribal affiliations to the lesswrong crowd and the best charity for saving the world just happen to be the same one? What a one in a billion chance!
No, that's not it at all. If, as people here like to believe (which may or may not be true), LWers are very rational and good at picking out things with very high expected value to start or donate to, then it makes sense that one of them (Eliezer) would create an organization whose existence has very high expected value (SIAI) and that the rest of the people here would donate to it. If that is the case - that SIAI is the best charity to donate to in terms of expected value (which it may or may not be) - then it would also be the best charity to donate to in order to display tribal affiliations (which it definitely is). So if you accept that people on LW are more rational than average, their donating so much to SIAI should be taken as weak evidence that SIAI is a really good charity to donate to.
you can make some real difference by supporting asteroid tracking programs.
I was under the impression that those already had sufficient resources? Could you link to some more information on this subject, please? I agree that asteroids are a more obviously important issue than the Singularity.
↑ comment by taw · 2011-07-26T01:48:40.113Z · LW(p) · GW(p)
If, as people here like to believe (which may or may not be true), LWers are very rational and good at picking out things with very high expected value to start or donate to [...]
I didn't downvote you, but what you're saying is essentially "if you accept our tribe is the most awesome and smartest, then it makes sense to donate to our tribal charity". Which is something every single group would say, in slight variation.
I was under the impression that those already had sufficient resources? Could you link to some more information on this subject, please? I agree that asteroids are a more obviously important issue than the Singularity.
Here's a results chart for various asteroid tracking efforts. The Catalina Sky Survey seems to be doing most of the work these days, and you can probably donate to the University of Arizona and have that money go to the CSS somehow. I'm not really following this too closely; I'm mostly glad that some people are doing something here.
↑ comment by bgaesop · 2011-07-26T08:00:04.380Z · LW(p) · GW(p)
I didn't downvote you,
Thanks! I upvoted you.
but what you're saying is essentially "if you accept our tribe is the most awesome and smartest, then it makes sense to donate to our tribal charity". Which is something every single group would say, in slight variation.
Well yeah; that's why you should examine the evidence and not just do what everyone else does. So let's look at the beliefs of all the Singularitarians on LW as evidence. What would we expect to see if LW is just an arbitrary tribe that picked a random cause to glom around? I suspect we would see that not many people in the world, and particularly not high-status people and organizations, would pay attention to the Singularity. I predict that everyone on LW would donate money to SIAI and shun people who don't donate or belittle SIAI.
Now what would we see if LW is in fact a group of high-quality rationalists and the world, in general, is too blinded by various biases to think rationally about low-probability, high-impact events? Well, most people, including high-status people (but perhaps not some academics) wouldn't talk about it. People on LW would donate money to SIAI because they did the calculation and decided it was the highest expected value. And they would probably shun the people who disagree, because they're still humans.
Those two situations look awfully similar to me. My point is, I certainly don't think that you can use LW's enthusiasm about SIAI compared to the general public as a strike against LW or SIAI.
Here's results chart for various asteroid tracking efforts. Catalina Sky Survey seems to be doing most of the work these days, and you can probably donate to University of Arizona and have that money go to CSS somehow. I'm not really following this too closely, I'm mostly glad that some people are doing something here.
I'm not finding anything there indicating that they're hurting for funding, but perhaps I'm missing it.
↑ comment by MatthewBaker · 2011-07-26T02:33:39.041Z · LW(p) · GW(p)
I honestly believe that the Singularity is a greater threat than asteroids to the human race. Either an asteroid will be small enough that we can destroy it, or it's too big to stop. Once an asteroid is big enough to pose a risk to humanity, it's also a lot easier to find and destroy. However, a positive singularity isn't valued enough and a negative singularity isn't feared enough among humanity, unlike asteroid deflection efforts, and that's why I focus on SIAI.
↑ comment by taw · 2011-07-26T05:40:29.141Z · LW(p) · GW(p)
You actually need to detect these asteroids decades in advance for our current technology to stand any chance, and we currently don't do that. More detection efforts mean tracking smaller asteroids than otherwise, but more importantly tracking big asteroids faster.
An arbitrarily massive asteroid can be moved off course very easily given enough time. That's the plan - not "destroying" it.
↑ comment by MatthewBaker · 2011-07-26T05:50:56.467Z · LW(p) · GW(p)
Still, there's a very low chance of a large asteroid strike, and the most quoted figure I've heard is that more than 75% of NEO objects of dangerous size are being tracked. I think a negative singularity is more likely to happen in the next 200 years than an asteroid strike. However, it is a good point that donating money to NEO tracking could be a good charitable donation as well; I just don't think it's on the same order of magnitude as the danger of a uFAI.
↑ comment by taw · 2011-07-27T05:15:54.077Z · LW(p) · GW(p)
With asteroid strikes, everybody agrees on the risk to within an order of magnitude or two. We have a lot of historical data about asteroid strikes of various sizes, can use a power-law distribution to smooth it a bit, etc.
With UFAI, people's estimates are about as divergent as with the Second Coming of Jesus Christ, ranging from impossible even in theory, through essentially impossible, all the way to almost certain.
↑ comment by nazgulnarsil · 2011-07-28T21:25:06.654Z · LW(p) · GW(p)
Money spent on mind uploading is a better defense against asteroids than asteroid detection. At least for me.
↑ comment by rwallace · 2011-07-26T14:39:38.181Z · LW(p) · GW(p)
In particular, for donating to a given charity to be a good idea, two conditions have to hold:
The sign of the expected utility has to be positive rather than negative.
The magnitude has to be greater than the expected utility of purchasing goods and services in the usual way (which generates benefit not only to you, but to your trading partners, to their trading partners etc.)
It is only moderately unlikely for either condition alone to be true, but it is very unlikely for both conditions to be true simultaneously.
↑ comment by shokwave · 2011-07-25T06:59:33.284Z · LW(p) · GW(p)
And why would giving away money to charities be a good idea?
The studies in the main post suggest that it brings more happiness than spending it on yourself, for small amounts relative to the amount you currently spend on yourself. Bringing happiness is what makes it a pretty good idea.
comment by taryneast · 2011-07-26T13:55:29.961Z · LW(p) · GW(p)
Why do we see this 'Time-Ask Effect'?
I think there is at least one possibility that I haven't yet seen mentioned here.
If I personally were given the choice between a charity asking for time and a charity asking for money, I would consider the one asking for time more legitimate than the money-only charity.
So many people ask for your money these days, and you basically don't know where it's all going, or whether it's effective. But if a charity is even set up such that it can take time-donations (even if I don't actually do it myself), then it feels more legitimate - i.e., not just another money-sink.
Some charities, on the other hand, make it extremely difficult to give your time even if you want to. I once got a donation-request from a Greenpeace spruiker when I was still young and idealistic... and wanted to actually help (I had very little money at the time)... and they simply didn't know any way that I could actually take part (they themselves were a paid employee). That totally ruined Greenpeace's reputation for me.
TL;DR: I think the difference between the time-request and the money-request is that the time-request signals the charity's legitimacy - which gives you more confidence that they are likely to do something useful with your donation, which in turn makes it more likely that you will actually donate (either time or money).
comment by Vaniver · 2011-07-24T23:15:40.472Z · LW(p) · GW(p)
The "Multiply your Impact" section seems like it was just slipped in. Optimal for what? "Human beings" is not specific. Is my charity budget better spent supporting institutions whose work I approve of, extending lifespans in Africa, or giving me a reputation as a generous spender among my friends? If the main reason to give is because it will make me happier, then isn't optimal philanthropy what makes me happiest, not what does the most good?
↑ comment by taryneast · 2011-07-26T13:40:00.144Z · LW(p) · GW(p)
The trick is to find charities that align both of these goals. :)
↑ comment by Vaniver · 2011-07-26T14:49:51.502Z · LW(p) · GW(p)
The least convenient possible world is one that involves tradeoffs between desires.
comment by [deleted] · 2011-07-27T16:25:20.722Z · LW(p) · GW(p)
Let's say you had two choices about how to view the world:
- Giving to charity is a wonderful thing to do. Giving makes you happy and not giving makes you feel sad.
- Giving to charity is a stupid thing to do. Giving makes you feel like a rube who is getting conned and not giving makes you happy for being smart enough to avoid it.
And let's also assume that you value money. Which way of viewing the world is better? Well, I think it's obvious that 2 is better, because you get to feel good about yourself and keep your money. Shouldn't LWers therefore try as best they can to achieve this viewpoint? Isn't that a better way to go through life? It's certainly not impossible. I've done it. You can too!
↑ comment by CuSithBell · 2011-07-27T17:08:37.785Z · LW(p) · GW(p)
Removing a preference can interfere with satisfying that preference.