Public Choice and the Altruist's Burden
post by Roko · 2010-07-22T21:34:52.740Z · LW · GW · Legacy · 101 comments
The reason that we live in good times is that markets give people a selfish incentive to perform actions that maximize total utility across all humans in the relevant economy: namely, they get paid for their efforts. Without this incentive, people would gravitate toward actions that maximize their own individual utility, settling into local optima that are not globally optimal. Capitalism makes us all into efficient little utilitarians, and we all benefit enormously from that.
The problem with charity, and especially with efficient charity, is that the incentives to contribute to it are all messed up: there is nothing analogous to the financial system that channels rewards for efficient production of utility back to the producer. One effect of giving away lots of your money and effort to seriously efficient charity is that you create the mirror image of the special-interests public choice problem in politics: you harm a concentrated interest (friends, potential partners, children) in order to reward a diffuse interest (helping each of billions of people by a tiny amount).
The concentrated interest then retaliates, because by standard public choice theory it has an incentive to do so, while the diffuse interest simply ignores you. Concretely, your friends think that you're weird, and potential partners may, in the interest of their own future children, refrain from involvement with you. People in general may perceive you as lower status, both because giving a lot of money away reduces your ability to signal status via conspicuous consumption, and because of the weirdness associated with the most efficient charities.
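To make the asymmetry concrete, here is a toy calculation in Python; every figure in it (the donation size, the circle of five, the two billion beneficiaries) is invented purely for illustration:

```python
# Toy arithmetic behind the concentrated-vs-diffuse asymmetry.
# Every number here is invented purely for illustration.

donation = 10_000.0  # dollars diverted from your social circle to efficient charity

inner_circle = 5                      # the concentrated interest
cost_per_insider = donation / inner_circle

beneficiaries = 2_000_000_000         # the diffuse interest
benefit_per_beneficiary = donation / beneficiaries

print(f"stake per insider:     ${cost_per_insider:,.2f}")        # $2,000.00
print(f"stake per beneficiary: ${benefit_per_beneficiary:.8f}")  # $0.00000500

# A $2,000 stake is worth retaliating over; half a thousandth of a cent
# is not worth so much as a thank-you note. So the insiders punish you
# and the beneficiaries ignore you.
```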
Anyone involved in futurism, singularitarianism, etc. has probably been on the sharp end of this public choice problem. Presumably, anyone in the West who donated a socially optimal amount of money to charity (i.e. almost everything) would also be on the sharp end, though I know of no case of someone donating 99.5% of their disposable income to any charity, so we have no examples. This is the Altruist's Burden.
Evidence
Do people around you really punish you for being an altruist? This claim requires some justification.
First off, I have personal experience in this area. Not me, but someone vitally important in the existential risks movement has been put under pressure by ver partner to spend less time on existential risk reduction so that the relationship would benefit. Of course, I cannot give details; please don't ask for them or try to guess. I personally have suffered, as have many others: low-level punishment from my family and a worsening of those relationships, social pressure from friends, and being perceived as weird. I have also become more weird: spending one's time optimally for social status and personal growth is not at all like spending one's time so as to reduce existential risks. Furthermore, thinking that the world is in grave danger, but that only you and a select group of people understand this, makes you feel like you are in a cult because of the huge cognitive dissonance it induces.
In terms of peer-reviewed research, it has been shown that status correlates with happiness via relative income. It has also been shown that (in men) romantic priming increases spending on "conspicuous luxuries but not on basic necessities", and that it "did induce more helpfulness in contexts in which they could display heroism or dominance". In women, "mating goals boosted public—but not private—helping". This means that neither gender would be using its time optimally by contributing to a cause that is not widely seen as worthy, and that men especially may be hurting their own prospects by spending a significant fraction of income on charity of any kind, unless it somehow signals heroism (and therefore bravery) and dominance.
The usual reference on purchase of moral satisfaction and scope insensitivity is this article by Eliezer, though there are many articles on it.
The studies on status and romantic priming constitute evidence (only a small amount each) that the concentrated interest -- the people around you -- do punish you. In theoretical terms, it should be the default hypothesis: either your effort goes to the many or it goes to the few around you. If you give less to the concentrated interest that is the few around you, they will give less to you.
The result that people purchase moral satisfaction rather than maximizing social welfare further confirms this model: in fact it explains what charity we do have as signalling, and drives a wedge between the kind and extent of charity that is beneficial to you personally, and the kind and extent that maximizes your contribution to social welfare.
Can you do well by doing good?
Multifoliaterose claimed that you can. In particular, he claimed that by carefully investigating efficient charity, and then donating a large fraction of your wealth, you will do well personally, because you will feel better about yourself. The refutation is that many people have found a more efficient way to purchase moral satisfaction: don't spend your time and energy investigating efficient charity, make only a small donation, and use your natural human ability to neglect the scope of your donation.
Spending time and effort on efficient charity in order to feel good about yourself doesn't make you feel any better than not spending time on it, but it does cost you more money.
The correct reason to spend most of your meager and hard-earned cash on efficient charity is that you already want to do good. But that is not an extra reason.
My disagreement with Multifoliaterose's post is more fundamental than these details, though. "It's not to the average person's individual advantage to maximize average utility" is the fundamental theorem of social science. It's like when someone brings you a perpetual motion machine design. You know it's wrong, though yes, it is important to point out the specific error.
Edit: some people in the comments have said that if you just donate a small amount (say 5% of disposable income) to an efficient but non-futurist charity, you can do very well yourself and still help people. Yes, you can do well whilst doing some good, but the point is that it is a trade-off. I agree that, for a given utility function, there are points on this trade-off that are better than either extreme.
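As a sketch of what such an interior optimum can look like, here is a minimal model; the log/sqrt functional forms and the altruism weight are arbitrary assumptions, not claims about anyone's actual utility function:

```python
import numpy as np

# Toy trade-off between selfish utility and altruistic utility as a
# function of the fraction f of income donated. The log/sqrt forms and
# the 0.1 altruism weight are arbitrary modelling assumptions.
f = np.linspace(0.0, 0.99, 1000)
selfish = np.log(1.0 - f + 1e-9)      # consumption, status, family
altruistic = 0.1 * np.sqrt(f)         # diminishing returns on doing good

total = selfish + altruistic
best = f[np.argmax(total)]
print(f"optimal donation fraction under these assumptions: {best:.3f}")

# Neither extreme (f = 0 or f close to 1) maximizes the toy utility;
# some small interior donation level beats both, which is all the
# trade-off claim above requires.
```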
101 comments
Comments sorted by top scores.
comment by pjeby · 2010-07-22T22:55:59.081Z · LW(p) · GW(p)
Invalid logic. What the people around you generally want more of is your attention, to validate their sense of status or acceptance. If you were spending your time on some form of conspicuous consumption, this would be equally disliked as a resource drain.
TL;DR: any time you spend time and resources on something other than the people you're in relationship with, they're not going to like it that much. Altruism has fuck-all to do with it, except as your own signaling that you're a good person and the people you're with are selfish jerks.
Replies from: JGWeissman, Roko↑ comment by JGWeissman · 2010-07-23T18:39:56.941Z · LW(p) · GW(p)
Um, your "TL;DR" summary is longer than the rest of your comment. (Not that either actually is too long to read.)
↑ comment by Roko · 2010-07-22T23:13:41.697Z · LW(p) · GW(p)
This still supports the conclusion that the action that is optimal for you personally doesn't maximize social welfare. Maximizing social welfare is just another thing that will make those around you like you less, and in the case of the most efficient charities it will be compounded by them thinking it is weird and cult-ish.
If you were spending your time on some form of conspicuous consumption, this would be equally disliked as a resource drain.
In the context of signalling, that's just not the way it works; what matters is their impression of you. Perhaps you were thinking of someone very close like your wife, to whom you don't need to signal wealth?
Replies from: pjeby↑ comment by pjeby · 2010-07-23T03:36:30.704Z · LW(p) · GW(p)
Perhaps you were thinking of someone very close like your wife, to whom you don't need to signal wealth?
Why would I want to signal wealth to anybody?
This still supports the conclusion that the action that is optimal for you personally doesn't maximize social welfare.
What are you talking about? In some of the internet marketer circles I hang out in, it's almost gauche to not be involved in some sort of charitable endeavor.
They are not particularly concerned about efficiency, true, but surely you can find some social circle that agrees with you. Hang out with GiveWell staff, if you must. ;-)
IOW, your language both in the post and in this comment continues to strike me as victim-thinking. It's not like we're all forced to interact with exactly one social circle.
Replies from: steven0461, cousin_it↑ comment by steven0461 · 2010-07-23T04:22:38.701Z · LW(p) · GW(p)
It's not like we're all forced to interact with exactly one social circle.
Why the bizarre absolute? People don't have perfect freedom choosing whom to interact with, and to the extent that they don't, Roko's thesis holds.
Replies from: pjeby↑ comment by pjeby · 2010-07-23T20:14:17.113Z · LW(p) · GW(p)
People don't have perfect freedom choosing whom to interact with
But we have near-perfect freedom in choosing whom not to interact with, and in choosing our environment such that we either aren't dependent upon others' opinions of our status, or are only interacting with those who perceive it favorably.
In marketing terminology, this is called, "finding your niche". ;-)
↑ comment by cousin_it · 2010-07-23T18:31:19.112Z · LW(p) · GW(p)
Why would I want to signal wealth to anybody?
I emphatically agree. My strategy of choice is signaling that I'm an exciting person (by trying to actually be an exciting person), and I can't imagine why charity would interfere with that.
comment by cousin_it · 2010-07-22T22:01:16.995Z · LW(p) · GW(p)
What kind of activity are you talking about?
If it's saving children and birds, this doesn't make you less attractive to the opposite sex, quite the contrary.
If it's research work, move into a respectable academic setting. I don't think people view Judea Pearl, Marcus Hutter or Daniel Kahneman as dangerously weird, but each of them did more for "our cause" than most of us combined.
If it's advocacy, well, I kinda see why the spouses are complaining. Advocacy sucks, find something better to do with your life.
Replies from: EStokes, Roko, Roko↑ comment by EStokes · 2010-07-22T22:27:23.298Z · LW(p) · GW(p)
saving children and birds
Not this
research work
A respectable academic setting doesn't seem optimal for getting work done
advocacy
Not necessarily this, though why does advocacy suck, in your opinion?
Replies from: Roko, Will_Newsome↑ comment by Roko · 2010-07-22T22:59:03.149Z · LW(p) · GW(p)
A respectable academic setting doesn't seem optimal for getting work done
For example, they might make you publish your work on AGI. This could be very bad.
Replies from: Alexandros↑ comment by Alexandros · 2010-07-23T07:30:52.111Z · LW(p) · GW(p)
You could always publish impressive but unusable papers if you really wanted to. Alternatively, if you have good AGI insights, just use them to find small improvements in current AGI research to keep people off your back. More overhead, but still, you're getting paid to do whatever you like with part of your time. Not bad.
↑ comment by Will_Newsome · 2010-07-22T22:48:58.439Z · LW(p) · GW(p)
Upvoted for conciseness, but "A respectable academic setting doesn't seem optimal for getting work done"? Why do you say so? That is, of course it's not optimal, but what comparably expensive environment do you think would be better and why?
Replies from: EStokes↑ comment by EStokes · 2010-07-22T22:58:12.539Z · LW(p) · GW(p)
Huh.
I was thinking of FAI as a typical contrarian cause, and that a respectable academic setting might be too strict for Eliezer to, say, work on the book or study math for a year. I wasn't thinking of other causes, nor do I know much about respectable academic settings, so for other causes that's an unqualified guess.
stealth edit
Replies from: JoshuaZ↑ comment by JoshuaZ · 2010-07-22T23:12:26.148Z · LW(p) · GW(p)
I was thinking of FAI as a typical contrarian cause, and that a respectable academic setting might be too strict for Eliezer to, say, work on the book or study math for a year.
The primary point of tenure is that it frees people up to study more or less whatever they please. Now, that only applies to academics who already have major successes behind them, but it isn't at all hard for an academic to spend a year studying something relevant to what they want to do. For that matter, one could just as easily say take a year long Masters in math, or audit relevant classes at a local college. You are overestimating the level of restriction that academic settings create.
Replies from: EStokes↑ comment by Roko · 2010-07-22T23:09:13.899Z · LW(p) · GW(p)
The point is that the extent and nature of charity that is best for you individually is not the same as that which maximizes social welfare. The optimal extent of charity for you personally might be 0.
It might be optimal for you personally to go work as an actuary and retire at 40, or to pursue your personal interest in elliptic curves research. Whatever.
Replies from: cousin_it↑ comment by cousin_it · 2010-07-23T07:36:29.760Z · LW(p) · GW(p)
I can see how taking charities seriously may drain you of resources. But I don't see how it applies to existential risk reduction activities. Have you invented some method of spending all your money to get FAI faster, or something?
Yes, that was a dig at SIAI and similar institutions. I honestly have no idea why we need them. If academia doesn't work for him, Eliezer could have pursued his ideas and published them online while working a day job, as lots of scientists did. He'd make just the same impact.
Replies from: Eliezer_Yudkowsky, Wei_Dai, Vladimir_Nesov, Roko↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-24T06:10:51.118Z · LW(p) · GW(p)
I would not have been able to write and pursue a day job at the same time. You seem to have incredibly naive ideas about the amount of time and energy needed to accomplish worthwhile things. There are historical exceptions to this rule, but (a) they are exceptions and (b) we don't know how much faster they could have worked if they'd been full-time.
Replies from: cousin_it↑ comment by cousin_it · 2010-07-24T10:29:32.102Z · LW(p) · GW(p)
A day job doesn't have to exhaust you. For example, I have a "day job" as a programmer where I show up at the office once a week, so I have more free time than I know what to do with. I don't believe you are less capable of finding such a job than me, and I don't believe that none of your major accomplishments were made while multitasking.
Replies from: Vladimir_Nesov, Eliezer_Yudkowsky↑ comment by Vladimir_Nesov · 2010-07-24T11:16:52.014Z · LW(p) · GW(p)
A day job doesn't have to exhaust you.
It's not trivial to find one that doesn't exhaust you and that takes up only a fraction of your time; you need luck or ingenuity. It makes things simpler if you can just get that problem out of the way - after all, it's a simple matter, something we know how to do. Trivial (and not so trivial) inconveniences that have known resolutions should be removed; it's that simple.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-24T11:01:35.772Z · LW(p) · GW(p)
You're silly. I suppose if you started doing things in your free time that are as interesting as what I do in my professional full-time workdays I would pay attention to you again.
Replies from: cousin_it, cousin_it↑ comment by cousin_it · 2010-08-13T08:44:41.241Z · LW(p) · GW(p)
You did a good thing: my last two top-level posts were partly motivated by this comment of yours. And for the record, at the same time as I was writing them, at my day job we launched a website with daily maps of forest fires in Russia that got us 40k visitors a day for a while, got featured on major news sites and on TV, and got used by actual emergency teams. It's been a crazy month. Thankfully, right now Moscow is no longer covered in smoke and I can relax a little.
Coincidentally, in that time I had several discussions with different people about the same topic. For some reason all of them felt that you have to be "serious" about whatever you do, do it "properly", etc. I just don't believe it. What matters is the results. There's no law of nature saying you can't get good results while viewing yourself as an amateur light-headed butterfly. In fact, I think it helps!
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-08-13T17:08:07.983Z · LW(p) · GW(p)
There's no law of nature saying you can't get good results while viewing yourself as an amateur light-headed butterfly. In fact, I think it helps!
You have to work on systematically developing mastery though. Difficult problems (especially the ones without clear problem statements) require thousands of hours of background-building and familiarizing yourself with the problem to make steps in the right directions, even where these steps appear obvious and easy in retrospect, and where specific subproblems can be resolved easily without having that background. You need to be able to ask the right questions, not only to answer them.
It doesn't seem natural to describe such work as an act of "amateur light-headed butterfly". Butterflies don't work in coal mines.
↑ comment by cousin_it · 2010-07-24T11:16:39.521Z · LW(p) · GW(p)
Sorry, can't parse. Are you making any substantive argument? What's the difference between your worktime now and the free time you'd have if you worked an easy day job, or supported yourself with contract programming, or something? Is it only that there's more of it, or is there a qualitative difference?
Replies from: Eliezer_Yudkowsky, Vladimir_Nesov↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-07-24T11:43:32.598Z · LW(p) · GW(p)
Time, mental energy, focus. I cannot work two jobs and do justice to either of them.
I am feeling viscerally insulted by your assertion that anything I do can be done in my spare time. Let's try that with nuclear engineers and physicists and lawyers and electricians, shall we? Oh, I'm sorry, was that work actually important enough to deserve a real effort or something?
Replies from: cousin_it↑ comment by cousin_it · 2010-07-24T12:01:36.634Z · LW(p) · GW(p)
Sorry, I didn't mean to insult you. Also I didn't downvote your comment, someone else did.
What worries me is the incongruity of it all. What if Einstein, instead of working as a patent clerk and doing physics at the same time, chose to set up a Relativity Foundation to provide himself with money? What if this foundation went on for ten years without actually publishing novel rigorous results, only doing advocacy for the forthcoming theory that will revolutionize the physics world? This is just, uh...
A day job is actually the second recourse that comes to mind. The first recourse is working in academia. There's plenty of people there doing research in logic, probability, computation theory, game theory, decision theory or any other topic you consider important. Robin Hanson is in academia. Nick Bostrom is in academia. Why build SIAI?
Replies from: CarlShulman, Mitchell_Porter, RobinZ↑ comment by CarlShulman · 2010-07-24T14:00:55.169Z · LW(p) · GW(p)
Just as an aside, note that Nick Bostrom is in academia in the Future of Humanity Institute at Oxford that he personally founded (as Eliezer founded SIAI) and that has been mostly funded by donations (like the SIAI), mainly those of James Martin. That funding stream allows the FHI to focus on the important topics that they do focus on, rather than devoting all their energy to slanting work in favor of the latest grant fad. FHI's ability to expand with new hires, and even to sustain operations, depends on private donations, although grants have also played important roles. Robin spent many years getting tenure, mostly focused on relatively standard topics.
One still needs financial resources to get things done in academia (and devoting one's peak years to tenure-optimized research in order to exploit post-tenure freedom has a sizable implicit cost, not to mention the opportunity costs of academic teaching loads). The main advantages, which are indeed very substantial, are increased status and access to funding from grant agencies.
Replies from: cousin_it↑ comment by cousin_it · 2010-07-24T15:40:21.989Z · LW(p) · GW(p)
Thank you for the balanced answer.
Are people in academia really unable to spend their "peak years" researching stuff like probability, machine learning or decision theory? I find this hard to believe.
Replies from: CarlShulman↑ comment by CarlShulman · 2010-07-24T16:20:58.612Z · LW(p) · GW(p)
Of course people spend their peak years working in those fields. If Eliezer took his decision theory stuff to academia he could pursue that in philosophy. Nick Bostrom's anthropic reasoning work is well-accepted in philosophy. But the overlap is limited. Robin Hanson's economics of machine intelligence papers are not taken seriously (as career-advancing work) by economists. Nick Bostrom's stuff on superintelligence and the future of human evolution is not career-optimal by a large margin on a standard philosophy track.
There's a growing (but still pretty marginal, in scale and status) "machine ethics" field, but analysis related to existential risk or superintelligence is much less career-optimal there than issues related to Predator drones and similar.
Some topics are important from an existential risk perspective and well-rewarded (which tends to result in a lot of talent working on them, with diminishing marginal returns) in academia. Others are important, but less rewarded, and there one needs slack to pursue them (donation funding for the FHI with a mission encompassing the work, tenure, etc).
There are various ways to respond to this. I see a lot of value in trying to seed certain areas, illuminating the problems in a respectable fashion so that smart academics (e.g. David Chalmers) use some of their slack on under-addressed problems, and hopefully eventually make those areas well-rewarded.
↑ comment by Mitchell_Porter · 2010-07-24T14:22:30.391Z · LW(p) · GW(p)
Why build SIAI?
For an important topic, it makes sense to have a dedicated research center. And in the end, SIAI is supposed to create a Friendly AI for real, not just to design it. As it turns out, SIAI also manages to serve many other purposes, like organizing the summits. As for FAI theory, I think it would have developed more slowly if Eliezer had apprenticed himself to a computer science department somewhere.
However, I do think we are at a point where the template of the existing FAI solution envisaged by SIAI could be imitated by mainstream institutions. That solution is, more or less: figure out the utility function implicitly supposed by the human decision process, figure out the utility function produced by reflective idealization of that natural utility function, and create a self-enhancing AI with this second utility function. I think that is an approach to ethical AI which could easily become the consensus idea of what should be done.
↑ comment by RobinZ · 2010-07-24T13:30:23.175Z · LW(p) · GW(p)
What if Einstein, instead of working as a patent clerk and doing physics at the same time, chose to set up a Relativity Foundation to provide himself with money?
Setting up a Relativity Foundation is a harder job than being a patent clerk.
↑ comment by Vladimir_Nesov · 2010-07-24T12:03:14.005Z · LW(p) · GW(p)
What's the difference between your worktime now and the free time you'd have if you worked an easy day job, or supported yourself with contract programming, or something?
The difference is the attention spent on contract programming. If this can be eliminated, it should be. And it can.
↑ comment by Wei Dai (Wei_Dai) · 2010-07-23T11:32:54.012Z · LW(p) · GW(p)
From what I understand, SIAI was meant to eventually support at least 10 full time FAI researchers/implementers. How is Eliezer supposed to "make the same impact" by doing research part time while working a day job?
Replies from: cousin_it↑ comment by cousin_it · 2010-07-23T12:52:30.360Z · LW(p) · GW(p)
I think the hard problem is finding 10 capable and motivated researchers, and any such people would keep working even without SIAI. Eliezer can make impact the same way he always does: by proving to the Internet that the topic is interesting.
Replies from: Vladimir_Nesov, Roko↑ comment by Vladimir_Nesov · 2010-07-23T14:39:43.189Z · LW(p) · GW(p)
I think the hard problem is finding 10 capable and motivated researchers, and any such people would keep working even without SIAI.
Again: why isn't it obvious to you that it would be easier for these people to have a source of funding and a building to work in?
↑ comment by Roko · 2010-07-23T14:13:02.779Z · LW(p) · GW(p)
and any such people would keep working even without SIAI.
No. Just no.
Replies from: cousin_it↑ comment by cousin_it · 2010-07-23T14:25:06.366Z · LW(p) · GW(p)
Why? I gave the example of Wei Dai who works independently from the SIAI. If you know any people besides Eliezer who do comparable work at the SIAI, who are they?
Replies from: Wei_Dai, Roko↑ comment by Wei Dai (Wei_Dai) · 2010-07-23T23:47:13.494Z · LW(p) · GW(p)
The problem with your example is that I don't work on FAI, I work on certain topics of philosophical interest to me that happen to be relevant to FAI theory. If I were interested in actually building an FAI, I'd definitely want a secure source of funding for a whole team to work on it full time, and a building to work in. It seems implausible that that's not a big improvement (in likelihood of success) over a bunch of volunteers working part time and just collaborating over the Internet.
More generally, money tends to be useful for getting anything accomplished. You seem to be saying that FAI is an exception, and I really don't understand why... Or are you just saying that SIAI in particular is doing a bad job with the money that it's getting? If that's the case, why not offer some constructive suggestions instead of just making "digs" at it?
Replies from: cousin_it↑ comment by cousin_it · 2010-07-24T11:03:10.976Z · LW(p) · GW(p)
I don't believe FAI is ready to be an engineering project. As Richard Hamming would put it, "we do not have an attack". You can't build a 747 before some hobbyist invents the first flyer. The "throw money and people at it" approach has been tried many times with AGI; how is FAI different? I think right now most progress should come from people like you, satisfying their personal interest. As for the best use of SIAI money, I'd use GiveWell to get rid of it, or just throw some parties and have fun all around, because money isn't the limiting factor in making math breakthroughs happen.
Replies from: Wei_Dai↑ comment by Wei Dai (Wei_Dai) · 2010-07-27T06:13:51.061Z · LW(p) · GW(p)
I think right now most progress should come from people like you, satisfying their personal interest.
I think the problem with that is that most people have multiple interests, or their interests can shift (perhaps subconsciously) based on considerations of money and status. FAI-related fields have to compete with other fields for a small pool of highly capable researchers, and the lack of money and status (which would come with funding) does not help.
I don't believe FAI is ready to be an engineering project.
Me either, but I think that one, SIAI can use the money to support FAI-related research in the mean time, and two, given that time is not on our side, it seems like a good idea to build up the necessary institutional infrastructure to support FAI as an engineering project, just in case someone makes an unexpected theoretical breakthrough.
↑ comment by Roko · 2010-07-23T14:38:09.421Z · LW(p) · GW(p)
Marcello, Anna Salamon, Carl Shulman, Nick Tarleton, plus a few up-and-coming people I am not acquainted with.
Replies from: Nick_Tarleton, Nick_Tarleton, Nick_Tarleton, cousin_it↑ comment by Nick_Tarleton · 2010-07-23T20:44:56.389Z · LW(p) · GW(p)
I don't do any work comparable to Eliezer's.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-07-23T20:54:01.642Z · LW(p) · GW(p)
Why don't you? You are brilliant, and you understand the problem statement, you merely need to study the right things to get started.
↑ comment by Nick_Tarleton · 2010-07-23T20:44:40.068Z · LW(p) · GW(p)
I don't do any original work comparable to Eliezer.
↑ comment by Nick_Tarleton · 2010-07-23T20:44:09.157Z · LW(p) · GW(p)
I don't do anything comparable to Eliezer.
↑ comment by cousin_it · 2010-07-23T14:38:49.724Z · LW(p) · GW(p)
Is their research secret? Any pointers?
Replies from: Roko↑ comment by Roko · 2010-07-23T14:41:16.205Z · LW(p) · GW(p)
Marcello's research is secret, but not that of the others.
Replies from: cousin_it, Vladimir_Nesov↑ comment by cousin_it · 2010-07-23T14:44:10.921Z · LW(p) · GW(p)
Sorry for deleting my comment, I didn't think you'd answer it so quickly. For posterity, it said: "Is their research secret? Any pointers?"
Here's the list of SIAI publications. Apart from Eliezer's writings, there's only one moderately interesting item on the list: Peter de Blanc's "convergence of expected utility" (or divergence, rather). That's... good, I guess? My point stands.
↑ comment by Vladimir_Nesov · 2010-07-23T14:44:35.750Z · LW(p) · GW(p)
Is it secret why it's secret? I can't imagine.
Replies from: Roko↑ comment by Roko · 2010-07-23T14:49:32.754Z · LW(p) · GW(p)
Yes. If anyone finds out why Marcello's research is secret, they have to be killed and cryopreserved for interrogation after the singularity.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-07-23T14:55:20.698Z · LW(p) · GW(p)
Yes. If anyone finds out why Marcello's research is secret, they have to be killed and cryopreserved for interrogation after the singularity.
Now why do you even ask why people should be afraid of something going terribly wrong at SIAI? Keeping it secret in order to avoid signaling the moment when it becomes necessary to keep it secret? Hmm...
↑ comment by Vladimir_Nesov · 2010-07-23T08:05:29.128Z · LW(p) · GW(p)
If academia doesn't work for him, Eliezer could have pursued his ideas and published them online while working a day job, as lots of scientists did. He'd make just the same impact.
Isn't it better to have an option of pursuing your research without having to work a day job? Presumably, this will allow you to focus more on research...
Replies from: cousin_it↑ comment by cousin_it · 2010-07-23T08:34:57.287Z · LW(p) · GW(p)
But... create a big organization that generates no useful output, except providing you with some money to live on? Is it really the path of least effort? SIAI has existed for 10 years now and here are its glorious accomplishments broken down by year. Frankly, I'd be less embarrassed if Eliezer were just one person doing research!
Replies from: Vladimir_Nesov, NancyLebovitz↑ comment by Vladimir_Nesov · 2010-07-23T10:15:44.947Z · LW(p) · GW(p)
SIAI has existed for 10 years now and here are its glorious accomplishments broken down by year. Frankly, I'd be less embarrassed if Eliezer were just one person doing research!
Yes well, in retrospect many things are seen as suboptimal. Remember that SIAI was founded back when Eliezer hadn't yet figured out the importance of Friendliness and thought we needed a big concerted effort to develop an AGI. Later, he was unable to interest sufficiently qualified people in doing the research on FAI (equivalently, to explain the problem so that qualified people would both understand it and take it seriously). This led to blogging on Overcoming Bias and now Less Wrong, which does seem to be a successful, if insanely inefficient, way of explaining the problem. The current SIAI seems to have a chance of mutating into a source of funding for more serious FAI research, but as multifoliaterose points out, right now publicity seems to be a more efficient route to eventually getting things done, since we need to actually find researchers to produce the accomplishments whose absence you protest.
Replies from: cousin_it↑ comment by cousin_it · 2010-07-23T14:29:34.325Z · LW(p) · GW(p)
Since you have advanced the state of the art both here and at decision-theory-workshop, I will take this opportunity to ask you: is your research funded by SIAI? Would it progress faster if it were? Is money the limiting factor?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-07-23T14:47:02.003Z · LW(p) · GW(p)
I'll reply privately via e-mail (SIAI doesn't fund me, and it'd be helpful if a few unlikely things were different).
Replies from: cousin_it↑ comment by NancyLebovitz · 2010-07-23T09:01:43.333Z · LW(p) · GW(p)
The advantages to an organization are mutual support, improving the odds of continuity if something happens to Eliezer, and improving the odds of getting more people who can do high-level work.
I don't have a feeling for how fast new organizations for original thought and research should be expected to get things done. Anyone have information?
Replies from: cousin_it↑ comment by cousin_it · 2010-07-23T09:09:56.830Z · LW(p) · GW(p)
improving the odds of continuity if something happens to Eliezer, and improving the odds of getting more people who can do high level work
I don't see who else does high level work at SIAI and who will continue it if Eliezer gets hit by a bus. Wei Dai had the most success building on Eliezer's ideas, but he's not a SIAI employee and SIAI didn't spark his interest in the topic.
comment by [deleted] · 2010-07-23T03:47:00.922Z · LW(p) · GW(p)
Two points:
One. Charity, up to a point, is not necessarily a trade-off. Just as adding a hobby can make you more productive at work by forcing you to be efficient with your time, adding a charitable commitment can force you to stop wasting money. There is a reason why the Judaeo-Christian tradition recommends tithing; a tenth of income is a good rule of thumb for an amount that's significant but not enough to make you noticeably poorer.
Two. When people have personal problems as a result of altruism, I suspect it's the nature of the charity (futurist ideas sound useless to a lot of people) or the nature of the commitment (giving more than a tenth of income, for example) or some interpersonal issue that the altruist doesn't understand. I want to emphasize that last possibility. If you know you have Asperger's, you should be extra skeptical about your own ability to explain interpersonal behavior.
Replies from: steven0461, Roko↑ comment by steven0461 · 2010-07-23T04:20:27.265Z · LW(p) · GW(p)
I wish the concept of "tithing" included spending a tenth of one's free time trying to optimize.
↑ comment by Roko · 2010-07-23T10:55:54.964Z · LW(p) · GW(p)
At small levels of expenditure (<5% of disposable income), charitable spending is so small that of course it won't make enough of a difference for you to notice any negative impact.
My strong suspicion is that if existential risk reducers could and wanted to pull off the trick of devoting only 5% of their spare mental energy to existential risks, then there would be no problem, either in my case or in the cases of the people I mentioned.
Perhaps there would be a problem with cognitive dissonance, but you could still apply the 5% rule: discount the extent to which you care about humanity as a whole versus near-mode things by a factor of 20.
comment by Mass_Driver · 2010-07-24T03:02:26.078Z · LW(p) · GW(p)
Spending time and effort on efficient charity in order to feel good about yourself doesn't make you feel any better than not spending time on it, but it does cost you more money. The correct reason to spend most of your meager and hard-earned cash on efficient charity is that you already want to do good. But that is not an extra reason.
Look, I think Multifoliaterose made one good point that you either missed or for some reason chose not to address:
Increasing the amount you donate to efficient charity by one order of magnitude can radically improve your self-esteem, productivity, and mental integrity in ways that you would not have expected based merely on your desire to do good.
In other words, if you calculate U(charity) = U(status) + U(doing good), you will seriously underestimate U(charity). You must also include a term for charity's surprising effect on your psyche; U(charity) = U(status) + U(doing good) + U(being good).
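A toy version of that calculation, with invented magnitudes (the variable values below are placeholders; only the direction of the bias matters):

```python
# Invented magnitudes; only the direction of the bias matters.
u_status     = 1.0   # signalling value of visible giving
u_doing_good = 0.5   # warm glow from the direct desire to help
u_being_good = 3.0   # the claimed self-esteem / integrity effect

naive = u_status + u_doing_good                 # 1.5
full  = u_status + u_doing_good + u_being_good  # 4.5

print(f"naive estimate understates U(charity) by {full / naive:.1f}x")
```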
Yes, you can do well whilst doing some good, but the point is that it is a trade-off. I agree that, for a given utility function, there are points on this trade-off that are better than either extreme.
OK, but you shouldn't be quite so cavalier about it. If the actual equilibrium point involves 5x or 10x current donation levels, and rational thinking can help people move toward that equilibrium point, then there's this huge opportunity for us to help people help both themselves and others by explaining to them why charity is awesome-r than they thought. The way you phrase your disclaimer seems to suggest that the trade-off will inevitably break down differently for different people to the point where we shouldn't worry about it.
Replies from: Benquo, PhilGoetz↑ comment by Benquo · 2010-07-24T17:54:34.767Z · LW(p) · GW(p)
Can you say more about how to realize these benefits? I haven't noticed what I've given to have any real effect on my character or well-being...
Replies from: Mass_Driver↑ comment by Mass_Driver · 2010-07-24T18:39:58.639Z · LW(p) · GW(p)
Well, your mileage may vary. But here's Multifoliaterose's report on self-esteem before:
Though I hate myself for it, apparently I care a lot more about myself than I care about other people. I’m just not a good enough person to do what I should do. I’m happier when I don’t think about it than when I do, and I do the wrong thing regardless, so I try not to think about it too much. But I know in my heart-of-hearts that the way I’m leading my life is very wrong.
and after:
What effect did donating have on me? Well, since correlation is not causation, one can't be totally sure. But my subjective impression is that it substantially increased my confidence in my ability to act in accordance with my values, which had a runaway effect resulting in me behaving in progressively greater accord with my values; raising my life satisfaction considerably. The vague sense of guilt that I once felt has vanished. The chronic mild depression that I'd experienced for most of my life is gone. I feel like a complete and well integrated human being. I'm happier than I've been in eight years. I could not have done better for myself by spending the $1500 in any other way.
To see why multifolaterose thinks it might happen to you, read the article, especially reason (C) for why happiness correlates only weakly with disposable income and the quotes from Singer's book.
Hope that helps.
Also, at the risk of being preachy or presumptuous: Multifoliaterose doesn't predict that you'll get any significant character gains from throwing a few bucks around here and there -- you would have to give an amount that begins to reflect your values. Spending 1% of your income on charity, e.g., suggests that you value yourself 100 times more than a stranger, which may not do much for your self-esteem.
comment by Unnamed · 2010-07-22T23:30:32.408Z · LW(p) · GW(p)
I wouldn't call the problem public choice, since most kinds of charity divert resources away from your immediate social network but only a few attract problems. If you used GiveWell's standards of efficiency, and gave to Stop Tuberculosis or VillageReach, I doubt you'd run into problems. It sounds like these problems are arising with futurist-type charities, where you're devoting your efforts and resources to causes that people close to you don't understand and find weird and off-putting, which is a different source of trouble.
Replies from: multifoliaterose, Roko↑ comment by multifoliaterose · 2010-07-23T01:26:33.768Z · LW(p) · GW(p)
Yes, this is a major reason that I doubt that donating to SIAI is a good idea. I feel that:
1. In order for existential risk charities to do a good job, they need good researchers and donors.
2. In order for existential risk charities to attract good researchers and donors, public interest in and concern for existential risk must grow substantially.
3. In light of point 2, the most important task for an existential risk charity right now is to increase public interest in and concern for existential risk.
4. SIAI seems poorly suited to generating interest in and concern for existential risk, and may very well be lowering the prestige attached to investigating existential risk rather than raising it.
↑ comment by Roko · 2010-07-23T01:57:50.671Z · LW(p) · GW(p)
In order for existential risk charities to attract good researchers and donors, public interest in and concern for existential risk must grow substantially.
This is a separate debate, but I think that you overestimate the ability of the general public, and society at large to be sane about existential risks, and AI risks especially. Though it is useful to have someone challenging the orthodoxy here: what evidence do you have that suggests that it is possible to get people to take this really seriously?
Replies from: multifoliaterose↑ comment by multifoliaterose · 2010-07-23T03:15:19.919Z · LW(p) · GW(p)
This is a separate debate, but I think that you overestimate the ability of the general public, and society at large to be sane about existential risks, and AI risks especially.
I don't think it's unreasonable to hope that society can eventually get to a point where being an existential risk researcher has status similar to being a physics researcher.
There's nothing intrinsically weird about the idea "there are things that could cause the extinction of the human race and it's a good idea to have some people studying them and thinking about how to avoid them." I think that the reason that general artificial intelligence research has such a bad reputation is that it's associated with a history of false alarms. I think that by adopting a gradualist approach of getting more and more of the intellectual elite to think about existential risk, it should be possible to gradually change attitudes about artificial intelligence research. I worry that SIAI might sound another "false alarm" or have institutional problems which further damage the credibility of existential risk research.
My remark is related to the top level post. From your top level post it's clear that at the moment there are very strong negative pressures against people studying existential risk. I wish there weren't such pressures but they're there. It's plausible to me that these pressures make it much more difficult for you to do existential risk research than it would be if existential risk research were more mainstream. It's also plausible to me that there are people who have something in common with you but who are unable to bear these pressures and so are deterred from working with you.
For this reason, I think that the best way to facilitate existential risk research is to
(a) Raise levels of public interest in making the world a better place. A very large majority of the people so influenced will not work toward or fund existential risk research, but a small percentage will.
(b) Get the educated public (the sorts of people who read semi scholarly books) interested in existential risk.
(c) Get established scientific experts more interested in existential risk.
In order to accomplish (b) and (c), I think that it's important for an existential risk organization to avoid any appearance of cultishness.
A fundamental problem is that there seems to be a strong positive correlation between having interest in existential risk and having high-functioning Asperger's syndrome, and a strong negative correlation between having high-functioning Asperger's syndrome and having good marketing skills. I think that this issue is the main reason that existential risk research has such low status relative to its importance. Not sure what can be done about this.
Replies from: Roko, Eneasz, Roko, soreff↑ comment by Roko · 2010-07-23T10:38:24.185Z · LW(p) · GW(p)
I think the problem is that the public is like a reinforcement learner, and won't believe claims that are based on long chains of reasoning. Rather, the public and society at large tends to wait for the thing in question to actually happen, so that they have "proof".
Physics is OK because it has repeatedly proved both its value in making novel and astounding predictions that were then proved correct, and because those predictions had important practical consequences. Though there are clear exceptions where dreadful public epistemology has impacted physics: overreaction to the dangers of nuclear power being one.
I think there's a fundamental point about how public epistemology works that I want to make here: the public operates like a dumb agent that is paranoid about not being tricked, and demands real physical proof of things when the Bayesian probability with respect to a reasonable prior is already 99.9999...%. Widespread denial of evolution is one case; you can't show someone an ape evolving into a human.
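A minimal Bayes sketch of this point, with illustrative numbers (the prior, likelihood ratio, and evidence count are all made up):

```python
# Illustrative numbers only: many modest pieces of evidence can push a
# Bayesian posterior to near-certainty long before any single dramatic
# physical demonstration arrives.
prior_odds = 1.0 / 100.0   # start out 99:1 against the claim
likelihood_ratio = 5.0     # each independent line of evidence favors it 5:1
pieces = 10

posterior_odds = prior_odds * likelihood_ratio ** pieces
posterior_prob = posterior_odds / (1.0 + posterior_odds)
print(f"posterior probability: {posterior_prob:.6f}")  # ~0.999990

# An agent that refuses to update until it sees one decisive proof keeps
# acting on the prior, no matter how high this number gets.
```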
Replies from: soreff↑ comment by soreff · 2010-07-24T04:06:34.193Z · LW(p) · GW(p)
the public operates like a dumb agent that is paranoid about not being tricked
Good point! Perhaps part of the problem is that the public has been subjected to at least two millennia of warnings of existential risks - by the clergy... That's long enough, and the false alarms have been frequent enough and intense enough, that perhaps we have even genetically evolved some extra skepticism about them.
Replies from: ata↑ comment by ata · 2010-07-24T04:20:42.748Z · LW(p) · GW(p)
But do we (i.e. the human race in general) have any more skepticism about such claims than we used to? Most people still do believe in religions that include some form of eschatology.
It might just be that scientific talk about existential risk seems like a competing meme to religious people (you're not allowed to believe in something that says the world won't end the way your religion says it will), while non-religious people may tend to see discussion of global catastrophe as in the genre of apocalyptic religion.
(Then again, global warming doesn't seem to have that problem, so maybe it's just a marketing issue...)
↑ comment by Eneasz · 2010-07-27T20:28:36.636Z · LW(p) · GW(p)
A fundamental problem is that there seems to be a strong positive correlation between having interest in existential risk and having high-functioning Asperger's syndrome, and a strong negative correlation between having high-functioning Asperger's syndrome and having good marketing skills.
Couldn't this be corrected by hiring a marketing firm? People with high-functioning Asperger's can see that the link from "hiring a marketing firm" to "getting the public to believe nearly anything" is very strong and very reliable. It takes only a few tens of millions of dollars to convince the public to commit to billions of dollars in near-future losses (e.g. tobacco industry, carbon polluters, election drives).
This may not be desirable, but it is a fact, and if a rational agent wants to win then s/he should accept the fact and design with it.
↑ comment by Roko · 2010-07-23T10:48:20.864Z · LW(p) · GW(p)
Another problem I want to mention: getting "established scientific experts" to take existential risk seriously is impeded by the fact that academia has no mechanism for assessing value of information. Academics are rewarded based upon how true the info they generate is, not on a combination of how true it is and how important it is. So we have more papers on dung beetle reproduction than on human extinction.
Furthermore, academia is utterly paranoid about not causing the utterly dumb public to mistrust it, so it has to adhere to the public's standards about needing real physical proof for outlandish claims, rather than reasoning probabilistically about them using long, complex and somewhat subjective arguments.
Lastly, to complicate things even more, academia is chaos. Nobody is in charge. It is inherently conservative and slow to change, even when there is real physical proof that it is mistaken -- most bad theories are buried along with their owners years after they have been shown to have a minuscule Bayesian probability.
Now there are a few academics at Oxford University doing x-risk research. But to grow that community to thousands of researchers is going to be either very expensive and quite slow, or free and glacially slow.
↑ comment by soreff · 2010-07-24T04:00:03.108Z · LW(p) · GW(p)
From your top level post it's clear that at the moment there are very strong negative pressures against people studying existential risk. I wish there weren't such pressures but they're there. It's plausible to me that these pressures make it much more difficult for you to do existential risk research than it would be if existential risk research were more mainstream.
I would phrase this differently. Certain types of existential risks (nuclear war, asteroid impacts) seem to be studied in the mainstream. Perhaps the study of AGI-related existential risks is the key area pushed out of the mainstream?
↑ comment by xamdam · 2010-07-28T04:16:03.589Z · LW(p) · GW(p)
I think your arguments would make sense if there were a general "let's deal with existential risks" program; I see SIAI concentrating specifically on the imminent possibility of uFAI. They feel they already have enough researchers for the specific problem, and they have some funding flow that saves them the effort of tapping the more general public. They would rather use the resources they have to attack the problem itself. You may argue with the specific point of compromise, but it is not illogical.
It just so happens that "solving" uFAI risk would most likely solve all other problems by triggering a friendly Singularity, but that does not make SIAI a general existential-risk fighting unit.
Replies from: JoshuaZ↑ comment by JoshuaZ · 2010-07-28T04:39:28.151Z · LW(p) · GW(p)
It just so happens that "solving" uFAI risk would most likely solve all other problems by triggering a friendly Singularity
This seems unlikely to me. Even if you completely solve the problem of Friendly AI, you might lack the processing power to implement it. Or it might turn out that there are fundamental limits which prevent a Singularity event from taking place. The first problem seems particularly relevant, given that to someone concerned about uFAI the goal presumably is to solve the Friendliness problem well before we're anywhere near actually having functional general AI. No one wants this to be cut close, and there's no a priori reason to think it would be cut close. (Indeed, if it did seem to be getting cut close, one could arguably use that as evidence that we're in a simulation and that this is a semifictionalized account with a timeline specifically engineered to create suspense and drama.)
↑ comment by PhilGoetz · 2010-07-28T03:22:50.017Z · LW(p) · GW(p)
SIAI seems poorly suited to generating interest in and concern for existential risk, and may very well be lowering the prestige attached to investigating existential risk rather than raising it.
Why? How would you do it differently?
↑ comment by Roko · 2010-07-22T23:46:13.191Z · LW(p) · GW(p)
But surely if you donated an amount that was social-utility optimizing to a charity like StopTB, you would personally be worse off, partly because of negative reactions from people close to you?
Replies from: Unnamed↑ comment by Unnamed · 2010-07-22T23:57:04.137Z · LW(p) · GW(p)
That's true. If you gave everything you could, keeping only enough so that you could keep working & making money, that would probably be bad for you (including your social life). I suppose there's a Laffer-type curve for it. But most people don't give enough to be in the range where there are significant negative personal consequences to additional giving, and multifoliaterose's post didn't focus on those extreme levels of giving.
Replies from: Roko↑ comment by Roko · 2010-07-23T00:05:33.928Z · LW(p) · GW(p)
It seems that the amount he suggested was neither best for you nor best for social utility, so it is a trade-off.
The argument I have against his post is the idea that the two incentives line up, whereas I think you and I agree that they trade off against each other.
Replies from: multifoliaterose↑ comment by multifoliaterose · 2010-07-23T01:16:03.409Z · LW(p) · GW(p)
My position is not that the two incentives line up perfectly. My post was suggesting the possibility that at the margin, most Americans would be happier if they donated noticeably more or donated noticeably better.
Replies from: Unnamed, Roko↑ comment by Unnamed · 2010-07-23T02:23:19.859Z · LW(p) · GW(p)
I was also thinking at the margin. There are some margins where what helps the self and what helps social utility conflict, and some where they line up or are basically independent. At least in our demographic (well-educated people in OECD countries), I think that most people are at a point where giving more to effective non-weird charity would at least not be a noticeable decline for the self (and for some people it would be an improvement). There's likely to be more conflict for large increases in giving or for weird charities, but Roko's post seems to treat the conflict between self & social utility as more fundamental than that.
↑ comment by Roko · 2010-07-23T01:53:51.866Z · LW(p) · GW(p)
Ok, I disagree with you. But point taken: the incentives could fail to line up perfectly, but still line up for small amounts of donation.
It would be interesting if this disagreement were testable.
Replies from: Unknowns↑ comment by Unknowns · 2010-07-23T03:31:45.289Z · LW(p) · GW(p)
The disagreement is easily testable, it just requires that enough people test multifoliaterose's suggestions. He says that he himself became happier by donating more. Do you think he isn't telling the truth?
Of course, the disagreement will not be tested in practice, because no one or very few will be willing to test his suggestion, seeing that such a test would be quite expensive.
Replies from: multifoliaterose↑ comment by multifoliaterose · 2010-07-23T04:17:52.626Z · LW(p) · GW(p)
Of course, the disagreement will not be tested in practice, because no one or very few will be willing to test his suggestion, seeing that such a test would be quite expensive.
Do you find my suggestion that such a test would be worth it for individual prospective donors to perform (based on expected returns considerations) unconvincing?
Replies from: Unknowns↑ comment by Unknowns · 2010-07-23T07:31:19.914Z · LW(p) · GW(p)
I have no doubt it would be worth it. In fact, I expect you are right. Even giving a beggar $20 instead of $1 increased my happiness significantly. But due to people's selfishness, in general they will not be willing to test it even if the expected return is positive.
comment by NancyLebovitz · 2010-07-23T07:52:48.544Z · LW(p) · GW(p)
Just to underline something: multifoliaterose did give 5%. What's perhaps unusual is that he gave it in one swell foop.
IIRC, Americans give about 2%/year on the average, which implies it isn't all that unusual to give twice that much.
I doubt it's possible to stop seeing the untested effectiveness of most charities once you've seen it.
Replies from: Roko
comment by Violet · 2010-07-23T13:52:54.103Z · LW(p) · GW(p)
Have you considered that some of us might have utility functions that do have terms for socially distant people? Thus charity can give us direct utility, which your analysis seems to ignore.
Second, endpoints are rarely optimal. E.g. eating only tuna and nothing else could be unhealthy and weird, but that does not imply that eating some tuna is unhealthy or weird. Thus your analysis seems to miss the obvious answer.
Replies from: Roko
comment by SilasBarta · 2010-07-24T16:18:47.800Z · LW(p) · GW(p)
How did this article go from -8 to +8?
Replies from: JGWeissman, Blueberry↑ comment by JGWeissman · 2010-07-24T17:18:38.332Z · LW(p) · GW(p)
It didn't. The related article that was at -8 was deleted.
Replies from: ShardPhoenix↑ comment by ShardPhoenix · 2010-07-25T13:31:10.836Z · LW(p) · GW(p)
Why was the other article deleted? Someone in another thread said something about a banned topic?
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-02-04T15:45:53.787Z · LW(p) · GW(p)
Gotta love the text of the page I get to by following that link.
comment by Bart119 · 2012-05-01T20:03:56.103Z · LW(p) · GW(p)
Is this the right place to engage in thread necromancy? We'll see.
I've been troubled by the radical altruism argument for some years, and never had a very satisfactory reason for rejecting it. But I just thought of an argument against it. In brief, if people believe that their obligation is to give just about everything they have to charity, then they have created a serious disincentive to create more wealth.
It starts with the argument against pure socialism. In that system, each person works as hard as he or she can in order to produce for the good of society, and society takes 100% of the production and distributes it to people according to need (or utility, as best it can figure it out). This is appealing in many ways. The main determinants of a person's productivity are factors beyond his or her control: genetic endowment, early childhood experiences and what ideas you're exposed to. Even free will is suspect. So if what you can produce is due to factors beyond your control, why should you benefit from it? Distribute according to need instead. It's really a very nice idea. The only problem is, it doesn't work. People in general seem to be not at all perfectible, and when you change the incentives, people's behavior changes. They stop working hard, see others working less hard, see a system that's broken and work even less hard, and eventually everyone loses. I'm hoping there are few enough pure socialists out there that this won't become a political battle, which I realize is discouraged here.
Anyway, the same reasoning could apply to extreme altruism. If a person believes that their obligation is to give just about everything they have to charity, then they have created a serious disincentive to create more wealth. Sure, a noble individual could resist that, just as some few people under communism worked their hardest. So each person can ask if they are that noble or not.
I'm actually in favor of coerced altruism: taxes. My "cheating detector" that evolution has endowed me with is alive and well, and I don't really want to volunteer to redistribute my wealth unless other people are going to participate too. Yeah, it's part of a huge, messy, inefficient political process to determine how much redistribution to do (a tiny fraction of 100%) but the idea of getting everyone to contribute instead of a small minority of not-very-rich people makes it worth it. This may well be an unpopular view. Pointers to where this has been discussed elsewhere are welcome in lieu of reopening some old issue.