Against Cryonics & For Cost-Effective Charity

post by multifoliaterose · 2010-08-10T03:59:28.119Z · LW · GW · Legacy · 189 comments

Contents

  Advocacy of cryonics within the Less Wrong community
  Is cryonics selfish?
  Could funding cryonics be socially optimal?
  Is cryonics rational?
  Implications

Related To: You Only Live Twice, Normal Cryonics, Abnormal Cryonics, The Threat Of Cryonics, Doing your good deed for the day, Missed opportunities for doing well by doing good

Summary: Many Less Wrong posters are interested in advocating for cryonics. While signing up for cryonics is an understandable personal choice for some people, from a utilitarian point of view the money spent on cryonics would be much better spent on a cost-effective charity. People who sign up for cryonics out of a generalized concern for others would do better not to sign up and instead to donate any money that they would have spent on cryonics to a cost-effective charity. People who are motivated by a generalized concern for others to advocate signing up for cryonics would do better to advocate that others donate to cost-effective charities.

Added 08/12:  The comments to this post have prompted me to add the following disclaimers:

(1) Wedrifid understood me to be placing moral pressure on people to sacrifice themselves for the greater good. As I've said elsewhere, "I don't think that Americans should sacrifice their well-being for the sake of others. Even from a utilitarian point of view, I think that there are good reasons for thinking that it would be a bad idea to do this." My motivation for posting on this topic is the one described by rhollerith_dot_com in his comment.

(2) In line with the above comment, when I say "selfish" I don't mean it with the negative moral connotations that the word carries; I mean it as a descriptive term. There are some things that we do for ourselves and there are some things that we do for others - this is as things should be. I'd welcome any suggestions for a substitute for the word "selfish" that has the same denotation but is free of negative connotations.

(3) Wei_Dai thought that my post assumed a utilitarian ethical framework. I can see how my post may have come across that way. However, while writing the post I was not assuming that the reader subscribes to utilitarianism. When I say "we should" in my post I mean "to the extent that we subscribe to utilitarianism, we should." I thought that this would be clear from context, but it turns out that I was mistaken on this point.

As an aside, I do think that there are good arguments for a (sophisticated sort of) utilitarian ethical framework. I will make a post about this after reading Eliezer's posts on utilitarianism.

(4) Orthonormal thinks that I'm treating cryonics differently from other expenditures. This is not the case: from my (utilitarian) point of view, expenditures should be judged exclusively on their social impact. The reason I wrote a post about cryonics is that I had the impression that there are members of the Less Wrong community who view cryonics expenditures and advocacy as "good" in a broader sense than I believe is warranted. But (from a utilitarian point of view) cryonics is one of thousands of things that people ascribe undue moral significance to. I certainly don't think that advocacy of and expenditures on "cryonics" are worse from a utilitarian point of view than advocacy of and expenditures on something like "recycling plastic bottles".

I've also made the following modifications to my post:

(A) In response to a valid objection raised by Vladimir_Nesov, I've added a paragraph clarifying that Robin Hanson's suggestion that cryonics might be an effective charity is based on the idea that purchasing cryonics will drive costs down, together with an explanation of why I think that my points still hold.

(B) I've added a third example of advocacy of cryonics within the Less Wrong community to make it more clear that I'm not arguing against a straw man.

Without further ado, below is the main body of the revised post.


Advocacy of cryonics within the Less Wrong community

Most recently, in Christopher Hitchens and Cryonics, James_Miller wrote:

I propose that the Less Wrong community attempt to get Hitchens to at least seriously consider cryonics.


Eliezer has advocated cryonics extensively. In You Only Live Twice, Eliezer says:

If you've already decided this is a good idea, but you "haven't gotten around to it", sign up for cryonics NOW.  I mean RIGHT NOW.  Go to the website of Alcor or the Cryonics Institute and follow the instructions.

[...]

Not signing up for cryonics - what does that say?  That you've lost hope in the future.  That you've lost your will to live.  That you've stopped believing that human life, and your own life, is something of value.

[...]

On behalf of the Future, then - please ask for a little more for yourself.  More than death.  It really... isn't being selfish.  I want you to live.  I think that the Future will want you to live.  That if you let yourself die, people who aren't even born yet will be sad for the irreplaceable thing that was lost.

In Normal Cryonics Eliezer says:

You know what?  I'm going to come out and say it. I've been unsure about saying it, but after attending this event, and talking to the perfectly ordinary parents who signed their kids up for cryonics like the goddamn sane people do, I'm going to come out and say it:  If you don't sign up your kids for cryonics then you are a lousy parent.

In The Threat of Cryonics, lsparrish writes:

...we cannot ethically just shut up about it. No lives should be lost, even potentially, due solely to lack of a regular, widely available, low-cost, technologically optimized cryonics practice. It is in fact absolutely unacceptable, from a simple humanitarian perspective, that something as nebulous as the HDM -- however artistic, cultural, and deeply ingrained it may be -- should ever be substituted for an actual human life.

Is cryonics selfish?

There's a common attitude within the general public that cryonics is selfish. This is exemplified by a quote from the recent profile of Robin Hanson and Peggy Jackson in the New York Times article titled Until Cryonics Do Us Part:

“You have to understand,” says Peggy, who at 54 is given to exasperation about her husband’s more exotic ideas. “I am a hospice social worker. I work with people who are dying all the time. I see people dying All. The. Time. And what’s so good about me that I’m going to live forever?”

As suggested by Thursday in a comment to Robin Hanson's post Modern Male Sati, part of what seems to be going on here is that people subscribe to a "Just Deserts" theory of which outcomes ought to occur:

I think another of the reasons that people dislike cryonics is our intuition that immortality should have to be earned. It isn’t something that a person is automatically entitled to.

Relatedly, people sometimes believe in egalitarianism even when achieving it comes at the cost of imposing handicaps on the fortunate as in the Kurt Vonnegut novel Harrison Bergeron.

I believe that the objections to cryonics which are rooted in the belief that people should get what they deserve, or in the idea that egalitarianism is so important that we should handicap the privileged to achieve it, are maladaptive. So I think that the common attitude that cryonics is selfish is not held for good reason.

At the same time, it seems very likely to me that paying for cryonics is selfish in the sense that many personal expenditures are. Many personal expenditures that people engage in come with an opportunity cost of providing something of greater value to someone else. My general reaction to cryonics is the same as Tyler Cowen's: rather than signing up for cryonics, "why not save someone else's life instead?"

Could funding cryonics be socially optimal?

In Cryonics As Charity, Robin Hanson explores the idea that paying for cryonics might be a cost-effective charitable expenditure.

...buying cryonics seems to me a pretty good charity in its own right.

[...]

OK, even if consuming cryonics helps others, could it really help as much as direct charity donations? Well it might be hard to compete with cash directly handed to those most in need, but remember that most real charities suffer great inefficiencies and waste from administration costs, agency failures, and the inattention of donors.

Hanson's argument in favor of cryonics as a charity is based on the ideas that buying cryonics drives the cost of cryonics down, making it easier for other people to purchase it, and that purchasing cryonics normalizes the practice, which raises the probability that people who are cryopreserved will be revived. There are several reasons why I don't find these points a compelling argument for cryonics as a charity:

(i) In the absence of human genetic engineering, I think it very unlikely that the social stigma against cryonics can be overcome. So I assign a small expected value to the social benefits that Hanson envisages arising from purchasing cryonics.

(ii) Because of the social stigma against cryonics, signing up for cryonics or advocating cryonics has the negative unintended consequence of straining interpersonal relationships, as hinted at in Until Cryonics Do Us Part. This negative unintended consequence must be weighed against the potential social benefits attached to purchasing cryonics.

(iii) Point #3 below: purchasing cryonics may be zero-sum on account of preventing future potential humans and transhumans from living.

Overall I believe that the positive indirect consequences of purchasing cryonics are approximately outweighed by the negative indirect consequences.

How do the direct consequences of cryonics compare with the direct consequences of the best developing world aid charities? Let's look at the numbers. According to the Alcor website, Alcor charges $150,000 for whole-body cryopreservation and $80,000 for neurocryopreservation. GiveWell estimates that VillageReach and StopTB save lives at a cost of $1,000 each. Now, the standard of living is lower in the developing world than in the developed world, so saving lives in the developing world is (on average) less worthwhile than saving lives in the developed world. Last February Michael Vassar estimated (based on his experience living in the developing world, among other things) that one would have to spend $50,000 on developing world aid to save a life of quality comparable to his own. Michael's estimate may be too high or too low, and quality of life within the developed world is variable, but for concreteness let's equate the value of 40 years of life of the typical prospective cryonics sign-up with $50,000 worth of cost-effective developing world aid. Is buying cryonics for oneself then more cost-effective than developing world aid?

Here are some further considerations which are relevant to this question:

  1. Cryopreservation is not a guarantee of revitalization. In Cryonics As Charity and elsewhere, Robin Hanson has estimated the probability of revitalization at 5% or so.
  2. Revitalization is not a guarantee of a very long life - after one is revived the human race could go extinct.
  3. Insofar as the resources that humans have access to are limited, being revived may have the opportunity cost of another human/transhuman being born.
  4. If humans develop life extension technologies before the prospective cryonics sign-up dies then the prospective cryonics sign-up will probably have no need of cryonics.
  5. If humans develop Friendly AI soon then any people in the developing world whose lives are saved might have the chance to live very long and happy lives.

With all of these factors in mind, I presently believe that, from the point of view of general social welfare, donating to VillageReach or StopTB is much more cost-effective than paying for cryopreservation.
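The comparison above can be made concrete with a back-of-envelope calculation. The figures below are the post's own rough numbers (Alcor's $80,000 neuropreservation price, Hanson's ~5% revival estimate, Vassar's $50,000-of-aid-per-developed-quality-life conversion), not settled estimates; the sketch only illustrates how the two options compare before applying further discounts like considerations #2-#5.

```python
# Back-of-envelope comparison of cryonics vs. developing world aid,
# using only the rough figures quoted in the post (all assumptions).

NEURO_COST = 80_000          # USD, Alcor neurocryopreservation (the cheaper option)
P_REVIVAL = 0.05             # Hanson's rough probability of successful revival
AID_PER_EQUIV_LIFE = 50_000  # USD of aid ~ one developed-world-quality life (Vassar's estimate)

# Expected developed-world-equivalent lives saved per dollar spent:
cryonics = P_REVIVAL / NEURO_COST   # 6.25e-07, before discounting for extinction risk etc.
aid = 1 / AID_PER_EQUIV_LIFE        # 2e-05

print(f"Aid is ~{aid / cryonics:.0f}x more cost-effective on these numbers")  # ~32x
```

Even on these charitable-to-cryonics numbers, aid comes out roughly 32 times more cost-effective per dollar, and considerations #2-#5 above would widen the gap further.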

It may be still more cost-effective to fund charities that reduce global catastrophic risk. The question is whether it's possible to do things that meaningfully reduce global catastrophic risk. Some people in the GiveWell community have the attitude that there's so much stochastic dilution of efforts to reduce global catastrophic risk that developing world aid is a more promising cause than existential risk reduction. I share these feelings in regard to SIAI as presently constituted, for reasons which I described in the linked thread. Nevertheless, I personally believe that within 5-10 years there will emerge strong opportunities to donate money to reduce existential risk, opportunities which may be orders of magnitude more cost-effective than developing world aid.

It may be possible to construct a good argument that funding cryonics is socially optimal. But those who supported cryonics before considering that question should beware of falling prey to confirmation bias when thinking it through.

Is cryonics rational?

If you believe that funding cryonics is socially optimal and you have generalized philanthropic concern, then you should fund cryonics. As I say above, I think it very unlikely that funding cryonics is anywhere near socially optimal. For the sake of definiteness and brevity, in the remainder of this post I will assume that funding cryonics is far from socially optimal.

Of course, people have many values and generally give greater weight to their own well-being and the well-being of family and friends than to the well-being of unknown others. I see this as an inevitable feature of current human nature and don't think that it makes sense to try to change it at present. People (including myself) constantly spend money on things (restaurant meals, movies, CDs, travel expenses, jewelry, yachts, private airplanes, etc.) which are apparently far from socially optimal. I view cryonics expenses in a similar light. Just as it may be rational for some people to buy expensive jewelry, it may be rational for some people to sign up for cryonics. I think that cryonics is unfairly maligned, and I largely agree with Robin Hanson's article Picking on Cryo-Nerds.

On the flip side, just as it would be irrational for some people to buy expensive jewelry, it would be irrational for some people to sign up for cryonics. We should view signing up for cryonics as an understandable indulgence rather than a moral imperative. Advocating that people sign up for cryonics is like advocating that people buy diamond necklaces. I believe that our advocacy efforts should be focused on doing the most good, not on getting people to sign up for cryonics.

I anticipate that some of you will object, saying "But wait! The social value of signing up for cryonics is much higher than the social value of buying a diamond necklace!" This may be true, but it is irrelevant. Assuming that funding cryonics is orders of magnitude less efficient than the best philanthropic option, in absolute terms the social opportunity cost of funding cryonics is very close to the social opportunity cost of buying a diamond necklace.

Because charitable efforts vary in cost-effectiveness by many orders of magnitude in unexpected ways, there's no reason to think that supporting the causes that have the most immediate intuitive appeal to oneself is at all close to socially optimal. This is why it's important to Purchase Fuzzies and Utilons Separately. If one doesn't, one can end up expending a lot of energy ostensibly dedicated to philanthropy while accomplishing a very small fraction of what one could have accomplished. This is arguably what's going on with cryonics advocacy. As Holden Karnofsky has said, there's nothing wrong with selfish giving - just don't call it philanthropy. Holden's post relates to the phenomenon discussed in Yvain's great post Doing your good deed for the day. Quoting from Holden's post:

I don’t think it’s wrong to make gifts that aren’t “optimized for pure social impact.” Personally, I’ve made “gifts” with many motivations: because friends asked, because I wanted to support a resource I personally benefit from, etc. I’ve stopped giving to my alma mater (which I suspect has all the funding it can productively use) and I’ve never made a gift just to “tell myself a nice story,” but in both cases I can understand why one would.

Giving money for selfish reasons, in and of itself, seems no more wrong than unnecessary personal consumption (entertainment, restaurants, etc.), which I and everyone else I know does plenty of. The point at which it becomes a problem, to me, is when you “count it” toward your charitable/philanthropic giving for the year.

[...]

I believe that the world’s wealthy should make gifts that are aimed at nothing but making the world a better place for others. We should challenge ourselves to make these gifts as big as possible. We should not tell ourselves that we are philanthropists while making no gifts that are really aimed at making the world better.

But this philosophy doesn’t forbid you from spending your money in ways that make you feel good. It just asks that you don’t let those expenditures lower the amount you give toward really helping others.

I find it very likely that promoting and funding cryonics for philanthropic reasons is irrational.

Implications

The members of the Less Wrong community have uncommonly high analytical skills. These analytical skills are potentially very valuable to society. Collectively, we have a major opportunity to make a positive difference in people's lives. This opportunity will amount to little if we use our skills for things like cryonics advocacy. Remember, rationalists should win. I believe that we should use our skills for what matters most: helping other people as much as possible. To this end, I would make four concrete suggestions. I believe that

(A) We should encourage people to give more when we suspect that in doing so, they would be behaving in accordance with their core values. As Mass_Driver said, there may be

huge opportunity for us to help people help both themselves and others by explaining to them why charity is awesome-r than they thought.

As I've mentioned elsewhere, according to Fortune magazine, the 400 biggest American taxpayers donate an average of only 8% of their income a year. For most multibillionaires, it's literally the case that millions of people are dying because the multibillionaire is unwilling to lead a slightly less opulent lifestyle. I'm sure that this isn't what these multibillionaires would want if they were thinking clearly. These people are not moral monsters. Melinda Gates has said that it wasn't until she and Bill Gates visited Africa that they realized that they had a lot of money to spare.

The case of multibillionaires highlights the pathological effects of human biases on people's willingness to give. Multibillionaires are not unusually irrational. If anything, multibillionaires are unusually rational. Many of the people you know would behave similarly if they were multibillionaires. This gives rise to a strong possibility that they're presently exhibiting analogous behavior on a smaller scale on account of irrational biases.

(B) We should work to raise the standards for analysis of charities for impact and cost-effectiveness, and promote effective giving. To this end, I strongly recommend exploring the website and community at GiveWell. The organization is very transparent and is welcoming of and responsive to well-considered feedback.

(C) We should conceptualize and advocate high expected value charitable projects but we should be especially vigilant about the possibility of overestimating the returns of a particular project. Less Wrong community members have not always exhibited such vigilance, so there is room for improvement on this point.

(D) We should ourselves donate some money that's optimized for pure positive social impact. Not so much that doing so noticeably interferes with our ability to get what we want out of life, but noticeably more than is typical for people in our respective financial situations. We should do this not only to help the people who will benefit from our contributions, but to prove to ourselves that the analytical skills which are such an integral part of us can help us break the shackles of unconscious self-serving motivations, lift ourselves up, and do what we believe in.

189 comments

Comments sorted by top scores.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-11T04:18:32.957Z · LW(p) · GW(p)

So be it first noted that everyone who complains about trying to trade off cryonics against charity, instead of movie tickets or heart transplants for old people, is absolutely correct about cryonics being unfairly discriminated against.

That said, reading through these comments, I'm a bit disturbed that no one followed the principle of using the Least Convenient Possible World / strongest argument you can reconstruct from the corpse. Why are you accepting the original poster's premise of competing with African aid? Why not just substitute donations to the Singularity Institute?

So I know that, obviously, and yet I go around advocating people sign up for cryonics. Why? Because I'm selfish? No. Because I'm under the impression that a dollar spent on cryonics is marginally as useful as a dollar spent on the Singularity Institute? No.

Because I don't think that money spent on cryonics actually comes out of the pocket of the Singularity Institute? Yes. Obviously. I mean, a bit of deduction would tell you that I had to believe that.

Money spent on life insurance and annual membership in a cryonics organization rapidly fades into the background of recurring expenses, just like car insurance. To the extent it substituted for anything, it would tend to substitute for buying a house smaller by $300/year on the mortgage, or retirement savings, or something else that doesn't accomplish nearly as much good as cryonics.

There are maybe two or three people in the entire world who spend only the bare possible minimum on themselves, and contribute everything else to a rationally effective charity. They have an excuse for not signing up. No one else does.

And if you do sign up for cryonics, that contributes to a general frame of mind of "Wait, there are clever solutions to all the world's problems, this planet I'm living in doesn't make any sense, it's okay to do something that other people aren't doing, I'm part of the community of people who are part of the future, and that's why I'm going to donate to SIAI." It's a gateway drug; it's part of the ongoing lifestyle of someone with one foot in the future, staring back at a mad world and doing what they can to save it.

The basic fact about rational charity is that charity is not a matter of people starting out with fixed resources for charity and allocating them optimally. It is about the variance in the tiny little percentage of their income people give to rationally effective charity in the first place. And if I had to place my bets on empirical outcomes, I would bet that this blog post helped decrease that percentage in its readers, more than it actually resulted in any dollars going to an effective charity (i.e., SIAI, who is anyone kidding with this talk about development aid?) by helping to foster a sense of guilt and "ugh" around rational charity.

And finally, with all that said, if we actually did forget about the Singularity and the expected future value of the galaxy and take the original post at face value, if you consider the interval between a planet with slightly more developed poor countries and a planet signed up for cryonics, and ask about marginal impacts you can have on both relative to existing resources, then clearly you should be signing up for cryonics. I am tempted to add a sharp "Duh" to the end of this statement.

But of course, the actual impact of cryonics, just like the actual impact of development aid, in any rational utilitarian calculation, is simply its impact on the future of the galaxies, i.e., its impact on existential risk. Do I think that impact is net negative? Obviously not.

comment by [deleted] · 2010-10-17T15:12:17.332Z · LW(p) · GW(p)

There are maybe two or three people in the entire world who spend only the bare possible minimum on themselves, and contribute everything else to a rationally effective charity. They have an excuse for not signing up. No one else does.

The world is full of poor people who genuinely cannot afford to sign up for cryonics. Whether they spend whatever pittance may be left to them above bare subsistence on charity or on rum is irrelevant.

The world also contains many people like me who can afford to eat and live in a decent apartment, but who can't afford health insurance. I'm not so convinced I should be thinking about cryonics at this point either.

comment by katydee · 2010-08-11T04:23:49.753Z · LW(p) · GW(p)

Short version:

It's not that cryonics is one of the best ways that you can spend money, it's that cryonics is one of the best ways that you can spend money on yourself. Since almost everyone who is likely to read this spends a fair amount of money on themselves, almost everyone who is likely to read this would be well-served by signing up for cryonics.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-11T06:02:38.784Z · LW(p) · GW(p)

Short but not true. Cryonics is one of the ways that, in the self-directed part of your life, you can pretend to be part of a smarter civilization, be the sort of sane person who also fights existential risk in the other-directed part of their life. Anyone who spends money on movie tickets does not get to claim that they have no self-directed component to their life.

comment by katydee · 2010-08-11T06:57:21.157Z · LW(p) · GW(p)

I don't think I'm suggesting that people don't have a self-directed component to their lives, though I suppose there could be some true "charity monks" or something out there. I'd be surprised, though, since I wouldn't even count someone like Peter Singer as without self-directed elements to his life. I only left the potential exception there because I think there is a chance that someone reading the post will not have sufficient funds to purchase the life insurance necessary for cryonic preservation.

comment by utilitymonster · 2010-08-12T15:44:58.776Z · LW(p) · GW(p)

There are maybe two or three people in the entire world who spend only the bare possible minimum on themselves, and contribute everything else to a rationally effective charity. They have an excuse for not signing up. No one else does.

I guess I agree that only the specified people can be said to have made consistently rational decisions when it comes to allocating money between benefiting themselves and benefiting others (at least of those who know something about the issues). I don't think this implies that all but these people should sign up for cryonics. General point: [Your actions cannot be described as motivated by coherent utility function unless you do A] does not imply [you ought to do A].

Simple example: Tom cares about the welfare of others as much as his own, but biases lead him to consistently act as if he cared about his welfare 1,000 times as much as the welfare of others. Tom could overcome these biases, but he has not in the past. In a moment when he is unaffected by these biases, Tom sacrifices his life to save the lives of 900 other people.

[All that said, I take your point that it may be rational for you to advocate signing up for cryonics, since cryonics money and charity money may not be substitutes.]

comment by Paul Crowley (ciphergoth) · 2010-08-11T16:31:15.006Z · LW(p) · GW(p)

Are you suggesting that cryonics advocacy is in any sense an efficient use of time to reduce x-risk? I'd like to believe that since I spend time on it myself, but it seems suspiciously convenient.

comment by wedrifid · 2010-08-11T06:10:51.392Z · LW(p) · GW(p)

There are maybe two or three people in the entire world who spend only the bare possible minimum on themselves, and contribute everything else to a rationally effective charity. They have an excuse for not signing up. No one else does.

If you are opening the scope to the entire world it would seem fair to extend the excuse to all those who don't even have the bare possible minimum for themselves and also don't live within 100 km of anyone who understands cryonics.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-11T18:42:34.371Z · LW(p) · GW(p)

Agreed; your correction is accepted.

comment by XiXiDu · 2010-08-11T15:39:04.823Z · LW(p) · GW(p)

Why not just substitute donations to the Singularity Institute?

Because given my current educational background I am not able to judge the following claims (among others) and therefore perceive it as unreasonable to put all my eggs in one basket:

  • Superhuman Artificial Intelligence (the runaway kind, i.e. God-like and unbeatable not just at Chess or Go.)
  • Advanced real-world molecular nanotechnology (the grey goo kind the above could use to mess things up.)
  • The likelihood of exponential growth versus a slow development over many centuries.
  • That it is worth it to spend most on a future whose likelihood I cannot judge.
  • That Eliezer Yudkowsky (SIAI) is the right and only person who should be working to soften the above.

What do you expect me to do? Just believe you? Like I believed so much in the past which made sense but turned out to be wrong? And besides, my psychological condition wouldn't allow me to devote all my resources to the SIAI without ever going to the movies or the like. The thought makes me reluctant to give anything at all.

ETA

Do you have an explanation for the circumstance that you are the only semi-popular person who has figured all this out? The only person who's aware of something that might shatter the utility of the universe, if not multiverse? Why is it that people like Vernor Vinge, Charles Stross or Ray Kurzweil are not running amok using all their influence to convince people of the risks ahead, or at least give all they have to the SIAI?

I'm talking to quite a few educated people outside this community. They are not, as some assert, irrational nerds who doubt all this for no particular reason. Rather they tell me that there are too many open questions to worry about the possibilities depicted on this site rather than other near-term risks that might very well wipe us out.

Why aren't Eric Drexler, Gary Drescher or other AI researches like Marvin Minsky worried to the extent that they signal their support for your movement?

comment by Paul Crowley (ciphergoth) · 2010-08-11T16:35:43.292Z · LW(p) · GW(p)

You may be forced to make a judgement under uncertainty.

comment by XiXiDu · 2010-08-11T16:59:37.149Z · LW(p) · GW(p)

My judgement of and attitude towards a situation is necessarily as diffuse as my knowledge of its underlying circumstances and the reasoning involved. Therefore I perceive it as unreasonable to put all my eggs in one basket.

The state of affairs regarding the SIAI and its underlying rationale and rules of operation are not sufficiently clear to me to give it top priority.

Many of the arguments on this site involve a few propositions and the use of probability to legitimize action should those propositions prove accurate. Here much is uncertain, to an extent that I'm not able to judge the nested probability estimates. I'm already unable to judge the likelihood of something like the existential risk of exponentially evolving superhuman AI compared to us living in a simulated reality. Even if you tell me, am I to believe the data you base those estimates on?

Maybe after a few years of study I'll know more. But right now, if I were forced to choose between the future and the present, between the SIAI and having some fun, I'd have some fun.

Replies from: wedrifid
comment by wedrifid · 2010-08-11T17:06:52.497Z · LW(p) · GW(p)

You ask a lot of good questions in these two comments. Some of them are still open questions in my mind.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-11T18:50:10.677Z · LW(p) · GW(p)

put all my eggs in one basket

Keep reading the Less Wrong sequences. The fact that you used this phrase when it nakedly exposes reasoning that is a direct, obvious violation of expected utility maximization (with any external goal, that is, rather than psychological goals) tells me that rather than trying to write new material for you, I should advise you to keep reading what's already been written, until it no longer seems at all plausible to you that citing Charles Stross's disbelief is a good argument for remaining as a bystander, any more than it will seem remotely plausible to you that "all your eggs in one basket" is a consideration that should guide expected-utility-maximizing personal philanthropy (for amounts less than a million dollars, say).

And of course I was not arguing that you should give up movie tickets for SIAI. It is exactly this psychological backlash that was causing me to be sharp about the alleged "cryonics vs. SIAI" tradeoff in the first place.

Replies from: XiXiDu, thomblake
comment by XiXiDu · 2010-08-11T19:11:08.121Z · LW(p) · GW(p)

The fact that you used this phrase when it nakedly exposes reasoning that is a direct, obvious violation of expected utility maximization...

What I meant to convey by that phrase is that, given my current knowledge, I cannot expect to get the promised utility payoff that would justify making the SIAI a top priority. I'm donating to the SIAI, but I also spend considerable resources on maximizing utility in the present. Enjoying life, so to say, is a safety net in case my inability to judge the probability of a positive payoff is eventually resolved in the negative.

...until it no longer seems at all plausible to you that citing Charles Stross's disbelief is a good argument for remaining as a bystander...

I believe hard-SF authors certainly know a lot more than I do, so far, about related topics. I could have picked Greg Egan. That's beside the point though; it's not just Stross or Egan but everyone versus you and some unknown followers. What about the other Bayesians out there? Are they simply not as literate as you in the maths, or do they somehow teach, but not use, their own methods of reasoning and decision making?

comment by thomblake · 2010-08-11T19:17:01.663Z · LW(p) · GW(p)

Having read the sequences, I'm still unsure where "a million dollars" comes from. Why not diversify when you have less money than that?

Replies from: JGWeissman
comment by JGWeissman · 2010-08-11T19:26:29.398Z · LW(p) · GW(p)

I'm still unsure where "a million dollars" comes from.

It is an estimate of the amount you would have to donate to the most marginally effective charity, to decrease its marginal effectiveness below the previous second most marginally effective charity.
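JGWeissman's definition can be made concrete with a toy model. Everything below is invented for illustration (the curves, the constants, the function names): with diminishing returns, the threshold is simply the donation size at which the top charity's marginal utility falls below the runner-up's.

```python
# Hypothetical sketch of where a figure like "a million dollars" comes from.
# Charity A starts out with higher marginal utility than charity B, but
# A's marginal utility diminishes as total donations accumulate.

def marginal_utility_A(donated):
    # utils per dollar; made-up diminishing-returns curve
    return 10.0 / (1.0 + donated / 500_000)

MARGINAL_UTILITY_B = 5.0  # assumed roughly constant over this range

def crossover_donation(step=1_000):
    """Smallest donation at which A's marginal utility drops below B's."""
    donated = 0
    while marginal_utility_A(donated) >= MARGINAL_UTILITY_B:
        donated += step
    return donated

print(crossover_donation())  # with these made-up curves: 501000
```

Below that crossover, every marginal dollar does the most good in charity A; only a donor large enough to push past it has any reason to consider the second charity.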

Replies from: thomblake, XiXiDu
comment by thomblake · 2010-08-11T20:03:29.473Z · LW(p) · GW(p)

I can see following that for charities with high-probability results; I would certainly support that with respect to deciding whether to give to an African food charity versus an Asian food charity, for instance. But for something like existential risk, if there are two charities that I believe each have a 1% chance of working and an arbitrarily high, roughly equal payoff, then it seems I should want both invested in. I might pick one and then hope someone else picks the other, but it seems equivalent if not better to just give equal money to both, to hedge my bets.
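For what it's worth, the hedging intuition above can be checked with a toy expected-value calculation (all numbers invented): for a donor small enough that each dollar shifts a project's success probability roughly linearly, splitting and concentrating give the same expected payoff, so hedging per se buys nothing.

```python
import math

# Hypothetical model: two projects, each with a baseline 1% chance of an
# arbitrarily large, roughly equal payoff. Assume a small donor's dollars
# shift success probability linearly (the key, and debatable, assumption).
PAYOFF = 1e12
BASE_P = 0.01
DP_PER_DOLLAR = 1e-9  # assumed marginal effect, same for both projects

def expected_payoff(donation_a, donation_b):
    p_a = BASE_P + DP_PER_DOLLAR * donation_a
    p_b = BASE_P + DP_PER_DOLLAR * donation_b
    return p_a * PAYOFF + p_b * PAYOFF  # expected total payoff

concentrated = expected_payoff(10_000, 0)
split = expected_payoff(5_000, 5_000)
print(math.isclose(concentrated, split))  # True in the linear regime
```

If one project's marginal probability-per-dollar is even slightly higher, concentrating everything there dominates; diversification only starts to matter once the donor is large enough that the marginal effect itself diminishes.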

Replies from: thomblake
comment by thomblake · 2010-08-11T21:11:54.611Z · LW(p) · GW(p)

Okay, I suppose I could actually pay attention to what everybody else is doing, and just give all my money to the underrepresented one until it stops being underrepresented.

comment by XiXiDu · 2010-08-11T19:49:33.238Z · LW(p) · GW(p)

This is exactly what I'm having trouble accepting, let alone seeing through. There seems to be a highly complicated framework of estimations that support and reinforce each other. I'm not sure what you call this in English, but in German I'd call it a castle in the air.

And before you start downvoting this comment and telling me to learn about Solomonoff induction etc., I know that what I'm saying may simply be due to a lack of education. But that's what I'm arguing about here. And I bet that many who support the SIAI cannot explain the reasoning that led them to support the SIAI in the first place, or at least cannot substantiate the estimations with any evidence other than a coherent internal logic of reciprocally supporting probability estimations.

Replies from: FAWS
comment by FAWS · 2010-08-11T20:15:13.098Z · LW(p) · GW(p)

The figure "a million dollars" doesn't matter. The reasoning in this particular case is pretty simple. Assuming that you actually care about the future and not your personal self-esteem (the knowledge of personally having contributed to a good outcome), there is no reason why putting all your personal eggs in one basket should matter at all. You wouldn't want humanity to put all its eggs in one basket, but the only way your personal allocation could change that would be if you were the only person putting eggs into a particular basket. There may be a particular distribution of eggs that is optimal, but unless you think the distribution of everyone else's eggs is already optimal, you shouldn't distribute your personal eggs the same way; you should put them in the basket that is most underrepresented (measured by marginal utility, not by the ratio of actual to theoretically optimal allocation or any such nonsense), so as to move humanity's overall allocation closer to optimal. Unless, that is, you have so many eggs that the most underrepresented basket stops being underrepresented (= "a million dollars").
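The rule FAWS describes, put each marginal egg into whichever basket is currently most underrepresented by marginal utility, can be sketched as a greedy loop. The curves and numbers below are purely illustrative, not real charities:

```python
# Hypothetical sketch: greedily allocate a budget, one step at a time,
# to the basket whose next dollar does the most good given what
# everyone else has already contributed.

def greedy_allocation(budget, baskets, others, step):
    """baskets: dict name -> marginal-utility function of total funding."""
    mine = {name: 0 for name in baskets}
    for _ in range(budget // step):
        # pick the basket with the highest current marginal utility
        best = max(baskets, key=lambda n: baskets[n](others[n] + mine[n]))
        mine[best] += step
    return mine

# Made-up marginal-utility curves with diminishing returns:
baskets = {
    "X": lambda total: 10.0 / (1.0 + total / 1_000),
    "Y": lambda total: 8.0 / (1.0 + total / 1_000),
}
# Everyone else has already heavily funded X relative to Y:
others = {"X": 5_000, "Y": 0}
print(greedy_allocation(3_000, baskets, others, step=100))
```

Even though X's curve starts higher, the small donor's entire budget goes to the underfunded Y, because what matters is the marginal utility at the current overall allocation, not the baskets' intrinsic ranking.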

Replies from: XiXiDu
comment by XiXiDu · 2010-08-12T09:13:07.597Z · LW(p) · GW(p)

This might be sound reasoning. In this particular case you've made up a number and more or less based it on some idea of optimal egg allocation. That is all very well, but it is not exactly what I meant to say by using that phrase or by the comment you replied to, and it wasn't my original intention when replying to EY.

I can follow much of the reasoning and arguments on this site. But I'm currently unable to judge their overall credence. That is, are the conclusions justified? Is the coherent framework built around the SIAI based on firm ground?

I'm concerned that, however consistently, the LW community is updating on fictional evidence. My questions in the original comment were meant to inquire into the basic principles, the foundation on which the otherwise sound argumentation rests. That is, are you creating models to treat subsequent models, or are the propositions based on fact?

An example here is the treatment and use of MWI, and the conclusions, arguments and further estimations based on it. No doubt MWI is the only consistent non-magic interpretation of quantum mechanics. But that's all it is: an interpretation, a logically consistent deduction. Or should I rather call it an induction, since the inference seems to be of greater generality than the premises, at least within the LW community? But that's beside the point. The problem here is that such conclusions, which are at best weak evidence, are widely used as a basis for further speculations and estimations.

What I'm trying to argue here is that if the cornerstone of your argumentation, one of your basic tenets, is the likelihood of exponentially evolving superhuman AI, then, valid as that speculation may be given what we know about reality, you are already in over your head with debt. Debt in the form of other kinds of evidence. The point is not that it is a false hypothesis, or that it is not even wrong, but that you cannot base a whole movement and a huge framework of further inference and supportive argumentation on premises that are themselves not on firm ground.

Now you might argue that it's all about probability estimations. Someone else might argue that reports from the past need NOT provide evidence of what will occur in the future. But the gist of the matter is that a coherent and consistent framework of sound argumentation based on unsupported inference is nothing more than its description implies. It is fiction. Imagination allows for endless possibilities, while scientific evidence at least provides hints of what might be possible and what impossible. Any hint that empirical criticism provides gives you new information to build on, not because it bears truth value, but because it gives you an idea of where you want to go, an opportunity to try something. There is that which seemingly fails or contradicts itself, and that which seems to work and is consistent.

And that is my problem. Given my current educational background and knowledge, I cannot tell whether LW is merely a consistent internal logic or something sufficiently based on empirical criticism to provide a firm substantiation of the strong arguments for action proclaimed on this site.

Replies from: wedrifid
comment by wedrifid · 2010-08-12T13:48:18.530Z · LW(p) · GW(p)

And that is my problem. Given my current educational background and knowledge, I cannot tell whether LW is merely a consistent internal logic or something sufficiently based on empirical criticism to provide a firm substantiation of the strong arguments for action proclaimed on this site.

I cannot fault this reasoning. From everything I have read in your comments this seems to be the right conclusion for you to make given what you know. Taking the word of a somewhat non-mainstream community would be intellectually reckless. For my part there are some claims on LW that I do not feel I am capable of reaching a strong conclusion on - even accounting for respect for expert opinions.

Now I'm curious. Here you have referred to "LW" thinking in general, while we can obviously also consider LW conclusions on specific topics. Of all the positions that LW has a consensus on (and that are not nearly universally accepted by all educated people), are there any that you are confident of either confirming or denying? For example, "cryonics is worth a shot" seems far easier to judge than conclusions about quantum mechanics and decision theory.

comment by multifoliaterose · 2010-08-11T13:55:51.812Z · LW(p) · GW(p)

And yes, it seems like my post may have done more harm than good. I was not anticipating such negative reactions. What I said seems to have been construed in ways that were totally unexpected to me and which are largely unrelated to the points that I was trying to make. I take responsibility for the outcome.

comment by multifoliaterose · 2010-08-11T13:40:03.614Z · LW(p) · GW(p)

Thanks for the response. I'm presently in Europe without steady internet access but look forward to writing back. My thoughts on these matters are rather detailed/intricate.

For now I'll just say that I think that because people have such strong irrational biases against cryonics, advocacy of cryonics may (unfairly!) lower the credibility of the rationalist movement among people who it would be good to draw in to the rationalist movement. I think (but am not sure) that this factor makes cryonics advocacy substantially less fruitful than it may appear.

comment by Unknowns · 2010-08-11T04:54:14.569Z · LW(p) · GW(p)

Ordinary people don't want to sign up for cryonics, while they do want to go to movies and get heart transplants. So if multifoliaterose tells people, "Instead of signing up for cryonics, send money to Africa," he's much more likely to be successful than if he tells people, "Instead of going to the movies, send money to Africa."

So yes, if you want to call this "unfair discrimination," you can, but his whole point is to get people to engage in certain charities, and it seems he is just using a more effective means rather than a less effective one.

Replies from: Eliezer_Yudkowsky, dclayh
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-11T05:55:14.120Z · LW(p) · GW(p)

I'm saying he'll get them to do neither.

Easy way for multi to provide an iota of evidence that what he's doing is effective: Find at least one person who says they canceled a cryo subscription and started sending an exactly corresponding amount of money to the Singularity Institute. If you just start sending an equal amount of money to the Singularity Institute, without canceling the cryo, then it doesn't count as evidence in his favor, of course; and that is just what I would recommend anyone feeling guilty actually do. And if anyone actually sends the money to Africa instead, I am entirely unimpressed, and I suggest that they go outside and look up at the night sky for a while and remember what this is actually about.

comment by dclayh · 2010-08-11T06:24:06.594Z · LW(p) · GW(p)

Even fewer people want to murder their children than want to sign up for cryonics. Do you expect that telling them "Instead of murdering your children, send aid to Africa (or SIAI)" will increase the amount they send to Africa/SIAI?

Replies from: Unknowns
comment by Unknowns · 2010-08-11T07:31:54.782Z · LW(p) · GW(p)

That isn't relevant because murdering your children doesn't cost money.

Replies from: dclayh
comment by dclayh · 2010-08-11T07:35:08.208Z · LW(p) · GW(p)

I think it does, since you'll probably want to buy weapons, hire an assassin, hire a lawyer, etc. But you can change the example to "Send money to al-Qaeda" if you prefer.

Replies from: Spurlock
comment by Spurlock · 2010-08-11T15:09:41.884Z · LW(p) · GW(p)

I'm willing to bet that the number of LW readers seriously considering cryonics greatly outweighs the number seriously considering murdering their kids OR funding al-Qaeda. For the general population this might not be so, but for a Less Wrong post it seems more than reasonable to contrast charity with cryonics rather than with terrorism.

comment by CronoDAS · 2010-08-10T05:03:31.982Z · LW(p) · GW(p)

Incidentally, heart transplants and cryonics both cost about the same amount of money... does the "it's selfish" argument also apply to getting a heart transplant?

Replies from: James_Miller, multifoliaterose, Jayson_Virissimo
comment by James_Miller · 2010-08-10T05:36:41.868Z · LW(p) · GW(p)

Most of multifoliaterose's criticisms of cryonics apply to the majority of money spent on medical treatments in rich nations.

comment by multifoliaterose · 2010-08-10T06:04:46.191Z · LW(p) · GW(p)

Getting a heart transplant has instrumental value that cryonics does not.

A heart transplant enables the recipient to continue being a productive member of society. If the recipient is doing a lot to help other people then the cost of the heart transplant is easily outweighed by the recipient's productivity.

By way of contrast, if society gets to the point where cryopreserved people can be restored, it seems likely that society will have advanced to the point where such people are much less vital to society.

Also, the odds of success for a heart transplant are probably significantly higher than the odds of success for cryorestoration.

Edit: See a remark in a post by Jason Fehr at the GiveWell Mailing List:

Think of Bill Clinton, who has now had a heart bypass as well as a cardiac catheterization at age 63. The world will almost certainly be better off having Bill Clinton around for a few more decades running his foundation, thanks to all that cardiovascular research we've been discussing.

I don't think that having Bill Clinton cryopreserved would be nearly as valuable to society as the cardiovascular operations that he underwent were.

Replies from: orthonormal, James_Miller, HughRistik, Unknowns
comment by orthonormal · 2010-08-10T15:12:17.380Z · LW(p) · GW(p)

If the recipient is doing a lot to help other people then the cost of the heart transplant is easily outweighed by the recipient's productivity.

So, then, should prospective heart transplant recipients have to prove that they will do enough with their remaining life to benefit humanity, in order for the operation to be approved?

I think you're holding cryonics to a much higher standard than other expenditures.

Replies from: RichardChappell
comment by RichardChappell · 2010-08-12T08:34:46.857Z · LW(p) · GW(p)

should prospective heart transplant recipients have to prove that they will do enough with their remaining life to benefit humanity, in order for the operation to be approved?

Distinguish personal morality from public enforcement. In a liberal society our personal purchases should (typically) not require anyone else's permission or "approval". But it still might be the case that it would be a better decision to choose the more selfless option, even if you have a right to be selfish. That seems just as true of traditional medical expenditures as it does of cryonics.

comment by James_Miller · 2010-08-10T13:44:31.435Z · LW(p) · GW(p)

But if, while President, Bill Clinton had known he was going to be cryopreserved, he might have caused the government to devote more resources to artificial intelligence research and existential risks.

comment by HughRistik · 2010-08-11T23:51:11.132Z · LW(p) · GW(p)

A heart transplant enables the recipient to continue being a productive member of society.

Doesn't successful cryopreservation and revival have a good chance of doing the same, and for longer?

Replies from: lsparrish
comment by lsparrish · 2010-08-12T00:37:47.609Z · LW(p) · GW(p)

A life kept active and productive in the here and now might be more valuable in some respects than one that lies dormant until the far future, given that many other individuals will exist in the far future who would have to compete with the reanimated individual.

comment by Unknowns · 2010-08-10T07:29:39.936Z · LW(p) · GW(p)

One of the defects of the karma system is that replies to comments tend to get fewer votes, even when they're as good as the original comment. Here CronoDAS's comment is at 9 and the response at only 4, even though the response does a very good job of showing that the cases mentioned are not nearly equivalent.

Replies from: wedrifid
comment by wedrifid · 2010-08-10T08:42:10.762Z · LW(p) · GW(p)

I consider Crono's comment more insightful than multi's and my votes reflect my position.

Replies from: Unknowns
comment by Unknowns · 2010-08-10T09:09:05.133Z · LW(p) · GW(p)

Would you disagree that the differences mentioned by multifoliaterose are real?

Anyway, in terms of the general point I made, I see the same thing in numerous cases, even when nearly everyone would say the quality of the comments is equal. For example you might see a parent comment at 8 and a response at 2, maybe because people are less interested, or something like that.

Replies from: JoshuaZ, wedrifid, Airedale
comment by JoshuaZ · 2010-08-10T13:33:56.705Z · LW(p) · GW(p)

Would you disagree that the differences mentioned by multifoliaterose are real?

Yes, I would disagree. A large fraction of the people who are getting heart transplants are old and thus not very productive. More generally, medical expenses in the last three years of life can easily run as much as a hundred thousand US dollars, and often run into the tens of thousands of dollars. Most people in the US and Europe are not at all productive their last year of life.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-10T14:50:44.390Z · LW(p) · GW(p)

If I personally were debilitated to the point of not being able to contribute value comparable to the value of a heart transplant then I would prefer to decline the heart transplant and have the money go to a cost-effective charity. I would rather die knowing that I had done something to help others than live knowing that I had been a burden on society. Others may feel differently and that's fine. We all have our limits. But getting a heart transplant when one is too debilitated to contribute something of comparable value should not be considered philanthropic. Neither should cryonics.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-10T17:41:22.179Z · LW(p) · GW(p)

You are making an error by not holding your own well-being in greater regard than the well-being of others. It's a known aspect of human value.

Replies from: WrongBot, steven0461
comment by WrongBot · 2010-08-10T18:25:31.873Z · LW(p) · GW(p)

Err, are you saying that his values are wrong, or just that they're not in line with majoritarian values?

Replies from: orthonormal, Vladimir_Nesov
comment by orthonormal · 2010-08-10T18:59:10.444Z · LW(p) · GW(p)

For one thing, multifoliaterose is probably extrapolating from the values xe signals, which aren't identical to the values xe acts on. I don't doubt the sincerity of multifoliaterose's hypothetical resolve (and indeed I share it), but I suspect that I would find reasons to conclude otherwise were I actually in that situation. (Being signed up for cryonics might make me significantly more willing to actually refuse treatment in such a case, though!)

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-13T10:45:05.855Z · LW(p) · GW(p)

If you missed it, see my comment here. I guess my comment which you responded to was somewhat misleading; I did not intend to claim something about my actual future behavior, rather, I intended simply to make a statement about what I think my future behavior should be.

Replies from: orthonormal
comment by orthonormal · 2010-08-13T15:54:40.868Z · LW(p) · GW(p)

To put on my Robin Hanson hat, I'd note that you're acknowledging this level of selflessness to be a Far value and probably not a Near one.

I have strong sympathies toward privileging Far values over Near ones in many of the cases where they conflict in practice, but it doesn't seem quite accurate to declare that your Far values are your "true" ones and that the Near ones are to be discarded entirely.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-13T16:08:01.042Z · LW(p) · GW(p)

So, I think that the right way to conceptualize this is to say that a given person's values are not fixed but vary with time. I think that at the moment my true values are as I describe. In the course of being tortured, my true values would be very different from the way they are now.

The reason why I generally privilege Far values over Near values so much is that I value coherence a great deal and I notice that my Near values are very incoherent. But of course if I were being tortured I would have more urgent concerns than coherence.

Replies from: orthonormal
comment by orthonormal · 2010-08-13T16:16:20.717Z · LW(p) · GW(p)

The Near/Far distinction is about more than just decisions made under duress or temptation. Far values have a strong signaling component, and they're subject to their own biases.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-13T16:19:28.270Z · LW(p) · GW(p)

Can you give an example of a bias which arises from Far values? I should say that I haven't actually carefully read Hanson's posts on Near vs. Far modes. In general I think that Hanson's views of human nature are very misguided (though closer to the truth than is typical).

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-13T17:02:24.708Z · LW(p) · GW(p)

Can you give an example of a bias which arises from Far values?

Willingness to wreck people's lives (usually but not always other people's) for the sake of values which may or may not be well thought out.

This is partly a matter of the signaling aspect, and partly because, since Far values are Far, you're less likely to be accurate about them.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-13T19:40:15.249Z · LW(p) · GW(p)

Okay, thanks for clarifying. I still haven't read Robin Hanson on Near vs. Far (nor do I have much interest in doing so), but based on your characterization of Far, I would say that I believe it's important to strike a balance between Near and Far. I don't really understand what part of my comment orthonormal is/was objecting to; maybe the issue is linguistic/semantic more than anything else.

comment by Vladimir_Nesov · 2010-08-10T18:56:09.469Z · LW(p) · GW(p)

I'm saying that he acts under a mistaken idea about his true values. He should be more selfish (recognize himself as being more selfish).

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-12T01:37:25.713Z · LW(p) · GW(p)

I see what I say about my values in a neutral state as more representative of my "true values" than what I would say about my values in a state of distress. Yes, if I were actually in need of a heart transplant that would come at the opportunity cost of something of greater social value then I may very well opt for the transplant. But if I could precommit to declining a transplant under such circumstances by pushing a button right now then I would do so.

Similarly, if I were being tortured for a year and were offered, while being tortured, the option to make it stop for a while in exchange for 50 more years of torture later on, I might take it; but I would precommit to not taking such an option if possible.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-12T08:25:09.622Z · LW(p) · GW(p)

What you would do has little bearing on what you should do. The above argument doesn't argue its case. If you are mistaken about your values, of course you can theoretically use those mistaken beliefs to consciously precommit to follow them, no question there.

comment by steven0461 · 2010-08-10T19:29:16.289Z · LW(p) · GW(p)

By what factor? Assume a random stranger.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-10T20:30:29.732Z · LW(p) · GW(p)

Maybe tens or thousands, but I'm as ignorant as anybody about the answer, so it's a question of pulling out a best guess, not of accurately estimating the hidden variable.

Replies from: steven0461
comment by steven0461 · 2010-08-10T22:54:03.895Z · LW(p) · GW(p)

I don't understand how you can be uncertain between 10 and 1000 but not 1 and 10 or 1.1 and 10, especially in the face of things like empathy, symmetry arguments, reductionist personal identity, causal and acausal cooperation (not an intrinsic value, but may prescribe the same actions). I also don't understand the point of preaching egoism; how does it help either you personally or everyone else? Finally, 10 and 1000 are both small relative to astronomical waste.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-11T07:03:24.486Z · LW(p) · GW(p)

I don't understand how you can be uncertain between 10 and 1000 but not 1 and 10 or 1.1 and 10, especially in the face of things like empathy, symmetry arguments, reductionist personal identity, causal and acausal cooperation (not an intrinsic value, but may prescribe the same actions).

Self-preservation and lots of other self-centered behaviors are real psychological adaptations, which make indifference between self and random other very unlikely, so I draw a tentative lower bound at the factor of 10. Empathy extends fairness to other people, offering them control proportional to what's available to me and not just what they can get hold of themselves, but it doesn't suggest equal parts for all, let alone equal to what's reserved for my own preference. Symmetry arguments live at the more simplistic levels of analysis and don't apply. What about personal identity? What do you mean by "prescribing the same action" based on cooperation, when the question was about choice of own vs. others' lives? I don't see a situation where cooperation would make the factor visibly closer to equal.

I also don't understand the point of preaching egoism; how does it help either you personally or everyone else?

I'm not "preaching egoism"; I'm being honest about what I believe human preference, and any given person's preference in particular, to be, and so I'm raising an issue with what I believe to be an error about this. Of course it's hypothetically in my interest to fool other people into believing they should be as altruistic as possible, in order to benefit from them, but that's not my game here. Preference is not up for grabs.

Finally, 10 and 1000 are both small relative to astronomical waste.

I don't see this argument. Why is astronomical waste relevant? Preference stems from evolutionary godshatter, so I'd expect something on the order of tribe-sized (taking into account that you are talking about random strangers and not close friends/relatives).

Replies from: WrongBot
comment by WrongBot · 2010-08-11T07:17:04.810Z · LW(p) · GW(p)

I'm not "preaching egoism", I'm being honest about what I believe human preference to be, and any given person's preference in particular, and so I'm raising an issue with what I believe to be an error about this.

There is an enormous range of variation in human preference. That range may be a relatively small part of the space of all possible preferences of intelligent entities, but in absolute terms that range is broad enough to defy most (human) generalizations.

There have been people who made the conscious decision to sacrifice their own lives in order to offer a stranger a chance of survival. I don't see how your theory accounts for their behavior.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-11T07:55:31.581Z · LW(p) · GW(p)

There have been people who made the conscious decision to sacrifice their own lives in order to offer a stranger a chance of survival. I don't see how your theory accounts for their behavior.

Error of judgment. People are crazy.

Replies from: WrongBot
comment by WrongBot · 2010-08-11T16:20:29.975Z · LW(p) · GW(p)

Yes, but why are you so sure that it's crazy judgment and not crazy values? How do you know more about their preferences than they do?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-11T17:53:44.490Z · LW(p) · GW(p)

I know that people often hold confused explicit beliefs, so that a person holding belief X is only weak evidence about X, especially if I can point to a specific reason why holding belief X would be likely (other than that X is true). Here, we clearly have psychological adaptations that cry altruism. Nothing else is necessary, as long as the reasons I expect X to be false are stronger than the implied evidence of people believing X. And I expect there to be no crazy values (except for the cases of serious neurological conditions, and perhaps not even then).

Replies from: WrongBot
comment by WrongBot · 2010-08-11T18:35:55.669Z · LW(p) · GW(p)

Are you proposing that evolution has a strong enough effect on human values that we can largely ignore all other influences?

I'm quite dubious of that claim. Different cultures frequently have contradictory mores, and act on them.

Or, from another angle: if values don't influence behavior, what are they and why do you believe they exist?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-11T18:48:14.189Z · LW(p) · GW(p)

Humans have psychological drives, and act on some balance of their effect, through a measure of reflection and cultural priming. To get to more decision-theoretic values, you have to resolve all conflicts between these drives. I tentatively assume this process to be confluent, that is, the final result depends little on the order in which you apply moral arguments that shift one's estimation of value. Cultural influence counts as such a collection of moral arguments (as does one's state of knowledge of facts and understanding of the world), which can bias your moral beliefs. But if rational moral arguing is confluent, these deviations get canceled out.

(I'm only sketching here what amounts to my still confused informal understanding of the topic.)

Replies from: WrongBot
comment by WrongBot · 2010-08-11T20:51:22.108Z · LW(p) · GW(p)

Huh. I wouldn't expect unmodified humans to be able to resolve value conflicts in a confluent way; insomuch as my understanding of neurology is accurate, holding strong beliefs involves some level of self-modification. If prior states influence the direction of self-modification (which I would think they must), confluence goes out the window. That is, moral arguments don't just shift value estimations, they shift the criteria by which future moral arguments are judged. I think this is the same sort of thing we see with halo effects.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-11T20:56:46.488Z · LW(p) · GW(p)

Not humans themselves, sure. To some extent there undoubtedly is divergence caused by environmental factors, but I don't think that surface features, such as explicit beliefs, adequately reflect its nature.

Of course, this is mostly useless speculation, which I only explore in hope of finding inspiration for more formal study, down the decision theory road.

comment by wedrifid · 2010-08-10T13:26:12.869Z · LW(p) · GW(p)

Would you disagree that the differences mentioned by multifoliaterose are real?

The difference is real. Whether it is also the real reason is another question.

comment by Airedale · 2010-08-10T14:45:49.380Z · LW(p) · GW(p)

It rarely bothers me when insightful original comments are voted up more than their (more or less) equally insightful responses. In my view, the original comment often “deserves” more upvotes for raising an interesting issue in the first place and thereby expanding a fruitful discussion.

comment by Jayson_Virissimo · 2010-08-11T19:21:03.596Z · LW(p) · GW(p)

A heart transplant has a much higher expected utility than cryonics. Could that be a major cause of the negative response?

Replies from: lsparrish
comment by lsparrish · 2010-08-11T19:47:12.154Z · LW(p) · GW(p)

Disagree. A heart transplant that adds a few decades is less valuable than a cryopreservation that adds a few millennia.

Also, heart transplants are a congestion resource whereas cryonics is a scale resource.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2010-08-12T02:32:16.806Z · LW(p) · GW(p)

A heart transplant that adds a few decades is less valuable than a cryopreservation that adds a few millennia.

So what? The value of winning the lottery is much higher than working for the next five years, but that doesn't mean it has a higher expected utility.

The expected value of an act is the sum of the products (utilities x probabilities).

Unless you think a heart transplant is just as probable to work as cryonics, you must consider more than simply the value of each act.

Replies from: lsparrish
comment by lsparrish · 2010-08-12T03:05:20.217Z · LW(p) · GW(p)

The expected value of an act is the sum of the products (utilities x probabilities).

To offset a difference of living 100 times longer (even without counting other utilities like quality of life), it takes a 100-fold difference in probability. I don't think cryonics is 100 times less likely to work than a heart transplant.
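The arithmetic in this exchange can be made concrete with a small sketch. All the numbers below are purely illustrative assumptions (neither commenter endorses them); only the structure of the expected-value calculation matters:

```python
# Illustrative sketch of the expected-utility comparison in this thread.
# All numbers are hypothetical assumptions, not claims from either commenter.

def expected_utility(extra_years: float, p_success: float) -> float:
    """Expected value of an act: utility of success times its probability."""
    return extra_years * p_success

# A heart transplant: a few extra decades, high chance of working.
transplant = expected_utility(extra_years=20, p_success=0.8)   # 16.0

# Cryopreservation: a few extra millennia, much lower chance of revival.
cryonics = expected_utility(extra_years=2000, p_success=0.05)  # 100.0

# lsparrish's break-even point: under these assumptions cryonics wins
# unless its success probability is more than ~100x smaller than the
# transplant's.
break_even_p = transplant / 2000                               # 0.008

print(transplant, cryonics, break_even_p)
```

Under these made-up numbers, cryonics dominates despite its far lower success probability; Jayson_Virissimo's objection amounts to disputing whether the true probability sits above or below the break-even point.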

comment by Paul Crowley (ciphergoth) · 2010-08-10T07:16:15.461Z · LW(p) · GW(p)

If you want to persuade me to spend less of my money on myself and more on trying to save the world, surely you should start with frippery like nice sandwiches or movies, rather than something that's a matter of life and death?

Replies from: Unknowns, multifoliaterose
comment by Unknowns · 2010-08-10T07:27:08.353Z · LW(p) · GW(p)

It seems reasonable to me that multifoliaterose would start with something (like cryonics) that people aren't much naturally inclined toward anyway, rather than with things like sandwiches, because he's much more likely to succeed there.

comment by multifoliaterose · 2010-08-10T07:27:18.364Z · LW(p) · GW(p)

Some people attach more value to nice sandwiches and movies and other people attach more value to being cryopreserved. If you value being cryopreserved more than nice sandwiches and movies, then if you decide to spend more money on trying to save the world, obviously the first expenses that you should cut are nice sandwiches and movies.

The point of my post is that it's inappropriate to characterize signing up for cryonics as something that one is doing to make the world a better place. I have no problem with people signing up for cryonics as long as they recognize that it's something that they're doing for themselves.

Replies from: ciphergoth, Eliezer_Yudkowsky, lsparrish
comment by Paul Crowley (ciphergoth) · 2010-08-10T16:00:29.705Z · LW(p) · GW(p)

What's weird is that people are driven to compare cryonics to charity in a way they're not when it comes to other medical interventions, or theatre tickets. I think Katja Grace explains it plausibly.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-11T03:55:44.743Z · LW(p) · GW(p)

I have no problem with people signing up for cryonics as long as they recognize that it's something that they're doing for themselves.

In your version of the story, what mistake am I making that causes me to go around urging other people to sign up for cryonics?

Replies from: Spurlock
comment by Spurlock · 2010-08-11T15:19:03.865Z · LW(p) · GW(p)

I think you're unfairly equating "signing up for cryonics" with "urging others to sign up for cryonics". If I go see a movie, I do so because I personally want to enjoy it, not out of any concern for whether it promotes good in the wider world (maybe it does, but this isn't my concern). I can later go on to recommend that movie to friends or to the internet in general, but that's a separate act.

Maybe your particular reasons for signing up are at least partially for the greater good (perhaps so you can wake up and continue the work on FAI if it remains undone), but it seems likely that most people sign up because it's something they want for themselves.

Replies from: HughRistik
comment by HughRistik · 2010-08-12T00:07:31.280Z · LW(p) · GW(p)

I think you're unfairly equating "signing up for cryonics" with "urging others to sign up for cryonics".

"Signing up for cryonics" (and talking about it) isn't entirely separable from "urging others to sign up for cryonics," because we are a species of monkeys. Monkey see, monkey do.

comment by lsparrish · 2010-08-10T15:21:29.482Z · LW(p) · GW(p)

I disagree, as more people signing up for cryonics makes cryonics more affordable (and thus evens out the unfairness of premature death) and also gives large numbers of people a vested interest in the future. Cryonics on a small scale has unfavorable features that it would lack on a larger scale, so you need to be careful not to conflate the two. Note that as far as PR for existential risk goes, you can't beat cryonics for giving people a legitimate self-interested reason to care.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-10T04:50:55.681Z · LW(p) · GW(p)

after one is revived the human race could go extinct

Given the tech level required for revival, I'd assign a pretty low probability of getting revived before we're through the window of vulnerability.

comment by Vladimir_Nesov · 2010-08-10T09:47:50.821Z · LW(p) · GW(p)

If enough people sign up, cryonics can become a cost-effective way of saving lives. The only way to get there is to support cryonics.

In estimating cost-effectiveness of signing up, you have to take into account this positive externality. This was also an argument in Hanson's Cryonics As Charity, which you didn't properly discuss, instead citing current costs of cryonics.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-10T09:58:43.529Z · LW(p) · GW(p)

As I said in my post, it may be possible to construct a good case for signing up for cryonics or supporting cryonics being comparable to donating to or supporting cost-effective charities. At present I think this is unlikely.

In regard to your points, see the second half of my response to James_Miller's comment.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-10T10:09:46.091Z · LW(p) · GW(p)

[...] there's still the question of whether at the margin advocating for cryonics is a worthwhile endeavor. My intuition is that we're so far away from having a population interested in signing up for cryonics (because of the multitude of irrational biases that people have against cryonics) that advocating for cryonics is a very inefficient way to work against existential risk.

The margin has to take into account all future consequences of the action as well, not just local consequences. Again, a concrete problem I have with your post is essential misrepresentation of Hanson's post by quoting current costs of cryonics, and not mentioning the argument for lowering of costs. This you haven't answered.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-10T10:32:36.813Z · LW(p) · GW(p)

Yes, this is a good point. I have somewhere to go and so don't have time to correct this point immediately, but for now I will add a link to your comments in my post. Thanks.

comment by wedrifid · 2010-08-10T08:48:34.335Z · LW(p) · GW(p)

You are telling me that Cryonically suspending myself is less charitable than donating the same resources to an efficient charity? Um... yes?

I don't think this post contains a non-trivial insight. I found the normative presumptions interspersed with the text distasteful. Multi also presents a misleading image of what best represents the values of most people.

Replies from: rhollerith_dot_com, multifoliaterose, multifoliaterose
comment by RHollerith (rhollerith_dot_com) · 2010-08-10T10:11:51.362Z · LW(p) · GW(p)

Multi . . . presents a misleading image of what best represents the values of most people.

Yes, but many of the participants on this web site share Multifoliate's interest in philanthropy. In fact, the site's subtitle and mission statement, "refining the art of human rationality," came about as a subgoal of the philanthropic goals of the site's founder.

I don't think this post contains a non-trivial insight.

I found it a good answer to the belief which is common around here that cryonics advocacy is an efficient form of philanthropy.

Replies from: ciphergoth, wedrifid
comment by Paul Crowley (ciphergoth) · 2010-08-10T15:57:46.274Z · LW(p) · GW(p)

the belief which is common around here that cryonics advocacy is an efficient form of philanthropy.

Is that belief really common around here? Though I'm inclined to make an effort to get Hitchens to sign up, I think of that effort as self-indulgence in much the same way as I'd think of such efforts for those close to me, or my own decision to sign up.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-08-10T16:44:40.413Z · LW(p) · GW(p)

OK, maybe "common belief" is too strong. Change it to, "make sure no one here is under the illusion that cryonics advocacy is an efficient form of philanthropy, rather than a way to protect one's own interests while meeting like-minded people and engaging in an inefficient form of philanthropy, though I personally doubt that it decreases x-risks."

Replies from: lsparrish, multifoliaterose
comment by lsparrish · 2010-08-10T16:52:22.101Z · LW(p) · GW(p)

I think there are different approaches to cryonics. Advocating global or wide-scale conversion to cryonics is a philanthropic interest. It is very different from a focus on getting yourself preserved using existing organizations and on existing scales -- though they are certainly compatible and complementary interests.

To some extent I support seeing your own preservation as self-interest, under the assumption that this means you do not deduct it from your mental bank account for charitable giving (i.e. you'll give the same amount to starving kids and life-saving vaccines as you did before signing up). However it is a huge mistake to claim that it is purely self interest or at odds with charitable interests. Rather it helps lay the groundwork for a hugely important philanthropic interest.

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-08-10T17:42:22.805Z · LW(p) · GW(p)

it is a huge mistake to claim that [one's own cryopreservation] is purely self interest or at odds with charitable interests. Rather it helps lay the groundwork for a hugely important philanthropic interest.

OK, you are appealing to the same argument that can be used to argue that the consumers of the 1910s who purchased and used the first automobiles were philanthropists for supporting a fledgling industry which went on to cause a substantial rise in the average standard of living. Do I have that right?

If so, the magnitude of the ability of cryonics to extend life expectancy might cause me to admit that your words "huge" and "hugely" are justified -- but only under value systems that assign no utility to the people who will be born or created after the intelligence explosion. Relative to the number of people alive now or who will be born before the intelligence explosion, the expected number of lives after it is huge, and cryonics is of no benefit to those lives whereas any effort we make towards reducing x-risks benefits both the relatively tiny number of people alive now and the huge number that will live later.

The 3 main reasons most philanthropists do not direct their efforts at x-risks reduction are (1) they do not know and will not learn about the intelligence explosion; (2) even if they know about it, it is difficult for them to stay motivated when the object of their efforts is as abstract as people who will not start their lives for 100s of years -- they need to travel to Africa or what not and see the faces of the people they have helped, or at least to know that if they were to travel to Africa or what not, they would; and (3) they could figure out how to stay motivated to help those who will not start their lives for 100s of years if they wanted to, but they do not want to -- their circle of concern does not extend that far into the future (that is, they assign zero or very little intrinsic value to a life that starts in the far future).

But the people whose philanthropic enterprise is to get people to sign up for cryonics do not have excuses (1) and (2). So, I have to conclude that either their circle of moral concern stops (or becomes very thin) before the start of the intelligence explosion or they know that their enterprise is extremely inefficient philanthropy relative to x-risks reduction. Do you see any holes in my reasoning?

There are those (e.g., Carl and Nancy in these pages in the last few days and Roko in the past IIRC) who have taken the position that getting people to sign up for cryonics tends to reduce x-risks. I plan a top-level submission with my rebuttal to that position.

Replies from: lsparrish, ciphergoth
comment by lsparrish · 2010-08-10T20:16:55.695Z · LW(p) · GW(p)

I do think reducing x-risk is extremely important. I agree with Carl, Nancy, Roko, etc. that cryonics tends to reduce x-risk. To reduce x-risk you need people to think about it in the first place, and cryonicists are more likely to do so because it is a direct threat to their lives.

Cryonics confronts a much more concrete and well-known phenomenon than x-risk. We all know about human death, it has happened billions of times already. Humanity has never yet been wiped out by anything (in our world at least). If you want people to start thinking rationally about the future, it seems backwards to start with something less well-understood and more nebulous. Start with a concrete problem like age-related death; most people can understand that.

As to the moral worth of people not yet born, I do consider that lower than people already in existence by far because the probability of them existing as specific individuals is not set in stone yet. I don't think contraception is a crime, for example.

The continuation of the human race does have extremely high moral utility but it is not for the same sort of reason that preventing b/millions of deaths does. If a few dozen breeding humans of both genders and high genetic variation are kept in existence (with a record of our technology and culture), and the rest of us die in an asteroid collision or some such, it's not a heck of a lot worse than what happens if we just let everyone die of old age. (Well, it is the difference between a young death and an old death, which is significant. But not orders of magnitude more significant.)

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2010-08-10T20:42:22.313Z · LW(p) · GW(p)

I have bookmarked your comment and will reflect on it.

BTW I share your way of valuing things as expressed in your final 2 grafs: my previous comment used the language of utilitarianism only because I expected that that would be the most common ethical orientation among my audience and did not wish to distract readers with my personal way of valuing things.

comment by Paul Crowley (ciphergoth) · 2010-08-10T18:32:50.519Z · LW(p) · GW(p)

I wouldn't necessarily say that it's the most effective way to do x-risks advocacy, but it's one introduction to the whole general field of thinking seriously about the future, and it can provide useful extra motivation. I'm looking forward to reading more on the case against from you.

Replies from: steven0461
comment by steven0461 · 2010-08-10T23:38:06.386Z · LW(p) · GW(p)

I'm worried about cryonics tainting "the whole general field of thinking seriously about the future" by being bad PR (head-freezers, etc), and also about it taking up a lot of collective attention.

I've never heard of someone coming to LW through an interest in cryonics, though I'm sure there are a few cases.

comment by multifoliaterose · 2010-08-10T19:44:00.111Z · LW(p) · GW(p)

You're one of the few commentators who understands the point of my post.

Replies from: thomblake, ciphergoth
comment by thomblake · 2010-08-10T19:49:34.467Z · LW(p) · GW(p)

Lots of people here understand the point of your post. Some of us think it is evil to discourage folks from doing cryonics advocacy, since it is likely the only way to save any of the billions of people that are currently dying.

Personally, I'm not a cryonics advocate. But know your audience, and if you've noticed that most of the people around here don't seem to understand something, it's probably a good time to check your assumptions and see what you've missed.

comment by Paul Crowley (ciphergoth) · 2010-08-10T20:10:47.168Z · LW(p) · GW(p)

This comes across as if you're miffed at the commentators rather than at yourself - is that what you mean?

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-11T20:57:34.817Z · LW(p) · GW(p)

I'm both irritated by those commentators who responded without taking the time to read my post carefully and disappointed in myself for failing to communicate clearly. On the latter point, I'll be revising my post as soon as I get a chance. (I'm typing from my iPod at the moment.)

comment by wedrifid · 2010-08-10T10:30:29.433Z · LW(p) · GW(p)

Yes, but many of the participants on this web site share Multifoliate's interest in philanthropy.

I have an interest in philanthropy (and altruism in general).

I note that Multi's post can have a positive influence on my own personal wellbeing. I know I'm not going to be sucked into self-destruction; the undesirable impact is suffered by others. Any effort spent countering the influence would be considered altruistic.

comment by multifoliaterose · 2010-08-10T09:10:39.383Z · LW(p) · GW(p)

If you don't have any interest in philanthropy then my post was not intended for you, and I think that it's unfortunate that my post increased LessWrong's noise-to-signal ratio for you.

If you have some interest in philanthropy, then I would be interested in knowing what you're talking about when you say:

I found the normative presumptions interspersed with the text distasteful.

Replies from: orthonormal, wedrifid
comment by orthonormal · 2010-08-10T19:05:29.567Z · LW(p) · GW(p)

If you don't have any interest in philanthropy then my post was not intended for you

Given that your argument only rules out cryonics for genuine utilitarians or altruists, it's quite possible to have some concern for philanthropy and yet enough concern for yourself to make cryonics the rational choice. You're playing up a false dilemma.

comment by wedrifid · 2010-08-10T10:19:47.049Z · LW(p) · GW(p)

If you don't have any interest in philanthropy then my post was not intended for you

I like philanthropy, and not your sermon.

I think that it's unfortunate that my post increased LessWrong's noise-to-signal ratio for you.

I don't consider this post noise. It is actively bad signal. There is a universal bias that makes it difficult to counter "people should be more altruistic" claims of any kind.

If you have some interest in philanthropy, then I would be interested in knowing what you're talking about when you say:

'Should' claims demanding that people sacrifice their very life to donate the resources that allow their very survival to charity. In particular in those instances where they are backed up with insinuations that 'analytical skills' and rational ability in general require such sacrifice.

The post fits my definition of 'evil'.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-10T10:29:59.952Z · LW(p) · GW(p)

'Should' claims demanding that people sacrifice their very life to donate the resources that allow their very survival to charity. In particular in those instances where they are backed up with insinuations that 'analytical skills' and rational ability in general require such sacrifice.

Nope, you've misunderstood me. Nowhere in my post did I say that people should sacrifice their lives to donate resources to charity. See my response to ciphergoth for my position. If there's some part of my post that you think that I should change to clarify my position, I'm open to suggestions.

The post fits my definition of 'evil'.

Downvoted for being unnecessarily polemical.

Replies from: thomblake, wedrifid
comment by thomblake · 2010-08-10T13:58:21.190Z · LW(p) · GW(p)

Nope, you've misunderstood me. Nowhere in my post did I say that people should sacrifice their lives to donate resources to charity.

That's exactly what you're saying, as far as I can tell. Are you not advocating that people should give money to charity instead of being cryopreserved? While I think charity is a good thing, I draw the line somewhere shy of committing suicide for the benefit of others.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-10T14:30:41.444Z · LW(p) · GW(p)

My post is about how cryonics should be conceptualized rather than an attempt to advocate a uniform policy of how people should interact with cryonics. Again, see my response to ciphergoth. For ciphergoth, cryonics may be the right thing. I personally do not derive fuzzies from the idea of signing up for cryonics (I get my fuzzies in other ways) and I don't think that people should expend resources trying to change this.

comment by wedrifid · 2010-08-10T10:41:42.751Z · LW(p) · GW(p)

Nope, you've misunderstood me.

Perhaps, but I have not misunderstood the literal meaning of the words in the post.

Downvoted for being unnecessarily polemical.

Yet surprisingly necessary. The nearly ubiquitous pattern when people object to demands regarding charity is along the lines of "it's just not interesting to you but for other people it is important" or "it's noise vs signal". People are slow to understand that it is possible to be entirely engaged with the topic and think it is bad. After all, the applause lights are all there, plain as day - how could someone miss them?

comment by multifoliaterose · 2010-08-10T09:34:11.591Z · LW(p) · GW(p)

Multi also presents a misleading image of what best represents the values of most people.

You may be right, on the other hand you may be generalizing from one example. Claims that an author's view of human values is misleading should be substantiated with evidence.

Replies from: wedrifid
comment by wedrifid · 2010-08-10T09:55:13.216Z · LW(p) · GW(p)

"The CEV of most individuals is not Martyrdom" is not something that I consider overwhelmingly contentious.

Replies from: WrongBot, multifoliaterose
comment by WrongBot · 2010-08-10T14:59:24.710Z · LW(p) · GW(p)

Nitpick: Individuals don't have CEV. They have values that can be extrapolated, but the "coherent" part is about large groups; Eliezer was talking about the CEV of all of humanity when he proposed the idea, I believe.

Replies from: wedrifid, Sniffnoy
comment by wedrifid · 2010-08-10T16:56:42.600Z · LW(p) · GW(p)

Individuals don't have CEV.

In this instance I would be comfortable using just "EV". In general, however, I see the whole process of conflict resolution between agents as something that isn't quite so clearly delineated at the level of the individual.

Eliezer was talking about the CEV of all of humanity when he proposed the idea, I believe.

He was, and that is something that bothers me. The coherent extrapolated volition of all of humanity is quite likely to be highly undesirable. I sincerely hope Eliezer was lying when he said that. If he could right now press a button to execute an FAI<CEV<humanity>>, I would quite possibly do what I could to stop him.

Replies from: Vladimir_Nesov, None
comment by Vladimir_Nesov · 2010-08-10T18:55:25.197Z · LW(p) · GW(p)

If he could right now press a button to execute an FAI<CEV<humanity>> I would quite possibly do what I could to stop him.

Since we have no idea what that entails and what formalizations of the idea are possible, we can't extend moral judgment to that unclear unknown hypothetical.

Replies from: wedrifid
comment by wedrifid · 2010-08-11T05:32:50.431Z · LW(p) · GW(p)

I fundamentally disagree with what you are saying, and object somewhat to how you are saying it.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-11T06:44:24.021Z · LW(p) · GW(p)

You are drawing moral judgment about something ill-defined, a sketch that can be made concrete in many different ways. This just isn't done, it's like expressing a belief about the color of God's beard.

Replies from: wedrifid
comment by wedrifid · 2010-08-11T07:18:05.674Z · LW(p) · GW(p)

You are mistaken. Read again.

I am mentioning a possible response to a possible stimulus. Doubt in the interpretation of the words is part of the problem. If I knew exactly how Eliezer had implemented CEV and what the outcome would be given the makeup of the human population then that would make the decision far simpler. Without such knowledge choosing whether to aid or hinder must be based on the estimated value of the alternatives given the information available.

Also note that the whole "extend moral judgment" concept is yours, I said nothing about moral judgements, only possible decisions. When the very fate of the universe is at stake I can most certainly make decisions based on inferences from whatever information I have available, including the use of the letters C, E and V.

This just isn't done, it's like expressing a belief about the color of God's beard.

Presenting this as an analogy to deciding whether or not to hinder the implementation of an AI based off limited information is absurd to the point of rudeness.

Replies from: Vladimir_Nesov, wedrifid
comment by Vladimir_Nesov · 2010-08-11T07:52:44.356Z · LW(p) · GW(p)

Also note that the whole "extend moral judgment" concept is yours, I said nothing about moral judgements, only possible decisions.

What I meant is simply that decisions are made based on valuation of their consequences. I consistently use "morality" in this sense.

When the very fate of the universe is at stake I can most certainly make decisions based on inferences from whatever information I have available, including the use of the letters C, E and V.

I agree. What I took issue with about your comment was perceived certainty of the decision. Under severe uncertainty, your current guess at the correct decision may well be "stop Eliezer", but I don't see how with present state of knowledge one can have any certainty in the matter. And you did say that it's "quite likely" that CEV-derived AGI is undesirable:

The coherent extrapolated volition of all of humanity is quite likely to be highly undesirable. I sincerely hope Eliezer was lying when he said that.

(Why are you angry? Do you need that old murder discussion resolved? Some other reason?)

comment by wedrifid · 2010-08-11T07:26:20.057Z · LW(p) · GW(p)

I note, by the way, that I am not at all suggesting that Eliezer is actually likely to create an AI based dystopia. The risk of that is low (relative to the risk of alternatives.)

comment by [deleted] · 2010-08-11T09:59:06.073Z · LW(p) · GW(p)

I don't quite see how one is supposed to limit FAI<CEV<group>> without the race for AI turning into a war of all against all for not just power but survival.

If anything I would like to expand the group not just to currently living humans but all other possible cultures biologically modern humans did or could have developed.

But again this is purely because I value a diverse future. Part of my paperclip is to make sure other people get a share of the mass of the universe to paperclip.

Replies from: wedrifid
comment by wedrifid · 2010-08-11T10:23:05.542Z · LW(p) · GW(p)

I don't quite see how one is supposed to limit FAI<CEV<group>> without the race for AI turning into a war of all against all for not just power but survival.

By winning the war before it starts or solving cooperation problems.

The competition you refer to isn't prevented by proposing an especially egalitarian CEV. Being included in part of the Coherent Extrapolated Volition equation is not sufficient reason to stand down in a fight for FAI creation.

But again this is purely because I value a diverse future. Part of my paperclip is to make sure other people get a share of the mass of the universe to paperclip.

CEV would give that result. The 'coherence' thing isn't about sharing. CEV<A,B> may well decide to give all the mass of the universe to C purely because A and B can't stand each other, while if C was included in the same evaluation, CEV<A,B,C> may well decide to do something entirely different. Sure, at least one of those agents is clearly insane, but the point is that being 'included' is not intrinsically important.

comment by Sniffnoy · 2010-08-10T22:11:01.051Z · LW(p) · GW(p)

The singleton sets of individuals do...

comment by multifoliaterose · 2010-08-10T09:57:38.174Z · LW(p) · GW(p)

I don't think that anything in my post advocates martyrdom. What part of my post appears to you to advocate martyrdom?

Replies from: thomblake
comment by thomblake · 2010-08-10T13:55:09.675Z · LW(p) · GW(p)

To put it in the visceral language favored by cryonics advocates, you're advocating that people commit suicide for the benefit of others.

comment by Wei Dai (Wei_Dai) · 2010-08-11T00:17:23.949Z · LW(p) · GW(p)

The tone of this post really grated on my ears, especially the last section, where the words "we should" were used repeatedly. Syntactically, "we" must refer either to "members of the Less Wrong community" or to "rationalists", but those sentences only make semantic sense if "we" actually refers to "utilitarians". I think I feel offended at being implicitly excluded from this community for not being a utilitarian.

Do any of my posts have this kind of problem? Being on the receiving end of this effect makes me want to make sure that I don't unintentionally do it to anyone else.

Replies from: multifoliaterose, steven0461
comment by multifoliaterose · 2010-08-11T13:09:03.342Z · LW(p) · GW(p)

Sorry to hear that my post grated on you. This was totally unintended.

The 'we' in the last section is intended to be "Less Wrong posters who have some generalized/abstract concern for the well-being of others." I believe that such people should expend some (not necessarily a lot of) resources on pure social impact because of the "purchase utilons & fuzzies separately" principle.

comment by steven0461 · 2010-08-11T00:59:36.699Z · LW(p) · GW(p)

those sentences only make semantic sense if "we" actually refers to "utilitarians"

Or if 1) "should" refers to true/informed preferences rather than currently endorsed preferences and 2) your true/informed preferences would be utilitarian. That distinction seems to be going out of fashion, though.

Replies from: Wei_Dai, jimrandomh
comment by Wei Dai (Wei_Dai) · 2010-08-11T02:04:05.728Z · LW(p) · GW(p)

It seems obvious from context that multifoliaterose was assuming agreement with utilitarian values and making his arguments about what "we should" do based on that assumption, and not claiming that the true/informed preferences of everyone in this community would be utilitarian. (The post does not explicitly claim that, nor contains any arguments that might support the claim.)

That distinction seems to be going out of fashion, though.

Why do you say that?

Replies from: steven0461
comment by steven0461 · 2010-08-11T02:12:22.377Z · LW(p) · GW(p)

Fair enough; I agree it was clearly not the reading multifoliaterose actually intended. I read multifoliaterose as saying that, to the extent that our values are utilitarian, cryonics doesn't fulfill them well.

Why do you say that?

I guess it's an impression I got from reading many conversations here.

Replies from: Wei_Dai, multifoliaterose
comment by Wei Dai (Wei_Dai) · 2010-08-11T03:58:57.753Z · LW(p) · GW(p)

I guess it's an impression I got from reading many conversations here.

I would expect that most conversations involve currently endorsed preferences, simply because it's much easier to discuss what we should do now given what we currently think our values are, than to make any nontrivial progress towards figuring out what our values would be if we were fully informed. I don't think that constitutes evidence that people are forgetting the distinction (if that's what you meant by "going out of fashion").

I'd be interested to know if you had something else in mind.

comment by multifoliaterose · 2010-08-11T20:44:53.107Z · LW(p) · GW(p)

Your original reading of my claim is the message that I intended to convey.

comment by jimrandomh · 2010-08-11T01:21:25.259Z · LW(p) · GW(p)

those sentences only make semantic sense if "we" actually refers to "utilitarians"

Or if 1) "should" refers to true/informed preferences rather than currently endorsed preferences and 2) your true/informed preferences would be utilitarian. That distinction seems to be going out of fashion, though.

How is (2) not a definition of a utilitarian?

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-08-11T01:25:48.650Z · LW(p) · GW(p)

A utilitarian, in common usage, is someone who currently endorses utilitarianism.

(I share Steven's desire to see the informed/currently-endorsed distinction used more consistently.)

Replies from: jimrandomh
comment by jimrandomh · 2010-08-11T01:30:20.707Z · LW(p) · GW(p)

A utilitarian would endorse a non-utilitarian value system if doing so maximized utility.

Replies from: mattnewport
comment by mattnewport · 2010-08-11T03:27:38.217Z · LW(p) · GW(p)

A utilitarian would endorse a non-utilitarian value system if doing so maximized utility.

A utilitarian would endorse a non-utilitarian value system if doing so maximized utilitarian utility, which is really the crux of the debate.

The word utilitarian is thrown around a lot here without clearly defining what is meant by it, but I would guess that most of the non-utilitarians here (like myself) take issue primarily with the agent-neutrality/universality and utility-aggregation (whether averaging, summing or weighted summing) aspects commonly implied by utilitarianism as an ethical system, rather than with the general idea of maximizing utility (however defined).

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2010-08-11T03:39:22.114Z · LW(p) · GW(p)

Another crucial terminological distinction. Thanks.

comment by RobinHanson · 2010-08-10T15:33:32.992Z · LW(p) · GW(p)

I'd like to see someone post a critical review of those GiveWell estimates. Surely GiveWell isn't the most independent source for such numbers, right?

Replies from: CronoDAS
comment by CronoDAS · 2010-08-11T04:08:14.317Z · LW(p) · GW(p)

GiveWell is an independent evaluator, or at least the closest thing that exists in the world of philanthropy.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-12T23:10:50.226Z · LW(p) · GW(p)

I'm confused by Robin Hanson's comment and the fact that it's voted up to 7. Is there some reason to suspect that GiveWell's reputation as an independent evaluator of charities is undeserved, or was Robin making some other point?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2010-08-12T23:26:09.298Z · LW(p) · GW(p)

Simple: the upvoters would also like to see a critical review of GiveWell estimates.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2010-08-13T00:42:10.904Z · LW(p) · GW(p)

If they want to see a critical review of GiveWell's estimates, then they need to make a case that doing such a review is a good use of someone's time and resources, and also give some indication of what they consider to be an acceptable critical review. I mean, by all indications GiveWell is itself an independent, critical, reviewer of charities, so if they're not satisfied with it, why would they be satisfied with any hypothetical meta-reviewer?

comment by MartinB · 2010-08-10T04:17:16.699Z · LW(p) · GW(p)

Revitalization is not a guarantee of a very long life - after one is revived the human race could go extinct.

Extinction is not something that just happens on a rainy day. It requires everyone to die before a new generation is there to take over, in a basic sense. Either by a large-scale event or by such massive changes in the environment that we all get replaced. The chance of that happening soon after the technique for revival is available and used is slim. The whole 'humanity might go extinct' argument looks rather FAR to me. People have children and expect them to grow up and have kids of their own while talking abstractly about peak oil and wondering if humanity might make it to 2100. In the long run there are all kinds of problems, like the sun making Earth uninhabitable, but if you get revived well before that you have many years to enjoy and time to find a way out.

Which is all a long-winded way to ask for an elaboration on that point.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-10T04:36:31.750Z · LW(p) · GW(p)

I made the point that you quote because I was anticipating an argument of the type "but cryopreservation has really high expected value because if it works the person frozen can live for many billions of years!" I agree with what you say.

Replies from: MartinB
comment by MartinB · 2010-08-10T04:49:18.737Z · LW(p) · GW(p)

Switch the 'b' for an 'm', or let's just say it's a thousand years. There is no safe way to distinguish these. It would suck to get revived and then die from natural causes a few years later, but considering the effort needed to get awakened in the first place that does not seem likely. You probably saw Aubrey de Grey's lecture on the repeated application of enhancements.

Replies from: lsparrish
comment by lsparrish · 2010-08-10T15:39:10.277Z · LW(p) · GW(p)

Heck what if it only doubles the lifespan instead of multiplying it by insanely high numbers? If you could place everyone who is currently alive (including those suffering terminal illness) in a situation where they live exactly twice as long as a healthy person today, wouldn't you? Wouldn't that have the same or greater moral utility as saving the lives of everyone on earth from a massive meteorite strike or some such? (Assuming a few dozen breeding humans survive so it's not an extinction event.)

Cryonics could potentially accomplish this, with (according to Robin) a 5% chance. But only if it is adopted globally and soon (i.e. before such a time as they would be saved anyway, or are dead already).
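For concreteness, the expected-value reasoning above can be sketched with round numbers. The 5% is Robin's figure from this comment; the population and lifespan inputs below are illustrative assumptions, not figures from the thread:

```python
# Back-of-the-envelope expected value for the scenario above. All inputs are
# illustrative assumptions except the 5% probability attributed to Robin.
def expected_life_years_gained(population, extra_years_per_person, p_success):
    """Expected life-years gained if the intervention works with probability p_success."""
    return population * extra_years_per_person * p_success

# Hypothetical: ~7 billion people each gaining ~80 extra years (a doubled
# lifespan), with a 5% chance that cryonics works.
gain = expected_life_years_gained(7e9, 80, 0.05)
print(f"{gain:.2e} expected life-years")  # 2.80e+10
```

Even heavily discounted, numbers of this shape are what drive the "same or greater moral utility" comparison to averting a catastrophe.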

One possible approach you can take for maximized altruism is simply to support global cryonics without signing up for the small-scale kind. Personally I see signing up myself as a way to lead by example (though I haven't done it yet). Cryonics is in its "early adopter" stage. The sooner it rolls out for mass production, the sooner its real benefits can be realized.

Replies from: DanielLC, MartinB
comment by DanielLC · 2010-08-11T02:47:43.640Z · LW(p) · GW(p)

There's a difference between increasing the lifespans of people and increasing the numbers. The difference between someone living forever and someone living 20 years can be made up for by having an extra kid every 20 years. The bottleneck is how many people the world can support, not how many are born.

Also, saving the lives of everyone on Earth implies allowing them to have kids, and their kids to have kids. It's saving the total number of people the Earth will support, not just the ones alive at the moment.

comment by MartinB · 2010-08-10T15:55:04.275Z · LW(p) · GW(p)

Either number is arbitrary. There is no particular reason for a life to end at some specific point. And many problems can be solved. You can even specify: 'Only revive me if life expectancy goes over n years'

comment by lsparrish · 2010-08-15T18:17:55.236Z · LW(p) · GW(p)

The whole point of arguing that cryonics is a charity, a social good, etc. is because it tends overwhelmingly to be processed as a selfish act. We don't get warm fuzzies for purchasing cryonics the way we do when recycling plastic bottles or whatever. It's not using up the fuzzy supply (or demand rather). It's like cryonics has a huge blinking neon light that says SELFISH on it. But it's not so overwhelmingly selfish in reality -- it is the one thing that the entire world could jump on and live forever with. At least, I don't see any compelling reason to think they couldn't.

The chance of survival is probably much closer to 90% than 5% in my estimation, i.e. there is NO concrete reason to doubt technology can eventually reconstruct the human mind given a good enough morphological approximation to work with -- and cryonics does seem to provide at least that. We can even move it further out of the danger zone over the next couple of decades by researching better vitrification, legalizing premortem suspension, and using larger intermediate-temperature storage units. The fewer morphological distortions and the more cellular viability remains, the better the chances are -- but that does not mean they are bad to begin with.

Now, it may be true that XRR (my handy acronym for existential risk reduction) potentially saves more total lives. It saves the existence of the human race, in addition to all the individual lives it saves. But it bears mentioning that the chance of XR actually happening is a heck of a lot less certain than the chance of everyone DYING in the old-fashioned way from aging and the diseases of aging. Starting with the more well KNOWN problem makes more sense.

As to most people not being capable of being convinced of cryonics, I strongly doubt that this is the case. It's a huge uphill battle no doubt but given enough dollars towards PR (or enough intelligently done promotion by unpaid advocates on the web) it can be done. For all their protestations, it actually corresponds well with most people's actual values, to stay alive when given the option of doing so, to survive into the future, and to save their family and friends from certain death when given the option. And when it comes to being the sort of person who is aware of and cares about AI x-risk factors, cryonics is a good starting point.

That's my opinion anyway.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-09-05T17:02:55.228Z · LW(p) · GW(p)

Thanks for making this comment! I appreciate that you took the time to think about my points and explain where you disagree. I'd be interested in chatting with you sometime - feel free to PM me with your email address if you'd like to correspond.

comment by timtyler · 2010-08-10T04:20:06.698Z · LW(p) · GW(p)

Re: "from a utilitarian point of view the money spent on cryonics would be much better spent by donating to a cost-effective charity".

Sure - but utilitarianism just seems to be a totally bonkers moral system to folk like me. Utilitarianism doesn't even seem to be a very good way of signalling unselfishness - because the signal is so unbelievable. Anyway, if you are assuming a utilitarian framework, maybe consider linking to some utilitarianism advocacy.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-10T04:37:47.782Z · LW(p) · GW(p)

What I mean by "from a utilitarian point of view" is "from the point of view of granting equal ethical consideration to qualitatively similar beings" or something like that. I'm open to suggestions for how I might rephrase more satisfactorily.

comment by Wei Dai (Wei_Dai) · 2010-08-14T00:21:18.708Z · LW(p) · GW(p)

This post seems to be Eliezer's own counter/qualification to Purchase Fuzzies and Utilons Separately. It seems very relevant here, and I'm surprised nobody has brought it up yet. Here's a quote:

If we're operating under the assumption that everyone by default is an altruistic akrasic (someone who wishes they could choose to do more) - or at least, that most potential supporters of interest fit this description - then fighting it out over which cause is the best to support, may have the effect of decreasing the overall supply of altruism.

"But," you say, "dollars are fungible; a dollar you use for one thing indeed cannot be used for anything else!" To which I reply: But human beings really aren't expected utility maximizers, as cognitive systems. Dollars come out of different mental accounts, cost different amounts of willpower (the true limiting resource) under different circumstances, people want to spread their donations around as an act of mental accounting to minimize the regret if a single cause fails, and telling someone about an additional cause may increase the total amount they're willing to help.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-09-05T17:09:54.440Z · LW(p) · GW(p)

Thanks for pointing the linked post out, I had not seen it before.

I'm aware of the points raised therein - I don't actually hold rigidly to the Purchase Fuzzies and Utilons policy. In my top level post I was making a subjective judgment call that cryonics advocacy is so far from being cost-effective that it shouldn't be on the table as a utilon-producing activity. But I may be wrong about this.

In particular, I think that a recent comment by lsparrish explaining his position is well considered. I look forward to talking about this matter more with him sometime.

comment by orthonormal · 2010-08-10T15:24:56.052Z · LW(p) · GW(p)

You (appear to) claim too much for your argument. The only pro-cryonics argument that this counters is Robin's claim of cryonics as efficient altruism, and it doesn't seem to me that any of the other cryonics posts you cited depend on this claim.

You ought to make it clear that Robin's post is the only one you object to on these grounds.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-10T15:36:53.190Z · LW(p) · GW(p)

Would this issue be resolved to your satisfaction if I changed the title of the article to "against altruistic cryonics..."?

Replies from: orthonormal, ciphergoth
comment by orthonormal · 2010-08-10T15:54:46.889Z · LW(p) · GW(p)

That would help, but the introduction needs work too.

It feels like you have two distinct posts awkwardly glued together: one pointing out that cryonics is no more selfish than ordinary selfish expenditures, and another pointing out that it is not the most efficient altruistic use of your money. I'm not sure how they might be better integrated.

comment by Paul Crowley (ciphergoth) · 2010-08-10T15:49:52.613Z · LW(p) · GW(p)

If that's your point, then it would certainly help to make it clear in the heading.

Does wanting to save those close to you count as altruism?

comment by multifoliaterose · 2010-08-12T18:51:39.853Z · LW(p) · GW(p)

I have edited the main post in response to many of the comments below.

comment by James_Miller · 2010-08-10T05:33:23.769Z · LW(p) · GW(p)

Lots of money spent helping poor people in poor countries has done more harm than good. You wrote: "GiveWell estimates that VillageReach and StopTB save lives at a cost of $1,000 each." I bet at least $100 of each $1000 goes indirectly to dictators, and because the dictators can count on getting this money they don't have to do quite as good a job managing their nation's economy. Also, you need to factor in Malthusian concerns.
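Taken at face value, a flat diversion rate changes the direct arithmetic less than one might expect, though it says nothing about the indirect incentive effects described above. A minimal sketch, using the $1,000-per-life estimate quoted from GiveWell and the 10% diversion figure from this bet:

```python
# If a fixed fraction of each donated dollar is diverted before reaching its
# target, the effective cost per life saved scales by 1 / (1 - diverted_fraction).
def effective_cost_per_life(nominal_cost_usd, diverted_fraction):
    """Cost per life saved after accounting for diverted funds."""
    return nominal_cost_usd / (1 - diverted_fraction)

# $1,000 nominal cost with 10% diverted -> ~$1,111 effective cost per life.
print(round(effective_cost_per_life(1000, 0.10), 2))  # 1111.11
```

The stronger version of the argument is therefore the incentive claim (dictators managing their economies worse because aid is guaranteed), not the first-order diversion itself.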

Poor people in poor countries might be better off today if rich countries had never given them any charity.

If lots of people signup for cryonics the world would become more concerned about the future and devote more resources to existential risks.

Replies from: CarlShulman, multifoliaterose
comment by CarlShulman · 2010-08-10T06:08:20.770Z · LW(p) · GW(p)

I often find this sort of argument frustrating. Are you making a serious case that the net effects are that harmful? What are your betting odds? Why not donate to things that don't generate rents to steal, e.g. developing cheaper crops and treatments for tropical diseases? Or pay for transparency/civil society/economic liberalization work in poor countries?

Many people just like to throw up possible counter-considerations to blunt the moral condemnation, and then go on with what they were doing, without considering any other alternatives or actually trying to estimate expected values in an unbiased way. One should either engage on the details of the altruism, or focus on the continuum of selfish expenditures, and note the double-standards being applied to cryonics.

I agree that widespread cryonics would have beneficial effects in encouraging long-term thinking. Edit: and even small changes in numbers could significantly increase the portion of people paying attention to existential risk and the like, given how small that pool is to start with.

Replies from: James_Miller
comment by James_Miller · 2010-08-10T13:33:48.842Z · LW(p) · GW(p)

"Are you making a serious case that the net effects are that harmful?"

Yes. Although development isn't my specialty, I'm a professional economist who has read a lot about development. The full argument I would make is similar to the one that supports the "resource curse": "The resource curse (also known as the paradox of plenty) refers to the paradox that countries and regions with an abundance of natural resources, specifically point-source non-renewable resources like minerals and fuels, tend to have less economic growth and worse development outcomes than countries with fewer natural resources. This is hypothesized to happen for many different reasons, including a decline in the competitiveness of other economic sectors (caused by appreciation of the real exchange rate as resource revenues enter an economy), volatility of revenues from the natural resource sector due to exposure to global commodity market swings, government mismanagement of resources, or weak, ineffectual, unstable or corrupt institutions (possibly due to the easily diverted actual or anticipated revenue stream from extractive activities)." (From Wikipedia)

"What are your betting odds?" Development data is often horrible in part because of deliberate fraud on the part of poor countries and NGOs so it would be very hard to determine criteria for who wins.

"Why not donate to things that don't generate rents to steal, e.g. developing cheaper crops and treatments for tropical diseases?"

Cheaper crops harm farmers.

Treatments for tropical diseases cause Malthusian problems, and must be administered by medical staff the dictators approve of, in buildings the dictators allow to be built. One theory holds that AIDS spread so rapidly through Africa because of dirty needles used by medical personnel.

The best justification for what I wrote comes from a quote from Robert Lucas that is one of the three quotations on my Facebook homepage. "But of the vast increase in the well-being of hundreds of millions of people that has occurred in the 200-year course of the industrial revolution to date, virtually none of it can be attributed to the direct redistribution of resources from rich to poor. The potential for improving the lives of poor people by finding different ways of distributing current production is nothing compared to the apparently limitless potential of increasing production."

The idea that the rich "should" distribute resources to the poor has done massive damage to both the world's rich and poor.

Replies from: CarlShulman, NancyLebovitz
comment by CarlShulman · 2010-08-10T14:20:52.185Z · LW(p) · GW(p)

I agree that the resource curse elements of aid exist (and think it plausible that 'development aid' has had minimal or negative effects), but they have to be quite large to negate the direct lifesaving effects of the best medical aid, e.g. vaccines or malarial bed nets.

Cheaper crops harm farmers.

The Green Revolution did not harm poor Indians, by a very wide margin. I'm talking about developing new strains, not providing food aid purchased from rich-country farmers.

Treatments for tropical diseases cause Malthusian problems, must be administered by medical staff dictators approve of in buildings dictators allow to be built.

There is some bribery and theft bound up with medical aid too, aye. But the Malthusian argument is basically saying better that they die now to expedite growth later? Really?

"But of the vast increase in the well-being of hundreds of millions of people that has occurred in the 200-year course of the industrial revolution to date, virtually none of it can be attributed to the direct redistribution of resources from rich to poor."

The Green Revolution, smallpox eradication, financial support for vaccination and malaria control all involved rich country denizens spending on benefits for the poor. Hundreds of millions of lives involved. The benefits of economic growth dwarf the benefits of aid, but the latter are not negligible.

Replies from: James_Miller, Douglas_Knight
comment by James_Miller · 2010-08-10T14:58:58.535Z · LW(p) · GW(p)

Rich countries used aid dollars to pressure African countries to stop using DDT. Aid has probably increased the number of poor people who have died from malaria.

Most of the agriculture-improving technologies were developed for profit rather than charitable reasons, although dwarf wheat is an important exception that supports your viewpoint.

Eliminating smallpox wasn't really done for charitable reasons, meaning that rich countries had an incentive to be efficient about it. It also caused the USSR to develop smallpox bio-weapons.

Africa's main problem is low economic growth caused mostly by its many "vampire" governments. Aid feeds these vampires and so does create negative effects large enough "to negate the direct lifesaving effects of the best medical aid, e.g. vaccines or malarial bed nets."

I'm not claiming Malthusian factors should dominate moral considerations, just that they need to be taken into account.

Although I can't prove this, I believe that the vast sums of money spent on foreign aid to poor nations have done much to convince the elites of poor nations that their nations' poverty is caused by an unjust distribution of the world's resources rather than by the elites' corruption and stupid economic policies.

Replies from: CarlShulman, Douglas_Knight
comment by CarlShulman · 2010-08-10T19:46:02.236Z · LW(p) · GW(p)

James, the discussion was about things that one can donate to as a private individual looking to have a maximal positive impact, using resources like GiveWell and so on. So arguments that governments doing foreign aid are often not trying to help or serving crazy side-concerns (e.g. with DDT, although that's often greatly exaggerated for ideological reasons) aren't very relevant.

I gave smallpox as an example of a benefit conferred to poor people by transferring resources (medical resources) to their countries. I agree about sloppiness on the part of governments and most donors, but that doesn't mean that those rare birds putting effort into efficacy can't attain some.

I agree that Africa's main problem is low economic growth, and that vampire states play a key role there (along with disease, human capital, etc). You never answered my earlier question, "why not fund anti-corruption/transparency/watchdog groups?" Would you guess that the World Bank Doing Business Report saves one net life per $1000 of expenditure?

Replies from: James_Miller
comment by James_Miller · 2010-08-10T21:03:45.748Z · LW(p) · GW(p)

"why not fund anti-corruption/transparency/watchdog groups?" I don't think it would do any good, although I don't know enough about these groups to be certain of this.

I believe that on average charity given to poor people in poor countries does more harm than good, and I don't think most people (myself included) are smart enough (even with the help of GiveWell) to identify situations in which giving aid helps these people in large part because of the negative unintended indirect effects of foreign charity.

In contrast, I think that technological spillovers hugely benefit humanity and so while spending money on cryonics isn't the first best way of helping humanity it is better than spending the money on most types of charities including those designed to help poor people living in corrupt dictatorships.

Replies from: mattnewport
comment by mattnewport · 2010-08-10T21:15:30.535Z · LW(p) · GW(p)

I agree. It seems likely to me that for-profit investment in developing new technologies (and commercializing existing technologies on a large scale) has had a greater positive impact on human welfare than charitable spending over the last few hundred years. Given that it has also made a lot of early investors wealthy in the process (while no doubt also destroying the wealth of many more), and likely has a net positive expected return on investment, I personally like it as a way to allocate some of my resources.

comment by Douglas_Knight · 2010-08-10T18:06:47.254Z · LW(p) · GW(p)

Rich countries used aid dollars to pressure African countries to stop using DDT.

As far as I have been able to determine, this is false.

Replies from: ciphergoth, James_Miller
comment by James_Miller · 2010-08-10T18:21:48.700Z · LW(p) · GW(p)

See

http://townhall.com/columnists/JohnStossel/2006/10/04/hooray_for_ddts_life-saving_comeback

http://web.worldbank.org/WBSITE/EXTERNAL/COUNTRIES/AFRICAEXT/EXTAFRHEANUTPOP/0,,contentMDK:20905156~pagePK:34004173~piPK:34003707~theSitePK:717020,00.html

http://www.fightingmalaria.org/article.aspx?id=936

http://www.fightingmalaria.org/article.aspx?id=137

Replies from: satt, Douglas_Knight
comment by satt · 2010-08-11T02:08:44.115Z · LW(p) · GW(p)

I haven't yet looked at your last three links, but the first is a tendentious polemic. Taking a look...

After more than 30 years and tens of millions dead -- mostly children -- the World Health Organization (WHO) has ended its ban on DDT.

This claim is true only in the limited sense that the WHO has tried to stop indiscriminate DDT spraying. But as far as I know, the WHO has never handed down a blanket ban on DDT.

There isn't a date on Stossel's editorial, but going by the URL it was published in October 2006. Official WHO documents predating that condone the use of DDT under limited circumstances. For example, this archived copy of a WHO FAQ on DDT from August 2004 says, "WHO recommends indoor residual spraying of DDT for malaria vector control", citing this 2000 report from the WHO Expert Committee on Malaria. On page 38 (p. 50 in the PDF), the 2000 report "endorsed" the conclusion of a still earlier 1995 study group that "DDT may be used for vector control, provided that it is only used for indoor spraying, it is effective, the WHO product specifications are met, and the necessary safety precautions are applied for its use and disposal".

DDT is the most effective anti-mosquito, anti-malaria pesticide known. But thanks to the worldwide environmental movement and politically correct bureaucrats in the United States and at the United Nations, the use of this benign chemical has been discouraged in Africa and elsewhere, permitting killer mosquitoes to spread death.

I don't see how anyone can honestly call DDT "benign" unless they're ignorant of the evidence for its negative ecological effects. At any rate, Stossel's decision to solely blame environmentalists & government busybodies for DDT's unpopularity is disingenuous. Increasing resistance to DDT is another (I would have thought obvious) reason.

DDT was banned by President Richard Nixon's Environmental Protection Agency in the early 1970s, after Rachel Carson's book, "Silent Spring," claimed to show that DDT threatened human health as well as bird populations. But some scientists found no evidence for her claims.

Which is basically meaningless without quantitative evidence. There are always a couple of scientists somewhere who fail to replicate findings that some chemical is dangerous. Also, the EPA ban does not appear to have been a complete ban; this pro-DDT article points out that "the public health provisions of the 1972 US delisting of DDT have been used several times after 1972 in the US to combat plague-carrying fleas, in Colorado, New Mexico and Nevada".

Even if there was danger to bird eggs, the problem was the amount of DDT used, not the chemical itself.

Presumably Stossel's implying that the EPA should therefore have regulated the amount of DDT used instead of banning it. But the EPA did allow some uses of DDT after its ban, and the trivially true fact that the dose makes the poison isn't sufficient for Stossel's implied argument to go through; he also has to show that regulation would suffice to keep DDT exposure below some critical numerical threshold. Which he doesn't.

Huge amounts of the chemical were sprayed in America. I've watched old videos of people at picnics who just kept eating while trucks sprayed thick white clouds of DDT on top of them. Some people even ran toward the truck -- as if it was an ice-cream truck -- they were so happy to have mosquitoes repelled. Tons of DDT were sprayed on food and people. Despite this overuse, there was no surge in cancer or any other human injury.

This statement is off in two ways. Firstly, just looking for a "surge" in aggregated levels of injury in the US is a poor way to assess DDT's level of dangerousness. Secondly, how does Stossel know there was "no surge" in not only cancer, but also "any other human injury"?

Even sticking to cancer, which is relatively well reported, the National Cancer Institute's SEER program only has cancer incidence data from 1973 onwards, and I've not found earlier reliable data for US cancer incidence. The SEER data isn't much use for evaluating Stossel's claim because, of course, it starts the year after the EPA banned DDT in the US. There are earlier estimates of the cancer rate based on death certificates, but I don't know how well those track incidence. (I'd guess neither does Stossel.)

Nevertheless, the environmental hysteria led to DDT's suppression in Africa, where its use had been dramatically reducing deaths.

Again Stossel ignores insecticide resistance.

American foreign aid could be used to finance ineffective alternative anti-malaria methods, but not DDT.

I'm not even sure how to test the claim that American foreign aid couldn't be used to finance DDT use — "American foreign aid" is pretty vague. And what about effective alternative anti-malaria methods like bed nets? Is Stossel implying that there are no effective alternative anti-malaria methods?

Within a short time, the mosquitoes and malaria reappeared, and deaths skyrocketed. Tens of millions of people have died in that time.

But specifically what proportion of those deaths were caused by reductions in DDT use? [Edit: and what sub-proportion of that proportion of deaths could be attributed to foreign aid, rather than other motivations for using less DDT?]

And so on and so forth. It's also discouraging that the column's penultimate four paragraphs are based on hyperbolic soundbites from Steven Milloy, who has past form in pseudoscience.

Not only is the column misleading, but the claim that

Rich countries used aid dollars to pressure African countries to stop using DDT. Aid has probably increased the number of poor people who have died from Malaria.

is not really meaningful without putting numbers on it. I expect there must be at least one African out there who's died of malaria because of aid's political pressure. But it's not really a compelling argument against aid unless the actual malaria death count due to pressure exerted via foreign aid is much higher.

Replies from: James_Miller
comment by James_Miller · 2010-08-11T10:54:05.089Z · LW(p) · GW(p)

You make some good points.

comment by Douglas_Knight · 2010-08-10T18:38:31.576Z · LW(p) · GW(p)

Of course I've seen lots of articles like that.

The first article opens with "the World Health Organization (WHO) has ended its ban on DDT" which is simply a lie. The third article makes the less verifiable claim:

Meanwhile, vast swathes of the anti-malaria community, including the malaria teams within national donor agencies, are quietly opposed to DDT. Agencies include insecticide spraying in their literature, but then run No-Spray programs.

but I have never seen evidence of this claim. In fact, I have seen it confabulated on the spot by people caught in the first lie.

comment by Douglas_Knight · 2010-08-10T18:19:59.108Z · LW(p) · GW(p)

But the Malthusian argument is basically saying better that they die now to expedite growth later?

If one believes that it is better, for the individual or the group, to die in war or acute famine than to live malnourished, then peace and a stable food supply may be bad (but then one should apply the reversal test and ask such people whether they support war and a high-variance food supply).

But disease is not like war or acute famine. The survivors are often permanently affected, in many ways like the malnourished. So many arguments that consider Malthusian conditions should support medical aid.

comment by NancyLebovitz · 2010-08-10T15:55:34.170Z · LW(p) · GW(p)

Do gambling and tourism count as resource curses? They're renewable resources, but they don't seem to do localities much good.

Replies from: James_Miller, Oligopsony
comment by James_Miller · 2010-08-10T16:38:27.136Z · LW(p) · GW(p)

No, because an incompetent or evil government can lose them as a source of revenue. Zimbabwe, for example, has no doubt lost many tourist dollars because of state violence. This loss might be deterring some other African governments from engaging in too much state violence.

In contrast, governments often get more economic aid if they engage in destructive economic policies.

comment by Oligopsony · 2010-08-10T16:52:09.243Z · LW(p) · GW(p)

Theoretically, a particularly beautiful landscape or cultural affinity for some profession might lead to Dutch Disease effects. The renewability of the resource isn't really the relevant factor; it just happens that most supply shocks of the required magnitude consist of natural resource endowments.

Service industries like gambling and tourism don't generally have these effects, though. What they do have are typically lower wages, greater seasonality, and fewer technology spillovers than manufacturing.

comment by multifoliaterose · 2010-08-10T05:55:26.497Z · LW(p) · GW(p)

Lots of money spent helping poor people in poor countries has done more harm than good. You wrote: "GiveWell estimates that VillageReach and StopTB save lives at a cost of $1,000 each." I bet at least $100 of each $1000 goes indirectly to dictators, and because the dictators can count on getting this money they don't have to do quite as good a job managing their nation's economy. Also, you need to factor in Malthusian concerns.

See my responses to Vladimir_M's comments here

If lots of people signup for cryonics the world would become more concerned about the future and devote more resources to existential risks.

I think (but am not sure) that you're right about this, but even if you are there's still the question of whether at the margin advocating for cryonics is a worthwhile endeavor. My intuition is that we're so far away from having a population interested in signing up for cryonics (because of the multitude of irrational biases that people have against cryonics) that advocating for cryonics is a very inefficient way to work against existential risk.

I'd be interested in any evidence that you have that

•Signing up for cryonics motivates people to devote resources to mitigating existential risk.

•It's feasible to convince a sufficiently large portion of the population to sign up for cryonics that cryonics is no longer a fringe practice which makes people in the general population uncomfortable around those who have signed up.

Replies from: James_Miller
comment by James_Miller · 2010-08-10T13:39:27.888Z · LW(p) · GW(p)

"I'd be interested in any evidence that you have"

A vastly disproportionate percentage of the people who have signed up for cryonics are interested in the singularity and have helped the SIAI through paying for some of its conferences. This, I admit, might be due to correlation rather than causation.

Replies from: multifoliaterose
comment by multifoliaterose · 2010-08-10T14:40:11.151Z · LW(p) · GW(p)

Your point is valid, but you seem to have dodged the thrust of my main post. Do you really think that cryonics advocacy is comparable in efficacy to the most efficient ways of working against existential risk? If not, you should not conceptualize cryonics advocacy as philanthropic.

Replies from: James_Miller
comment by James_Miller · 2010-08-10T18:29:16.045Z · LW(p) · GW(p)

"Do you really think that cryonics advocacy is comparable in efficacy to the most efficient ways of working against existential risk?"

No, but I do think spending money on cryonics probably increases expenditures on existential risk. Cryonics and existential-risk spending are complements, not substitutes.

Moreover, your "not first-best" argument against cryonics also applies to over 99.999% of human expenditures and labors.

comment by multifoliaterose · 2010-08-12T18:52:15.267Z · LW(p) · GW(p)

I have edited the main post in response to many of the comments below.

comment by steven0461 · 2010-08-10T23:58:06.132Z · LW(p) · GW(p)

If part of the point of cryonics advocacy is to get people thinking seriously about the future, I'd like to see more LW material aimed at present and future cryonicists explaining why, as cryonicists, they should start thinking seriously about the future.

comment by [deleted] · 2010-08-10T12:30:00.128Z · LW(p) · GW(p)

Great post, and expresses precisely what I think about the whole issue.