Cryonics and Pascal’s wager

post by Timwi · 2011-02-18T18:36:43.757Z · LW · GW · Legacy · 29 comments

The Cambridge UK meet-up on Saturday 12 February went really well. Many thanks to everyone who came and contributed to a wonderful and entertaining discussion.

One of the topics that came up was that of cryonics. This is the idea of having your body (or maybe just your brain) frozen after death, to be thawed and revived in the far future when medical technology has advanced to the point where it can heal you. Is this a rational thing to do?

The argument I heard from some of the other attendees effectively boils down to “what have you got to lose?” In other words, have yourself frozen just in case it works and you can be resurrected.

This struck me as awfully reminiscent of Pascal’s Wager, which is similarly a “what have you got to lose?” type of argument. In its original form, it is about belief in a god and goes something like this:

You can either believe in God or not. If you do, you will either be rewarded with eternity in heaven (if you’re right) or nothing happens (if you’re wrong). But if you don’t believe, you will either be punished by eternal torture (if you’re wrong) or nothing happens (if you’re right). It’s a no-brainer! You’re better off believing.

This argument falls down on many counts, but I’ll concentrate on a specific one: it makes a far-fetched assumption about the set of possible outcomes, namely that there are only the two possibilities quoted and no others. It ignores, for example, the possibility of a god that only rewards sceptical atheists.

Coming back to cryonics, the argument seems to proceed approximately like this:

You can either have yourself frozen or not. If you do, you will either wake up in a wonderful, happy-go-lucky utopian future with amazing technological advances (if cryonics works) or nothing happens (if it doesn’t). But if you don’t have yourself frozen, nothing happens either way. It’s a no-brainer! You’re better off in cryopreservation.

If I haven’t already made it abundantly clear, the assumption that the future you wake up in is at all desirable for you is a far-fetched one. It ignores the possibility of waking up as a slave with no opportunity for suicide.

What are everybody’s thoughts on this?

29 comments


comment by Unnamed · 2011-02-18T21:09:24.231Z · LW(p) · GW(p)

Welcome to Less Wrong, Timwi. It looks like you're relatively new here, so you might not know that the topic of cryonics has received a lot of discussion on LW, enough to appear in the tag cloud on the front page. People here tend to have relatively favorable views of it: some (including Eliezer Yudkowsky) are convinced that it's the right choice, have signed up, and advocate that everybody sign up, and even those who aren't convinced tend to at least see cryonics as a reasonable possibility that's worth considering.

Before getting too caught up in debating particular aspects of cryonics, you may want to take some time to familiarize yourself with some of the ideas and arguments that have already been discussed on LW to get a better sense of the case for cryonics. The LW wiki entry and this post by Eliezer are good places to start, and they also give you various links to follow to learn more.

Replies from: James_Miller
comment by James_Miller · 2011-02-18T23:03:09.426Z · LW(p) · GW(p)

I think it's OK for a newcomer to post topics in the discussion forum that have already been discussed at length. You learn a lot more if you can actively engage in conversation rather than just reading what other people have written.

comment by Nornagest · 2011-02-18T19:01:07.773Z · LW(p) · GW(p)

Cryonics is a good bet if the expected utility from being restored in the future exceeds the expected utility you lose from funding it in the present, modulo any discounting you may choose to perform. As best I can tell, there are two free variables in that equation: first, the probability of being restored; and second, the expected utility of your future life given restoration.

I don't think we can put firm bounds on either of these, but I'd be very surprised if the latter isn't positive. Your slavery hypothesis strikes me as unlikely: if a society capable of resuscitating cryonics patients found itself in need of cheap labor, it would almost certainly have cheaper sources of it. Ideological reasons for resuscitating cryonics patients and making their new lives unpleasant seem somewhat more likely (perhaps the future finds some aspect of our behavior repugnant and demands retribution?), but I can't think of many that don't require several things to go wrong at once.
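As a rough illustration of that comparison (my own sketch, not Nornagest's; every number below is a hypothetical placeholder rather than an estimate from this thread), the decision reduces to a one-line expected-value calculation:

```python
# Rough expected-utility comparison for signing up for cryonics.
# All numbers here are hypothetical placeholders, not estimates from this thread.

def cryonics_expected_gain(p_restore, utility_if_restored, cost_utility, discount=1.0):
    """Expected utility of signing up, minus the utility sacrificed to fund it.

    p_restore           -- probability of ever being restored
    utility_if_restored -- expected utility of your future life given restoration
    cost_utility        -- utility lost by paying for cryonics now
    discount            -- optional discount factor applied to the future payoff
    """
    return discount * p_restore * utility_if_restored - cost_utility

# Example with placeholder values: a 5% chance of restoration, a future life
# valued at 1000 "utils", and a present cost of 20 utils.
gain = cryonics_expected_gain(p_restore=0.05, utility_if_restored=1000.0, cost_utility=20.0)
print(f"Expected net gain: {gain:+.1f} utils")  # positive on these numbers => worth it
```

On these made-up numbers the bet comes out positive, but the sign of the answer turns entirely on the two free variables named above.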

comment by Vladimir_Nesov · 2011-02-18T19:58:15.826Z · LW(p) · GW(p)

If I haven’t already made it abundantly clear, the assumption that the future you wake up in is at all desirable for you is a far-fetched one. It ignores the possibility of waking up as a slave with no opportunity for suicide.

See Reversal test, Motivated skepticism, Rationalization.

(The assumption that you can go to a grocery store and buy some orange juice is a far-fetched one, since it ignores the possibility of a global conspiracy led by Sarah Palin bent on not letting you do that.)

comment by JGWeissman · 2011-02-18T18:55:08.622Z · LW(p) · GW(p)

the assumption that the future you wake up in is at all desirable for you is a far-fetched one. It ignores the possibility of waking up as a slave with no opportunity for suicide.

These possibilities are not equally unlikely.

Replies from: Prismattic
comment by Prismattic · 2011-02-19T00:39:57.046Z · LW(p) · GW(p)

I agree with this. It is worth considering, though, that there are possibilities other than slavery and utopia, and also that “desirable for you” is a more complicated issue than material calculations of utility would suggest.

I think human life is better in the 21st century than it was in the 16th century, and that it was better in the 16th century than in the 11th century, etc. I am also pretty sure that if you could grab people from the 19th century and bring them into the present, they would adapt and ultimately end up with more utility in our time. I'm less certain this would be true for someone from the 11th century, however. The world might be so alien to their preferences, and their ability to acquire meaningful skills to contribute in any way other than as a museum curiosity so limited, that they wouldn't be grateful, even though in material terms life is objectively better. So if cryonics is successful, but takes a thousand years instead of a hundred, the results might not be as “desirable” as one would hope.

comment by wedrifid · 2011-02-19T04:24:18.264Z · LW(p) · GW(p)

It ignores the possibility of waking up as a slave with no opportunity for suicide.

Perhaps not quite as implausible as waking up (or staying asleep) as batteries, but certainly in the same vein!

There are not all that many scenarios in which future agents would bother rebuilding us to use as slaves. We just wouldn't be worth the hassle.

comment by James_Miller · 2011-02-18T20:18:55.459Z · LW(p) · GW(p)

The technology needed to revive you would almost certainly make slavery obsolete.

Replies from: knb, DanielVarga
comment by knb · 2011-02-20T10:35:33.797Z · LW(p) · GW(p)

I disagree. Robin Hanson has extensively argued that the development of software emulations (such a development is a likely necessity for cryonics to work) would lead to a crash in wages (since supply of labor would be able to rise quickly enough to meet demand for the first time since the Industrial Revolution). In that scenario, almost all market power would fall to the owners of capital, and since wages would be near subsistence anyway, lots of ems might offer ownership of themselves in exchange for the right to survive. That situation has historical precedent (and lingers in parts of Africa to this day).

comment by DanielVarga · 2011-02-18T22:06:24.378Z · LW(p) · GW(p)

The technology needed to revive you would almost certainly make humans obsolete.

comment by Dan_Moore · 2011-02-18T21:00:17.659Z · LW(p) · GW(p)

It may well be that a future society would value the first-hand accounts of the early 21st century as told by a resuscitated mom or popsicle. (No disrespect intended; just anticipating future slang.)

comment by Paul Crowley (ciphergoth) · 2011-02-21T09:09:13.134Z · LW(p) · GW(p)

I think the chances of revival are much better than the sliver-thin chances suggested by the analogy to Pascal's Wager. While there are many non-technical ways it can fail, no-one is in a position to be very confident that we'll see any of those failure modes - that there will be a general societal failure that stops the LN2 deliveries, or that the cryonics organisations themselves will collapse, or such. And while there are a great many unknowns on the technical side, the more I look into it the more I think that the chances of recovery being technically plausible are high - well into the "more likely than not" side of things.

comment by timtyler · 2011-02-18T20:19:22.596Z · LW(p) · GW(p)

For reference, Yudkowsky's comments on the topic.

Replies from: mwengler
comment by mwengler · 2011-02-18T21:10:44.695Z · LW(p) · GW(p)

Pascal's wager pits what seems like infinite utility against a finite, although small, probability of it happening. Infinite trumps small: how could devoting your life to a non-vanishing probability of infinite utility ever be wrong within the kinds of mathematical approach favored around here? But within that mathematical model there are also infinite negatives (if the Muslim god turns out to be right, you get an infinite negative utility from believing in the Christian god). Infinity minus infinity is undefined, so we have no idea what the real payoff is for Pascal's wager, no idea whether the expectation value is positive or negative.
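A toy sketch of that point (my own illustration, not part of the comment): once opposite unbounded payoffs both have non-zero probability, the expectation is literally undefined, which Python's floating-point arithmetic makes vivid.

```python
import math

# Two rival hypotheses, each with some small non-zero probability, offering
# opposite unbounded payoffs for the same action ("believe in god X").
p_reward = 1e-9            # probability that the believed-in god rewards you
p_punish = 1e-9            # probability that a rival god punishes the same belief
payoff_reward = math.inf   # infinite positive utility (heaven)
payoff_punish = -math.inf  # infinite negative utility (the rival god's hell)

expected_value = p_reward * payoff_reward + p_punish * payoff_punish
print(expected_value)              # nan: inf + (-inf) is undefined
print(math.isnan(expected_value))  # True -- the wager's expectation has no sign
```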

But with cryonics, the downside seems close to nothing: I guess the vanishingly small probability that you delay your entry into heaven for a few hundred to billion years while you remain frozen, until cryonics fails or the earth is destroyed, or the other, even lower, probability that you are revived by people advanced enough to revive you but economically backward enough to value humans as slaves.

I'm still not signing up. I'd still rather have a summer house or a really kick-ass vacation in Africa or something. But Yudkowsky's article I think does clarify the issue re: Pascal's wager.

Replies from: wedrifid
comment by wedrifid · 2011-02-19T04:06:41.687Z · LW(p) · GW(p)

I guess the vanishingly small probability that you delay your entry into heaven for a few hundred to billion years while you remain frozen,

If we stay here ten billion years,
In the embers of the last shining sun,
We'll have no less days to sing God's praise,
Than if we cark it when it's barely begun.

Replies from: gjm
comment by gjm · 2011-02-19T19:09:53.945Z · LW(p) · GW(p)

That regrettably fails to fit the usual tune.

comment by falenas108 · 2011-02-18T19:55:42.306Z · LW(p) · GW(p)

Another place where Pascal's wager fails is the cost one has to pay for believing (such as attending services, time/money for religious holidays, etc.). In the case of cryonics, it is a literal price: the cost of keeping you frozen.

So what this boils down to is weighing the expected utility of the chance to live in the future against the value of the money that you and your family could have used for something else.

Replies from: wedrifid
comment by wedrifid · 2011-02-19T04:15:31.405Z · LW(p) · GW(p)

Another place where Pascal's wager fails is the cost one has to pay for believing (such as attending services, time/money for religious holidays, etc.)

Pascal's wager does not fail there. It would hold up under torture, early death and poison that transforms your blood into fire ants too. Cryonics is not nearly so robust in that particular respect.

comment by mutterc · 2011-02-21T02:13:33.195Z · LW(p) · GW(p)

If you believe that nothing is worse than being dead, then cryonics does legitimately become a no-brainer (it's not like it could make you more dead).

But accepting that nothing is worse than being dead, despite being common on LW, is not trivial. It seems at a minimum you'd have to accept Fun Theory (I don't yet understand Fun Theory well enough to accept or reject it).

Replies from: wedrifid
comment by wedrifid · 2011-02-21T04:17:43.875Z · LW(p) · GW(p)

If you believe that nothing is worse than being dead, then cryonics does legitimately become a no-brainer (it's not like it could make you more dead).

Not quite. Even then there is opportunity cost to consider. Those resources could be directed to other methods of life extension.

Replies from: advancedatheist
comment by advancedatheist · 2011-02-22T15:54:47.409Z · LW(p) · GW(p)

Those resources could be directed to other methods of life extension.

None of which happens to work now, despite propaganda I've heard since the 1970s about imminent breakthroughs in superlongevity. The people who say we'll have 150-year life expectancies or whatever by some randomly postulated year in this century don't understand what "life expectancy" means. We determine life expectancy retrospectively from statistics gathered about populations of individuals who have already died, and we won't know whether people dying past the age of 150 constitute a trend or just statistical noise until a significant number of them have done so. Clearly we won't have the ability to gather those data in this century - I would have to wait until the year 2109 to see my 150th birthday - and I think all these "immortalist" obsessives like Ray Kurzweil just waste money and possibly damage their health by ingesting their "life extension" quackery.

By contrast, we can conduct experiments in brain cryopreservation which generate useful data in a timely fashion, like most other scientific experiments. If you want to see research into something which could show tangible returns for your survival, cryonics has some advantages over chasing after an anti-aging breakthrough which won't arrive for many decades, if not centuries.

comment by Richard_Kennaway · 2011-02-18T20:07:00.460Z · LW(p) · GW(p)

Up front, I'll say that I am not signed up for cryonics, and have no particular plans to. (Yes, I know this means that I will almost certainly be permanently dead within 50 years.) Nobody else at the meet-up was signed up, although some expressed a vague intention to do it one day.

Anyway, an argument in favour is not only that you are likely to wake up (if you wake up at all) in a favourable situation, but also that you can look forward to a greatly extended lifespan, since making living bodies last longer seems like an easier problem than resuscitating corpsicles frozen with the crude technology of today. This multiplies the utility by a huge amount.

Despite that and other arguments, and a personal desire to live a healthy life for as long as possible, I remain unenthusiastic about devoting a substantial proportion of my resources to the project (not just money -- you cannot buy cryonics like you can buy a picture to hang on your wall; you need a plan for the actual suspension). Small probabilities of stupendous outcomes fail to move me even if their product is greater than the cost. You can play the figures like a guitar and get any answer you want.

A downside that I haven't seen much attention paid to, although it does get mentioned from time to time, is the problem of having the organisation that has the care of your corpsicle survive long enough, and take good enough care. You can't freeze a social structure and put it in a vat for a century -- it has to live through all the time that you don't. What are the chances?

Replies from: lsparrish, advancedatheist
comment by lsparrish · 2011-02-19T03:08:52.935Z · LW(p) · GW(p)

A downside that I haven't seen much attention paid to, although it does get mentioned from time to time, is the problem of having the organisation that has the care of your corpsicle survive long enough, and take good enough care. You can't freeze a social structure and put it in a vat for a century -- it has to live through all the time that you don't. What are the chances?

There's a flip side to that, which is that a cryonicist who takes the idea of reanimation seriously must also take the idea of future stability more seriously. Unlike any other living person, they have reason to anticipate the long-term results of today's political stances, investments, and social developments in terms of actual sensory experiences. I'm not sure if this actually translates to increased rationality, but it seems like it should.

comment by advancedatheist · 2011-02-22T16:06:15.839Z · LW(p) · GW(p)

A surprisingly large number of firms have lasted for centuries, even in war-torn countries like Germany:

http://en.wikipedia.org/wiki/List_of_oldest_companies

Replies from: Richard_Kennaway, NancyLebovitz
comment by Richard_Kennaway · 2011-02-22T18:48:50.248Z · LW(p) · GW(p)

2% for a century. Multiply that into the chances of cryonics working.

comment by NancyLebovitz · 2011-02-22T16:40:34.888Z · LW(p) · GW(p)

That surprises me, too -- about 1% are at least 200 years old, though I don't know where the 2 million in the database come from. That seems very low for all the companies in the world, or even in the developed world.

comment by Vladimir_Nesov · 2011-02-18T19:53:25.289Z · LW(p) · GW(p)

This struck me as awfully reminiscent of Pascal’s Wager

See http://lesswrong.com/lw/z0/the_pascals_wager_fallacy_fallacy/

You see it all the time in discussion of cryonics. The one says, "If cryonics works, then the payoff could be, say, at least a thousand additional years of life." And the other one says, "Isn't that a form of Pascal's Wager?"

The original problem with Pascal's Wager is not that the purported payoff is large. This is not where the flaw in the reasoning comes from. That is not the problematic step. The problem with Pascal's original Wager is that the probability is exponentially tiny (in the complexity of the Christian God) and that equally large tiny probabilities offer opposite payoffs for the same action (the Muslim God will damn you for believing in the Christian God).

comment by advancedatheist · 2011-02-22T14:50:36.940Z · LW(p) · GW(p)

If I haven’t already made it abundantly clear, the assumption that the future you wake up in is at all desirable for you is a far-fetched one.

FM-2030 expressed a more optimistic assessment of the values of Future World in his ebook Countdown to Immortality:

TIME REENTRY AND ADAPTATION

How will an individual suspended today adjust to life upon reentry in the future?

Time-reentry adjustment will not be a serious problem for the following reasons:

Anyone suspended in these years will probably not have to wait long for reanimation. In fact the time will come when long-term suspension will make no sense. Deathcorrection will be quick and therefore catch-up will not be a problem.

People are living longer and longer. Therefore many of the reanimate’s friends and acquaintances will be around.

More and more people are signing up for cryonic suspension. When they are eventually brought back, they will find other reanimates from their original time zones.

What if you do not find any familiar faces upon reentry? What of it? You will make new friends. Why not start afresh? Isn’t this precisely what tens of millions of people now do when they voluntarily move from one part of the planet to another? In our fluid times many of our friendships and associations are not lifelong and continuous anyway.

We humans are remarkably adaptable. In recent decades we have seen entire populations switch eons - from Stone Age to Electronic Age - from the feudal/agrarian world to the industrial and the telespheral. There is no limit to our adaptability.

Entire generations are now born into worlds of real-time acceleration. To them and to all of us rapid realignment is the norm. We are not even aware we are continually desynchronizing.

In the coming decades reanimates may not be the only ones having to readapt. Increasing numbers of people will drop out of our world and start new lives elsewhere in the solar system. Some of these extraterrestrials will come back and may also have to zone in.

In the new century we will learn about Time and Space reentry and devise catch-up skills. For example: rapid updates via onbody computers and audio/visuals - rapid playbacks and overviews via touch-and-enter holospheres - body-attached or brain-implanted decision-assists - automatic information-transfer procedures and so on. We may also have rapid genetic fine-tuning to help returnees improve their concentration - memory - adaptability - learn/unlearn.

Finally in the coming years and decades the world will grow more and more open and friendly. This very day we are outgrowing age-old adversarial barriers: tribalism - racism - classism - sexism - nationalism. The freeflow of people across the planet is speeding up. My projection is that a person suspended in the coming years and reentering decades later will at first have more problems with the relative friendliness and openness of the new century than anything else.

I would also point out that we don't get "the future" all at once. Even if our living conditions seem suboptimal at first, if we have radically extended lives we'll have the time to work towards situations more to our liking.

comment by advancedatheist · 2011-02-22T14:39:57.139Z · LW(p) · GW(p)

Cryonicist Thomas Donaldson (Ph.D. in mathematics from the University of Chicago) pointed out the bad assumptions behind this sort of reasoning back in 1989:

http://www.alcor.org/Library/html/probability.html

Here is an example of the problem I'm raising, with the issues raised to an absurd level just for clarity. A new gambling house sets up in Reno. The owner undertakes to bet with everyone about whether or not he, the owner, will do his laundry tomorrow. Bets are made today and close at 6 PM. (Perhaps gambling houses already operate this way?) Do we, then, expect a rush of clients?

The problem with this bet is that he, the owner, has some control over whether or not he does his laundry. Not only are the dice loaded, but he gets to pick, after all bets are laid, which loaded die to use. Computing probabilities only makes sense when the events bet upon are known to be random. . . . this means that our actions can have NO effect upon the outcome. I don't mean "only a very little." NO means none at all, zilch, zero. Why zero? Because our actions now are seeds, not just "observational errors" which lead nowhere. Once we admit that our actions can influence these events, how do we predict by how much and when?

Within a very wide range, what happens to us is our responsibility. We are not passive betters on the outcome of events. I mean this both in the narrow sense of I, me, myself, and in the broader one of cryonicists generally. How can I (myself) affect my frozen fate 100 years from now? Well, for one thing I can choose my cryonics society. I can try to make its officers not only honest and competent as individuals, but operating within a constitution which keeps them honest and competent or throws them out of office. And I can provide enough resources so that evasive action is possible when any threat appears. Third, I can try to arrange that equipment, supplies, and competent people will be available when I'm declared legally dead. And of course last of all I can try to create other cryonicists.

But of course someday I will be frozen. What control do I have then? Not directly, but through other cryonicists who succeed me. We have all joined together for a journey across time. If anyone is revived 50 years from now, even with technology far in advance of ours and in another country, it will strengthen my chances. I believe the important part to remember about [conjectural] social catastrophes is that every one of us is putting out effort to see that they do not occur to us.

In other words, cryonicists can get off their butts and start to do some constructive things to make the project more likely to succeed. I've donated some money towards cryonics-related research that few other people seem interested in, for example. I'd like to see a lot more of that instead of the tendency for cryonicists, who jumped onto Drexler's "nanotechnology" distraction early on, to fantasize about "how cool it would be if we had nanotech factories which would give us genie-like superpowers." (See Nano-nonsense: 25 years of charlatanry.)