Rationality, Cryonics and Pascal's Wager

post by Roko · 2009-04-08T20:28:48.644Z · LW · GW · Legacy · 58 comments

This has been covered by Eliezer on OB, but I think the debate will work better with the LW voted commenting system, and I hope I can add something to the OB debate, which I feel left the spectre of Pascalian religious apology clinically dead but not quite information theoretically dead. Anna Salamon writes:

The “isn’t that like Pascal’s wager?” response is plausibly an instance of dark side epistemology, and one that affects many aspiring rationalists.

Many of us came up against the Pascal’s wager argument at some point before we gained much rationality skill, disliked the conclusion, and hunted around for some means of disagreeing with its reasoning. The overcoming bias thread discussing Pascal’s wager strikes me as including a fair number of fallacious comments aimed at finding some rationale, any rationale, for dismissing Pascal’s wager.

This really got me worried: do I really rationally believe in the efficacy of cryonics and not of religion? Or did I write the bottom line first and then start thinking of justifications?

Of course, it is easy to write a post justifying cryonics in a way that shuns religion. That's what everyone wants to hear on this forum! What is hard is doing it in a way that ensures you're not just writing even more justification with no chance of retracting the bottom line. I hope that with this post I have succeeded in burying the Pascalian attack on cryonics for good; and in removing a little more ignorance about my true values.


To me, the justification for wanting to be cryopreserved is that there is, in fact, a good chance (more than the chance of rolling a 5 or a 6 on a six-sided die)1 that I will be revived into a very nice world indeed, and that the chance of being revived into a hell I cannot escape from is less than or equal to this (I am a risk taker). How sensitive is this to the expected goodness and length-in-time of the utopia I wake up in? If the utopia is as good as Iain M. Banks' Culture, I'd still be interested in spending 5% of my income a year and 5% of my time on getting frozen if the probability was around the level of rolling two consecutive sixes.

Does making the outcome better change things? Suppose we take the Culture and "upgrade it" by fulfilling all of my fantasies: the Banksian utopia I have described is analogous to the utopia of the tired peasant compared to what is possible. An even better utopia, which appeals to me on an intellectual and subtly sentimental level, would involve continued personal growth towards experiences beyond raw peak experience as I know it today. This perhaps pushes me to tolerating probabilities around the four-sixes level (1/(6*6*6*6) ~ 1/1000), but no further. For me this probability feels like "just a little bit less unlikely than impossible".

Now, how does this bear on Pascal's wager? Well, I just don't register long-term life outcomes that happen with a probability of less than one in a thousand. End of story! Heaven *could not be good enough* and hell *could not be bad enough* to make it matter to me, and I can be fairly sure about this because I have just visualized a plausible heaven that I actually "believe in".

Now what is my actual probability estimate of Cryonics working? Robin talks about breaking it down into a series of events and estimating their conditional probabilities. My breakdown of the probability of a successful outcome if you die right now is:

  1. The probability that human civilization will survive into the sufficiently far future (my estimate: 50%)
  2. The probability that you get cryopreserved rather than autopsied or shot in the head, and you get cooled down sufficiently quickly (my estimate: 80%, though this will improve)
  3. The probability that cryonics preserves appropriate brain structure (my estimate: 75%)
  4. The probability that you don't get destroyed whilst frozen, for example by incompetent financial management of cryonics companies (my estimate: 80%)
  5. The probability that someone will revive you into a pleasant society conditional upon the above (my estimate: 95%)

Yielding a disappointingly low probability of 0.228. [I expect this to improve to ~0.4 by the time I get old enough for it to be a personal consideration.] I don't think that one could be any more optimistic than the above. But this probability is tantalizing: Enough to get me very excited about all those peak experiences and growth that I described above, though it probably won't happen. It is roughly the probability of tossing a coin twice and getting heads both times.
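To make the arithmetic explicit, here is a minimal sketch in Python; the factor values are just my five estimates from the list above, and the short labels are only for readability.

```python
# The five conditional estimates from the list above.
factors = {
    "civilization survives":            0.50,
    "preserved quickly, not autopsied": 0.80,
    "brain structure preserved":        0.75,
    "not destroyed while frozen":       0.80,
    "revived into a pleasant society":  0.95,
}

p_success = 1.0
for name, p in factors.items():
    p_success *= p

print(f"P(cryonics works) = {p_success:.3f}")  # 0.228
print(f"Two heads on a coin = {0.5 ** 2}")     # 0.25, for comparison
print(f"Four sixes = {(1 / 6) ** 4:.5f}")      # ~0.00077, my 'ignore it' threshold
```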

It is also worth mentioning that the analyses I have seen relating to the future of humanity indicate that a Banksian almost-utopia is unlikely, that the positive scenarios are *very positive*, and that negative scenarios usually involve the destruction of human technological society. My criterion of personal identity will probably be the limiting factor in how good a future I can experience. If I am prepared to spend 5% of my time and effort pursuing a 1 in 100 chance of this maxed-out utopia, I should be prepared to put quite a lot of effort into making sure I "make it" given the probability I've just estimated.

If someone were to convince me that the probability of cryonics working was, in fact, less than 1 in 1000, I would (quite rationally) give up on it.

This relatively high probability I've estimated (two heads on a coin) has other consequences for cryonaughts alive today, if they believe it. We should be prepared to expend a non-negligible amount of effort moving somewhere where the probability of quick suspension is as high as possible. Making cryonics more popular will increase probabilities 1, 2, and 4 (2 will increase because people will have a stake in the future after their deanimation). The cryonics community should therefore spend some time and effort convincing more people to be cryopreserved, though this is a hard problem, intimately related to the purpose of Less Wrong, to rationality, and to secular ethics and secular "religions" such as secular humanism, h+ and the brights. Those who are pro-cryonics and have old relatives should be prepared to bite the social cost of attempting to persuade those relatives that cryonics is worth thinking about, at least to the extent that they care about those relatives. This is an actionable item that I intend to act on with all four of my remaining grandparents in the next few months.

I have seen (but cannot find the citation for, though see this) research predicting that, by 2020, 50% of people will suffer from dementia in the six months before they die (and that this will get worse over time as life expectancy increases). If we add to my list above a term for "the probability that you won't be information theoretically dead before you're legally dead", and set it to 50%, the overall probability takes a huge hit; in addition, a controlled deanimation improves the probability of being deanimated without damage. Any cryonaught who really shares my beliefs about the rewards and probabilities for cryonics should be prepared to deanimate themselves before they would naturally die, perhaps by a significant amount, say 10 years. (Yes, I know this is illegal, but it is a useful thought experiment, and it indicates that we should be campaigning hard for it to be made legal.) If you really believe the probabilities I've given for cryonics, you should deanimate instead of retiring. At a sufficiently high probability of cryonics working, you should rationally attempt to deanimate immediately or within a few years, no matter how old you are, in order to maximize the amount of your personal development which occurs in a really good environment. It seems unlikely that this situation will come to pass, but it is an interesting thought experiment; if you would not be prepared, under sufficiently compelling circumstances, to prematurely deanimate, you may be in cryonics for nonrational reasons.

 

1 [The use of dice rather than numbers to represent probabilities in this article comes from my war-gaming days. I have a good emotional intuition for how unlikely rolling a 6 is; it is more informative to me than 0.167. I've won and lost battles based on 6+ saving throws. I recommend that readers play some game that involves dice to get a similarly good intuition]

58 comments

Comments sorted by top scores.

comment by Vladimir_Nesov · 2009-04-09T00:54:25.537Z · LW(p) · GW(p)

I'm somewhat annoyed with the current state of cryonics, because even though there are references to the information-theoretic notion of death, the concept isn't pushed far enough.

It seems pretty plausible to me that an Alzheimer's-afflicted head from an autopsied corpse with a bullet in the brain, one that lay at room temperature for 2 days and then fractured while cooling down after being thrown into liquid nitrogen without any kind of vitrification, will still contain redundant information about almost every aspect of the original person.

From this perspective, setting a cutoff for the proper condition in which patients are admissible for preservation is pretty much the same mistake as throwing away the patients who are only clinically dead for several minutes. You just don't know how to restore a patient in that condition, or what can possibly be done with this scrambled message.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-08T21:53:30.278Z · LW(p) · GW(p)

Well... this post could easily infringe on things not to be discussed until May...

...but I'm promoting it because of the fascinating idea that playing RPGs with dice-rolling gives you a real-world feel for small probabilities that are still large enough to be encountered once in your career. Presumably Roko feels that anything less than one in a thousand isn't worthwhile because he's never been asked to roll 4 sixes and done so successfully.

Replies from: MrHen
comment by MrHen · 2009-04-09T18:16:26.095Z · LW(p) · GW(p)

That being said, if someone sat me down and offered me the life of my dreams if I rolled 4 sixes but would shoot me if I failed, I would pass. Expected payout be darned, I want to live.

I think my thinking about cryonics does boil down to this: living now is of significantly more value (to me) than potentially living more or better later.

A more concrete example: if a being showed up and told me I had 10 years left to live, but I had the option of "dying" now, being reanimated 10 years later, and then living for 11 years instead, I would probably still pass. I have no idea how much extra time would tip the scales, but even with 100% confidence it is more than "any".

Replies from: bill
comment by bill · 2009-04-09T20:22:40.439Z · LW(p) · GW(p)

I've used that as a numerical answer to the question "How are you doing today?"

A: Perfect life (health and wealth)
B: Instant painless death
C: Current life

What probability p of A (and 1-p of B) makes you indifferent between that deal (p of A, 1-p of B) and C? That probability p, represents an answer to the question "How are you doing?"

Almost nothing that happens to me changes that probability by much, so I've learned not to sweat most ups and downs in life. Things that change that probability (disabling injury or other tragedy) are what to worry about.

comment by MBlume · 2009-04-08T21:54:39.077Z · LW(p) · GW(p)

if you would not be prepared, under sufficiently compelling circumstances, to prematurely deanimate

Were I diagnosed with Alzheimer's, I would probably consider moving to a country in which assisted suicide is legal, and safely deanimating.

Replies from: jimmy
comment by jimmy · 2009-04-09T00:03:18.275Z · LW(p) · GW(p)

That sounds like the right way to do it, but is there anything stopping someone who committed suicide from getting frozen?

Either way, that would be one hell of a test of your belief... "ending" what feels like a perfectly good life for the <50% chance that you wake up again somewhere nice.

Replies from: Eliezer_Yudkowsky, rwallace
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-09T01:23:51.909Z · LW(p) · GW(p)

Suicides get autopsied automatically, at least in the US.

Replies from: Prolorn
comment by Prolorn · 2009-04-09T21:01:04.552Z · LW(p) · GW(p)

Does this apply to legal assisted suicide within the US as well?

comment by rwallace · 2009-04-09T13:10:48.130Z · LW(p) · GW(p)

In the "diagnosed with Alzheimer's" scenario, cryonics aside, it only feels like a perfectly good life if you assign non-negative utility to the subjective experience of protracted brain death. (This is one reason why abstinence from smoking is at best a half-smart policy, though I'm sure I'll be voted down for that.)

Replies from: ciphergoth, Capla
comment by Paul Crowley (ciphergoth) · 2009-04-10T10:41:45.276Z · LW(p) · GW(p)

I always vote down comments that say "I'll be voted down for this, but..."

comment by Capla · 2014-10-07T01:11:04.183Z · LW(p) · GW(p)

Why did you have to add the last bit? You were making a good point!

comment by AnnaSalamon · 2009-04-09T19:00:03.553Z · LW(p) · GW(p)

Does anybody have thoughts on Roko's suggestion that we don't/shouldn't really register probabilities smaller than that of rolling four consecutive sixes, and that this is a reason we aren't/shouldn't be compelled by Pascal's wager (or by any more probable wager that still falls below that threshold)?

I'm confused about this and would love to hear more peoples' thoughts.

Replies from: steven0461, Vladimir_Nesov, None
comment by steven0461 · 2009-04-09T20:18:23.532Z · LW(p) · GW(p)

If there's a pool of unknown things that are infinitely important, and what they are correlates positively with what would be important otherwise, then that gives you a lower bound on the probability of scenarios that you should take seriously no matter how high their utility. I'm not sure that it's a very high lower bound though.

There's also a class of things that we can't really decide rationally because they're far more improbable than our understanding of decision theory being completely wrong and/or because if they're true then everything we know is wrong including decision theory.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-04-09T20:28:54.212Z · LW(p) · GW(p)

If there's a pool of unknown things that are infinitely important, and what they are correlates positively with what would be important otherwise, then that gives you a lower bound on the probability of scenarios that you should take seriously no matter how high their utility. I'm not sure that it's a very high lower bound though.

It sounds like there may be a great point in here. I can't quite see what it is or whether it works, though. Could you maybe spell it out with some variables or math?

There's also a class of things that we can't really decide rationally because they're far more improbable than our understanding of decision theory being completely wrong.

If we use "decide rationally" to mean "decide in the way that makes most sense, given our limited knowledge and understanding" rather than "follow a particular procedure with a certain sort of justification", I don't think this is true. We should just be able to stick a probability on our understanding of decision theory being right, estimate conditional probabilities for outcomes and preferences if our understanding is/isn't right, etc. There wouldn't be a definite known framework in which this was rigorous, but it should yield good best-guess probabilities for decision-making the way any other taking account of structural uncertainty does.

Replies from: steven0461
comment by steven0461 · 2009-04-09T21:15:51.634Z · LW(p) · GW(p)

It sounds like there may be a great point in here. I can't quite see what it is or whether it works, though. Could you maybe spell it out with some variables or math?

Suppose you have a prima facie utility function U on ordinary outcomes; and suppose that you estimate that, due to unknown unknowns, the probability that your real utility function V is infinite for any given ordinary outcome is 1/(10^100) * U(outcome). Then you should prefer eating a pie with U = 3 utils (versus say 1 util for not eating it) to a 1 in 10^200 chance of going to heaven and getting infinite utils (which I'm counting here as an extraordinary outcome that the relationship between U and V doesn't apply to).
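To spell out the comparison in that example (this is just my reading, under the stated assumptions): once infinite utilities are on the table, only the probability of landing in an infinite outcome matters, so compare

$$\Pr(V = \infty \mid \text{eat pie}) = 10^{-100} \cdot U(\text{pie}) = 3 \times 10^{-100}$$

$$\Pr(V = \infty \mid \text{take wager}) \approx 10^{-200} + 10^{-100} \cdot U(\text{no pie}) = 10^{-200} + 1 \times 10^{-100}$$

Since 3 * 10^-100 exceeds 10^-100 + 10^-200, the pie wins even against an explicit offer of infinite utility: the background chance of infinite value scales with ordinary utility and swamps the 1-in-10^200 wager.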

If we use "decide rationally" to mean "decide in the way that makes most sense, given our limited knowledge and understanding" rather than "follow a particular procedure with a certain sort of justification", I don't think this is true.

I'm confused here, but I'm thinking of cases like: there's a probability of 1 in 10^20 that God exists, but if so then our best guess is also that 1=2. If God exists, then the utility of an otherwise identical outcome is (1/1)^1000 times what it would otherwise be, so it's also (1/2)^1000 times what it would otherwise be, so can we ignore that case?

I suspect reasoning like this would produce cutoffs far below Roko's, though. (And the first argument above probably wouldn't reproduce normal behavior.)

comment by Vladimir_Nesov · 2009-04-09T21:12:24.094Z · LW(p) · GW(p)

If A is (utility of) status quo, B is winning option and C is its counterpart, then the default lottery (not playing) is A, and our 1000-rare lottery is (B+1000*C)/1001, so preferring to pass the lottery corresponds to 1000*(A-C)>(B-A). That is, no benefit B over A is more than 1000 times loss C below A.

Or, formulating as a bound on utility, even the small losses significant enough to think about them weight more than 1/1000th of the greatest possible prize. It looks like a reasonable enough heuristic for the choices of everyday life: don't get bogged down by seemingly small nuisances, they are actually bad enough to invest effort in systematically avoiding them.
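Spelling out the algebra behind that inequality (same symbols as above, with the rare lottery giving B with probability 1/1001 and C otherwise): preferring to pass means

$$A > \frac{B + 1000\,C}{1001} \;\Longleftrightarrow\; 1001\,A > B + 1000\,C \;\Longleftrightarrow\; 1000\,(A - C) > B - A.$$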

comment by [deleted] · 2009-04-09T19:50:18.960Z · LW(p) · GW(p)

My guess is that he doesn't mean it for all things: if you can buy a 1 dollar lottery ticket that has a one in ten thousand chance of winning a million dollars, you shouldn't discount it because of the low probability.

But for Pascal's-wager-type things, we're typically estimating the probabilities instead of being able to calculate them. He seems to be using 1/1000 as the cutoff for where human estimates of probability stop being accurate enough to base decisions on. This doesn't seem like a bad cutoff point to me, but even being charitable and extending it to 1/10000 would still probably disqualify any sort of Pascal's-wager-type argument.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-04-09T20:15:37.239Z · LW(p) · GW(p)

He seems to be using 1/1000 as the cutoff for where human estimates of probability stop being accurate enough to base decisions on.

I doubt this is what Roko means. Probabilities are "in the mind"; they're our best subjective estimates of what will happen, given our incomplete knowledge and calculating abilities. In some sense it doesn't make sense to talk about our best-guess probabilities being (externally) "accurate" or "inaccurate". We can just make the best estimates we can make.

What can it mean for probabilities to "not be accurate enough to base decisions on"? We have to decide, one way or another, with the best probabilities we can build or with some other decision procedure. Is zero an accurate enough probability (of cryonics success, or of a given Pascal's wager-like situation) to base decisions on, if an estimated 1 in ten thousand or whatever is not?

Replies from: None, bill
comment by [deleted] · 2009-04-09T22:05:47.085Z · LW(p) · GW(p)

IAWYC (I think my original statement is wrong), but I disagree that there is no difference between 'accurate' and 'inaccurate' probabilities.

In my mind there's a big difference between a probability where you have one step between the data and your probability (such as a lottery or a coin flip), and a case where you have multiple, fuzzy inferential steps (such as an estimate of the longevity of the human race). The more you have to extrapolate and fill in the gaps where you don't have data, the more room there is for error to creep in.

For things in the realm of 'things that will happen in the far future', it's not clear to me that a probability you assign to something will be anything but speculation, and as such I'd assign any probability (no matter what it is) for that type of event a rather low accuracy.

This raises the question of whether it's worth it at all to assign probabilities to these kinds of events where there are too many unknown (and unknown unknown) factors influencing them. (and if I'm terribly misunderstanding something, please let me know.)

comment by bill · 2009-04-09T20:32:55.622Z · LW(p) · GW(p)

When dealing with health and safety decisions, people often need to deal with one-in-a-million types of risks.

In nuclear safety, I hear, they use a measure called "nanomelts", or a one-in-a-billion risk of a meltdown. They can then rank risks based on cost-to-fix per nanomelt, for example.

In both of these, though, the numbers might be based on data and then scaled to different timescales (e.g., roughly 100 deaths per day in the US from car accidents works out to something like a 1 in a few million per day risk of death from driving; use statistical techniques to adjust this number for age, drunkenness, etc.).

comment by gjm · 2009-04-08T22:17:37.627Z · LW(p) · GW(p)

Now, how does this bear on Pascal's wager? Well, I just don't register long-term life outcomes that happen with a probability of less than one in a thousand. End of story!

I think you just admitted to being outright irrational about this. That's fair enough if what you're trying to explain is why Pascal's wager doesn't move you, but if the question is why it shouldn't (or, less tendentiously, whether it should and why) I don't think it'll do.

As for that cryonics probability stackup: I say 75% for longish-term human civilization at a decent level; 25% for getting frozen quickly enough and well enough by current standards; 50% for enough brain structure being preserved; 25% for the cryonics provider surviving (and not screwing up) for long enough; 25% for getting revived into a decent society. There are some factors missing: let's say 50% for enough technological advances to make reanimation feasible, conditional on civilization surviving at a decent level, and then 50% conditional on that for technological and/or social improvements providing a really good life for a long enough time to make any of this worth while. I think some of those are a bit optimistic and some a bit pessimistic; perhaps the errors cancel out. My guess is that there are more ways for the chain to fail unexpectedly than to succeed unexpectedly, so most likely my estimate is too optimistic. Anyway, it ends up at 3/2048 if I've counted my powers of 2 correctly. Four sixes. Enough to consider, for sure, but -- for me -- not enough to justify the expenditure when it could instead provide somewhat better quality of life for me and my family in the more clearly foreseeable future.

Spelling nitpick: -naut, not -naught. (Related to "nautical", not "naughty".)

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-04-09T01:37:38.692Z · LW(p) · GW(p)

I think some of those are a bit optimistic and some a bit pessimistic; perhaps the errors cancel out. My guess is that there are more ways for the chain to fail unexpectedly than to succeed unexpectedly, so most likely my estimate is too optimistic. Anyway, it ends up at 3/2048.

Here's the thing: let's say that there are some "objective probabilities" out there, and that your estimate is indeed "most likely too optimistic" compared to those objective probabilities, but that there's some significant (e.g., 10%) chance that it's too pessimistic compared to those same probabilities. If your estimate is over-optimistic, it's over-optimistic by at most 3/2048. If your estimate is over-pessimistic, it could easily be over-pessimistic by more than ten times that much (i.e., by more than 30/2048; Robin Hanson estimates the odds as ">5%", i.e. more than 100/2048). And if you're trying to do decision theory on whether or not to sign up for cryonics, you're basically trying to take an average over the different values these "objective probabilities" could have, weighted by how likely they are to have those values -- which means that the scenarios in which your estimate is "too pessimistic" actually have a lot of impact, even if they're only 10% likely.

Or in other words: one's analysis has to be unusually careful if it is to justify a resulting probability as low as 3/2048. Absent a terribly careful analysis, if one is trying to estimate some quantity that kinda sounds plausible or about which experts disagree (e.g., not "chances we'll have a major earthquake during such-and-such a particular millisecond"), one should probably just remember the overconfidence results and be wary of assigning a probability that's very near one or zero.
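As a toy illustration of that averaging (the 90/10 split is invented for the example): suppose you give a 90% chance that the "objective" probability really is about 3/2048, and a 10% chance that it is about 100/2048 (Hanson's ">5%"). Then the decision-relevant probability is

$$0.9 \times \frac{3}{2048} + 0.1 \times \frac{100}{2048} \approx 0.0062,$$

roughly four times the original point estimate; the small chance of being over-pessimistic dominates the average.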

Replies from: Roko, gjm, Vladimir_Nesov
comment by Roko · 2009-04-09T06:35:48.794Z · LW(p) · GW(p)

Or in other words: one's analysis has to be unusually careful if it is to justify a resulting probability as low as 3/2048. Absent a terribly careful analysis, if one is trying to estimate some quantity that kinda sounds plausible or about which experts disagree (e.g., not "chances we'll have a major earthquake during such-and-such a particular millisecond"), one should probably just remember the overconfidence results and be wary of assigning a probability that's very near one or zero.

Great comment - people make this mistake a lot. This should be promoted to a top level post.

comment by gjm · 2009-04-09T12:33:37.962Z · LW(p) · GW(p)

It's quite true that my estimate of 3/2048 is (to say the least) more likely to be too low by 1/512 than to be too high by 1/512 :-). The error is probably something-like-lognormally distributed, being the result of multiplying a bunch of hopefully-kinda-independent errors.

But:

Suppose (for simplicity) that the probability we seek is the product of several independent probabilities, each of which I have independently estimated. Then Pr_actual(win) = Pr_actual(win1) Pr_actual(win2) ... , and likewise Pr_est(win) = Pr_est(win1) Pr_est(win2) ... . If I haven't goofed in estimating the individual probabilities, then Pr_est(win1) = E_subjective(Pr_actual(win1)) etc. Hence:

E_subjective(Pr_actual(win))
= E_subjective(product_j Pr_actual(win_j))   {by (objective) independence}
= product_j E_subjective(Pr_actual(win_j))   {by (subjective) independence}
= product_j Pr_est(win_j)                    {my individual estimates are OK, by assumption}
= Pr_est(win)                                {by (subjective) independence}

In other words, even taking my unreliability in probability-estimating into account, and despite the asymmetry you noted, once I've estimated the individual probabilities my best estimate of the overall probability is what I get by using those individual estimates. I should not increase my probability estimate merely because there are uncertainties in the individual probability estimates.

For sure, my estimates could be wrong. For sure, they can be too high by much more than they can be too low. But they are very unlikely to be too high, and it turns out that (subject to the assumptions above) my overall estimate is unbiased provided my individual estimates are.

Replies from: ChrisHibbert
comment by ChrisHibbert · 2009-04-09T18:59:03.173Z · LW(p) · GW(p)

There's something wrong with an analysis that biases the outcome in a particular direction as you add more details. In this case, the more different kinds of things that might go wrong or have to go right, the more fractions you have to multiply your result by. I don't know how to get out of this trap, but it seems a failure mode with any attempt to predict the future by multiplying lists of probabilities, each generated by a handwave.

The only one of your numbers that I think can be estimated based on current experience is #2. Alcor publishes details regularly about how their cryopreservations go, and numbers for how many members quit or die in circumstances that make their preservation hopeless. Your 80% number sounds like it's in the right ballpark. (*)

My other complaint is that your numbers are connected by a more complicated web of conditional likelihoods and interactions than simple multiplication shows. If civilization survives, the likelihood of particular technologies being developed increases, and if organizations like Alcor persist, that ought to raise those odds as well. In many of the bad societal outcomes, you won't get revived. This reduces your downside almost as much as it reduces your upside. You've still paid for suspension, but it doesn't count as a hell scenario.

I don't think "Shut up and Multiply" should be taken literally here.

(*) ETA: simpleton's reference to Alcor's numbers says I'm wrong about this.

Replies from: gjm
comment by gjm · 2009-04-09T19:58:20.216Z · LW(p) · GW(p)

It's perfectly correct that your estimate of P(cryonics will work for me) should go down as you think of more things that all have to happen for it to work. When something depends on many things all working right, it's less likely to work out than intuition suggests; that's one reason why project time estimates are almost always much too short, and why many projects fail.

Of course my probability estimates are only rough guesses. I don't trust estimates derived in this way very much; but I trust an estimate derived by breaking the problem down into smallish bits and handwaving all the bits better than I trust one derived by handwaving the whole thing. (And there's nothing about the breaking-it-down approach that necessitates a pessimistic answer; Robin Hanson did a similar handwavy calculation over on OB a little while ago and came up with a very different result.)

The estimate of 80% for #2 was not mine but Roko's. My estimate for that one is 25%. I'm not in the US, which I gather makes a substantial difference, but in any case, as your later edit points out, it looks like my number may be better than Roko's anyway.

Yes, the relationship between all those factors is not as simple as a bunch of independent events. That would be why, in the comment you're replying to, I said "Suppose (for simplicity) that the probability we seek is the product of several independent probabilities, each of which I have independently estimated." And also why some of my original estimates were explicitly made conditional on their predecessors.

"Shut up and multiply" was never meant to be taken literally, and as it happens I am not so stupid as to think that because someone once said "Shut up and multiply" I should therefore treat all probability calculations as chains of independent events. "Shut up and calculate" would be more accurate, but in the particular cases for which SUAM was (I think) coined the key calculations were very simple.

comment by Vladimir_Nesov · 2009-04-10T10:02:37.246Z · LW(p) · GW(p)

Hmm... I have an idea regarding this, and also regarding Roko's suggestion to disregard low probabilities.

  • If you are generally unable to estimate probabilities of events lower than, say, 1/1000, it means that you must calibrate the estimates for these events way down, below 1/1000.

There are very many things that you'll only be able to estimate as "probability below 1/1000", some of them mutually exclusive. Normalization requires keeping the sum of their probabilities below unity, so the estimates must actually be tuned down. As a result, you can't insist that there are parts of the distribution resulting from an uncertain estimate that are sufficiently high to matter, and you generally should treat things falling in this class as way less probable than the class label suggests.
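A quick way to see the normalization point (the count here is invented purely for illustration): if there are, say, N = 10^5 mutually exclusive scenarios that you can only grade as "probability below 1/1000", then

$$\sum_{i=1}^{N} p_i \le 1 \quad\Longrightarrow\quad \text{average } p_i \le 10^{-5},$$

so the honest calibrated estimate for a typical member of that class sits far below the 1/1000 label.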

comment by haig · 2009-04-09T09:52:12.128Z · LW(p) · GW(p)

My issue isn't with cryonics, it is with the whole notion of self identity and post-singularity personhood. I guess this ties into EY's 'fun theory' and underscores the importance of a working positive theory of 'fun' as a prerequisite for immortality as we currently define it.

Assume cryonics works, further more assume that your brain is scanned with enough resolution to capture all salient features of what you consider to be your mind. You are now an uploaded entity, and your mind is as malleable as any other piece of software. There are only so many clock cycles to spend indulging in hedonism and utopian bliss before that gets old. So then naturally, you expand your mind, you merge with other minds, and whatever else is possible. Very soon you will no longer resemble anything of what we consider a 'self', not just a human self with our evolved emotions and thoughts, but a 'self' in general as we can define it. So then, what is the point of trying to preserve yourself if you are going to transcend a self anyway?

I'd want to make sure humanity continues or ensure some posthuman eventuality continues where we leave off. Also, I'd want to continue to live as much of a fruitful existence for as long as I can, as long as I am still living. If that means I reach the singularity and enjoy a certain number of clock-cycles in utopia before the idea of utopia ceases to have any meaning, then lucky me. If not, I'm content to have existed at all and to have done my part in trying to ensure the continuation and spread of consciousness. But it makes no sense to go to such lengths to preserve myself needlessly in relation to such a (near) infinite expansion of consciousness.

This can also be viewed as an update to Camus's question of suicide. Is not signing up for cryonics, or dying while knowing it can be overcome, similar to suicide? I admit, I have contemplated suicide in the past, but I'm over it. It pains me to start feeling the same emotions once again when contemplating cryonics.

Replies from: MrHen, infotropism
comment by MrHen · 2009-04-09T18:05:26.590Z · LW(p) · GW(p)

Your view of cryonics seems similar to the concept of reincarnation.

Also, some would find personal value in setting up another entity for success. Namely, anyone who has children, but also anyone who argues for the good of the next generation. Essentially, it seems as if you are arguing against giving birth to a new you.

While that makes sense in terms of prolonging your own existence, it certainly seems to land on the "evil" side of selfish. Not that that is a problem... just interesting.

comment by infotropism · 2009-04-09T10:11:55.135Z · LW(p) · GW(p)

So you have one precise reason not to want to live on, and it hinges on quite a few assumptions, right? Adding details to a story makes it less probable.

Could you imagine a few other scenarios where things go right instead? So long as you're alive, there's always at least the unexpected, at any rate. You can't predict what your future will be, especially post-singularity. Even if you can't imagine how your life could be pleasant, or how to make it turn out right now, since you aren't expected to outsmart your future self, why wouldn't it find that solution?

I'll list the most obvious thing to me here, if that can help, which is that I don't see how expanding yourself equates with merging with other minds and losing your individuality. If there were no way to get individual minds that are bigger than ours without a loss of individuality, then please do tell how humans manage to be as individual and complex as they are compared to earlier lifeforms upstream in the tree of life.

comment by Capla · 2014-10-07T01:03:14.162Z · LW(p) · GW(p)

This is exactly the sort of breakdown I needed to see (regardless of whether I agree with the numbers). Lots of people are asserting the rationality of cryonics, but as a relative newcomer to the community, I want you to explain it to me. I need more info before I'd sign up.

To which Robin Hanson post are you referring? Can anyone share similar analyses of the effectiveness of cryonics?

comment by whpearson · 2009-04-08T21:58:37.087Z · LW(p) · GW(p)

I have a fairly low estimate for 4. Not because I think cryonics companies are incompetent, just that the next 100 years are likely to be more turbulent (financially and in energy terms) than the previous 50, in which energy and natural resource consumption could be increased fairly easily.

So the assets might become worthless or seized in a time of national crisis.

comment by steven0461 · 2009-04-08T20:54:48.665Z · LW(p) · GW(p)

I don't know, but your estimates for 1 and 2 seem way optimistic (2 especially so in Europe). Also, don't forget the probability that even without cryonics you'll stay alive until the point where death gets solved.

Replies from: Roko
comment by Roko · 2009-04-08T21:04:57.640Z · LW(p) · GW(p)

My estimate for 2 assumes you're in the USA.

For 1, are you serious!? Less than 50% that the human race doesn't wipe itself out by, say, 2100?

Replies from: simpleton, JulianMorrison
comment by simpleton · 2009-04-08T22:01:44.549Z · LW(p) · GW(p)

Alcor says they have a >50% incidence of poor cases.

comment by JulianMorrison · 2009-04-08T22:40:03.872Z · LW(p) · GW(p)

I was surprised you'd give the human race so low a chance as a coin-flip.

I wouldn't even give "wiped out badly enough to set us back half a century" that high a chance. The whatever-it-is would have to hit at least 3 continents with devastating force to cause such a severe civilizational reversal. (Example: global warming won't do it. People on the rich continents will just up and relocate.)

Replies from: gwern, Roko
comment by gwern · 2009-04-08T22:45:51.296Z · LW(p) · GW(p)

I was surprised you'd give the human race so low a chance as a coin-flip.

I think the reason Roko puts it so low - and at the probability one gives when one doesn't know either way - is that there are so many current existential risks like nuclear warfare, pandemics, asteroids, and supervolcanoes, and there are multiple existential risks which could become a concern between now and when cryo-resurrection is feasible (grey goo-level nanotech is a plausible prerequisite for resurrection, and an equally plausible candidate for destroying humanity, to say nothing of AI risks).

comment by Roko · 2009-04-09T06:38:08.390Z · LW(p) · GW(p)

I've actually had people email me and tell me off for putting our chances of survival too high. They cite the AGI risk as contributing most of the risk.

But we can't talk about this at the moment.

Replies from: Pablo_Stafforini
comment by Pablo (Pablo_Stafforini) · 2009-04-09T15:11:04.169Z · LW(p) · GW(p)

But we can at least defer to the judgment of experts. Those who completed the survey circulated at the Global Catastrophic Risks conference at Oxford gave humanity a mean 79% probability of surviving this century. The most pessimistic estimate I know is that of Sir Martin Rees, who believes we have a fifty-fifty chance of making it through, though I was told that in private conversation Rees gives a much lower figure.

Replies from: steven0461
comment by steven0461 · 2009-04-09T16:29:53.319Z · LW(p) · GW(p)

These seem to have been mostly experts on a single risk, not experts on the future in general. There is no field of study called "the future" that you can be an expert in. I'd consider people's opinion on the future more authoritative the more they 1) were smart, 2) had a track record of dealing rationally with "big questions" (anyone with an anti-silly bias is out), 3) had a lot of knowledge about different proposed technologies, 4) had a background in a lot of different scientific fields as well as in philosophy, 5) had clearly grasped what seem to me to be the important points that people have been making about certain unmentionables.

I'm not one of the people who emailed Roko, but yes, I seriously think 50% is an overestimate. Note that this probability is not only for humans not getting extinct, but for civilization not collapsing. One of the reasons why we might survive beyond 2100 is if certain advanced techs are much harder than we thought or even impossible, and in that case cryonicists are probably screwed too.

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2009-04-10T10:48:26.439Z · LW(p) · GW(p)

Yes - it would have been more useful to categorise the risks, get from experts in each risk an estimate of the probability of our surviving it, and multiply those probabilities together.

comment by infotropism · 2009-04-09T09:02:40.227Z · LW(p) · GW(p)

First, where do you get those estimates from? I can only speak for myself, but I wouldn't put figures on estimates when it doesn't seem like even a consensus of experts would get any of those right, as they verge on questions that are more or less unsolved. Particularly 3 and 5, and 1 too, are going to yield totally speculative (subjective) estimates.

By that I mean that even if your estimate is as good as you can hope to make it, the real-world event that you'd bet on will probably turn out to be otherwise, like totally unexpected. Spread those confidence bounds. Orders of magnitude larger. Let's admit we don't know. Narrow, focused probability estimates must mean that you know better than you would by chance, and for that to be true you ought to have good, practical reasons to believe you do.

Then, even if we don't know, and even if the probability of cryonics working is very, very small ... I'd expect that, since your life is instrumentally necessary for you to experience or do anything, it supersedes in utility most if not all of the other things you value. You may think you love, I don't know, chocolate more than life, but if you're dead, you can't pursue or even value that. Likely so for anything else: love, friendship, freedom, etc. Given that, shouldn't you be ready to protect your prospects of survival as well as you can? The only reason to avoid cryonics, then, is if its cost makes it impossible for you to invest in another strategy whose potential to sustain your life (indefinitely) is bigger than cryonics'. If not, and you can take on both, then no matter how low the probability of cryonics working, it'd still be rational to invest in it.

(The only other issue I can see with this outlook is that it has some potential to turn you into a money pump, as long as there's a way to justify your bleeding of money, no matter what the cost, to secure trillionth after trillionth of a chance to save your life, so long as you can still afford it, down to your last disposable penny.)

Replies from: whpearson, gjm
comment by whpearson · 2009-04-09T10:50:48.436Z · LW(p) · GW(p)

Let us say you value freedom (not just for yourself but for friend/family/fellow countrymen). If you have a choice:

Option 1) Increases the amount of freedom, but kills you.
Option 2) Decreases the amount of freedom, but allows you to live.

Which should you choose?

I'd pick 1; sometimes your own death is necessary to promote your values even if you won't be around to enjoy the benefits, e.g. fighting against the Nazis.

Replies from: infotropism
comment by infotropism · 2009-04-09T16:08:29.242Z · LW(p) · GW(p)

Yes, in some cases. Are those a minority or a majority of the cases where you have to put your life in the balance and decide whether it's worth sacrificing for something, or having that thing sacrificed instead?

It all hinges on your values too. Here it seemed like it was a given that one's life was considered valuable, but that below some threshold the probability of survival was too small to deserve a personal sacrifice (money, time, pleasure, energy, etc.). All of this from a personal standpoint, weighing personal, individual benefits against individual costs. If all that is being considered is your own subjective enjoyment of life, then it still seems to me that any personal sacrifice is at most as undesirable as the loss of your life. And this calls into question how much we value our own personal enjoyment and life when compared with other values such as others' general well-being. In other words, how selfish and altruistic are we, in what proportion, and in which cases?

comment by gjm · 2009-04-09T15:12:57.652Z · LW(p) · GW(p)

I wouldn't put figures on estimates when it doesn't seem like even a consensus of experts would get any of those right, as they verge on questions more or less unsolved.

You have to make a decision. Obviously, what the right decision is depends on what those probabilities are, so you have to factor in your beliefs (uncertain though they are) about them. What makes you think you can do that better without numbers than with?

I'd expect that [...] your life [...] supersedes in utility most if not all of the other things you value.

A reasonable expectation if you care only about yourself. Most of us care to some extent about other people. (In most cases, rather less than we like to think and say we do; but still, more than zero.) Any money you spend on cryonics is money that isn't helping your family in the shorter term.

it has some potential to turn you into a money pump

Yes. There is a parallel objection to Pascal's wager. (Though I think what it really is, in both cases, is not so much a reason to disagree as a sign that there's likely something wrong with the logic. It could turn out that being money-pumped is the best one can do, after all.)

Replies from: infotropism
comment by infotropism · 2009-04-09T18:18:53.320Z · LW(p) · GW(p)

Yes, if we are to make a decision, we need the numbers. I wouldn't say that a decision taken without numbers factored in can be better, all else being equal. I'd say it can be worse, or equally good; equally good if the numbers used are no better than numbers obtained from a random number generator. So even though numbers are fine, putting forward a definite probability estimate as if it were significantly better than a random guess is, I think, a misuse.

When I see one of those, it makes me think of the other; in the absence of a particular reason, a detailed analysis, or a mechanism explaining "why", I might be tempted to think both estimates rely on the same rule of thumb.

"A news story about an Australian national lottery that was just starting up, interviewed a man on the street, asking him if he would play. He said yes. Then they asked him what he thought his odds were of winning. "Fifty-fifty," he said, "either I win or I don't."

"The probability that human civilization will survive into the sufficiently far future (my estimate: 50%)"

As to the fact that an overwhelming majority of people don't just care about themselves, I agree. Even avowed selfish people still ought to have (barring abnormal neurology) some mindware buried in that brain of theirs, that could cause them to care about others, and possibly even sacrifice their life for them.

comment by dclayh · 2009-04-09T02:29:25.597Z · LW(p) · GW(p)

Personally, I'd like to see more discussion of the possibilities of (a) an effectively omnipotent far-future civilization that can just revive everyone by, e.g., scanning the planet and running physics backwards, or time-travel, or whatever, and (b) all those infinite copies of me. Not to mention (c) regular old quantum immortality. All these effects tend to point to "why bother with cryonics?".

(I have a feeling that Eliezer's answer to (b) would be something like "Make yourself into the kind of person who would (just shut up and) be immortal", but I don't really want to get into the dangerous sport of Eliezer-simulating.)

Replies from: gjm, Roko
comment by gjm · 2009-04-09T15:19:50.515Z · LW(p) · GW(p)

The only thing that distinguishes quantum immortality from old-fashioned probabilistic immortality (play Russian roulette as much as you like, and if you survive then it turns out that you survived, duh) is the idea that all those possible-yous are real. In which case, relying on "quantum immortality" means just letting the vast majority of them die. If cryonics offers a substantial chance of greatly increasing the proportion of possible-yous that have good long happy lives, then "there's always quantum immortality" seems to me no reason at all to pass up that chance.

Imagine that you are going to be put into a machine that will replace you with a million copies, all of them having exactly the same degree of physical and psychological continuity with your present self as you generally expect your near-future selves to have. You have (now, before going into the machine) the choice of two actions, one of which means that all but one of those million copies will die horribly tomorrow, and one of which means that only one of them will. The second choice has a moderate cost associated with it. Which way do you choose? I go for the second, for sure.

Replies from: dclayh
comment by dclayh · 2009-04-15T20:00:20.880Z · LW(p) · GW(p)

In the case of your machine, I would have a 999999/1000000 chance of experiencing death if I choose the first option, and 1/1000000 chance with the second. I would most likely choose the second.

But in the case of cryonics, the experience of death is guaranteed; the only question is how many "copies" wake up later. And I have no intuition or logical argument that a million copies of me is better than one.

Replies from: gjm
comment by gjm · 2009-04-16T13:51:58.897Z · LW(p) · GW(p)

Maybe I didn't understand what you were saying about quantum immortality, then. Let's make it a bit more explicit. To avoid confusion let's ignore (1) other copies of you that exist merely because the universe is very large, if any, (2) simulations of you that exist for any reason at all, if any, and (3) the possibility that you might escape death without either cryonics or extreme good luck, for instance because of a technological singularity in the near future.

1. Ignoring many-worlds quantum stuff:

Without cryonics, you will almost certainly die within the next (say) 100 years, but there is an infinitesimally tiny chance that you will somehow survive much longer.

With cryonics, you will certainly "die" (that is, go through something that greatly resembles death up to and beyond the point at which you cease to be conscious), and then there's some chance (whose size is debatable) that you will later be revived.

Comparing these two outcomes, you should decline to bother with cryonics (even if its costs are very low) if (a) you think the chance of getting revived is very tiny, or (b) what matters to you is having some chance of survival rather than having as much chance of survival as possible.

2. With many-worlds quantum stuff:

Every statement that used to be about probability now gets a reinterpretation in terms of (speaking loosely) "number of versions of you". If you toss a coin, roughly half of your versions see it come up heads and roughly half see it come up tails. If you play Russian roulette with a six-shooter, roughly 5/6 of your versions live and roughly 1/6 die.

So you should feel exactly the same way about "proportion of versions of me for which X is true" as you do about "probability that X is true for me".

Now:

Without cryonics, almost all your versions die within (say) 100 years. An infinitesimally tiny fraction of them get "lucky" and live much longer.

With cryonics, all your versions "die", and then some (debatable how large) fraction get revived later.

Comparing these two outcomes, you should decline to bother with cryonics (even if its costs are very low) if (a) you think the fraction of versions-of-you that get revived is very tiny, or (b) what matters to you is having some copy of you survive rather than having as many copies survive as possible.

And, once again: if you take many-worlds seriously -- which you have to, for "quantum immortality" to make any sense -- then you should feel exactly the same way about this as you did about the corresponding comparison stated in terms of probabilities: because that's what probabilities (of the "objective" sort) are. Probability of survival == fraction of Everett branches on which you survive.

If you're happy for your survival to depend on quantum immortality, then you should also be happy for it to depend on a coin toss or the result of a game of Russian roulette. If quantum immortality doesn't lessen your reluctance to bet your life in those ways, then it also shouldn't make a difference to your feelings about cryonics.

comment by Roko · 2009-04-09T06:40:20.198Z · LW(p) · GW(p)

an effectively omnipotent far-future civilization that can just revive everyone by, e.g., scanning the planet and running physics backwards

How much physics have you studied? Scanning the planet would not be enough. You'd need something like a timeslice through the future light cone of the point you wanted. This would be a huge volume of space.

Time travel is probably impossible, or if it is possible, then our current understanding of reality is so badly wrong that this entire discussion is probably nonsense.

Quantum immortality is somewhat deceptive... if you buy into it you'll become apathetic ("there's always some branch of the wavefunction where what I want happens, so why should I bother?"). Similar comments apply to "all those copies of you" in an infinite universe.

Replies from: AnnaSalamon
comment by AnnaSalamon · 2009-04-09T06:58:25.064Z · LW(p) · GW(p)

How much physics have you studied? Scanning the planet would not be enough. You'd need something like a timeslice through the future light cone of the point you wanted. This would be a huge volume of space.

I haven't studied much physics. Would scanning the planet not be enough? Humans, and human brains, have some predictable structure, so if you were smart about things, it isn't like you'd have to be able to run an arbitrary physics backwards. You'd just have to be able to make a good probabilistic guess about those portions of a particular state you cared about (the portions that made people "themselves", which you could conceivably infer by, say, having a notion of the probability space "human brains/minds", and by having footprints, bits of writing, etc. left by the person at various points in their life). I don't know how to evaluate the odds here.

Replies from: Eliezer_Yudkowsky, gjm
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-09T16:16:50.170Z · LW(p) · GW(p)

Due to QM the function is many-to-one and can't be reversed. If you isolate a single Everett branch and look only at that branch and not all the other branches, and run that branch backward in time, it will evolve into a multiplicity of pasts (that would in the greater scheme be coherently canceled out by the past evolution of other branches). The upshot is that even if you timeslice the entire future lightcone in a single branch you cannot get the exact past. And we can't get photons that have escaped over the horizon, so we can't get the whole future lightcone anyway, and small divergences will amplify, and the whole thing would require more computing power than all the particles we're trying to run back.

And a time camera using new physics violates the character of known physical law, in particular its elegant locality, that each element of reality only interacts with immediate neighbors.

It's too early to give up hope. But as a working assumption, the dead are dead.

Replies from: Nick_Tarleton
comment by Nick_Tarleton · 2009-04-09T17:16:47.519Z · LW(p) · GW(p)

If you isolate a single Everett branch and look only at that branch and not all the other branches, and run that branch backward in time, it will evolve into a multiplicity of pasts (that would in the greater scheme be coherently canceled out by the past evolution of other branches). The upshot is that even if you timeslice the entire future lightcone in a single branch you cannot get the exact past. And we can't get photons that have escaped over the horizon, so we can't get the whole future lightcone anyway, and small divergences will amplify, and the whole thing would require more computing power than all the particles we're trying to run back.

Then use the quantum 'probability' distribution over pasts to randomly pick a person to instantiate (more properly, pick a past and instantiate everyone in it). If you got the distribution right, then each resurrectee has the same relative measure they did before they died (original measure times fraction of worlds in which you do this). Obviously, you can also do this with subjective probability distributions derived from locally available information.

Just how much good this does for the dead is hard to say.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-04-09T17:23:33.862Z · LW(p) · GW(p)

Um... I think the vast majority of false pasts you got this way would be nothing like the real past. I think they may even end up with higher entropy, i.e., arrow of time runs forward from here after a brief reversal (in the vast majority of extrapolated pasts).

Replies from: Nick_Tarleton, thomblake
comment by Nick_Tarleton · 2009-04-09T17:44:55.747Z · LW(p) · GW(p)

Okay, that would mean you can't just run the physics backward, but that shouldn't stop the Bayesian method (come up with a prior over histories of Earthlike worlds or some similar class, update on memories, records, etc.) or computationally feasible approximations thereof.

comment by thomblake · 2009-04-09T17:32:34.449Z · LW(p) · GW(p)

I agree but find the wording in your comment confusing... especially the part after "i.e." (ironically)

comment by gjm · 2009-04-09T15:29:49.885Z · LW(p) · GW(p)

For most people in the past, we have no footprints, bits of writing, etc. A crude back-of-envelope calculation suggests that maybe it takes ~ 10^15 bits to describe one person's brain; I wouldn't be surprised to find that wrong by a couple of orders of magnitude, but in any case it's rather a lot. Anyone who's (1) in the not-very-recent past and (2) not exceptional in the traces they leave behind has left only very subtle such traces -- which will be tied up in computationally intractable ways with everyone else's very subtle traces, and with all kinds of extraneous cruft. I've no idea what might turn out to be possible in principle, and saying "we'll never have the technology to do X" doesn't have a great track record of success ... but I wouldn't hold out much hope of being able to retrieve enough information to reconstruct past people even with future-lightcone-scanning technologies, never mind without.