How do S-Risk scenarios impact the decision to get cryonics?

post by bgold · 2019-04-21T15:59:50.342Z · score: 12 (5 votes) · LW · GW · No comments

This is a question post.


LessWrong has frequently discussed the value of cryonics, but I haven't seen any discussion of how S-Risks impact the decision to sign up for cryonics.

For example, I would expect that if you're cryonically frozen, you are less likely to be able to exercise control over whether you (or your brain patterns) are revived or simulated. Even a small chance of an S-Risk future, especially if you hold a mild-to-strong negative utilitarian outlook, should deter you from signing up for cryonics.
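As a toy sketch of that worry (every probability and utility below is an illustrative placeholder, not an estimate), a heavily weighted s-risk term can dominate the expected value of signing up even when its probability is small:

```python
# Toy expected-value sketch of the worry above. All numbers are placeholders.

p_revival         = 0.05         # chance cryonics leads to revival at all (assumed)
p_srisk_given_rev = 0.01         # chance a revival happens in an s-risk world (assumed)

u_good_revival  = 1_000          # utility of a good revival (assumed)
u_srisk_revival = -10_000_000    # heavily weighted, per a negative utilitarian outlook (assumed)
u_no_revival    = 0              # baseline: staying dead

ev_sign_up = p_revival * (
    (1 - p_srisk_given_rev) * u_good_revival
    + p_srisk_given_rev * u_srisk_revival
)
ev_decline = u_no_revival

print(f"EV(sign up) = {ev_sign_up:.1f}")   # -4950.5: negative despite the small s-risk probability
print(f"EV(decline) = {ev_decline:.1f}")   # 0.0
```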

I am interested in whether this is a consideration for others when signing up for cryonics, and if not, why not?

Answers

answer by avturchin · 2019-04-21T16:50:32.638Z · score: 6 (3 votes) · LW(p) · GW(p)

Paradoxically, if a person doesn't sign up for cryonics and expresses the desire not to be resurrected by other means, say, resurrectional simulation, she will be resurrected only in those worlds where the superintelligent AI doesn't care about her decisions. Many of these worlds are s-risk worlds.

Thus, by not signing up for cryonics she increases the share of her futures in which she will be hostilely resurrected, as a fraction of all her futures.

comment by Andaro · 2019-04-21T18:36:25.235Z · score: 4 (3 votes) · LW(p) · GW(p)

>Thus, by not signing up for cryonics she increases the share of her futures in which she will be hostilely resurrected, as a fraction of all her futures.

But she decreases the share of her futures where she will be resurrected at all, some of which contain hostile resurrection, and therefore she really does decrease the share of her futures where she will be hostilely resurrected. She just won't consciously experience the futures where she doesn't exist, which, from the perspective of anyone who assigns negative utility to suffering, is better than suffering.

comment by avturchin · 2019-04-21T18:46:49.593Z · score: 4 (2 votes) · LW(p) · GW(p)

If we assume that the total share matters, we get some absurd opportunities to manipulate that share by selectively forgetting things, thus merging with our copies in different worlds and increasing our total share. I tried to explain this idea here [LW · GW]. So only relative share matters.

comment by Andaro · 2019-04-21T18:54:48.121Z · score: 1 (1 votes) · LW(p) · GW(p)

That's a clever accounting trick, but I only care what happens in my actual future(s), not elsewhere in the universe that I can't causally affect.

comment by avturchin · 2019-04-21T19:24:28.896Z · score: 2 (1 votes) · LW(p) · GW(p)

Another argument for ignoring "total measure" comes from the many-worlds interpretation: as the world branches, my total measure should decline by many orders of magnitude every second, but this doesn't affect my decision making.

comment by Andaro · 2019-04-21T21:11:41.439Z · score: 3 (2 votes) · LW(p) · GW(p)

>as the world branches, my total measure should decline by many orders of magnitude every second

I'm not sure why you think that. From any moment in time, it's consistent to count all future forks toward my personal identity without having to count all other copies that don't causally branch from my current self. Perhaps this depends on how we define personal identity.

>but it doesn't affect my decision making.

Perhaps it should, tempered by the possibility that your assumptions are incorrect, of course.

Another accounting trick: count futures where you don't exist as neutral perspectives of your personal identity (empty consciousness). This should collapse the distinction between total and relative measure. Yes, it's a trick, but the alternative is even more counter-intuitive to me.

Consider a classical analogy: you're in a hypothetical situation where your future contains only negative utility. Say you suffer -5000 utils per unit time for the next 10 minutes, then you die with certainty. But you have the option of adding another 10 trillion years of life at -4999 utils per unit time. If we care about relative rather than total measure, this should be preferable, because your average utility will be roughly -4999 per unit time rather than -5000. But it's clearly a much more horrible fate.
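A minimal sketch of that arithmetic, using the numbers from the example above and reading "unit time" as one minute purely for concreteness:

```python
# Toy comparison of the two options from the example above.
# Assumes "unit time" = 1 minute, purely for concreteness.

MINUTES_PER_YEAR = 60 * 24 * 365

# Option A: -5000 utils/minute for 10 minutes, then certain death.
a_minutes = 10
a_total = -5000 * a_minutes
a_average = a_total / a_minutes

# Option B: option A plus 10 trillion more years at -4999 utils/minute.
b_extra_minutes = 10_000_000_000_000 * MINUTES_PER_YEAR
b_minutes = a_minutes + b_extra_minutes
b_total = a_total + (-4999 * b_extra_minutes)
b_average = b_total / b_minutes

print(f"Option A: total {a_total:.3g}, average {a_average:.6g} utils/minute")
print(f"Option B: total {b_total:.3g}, average {b_average:.6g} utils/minute")

# Average view: B looks better (about -4999 vs. -5000 per minute).
# Total view: B is vastly worse (its total is many orders of magnitude larger in magnitude).
```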

I always found average utilitarianism unattractive because of mere-addition problems like this, in addition to all the other problems utilitarianisms have.

comment by mr-hire · 2019-04-21T18:07:16.124Z · score: 3 (3 votes) · LW(p) · GW(p)

>Paradoxically, if a person doesn't sign up for cryonics and expresses the desire not to be resurrected by other means, say, resurrectional simulation, she will be resurrected only in those worlds where the superintelligent AI doesn't care about her decisions. Many of these worlds are s-risk worlds.

This seems to depend on how much weight you put on resurrection being possible without being frozen. Many people consider the probability of resurrection even with freezing to be negligible, and without freezing to be zero. If this is how your probabilities fall, then the chance of S-risk has less to do with whether the AI cares about your decisions, and more to do with whether the AI is physically able to resurrect you.

comment by avturchin · 2019-04-21T18:53:03.610Z · score: 0 (2 votes) · LW(p) · GW(p)

If I care only about the relative share of the outcomes, the total resurrection probability doesn't matter. E.g. if there are 1,000,000 timelines, I will be resurrected in 1,000 of them, and 700 of those are s-risk worlds, then my P(s-risk | alive in the future) = 700/1000 = 0.7.
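A minimal sketch of that calculation, with the same toy numbers (the variable names are just for illustration):

```python
# Relative vs. total share of outcomes, using the toy numbers above.
timelines   = 1_000_000   # all timelines
resurrected = 1_000       # timelines in which "I" exist in the future
s_risk      = 700         # of those, timelines that are s-risk worlds

p_total    = s_risk / timelines     # unconditional share: 0.0007
p_relative = s_risk / resurrected   # conditional on being resurrected: 0.7

print(f"P(s-risk and resurrected) = {p_total}")
print(f"P(s-risk | resurrected)   = {p_relative}")
```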

If I care about the total world share (including the remaining 999,000 timelines), I should choose absurd actions that increase my total share in the world, for example forgetting things and merging with other timelines; more on this here [LW · GW].

answer by James_Miller · 2019-04-21T16:40:40.838Z · score: 6 (3 votes) · LW(p) · GW(p)

Whatever answer you give, it should be the same as your answer to the question "How do S-Risk scenarios impact the decision to wear a seat belt in a car?", since both actions increase your expected lifespan and so, if you believe that S-Risks are a threat, increase your exposure to them. If there are a huge number of "yous" in the multiverse, some of them are going to be subject to S-risks; and if cryonics causes this you to survive for a very long time in a situation where you are not subject to S-risks, it will reduce the fraction of yous in the multiverse subject to S-risks.

Alcor is my cryonics provider.

comment by mr-hire · 2019-04-21T18:01:52.288Z · score: 2 (2 votes) · LW(p) · GW(p)

>Whatever answer you give, it should be the same as your answer to the question "How do S-Risk scenarios impact the decision to wear a seat belt in a car?", since both actions increase your expected lifespan and so, if you believe that S-Risks are a threat, increase your exposure to them.

This only seems to apply if you have a constant probability for S-risk scenarios. If you think they're more likely in the far future, then the calculation should be quite different.

answer by shminux · 2019-04-21T18:32:27.225Z · score: 4 (4 votes) · LW(p) · GW(p)

My assumption is that getting frozen means giving up all control over what, if anything, happens to the dead frozen piece of organic matter that you used to identify with. With high probability it will get discarded within the next century, due to a failure of some sort: technical, economic, or political. There is a very unlikely eventuality of it being used to recover its informational content, an even less likely eventuality that the recovery process will result in some sort of self-awareness, and an even more remote chance that the result would be anything resembling the kind of "life" you hope for when you sign up.

If this is the baseline (and if you are more optimistic than that, then I want some of what you are on), then the decision to sign up for cryonics is between a near-certain extinguishing of your identity (not absolutely certain, as there is always a vanishingly small chance that we can be simulated from the information available) and a tiny chance of revival in some form, in some number of copies/clones of varying faithfulness/awareness/intelligence, maybe to live happily forever, maybe to be tortured forever, maybe anywhere on the spectrum in between.

If your question is whether the odds of a happy resurrection are lowered by taking S-risks into account, then my answer is that they are already so low that S-risk doesn't even enter into it.

Still, I'd take my chances and get frozen rather than, say, cremated. Because to me personally, non-existence is worse. Your outlook is likely to be different.
