Can we decrease the risk of worse-than-death outcomes following brain preservation?

post by Synaptic · 2015-02-21T22:58:15.454Z · LW · GW · Legacy · 31 comments

Content note: discussion of things that are worse than death

Over the past few years, a few people have said they reject cryonics out of concern that they might be revived into a world that they would prefer less than being dead or not existing. For example, lukeprog pointed this out in a LW comment here, and Julia Galef expressed similar sentiments in a comment on her blog here.

I use "brain preservation" rather than "cryonics" here because these concerns seem to be technology-platform agnostic.

One solution, it seems to me, is to have an "out-clause": a set of circumstances under which you'd prefer to have your preservation/suspension terminated.

Here's how it would work: you specify, prior to entering biostasis, the circumstances in which you'd prefer to have your brain/body taken out of stasis. Then, if those circumstances are realized, the organization carries out your request.

This almost certainly wouldn't prevent all of the potential bad outcomes, but it ought to help with some. It does require that you enumerate, in advance, at least some of the circumstances in which you'd prefer to have your suspension terminated.
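To make the proposal concrete, here is a minimal sketch (in Python) of what an out-clause specification might look like: a list of named trigger conditions evaluated against the preservation organization's rough estimates of the state of the world, reviewed periodically. Everything here is hypothetical illustration; the `WorldState` fields, thresholds, and condition names are assumptions, not anything an existing organization offers.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class WorldState:
    """Hypothetical snapshot of the organization's best current estimates."""
    p_unfriendly_ai_imminent: float     # estimated probability
    p_hostile_takeover_imminent: float  # estimated probability

@dataclass
class OutClause:
    """A named predicate over the estimated world state."""
    description: str
    triggered: Callable[[WorldState], bool]

def review_suspension(state: WorldState, clauses: List[OutClause]) -> List[str]:
    """Return descriptions of any member-specified out-clauses satisfied
    by the current estimated world state."""
    return [c.description for c in clauses if c.triggered(state)]

# Example clauses a member might specify before entering biostasis.
my_clauses = [
    OutClause("UFAI judged imminent (estimated p > 0.5)",
              lambda s: s.p_unfriendly_ai_imminent > 0.5),
    OutClause("Malevolent takeover judged imminent (estimated p > 0.5)",
              lambda s: s.p_hostile_takeover_imminent > 0.5),
]

if __name__ == "__main__":
    today = WorldState(p_unfriendly_ai_imminent=0.1,
                       p_hostile_takeover_imminent=0.02)
    print(review_suspension(today, my_clauses))  # [] -- no clause triggered
```

The hard part, of course, is not the bookkeeping but producing those probability estimates and trusting the organization to act on them in time, which is what much of the discussion below is about.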

While obvious, it seems worth pointing out that there's no way to decrease the probability of worse-than-death outcomes to 0%. This is also true for currently living people: people whose brains are not preserved could likewise experience worse-than-death outcomes and/or have their lifespans extended against their wishes.

For people who are concerned about this, I have three main questions: 

1) Do you think that an opt-out clause is a useful-in-principle way to address your concerns?

2) If no to #1, is there some other mechanism that you could imagine which would work?

3) Can you enumerate some specific world-states that you think could lead to revival in a worse-than-death state? (Examples: UFAI is imminent, or a malevolent dictator's army is about to take over the world.) 

31 comments

Comments sorted by top scores.

comment by Brian_Tomasik · 2015-02-21T23:16:57.764Z · LW(p) · GW(p)

A "do not resuscitate" kind of request would probably help with some futures that are mildly bad in virtue of some disconnect between your old self and the future (e.g., extreme future shock). But in those cases, you could always just kill yourself.

In the worst futures, presumably those resuscitating you wouldn't care about your wishes. These are the scenarios where a terrible future existence could continue for a very long time without the option of suicide.

Replies from: Evan_Gaensbauer, Synaptic
comment by Evan_Gaensbauer · 2015-02-22T00:33:14.483Z · LW(p) · GW(p)

Edit: replies to this comment have changed my mind: I no longer believe the scenario(s) I illustrate below are absurd. That is, I no longer believe they're so unlikely or nonsensical that they're not even worth acknowledging. However, I don't know what probability to assign to such outcomes, and for all I know it might make the most sense to think the chances are still very low. I believe they're worth considering, but I'm not claiming it's a big enough deal that nobody should sign up for cryonics.

In the worst futures, presumably those resuscitating you wouldn't care about your wishes. These are the scenarios where a terrible future existence could continue for a very long time without the option of suicide.

The whole point of this discussion is that incredibly bad outcomes, however unlikely, may happen, so we wish to prepare for them. So, I understand why you point out this possibility. Still, that scenario seems very unlikely to me. Yudkowsky's notion of Unfriendly AI is predicated on most of the possible minds an AI might have not caring about human values, and so just using our particles to "make something else". If the future turns into the sort of Malthusian trap Hanson predicts, it doesn't seem that the minds around then would care about resuscitating us. I believe they would be indifferent, up until the point they realized that wherever our mind-brains are stored is real estate to be used for their own processing power. Again, they would obliterate our physical substrates without bothering to revive us.

I'm curious why, or what, minds would want to resuscitate us without caring about our wishes. Why put us through virtual torture when, if they needed minds to efficiently achieve a goal, they could presumably make new ones that won't object to or suffer through whatever tribulations they must labor through?

Addendum: shminux reasons through it here, concluding it's a non-issue. I understand your concern about possible future minds being made sentient and forced into torturous labor. As much as that merits concern, it doesn't explain why Omega would bother reviving us, of all minds, to do it.

Replies from: Houshalter, DefectiveAlgorithm, Richard_Kennaway
comment by Houshalter · 2015-02-23T00:02:56.856Z · LW(p) · GW(p)

I'm not saying it's inevitable, but it's a failure of imagination if you can't think of any way the future could go horribly wrong like that.

My biggest concern is an AI or civilization that decides to create a real hell to punish people for their sins. Humans have pretty strong feelings towards wanting to punish those who did wrong, and our morality and views on punishment are constantly changing.

E.g., if a slaveholder were alive today, some people might want to see them tortured. In the future, perhaps they will want to punish, hypothetically, meat eaters, or people who weren't as altruistic as possible, or something we can't even conceive of.

Replies from: jlp, Evan_Gaensbauer
comment by jlp · 2015-02-23T01:21:04.196Z · LW(p) · GW(p)

Yeah, there are plenty of examples of dictators who go to great lengths to inflict tremendous amounts of pain on many people. It's terrifying to think of someone like that in control of an AGI.

Granted, people like that probably tend to be less likely than the average head-of-state to find themselves in control of an AGI, since brutal dictators often have unhealthy economies, and are therefore unlikely to win an AGI race. But it's not like they have a monopoly on revenge or psychopathy either.

Replies from: Houshalter
comment by Houshalter · 2015-02-23T01:47:24.071Z · LW(p) · GW(p)

I think sociopaths are about 4% of the population, so your scenario isn't really that implausible. I just meant the case where all of society's values change over time. Or just an FAI extracting our "true" utility function, which includes all the negative stuff, like the desire for revenge.

comment by Evan_Gaensbauer · 2015-02-23T10:22:47.994Z · LW(p) · GW(p)

Yeah, someone made another reply to my question to that effect. Yudkowsky and MIRI emphasize how, in the space of all possible minds a general machine intelligence might develop, the region containing human-like minds is very small. So, originally, I was thinking that the chance a machine mind would torture living humans was conditional upon a prior mind (human or other) programming it that way, which itself depends upon a machine being built that even recognizes human feelings as mattering at all. The chances of all that happening seemed vanishingly small to me.

However, I could be overestimating the likelihood that Yudkowsky's predictions are correct. For example, Robin Hanson believes the outcome could be much different, with superintelligence not going 'foom', and instead being based upon human brain emulations (HBEs). Based on related topics, I've assumed the Yudkowsky-Hanson AI-Foom debate is over my head, so I haven't read it yet. However, others more knowledgeable than I apparently see merit in Hanson's position and criticisms, including Luke Muehlhauser when I asked him a couple of years ago. While MIRI may approach safety engineering in a way that doesn't depend too much on the nature of the technological singularity, they could still be wrong about it being an intelligence explosion. I don't claim nobody can tell which type of singularity is more likely; I merely mean I'm agnostic on the subject until I (can) examine it better.

Anyway, a singularity more like the one Hanson predicts makes it seem more likely that AGI would notice human values, and could hurt us. For example, HBEs could be controlled by hostile minds, which would care about hurting us much more than an AGI born from an intelligence explosion. I'm not confident the likelihood of such scenarios is high enough that I and others shouldn't sign up for cryonics; I myself am still undecided about cryonics, and skeptical of aspects of the procedure(s). However, at first I believed this outcome was absurd: I thought the scenario so ludicrous or contrived that it wasn't even worth assigning a probability to. That was indeed a failure of my imagination. I don't know what probability to assign now to outcomes where I or others wake up and suffer immense torture at the hands of a hostile future. However, I no longer believe it should be utterly neglected in calculations of the value of 'getting froze', or whatever.

comment by DefectiveAlgorithm · 2015-02-22T13:32:36.230Z · LW(p) · GW(p)

More concerning to me than outright unfriendly AI is an AI whose creators attempted to make it friendly but only partially succeeded, such that our state is relevant to its utility calculations, but not necessarily in ways we'd like.

comment by Richard_Kennaway · 2015-02-22T07:55:26.513Z · LW(p) · GW(p)

I'm curious why or what minds would want to resuscitate us without caring about our wishes.

Experimental material for developing resuscitation technology. Someone has to be the first attempted revival.

comment by Synaptic · 2015-02-21T23:22:51.134Z · LW(p) · GW(p)

I think I did not explain my proposal clearly enough. What I'm claiming is that if you could see intermediate steps suggesting that a worst-type future is imminent, or merely that one has crossed your probability threshold for "too likely", then you could enumerate those circumstances in advance and request to be removed from biostasis at that point, before those who would resuscitate you have a chance to do so.

Replies from: Brian_Tomasik
comment by Brian_Tomasik · 2015-02-21T23:29:15.813Z · LW(p) · GW(p)

Ah, got it. Yeah, that would help, though there would remain many cases where bad futures come too quickly (e.g., if an AGI takes a treacherous turn all of a sudden).

comment by Aleksander · 2015-02-23T17:17:58.383Z · LW(p) · GW(p)

3) Digital blueprints of preserved brains are made available for anyone to download. Large numbers of simulations are run by kids learning how to use the simulation APIs, folks testing poker bots, web search companies making me read every page on the Internet to generate a ranking signal, etc. etc.

comment by Shmi (shminux) · 2015-02-21T23:14:47.314Z · LW(p) · GW(p)

Easy: if you are worried about worse-than-death life after revival, don't get preserved. It's not like there are too few people in the world and no way to create more. I'll take my chances, if I can. I don't expect it to be a problem to self-terminate later, should I want to. I don't put any stock in the scary scenarios where an evil Omega tortures a gazillion of my revived clones for eternity.

Replies from: Synaptic, jlp
comment by Synaptic · 2015-02-21T23:25:25.946Z · LW(p) · GW(p)

Well, this is certainly a reasonable response. But if there is a mechanism to decrease the probability of a worse-than-death outcome, so that people who have expressed these concerns are more likely to want brain preservation and more people can be a part of the future, that seems like an easy win. I don't think people are particularly fungible.
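To make the "easy win" intuition concrete, here is a toy expected-value sketch. The utilities and probabilities are entirely made up for illustration (none of these numbers come from the thread); the only point is that, for someone who weights the worse-than-death outcome heavily, shaving even a little probability off that outcome can flip the sign of the overall calculation.

```python
# Toy expected-value comparison with illustrative, made-up numbers.
U_GOOD = 1_000      # utility of revival into an acceptable future
U_BAD = -100_000    # utility of revival into a worse-than-death future
U_NONE = 0          # baseline: preserved but never revived

def expected_value(p_good: float, p_bad: float) -> float:
    """Expected utility of signing up, given rough outcome probabilities."""
    p_none = 1.0 - p_good - p_bad
    return p_good * U_GOOD + p_bad * U_BAD + p_none * U_NONE

# Without an out-clause: a 0.1% chance of the worst case swamps the upside.
print(expected_value(p_good=0.05, p_bad=0.001))   # -50.0

# If an out-clause cuts that chance tenfold, the calculation turns positive.
print(expected_value(p_good=0.05, p_bad=0.0001))  # 40.0
```

Of course, as other commenters note, the real disagreement is over whether those probabilities can be meaningfully estimated at all.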

comment by jlp · 2015-02-22T18:00:05.738Z · LW(p) · GW(p)

I don't put any stock in the scary scenarios where an evil Omega tortures a gazillion of my revived clones for eternity.

Could you elaborate on this? I'd be curious to hear your reasoning.

Does "don't put any stock" mean P(x) = 0? 0.01? 1e-10?

Replies from: shminux
comment by Shmi (shminux) · 2015-02-22T19:00:29.032Z · LW(p) · GW(p)

It means the noise level, down there with Pascal's Wager/Mugger and fairy tales coming true. Assigning a number to it would mean giving in to Pascal's Mugging.

comment by Adam Zerner (adamzerner) · 2015-02-22T17:14:37.299Z · LW(p) · GW(p)

1) Do you think that an opt-out clause is a useful-in-principle way to address your concerns?

Yes. In principle, you should better achieve your desired outcomes if you have more precise instructions; this seems sort of obvious.

In practice, I see two problems:

1) The agency might misinterpret your instructions.

2) Ambiguous instructions could facilitate corruption. In the same way that ambiguous laws do.

I'm not sure whether the upsides (more precise instructions allow you to better achieve your outcome) outweigh the downsides (1 and 2). My intuition is that I'm ~65% sure that the upsides outweigh the downsides, but that doesn't reflect much thought.

Replies from: jlp
comment by jlp · 2015-02-22T18:24:12.808Z · LW(p) · GW(p)

Practical problem #3: The agency successfully understands your intentions, and is willing to implement them, but is not able to.

For example, a fast intelligence explosion removes their capability of doing so before they can pull the plug. Or a change in their legal environment makes it illegal for them to pull the plug (and they aren't willing to put themselves at legal risk to do so).

comment by Epictetus · 2015-02-23T17:39:46.121Z · LW(p) · GW(p)

To me one solution is that it seems possible to have an "out-clause": circumstances under which you'd prefer to have your preservation/suspension terminated.

This runs into a thorny ethical problem. It's like assisted suicide, except you're neither terminally ill, nor in a vegetative state, nor in extreme pain. Since you don't have anything more than a vague idea of the future, you're unable to provide the kind of informed consent necessary for this sort of thing. A friendly future is more likely to revive you and provide you with the appropriate psychiatric resources.

Replies from: Jiro
comment by Jiro · 2015-02-23T18:30:06.810Z · LW(p) · GW(p)

I think that is an unnecessarily limited idea of informed consent. Shouldn't knowing a probability distribution be enough for the consent to be informed?

Replies from: Lumifer
comment by Lumifer · 2015-02-23T18:52:32.585Z · LW(p) · GW(p)

Shouldn't knowing a probability distribution be enough for the consent to be informed?

You don't know the probability distribution.

comment by Slider · 2015-02-22T13:33:03.554Z · LW(p) · GW(p)

A future can bring unexpected benefits. What if some positive event happened that would offset any kind of literal terminal condition? For example, in a future that has had a nuclear war but where everybody is telekinetic, how can you have a standing will on what to do if you never seriously considered telekinesis being possible?

comment by AABoyles · 2015-02-23T15:05:25.532Z · LW(p) · GW(p)

The circumstances under which I would opt to be killed are extremely specific. Namely, I would want not to be revived if I were going to be tortured indefinitely. This is actually more specific than it sounds: for it to occur, there must exist an entity which would soon possess the ability to revive me, and an incentive to do so rather than just allowing me to die. I find this to be such an extreme edge case that I'm actually uncomfortable with the framing of the conversation. Instead, I'd turn the question around: under what circumstances do you want to be revived?

Trivially, we should want to be revived into a civilization which possesses the technology to revive us at all, and subsequently to extend our lives. If circumstances on Earth are bad, we should prefer to defer our revival until those circumstances improve. If they never do, the overwhelming probability is that cryonic remains will simply be forgotten, turned off, and the frozen never revived. But building in a terminal death condition which might be triggered denies us the possibility of waiting out those bad circumstances.

tl;dr Don't choose death, choose deferment.

comment by Lalartu · 2015-02-23T08:07:42.699Z · LW(p) · GW(p)

3) Can you enumerate some specific world-states that you think could lead to revival in a worse-than-death state?

Any sort of Hansonian future, where your mind has positive economic value.

comment by irrational_crank · 2015-02-22T19:28:55.542Z · LW(p) · GW(p)

With 1): This may be an obvious problem, but if the singularity, for instance, occurs thousands of years in the future, then whatever language you write your "do not revive" order in, the future civilization may not be able to understand it and therefore might not respect your wishes.

With 3): Perhaps future civilizations that were not interested in revival for its own sake (why would we want another person from so many years ago?) would only revive people when there is a substantial depopulation crisis (e.g., after nuclear war, asteroid strike, etc.). If so, these conditions are unlikely to be very pleasant. However, one could argue that in those cases you have a moral imperative to stay alive and reproduce rather than commit suicide, because if the crisis is temporary, then all the future utilons of your possible descendants are lost and the human species as a whole is more likely to die out.

comment by advancedatheist · 2015-02-22T00:24:10.907Z · LW(p) · GW(p)

Over the past few years, a few people have claimed rejection of cryonics due to concerns that they might be revived into a world that they preferred less than being dead or not existing.

Uh, plenty of people are born into worse-than-death situations already, at least by our standards, yet they generally make a go of their lives instead of committing suicide. We call many of them our "ancestors."

I get a chuckle out of all the contrived excuses people come up with for not having their brains preserved. I really have to laugh at "But I won't know anyone in Future World!" We go through our lives meeting people every day we've never met before, and humans have good heuristics for deciding which strangers we should get to know better and add to our social circles. I had that experience the other day from meeting a married couple involved in anti-aging research, and I got the sense that they felt that way about me, despite my social inadequacies in some areas.

As for revival in a sucky Future World, well, John Milton said it pretty well:

"The mind is its own place, and in it self/ Can make a Heav'n of Hell, a Hell of Heav'n "

Besides, if you have radical life extension and some freedom of action, you'll have the time and resources to find situations more to your liking. For example, suppose you wake up in Neoreactionary Future World, and you long for the Enlightenment sort of world you remembered in the 21st Century. Well, find your place in the current hierarchy and wait a few centuries. The Enlightenment might come around for a second go.

Replies from: JoshuaZ, JoshuaZ, jlp, ike
comment by JoshuaZ · 2015-02-22T03:38:42.722Z · LW(p) · GW(p)

For example, suppose you wake up in Neoreactionary Future World, and you long for the Enlightenment sort of world you remembered in the 21st Century. Well, find your place in the current hierarchy and wait a few centuries. The Enlightenment might come around for a second go.

We all know at this point that this is your favorite example and apparently what you hope will happen. We get the point. Whatever government there is, if any, in a few centuries, I expect it to be radically different from anything we've imagined. People might take your point more seriously if you didn't use it to harp on your own political agenda repeatedly.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2015-02-23T13:15:43.307Z · LW(p) · GW(p)

In this one case I think it was okay. It was a very blank example - the roles could have been swapped or replaced with nearly anything else without any other changes - and moreover he was speculating that people would not like it and that it is reasonable to expect it to fail.

comment by JoshuaZ · 2015-02-22T01:02:33.191Z · LW(p) · GW(p)

I think you are correct to identify these excuses as "contrived"- the central reaction is purely emotional in that the idea seems weird and triggers certain disgust and fear-of-hubris reactions. The reasoning is purely a rationalization.

comment by jlp · 2015-02-22T18:20:42.854Z · LW(p) · GW(p)

Uh, plenty of people are born into worse-than-death situations already, at least by our standards, yet they generally make a go of their lives instead of committing suicide. We call many of them our "ancestors."

Can you elaborate? Your statement seems self-contradictory. By definition, situations "worse than death" would be the ones in which people prefer to kill themselves rather than continue living.

In the context of the original post, I take "worse-than-death" to mean (1) enough misery that a typical person would rather not continue living, and (2) an inability to commit suicide. While I agree many of our ancestors have had a rough time, relatively few of them have had it that hard.

Replies from: Matthew_Opitz
comment by Matthew_Opitz · 2015-02-22T21:09:16.904Z · LW(p) · GW(p)

I'm guessing the author meant that the ancestral environment was one that many of us would now consider "worse than death", given our higher expectations for standard of living, whereas our ancestors were perfectly happy to live in cold caves and die from unknown diseases and whatnot.

I guess the question is: how much higher are our expectations now, really? And how much better do we really have it?

Some things, like material comfort and feelings of material security, have obviously gotten better, but others, such as positional social status anxiety and lack of warm social conviviality, have arguably gotten worse.

comment by ike · 2015-02-22T01:30:57.317Z · LW(p) · GW(p)

My "excuse" was outlined here.

The tl;dr is that we live in a multiverse; signing up for cryonics moves your probability mass into worlds in which very few people survive, whereas not signing up moves your probability mass into worlds in which other things make you survive, and those are likely to be the sorts of things that make a lot more people survive. Also, surviving via not-cryonics seems better than surviving via cryonics, for reasons I can go into if pressed.

I don't feel it's contrived; if I knew that we weren't living in a multiverse, I don't think I would have major objections.