Cryonic resurrection - an ethical hypothetical

post by ialdabaoth · 2012-11-25T00:44:09.458Z · LW · GW · Legacy · 28 comments


At some point in the future, we hope, brains which have been cryonically preserved may be resurrected by some process of neural reconstruction (most likely nanotech, reconstituted wetware, or virtual simulation).

Imagine that the technology has just become available to resurrect a frozen brain. However, the process has low fidelity, due to resource and technique limitations. Luckily, these limitations are purely practical - as the technique is refined, the process of resurrection will become better and better. The process is also destructive to the original preserved brain, so there's no going back and making a second, higher-quality scan.

The result of the process is effectively a copy of the old brain and personality, but with permanent brain damage in several regions - this manifests as an extreme form of cerebral palsy, partial amnesia (retrograde and anterograde), bipolar dysthymia, and a partial frontal lobotomy - in short, you'll get something that has recognizable facets of the original, but it's an utter mess.

As the technology progresses, each of these symptoms will be lessened, until eventually they will be effectively eliminated altogether. However, the first few thousand subjects will suffer irrecoverable memory loss and a horrifically low quality of life for at least several decades until the technology improves.

The technology will not progress in refinement without practice, and practice requires actually restoring cryogenically frozen human brains.

Let's establish a metric so we can talk numerically:

 

0.000 - complete and persistent vegetative state, aka dead (this is our current state of progress in this technology)

0.100 - Terri Schiavo (persistent vegetative state with occasional non-conscious responses)

0.500 - the equivalent of advanced Alzheimer's disease; severe mental and physical impairment

0.700 - moderate mental and physical impairment

0.800 - significant reduction in faculties (IQ loss of 20 to 35 points, severe difficulty with memory, slurred speech, frequent and severe mood swings)

0.900 - slight reduction in faculties (IQ loss of 10 to 20 points, moderate short- and long-term memory loss, frequent but moderate mood swings)

0.950 - liminal reduction in faculties (IQ loss of 5 to 10 points; occasional slowness in memory recall, occasional mood swings)

1.000 - a perfect reproduction of your original personality and capability

QUESTION 1: If your brain were frozen, at what stage in this technological refinement process would you like it to be revived?

QUESTION 2: If you had had your brain preserved before anyone asked you this question, how could the reviving technicians ethically determine this value? Remember that they cannot thaw you to ask you.

QUESTION 3: Assuming, as part of this what-if, that the technology cannot progress past 0.500 fidelity without human trials, who should we attempt to revive when the technology is at 0.500? At 0.700? 0.800? 0.900? 0.950? Assume that we haven't asked any of the subjects this question, so we do not know their own preferences.

 

28 comments

Comments sorted by top scores.

comment by Tenoke · 2012-11-25T01:00:49.605Z · LW(p) · GW(p)

The result of the process is effectively a copy of the old brain and personality, but with permanent brain damage in several regions - this manifests as an extreme form of cerebral palsy, partial amnesia (retrograde and anterograde), bipolar dysthymia, and a partial frontal lobotomy - in short, you'll get something that has recognizable facets of the original, but it's an utter mess.

It seems unlikely that any of those damages except retrograde amnesia can be TRULY permanent in a post-resurrection society. The bigger problem in most people's eyes (afaik) is that what you get back might have only a tiny overlap with the original and the resurrection might be more of a 'creating a new human being which shares something with the original' and less of a resurrection of the old person.

But to answer your questions, assuming that somehow things end up the way you are describing:

Q1: 1.0 if 1.0 is possible, otherwise whatever they can do; I don't mind waiting while I am frozen.

Q2: I think this is at least partially addressed in the cryonics contract (that is what I was told on #lesswrong recently), so there are no ethical problems.

Q3: As far as I know, cryonics operates under a last in, first out paradigm, for obvious reasons.

Replies from: kilobug, ialdabaoth
comment by kilobug · 2012-11-25T10:20:29.766Z · LW(p) · GW(p)

Is "last in, first out" really that obvious ?

I understand the rationale that "first in" will be more damaged, and require more advanced technology to fix.

But then, the "first in" will probably have more permanent damage that no technology will be able to undo. So is it really rational to use imperfect technology on a "last in" brain that could have been restored fully, and then use perfect technology to restore a "first in" brain that will be damaged anyway?

The answer doesn't seem obvious to me, it seems like a real dilemma.

Replies from: Tenoke
comment by Tenoke · 2012-11-25T14:37:09.323Z · LW(p) · GW(p)

It depends on the technology and the actual risks, but yes, it makes more sense to start with the best preserved: after everything has been cleared up you will have fewer completely messed-up people, and the technology will most probably improve faster if you start by using it on better-preserved people, because there are fewer factors to worry about.

comment by ialdabaoth · 2012-11-26T20:09:07.084Z · LW(p) · GW(p)

It seems unlikely that any of those damages except retrograde amnesia can be TRULY permanent in a post-resurrection society. The bigger problem in most people's eyes (afaik) is that what you get back might have only a tiny overlap with the original and the resurrection might be more of a 'creating a new human being which shares something with the original' and less of a resurrection of the old person.

This is a good point, and a few others have touched on it as well. To me, retrograde amnesia isn't the only form of permanent damage to worry about - there's also the set of basic emotional reactions and sensory preferences that we call "personality".

If someone remembered everyone that you remembered, but hated everything that you loved, would you really call that person "you"?

EDIT: It would be useful to me to know why this just got downvoted.

Replies from: Tenoke, shminux
comment by Tenoke · 2012-11-26T20:23:36.240Z · LW(p) · GW(p)

If the only shared characteristic that we have is memories then probably not.

comment by Shmi (shminux) · 2012-11-26T20:22:38.876Z · LW(p) · GW(p)

If someone remembered everyone that you remembered, but hated everything that you loved, would you really call that person "you"?

Not sure about "everything", but people turn from love to hate quite often, yet no one questions that they are still the same person. Reminds me of the movie The Vow.

I have no clear definition of what constitutes the same person, once you don't take into account inhabiting the same body.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-26T20:25:05.419Z · LW(p) · GW(p)

Personally, I don't think anyone does, but that question does seem to lie pretty deep at the bottom of this hypothetical.

EDIT: It would be useful to me to know why this just got downvoted.

comment by [deleted] · 2012-11-26T15:26:35.783Z · LW(p) · GW(p)

If I am frozen, I should volunteer to be revived first/early at any number > 0, the moment they get to the point where they need practice to boost the results. It seems quite likely that the net benefit to society of my doing this (a society which by then will likely include many descendants of mine) is much greater than any personal gains I'm likely to achieve. This also neatly avoids the "Well, what if NO ONE volunteers to be revived early? How does the technology advance?" conundrum that might stall the technology. Because frankly, the thought of that scenario - being on the cusp of a scientific revolution that stalls because no one volunteers for the dangerous job, when I could have and did not - is just horrible.

I also volunteer to be the old man who goes through the teleporter for human testing, and/or the old man to do some kind of destructive uploading, and/or the old man on a one way trip to Mars, once I actually am an old man. As a corpse, I'm even more willing to sacrifice myself for society than I would be as an old man.

Admittedly, I say this as neither a corpse nor an old man, but it doesn't seem likely that I will change my mind, although I can see circumstances where I might. Here is one of them:

"Well, you got shot in the heart this instant, and Alcor miraculously preserves you even though you aren't even signed up for cryonics, and don't live near an Alcor facility, and someone comes up with a possibly quackish revival technique in 2020, but a very reliable scientific paper indicates a much more reliable technique will be available in 5 years."

In THAT kind of case I would probably wait. But that seems like a strawman of the hypothetical. If I steelman the hypothetical, volunteering to be early seems to be the right choice.

comment by RomeoStevens · 2012-11-25T06:01:42.515Z · LW(p) · GW(p)

Assume that we haven't asked any of the subjects this question, so we do not know their own preferences.

Find terminally ill poor people and offer monetary compensation to their families to test revivification techniques on them.

Replies from: DataPacRat
comment by DataPacRat · 2012-11-25T18:09:58.918Z · LW(p) · GW(p)

... presumably at some point after lab-mice, lab-rats, lab-dogs, and lab-chimps have all been revived fully successfully, as far as can be determined?

Replies from: ialdabaoth, RomeoStevens
comment by ialdabaoth · 2012-11-25T23:26:48.839Z · LW(p) · GW(p)

That's an explicit assumption of the hypothetical - "The technology will not progress in refinement without practice, and practice requires actually restoring cryogenically frozen human brains." Suppose that the process requires a lot of recalibration between species, and tends to fail more for brains with more convolutions and synaptic density.

comment by RomeoStevens · 2012-11-25T23:02:57.716Z · LW(p) · GW(p)

Yes, that was assumed.

comment by loup-vaillant · 2012-11-26T11:23:37.684Z · LW(p) · GW(p)

I have a related question which may be of import for neuros. I heard that a significant part of our nervous system lies in the gut, and is sometimes called "the second brain" in jest. As another example, I'm an amateur musician, and I'm a bit worried that semi-automatic processing of finger motion may lie in nervous ganglia well outside my main brain, closer to my fingers (which may be necessary to play impossible pieces while smiling - no, I'm not that good). It would be a chore to learn basic movements again.

My question is: do we have any evidence about whether important information that cannot be recovered from stem cells may lie outside our skull? (Which is not the same as saying the brain holds most such information.) Stated a bit differently, do we have reasons to think that 1.000 fidelity for neuros is impossible, even in principle?

comment by DataPacRat · 2012-11-26T19:47:44.393Z · LW(p) · GW(p)

Is anyone even considering waiting for at least 1.01?

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-26T19:55:50.733Z · LW(p) · GW(p)

I think, definitionally, anything that happens beyond 1.00 is an augmentation, not a resurrection. You can't get more information out of a process than you put into it.

EDIT: It would be useful to me to know why this just got downvoted.

Replies from: Crude_Dolorium
comment by Crude_Dolorium · 2012-11-27T18:30:24.594Z · LW(p) · GW(p)

WRT fidelity of reproduction, yes – but the scale is described in terms of defects that we'd object to regardless of whether they were faithful to the original mind. Most people would prefer to be resurrected with higher intelligence and better memory than they originally had, for instance.

It might be better to describe (edit: as Tenoke already did) the imperfect resurrection as causing not impairment but change: the restored mind is fully functional, but some information is lost and must be replaced with inaccurate reconstructions. The resurrected patient is not quite the same person as before; everything that made them who they are – their personality, their tastes and inclinations, their memories, their allegiances and cares and loves – is different. How inaccurate can a resurrection be and still be worthwhile? How long would you wait (missing out on centuries of life!) for better accuracy?

(This is reminiscent of the scenario where a person is reconstructed from their past behavior instead of their brain. The result might resemble the original, but it's unlikely to be very faithful; in particular, secrets they never revealed would be almost impossible to recover, and some such secrets are important.)

comment by RichardHughes · 2012-11-26T17:44:49.551Z · LW(p) · GW(p)

1: Obviously I would PREFER 1.0, but if it appears likely that it will never happen, I'd be okay with 0.900. I won't be a clever motherfucker anymore, but I'll still be a motherfucker. As long as I have the capacity to love and be loved, I'll find a way to be happy. It might not be the way I use now, but I have confidence I'll find one.

2: Anyone who has themselves frozen without considering this angle is being very silly; ideally we just look in their will. If they didn't specify conditions for their revivification, we should revive them whenever the value seems morally justifiable to the unfreezer due to improving conditions or just economic necessity.

3: I'll volunteer to be the trial for 0.500 function. Sure, it'll suck, and I'll probably die again in the near future, confused and unhappy, but whatever, yo. Science ain't easy. Plus right now I'm resigned to my eventual death regardless, so what the fuck ever.

comment by Eneasz · 2012-11-26T17:35:18.669Z · LW(p) · GW(p)

1: Preferably 1, but if there were expected to be a long time gap between .95 and 1, then .95 is acceptable.

2: Always assume 1. Much like you always assume the subject would prefer no amputation if at all possible.

3: It's extremely difficult for me to swallow that we won't be able to get to at least .9 with animals. But, given that it must be humans, stick with two criteria: a) those who would most advance the understanding of revivification, so we can minimize the number of subjects that will be required, and b) those who will be the best candidates/least impacted by the damage (i.e., better to do someone who'll be dropped to .8 than to do someone who'll be dropped to .7, for the same advancement of knowledge). Among those who best fit the two criteria, randomize. And afterwards the society of revivies should do what they can to make the lives of the damaged as bearable as possible in gratitude for their sacrifice.
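A minimal sketch of how this "two criteria, then randomize" rule could look in practice (the candidate records, scoring fields, weights, and tie-tolerance below are all illustrative assumptions, not anything specified in the comment):

```python
import random

# Hypothetical candidate records: how much a trial on this subject would
# advance revivification knowledge (criterion a) and the fidelity the subject
# is predicted to retain afterwards (criterion b). Values are made up.
candidates = [
    {"name": "subject-1", "knowledge_gain": 0.9, "predicted_fidelity": 0.8},
    {"name": "subject-2", "knowledge_gain": 0.9, "predicted_fidelity": 0.7},
    {"name": "subject-3", "knowledge_gain": 0.5, "predicted_fidelity": 0.8},
]

def score(c):
    # Equal weighting of the two criteria is an arbitrary illustrative choice;
    # higher is better on both axes.
    return c["knowledge_gain"] + c["predicted_fidelity"]

best = max(score(c) for c in candidates)
# Everyone who (near-)ties on the combined criteria goes into the lottery.
finalists = [c for c in candidates if score(c) >= best - 0.05]
print(random.choice(finalists)["name"])
```

Under these made-up numbers only subject-1 makes the final pool; with genuine ties, the random choice does the rest.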

comment by asparisi · 2012-11-26T09:24:27.893Z · LW(p) · GW(p)

Question 1: This depends on the technical details of what has been lost. If it is merely an access problem - if there are good reasons to believe that current/future technologies of this resurrection society will be able to restore my faculties post-resurrection - I would be willing to go for as low as .5 for the sake of advancing the technology. If we are talking about permanent loss, but with potential repair (so, memories are just gone, but I could repair my ability to remember in the future), probably 0.9. If the difficulties would literally be permanent, 1.0, but that seems unlikely.

Question 2: Outside of asking me or my friends/family (assume none are alive or know the answer) the best they could do is construct a model based on records of my life, including any surviving digital records. It wouldn't be perfect, but any port in a storm...

Question 3: Hm. Well, if it were possible to revive someone who was already in the equivalent state before cryonics, it would probably be ethical provided that it didn't make them WORSE. Assuming it did... draw lots. It isn't pretty, but unless you privilege certain individuals, you end up in a stalemate. (This is assuming it is a legitimate requirement: all other options have been effectively utilized to their maximum benefit, and .50 is the best we're gonna get without a human trial.) A model of the expected damage, the anticipated recovery period, and what sorts of changes will likely need to be made over time could make some subjects more viable for this than others, in which case it would be in everyone's interest if the most viable subjects for good improvements were the ones thrown into the lots. (Quality of life concerns might factor in too: if Person A is 80% likely to come out a .7 and 20% likely to come out a .5, and Person B is 20% likely to come out a .7 and 80% likely to come out a .5, then ceteris paribus you go for A and hope you were right. It is unlikely that all cases will be equal.)
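To make the ceteris paribus comparison in that last parenthetical concrete, here is the back-of-the-envelope expected-fidelity arithmetic it implies (treating expected fidelity as the deciding criterion is just one illustrative way to formalize "you go for A"):

$$
\mathbb{E}[\text{fidelity}_A] = 0.8 \times 0.7 + 0.2 \times 0.5 = 0.66,
\qquad
\mathbb{E}[\text{fidelity}_B] = 0.2 \times 0.7 + 0.8 \times 0.5 = 0.54
$$

On that measure, Person A comes out ahead, matching the comment's conclusion.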

comment by pcm · 2012-11-25T18:19:21.263Z · LW(p) · GW(p)

Question 1: About 0.95.

Question 2: Ask people who knew me? Infer a model of my mind from that and my writings? I don't consider it more ethical to use uncertainty as a reason to postpone it until some unforeseeable technology is developed.

Question 3: I'm reluctant to enter such a lottery because I don't trust someone who believes those assumptions. I expect the scanning part of the process to improve (without depending on human trials) to the point where enough information is preserved to make a >0.99 fidelity upload theoretically possible. I would accept a trial which took that information and experimented with a simulation of 0.5 fidelity in an attempt to improve the simulation software, assuming the raw information would later be used to produce a better upload.

comment by Mestroyer · 2012-11-25T01:08:37.740Z · LW(p) · GW(p)

Answer 1: At 1.000. I wouldn't get impatient while frozen. Answer 2: If it's feasible to repair/enhance people's brains further after the original reconstruction, it's not such a big deal, maybe .9-1.0, assuming there was no need to bring some people back under worse conditions so that future people could be brought back under better conditions. Answer 3: Just randomly pick among cryopreserved people, not that difficult.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-25T01:23:55.685Z · LW(p) · GW(p)

Answer 3: Just randomly pick among cryopreserved people, not that difficult.

So, how many dollars/yen/resource-units would you be willing to spend to stay out of that lottery?

Replies from: Mestroyer
comment by Mestroyer · 2012-11-25T01:37:03.764Z · LW(p) · GW(p)

It depends on how likely I am to get poorly resurrected if I don't spend the money, and what else I could be doing with the money. Right now I'm not even signed up to be cryopreserved at all. So my revealed preferences say "none." But I might sign up some other time. Another question is how much I would spend if I had less akrasia, and was more able to intuitively understand large numbers. But if I had no akrasia and could understand large numbers easily, maybe I would not be spending money on cryonics at all, because I would be living in a hut so I could give all my money to effective charities.

But there's also the question of "If there weren't such good things to spend money on in your life, how much resources (say you're trading off vacation days full of fun) would you spend to not be brain-damaged when you are living for billions of years?" And the answer is probably billions, if I act as an expected utility maximizer.

I was thinking the question was asked where you had a bunch of frozen people, who had been frozen for a long time, and none of them foresaw what was going to be needed to develop resurrection technology. But, if this was foreseen halfway through the process of freezing people, it could be good to let some people pay their way out of the lottery, if that money could be put to a good enough use (ie saving people's lives).

comment by ChristianKl · 2012-11-29T14:10:49.980Z · LW(p) · GW(p)

I think it's interesting that you focus on capabilities instead of focusing on personality changes.

It's possible that a reconstruction of my brain produces a human with the same IQ as myself, slowness in memory recall, and occasional mood swings, but who's as different from me as a twin would be.

I would prefer resurrection with -15 IQ but otherwise being completely myself to essentially creating a new human with full IQ that shares some traits that I have.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-29T21:49:04.204Z · LW(p) · GW(p)

I think it's interesting that you focus on capabilities instead of focusing on personality changes.

It seemed simpler to quantify capabilities than to quantify personality shifts, and I wanted to talk in quantifiable terms to avoid ambiguity and nuance creep. In my mind, I think I assume that capability shifts automatically cause personality changes (i.e., emotional continence vs. stoicism).

comment by Crude_Dolorium · 2012-11-27T20:50:02.992Z · LW(p) · GW(p)

Point of pedantry: could you say “cryonic” instead of “cryogenic”?

Replies from: ialdabaoth
comment by ialdabaoth · 2012-11-27T21:06:56.120Z · LW(p) · GW(p)

Yes. I will edit my original post.

Replies from: Crude_Dolorium
comment by Crude_Dolorium · 2012-11-29T00:16:26.855Z · LW(p) · GW(p)

Thanks. You made the world a little clearer.