More Cryonics Probability Estimates

post by jefftk (jkaufman) · 2012-12-17T20:59:49.000Z · LW · GW · Legacy · 89 comments

There are a lot of steps that all need to go correctly for cryonics to work. People who have gone through the potential problems, assigning probabilities to each, have come up with odds of success between 1:4 and 1:435. About a year ago I went through and collected estimates, finding other people's and making my own. I've been maintaining these in a googledoc.

Yesterday, on the bus back from the NYC mega-meetup with a group of people from the Cambridge LessWrong meetup, I got more people to give estimates for these probabilities. We started with my list of potential problems, and I explained the model and how independence works in it [1]. For each question everyone decided on their own answer, and then we went around and shared our answers (to reduce anchoring). Because some people were still going to adjust toward others' answers, I tried to randomize the order in which I asked people for their estimates. My notes are here. [2]

The questions were the potential problems from the googledoc. To see people's detailed responses have a look there, but the bottom-line numbers were:

Person   Chance of failure   Odds of success
Kelly    35%                 1:2
Jim      80%                 1:5
Mick     89%                 1:9
Julia    96%                 1:23
Ben      98%                 1:44
Jeff     100%                1:1500

(These are all rounded, but one of the two should have enough resolution for each person.)

The most significant way my estimate differs from others turned out to be for "the current cryonics process is insufficient to preserve everything". On that question alone we have:

Person   Chance of failure
Kelly    0%
Jim      35%
Mick     15%
Julia    60%
Ben      33%
Jeff     95%

 

My estimate for this used to be more positive, but it was significantly brought down by reading this lesswrong comment:

Let me give you a fuller view: I am a neuroscientist, and I specialize in the biochemistry/biophysics of the synapse (and interactions with ER and mitochondria there). I also work on membranes and the effect on lipid composition in the opposing leaflets for all the organelles involved.

Looking at what happens during cryonics, I do not see any physically possible way this damage could ever be repaired. Reading the structure and "downloading it" is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted. You can't simply replace unfolded proteins, since their relative position and concentration (and modification, and current status in several different signalling pathways) determines what happens to the signals that go through that synapse; you would have to replace them manually, which is a) impossible to do without destroying surrounding membrane, and b) would take thousands of years at best, even if you assume maximally efficient robots doing it (during which period molecular drift would undo the previous work).

Etc, etc. I can't even begin to cover complications I see as soon as I look at what's happening here. I'm all for life extension, I just don't think cryonics is a viable way to accomplish it.

In the responses to their comment they go into more detail.

Should I be giving this information this much weight? "many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted" seems critical.

Other questions on which I was substantially more pessimistic than others were "all cryonics companies go out of business", "the technology is never developed to extract the information", "no one is interested in your brain's information", and "it is too expensive to extract your brain's information".

I also posted this on my blog.


[1] Specifically, each question is asking you "the chance that X happens and this keeps you from being revived, assuming that all of the previous steps succeeded". So if both A and B would keep you from being successfully revived, and I ask them in that order, but you think they're basically the same question, then basically only A gets a probability while B gets 0 or close to it (because B is technically "B given not-A").
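A minimal sketch of how these conditional answers combine, in Haskell (the numbers for pA and pBGivenNotA are made up for illustration, not anyone's actual estimates):

```haskell
-- Each answer is a conditional failure probability:
-- P(step blocks revival | all earlier steps succeeded).
-- Overall success is the product of the complements, so giving B a
-- large probability when it overlaps A would double-count the failure.
pA, pBGivenNotA :: Double
pA          = 0.30  -- P(A blocks revival)
pBGivenNotA = 0.05  -- P(B blocks revival | not A); ~0 if B duplicates A

pSuccess :: Double
pSuccess = (1 - pA) * (1 - pBGivenNotA)  -- 0.70 * 0.95 = 0.665
```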

 

[2] For some reason I was writing ".000000001" when people said "impossible". For the purposes of this model '0' is fine, and that's what I put on the googledoc.

89 comments

Comments sorted by top scores.

comment by gwern · 2012-12-18T18:45:03.912Z · LW(p) · GW(p)

A fault tree showing all the reasons why a car might not start was shown to several groups of experienced mechanics. The tree had seven major branches--insufficient battery charge, defective starting system, defective ignition system, defective fuel system, other engine problems, mischievous acts or vandalism, and all other problems--and a number of subcategories under each branch. One group was shown the full tree and asked to imagine 100 cases in which a car won't start. Members of this group were then asked to estimate how many of the 100 cases were attributable to each of the seven major branches of the tree. A second group of mechanics was shown only an incomplete version of the tree: three major branches were omitted in order to test how sensitive the test subjects were to what was left out. If the mechanics' judgment had been fully sensitive to the missing information, then the number of cases of failure that would normally be attributed to the omitted branches should have been added to the "Other Problems" category. In practice, however, the "Other Problems" category was increased only half as much as it should have been. This indicated that the mechanics shown the incomplete tree were unable to fully recognize and incorporate into their judgments the fact that some of the causes for a car not starting were missing. When the same experiment was run with non-mechanics, the effect of the missing branches was much greater.

https://www.cia.gov/library/center-for-the-study-of-intelligence/csi-publications/books-and-monographs/psychology-of-intelligence-analysis/art13.html

Is subadditivity a one-way ratchet such that we can reliably infer that people are wrong to be more optimistic about cryonics after seeing fewer failure steps?

Replies from: lavalamp, Eliezer_Yudkowsky, MixedNuts
comment by lavalamp · 2012-12-18T19:02:06.277Z · LW(p) · GW(p)

It would have been interesting if they had done a third group and added spurious categories (probably wouldn't work with experienced mechanics) and/or broken down legitimate categories into many more subcategories than necessary. What would that have done to the "other problems" category?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-26T22:44:19.007Z · LW(p) · GW(p)

...it would be really nice if someone had bothered to actually check statistics on how many car failures were actually due to each of the possible causes.

Is subadditivity a one-way ratchet such that we can reliably infer that people are wrong to be more optimistic about cryonics after seeing fewer failure steps?

This sounds wrong to me. In full generality, I expect breaking things into smaller and smaller categories to yield larger and larger probability estimates for the supercategory. We don't know what level of granularity would've led mechanics to be accurate, and furthermore, the main way to produce accuracy would've been to divide things into numbers of categories proportional to their actual probability so that all leaves of the tree had roughly equal weight. Your question sounds like breaking things down more always produces better estimates, and that is not the lesson of this study.

If I was trying to use this effect for a Grey Arts explanation (conveying a better image of what I honestly believe to be reality, without any false statements or omissions, but using explanatory techniques that a Dark Arts practitioner could manipulate to make people believe something else instead, e.g., writing a story as a way of conveying an idea) I would try to diagram cryonics possibilities into a tree where I believed the branches of a given level and the leaf nodes all had roughly equal probability, and just showing the tree would recruit the equal-leaf-size effect to cause the audience to concretely represent this probability estimate.

Replies from: gwern
comment by gwern · 2012-12-27T00:42:42.376Z · LW(p) · GW(p)

This sounds wrong to me. In full generality, I expect breaking things into smaller and smaller categories to yield larger and larger probability estimates for the supercategory. We don't know what level of granularity would've led mechanics to be accurate, and furthermore, the main way to produce accuracy would've been to divide things into numbers of categories proportional to their actual probability so that all leaves of the tree had roughly equal weight. Your question sounds like breaking things down more always produces better estimates, and that is not the lesson of this study.

My suspicion is that conjunctive and disjunctive breakdowns exhibit different behavior which can be manipulated to increase or decrease a naive probability estimate:

  • in a conjunctive case, such as cryonics, the more finely the necessary steps are broken down, the lower you can manipulate a naive estimate.

    To some extent this is appropriate since people are usually overconfident, but I suspect at some granularity, the conjunctions start getting unfairly negative: imagine if people were unwilling to give any step >99% odds, then you can break down a process into a hundred fine steps and their elicited probability must be <0.99^100 or <0.37.

  • in a disjunctive case, we can run it in reverse and instead manipulate upwards a probability estimate by enumerating every possible route

    Like before, this can be appropriate to counter salience biases and really be comprehensive, but it too can be tendentious when it's throwing a laundry list at people. Like before, if people refuse to assign, say, <1% odds to any particular disjunct, then for 100 independent disjuncts, you're going to elicit a high naive probability (>63%*).
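A quick check of both numbers, in the same Haskell spirit as the fold in the footnote below (the 0.99 cap and 0.01 floor are the stipulated bounds from the two bullets above):

```haskell
-- Conjunctive elicitation: 100 steps, none allowed above 99%.
conjunctive :: Double
conjunctive = product (replicate 100 0.99)            -- ~0.366, i.e. <0.37

-- Disjunctive elicitation: 100 independent disjuncts, none below 1%,
-- combined via complements (equivalent to folding p + q - p*q).
disjunctive :: Double
disjunctive = 1 - product (replicate 100 (1 - 0.01))  -- ~0.634, i.e. >63%
```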

Finally, since you can frame a problem as p or 1-p, if you follow me, you can generally force your preferred choice.

With cryonics, you can take the hostile conjunctive approach: "in order for cryonics to work, you must sign up and the cryonics society must not fail and there must not be hyperinflation rendering your life insurance policy worthless and your family must not stall the procedure and the procedure must go well and Ben Best must decide not to experiment on your particular procedure and..." Or you can take the friendly disjunctive approach: "in order for cryonics to fail, all these strategies must fail: your neuronal weights be unrecoverable by an atom-by-atom readout, unrecoverable by inference from local cell structures, unrecoverable by global inferences, unrecoverable from a lifetime of output, unrecoverable by..."

* not sure about this one. I know the generalized sum rule but not how to apply it to 100 0.01 disjuncts; a Haskell fold gives foldr (\a b -> a + b - a*b) 0.01 (replicate 99 0.01) ~> 0.63396.

Replies from: Eliezer_Yudkowsky, Kindly
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-27T02:18:24.232Z · LW(p) · GW(p)

in a conjunctive case, such as cryonics, the more finely the necessary steps are broken down, the lower you can manipulate a naive estimate.

Except that people intuitively average these sorts of links, so hostile manipulation involves negating the conjunction and then turning it into a disjunction - please, dear reader, assign a probability to not-A, and not-B, and not-C - oh, look, the probability of A and B and C seems quite low now! If you were describing an actual conjunction, a Dark Arts practitioner would manipulate it in favor of cryonics by zooming in and dwelling on links of great strength. To hostilely drive down the intuitive probability of a conjunction, you have to break it down into lots and lots of possible failure modes - which is of course the strategy practiced by people who prefer to drive down the probability of cryonics. (Motivation is shown by their failure to cover any disjunctive success modes.)

comment by Kindly · 2012-12-27T06:24:57.520Z · LW(p) · GW(p)

not sure about this one. I know the generalized sum rule but not how to apply it to 100 0.01 disjuncts

This is just the complement of the previous probability you computed: 1-0.99^100, which is indeed approximately 0.632. Rather than compute this directly, you might observe that (1-1/n)^n converges very quickly to 1/e or approximately 0.368.
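A quick numeric check of both forms (a sketch, assuming ordinary Double arithmetic):

```haskell
main :: IO ()
main = do
  print (1 - 0.99 ** 100)        -- 0.6339676587267709, the exact complement
  print (1 - (1 - 1/100) ** 100) -- same quantity in (1 - 1/n)^n form
  print (1 - exp (-1))           -- the limit 1 - 1/e, ~0.6321
```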

Replies from: gwern
comment by gwern · 2012-12-27T19:10:41.730Z · LW(p) · GW(p)

Yeah, nsheppard pointed that out to me after I wrote the fold. Oh well! I'll know better next time.

comment by MixedNuts · 2012-12-24T13:44:19.282Z · LW(p) · GW(p)

Can you clarify whether the following is correct? "The study shows that domain experts add less weight than non-experts to 'other' when important categories are removed."

Replies from: gwern
comment by gwern · 2012-12-26T21:19:50.907Z · LW(p) · GW(p)

Fortunately for you, I have already jailbroken the PDF: http://www.gwern.net/docs/predictions/1978-fischhoff.pdf

comment by Larks · 2012-12-18T03:41:28.382Z · LW(p) · GW(p)

Science has moved away from considering memories to be simply long-term structural changes in the brain, toward seeing memories as the products of "continuous enzymatic activity" (Sacktor, 2007). Enzyme activity ceases after death, which could lead to memory destruction.

For instance, in a slightly unnerving study, Sacktor and colleagues taught mice to avoid the taste of saccharin before injecting them with a PKMzeta-blocking drug called ZIP into the insular cortex. PKM, an enzyme, has been associated with increasing receptors between synapses that fire together during memory recollection. Within hours, the mice forgot that saccharin made them nauseous and began guzzling it again. It seems blocking the activity of PKM destroys memories. Since PKM activity (like all enzyme activity) also happens to be blocked following death, a possible extension of this research is that the brain automatically "forgets" everything after death, so a simulation of your brain after death would not be very similar to you.

http://www.nimh.nih.gov/science-news/2007/memory-sustaining-enzyme-may-help-treat-ptsd-cognitive-decline.shtml

Replies from: AndrewH, Synaptic
comment by AndrewH · 2012-12-18T08:32:57.437Z · LW(p) · GW(p)

Accessing long-term memory appears to be a reconstructive process, which additionally results in accessed memories becoming fragile again; this is what I believe is occurring here. The learned aversion is reconstructed and is then susceptible to damage, much more than other, non-recently-accessed LTM. Consider that the drug didn't destroy ALL of the mice's (fear?) memories, only that which was most recently accessed.

So no worries to cryonics!

comment by Synaptic · 2012-12-18T21:29:20.501Z · LW(p) · GW(p)

simply long-term structural changes in the brain to seeing memories as the products of "continuous enzymatic activity"

Long-term structural maintenance requires continuous enzymatic activity. For example, the average AMPA receptor lasts only around one day: http://www.ncbi.nlm.nih.gov/pubmed/18320299. The actin cytoskeleton, made up of molecules which largely specify the structure of synapses, also requires continuous remodeling. If a structure is visibly the same after vitrification (not trivial), that means the molecules specifying it are likely to not have changed much.

comment by Benya (Benja) · 2012-12-17T22:27:40.038Z · LW(p) · GW(p)

I think Robin's reply to that comment (which he left there last week) got to the heart of the issue:

No doubt you can identify particular local info that is causally effective in changing local states, and that is lost or destroyed in cryonics. The key question is the redundancy of this info with other info elsewhere. If there is lots of redundancy, then we only need one place where it is held to be preserved. Your comments here have not spoken to this key issue.

It may be that what the brain uses to store some vital information is utterly destroyed by cryonics, but there is some other feature of the arrangement of atoms in the brain, possibly some side effect that has no function in the living brain, that is sufficiently correlated with the information we care about that we can reverse-engineer what we need from it. This is the "hard drive" argument for cryonics (I got it from the Sequences, but I would suspect it didn't originate there): it's not that hard (I think, though I do not know much about this topic) to erase data from a hard drive so that the normal functionality of the hard drive can't bring it back, but it's rather difficult to erase it in a way that someone sufficiently motivated with enough funding can't get it back.

However, kalla724 did say

Distortion of the membranes and replacement of solvent irretrievably destroys information that I believe to be essential to the structure of the mind. I don't think that would ever be readable into anything but a pale copy of the original person, no matter what kind of technological advance occurs (information simply isn't there to be read, regardless of how advanced the reader may be).

This is a clear assertion that there aren't even any correlates of that information preserved, if kalla724 has already thought the correlates argument through. It's not clear to me whether or not they have.

Replies from: Benja, torekp
comment by Benya (Benja) · 2012-12-19T11:24:11.421Z · LW(p) · GW(p)

This is a clear assertion that there aren't even any correlates of that information preserved, if kalla724 has already thought the correlates argument through. It's not clear to me whether or not they have.

Note useful discussion today by wedrifid and Eliezer, arguing that kalla724's comments clearly suggest that they haven't. I got the same vibe, but my knowledge of the relevant science is so spotty that I didn't want to make a confident prediction myself.

comment by torekp · 2012-12-18T02:56:00.195Z · LW(p) · GW(p)

This seems like a good place to inject a related point. One of the failure modes listed is

Reviving people in simulation is impossible.

The contrary of which is: Reviving people in simulation is possible. But there is also this possibility to consider: Reviving people in the flesh is possible. So it would seem that we need to branch here, and then estimate the combined probability after assessing each branch. Maybe P( possible in-flesh | impossible in-simulation) is very small, and this branch can be safely ignored. I haven't looked for other branching points, but I don't feel assured that there aren't more.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2012-12-18T13:51:36.832Z · LW(p) · GW(p)

Branching points are important and could definitely make the whole thing more probable. So if you or anyone else sees others, please point them out.

This particular branching point is one I've thought about (cell D26) and don't think is likely enough to even show up in the final odds. The chemicals they use as cryoprotectants are toxic at the concentrations they need to be using, and while that's fine if you're going to be uploaded it's potentially a big problem if you're going to be revived. Future medicine would need to be really good to keep these cells from dying immediately on rewarming. Expense issues are also mostly worse for in-flesh revival.

(One branching that would help would be if plastination became possible, because it removes the problem of needing cryonics organizations to stay extant, functional, and legal.)

Replies from: loup-vaillant, lsparrish
comment by loup-vaillant · 2012-12-19T17:02:40.111Z · LW(p) · GW(p)

Hmm, even plastination could have legal problems where I live. I'm not sure we can do anything other than burn or bury the corpse.

Now if one is willing to break the law, this is only a cubic foot to keep hidden around. I would be willing to face the risk if it meant my family.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2012-12-21T19:31:26.334Z · LW(p) · GW(p)

The advantage of plastination is that once you're preserved you stay that way. Laws keeping you from being preserved hit plastination and cryonics equally.

comment by lsparrish · 2012-12-21T02:02:01.175Z · LW(p) · GW(p)

Low temperature permits a wider range of molecular machinery to function. For example, you could have a burrowing micro-scale machine (it doesn't need to be nano-scale, although components obviously could be) which slowly removes extracellular cryoprotectant and water, replacing it with a nontoxic cryoprotectant. The replacement matter could be laced with other helpful drugs like ischemia blockers and cell membrane fortifiers, which would activate upon warming.

comment by gjm · 2012-12-17T23:51:26.247Z · LW(p) · GW(p)

There's a possibly-important probability missing from your analysis.

For it to be worth paying for cryonics, it has to (1) work and (2) not be redundant. That is: revival and repair has to become feasible and not too expensive before your cryonics company goes bust, disappears in a collapse of civilization, etc. -- but if that happens within your lifetime then you needn't have bothered with cryonics in the first place.

So the success condition is: huge technical advances, quite soon, but not too soon.

Whether this matters depends on (a) whether it's likely that if revival and repair become viable at all they'll do so in the next few decades, and (b) whether, in that scenario, the outcome is so glorious that you simply won't care that you poured a pile of money into cryonics that you could have spent on books, or sex&drugs&rock&roll, or whatever.

Replies from: CarlShulman, torekp, James_Miller
comment by CarlShulman · 2012-12-18T18:12:22.163Z · LW(p) · GW(p)

The cost of life insurance scales with your risk of death in the covered period: if cryonics is rendered redundant then you can stop paying for the life insurance (and any cryonics membership dues) thereafter.

Redundancy would be a significant worry if, counterfactually, you had to pay a non-refundable lump sum in advance.

comment by torekp · 2012-12-18T02:44:57.590Z · LW(p) · GW(p)

Two other potential forms of redundancy:

  • Future civilizations have the power and motivation to restore even people who were simply buried

  • Everything you ever coherently wanted to get out of cryopreservation can be achieved by a cheaper method, e.g. having children

I don't think the first point has significant probability, but I'll throw it out there in case it inspires someone to find more possibilities I've overlooked.

comment by James_Miller · 2012-12-18T15:24:39.888Z · LW(p) · GW(p)

If the alternative is between saving for retirement and cryonics, then for a lot of the probability mass where cryonics is redundant, nanotech or time travel has made us extremely rich, perhaps reducing the cost to us of having not saved (although interest rates might have been high; still, you can check for this along the way). For much of the probability mass where cryonics doesn't work, our species has gone extinct (and not in a good way), eliminating the value of money and the harm of not having saved as much as you would have had you not done cryonics.

I'm an Alcor member.

Replies from: jkaufman, gjm
comment by jefftk (jkaufman) · 2012-12-18T15:37:52.626Z · LW(p) · GW(p)

If the alternative is between saving for retirement and cryonics

In my case (and I think for a significant number of others on LW) the alternative is donating more to effective charities. When your money might go to helping people now or to reducing existential risk, we have a real tradeoff.

Replies from: James_Miller
comment by James_Miller · 2012-12-18T15:51:40.295Z · LW(p) · GW(p)

So your savings for retirement are < the cost of cryonics? I doubt this is true for many LWers >30 years old.

comment by gjm · 2012-12-18T16:59:11.304Z · LW(p) · GW(p)

I agree that the first part of that may well be true -- it was (b) in my last paragraph -- but I'm not so convinced by the second bit. My own evaluation is that most of the probability mass of "cryonics fails for me" involves things going wrong after the end of my life, and while I would indeed very much prefer our species not to go extinct soon after my death, knowing that it will wouldn't stop me caring how comfortable my retirement is, or even caring how much money I'm able to leave to others when I die.

Actually, I'm skeptical of this sort of argument whichever way it goes; my (b) was more a concession to those who think differently than anything else. My preference for the next (say) 20-50 years of my life to be more comfortable isn't materially altered if what follows is going to be infinite blissful heaven, or if it's going to be infinite tormented hell. (Whether the heaven/hell in question are technological or religious or whatever else.) So if cryonics is unnecessary because we all win anyway, I would rather not spend any money preparing for it.

Replies from: James_Miller
comment by James_Miller · 2012-12-18T17:12:58.834Z · LW(p) · GW(p)

Assume that one of the following is true:

1) Cryonics will help you.

2) Cryonics will not help you. Money you save today will not make you happier in the future.

3) Cryonics will not help you. Money you save today will make you happier in the future.

Keeping the likelihood of (1) constant while raising the likelihood of (2) makes cryonics a better bet.
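A toy expected-value version of this argument (in Haskell; the payoffs v and c and the helper evCryonics are invented for illustration, not anything James proposed): scenario 3 is the only one where the cryonics money has a real opportunity cost, so holding P(1) fixed while shifting mass from (3) to (2) shrinks the expected cost without touching the expected benefit.

```haskell
-- Expected value of signing up, relative to not signing up.
-- v = value of revival; c = opportunity cost of the money, felt only in (3).
evCryonics :: Double -> Double -> Double
evCryonics p1 p2 = p1 * v - p3 * c
  where
    p3 = 1 - p1 - p2
    v  = 1000  -- invented
    c  = 10    -- invented

-- evCryonics 0.05 0.2 ~> 42.5
-- evCryonics 0.05 0.5 ~> 45.5  (same P(1), more mass on (2): better bet)
```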

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-18T07:19:38.447Z · LW(p) · GW(p)

To me this just looks like a bias-manipulating "unpacking" trick - as you divide larger categories into smaller and smaller subcategories, the probability that people assign to the total category goes up and up. I could equally make cryonics success sound almost certain by lumping all the failure categories together into one or two big things to be probability-assigned, and unpacking all the disjunctive paths to success into finer and finer subcategories. Which I don't do, because I don't lie.

Also, yon neuroscientist does not understand the information-theoretic criterion of death.

Replies from: wuthefwasthat, David_Gerard, jkaufman, juliawise, printing-spoon, Tenoke
comment by wuthefwasthat · 2012-12-18T08:42:22.799Z · LW(p) · GW(p)

There's another effect of "unpacking", which is that it gets us around the conjunction/planning fallacy. Minimally, I would think that unpacking both the paths to failure and the paths to success is better than unpacking neither.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-18T08:58:50.428Z · LW(p) · GW(p)

I wonder if that would actually work, or if the finer granularity basically just trashes the ability of your brain to estimate probabilities.

Replies from: None
comment by [deleted] · 2012-12-18T13:53:59.321Z · LW(p) · GW(p)

I think it's also worth mentioning that this kind of questionnaire does not account for possible future advancements, which are left out simply because we don't know about them yet. The same applies to further negative changes in the future; but looking at that list, for example, items like the following are completely missing:

  • Legislation for improving the safety and conditions of cryopreserved people is passed
  • Neuroscientists develop new general techniques for restoring function in patients with brain damage
  • Breakthrough in nanotechnology allows better analysis and faster repair of damaged neurons
  • Supercomputers can be used to retrace the original condition of a modified or damaged brain
  • Supercomputers (with the help of FAI?) can be used to reconstruct missing data from redundancy (as mentioned above in Benja's comment)

etc..

That is to say, it's one thing to "unpack" a proposition and another to do it accurately; at the least, I would think a questionnaire with both uncertain positive and uncertain negative future events would seem less biased.

I think it's also worthwhile to consider the possibility that this unpacking business is a sort of inverse of the conjunction fallacy - it's not exactly the same thing, but I think it's a very closely related topic?

comment by David_Gerard · 2012-12-18T12:31:21.497Z · LW(p) · GW(p)

Also, yon neuroscientist does not understand the information-theoretic criterion of death.

They appear to; they are questioning whether current cryonics practice preserves said information at all - they are saying it will destroy it.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-18T21:24:39.823Z · LW(p) · GW(p)

No they're not, they're describing functional damage and saying why it would be hard to repair in situ, not talking about what you can and can't information-theoretically infer about the original brain from the post-vitrification position of molecules. In other words, the argument does not have the form of, "These two cognitively distinct states will map to molecularly indistinguishable end states". I'm not saying you have to use that exact phrasing but it's what the correct version of the argument is necessarily about, since (modus tollens) anything which defeats that conclusion in real life causes cryonics to work in real life.

Replies from: Kawoomba, David_Gerard
comment by Kawoomba · 2012-12-18T22:30:43.909Z · LW(p) · GW(p)

Are you referring to the neuroscientist's discussion linked in the OP? This comment seems quite clear regarding the information-theoretic consequences:

Distortion of the membranes and replacement of solvent irretrievably destroys information that I believe to be essential to the structure of the mind. (...) (information simply isn't there to be read, regardless of how advanced the reader may be).

In our lingo: the state transformation is a non-injective function (=loss of information).

However, the import of the distance between a "best guess" facsimile and the original is hard to evaluate. Would it be on the order of the difference between before and after a night's sleep? Before and after a TBI injury (yay pleonasm)?

Undifferentiable from your current self in a hypothetical Turing test variant, with you squaring off against such a carbon copy?

Speculatively, I'd rather think all that damage to not play that big of a role. Disrupted membranes should still yield the location of the synapses with high spatial fidelity, and the way we interfere with neurotransmitters constantly, the exact concentration in each synapse does not seem identity-constituting.

Otherwise, we'd incur information-theoretic death of our previous selves each time we take e.g. a neurotransmitter manipulating drug such as an SSRI. Which we do in a way, just not in a relevant way.

Replies from: Swimmer963
comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-12-18T23:04:04.045Z · LW(p) · GW(p)

(yay pleonasm)?

I thought you meant "neoplasm", then I actually Googled pleonasm and there's a good chance you mean that. Which is it???

Replies from: Kawoomba
comment by Kawoomba · 2012-12-18T23:20:46.699Z · LW(p) · GW(p)

Heh, pleonasm, since the "I" in the TBI acronym already refers to "injury", thus rendering the second "injury" overkill. Let's get side-tracked on that, typical LW style :)

Pleonasm, neoplasm ... potato, topota.

comment by David_Gerard · 2012-12-18T23:24:07.609Z · LW(p) · GW(p)

kalla724 quotes from the thread:

Reading the structure and "downloading it" is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted.

Distortion of the membranes and replacement of solvent irretrievably destroys information that I believe to be essential to the structure of the mind.

I don't think any intelligence can read information that is no longer there. So, no, I don't think it will help.

The damage that is occurring - distortion of membranes, denaturation of proteins (very likely), disruption of signalling pathways. Just changing the exact localization of Ca microdomains within a synapse can wreak havoc, replacing the liquid completely? Not going to work.

Replacing the solvent, however, would do it almost unavoidably (adding the cryoprotectant might not, but removing it during rehydration will). With membrane-bound proteins you also have the issue of asymmetry. Proteins will seem fine in a symmetric membrane, but more and more data shows that many don't really work properly; there is a reason why cells keep phosphatydilserine and PIPs predominantly on the inner leaflet.

These appear to be saying just what I thought they were saying - current cryonics practice destroys the information - and, given the above, I don't see sufficient evidence to assume your reading.

Replies from: wedrifid
comment by wedrifid · 2012-12-19T00:01:26.444Z · LW(p) · GW(p)

These appear to be saying just what I thought they were saying - current cryonics practice destroys the information - and, given the above, I don't see sufficient evidence to assume your reading.

At best you can get the impression that kalla is in principle aware of the information-theoretic criterion of death but in practice is just conflating it with functional damage and knowledge of how hard it would be to repair in situ. What I observe is a domain expert (predictably, and typically) overestimating the relevance of their expertise to a situation outside what they are actually trained and proficient in. Most salient points:

Reading the structure and "downloading it" is impossible, since many aspects of synaptic strength and connectivity are irretrievably lost as soon as the synaptic membrane gets distorted.

Irretrievably? I'd be surprised if that word means what he thinks it means. In particular, for him to have a correct understanding of the term would require abandoning notions of what his field currently considers possible and doing advanced study in probability theory and physics. (To be credible in this claim he'd essentially have to demonstrate that he isn't thinking like a professional neuroscientist for the purpose of the claim.)

The damage that is occurring - distortion of membranes, denaturation of proteins (very likely), disruption of signalling pathways.

(Those sound like a big deal to a neuroscientist in current practice. Whether they are beyond the theoretical capabilities of a superintelligence to recover? I would bet that the comment author really has no good reason to credibly doubt.)

adding the cryoprotectant might not, but removing it during rehydration will

Rehydration? Removing the cryoprotectant? Assume much? (This itself would be enough to conclude that Kalla is giving a Credible, Professional and Authoritative opinion that cannot be questioned... on an entirely different question to the one that actually matters for reasoning about cryonics-with-expected-superintelligence.)

Proteins will seem fine in a symmetric membrane, but more and more data shows that many don't really work properly

Don't really work properly, huh? (Someone is missing the point again.)

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-19T04:35:15.131Z · LW(p) · GW(p)

What wedrifid said. Everything the guy says is about functional damage. Talking about the impossibility of repairing proteins in-place even more says that this is somebody thinking about functional damage. Throwing in talk about "information destruction" but not saying anything about many-to-one mappings just tells me that this is somebody who confuses retrievable function with distinguishable states. The person very clearly did not get what the point was, and this being the case, I see no reason to try and read his judgments as being judgments about the point.

Replies from: David_Gerard, MixedNuts
comment by David_Gerard · 2012-12-24T18:59:45.430Z · LW(p) · GW(p)

I'd like to be absolutely clear on the claim that's being made here.

If I overstate the claim, understate the claim or even state it in a manner that seems unduly silly, please do correct me - my aim here is to ascertain precisely what the claim being made is.

As I understand it, you are claiming that:

  • current cryonics practice will preserve sufficient information that a future superintelligence (that we do not presently understand enough about to construct or predict the actions of) may, using unspecified future technologies, be able to use the information in the brain preserved using current cryonics practice to reconstruct the personality that was in said brain at the time of its preservation to a sufficient fidelity that it would count to the personality signing up for such preservation as revival;

  • that having no idea what technologies the superintelligence might use to perform this (presently apparently physically impossible) task and having almost no idea about almost any characteristic of this future superintelligence, beyond a list of things we know we don't want it to do, doesn't count as an objection of substance;

  • and that kalla724 being unable to conclusively disprove this is enough reason to dismiss kalla724's objections in toto.

Have I left anything out, overstated or understated anything here?

If the above is wildly off base, could you please summarise the actual claim in your own words?

Replies from: Eliezer_Yudkowsky, Risto_Saarelma
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T20:08:38.946Z · LW(p) · GW(p)

Wildly off base. The key step is whether, on a molecular level, no more than one original person has been mapped to one frozen brain; if this is true, we can expect sufficiently advanced technology generally, and systems described in Drexler's highly specific Nanosystems book particularly, to be sufficient albeit not necessary (brain scanning might work too). There are also a lot of clueless objections along the lines of "But they won't just spring back to life when you warm them up" which don't bear on the key question one way or another. Real debate on this subject is from people who understand the concept of information loss, offering neurological scenarios in which information loss might occur; and real cryonicists try to develop still-better suspension technology in order to avert the remaining probability mass of such scenarios. However, for information loss to actually occur, given current vitrification technology which is actually pretty darned advanced, would require that we have learned a new fact presently unknown to neuroscience; and so scenarios in which present cryonics technology fails are speculative. It's not a question of "fail to disprove", it's a question of what happens if you just extrapolate current knowledge at face value without worrying about whether the conclusion sounds weird. Similarly, you can postulate a social collapse which wipes out the infrastructure for liquid nitrogen production, and a cryonics facility could try to further defend against that scenario by having on-premises cooling powered by solar cells... but if you were actually told the US would collapse in 2028, you would have learned a new fact you did not presently know; it's not a default assumption.

Replies from: David_Gerard, yli
comment by David_Gerard · 2012-12-24T20:16:36.533Z · LW(p) · GW(p)

There's also a lot of clueless objections along lines of "But they won't just spring back to life when you warm them up" which don't bear on the key question one way or another.

This is, of course, not anywhere in anything that kalla724 or I said.

However, for information loss to actually occur, given current vitrification technology which is actually pretty darned advanced, would require that we have learned a new fact presently unknown to neuroscience; and so scenarios in which present cryonics technology fails are speculative.

Thank you, this is a solid claim that current cryonics practice preserves sufficient information (even if we presently have literally no idea how to get it out).

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T23:31:09.723Z · LW(p) · GW(p)

This is, of course, not anywhere in anything that kalla724 or I said.

If you complain about how it would be hard to in-situ repair denatured proteins - instead of talking about how two dissimilar starting synapses would be mapped to the same post-vitrification synapse because after denaturing it's physically impossible to tell if the starting protein was in conformation X or conformation Y - then you're complaining about the difficulty of repairing functional damage, i.e., the brain won't work after you switch it back on, which is completely missing the point.

If neuroscience says conformation X vs. conformation Y makes a large difference to long-term spiking input/output, which current neuroscience holds to be the primary bearer of long-term brain information, and you can show that denaturing maps X and Y to identical end proteins, then the ball has legitimately been hit back into the court of cryonics, because although it's entirely possible that the same information redundantly appears elsewhere and the brain as a whole still identifies a single person and their personality and memories, telling us that cryonics worked would now tell us a new fact of neuroscience we didn't previously know (e.g. that the long-term behavior of this synapse was reflected in a distinguishable effect on the chemical balance of nearby glial cells or something). But currently, if we find out that cryonics doesn't work, we must have learned some new fact of neuroscience about informationally important brain information not visible in vesicle densities, synaptic configurations, and other things that current neuroscience says are important and that we can see preserved in vitrified rat brains.

We don't have current tech for getting info out. There are solid foreseeable routes in both nanoimaging and nanodevices. If the molecules are in place with sufficient resolution, sufficiently advanced and foreseeable future imaging tech or nanomanipulation tech should be able to get the info out. Like, Nanosystems level would definitely be sufficient though not necessary, and those are some fairly detailed calculations, estimates, and toy systems being bandied about.
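A toy restatement of the criterion this thread keeps circling (my own sketch in Haskell, not Eliezer's wording): information-theoretic death is exactly the case where the freezing map is many-to-one.

```haskell
-- Two cognitively distinct starting states for a synapse.
data Conformation = X | Y deriving (Eq, Show)

-- Lossy freeze: both conformations map to the same end state, so no
-- reader, however advanced, can recover which one you started in.
freezeLossy :: Conformation -> String
freezeLossy _ = "denatured"

-- Faithful freeze: distinct states stay distinct, so a sufficiently
-- advanced scanner can in principle invert the map.
freezeFaithful :: Conformation -> String
freezeFaithful X = "vitrified-x"
freezeFaithful Y = "vitrified-y"
```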

comment by yli · 2012-12-25T11:42:26.588Z · LW(p) · GW(p)

The key steps are whether on a molecular level, no more than one original person has been mapped to one frozen brain

Maybe I'm missing something, but even with cremation, on a molecular level probably no more than one person gets mapped to one specific pile of ash, because it would be a huge coincidence if cremating two different bodies ended up creating two identical piles of ash.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-25T11:55:03.196Z · LW(p) · GW(p)

You're missing something. Any one person gets mapped to a very wide spread of possible piles of ash. These spreads overlap a lot between different people. Any one pile of ash could potentially have been generated by an exponentially vast space of persons.

comment by Risto_Saarelma · 2012-12-24T20:46:20.123Z · LW(p) · GW(p)

(presently apparently physically impossible)

I understood a pretty important element in the cryonics argument is assuming that you stick to things that are feasible given our current understanding of physics, though not necessarily given our current level of technology. Conflating technology and physics here will turn the arguments into hash, so it's kinda important to keep them separate. It's generally assumed that the future superintelligences will obey laws of physics that will be pretty much what we understand them to be now, although they may apply them to invent technologies we have no idea about. "Things will have to continue working with the same laws of physics they're working with now" seems different to me from "any random magical stuff can happen because Singularity", which you seem to be going for here.

I'm not sure if "just don't break the laws of physics" is strong enough though. Few people think it very feasible that there would be any way to reconstruct a human body locked in a box and burnt to ash, but go abstract enough with the physics and it's all just a bunch of particles running on neat and reversible trajectories, and maybe some sort of Laplace's demon contraption could track enough of them and trace them back far enough to get the human persona information back. (Or does this run into Heisenberg uncertainty?)

The "possible physically but not technologically" seems like a rather tricky type of reasoning. Imagine trying to explain that you should be able to build a nuclear reactor or a moon rocket to someone who has never heard of physics, in 1920 when you don't have the tech to do either yet. But it seems like the key to this argument, and I rarely see people engaging with it. The counterarguments seem to be mostly about either the technology not being there or philosophical arguments about the continuity of the self.

Replies from: None, David_Gerard
comment by [deleted] · 2013-02-04T19:39:21.691Z · LW(p) · GW(p)

Imagine trying to explain that you should be able to build a nuclear reactor or a moon rocket to someone who has never heard of physics, in 1920 when you don't have the tech to do either yet.

H. G. Wells did it: http://en.wikipedia.org/wiki/The_War_in_the_Air http://en.wikipedia.org/wiki/First_Men_In_The_Moon

Also, people can sometimes do it themselves:

http://www.smithsonianmag.com/history-archaeology/For-40-Years-This-Russian-Family-Was-Cut-Off-From-Human-Contact-Unaware-of-World-War-II-188843001.html

Relevant quote:

"As the Soviet geologists got to know the Lykov family, they realized that they had underestimated their abilities and intelligence. Each family member had a distinct personality; Old Karp was usually delighted by the latest innovations that the scientists brought up from their camp, and though he steadfastly refused to believe that man had set foot on the moon, he adapted swiftly to the idea of satellites. The Lykovs had noticed them as early as the 1950s, when "the stars began to go quickly across the sky," and Karp himself conceived a theory to explain this: "People have thought something up and are sending out fires that are very like stars."

comment by David_Gerard · 2012-12-24T20:57:12.318Z · LW(p) · GW(p)

Note that what I posit as the apparent argument makes no contentions about continuity of self - let's assume minds can in fact be copied around like MP3s.

Yes, I'm annoyed when people pull out a hypothetical magic-equivalent superintelligence that will make everything all better as an argument so solid that the burden of proof is to disprove it: "we don't know what such a being could do (or, indeed, anything else about it), therefore you must prove that such a hypothetical being could not do (whatever magic-equivalent is needed at that point)." They don't know how to get there from here, but they're trying really hard, therefore this hypothetical being should be assumed?

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2012-12-24T21:34:11.518Z · LW(p) · GW(p)

"we don't know what such a being could do (or, indeed, anything else about it), therefore you must prove that such a hypothetical being could not do (whatever magic-equivalent is needed at that point)."

I just said we're assuming we know it can't break the laws of physics.

We can tell that if you blow up someone with antimatter, putting them back together would have to involve breaking the speed of light unless you start out controlling the entire surrounding light cone before the person was blown up. If the person was vitrified, there isn't a similar obvious violation of laws of physics involved in putting them back together.

So it seems like cryonics after death gives you a better chance at being eventually reanimated than antimatter burial after death. With regular burial definitely leaning towards the antimatter option, the causal stuff that needs to be traced back to get you together gets spread too wide. Yet people still argue as if cryonics should be treated just the same as regular burial as long as there's no demonstrable technology that shows it working for humans.

I'm not sure why it's a dealbreaker to assume that the technology side will advance into something we can't fully anticipate. Today's technology is probably extremely weird from the viewpoint of someone from 1900, but barring the quantum mechanical bits, it's still based on the laws of physics a physicists from 1900 would be quite familiar with.

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-24T22:47:43.150Z · LW(p) · GW(p)

Today's technology is probably extremely weird from the viewpoint of someone from 1900, but barring the quantum mechanical bits, it's still based on the laws of physics a physicists from 1900 would be quite familiar with.

The GPS depends on relativity. And "barring the quantum mechanical bits" is a hell of an overwhelming exception. (But make that "a physicist from 1930" and I will agree.)

comment by MixedNuts · 2012-12-24T14:09:03.371Z · LW(p) · GW(p)

Heavy functional damage still rules out some possible revival methods, so reduces probability of success.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T20:17:05.000Z · LW(p) · GW(p)

"Warm 'em up and see if they spring back to life" was a possible revival method that cryonicists already didn't believe in, so pointing out its impossibility should not affect probability estimates relative to what cryonicists have already taken into account.

comment by jefftk (jkaufman) · 2012-12-18T14:14:12.784Z · LW(p) · GW(p)

as you divide larger categories into smaller and smaller subcategories, the probability that people assign to the total category goes up and up

The idea that when people disagree over complex topics they should break their disagreement down is one I've learned in part from Robin Hanson, and in fact he applies it to cryonics.

While Robin has fewer categories, if you look at the detailed probabilities that people gave, we could throw out most of their answers without changing their final numbers; people were good about saying "that seems very unlikely" and giving near-zero probabilities. Most of the effect on the total comes from a few questions where people were saying "oh, that seems potentially serious". If I do this more I'll fold many of the less likely questions into more likely ones (mostly so I get a shorter survey), but I don't think that will change the outcome much.
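A minimal sketch of that claim, in Haskell (the numbers are invented, not the survey's actual answers): since the combined success probability is a product of complements, a few serious concerns dominate and the many near-zero answers barely move the result.

```haskell
-- A few serious per-step failure estimates plus twenty tiny ones.
failures :: [Double]
failures = [0.5, 0.3, 0.2] ++ replicate 20 0.001

successProb :: [Double] -> Double
successProb = product . map (1 -)

-- successProb failures        ~> 0.274
-- successProb [0.5, 0.3, 0.2] ~> 0.280  (dropping all twenty tiny ones)
```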

I would expect unpacking to work for two reasons: to help avoid the planning fallacy and to let us see (and focus on) the individual steps people most disagree on.

unpacking all the disjunctive paths to success into finer and finer subcategories

As far as I can tell there's really only one path to success, and it's the one I put here. In my reply to torekp I talked about why I thought in-the-flesh revival was unlikely enough not to matter. What would you put as disjunctive paths where "you sign up to get frozen and start paying for it" makes the difference in whether you're revived?

If any disjunctive paths are serious enough I'm willing to go back and add them to my model.

EDIT retracted: "looking up the Subadditivity effect I think your claim is just wrong. If anything breaking larger probabilities down makes for larger combined probabilities". [This was wrong because I was confusing the negative and positive formulations. Robin Hanson's is positive (which subadditivity should push in the 'cryonics-likely' direction) while mine was negative (which subadditivity should push in the 'cryonics-unlikely' direction).]

Replies from: jimrandomh, thomblake
comment by jimrandomh · 2012-12-18T22:39:59.878Z · LW(p) · GW(p)

As far as I can tell there's really only one path to success, and it's the one I put here.

I raised an alternative path to success when we discussed this Sunday, at the end when you asked for probability of "other failure" and I argued that it should go both ways. Specifically, I suggested that we could be in a multiverse such that being cryopreserved, even if poorly, would increase the probability of other universes copying you into them. I don't remember the probability I gave this at the time, but I believe it was on the order of 10^-2 - small, but still bigger than your bottom-line probability of 1/1500 (which I disagree with) for cryonics working the obvious way.

Some other low-probability paths-to-win that you neglected:

  • My cryopreservation subscription fees are the marginal research dollars that prevent me from dying in the first place, via a cryonics-related discovery with non-cryonics implications
  • I am unsuccessfully preserved but my helping cryonics reach scale saves others; a future AI keeps me alive longer because having signed up for cryonics signaled that I value my life more
  • While my cryopreserved brain is not adequate to resurrect me by itself, it will be combined with electronic records of my life and others' memories of me to build an approximation of me.
Replies from: homunq, jkaufman
comment by homunq · 2012-12-19T03:16:22.281Z · LW(p) · GW(p)

There are also some less-traditional paths-to-lose:

  • Your cryopreservation subscription fees prevent you from buying something else that ends up saving your life (or someone else's)

  • You would never die anyway, so your cryopreservation fees only cost pre-singularity utilons from you (or others you would have given the money to).

  • Simulation is possible, but it is for some reason much "thinner" than reality; that is, a given simulation, even as it runs on a computer existing in a quantum MWI, follows only a very limited number of quantum branches, so has a tiny impact on the measure of the set of future happy versions of you (smaller even than the plain old non-technological-quantum-immortality versions who simply didn't happen to die).

  • You are resurrected by a future UFAI in a hell-world. For instance, in order to get one working version of you, the UFAI must create many almost-versions which are painfully insane; and its ethics say that's OK. And it does this to all corpsicles it finds but not to any other dead people.

I have strong opinions about the likelihood of these (I'd put one at p>99% and another at p<1%) but in any case they're worth mentioning.

Replies from: loup-vaillant
comment by loup-vaillant · 2012-12-19T17:37:32.915Z · LW(p) · GW(p)

Hmm, regarding quantum immortality, I did think about it. Taken to its extreme, I could perform quantum suicide while tying the result of the quantum draw to the lottery. Then it occurred to me that the vast majority of worlds, in which I did not win the lottery, would contain one more sad mother. Such a situation scores far lower in my utility function than the status quo does.

I feel I should treat quantum suicide by cryostination the same way. The only problem is that the status quo bias works against me this time.

Replies from: homunq
comment by homunq · 2012-12-19T18:53:32.102Z · LW(p) · GW(p)

Sorry: I edited the comment you were responding to to clarify my intended meaning, and now perhaps the (unintended?) idea you were responding to is no longer there.

comment by jefftk (jkaufman) · 2012-12-19T01:50:16.781Z · LW(p) · GW(p)

Whoops; this totally slipped my mind. Thanks for including them here.

comment by thomblake · 2012-12-18T15:16:42.062Z · LW(p) · GW(p)

looking up the Subadditivity effect I think your claim is just wrong. If anything breaking larger probabilities down makes for larger combined probabilities.

Yes, that was the claim.

comment by juliawise · 2012-12-20T02:06:22.867Z · LW(p) · GW(p)

I could equally make cryonics success sound almost certain

I'd be interested to see someone do that.

There are a lot of variants on this exercise that could be studies in bias. The five of us doing this estimate on the bus, for example, realized that our answers came out clustered while Jeff's was far away because we had done it together. For each individual question we were supposed to think of our own answer before anyone spoke, to avoid anchoring. But we were anchored by the answers the others had given to all the previous questions.

comment by printing-spoon · 2012-12-19T02:37:05.740Z · LW(p) · GW(p)

To me this just looks like a bias-manipulating "unpacking" trick - as you divide larger categories into smaller and smaller subcategories, the probability that people assign to the total category goes up and up.

How do you know the raised estimate with this "trick" is worse than the estimate without?

I could just as easily say, "As you merge smaller categories into larger and larger categories, the probability that people assign to the total category goes down."

comment by Tenoke · 2012-12-18T12:19:56.298Z · LW(p) · GW(p)

'Subadditivity effect'

Replies from: jkaufman
comment by jefftk (jkaufman) · 2012-12-18T14:26:06.807Z · LW(p) · GW(p)

Which points in the opposite direction.

comment by Synaptic · 2012-12-18T21:20:22.594Z · LW(p) · GW(p)

Upvoted the post. Worthy thing to discuss.

A reply to kalla724 that you did not mention is here: http://lesswrong.com/lw/d4a/brief_response_to_kalla724_on_preserving_personal/

Kalla724 claims that it is not possible to upload a C. elegans with particular memories and/or behaviors. I think this is a testable claim, and testing it should shed light on kalla724's views on preserving personal identity with vitrification. I also think it is likely wrong.

Replies from: ialdabaoth
comment by ialdabaoth · 2012-12-25T05:33:57.520Z · LW(p) · GW(p)

Whether C. elegans can be uploaded with particular memories and/or behaviors has no bearing on whether human personal identity is preserved, since the C. elegans nervous system is completely identified - every C. elegans brain grows identically to every other C. elegans brain, so there are no structural wiring differences between one C. elegans and another. "Memories" (better thought of as stimulus-based behavioral divergences in something so small) are not encoded in the C. elegans' neural pattern at all, the way they are encoded in the human brain; they're merely held in a sort of 'active loop' of neurochemical feedback mechanisms.

It's certainly possible that the same sort of thing happens in human brains on a much more complex scale - but it definitely seems true that human brains actively re-wire their neural interconnections in a way that C. elegans brains don't.

Replies from: asparisi
comment by asparisi · 2012-12-25T06:59:48.810Z · LW(p) · GW(p)

I wouldn't say it has no bearing. If C. elegans could NOT be uploaded in a way that preserved behaviors/memories, you would assign a high probability to human brains not being able to be uploaded. So:

If (C. elegans) & ~(Uploading) goes up, then (Human) & ~(Uploading) goes WAY up.

Of course, this commits us to the converse. And since the converse is what happened, we would say that it does raise the Human & Uploadable probabilities. Maybe not by MUCH. You rightly point out the dissimilarities that would make it a relatively small increase. But it certainly has some bearing, and in the absence of better evidence it is at least encouraging.
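As a rough illustration of the update being described - with invented numbers, since nobody in the thread commits to any - the size of the shift is governed by the likelihood ratio between the "humans uploadable" and "humans not uploadable" hypotheses:

```python
# Made-up numbers: how a successful C. elegans upload would update
# P(human uploading preserves identity) via Bayes' rule.
p_human = 0.30          # prior: human uploading is possible
p_worm_if_human = 0.90  # if humans are uploadable, C. elegans almost surely is
p_worm_if_not = 0.60    # even if humans aren't, the far simpler worm might be

p_worm = p_worm_if_human * p_human + p_worm_if_not * (1 - p_human)
posterior = p_worm_if_human * p_human / p_worm

print(f"prior {p_human:.2f} -> posterior {posterior:.2f}")  # 0.30 -> ~0.39
```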

comment by mfb · 2012-12-17T23:43:53.567Z · LW(p) · GW(p)

It would be very interesting to see cryonics for very simple brains of other species. This could determine or narrow down the range of probability for several factors.

Edit: Removed doubled word

comment by Merkle · 2012-12-19T04:36:56.294Z · LW(p) · GW(p)

There is a helpful web page on the probability that cryonics will work.

There are also some useful facts at the Alcor Scientists' Cryonics FAQ.

The neuroscientist might wish to pay attention to the answer to "Q: Can a brain stop working without losing information?" The referenced article by Mayford, Siegelbaum, and Kandel should be particularly helpful.

comment by KrisC · 2012-12-23T01:05:40.032Z · LW(p) · GW(p)

What is the chance that some other means are found of simulating your personality without physical access to your brain (preserved or otherwise)?

Would you like to consider the possibility of cryonic preservation / plastination becoming redundant in your estimates?

Replies from: army1987, CAE_Jones
comment by A1987dM (army1987) · 2012-12-23T11:50:01.219Z · LW(p) · GW(p)

What is the chance that some other means are found of simulating your personality without physical access to your brain (preserved or otherwise)?

It probably depends on how faithful a copy you'd be content with, as well as on how much evidence about yourself you leave behind (writings, internet posts, other people's memories, etc. -- lifelogging being the extreme version).

comment by CAE_Jones · 2012-12-23T03:20:55.185Z · LW(p) · GW(p)

I wouldn't, because a simulation of me is effectively a copy, and having a copy lying around would not keep me from dying. It's not like I know a huge number of people would be thrilled at having a simulation of me to interact with (and probably annoy, hehehe). Having a simulation of me while I'm still alive, though, would probably come in handy, so it's not an idea to which I am opposed. I just don't see it making anything with a chance of preserving this instance of me redundant.

Replies from: KrisC
comment by KrisC · 2012-12-23T04:58:05.504Z · LW(p) · GW(p)

Every future state of you is a copy.

I believe having a copy of me lying around would keep me from dying.

However, I was referring to processes that might be put into place after a person's death. To name three: consequences of the simulation hypothesis, personality emulation from recorded sources, or advances in physics allowing observation of past events. Three more: the many-worlds hypothesis, a fundamental error in our worldview, ongoing extra-terrestrial intervention. And the big one, FOOM!

I'm not sure how to cheat death, but I am open to examining options.

comment by simplicio · 2012-12-18T13:27:50.055Z · LW(p) · GW(p)

Not all of what makes you you is encoded in the physical state of the brain (or whatever you would have preserved).

This is probably true, isn't it? Most of what makes you, you, is in your brain, but another large part of it is mediated by hormones going to and from the rest of your body... I think. Yet most LWers who are into cryo go the 'neuro' route. Is there some reason why this consideration is not nearly as big a deal as I think? Is the idea that making a 'spare' human body is cheap?

Replies from: Douglas_Knight, Synaptic
comment by Douglas_Knight · 2012-12-18T20:10:37.613Z · LW(p) · GW(p)

(2) I'm not sure whether I should generalize to much of LW, but when people talk about extracting information from the brain, the plan is not repair, but to make a new brain, whether physical or in simulation. Making a new body is very cheap compared to this.

(1) Simulating hormones is important, but is there any information there to preserve? If the brain controls hormones, then there is no information outside the brain. Of course, it doesn't control them directly; control is mediated by glands that probably have different responsiveness in different people - certainly in people with glandular tumors. But there are just a few parameters to determine, basically average levels for that person. Testing different levels for a person would be like giving them external hormones. This changes people's personalities, but only temporarily. Thus it does not appear that much long-term information is stored in hormone levels. In principle the glands could do lots of information processing, but I don't think there's any reason to believe that. However, the spinal column is made of nerves, which we do know are all about information processing, so it is likely that some information is stored there.

Replies from: simplicio
comment by simplicio · 2012-12-21T16:33:36.609Z · LW(p) · GW(p)

I see your point, thanks!

comment by Synaptic · 2012-12-18T21:21:51.684Z · LW(p) · GW(p)

but another large part of it is mediated by hormones going to and from the rest of your body

http://en.wikipedia.org/wiki/Hypothalamus

comment by hairyfigment · 2012-12-17T23:38:54.388Z · LW(p) · GW(p)

I just ran the numbers assuming I pay US $3000/year (I forget Hoffman's actual figure) for 33 years (mind you, I think deathtimer.com is too pessimistic there) discounted at 3%/year (the average annual inflation rate since 1913 equals 3.24%). The EPA set the value of a human life at $9.1 million two years ago. Perhaps I'm rigging the numbers by updating this for (actual) inflation and only discounting it by the 1/1500 probability. But I first estimated the value of my own life at $20 million, and I don't think I'd actually kill myself in return for (say) an SI donation that size.

The 'official' numbers would appear to make cryonics under-priced by $1403 in present value. (Edited to use official figures.)
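The exact adjustments behind the $1403 figure aren't fully specified, but the shape of the calculation is roughly this (a sketch using the quoted figures; the inflation update to the $9.1 million is omitted, so it won't reproduce the result exactly):

```python
# Sketch of the present-value comparison using the quoted figures.
# The comment's exact adjustments (e.g. the inflation update to the
# $9.1M figure) aren't fully specified, so this is the shape of the
# calculation, not a reproduction of the $1403 result.
rate = 0.03             # annual discount rate
years = 33              # years of payments
premium = 3_000         # US $/year
life_value = 9_100_000  # EPA value of a statistical life
p_success = 1 / 1500    # Jeff's odds that cryonics works

pv_premiums = sum(premium / (1 + rate) ** t for t in range(1, years + 1))
expected_benefit = life_value * p_success

print(f"PV of premiums:   ${pv_premiums:,.0f}")
print(f"expected benefit: ${expected_benefit:,.0f}")
```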

Replies from: jkaufman
comment by jefftk (jkaufman) · 2012-12-18T14:21:13.571Z · LW(p) · GW(p)

Poking at deathtimer I'm not sure it's adjusting for "you've already lived to age X". It says I'll probably live to 77 which is pretty close to what this table has for my life expectancy at birth.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2012-12-19T01:53:34.487Z · LW(p) · GW(p)

Plus it's definitely not adjusting for potential future medical advances.
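A toy illustration of the "already lived to age X" adjustment mentioned above (the survival curve and resulting numbers are invented, not from a real life table):

```python
# Toy survival curve illustrating the "already lived to age X" adjustment.
# s[x] is the probability of surviving from birth to age x; the curve and
# the resulting numbers are invented, not from a real life table.
s = {x: 1 - (x / 90) ** 4 for x in range(91)}  # everyone is dead by 90 here

def expected_remaining(age):
    """Expected further years of life given survival to `age` (curtate approximation)."""
    return sum(s[x] / s[age] for x in range(age + 1, 91))

print(f"expected age at death, from birth:       ~{expected_remaining(0):.0f}")
print(f"expected age at death, given alive at 60: ~{60 + expected_remaining(60):.0f}")
```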

comment by [deleted] · 2012-12-17T21:30:43.230Z · LW(p) · GW(p)

"Brain degradation after death" is the key point in this list that I'd be interested in learning about. I'm not sure if it's proper to ask this in a comment now or should I be studying diligently around the issue, but I think it's also an interesting subject so excuse me.

The cryonics process is often compared to a broken hard drive whose data can still be retrieved, but brains and hard drives store information in very different ways, and this problem always strikes me as very unnerving. Without going into too much detail, it's easy to see how something that is mostly true for hard drives might turn out not to be true at all for brains.

Personally, I would already be signed up for cryonics if only I had the money, and I think it's very important to discuss the topic. This is very much related, because the few discussions I've had about cryonics have usually stumbled on this particular detail. Can the information in the brain really be preserved via cryonics? Does the brain not deteriorate before the actual event of cryopreservation?

Considering that microinfarctions seem to be an irreversible problem even in living human beings for the time being, I'm very skeptical about how often the tissue survives to the point where it's finally frozen.

Just to point out: the cited paragraph in the main post did not cover this area exactly, but instead focused on the cryopreservation process itself, and personally I completely disagree with that neuroscientist's skepticism. If you're interested in why: with his current knowledge, the neuroscientist may be underestimating the capacity of future technologies; he is concentrating on the impossibility of solving the problem with present technology. As long as the information is stored well enough to be reconstructed in theory, I think it's plausible to say it will be possible in practice later. And the neuroscientist did not seem (from my extremely layman perspective) to approach the issue in terms of information-theoretic loss, but rather from the practical angle of extracting that warped information.

I think of cryopreservation as a kind of stable environment where changes to the brain can be traced back and the damage caused by the process potentially reversed. Meanwhile, chemical reactions, damage from microbes, etc. occurring prior to cryopreservation pose the threat of the information being completely lost; degradation of the brain before preservation is the key problem. Something missing, as opposed to something being distorted. That's what I think, anyway - which is not much in terms of reliability.

Could anyone please be nice and elaborate on this?

Replies from: Synaptic, MugaSofer
comment by Synaptic · 2012-12-18T21:32:14.348Z · LW(p) · GW(p)

"Brain degradation after death" is the key point in this list that I'd be interested in learning about. I'm not sure if it's proper to ask this in a comment now or should I be studying diligently around the issue, but I think it's also an interesting subject so excuse me.

Yes, good intuition. This is what Mike Darwin considers the largest problem in cryonics: http://chronopause.com/index.php/2011/02/23/does-personal-identity-survive-cryopreservation/

comment by MugaSofer · 2012-12-17T21:55:47.559Z · LW(p) · GW(p)

I would already be signed up for cryonics if I only had the money for that

I hear it's common to overestimate the cost of cryonics. Have you actually checked the prices? If not, they may be lower than you think.

Full disclosure: I am not signed up for cryonics.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2012-12-17T22:09:45.383Z · LW(p) · GW(p)

The prices range from ~$400/year for life insurance and membership fees if you're young and healthy to ~$100,000 if you're about to die and need to pay for it in full.

Replies from: gjm
comment by gjm · 2012-12-17T23:44:16.873Z · LW(p) · GW(p)

Presumably the $400/year should be expected to increase over time as you grow older and less healthy, and you should expect to end up contributing enough on average (one way or another) to pay that ~$100k when you finally die?

Replies from: Benja, TsviBT
comment by Benya (Benja) · 2012-12-18T02:46:52.853Z · LW(p) · GW(p)

Upvoted because the idea is correct, but $100k is the upper end of the scale: Alcor charges $80,000 for neuropreservation (though $200,000 for whole-body, but really, why would you want that?); with Cryonics Institute you can get by with $28,000 for the cryopreservation and $1,250 for a lifetime membership (plus $120 per year until you can afford the $1,250); and Kriorus only charges $10k for neuropreservation.

comment by TsviBT · 2012-12-18T02:54:01.598Z · LW(p) · GW(p)

Fixed rate life policies are available, but they tend to cost a bit more.

Replies from: Kindly
comment by Kindly · 2012-12-20T03:10:05.977Z · LW(p) · GW(p)

I don't expect there to be a way to cheat statistics: if the life policies all have the same payout, they most likely all have the same expected cost when you take into account interest rates. The insurance company wants to make money (in expectation), after all.
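A minimal sketch of that break-even logic, with an invented mortality distribution (a real insurer would also price in loading, lapses, and underwriting, so treat this as illustrative only):

```python
# Invented mortality distribution: under actuarially fair pricing, any
# premium schedule is scaled so expected discounted premiums match the
# expected discounted payout, so the schedule's shape doesn't change
# its expected cost.
rate = 0.03
payout = 100_000
death_prob = {10: 0.2, 20: 0.3, 30: 0.5}  # hypothetical: die in year 10, 20, or 30

def pv(amount, year):
    return amount / (1 + rate) ** year

expected_payout = sum(p * pv(payout, t) for t, p in death_prob.items())

def expected_premiums(schedule):
    """Expected discounted premiums; premiums stop at death."""
    return sum(
        p * sum(pv(schedule.get(y, 0.0), y) for y in range(1, t + 1))
        for t, p in death_prob.items()
    )

# Scale a level $1/year schedule so the insurer breaks even:
unit = {y: 1.0 for y in range(1, 31)}
level = expected_payout / expected_premiums(unit)
print(f"break-even level premium: ${level:,.0f}/year")  # ~$3,300 with these numbers
```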

comment by [deleted] · 2012-12-24T15:38:47.285Z · LW(p) · GW(p)

Question: Why do people here seem to focus only on the technical aspects of cryonics, and take "future society will revive you-who-are-frozen" as a given? I can't see much reason for them to do so, other than as a historical curiosity.

Replies from: None
comment by [deleted] · 2012-12-25T04:09:22.791Z · LW(p) · GW(p)

Replying to my own question: Xachariah made a more detailed argument about a similar issue, a while back.