I Will Pay $500 To Anyone Who Can Convince Me To Cancel My Cryonics Subscription

post by ChrisHallquist · 2014-01-11T10:39:04.856Z · LW · GW · Legacy · 182 comments

Background:

On the most recent LessWrong readership survey, I assigned a probability of 0.30 on the cryonics question. I had previously been persuaded to sign up for cryonics by reading the sequences, but this thread and particularly this comment lowered my estimate of the chances of cryonics working considerably. Also relevant from the same thread was ciphergoth's comment:

By and large cryonics critics don't make clear exactly what part of the cryonics argument they mean to target, so it's hard to say exactly whether it covers an area of their expertise, but it's at least plausible to read them as asserting that cryopreserved people are information-theoretically dead, which is not guesswork about future technology and would fall under their area of expertise.

Based on this, I think there's a substantial chance that there's information out there that would convince me that the folks who dismiss cryonics as pseudoscience are essentially correct, that the right answer to the survey question was epsilon. I've seen what seem like convincing objections to cryonics, and it seems possible that an expanded version of those arguments, with full references and replies to pro-cryonics arguments, would convince me. Or someone could just go to the trouble of showing that a large majority of cryobiologists really do think cryopreserved people are information-theoretically dead.

However, it's not clear to me how much it is worth my time to seek out such information. It seems that coming up with decisive information would be hard, especially since e.g. ciphergoth has put a lot of energy into trying to figure out what the experts think about cryonics and has come away without a clear answer. And part of the reason I signed up for cryonics in the first place is that it doesn't cost me much: the largest component is the life insurance for funding, only $50 / month.

So I've decided to put a bounty on being persuaded to cancel my cryonics subscription. If no one succeeds in convincing me, it costs me nothing, and if someone does succeed in convincing me the cost is less than the cost of being signed up for cryonics for a year. And yes, I'm aware that providing one-sided financial incentives like this requires me to take the fact that I've done this into account when evaluating anti-cryonics arguments, and apply extra scrutiny to them.

Note that while there are several issues that ultimately go into whether you should sign up for cryonics (the neuroscience / evaluation of current technology, the estimated probability of a "good" future, various philosophical issues), I anticipate the greatest chance of being persuaded by scientific arguments. In particular, I find questions about the personal identity and consciousness of uploads made from preserved brains confusing, but think there are very few people in the world, if any, who are likely to have much chance of getting me un-confused about those issues. The offer is blind to the exact nature of the arguments given, but I mostly foresee being persuaded by the neuroscience arguments.

And of course, I'm happy to listen to people tell me why the anti-cryonics arguments are wrong and I should stay signed up for cryonics. There's just no prize for doing so.

182 comments

Comments sorted by top scores.

comment by V_V · 2014-01-11T16:22:17.147Z · LW(p) · GW(p)

Cryonics success is a highly conjunctive event, depending on a number of different, roughly independent events all happening.

Consider this list:

  • The cryopreservation process as performed by current cryo companies, when executed perfectly, preserves enough information to reconstruct your personal identity. Neurobiologists and cryobiologists generally believe this is improbable, for the reasons explained in the links you cited.
  • Cryocompanies actually implement the cryopreservation process substantially as advertised, without botching or faking it, or generally behaving incompetently. I think there is a significant (>= 50%) probability that they don't: there have been anecdotal allegations of misbehavior, at least one company (the Cryonics Institute) has policies that betray gross incompetence or disregard for the success of the procedure (such as keeping certain cryopatients on dry ice for two weeks), and more generally, since cryocompanies operate without public oversight and without any means to assess the quality of their work, they have every incentive to hide mistakes, take cost-saving shortcuts, use sub-par materials, equipment, or unqualified staff, or even outright defraud you.

  • Assuming that the process has actually preserved the relevant information, technology to recover it and revive you in some way must be developed. Guessing about future technology is difficult. Historically, predicted technological advances that seemed quite obvious at some point (AGI, nuclear fusion power, space colonization, or even flying cars and jetpacks) failed to materialize, while actual technological improvements were often not widely predicted many years in advance (personal computers, cellphones, the Internet, etc.). The probability that technology many years from now goes along a trajectory we can predict is low.

  • Assuming that the tech is eventually developed, it must be sufficiently cheap, and future people must have an incentive to use it to revive you. It's unclear what such an incentive could be. Revival of a few people for scientific purposes, even at a considerable cost, seems plausible, but mass revival of many thousands of frozen primitives?

  • Your cryocompany must not suffer financial failure, or some other significant local disruption, before the tech becomes available and economically affordable. Very few organizations survive more than one century, and those which do, often radically alter their mission. Even worse, it is plausible that before revival tech becomes available, radical life extension becomes available, and therefore people stop signing up for cryonics. Cryocompanies might be required to go on for many decades or centuries without new customers. It's unclear that they could remain financially viable and motivated in this condition. The further in the future revival tech becomes available, the lower the chances that your cryocompany will still exist.

  • Regional or planetary disasters, either natural (earthquake, flood, hurricane, volcanic eruption, asteroid strike, etc.) or human-made (war, economic crisis, demographic crisis due to environmental collapse, etc.) must not disrupt your preservation. Some of these disasters are exceptional; others hit with a certain regularity over the course of a few centuries. Again, the further in the future revival tech becomes available, the higher the chances that a disaster will destroy your frozen remains before then.

You can play with assigning probabilities to these events and multiplying them (a rough sketch of that exercise follows below). I don't recommend trusting any such estimate too much, because it is easy to fool yourself into a false sense of precision while picking numbers that suit whatever you already wanted to believe.
But the takeaway point is that in order for cryonics to succeed, many things have to happen or be true in succession, and the failure of any one of them would make cryonics ultimately fail at reviving you. Therefore, I think, cryonics success is so improbable that it is not worth the cost.
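
To make the multiplication concrete, here is a minimal Python sketch. Every step name and probability in it is a placeholder assumption chosen purely for illustration, not an estimate anyone in this thread endorses; the only point is how quickly a conjunction of merely-plausible steps shrinks.

```python
# Illustrative only: multiply made-up probabilities for the conjunctive steps above.
steps = {
    "preservation_keeps_identity_info": 0.30,
    "provider_executes_competently":    0.50,
    "revival_tech_is_developed":        0.30,
    "future_society_revives_you":       0.30,
    "provider_survives_long_enough":    0.30,
    "no_disaster_destroys_remains":     0.70,
}

p_success = 1.0
for name, p in steps.items():
    p_success *= p

print(f"Joint probability under these placeholder numbers: {p_success:.4f}")
# Prints roughly 0.0028 -- each individually "maybe" step drags the product down fast.
```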

Replies from: gothgirl420666, gjm, ChrisHallquist, Zaine, Gunnar_Zarncke, MugaSofer
comment by gothgirl420666 · 2014-01-11T19:01:19.760Z · LW(p) · GW(p)

You forgot "You will die in a way that keeps your brain intact and allows you to be cryopreserved".

Replies from: None
comment by [deleted] · 2014-01-11T19:11:48.251Z · LW(p) · GW(p)

"... by an expert team with specialized equipment within hours (minutes?) of your death."

Replies from: None
comment by [deleted] · 2014-01-13T13:52:08.473Z · LW(p) · GW(p)

"...a death which left you with a functional-enough circulatory system for cryoprotectants to get to your brain, didn't involve major cranial trauma, and didn't leave you exposed to extreme heat or other conditions which could irretrievably destroy large amounts of brain information. Also the 'expert' team, which probably consists of hobbyists or technicians who have done this at best a few times and with informal training, does everything right."

(This is not meant as a knock against the expert teams in question, but against civilization for not making an effort to get something better together. The people involved seem to be doing the best they can with the resources they have.)

Replies from: khafra
comment by khafra · 2014-01-13T20:07:49.356Z · LW(p) · GW(p)

...Which pretty much rules out anything but death from chronic disease; which mostly happens when you get quite old; which means funding your cryo with term insurance is useless and you need to spring for the much more expensive whole life.

comment by gjm · 2014-01-11T20:15:01.209Z · LW(p) · GW(p)

(My version of) the above is essentially my reason for thinking cryonics is unlikely to have much value.

There's a slightly subtle point in this area that I think often gets missed. The relevant question is not "how likely is it that cryonics will work?" but "how likely is it that cryonics will both work and be needed?". A substantial amount of the probability that cryonics does something useful, I think, comes from scenarios where there's huge technological progress within the next century or thereabouts (because if it takes longer then there's much less chance that the cryonics companies are still around and haven't lost their patients in accidents, wars, etc.) -- but conditional on that it's quite likely that the huge technological progress actually happens fast enough that someone reasonably young (like Chris) ends up getting magical life extension without needing to die and be revived first.

So the window within which there's value in signing up for cryonics is where huge progress happens soon but not too soon. You're betting on an upper as well as a lower bound to the rate of progress.

Replies from: CarlShulman
comment by CarlShulman · 2014-01-11T21:20:58.611Z · LW(p) · GW(p)

There's a slightly subtle point in this area that I think often gets missed.

I have seen a number of people make (and withdraw) this point, but it doesn't make sense, since both the costs and benefits change (you stop buying life insurance when you no longer need it, so costs decline in the same ballpark as benefits).

Contrast with the following question:

"Why buy fire insurance for 2014, if in 2075 anti-fire technology will be so advanced that fire losses are negligible?"

You pay for fire insurance this year to guard against the chance of fire this year. If fire risk goes down, the price of fire insurance goes down too, and you can cancel your insurance at will.

Replies from: NoSuchPlace
comment by NoSuchPlace · 2014-01-11T23:43:46.035Z · LW(p) · GW(p)

I don't think that this is meant as a complete counter-argument against cryonics, but rather a point which needs to be considered when calculating the expected benefit of cryonics. For a very hypothetical example (which doesn't reflect my beliefs) where this sort of consideration makes a big difference:

Say I'm young and healthy, so that I can be 90% confident that I will still be alive in 40 years' time, and I also believe that immortality and reanimation will become available at roughly the same time. Then the expected benefit of signing up for cryonics, all else being equal, would be about 10 times lower if I expected the relevant technologies to go online either very soon (within the next 40 years) or very late (longer than I would expect cryonics companies to last) than if I expected them to go online some time after I had very likely died but before cryonics companies disappeared.

Edit: Fixed silly typo.

Replies from: CarlShulman, army1987, Adele_L
comment by CarlShulman · 2014-01-12T01:31:53.448Z · LW(p) · GW(p)

That would make sense if you were doing something like buying a lifetime cryonics subscription upfront that could not be refunded even in part. But it doesn't make sense with actual insurance, where you stop buying it if it is no longer useful, so costs are matched to benefits.

  • Life insurance, and cryonics membership fees, are paid on an annual basis
  • The price of life insurance is set largely based on your annual risk of death: if your risk of death is low (young, healthy, etc) then the cost of coverage will be low; if your risk of death is high the cost will be high
  • You can terminate both the life insurance and the cryonics membership whenever you choose, ending coverage
  • If you die in a year before 'immortality' becomes available, then it does not help you

So, in your scenario:

  • You have a 10% chance of dying before 40 years have passed
  • During the first 40 years you pay on the order of 10% of the cost of lifetime cryonics coverage (higher because there is some frontloading, e.g. membership fees not being scaled to mortality risk)
  • After 40 years 'immortality' becomes available, so you cancel your cryonics membership and insurance after only paying for life insurance priced for a 10% risk of death
  • In this world the potential benefits are cut by a factor of 10, but so are the costs (roughly); so the cost-benefit ratio does not change by a factor of 10
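
For concreteness, here is a rough sketch of this arithmetic; every figure in it (the lifetime cost, the frontloading share) is made up purely for illustration and is not an actuarial number.

```python
# Compare two worlds: the tech arrives "late" (after you very likely died) vs.
# "early" (after 40 years, when you had only a 10% chance of having died).
LIFETIME_COST = 100_000   # assumed cost of membership + insurance over a full lifetime
FRONTLOADING = 0.02       # assumed share of fixed fees not scaled to mortality risk
P_DEATH_EARLY = 0.10      # chance of dying before year 40 (from the scenario above)
P_DEATH_LATE = 0.90       # chance of dying before a "late" arrival of the tech

# Late world: you keep paying for life, and very likely end up needing cryonics.
late_cost, late_benefit = LIFETIME_COST, P_DEATH_LATE

# Early world: you cancel at year 40, so you pay only ~10% of the lifetime cost
# (plus some frontloading), and the benefit carries the same 10% factor.
early_cost = LIFETIME_COST * (P_DEATH_EARLY + FRONTLOADING)
early_benefit = P_DEATH_EARLY

print(f"late world:  cost per unit of expected benefit = {late_cost / late_benefit:,.0f}")
print(f"early world: cost per unit of expected benefit = {early_cost / early_benefit:,.0f}")
# ~111,111 vs ~120,000: the ratio worsens only modestly (because of frontloading),
# not by the factor of 10 that comparing benefits alone would suggest.
```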
Replies from: NoSuchPlace, private_messaging
comment by NoSuchPlace · 2014-01-12T02:31:19.356Z · LW(p) · GW(p)

True. While the effect would still exist due to front-loading, it would be smaller than I assumed. Thank you for pointing this out to me.

comment by private_messaging · 2014-01-12T10:56:55.687Z · LW(p) · GW(p)

Except people usually compare spending on the insurance, which takes the low probability of need into account, to benefits of cryonics that are calculated without taking the probability of need into account.

The issue is that it is not "cryonics or nothing". There are many possible actions. For example, you can put money or time into better healthcare, to have a better chance of surviving until better brain preservation is available (at which point you may re-decide and sign up for it).

The probability of cryonics actually working is, frankly, negligible - you cannot expect people to do something like this right without any testing, even if the general approach is right and it is workable in principle* (especially not in the alternative universe where people are crazy and you're one of the very few sane ones) - and it is easily outweighed even by minor improvements in your general health. Go subscribe to a gym; for a young person offering $500 to change his mind, that'll probably blow cryonics out of the water by orders of magnitude, cost-benefit wise. Already subscribed to a gym? Work on other personal risks.

* I'm assuming that cryonics proponents do agree that some level of damage - cryopreservation performed too late, for example - would result in information loss that likely cannot be recovered even in principle.
comment by A1987dM (army1987) · 2014-01-12T10:19:06.905Z · LW(p) · GW(p)

but after cryonics companies disappeared.

ITYM “before”.

comment by Adele_L · 2014-01-12T00:53:14.547Z · LW(p) · GW(p)

When immortality is at stake, a 91% chance is much much better than a 90% chance.

Replies from: private_messaging
comment by private_messaging · 2014-01-12T11:04:51.840Z · LW(p) · GW(p)

Not if that 1% (which seems way over-optimistic to me) is more expensive than other ways to gain 1%, such as by spending money or time on better health. Really, you guys are way over-awed by the multiplication of made-up probabilities by made-up benefits, forgetting that all you did was make an utterly lopsided, extremely biased pros-and-cons list, which is a far cry from actually finding the optimum action.

Replies from: Dentin
comment by Dentin · 2014-01-12T14:26:17.808Z · LW(p) · GW(p)

I signed up for cryonics precisely because I'm effectively out of lower cost options, and most of the other cryonicists are in a similar situation.

Replies from: private_messaging
comment by private_messaging · 2014-01-12T21:33:32.741Z · LW(p) · GW(p)

I wonder how good of an idea is a yearly full body MRI for early cancer detection...

Replies from: None
comment by [deleted] · 2014-01-13T00:53:41.536Z · LW(p) · GW(p)

There are those who argue that it's more likely to find something benign that you've always had, that wouldn't hurt you, and that you never knew about - seeing as we all have weird things in us - leading to unnecessary treatments, which have risks.

Replies from: private_messaging, bogus
comment by private_messaging · 2014-01-13T20:10:56.188Z · LW(p) · GW(p)

What about growing weird things?

Here we very often use ultrasound (and the ultrasound is done by the medical doctor rather than by a technician); it finds weird things very, very well, and the solution is simply to follow up later and see if it's growing.

comment by bogus · 2014-01-13T03:07:39.312Z · LW(p) · GW(p)

There are those that argue that it's more likely to find something benign you've always had

This can only decrease the amount of useful information you'd get from the MRI, though - it can't convert a benefit into a cost. After all, if the MRI doesn't show more than the expected amount of weirdness, you should avoid costly treatments.

comment by ChrisHallquist · 2014-01-13T02:46:31.381Z · LW(p) · GW(p)

Most of these issues I was already aware of, though I did have a brief "holy crap" moment when I read this parenthetical statement:

such as keeping certain cryopatients on dry ice for two weeks

But following the links to the explanation, I don't think this considerably impacts my estimate of CI's competence / trustworthiness. This specific issue only affects people who didn't sign up for cryonics in advance, comes with an understandable (if not correct) rationale, and comes with an acknowledgement that it's less likely to work than the approach they use for people who were signed up for cryonics before their deaths.

Their position may not be entirely rational, but I didn't previously have any illusions about cryonics organizations being entirely rational (it seems to me cryonics literature has too much emphasis on the possibility of reviving the original meat as opposed to uploading.)

Replies from: V_V
comment by V_V · 2014-01-13T16:10:06.487Z · LW(p) · GW(p)

But following the links to the explanation, I don't think this considerably impacts my estimate of CI's competence / trustworthiness. This specific issue only affects people who didn't sign up for cryonics in advance, comes with an understandable (if not correct) rationale, and comes with an acknowledgement that it's less likely to work than the approach they use for people who were signed up for cryonics before their deaths.

"less likely to work" seems a bit of an euphemism. I think that the chances that this works are essentially negligible even if cryopreservation under best condition did work (which is already unlikely).

My point is that even if they don't apply this procedure to all their patients, the fact that CI are offering it means that they are either interested in maximizing profit instead of success probability, and/or they don't know what they are doing, which is consistent with some claims by Mike Darwin (who, however, might have had an axe to grind).

Signing up for cryonics is always buying a pig in a poke, because you have no way of directly evaluating the quality of the provider's work within your lifetime; therefore the reputation of the provider is paramount. If the provider behaves in a way which is consistent with greed or incompetence, that is an extremely bad sign.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2014-01-13T21:17:37.239Z · LW(p) · GW(p)

I've read a bit of Mike Darwin's complaints; those look more serious. I will have to look into that further. Can you give me a better sense of your true (not just lower-bound) estimate of the chances that there's something wrong with cryonics orgs on an institutional level that would lead to inadequate preservation even if they had a working procedure in theory?

Replies from: V_V
comment by V_V · 2014-01-13T22:27:15.002Z · LW(p) · GW(p)

I'm not sure how to condense my informal intuition into a single number. I would say > 0.5 and < 0.9, closer to the upper bound (and even closer for the Cryonics Institute than for Alcor).

comment by Zaine · 2014-01-12T00:25:23.309Z · LW(p) · GW(p)

To keep the information all in one place, I'll reply here.

Proof that cryogenic preservation is possible exists in tardigrades - also called waterbears - which can reanimate from temperatures as low as 0.15 K, and which have sufficient neurophysiological complexity to enable analysis of neuronal structural damage.

We don't know if the identity of a given waterbear pre-cryobiosis is preserved post-reanimation. For that we'd need a more complex organism. However, the waterbear is idiosyncratic in its capacity for preservation; while it proves the possibility of cryogenic preservation exists, we ourselves do not have the traits of the waterbear that facilitate its capacity for preservation.

In the human brain there are billions of synapses; the pattern of which neurones connect to which other neurones is called the connectome: this informs who you are. According to our current theoretical and practical understanding of how memories work, if synapses degrade even the slightest amount your connectome will change dramatically, and will thus represent a different person - perhaps even a lesser human (fewer memories, etcetera).

Now, let's assume uploading becomes commonplace and you mainly care about preserving your genetic self rather than your developed self (you without most of your memories and different thought processes vs. the person you've endeavoured to become), so any synaptic degradation of subsistence brain areas becomes irrelevant. What will the computer upload? Into what kind of person will your synapses reorganise? Even assuming they will reorganise might ask too much of the hypothetical.

Ask yourself who - or what - you would like to cryopreserve; the more particular your answer, the more science needed to accommodate the possibility.

Replies from: None
comment by [deleted] · 2014-01-12T06:54:25.254Z · LW(p) · GW(p)

We don't know if the identity of a given waterbear pre-cryobiosis is preserved post-reanimation. For that we'd need a more complex organism.

How would you design that experiment? I would think all you'd need is a better understanding of what identity is. But maybe we mean different things by identity.

Replies from: Zaine
comment by Zaine · 2014-01-12T07:50:44.255Z · LW(p) · GW(p)

We'd need to have a means of differentiating the subject waterbear's behaviour from that of other waterbears; while not exhaustive, classically conditioning a modified reflexive reaction to stimuli (desensitisation, sensitisation) or inducing LTP or LTD on a synapse, then testing whether the adaptations were retained post-reanimation, would be a starting point.

The problem comes when you try to extrapolate success in the above experiment to mean potential for more complex organisms to survive the same procedure given x. Ideally you would image all of the subject's synapses pre-freeze or pre-cryobiosis (depending on what x turns out to be), then image them again post-reanimation, and have a program search for discrepancies. Unfortunately, the closest we are to whole-brain imaging is neuronal fluorescence imaging, which doesn't light up every synapse. Perhaps it might if we use transcranial DC or magnetic stimulation to activate every cell in the brain; doing so may explode a bunch of cells, too. I've just about bent over the conjecture tree by this point.

Replies from: None
comment by [deleted] · 2014-01-12T08:27:09.826Z · LW(p) · GW(p)

Does the waterbear experience verification and then wake up again after being thawed, or does subjective experience terminate with vitrification - subjective experience of death / oblivion - and a new waterbear with identical memories begin living?

Replies from: Zaine, adbge
comment by Zaine · 2014-01-13T00:05:04.970Z · LW(p) · GW(p)

We need to stop and (biologically) define life and death for a moment. A human can be cryogenically frozen before or after their brain shuts down; in either case, their metabolism will cease all function. This is typically a criterion of death. However if, when reanimated, the human carries on as they would from a wee kip, does this mean they have begun a new life? resumed their old life after a sojourn to the Underworld?

You see the quandary our scenario puts to this definition of life, for the waterbear does exactly the above. They will suspend their metabolism, which can be considered death, reanimate when harsh environmental conditions subside, and go about their waterbearing ways. Again, do the waterbears live a subset of multiple lives within the set of one life? Quite confusing to think about, yes?

Now let's redefine life.

A waterbear ceases all metabolic activity, resumes it, then lumbers away. In sleep, one's state pre- and post-sleep will differ; one wakes up with changed neuronal connections, yet considers themselves the same person - or not, but let's presume they do. Take, then, the scenario in which one's state pre- and post-sleep does not differ; indeed, neurophysiologically speaking, it appears they've merely paused then recommenced their brain's processes, just as the time 1:31:00 follows 1:30:59.

This suggests that biological life depends not on metabolic function, but on the presence of an organised system of (metabolic) processes. If the system maintains a pristine state, then it matters not how much time has passed since it last operated; the life of the system's organism will end only when that system becomes so corrupted as to lose the capacity for function. Sufficient corruption might amount to one degraded synapse; it might amount to a missing ganglion. Thus cryogenics' knottiness.

As to whether they experience verification, you'll have to query a waterbear yourself. More seriously, for any questions on waterbear experience I refer you to a waterbear, or a waterbear philosopher. As to whether and to what degree they experience sensation when undergoing cryptobiosis, we can test to find out, but any results will be interpreted through layers of extrapolation: "Ganglion A was observed inhibiting Ganglion B via neurotransmitter D binding postsynaptic alpha receptors upon tickling the waterbear's belly; based on the conclusions of Researchers et al., this suggests the waterbear experienced either mildly positive or extremely negative sensation."

Replies from: Benquo
comment by Benquo · 2014-01-13T19:13:07.617Z · LW(p) · GW(p)

I think the question was a practical one and "verification" should have been "vitrification."

Replies from: Zaine
comment by Zaine · 2014-01-13T19:59:30.435Z · LW(p) · GW(p)

I considered that, but the words seemed too different to result from a typo; I'm interested to learn the fact of the matter.

I've edited the grandparent to accommodate your interpretation.

comment by adbge · 2014-01-12T18:36:52.098Z · LW(p) · GW(p)

Going under anesthesia is a similar discontinuity in subjective experience, along with sleep, situations where people are technically dead for a few moments and then brought back to life, coma patients, and so on.

I don't personally regard any of these as the death of one person followed by the resurrection of a new person with identical memories, so I also reject the sort of reasoning that says cryogenic resurrection, mind uploading, and Star Trek-style transportation is death.

Eliezer has a post here about similar concerns. It's perhaps of interest to note that the PhilPapers survey revealed a fairly even split on the teletransporter problem among philosophers, with the breakdown being 36.2%/32.7%/31.1% as survive/other/die respectively.

ETA: Ah, nevermind, I see you've already considered this.

Replies from: None
comment by [deleted] · 2014-01-12T19:33:58.306Z · LW(p) · GW(p)

Yes, that post still reflects my views. I should point out again that sleep and many forms of anesthesia don't stop the operation of the brain; they just halt the creation of new memories, so people don't remember. That's why, for example, some surgery patients end up with PTSD from waking up on the table, even if they don't remember.

Other cases like temporary (clinical) death and revival also aren't useful comparisons. Even if the body is dying, the heart and breathing stops, etc., there are still neural computations going on from which identity is derived. The irrecoverable disassociation of the particle interactions underlying consciousness probably takes a while - hours or more, unless there is violent physical damage to the brain. Eventually the brain state fully reverts to random interactions and identity is destroyed, but clinical revival becomes impossible well before then.

Cryonics is more of a weird edge case ... we don't know enough now to say with any certainty whether cryonics patients have crossed that red line or not with respect to destruction of identity.

comment by Gunnar_Zarncke · 2014-01-12T20:24:41.529Z · LW(p) · GW(p)

For a formula see http://www.alcor.org/Library/html/WillCryonicsWork.html (I do find the given probabilities significantly too optimistic, though, and lacking references).

comment by MugaSofer · 2014-01-17T02:07:26.597Z · LW(p) · GW(p)

I think there is a significant (>= 50%) probability that they don't: there have been anecdotal allegations of misbehavior, at least one company (the Cryonics Institute) has policies that betray gross incompetence or disregard for the success of the procedure (such as keeping certain cryopatients on dry ice for two weeks), and more generally, since cryocompanies operate without public oversight and without any means to assess the quality of their work, they have every incentive to hide mistakes, take cost-saving shortcuts, use sub-par materials, equipment, or unqualified staff, or even outright defraud you.

Woah, really? This seems ... somewhat worse than my estimation. (Note that I am not signed up, for reasons that have nothing to do with this.)

it is plausible that before revival tech becomes available, radical life extension becomes available, and therefore people stop signing up for cryonics. Cryocompanies might be required to go on for many decades or centuries without new customers. It's unclear that they could remain financially viable and motivated in this condition.

This is a good point that I hadn't heard before.

Replies from: handoflixue, V_V
comment by handoflixue · 2014-01-19T09:26:35.465Z · LW(p) · GW(p)

http://www.alcor.org/cases.html A loooot of them include things going wrong, pretty clear signs that this is a novice operation with minimal experience, and so forth. Also notice that they don't even HAVE case reports for half the patients admitted prior to ~2008.

It's worth noting that pretty much all of these have a delay of at LEAST a day. There's one example where they "cryopreserved" someone who had been buried for over a year, against the wishes of the family, because "that is what the member requested." (It even includes notes that they don't expect it to work, but the family is still $50K poorer!)

I'm not saying they're horrible, but they really come off as enthusiastic amateurs, NOT professionals. Cryonics might work, but the modern approach is ... shoddy at best, and really doesn't strike me as matching the optimistic assumptions of people who advocate for it.

Replies from: MugaSofer
comment by MugaSofer · 2014-01-20T20:48:08.189Z · LW(p) · GW(p)

Yikes. Yeah, that seems like a serious problem that needs more publicity in cryonics circles.

comment by V_V · 2014-01-18T11:43:03.693Z · LW(p) · GW(p)

I think it's also worth considering that a society of people who rarely die would probably have population issues, as there is a limited carrying capacity.
That's most obvious in the case of biological humans, where even with our normal lifespan we are already close to or even above carrying capacity. In more exotic (and thus less probable, IMHO) scenarios such as Hansonian brain emulations, the carrying capacity might perhaps be higher, but it would still be fixed, or at least it would increase slowly once all the easily reachable resources on Earth have been put to use (barring, of course, extreme singularity scenarios where nanomagicbots turn Jupiter into "computronium" or something, which I consider highly improbable).

Thus, if the long-lived future people are to avoid continuous cycles of population overshoot and crash, they must have some way of enforcing a population cap, whether by market forces or government regulation. This implies that reviving cryopreserved people would probably have costs other than those of the revival tech. Whoever revives you would have to split in some way their share of resources with you (or maybe in the extreme case, commit suicide to make room for you).
Hanson, for instance, predicts that his brain emulation society would be a Malthusian subsistence economy. I don't think that such a society could afford to ever revive any significant number of cryopatients, even if they had the technology (how Hanson can believe that society is likely and still be signed up for cryonics is beyond my understanding).
Even if you don't think that a Malthusian scenario is likely, it is still likely that the future will be an approximately steady-state economy, which means there would be strong disincentives against adding more people.

Replies from: MugaSofer
comment by MugaSofer · 2014-01-18T15:03:58.372Z · LW(p) · GW(p)

Even if you don't think that a Malthusian scenario is likely, it is still likely that the future will be an approximately steady-state economy, which means there would be strong disincentives against adding more people.

I'm inclined to agree, actually, but I would expect a post-scarcity "steady-state economy" large enough that absorbing such a tiny number of people is negligible.

With that said:

  • Honestly, it doesn't sound all that implausible that humans will find ways to expand - if nothing else, without FTL (I infer you don't anticipate FTL) there's pretty much always going to be a lot of unused universe out there for many billions of years to come (until the universe expands enough we can't reach anything, I guess.)

  • Brain emulations sound extremely plausible. In fact, the notion that we will never get them seems ... somewhat artificial in its constraints. Are you sure you aren't penalizing them merely for sounding "exotic"?

  • I can't really comment on turning Jupiter into processing substrate and living there, but ... could you maybe throw out some numbers regarding the amounts of processing power and population numbers you're imagining? I think I have a higher credence for "extreme singularity scenarios" than you do, so I'd like to know where you're coming from better.

Hanson, for instance, predicts that his brain emulation society would be a Malthusian subsistence economy. I don't think that such a society could afford to ever revive any significant number of cryopatients, even if they had the technology (how Hanson can believe that society is likely and still be signed up for cryonics is beyond my understanding).

That ... is strange. Actually, has he talked anywhere about his views on cryonics?

Replies from: V_V
comment by V_V · 2014-01-18T22:39:13.311Z · LW(p) · GW(p)

Honestly, it doesn't sound all that implausible that humans will find ways to expand - if nothing else, without FTL (I infer you don't anticipate FTL)

Obviously I don't anticipate FTL. Do you?

there's pretty much always going to be a lot of unused universe out there for many billions of years to come (until the universe expands enough we can't reach anything, I guess.)

Yes, but exploiting resources in our solar system is already difficult and costly. Currently there is nothing in space worth the cost of going there or bringing it back; maybe in the future it will be different, but I expect progress to be relatively slow.
Interstellar colonization might be forever physically impossible or economically unfeasible. Even if it is feasible, I expect it to be very, very slow. I think that's the best solution to Fermi's paradox.

Tom Murphy discussed these issues here and here. He focused on proven space technology (rockets) and didn't analyze more speculative stuff like mass drivers, but it seems to me that his whole analysis is reasonable.

Brain emulations sound extremely plausible. In fact, the notion that we will never get them seems ... somewhat artificial in its constraints. Are you sure you aren't penalizing them merely for sounding "exotic"?

I'm penalizing them because they seem to be far away from what current technology allows (consider the current status of the Blue Brain Project or the Human Brain Project).
It's unclear how many hidden hurdles there are, and how long Moore's law will continue to hold. Even if the emulation of a few human brains becomes possible, it's unclear that the technology would scale to allow a population of billions, or trillions as Hanson predicts. Keep in mind that biological brains are much more energy efficient than modern computers.

Conditionally on radical life extension technology being available, brain emulation is more probable, since it seems to be an obvious avenue to radical life extension. But it's not obvious that it would be cheap and scalable.

I can't really comment on turning Jupiter into processing substrate and living there, but ... could you maybe throw out some numbers regarding the amounts of processing power and population numbers you're imagining? I think I have a higher credence for "extreme singularity scenarios" than you do, so I'd like to know where you're coming from better.

I think the most likely scenario, at least for a few centuries, is that humans will still be essentially biological and will only inhabit the Earth (except possibly for a few Earth-dependent outposts in the solar system). Realistic population sizes will be between 2 and 10 billion.

Total processing power is more difficult to estimate: it depends on how long Moore's law (and related trends such as Koomey's law) will continue to hold. Since there seem to be physical limits that would be hit in 30-40 years of continued exponential growth, I would estimate that 20 years is a realistic time frame. Then there is the question of how much energy and other resources people will invest into computation.
I'd say that a growth of total computing power to between 10,000x and 10,000,000x of the current one in 20-30 years, followed by stagnation or perhaps a slow growth, seems reasonable. Novel hardware technologies might change that, but as usual probabilities on speculative future tech should be discounted.
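
As a sanity check on those figures, here is a tiny sketch of the underlying arithmetic; the doubling times are assumptions chosen to bracket Moore's-law-like trends, not predictions.

```python
# If total computing power doubles every `doubling_years`, the multiplier after
# `horizon_years` is 2 ** (horizon_years / doubling_years).
def growth_multiplier(horizon_years: float, doubling_years: float) -> float:
    return 2 ** (horizon_years / doubling_years)

for horizon in (20, 30):
    for doubling in (1.5, 2.0, 3.0):
        print(f"{horizon} years at one doubling per {doubling} years: "
              f"~{growth_multiplier(horizon, doubling):,.0f}x")
# Doubling every 1.5-2 years gives roughly 1,000x-10,000x over 20 years and
# 30,000x-1,000,000x over 30 years, while a slowdown to one doubling per 3 years
# gives only ~100x-1,000x -- which is why the estimate is so sensitive to how
# long the trend holds.
```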

Replies from: MugaSofer, private_messaging
comment by MugaSofer · 2014-01-20T21:23:37.326Z · LW(p) · GW(p)

I don't anticipate FTL.

Prediction confirmed, then. I think you might be surprised how common it is to anticipate that we will eventually "solve FTL" using "wormholes", some sort of Alcubierre variant, or plain old Clarke-esque New Discoveries - in sciencey circles, anyway.

I'm penalizing them because they seem to be far away from what current technology allows

I ... see. OK then.

Keep in mind that biological brains are much more energy efficient than modern computers.

That seems like a more plausible objection.

Total processing power is more difficult to estimate: it depends on how long Moore's law (and related trends such as Koomey's law) will continue to hold. Since there seem to be physical limits that would be hit in 30-40 years of continued exponential growth, I would estimate that 20 years is a realistic time frame. Then there is the question of how much energy and other resources people will invest into computation. I'd say that a growth of total computing power to between 10,000x and 10,000,000x of the current one in 20-30 years, followed by stagnation or perhaps a slow growth, seems reasonable. Novel hardware technologies might change that, but as usual probabilities on speculative future tech should be discounted.

Hmm. I started to calculate out some stuff, but I just realized: all that really matters is how the number of humans we can support compares to the available human-supporting resources, be they virtual, biological or, I don't know, some sort of posthuman cyborg.

So: how on earth can we calculate this?

We could use population projections - I understand the projected peak is around 2100 at 9 billion or so - but those are infamously unhelpful for futurists and, obviously, may not hold when some technology or another is introduced.

So ... what about wildly irresponsible economic speculation? What's your opinion of the idea we'll end up in a "post-scarcity economy", due to widespread automation etc.

Alternatively, do you think the population controls malthusians have been predicting since forever will finally materialize?

Or ... basically I'm curious as to the sociological landscape you anticipate here.

Replies from: V_V
comment by V_V · 2014-01-23T17:50:59.998Z · LW(p) · GW(p)

So ... what about wildly irresponsible economic speculation? What's your opinion of the idea we'll end up in a "post-scarcity economy", due to widespread automation etc.

As long as we are talking about biological humans (I don't think anything else is likely, at least for a few centuries), the carrying capacity is most likely on the order of billions: each human requires a certain amount of food, water, clothing, housing, healthcare, etc. The technologies we use to provide these things are already highly efficient, hence their efficiency will probably not grow much, at least not by incremental improvement.
Groundbreaking developments comparable to the invention of agriculture might make a difference, but there doesn't seem to be any obvious candidate for that which we can foresee, hence I wouldn't consider that likely.

In optimistic scenarios, we get an approximately steady state (or slowly growing) economy with high per capita wealth, with high automation relieving many people from the necessity of working long hours, or perhaps even of working at all.
In pessimistic scenarios, Malthusian predictions come true, and we get either steady state economy at subsistence level, or growth-collapse oscillations with permanent destruction of carrying capacity due to resource depletion, climate change, nuclear war, etc. up to the most extreme scenarios of total civilization breakdown or human extinction.

Replies from: Lumifer
comment by Lumifer · 2014-01-23T18:32:21.760Z · LW(p) · GW(p)

The technologies we use to provide these things are already highly efficient

This is certainly not true for healthcare.

Groundbreaking developments comparable to the invention of agriculture might make a difference, but there doesn't seem to be any obvious candidate for that which we can foresee

I think that making energy really cheap ("too cheap to meter") is foreseeable and that would count as a groundbreaking development.

Replies from: V_V
comment by V_V · 2014-01-23T20:44:28.902Z · LW(p) · GW(p)

This is certainly not true for healthcare.

Do you think that modern healthcare is inefficient in energy and resource usage? Why?

I think that making energy really cheap ("too cheap to meter") is foreseeable and that would count as a groundbreaking development.

What energy source do you have in mind?

Replies from: Lumifer
comment by Lumifer · 2014-01-23T20:53:38.641Z · LW(p) · GW(p)

Do you think that modern healthcare is inefficient in energy and resource usage? Why?

I think that modern healthcare is inefficient in general cost/benefit terms: what outputs you get at the cost of which inputs. Compared to what seems achievable in the future, of course.

What energy source do you have in mind?

Fusion reactors, for example.

Replies from: V_V
comment by V_V · 2014-01-24T15:33:44.132Z · LW(p) · GW(p)

I think that modern healthcare is inefficient in general cost/benefit terms: what outputs you get at the cost of which inputs. Compared to what seems achievable in the future, of course.

I suppose that in optimistic scenarios one could imagine cutting labor costs using high automation, but we would probably still need hospitals, drug manufacturing facilities, medical equipment factories, and so on.

Fusion reactors, for example.

Always 20-30 years in the future for the last 60 years.
I'm under the impression that nuclear fusion reactors might have already reached technological maturity and thus diminishing returns before becoming commercially viable.

Even if commercial fusion reactors become available, they would hardly be "too cheap to meter".
They have to use the deuterium-tritium reaction (deuterium-deuterium is considered practically unfeasible), which has two main issues: it generates lots of high-energy neutrons and tritium must be produced from lithium.

High-energy neutrons erode any material and make it radioactive. This problem exists in conventional fission reactors, but it's more significant in fusion reactors because of the higher neutron flux. A commercial fusion reactor would probably have higher maintenance requirement and/or shorter lifespan than a fission reactor with the same power.

Lithium is not rare, but not terribly common either. If we were to produce all the energy of the world from fusion, lithium reserves would last between thousands and tens of thousands of years, assuming that energy consumption does not increase.
That's clearly an abundant source of energy (in the same ballpark as uranium and thorium), but not much more abundant than other sources we are used to.

Moreover, in a fission power station the fuel costs make up only a fraction of the total costs per joule of energy. Most of the costs are fixed costs of construction, maintenance and decommissioning.
A fusion power station would have operational and decommissioning safety issues similar to those of a fission one (although it can't go into meltdown), and probably higher complexity, which means that fixed costs will dominate, as for fission power.

If fusion power becomes commercially viable it would be valuable but most likely not "too cheap to meter".

Replies from: Lumifer
comment by Lumifer · 2014-01-24T16:04:13.554Z · LW(p) · GW(p)

I suppose that in optimistic scenarios one could imagine cutting labor costs using high automation

No, I primarily mean new ways of treatment. For example, a hypothetical country which can easily cure Alzheimer's would have much lower costs of medical care for the elderly. Being able to cure (as opposed to control) diabetes, a large variety of autoimmune disorders, etc. has the potential to greatly improve the efficiency of health care.

Always 20-30 years in the future for the last 60 years.

Yes, but I am not saying it will happen; I'm saying this is an example of what might happen. You're basically claiming that there will be no major breakthroughs in the foreseeable future -- I disagree, but of course can't come up with bulletproof examples :-/

Replies from: V_V
comment by V_V · 2014-01-24T16:23:38.445Z · LW(p) · GW(p)

No, I primarily mean new ways of treatment. For example, a hypothetical country which can easily cure Alzheimer's would have much lower costs of medical care for the elderly. Being able to cure (as opposed to control) diabetes, a large variety of autoimmune disorders, etc. has the potential to greatly improve the efficiency of health care.

I see. But the point is how much disability people will have before they die. It's not obvious to me that it will go down; at least, it has gone up in the recent past.

You're basically claiming that there will be no major breakthroughs in the foreseeable future

I'm claiming that breakthroughs which increase the amount of available energy or other scarce resources by a huge amount don't seem especially likely in the foreseeable future.

comment by private_messaging · 2014-01-28T00:26:44.616Z · LW(p) · GW(p)

I'd say that a growth of total computing power to between 10,000x and 10,000,000x of the current one in 20-30 years, followed by stagnation or perhaps a slow growth, seems reasonable

From Wikipedia:

Although this trend has continued for more than half a century, Moore's law should be considered an observation or conjecture and not a physical or natural law. Sources in 2005 expected it to continue until at least 2015 or 2020.[note 1][11] However, the 2010 update to the International Technology Roadmap for Semiconductors predicts that growth will slow at the end of 2013,[12] when transistor counts and densities are to double only every three years.

It's already happening.

Current process size is ~22 nm; silicon lattice size is ~0.5 nm. Something around 5-10 nm is the limit for photolithography, and we don't have any other methods of bulk manufacturing in sight. The problem with individual atoms is that you can't place them in bulk because of the stochastic nature of the interactions.
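
A quick back-of-the-envelope check of that claim; the feature sizes are taken from the comment above, and the quadratic density scaling is a simplifying assumption.

```python
import math

# How many more halvings of feature size fit between ~22 nm and a ~5-10 nm limit,
# and roughly how much transistor density that buys (assuming density ~ 1/size^2)?
current_nm = 22.0
for limit_nm in (10.0, 5.0):
    halvings = math.log2(current_nm / limit_nm)
    density_gain = (current_nm / limit_nm) ** 2
    print(f"limit {limit_nm:.0f} nm: ~{halvings:.1f} halvings left, "
          f"~{density_gain:.0f}x density at best")
# Roughly 1-2 halvings (about 5-20x in density) remain before the assumed
# photolithography limit -- consistent with the point that the flattening
# of the trend is already underway.
```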

comment by JRMayne · 2014-01-11T22:34:33.247Z · LW(p) · GW(p)

I'll bite. (I don't want the money. If I get it, I'll use it for what is considered by some on this site as ego-gratifying wastage for Give Directly or some similar charity.)

If you look around, you'll find "scientist"-signed letters supporting creationism. Philip Johnson, a Berkeley law professor, is on that list, but you find a very low percentage of biologists. If you're using lawyers to sell science, you're doing badly. (I am a lawyer.)

The global warming issue has better lists of people signing off, including one genuinely credible human: Richard Lindzen of MIT. Lindzen, though, has oscillated from "manmade global warming is a myth," to a more measured view that the degree of manmade global warming is much, much lower than the general view. The list of signatories to a global warming skeptic letter contains some people with some qualifications on the matter, but many who do not seem to have expertise.

Cryonics? Well, there's this. Assuming they would put any neuroscience qualifications that the signatories had... this looks like the intelligent design letters. Electrical engineers, physicists... let's count the people with neuroscience expertise, other than people whose careers are in hawking cryonics:

  1. Kenneth Hayworth, a post-doc now at Harvard.

  2. Ravin Jain, Los Angeles neurologist. He was listed as an assistant professor of neurology at UCLA in 2004, but he's no longer employed by UCLA.

That's them. There are a number of other doctors on there; looking up the people who worked for cryonics orgs is fun. Many of them have interesting histories, and many have moved on. The letter is pretty lightweight; it just says there's a credible chance that they can put you back together again after the big freeze. I think computer scientists dominate the list. That is a completely terrible sign.

There are other conversations here and elsewhere about the state of the brain involving interplay between the neurons that's not replicable with just the physical brain. There's also the failure to resuscitate anyone from brain death. This provides additional evidence that this won't work.

Finally, the people running the cryonics outfits have not had the best record of honesty and stability. If Google ran a cryonics outfit, that would be more interesting, for sure. But I don't think that's going to happen; this is not the route to very long life.

[Edit 1/14 - fixed a miscapitalization and a terrible sentence construction. No substantive changes.]

Replies from: jkaufman, James_Miller
comment by jefftk (jkaufman) · 2014-01-13T18:56:26.343Z · LW(p) · GW(p)

let's count the people with neuroscience expertise, other than people whose careers are in hawking cryonics

This is a little unfair: if you have neuroscience experience and think cryonics is very important, then going to work for Alcor or CI may be where you can have the most impact. At which point others note that you're financially dependent on people signing up for cryonics and write you off as biased.

Replies from: fezziwig
comment by fezziwig · 2014-01-13T19:45:26.580Z · LW(p) · GW(p)

In a world where cryonics were obviously worthwhile to anyone with neuroscience expertise, one would expect to see many more cryonics-boosting neuroscientists than could be employed by Alcor and CI. Indeed, you might expect there to be more major cryonics orgs than just those two.

In other words, it's only unfair if we think size of the "neuroscientist" pool is roughly comparable to the size of the market for cryonics researchers. It's not, so IMO JRMayne raises an interesting point, and not one I'd considered before.

comment by James_Miller · 2014-01-13T16:22:15.765Z · LW(p) · GW(p)

Economists are the scientists most qualified to speculate on the likely success of cryonics, because this kind of prediction involves speculating on long-term technological trends, and although all of mankind is bad at this, economists at least try to do so with rigor.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-01-13T19:51:11.083Z · LW(p) · GW(p)

"How likely is it that the current cryonics process prevents information-theoretic death" is a question for neuroscientists, not economists.

Replies from: MakoYass, James_Miller
comment by mako yass (MakoYass) · 2020-04-14T05:21:20.827Z · LW(p) · GW(p)

I wonder if the heaviest claims cryonics makes are mostly split between civics (questions like: can an operation keep running long enough, will there always be people who care about reviving the stiffs) and computer science (can the information needed be recovered from what remains), and whether the questions that are in the domain of neuroscience (what biochemical information is important) might be legible enough to people outside the field that neuroscientists don't end up being closer to the truth. I wouldn't say so, judging by the difficulties the OpenWorm project is having in figuring out which information is important, but it's conceivable a time will come when it falls this way.

This is making me wonder how often people assume a question resides exclusively in one field when it's split between a number of fields in such a way that a majority of the experts in the one assumed focal field don't tend to be right about it.

comment by James_Miller · 2014-01-13T22:11:43.166Z · LW(p) · GW(p)

Identical twins raised apart act fairly similarly, and economists are better qualified to judge this claim than neuroscientists. Given my DNA and all the information saved in my brain by cryonics, it almost certainly would be possible for a super-intelligence with full nanotech to create something which would act similar to how I do in similar circumstances. For me at least, that's enough to preserve my identity and have cryonics work. So for me the answer to your question is almost certainly yes. To know if cryonics will work, we need to estimate long-term tech trends to guess whether Alcor could keep my body intact long enough until someone develops the needed revival technologies.

Replies from: TheOtherDave, jkaufman
comment by TheOtherDave · 2014-01-13T22:16:35.824Z · LW(p) · GW(p)

I'm curious... if P1 is the probability that a superintelligence with full nanotech can create something which would act similar to how you do in similar circumstances given your DNA and all the information in your cryonically frozen brain, and P2 is that probability given just your DNA, what's your estimate of P1/P2?

Replies from: James_Miller
comment by James_Miller · 2014-01-13T23:12:27.998Z · LW(p) · GW(p)

Good point - especially if you include everything I have published in both P1 and P2, then P1 and P2 might be fairly close. This, along with the possibility of time travel to bring back the dead, is a valid argument against cryonics. Even in these two instances, cryonics would be valuable as a strong signal to the future that yes I really, really want to be brought back. Also, the more information the super-intelligence has the better job it will do. Cryonics working isn't a completely binary thing.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-13T23:20:49.131Z · LW(p) · GW(p)

So... it sounds like you're saying that your confidence that cryonic preservation differentially prevents information-theoretic death is relatively low (given that you estimate the results with and without it to be fairly close)... yes?

as a strong signal to the future that yes I really, really want to be brought back.

(nods)
What's your estimate of the signal-strength ratio, to such a superintelligence of your preferences in the matter, between (everything it knows about you + you signed up for cryonics) and (everything it knows about you + you didn't sign up for cryonics)?

Also, the more information the super-intelligence has the better job it will do. Cryonics working isn't a completely binary thing.

True.

Replies from: James_Miller
comment by James_Miller · 2014-01-13T23:51:27.135Z · LW(p) · GW(p)

So... it sounds like you're saying that your confidence that cryonic preservation differentially prevents information-theoretic death is relatively low (given that you estimate the results with and without it to be fairly close)... yes?

Yes given an AI super-intelligence trying to bring me back.

What's your estimate of the signal-strength ratio, to such a superintelligence of your preferences in the matter, between (everything it knows about you + you signed up for cryonics) and (everything it knows about you + you didn't sign up for cryonics)?

I'm not sure. So few people have signed up for cryonics, and given cryonics' significant monetary and social cost, it does make for a powerful signal.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-14T04:51:42.760Z · LW(p) · GW(p)

Yes given an AI super-intelligence trying to bring me back.

If we assume there is no AI superintelligence trying to bring you back, what's your estimate of the ratio of the probabilities of information-theoretic death given cryonic preservation and absent cryonic preservation?

So few people have signed up for cryonics and given cryonics' significant monetary and social cost it does make for a powerful signal.

To a modern-day observer, I agree completely. Do you think it's an equally powerful signal to the superintelligence you posit?

Replies from: James_Miller
comment by James_Miller · 2014-01-14T05:24:29.609Z · LW(p) · GW(p)

If we assume there is no AI superintelligence trying to bring you back, what's your estimate of the ratio of the probabilities of information-theoretic death given cryonic preservation and absent cryonic preservation?

I don't know enough about nanotech to give a good estimate of this path. The brain uploading path via brain scans is reasonable given cryonics and, of course, hopeless without it.

Do you think it's an equally powerful signal to the superintelligence you posit?

Perhaps - in part because, by signing up for cryonics, I have probably changed my brain state to more strongly want to outlive my natural death, and this would be reflected in my writings.

Replies from: TheOtherDave
comment by TheOtherDave · 2014-01-14T14:46:55.148Z · LW(p) · GW(p)

OK... thanks for clarifying.

comment by jefftk (jkaufman) · 2014-01-13T22:39:05.596Z · LW(p) · GW(p)

Have you considered getting your DNA sequenced and storing that in a very robust medium?

Replies from: James_Miller
comment by James_Miller · 2014-01-13T23:14:38.097Z · LW(p) · GW(p)

Yes. I'm a member of 23andMe, although they don't do a full sequencing.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-01-14T02:09:41.281Z · LW(p) · GW(p)

Sorry, I should be more clear. If you think your DNA is going to be really helpful to a superintelligence bringing you back, then it would make sense to try and increase the chances it stays around. 23andMe is a step in this direction, but as full genome sequencing gets cheaper, at some point you should probably do that too. It's already much cheaper than cryonics and in a few years should be cheaper by an even larger margin.

comment by satt · 2014-01-11T11:57:13.886Z · LW(p) · GW(p)

I'm glad you attached your bounty to a concrete action (cancelling your cryonics subscription) rather than something fuzzy like "convincing me to change my mind". When someone offers a bounty for the latter I cynically expect them to use motivated cognition to explain away any evidence presented, and then refuse to pay out even if the evidence is very strong. (While you might still end up doing that here, the bounty is at least tied to an unambiguously defined action.)

Replies from: Kawoomba
comment by Kawoomba · 2014-01-11T12:30:24.143Z · LW(p) · GW(p)

Not really, because the sequence of events is "Change my mind", then "Cancel subscription", i.e. the latter hinges on the former. Hence, since "Change my mind" is a necessary prerequisite, the ambiguity remains.

Replies from: satt
comment by satt · 2014-01-11T12:39:13.200Z · LW(p) · GW(p)

When all is said & done, we may never know whether Chris Hallquist really did or really should have changed his mind. But, assuming Alcor/CI is willing to publicly disclose CH's subscription status, we will be able to decide unambiguously whether he's obliged to cough up $500.

Replies from: Kawoomba
comment by Kawoomba · 2014-01-11T13:04:26.555Z · LW(p) · GW(p)

Obviously a private enterprise won't publicly disclose the subscription status of its members.

He can publicly state whatever he wants regarding whether he changed his mind or not, no matter what he actually did. He can publicly state whatever he wants regarding whether he actually cancelled his subscription, no matter what he actually did.

If you assume OP wouldn't actually publicly lie (but still be subject to motivated cognition, as you said in the grandparent), then my previous comment is exactly right. You don't avoid any motivated cognition by adding an action which is still contingent on the problematic "change your mind" part.

In the end, you'll have to ask him "Well, did you change your mind?", and whether he answers you "yes or no" versus "I cancelled my subscription" or "I did not cancel my subscription" comes out to the same thing.

Replies from: James_Miller, satt
comment by James_Miller · 2014-01-11T16:40:04.023Z · LW(p) · GW(p)

When Alcor was fact checking my article titled Cryonics and the Singularity (page 21) for their magazine they said they needed some public source for everyone I listed as a member of Alcor. They made me delete reference to one member because my only source was that he had told me of his membership (and had given me permission to disclose it).

Replies from: Kawoomba
comment by Kawoomba · 2014-01-12T08:24:34.402Z · LW(p) · GW(p)

Good article, you should repost it as a discussion topic or in the open thread.

comment by satt · 2014-01-11T15:26:38.405Z · LW(p) · GW(p)

Obviously a private enterprise won't publicly disclose the subscription status of its members.

Not so obvious to me. CH could write to Alcor/CI explaining what he's done, and tell them he's happy for them to disclose his subscription status for the purpose of verification. (Even if they weren't willing to follow through on that, CH could write a letter asking them to confirm in writing that he's no longer a member, and then post a copy of the response. CH might conceivably fake such a written confirmation, but I find it very unlikely that CH would put words in someone else's mouth over their faked signature to save $500.)

comment by NancyLebovitz · 2014-01-11T21:45:25.287Z · LW(p) · GW(p)

Supposing that you get convinced that a cryonics subscription isn't worth having for you.

What's the likelihood that it's just one person offering a definitive argument rather than a collaborative effect? If the latter, will you divide the $500?

Replies from: ChrisHallquist
comment by ChrisHallquist · 2014-01-13T02:54:09.741Z · LW(p) · GW(p)

Good question, should have answered it in the OP. The answer is possibly, but I anticipate a disproportionate share of the contribution coming from one person, someone like kalla724, and in that case it goes to that one person. But definitely not divided between the contributors to an entire LW thread.

comment by topynate · 2014-01-11T12:28:35.053Z · LW(p) · GW(p)

It is likely that you would not wish for your brain-state to be available to all-and-sundry, subjecting you to the possibility of being simulated according to their whims. However, you know nothing about the ethics of the society that will exist when the technology to extract and run your brain-state is developed. Thus you are taking a risk of a negative outcome that may be less attractive to you than mere non-existence.

Replies from: jowen, Ishaan
comment by jowen · 2014-01-13T23:21:52.586Z · LW(p) · GW(p)

This argument has made me start seriously reconsidering my generally positive view of cryonics. Does anyone have a convincing refutation?

The best I can come up with is that if resuscitation is likely to happen soon, we can predict the values of the society we'll wake up in, especially if recovery becomes possible before more potentially "value disrupting" technologies like uploading and AI are developed. But I don't find this too convincing.

Replies from: topynate
comment by topynate · 2014-01-15T20:22:37.491Z · LW(p) · GW(p)

My attempt at a reply turned into an essay, which I've posted here.

comment by Ishaan · 2014-01-11T20:26:31.968Z · LW(p) · GW(p)

This answer raises the question of how narrow the scope of the contest is:

Do you want to specifically hear arguments from scientific evidence about how cryonics is not going to preserve your consciousness?

Or, do you want arguments not to do cryonics in general? Because that can also be accomplished via arguments as to the possible cons of having your consciousness preserved, arguments towards opportunity costs of attempting it (effective altruism), etc. It's a much broader question.

(Edit - nevermind, answered in the OP upon more careful reading)

comment by [deleted] · 2014-01-11T21:14:02.171Z · LW(p) · GW(p)

You have read the full kalla724 thread, right?

I think V_V's comment is sufficient for you to cancel your cryonics subscription. If we get uFAI you lose anyway, so I would be putting my money into that and other existential risks. You'll benefit a lot more people that way.

Replies from: ChrisHallquist, Furcas
comment by ChrisHallquist · 2014-01-13T03:00:50.958Z · LW(p) · GW(p)

I had read some of that thread, and just went and made a point of reading any comments by kalla724 that I had missed. Actually, I had them in mind when I made this thread - hoping that $500 could induce a neuroscientist to write the post kalla724 mentioned (but as far as I can tell never wrote), or else be willing to spend a few hours fielding questions from me about cryonics. I considered PMing kalla724 directly, but they don't seem to have participated in LW in some time.

Edit: PM'd kalla724. Don't expect a response, but seemed worth the 10 seconds on that off-chance.

comment by Furcas · 2014-01-11T22:34:12.624Z · LW(p) · GW(p)

Kalla724 is strongly convinced that the information that makes us us won't be preserved by current cryonics techniques, and he says he's a neuroscientist. Still, it would be nice if he'd write something a bit more complete so it could be looked at by other neuroscientists who could then tell us if he knows what he's talking about, at least.

comment by Alsadius · 2014-01-16T06:56:09.769Z · LW(p) · GW(p)

My objection to cryonics is financial - I'm all for it if you're a millionaire, but most people aren't. For most people, cryonics will eat a giant percentage of your life's total production of wealth, in a fairly faint-hope chance at resurrection. The exact chances are a judgement call, but I'd ballpark it at about 10%, because there's so very many realistic ways that things can go wrong.

If your cryonics insurance is $50/month, unless cryonics is vastly cheaper than I think it is, it's term insurance, and the price will jump drastically over time (2-3x per decade, generally). In other words, you're buying temporary cryonics coverage, not lifetime coverage. That is not generally the sort of thing cryonics fans seem to want. Life insurance is a nice way to spread out the costs, but insurance companies are not in the business of giving you something for nothing.
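
A quick sketch of what that kind of escalation implies for a $50/month term premium, taking the midpoint of the quoted 2-3x-per-decade range (the rate is the commenter's generalization, not an actual rate table):

```python
# Assumed illustration: term premiums growing ~2.5x per decade.
monthly = 50.0
for decade in range(1, 6):
    monthly *= 2.5
    print(f"after {decade} decade(s): ~${monthly:,.0f}/month")
# roughly $125, $310, $780, $1,950, $4,880 per month
```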

Replies from: ChrisHallquist
comment by ChrisHallquist · 2014-01-16T07:47:24.705Z · LW(p) · GW(p)

$50/month is for universal life insurance. It helps that I'm young and a non-smoker.

Replies from: Alsadius
comment by Alsadius · 2014-01-16T08:05:35.822Z · LW(p) · GW(p)

What payout? And "universal life" is an incredibly broad umbrella - what's the insurance cost structure within the UL policy? Flat, limited-pay, term, YRT? (Pardon the technical questions, but selling life insurance is a reasonably large portion of my day job). Even for someone young and healthy, $50/mo will only buy you $25-50k or so. I thought cryonics was closer to $200k.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2014-01-18T03:53:26.371Z · LW(p) · GW(p)

$100k. Cryonics costs vary with method and provider. I don't have exact up-to-date numbers, but I believe the Cryonics Institute charges ~$30k, while Alcor charges ~$80k for "neuro" (i.e. just your head) or ~$200k for full-body.

Replies from: Alsadius
comment by Alsadius · 2014-01-19T22:02:42.596Z · LW(p) · GW(p)

Running the numbers, it seems you can get a bare-bones policy for that. I don't tend to sell many bare-bones permanent policies, though, because most people buying permanent insurance want some sort of growth in the payout to compensate for inflation. But I guess with cheaper cryo than I expected, the numbers do add up. Cryo may be less crazy than I thought.

comment by [deleted] · 2014-01-11T18:21:25.336Z · LW(p) · GW(p)

Be aware that you are going to get a very one-sided debate. I am very much pro-cryonics, but you're not going to hear much from me or others like me because (1) I'm not motivated to rehash the supporting arguments, and (2) attaching monetary value actually disincentivizes me from participating (particularly when I am unlikely to receive it).

ETA: Ok, I said that and then I countered myself by being compelled to respond to this point:

In particular, I find questions about personal identity and consciousness of uploads made from preserved brains confusing, but think there are very few people in the world, if any, who are likely to have much chance of getting me un-confused about those issues.

Issues of mind-uploading should not affect your decision. I personally am convinced that the reigning opinion on mind uploading and personal identity is outright wrong - if they destructively upload my mind then they might as well thaw out and cremate me. There would be no continuity of consciousness and I would not benefit.

My own application for cryonics membership is held up in part because I'm still negotiating a contract that forces them to preserve me for revival only, not uploading, but that should be sorted out soon. All you need to do is make your wishes clear and legally binding.

Replies from: kalium, Ishaan, DanielLC
comment by kalium · 2014-01-12T04:39:52.469Z · LW(p) · GW(p)

Why shouldn't uploading affect his decision? If he's resurrected into a physical body and finds the future is not a place he wants to live, he can opt out by destroying his body. If he's uploaded, there is very plausibly no way out.

comment by Ishaan · 2014-01-11T21:43:23.942Z · LW(p) · GW(p)

Curious - would you retain this belief if uploading actually happened, the uploaded consciousnesses felt continuity, and external observers could tell no difference between the uploaded consciousnesses and the original consciousnesses?

(Because if so, you can just have an "only if it works for others may you upload me" clause)

Replies from: None
comment by [deleted] · 2014-01-12T07:04:24.222Z · LW(p) · GW(p)

To whom are you addressing the question? I'd be dead. That computer program running a simulation of me would be a real person, yes, with all associated moral implications. It'd even think and behave like me. But it wouldn't be me - a direct continuation of my personal identity - any more than my twin brother or any of the multiverse copies of "me" are actually me. If my brain was still functioning at all, I'd be cursing the technicians as they ferry my useless body from the uploader to the crematorium. Then I'd be dead while some digital doppelgänger takes over my life.

Do you see? This isn't about whether uploading works or not. Uploading when it works creates a copy of me. It will not continue my personal existence. We can be sure of this, right now.

Replies from: TheOtherDave, Ishaan, ArisKatsaris, ephion, Dentin
comment by TheOtherDave · 2014-01-12T20:00:16.297Z · LW(p) · GW(p)

On what grounds do you believe that the person who wrote that comment is the same person who is reading this response?

I mean, I assume that the person reading this response thinks and behaves like the same person (more or less), and that it remembers having been the person who wrote the comment, but that's just thought and behavior and memory, and on your account those things don't determine identity.

So, on your account, what does determine identity? What observations actually constitute evidence that you're the same person who wrote that comment? How confident are you that those things are more reliable indicators of shared identity than thought and behavior and memory?

Replies from: None
comment by [deleted] · 2014-01-12T20:54:46.394Z · LW(p) · GW(p)

On what grounds do you believe that the person who wrote that comment is the same person who is reading this response?

By examining the history of interactions which occurred between the two states.

How confident are you that those things are more reliable indicators of shared identity than thought and behavior and memory?

Because it is very easy to construct thought experiments which show that thought, behavior, and memory are not sufficient for making a determination. For example, imagine a non-destructive sci-fi teleporter. The version of you I'm talking to right now walks into the machine, sees some flashing lights, and then walks out. Some time later, another Dave walks out of a similar machine on Mars. Now step back a moment in time. Before walking into the machine, which experience do you expect to have afterward: (1) walking back out, or (2) waking up on Mars?

Replies from: TheOtherDave, Dentin
comment by TheOtherDave · 2014-01-12T20:57:13.334Z · LW(p) · GW(p)

By examining the history of interactions which occured between the two states.

Well, yes, but what are you looking for when you do the examination?

That is, OK, you examine the history, and you think "Well, I observe X, and I don't observe Y, and therefore I conclude identity was preserved." What I'm trying to figure out is what X and Y are.

Before walking into the machine, what experience do you expect to have after: (1) walking back out or (2) waking up on Mars?

Both.

comment by Dentin · 2014-01-13T21:49:27.614Z · LW(p) · GW(p)

With 50% probability, I expect to walk back out, and with 50% probability I expect to wake up on Mars. Both copies will feel like, and believe, that they are the original.

Replies from: None
comment by [deleted] · 2014-01-14T09:58:31.727Z · LW(p) · GW(p)

But you expect one or the other, right? In other words, you don't expect to experience both futures, correct?

Now what if the replicator on Mars gets stuck, and starts continuously outputting Dentins. What is your probability of staying on Earth now?

Further, doesn't it seem odd that you are assigning any probability that after a non-invasive scan, and while your brain and body continues to operate just fine on Earth, you suddenly find yourself on Mars, and someone else takes over your life on Earth?

What is the mechanism by which you expect your subjective experience to be transferred from Earth to Mars?

Replies from: TheOtherDave, Dentin
comment by TheOtherDave · 2014-01-14T20:12:46.363Z · LW(p) · GW(p)

Not Dentin, but since I gave the same answer above I figured I'd weigh in here.

you expect one or the other, right? In other words, you don't expect to experience both futures, correct?

I expect to experience both futures, but not simultaneously.

Somewhat similarly, if you show me a Necker cube, do I expect to see a cube whose front face points down and to the left? Or a cube whose front face points up and to the right? Well, I expect to see both. But I don't expect to see both at once... I'm not capable of that.

(Of course, the two situations are not the same. I can switch between views of a Necker cube, whereas after the duplication there are two mes each tied to their own body.)

what if the replicator on Mars gets stuck [..] What is your probability of staying on Earth now?

I will stay on Earth, with a probability that doesn't change.
I will also appear repeatedly on Mars.

doesn't it seem odd that you are assigning any probability that after a non-invasive scan, and while your brain and body continues to operate just fine on Earth, you suddenly find yourself on Mars,

Well, sure, in the real world it seems very odd to take this possibility seriously. And, indeed, it never seems to happen, so I don't take it seriously... I don't in fact expect to wake up on Mars.

But in the hypothetical you've constructed, it doesn't seem odd at all... that's what a nondestructive teleporter does.

and someone else takes over your life on Earth?

(shrug) In ten minutes, someone will take over my life on Earth. They will resemble me extremely closely, though there will be some small differences. I, as I am now, will no longer exist. This is the normal, ordinary course of events; it has always been like this.

I'm comfortable describing that person as me, and I'm comfortable describing the person I was ten minutes ago as me, so I'm comfortable saying that I continue to exist throughout that 20-minute period. I expect me in 10 minutes to be comfortable describing me as him.

If in the course of those ten minutes, I am nondestructively teleported to Mars, someone will still take over my life on Earth. Someone else, also very similar but not identical, will take over my life on Mars. I'm comfortable describing all of us as me. I expect both of me in 10 minutes to be comfortable describing me as them.

That certainly seems odd, but again, what's odd about it is the nondestructively teleported to Mars part, which the thought experiment presupposes.

What is the mechanism by which you expect your subjective experience to be transferred from Earth to Mars?

It will travel along with my body, via whatever mechanism allows that to be transferred. (Much as my subjective experience travels along with my body when I drive a car or fly cross-country.)

It would be odd if it did anything else.

comment by Dentin · 2014-01-14T18:42:48.840Z · LW(p) · GW(p)

No, I would never expect to simultaneously experience being on both Mars and Earth. If you find anyone who believes that, they are severely confused, or are trolling you.

If I know the replicator will get stuck and output 99 dentins on Mars, I would only expect a 1% chance of waking up on earth. If I'm told that it will only output one copy, I would expect a 50% chance of waking up on earth, only to find out later that the actual probability was 1%. The map is not the territory.

Further, doesn't it seem odd that you are assigning any probability that after a non-invasive scan, and while your brain and body continues to operate just fine on Earth, you suddenly find yourself on Mars, and someone else takes over your life on Earth?

Not at all. In fact, it seems odd to me that anyone would be surprised to end up on Mars.

What is the mechanism by which you expect your subjective experience to be transferred from Earth to Mars?

Because conciousness is how information processing feels from the inside, and 'information processing' has no intrinsic requirement that the substrate or cycle times be continuous.

If I pause a playing wave file, copy the remainder to another machine, and start playing it out, it still plays music. It doesn't matter that the machine is different, that the decoder software is different, that the audio transducers are different - the music is still there.

Another, closer analogy is that of the common VM: it is possible to stop a VPS (virtual private server), including operating system, virtual disk, and all running programs, take a snapshot, copy it entirely to another machine halfway around the planet, and restart it on that other machine as though there were no interruption in processing. The VPS may not even know that anything has happened, other than suddenly its clock is wrong compared to external sources. The fact that it spent half an hour 'suspended' doesn't affect its ability to process information one whit.
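
A toy illustration of the snapshot-and-resume idea (my own sketch, not anything from a real hypervisor): the whole state of a running computation is serialized, moved, and resumed, and the resumed process is none the wiser about the gap.

```python
import pickle

# A trivial "running program": its entire state is this dictionary.
state = {"counter": 0, "log": []}

def step(state):
    state["counter"] += 1
    state["log"].append(f"tick {state['counter']}")

for _ in range(3):
    step(state)

snapshot = pickle.dumps(state)    # "suspend": serialize the entire state

# ...arbitrary downtime, a different machine, a different decade...

resumed = pickle.loads(snapshot)  # "resume" from the snapshot
for _ in range(3):
    step(resumed)

print(resumed["log"])  # ['tick 1', ..., 'tick 6'] - continues as if never interrupted
```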

comment by Ishaan · 2014-01-12T12:04:37.025Z · LW(p) · GW(p)

OK, I was just checking.

There were two ways to interpret your statement - that uploaded won't be identical human beings (an empirical statement) vs. uploads will disrupt your continuity (a philosophical statement).

I was just wondering which one it was. I'm interested in hearing arguments against uploading

-How do you know right now that you are a continuity of the being that existed one-hour-in-the-past, and that the being that exists one-hour-in-the-future will be in continuity with you?

-Would you ever step into a sci-fi style teleporter?

-cryonics constitutes "pausing" and "resuming" yourself. How is this sort of temporal discontinuity different from the spatial discontinuity involved in teleporting?

Replies from: None
comment by [deleted] · 2014-01-12T19:14:22.762Z · LW(p) · GW(p)

There were two ways to interpret your statement - that uploaded won't be identical human beings (an empirical statement) vs. uploads will disrupt your continuity (a philosophical statement).

The latter, but they are both empirical questions. The former deals with comparing informational configurations at two points in time, whereas the latter is concerned with the history of how we went from state A to state B (both having real-world implications).

How do you know right now that you are a continuity of the being that existed one-hour-in-the-past, and that the being that exists one-hour-in-the-future will be in continuity with you?

We need more research on the physical basis for consciousness to understand this better such that we can properly answer the question. Right now all we have is the fleeting experience of continued identity moment to moment, and the induction principle which is invalid to apply over singular events like destructive uploading.

My guess as to the underlying nature of the problem is that consciousness exists in any complex interaction of particles - not the pattern itself, but the instantiation of the computation. And so long as this interaction is continuous and ongoing we have a physical basis for the continuation of subjective experience.

Would you ever step into a sci-fi style teleporter?

Never, for the same reasons.

Cryonics constitutes "pausing" and "resuming" yourself. How is this sort of temporal discontinuity different from the spatial discontinuity involved in teleporting?

Pausing is a metaphor. You can't freeze time, and chemistry never stops entirely. The particles in a cryonic patient's brain keep interacting in complex, albeit much slowed down, ways. Recall that the point of pumping the brain full of anti-freeze is that it remains intact and structurally unmolested even at liquid nitrogen temperatures. It is likely that some portion of biological activity is ongoing in cryostasis, albeit at a glacial pace. This may or may not be sufficient for continuity of experience, but unlike uploading, the probability is at least not zero.

BTW, the problem with teleporting is not spatial or temporal. The problem is that the computational process which is the subjective experience of the person being teleported is interrupted. The machine violently disassembles them and they die; then somewhere else a clone/copy is created. If you have trouble seeing that, imagine that the process is not destructive. You step into the teleporter, it scans you, and then you step out. I then shoot you in the head with a gun. The teleporter then reconstructs a copy of you. Do you really think that you, the person I just shot in the head and who is now splattered all over the floor, get to experience walking out of the teleporter as a copy? If you're still having trouble, imagine that the teleporter got stuck in a loop and kept outputting copies. Which one is you? Which one do you expect to "wake up" as at the other end of the process?

Replies from: Ishaan, Dentin
comment by Ishaan · 2014-01-12T22:40:58.320Z · LW(p) · GW(p)

You step into the teleporter, it scans you, and then you step out. I then shoot you in the head with a gun. The teleporter then reconstructs a copy of you. Do you really think that you, the person I just shot in the head and who is now splattered all over the floor, get to experience walking out of the teleporter as a copy? If you're still having trouble, imagine that the teleporter got stuck in a loop and kept outputting copies. Which one is you? Which one do you expect to "wake up" as at the other end of the process?

My current thought on the matter is that Ishaan0 stepped into the teleporter, Ishaan1a stepped out of the teleporter, and Ishaan1b was replicated by the teleporter.

At time 2, Ishaan2a was shot, and Ishaan2b survived.

Ishaan0 -> Ishaan1a -> Ishaan2a just died.

Ishaan0 -> Ishaan1b -> Ishaan2b -> Ishaan3b -> ... gets to live on.

So Ishaan0 can be said to have survived, whereas Ishaan1a has died.

Right now all we have is the fleeting experience of continued identity moment to moment

The way I see it, my past self is "dead" in every respect other than that my current self exists and contains memories of that past self.

I don't think there is anything fundamental saying we ought to be able to have "expectations" about our future subjective experiences, only "predictions" about the future.

Meaning, if Ishaan0 had a blindfold on, then at time 1 when I step out of the teleporter, I would have memories which indicate that my current qualia qualify me to be in the position of either Ishaan1a or Ishaan1b. When I take my blindfold off, I find out which one I am.

comment by Dentin · 2014-01-13T21:58:36.269Z · LW(p) · GW(p)

The problem is that the computational process which is the subjective experience of the person being teleported is interrupted.

It sounds to me like you're ascribing some critical, necessary aspect of consciousness to the 'computation' that occurs between states, as opposed to the presence of the states themselves.

It strikes me as similar to the 'sampling fallacy' of analog audio enthusiasts, who constantly claim that digitization of a recording is by definition lossy because a discrete stream cannot contain all the data needed to reconstruct a continuous waveform.

Replies from: None
comment by [deleted] · 2014-01-14T10:07:08.992Z · LW(p) · GW(p)

It sounds to me like you're ascribing some critical, necessary aspect of consciousness to the 'computation' that occurs between states, as opposed to the presence of the states themselves.

Absolutely (although I don't see the connection to analog audio). Is a frozen brain conscious? No. It is the dynamic response of the brain from which the subjective experience of consciousness arises.

See a more physical explanation here.

Replies from: Dentin
comment by Dentin · 2014-01-14T18:23:23.426Z · LW(p) · GW(p)

The connection to analog audio seems obvious to me: a digitized audio file contains no music; it contains only discrete samples taken at various times, samples which when played out properly generate music. An upload file containing the recording of a digital brain contains no consciousness, but is conscious when run, one cycle at a time.

A sample is a snapshot of an instant of music; an upload is a snapshot of consciousness. Playing out a large number of samples creates music; running an upload forward in time creates consciousness. In the same way that a frozen brain isn't conscious but an unfrozen, running brain is, an uploaded copy isn't conscious, but a running, uploaded copy is.

That's the point I was trying to get across. The discussion of samples and states is important because you seem to have this need for transitions to be 'continuous' for consciousness to be preserved - but the sampling theorem explicitly says that's not necessary. There's no 'continuous' transition between two samples in a wave file, yet the original can still be reconstructed perfectly. There may not be a continuous transition between a brain and its destructively uploaded copy - but the original and its 'continuous transition' can still be reconstructed perfectly. It's simple math.

As a direct result of this, it seems pretty obvious to me that consciousness doesn't go away because there's a time gap between states or because the states happen to be recorded on different media, any more than breaking a wave file into five thousand non-contiguous sectors on a hard disk platter destroys the music in the recording. Pretty much the only escape from this is to use a mangled definition of consciousness which requires 'continuous transition' for no obvious good reason.
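
For what it's worth, the sampling-theorem claim is easy to check numerically. A minimal sketch (my own illustration; the signal and rates are arbitrary assumptions), showing a band-limited signal reconstructed at an instant that was never stored:

```python
import numpy as np

fs = 100.0                                   # sampling rate (Hz)
t_n = np.arange(0, 1, 1 / fs)                # sample instants
signal = lambda t: np.sin(2 * np.pi * 5 * t) + 0.5 * np.cos(2 * np.pi * 12 * t)
samples = signal(t_n)                        # the only data we store

def reconstruct(t):
    """Whittaker-Shannon (sinc) interpolation from the stored samples."""
    n = np.arange(len(samples))
    return np.sum(samples * np.sinc(fs * t - n))

t_between = 0.4137                           # an instant that was never sampled
print(signal(t_between), reconstruct(t_between))
# The two values agree closely (small edge effects aside), even though no
# "continuous transition" between samples was ever stored.
```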

Replies from: None
comment by [deleted] · 2014-01-14T19:59:51.647Z · LW(p) · GW(p)

I'm not saying it goes away, I'm saying the uploaded brain is a different person, a different being, a separate identity from the one that was scanned. It is conscious yes, but it is not me in the sense that if I walk into an uploader I expect to walk out again in my fleshy body. Maybe that scan is then used to start a simulation from which arises a fully conscious copy of me, but I don't expect to directly experience what that copy experiences.

Replies from: Dentin
comment by Dentin · 2014-01-15T00:46:38.980Z · LW(p) · GW(p)

The uploaded brain is a different person, a different being, a separate identity from the one that was scanned. It is conscious yes, and it is me in the sense that I expect with high probability to wake up as an upload and watch my fleshy body walk out of the scanner under its own power.

Of course I wouldn't expect the simulation to experience the exact same things as the meat version, or expect to experience both copies at the same time. Frankly, that's an idiotic belief; I would prefer you not bring it into the conversation in the future, as it makes me feel like you're intentionally trolling me. I may not believe what you believe, but even I'm not that stupid.

comment by ArisKatsaris · 2014-01-12T12:30:42.706Z · LW(p) · GW(p)

Uploading when it works creates a copy of me. It will not continue my personal existence.

I honestly don't know how "copy" is distinct from "continuation" on a physical level and/or in regards to 'consciousness'/'personal existence'.

If the MWI is correct, every moment I am copied into a billion versions of myself. Even if it's wrong, every moment I can be said to be copied to a single future version of myself. Both of these can be seen as 'continuations' rather than 'copies'. Why would uploading be different?

Mind you, I'm not saying it necessarily isn't -- but I understand too little about consciousness to argue about it definitively, with the certainty you claim, one way or another.

Replies from: None
comment by [deleted] · 2014-01-12T18:32:33.710Z · LW(p) · GW(p)

If the MWI is correct, every moment I am copied into a billion versions of myself. Even if it's wrong, every moment I can be said to be copied to a single future version of myself. Both of these can be seen as 'continuations' rather than 'copies'. Why would uploading be different?

It's not any different, and that's precisely the point. Do you get to experience what your MWI copies are doing? Does their existence in any way benefit you, the copy which is reading this sentence? No? Why should you care if they even exist at all? So it goes with uploading. That person created by uploading will not be you any more than some alternate dimension copy is you. From the outside I wouldn't be able to tell the difference, but for you it would be very real: you, the person I am talking to right now, will die, and some other sentient being with your implanted memories will take over your life. Personally I don't see the benefit of that, especially when it is plausible that other choices (e.g. revival) might lead to continuation of my existence in the way that uploading does not.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2014-01-12T19:16:56.405Z · LW(p) · GW(p)

Do you get to experience what your MWI copies are doing?

Uh, the present me is experiencing none of the future. I will "get to experience" the future, only via all the future copies of me that have a remembered history that leads back to the present me.

Does their existence in any way benefit you, the copy which is reading this sentence? No? Why should you care if they even exist at all?

If none of the future mes exist, then that means I'm dead. So of course I care because I don't want to die?

I think we're suffering from a misunderstanding here. The MWI future copy versions of me are not something that exist in addition to the ordinary future me, they are the ordinary future me. All of them are, though each of them has only one remembered timeline.

That person created by uploading will not be you any more than some alternate dimension copy is you.

Or "that person created by uploading will be as much me as any future version of me is me".

Replies from: None
comment by [deleted] · 2014-01-12T19:20:50.795Z · LW(p) · GW(p)

I'm a physicist; I understand MWI perfectly well. Each time we decohere we end up on one branch and not the others. Do you care at all what happens on the others? If you do, fine, that's very altruistic of you.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2014-01-12T19:33:39.885Z · LW(p) · GW(p)

Let me try again.

First example: Let's say that tomorrow I'll decohere into 2 versions of me, version A and version B, with equal measure. Can you tell me whether now I should only care to what happens to version A or only to version B?

No, you can't. Because you don't know which branch I'll "end up on" (in fact I don't consider that statement meaningful, but even if it was meaningful, we wouldn't know which branch I'd end up on). So now I have to care about those two future branches equally. Until I know which one of these I'll "end up on", I have no way to judge between them.

Second example. Let's say that tomorrow instead of decohering via MWI physics, I'll split into 2 versions of me, version U via uploading, and version P via ordinary physics. Can you tell me in advance why now I should only be caring about version (P) and not about version (U)?

Seems to me that like in the first example I can't know which of the two branches "I'll end up on". So now I must care about the two future versions equally.

Replies from: None
comment by [deleted] · 2014-01-12T19:38:54.617Z · LW(p) · GW(p)

Let's say that tomorrow instead of decohering via MWI physics, I'll split into 2 versions of me, version U via uploading, and version P via ordinary physics. Can you tell me in advance why now I should only be caring about version (P) and not about version (U)?

Yes, you'd care about P and not U, because there's a chance you'd end up on P. There's zero chance you'd end up as U.

Seems to me that like in the first example I can't know which of the two branches "I'll end up on". So now I must care about the two future versions equally.

Now tomorrow has come, and you ended up as one of the branches. How much do you care about the others you did not end up on?

Replies from: Dentin, Dentin, ArisKatsaris
comment by Dentin · 2014-01-14T00:25:02.806Z · LW(p) · GW(p)

Now tomorrow has come, and you ended up as one of the branches. How much do you care about the others you did not end up on?

In the case of MWI physics, I don't care about the other copies at all, because they cannot interact with me or my universe in any way whatsoever. That is not true for other copies of myself I may make by uploading or other mechanisms. An upload will do the same things that I would do, will have the same goals I have, and will in all probability do things that I would approve of, things which affect the universe in a way that I would probably approve of. None of that is true for an MWI copy.

comment by Dentin · 2014-01-13T21:54:06.517Z · LW(p) · GW(p)

Yes, you'd care about P and not U, because there's a chance you'd end up on P. There's zero chance you'd end up as U.

This statement requires evidence or at least a coherent argument.

Replies from: None
comment by [deleted] · 2014-01-14T10:03:27.968Z · LW(p) · GW(p)

Actually, I think the burden of proof lies in the other direction. By what mechanism might you think that your subjective experience would carry over into the upload, rather than stay with your biological brain while the upload diverges as a separate individual? That's the more extraordinary belief.

Replies from: Dentin
comment by Dentin · 2014-01-14T18:56:24.511Z · LW(p) · GW(p)

I think this is at least partially a bogus question/description. Let me break it up into pieces:

By what mechanism might you think that your subjective experience would carry over into the upload, rather than stay with your biological brain ...

This postulates an 'either/or' scenario, which in my mind isn't valid. A subjective experience carries over into the upload, and a subjective experience also stays in the biological brain. There isn't a need for the subjective experience to have a 'home'. It's ok for there to be two subjective experiences, one in each location.

... rather than stay with your biological brain while the upload diverges as a separate individual?

Of course the upload diverges from the biological. Or rather, the biological diverges from the upload. This was never a question. Of course the two subjective experiences diverge over time.

And lastly:

By what mechanism might you think that your subjective experience would carry over into the upload ...

By the sampling theorem, which separates the content from the substrate.

Replies from: None
comment by [deleted] · 2014-01-14T20:03:46.388Z · LW(p) · GW(p)

You are talking about something completely different. Can you describe to me what it feels like for someone to be nondestructively scanned for upload? What should someone walking into the clinic expect?

Replies from: Dentin
comment by Dentin · 2014-01-15T00:35:58.450Z · LW(p) · GW(p)

Sample scenario 1: I go to an upload clinic. They give me a coma inducing drug and tell me that it will wear off in approximately 8 hours, after the scan is complete. As I drift off, I expect a 50% chance that I will awake to find myself an upload, and a 50% chance that I will awake to find myself still stuck in a meat body.

Sample scenario 2: I go to an upload clinic. They tell me the machine is instantaneous and that I will be conscious for the scan, and that the uploaded copy will be fully tested and operational in virtual form in about an hour. I step into the machine. I expect with 50% probability that I will step out of the machine after the scan, not feeling particularly different, and that an hour later I'll be able to talk to my virtual upload in the machine. I also expect with 50% probability that I will find myself spontaneously in virtual form the instant after the scan completes, and that when I check the clock, an hour or more of real time will have passed even though it felt instantaneous to me.

(Waking up as an upload in scenario 2 doesn't seem much different from being put under for surgery to me, at least based on my experiences. You're talking, then suddenly everything is in a different place and the anesthesiologist is asking 'can you tell me your name', interrupting your train of thought, and half an hour has passed and the doctor has totally lost track of the conversation right when it was getting interesting.)

Replies from: None
comment by [deleted] · 2014-01-15T04:48:13.631Z · LW(p) · GW(p)

Ok, I understand your position. It is not impossible that what you describe is reality. However, I believe it depends on a model of consciousness / subjective experience / personal identity (as I have been using those terms) which has not definitively been shown to be true. There are other plausible models which predict with certainty that you would walk out of the machine and not wake up in the simulator. Since (I believe) we do not yet know enough to say with certainty which theory is correct, the conservative, dare I say rational, way to proceed is to make choices which come out favorably under both models.

However, in the case of destructive uploading vs. revival in cryonics we can go further. Under no model is it better to upload than to revive. This is analogous to scenario #2, where the patient has (in your model) only a 50% chance of ending up in the simulation vs. the morgue. If I'm right, he or she has a 0% chance of success. If you are right, then that same person has a 50% chance of success. Personally I'd take revival, with a 100% chance of success under both models (modulo the chance of losing identity anyway during the vitrification process).

Replies from: Dentin
comment by Dentin · 2014-01-15T07:05:01.737Z · LW(p) · GW(p)

Nothing I said implied a '50% chance of ending up in the simulation vs. the morgue'. In the scenario where destructive uploading is used, I would expect to walk into the uploading booth, and wake up as an upload with ~100% probability, not 50%. Are you sure you understand my position? Signs point to no.

comment by ArisKatsaris · 2014-01-12T19:41:14.562Z · LW(p) · GW(p)

Yes, you'd care about P and not U, because there's a chance you'd end up on P. There's zero chance you'd end up as U.

Why are you saying that? If you don't answer this question, of why you believe there's no chance of ending up as the upload, what's the point of writing a single other word in response?

I see no meaningful difference between first and second example. Tell me what the difference is that makes you believe that there's no chance I'll end up as version U.

comment by ephion · 2014-01-12T16:06:24.821Z · LW(p) · GW(p)

The copy will remember writing this, and will feel pretty strongly that it's a continuation of you.

Replies from: None
comment by [deleted] · 2014-01-12T19:41:20.434Z · LW(p) · GW(p)

So? So do all the other Everett branches distinct from me. So would some random person implanted with my memories. I don't care what it thinks or feels; what I care about is whether it actually is a direct continuation of me.

comment by Dentin · 2014-01-12T14:38:34.439Z · LW(p) · GW(p)

I'm sorry to hear that. It's unfortunate for you, and really limits your options.

In my case, uploading does continue my personal existence, and uploading in my case is a critical aspect of getting enough redundancy in my self to survive black swan random events.

Regarding your last sentence, "We can be sure of this, right now", what are you talking about exactly?

Replies from: None
comment by [deleted] · 2014-01-12T19:42:37.514Z · LW(p) · GW(p)

Regarding your last sentence, "We can be sure of this, right now", what are you talking about exactly?

I mean we can do thought experiments which show pretty convincingly that I should not expect to experience the other end of uploading.

Replies from: Dentin
comment by Dentin · 2014-01-13T21:50:49.698Z · LW(p) · GW(p)

What might those thought experiments be? I have yet to hear any convincing ones.

Replies from: None
comment by [deleted] · 2014-01-14T10:00:26.035Z · LW(p) · GW(p)

The teleporter arguments we've already been discussing, and variants.

comment by DanielLC · 2014-01-11T19:11:51.617Z · LW(p) · GW(p)

I am very much pro-cryonics, but you're not going to hear much from me or others like me because ...

He has already heard from others like you. The point is for him to find the arguments he hasn't heard, which tend to be the ones against cryonics.

My own application for cyronics membership is held up in part because I'm still negotiating a contract that forces them to preserve me for revival only, not uploading, but that should be sorted out soon.

That sounds much more difficult and correspondingly less likely to be accomplished.

comment by byrnema · 2014-01-12T03:04:00.147Z · LW(p) · GW(p)

If it could be done, would you pay $500 for a copy of you to be created tomorrow in a similar but separate alternate reality? (Like an Everett branch that is somewhat close to ours, but far away enough that you are not already in it?)

Given what we know about identity, etc., this is what you are buying.

Personally, I wouldn't pay five cents.

Unless people that you know and love are also signed up for cryonics? (In which case you ought to sign up, for lots of reasons including keeping them company and supporting their cause.)

Replies from: None
comment by [deleted] · 2014-01-12T07:08:12.012Z · LW(p) · GW(p)

Cryonics does not necessarily imply uploading. It is possible that using atomically precise medical technology we could revive and rebuild the brain and body in-situ, thereby retaining continuity.

Replies from: byrnema
comment by byrnema · 2014-01-12T08:12:05.592Z · LW(p) · GW(p)

I meant a physical copy.

Would it make a difference, to you, if they rebuilt you in-situ, rather than adjacent?

But I just noticed this set of sentences, so I was incorrect to assume common ideas about identity:

In particular, I find questions about personal identity and consciousness of uploads made from preserved brains confusing,

Replies from: None
comment by [deleted] · 2014-01-12T08:19:05.891Z · LW(p) · GW(p)

I know. I was pointing out that your thought experiment might not actually apply to the topic of cryonics.

comment by JTHM · 2014-01-11T20:26:44.132Z · LW(p) · GW(p)

Let me attempt to convince you that your resurrection from cryonic stasis has negative expected value, and that therefore it would be better for you not to have the information necessary to reconstruct your mind persist after the event colloquially known as "death," even if such preservation were absolutely free.

Most likely, your resurrection would require technology developed by AI. Since we're estimating the expected value of your resurrection, let's work on the assumption that AGI will be developed.

Friendly AI is strictly more difficult to develop than AI with values orthogonal to ours or malevolent AI. Because the FAI developers are at such an inherent disadvantage, AGI tech will be most used by those least concerned with its ethical ramifications. Most likely, this will result in the extinction of humanity. But it might not. In the cases where humanity survives but technology developed by AGI continues to be used by those who are little concerned with its ramifications, it would be best for you not to exist at all. Since those with moral scruples would be the most averse to wantonly duplicating, creating, or modifying life, we can assume that those doing such things most often will be vicious psychopaths (or fools who might as well be), and that therefore the amount of suffering in the world inflicted on those synthetic minds would greatly outweigh any increased happiness of biological humans. A world where a teenager can take your brain scan remotely with his iPhone in the year 2080 and download an app that allows him to torture an em of you for one trillion subjective years every real second is a world in which you'd be best off not existing in any form. Or you could find yourself transformed into a slave em forced to perform menial mental labor until the heat death of the universe.

Likely? No. More likely than FAI taking off first, despite the massive advantage the unscrupulous enjoy in AGI development? I think so. Better to die long before that day comes. For that matter, have yourself cremated rather than decaying naturally, just in case.

comment by Daniel_Burfoot · 2014-01-11T15:07:49.342Z · LW(p) · GW(p)

How low would your estimate have to get before you canceled your subscription? I might try to convince you by writing down something like:

P(CW) = P(CW | CTA) * P(CTA)

Where CW = "cryonics working for you" and CTA = "continued technological advancement in the historical short term", and arguing that your estimate of P(CTA) is probably much too high. Of course, this would only reduce your overall estimate by 10x at most, so if you would still sign up at P=0.03 instead of P=0.3, it wouldn't matter.
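
Numerically, and treating P(CW | no CTA) as roughly zero (the implicit assumption behind the decomposition), the argument looks like this; the two probabilities below are placeholders, not anyone's actual estimates:

```python
# Assumed placeholder values for illustration only.
p_cw_given_cta = 0.30   # chance cryonics works for you *if* tech keeps advancing
p_cta = 0.10            # chance of continued technological advancement
p_cw = p_cw_given_cta * p_cta   # P(CW | no CTA) treated as ~0
print(p_cw)             # 0.03 - a 10x reduction from the conditional estimate
```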

comment by Gunnar_Zarncke · 2014-02-16T10:14:56.432Z · LW(p) · GW(p)

One rational utilitarian argument I haven't seen here but which was brought up in an old thread is that cryonics competes with organ donation.

With organ donation you can save on average more than one life (the thread mentions 3.75; this site says "up to 8"), whereas cryonics saves on average fewer than 0.1 lives (but it's your own).

And you probably can't have both.

comment by BaconServ · 2014-01-12T21:46:28.518Z · LW(p) · GW(p)

Assuming you meant for the comment section to be used to convince you. Not necessarily because you meant it, but because making that assumption means not willfully acting against your wishes on what normally would be a trivial issue that holds no real preference for you. Maybe it would be better to do it with private messages, maybe not. There's a general ambient utility to just making the argument here, so there shouldn't be any fault in doing so.

Since this is a real-world issue rather than a simple matter of crunching numbers, what you're really asking for here isn't merely to be convinced, but to be happy with whatever decision you make. Ten months' worth of payment for the relief of not having to pay an entirely useless cost every month, plus whatever more immediate utility would accompany that "extra" $50/month. If $50 doesn't buy much immediate utility for you, then a compelling argument needs to encompass in-depth discussion of trivial things. It would mean having to know precise information about what you actually value, or at the very least an accurate heuristic about how you feel about trivial decisions. As it stands, you feel the $50/month investment is worth it for a very narrow type of investment: cryonics.

This is simply restating the knowns in a particular format, but it emphasizes what the core argument needs to be here: Either that the investment harbors even less utility than 50$/month can buy, or that there are clearly superior investments you can make at the same price.

Awareness of just how severe confirmation bias is in the brain (despite any tactics you might suspect would uproot it) should readily show that convincing you that there are better investments to make (and therefore to stop making this particular investment) is the route most likely to produce payment. Of course, this undermines the nature of the challenge: a reason to not invest at all.

comment by Ishaan · 2014-01-11T20:56:56.943Z · LW(p) · GW(p)

This post inspired me to quickly do this calculation. I did not know what the answer would be when I started. It could convince you in either direction really, depending on your level of self/altruism balance and probability estimate.

Cost of neuro-suspension cryonics > $20,000

Cost of saving a single life via effective altruism, with high certainty < $5,000

Let's say you value a good outcome with a mostly-immortal life at X stranger's regular-span lives.

Let "C" represent the threshold of certainty that signing up for cryonics causes that good outcome.

C*X / $20,000 > 1 / $5,000

C > 4/X

Conclusion: with estimates biased towards the cryonics side of the equation... in order to sign up your minimum certainty that it will work as expected must be four divided by the number of strangers you would sacrifice your immortality for.

If you value immortality at the cost of 4 strangers, you should sign up for cryonics instead of E.A. only if you are 100% certain it will work.

If you value immortality at the cost of 400 strangers, you should sign up for cryonics instead of E.A. only if you are more than 1% certain it will work.

(^ Really what is happening here is that at the cost of 4 strangers' lives you are taking a gamble on a 1% chance... but it amounts to the same thing if you shut up and multiply.)

The numbers for whole-body suspension will be rather different.
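
A quick sketch of the same threshold calculation, with the $20,000 and $5,000 figures above treated as assumptions:

```python
cryonics_cost = 20_000      # dollars, neuro-suspension (assumed figure above)
ea_cost_per_life = 5_000    # dollars per life saved with high certainty (assumed)

def min_certainty(x_strangers):
    """Minimum P(cryonics works) needed for cryonics to beat donating,
    if you value a mostly-immortal life at x_strangers ordinary lives."""
    return (cryonics_cost / ea_cost_per_life) / x_strangers

print(min_certainty(4))     # 1.0  -> need 100% certainty
print(min_certainty(400))   # 0.01 -> need more than 1% certainty
```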

Replies from: solipsist
comment by solipsist · 2014-01-11T21:38:58.989Z · LW(p) · GW(p)

This sort of utilitarian calculation should be done with something like QALYs, not lives. If the best charities extend life at $150 per QALY, and a $20,000 neuro-suspension extends life by a risk-adjusted 200 QALYs, then purchasing cryonics for yourself would be altruistically utilitarian.
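
In those terms the comparison is just a cost-per-QALY check (both figures here are the illustrative assumptions above, not established estimates):

```python
charity_cost_per_qaly = 150        # dollars per QALY for the best charities (assumed)
cryonics_cost = 20_000             # dollars, neuro-suspension (assumed)
risk_adjusted_qalys = 200          # expected QALYs from cryonics, risk-adjusted (assumed)

cryonics_cost_per_qaly = cryonics_cost / risk_adjusted_qalys
print(cryonics_cost_per_qaly)                           # 100.0
print(cryonics_cost_per_qaly < charity_cost_per_qaly)   # True -> clears the bar
```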

Replies from: Ishaan, jkaufman
comment by Ishaan · 2014-01-11T21:53:39.315Z · LW(p) · GW(p)

True, but that's much harder to estimate (because it requires real-world QALY data) and involves more uncertainty (how many QALYs to expect after revival?), and I didn't want that much work - just a quick estimate.

However, I'm guessing someone else has done this properly at some point?

Replies from: solipsist
comment by solipsist · 2014-01-11T23:15:13.485Z · LW(p) · GW(p)

However, I'm guessing someone else has done this properly at some point?

Note: I have not, so do not use my 200 QALYs as an anchor.

Replies from: somervta
comment by somervta · 2014-01-12T02:22:00.634Z · LW(p) · GW(p)

Yes. Because instructing people to avoid anchoring effects works.
comment by jefftk (jkaufman) · 2014-01-12T06:22:12.407Z · LW(p) · GW(p)

These calculations get really messy, because a future civilization reviving you as an upload is unlikely to have its population limited by the supply of frozen people to scan. Instead, it will probably run as many people as it has resources or work for, and if it decides to run you, that's instead of someone else. There are probably no altruistic QALYs in preserving someone for that future.

Replies from: solipsist
comment by solipsist · 2014-01-15T02:42:36.437Z · LW(p) · GW(p)

This reply made me really think, and prompted me to ask this question.

comment by Humbug · 2014-01-11T20:31:04.640Z · LW(p) · GW(p)

Given that you believe that unfriendly AI is likely, I think one of the best arguments against cryonics is that you do not want to increase the probability of being "resurrected" by "something". But this concerns the forbidden topic, so I can't get into more detail here. For hints, see Iain M. Banks' novel Surface Detail on why you might want to be extremely risk averse when it comes to the possibility of waking up in a world controlled by posthuman uploads.

comment by handoflixue · 2014-01-19T08:59:09.019Z · LW(p) · GW(p)

It's easy to get lost in incidental costs and not realize how they add up over time. If you weren't signed up for cryonics, and you inherited $30K, would you be inclined to dump it into a cryonics fund, or use it someplace else? If the answer is the latter, you probably don't REALLY value cryonics as much as you think - you've bought into it because the price is spread out and our brains are bad at budgeting small, recurring expenses like that.

My argument is pretty much entirely on the "expense" side of things, but I would also point out that you probably want to unpack your expectations from cryonics: Are you assuming you'll live infinite years? Live until the heat death of the universe? Gain an extra 200 years until you die in a situation cryonics can't fix? Gain an extra 50 years until you run into some further age limit?

When I see p(cryonics) = 0.3, I tend to suspect that's leaning more towards the 50-200 year side of things. Straight-up immortal-until-the-universe-ends seems a LOT less likely than a few hundred extra years.


Where'd that $30K figure come from?

You've said you're young and have a good rate on life insurance, so let's assume male (from the name) and 25. Wikipedia suggests you should live until you're 76.

$50/month × 12 months/year × 51 years (76 - 25) = $30,600.

So, it's less that you're paying $50/month and more that you're committing to pay $30,000 over the course of your life.
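
A minimal sketch of that lifetime-cost arithmetic (the premium, age, and life expectancy are just the assumptions stated above):

```python
# Lifetime cryonics-insurance premiums under the assumptions in this comment.
monthly_premium = 50       # $/month life-insurance premium funding cryonics
assumed_age = 25           # assumed current age
life_expectancy = 76       # Wikipedia figure cited above

years_paying = life_expectancy - assumed_age            # 51 years
lifetime_cost = monthly_premium * 12 * years_paying     # 30,600
print(f"${lifetime_cost:,}")                            # $30,600
```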


What else could you do with that same money?

Portland State University quotes ~$2500/semester for tuition. At 3 semesters/year and 4 years/degree, that's roughly $30K. Pretty sure you can get loans and go into debt for this, so it's still something you could pay off over time. And if you're smart - do community college for the first two years, get a scholarship, etc. - you can probably easily knock enough off to make up for interest charges.

Replies from: ChrisHallquist
comment by ChrisHallquist · 2014-01-19T09:47:50.723Z · LW(p) · GW(p)

I'm not that young--I graduated college four years ago. If I inherited ~30k, it would go into a generic early start on retirement / early start on hypothetical kids' college fund / maybe down payment on a condo fund. Given that I'd just be holding on to it in the short term anyway, putting it in a cryonics fund doesn't actually strike me as completely crazy. Even in that case, though, I think I'd get the insurance anyway, so I'd know the inheritance money could be used for anything I needed when the need arose. Also, I understand that funding through insurance can avoid legal battles over the money.

Replies from: handoflixue
comment by handoflixue · 2014-01-20T05:01:31.820Z · LW(p) · GW(p)

The average college graduate is 26, and I was estimating 25, so I'd assume that by this community's standards, you're probably on the younger side. No offense was intended :)

I would point out that by the nature of it being LIFE insurance, it will generally not be used for stuff YOU need, nor timed to "when the need arises". That's investments, not insurance :)

(And if you have 100K of insurance for $50/month that lets you withdraw early AND isn't term insurance... then I'd be really curious how, because that sounds like a scam or someone misrepresenting what your policy really offers :))

comment by polymathwannabe · 2014-01-11T19:37:16.828Z · LW(p) · GW(p)

Let's suppose your mind is perfectly preserved (in whatever method they choose to use). Let's suppose you retain the continuity of your memories and you still feel you are "you." Let's suppose the future society is kinder, nicer, less wasteful, more tolerant, and every kid owns a puppy. Let's suppose the end of fossil fuels didn't destroy civilization because we were wise enough to have an alternative ready in time. Let's suppose we managed to save the ozone layer and reverse global warming and the world is still a more-or-less pleasant place to live in. Let's suppose the future society has actually competent people in political positions.

Good! But still...

What body do you end up having? Even if the future doctors can clone a whole new, young, strong body from your DNA (and remove all your potential genetic diseases), that doesn't mean you're immortal. Physical destruction of the body (from accidents, natural disasters, etc.) is still a concern. Your new body would still need to have cryonics insurance in case anything happens to it. And there's always the risk of spontaneous mutations that will ruin everything: http://www.nytimes.com/2014/01/05/sunday-review/why-everyone-seems-to-have-cancer.html?_r=0

Even if sharks don't naturally die from aging, the mere fact of living more years increases the probability that they'll eventually run into something that kills them. Digital uploading is no guarantee of immortality either. Hard drives can be damaged and destroyed too. Even after getting used to a billion years of subjective existence, you will never really be able to shake off the fear of annihilation from unforeseen causes.

Even if you (or any of your future copies, which are no guarantee of continued identity) are one of the lucky few who make it to the end of the universe, you will still die. If a heart attack didn't get you, entropy will. So it really doesn't matter how much of an effort you make: in forty years or forty eons, you will still die. What that means to you will depend on how much you plan to do with that time, but unless we find a way to reboot the universe AND survive the reboot AND find ourselves in an environment where life can survive, the last enemy will still be undefeatable.

Replies from: gjm, polymathwannabe
comment by gjm · 2014-01-11T20:06:31.432Z · LW(p) · GW(p)

I don't follow how this is an argument against cryonics, unless you're talking to someone who really truly believed that cryonics meant a serious chance of actual literal immortality.

(Also, I have seen it alleged that at least one plausible model of the future of the universe has it dying after finite time, but in such a way that an infinite amount of computation can be done before the end. So it's not even entirely obvious you couldn't be subjectively immortal given sufficiently advanced technology. Though I think there have been cosmological discoveries since this model was alleged to be plausible that may undermine its plausibility.)

comment by polymathwannabe · 2014-01-11T19:54:41.539Z · LW(p) · GW(p)

On the other hand, you're actually paying people to get you to forfeit your chance at eternity. To paraphrase religious language, you're dangerously selling your soul too short.

comment by Dentin · 2014-01-11T18:22:24.557Z · LW(p) · GW(p)

After I ran my estimates, I concluded that cryonics raised my odds of living to ~90 years old by approximately 5% absolute, from 50% to 55%. It's not very much, but that 5% was enough for me to justify signing up.

I think the most important part is to be honest about the fact that cryonics is a fairly expensive safety net largely consisting of holes. There are many unknowns, it relies on nonexistent technology, and in many scenarios you may become permanently dead before you can be frozen. That said, it does increase your odds of long term survivability.

comment by [deleted] · 2014-01-11T17:11:03.010Z · LW(p) · GW(p)

Doesn't this thread go against the principles of The Bottom Line?

Replies from: DanielLC
comment by DanielLC · 2014-01-11T19:08:55.093Z · LW(p) · GW(p)

Not entirely. It's well known that, if you can't find an unbiased opinion, it's good to at least get biases from different directions. He has already seen the arguments in favor of cryonics. Repeating them would be wasting his time. Now he wants to find the arguments against. If they are more convincing than he expected, his expectations of cryonics working will go down. Otherwise, they will go up.

comment by itaibn0 · 2014-01-11T13:26:54.790Z · LW(p) · GW(p)

It's worth mentioning that anyone with a strong argument against cryonics is unlikely to believe that you will actually be persuaded by it (given the low base rates for these kinds of conversions). Thus the financial incentive is not as influential as you would like it to be.

Added: Relevant prediction

Replies from: wuncidunci
comment by wuncidunci · 2014-01-11T13:32:00.553Z · LW(p) · GW(p)

If someone believes they have a really good argument against cryonics, then even if it only has a 10% chance of working, that is $50 in expected gain for maybe an hour of work writing it up really well. That sounds well worth their time to me.

comment by lmm · 2014-01-11T11:19:55.728Z · LW(p) · GW(p)

I work in software. I once saw a changelog that said something like " * session saving (loading to be implemented in a future version)", and I laughed out loud. The argument in favour of cryonics seems to boil down to "we can't see why revival won't work", which is basically meaningless for a system this complex and poorly-understood. How can we be at all confident that we're preserving memories when we don't even know how they're encoded? I can't predict exactly what crucial thing we will have missed preserving. But I can predict we will have missed something.

I think it requires an incredible degree of fine-tuning of our future-tech assumptions to say that our post-singularity overlords will be able to revive people who were frozen, but not people who weren't.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2014-01-11T13:43:52.929Z · LW(p) · GW(p)

I found myself in that situation once.

When I wrote the loader, the saved-game files worked.

Of course, that was because I just took the whole game data object and serialized it into a file stream. Similarly, here, we're storing the actual thing.

Last paragraph: ha. Restoring someone who wasn't frozen requires time travel. If cryo works and time travel doesn't, there you go.

Replies from: VAuroch
comment by VAuroch · 2014-01-13T07:56:09.469Z · LW(p) · GW(p)

It doesn't necessarily involve time travel. It could just require extremely precise backwards extrapolation.

And if it does involve time travel, it only requires the travel of pure information from the past to its future. And since information can already be transmitted to its future light cone, it's at least conceivable that you could specify a particular location in spacetime precisely enough to induce a process that transfers information about that location to a specific point in its future light cone (i.e. your apparatus).

Which still sounds extremely difficult, but also much more likely to be possible than describing it as time travel.

For the record, I put the probability of time travel that could reach our current point in time at epsilon, the probability of time travel that can reach no point earlier than the creation of the specific time machine as very small (<0.1%) but greater than epsilon, and the probability of the information-only "time travel" outlined above in the range of 0.1%-1%.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2014-01-13T13:53:59.182Z · LW(p) · GW(p)

The ability to radiate light into space means that nope, you need to catch up to all those photons. Second law murders extrapolation like that.

Replies from: VAuroch
comment by VAuroch · 2014-01-13T19:25:49.337Z · LW(p) · GW(p)

That's true, slipped my mind.

comment by Prismattic · 2014-01-12T01:03:45.674Z · LW(p) · GW(p)

I will pay $500 to anyone who can convince me to NOT X

is incentivizing yourself to X. Not ideal for being open to genuinely changing your mind.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2014-01-12T06:18:21.722Z · LW(p) · GW(p)

He stands to save a lot of money over the years by canceling his subscription, much more than this $500. The net short- and medium-term incentive (which of course ignores the potential long-term payoff of cryonics working) is towards changing his mind and believing "not X"; he's just offering to split some of that incentive with us.

comment by [deleted] · 2014-01-12T15:49:45.310Z · LW(p) · GW(p)

The definition of science that I prefer is: a theory that can be tested and shown to fail. If a theory gives itself room to always add one more variable and thus never be shown to fail, it might be useful or beautiful or powerful or comforting but it won't be science. Revival 'some day' can always be one more day away, one more variable added.

comment by Torello · 2014-01-11T21:04:03.703Z · LW(p) · GW(p)

Pour some milk into water. Now, get the milk back out. Not milk powder, not the milk plus a little water, not 99.9% of the milk and some minerals from the water, just the milk. I don't think it's possible. Now, let your brain die. Freeze it (freezing a live brain will kill it). Then, restart the most complex machine/arrangement of matter known. It just doesn't seem feasible.

I think machines can have consciousness, and I think a copy of you can have consciousness, but you can't have the consciousness of your copy. It seems to me that after death and freezing you would get a copy of you, which would perhaps be good for a number of reasons, but not for the reason that (I presume) is most important--for you (your consciousness) to become immortal.

Replies from: None
comment by [deleted] · 2014-01-11T21:22:01.206Z · LW(p) · GW(p)

I think machines can have consciousness, and I think a copy of you can have consciousness, but you can't have the consciousness of your copy. It seems to me that after death and freezing you would get a copy of you, which would perhaps be good for a number of reasons, but not for the reason that (I presume) is most important--for you (your consciousness) to become immortal.

A copy of you is identical to you. Therefore I don't see how a copy of you could not have your consciousness.

Replies from: Torello
comment by Torello · 2014-01-11T23:07:30.254Z · LW(p) · GW(p)

Yes, I agree that the copy would have your consciousness; I guess I wasn't clear about expressing that.

But that's the point; the copy would have your consciousness--you wouldn't.

Replies from: None, ArisKatsaris
comment by [deleted] · 2014-01-12T12:20:08.355Z · LW(p) · GW(p)

Since the copy of Chris Hallquist would say "I am Chris Hallquist" for the same reason Chris Hallquist says "I am Chris Hallquist", I would say that the copy of Chris Hallquist just is Chris Hallquist in every way. So Chris Hallquist still has Chris Hallquist's consciousness in the cryonics scenario. In the computer scenario, both Chris Hallquist in the flesh and Chris Hallquist on the computer have Chris Hallquist's consciousness. Over time they might become different versions of Chris Hallquist if exposed to different things, but at the start, from the inside, it seems the same to both.

Replies from: Torello
comment by Torello · 2014-01-12T14:59:27.549Z · LW(p) · GW(p)

"the copy of Chris Hallquist just is Chris Hallquist in every way"

I would say that, by definition of a copy, it can't be Chris in every way, because there is one clear way in which it isn't: it's a copy! This is a fundamental principle of identity--a thing can only be identical to itself. Things might be functionally equivalent, or very similar, but a copy by definition isn't the same, or we wouldn't call it a copy.

Replies from: None
comment by [deleted] · 2014-01-12T15:41:33.166Z · LW(p) · GW(p)

But why would Chris Hallquist care about this "fundamental principle of identity", if it makes no difference to his experiences?

Replies from: Torello
comment by Torello · 2014-01-12T15:56:15.262Z · LW(p) · GW(p)

It does make a difference--the use of the word "his" is key. "Copy of Chris" might have experiences and would not notice any difference regarding the fate of Chris, but for Chris, HIS experiences would end. (sorry for the caps; not shouting, just don't know how to do italics).

Let's say that "Chris" and "copy of Chris" are in a room.

I come into the room and say, "I'm going to kill one of you". Both "Chris" and "copy of Chris" are going to prefer that the other is killed, because their particular ability to experience things would end, even if a very similar consciousness would live on.

Replies from: None
comment by [deleted] · 2014-01-12T20:03:44.635Z · LW(p) · GW(p)

Both "Chris" and "copy of Chris" are Chris Hallquist. Both remember being Chris Hallquist, which is the only way anyone's identity ever persists. Copy of Chris would insist that he's Chris Hallquist for the same reason the original Chris would insist so. And as far as I'm concerned, they'd both be right - because if you weren't in the room when the copying process happened, you'd have no way of telling the difference. I don't deny that as time passes they gradually would become different people.

I prefer to frame things this way. Suppose you take Chris Hallquist and scan his entire body and brain such that you could rebuild it exactly the same way later. Then you wait 5 minutes and kill him. Now you use the machine to rebuild his body and brain. Is Chris Hallquist dead? I would say no - it would basically be the same as if he had amnesia - I would prefer to experience amnesia rather than be killed, and I definitely don't anticipate having the same experiences in either case. Yet your view seems to imply that, since the original was killed, despite having a living, talking Chris Hallquist in front of you, it's somehow not really him.

Edit: Moreover, if I was convinced the technology worked as advertised, I would happily undergo this amnesia process for even small amounts of money, say, $100. Just to show that I actually do believe what I'm saying.

Replies from: Torello
comment by Torello · 2014-01-12T21:32:17.960Z · LW(p) · GW(p)

with regard to "Yet your view seems to imply that, since the original was killed, despite having a living, talking Chris Hallquist in front of you, it's somehow not really him."

Yes, I do believe that the copy of Chris Hallquist would have an identical consciousness (until, as you stated, he had some new experiences), but the original (non-copy) Chris is still gone. So from a functional perspective I can interact with "copy of Chris" in the same way, but the original, unbroken consciousness of "original Chris" is still gone, which from the perspective of that consciousness, would be important.

with regard to "Both "Chris" and "copy of Chris" are Chris Hallquist." I still am confused: they may have the same structure, function, and properties, but there are still two of them, so they cannot be the same thing. There are two entities; just because you made a copy doesn't mean that when you destroy the original that the original isn't changed as a result.

Replies from: None
comment by [deleted] · 2014-01-12T22:22:16.614Z · LW(p) · GW(p)

Why do you consider Chris Hallquist to be the same person when he wakes up in the morning as he is when he went to bed the night before (do you?)?

There are two entities; just because you made a copy doesn't mean that when you destroy the original that the original isn't changed as a result.

The original is changed. And I agree that there are two entities. But I don't see why Chris Hallquist should care about that before the split even occurs. Would you undergo the amnesia procedure (if you were convinced the tech worked, that the people were being honest, etc.) for $1000? What's the difference between that and a 5-minute long dreamless sleep (other than the fact that a dead body has magically appeared outside the room)?

Replies from: Torello
comment by Torello · 2014-01-12T22:37:09.760Z · LW(p) · GW(p)

I would consider the Chris that wakes up in the morning the same person, because his consciousness was never destroyed. Death destroys consciousness, sleep doesn't; this seems obvious to me (and, I think, to most people); otherwise we wouldn't be here discussing this (if it weren't, it seems we'd be discussing nightly cryonics to prevent our nightly deaths). Just because most people agree doesn't make something right, but my intuition tells me that sleep doesn't kill me (or my consciousness) while death does.

Sorry for caps, how do you italicize in comments? I think the crux of the issue is that you believe GENERIC "Chris H consciousness" is all that matters, no matter what platform is running it. I agree that another platform ("copy of Chris") would run it equally well, but I still think that the PARTICULAR person experiencing the consciousness (Chris) would go away, and I don't like it. It seems like you are treating consciousness as a means--we can run the software on a copy, so it's exactly the same--whereas I see it as an end: original Chris should hold on to his particular consciousness. Isn't this why death is a fundamental problem for people? If people could upload their consciousness to a computer, it might provide some solace, but I don't think it would completely eliminate the sting of death.

With regard to whether I would do it for $1,000--no. Earlier you equated the amnesia procedure with death (I agree). So no, I wouldn't agree to hand a copy of me who happens to be running my consciousness $1,000 for the privilege of committing suicide!

Replies from: None
comment by [deleted] · 2014-01-13T01:22:13.848Z · LW(p) · GW(p)

how do you italicize in comments?

Asterisks around your *italic text* like that. There should be a "Show help" button below the comment field which will pop up a table that explains this stuff.

Isn't this why death is a fundamental problem for people?

I actually think so. I mean, I used to think of death as this horrible thing, but I realized that I will never experience being dead, so it doesn't bother me so much anymore. Not being alive bothers me, because I like being alive, but that's another story. However, I'm dying all the time, in a sense. For example, most of the daily thoughts of 10-year old me are thoughts I will never have again; particularly, because I live somewhere else now, I won't even have the same patterns being burned into my visual cortex.

I think the crux of the issue is that you believe generic "Chris H consciousness" is all that matters, no matter what platform is running it.

That's a good way of putting it. The main thing that bothers me about focusing on a "particular" person is that I (in your sense of the word) have no way of knowing whether I'm a copy (in your sense of the word) or not. But I do know that my experiences are real. So I would prefer to say not that there is a copy but that there are two originals. There is, as a matter of fact, a copy in your sense of the word, but I don't think that attribute should factor into a person's decision-making (or moral weighting of individuals). The copy has the same thoughts as the original for the same reason the original has his own thoughts! So I don't see why you consider one as being privileged, because I don't see location as being that which truly confers consciousness on someone.

Replies from: Torello
comment by Torello · 2014-01-13T23:22:29.764Z · LW(p) · GW(p)

I (in your sense of the word) have no way of knowing whether I'm a copy (in your sense of the word) or not. But I do know that my experiences are real.

I see what you mean about not knowing whether you are a copy. I think this is almost part of the intuition I'm having--you in particular know that your experiences are real, and that you value them. So even if the copy doesn't know it's a copy, I feel that the original will still lose out. I don't think people experience death, as you noted above, but not being alive sucks, and that's what I think would happen to "original Chris".

By the way, thanks for having this conversation--it made me think about the consequences of my intuitions about this matter more than I have previously--even counting the time I spent as an undergrad writing a paper about the "copy machine dilemma" we've been toying with.

Thanks for the italics! Don't know how I missed the huge show help button for so long.

comment by ArisKatsaris · 2014-01-12T04:08:39.194Z · LW(p) · GW(p)

How would this objection work if I believe it likely that a billion copies of me are created every single second (see Many Worlds Interpretation), all of them equally real, and all of them equally me?

Replies from: polymathwannabe
comment by polymathwannabe · 2014-01-12T07:19:45.268Z · LW(p) · GW(p)

The "you" that is in this universe is the only "you" you can realistically care about. You don't live vicariously through your other selves more than you can live through a twin.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2014-01-12T14:38:28.052Z · LW(p) · GW(p)

You didn't understand my objection, or perhaps I didn't communicate it clearly enough: According to MWI the "me" in the present leads to a billion zillion different "me"s in the near future. I'm NOT talking about people who have already branched from me in the past -- I'm talking about the future versions of me.

Torello seems to argue that a "me" who has branched through the typical procedures of physics is still a real "me", but a me who has branched via "uploading" isn't one.

I don't see why that should be so.

Replies from: Torello
comment by Torello · 2014-01-12T15:04:39.747Z · LW(p) · GW(p)

The "me" traveling through typical physics is a single entity, so it can continue to experience it's consciousness. The "me(s)" in these many worlds don't have the continuity to maintain identity.

Think of this: if one actually believed in Many Worlds, and didn't find any problem with what I've stated above, this would be a convincing argument not to do cryonics, because it's already happening for free, and you can spend the money on entertainment or whatever (minus of course the $500 you owe me for convincing you not to do it ;)

Replies from: ArisKatsaris
comment by ArisKatsaris · 2014-01-12T18:36:11.232Z · LW(p) · GW(p)

The "me(s)" in these many worlds don't have the continuity to maintain identity.

So you believe that people nowadays have continuity that maintains identity only because you don't believe in MWI?

So if MWI proves true, does this distinction between "copies" and "continuations" become meaningless (according to you)?

Replies from: Torello
comment by Torello · 2014-01-12T21:19:09.750Z · LW(p) · GW(p)

No, I think that the other worlds of MWI don't/wouldn't affect our world, so continuity/identity in our world wouldn't change if MWI were true (or suddenly became true).

The break in continuity comes BETWEEN (sorry for caps, can't italicize) the many worlds, preventing the "me(s)" in different worlds from having continuity with each other.

comment by [deleted] · 2014-01-11T17:09:37.256Z · LW(p) · GW(p)

With the sheer mind power and wisdom in a hive of virtual uploads, a government would certainly wish to gain legal control over you. You are too good to pass up. They will find a way - much like what is already happening with the internet, surveillance, and so on.

As a simulation you will be the ultimate provider! They can teach you programming and force you to work, without using up precious earth resources. Also, you run at 1,000,000,000x human speed. Humans will vote for you to do all their work for them! Human rights have a way of disappearing when it's the rights of a very few weighed against the rewards of nearly all. And you aren't even human!

And you probably won't like much of their work. Most jobs are boring. But that won't matter, as...

The world's top researchers (hive-mind psychiatrists) will invent hyperadvertising, virtual dopamine, and hypertorture in order to get you to do much of this work. With potentially no theoretical limit on the reward systems, you will become a form of cyber crack whore (apologies for the lack of a nicer term). Are you yourself when you are on drugs? Is it worth spending money from your real life to buy your future drug addict's existence? Would it really be any different from another person's future drug-addict existence, given that you will eventually be super-optimised to think in precisely the way that best solves your problem set, and so would have precisely the same thoughts?

If you believe there is no difference between your future crack addict and another's, since you think the same thoughts (as they are optimal) and only your present thoughts define you (as you are a simulation), then why spend YOUR MONEY now paying for something nobody else is paying for, to get your brain molded, when it matters not which brain is actually molded? All it does is benefit the future organisation which seeks to milk you!

To summarise so far:

  1. Simulations are too useful for governments not to seek (over decades) to gain control of them.

  2. You will not have rights in there, as humans will use cognitive dissonance to justify the huge gain they get from you working for them (super-efficient, invisible to them).

  3. Your brain will be optimised for the tasks, so over time it will converge to any other brain on the same tasks. So why spend your precious money in this existence to pay for some vanilla play-dough which will not be you in the future?


Further, many wish to die as they age. And it's not from growing weak or tired, but from boredom.

I would at the very least urge you to cancel your subscription until you are ~40 and better understand the risks as well as the rewards. As rationalists we see the rewards, but as a generally young community we do not emotionally feel the aversion to immortality that our elders seem to (from my own anecdotal experience). And irrational as feelings are, I dare you to attempt to persuade me it's worth living a rational but unhappy existence.

Often the only thing keeping our parents and grandparents alive is their children. They no longer care so much about the rest of the world. They don't have that much power to change anything else, and are bored of the things they used to find fun. If you lived in Minecraft, how long until you got bored of Minecraft? As a 14-year-old I could have played that game 14+ hours a day forever. A decade later, intense guilt would overwhelm me after more than a few hours. (Perhaps they could freeze your brain into a state where you stay 14 forever, though, but gain wisdom. It might be possible.) The same principle applies to long-term work for the majority of people, I hear. People look forward to retirement. Then retired people die sooner because they are bored.

I encourage you to go to a retirement home and see what the majority of people spend their time doing, and whether they are content with life and ready for death. Perhaps it is a question of your discipline and commitment to work. Will you train your mind for an eternity? Or will you reach a time when you have achieved what you wished to achieve in life, and wish to retire to the deep sleep?

As people age and gain wisdom, many no longer wish to live forever. With suicide being immoral and effectively illegal, and our culture becoming increasingly progressive and protective of all classes of citizens with each decade, would you be able to opt out? You will probably have familial uploads from later generations: would they allow you to commit suicide, at the cost of their feelings (and of your ability to provide for the real-worldlings)?

Further Conclusions from this section:

  1. As you get older you may want to die anyway, and if the government in control of the hivemind decides that the life in it is too valuable to let go, will they let you? You need to be confident you want to live forever, as the cost of a 'fate-worse-than-death' is greater than the cost of death. So wait until you are ~40 and see whether your feelings change before you take such a dangerous risk as investing in immortality. Do not prioritise the rational case for immortality over the risk of an unhappy immortality.

http://www.youtube.com/watch?v=BYOE_b4aYD0