Optimism versus cryonics

post by lsparrish · 2010-10-25T02:13:34.654Z · LW · GW · Legacy · 110 comments

Within the immortalist community, cryonics is the most pessimistic possible position. Consider the following superoptimistic alternative scenarios:

  1. Uploading will be possible before I die.
  2. Aging will be cured before I die.
  3. They will be able to reanimate a whole mouse before I die, then I'll sign up.
  4. I could get frozen in a freezer when I die, and they will eventually figure out how to reanimate me.
  5. I could pickle my brain when I die, and they will eventually figure out how to reanimate me.
  6. Friendly AI will cure aging and/or let me be uploaded before I die.

Cryonics -- perfusion and vitrification at LN2 temperatures under the best conditions possible -- is far less optimistic than any of these. Of all the possible scenarios where you end up immortal, cryonics is the least optimistic. Cryonics can work even if there is no singularity and no reversal tech for thousands of years into the future. It can work under the slowest technological growth imaginable. All it assumes is that the organization (or its descendants) can survive long enough, that technology doesn't go backwards (on average), and that cryopreservation of a technically sufficient nature can predate reanimation tech.

It doesn't even require the assumption that today's best possible vitrifications are good enough. It's entirely plausible that vitrifications won't start being good enough until 100 years from now, with reversal figured out 500 years after that. Perhaps today's population is doomed because of this. We don't know. But the fact that we don't know what exact point is good enough is sufficient to make this a worthwhile endeavor at as early a point as possible. It doesn't require optimism -- it simply requires deliberate, rational action. The fact is that we are late to the party. In retrospect, we should have started preserving brains hundreds of years ago. Benjamin Franklin should have gone ahead and had himself immersed in alcohol.

There's a difference between having a fear and being immobilized by it. If you have a fear that cryonics won't work -- good for you! That's a perfectly rational fear. But if that fear immobilizes you and discourages you from taking action, you've lost the game. Worse than lost, you never played.

This is something of a response to Charles Platt's recent article on Cryoptimism: Part 1 Part 2

110 comments

Comments sorted by top scores.

comment by [deleted] · 2010-10-25T01:39:44.011Z · LW(p) · GW(p)

I like this post. Upvoted.

On a tangential note, I had an experience today that made me take cryonics much more seriously. I had a (silly, in retrospect) near-miss with serious injury, and I realized that I was afraid. Ridiculously, helplessly, calling-on-imaginary-God-for-mercy afraid. I had vastly underestimated how much I cared about my own physical safety, and how helpless I become when it's threatened. I feel much less cavalier about my own body now.

So, you know, freezing myself looks more appealing now that I know that I'm scared. I can see why I'd want to have somewhere to wake up to, if I died.

Replies from: cousin_it, AngryParsley, XiXiDu
comment by cousin_it · 2010-10-25T07:34:22.614Z · LW(p) · GW(p)

Your comment suggests a convenient hack for aspiring rationalists to overcome their fear of cryonics.

comment by AngryParsley · 2010-10-25T09:21:38.316Z · LW(p) · GW(p)

I wonder if cryonicists (before signing up) are more likely than cryocrastinators to have experienced an "oh jeez I almost died" moment.

Anecdotal evidence: I'm signed up and someone once tried to rob me at gunpoint.

It would also be interesting to know how many close friends or relatives of cryonicists have died compared to cryocrastinators.

Replies from: Richard_Kennaway, advancedatheist, niplav, JoshuaZ
comment by Richard_Kennaway · 2010-10-26T19:03:06.538Z · LW(p) · GW(p)

Anecdotal evidence: although sympathetic to the idea, I am not signed up, and have had two close brushes with death (that I know of).

comment by advancedatheist · 2010-10-25T17:01:53.318Z · LW(p) · GW(p)

Not in my case for the original plan. I decided to sign up for cryonic suspension some day at age 14 (1974), after reading Robert Ettinger's Man Into Superman (an underappreciated book, in my opinion). I followed through in 1990 with Alcor. This November 2 (coincidentally my 51st birthday) marks the 20th anniversary of my suspension membership.

I did have a health issue recently which has motivated me to get more involved in trying to untangle the cryonics clusterfuck (long story). I had a branch retinal vein occlusion in my right eye back in June (basically a stroke in that retina), which has caused some vision loss. Since then I've lost about 40 lbs., and I take Lisinopril to lower my blood pressure (116/80 this morning).

comment by niplav · 2022-04-15T15:36:57.500Z · LW(p) · GW(p)

Datapoint: I didn't have such an experience before deciding to sign up.

comment by JoshuaZ · 2010-10-28T02:07:36.343Z · LW(p) · GW(p)

I've had a fair number of relatives die. This is actually one reason I'm delaying on cryonics right now. I first got strongly interested in cryonics about 6 months ago. Shortly after that, multiple relatives died fairly suddenly at fairly young ages. They weren't the first such deaths in my family by any means. And a family friend died at about the same time from a cancer he had had for a very long time. My initial reaction was to run off and sign up for cryonics right away. So I'm now delaying in part to make sure that I am making the decision rationally and not just letting sudden recent events cloud my judgment. That, and cryonics being fairly expensive on a grad-student budget, are the main reasons I haven't signed up at this point.

comment by XiXiDu · 2010-10-25T10:28:04.355Z · LW(p) · GW(p)

I've been scared shitless all my life, but it gradually gets better. I stopped caring that much and am much happier now. I'm at the point where I'm really too lazy to go for cryonics now. Note that I never doubted that cryonics could work, or that it is worth it if you care. But the more I learn, the less I fear not being around. Of course I do want to be around; I don't want to die. But I'm just not going to wear a helmet while driving a car, if you see what I mean. I learnt that there are so many possibilities -- MWI, the Mathematical Universe, etc. -- and most of all you people. If I'm not around, then someone like you or EY will be, who pretty much contains all the awesomeness I could ever mobilize. I know many people wouldn't even be satisfied by having a perfect copy around, because it's not them. I'm pretty much the other extreme: I'm already satisfied with having people around who I believe are at least as worthy as I am, so I no longer fear death. Further, there's the possibility that the future will suck anyway :-)

Replies from: None
comment by [deleted] · 2010-10-25T11:49:12.842Z · LW(p) · GW(p)

It actually didn't occur to me to wear a helmet in a car.

For me this was sort of the dividing line between "I'm young, I'll live forever" and "Wait, shit, I won't, I really do need to do all those boring things like use hand sanitizer and look both ways before crossing the street and take my vitamins."

Replies from: Jonathan_Graehl, XiXiDu
comment by Jonathan_Graehl · 2010-10-25T20:00:33.067Z · LW(p) · GW(p)

Wait, should I wear a helmet in my car? :) It sounds plausible. I'd say no, because reduced visibility would increase the odds of an accident, and there's already ample protection from airbags, seatbelts, and a crumple zone around a rigid structure that protects against crushing.

Replies from: khafra
comment by khafra · 2010-10-27T21:02:24.943Z · LW(p) · GW(p)

A good motorcycle helmet provides well over 180 degrees of side vision, while your peripheral vision can only reach about 160 degrees. While I can't find a reference, IIRC the percentage of motorcycling fatalities resulting from head injuries is around 50%, and the percentage of car fatalities resulting from head injuries is considerably higher. So, disregarding the vastly diminished prior probability of all-cause fatalities in a car, you should actually be more adamantly in favor of helmet use in a car than on a motorcycle.

Replies from: JGWeissman
comment by JGWeissman · 2010-10-27T21:15:34.955Z · LW(p) · GW(p)

So, disregarding the vastly diminished prior probability of all-cause fatalities in a car,

Why would we disregard that?

comment by XiXiDu · 2010-10-25T12:33:04.142Z · LW(p) · GW(p)

Traumatic Brain Injuries (TBI) are a leading cause of death and disability in the United States, and car accidents are one of the leading causes of TBIs.

I'd rather wear a helmet than sign up for cryonics. But most people who sign up for cryonics probably do not wear helmets.

comment by advancedatheist · 2010-10-25T03:22:20.280Z · LW(p) · GW(p)

Since you mentioned Benjamin Franklin: apparently when he died he left two trust funds to demonstrate the power of compound interest over a couple of centuries. The example of these trusts shows that the idea of a reanimation trust staying intact for centuries isn't absurd:

http://en.wikipedia.org/wiki/Benjamin_Franklin#Death_and_legacy

comment by dclayh · 2010-10-25T05:37:43.591Z · LW(p) · GW(p)

You forgot the most optimistic of all:

  1. I could do absolutely nothing, get cremated and the eventual Friendly AI will still be able to reanimate me, via time-travel or equivalent.
Replies from: Thomas, XiXiDu
comment by Thomas · 2010-10-25T06:30:07.153Z · LW(p) · GW(p)

Before I saw your comment I made the same one.

Now I deleted mine and I'll upvote yours.

comment by XiXiDu · 2010-10-25T10:29:22.140Z · LW(p) · GW(p)

Well, she forgot beta-level simulations too. The AI resurrecting you by interpreting recorded behavioral patterns and your DNA.

Replies from: false_vacuum, JoshuaZ
comment by false_vacuum · 2010-10-25T12:43:59.052Z · LW(p) · GW(p)

Is this a standard term? I've only seen it in Alastair Reynolds's writing.

Replies from: XiXiDu, Jach
comment by XiXiDu · 2010-10-25T13:24:51.573Z · LW(p) · GW(p)

Me too, but I think resurrection without a backup should be seriously considered given the possibility of superhuman AI. That is, a simulation based on modelling the behavioural patterns of the person copied, attempting to predict their reactions to a given stimulus. If there are enough records of the person and by the person, plus their DNA, then given sufficiently powerful AI, such a beta-level simulation might be close enough that only a powerful posthuman being could notice any difference from the original. I'm not sure whether Reynolds was the first person to consider this (I doubt it), but I deem the term "beta-level simulation" adequate.

Resurrection without a backup. As with ecosystem reconstruction, such "resurrections" are in fact clever simulations. If the available information is sufficiently detailed, the individual and even close friends and associations are unable to tell the difference. However, transapient informants say that any being of the same toposophic level as the "resurrector" can see marks that the new being is not at all like the old one. "Resurrections" of historical figures, or of persons who lived at the fringes of Terragen civilization and were not well recorded, are of very uneven quality. -- Orion's Arm - Encyclopedia Galactica - Limits of Transapient Power

Replies from: lsparrish, dclayh
comment by lsparrish · 2010-10-26T09:40:35.615Z · LW(p) · GW(p)

I think this should be considered conceivable, but not in the same realm of plausibility as cryonics working. If you rate cryonics chances of working low, this is much lower. If you rate its chances extremely high, this possibility might be in a more moderate range.

My favorite idea is to scan interstellar dust for reflected electromagnetic and gravitational data. Intuitively, I imagine this would lead first to resolving only massive objects like stars and planets, but with time and enough computation it could be refined into higher details.

comment by dclayh · 2010-10-25T20:22:15.764Z · LW(p) · GW(p)

This is also the approach they take on the TV show Caprica.

comment by Jach · 2010-10-27T10:13:15.596Z · LW(p) · GW(p)

It was my understanding that this is one of Kurzweil's eventual goals: reconstructing his father from DNA, memories of people who knew him, and just general human stuff.

comment by JoshuaZ · 2010-10-28T02:12:38.342Z · LW(p) · GW(p)

As we move farther away it becomes harder to self-identify. I have some difficulty self-identifying with an upload, but there's still a fair bit of identification there. I have a lot of trouble identifying with a beta copy. In fact, I imagine that an accurate beta version of me would spend a lot of time simply worrying about whether it is me. (Of course, now that I've put that in writing, our super friendly AI will likely only make a beta of me that does do that worrying, which is potentially counterproductive. There's a weird issue here where the more the beta is like me, the more it will worry about this. A beta that was just like me but didn't worry would certainly not be close enough in behavior for me to self-identify with it. So such worrying in a beta is evidence that I should self-identify with that beta.)

Replies from: MartinB, lsparrish, Mestroyer
comment by MartinB · 2010-10-28T03:11:44.309Z · LW(p) · GW(p)

In fact, I imagine that an accurate beta version of me would spend a lot of time simply worrying about whether it is me.

You will get over it fast. Ever experienced a few hours of memory loss, or found some old writing of yours that you cannot recall? The person you are changes all the time. You will be happy to be there, and will probably find lots of writing by other minds about how and why an upload is "you" enough.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-10-28T03:16:55.561Z · LW(p) · GW(p)

You will get over it fast. Ever experienced a few hours of memory loss, or found some old writing of yours that you cannot recall?

Yes. And this bothers me a bit. Sometimes I even worry (a teeny tiny bit) that I've somehow switched into a parallel universe where I wrote slightly different things (or in some cases really dumb things). When I was younger I sometimes had borderline panic attacks about how, if I didn't remember some event, I could still identify as that individual. I suspect that I'm a little less psychologically balanced about these sorts of issues than most people...

Replies from: XiXiDu, MartinB
comment by XiXiDu · 2010-10-28T12:02:51.078Z · LW(p) · GW(p)

I suspect that I'm a little less psychologically balanced about these sorts of issues than most people...

Most people maybe, but not most LWers I bet. I have had such attacks too...

comment by MartinB · 2010-10-28T04:42:11.984Z · LW(p) · GW(p)

I am deeply frightened by the fact that most important developments in my life are accidents. But well, there is little use in being afraid.

You could try to figure out how much you actually change from time unit to time unit by journaling, or by tracking mental changes. Maybe you also find a technique that can be adapted to measure your discomfort and to try out ways to reduce it.

I externalize some of my brainpower into note-files, with some funny results.

comment by lsparrish · 2010-10-28T02:56:49.278Z · LW(p) · GW(p)

One way to resolve the dissonance this produces is to quit identifying with yourself from one point in time to the next. Me-from-yesterday can be seen as a different self (though sharing most identity characteristics like values, goals, memories, etc.) from me-from-today.

Replies from: MartinB
comment by MartinB · 2010-10-28T03:12:29.447Z · LW(p) · GW(p)

I dislike this concept, but that is what we are left with. Identity breaks down, and personhood ends.

comment by Mestroyer · 2012-05-19T00:14:08.423Z · LW(p) · GW(p)

That's an interesting way of thinking about it. My take on it is the opposite. If an accurate copy of me was made after my death, I am pretty sure the copy wouldn't care whether it was me or not, just as I don't care whether I am as my past self wished me to be. If the copy was convinced it was me, there would be no problem. If it was convinced it wasn't, then it wouldn't think of my death as any more important than the deaths of everyone else throughout history.

comment by ata · 2010-10-26T00:20:08.875Z · LW(p) · GW(p)

Within the immortalist community, cryonics is the most pessimistic possible position.

Indeed; I think the cryonics organizations themselves have a saying, "Cryonics is the second worst thing that can happen to you."

comment by JoshuaZ · 2010-10-25T04:12:45.827Z · LW(p) · GW(p)

Cryonics can work even if there is no singularity or reversal tech for thousands of years into the future.

This doesn't alter your overall point much, but it seems unlikely. Aside from the high probability of something going drastically wrong after more than a few centuries, low-level background radiation as well as intermittent chemical reactions will gradually create trouble. Estimating when these become an issue is difficult, but the general range seems to be somewhere between about 100 and 1,000 years. The idea of reanimating someone "thousands" of years in the future is extremely unlikely to work even with perfect preservation.

Edit: Actually, I'm not sure I'm correct above. The most optimistic estimate thrown around seems to be that 100 years in cryo is about the same as 20 minutes at room temperature as far as chemical reactions are concerned. That means that if one considers information-theoretic death to have definitely occurred 24 hours after death, that leaves a decent upper bound on cryonics working of around 7,200 years, which is a lot higher than my estimate above. So "thousands" seems reasonable as long as one is only concerned about chemical reactions and not radiation or systemic failures of equipment.
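A quick sanity check of that arithmetic (a rough sketch; the 20-minutes-per-100-years equivalence and the 24-hour information-theoretic-death figure are the assumed numbers from the comment above, not established facts):

```python
# Upper bound on cryo storage time, given two assumed figures:
#  - 100 years in cryo ~ 20 minutes of room-temperature chemistry
#  - information-theoretic death ~ 24 hours at room temperature
ROOM_TEMP_BUDGET_MIN = 24 * 60   # minutes of room-temperature chemistry available
CRYO_YEARS_PER_20_MIN = 100      # assumed equivalence

bound_years = (ROOM_TEMP_BUDGET_MIN / 20) * CRYO_YEARS_PER_20_MIN
print(bound_years)  # 7200.0
```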

Replies from: ciphergoth, lsparrish
comment by Paul Crowley (ciphergoth) · 2010-10-26T07:48:03.980Z · LW(p) · GW(p)

How Cold is Cold Enough? estimates that 1 second at body temperature is equivalent to 24.628 million years at LN2 temperatures, as far as chemical reactions are concerned. The speed of nuclear processes isn't changed of course.
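That figure is an Arrhenius-type extrapolation; a rough Python sketch of the calculation (the activation energy below is an assumed value chosen to roughly reproduce the article's number, since real biochemical reactions span a wide range of activation energies):

```python
import math

R = 8.314                      # gas constant, J/(mol*K)
EA = 29_200                    # activation energy, J/mol -- assumed, not from the article's text
T_BODY, T_LN2 = 310.0, 77.0    # temperatures in kelvin

# Arrhenius rate ratio: how much slower a reaction runs at LN2 temperature
ratio = math.exp(EA / R * (1.0 / T_LN2 - 1.0 / T_BODY))
# 1 second of chemistry at body temperature, expressed as years at LN2
years_at_ln2_per_second = ratio / (365.25 * 24 * 3600)
print(f"{years_at_ln2_per_second:.3g}")  # on the order of 2e7 years
```

A larger activation energy would give an even more extreme ratio, so the quoted figure is on the conservative end of what the Arrhenius form can produce.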

Replies from: JoshuaZ
comment by JoshuaZ · 2010-10-26T15:47:18.848Z · LW(p) · GW(p)

Hmm, that's a much tighter estimate. (The page is poorly written, frankly condescending (telling people that "exp" is some mathematical operation is not helpful), and poorly formatted. This is not good when cryonics already triggers some people's crank warning detectors.) The math seems correct, though, and gives a much better bound for when chemical reactions will be an issue. It seems, then, that my initial estimate that chemical reactions prevent kiloyear preservation was pretty wrong.

comment by lsparrish · 2010-10-25T18:59:37.636Z · LW(p) · GW(p)

Isn't that a trivial engineering detail to be solved by liquid helium and lots of radiation shielding?

Replies from: JoshuaZ
comment by JoshuaZ · 2010-10-26T15:51:58.979Z · LW(p) · GW(p)

Well, no. You can't put a body at liquid helium temperatures without massive damage (the whole vitrification trick doesn't work as well). And liquid helium is also much harder to get and work with than liquid nitrogen; helium is in general much rarer. Radiation shielding will also only help with background radiation. It won't help much with radiation from C-14 decay or potassium decay: since both are in your body naturally, there's not much to be done about them.

Replies from: cupholder, lsparrish
comment by cupholder · 2010-10-27T17:34:35.409Z · LW(p) · GW(p)

I don't know so much about C-14, but wouldn't potassium decay's effects be small on timescales ~10,000 years? The radioactive natural isotope K-40 has a ridiculously long half life (1.25 billion years, which is why potassium-argon dating is popular for dating really old things) and only composes 0.012% of natural potassium. Potassium's also much less abundant in the body than carbon - only about 140g of a 70kg person is potassium, although admittedly it might be more concentrated in the brain, which is the important part.

ETA - I did calculations, and maybe there is a problem. Suppose 0.012% of K is K-40 by mass. Then I get 0.0168 grams of K-40 in a body, which comes out as 0.00042 moles, 2.53e20 K-40 atoms. With a 1.25 billion year half life that makes 1.40e15 decays after 10,000 years. In absolute terms that's a lot of emitted electrons and positrons. I don't know whether the absolute number (huge) or the relative number (minuscule) is more important though.
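The arithmetic above checks out; a small sketch reproducing it (using the comment's own assumptions of 140 g of body potassium and a 0.012% K-40 mass fraction):

```python
K_IN_BODY_G = 140.0              # potassium in a 70 kg body, as assumed above
K40_MASS_FRACTION = 0.012 / 100  # K-40 share of natural potassium, by mass
K40_MOLAR_MASS_G = 40.0          # g/mol
AVOGADRO = 6.022e23
HALF_LIFE_YR = 1.25e9            # K-40 half-life
ELAPSED_YR = 10_000

atoms = K_IN_BODY_G * K40_MASS_FRACTION / K40_MOLAR_MASS_G * AVOGADRO
decays = atoms * (1 - 2 ** (-ELAPSED_YR / HALF_LIFE_YR))
print(f"{atoms:.3g} atoms, {decays:.3g} decays")  # ~2.53e20 atoms, ~1.40e15 decays
```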

Replies from: JoshuaZ
comment by JoshuaZ · 2010-10-27T17:51:56.007Z · LW(p) · GW(p)

I don't have enough background to estimate how serious the decay would be. But 1.40e15 decays over 10,000 years works out to around 4,400 decay events a second (and, given the 1.25-billion-year half-life, that rate is essentially constant over such a short interval rather than tailing off). Part of the issue also is that there's no repair mechanism. When something is living it can take a fair bit of radiation with minimal negative results. In some circumstances living creatures can even benefit from low levels of radiation. But radiation is going to be much more damaging to cells when they can't engage in any repairs.

Edit: Also note that the majority of the radiation that people are subject to is from potassium-40, so if this is ok then we're generally ok. It seems that radiation is not a major limiting factor on long-term cryonic storage.

Replies from: rwallace, cupholder
comment by rwallace · 2010-10-29T15:50:18.953Z · LW(p) · GW(p)

It's true that radiation is more damaging to cells when they can't engage in repairs. But damage is nothing to worry about in this case. When e.g. a gamma ray photon breaks a protein molecule, that molecule is rendered nonfunctional; enough such events will kill a cell. But in the context of cryonics, a broken molecule is as good as an intact one provided it's still recognizable. Rendering it impossible to tell what the original molecule was, would take far more thorough destruction.

From Wikipedia, "The worldwide average background dose for a human being is about 2.4 millisievert (mSv) per year." Even a lethal prompt dose is a couple of thousand times this quantity. And you can take maybe 10 times the lethal dose and still be conscious for a little while. So that's 20,000 years of background radiation verified to not even significantly damage, let alone erase, the information in the brain. I'd be surprised if the timescale to information theoretic death by that mechanism was very much less than a billion years.
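The timescale implied by that argument can be sketched quickly (the 2.4 mSv/year figure is the one quoted above; the ~5 Sv lethal prompt dose is an assumed round number, roughly the "couple of thousand times" multiple mentioned):

```python
BACKGROUND_SV_PER_YEAR = 2.4e-3  # worldwide average background dose, from above
LETHAL_PROMPT_SV = 5.0           # assumed round figure, ~2000x the annual background
SURVIVABLE_MULTIPLE = 10         # "maybe 10 times the lethal dose"

years = SURVIVABLE_MULTIPLE * LETHAL_PROMPT_SV / BACKGROUND_SV_PER_YEAR
print(round(years))  # ~20833, i.e. the ~20,000 years quoted above
```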

comment by cupholder · 2010-10-27T21:42:41.153Z · LW(p) · GW(p)

The lack of an automatic repair mechanism makes things hairier, but while frozen, the radiation damage will be localized to the cells that get hit by radiation. By the time you get the tech to revive people from cryonic freezing, you'll most likely have the tech to fix/remove/replace the individual damaged cells before reviving someone. I think you're right that radiation won't be a big limiting factor, though it may be an annoying obstacle.

comment by lsparrish · 2010-10-26T17:04:06.150Z · LW(p) · GW(p)

Ok, not so trivial. The isotope breakdown issue might be unsolvable (unless you have nanobots to go in and scrub out the unstable isotopes?), but I would imagine that dose to be quite a bit less than what you get from solar incidence. Liquid helium cooling doesn't seem like it would cause information-theoretic damage, just additional cracking. Ice crystal formation is already taken care of at this point.

But liquid helium level preservation tech really does not seem likely to be needed, given how stable LN2 already gets you. The only reason to need it is if technological progression starts taking a really long time.

Replies from: JoshuaZ
comment by JoshuaZ · 2010-10-26T23:06:32.912Z · LW(p) · GW(p)

If you've got good enough nanobots to remove unstable isotopes, you almost certainly have the tech to do full-out repair. I don't know if the radiation is less than what you get from solar incidence. I suspect it is, but I also suspect that in a typical underground environment much more of the radiation will come from one's own body than from the sun.

Cracking can include information-theoretic damage if it mangles the interfaces at synapses badly enough. We don't actually have a good enough understanding of how the brain stores information to make more than very rough estimates. And cracking is also a problem for cryonics proponents who don't self-identify with a computerized instantiation of their brain.

comment by multifoliaterose · 2010-10-25T00:35:17.517Z · LW(p) · GW(p)

Good post, upvoted.

I think that your remark

But the fact that we don't know what exact point is good enough is sufficient to make this a worthwhile endeavor at as early a point as possible. It doesn't require optimism -- it simply requires deliberate, rational action.

assumes a utility function which may not be universal. In particular, at present I feel that the value of my personal survival into transhuman times is dwarfed by other considerations. But certainly your points are good ones for people who place high value on personally living into transhuman times to bear in mind.

Replies from: XiXiDu
comment by XiXiDu · 2010-10-25T10:31:28.232Z · LW(p) · GW(p)

Indeed, I always feel there is too much "ought" on LW. After all, rationality is about winning, and if I don't care that much about my personal survival, then cryonics is a waste of money.

comment by David_Gerard · 2010-10-25T19:03:23.383Z · LW(p) · GW(p)

Although it's not marked as the inspiration, this post comes straight after an article by many-decades cryonicist Charles Platt, which he wrote for Cryonics magazine but which was rejected by the Alcor board:

Cryoptimism Part 1 Part 2

Platt discusses what he sees as the dangerously excessive optimism of cryonics, particularly with regard to financial arrangements: because money shouldn't be a problem, people behave as though it therefore isn't a problem, when it appears clear that it is. To quote:

In fact their determination to achieve and defend their goal results in optimism that I think is so intense, I'm going to call it cryoptimism, which I might define as rampant optimism flavored with a dose of hubris and a dash of megalomania, sustained by fear of oblivion.

The above post may make more sense considered as a response to Platt's article.

Replies from: lsparrish, ciphergoth, Perplexed
comment by lsparrish · 2010-10-25T19:20:43.641Z · LW(p) · GW(p)

The above post may make more sense considered as a response to Platt's article.

Correct. I will add the links in the article.

comment by Paul Crowley (ciphergoth) · 2010-10-26T18:46:04.410Z · LW(p) · GW(p)

Hey David, welcome to Less Wrong, and thanks for the link to these articles!

Replies from: David_Gerard
comment by David_Gerard · 2010-10-26T19:00:04.831Z · LW(p) · GW(p)

Count yourself as having other-optimised ;-p

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-10-26T22:12:29.678Z · LW(p) · GW(p)

Marvellous :-) Does that mean you've started looking at the Sequences?

Replies from: David_Gerard
comment by David_Gerard · 2010-10-27T11:15:49.067Z · LW(p) · GW(p)

Glanced over them. I started with the Intuitive Explanation and my brain slid off it repeatedly. I fear that if that's the "intuitive" explanation, then all the merely quite bright people are b*ggered. Needs rewriting for the merely quite bright, as opposed to the brilliant. This is what I meant about how, if you have a pitch, it better target the merely quite bright if you have any serious interest in spreading your favoured ideas.

This ties into my current interest, books that eat people's brains. I'm increasingly suspecting this has little to do with the book itself. I realise that sentence is condensed to all but incomprehensibility, but the eventual heartbreaking work of staggering genius will show a lot more of the working.

Replies from: Risto_Saarelma, ciphergoth
comment by Risto_Saarelma · 2010-10-27T14:46:39.704Z · LW(p) · GW(p)

I fear that if that's the "intuitive" explanation, then all the merely quite bright people are b*ggered. Needs rewriting for the merely quite bright, as opposed to the brilliant.

Writing accessible math stuff is pretty hard, since sometime after you've figured out the basic math, you go blind to what was difficult in the first place. I suppose you'd need a continuous supply of math-naive test readers with whom you could A/B test successive iterations of the text, trying to find the version that presents the easiest path to actually making the content comprehensible.

I'm having trouble coming up with any articles that present some abstract mathematical concept in anything resembling the form actual mathematicians work with, yet wouldn't be pretty tough to work through for someone with no prior experience learning and working with abstract math concepts. Maybe it's just hard to do.

On a quick glance, the intuitive explanation article seems several times longer than people who would want to get a quick idea about what all the Bayes stuff is about would be prepared to read.

This ties into my current interest, books that eat people's brains. I'm increasingly suspecting this has little to do with the book itself.

I'm guessing this refers to books that start cults, not just books that will consume limitless amount of brainpower if you let them? In any case, quite interested in hearing more about this.

Replies from: David_Gerard
comment by David_Gerard · 2010-10-27T16:06:17.493Z · LW(p) · GW(p)

On a quick glance, the intuitive explanation article seems several times longer than people who would want to get a quick idea about what all the Bayes stuff is about would be prepared to read.

That's another factor. But I just couldn't get a feel for the numbers in the breast cancer example. This is despite my finding Bruce Schneier's analogous numbers on why security theatre is actively damaging [couldn't find the link, sorry] quite comprehensible.

(I certainly used to know maths. Did the Olympiad at high school. Always hated learning maths despite being able to, though, finally beaching about halfway through second-year engineering maths twenty years ago. I recently realised I've almost completely forgotten calculus. Obviously spent too long on word-oriented artistic pursuits. I suppose it's back to the music industry for me.)

As someone who is definitely smart but has adopted a so far highly productive life strategy of associating with people who make me feel stupid by comparison, I am happy to be a test stupid person for these purposes.

I'm guessing this refers to books that start cults, not just books that will consume limitless amount of brainpower if you let them? In any case, quite interested in hearing more about this.

More a reference to how to cure a raging memetic cold. Cults count (I am/was an expert on Scientology), Ayn Rand sure counts (this being the example that suggests a memetic cold is not curable from the outside and you have to let the disease run its course). What struck me was that quite innocuous works that I don't get a cold from have clearly caused one in others.

"Memetic cold": an inadequate piece of jargon I made up to describe the phenomenon of someone who gets a big idea that eats their life. As opposed to the situation where someone has a clear idea but is struggling to put it into words, I'm not even entirely sure I'm talking about an actual phenomenon. Hence the vagueness.

Possible alternate term: "sucker shoot" (Florence Littauer, who has much useful material but many of whose works should carry a "memetic hazard" warning sign). It's full of apparent life and vitality, but sucks the energy out of the entire rest of your life. When you get an exciting new idea and you wake up a year later and you've been evicted and your boyfriend/girlfriend moved out and your friends look at you like you're crazy because that's the external appearance. Or you don't wake up and you stay a crank. The catch then is when the idea is valid and it was all worth it. But that's a compelling trope because it's not the usual case.

I just looked over my notes and didn't entirely understand them, which means I need to get to work if this is ever to make coherent sense and not just remain a couple of tantalising comments and an unreleased Google doc.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2010-10-27T16:46:24.393Z · LW(p) · GW(p)

Another thing re. the Bayesian explanation. It's probably quite a bad place to start reading LW content. It seems to really aim to get the reader to be able to do the math instead of just presenting the general idea. I find the newer sequence stuff a lot more approachable. Haven't ever bothered to go through the Bayes article myself.

The memetic cold thing is interesting, in particular, like you said, because there isn't a foolproof way of telling if the all-consuming weird preoccupation is fundamentally flawed, on the right track but most likely going to fail, or going to produce something genuinely useful. Recognizing mathematics in general as something other than a memetic cold before mathematics established itself as something useful might not have been entirely easy, and there's still the tension between math obsession that gets you the Fields medal, math obsession that makes you write angry handwritten letters about your disproof of Cantor's diagonalization argument for decades, and math obsession that makes you correlate numerological sums of the names of Fortune 500 CEOs with charts made from the pages of the Hebrew Bible to discover the identity of the Antichrist.

This also reminds me a bit of Robert Pirsig, both in the Motorcycle book and in Lila. Pirsig talks about the difficulty of discerning good stuff from bad stuff when the stuff goes outside a preset framework to evaluate it in, describes his personal sucker shoot episodes, and the books have probably started more than a few memetic colds themselves.

Replies from: David_Gerard
comment by David_Gerard · 2010-10-27T17:01:45.886Z · LW(p) · GW(p)

You can get a bad memetic cold by deliberately compromising your memetic immune system: decompartmentalising too aggressively, getting a not quite so magical click and it all becomes terribly clear: the infidel must die!

That's an extreme failure mode of decompartmentalisation, of course. (Some lesser ones are on RationalWiki: Engineers and woo.) But when you see a new idea and you feel your eyes light up with the realisation that it's compelling and brilliant, that’s precisely the time to put it in a sandbox.

Maybe. I'm not sure yet. It feels a bit like deliberate stupidity. On the other hand, we live in a world where the most virulent possible memes are used to sell toothpaste. Western civilisation has largely solved food and shelter for its residents, so using infectious waste as a token of social bonding appears to be what we do with the rest of our lives.

comment by Paul Crowley (ciphergoth) · 2010-10-27T11:31:53.246Z · LW(p) · GW(p)

The video lecture of mine you've found also includes a brief introduction to Bayes' Theorem.

comment by Perplexed · 2010-10-25T19:30:53.308Z · LW(p) · GW(p)

The above post may make more sense considered as a response to Platt's article.

If interpreted in that way, it fails completely. It doesn't respond in any way to Platt's argument that the cryonics industry does not have the financial resources to deliver on its promises, and that the shortfall gets larger as more people sign up.

lsparrish simply advises people to ignore this and to optimistically sign up anyways. Since lsparrish does not seem to be irrational, I have to assume he is not attempting to respond to Platt.

Edit: Whoops. Bad assumption.

Replies from: lsparrish
comment by lsparrish · 2010-10-25T20:15:14.195Z · LW(p) · GW(p)

I should clarify that it was not his main point about shortfalls due to signups, but the peripheral point about cryonics being optimistic that I was replying to. I disagree with his main point to a limited degree, i.e. I consider it probable that Alcor is not going to go bankrupt, though I recognize the need to be alert to the possibility.

As he said, money has shown up in the past from wealthy donors who don't want it to fail. I'm not upset at the inequity there because the donors are purchasing social status, and I don't have a problem with paying slightly more (or, if I can afford it, a lot more) to help cover someone else's expenses. (I am more open to socialistic logic than most current cryonicists.)

comment by lionhearted (Sebastian Marshall) (lionhearted) · 2010-10-25T09:57:26.638Z · LW(p) · GW(p)

After reading Eliezer on it, I resolved to sign up for cryonics, but I figured I'd wait until I had a more stable lifestyle. I'm currently traveling through Asia - Saigon, Vietnam right now, Kuala Lumpur, Malaysia next. I figure if the lights go off while I'm here, it's not particularly likely I'd make it to a cryonics facility in reasonable time.

Also, it's the kind of thing I'd like to research a bit, but I know that's a common procrastination technique so I'm not putting too much weight on that.

comment by Jonii · 2010-10-25T16:11:33.075Z · LW(p) · GW(p)

Nice post, though it avoided the reason why I don't intend to get cryopreserved. That is, because it's way too expensive.

comment by Mitchell_Porter · 2010-10-26T04:41:30.169Z · LW(p) · GW(p)

I think cryonics is a waste of money unless you want to make living copies of a dead person or otherwise have a reason to preserve information about the dead. Cryonics does not prevent your death; it merely prevents the destruction of the leftovers.

Replies from: Risto_Saarelma, lsparrish, JoshuaZ
comment by Risto_Saarelma · 2010-10-26T07:55:58.823Z · LW(p) · GW(p)

You come off as assuming that the people in this thread are not aware of the personal identity debate. That doesn't really strike me as productive.

David Chalmers did go into this in his Singularity analysis paper. In chapter 10, he basically noted that both the standard stances lead into unintuitive results in scenarios that seem physically possible.

The interesting thing about the personal identity discontinuity stance is to imagine growing up in a world where reviving people from something that very obviously halts all their bodily functions for a nontrivial duration, such as cryonics or uploading, is commonplace. All your life you see people getting killed, having their heads vitrified, and then the mind-states getting reinstated in new bodies, and these new people are in all appearances the same as the ones that died.

How would people develop the intuition that the metabolic cessation leads to personal death, and the revived people are new individuals with false memories, in this world?

Replies from: Mitchell_Porter, DanielLC
comment by Mitchell_Porter · 2010-10-26T11:30:42.713Z · LW(p) · GW(p)

You come off as assuming that the people in this thread are not aware of the personal identity debate. That doesn't really strike me as productive.

I know the debate exists, I just think the wrong side is winning (in this little corner of the Internet).

These discussions usually occur in an atmosphere where there is far more presumption than there is knowledge, regarding the relationship between the physical brain and elements of personhood like mind, consciousness, or identity.

The default attitude is that a neuron is just a sort of self-healing transistor, that the physical-computational reality in the brain that is relevant for the existence of a person is a set of trillions of physically distinct but causally connected elementary acts of information processing, that the person exists in or alongside these events in some vague way that is not completely specified or understood, and that so long as the cloud of information processing continues in a vaguely similar way, or so long as it is instantiated in a way with vaguely similar causality, the person will continue to exist or will exist again, thanks to the vague and not-understood principle of association that links physical computation and self.

The view of the self which naturally arises subjectively is that it is real and that it persists in time. But because of the computational atomism present in the default attitude (described in the previous paragraph), and also because of MWI dogma and various thought-experiments involving computer programs, the dominant tendency is to say, the natural view is naive and wrong, there's no continuity, you are as much your copies and your duplicates elsewhere in the multiverse as you are this-you, and so on ad infinitum, in a large number of permutations on the theme that you aren't what you think you are or what you appear to be.

Another reason that people are willing to sacrifice the natural view wholesale has to do with attachment to the clarity that comes from mathematical, physical, and computational concepts. These concepts also have an empirical side to their attraction, in that science has validated their use to a certain degree. But I think there is also a love of the types of objective thought that we already know how to perform, a love of them for their own sake, even when they do not easily mesh with something we would otherwise consider a reality. An example would be the flow of time. If there were no mathematical physics which treats time as just another sort of space, we would say, of course time passes; it may be a little mysterious, a little hard to describe (see St Augustine), but of course it's real. But since we have geometric models of space-time which are most easily apprehended in a timeless or static fashion (as a single self-existent geometric object), the flow of time is relegated to consciousness, subjectivity, illusion, even denied outright.

The matter-of-fact belief in uploading (in the sense that a digital emulation of your brain will be conscious and will be you), in cryonics, and a few of the other less baroque permutations of identity, doesn't necessarily derive from this quasi-platonic absorption in abstract objectivity. It can also come from "thought-experiment common-sense", as in your scenario of a world where cryonics or uploading is already common. But the abstract attachment to certain modes of thought is a significant factor in this particular intellectual environment, and in discussions where there is an attempt to think about these matters from first principles. And apparently it needs to be pointed out that everything which is "subjective", and which presents a problem for the standard picture (by which I mean, the combination of computational atomism and physical ontology), is quickly discarded as unreal or is treated as a problem that will take care of itself somehow, even though the standard picture is incredibly vague regarding how identity, mind, or consciousness relates to the physical and biological description.

I lean in the other direction, because the standard picture is still very vague, and also because subjective appearances have evidential value. We have every reason to look for models of reality in which time is real, the self is real, and so on, rather than bravely biting the naturalistic bullet and asserting that all that stuff is somehow unreal or otherwise ontologically secondary. Last year I suggested here that the exact physical correlate of the self might be a specific, very large irreducible tensor factor in the quantum state of the brain. That wasn't a very popular idea. There's also the idea that the self is a little more mesoscopic than that, and should be identified with a dynamic "functional organization" glued together more by causality than by anything else. I think this idea is a little fuzzy, a little problematic, but it's still superior to the patternist theories according to which physical continuity in space and time has nothing to do with identity; and perhaps you can see that on this theory also, a destructive upload or a cryonic resurrectee isn't you - because you were the particular dynamical process which terminated when you died, and the upload and the cryo-copy are distinct processes, with a definite beginning in time which came long after the end of the earlier process on which they were modeled.

You ask how, in a world where cryonic suspension and mind uploading are commonplace, a person could arrive at the intuition that identity does not persist across these transformations. All it would take is the knowledge that a self is an entity of type X, and that in these transformations, one such entity is destroyed and another created. These questions won't remain unanswered forever, and I see no reason to think that the final answers will be friendly to or consistent with a lax attitude towards personal identity. To be is to be something, and inside appearances already tell us that each of us is one particular thing which persists in time. All that remains is to figure out what that thing is, from the perspective of natural science.

Replies from: Risto_Saarelma, cousin_it, NihilCredo
comment by Risto_Saarelma · 2010-10-26T18:27:39.026Z · LW(p) · GW(p)

Thank you for the thoughtful reply. I see the problem with overly cavalier attitudes to personal identity as a reproducible pattern, but I'm not quite willing to let the continuous process view off the hook that easily either.

It seems awfully convenient that your posited process of personal identity survives exactly those events, blinking one's eyes, epileptic fits, sleep, coma, which are not assumed to disrupt personal identity in everyday thought. Unless we have some actual neuroscience to point to, all I know is that I'm conscious right now. I don't see why I should assume that whatever process creates my conscious feeling of self is tied exactly to the layer of physical metabolism. It could be dissolved and created anew several times a second, or it could vanish whenever I lose consciousness. I (or the new me) would be none the wiser, since the consciousness going away or coming into being doesn't feel like anything by itself. Assuming that the continuity of a physical system is indeed vital, it could be that it's tied to cellular metabolism in exactly the convenient way, but I'm not buying an argument that seems to basically come down to an appeal to common sense.

This is also why I'm a bit wary of your answer to the thought experiment. I'm not entirely sure how the process of discovery you describe would happen. Suppose that people today do neuroscience, and identify properties that seem to always be present in the brains of awake, alert and therefore supposedly conscious people. These properties vanish when the people lose consciousness. Most likely scientists would not conclude that the dissolution of this state intrinsically tied to consciousness means that the subject's personal identity is gone, since common-sense reason assures us that we retain our personal identity through unconsciousness. I don't see any way of actually knowing this though. Going to sleep and waking up would feel exactly the same for the sleep-goer and up-waker whether or not the unconsciousness caused a destruction and reconstruction of personal identity. I assume that people living in a society with ubiquitous revival from zero metabolism would have similar preconceptions about the revival. The situation is of course likely to be different once we have a better understanding of exactly how the brain works, but lacking that understanding, I'm having some trouble envisioning exactly how the destruction of personal identity could be determined to be intractably tied to the observed entity X.

Finally there's the question of exactly what it means for a physical system to be intrinsically tied to being continuous in space-time. I can't think of any phenomenon in classical mechanics where I could point to any property of the system that would be disrupted if the system got disassembled and reassembled mid-evolution. There may be something like that in quantum physics though, I don't have much intuition regarding that.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-10-29T11:59:04.773Z · LW(p) · GW(p)

It seems awfully convenient that your posited process of personal identity survives exactly those events, blinking one's eyes, epileptic fits, sleep, coma, which are not assumed to disrupt personal identity in everyday thought.

The philosophical habit of skeptically deconstructing basic appearances seems to prepare people badly for the task of scientifically understanding consciousness. When considering the relationship between mind and matter, it's a little peculiar to immediately jump to complicated possibilities ("whatever process creates my conscious feeling of self ... could be dissolved and created anew several times a second") or to the possibility that appearances are radically misleading (consciousness might be constantly "going away or coming into being" without any impact on the apparent continuity of experience or of personal existence). Just because there might be an elephant around the next corner doesn't mean we should attach much significance to the possibility.

I'm not entirely sure how the process of discovery you describe would happen... The situation is of course likely to be different once we have a better understanding of exactly how the brain works, but lacking that understanding, I'm having some trouble envisioning exactly how the destruction of personal identity could be determined to be intractably tied to the observed entity X.

It is unlikely that society would develop the capacity for mind uploading and cryonic resurrection without also coming to understand, very thoroughly, how the brain works. We may think we can imagine these procedures being performed locally in the brain, with the global result being achieved by brute force, without a systemic understanding. But to upload or reanimate you do have to know how to put the pieces back together, and the ability to perform local reassembly of parts correctly, in a physical or computational sense, also implies some ability to perform local reassembly conceptually.

In fact it would be reasonable to argue that without a systemic understanding, attempts at uploading and cryonic restoration would be a game of trial and error, producing biased copies which deviate from their originals in unpredictable ways. Suppose you use high-resolution fMRI time series to develop state-machine simulations of microscopic volumes in the brain of your subject (each such "voxel" consisting of a few hundred neighboring neurons). You will be developing a causal model of the parts of the subject's brain by analysing the time series. It's easy to imagine the analysis assuming that interactions only occur between neighboring voxels, or even next-nearest neighbors, and thereby overlooking long-range interactions due to long axonal fibers. The resulting upload will have lost some of the causal structure of its prototype.

The possibility of elementary errors like this, to say nothing of whatever more subtle mistakes may occur, implies that we can't really trust procedures like this without simultaneously developing that "better understanding of exactly how the brain works".

I can't think of any phenomenon in classical mechanics where I could point to any property of the system that would be disrupted if the system got disassembled and reassembled mid-evolution.

How about the property of being an asymptotically bound system, in the absence of active disassembly by external forces? To me that still seems way too weak to be the ontological basis of physical identity, but that is (more or less) the philosopher Mario Bunge's definition of systemhood. (Btw, Bunge was a physicist before he was a philosopher.)

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2010-10-30T07:28:05.826Z · LW(p) · GW(p)

The philosophical habit of skeptically deconstructing basic appearances seems to prepare people badly for the task of scientifically understanding consciousness. When considering the relationship between mind and matter, it's a little peculiar to immediately jump to complicated possibilities

It wasn't philosophers who came up with general relativity and quantum mechanics when everyday intuition about nature didn't quite add up in some obscure corner cases. Coming up with a simple model that seems to resolve contradictions even if it doesn't quite fit everyday intuition seems to be useful in gaining a better understanding of things.

I'm also having genuine difficulties going anywhere past the everyday intuition with the idea of the discontinuity of personal identity separate from the discontinuity of mindstate. The idea of there being only a sequence of conscious moments instead of an intrinsic continuity doesn't present any immediately obvious contradiction, and doesn't have the confusion of exactly what the mindstate-independent component of continuous personal identity is really supposed to be.

Of course going with the mindstate history view, now the difference becomes the sliding scale of possible differences from the previous state. It looks like personal continuity would become a matter of degree rather than a binary thing, which pushes things further into the unintuitive.

I can't think of any phenomenon in classical mechanics where I could point to any property of the system that would be disrupted if the system got disassembled and reassembled mid-evolution.

How about the property of being an asymptotically bound system, in the absence of active disassembly by external forces?

I'm afraid I don't understand what that means. Can you give more concrete examples of physical things that do or don't have this property?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-10-31T13:31:07.123Z · LW(p) · GW(p)

The idea of there being only a sequence of conscious moments instead of an intrinsic continuity doesn't present any immediately obvious contradiction

It contradicts the experience of time passing - the experience of change. The passage of time is an appearance, and an appearance is something stronger than an intuition. An intuition is a sort of guess about the truth, and may or may not be true, but one normally supposes that appearances definitely exist, at least as appearances. The object implied by a hallucination may not exist, but the hallucination itself does exist. It is always a very radical move to assert that an alleged appearance does not exist even on the plane of appearance. When you deny the existence of a subject which persists in time and which experiences time during that persistent existence, you are right on the edge of denying a fundamental appearance, or perhaps over the edge already.

Normally one supposes that there is an elemental experience of time flowing, and that this experience itself exists in time and somehow endures through time. When you disintegrate temporal experience into a set of distinct momentary experiences not actually joined by temporal flow, the most you can do to retain the appearance of flow is to say that each momentary experience has an illusion of flow. Nothing is ever actually happening in consciousness, but it always looks like it is. Consciousness in every moment is a static thing, but it always has an illusion of change embedded in it. (I suppose you could have a wacky theory of dynamic momentary experiences, whereby they're all still distinct, but they do come into and then go out of existence, and the momentary appearance of flow is somehow derived from this; the illusion would then be the illusion of persistent flow.)

To sum up, it's hard to have an actual experience of persistent flow without actually persisting. If you deny that, then either the experience of persistence or the experience of flow has to be called an illusion. And if one becomes willing to assert the persistence of the perceiver, the one having the experience, then there's no particular reason to be minimalist about it - which I think would be the next step up for someone retreating from a position of temporal atomism. "OK, when I'm aware that time is passing, maybe it's likely that I persistently exist throughout that experience. But what about when I'm just in the moment, and there's a gap in time before I contrast the present with the past via memory? How do I know that there was continuity?" The simplest interpretation of this is to say that there was continuity, but you weren't paying attention to it.

How about the property of being an asymptotically bound system, in the absence of active disassembly by external forces?

I'm afraid I don't understand what that means. Can you give more concrete examples of physical things that do or don't have this property?

Consider two gravitating objects. If they orbit a common center of gravity forever, we can call that asymptotically bound; if they eventually fly apart and become arbitrarily distant, they are asymptotically free. You could start with a system which, in the absence of perturbing influences, is asymptotically bound; then perturb it until it became asymptotically free, and then perturb it again in order to restore asymptotic boundedness.

comment by cousin_it · 2010-10-26T11:43:49.050Z · LW(p) · GW(p)

I'm curious: do you consider sleeping, or falling unconscious after hitting your head, to be as deadly as cryonics?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-10-26T11:50:24.717Z · LW(p) · GW(p)

No. See this discussion, including the spinoff discussion with a now-invisible Roko.

comment by NihilCredo · 2010-10-26T13:13:07.068Z · LW(p) · GW(p)

Beautifully put. Thank you.

comment by DanielLC · 2010-10-29T22:36:45.117Z · LW(p) · GW(p)

imagine growing up in a world where reviving people from something that very obviously halts all their bodily functions for a nontrivial duration, such as cryonics or uploading, is commonplace.

I go to sleep every night. It doesn't halt my bodily processes, but I become less intelligent than animals. People largely don't seem concerned with animals dying, so the logical conclusion is that someone reduced to such low intelligence is effectively already dead.

comment by lsparrish · 2010-10-26T15:32:56.360Z · LW(p) · GW(p)

This argument is weird, because it implies that you are 100% willing to consider clinical death as 100% dead with no chance of being wrong. What gives you such incredible confidence in your ability to judge the situation correctly at this early stage in the game?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-10-29T12:07:45.065Z · LW(p) · GW(p)

This isn't confidence in present-day criteria of clinical death, it's confidence that completely freezing your brain breaks whatever form of continuity is responsible for persistence of personhood through ordinary life changes. I'm not 100% sure about it, but that is a radically pulverizing transformation compared to just about anything that a brain can live through. Making a new brain in the image of the old brain and from the pieces of the old brain doesn't change that.

Replies from: lsparrish, lsparrish
comment by lsparrish · 2010-10-29T15:40:40.896Z · LW(p) · GW(p)

First off, if you can't be very nearly 100% sure of failure, you should do cryonics anyway -- as long as the expected value of survival (the probability of success times the value of your life) is greater than the cost. Even if you are 99% sure cryonics would fail, you should still be willing to bet up to $50,000 on it if your life is worth $5 million.

Second off, your argument seems to include damage and suspension under the same umbrella. Suspension as a problem for personhood doesn't make much sense, unless you are willing to admit to there being a real risk that people who undergo extreme hypothermia also actually reanimate as a different person.

Third, repair scenarios as a risk to personhood make sense only if you apply the same criteria to stroke, dementia, and trauma victims who would benefit from similar extreme advances in brain repair technology.
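The bet-sizing arithmetic in the first point can be sketched as follows; the 1% success probability and $5 million life valuation are the comment's illustrative figures, not established estimates:

```python
def max_worthwhile_cost(p_success: float, value_of_life: float) -> float:
    """Highest price worth paying under a simple expected-value rule:
    signing up is worthwhile whenever cost < p_success * value_of_life."""
    return p_success * value_of_life

# Illustrative assumptions: 99% sure cryonics fails => 1% chance of
# success, and a life valued at $5 million.
print(max_worthwhile_cost(0.01, 5_000_000))  # 50000.0
```

This is the same expected-value rule used for any long-odds bet; the disagreement in the thread is over the probability input, not the arithmetic.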

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-10-31T13:57:07.366Z · LW(p) · GW(p)

As a first approximation to the true ontology of personhood, I'm going to talk about three degrees of persistence. First, persistence of your current stream of consciousness. Second, continuous existence in time of your physical or ontological self even through interruptions to consciousness. Finally, creation of a new self which is a duplicate or approximation to an earlier self which was destroyed.

I consider it very likely that the self - whatever that is - exists continuously throughout any interval of uninterrupted consciousness. I consider it unlikely but remotely possible that sleep, and total unconsciousness in other forms, is death to this self, and that each time we awake we are literally a new conscious being, fooled into thinking that the experiences of its predecessors also happened to it because it has their memories. I consider it far more likely that the self - whatever that is - lasts a lifetime, and corresponds to some form of brain activity which persists continuously even in deep sleep or coma, but which will certainly be terminated or at least interrupted by something as radical as cryopreservation. I consider it unlikely but remotely possible that some form of reanimation from current techniques of cryopreservation constitutes second-tier persistence - an interruption to consciousness but not an interruption to basic identity - but I think it far more likely that reanimation from such a condition will require the creation of something that by physical criteria is a new self.

I'm not totally against seeking third-tier persistence but that's not what I would mean by immortalism.

Replies from: lsparrish
comment by lsparrish · 2010-10-31T19:49:10.679Z · LW(p) · GW(p)

Are you positing that actual information or structure would be lost if your brain dropped below a certain temperature and hence ceased all molecular and electrical activity for a time? I'm not sure what kind of empirical results you are expecting to see that differ from the empirical results I would expect to see.

Replies from: wedrifid, Mitchell_Porter
comment by wedrifid · 2010-10-31T19:53:04.595Z · LW(p) · GW(p)

Don't mind Mitchell. He believes consciousness to be some kind of magic.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-11-02T01:41:28.940Z · LW(p) · GW(p)

Remind me again how it works, then?

comment by Mitchell_Porter · 2010-11-02T01:38:55.671Z · LW(p) · GW(p)

"Information loss" is not the issue. The issue is whether creation of a copy of you counts as survival, and whether cryonics can produce anything but a copy.

Replies from: lsparrish
comment by lsparrish · 2010-11-02T02:09:19.283Z · LW(p) · GW(p)

In the absence of any empirically different prediction between the hypothesis that you survived versus a mere copy of you survived, how do we decide who wins the argument either way? Aren't we just debating angels dancing on the head of a pin here?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-11-02T02:33:13.094Z · LW(p) · GW(p)

In the absence of any empirically different prediction between the hypothesis that you survived versus a mere copy of you survived, how do we decide who wins the argument either way?

We can start by asking, what are you? And once we have an answer, we can see whether the entity on the other side of a resurrection process is you or not.

Replies from: Perplexed
comment by Perplexed · 2010-11-02T02:42:12.860Z · LW(p) · GW(p)

What is it like, to be a copy?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-11-02T02:50:22.799Z · LW(p) · GW(p)

Do you consider every entity anywhere in reality that feels like you, to be you?

Replies from: Perplexed
comment by Perplexed · 2010-11-02T02:57:02.971Z · LW(p) · GW(p)

No. In fact, I rather resent the fact that those other entities that feel like me actually have the nerve to claim to be me. I encountered one of them recently and gave him a piece of my mind. But it didn't seem to make any difference.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-11-02T04:14:04.361Z · LW(p) · GW(p)

Which piece ... never mind.

Returning to the original discussion: Do you actually have an opinion or a point of view about it? If you were killed, and then resurrected from a backup, did you survive? If it were possible, would you consider making a personal backup a priority, for the elemental reason of personal survival? Do you consider cryonic suspension to be something more than a way of making a backup copy of a dead person?

Replies from: Perplexed
comment by Perplexed · 2010-11-02T06:10:08.987Z · LW(p) · GW(p)

I am unsure about the efficacy of cryonics (or uploading). Assuming revival can work reliably (something I am doubtful of), I would estimate about a 30% chance that you are correct (different person) and 70% chance that cryonics enthusiasts are correct (Same person. Woohoo! Immortality!)

Here is a thought experiment you might find interesting though. Imagine "transporter" technology a la Star Trek which works like this: A person who wishes to be transported from Earth to Ganymede is sedated and non-destructively scanned in a laboratory on Earth, and then the information is sent (at lightspeed) to the replicator on Ganymede. There, a new body is constructed, and an electric shock is given to start the heart. Finally, since the sedatives in the bloodstream were replicated as well, the copy on Ganymede is awakened and given a quick medical exam. Assuming all looks well, a signal indicating successful transmission is sent back to Earth. Upon receipt of the message on Earth, the original body is now considered redundant - no longer needed as a backup in case of malfunction - so it is killed by lethal injection and cremated.

So, imagine that you wake up from sedation and notice that you are in Ganymede's weak gravitational field. Great! A successful trip.

Now imagine that you wake up from sedation and notice that you are still feeling Earth's gravity. What happened? Was there some kind of transmission failure, so that you will have to be scanned again if you still want to get to Ganymede? Or was the communication problem in the message from Ganymede to Earth? Did Earth have to request retransmission of the success signal? Uh oh, here comes a doctor with a needle. Is that norepinephrine or cyanide? Or is it another dose of sedative, because they don't know yet and don't want you to be stressed out?

In my story, I can well imagine that the copy on Ganymede believes he truly is the same person that walked into the transporter station a couple hours ago and paid the fare. And I can also imagine that the original on Earth knows that he is the real McCoy and that he is likely to die very soon.

And I guess that I think that they are both right. Would I make use of that method of travel? I might, if I really needed to get to Ganymede in a hurry.

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2010-11-02T06:54:12.276Z · LW(p) · GW(p)

So, more briefly, intuitions about personal identity work when looking backwards in time but sometimes break down when looking forwards in time.

Replies from: Perplexed
comment by Perplexed · 2010-11-02T13:24:28.733Z · LW(p) · GW(p)

I think you are correct that there is a time-direction asymmetry. But ... "intuitions" ... "work" vs "break down"?

I'm not sure we have intuitions, or, if we do, that mine are the same as Mitchell's are the same as yours. And I don't know what criterion we can use to criticise those intuitions.

My opinions regarding "looking backward" are informed by experiences with reconstructing a thread of personal-continuity narrative after sleep, concussion, and high fever. It seems reasonable to assume that memory is the key to our post-hoc reconstruction of the illusion of continuity.

I'm not sure what evidence biases our guesses in the forward-looking direction. I do seem to recall a kind of instinctive terror that I felt as a child at the prospect of undergoing anesthesia for the first time. I was also very afraid the first time I jumped out of an airplane (while in the army). But I swallowed my fear, things seem to have turned out all right, and I had less fear the second time around. I'm guessing the same kinds of things might apply to how I would feel about being "improved" or "rehosted" as an uploaded entity.

comment by lsparrish · 2010-10-29T14:54:19.613Z · LW(p) · GW(p)

Please be more clear. Are you attacking cryonics based on the amount of brain damage, or the fact that the brain undergoes suspended animation?

comment by JoshuaZ · 2010-10-26T15:49:34.338Z · LW(p) · GW(p)

So you don't consider a restored body the same as you? I can see why one might not self-identify with a physical copy, but most cryonicists plan on having their current body restored. Do you not identify with such an entity?

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-10-29T12:18:11.328Z · LW(p) · GW(p)

I identify with some forms of restoration but not others. If I cut off all my hair and then it grows back, I have been "restored" without having ever gone away. If I get vaporized now, and then a trillion years later I happen to live again as a Boltzmann brain, that's clearly a copy. There are many conceivable transformations between those two extremes, and as I said to Risto in my first reply, I believe I am something, and that consequently there is an objective fact as to whether a particular transformation just changes me, or whether it actually destroys me (though a substitute may later be created). I think cryonics (and uploading) fall into this latter category. It's not certain but it's likely.

comment by Thomas · 2010-10-25T06:27:44.702Z · LW(p) · GW(p)

What about - the SAI can resurrect me, no matter how long I will have been dead and how poor my remains will be by then?

comment by timtyler · 2010-10-25T16:26:01.482Z · LW(p) · GW(p)

If you have a fear that cryonics won't work -- good for you! That's a perfectly rational fear. But if that fear immobilizes you and discourages you from taking action, you've lost the game.

The "game" of trying to live for as long as possible?!?

That is a game for Methuselahites - and we should not expect to see very many of those around - since natural selection will have snuffed their ancestors out long ago.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-10-25T19:57:28.843Z · LW(p) · GW(p)

Are you arguing that humans should not try to live as long as possible, because if we were meant to, then Evolution would have made us so?

Sounds a lot like: "if God had meant us to fly, he would have given us wings." What possible relevance does evolutionary pressure for innate drives have when considering whether we want to pursue a goal? Very weak evidence that we may be confused in our desire, and it won't ultimately bring us happiness? Counterexample: repeated sex without reproduction, with the same woman, makes me happy.

Replies from: timtyler
comment by timtyler · 2010-10-25T20:08:36.849Z · LW(p) · GW(p)

My comment was an explanation for why so few are interested in cryonics.

Most humans are just not built to be interested in living for a long time. Such humans are not losing "the game" of living for as long as possible. They were not playing that game in the first place!

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-10-25T21:50:25.763Z · LW(p) · GW(p)

People aren't interested in cryonics because it seems unreliable/speculative, it pattern matches as a religion or scam, and its advocates are too few and too low status. I don't expect any evolutionary inclination to be at play except the desire to survive, which we certainly should expect to have evolved.

Replies from: timtyler
comment by timtyler · 2010-10-25T21:57:25.847Z · LW(p) · GW(p)

Those are very strange expectations in my book. Most people are much more interested in things like relationships, family, loved ones, sex, fertility, status, wealth - and health and fitness - than they are in living for a long time. That is just what evolution 101 would predict.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-10-25T22:32:59.888Z · LW(p) · GW(p)

I agree that it's not necessary for genes' survival for individuals to be long-lived. I don't agree that people want to die.

Replies from: Perplexed
comment by Perplexed · 2010-10-25T23:09:11.730Z · LW(p) · GW(p)

Yet we have had people here advocate jumping off a bridge in front of a trolley if you are fat enough to stop it.

Suppose it could be argued young people create more joy per annum, for themselves and others, than do old people. Suppose (more controversially) that this excess joy over the first thirty years or so of life more than counterbalances the negative joy associated with death (for self and others).

That is, we are assuming that people contribute net positive utility to the world - even when their death after three score and ten is taken into account. Most people would, I believe, assent to this.

Now assume that there is a bound on the total number of people that can be supported comfortably in any milieu. This should be completely obvious given the previous assumption, even in a post-singularity universe. If the milieu is not yet at the carrying capacity, generate more children - don't resurrect more corpsicles!

Given this analysis, a utilitarian seems to have a clear-cut duty not to support cryonics - unless he disagrees that mortal human life is a net plus. And in that case, cryonics should be a lower priority than vasectomy or tubal ligation.

Edit: spelling correction
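[Editorial aside: Perplexed's trade-off can be made concrete with a toy calculation. Every number below is invented purely for illustration; the thread specifies none of them.]

```python
# Toy version of the argument above: does the extra joy of the first
# thirty "wonder years" outweigh the one-time disutility of death,
# per population slot? All figures are hypothetical.

JOY_YOUNG = 3.0    # joy per year for the first 30 years (assumed)
JOY_LATER = 1.0    # joy per year thereafter (assumed)
DEATH_COST = 20.0  # one-time disutility of a death (assumed)

def lifetime_utility(years, dies=True):
    """Total utility generated by one life of the given length."""
    young = min(years, 30) * JOY_YOUNG
    later = max(years - 30, 0) * JOY_LATER
    return young + later - (DEATH_COST if dies else 0.0)

# Compare one capacity-limited population slot over 140 years:
mortals = 2 * lifetime_utility(70)            # two mortal lives in sequence
immortal = lifetime_utility(140, dies=False)  # one immortal occupant

print(mortals, immortal)  # 220.0 vs 200.0: mortals win under these numbers
```

Under this model the conclusion flips whenever `DEATH_COST` exceeds `15 * (JOY_YOUNG - JOY_LATER)` per slot, which is exactly the empirical question the two sides of this thread disagree about.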

Replies from: timtyler, lsparrish, Jonathan_Graehl, timtyler
comment by timtyler · 2010-10-27T03:06:47.289Z · LW(p) · GW(p)

I think the usual idea is to fix aging - so the young would not be more joyful than the old.

Until then, cryonics does seem like a bad move - from society's POV. Having big freezers sitting around doing nothing except burning up fuel serves very little useful purpose to society. Those resources could be going into living scientists or engineers - who would make a more positive contribution to the world.

comment by lsparrish · 2010-10-26T00:35:21.189Z · LW(p) · GW(p)

Your argument doesn't seem to take into account the plausible difference between old dying people and old immortal people.

Replies from: Perplexed
comment by Perplexed · 2010-10-26T00:49:49.126Z · LW(p) · GW(p)

Well, I had in mind a situation in which immortal people maintain a physical age of roughly 50 forever. But that the first 50 years of a person's life are so much better than any succeeding immortal 50 year period so as to make up for the mortal "bad years" from physical 50 to death.

So, I am taking it into account, though perhaps I was insufficiently explicit.

It strikes me as entirely rational to regard death as so terrible, or youth as so angst-ridden, that a world filled with immortals is the ideal. In which case cryonics makes sense. But it certainly is not a slam-dunk judgment. And this judgment is also inconsistent with a lack of activism regarding population limitation in the absence of cryonic revival.

Replies from: lsparrish
comment by lsparrish · 2010-10-26T02:07:47.336Z · LW(p) · GW(p)

It could be argued that while the creation of new children has positive utility (it certainly suits the preferences of the parents, e.g.), it is not anywhere near as high as the continued survival of humans already in existence.

Replies from: Perplexed
comment by Perplexed · 2010-10-26T02:13:34.282Z · LW(p) · GW(p)

Probably not for the humans already in existence. But, given a reasonable life prospect, the utility of being born is pretty high for the child being born. Higher for a neutral onlooker, too, I think.

comment by Jonathan_Graehl · 2010-10-25T23:31:18.544Z · LW(p) · GW(p)

Since we don't know the ultimate limits on human technology (one possibility: we're just too dumb to ever invent AI or FAI; space travel will never be practical; but maybe cryonics is actually easy w/ enough experimentation), it's reasonable to imagine an eventual bound as you discuss.

To further concur, and to counter the obvious objection (tangential to cryonics, IMO) that old people offer no evolutionary benefit to their genes - natural reproductive life ends at 40-60 years for women, and most men stop fathering children perhaps a decade later - there is some benefit to the young in having old people around: advice about infrequent events from the elders' distant experience, care and education from their grandparents and aunts, and the expectation of similar treatment in their own senescence.

Also, utilitarianism aside, I'm certainly selfishly in favor of my own long life regardless of whether on net I'm bringing others utility :) Similarly, I'm selfishly in favor of my own property rights, and the mechanisms in society that enforce them.

Replies from: Perplexed
comment by Perplexed · 2010-10-26T00:22:13.516Z · LW(p) · GW(p)

Actually, space travel is completely irrelevant to the limitation of resources argument. As is FAI. Regardless of how cheap it becomes to transport a corpsicle to Alpha Centauri, it will always be cheaper yet to just make a baby once you get there. And this is true whether we are talking about real or simulated babies.

But I agree that anyone who can put together a trust fund of a few million dollars should have the legal, moral, and economic right to stay frozen as long as they want, and then pay for their own resurrection, if it is technically possible. I might do so myself if I had those millions of dollars and no younger relative that I would prefer to give it to.

Cryonics makes some sense as an egoistic act. But please spare me the preaching (Yes, I'm talking to you, EY!) about how it is some kind of rationalist moral duty.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-10-26T00:40:08.738Z · LW(p) · GW(p)

While I think there probably are fundamental limits (and maybe also human limits that persist even under self-modification), if technology keeps improving fast enough, then it doesn't follow that the universe can only support a finite number of us. I'm considering simulations, new universes, etc., as all possibilities for satisfactory continued existence, not only resurrecting a frozen body.

I agree that for all plausible amounts of science+tech, infinite expected lifespan + desire for reproduction (either concern-for-possible-beings, or hedonistic/value) would mean we eventually run into effectively scarce resources. I think it's nearly as likely we end up there without extremely long lifespans. Human population is growing quite nicely already.

Replies from: Perplexed
comment by Perplexed · 2010-10-26T00:56:48.562Z · LW(p) · GW(p)

My argument is independent of whether the universe can support only a finite number. All I am assuming is that the population growth rate is limited, which means that at any particular time the population is (at least for the time being) bounded. And that if there is currently room for more people, babies have moral priority over old people (for a utilitarian, given my assumptions).

Now a case can be made for the opposite - that people already alive have moral priority over the unborn. But this case can not be made by a utilitarian who accepts my assumptions regarding the "wonder years".

Edit: That is my response to your first paragraph. I notice too late that your second paragraph seems to agree with me. But you seem to think that it is relevant to point out that longevity is not the cause of overpopulation. Of course it is not. The question is, given that the world only supports so many, who is it that should not live? The young, or the old?

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2010-10-26T02:20:17.842Z · LW(p) · GW(p)

It's a fair question. It can hypothetically happen, no matter how rich technology makes you, that resources are effectively scarce in that moment (you desire to produce copies of yourself at a nearly infinite rate, or breed w/ legions of artificial wombs).

To rephrase old vs. young, you could ask: who deserves to exist - those who already exist, or those who might be created anew? Precedent (and conflict avoidance) favors the incumbent, but an extremist utilitarian coalition or singleton might have the power to disregard that. Say that we rule against the old (the average lifespan is finite by decree); you still have to decide which new lives to create. The question you pose is relevant to life extension, but not limited to it.

I guess I think that arguments over who ought to exist are just a distant curiosity (of course it's easy to imagine a future where they're actually used to make decisions; but for now they're just for fun). I'm also interested in the slight generalization: ought anti-wealth-concentration mechanisms (e.g. taxes) to block the possibility (or maybe just reduce the frequency) of long-lived winners, if everybody can't live to 100 million years? I propose that the right to create offspring or duplicates, and the right to live longer than normal yourself, should come from the same ration. It's easy to imagine this as money, but I suppose, just as slavery is prohibited, you could prohibit trading the right to other families/entities.

Replies from: Perplexed
comment by Perplexed · 2010-10-26T03:49:56.050Z · LW(p) · GW(p)

I propose that the right to create offspring or duplicates, and the right to live longer than normal yourself, should come from the same ration. It's easy to imagine this as money, but I suppose just as slavery is prohibited, you could prohibit trading of the right to other families/entities.

Maybe I'm missing something, but I see no reason to place such restrictions. A "ration coupon" for reproduction only comes into existence when someone dies or moves off-planet or moves away from this space-station or whatever. The deceased or departed or lucky lottery winner should have the right to pass on the "coupon" to whomever he/she chooses for whatever compensation is mutually agreed. Same goes for the heir.

Hmmm. Maybe you are right. It might not be a good idea to have a futures market on those things.
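[Editorial aside: the coupon scheme discussed in this exchange can be modeled minimally as follows; the class and function names are illustrative inventions, not anything proposed in the thread.]

```python
# Minimal sketch of transferable reproduction/longevity coupons:
# a coupon comes into existence only when a population slot is freed
# (death, departure), and may be passed on to whomever the holder chooses.

class Coupon:
    """The right to add one person to a capacity-limited milieu."""
    def __init__(self, holder):
        self.holder = holder

    def transfer(self, new_holder):
        # Freely transferable, as Perplexed proposes; prohibiting trade
        # (the worry raised just above) would mean removing this method.
        self.holder = new_holder

def free_slot(departing, heir=None):
    """A death or departure frees one slot as a coupon for the chosen heir."""
    return Coupon(holder=heir if heir is not None else departing)

c = free_slot("Alice", heir="Bob")  # Alice dies; Bob inherits the coupon
c.transfer("Carol")                 # Bob sells or gives it on
print(c.holder)                     # Carol
```

A futures market in such coupons is then just a matter of whether `transfer` is allowed before the triggering death occurs - the design question the two commenters end up unsure about.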

comment by timtyler · 2010-10-27T02:53:42.662Z · LW(p) · GW(p)

You're not a utilitarian, though. Presumably most cryonics patients are not utilitarians either - at least, they spend more money on freezing themselves than on others - which seems like a pretty reliable indication of egoism to me.

A utilitarian analysis might be relevant to a government deciding whether to fund or ban cryonics, I suppose. It is pretty hard to imagine government-funded cryonics at the moment. Not very many people are going to vote for that.