Female Test Subject - Convince Me To Get Cryo
post by Epiphany · 2012-09-30T05:13:00.302Z · LW · GW · Legacy · 176 comments
I heard that women are difficult to convince when it comes to signing up for cryo; in the thread about mentioning cryonics to a dying person, there seemed to be a consensus that it's just not going to happen. I encountered a post, Years saved: Cryonics vs VillageReach, which addressed my main objection (that the money spent on cryo may be better spent on saving starving children, especially considering that you could save multiple children for that amount with high probability, whereas you save only one life with low probability by paying for cryo). Now I'm open to being persuaded.
My first instinct was to go read a lot about cryo, but it dawned on me that there are a lot of people here who will want to convince family members, some of them female, to sign up - and these people may appreciate the opportunity to practice on somebody. It has been argued that "Brilliant and creative minds have explored the argument territory quite thoroughly," but if we already know all of the objections and have working rebuttals for each, why is it still thought of as extra difficult to get through to women? If there were a solution to this, it would not be seen as difficult. There must be something that pro-cryo people need for persuading women that they either haven't figured out or aren't good enough at yet.
So, I decided to offer myself for experiments in attempting to convince a woman to sign up for cryo and took a poll in an open thread to see whether there was interest. I don't claim to be perfectly representative of the female population, but I assume that I will have at least some objections in common with them and that persuading me would still be good practice for anyone planning to convince family members in the future. Having a study on persuading women would be more scientific, but how do you come up with hypotheses to test for such a study if you have no actual experience persuading women?
So, here is your opportunity to try whatever methods of persuasion you feel like with no guilt and explore my full list of objections without worrying about it being socially awkward (I will even share cached religious thoughts, as annoyed as I am that I still have them), and I will document as many of my impressions and objections as I can before I forget them.
I am putting each objection/impression into a new comment for organization. Also, I have decided to avoid reading anything further on cryo until/unless it is suggested by one of my persuaders.
Well, have fun getting inside my head.
176 comments
Comments sorted by top scores.
comment by Eneasz · 2012-10-01T17:04:50.580Z · LW(p) · GW(p)
I was surprised to see the most relevant objection of the vast majority of people not mentioned. It is conspicuously absent, in fact. Social norms.
The social norms against cryo are so strong that almost no one even remotely considers it. This is almost everyone's true rejection.
When people say it’s extra-hard to convince women, I think they’re misattributing the source of difficulty. It’s very hard to find people who are so blind to (or resistant to) social norms (take your pick of connotation :) ) that they’re willing to consider the merits of cryo. For whatever reason it seems easier to find males who are so blinded/fortified than females. I would wager that it’s the same reason that the gender distribution of LW skews very male.
Perhaps the most effective argument to make to get most people to sign up would be “This is why you may safely ignore social conventions in this case.” With little/no attention being given to the merits of cryo, and almost all the effort being put into convincing the subject that the social costs will be minimal.
Replies from: Epiphany
↑ comment by Epiphany · 2012-10-02T19:40:04.913Z · LW(p) · GW(p)
Ooh good observation. It can be so much harder to notice things that aren't there.
The answer to why I didn't make a social norm objection is simple: I don't have to tell anyone who won't understand. It's not like anyone is going to publish my name in the newspaper.
Interesting that they don't appear to realize this. Maybe the difference is that if you're talking to people in a non-anonymous context where others are overhearing, they will appear wary of cryo for social reasons, but I can't help but wonder if they then go away and think about it on their own, privately considering its merits. After all, this is life or death, right?
Maybe the only thing that you have to do to overcome this is tell people it can be done privately (I'm only assuming that it can be, can it?) and to present cryo to them when nobody else is around.
Or you could open the cryo discussion with something to the effect of "If everyone else were jumping off a cliff, would you do it just because they were?" If no, which is likely, then: "If there was something that could keep you from dying but it wasn't popular, would you say it was jumping off a cliff with them if you would not even consider it?" If yes, then: "Cryo could stop you from dying. It isn't popular, but would you consider it anyway?"
That pits an even more socially unacceptable thing, being such a sheep that you die, against something that can't possibly be as unacceptable since it doesn't require you to knowingly make a decision which leads to your own death. Unless survivor's guilt is prevalent, in which case the irrational notion "But I shouldn't kill everyone else by surviving!" trumps "I can't jump off a cliff like an idiot."
comment by Epiphany · 2012-09-30T22:21:33.327Z · LW(p) · GW(p)
Survivor's guilt (resolved objection):
Viliam Bur suggested survivor's guilt, and I realized that I was experiencing survivor's guilt while imagining getting cryo.
I wonder if women experience stronger survivor's guilt than men. Testosterone supposedly makes one more selfish. Women are known for altruistic acts (many of which are pathological, like the phenomenon where women will often stay with an abusive partner, trying to love him into changing), possibly because of some differences with oxytocin. I bet there's a connection here between hormonal differences and survivor's guilt that might explain the extra difficulty in convincing women.
Seeing that survivor's guilt didn't seem rational, I became curious about it and introspected for a moment. It seems to be resolved. I documented my thinking process:
I have thought of a question to ask myself that may get rid of it:
"Imagine that there are three people who I really want to see live. By random chance, something happens outside their control and two of them die but one of them lives. Do I feel happy that the one person lived? Or do I feel like they should die?"
My feeling is that they definitely should not die.
Now, I also feel compelled to try this:
"Imagine three people I don't like, but who I don't think deserve to die. Same scenario, one lives."
My feeling is that I prefer they do not die.
Now I'm asking: "If it were more fair to the other two, would I have had the survivor die along with them?"
No, I'd have tried to save them, and if the other two wanted to see the person die for "fairness" that's just crazy.
Okay, so now I'm asking myself:
If I were in that same situation, where the two other people died but I survived by chance, would I feel it was crazy to think it was unfair for me to survive?
Yes, that is laughable now.
Something in me feels compelled to ask: "Were you better than those two other people?"
My answer is: "Who chose whether they died?"
Ah! Now this is separated. I have separated myself from the cause of their death. I had to see that I was not at fault for this.
The obvious question then is "What is the cause of most people dying, except me, who got cryo?"
Answer: All the causes. I can't stop them all. But I can tell more people about cryo and I can try to stop my own death, and this is good. That's the best that I can do.
Now, I have this warm feeling like my guilt is alleviated, like saving my own life isn't an affront to them, but something they would think was good - just as I thought it was good that one person survived when two died.
Okay, I think I figured out how to hack survivor's guilt, at least, as it applies to me. I will update here if the guilty feeling returns.
Now onto my other objections... (:
Replies from: Desrtopa
↑ comment by Desrtopa · 2012-10-03T01:31:05.026Z · LW(p) · GW(p)
I wonder if women experience stronger survivor's guilt than men.
If I were to make a prediction for an experiment, I would guess no, because men are conditioned to see themselves as more expendable. I'm guessing that the same norms which led to more women in steerage class making it off the Titanic alive than men in first class would lead to men having stronger survivor's guilt than women.
Replies from: TimS, Epiphany
↑ comment by TimS · 2012-10-03T02:44:54.662Z · LW(p) · GW(p)
more women in steerage class making it off the Titanic alive than men in first class
The Titanic was an exception. Slate.com had a link to the study itself (I think).
↑ comment by Epiphany · 2012-10-03T02:31:37.593Z · LW(p) · GW(p)
That men feel expendable is an interesting idea, but that sounds like more of a cultural pressure having to do with the military or women being capable of pregnancy than an instinct. The hormonal differences, on the other hand, are unavoidable and internal. I wonder which is stronger and whether anyone has done research on whether women are more self-sacrificing. (Not seeing anything from my searches.)
Replies from: Desrtopa
↑ comment by Desrtopa · 2012-10-03T03:58:13.462Z · LW(p) · GW(p)
It may or may not be instinctual, but then, there are probably some rather strong selective forces which have encouraged men to be more cavalier with their lives than women. Even if it's cultural, it's a cultural value that's reinforced quite consistently.
Replies from: grendelkhan
↑ comment by grendelkhan · 2012-10-22T03:10:52.269Z · LW(p) · GW(p)
It's reinforced by a lot of talk. Historically, men do not save women in shipwreck situations. This is information that would be pretty surprising based on your previous beliefs. Shouldn't it change your mind?
Replies from: Desrtopa, gwern
↑ comment by Desrtopa · 2012-10-22T06:13:19.275Z · LW(p) · GW(p)
It's somewhat surprising, but then, men can still be significantly more prone than women to consider themselves expendable, and still outsurvive women in shipwrecks if both genders tend to be non-self-sacrificing enough for the situations to devolve to "every man for himself." For purely physical reasons, men are more likely to make it out of a panicked crowd alive. I'm a bit surprised that the Titanic scenario was as exceptional as it was, but I would not necessarily have predicted that relative rates of self-sacrifice would dominate survival rates.
If a reliable study were to find that women are as or more likely to risk or sacrifice their lives to save non-progeny compared to men, it would certainly be sufficient to change my mind.
↑ comment by gwern · 2012-10-22T03:37:01.048Z · LW(p) · GW(p)
I'd say the shipwreck data reinforces it: in the circumstances where heroism is least observable and where death is most likely (reducing the potential reward and increasing the incurred risk), we see less peacocking. If the relationship ran the inverse direction - the more the reward and the less the risk, the less risk-taking - that'd be pretty strange and hard to reconcile with the Baumeister paradigm.
comment by wedrifid · 2012-10-02T06:05:18.751Z · LW(p) · GW(p)
Female Test Subject - Convince Me To Get Cryo
"Female" is probably not the most relevant descriptor here. "Nerd on Lesswrong Test Subject" would perhaps be more representative. Or "Epiphany Test Subject". If 'female' comes in to it it should be qualified as "Female Lesswrong Participant", with that second part conveying more information relative to the population at large than your sex.
comment by Epiphany · 2012-09-30T05:20:33.566Z · LW(p) · GW(p)
When I realized cryo is real (documentation): About a year ago, I went on a date with someone who had signed up for cryo. I remember asking him whether it was expensive, and he told me that his life insurance paid for it. My feeling was "Oh, you can actually do that? I had no idea." - and it felt weird because it seemed strange to believe that freezing yourself is going to save your life (I didn't think technology was that far along yet), but I'm OK with entertaining weird ideas, so I was pretty neutral. I thought about whether I should do it, but I wasn't in a financial position to take on new bills at the time, so I stored that for later.
Replies from: lsparrish
↑ comment by lsparrish · 2012-09-30T16:43:11.988Z · LW(p) · GW(p)
My guess is he said (or meant to say) "life insurance" rather than "health insurance". I don't think there's health insurance that covers cryonics. The idea that freezing yourself will save your life is indeed a weird one that should be carefully researched before you adopt that position. As you probably realize by now, cryonics involves (an attempt at) vitrification of the brain, which means that unlike normal freezing, ice crystals are (at least in ideal cases) prevented from forming.
Highly concentrated cryoprotectants must currently be used, and this does significant damage which needs to be repaired later. Thus it's a conditional bet about scientific unknowns -- if technology reaches a certain level, having my brain vitrified may turn out to save it well enough that science can restore me to a healthy existence (which may or may not be all digital). Most cryonics advocates do not take the hard line of belief that it definitely will save their life, but that it presents a good enough chance to be worth it given the sum of current scientific knowledge.
In my opinion, the chance of it working must exceed something in the range of 1% to be reasonable and not considered quackery. My reasoning is that the cost is in the $50k range ($28k-$150k), whereas actuaries budget somewhere in the range of $5M towards saving human lives in matters of public safety. Spending $50k on a procedure with a 0.01% chance of working is only for rich egoists and/or people who assign a much higher value to the longevity and self-improvement opportunities of the future. Go too much lower than that and you end up with a "Pascal's wager" kind of scenario, which could conceivably justify all kinds of quackery. In any case I think it is safe to say that if the chance is greater than 1%, it is something that everyone should have access to, and should ideally be covered by medical insurance.
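[Editor's note: to make the arithmetic behind that 1% threshold explicit, here is a minimal Python sketch. The $50k cost and the $5M actuarial value per life are the rough figures from the paragraph above; the success probabilities are purely illustrative.]

    # Break-even chance for cryonics to compete with ordinary life-saving spending.
    cost = 50_000               # rough cryopreservation cost (from the $28k-$150k range above)
    value_per_life = 5_000_000  # rough actuarial budget per life saved in public safety

    print(f"Break-even probability: {cost / value_per_life:.1%}")  # 1.0%

    # Implied cost per expected life saved at a few assumed success chances:
    for p in (0.0001, 0.01, 0.1):
        print(f"p = {p:.2%}: ${cost / p:,.0f} per expected life saved")
    # At p = 0.01% the implied cost is $500M per expected life, far above the
    # $5M benchmark; at p = 1% the two match, which is where the threshold comes from.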
The chance of it working seems to be much higher than that in the average person's mind. But then, average people often accept all kinds of weird ideas, so that's probably not the best metric available to us. How scientists (especially those with relevant expertise) feel about it is the major question. I would be curious as to what a survey of scientists with relevant expertise would turn up. What is disturbing to me (and what turned me from a fairly neutral party into something of an activist) is how the topic seems to be treated as unimportant by both the scientific community and the nonscientific world. This should be hotly debated, not dismissed out of hand.
I suspect social causes are a dominant factor, and I suspect women on average may have a better grasp on the social causes than men on average. So my plea to females (since that's the point of this thread, coming up with more female-appealing arguments) would be to at least try to understand this from the perspective of advocates and why we are passionately in favor of it. Read Kim Suozzi's description of her reasoning -- it is a logical step to take when you don't feel you are done living and think science is likely to conquer the problems involved.
As to the creepiness of freezing people, well, while a negative visceral first reaction is understandable, there's nothing about it that is any creepier than what emergency and surgical medicine already entails, and more science is (usually) a good thing for humanity. We've been shipping organs on ice and transplanting them for decades, and we've reanimated stopped-heart "dead" patients for even longer.
Another reason might be that it seems like "mad science". Mad science as seen in fiction is ambitious (which cryonics also is) but it is also cruel and morally indifferent. This is where the chance of it working is important, because if there is a sufficiently good chance of it working, cryonics becomes something that compassionate people are motivated by, not just egoists.
However even if the chances are too low for compassionate motives to come into play much, there does not appear to be any reason to regard it as cruel, since patients are completely unconscious (some of the drugs perfused in the legally dead patient are strong anesthetics) before they are cooled. And it is something patients choose for themselves rather than having it forced upon them.
Replies from: Epiphany
↑ comment by Epiphany · 2012-09-30T21:52:39.645Z · LW(p) · GW(p)
Yes, he said life insurance. Typo, sorry.
I don't know if there's any way of telling what the real probability of revival is. Do you know of a good source on this?
This should be hotly debated, not dismissed out of hand.
Well I got that part right at least. (:
understand this from the perspective of advocates and why we are passionately in favor of it
It's true that I don't know why you're passionately in favor of it. I know that Eliezer is passionately in favor because he lost his brother. That makes sense to me. Considering my concerns about waking up as a horror, and the fact that I don't have any family members that are signed up for cryo who will miss a chance at interacting with me in the future if I don't sign up, that simply doesn't apply in my case.
Read Kim Suozzi's description of her reasoning
I don't know where that is. Do you?
As to the creepiness of freezing people
It's not creepy to me anymore. It was depicted as creepy in the cartoon, though - there were all these rows of really ugly alien looking bodies and some ominous music was playing and the children were theorizing about what they were and they realized they were dead.
Being frozen isn't any creepier than being buried. My body has to go somewhere after it dies. Actually, I think this is less creepy - it's a lot cleaner. No worms or anything.
mad science
I'm probably unusually accepting here. I have had a lot of fun doing things like touring a particle accelerator and hanging out with "mad scientists" in labs. I love it.
I don't know how I got this way but I'm thinking it has to do with realizing that the "mad scientists" come up with awesome stuff sometimes.
comment by Epiphany · 2012-09-30T06:43:10.087Z · LW(p) · GW(p)
Trying to live forever is associated with evil (religious cached thought):
I'm not religious, but was raised Christian. Annoying as this is, I still find religious cached thoughts sometimes. I don't want to keep them - I'm sharing for the sake of documenting all the thoughts that are being triggered while I make my decision. Thinking about signing up for cryo triggered this:
My cached thought is associating living forever with being tempted by the devil, and seeing it as a thing that only sinful people would do.
I realize that I would not be guaranteed everlasting life. Even if I was revived, I expect it would be for a much shorter time than "forever". That wouldn't change the fact that I'm mortal or circumvent the threat of hell. I'm not sure where the sense of defiance comes from. I suppose it would defy the current way of things but expecting life forms to just shut up and die is silly.
I don't see why extending your life would have to qualify as sinful. It just makes sense.
Replies from: NancyLebovitz, mwengler, Nornagest, Dolores1984, V_V, DanArmak
↑ comment by NancyLebovitz · 2012-09-30T17:13:56.824Z · LW(p) · GW(p)
Thanks for posting that-- I wasn't raised Christian, and that objection never would have occurred to me. Do you have a feeling for whether it might be a common Christian objection? The Christian objection I've heard is that great longevity means putting off going to Heaven. I've never heard a Christian say that great longevity increases the odds of repenting and avoiding Hell.
My exposure to anti-longevity/immortality thoughts comes from science fiction and fantasy, which doesn't just have a wide streak of "you'd need evil methods" (see also Bug Jack Barron, in which it takes killing poor children for something from their glands), but also a very strong streak of "if you were immortal, you wouldn't like it". You'd be bored or you'd go mad. I think it's sour grapes.
Replies from: DanArmak, Epiphany, Epiphany
↑ comment by DanArmak · 2012-10-01T20:03:23.674Z · LW(p) · GW(p)
The Christian objection I've heard is that great longevity means putting off going to Heaven. I've never heard a Christian say that great longevity increases the odds of repenting and avoiding Hell.
During a long life a Christian may repent many times, and sin again many times. Whether they go to Heaven or Hell depends on when they die. Suicide is a mortal sin because otherwise they'd kill themselves after repenting. So they do the next best thing, and repent when they think they're going to die. Confession and absolution on a deathbed are standard. Conversion and baptism on a deathbed are known to happen.
↑ comment by Epiphany · 2012-09-30T20:14:48.330Z · LW(p) · GW(p)
Common Christian objections (guesses, as I am no longer a Christian) and rebuttals (within the Christian religious framework, as it's not always feasible to convince them to be Atheists):
1.) You're trying to get something that's forbidden. (Life is important, so God must control it; if you were supposed to have more, you would already have more. Therefore trying to get more should be viewed as bucking a limitation.)
Rebuttal: If you attribute other medical breakthroughs to God, how do we know God didn't give this to us, too?
2.) Only God should decide when you die. (He forbids you from living longer except at his discretion.)
Rebuttal: Why should I believe that a loving God expects me to just shut up and die?
3.) You're making a deal with the devil. (Because only God should decide.)
Rebuttal: Nobody asked me for my soul or to do anything evil to sign up for cryo. The Ten Commandments don't tell me not to. In fact, "You shall not murder" may be interpreted as an obligation to continue your own life wherever possible; otherwise you're knowingly choosing to die when it isn't necessary, thereby "murdering" yourself. I see no evidence that this is temptation by the devil.
4.) You're tinkering with the sacred.
Rebuttal: If life is sacred, and saving lives is an option, isn't it worse to fail to do everything you can to save lives, even if your attempts are somewhere between not perfect and horribly incompetent at first?
↑ comment by Epiphany · 2012-09-30T20:16:03.421Z · LW(p) · GW(p)
great longevity increases the odds of repenting and avoiding Hell.
That's a really good argument. If Christians want Atheists to come around, shouldn't they hope we live longer so we have a better chance of finding some reason to believe in God? I'm not religious, and I really doubt any Atheists will "come around", but I think this would work as an argument.
+1 Karma
↑ comment by mwengler · 2012-09-30T15:38:33.947Z · LW(p) · GW(p)
You might say living forever on earth is associated with being tempted by the devil. But the fundamental (it seems to me) temptation offered by Christians in trying to sign up new members and keep old ones is the promise of eternal life in heaven. Indeed, many retail Christian outlets declare you will get an eternal life no matter what you do, and the reason to sign up is so that your eternal life isn't an eternity of torture.
Just interested in pointing out that "eternal life" is not something Christians typically run from.
↑ comment by Nornagest · 2012-09-30T07:14:43.996Z · LW(p) · GW(p)
Funny, I was aware of this meme in Western culture but I never associated it with religion. (I was raised mostly secular, modulo a little residual Catholicism in my family.) Immortality often shows up as a goal in media, but almost exclusively as a villainous one: heroes accept their fate, villains fight against it. Often the methods of obtaining immortality lean towards the cartoonishly evil (the mythical version of Elizabeth Bathory bathing in virgins' blood; Lord Voldemort's Horcruxes), but just as often they're fairly benign and the pursuit itself is seen as hubristic and therefore evil. At best, a hero (Gilgamesh, say) will pursue it for a while before learning better, but this is actually pretty rare.
This seems to tie into another thought of mine about how villains and heroes get constructed in our culture, but that'd be a bit of a sideline in this context. I don't think I'm familiar with the construction of immortality in a Christian context, though, aside from incredibly esoteric stuff like medieval alchemy; can you tell me more?
Replies from: Epiphany
↑ comment by Epiphany · 2012-09-30T08:11:55.693Z · LW(p) · GW(p)
Yeah, you know what, why is immortality portrayed as evil in all of these different places? There must be some specific spot in the bible, but I can't recall it. Maybe it isn't even from the bible. Now I'm really curious to find out exactly where this cultural association between immortality and evil came from...
Replies from: Nornagest
↑ comment by Nornagest · 2012-09-30T08:34:02.610Z · LW(p) · GW(p)
The closest the Bible gets, as far as I remember, is the bit in Genesis about the Tree of Life, and that's pretty ambiguous. It's been a while since I've read it, though.
I'm not actually sure, but I think this is mainly a hubris thing. For whatever reason, there's a fairly well-defined set of activities in our culture that are thought of as outside the proper domain of humanity; this might have gotten its start in a religious context, but it's certainly not limited to that anymore. (Consider "frankenfoods".) Seeking immortality's on that list, along with playing with the building blocks of life or, worse, creating new life; doing any of these things seems to be considered usurping the role of God or nature, and therefore blasphemous or at least very close to it. This is, of course, nothing new.
Where we get that list from is another question. I don't think it's purely Christian; cautionary tales about immortality go back at least to the Epic of Gilgamesh, although as far as mythological treatments go I think the Cumaean Sibyl's has more punch.
Replies from: Richard_Kennaway, Epiphany
↑ comment by Richard_Kennaway · 2012-09-30T09:42:21.167Z · LW(p) · GW(p)
cautionary tales about immortality go back at least to the Epic of Gilgamesh
I never read Gilgamesh as a story against immortality. On the contrary, it is a tragedy that Gilgamesh loses the flower of immortality that he has brought back. The gods in this story are enemies who keep immortality for themselves.
↑ comment by Epiphany · 2012-09-30T21:14:14.404Z · LW(p) · GW(p)
the bit in Genesis about the Tree of Life
Lol somebody ate an apple once, now we're not allowed to live forever.
Even if that was real, I don't see cryonics as a means of living forever. Forever is a long time. There's no guarantee of that.
set of activities ... thought of as outside the proper domain of humanity
Now that's interesting. I wonder if that might actually be more of an instinct to avoid screwing up important things, or just common sense, than something that's religious. Even if it has been codified in religion, might it have originally stemmed from a sense of not wanting to screw up something important? It's true that we are flawed and that whenever we attempt to do something ambitious, there is a risk of horribly screwing things up. E.g.: communism. There can be unintended side-effects. E.g.: X-ray technicians used to x-ray their hands every morning to make sure the machine was warmed up. You can imagine the horror they encountered years later...
I think we're right to have a sense of trepidation about messing with life and death. It's a big deal, and we really could gravely screw something up, there really could be unexpected consequences.
↑ comment by Dolores1984 · 2012-09-30T20:16:17.555Z · LW(p) · GW(p)
Living forever isn't quite impossible. If we ever develop acausal computing, or a way to beat the first law of thermodynamics (AND the universe turns out to be spatially infinite), then it's possible that a sufficiently powerful mind could construct a mathematical system containing representations of all our minds that it could formally prove would keep us existent and value-fulfilled forever, and then just... run it.
Not very likely, though. In the meantime, more life is definitely better than less.
Replies from: Epiphany
↑ comment by Epiphany · 2012-09-30T21:28:30.200Z · LW(p) · GW(p)
Let me ask you this. Somebody makes a copy of your mind. They turn it on. Do you see what it sees? Someone touches the new instance of you. Do you feel it?
When you die, do you inhabit it? Or are you dead?
Replies from: Dolores1984
↑ comment by Dolores1984 · 2012-09-30T21:43:06.611Z · LW(p) · GW(p)
Depends on your definition of 'you.' Mine is pretty broad. The way I see it, my only causal link to myself of yesterday is that I remember being him. I can't prove we're made of the same matter. Under quantum mechanics, that isn't even a coherent concept. So, if I believe that I didn't die in the night, then I must accept that that's a form of survival.
Uploaded copies of you are still 'you' in the sense that the you of tomorrow is you. I can talk about myself tomorrow, and believe that he's me (and his existence guarantees my survival), even though if he were teleported back in time to now, we would not share a single thread of conscious experience. I can also consider different possibilities tomorrow. I could go to class, or I could go to the store. Both of those hypothetical people are still me, but they are not quite exactly each other.
So, to make a long story short, yes: if an adequately detailed model is made of my brain, then I consider that to be survival. I don't want bad things to happen to future me's.
↑ comment by V_V · 2012-09-30T17:38:50.170Z · LW(p) · GW(p)
Actually trying to live forever ("saving your soul") is the central stated point of religions such as Christianity and Islam.
Religious opposition to cryonics could stem from the fact that cryonics is perceived (correctly, IMHO) as a competing religion. Note that there is no strong religious opposition to most other procedures that promise a lifespan extension.
Replies from: Epiphany, palladias
↑ comment by Epiphany · 2012-09-30T20:55:40.382Z · LW(p) · GW(p)
Huh. That is such a simplistic way of viewing religion. I think you're right in a sense - that it may very well threaten religions by providing an alternative for a key reason people become religious. However, I think most religious people I know (I'm not one so I am guessing at their reasoning) would object to this, saying that there is a lot more to religion than that, and that if the person is in it only to go to heaven, they're being superficial and not really "getting" it. For that reason, I think they'd say that they do not categorize their religion as a religion because it promises to save your soul, and they'd probably also not categorize cryonics that way either.
Replies from: V_V
↑ comment by palladias · 2012-09-30T23:34:58.969Z · LW(p) · GW(p)
The difference between saving the soul and extending life is that saving the soul means preserving it to live in a particular way (i.e. the imago Dei). Extending life is neutral with regard to how you live it.
Replies from: V_V
↑ comment by V_V · 2012-10-01T00:51:12.912Z · LW(p) · GW(p)
Brain upload? Imago FAI? Come on, it's the same sort of stuff, just with supernatural miracles replaced by technological ones.
Replies from: palladias
↑ comment by palladias · 2012-10-01T02:51:09.583Z · LW(p) · GW(p)
But the cryo people aren't prescriptive about what imago FAI looks like, that's the point. They'll give you more life, but they won't tell you how to live it. Whereas religion doesn't change your material circumstances but is very emphatic about how you should live with them.
Replies from: Mitchell_Porter, V_V
↑ comment by Mitchell_Porter · 2012-10-01T14:26:17.457Z · LW(p) · GW(p)
"Imago FAI" is a serendipitous coinage. It sounds like what I had in mind here, when I talked about the mature form of a friendly "AI" being like a ubiquitous meme rather than a great brain in space. If a civilization has widely available knowledge and technology that's dangerous (because it can make WMDs or UFAIs), then any "intelligence" with access to dangerous power, needs to possess the traits we would call "friendly", if they were found in a developing AI. Or at least, empowered elements of the civilization must not have the potential or the tendency to start overturning the core values of the civilization (values which need not be friendly by human standards, for this to be a condition of the civilization's stability and survival). It implies that access to technological power in such a civilization must come at the price of consenting to whatever form of mental monitoring and debugging is employed to enforce the analogue of friendliness.
↑ comment by V_V · 2012-10-01T16:46:10.975Z · LW(p) · GW(p)
Cryonics itself makes no moral prescriptions. You can consider it as a type of burial ritual.
But rituals are not performed in isolation; they are performed in the context of religions (or religious-like ideologies, if you prefer) that do make moral prescriptions.
Cryonics typically comes in the transhumanist/singularitarian ideological package, which has a moral content.
↑ comment by DanArmak · 2012-09-30T15:14:23.418Z · LW(p) · GW(p)
I don't see why extending your life would have to qualify as sinful.
This is speculation: I'm not a Christian.
In Christianity, death brings the judgment of God who sends you to heaven or hell (or purgatory).
If you expect heaven, you don't want to put off death. Suicide is a sin but as long as you don't see non-cryonics as willful suicide, you would want to die early to get to heaven early.
If you expect hell, then you think you've sinned mortally. Most brands of Christianity allow for redemption by various means. If you think you're a sinner, trying to put off death means trying to avoid the judgment of God, which is both just and good; so struggling against it would make you evil. If you fear hell, instead of focusing on avoiding death, you would focus on expiating your sins in order to go to heaven.
In addition, some but not all brands of Christianity have the meme that this world is impure, and one should abstain from it, and not be attached to it. Trying to live longer than is natural is attachment to the profane; one should instead spend their lives thinking of God, praying, abjuring the pleasures of the flesh, etc. in order to obtain heaven.
Replies from: NancyLebovitz, Epiphany
↑ comment by NancyLebovitz · 2012-09-30T18:16:13.087Z · LW(p) · GW(p)
Hypothesis: Religious people (or at least Jews and Christians, which are the religions I'm most familiar with) tend to say that life and death are ultimately in the hands of God/G-d. I suspect this is a way of avoiding survivor's guilt, though both groups are generally in favor of medicine.
From memory: a news story about a conference on medical ethics where the Orthodox Jews were the only ones in favor of life extension.
I suspect that any religion with a vividly imagined heaven has to have rules against suicide, or else the religion won't survive. It's plausible to me that the revulsion against life extension is a mere side effect of the rule against suicide.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2012-09-30T19:10:23.438Z · LW(p) · GW(p)
It's plausible to me that the revulsion against life extension is a mere side effect of the rule against suicide.
This seems strange; I would think an aversion to suicide would make people more pro-life-extension.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2012-09-30T19:26:09.636Z · LW(p) · GW(p)
My hypothesis is that the rule (life and death are in the hands of God) was instituted when suicide was available and life extension wasn't. "Life is in the hands of God" wasn't really relevant; it was just thrown in to make God sound more benevolent (so that He isn't just killing people) and more powerful.
↑ comment by Epiphany · 2012-09-30T21:24:31.711Z · LW(p) · GW(p)
Hmm. Most of these seem to ignore the fact (not saying YOU are ignoring the fact, but that the religion would have to be ignoring the fact) that there are reasons to extend life that have nothing to do with heaven and hell.
It's interesting that you mention "trying to live longer than is natural is attachment to the profane" - this strikes me as more Buddhist, but I could see Christians believing that, too. However, if cryo is attachment to the profane, so is eating healthy and exercising. Heck, so is eating at all. I am so glad I'm not religious. It causes such horrible cognitive dissonance to harmonize these types of beliefs with other information I have about life.
Replies from: DanArmak
↑ comment by DanArmak · 2012-10-01T13:47:26.277Z · LW(p) · GW(p)
However, if cryo is attachment to the profane, so is eating healthy and exercising. Heck, so is eating at all.
Yes - hence the idea of religious fasting. The Catholic and Orthodox Christian traditions consider "mortification of the flesh" to be holy, and luxuries of the flesh (enjoying eating, sex, and bodily sensations in general) to be wicked or at least a dangerous temptation.
comment by AngryParsley · 2012-09-30T06:20:04.303Z · LW(p) · GW(p)
I'm signed up for cryo and I don't want to convince you.
This topic has been discussed to death, both here and elsewhere online. Do you think you've brought up any arguments that haven't been discussed before? Replying to these objections is a waste of time.
In general, "convince me" posts are a bad idea. You've got a brain. You've got a computer. You've got a search engine. Use them. Convince yourself.
Replies from: Epiphany
↑ comment by Epiphany · 2012-09-30T06:53:57.955Z · LW(p) · GW(p)
You've got a search engine. Use them. Convince yourself.
That was my first instinct, but then I remembered that there was a consensus in another thread that women are impossible to convince. In that thread, the poster wanted to convince his mom to sign up for cryo but didn't know how to. A lot of people here might want a chance to figure out how to convince women to get cryo. So instead of convincing myself, I gave them an opportunity to practice on me.
Do you think you've brought up any arguments that haven't been discussed before?
I have no way of knowing that, seeing as how I avoided convincing myself so that other people could experiment on me. I am open to reading articles that people feel are convincing, as I realize that it would be pretty boring to explain the same stuff all over again. It says that in the OP.
Replies from: AngryParsley
↑ comment by AngryParsley · 2012-09-30T08:01:27.173Z · LW(p) · GW(p)
It was a rhetorical question. You do have a way of knowing that you haven't thought of anything new: The idea of cryonics has been around for over half a century. Brilliant and creative minds have explored the argument territory quite thoroughly. You should expect to bring nothing new to the table.
Rant mode engaged.
Your post won't help us learn how to convince women to sign up for cryonics. The sample isn't random and it's certainly not big enough to draw any useful conclusions from. We'll just replay some tired replies to some tired objections. At best, it will teach us how to convince Epiphany to sign up.
Most importantly, is there any other area of debate where we use different arguments to convince women? It would be bizarre. This is especially true for a topic like cryonics, where "convincing" mostly involves fielding objections. If you want to convince people, then learn about the topic. When someone brings up a specific objection, you can use your knowledge to construct a reply that's convincing, informative, and true. It works no matter one's gender.
Rant mode disengaged.
Replies from: mwengler, Epiphany, jkaufman, Epiphany
↑ comment by mwengler · 2012-09-30T15:34:39.430Z · LW(p) · GW(p)
Most importantly, is there any other area of debate where we use different arguments to convince women? It would be bizarre.
You seem to be ignorant of what values are. From the point of view of a rationalist, they are axioms, and slippery ones at that, as they are axioms elucidated by the individual introspecting his (or her) own emotional reactions to various theoretical situations.
Arguments to convince someone to DO something are tailored to fit the individual being convinced.
Trivial examples of using different arguments to convince women vs men (on average) include arguments to see a particular movie (chick flick vs boobsploitation or violence).
↑ comment by Epiphany · 2012-09-30T19:52:39.204Z · LW(p) · GW(p)
Brilliant and creative minds have explored the argument territory quite thoroughly. You should expect to bring nothing new to the table.
Also, if you guys have already figured everything out, then why is convincing women perceived as extra hard? Obviously something is missing, and that element might be anything from not knowing all of the objections women will make, to not having good enough persuasive skills, to a seemingly unrelated difference between the genders (maybe it's that women don't read as much about technology, or that they go to doctors more often and have learned more about the flaws in medical technology, leading to distrust) - but without opening up a line of communication about it, and experimenting to see what kinds of ideas emerge, how are you ever going to make testable guesses about what the missing piece(s) is/are?
Replies from: RomeoStevens
↑ comment by RomeoStevens · 2012-09-30T21:36:35.034Z · LW(p) · GW(p)
if you guys have already figured everything out, then why is convincing women perceived as extra hard?
Having a detailed map doesn't mean that a particular route isn't going to be arduous and fraught with potential missteps that send you down a cliff.
Replies from: Epiphany
↑ comment by jefftk (jkaufman) · 2012-09-30T23:59:56.753Z · LW(p) · GW(p)
When someone brings up a specific objection, you can use your knowledge to construct a reply that's convincing, informative, and true
Am I correct in reading you to be saying that it's pretty much a clear case in cryonics' favor?
If you were to die in a month, with sufficient warning to line up deathbed cryosuspension and all, how likely do you see some form of revival?
↑ comment by Epiphany · 2012-09-30T08:42:18.623Z · LW(p) · GW(p)
The idea of cryonics has been around for over half a century. Brilliant and creative minds have explored the argument territory quite thoroughly.
Why would you have thought I would have known that?
All I know is that I wasn't convinced, and people didn't know how to convince women, and a bunch of people voted in my poll that they thought this was a good topic idea.
You really don't think anyone here is interested in getting practice? Just about everyone here has family members. I imagine they'll want them to survive.
comment by jefftk (jkaufman) · 2012-10-01T00:14:33.019Z · LW(p) · GW(p)
Another objection that I don't see below: it's pretty unlikely to work. Many things in series have to go right in order for you to get revived. Proponents who take the time to consider what could go wrong come up with chances of success like 1 in 7 to 1 in 435 and 1 in 17.
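[Editor's note: the shape of those estimates is a chain of steps that all have to succeed, so the chances multiply. A minimal Python sketch of that structure follows; every stage probability here is an invented placeholder, not a number from the linked estimates.]

    # Conjunctive ("things in series") estimate of cryonics success.
    # All stage probabilities below are made-up, purely for illustration.
    stages = {
        "preserved soon enough after death": 0.8,
        "vitrification keeps the relevant brain information": 0.5,
        "organization keeps you frozen long enough": 0.6,
        "civilization survives and develops revival tech": 0.4,
        "someone actually revives you": 0.5,
    }

    p_success = 1.0
    for stage, chance in stages.items():
        p_success *= chance

    print(f"Overall chance: {p_success:.3f} (about 1 in {round(1 / p_success)})")
    # 0.8 * 0.5 * 0.6 * 0.4 * 0.5 = 0.048, roughly 1 in 21 -- individually
    # plausible-looking stages drag the conjunction down quickly.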
Replies from: GeraldMonroe
↑ comment by GeraldMonroe · 2012-10-01T23:34:54.151Z · LW(p) · GW(p)
This depends heavily on assumptions. Consider this: the oldest cryonics patients have survived more than 30 years. The loss per decade for reasonably well-funded cryonics organizations is currently 0.
If you check a chart of causes of death, the overwhelming majority of causes are ones where a cryonics team could be there.
You would have to choose a legal method of suicide in some of these cases, however (like voluntarily dehydrating yourself to the point of death), or your brain would deteriorate from progressive disease to the point of probably being non-viable for a future revival.
As for long-term risks: ultimately these depend on your perception of risks to human civilization and the chance of ultimately developing a form of nanotechnology that could scan your frozen brain and create an emulation at minimal cost. I personally don't think there are many probable events that could cause civilization to fail, and I think the development of the nanotechnology is almost certain. There is no future world I can imagine where eventually a commercial or governmental entity would not have extreme levels of motivation to develop the technology, due to the incredible advantages it would grant.
This is my personal bias, perhaps, but let's look at this a bit more rationally.
a. How could a civilization-ending event actually happen? Are nuclear escalations the most likely outcome, or are exchanges ending with a city or two nuked more probable?
b. What could stop a civilization from developing molecular tools with self-replication? Living cells are an existence proof that the tools are possible, and developing the tools would give the entity that possessed them incredible power and wealth.
c. Cryonics organizations have already survived 30 years. Maybe they need to survive 90 or 120 more. They have more money and resources today, decreasing the probability of failure with each year. What is the chance that they will not be able to survive the rest of the needed time? (A rough calculation is sketched below.) In another 20 years, they might have hardened facilities in the desert with backup power and liquid nitrogen production.
And so on. This is a complicated question, but I have an educated hunch that the risks of failure for cryonics are lower than many of the estimates might show. I suspect that many of the estimates are made by people who suffer from biases towards excessive skepticism, and/or are motivated to find a way to not spend hundreds of thousands of dollars, preferring shorter term gains.
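[Editor's note: a back-of-the-envelope version of point (c) above, as a minimal Python sketch. It assumes a constant, invented per-decade failure probability; as argued above, the real per-decade risk presumably falls as organizations accumulate resources, so this model is pessimistic.]

    # Chance a cryonics organization survives N more years at a fixed failure
    # rate per decade. Both numbers are assumptions, not data.
    def survival_chance(p_fail_per_decade: float, years: int) -> float:
        return (1 - p_fail_per_decade) ** (years / 10)

    for p_fail in (0.01, 0.05, 0.10):
        for years in (90, 120):
            print(f"{p_fail:.0%}/decade over {years} years: "
                  f"{survival_chance(p_fail, years):.0%} survival")
    # At 5% per decade, 120 more years gives ~54%; if the per-decade risk
    # really does shrink over time, these figures are lower bounds.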
Replies from: drnickbone, jkaufman
↑ comment by drnickbone · 2012-10-02T19:14:31.700Z · LW(p) · GW(p)
The civilization-ending risks are the most worrying from my point of view. Basically, I see a couple of scenarios:
1. Technology never gets anywhere near the point where we can revive frozen brains. Industrial civilization collapses first through a combination of resource constraints, environmental damage, and devastating wars; most likely, these all happen together and feed off each other. This doesn't immediately cause human extinction, but the probability of a future industrial civilization arising from the ruins is very low, because all the easily-extracted fossil fuels, ores etc. have already gone.
2. Technology continues to advance to a point where revival is becoming distinctly feasible, but such advanced tech also comes with very high and increasing existential risks. For instance genetically-engineered plagues, molecular nanotechnology used as a weapon, strong but unfriendly AI. There is low probability of avoiding all these risks.
It's a nasty dilemma really, and cryonic revival can only happen if we somehow avoid both horns.
That's on top of a separate concern that cryo as currently practised simply comes too late to avoid truly irreversible brain damage (what is sometimes called "information theoretic death"). If critical information about a person's mind has already been lost before freezing then no future technology, however advanced, can restore that mind. I don't know enough about how minds are stored in brains to answer that concern, but I'm not confident. Freezing immediately on point of bodily death (or shortly before) looks much more likely to work, but it happens to be illegal.
Replies from: GeraldMonroe
↑ comment by GeraldMonroe · 2012-10-03T21:30:53.085Z · LW(p) · GW(p)
How, precisely, would this happen? We aren't writing sci-fi here. There's dozens of countries on this planet with world class R&D occurring each and every day. The key technology needed to revive frozen brains is the development of nanoscale machine tools that are versatile enough to aid in manufacturing more copies of themselves. This sort of technology would change many industries, and in the short term would give the developers of the tech (assuming they had some means of keeping control of it) enormous economic and military advantages.
a. Economic - these tools would be cheap in mass quantities because they can be used to make themselves. Nearly any manufactured good made today could probably be duplicated, and it would not require the elaborate and complex manufacturing chains that it takes today. Also, the products would be very close to atomically perfect, so there would be little need for quality control.
b. Military - high-end weapons (jets, drones, tanks, etc.) are some of the most expensive-to-manufacture products available, for a myriad of reasons. Nanoscale printers would drop the price to rock bottom for each additional copy of a weapon.
A civilization armed with these tools of course would not be worried about resources or environmental damage.
a. There are a lot of resources that are not feasible to extract today, because we can't manufacture mining robots at rock-bottom prices and send them to go after these low-yield resources.
b. We suffer from a lack of energy because solar panels and high-end batteries have high manufacturing costs (the raw materials are mostly very cheap). Same goes for nuclear reactors.
c. We cannot reverse environmental damage because we cannot afford to manufacture square miles worth of machinery to reverse the damage (mostly CO2 and other greenhouse gas capturing plants, but also robots to clean up various messes).
I say we revive people as soon as possible as computer simulations to give us a form of friendly AI that we can more or less trust. These people could be emulated at high speed and duplicated many times and used to counter the other risks.
I agree with you entirely on the irreversible brain damage. I think this problem can be fixed with systematic efforts to solve it (and a legal work around or a change to the laws) but this requires resources that Alcor and CI lack at the moment.
Replies from: drnickbone
↑ comment by drnickbone · 2012-10-04T07:25:49.640Z · LW(p) · GW(p)
"Horn 1" of the dilemma is a Limits to Growth style crisis. It's perfectly possible that such a limits-crisis arrives before the technology needed to expand the limits shows up to save us. (The early signs would be a major recession which never seems to end, and funding for speculative ideas like nano-machines doesn't last.) Or another analogy would be crossing a desert with a small, leaky bottle of water and an ever-growing thirst. On the edge of the desert there is a huge lake, and the traveller reaching it will never be thirsty again. But it's still possible to die before reaching the lake.
I see you think that the technology will arrive in time, which is a legitimate view, but then that also creates big risks of a catastrophe (we reach the lake, and it is poisonous... oops). This is "Horn 2".
My own experience with exciting new technologies is a bit jaded, and my probability assessment for Horn 1 has been moving upward over the years. Radically new technologies always take much longer to deploy than first expected, development never proceeds at the preferred pace of the engineers, and there can be maddeningly-long delays in getting anyone to deploy, even when the tech has been proven to work. The people who provide the money usually have their own agenda and it slows everything down. Space technology is one example here. Nanotechnology looks like another (huge excitement and major funding, but almost none of it going into development of true nano-machines along Drexler's lines).
↑ comment by jefftk (jkaufman) · 2012-10-02T01:46:56.391Z · LW(p) · GW(p)
I suspect that many of the estimates are made by people who suffer from biases towards excessive skepticism, and/or are motivated to find a way to not spend hundreds of thousands of dollars, preferring shorter term gains.
The two estimates I linked to are both from people who have signed up for it; the second one is Robin Hanson's. On the spreadsheet, as far as I know the only estimate from someone who has not signed up is mine.
comment by Epiphany · 2012-09-30T06:01:11.064Z · LW(p) · GW(p)
What if I can't get a good body? (current objection). There are a few variations on this:
I will probably be in old age if I'm frozen, so I might wake up in the future as an old person. If they can make me a young body, that's not a problem, but should I assume that they're going to be able to do that? Maybe waking up from cryo in the future will involve being on life support for long periods of time while we're waiting for the technology for new bodies.
Who is going to pay for my new body? I have no idea what that would cost, so I can't possibly save for it now, and I'm not sure it's a good idea to assume that money will be N/A in the future. I'm pretty sure that all my skills would be worthless by then, but I'm not convinced that there would be money to make me a decent body at that time.
What if I wake up with no body at all... I'm imagining waking up as a head in a jar or a brain in some kind of server rack of brains.
What if the bodies are ill-conceived? I'm imagining waking up as a brain inside of R2D2 and having about the same quality of life as a mobile trash can. If you think this out, being stuck inside of an R2D2 body would be a really, really horrible fate - which I explain here.
There are certain things I'd like to retain the ability to do, and for some of those, I will need to be anatomically correct.
Once again, if I sign up now, I'll be an early adopter, which may mean that the technology for putting people into new bodies is still experimental and I may end up as a test subject.
Replies from: jsalvatier, Dolores1984, MixedNuts
↑ comment by jsalvatier · 2012-09-30T07:34:58.058Z · LW(p) · GW(p)
Currently, I think most people just get their brains preserved. So they'd have to give you a whole new body or just have you as software anyway.
Early adopter for being preserved doesn't mean early adopter for being revived. In fact, it probably means the opposite, since the easiest people to revive will probably be the people preserved with the most advanced technology.
Replies from: Epiphany
↑ comment by Epiphany · 2012-09-30T08:51:00.869Z · LW(p) · GW(p)
Oh! Good point. Hm. But that might mean that I'm among a group that was using such old technology that it's more or less archaic by that point... which could mean that there aren't very many people in my set to revive, and so less leeway to iron out the flaws before they get to me...
Is anyone freezing any lab mice or anything?
I can see myself at the cryo counter: "Hi, I want me and these 100 lab mice frozen."
Replies from: Dolores1984, MixedNuts, jsalvatier
↑ comment by Dolores1984 · 2012-09-30T20:39:19.734Z · LW(p) · GW(p)
Remember: you can always take random recently dead guys who donated their bodies to science, vitrify their brains, and experiment on them. And this'll be after years of animal studies and such.
↑ comment by jsalvatier · 2012-09-30T09:42:08.660Z · LW(p) · GW(p)
Haha, I hadn't thought about that.
↑ comment by Dolores1984 · 2012-09-30T20:37:43.081Z · LW(p) · GW(p)
You are overwhelmingly likely not to wake up in a body, depending on the details of your instructions to Alcor. Scanning a frozen brain is exponentially cheaper and technologically easier than trying to repair every cell in your body. You will almost certainly wake up as a computer program running on a server somewhere.
This is not a bad thing. Your computer program can be plugged into software body models in convincing virtual environments, permitting normal human activities (companionship, art, fun, sex, etc.), plus some activities not normally possible for humans. It'll likely be possible to rent mechanical bodies for interacting with the physical world.
Replies from: CAE_Jones
↑ comment by CAE_Jones · 2012-11-14T06:55:29.196Z · LW(p) · GW(p)
This is not a bad thing
It is if you want to not die, rather than be copied. How likely would it be, assuming that politics and funding weren't an issue, that we could grow a new body, prevent the brain from developing, yet keep it alive to the point that an existing brain could be inserted? I'm not necessarily concerned with the details of getting a brain transplant to work smoothly in general, just the replacement body.
It doesn't seem like it should be difficult in theory; I'd be more worried about the resources.
I'm also curious as to what's stopping us from keeping brains alive even if the body can no longer function. I'm not well researched in this area, but if it is a matter of keeping chemical resources flowing in and waste flowing out, then our current technology should be capable of as much. At that point, all we'd need is to develop artificial i/o for brains (which seems slightly more difficult, but not so difficult that it couldn't happen within a few decades).
But I've probably overlooked something obvious and well known and am completely confused. :(
I don't like the idea of being "revived" as an upload, though. An upload would be nice to have (It'd certainly make it easier to examine stored data, if only a little), but I still see an upload as a copy rather than preserving the original. And, being the original, that isn't the most appealing outcome to me.
↑ comment by MixedNuts · 2012-09-30T08:32:31.763Z · LW(p) · GW(p)
A bad body is better than no body at all. It's not uncommon for abled people to go "Ew, I'd rather die than get $disability", but when they do... actually, I don't know whether they're as happy as before after 18 months, because everyone mentions that figure but gives no cite. Anyway, people after a bad event are less unhappy, and get happier faster, than they predicted, and they remember it that way afterwards. At least for some disabilities, this depends on people adapting to their condition, rather than putting their life on hold until they get better. (More affective forecasting papers.)
Poke around in the disability blogosphere for more perspectives on that. They range from "My body is awesome, but because it's not the type you build your world for, you call it 'disabled'", through "It kinda sucks that you're not an Olympic-level athlete and you don't obsess over that all the time; I feel the same way about my disability", through "It's miserable when you're not used to it, but once you adapt it's not so bad", to "It's awful, but still better than being dead".
The things you're afraid of aren't even particularly freaky ones: weakness, limited mobility and endurance, need for support systems, body dysphoria, inability to live as you used to. People live with that every day.
I admit I have no idea what would happen if you lacked a body completely. A head-in-a-jar scenario sounds like locked-in syndrome, which is still better than death. The other scenario could be anything from total sensory deprivation (yeah, that one is probably worse than death) to living in a simulation.
Replies from: Epiphany↑ comment by Epiphany · 2012-09-30T21:35:59.999Z · LW(p) · GW(p)
A bad body is better than no body at all.
Then why do so many people have living wills?
this depends on people adapting to their condition
Also, which condition they get. I could see myself happy in this body with a wheelchair, but I can't see myself happy as a paraplegic. I think my ideas about how happy I'd be with a disability are pretty realistic. Anything that keeps me from communicating would make me miserable. Anything that makes me dependent on others would be stressful. Not being able to walk is something I could work around - I could still program and make a living, still communicate, still do something of meaning, still get around. How many of the things you enjoy about life and get meaning from depend on your body? There are some conditions that would make pretty much everything that's meaningful and fun about life impossible. See my R2D2 objection.
"Living is always good" / "Any body at all is good" - hasty generalizations, sorry.
Replies from: TimS↑ comment by TimS · 2012-10-01T16:31:21.136Z · LW(p) · GW(p)
Then why do so many people have living wills?
From a legal point of view, a living will is not really very like a will. One's will contains the directions for distribution of one's property after death. In short, the key focus of a will is financial.
By contrast, a living will is one's list of instructions regarding medical treatment when one is unavailable to consult (i.e. unconscious). Do-not-resuscitate requests, and the circumstances when one does and doesn't want particular intense medical interventions. Also, who should make decisions when your pre-made list does not address a particular circumstance. When one is creating a living will, financial considerations might play a part, but the key focus of a living will is medical, not financial.
comment by Epiphany · 2012-09-30T05:46:57.182Z · LW(p) · GW(p)
What if revival technology causes misery? (current objection). There are a few variations on this:
I would be an early adopter, which means that the technology for reviving people might still be experimental at the time when it is used on me. The unintentional result of this could be that I become a test subject.
What if they get reviving my brain slightly wrong, and a small change in its structure or chemical composition means that all my consciousness is capable of experiencing is ultimate misery? And what if this goes on for some prolonged period because everyone assumes I'm miserable from the shock of waking up in a world where so many of the people I know are dead and everything else is changed or gone, so nobody has any idea that it's due to a chemical or structural problem in my actual brain?
What if I get brain damage or massive memory loss from the procedure? This would mean, essentially, that I wasn't saved. Then I would have to live as a sort of zombie-like horror.
What if I get some horrible and as-yet-unimagined disability due to, I don't know, ice crystals destroying my tissues, or amine accumulation, or something unexpected?
Just because cryo is the only way we currently have to avoid death, that doesn't mean it's a good way.
Replies from: Eneasz, Dolores1984, MixedNuts↑ comment by Eneasz · 2012-10-01T17:04:26.571Z · LW(p) · GW(p)
What the heck? What if any technology X causes misery? It was argued that in vitro fertilization would cause soulless humans to be born (seriously) with all sorts of ramifications (from them destroying society, to their existence being constant agony). This claim has been made repeatedly about all sorts of medical interventions, from organ transplants to cloning. Right now there are people who claim aspartame is turning us into zombie-like horrors.
There is always a risk from any medical intervention. A bad anesthesiologist can give you brain damage and turn you into a zombie when all you wanted to do was have a wisdom tooth pulled. This objection is so generalized that I'm not sure it's a true objection at all. I think you may be searching for other objections rather than stating a true objection.
Replies from: grendelkhan↑ comment by grendelkhan · 2012-10-22T11:18:22.836Z · LW(p) · GW(p)
It was argued that in vitro fertilization would cause soulless humans to be born (seriously) with all sorts of ramifications (from them destroying society, to their existence being constant agony).
Did someone actually suggest that? A cursory glance through some articles shows, for instance, the Pope expressing worry that women would be used as 'baby factories', but questions about IVF seem to have, historically, been tied up with worries about custom-designed people.
Replies from: Eneasz↑ comment by Eneasz · 2012-10-22T16:40:09.723Z · LW(p) · GW(p)
Yes. I suppose it depends a bit on how official you require the suggester to be before you're willing to grant that it was a legitimate social discourse. A few examples:
Cathy Lynn Grossman of USA Today's "Faith & Reason" column asked her readers "Do you think a baby conceived in test tube is still a child in the eyes of God?" in 2010.
People have reported asking priests for advice: "He told them that if they were to go ahead with it, they would be doing something worse than abortion; their child would be born without a soul, as he or she would be manmade and not Godmade."
There are various crazy ministries on the internet that make/made the soulless claims as well.
So yes, someone did actually suggest that. Multiple someones. How much they count is debatable.
Replies from: grendelkhan↑ comment by grendelkhan · 2012-10-24T03:12:14.161Z · LW(p) · GW(p)
Yeah, I don't mean the crazy-ministry people, I mean people connected enough to reality that they wouldn't say that sort of thing now, but who did right up to the point where normal human babies showed up and the position became unsupportable.
Maybe I'm looking too far into this, but I'm trying to understand how you could look at a person pretty much indistinguishable from other people and claim that they have all of these hilariously weird properties. I could see it happening if people conceived via IVF all had red hair or something, but people did know these would be, y'know, people conceived in vitro, right?
Replies from: Eneasz↑ comment by Eneasz · 2012-10-24T17:39:54.184Z · LW(p) · GW(p)
/shrug. The concept of souls is unsupportable right now but it doesn't stop anyone from claiming all sorts of hilariously weird properties for them. I don't know how hard it would be to say that one person has an unsupportable property X and another person doesn't, since they're both just naked assertion anyway. When your references are that detached from reality you can start saying all sorts of nonsensical crap.
↑ comment by Dolores1984 · 2012-09-30T20:31:12.071Z · LW(p) · GW(p)
There's no reason to experiment on cryo patients. Lots of people donate their brains to science. Grab somebody who isn't expecting to be resurrected, and test your technology on them. Worst case, you wake up somebody who doesn't want to be alive, and they kill themselves.
Number two is very unlikely. We're basically talking brain damage, and I've never heard of a case of brain damage, no matter how severe, doing that.
As for number three, that shambling horror would not be you in a meaningful sense. You'd just be dead, which is the default case. Also, I have my doubts that they'd even bother to try to resurrect you with that much damage if they didn't already have a way of patching the gaps in your neurology.
As for number four, depending on the degree of the disability, suicide or euthanasia is probably possible. Besides, I think it's unlikely they'll be able to drag you back from being a corpsicle without being able to fix problems like that.
Replies from: Epiphany↑ comment by Epiphany · 2012-09-30T23:24:13.396Z · LW(p) · GW(p)
There's no reason to experiment on cryo patients
There's no way not to. It will be a new technology. Somebody has to get reanimated first. Even if we freeze 100 mice to test on, or monkeys, reviving humans will be different. Doing something for the first time is, by its very nature, an experiment.
Grab somebody who isn't expecting to be resurrected
Awful! That's experimenting on a person against their will, and without their knowledge, even! I sure hope people like you don't start freezing people like me in the event that I decide against cryo...
I've never heard of a case of brain damage, no matter how severe, doing that.
People experience this every day. It's called chemical depression. Even if you don't currently see a way for preservation or revival technology to cause this condition, it exists, it's possible that more than one mechanism may exist to trigger it, and that these technologies may have that as an accidental side-effect.
As for number three, that shambling horror would not be you in a meaningful sense. You'd just be dead, which is the default case.
Uh... no, because I'd be experiencing life, I would just be without what makes me me. That would be horror, not non-existence. So it is not death.
euthanasia is probably possible
Is it now? Most people don't believe in the right to die. In a world where we had figured out how to reanimate preserved corpses, do you think that they'll believe in the right to die? They'll probably automatically save and revive everyone.
Replies from: Dolores1984↑ comment by Dolores1984 · 2012-09-30T23:52:57.135Z · LW(p) · GW(p)
Awful! That's experimenting on a person against their will, and without their knowledge, even! I sure hope people like you don't start freezing people like me in the event that I decide against cryo...
-shrug- so don't leave your brain to science. I figure if somebody is prepared to let their brain decompose on a table while first year medical students poke at it, you might as well try to save their life. Provided, of course, the laws wherever you are permit you to put the results down if they're horrible. Worst case, they're back where they started.
People experience this every day. It's called chemical depression. Even if you don't currently see a way for preservation or revival technology to cause this condition, it exists, it's possible that more than one mechanism may exist to trigger it, and that these technologies may have that as an accidental side-effect.
Chemical depression is not 'absolute misery.' Besides, we know how to treat that now. That we'll be able to bring you back, but be unable to tweak your brain activity a little, is not very credible. Worst case, once we have the scan, we can always put it back on ice for another decade or two until we can fix the problem.
Uh... no, because I'd be experiencing life, I would just be without what makes me me. That would be horror, not non-existence. So it is not death.
If I took a bunch of Drexler-class nanotech, took your brain, and restructured its material to be a perfect replica of my brain, that would be murder. You would cease to exist. The person living in your head would be me, not you. If brain damage is adequately severe, then you don't exist any more. The 'thing that makes you you' is necessary to 'do the experiencing.'
↑ comment by MixedNuts · 2012-09-30T08:44:39.803Z · LW(p) · GW(p)
See disability arguments on the other comment for personality-preserving brain damage.
Then I would have to live as a sort of zombie-like horror.
Well, no. You'd just be dead. There'd be a Schiavo-like body looking like yours, or a new person in a body looking like yours, but that doesn't seem to add much to the horror of death.
this goes on for some prolonged period of time where they're assuming the reason I'm miserable is because of the shock of waking up in a world where so many of the people I know are dead and everything else is changed or gone, so nobody has any idea that it's due to a chemical or structural problem in my actual brain.
That sounds like a weird change. Right now the DSM allows a depression diagnosis two months after a traumatic event, less if it gets really bad, and even less in practice. How prolonged are you thinking of?
People who age often get depression, and they get the worst disabilities, because they can't adapt fast and their disabilities keep increasing. Do you accept "I should kill myself now, so I don't run that risk"? If not, how is this different?
Replies from: Epiphany↑ comment by Epiphany · 2012-09-30T23:39:36.713Z · LW(p) · GW(p)
Well, no. You'd just be dead.
This thing that would not die though, this ability to know pain and pleasure, this continuing experience, it would remain in the event that my memories were all gone, presumably. THAT is the part I'm worried about. That the part of me that feels could wake up and have to go through the experience of realizing that who I am has been lost to brain damage.
Right now the DSM allows a depression diagnosis two months after a traumatic event
Is this supposed to rebut my objection? I don't see where you're going with this at all.
Do you accept "I should kill myself now, so I don't run that risk"? If not, how is that different.
Right now, there isn't a guarantee that I'm going to go through a medical procedure anytime soon. Going through a medical procedure, especially one that is new, or one that few people have been through, is likely to cause some sort of horrible side effects. We have no reason to assume that this technology will be flawless by the time we get to use it, no reason to believe it won't turn us into horrors.
It's different because not killing myself right now leaves me with a reasonable chance to have some number of happy years ahead whereas going through a medical procedure with unexpected side effects and risks may have a much greater chance of making me completely miserable for a long time.
I think our disagreement may have a lot to do with how much faith we place in the medical establishment.
If you haven't got experience with it, you can't know how bad it can be. Have you ever looked into how incompetent and horrible medical professionals and treatments can be?
I have a pile of statistics if you want a shock.
Replies from: MixedNuts↑ comment by MixedNuts · 2012-10-01T09:43:22.195Z · LW(p) · GW(p)
That the part of me that feels could wake up and have to go through the experience of realizing that who I am has been lost to brain damage.
Okay, that's freaky. Only a little freakier than "The child I was has been replaced by an adult", but point taken.
Right now the DSM allows a depression diagnosis two months after a traumatic event
Is this supposed to rebut my objection? I don't see where you're going with this at all.
If medicine when you wake up is anything like it is now, after a couple of months at most you'll be able to say "Doc, I feel utterly miserable" and the doc will answer "One box of magic future antidepressants, coming right up!", not (only) "Well, duh, it's future shock".
Have you ever looked into how incompetent and horrible medical professionals and treatments can be?
I have a pile of statistics if you want a shock.
Only in specific cases (medical errors, psychiatric hospitals, nursing homes). Can I haz stats?
comment by Epiphany · 2012-09-30T05:17:09.863Z · LW(p) · GW(p)
My first impression of cryo (documentation): My introduction to cryo was in a cartoon as a child - the bad guys were freezing themselves and using the blood of children to live forever. I felt it was terrifying and horribly unfair that the bad guys could live forever and very creepy that there were so many frozen dead bodies.
Replies from: MixedNuts, Dallas, pleeppleep↑ comment by MixedNuts · 2012-09-30T08:56:29.309Z · LW(p) · GW(p)
horribly unfair that the bad guys could live forever
There's a common attitude that eternal life is a very special prize - something a few great heroes might deserve, and if you seek it out you're basically claiming to be a deity or something impossibly high-status along those lines. I have no idea where that comes from; it's like someone proposed advances in agriculture and people went "But famines are part of life!".
Replies from: Viliam_Bur, Epiphany↑ comment by Viliam_Bur · 2012-09-30T13:18:42.424Z · LW(p) · GW(p)
Possibly related: Survivor guilt
I guess that if you survive and other people don't, it instinctively pattern-matches to you causing their death. Even if it does not make sense, and you know it. Maybe it's a broken algorithm for determining the outside view -- if you go somewhere with a group of people and return while they are all dead, you should expect other people to suspect you; therefore you'd rather display some extremely strong self-destructive emotion to convince them, game-theoretically, that you did not benefit from that outcome.
If we get immortality, we can expect a lot of survivor guilt. Also, it will seriously ruin the just world hypothesis if some people get 3^^^3 more utilons just for the fact that they were born in the right era and did not randomly die a few years sooner.
Replies from: Epiphany↑ comment by Epiphany · 2012-09-30T21:56:30.555Z · LW(p) · GW(p)
Survivor guilt
Hmmm. These are really good points. I do feel guilty about the idea of living a really long time while a lot of others don't. That may be what triggered my first big objection - that you could save a lot of people with that money. Now I wonder if that objection was a rationalization of some type of survivor's guilt. I think that this is likely. Very good point. Now I'm wondering what the nature of this survivor's guilt is, for me.
I still feel survivor's guilt, actually. Even though it's not attached to a specific objection any longer - the objection about saving starving children has been rebutted.
New objection - Survivor's Guilt
it will seriously ruin the just world hypothesis
That's already seen as a fallacy, isn't it?
Replies from: thomblake↑ comment by pleeppleep · 2012-10-02T01:37:35.945Z · LW(p) · GW(p)
No, seriously, what cartoon is this? It sounds awesome.
comment by drnickbone · 2012-10-01T23:48:22.670Z · LW(p) · GW(p)
Has anyone on Less Wrong considered (and answered) an anthropic objection to cryonics? It might go something like this:
"If cryonics works, then the society in which I am revived will be a transhuman/posthuman one with very advanced technology, and a very large number of observer moments. But if such societies existed in the universe, or ever came to exist, then I would expect to find myself already part of one, and I don't. (Note here the use of Bostrom's strong self-selection assumption or SSSA.) Therefore I should judge it unlikely that posthuman/transhuman societies exist or will come to exist. Therefore I should judge it unlikely that cryonics will work."
One counter-argument (which Bostrom himself might use) would be based on reference classes. Perhaps I'm currently in a limited reference class that precludes me being part of a transhuman/posthuman society. But this also has a dubious implication for cryonics, since for cryonics to work it must be possible for me to change that reference class, moving from a very small one to a much larger one. So again, wouldn't I expect to have already done that?
Replies from: pleeppleep↑ comment by pleeppleep · 2012-10-02T01:35:27.708Z · LW(p) · GW(p)
You should provide an argument as to why one would be more likely to be born into a posthuman society. For a posthuman society to exist, a human and pre-human world would probably, although perhaps not necessarily, have to exist first. Even if it is more common to exist in a transhuman state, there would still be non-transhuman minds.
Replies from: drnickbone↑ comment by drnickbone · 2012-10-02T17:05:53.560Z · LW(p) · GW(p)
This is just an application of the "Self-Sampling Assumption" (or "principle of mediocrity").
There will be many more observer moments in a "post-human" society than a "pre-posthuman" one (at human level or lower), because the population is much larger and observers live longer. So if the universe contains both sorts of society, a typical observer (or observer moment) would be much more likely to be in a "post-human" society. If the universe only contains "pre-posthuman" societies (e.g. because societies self-destruct before reaching a post-human level of technology) then an observer would have to be in one of the "pre-posthuman" ones because there aren't any others.
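For what it's worth, here is a toy numerical version of that update (a minimal sketch in Python; the equal priors and the 1000x observer-moment ratio are made-up numbers for illustration, not anything from Bostrom):

```python
# Toy SSA-style update: how likely is a posthuman future, given that I
# observe myself in a pre-posthuman society?
# Assumed numbers (illustration only): equal priors, and a posthuman future
# containing 1000x more observer-moments than the pre-posthuman era.

prior_h1 = 0.5    # H1: posthuman societies come to exist
prior_h2 = 0.5    # H2: they never do

n_pre = 1.0       # pre-posthuman observer-moments (normalized)
n_post = 1000.0   # additional posthuman observer-moments under H1

# Chance of finding yourself pre-posthuman under each hypothesis:
lik_h1 = n_pre / (n_pre + n_post)   # ~0.001 under H1
lik_h2 = 1.0                        # under H2, everyone is pre-posthuman

posterior_h1 = (prior_h1 * lik_h1) / (prior_h1 * lik_h1 + prior_h2 * lik_h2)
print(f"P(posthuman future | I'm pre-posthuman) = {posterior_h1:.4f}")  # ~0.001
```

With these numbers, the observation moves you from 50/50 down to roughly 0.1% confidence in a posthuman future, which is the whole force of the objection.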
I'd suggest you look at Nick Bostrom's web-site for more details, including his discussion on reference classes.
Replies from: drnickbone↑ comment by drnickbone · 2012-10-02T17:13:25.388Z · LW(p) · GW(p)
P.S. It is also possible to use the "Self-Indication Assumption" as an alternative to the "Self-Sampling Assumption", or to use a non-anthropic model like "Full Non-Indexical Conditioning". But these don't get rid of the argument that we are unlikely to turn into a post-human society, and for some rather interesting reasons, which Katja Grace discusses here.
So far, I think reference classes are the only counter-argument that might work.
Replies from: pleeppleep↑ comment by pleeppleep · 2012-10-02T18:19:18.932Z · LW(p) · GW(p)
I'd figured out the reasoning behind that. I just thought it would be a good idea for you to post the explanation with your argument.
Replies from: drnickbone↑ comment by drnickbone · 2012-10-02T18:54:14.950Z · LW(p) · GW(p)
Ah, thanks!
comment by Epiphany · 2012-09-30T06:19:39.899Z · LW(p) · GW(p)
What if the future is hellish and I won't be able to die? (Current objection)
I realize there are lots of interesting technologies coming our way, but there are a lot of problems, too. I don't know which will win. Will it be environmental collapse or green technology? FAI or the political/other issues created by AI? Will we have a world full of wonders or grey goo? Space colonies or alien invasions? As our power to solve problems grows, so does our ability to destroy everything we know. I do not believe in the future any more than I believe in heaven. I recognize it as a potential utopia / dystopia / neither. I do not assume that the ability to revive preserved people would make us utopia-creating demigods any more than our current abilities to do CPR or fly make our world carefree.
A new twist, waking up into this world, would be that I may not be able to die. The horrors that I could experience in a technologically advanced dystopia might be much worse than the ones we have currently. Dictators with ufAI armies, mind control brain implants, massive environmental and/or technological catastrophes.
There is one thing worse than dying, and that's living an unnaturally long time in a hellish existence. If I sign up for cryo, I'll be taking a risk with that, too.
Replies from: SilasBarta, jsalvatier, DanArmak, shokwave↑ comment by SilasBarta · 2012-09-30T18:03:49.158Z · LW(p) · GW(p)
What if the future is hellish and I won't be able to die? (Current objection)
From this post (which is a great source of insight on many particular cryonics objections):
(5) You somehow know that a singularity-causing intelligence explosion will occur tomorrow. You also know that the building you are currently in is on fire. You pull an alarm and observe everyone else safely leaving the building. You realize that if you don’t leave you will fall unconscious, painlessly die, and have your brain incinerated. Do you leave the building?
Answering yes to (5) means you probably shouldn’t abstain from cryonics because you fear being revived and then tortured.
(This comment, including copying over text and links, was composed entirely without the mouse due to the Pentadactyl Firefox extension.)
Replies from: JenniferRM, army1987↑ comment by JenniferRM · 2012-10-01T17:44:23.588Z · LW(p) · GW(p)
That scenario is full of fail in terms of helping someone weigh the issue in an ecologically valid way. Answers to the trolley problem empirically hinge on all kinds of consequentially irrelevant details, like whether you have to physically push the person to be sacrificed. The details that matter are hints about your true rejection, and handling them in a sloppy way is less like grounded wisdom and more like a high-pressure sales tactic.
In this case, for example, "leaving the building" stands in for signing up for cryonics, and "everyone else safely leaving the building" is the reason your unconscious body won't be dragged out to safety... but that means you'd be doing a socially weird thing to not do the action that functions as a proxy for signing up for cryonics, which is the reverse of the actual state of affairs in the real world.
A more accurate scenario might be that your local witch doctor has diagnosed you with a theoretically curable degenerative disorder that will kill you in a few months, but you live in a shanty town by the sea where the cure is not available. The cure is probably available, but only in a distant and seemingly benevolent country across the sea where you don't speak the language or understand the economy or politics very well. The sea has really strong currents and you could float downstream to the distant civilization, but they can't come to you. You have heard from some people that you have a claim on something called "government assistance checks" in that far nation that will be given initially to whoever is taking care of you and helping you settle in while you are still sick.
You will be almost certainly be made a ward of some entity or another while there, but you don't understand the details. It could be a complex organization or a person. There might be some time waiting for details of the cure to be worked out and there is a possibility that you could be completely cured but that this might cost an unknown amount of extra money, and its a real decision that reasonable people could go different ways on depending on their informed preferences, but the details and decisions will be made by whoever your benefactor ends up being.
That benefactor might have incentives to leave you the equivalent of "a hospital bed in the attic" for decades with lingering pain and some children's books and audio tapes from the 1980's for entertainment, pocketing some of the assistance checks for personal use, with your actual continued consciousness functioning as the basis of their moral claim to the assistance checks, and your continued ignorance being the basis of their claim to control the checks.
If you get bored/unhappy with your situation, especially over time, they might forcibly inject you with heroin or periodically erase your memory as a palliative. This is certainly within their means and is somewhat consistent with some of the professed values of some people who plan to take the raft trip themselves at some point, so there might actually be political cover for this to happen even if you don't want that. Given the drugs and control over your information sources, they might trick you into nominally agreeing to the treatments.
You don't get to pick your benefactor in advance, you don't know the details of the actual options that will exist (so you can't do the cost/benefit analysis yourself in advance), and you don't know what kind of larger political institutions will exist to oversee their decision making. You'd have to build your own raft and show up on their shores as a sort of refugee, and your family is aware of roughly the same things as you, and they could use the raft-making materials to build part of a new shack for your sister, or perhaps a new outhouse for the family. Do you get on the raft and rely on the kindness of strangers, or accept your fate and prepare to die in a way that leaves less bad memories for your loved ones than the average death?
Also, human nature being what it is, if you talk about it too much but then decide to not build the raft and make the attempt, then your family may feel worse than average about your death, because there will be lingering guilt over the possibility that they shamed or frightened you into sacrificing your chances at survival so that they could have a new outhouse instead. And knowing all of these additional local/emotional issues as well as you do, they might resent the subject being brought up in a way that destabilizes the status quo "common knowledge" of how family resources will be allocated. And your cousin got sick from an overflowing outhouse last year, so even though it sounds banal, the outhouse is a real thing that really verifiably matters.
Replies from: handoflixue, army1987↑ comment by handoflixue · 2012-10-01T20:04:56.442Z · LW(p) · GW(p)
That is an awesome metaphor :)
↑ comment by A1987dM (army1987) · 2012-10-02T22:28:31.014Z · LW(p) · GW(p)
Each of the questions in that post was meant to address one argument against cryo. The argument ‘hardly anyone I know will be alive when I'm revived’ is addressed by the second question. (Which I would answer “I don't know, I'd have to think about it”, BTW.)
↑ comment by A1987dM (army1987) · 2012-10-01T09:37:51.700Z · LW(p) · GW(p)
[realizes he has been rationalizing] Oh...
↑ comment by jsalvatier · 2012-09-30T07:32:43.772Z · LW(p) · GW(p)
Here's the reason I don't find this very scary. As a frozen person, you have very little of value to offer people, and you will probably consume some resources. Thus, if someone wants to bring you back, it will most likely be mostly for your benefit, rather than because they want to enslave you or something. If the universe just has people who don't care about you, then they just won't revive you, and it will be the same as if you had died.
In order for you to be revived in a hellish world, the people who brought you back have to be actively malicious, which doesn't seem very likely to me.
What do you think?
Replies from: mwengler, Risto_Saarelma, Jesper_Ostman, Epiphany↑ comment by mwengler · 2012-09-30T15:25:43.787Z · LW(p) · GW(p)
Many among us will spend the better part of a million dollars to preserve the lives of children born so deformed and disabled that they actually will spend a significant amount of their lives in pain, and the rest of it not being able to do much of what gives the rest of us pleasure or status. You don't have to be actively malicious to think that life at any cost is a Good Thing (tm).
There's also the theoretical possibility that the world you are revived into is perceived as a good one by the people born into it, but is too hard to adjust to for a very old person from a very different world. I doubt the majority of slaves would have preferred death to the lives they had, but someone who had lived 80 years in freedom, with the best the 21st century could offer in terms of material comforts, might not be as blasé about a very different status quo in the future.
Replies from: jsalvatier↑ comment by jsalvatier · 2012-09-30T18:00:15.884Z · LW(p) · GW(p)
If the people reviving you are not malicious, then you would expect to have the option of dying again, unless they don't believe you that your life sucks too much.
Also, the psychology of happiness seems to suggest that people adjust pretty well to big life changes.
Replies from: mwengler↑ comment by mwengler · 2012-10-01T10:27:45.155Z · LW(p) · GW(p)
Unless you are defining malicious to mean "lets me kill myself if I want to," then being revived into a society with similar laws and values as the current U.S. would certainly make it illegal for you to kill yourself. Most of us realize we could do it if we wanted anyway, but a society that can revive you probably has more effective means of enforcing prohibitions. Even now, we already have "chemical castration" for some sex criminals.
Replies from: jsalvatier↑ comment by jsalvatier · 2012-10-02T20:22:49.538Z · LW(p) · GW(p)
Okay, that's a good point. (I assume you meant "defining 'not malicious' to mean 'lets me kill myself...'")
↑ comment by Risto_Saarelma · 2012-09-30T08:22:01.496Z · LW(p) · GW(p)
In order for you to be revived in a hellish world, the people who brought you back have to be actively malicious, which doesn't seem very likely to me.
They might also be high-functioning but insane, given the very many ways that tech capable of mucking around with physical human brains, to the degree of successfully reanimating cryonics patients, can go wrong: the original imperative to revive cryonics patients intact, the ability to do so also somehow intact, but things being very, very wrong otherwise.
I think "you might wake up in hell" is actually one of the better arguments for opting out of cryonics, since some of the sort of tech you need to revive cryonics patients is also tech you could use to build unescapeable virtual hells.
↑ comment by Jesper_Ostman · 2012-10-03T22:56:55.235Z · LW(p) · GW(p)
Although the hellish-world scenario seems unlikely, it might be important to consider. At least according to my own values, things like being confined to children's books and being injected with heroin would contribute very little negative utility (if negative at all) compared to even a 1-in-1000 chance of enduring the worst psychologically possible torture for, say, a billion years.
↑ comment by Epiphany · 2012-09-30T08:34:03.710Z · LW(p) · GW(p)
Ok, the cost-benefit ratio between reviving someone and profiting off of their slavery might be worth considering. I'm not sure how many resources it would take to revive me, or whether it would be safe to assume that my brain's abilities (or whatever was valued) would not outweigh the resources required to revive me - but now that I think of it, that assumption seems likely to hold, especially considering that all my skills would be out of date and they'd probably have eugenics or intelligence enhancers by then that would outdo my brain.
Also, the people who enslaved me would not have to be the same ones as the people who revive me. They would not be subject to the cost-benefit ratio. The people who revive me could be well-meaning, but if the world has gone to hell, there might be nothing they can do about bad entities doing horrible things.
The reviver may only revive me because they're required to, because the company storing me has a legal agreement and can be prosecuted if they don't. The timing of my revival may be totally arbitrary in the grand scheme of things. It might have more to do with the limit on how long a person can stay in cryo (whether that's a physical limit, or my account running out of money with which to stay frozen, or some legal deadline at which they're forced to honor my contract) than with the state of the world at that time.
I don't assume that there would be a benevolent person waiting for me. There's just too much time between here and there and you never know what is going to happen. Maybe none of my friends sign up for cryo. Maybe there's only a 1 in 10 chance of successful revival and I'm the only one of my group who makes it.
So, I'm not convinced that the world will not have gone to hell or that I'll be revived by friends, but I think slavery is less likely.
↑ comment by DanArmak · 2012-09-30T15:18:58.031Z · LW(p) · GW(p)
Consider that you might reach such a future in your natural lifespan, without cryonics. Does this cause you to spend resources on maintaining a suicide button that would ensure information-theoretical erasure of yourself, so no sudden UFAI foom could get hold of you? If not, what is the difference?
Replies from: Decius↑ comment by Decius · 2012-10-01T21:51:53.537Z · LW(p) · GW(p)
It's not quite information-theoretical, but does a snub-nosed .357 count? I carry because, statistically, the safest thing to do as the attempted victim of a violent crime is to resist using a firearm.
Replies from: gjm, Jesper_Ostman, BrassLion↑ comment by gjm · 2012-10-04T09:30:42.607Z · LW(p) · GW(p)
[EDITED to add: oops, I completely misinterpreted what Decius wrote. What follows is therefore approximately 100% irrelevant. I'll leave it there, though, because I don't believe in trying to erase one's errors from history :-). Also: I fixed a small typo.]
Assuming this isn't a statistical joke like the one about always taking a bomb with you when you fly (because it's very unlikely that there'll be two bombs on a single plane) ... do you have reason to think that having-but-deliberately-not-using the firearm actually causes this alleged improved safety?
It seems like there are some very obvious ways in which that association could exist without the causal link -- e.g., people are more likely to be able to resist when the danger is less, people who are concerned enough about their safety to carry for that reason but sensible enough not to shoot are also more likely to take other measures that improve their safety, etc.
Replies from: Decius↑ comment by Decius · 2012-10-04T13:30:42.255Z · LW(p) · GW(p)
Who said anything about not using? I have never seen statistics regarding outcomes of victims of violent crime having a firearm but never drawing it.
There could be other confounding factors as well, like underreporting by people who are mugged, cooperate, and experience no injury; or a tendency among people who carry legally to know how to use their weapons better than criminals and typical people; or difficulty determining whether a dead victim resisted or not. But the statistics aren't even remotely vague: Among reported victims of violent crime, a larger percentage of those who cooperated with the criminal died than those who resisted the crime using a firearm.
Not that something already known would be able to prevent a post-singularity hostile AI from accomplishing the goals it has, much less a firearm that has about as long an effective range when fired as when performing a lunging swing.
Replies from: gjm↑ comment by Jesper_Ostman · 2012-10-03T22:54:03.011Z · LW(p) · GW(p)
Interesting. Do you have a source on that?
Replies from: Decius↑ comment by shokwave · 2012-09-30T06:38:08.576Z · LW(p) · GW(p)
Read this hypothetical objection:
Some optimistic future scenarios speculate that we might be able to revive even those who don't cryopreserve (current cloning techniques on preserved remnants can recreate genetic phenotypes; some sort of simulation on records of your behaviour might be able to recreate your behavioural phenotype, and so on for every part that makes up you). That applies to the pessimistic future scenarios too: if you don't sign up for cryo, you'll be taking a risk that the future is hellish as well.
It would be extremely surprising if our current or traditional death ceremonies are the optimal minimisation of that risk. Almost certainly, we should be trying to minimise the risk further. Cremation, destruction of records pertaining to ourselves, erasure of Facebook profile, planting deliberately false information, and other such tactics should be considered.
Does this objection strike you as reasonable, or unreasonable?
Replies from: Epiphany↑ comment by Epiphany · 2012-09-30T07:07:41.937Z · LW(p) · GW(p)
If a copy of me were made, would this instance of me experience the next instance's experiences? I don't think so. As far as whether I could suffer from being re-created, I doubt that. However, I'd be very concerned about future instances of me being abused, if I thought there were an interest in reviving me. If I was famous, I'd be concerned that fans might want to make a bunch of clones of me, and I'd be concerned about how the clones were treated. Unless I had reason to think that A. People are going to reconstruct me against my will and B. The people reconstructing me would do something unethical with the clones, I wouldn't worry about it.
Why do you ask?
Replies from: lsparrish, shokwave↑ comment by lsparrish · 2012-09-30T14:07:59.697Z · LW(p) · GW(p)
From the perspective that you are your instances, it matters because if you fear being abused, you would fear any instance of you being abused. You wouldn't want to walk into an atomically precise copying machine with the knowledge that the copy is going to be used in cruel experiments.
The question becomes, where do you draw the line? Is a rough copy of you based on your facebook posts and whatever advanced AI can extrapolate from that just as worthy of you anticipating their experiences? Or perhaps you should fear ending up as that person on a relative scale depending on how similar it is -- if it is 50% similar, have 50% of the fear, etc. Fear and other emotions don't have to be a simple binary relationship, after all.
Empathy is an emotion that seems to be distinctly different (meaning, it feels different and probably happens differently on a biochemical and neurological level) from the emotion of actually anticipating being the individual. So while, yes, I would feel empathy for any abused clones regardless of the degree to which I fear waking up as them, empathy would not be the only emotion, because I believe I would be the clones. Any information indicating that clones might be abused in the future becomes much nearer to me, and I am more likely to act on it, if I think it likely that I will actually be one of them.
Thus, if you think the future is bad in a way that seriously prohibits wanting to wake up from cryonics, it might be smart to be concerned for the safety of clones who could be you anyway. Since you haven't stated a desire to be cloned, being cloned against your will is more likely to be carried out by unethical people than by ethical ones - so even if the prospect is fairly remote, it is more worrying than the prospect with cryonics, where caring people must keep you frozen and do have your consent to bring you back.
Replies from: mwengler↑ comment by mwengler · 2012-09-30T15:19:29.350Z · LW(p) · GW(p)
I fear a rough copy of myself made from my facebook posts (and lesswrong comments) being tortured about as much as I fear an intricate 3d sculpture of me being made and then being used as a target in a gun range. Is that really just me?
Replies from: shokwave, lsparrish↑ comment by lsparrish · 2012-09-30T16:18:50.629Z · LW(p) · GW(p)
Hmm. How do you feel about the prospect of an atomically precise copy of yourself being used as a living target at a gun range?
Replies from: mwengler↑ comment by mwengler · 2012-10-01T10:24:30.040Z · LW(p) · GW(p)
Is my corpse an atomically precise copy of myself? I wouldn't care much about that.
If you mean the classic sci-fi picture of an exact and recent clone of myself, I would certainly prefer that a copy of myself be used at a gun range than that a copy of my daughters or a few of my relatives be used. And certainly prefer that a copy of myself be used than that the single original of any of my relatives be used.
It is an ironic thing that a rationalist discussion of values comes down to questions like "how do you feel about..." Personally, much of my rational effort around values is to make choices that go against some or even many of my feelings, presumably to get at values that I think are more important. I highly value not being fooled by appearances, I highly value minimizing the extent to which I succumb to "cargo cult" reasoning. I'm not sure how much identifying myself with a copy of myself is valid (whatever that means in this context) and how much is cargo cult. But I'm pretty sure identifying myself with my corpse or a caricature of myself is cargo cult.
Replies from: lsparrish↑ comment by lsparrish · 2012-10-04T01:37:27.719Z · LW(p) · GW(p)
If you undergo dementia or some other neurodegenerative condition for a few years, it will turn you into a very different person. A "rough" copy made from information mined from the internet could perhaps be much closer to the healthy version of the person than the version kept alive in a nursing home in their later years. Because of this argument, I don't see how you can come to the conclusion that identifying with a "caricature" is cargo-cult by definition.
Your corpse is definitely not an atomically precise copy of yourself. Corpses are subject to extensive structural damage, which makes their state of unconsciousness irreversible. If this were not the case, we would neither call them corpses nor consider it unreasonable to identify with them.
A more interesting grey area would be if you were subjected to cryonics or plastination, copied while in a completely ametabolic and unconscious state, and then reanimated. You could look across at a plastic-embedded or frozen copy of yourself and not even know if they are the original. In fact, there could be many of them, implying that you are probably not the original unless you can obtain information otherwise.
If you value your original self sufficiently, that seems to imply that if, say, you wake up in a room with 99 other versions of you still in stasis and have a choice to (a) destroy them all and live or (b) commit suicide and reanimate them all, you should pick suicide in advance, so that it becomes 99% likely that your copy will pick that option.
On the other hand, if you don't care whether you are the original or a copy, you can destroy all those nonsentient copies (99% chance of including the original) without worrying about it.
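A toy simulation makes the 99-copies logic concrete (a minimal sketch under my own framing of the thought experiment: one of the 100 versions wakes at random, and we score only whether the original survives):

```python
import random

# Policy "destroy": whoever wakes destroys the 99 in stasis and lives.
# Policy "suicide": whoever wakes suicides and reanimates the other 99.

def original_survives(policy):
    waker = random.randrange(100)   # index 0 is the original
    if policy == "destroy":
        return waker == 0           # original lives only if it happened to wake
    else:                           # "suicide"
        return waker != 0           # original lives unless it was the waker

for policy in ("destroy", "suicide"):
    trials = 100_000
    rate = sum(original_survives(policy) for _ in range(trials)) / trials
    print(f"{policy}: original survives in ~{rate:.0%} of runs")
```

Under "destroy" the original survives about 1% of the time; under the precommitted "suicide" policy, about 99%, which is where the 99% figure above comes from.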
↑ comment by shokwave · 2012-09-30T18:58:09.803Z · LW(p) · GW(p)
I've had success explaining cryonics to people by using the "reconstruct" (succinct term, thank you!) spectrum - on one end, maybe reconstruction is easy, and we'll all get to live forever. On the other end, maybe it's impossible, and you simply cannot spend more than a few days de-animated before being lost forever. In the future, there will be scientists who do research and experiments and actually determine where on the spectrum the technology actually is. Cryonics is just a particular corpse preservation method that prepares for reconstruction being difficult.
More succinctly, cryonics is trying to reach the future, and this hypothetical objection is trying to avoid the future.
I asked because it seemed that, if fear of a bad future is a reason not to try harder to reach the future, it should also be a reason to try harder to avoid the future, and I was curious to examine this fear of the future.
comment by Epiphany · 2012-09-30T21:17:01.324Z · LW(p) · GW(p)
Unexpected consequences (current objection):
There must be psychological consequences (waking up in a world where your skills are all useless and everything has changed), environmental consequences (a bunch of people being frozen aren't going to have zero environmental impact), medical consequences (revival may not go as expected, there are probably risks) and possibly completely unexpected consequences (akin to the tumors x-ray technicians got because they were testing the x-ray machines on their hands every day to make sure they were warmed up).
Can anyone recommend good reading materials on these?
Replies from: Eneasz, Bruno_Coelho↑ comment by Eneasz · 2012-10-01T16:04:23.863Z · LW(p) · GW(p)
I don't have reading material on these, but there are unexpected consequences to anything we do. Should we stop using electricity because there could be unexpected consequences to it?
More to the point, most of these are possible consequences of simply continuing to live. Two centuries from now it's likely most of your current skills will be useless and everything will have changed. Living for an extra century will not have zero environmental impact. Etc. Is the best solution to these problems personal annihilation? Is that even in the top ten? There are better ways of solving these problems than death.
↑ comment by Bruno_Coelho · 2012-10-02T14:23:53.115Z · LW(p) · GW(p)
If the chance of death is high, why would unexpected consequences be an objection?
The main reason not to sign up is the low probability of success, in cases where people already know what cryo is and have the money. If they will die anyway, losing money now makes cryo a bad investment.
comment by duckduckMOO · 2012-09-30T13:51:01.950Z · LW(p) · GW(p)
If you wake up not too severely damaged, and in a decent environment (possibly with all kinds of wonderful improvements) where your life will be better than non-existence, you will have a lot more time for living. If not, you can always kill yourself.
If you get yourself frozen only for revival after major life-extension breakthroughs (as well as repair of freezing damage, etc.), the important possibilities are the probability of a happy revival versus the probability of an unhappy revival where you can't kill yourself.
I'm not aware of there ever having been any actual supervillains. I'm aware people are enslaved and forbidden from killing themselves, but almost never are they actually prevented from doing so. Who cares about their slaves little enough to forbid them from killing themselves, but enough to diligently enforce the rule (unless you are short on slaves, which anyone with the resources to revive you in order to enslave you wouldn't be)?
Having to kill yourself would suck, but it puts a comparatively low cap on your maximum loss in the vast majority of scenarios. I'm not sure it can even be called a loss, as it replaces having to die of old age or illness in the scenario where you don't freeze yourself.
Also you are probably underestimating the extent to which advancements over the years would improve your quality of life.
While the possibility of the bad scenarios does reduce the expected value of freezing, it's on a different order of magnitude from the potential benefits, because the vast majority of the bad scenarios can be opted out of.
Replies from: Viliam_Bur, prase, NancyLebovitz, Dolores1984, Epiphany↑ comment by Viliam_Bur · 2012-10-01T06:34:50.780Z · LW(p) · GW(p)
I'm not aware of there ever having been any actual supervillains. I'm aware people are enslaved and forbidden from killing themselves but almost never are they actually prevented from doing so.
One thing behaviorally close to actual supervillains is bureaucracy.
So the realistic anti-utopian scenario is that you are revived by employees of some future Department of Historical Care. Personally, those people don't care about you at all; you are just another prehistoric ape to them. All they want is to collect their salaries, with as little work as possible.
They don't care about the costs of your revival, because those costs are paid by the state - by the taxes of citizens who get some epsilon of warm fuzzies for saving prehistoric people. They don't care about your pain, because emotionally you mean nothing to them; they don't even emotionally consider you human. But they do care about your life, because their salaries depend on how many revived prehistoric people survive. So their highest priority is to prevent your suicide, and they can use the technology of the future for this; for example, they can prevent you from moving at all and feed you intravenously.
People outside the Department of Historical Care will not save you, because they honestly don't care about you. They get some warm fuzzies from knowing that you are alive (and from imagining how grateful you must be), but they have no desire to meet you personally. It's the future, and they have things much more interesting than you; for example, genetically engineered pokemons, artificial intelligences, etc.
Replies from: NancyLebovitz↑ comment by NancyLebovitz · 2012-10-01T17:16:45.562Z · LW(p) · GW(p)
And you might have to keep replaying the more interesting (that is, painful) parts of history.
↑ comment by prase · 2012-09-30T16:43:34.726Z · LW(p) · GW(p)
If not you can always kill yourself.
Not if you don't have the courage to do such things. Not if you wake up damaged and unable to access or use the means of suicide. Not if you wake up as the subject of medical experiments. Being a slave isn't the only horrible outcome that could happen.
↑ comment by NancyLebovitz · 2012-09-30T18:22:59.125Z · LW(p) · GW(p)
Prisoners are generally prevented from killing themselves, as are the insane. What if the society of the future simply thinks it's wrong for you to kill yourself and won't let you do it?
There's a general category of waking up to find yourself in a low-status situation. This would include slavery, torture, imprisonment (we don't know what they'll consider to be a crime), and the one I think is most likely -- that you'll simply never be able to catch up. If you're going to be you, you're going to have a mind which was shaped by very different circumstances from the people in the future. Life might be well worth living or intermittently well worth living, but you will never be a full member of the society.
Is there any science fiction about fairly distinct cohorts of people from different times in a high-longevity and/or cryonics society?
↑ comment by Dolores1984 · 2012-09-30T20:12:14.369Z · LW(p) · GW(p)
If you're revived via whole brain emulation (dramatically easier, and thus more likely, than trying to convert a hundred kilos of flaccid, poisoned cell edifices into a living person), then you could easily be prevented from killing yourself.
That said, whole brain emulation ought to be experimentally feasible in, what, fifteen years? At a consumer price point in 40? (Assuming the general trend of Moore's law stays constant.) That's little enough time that I think the probability of such a dystopian future is not incredibly large. Especially since Alcor et al. can move around if the laws start to get draconian. So it doesn't just require an evil empire - it requires a global evil empire.
The real risk is that Alcor will fold before that happens, and (for some reason) won't plastinate the brains they have on ice. In which case, you're back in the same boat you started in.
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-10-01T20:26:57.182Z · LW(p) · GW(p)
That said, whole brain emulation ought to be experimentally feasible, in what, fifteen years?
Maybe, but scanning a vitrified brain with such a high resolution that a copy would feel more or less like the same person might take a bit longer.
Replies from: Dolores1984↑ comment by Dolores1984 · 2012-10-01T21:01:05.985Z · LW(p) · GW(p)
Most of the sensible people seem to be saying that the relevant neural features can be observed at a 5 nm x 5 nm x 5 nm spatial resolution, if supplemented with some gross immunostaining to record specific gene expressions and chemical concentrations. We already have SEM setups that can scan vitrified tissue at around that resolution; they're just (several) orders of magnitude too slow. Outfitting them to do immunostaining and optical scanning would be relatively trivial. Since multi-beam SEMs are expected to dramatically increase the scan rate in the next couple of years, and since you could get excellent economies of scale from scanning on parallel machines, I do not expect the scanners themselves to be the bottleneck technology.
The other possible bottleneck is the actual neuroscience, since we've got a number of blind spots in the details of how large-scale neural machinery operates. We don't know all the factors we would need to stain for, we don't know all of the details of how synaptic morphology correlates with statistical behavior, and we don't know how much detail we need in our neural models to preserve the integrity of the whole (though we have some solid guesses). We also do not, to the best of my knowledge, have reliable computational models of glial cells at this point. There are also a few factors of questionable importance, like passive neurotransmitter diffusion and electrical induction, that need further study to decide how (if at all) to account for them in our models. However, progress in this area is very rapid. The Blue Brain project alone has made extremely strong progress in just a few years. I would be surprised if it took more than fifteen years to solve the remaining open questions.
Large-scale image processing and data analytics, for parsing the scan images, is a sufficiently mature science that it's not my primary point of concern. What could really screw it up is if Moore's law craps out in ten years, as Gordon Moore has predicted, and none of the replacement technologies are advanced enough to pick up the slack.
↑ comment by Epiphany · 2012-09-30T22:40:13.905Z · LW(p) · GW(p)
If not you can always kill yourself.
WRONG! If they're able to re-animate preserved people, what makes you think they won't be able to prevent suicide?
What if they don't believe in a right to die? There's no guarantee that you'll be able to die, if you wake up in a world where cryo revival actually worked.
Or, if I woke up disabled or in an R2D2 robot body, how would I actually go about killing myself? I mean, you can say "roll off a cliff," but what if there are no cliffs nearby, or the thing is made out of titanium?
There is no guarantee I'd be able to die in that scenario.
Also you are probably underestimating the extent to which advancements over the years would improve your quality of life.
I think you're underestimating the extent to which advancements may cause catastrophes. We made all these chemicals and machines, and now the environment is being destroyed. We made x-ray machines; the first techs to use them would x-ray their own hands each morning to see if the machine was on - you can imagine what resulted. We've learned a lot about science in the last 100 years, great, but now we have nuclear bombs. We may make AI, and there are about 10,000 ways for that to go wrong. I don't assume technological advancement will lead to a utopia. I hope it does. But to assume that it will is a bad idea. I'd be very interested to see a thorough and well-thought-out prediction of whether we'll have a utopia or dystopia in the future, or something that's neither. I'm really not sure.
Replies from: GeraldMonroe
↑ comment by GeraldMonroe · 2012-10-01T23:45:14.936Z · LW(p) · GW(p)
Worse: a sensible system would in fact not ONLY give you a "robot body made of titanium" but would maintain multiple backup copies in vaults (and for security reasons, not all of the physical vault locations would be known to you, or anyone) and would use systems to constantly stream updated memory-state data to these backup records (stored as incremental backups, of course).
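For what "stored as incremental backups" could mean in practice, here is a minimal sketch of the standard idea: hash fixed-size chunks of the current state and ship only the chunks that changed since the last snapshot. The chunk size and hashing scheme are illustrative choices, not a description of any real system:

```python
# A sketch of incremental backup: compare chunk hashes against the
# previous snapshot and stream only the chunks that differ.
import hashlib

def chunk_hashes(state: bytes, chunk_size: int = 4096) -> list[str]:
    return [hashlib.sha256(state[i:i + chunk_size]).hexdigest()
            for i in range(0, len(state), chunk_size)]

def incremental_delta(old: bytes, new: bytes, chunk_size: int = 4096):
    """Return (chunk_index, chunk_bytes) pairs that differ from `old`."""
    old_h = chunk_hashes(old, chunk_size)
    delta = []
    for i, h in enumerate(chunk_hashes(new, chunk_size)):
        if i >= len(old_h) or h != old_h[i]:
            delta.append((i, new[i * chunk_size:(i + 1) * chunk_size]))
    return delta  # stream only this to each backup vault
```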
More than likely, the outcome for "successfully" committing suicide would be to wake up again and face some form of negative consequences for your actions. Suicide could actually be prosecuted as a crime.
comment by Curiouskid · 2012-10-04T03:12:01.310Z · LW(p) · GW(p)
Great post, Epiphany. I'd like to volunteer myself as another guinea pig, but with one caveat. Rather than having this experiment end with just two people's opinions being changed, I'd like to create an argument map of the best arguments on cryonics, so that more people can be persuaded by the best arguments we can aggregate.
There are a lot of argument mapping tools out there, but my favorite one isn't actually intended to be used as an argument map. I created a rough sketch of an argument map on cryonics.
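For concreteness, the core data structure behind such a map is small: claims as nodes, with support and objection edges between them. A minimal sketch (the names are illustrative, not any existing tool's API):

```python
# A minimal argument-map structure: claims as nodes, with support
# and objection edges. All names here are illustrative only.
from dataclasses import dataclass, field

@dataclass
class Claim:
    text: str
    supports: list["Claim"] = field(default_factory=list)
    objections: list["Claim"] = field(default_factory=list)

root = Claim("Signing up for cryonics is worth it.")
objection = Claim("The money would save more lives given to charity.")
rebuttal = Claim("See 'Years saved: Cryonics vs VillageReach'.")
objection.objections.append(rebuttal)  # a rebuttal objects to the objection
root.objections.append(objection)
```

Everything a fancier tool adds (sources, weights, collaborative editing) hangs off this skeleton.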
Replies from: Epiphany
↑ comment by Epiphany · 2012-10-04T03:53:35.241Z · LW(p) · GW(p)
I am planning to put a list of my objections into the OP, with links and whether they're resolved. So there will be some organization to it.
I'm not sure that LW wants more guinea pigs; some feel that this is a waste of time - you can tell by the karma on my thread that this isn't really popular. Thanks for the compliment, though.
Also, I am not expecting to be convinced. I'm actually leaning toward "no" right now, as surprising as I bet some think that is. I'll explain that when I make my next run of responding to comments again.
You know, I think we should argument-map the whole friggin site. Except that I WOULD NOT want to see that put onto someone else's software. They'd have control of the data. I'd prefer to see it in open-source software, editable by the world, and copyright-free so anyone can make a backup without a problem.
comment by DataPacRat · 2012-09-30T06:08:39.312Z · LW(p) · GW(p)
If I may ask you something; as you write out your various objections here, if you were to consider, on the one hand, the risk of whatever unpleasantness arises from that objection, and on the other hand, that if you don't take that risk, you will be permanently and irrevocably dead... do you really feel that you'd rather be dead than take that particular risk?
Replies from: Epiphany
↑ comment by Epiphany · 2012-09-30T06:59:03.794Z · LW(p) · GW(p)
To me, death is merely non-existence. I won't suffer after that. I won't know that I'm dead.
Replies from: DataPacRat, Jayson_Virissimo
↑ comment by DataPacRat · 2012-09-30T07:14:31.745Z · LW(p) · GW(p)
After you're dead, no; but the you of this moment can look forward at the various possible futures, and make choices that make some of those futures more likely than others. One of your objections was to being put in an R2D2 body - so imagine that you, right now, have a choice to make. One choice is that you end up permanently dead. The other choice offers you a chance at life, but with, say, a 5% chance of being put into an R2D2 body.
Are you so certain that such an existence is so terrible, that even a remote chance of it is a worse fate than total oblivion?
Replies from: Epiphany
↑ comment by Epiphany · 2012-09-30T07:54:11.903Z · LW(p) · GW(p)
Ok, I won't be able to speak, enjoy food, express emotion, have sex or do any of the things I normally do with my hands. I would be severely disabled. That would be almost like being a paraplegic but with wheels. And I might not be able to see or hear well (does R2D2 have the ability to enjoy HD quality or is it more like recognizable blurs and discernible murmurs?).
What the hell would I realistically do with myself if I couldn't even communicate? I find meaning in doing constructive projects. Where would I find meaning in a body like R2D2? Without the ability to experience even sensory pleasures, I would become so bored. Imagine staring at a wall for a whole week. That's how I think it would feel to be trapped in an R2D2 body - but maybe I'd be stuck like that for years.
If you've looked into the concept of "flow" (from the book "Flow: The Psychology of Optimal Experience"), you'll know that not being able to do activities that provide an appropriate challenge might mean you aren't able to be happy. Gifted children, for instance, develop learned helplessness in schooling environments that go at a much slower pace than they do. I am not satisfied by games - I couldn't just zoom around on my wheels in patterns and be amused. I am not a gnat, I'm a human being, and I need fulfillment. Boredom is a formidable affliction which I don't dare underestimate.
I think I have to classify the R2D2 body as life support, and say pull the plug or put me back in cryo. I'd rather not just wheel around in little circles while my brain tortures me because of boredom. No R2D2 body.
Good try though.
Replies from: lsparrish, MixedNuts, NancyLebovitz
↑ comment by lsparrish · 2012-09-30T15:10:46.928Z · LW(p) · GW(p)
I'd rate the R2D2 outcome much lower than 5%, at least as far as your conscious experience goes. Your brain might technically be kept in a vault or canister somewhere, but there would be extremely good virtual-reality linkups to the brain. Look how good movies are getting with current VR. They have to simulate physics and human anatomy in considerable detail, but often take shortcuts to make the characters cuter and sexier. This is much more likely to be what you have to look forward to. Weirder than you're used to, but much more appealing than what you are imagining here. And that's all just talking about a possible non-uploaded existence as a meat-brain. If you were to be uploaded, the possibility of your communication with your environment being limited is even lower.
Even if you were stuck in an R2D2 body or something for years on end with no high-end virtual reality, it is doubtful that you would experience boredom or depression. Boredom and depression are emotional states with particular neurological characteristics. These can be disrupted (even now) by drugs. Furthermore, it seems likely that boredom is dependent on hormonal and/or electrical responses from the rest of the body. A brain by itself probably could not feel boredom without significant prosthetic assistance.
The very notion of existing as a brain in a can means we've solved the problem of figuring out how to synthesize and deliver every chemical and stimulus the brain depends on. The delivery mechanism would be digitally regulated, and thus we could feel excitement, boredom, or any other emotion on demand -- perhaps even copying these sensations from healthy volunteers. That may not be an optimal human existence, but as an in-between state, while waiting on life support to be restored to a more optimal humanity, it does not seem likely to be unbearable.
For a pop-culture example, take the Cybermen from Dr. Who. (Ridiculous show with ridiculous premises, just using it to make a point.) Their emotions are turned off, but only because their bodies are total pieces of junk that can't support a brain with emotions. However we've seen that the emotions of the brain can in principle be turned back on again. Thus if you were to take away their tendency to be fanatical killing machines and replace it with something else (fanatical lab equipment manufacturers, say), since they can't feel pain it wouldn't be a bad thing to be a Cyberman for a few years while waiting to be transplanted into a non-stupid body.
Replies from: Epiphany
↑ comment by Epiphany · 2012-09-30T23:06:44.674Z · LW(p) · GW(p)
I could wake up in the Matrix... I don't know if I'd want that, even if it was designed to make me happy. I want meaning, and that requires having access to reality. I'll think about it.
drugs
Why would I want to do that? That is even worse. I am disgusted by the idea of having no ability to do anything of use, and even more disgusted by the idea that the solution to this situation is to drug me so that I can't properly care about the problem. If I'm not able to interact with reality, what is the point in existing?
it wouldn't be a bad thing to be a Cyberman for a few years
Three years, okay. But why bring me back at all then? Why not keep me frozen? If I can't have quality of life, I would prefer that.
Replies from: Dolores1984
↑ comment by Dolores1984 · 2012-10-01T04:19:00.598Z · LW(p) · GW(p)
I want meaning, this requires having access to reality. I'll think about it.
Does it? You can have other people in the simulation with you. People find a lot of meaning in companionship, even digitally mediated. People don't think a conversation with your mother is meaningless because it happens over VOIP. You could have lots of places to explore. Works of art. Things to learn. All meaningful things. You could play with the laws of physics. Find out what it feels like to turn gravity off one day and drift out of your apartment window.
If you wake up one morning in your house, go make a cup of coffee, breathe the fresh morning air, and go for a walk in the park, does it really matter if the park doesn't really exist? How much of your actual enjoyment of the process derives from the knowledge that the park is 'real'? It's not something I normally even consider.
Replies from: Epiphany
↑ comment by Epiphany · 2012-10-01T04:47:47.987Z · LW(p) · GW(p)
Why is reality important to me? Hmm. Because without access to reality, you always have to wonder what's happening around you. Wouldn't there come a point where you went, "HOLY CRAP, someone could be sneaking up behind me right now and I'd never know"?
Do you trust the outside world enough not to worry about that?
I don't.
I'd eventually spill coffee on my computer or something and it would dawn on me "What if they spill coffee on my brain?"
I'd want to speak to the outside world. We'd probably be able to access them on the internet or some such. Things would be happening there. I would know about them. Political problems, disasters. Things I couldn't get involved in.
And if not, then I'd be left to wonder. What's going on in the outside world? Are things okay?
Imagine this: Imagine being cut off from the news. Not knowing what's going on in the world.
Imagine realizing that you are asleep. Not knowing whether there's a burglar in your house, whether it's on fire. Not being able to wake up.
Imagine your friends all have the same problem. You have no access to reality, so there's no way you can help them. If something affects them from the outside world, you can give them a hug. A virtual hug. But both of you know that there's nothing you can do.
With friendship, one of the things that creates bonds is knowing that if I'm in trouble at 3:00 am, I can call my friend. If all the problems are happening in a world that neither of you has access to, if you're stuck inside a great big game where nothing can hurt you for real, what basis is there for friendship? What would companionship be good for?
You'll be like a couple of children - helpless and living in a fantasy.
Why are you learning rationality if you don't see value in influencing reality?
Replies from: Dolores1984
↑ comment by Dolores1984 · 2012-10-01T05:38:11.921Z · LW(p) · GW(p)
Well, there's no reason to think you'd be completely isolated from top level reality. Internet access is very probable. Likely the ability to rent physical bodies. Make phone calls. That sort of thing. You could still get involved in most of the ways you do now. You could talk to people about it, get a job and donate money to various causes. Sign contracts, make legal arrangements to keep yourself safe. That sort of thing.
With friendship, one of the things that creates bonds is knowing that if I'm in trouble at 3:00 am, I can call my friend. If all the problems are happening in a world that neither of you has access to, if you're stuck inside a great big game where nothing can hurt you for real, what basis is there for friendship? What would companionship be good for?
Wait, you only value friendship in so far as it directly aids you? I hate to be the bearer of bad news, but if that's actually true, then you might be a sociopath.
Why are you learning rationality if you don't see value in influencing reality?
Rationality is about maximizing your values. I happen to think that most of my values can be most effectively fulfilled in a virtual environment. If the majority of humanity winds up voluntarily living inside a comfortable, interesting, social, novel Matrix environment, I don't think that's a bad future. It would certainly solve the over-crowding problem, for quite a while at least.
Replies from: Epiphany
↑ comment by Epiphany · 2012-10-01T06:13:09.369Z · LW(p) · GW(p)
Well, there's no reason to think you'd be completely isolated from top level reality.
Hmm. I hadn't thought very much about blends of reality and virtual reality like that. I've encountered that idea but hadn't really thought about it.
you might be a sociopath.
You took one example way too far. That wasn't intended as an essay on my views of friendship. The words "one of the things that creates bonds" should have been a big hint that I think there's more to friendship than that. Why did you suddenly start wondering if I'm a sociopath? That seems paranoid, or it suggests that I did something unexpected.
Rationality is about maximizing your values.
Okay, but the reason rationality has a special ability to help you get more of what you want is that it puts you in touch with reality. Only when you're in touch with reality can you understand it well enough to make it do things you want. In a simulation, you don't need to know the rules of reality, or how to tell the difference between true and false. You can just press a button and make the sun revolve around the earth, turn off laws of physics like gravity, or cause all the calculators to do 1+1 = 3.
In a virtual world where you can get whatever you want by pressing a button, what value would rationality have?
Replies from: NancyLebovitz, Dolores1984, CronoDAS, Dolores1984
↑ comment by NancyLebovitz · 2012-10-01T17:06:47.002Z · LW(p) · GW(p)
In a virtual world where you can get whatever you want by pressing a button, what value would rationality have?
You still need to figure out what you want.
Replies from: wedrifid
↑ comment by wedrifid · 2012-10-02T05:50:59.392Z · LW(p) · GW(p)
In a virtual world where you can get whatever you want by pressing a button, what value would rationality have?
You still need to figure out what you want.
Unless the virtual world is capable of figuring out what you want itself at least as well as you can. In which case bravo, press the button, you win.
↑ comment by Dolores1984 · 2012-10-01T18:29:41.924Z · LW(p) · GW(p)
Additionally, reality and virtual reality can get a lot fuzzier than that. If AR glasses become popular, and a protocol exists to swap information between them to allow more seamless AR content integration, you could grab all the feeds coming in from a given location, reconstruct them into a virtual environment, and insert yourself into that environment, which would update with the real world in real time. People wearing glasses could see you as though you were there, and vice versa. If you rented a telepresence robot, it would prevent people from walking through you, and allow you to manipulate objects, shake hands, that sort of thing. The robot would simply be replaced by a rendering of you in the glasses. Furthermore, you could step from that real environment seamlessly into an entirely artificial environment, and back again, and overlay virtual content onto the real world. I suspect that in the next twenty years, the line between reality and virtual reality is going to get really fuzzy, even for non-uploads.
↑ comment by CronoDAS · 2012-10-01T06:44:45.004Z · LW(p) · GW(p)
In a simulation, you don't need to know the rules of reality, or how to tell the difference between true and false. You can just press a button and make the sun revolve around the earth, turn off laws of physics like gravity, or cause all the calculators to do 1+1 = 3.
Try doing that in World of Warcraft, and you'll find your account canceled.
↑ comment by Dolores1984 · 2012-10-01T06:28:18.728Z · LW(p) · GW(p)
The words "one of the things that creates bonds" should have been a big hint that I think there's more to friendship than that. Why did you suddenly start wondering if I'm a sociopath? That seems paranoid, or it suggests that I did something unexpected.
Well, then there's your answer to the question 'what is friendship good for' - whatever other value you place on friendship that makes you neurotypical. I was just trying to point out that that line of reasoning was silly.
Okay, but the reason why rationality has a special ability to help you get more of what you want is because it puts you in touch with reality. Only when you're in touch with reality can you understand it enough to make reality do things you want. In a simulation, you don't need to know the rules of reality, or how to tell the difference between true and false. You can just press a button and make the sun revolve around the earth, turn off laws of physics like gravity, or cause all the calculators to do 1+1 = 3. In a virtual world where you can get whatever you want by pressing a button, what value would rationality have?
Well, you have to get to that point, for starters. And, yes, you do need some level of involvement with top-level reality. To pay for your server space, if nothing else. Virtual environments permit a big subset of life (play, communication, learning, etc.), often more efficiently than real life, with a few of the really horrifying sharp edges rounded off, and some additional possibilities added.
There are still challenges to that sort of living, both those imposed by yourself, and those imposed by ideas you encounter and by your interactions with other people. Rationality still has value, for overcoming these sorts of obstacles, even if you're not in imminent danger of dying all the time.
↑ comment by MixedNuts · 2012-09-30T12:21:37.355Z · LW(p) · GW(p)
You're only expressing personal preferences, but I feel enormously uneasy hearing you say "Human beings need fulfillment, therefore I'd rather die than be like a paraplegic with wheels". People who can't speak, are fed through tubes, get around on wheels, express emotion in nonstandard ways, lack functioning hands, and can't have most forms of sex don't usually want to die, but when they're murdered by an "angel of mercy" serial killer you get people saying stuff like
How much life did she really take? All of the victims weren’t even living. They enjoyed nothing, experienced nothing and were going to die. The families at the time of death were relieved at the end of suffering . . . I know they had no right to play God . . . but when you decide how much of her life should be taken or lost to prison, shouldn’t it be equal to what was taken from their victims?
- Ken Wood, ex-husband of one of the Grand Rapids killers
You might be a very atypical person who'd prefer death to severe disability, but if you are, could you pepper statements like that with disclaimers? That's kind of a dangerous meme to reinforce.
Replies from: Epiphany, mwengler, Richard_Kennaway
↑ comment by Epiphany · 2012-09-30T23:08:34.976Z · LW(p) · GW(p)
If they want to live, I have no problem with it. I am not advocating killing them. I realize this is my personal preference. Feel better now?
I don't know what kind of disclaimer I would even add. "Don't become a serial killer because I said this?"
And I question whether it really is uncommon for people to choose death over severe disability. Why do so many people have living wills?
I don't think this is dangerous. What's dangerous is if the person doesn't realize that not everyone shares their personal preference.
↑ comment by mwengler · 2012-09-30T15:14:05.909Z · LW(p) · GW(p)
You might be a very atypical person who'd prefer death to severe disability, but if you are, could you pepper statements like that with disclaimers? That's kind of a dangerous meme to reinforce.
This idea that we need to censor ourselves when having honest discussions is a meme I would not like to see reinforced. I would propose to work against this meme by arguing emotionally and rationally against it rather than by trying to censor it.
You might be a very atypical person who'd prefer death to severe disability,
Your values are leaking all over your statements of fact. It is not plausible to me that you have not seen the idea of preferring death to severe disability in lots of places at this point in your rational career. From this I conclude your describing those who feel that way as "very atypical" is not only false, but badly motivated as well.
On the (in my estimation) extremely small chance that you really don't know how common the idea of preferring death to severe disability is, google "living will," "Kevorkian," and "Oregon suicide law" to get a jump start into the large world of people who discuss a myriad of versions and implications of this pretty common meme.
↑ comment by Richard_Kennaway · 2012-09-30T13:08:34.347Z · LW(p) · GW(p)
People who can't speak, are fed through tubes, get around on wheels, express emotion in nonstandard ways, lack functioning hands, and can't have most forms of sex, don't usually want to die
You might be a very atypical person who'd prefer death to severe disability, but if you are, could you pepper statements like that with disclaimers? That's kind of a dangerous meme to reinforce.
Tony Nicklinson's case is by no means the only one I've heard of. How do you know that these people are "very atypical" of the severely disabled?
Of course, the idea does lend itself to rationalisations, and according to this blog post, Ken Wood, who you quoted, is doing exactly that:
This view contrasts sharply with the reality that most of the patients killed were not particularly debilitated and perpetrator Cathy Wood’s own statement that “we did it because it was fun” (quoted in Cauffiel, 1992, p. 254).
↑ comment by NancyLebovitz · 2012-09-30T17:01:21.583Z · LW(p) · GW(p)
Nerd alert: R2D2 was able to talk with C3P0. Presumably under normal circumstances, there would be a robot culture. This doesn't address whether such a life would be satisfying for someone who was born human.
Replies from: Epiphany
↑ comment by Epiphany · 2012-09-30T22:59:46.167Z · LW(p) · GW(p)
I realize R2D2 could communicate with C3P0; however, I would not qualify that as "being able to speak". Needing an interpreter would leave me disabled in any situation where the interpreter was not present. Communicating in beeps is a disability, not an ability.
↑ comment by Jayson_Virissimo · 2012-09-30T07:13:38.949Z · LW(p) · GW(p)
To me, death is merely non-existence. I won't suffer after that. I won't know that I'm dead.
Are you claiming to be indifferent to death?
Replies from: Epiphany
↑ comment by Epiphany · 2012-09-30T07:30:47.583Z · LW(p) · GW(p)
That's a good question. I'm not exactly indifferent. I experienced a major illness where I not only learned what it was like to suffer so much that I understood that there were things worse than death, but had to face the possibility of death and make peace with it. If you haven't experienced something that caused it to sink in that there are experiences worse than non-existence, you'll probably be running on the assumption that living is an opportunity for enjoyment. This is biased. Life is also an opportunity for suffering.
And if you haven't faced death - I mean really faced it, felt like you were going to die - you probably wouldn't feel that there was anything gained from making peace with it. This is pretty easy to understand if you consider that thinking about death is really upsetting, and that if you're sick enough you'll be kind of motivated to think about it constantly, which is neither particularly useful nor pleasant. At the point where you realize "Gee, I'm thinking about this constantly and it isn't pleasant or useful," you realize the utility in making peace with death.
I haven't completely lost interest in life or anything. I have some very strong reasons to be here. But death itself just doesn't provoke the same terror it once did. I think what I mean by "peace" is not that I am indifferent - I do have a preference - it's that it's not provoking the same terror that it used to.
Replies from: MixedNuts, mwengler
↑ comment by MixedNuts · 2012-09-30T13:04:12.419Z · LW(p) · GW(p)
On making peace with death:
It's usually a good idea in the short term to make peace with what you can't change, but when it turns out you can change it, it sort of bites you in the ass. This is true of all forms of learned helplessness, not just accepting death. See what people do to cope with abuse: enormous gain while the abuse lasts, enormous handicap for getting back to life.
On life:
Usual phrasings treat life as neutral and death as insanely bad. I think more of death as neutral and life as insanely good. (Utility is relative, so it makes no difference.) It's not always (or even often) pleasant and enjoyable, but it's always interesting. That's my main problem with pain: it's bad that it hurts, but it's worse that it fills your mind and won't let you focus on something new. Obviously some lives are worse than death (torture, long-term sensory deprivation) and some are better (cake, books). What I'm trying to get at is that "neutral" in terms of pleasure and pain isn't "neutral" in terms of existence.
Life is full of things; taking in everything about even a tiny detail of a perfectly ordinary object is enough to send you into sensory overload, even before you abstract away curves and colors to categorize the pattern as a single solid object with a given shape, recognize this particular object as a tin, and start getting curious about what it contains and how it's made and why light reflects off metal that way and a thousand other things about this tin and your model of tins. I don't spend all my waking hours in childlike wonder over everything, though I can whenever I want if I'm not feeling horrible, but I constantly get tiny slices of novelty. That's why I value life so highly; the cake is just icing.
(All this sounded a lot less confused in my head.)
Replies from: Epiphany, CronoDAS
↑ comment by Epiphany · 2012-09-30T22:52:56.755Z · LW(p) · GW(p)
It's usually a good idea in the short term to make peace with what you can't change, but when it turns out you can change it, it sort of bites you in the ass.
Absence of terror is not biting me in the ass. I am so much stronger than I used to be. I came out of that illness in a state of bliss like I've never felt before - and I still feel it. It isn't just because I'm healthy, it's also because I learned so many tricks to reduce my stress. Such as not feeling terrified of death.
You are confusing lack of fear with learned helplessness. I didn't say that I let go of control. I said that I stopped feeling terrified. You're confusing what I said for something else. Ask yourself this: Does feeling helpless do anything to stop your terror? No. So why would it stop mine? That is not the method by which I learned not to be terrified.
You are also confusing "making peace with death" for "accepting death". Obviously, I don't accept it - otherwise, why would I have made this thread?
Please try and interpret what I am actually saying.
Usual phrasings treat life as neutral and death as insanely bad. I think more of death as neutral and life as insanely good.
I see them both as neutral, but I have a wish to make a difference in the world that burns and drives me to live, and I want to experience interacting with others like me (for reasons I don't totally understand - it is probably some kind of social instinct). For these reasons, I want to live. However, I separate my wishes from my view of whether life and death are good and bad -- for the same reason I separate desire from reality. Just because I want something out of life, doesn't mean that life will give it to me. I could get quite the opposite. Therefore, it doesn't make sense to me to see life as good or bad. Life is an opportunity for both enjoyment and suffering, and you never know which one you are going to receive next.
but it's always interesting
You have never been bored?
Also, have you considered that a life full of meaningless pleasure or nonconstructive senses of wonder will not be fulfilling? It sounds like you've never been through anything horrible enough that the possibility of deep and prolonged misery feels real to you. You are likely to be experiencing normalcy bias.
Replies from: MixedNuts
↑ comment by MixedNuts · 2012-10-01T09:25:33.258Z · LW(p) · GW(p)
Okay, then I'm mistaken about what you mean by "peace with death". What I thought it meant is "GAH I'M GOING TO DIE!! ...ehn, it wouldn't be so bad. At least all this crap would be over. And it's easier to just let it happen than do it myself. I just hope it doesn't take too long.". Obviously this isn't what you're getting at. So... you would have signed up for cryo to avoid both death and the fear of death, but just avoiding death isn't good enough, because death is only bad if life is good, which it might not be. Is that right?
I see them both as neutral
If you don't see death as inherently worse than life, I don't think I can convince you to sign up for cryo! (Well, any future in which you get revived is more likely to get you a good life than an inescapable bad one. And you could always ask Mike Darwin if you can state conditions for revival. But still, if you like anchovy and I like pineapple, I can't convince you to order the Hawaiian pizza.)
I could point to people in awful situations who get an overwhelming drive to survive. The archetypal example would be Saw, which is about people who don't like life all that much being forced to do very painful things to survive, thus revealing a preference for life. (It's a terrible example, because it's fictional and the characters have good things to return to, not just life. But you get the point.) But I don't have stats on how many switch to survivor mode versus how many just sort of give up or get suicidal, and even if most people did, you could just say "So? Many people are like you. I'm not."
You have never been bored?
Well... I get bored when I can't focus on the shiny, because there's something I can't block out (noise, pain, a droning teacher) or because I don't have enough room in my brain (and any writing material I might have) to comprehend the shiny. I also get bored when I can't find any new things, because there's nothing to prompt me to think about a new question (my trick was to start thinking about the psychology of boredom, but that's exhausted by now).
a life full of meaningless pleasure, or nonconstructive senses of wonder will not be fulfilling
Nonconstructive? Where do you think physicists come from?
More seriously, it wouldn't be very fulfilling, but I prefer feeling nothing but pleasure to feeling nothing at all.
It sounds like you've never been through anything horrible enough that the possibility for deep and prolonged misery feels real to you.
To steelman your argument, I might not remember now what it really felt like, and thus have lost any aliefs I acquired then. I distinctly remember thinking "I'm gonna eat up this plate of shit and demand seconds", but even that wasn't at the worst of times.
↑ comment by CronoDAS · 2012-10-01T09:56:48.141Z · LW(p) · GW(p)
I try to feed "not existing" into my brain's utility evaluation module (a.k.a. the "how would I feel if this happened" test) and all it returns is confusion. On the pleasure-pain hedonism scale, not existing doesn't evaluate to zero, it evaluates to "syntax error". I can easily calculate that my sudden death would make the world a worse place, but I can't figure out if I should prefer a world in which my mom had a genetically different child (who would then grow up to be a person that isn't "me") to one in which "I" exist.
Of all the possible worlds, why should I prefer those in which "I" came into existence to those in which someone else existed instead? Similarly, why should I prefer a distant future in which I'm resurrected from cryonic suspension to one in which I'm not?
Replies from: MixedNuts
↑ comment by MixedNuts · 2012-10-01T14:14:02.151Z · LW(p) · GW(p)
Agree that the utility of death is undefined on the hedonic scale. Still gotta measure it somehow.
why should I prefer those in which "I" came into existence to those in which someone else existed instead? Similarly, why should I prefer a distant future in which I'm resurrected from cryonic suspension to one in which I'm not?
This is not similar! The you algorithm is currently embodied and running. Making it stop running forever, whether by dropping a piano on your head or by neglecting to thaw you, kills you. I don't want people to die, and I don't think you do either.
I am indifferent between various people being born, and I think indifferent to how many are born, except insofar as they will lead good or bad lives. You don't seem to be a very happy person, so I wish you'd never been born. (Zing.) But we can't unbirth you, and clever tricks like pretending you already died and we have an opportunity to birth you again won't help.
Replies from: CronoDAS
↑ comment by CronoDAS · 2012-10-02T10:02:41.383Z · LW(p) · GW(p)
But we can't unbirth you, and clever tricks like pretending you already died and we have an opportunity to birth you again won't help.
I'm not so sure; "you've already died and we have an opportunity to birth you again" isn't very relevant to the question of whether one should commit suicide or not, but it does seem, to me, to be exactly what cryonics is offering.
It seems like most of the external effects of my death happen regardless of whether I'm revived from cryonic suspension or not. Suppose that a piano is about to fall on my head, but at the last minute, a wormhole opens up beneath me, and I end up in the middle of the Delta Quadrant surrounded by friendly, English-speaking aliens. ;) Now, in this (silly) scenario, I happen to be alive and well, but everyone else saw me get flattened by a falling piano and thinks I'm dead. As far as its effect on the rest of the Earth is concerned, this is basically just as bad as if the piano actually did hit me: my family and friends will still grieve, etc. And since all I get is confusion when I ask myself if it is better for me if I exist or not, I don't know if I have a reason to prefer "piano + wormhole" to "piano + splat". (Ignore the effect my presence will have on the aliens.)
Replies from: MixedNuts
↑ comment by MixedNuts · 2012-10-02T10:25:12.266Z · LW(p) · GW(p)
I prefer you not to die even if I don't know about it. I'm allowed to have preferences about events I can't observe and there's nothing you can do about it, so there.
Also, wouldn't people you care about be happier hoping you'll make it to the future than knowing you're dead and gone? Some of them might even be around when you get thawed.
↑ comment by mwengler · 2012-09-30T15:05:10.263Z · LW(p) · GW(p)
I told my 14-year-old daughter about cryo; she was amazed, incredulous. She said something like "those people don't believe in life after death?" I said "no, do you?" She said she did.
I realized there's a case to be made that, if there is a life after death, cryo would interfere with it.
I think there are a lot of reasons I don't buy into cryo. But one of them is that I think the extremely small chance of successful and happy revival is at a similar level to the extremely small chance that there is some sort of "next step" for us after death. If the people buying into cryo are making a sort of Pascal's wager with death, I feel like I'm the guy saying "but what if God is Buddha? What if God is Islamic?"
When it comes down to it, my estimate is cryo is 99.999+% likely to be meaningless, epsilon likely to result in a happy revival and epsilon likely to screw up my afterlife.
I'm a neurotypical straight male, but I suspect my reaction to cryo is similar to the caricature of female reactions. That's my intuition anyway.
Replies from: Dolores1984
↑ comment by Dolores1984 · 2012-09-30T20:25:45.517Z · LW(p) · GW(p)
Really? I wouldn't put the odds of revival under best-case practices any lower than maybe 10%. How on earth do you have such high confidence that WBE won't be perfected in the next couple of hundred years?
Replies from: mwengler
↑ comment by mwengler · 2012-10-01T10:36:41.721Z · LW(p) · GW(p)
I put the odds that we will have nanobots in our bloodstream killing cancer cells and regulating our chemistry to avoid a lot of metabolic problems, repair injuries, and so on, at a pretty high number. I put reasonable odds on our figuring out how to put a living human into some sort of suspended animation and bring them back to regular animation. I put the odds that, if we made our best effort to freeze a living person now without damage, we would eventually be able to revive them at maybe 10%. The odds that we will be able to revive a person frozen or otherwise preserved after they are legally dead - that's getting down towards time-machine-to-the-past odds, since I think you are freezing after important parts of the information are lost.
Conditioned on having the technical ability to revive the frozen, that might raise the odds of eventually being revived towards 10%. There are a lot of things that might keep revival from happening other than it not being possible technically.
Replies from: Dolores1984
↑ comment by Dolores1984 · 2012-10-01T17:38:16.703Z · LW(p) · GW(p)
If you're talking about people frozen after four-plus hours of room-temperature ischemia, I'd agree with you that the odds are not good. However, somebody with a standby team, perfused before ischemic clotting can set in and vitrified quickly, has a very good chance in my book. We've done SEM imaging of optimally vitrified dead tissue, and the structural preservation is extremely good. You can go in and count the pores on a dendrite. There simply isn't much information lost immediately after death, especially if you get the head in ice water quickly.
I also have quite a high confidence that we'll be seeing WBE technology in the next forty years (I'd wager at better than even odds that we'll see it in the next twenty). The component technologies already exist (and need only iterative improvements), and many of them are falling exponentially in cost. That combined with what I suspect will be a rather high demand when the potential reaches the public consciousness, is a pretty potent combination of forces.
So, for me, I lose most of my probability mass to the idea that, if you're vitrified now, something will happen to Alcor within 40 years or, more generally, that some civilization-disrupting event will occur in the same time frame. That your brain isn't preserved (under optimal conditions), or that we'll never figure out how to slice up and emulate a brain - these are not serious points of concern to me.
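Since this whole disagreement is really about multiplying a chain of conditional probabilities, it may help to make that structure explicit. A minimal sketch; every number below is an illustrative placeholder, not an estimate anyone in this thread has endorsed:

```python
# The revival estimate decomposes into conditional steps multiplied
# together. All probabilities below are illustrative placeholders.
steps = {
    "preserved well (standby team, prompt vitrification)": 0.5,
    "organization keeps you intact until revival is possible": 0.4,
    "scanning / WBE technology is actually developed": 0.5,
    "society chooses to revive you": 0.5,
}

p = 1.0
for step, prob in steps.items():
    p *= prob
    print(f"{step}: x{prob} -> cumulative {p:.3f}")
# Optimists and pessimists differ on the factors, not the arithmetic:
# mwengler's "epsilon" and Dolores1984's "10%" come from plugging very
# different values into the same kind of product.
```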
comment by advancedatheist · 2012-10-02T16:50:27.783Z · LW(p) · GW(p)
Women in recent decades have clamored to get into social spaces traditionally dominated by men and associated with male power and privilege, because these women want to raise their own status to male levels. For example, women want to get on Facebook's board of directors. By contrast, notice women's lack of interest in becoming guards in men's prisons.
Cryonics has a reputation (wrongly) as a rich white man's social space, so why haven't women wanted to colonize the cryonics community for status reasons? (For that matter, why haven't we heard calls for more "diversity" and "vibrancy" in the cryonics movement from minorities' spokespeople?) Instead cryonics acts like "female Kryptonite" much of the time.
You can go to Mike Darwin's and the de Wolfs' essay about the cryonics hostile-wife phenomenon to get about as much insight into the problem as I've read, but I don't see any obvious way to turn this around so that the cryonics movement becomes more women-friendly. I've wondered if we can find a good model by studying new American religious movements in the 19th century which attracted many women as early adopters, like Mormonism, Seventh-day Adventism and Christian Science. Women even played roles in founding Adventism and Christian Science (Ellen White and Mary Baker Eddy, respectively), which makes their examples even more interesting because Western culture has traditionally not accepted women as religious authority figures.
The cryonics idea also has a fiction problem: three novels I know of which portray cryonics or suspended animation positively all show a man who takes advantage of an underage girl as part of his plan for self-fulfillment, while disregarding the possibility that the girl, upon reaching her majority, might have other plans for her life. Will McIntosh's story "Bridesicle" in Asimov's magazine a few years ago shows an even more repulsive exploitation of women involving cryonics.
In other words, the cryonics movement would probably benefit by disavowing those kinds of stories and replacing them with ones which treat the womenfolk better.