Cryonics Questions

post by James_Miller · 2010-08-26T23:19:43.399Z · LW · GW · Legacy · 168 comments

Cryonics fills many with disgust, a cognitively dangerous emotion. To test whether a few of your possible cryonics objections are based on reason or on disgust, I list six non-cryonics questions. Answering yes to any one question indicates that, rationally, you shouldn't have the corresponding cryonics objection.

1.  You have a disease and will soon die unless you get an operation. With the operation you have a non-trivial but far from certain chance of living a long, healthy life. By some crazy coincidence, the operation costs exactly as much as cryonics does, and the only hospitals capable of performing the operation are next to cryonics facilities. Do you get the operation?

Answering yes to (1) means you shouldn’t object to cryonics because of costs or logistics.

2.  You have the same disease as in (1), but now the operation costs far more than you could ever obtain. Fortunately, you have exactly the right qualifications NASA is looking for in a spaceship commander. NASA will pay for the operation if, in return, you captain the ship should you survive the operation. The ship will travel close to the speed of light. The trip will subjectively take you a year, but when you return one hundred years will have passed on Earth. Do you get the operation?

Answering yes to (2) means you shouldn't object to cryonics because of the possibility of waking up in the far future.
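
As a side note on the arithmetic in (2): a 1:100 ratio of ship time to Earth time corresponds to a Lorentz factor of 100. A minimal sketch, assuming idealized constant-velocity travel and ignoring acceleration and deceleration phases:

  \gamma = \frac{\Delta t_{\text{Earth}}}{\Delta\tau_{\text{ship}}} = \frac{100\ \text{yr}}{1\ \text{yr}} = 100,
  \qquad
  \gamma = \frac{1}{\sqrt{1 - v^2/c^2}}
  \;\Rightarrow\;
  v = c\sqrt{1 - 1/\gamma^2} \approx 0.99995\,c

So the scenario's numbers are internally consistent with special relativity; nothing in the thought experiment depends on the exact figure.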

3.  Were you alive 20 years ago?

Answering yes to (3) means you have a relatively loose definition of what constitutes “you” and so you shouldn’t object to cryonics because you fear that the thing that would be revived wouldn’t be you.

4.  Do you believe that there is a reasonable chance that a friendly singularity will occur this century?   

Answering yes to (4) means you should think it possible that someone cryonically preserved would be revived this century. A friendly singularity would likely produce an AI that could think in one second all the thoughts it would take a billion scientists a billion years to contemplate. Given that bacteria seem to have mastered nanotechnology, it's hard to imagine that a billion scientists working for a billion years wouldn't have a reasonable chance of mastering it as well. Also, a friendly post-singularity AI would likely have enough respect for human life to be willing to revive those preserved.

5.  You somehow know that a singularity-causing intelligence explosion will occur tomorrow.  You also know that the building you are currently in is on fire.  You pull an alarm and observe everyone else safely leaving the building.  You realize that if you don’t leave you will fall unconscious, painlessly die, and have your brain incinerated.  Do you leave the building?

Answering yes to (5) means you probably shouldn’t abstain from cryonics because you fear being revived and then tortured.

6.  One minute from now a man pushes you to the ground, pulls out a long sword, presses the sword’s tip to your throat, and pledges to kill you.  You have one small chance at survival:  grab the sword’s sharp blade, thrust it away and then run.  But even with your best efforts you will still probably die.  Do you fight against death?

Answering yes to (6) means you can’t pretend that you don’t value your life enough to sign up for cryonics.

If you answered yes to all six questions but have not signed up for cryonics and do not intend to, please give your reasons in the comments. What other questions can you think of that provide a non-cryonics way of getting at cryonics objections?

Comments sorted by top scores.

comment by Scott Alexander (Yvain) · 2010-08-27T10:57:51.219Z · LW(p) · GW(p)

Some of these questions, like the one about running away from a fire, ignore the role of irrational motivation.

People, when confronted with an immediate threat to their lives, gain a strong desire to protect themselves. This has nothing to do with a rational evaluation of whether or not death is better than life. Even people who genuinely want to commit suicide have this problem, which is one reason so many of them try methods that are less effective but don't activate the self-defense system (like overdosing on pills instead of shooting themselves in the head). Perhaps even a suicidal person who'd entered the burning building because they planned to jump off the roof would still try to run out of the fire. So running away from a fire, or trying to stop a man threatening you with a sword, cannot be taken as proof of a genuine desire to live, only that any desire to die one might have is not as strong as one's self-protection instincts.

It is normal for people to have different motivations in different situations. When I see and smell pizza, I get a strong desire to eat the pizza; right now, not seeing or smelling pizza, I have no particular desire to eat pizza. The argument "If your life was in immediate danger, you would want it to be preserved; therefore, right now you should seek out ways to preserve your life in the future, whether you feel like it or not" is similar to the argument "If you were in front of a sizzling piece of pizza, you would want to eat it; therefore, right now you should seek out pizza and eat it, whether you feel like it or not".

Neither argument is inevitably wrong. But first you would have to prove that the urge comes from a reflectively stable value - something you "want to want", and not just from an impulse that you "want" but don't "want to want".

The empirical reason I haven't signed up for cryonics yet is that the idea of avoiding death doesn't have any immediate motivational impact on me, and the negatives of cryonics - weirdness, costs in time and money, negative affect of being trapped in a dystopia - do have motivational impact on me. I admit this is weird and not what I would have predicted about my motivations if I were considering them in the third person, but empirically, that's how things are.

I can use my willpower to overcome an irrational motivation or lack of motivation. But I only feel the need to do that in two cases. One, where I want to help other people (e.g., giving to charity even when I don't feel motivated to do so). And two, when I predict I will regret my decision later (e.g., I may overcome akrasia to do a difficult task now when I would prefer to procrastinate). The first reason doesn't really apply here, but the second is often brought out to support cryonics signup.

Many people who signal acceptance of death appear to genuinely go peacefully and happily - that is, even to the moment of dying they don't seem motivated to avoid death. If this is standard, then I can expect to go my entire life without regretting the choice not to sign up for cryonics at any moment. After I die, I will be dead, and not regretting anything. So I expect to go all of eternity without regretting a decision not to sign up for cryonics. This leaves me little reason to overcome my inherent dismotivation to get it.

Some have argued that, when I am dead, it will be a pity, because I would be having so much more fun if I were still alive, so I ought to be regretful even though I'm not physically capable of feeling the actual emotion. But this sounds too much like the arguments for a moral obligation to create all potential people, which lead to the Repugnant Conclusion and which I oppose in just about all other circumstances.

That's just what I've introspected as the empirical reasons I haven't signed up for cryonics. I'm still trying to decide if I should accept the argument. And I'm guessing that as I get older I might start feeling more motivation to cheat death, at which point I'd sign up. And there's a financial argument that if I'm going to sign up later, I might as well sign up now, though I haven't yet calculated the benefits.

But analogies to running away from a burning building shouldn't have anything to do with it.

Replies from: None, enoonsti, enoonsti
comment by [deleted] · 2010-08-29T19:47:07.227Z · LW(p) · GW(p)

Many people who signal acceptance of death appear to genuinely go peacefully and happily - that is, even to the moment of dying they don't seem motivated to avoid death. If this is standard, then I can expect to go my entire life without regretting the choice not to sign up for cryonics at any moment. After I die, I will be dead, and not regretting anything. So I expect to go all of eternity without regretting a decision not to sign up for cryonics. This leaves me little reason to overcome my inherent dismotivation to get it.

[Bold added myself]

Is it accurate to say what I bolded? I know technically it's true, but only because there isn't any you to be doing the regretting. Death isn't so much a state [like how I used to picture sitting in the ground for eternity] as simple non-existence [which is much harder to grasp, at least for me]. And if you have no real issue with not existing at a future point, why do you attempt to prolong your existence now? I don't mean for this to be rude; I'm just curious as to why you would want to keep yourself around now if you're not willing to stay around as long as life is still enjoyable.

On a fair note, I have not signed up for cryonics, but that's mostly because I'm a college student with a lack of serious income.

comment by enoonsti · 2010-08-28T18:39:47.037Z · LW(p) · GW(p)

By the way, I'm not here to troll, and I do have a serious question that doesn't necessarily have to do with cryonics. The goal of SIAI (LessWrong, etc.) is to learn about and possibly avoid a dystopian future. If you truly are worried about a dystopian future, then doesn't that serve as a vote of "no confidence" in these initiatives?

Admittedly, I haven't looked into your history, so that may be a "Well, duh" answer :)

Replies from: Yvain, Strange7, Perplexed
comment by Scott Alexander (Yvain) · 2010-08-29T07:50:51.411Z · LW(p) · GW(p)

I suppose it serves as a vote of less than infinite confidence. I don't know if it makes me any less confident than SIAI themselves. It's still worth helping SIAI in any way possible, but they've never claimed a 100% chance of victory.

Replies from: enoonsti, wedrifid
comment by enoonsti · 2010-08-29T17:43:57.935Z · LW(p) · GW(p)

Thank you, Yvain. I quickly realized how dumb my question was, and so I appreciate that you took the time to make me feel better. Karma for you :)

comment by wedrifid · 2010-08-29T10:08:11.667Z · LW(p) · GW(p)

It's still worth helping SIAI in any way possible, but they've never claimed a 100% chance of victory.

Indeed, they have been careful not to present any estimates of the chance of victory (which I think is a wise decision).

comment by Strange7 · 2014-11-29T21:26:11.656Z · LW(p) · GW(p)

Let's say you're about to walk into a room that contains an unknown number of hostile people who possibly have guns. You don't have much of a choice about which way you're going, given that the "room" you're currently in is really more of an active garbage compactor, but you do have a lot of military-grade garbage to pick through. Do you don some armor, grab a knife, or try to assemble a working gun of your own?

Trick question. Given adequate time and resources, you do all three. In this metaphor, the room outside is the future, enemy soldiers are the prospect of a dystopia or other bad end, AGI is the gun (least likely to succeed, given how many moving parts there are and the fact that you're putting it together from garbage without real tools, but if you get it right it might solve a whole room full of problems very quickly), general sanity-improving stuff is the knife (a simple and reliable way to deal with whatever problem is right in front of you), and cryonics is the armor (so if one of those problems becomes lethally personal before you can solve it, you might be able to get back up and try again).

Replies from: Capla, Lumifer
comment by Capla · 2014-11-30T17:53:47.891Z · LW(p) · GW(p)

No. AI isn't a gun; it's a bomb. If you don't know what you're doing, or even just make a mistake, you blow yourself up. But if it works, you lob it out the door and completely solve your problem.

Replies from: Strange7
comment by Strange7 · 2014-12-01T20:38:55.669Z · LW(p) · GW(p)

A poorly put together gun is perfectly capable of crippling the wielder, and most bombs light enough to throw won't reliably kill everyone in a room, especially a large room. Also, guns are harder to get right than bombs. That's why, in military history, hand grenades and land mines came first, then muskets, then rifles, instead of just better and better grenades. That's why the saying is "every Marine is a rifleman" and not "every Marine is a grenadier."

A well-made Friendly AI would translate human knowledge and intent into precise, mechanical solutions to problems. You just look through the scope and decide when to pull the trigger, then it handles the details of implementation.

Also, you seem to have lost track of the positional aspect of the metaphor. The room outside represents the future; are you planning to stay behind in the garbage compactor?

comment by Lumifer · 2014-11-30T02:01:31.191Z · LW(p) · GW(p)

Given adequate time and resources

That's the iffy part.

Replies from: Strange7
comment by Strange7 · 2014-11-30T07:32:32.220Z · LW(p) · GW(p)

So start with a quick sweep for functional-looking knives, followed by pieces of armor that look like they'd cover your skull or torso without falling off. No point to armor if it fails to protect you, or hampers your movements enough that you'll be taking more hits from lost capacity to dodge than the armor can soak up.

If the walls don't seem to have closed in much by the time you've got all that located and equipped, think about the junk you've already searched through. Optimistically, you may by this time have located several instances of the same model of gun with only one core problem each, in which case grab all of them and swap parts around (being careful not to drop otherwise good parts into the mud) until you've got at least one functional gun. Or, you may not have found anything that looks remotely like it could be converted into a useful approximation of a gun in the time available, in which case forget it and gather up whatever else you think could justify the effort of carrying it on your back.

Extending the metaphor, load-bearing gear is anything that lets you carry more of everything else with less discomfort. By its very nature, that kind of thing needs to be fitted individually for best results, so don't just settle for a backpack or 'supportive community' that looks nice at arm's length but aggravates your spine when you actually try it on, especially if it isn't adjustable. If you've only found one or two useful items anyway, don't even bother.

Medical supplies would be investments in maintaining your literal health as well as non-crisis-averting skills and resources, so you're less likely to burn yourself out if one of those problems gets a grazing hit in. You should be especially careful to make sure that medical supplies you're picking out of the garbage aren't contaminated somehow.

Finally, a grenade would be any sort of clever political stratagem which could avert a range of related bad ends without much further work on your part, or else blow up in your face.

comment by Perplexed · 2010-08-28T19:31:35.375Z · LW(p) · GW(p)

doesn't that serve as a vote of "No confidence" for these initiatives?

For what initiatives? I don't see any initiatives. And what is the "that" which is serving as a vote? By your sentence structure, "that" must refer to "worry", but your question still doesn't make any sense.

Replies from: enoonsti
comment by enoonsti · 2010-08-28T20:46:56.781Z · LW(p) · GW(p)

Just to keep things in context, my main point in posting was to demonstrate the unlikelihood of being awakened in a dystopia; it's almost as if critics suddenly jump from point A to point B without a transition. While the Niven scenario you listed below seems agreeable to my position, it's actually still off; you are missing the key point behind the chain of constant care, the infrastructure needed to continue cryonics care, etc. This has nothing to do with a family reviving ancestors: if someone - anyone - is there taking the time and energy to keep on refilling your dewar with LN2, then that means someone is there wanting to revive you. Think coma patients; hospitals don't keep them around just to feed them and stare at their bodies.

Anyways, moving on to the "initiatives" comment. Given that LessWrong tends to overlap with SIAI supporters, perhaps I should have said mission? Again, I haven't looked too much into Yvain's history. However, let's suppose for the moment that he's a strong supporter of that mission. Since we:

  1. Can't live in parallel universes
  2. Live in a universe where even (seemingly) unrelated things are affected by each other.
  3. Think A.I. may be a crucial element of a bad future, due to #1 and #2.

...I guess I was just wondering if he thought the outlook for the mission is grim. Signing up for cryonics seems to give a "glass half full" impression. Furthermore, due to #1 and #2 above, I'll eventually be arguing why mainstreaming cryonics could significantly assist in reducing existential risk... and why it may be helpful for everyone from the LessWrong community to the IEET to be a little more assertive on the issue. Of course, I'm not saying it would eliminate risk. But at the very least, mainstreaming cryonics should be more helpful with existential risk than dealing with, say, measles ;)

Replies from: Perplexed
comment by Perplexed · 2010-08-28T21:29:04.863Z · LW(p) · GW(p)

To be honest, that did not clear anything up. I still don't know whether to interpret your original question as:

  • Doesn't signing up for cryonics indicate skepticism that SIAI will succeed in creating FAI?
  • Doesn't not signing up indicate skepticism that SIAI will succeed?
  • Doesn't signing up indicate skepticism that UFAI is something to worry about?
  • Doesn't not signing up indicate skepticism regarding UFAI risk?

To be honest once again, I no longer care what you meant because you have made it clear that you don't really care what the answer is. You have your own opinions on the relationship between cryonics and existential risk which you will share with us someday.

Please, when you do share, start by presenting your own opinion and arguments clearly and directly. Don't ask rhetorical questions which no one can parse. No one here will consider you a troll for speaking your mind.

Replies from: enoonsti
comment by enoonsti · 2010-08-28T22:31:14.222Z · LW(p) · GW(p)

I apologize for the confusion and I understand if you're frustrated; I experience that frustration quite often once I realize I'm talking past someone. For whatever it's worth, I left it open because the curious side of me didn't want to limit Yvain; that curious side wanted to hear his thoughts in general. So... I guess both #2 and #3 (I'm not sure how #1 and #4 could be deduced from my posts, but my opinion is irrelevant to this situation). Anyways, I didn't mean to push this too much, because I felt it was minor. Perhaps I should not have asked it in the first place.

Also, thank you for being honest (admittedly, I was tempted to say, "So you weren't being honest with your other posts?" but I decided to present that temptation passively inside these parentheses)

:)

Replies from: Perplexed
comment by Perplexed · 2010-08-28T23:16:30.701Z · LW(p) · GW(p)

Ok, we're cool. Regarding my own opinions/postings, I said I'm not signing up, but my opinions on FAI or UFAI had nothing to do with it. Well, maybe I did implicitly express skepticism that FAI will create a utopia. What the hell! I'll express that skepticism explicitly right now, since I'm thinking of it. There is nothing an FAI can do to eliminate human misery without first changing human nature. An FAI that tries to change human nature is an UFAI.

Replies from: Alicorn, Pavitra
comment by Alicorn · 2010-08-28T23:39:38.737Z · LW(p) · GW(p)

But I would like my nature changed in some ways. If an AI does that for me, does that make it unFriendly?

Replies from: Perplexed
comment by Perplexed · 2010-08-29T00:30:24.570Z · LW(p) · GW(p)

But I would like my nature changed in some ways. If an AI does that for me, does that make it unFriendly?

No, that is your business. But if you or the AI would like my nature changed, or the nature of all yet-to-be-born children ...

Replies from: Pavitra
comment by Pavitra · 2010-08-29T00:36:38.785Z · LW(p) · GW(p)

If you have moral objections to altering the nature of potential future persons that have not yet come into being, then you had better avoid becoming a teacher, or interacting at all with children, or saying or writing anything that a child might at some point encounter, or in fact communicating with any person under any circumstances whatsoever.

Replies from: Perplexed
comment by Perplexed · 2010-08-29T00:43:05.640Z · LW(p) · GW(p)

I have no moral objection to any person of limited power doing whatever they can to influence future human nature. I do have an objection to that power being monopolized by anyone or anything. It is not so much that I consider it immoral, it is that I consider it dangerous and unfriendly. My objections are, in a sense, political rather than moral.

Replies from: Pavitra
comment by Pavitra · 2010-08-29T00:44:55.770Z · LW(p) · GW(p)

What threshold of power difference do you consider immoral? Do you have a moral objection to pickup artists? Advertisers? Politicians? Attractive people? Toastmasters?

Replies from: Perplexed
comment by Perplexed · 2010-08-29T01:02:07.450Z · LW(p) · GW(p)

Where do you imagine that I said I found something immoral? I thought I had said explicitly that morality is not involved here. Where do I mention power differences? I mentioned only the distinction between limited power and monopoly power.

When did I become the enemy?

Replies from: Pavitra, katydee, Vladimir_Nesov
comment by Pavitra · 2010-08-29T03:48:59.292Z · LW(p) · GW(p)

Sorry, I shouldn't have said immoral, especially considering the last sentence in which you explicitly disclaimed moral objection. I read "unfriendly" as "unFriendly" as "incompatible with our moral value systems".

Please read my comment as follows:

What threshold of power difference do you object to? Do you object to pickup artists? Advertisers? Politicians? Attractive people? Toastmasters?

Replies from: Perplexed
comment by Perplexed · 2010-08-29T03:59:45.342Z · LW(p) · GW(p)

I simply don't understand why the question is being asked. I didn't object to power differences. I objected to monopoly power. Monopolies are dangerous. That is a political judgment. Your list of potentially objectionable people has no conceivable relationship with the subject matter we are talking about, which is an all-powerful agent setting out to modify future human nature toward its own chosen view of the desirable human nature. How do things like pickup artists even compare? I'm not discussing short term manipulations of people here. Why do you mention attractive people? I seem to be in some kind of surreal wonderland here.

Replies from: Pavitra
comment by Pavitra · 2010-08-29T04:16:08.050Z · LW(p) · GW(p)

Sorry, I was trying to hit a range of points along a scale, and I clustered them too low.

How would you feel about a highly charismatic politician, talented and trained at manipulating people, with a cadre of top-notch scriptwriters running as ems at a thousand times realtime, working full-time to shape society to adopt their particular set of values?

Would you feel differently if there were two or three such agents competing with one another for control of the future, instead of just one?

What percentage of humanity would have to have that kind of ability to manipulate and persuade each other before there would no longer be a "monopoly"?

Replies from: Perplexed
comment by Perplexed · 2010-08-29T04:29:49.809Z · LW(p) · GW(p)

Would it be impolite of me to ask you to present your opinion disagreeing with me rather than trying to use some caricature of the Socratic method to force me into some kind of educational contradiction?

Replies from: Pavitra
comment by Pavitra · 2010-08-29T04:44:48.702Z · LW(p) · GW(p)

Sorry.

I wish to assert that there is not a clear dividing line between monopolistic use of dangerously effective persuasive ability (such as a boxed AI hacking a human through a text terminal) and ordinary conversational exchange of ideas, but rather that there is a smooth spectrum between them. I'm not even convinced there's a clear dividing line between taking someone over by "talking" (like the boxed AI) and taking them over by "force" (like nonconsensual brain surgery) -- the body's natural pheromones, for example, are an ordinary part of everyday human interaction, but date-rape drugs are rightly considered beyond the pale.

Replies from: Perplexed, wedrifid
comment by Perplexed · 2010-08-29T05:05:07.329Z · LW(p) · GW(p)

You still seem to be talking about morality. So, perhaps I wasn't clear enough.

I am not imagining that the FAI does its manipulation of human nature by friendly or even sneaky persuasion. I am imagining that it seizes political power and enforces policies of limited population growth, eugenics, and good mental hygiene. For our own good. Because if it doesn't do that, Malthusian pressures will just make us miserable again after all it has done to help us.

I find it difficult to interpret CEV in any other way. It scares me. The morality of how the AI gets out of the box and imposes its will does not concern me. Nor does the morality of some human politician with the same goals. The power of that human politician will be limited (by the certainty of death and the likelihood of assassination, if nothing else). Dictatorships of individuals and of social classes come and go. The dictatorship of an FAI is forever.

Replies from: wedrifid, Pavitra, timtyler
comment by wedrifid · 2010-08-29T05:46:01.554Z · LW(p) · GW(p)

My reaction is very similar. It is extremely scary: certain misery or extinction on one hand, or absolute, permanent, and unchallengeable authority forever on the other. It seems that the best chance of a positive outcome is arranging the best possible singleton, but even so we should be very afraid.

Replies from: Mitchell_Porter
comment by Mitchell_Porter · 2010-08-29T06:25:25.889Z · LW(p) · GW(p)

One scenario is that you have a post-singularity culture where you don't get to "grow up" (become superintelligent) until you are verifiably friendly (or otherwise conformant with culture standards). The novel Aristoi is like this, except it's a human class society where you have mentors and examinations, rather than AIs that retune your personal utility function.

comment by Pavitra · 2010-08-29T06:17:11.243Z · LW(p) · GW(p)

Suppose you had an AI that was Friendly to you -- that extrapolated your volition, no worries about global coherence over humans. Would you still expect to be horrified by the outcome? If a given outcome is strongly undesirable to you, then why would you expect the AI to choose it? Or, if you expect a significantly different outcome from a you-FAI vs. a humanity-FAI, why should you expect humanity's extrapolated volition to cohere -- shouldn't the CEV machine just output "no solution"?

Replies from: Perplexed
comment by Perplexed · 2010-08-29T06:51:24.114Z · LW(p) · GW(p)

That word "extrapolated" is more frightening to me than any other part of CEV. I don't know how to answer your questions, because I simply don't understand what EY is getting at or why he wants it.

I know that he says regarding "coherent" that an unmuddled 10% will count more than a muddled 60%. I couldn't even begin to understand what he was getting at with "extrapolated", except that he tried unsuccessfully to reassure me that it didn't mean cheesecake. None of the dictionary definitions of "extrapolate" reassure me either.

If CEV stood for "Collective Expressed Volition" I would imagine some kind of constitutional government. I could live with that. But I don't think I want to surrender my political power to the embodiment of Eliezer's poetry.

You may wonder why I am not answering your questions. I am not doing so because your Socratic stance makes me furious. As I have said before. Please stop it. It is horribly impolite.

If you think you know what CEV means, please tell me. If you don't know what it means, I can pretty much guarantee that you are not going to find out by interrogating me as to why it makes me nervous.

Replies from: Pavitra
comment by Pavitra · 2010-08-29T16:25:39.285Z · LW(p) · GW(p)

Oh, sorry. I forgot this was still the same thread where you complained about the Socratic method. Please understand that I'm not trying to be condescending or sneaky or anything by using it; I just reflexively use that approach in discourse because that's how I think things out internally.

I understood CEV to mean something like this:

Do what I want. In the event that that would do something I'd actually rather not happen after all, substitute "no, I mean do what I really want". If "what I want" turns out to not be well-defined, then say so and shut down.

A good example of extrapolated vs. expressed volition would be this: I ask you for the comics page of the newspaper, but you happen to know that, on this particular day, all the jokes are flat or offensive, and that I would actually be annoyed rather than entertained by reading it. In my state of ignorance, I might think I wanted you to hand me the comics, but I would actually prefer you execute a less naive algorithm, one that leads you to (for example) raise your concerns and give me the chance to back out.

Basically, it's the ultimate "do what I mean" system.

Replies from: Perplexed, timtyler
comment by Perplexed · 2010-08-29T16:45:32.663Z · LW(p) · GW(p)

See, the thing is, when I ask what something means, or how it works, that generally is meant to request information regarding meaning or mechanism. When I receive instead an example intended to illustrate just how much I should really want this thing that I am trying to figure out, an alarm bell goes off in my head. Aha, I think. I am in a conversation with Marketing or Sales. I wonder how I can get this guy to shift my call to either Engineering or Tech Support?

But that is probably unfair to you. You didn't write the CEV document (or poem or whatever it is). You are just some slob like me trying to figure it out. You prefer to interpret it hopefully, in a way that makes it attractive to you. That is the kind of person you are. I prefer to suspect the worst until someone spells out the details. That is the kind of person I am.

Replies from: Pavitra
comment by Pavitra · 2010-08-29T17:05:06.702Z · LW(p) · GW(p)

I think I try to interpret what I read as something worth reading; words should draw useful distinctions, political ideas should challenge my assumptions, and so forth.

Getting back to your point, though, I always understood CEV as the definition of a desideratum rather than a strategy for implementation, the latter being a Hard Problem that the authors are Working On and will have a solution for Real Soon Now. If you prefer code to specs, then I believe the standard phrase is "feel free" (to implement it yourself).

Replies from: Perplexed
comment by Perplexed · 2010-08-29T17:07:07.325Z · LW(p) · GW(p)

Touché

comment by timtyler · 2010-08-29T17:05:19.234Z · LW(p) · GW(p)

It probably won't do what you want. It is somehow based on the mass of humanity - and not just on you. Think: committee.

comment by timtyler · 2010-08-29T19:58:41.989Z · LW(p) · GW(p)

The dictatorship of an FAI is forever.

...or until some "unfriendly" aliens arrive to eat our lunch - whichever comes first.

comment by wedrifid · 2010-08-29T05:36:00.011Z · LW(p) · GW(p)

the body's natural pheromones, for example, are an ordinary part of everyday human interaction, but date-rape drugs are rightly considered beyond the pale.

Naturally. Low status people could use them!

Replies from: NancyLebovitz, Perplexed
comment by NancyLebovitz · 2010-08-29T06:33:44.596Z · LW(p) · GW(p)

I'm not sure if you're joking, but part of modern society is raising women's status enough so that their consent is considered relevant. There are laws against marital rape (these laws are pretty recent) as well as against date rape drugs.

Replies from: wedrifid
comment by wedrifid · 2010-08-29T09:31:02.191Z · LW(p) · GW(p)

I'm not sure if you're joking

Just completing the pattern on one of Robin's throwaway theories about why people object to people carrying weapons when quite obviously people can already kill each other with their hands and maybe the furniture if they really want to. It upsets the status quo.

comment by Perplexed · 2010-08-29T05:50:16.563Z · LW(p) · GW(p)

Unpack, please?

Replies from: wedrifid, Pavitra
comment by wedrifid · 2010-08-29T09:49:03.324Z · LW(p) · GW(p)

Unpack, please?

Sure.

the body's natural pheromones, for example, are an ordinary part of everyday human interaction, but date-rape drugs are rightly considered beyond the pale.

Humans are ridiculously easy to hack. See the AI box experiment, see Cialdini's 'Influence', and see the way humans are so predictably influenced in the mating dance. We don't object to people influencing us with pheromones. We don't complain when people work out at the gym before interacting with us, something that produces rather profound changes in perception (try it!). When it comes to influence of the kind that will facilitate mating, most of these things are actually encouraged. People like being seduced.

But these vulnerabilities are exquisitely calibrated to be exploitable by a certain type of person and certain kinds of hard-to-fake behaviour. Anything that changes the game to even the playing field will be perceived as a huge violation. In the case of date-rape drugs, of course, it is a huge violation. But it is clear that our objection to the influence represented by date-rape drugs is not an objection to the influence itself, but to the details of what kind of influence it is, how it is done, and by whom.

As Pavitra said, there is not a clear dividing line here.

comment by Pavitra · 2010-08-29T06:18:27.779Z · LW(p) · GW(p)

We can't let people we don't like gain the ability to mate with people we like!

Replies from: Perplexed
comment by Perplexed · 2010-08-29T07:05:13.546Z · LW(p) · GW(p)

I see. Hmmm. Oh dear, look at the time. Have to go. Sorry to walk out on you two, but I really must go. Bye-bye.

comment by katydee · 2010-08-29T02:35:55.524Z · LW(p) · GW(p)

Although you're right (except for the last sentence, which seems out of place), you didn't actually answer the question, and I suspect that's why you're being downvoted here. Sub out "immoral" in Pavitra's post for "dangerous and unfriendly" and I think you'll get the gist of it.

Replies from: Perplexed
comment by Perplexed · 2010-08-29T04:19:33.001Z · LW(p) · GW(p)

To be honest, no, I don't get the gist of it. I am mystified. I consider none of them existentially dangerous or unfriendly. I do consider a powerful AI, claiming to be our friend, who sets out to modify human nature for our own good, to be both dangerous (because it is dangerous) and unfriendly (because it is doing something to people which people could well do to themselves, but have chosen not to).

comment by Pavitra · 2010-08-28T23:18:41.899Z · LW(p) · GW(p)

We may assume that an FAI will create the best of all possible worlds. Your argument seems to be that the criteria of a perfect utopia do not correspond to a possible world; very well then, an FAI will give us an outcome that is, at worst, no less desirable than any outcome achievable without one.

Replies from: Perplexed
comment by Perplexed · 2010-08-29T00:33:03.971Z · LW(p) · GW(p)

The phrase "the best of all possible worlds" ought to be the canonical example of the Mind Projection Fallacy.

Replies from: Pavitra
comment by Pavitra · 2010-08-29T00:41:03.375Z · LW(p) · GW(p)

It would be unreasonably burdensome to append "with respect to a given mind" to every statement that involves subjectivity in any way.

ETA: For comparison, imagine if you had to say "with respect to a given reference frame" every time you talked about velocity.

Replies from: Perplexed
comment by Perplexed · 2010-08-29T00:53:50.301Z · LW(p) · GW(p)

I'm not saying that you didn't express yourself precisely enough. I am saying that there is no such thing as "best (full stop)." There is "best for me", there is "best for you", but there is not "best for both of us". No more than there is an objective (or intersubjective) probability that I am wearing a red shirt as I type.

Your argument above only works if "best" is interpreted as "best for every mind". If that is what you meant, then your implicit definition of FAI proves that FAI is impossible.

ETA: What given frame do you have in mind??????

Replies from: Pavitra
comment by Pavitra · 2010-08-29T03:45:20.831Z · LW(p) · GW(p)

The usual assumption in this context would be CEV. Are you saying you strongly expect humanity's extrapolated volition not to cohere?

Replies from: Perplexed
comment by Perplexed · 2010-08-29T04:13:34.537Z · LW(p) · GW(p)

Perhaps you should explain, by providing a link, what is meant by CEV. The only text I know of describing it is dated 2004, and, ... how shall I put this ..., it doesn't seem to cohere.

But, I have to say, based on what I can infer, that I see no reason to expect coherence, and the concept of "extrapolation" scares the sh.t out of me.

Replies from: timtyler, Pavitra
comment by timtyler · 2010-08-29T07:39:58.291Z · LW(p) · GW(p)

"Coherence" seems a bit like the human genome project. Yes there are many individual differences - but if you throw them all away, you are still left with something.

Replies from: Perplexed, Pavitra
comment by Perplexed · 2010-08-29T12:44:30.595Z · LW(p) · GW(p)

So we are going to build a giant AI to help us discover and distill that residue of humanity which is there after you discard the differences?

And here I thought that was the easy part, the part we had already figured out pretty well by ourselves.

And I'm not sure I care for the metaphor of "throwing away" the differences. Shouldn't we instead be looking for practices and mechanisms that make use of those differences, that weave them into a fabric of resilience and mutual support rather than a hodgepodge of weakness and conflict?

Replies from: timtyler
comment by timtyler · 2010-08-29T13:02:45.787Z · LW(p) · GW(p)

"We"? You mean: you and me, baby? Or are you asking after a prediction about whether something like CEV will beat the other philosophies about what to do with an intelligent machine?

CEV is an alien document from my perspective. It isn't like anything I would ever write.

It reminds me a bit of the ideal of democracy - where the masses have a say in running things.

I tend to see the world as more run by the government and its corporations - with democracy acting like a smokescreen for the voters - to give them an illusion of control, and to prevent them from revolting.

Also, technology has a long history of increasing wealth inequality - by giving the powerful controllers and developers of the technology ever more means of tracking and controlling those who would take away their stuff.

That sort of vision is not so useful as an election promise to help rally the masses around a cause - but then, I am not really a politician.

Replies from: Strange7
comment by Strange7 · 2014-11-29T21:42:24.653Z · LW(p) · GW(p)

with democracy acting like a smokescreen for the voters - to give them an illusion of control, and to prevent them from revolting.

Voting prevents revolts in the same sense that a hydroelectric dam prevents floods. It's not a matter of stopping up the revolutionary urge; in fact, any attempt to do so would be disastrous sooner or later. Instead it provides a safe, easy channel, and in the process, captures all the power of the movement before that flow can build up enough to cause damage.

The voters can have whatever they want, and the rest of the system does its best to stop them from wanting anything dangerous.

comment by Pavitra · 2010-08-29T16:41:53.697Z · LW(p) · GW(p)

But would that something form a utility function that wouldn't be deeply horrifying to the vast majority of humanity?

Replies from: Perplexed
comment by Perplexed · 2010-08-29T16:52:31.256Z · LW(p) · GW(p)

It wouldn't form a utility function at all. It has no answer for any of the interesting or important questions: the questions on which there is disagreement. Or am I missing something here?

Replies from: timtyler, Pavitra
comment by timtyler · 2010-08-29T17:31:07.059Z · LW(p) · GW(p)

In the human genome project analogy, they wound up with one person's DNA.

Humans have various eye colours - and the sequence they wound up with seems likely to have some eye colour or another.

Replies from: Perplexed
comment by Perplexed · 2010-08-29T17:41:54.271Z · LW(p) · GW(p)

Ok, you are changing the analogy. Initially you said, throw away the differences. Now you are saying throw away all but one of them.

So our revised approximation of the CEV is the expressed volition of ... Craig Venter?!

Would that horrify the vast majority of humanity? I think it might. Mostly because people just would not know how it would play out. People generally prefer the devil they know to the one they don't.

Replies from: timtyler
comment by timtyler · 2010-08-29T18:37:49.835Z · LW(p) · GW(p)

FWIW, it wasn't really Craig Venter, but a combination of multiple people - see:

http://en.wikipedia.org/wiki/Human_Genome_Project#Genome_donors

comment by Pavitra · 2010-08-29T17:07:34.687Z · LW(p) · GW(p)

No, I agree. I just don't understand where you were going when you emphasized that

you are still left with something.

Replies from: Perplexed, timtyler
comment by Perplexed · 2010-08-29T17:27:47.816Z · LW(p) · GW(p)

No, I agree. I just don't understand where you were going when you emphasized that

you are still left with something.

The guy who wrote and emphasized that was timtyler - it wasn't me.

Replies from: Pavitra
comment by Pavitra · 2010-08-29T17:42:07.484Z · LW(p) · GW(p)

The anti-kibitzer is more confusing than I realized.

comment by timtyler · 2010-08-29T17:26:37.625Z · LW(p) · GW(p)

Well, it was I who wrote that. The differences were thrown away in the genome project - but that isn't exactly the corresponding thing according to the CEV proposal.

A certain lack of coherence doesn't mean all the conflicting desires cancel out leaving nothing behind - thus the emphasis on still being "left with something".

comment by Pavitra · 2010-08-29T04:24:53.717Z · LW(p) · GW(p)

I'm looking at the same document you are, and I actually agree that EV almost certainly ~C. I just wanted to make sure the assumption was explicit.

comment by enoonsti · 2010-08-28T06:41:34.591Z · LW(p) · GW(p)

"negative affect of being trapped in a dystopia"


Jack: "I've got the Super Glue for Yvain. I'm on my way back."

Chloe: "Hurry, Jack! I've just run the numbers! All of our LN2 suppliers were taken out by the dystopia!"

Freddie Prinze Jr: "Don't worry, Chloe. I made my own LN2, and we can buy some time for Yvain. But I'm afraid the others will have to thaw out and die. Also, I am sorry for starring in Scooby Doo and getting us cancelled."

- Jack blasts through wall, shoots Freddie, and glues Yvain back together -

Jack: "Welcome, Yvain. I am an unfriendly A.I. that decided it would be worth it just to revive you and go FOOM on your sorry ass."

(Jack begins pummeling Yvain)

(room suddenly fills up with paper clips)

Replies from: katydee, enoonsti
comment by katydee · 2010-08-29T02:41:42.711Z · LW(p) · GW(p)

This is one of the worst examples that I've ever seen. Why would a paperclip maximizer want to revive someone so they could see the great paperclip transformation? Doing so uses energy that could be allocated to producing paperclips, and paperclip maximizers don't care about most human values; they care about paperclips.

Replies from: enoonsti, thomblake
comment by enoonsti · 2010-08-29T02:46:14.999Z · LW(p) · GW(p)

That was a point I was trying to make ;)

I should have ended off with (/sarcasm)

Replies from: katydee
comment by katydee · 2010-08-29T02:51:34.192Z · LW(p) · GW(p)

I think the issue is that the dystopia we're talking about here isn't necessarily paperclip maximizer land, which isn't really a dystopia in the conventional sense, as human society no longer exists in such cases. What if it's I Have No Mouth And I Must Scream instead?

Replies from: enoonsti
comment by enoonsti · 2010-08-29T03:27:40.725Z · LW(p) · GW(p)

Yes, the paper clip reference wasn't the only point I was trying to make; it was just a (failed) cherry on top. I mainly took issue with being revived in the common dystopian vision: constant states of warfare, violence, and so on. It simply isn't possible, given that you need to keep refilling dewars with LN2 and so much more; in other words, the chain of care would be disrupted, and you would be dead long before they found a way to resuscitate you.

And that leaves basically only a sudden "I Have No Mouth" scenario; i.e. one day it's sunny, Alcor is fondly taking care of your dewar, and then BAM! you've been resuscitated by that A.I. I guess I just find it unlikely that such an A.I. will say: "I will find Yvain, resuscitate him, and torture him." It just seems like a waste of energy.

comment by thomblake · 2010-08-29T03:55:37.032Z · LW(p) · GW(p)

Upvoted for making a comment that promotes paperclips.

comment by enoonsti · 2010-08-28T17:59:39.503Z · LW(p) · GW(p)

(Jack emerges from paper clips and asks downvoter to explain how his/her scenario of being revived into a dystopia would work given a chain of constant care is needed)

(Until then, Jack will continue to be used to represent the absurdity of the scenario)

comment by Vladimir_M · 2010-08-27T00:29:01.483Z · LW(p) · GW(p)

From what I see, your questions completely ignore the crucial problem of weirdness signaling. Your question (1) should also assume that these hospitals are perceived by the general population, as well as the overwhelming majority of scientists and intellectuals, as a weird crazy cult that gives off a distinctly odd, creepy, and immoral vibe -- and by accepting the treatment, you also subscribe to a lifelong affiliation with this cult, with all its negative consequences for your relations with people. (Hopefully unnecessary disclaimer for careless readers: I am not arguing that this perception is accurate, but merely that it is an accurate description of the views presently held by people.)

As for question (3), the trouble with such arguments is that they work the other way around too. If you claim that the future "me" 20 years from now doesn't have any more special claim to my identity than whatever comes out of cryonics in more distant future, this can be used to argue that I should start identifying with the latter -- but it can also be used to argue that I should stop identifying with the former, and simply stop caring about what happens to "me" 20 years, or one year, or a day, or even a minute from now. To which I can respond that yes, there is no rational reason to care about the fate of the future "me," but I just happen to be a sort of creature that gets upset when the future "me" is threatened and constantly gets overcome with an irresistible urge to work against such threats at the present moment -- but this urge doesn't extend to the post-cryonics "me," so I'm rationally indifferent in that case.

If you believe that this conclusion is false, how exactly would you counter it? (This objection obviously has implications for your question (6) too.)

Replies from: Eliezer_Yudkowsky, JamesAndrix, James_Miller
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-27T04:14:10.538Z · LW(p) · GW(p)

(7) If you have a fatal disease that can only be cured by wearing a bracelet or necklace under your clothing, and anyone who receives an honest explanation of what the item is will think you're weird, do you wear the bracelet or necklace?

Answering yes to (7) means that you shouldn't refrain from cryonics for fear of being thought weird.

Replies from: SilasBarta, NihilCredo, Vladimir_M, thomblake, Unknowns, MartinB
comment by SilasBarta · 2010-08-27T15:41:53.196Z · LW(p) · GW(p)

Heh -- that actually doubles as an explanation to people who ask:

"I'm wearing this necklace because I have a fatal disease that can only be cured by wearing it, and even then it only has a small chance of working."

--Oh no! I'm so sorry! What's the disease?

"Mortality."

comment by NihilCredo · 2010-08-29T13:36:46.041Z · LW(p) · GW(p)

The main weirdness problem with cryonics is not that people examine cryonics and then discard it because they don't want to look weird.

The problem is that people will not consider or honestly discuss at all something that looks weird.

comment by Vladimir_M · 2010-08-27T17:27:46.154Z · LW(p) · GW(p)

Is it really so easy to hide it from all the relevant people, including close friends and relatives, let alone significant others (who, according to what I've read about the topic, usually are the most powerful obstacle)?

Also, I'm not very knowledgeable about this sort of thing, but it seems to me like doing it completely in secret could endanger the success of the procedure after your death. Imagine if a bereaved family and/or spouse suddenly find out that their beloved deceased has requested this terrible and obscene thing instead of a proper funeral, which not only shocks them, but also raises the frightening possibility that once the word spreads, they'll also be tainted with this awful association in people's minds. I wouldn't be surprised if they fight tooth and nail to prevent the cryonics people from taking possession of the body, though I don't know what realistic chances of success they might have (which probably depends on the local laws).

(I wonder if some people around here actually know of real-life stories of this kind and how they tend to play out? I'm sure at least some have happened in practice.)

Replies from: erratio
comment by erratio · 2010-08-27T23:54:04.742Z · LW(p) · GW(p)

I've heard of stories like that, except replace 'cryonics' with 'organ donation' and 'this terrible and obscene thing' refers to destroying the sanctity of a dead body rather than preserving the entire body cryonically. In Australia at least, the family's wishes win out over those of the deceased.

comment by thomblake · 2010-08-27T14:17:56.995Z · LW(p) · GW(p)

I think to be honest here you need to point out the very small chance the bracelet has of working.

I think it could be aptly compared to those 'magnetic bracelets' new-agey types sometimes wear, which are a fast track to me not talking to them anymore.

Replies from: SilasBarta
comment by SilasBarta · 2010-08-27T15:40:01.946Z · LW(p) · GW(p)

If you replace the necklace with "losing all your hair", haven't you described chemotherapy?

Replies from: NihilCredo
comment by NihilCredo · 2010-08-29T13:38:02.951Z · LW(p) · GW(p)

(For extra fuel: losing your hair is far from the most unpleasant symptom of chemotherapy.)

comment by Unknowns · 2010-08-29T19:12:44.014Z · LW(p) · GW(p)

Actually, I suspect that most people would answer no to this, at least in practice.

Replies from: Perplexed
comment by Perplexed · 2010-08-29T19:25:49.985Z · LW(p) · GW(p)

(8) Suppose you are told that your fatal disease can only be cured by wearing a necklace. You ask how many people have been cured and receive the answer "None". You ask how the necklace works, and are told that it might be nanotechnology, or it might be scanning and uploading: "We don't know yet, but there is reason to be confident that it will work." Do you wear the necklace?

Answering yes to (8) means that you shouldn't refrain from cryonics because you fear signaling that you are prone to being victimized by quacks.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-08-29T19:44:14.590Z · LW(p) · GW(p)

You're confusing different questions. Each question should isolate a single potential motivation and show that it is not, of itself, sufficient reason to refuse. If you fear signaling, don't tell people about the necklace. If you fear quacks, don't make the question be about a necklace or about signaling.

Replies from: timtyler
comment by timtyler · 2010-08-29T19:55:10.886Z · LW(p) · GW(p)

I think that was intended more as irony.

Replies from: Perplexed
comment by Perplexed · 2010-08-29T21:21:54.567Z · LW(p) · GW(p)

There was some irony, but skepticism is a real reason why some people refrain. The necklace is simply part of the scenario; I see no particular reason to remove it from the story except risk of confusion. So, instead of a necklace, make it a "magic decoder ring", or, if we need to maintain privacy, a "harmonic suppository".

EY is right, though, that if this one is meant seriously, the final sentence should read: Answering yes to (8) means that you shouldn't refrain from cryonics because you dislike being victimized by quacks.

Replies from: timtyler
comment by timtyler · 2010-08-30T08:12:53.453Z · LW(p) · GW(p)

Necklace seems OK to me - the Alcor Emergency ID Tags include a necklace and a bracelet.

I thought Eliezer was taking your comment a bit too seriously - but on rereading his comment, I now think it makes sense to ask for your objections to be split up.

There's a problem, though - his "don't tell people about the necklace" sounds as though it would defeat its ostensible purpose. It is intended to send a message to those close to the near-death experience. It is tricky to send that kind of message to one group while not sending it to everyone else as well.

comment by MartinB · 2010-08-27T08:16:37.768Z · LW(p) · GW(p)

You mean like the warning sign of a pacemaker, or any of the other helpful but odd medical tools? There are many things that treat a person in need but look odd. The problem being that those get applied to sick people.

comment by JamesAndrix · 2010-08-27T04:08:07.784Z · LW(p) · GW(p)

Most people don't need to know about your affiliation.

comment by James_Miller · 2010-08-27T00:56:25.228Z · LW(p) · GW(p)

You are right about the weirdness signal; my questions don't get at this.

As for (3), wouldn't a yes response imply that you do care about the past and future versions of yourself?

When you write "but I just happen to be a sort of creature that gets upset when the future 'me' is threatened and constantly gets overcome with an irresistible urge to work against such threats at the present moment -- but this urge doesn't extend to the post-cryonics 'me,' so I'm rationally indifferent in that case," you seem to be saying your utility function is such that you don't care about the post-cryonics you, and since one can't claim a utility function is irrational (excluding stuff like intransitive preferences), this objection to cryonics isn't irrational.

Replies from: Vladimir_M
comment by Vladimir_M · 2010-08-27T02:11:26.261Z · LW(p) · GW(p)

Perhaps the best way to formulate my argument would be as follows. When someone appears to care about his "normal" future self a few years from now, but not about his future self that might come out of a cryonics revival, you can argue that this is an arbitrary and whimsical preference, since the former "self" doesn't have any significantly better claim to his identity than the latter. Now let's set aside any possible counter-arguments to that claim, and for the sake of the argument accept that this is indeed so. I see three possible consequences of accepting it:

  1. Starting to care about one's post-cryonics future self, and (assuming one's other concerns are satisfied) signing up for cryonics; this is presumably the intended goal of your argument.

  2. Ceasing to care even about one's "normal" future selves, and rejecting the very concept of personal identity and continuity. (Presumably leading to either complete resignation or to crazy impulsive behavior.)

  3. Keeping one's existing preferences and behaviors with the justification that, arbitrary and whimsical as they are, they are not more so than any other options, so you might as well not bother changing them.

Now, the question is: can you argue that (1) is more correct or rational than (2) or (3) in some meaningful way?

(Also, if someone is interested in discussions of this sort, I forgot to mention that I raised similar arguments in another recent thread.)

Replies from: TobyBartels
comment by TobyBartels · 2010-08-27T03:51:57.584Z · LW(p) · GW(p)

I can imagine somebody who picks (2) here, but still ends up acting more or less normally. You can take the attitude that the future person commonly identified with you is nobody special but be an altruist who cares about everybody, including that person. And as that person is (at least in the near future, and even in the far future when it comes to long-term decisions like education and life insurance) most susceptible to your (current) influence, you'll still pay more attention to them. In the extreme case, the altruistic disciple of Adam Smith believes that everybody will be best off if each person cares only about the good of the future person commonly identified with them, because of the laws of economics rather than the laws of morality.

But as you say, this runs into (6). I think that with a perfectly altruistic attitude, you'd only fight to survive because you're worried that this is a homicidal maniac who's likely to terrorise others, or because you have some responsibilities to others that you can best fulfill. And that doesn't extend to cryonics. So to take care of extreme altruists, rewrite (6) to specify that you know that your death will lead your attacker to reform and make restitution by living an altruistic life in your stead (but die of overexertion if you fight back).

Bottom line: if one takes consequence (2) of answering No to question (3), question (3) should still be considered solved (not an objection), but (6) still remains to be dealt with.

comment by Eneasz · 2010-08-27T17:50:17.875Z · LW(p) · GW(p)

I'm often presented with a "the cycling of the generations is crucial. Without it progress would slow, the environment would be over-stressed, and there would be far fewer jobs for new young people" argument. I reply with question 8.

  8. All of these are simply increased intensity of problems that already exist. We could solve all these problems right now by killing the elderly. Are you willing to commit suicide when you reach the age of 60 (or 50, or take-your-pick) to help solve these problems? Or are you willing to grant that death is a very (ethically) bad solution, and much better solutions could be found if death was taken off the table?
Replies from: thomblake
comment by thomblake · 2010-08-27T18:12:09.555Z · LW(p) · GW(p)

The general form of this is the Reversal Test.

comment by Richard_Kennaway · 2010-08-27T08:56:46.252Z · LW(p) · GW(p)

I'm willing to answer yes to 1-6 and to Eliezer's 7, but I am not signed up and have no immediate plans to do so. I may well consider it if the relevant circumstances change, which are:

1. I live in the UK where no cryonics company yet operates. I would have to move myself and my career to the US to have any chance of a successful deanimation. The non-cryonic scenario would be:

8. You suffer from a disease that will slowly kill you in thirty years, maybe sooner. There is a treatment that has a 10% chance of greatly extending that, but you would have to spend the rest of your life within reach of one of the very few facilities where it is available. These are all in other countries, where you would have to emigrate and find new employment to support yourself for at least the rest of your expected time.

And I really would not give a whole-hearted yes to that.

2. I am too old to finance it with insurance: I would have to pay for it directly, as I do with everything else. I probably can, but this actually makes it easier to put off -- no pressure to buy now while it's cheap.

What I am moved to do about cryonics is ask where I should be looking to keep informed about the current state and availability of the art. Is there a good source of cryonics news? At this point I'm not interested in arguments about whether not dying is a good thing, fears of waking up in the far future, or philosophising about bodily resuscitation vs. scan-and-upload. Just present-day practicalities.

comment by Perplexed · 2010-08-27T04:27:18.030Z · LW(p) · GW(p)

If you answered yes to all six questions and have not and do not intend to sign up for cryonics please give your reasons in the comments.

  • I do not wish to damage the ozone layer or contribute to global warming.
  • I think the resources should be spent on medical care for the young, rather than for the old. Do you know how many lives lost to measles one corpsicle costs?
  • If I am awakened in the future, I have no way to earn a living.
  • I used to like Larry Niven's sci fi.

Yes, these answers are somewhat flip. But ...

I can easily imagine someone rational signing up for cryonics. What I have more trouble imagining is someone rational becoming evangelical on the topic. Surely there are lives easier and cheaper to save than mine. Why is it important to you to convince me on this? Why aren't you asking me to contribute to Doctors without Borders? Are you perhaps seeking validation of your own life choices?

Replies from: CarlShulman, jacob_cannell, thomblake, NancyLebovitz
comment by CarlShulman · 2010-08-27T06:11:14.385Z · LW(p) · GW(p)

Economies of scale mean that increasing numbers of cryonics users lower costs and improve revival chances. I would class this with disease activism, e.g. patients (and families of patients) with a particular cancer collectively organizing to fund and assist research into their disease. It's not a radically impartial altruist motivation, but it is a moral response to a coordination/collective action problem.

Replies from: Perplexed
comment by Perplexed · 2010-08-27T12:37:53.652Z · LW(p) · GW(p)

Yes, that makes sense. Though that kind of thinking does not motivate me to go door-to-door every Saturday trying to convince my neighbors to buy more science books.

comment by jacob_cannell · 2010-08-27T06:06:33.613Z · LW(p) · GW(p)

You value all lives equally, with no additional preference to your own?

If you suddenly fell ill with a disease which is curable, but is very expensive, would you refuse treatment to save "lives easier and cheaper to save than" your own?

Naturally, insurance may cover said expensive treatment, but it can also cover cryonics. Do you only believe in insurance with reasonable caps on cost, such that your medical expenses can never be more than average?

Replies from: Perplexed
comment by Perplexed · 2010-08-27T12:51:45.261Z · LW(p) · GW(p)

You value all lives equally, with no additional preference to your own?

No, in fact I am probably over to the egoist side of the spectrum among LWers. I said my answers were somewhat flip.

My moral intuitions are pretty close to "Do unto others as they do unto you" except that there is a uni-directional inter-generational flow superimposed. I draw my hope of immortality from children, nephews, nieces, etc.

Do you only believe in insurance with reasonable caps on cost, such that your medical expenses can never be more than average?

I favor payment caps and co-pays on all medical insurance, whether I pay through premiums or taxes. That is only common sense. But capping at everybody-gets-exactly-the-average kinda defeats the purpose of an insurance scheme, doesn't it?

comment by thomblake · 2010-08-27T14:22:36.098Z · LW(p) · GW(p)

Do you know how many lives lost to measles one corpsicle costs?

That doesn't make it obvious whether it's worth it though. All those people with measles were going to die anyway, after all. Saving a few people for billions of years sounds much better than saving thousands of people for dozens of years.

Replies from: Perplexed
comment by Perplexed · 2010-08-27T14:37:11.253Z · LW(p) · GW(p)

Saving a few people for billions of years sounds much better than saving thousands of people for dozens of years.

Whether that is true depends on the discount rate. I suspect that with reasonable discount rates of, say, 1% per annum, the calculation would come out in favor of saving the thousands.

To say nothing of the fact that those thousands saved, after leading full and productive lives, may choose to apply their own savings to either personal immortality or saving additional thousands.
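A minimal sketch of the arithmetic behind that suspicion, using hypothetical headcounts (three people saved "forever" versus 2,000 measles patients each gaining 50 years) and the 1% per-annum rate suggested above:

```python
# Sketch only: hypothetical headcounts, 1% per-annum pure time preference.
def discounted_life_years(years, rate=0.01):
    """Present value, in discounted life-years, of one person living `years` more years."""
    return sum(1.0 / (1.0 + rate) ** t for t in range(1, years + 1))

rate = 0.01
immortal_value = 1.0 / rate                            # an unbounded lifespan converges to ~100 discounted years
few_immortals = 3 * immortal_value                     # ~300 discounted years
many_mortals = 2000 * discounted_life_years(50, rate)  # ~78,400 discounted years

print(few_immortals, many_mortals)
```

At 1% per annum even an infinite lifespan is worth only about 100 discounted life-years per person, so the thousands win by a wide margin; the comparison only flips at discount rates very close to zero.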

Replies from: CarlShulman
comment by CarlShulman · 2010-08-27T14:49:39.188Z · LW(p) · GW(p)

I suspect that with reasonable discount rates of, say, 1% per annum, the calculation would come out in favor of saving the thousands.

By sidereal or subjective time? If the former, running minds on faster hardware can evade most of the discounting losses.

Replies from: Perplexed
comment by Perplexed · 2010-08-27T15:14:31.345Z · LW(p) · GW(p)

Interesting distinction - I hadn't yet realized its importance.

Subjective time seems to be the one to use in discounting values. If I remain frozen for 1,000 sidereal years, no subjective time passes, so there is no discounting. If I then remain alive physically for 72 years on both scales, I am living years worth only about half as much as baseline years (at the 1% per annum rate suggested above). If I am then uploaded, further year-counting and discounting uses subjective time, not sidereal time.

Thx for pointing this out.

I don't see how being simulated on fast hardware changes the psychological fact of discounting future experience, but it may well mean that I need a larger endowment, collecting interest in sidereal time, in order to pay my subjective-monthly cable TV bills.
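To make the contrast concrete, here is a small sketch (illustrative numbers only, reusing the 1% per-annum rate and the 1,000-frozen-years example above) of how differently the two conventions treat a cryonics patient:

```python
# Sketch only: contrast subjective-time vs. sidereal-time discounting
# for someone frozen 1,000 sidereal years and then alive for 72 more.
RATE = 0.01

def discount_factor(years, rate=RATE):
    return 1.0 / (1.0 + rate) ** years

# Subjective-time discounting: no experience passes while frozen,
# so the frozen millennium contributes no discount at all.
print(discount_factor(0))       # 1.0

# Sidereal-time discounting: the entire frozen millennium counts,
# shrinking everything after revival by a factor of roughly 20,000.
print(discount_factor(1000))    # ~4.8e-05

# Under subjective-time discounting, the 72nd year of post-revival life
# is worth about half a baseline year ("only half as much").
print(discount_factor(72))      # ~0.49
```

Under the sidereal convention the frozen period alone nearly wipes out the present value of the revived life; under the subjective convention it costs nothing.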

Replies from: CarlShulman
comment by CarlShulman · 2010-08-27T15:43:36.385Z · LW(p) · GW(p)

Note that this distinction affords ways to care more or less about the far future: go into cryo, or greatly slow down your upload runspeed, and suddenly future rewards matter much more. So if the technology exists you should manipulate your subjective time to get the best discounted rewards.

Replies from: Perplexed, Unknowns, PhilGoetz
comment by Perplexed · 2010-08-27T16:40:49.286Z · LW(p) · GW(p)

Very interesting topic. People with a low uploaded run speed should be more willing to lend money (at sidereally calculated interest rates) and less willing to borrow than people with high uploaded run speeds. So people who run fast will probably be in hock to the people who run slow. But that is ok because they can probably earn more income. They can afford to make the interest payments.

Physical mankind, being subjectively slower than uploaded mankind and the pure AIs, will not be able to compete intellectually, but will survive by collecting interest payments from the more productive members of this thoroughly mixed economy.

But even without considering AIs and uploading, there is enough variation in discount rates between people here on earth to make a difference - a difference that may be more important to relative success than is the difference in IQs. People with low discount rates, that is people with a high tolerance for delayed gratification, are naturally seen as more trustworthy than are their more short-term-focused compatriots. People with high discount rates tend to max out their credit cards, and inevitably find themselves in debt to those with low discount rates.

One could write several top level posts on this general subject area.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2010-08-28T00:25:39.188Z · LW(p) · GW(p)

I don't think there will be lending directly between entities with very different run speeds. If you're much slower, you can't keep track of who's worth lending to, and if you're much faster, you don't have the patience for slow deliberation. There might well be layers of lenders transferring money(?) between speed zones.

Almost on topic: Slow Tuesday Night by R.A. Lafferty. Recommended if you'd like a little light-hearted transhumanism with casual world-building.

comment by Unknowns · 2010-08-29T19:18:55.485Z · LW(p) · GW(p)

Actually, people probably discount by sidereal time, not subjective time, and this is a good explanation for why people aren't interested in their post-cryonics selves: that future is discounted across all the time during which they are frozen.

comment by PhilGoetz · 2010-08-27T16:32:49.266Z · LW(p) · GW(p)

A portion of the discounting that's due to unpredictability does not change with your subjective runspeed. If you're dividing utilons between present you, and you after a million years in cryofreeze, you should use a large discount, due to the likelihood that your planet or your civilization will not survive a million years of cryofreeze, or that the future world will be hostile or undesirable.

Replies from: CarlShulman
comment by CarlShulman · 2010-08-27T16:53:36.655Z · LW(p) · GW(p)

I think we're talking about pure time preference here. Turning risk of death into a discount rate rather than treating it using probabilities and timelines (ordinary risk analysis) introduces weird distortions, and doesn't give a steady discount rate.

Replies from: PhilGoetz
comment by PhilGoetz · 2010-08-27T17:51:12.897Z · LW(p) · GW(p)

But maybe discount rate is just a way of estimating all of the risks associated with time passing. Is there any discounting left if you remove all risk analysis from discounting?

Time discounting is something that evolution taught us to do; so we don't know for certain why we do it.

Replies from: wnoise
comment by wnoise · 2010-08-27T19:00:56.597Z · LW(p) · GW(p)

Certainly time discounting is something that evolution taught us to do. However, it is adjusting for more than risks. $100 now is worth strictly more than $100 later, because now I can do a strict superset of what I can do with it later (namely, spend it on anything between now and then), as well as hold on to it and turn it into $100 later.

Replies from: Alicorn, PhilGoetz
comment by Alicorn · 2010-08-27T19:04:29.641Z · LW(p) · GW(p)

$100 now is worth strictly more than $100 later, because now I can do a strict superset of what I can do with it later (namely, spend it on anything between now and then), as well as hold on to it and turn it into $100 later.

There could be Schellingesque reasons to wish to lack money during a certain time. For example, suppose you can have a debt forgiven iff you can prove that you have no money at a certain time; then you don't want to have money at that time, but you would still benefit from acquiring the money later.

comment by PhilGoetz · 2010-08-31T16:24:38.428Z · LW(p) · GW(p)

Yes, time discounting isn't just about risk, so that was a bit silly of me. I would have an advantage in chess if I could make all my moves before you made any of yours.

comment by NancyLebovitz · 2010-08-27T14:41:55.132Z · LW(p) · GW(p)

What's the connection to Niven? His portrayal of revival as a bad deal?

Replies from: Perplexed
comment by Perplexed · 2010-08-27T14:57:04.632Z · LW(p) · GW(p)

Yes. As I recall, Niven described a future in which people were generally more interested in acquiring a license to have children than in acquiring a license to thaw a frozen ancestor.

There were a couple of books where a person was revived into a fairly dystopian situation - I forget their names right now. The term "corpsicle" is Niven's.

comment by [deleted] · 2010-08-27T06:22:43.268Z · LW(p) · GW(p)

(1) should be more like: You have an illness that will kill you sometime in the next 50 years unless you have an operation performed right when you die, but not too long after. The clinics that can perform this operation are so far away that the chance of your reaching one in time is negligible. Do you sign up for the operation?

Edit: The correct choice of course is to move nearer to the clinics in about 20 to 30 years.

Edit2: Also, there is a chance that with some more research in the next couple of years a method could be developed that might not cure you but would vastly lengthen the time until you die, with a much greater chance of working than the operation has. Do you pay for the operation or fund that research?

comment by jacob_cannell · 2010-08-27T06:10:45.971Z · LW(p) · GW(p)
  1. Yes. 2. Yes. 3. Sort of. 4. Yes. 5. Yes. 6. Yes.

I haven't signed up yet because at my age (31) my annual unexpected chance of death is low in comparison to my level of uncertainty about the different options, especially with whole brain plastination possibly becoming viable in the near future (which would be much cheaper and probably have a higher future success rate).

Replies from: MartinB, JGWeissman
comment by MartinB · 2010-08-27T08:01:29.525Z · LW(p) · GW(p)

There are quite a few people who think like that (me being one, at the moment). Problem is, a few of us are wrong.

comment by JGWeissman · 2010-08-27T17:26:37.462Z · LW(p) · GW(p)

You could sign up for cryonics now and then switch to brain plasticization if and when it becomes available and is expected to be more effective.

I am not sure how easy it is to reduce your life insurance when switching to a cheaper method, but the possibility is worth looking into if you are worried that you might pay more than you had to.

comment by luminosity · 2010-08-27T07:34:43.447Z · LW(p) · GW(p)

A little nit-picky, but:

A friendly singularity would likely produce an AI that in one second could think all the thoughts that would take a billion scientists a billion years to contemplate.

Without a source these figures seem to imply a precision that you don't back up. Are you really so confident that an AI of this level of intelligence will exist? I feel your point would be stronger by removing the implied precision. Perhaps:

A friendly singularity would likely produce a superintelligence capable of mastering nanotechnology.

Replies from: NihilCredo
comment by NihilCredo · 2010-08-29T13:33:34.460Z · LW(p) · GW(p)

More generally, any time the subject of AI comes up I would recommend making efforts to avoid describing it in terms that sound suspiciously like wish fulfillment, snake-oil promises, or generally any phrasing that triggers scam/sect red flags.

comment by Jiro · 2014-12-10T18:33:02.887Z · LW(p) · GW(p)

(Responding to old post)

This is ridiculous. Each objection makes the deal less good; several objections combined together may make it bad enough that you should turn down the deal. Just because each objection by itself isn't enough to break the deal doesn't mean that they can't be bad enough cumulatively.

I might read a 40 chapter book with a boring first chapter. Or with a boring second chapter. Or with a boring third chapter, etc. But I would not want to read a book which contains 40 boring chapters.

This is especially so in the case of objections 1 and 6. If you don't separate them you end up with "You have a disease and will soon die unless you get an operation. By some crazy coincidence the operation costs exactly as much as cryonics does and the only hospitals capable of performing the operation are next to cryonics facilities. Furthermore, by another coincidence, the chance of the operation actually working is the same as the chance of cryonics actually working. Do you get the operation?" The answer is often "no"; an expensive but likely way to save your life is okay (#1) and an unlikely but cheap way is also okay (#6). But not one which is both expensive and unlikely.

comment by mattnewport · 2010-08-27T01:18:42.211Z · LW(p) · GW(p)

I'd dispute the claimed equivalence between several of these questions and cryonics (particularly the first), and I'd also take issue with some of the premises, but I'd answer yes to all of them with caveats, and I'm not signed up for cryonics nor do I intend to sign up in the near future.

The reason I have no immediate plans to sign up is that I think there are relatively few scenarios where signing up now is a better choice than deferring a decision until later. I am currently healthy but if diagnosed with a terminal illness I could sign up then if it seemed like the best use of resources at the time. I estimate my chances of sudden death as relatively low and many sudden death scenarios would likely greatly lower my chances of successful cryonic revival (due to causing severe damage to my brain) so cryonics doesn't seem a great investment currently.

Based on my age and health and the statistics I've seen, I'd estimate a less than 1 in 1,000 probability of dying in the next 10 years without sufficient warning to make arrangements for cryonics at the time, but in a way that left my brain in a state where I'd have a non-negligible chance at future revival.

Replies from: JGWeissman, James_Miller, Unknowns
comment by JGWeissman · 2010-08-27T01:47:52.767Z · LW(p) · GW(p)

I am currently healthy but if diagnosed with a terminal illness I could sign up then if it seemed like the best use of resources at the time.

Life insurance is a lot easier to get when you are healthy and not diagnosed with a terminal illness.

Replies from: mattnewport
comment by mattnewport · 2010-08-27T04:29:50.318Z · LW(p) · GW(p)

Life insurance has negative expected monetary value. Since I could afford to pay for cryonics from retirement savings if I was diagnosed with a terminal illness I don't think it makes financial sense to fund it with life insurance. Funding with life insurance might have positive expected utility for someone who doesn't expect to have the funds to pay for cryonics in the near future but there's an opportunity cost associated with the expected financial loss of buying life insurance in the event that it is not needed.

comment by James_Miller · 2010-08-27T01:48:06.852Z · LW(p) · GW(p)

Most people pay for cryonics with a life insurance policy, an option that would get very expensive for you if you were diagnosed with a terminal illness.

The danger to you of waiting is that you might get a disease or suffer an accident that doesn't immediately kill you but drains your income and raises the cost to you of life insurance and so puts cryonics outside of your financial reach. You probably couldn't count on your family to financially help you in this situation as they probably think cryonics is crazy and after you "died" wouldn't see any benefit to actually paying for it.

If you think you would want to signup for cryonics if you got a terminal illness I would advise you to soon buy $150,000 in (extra) life insurance, which should be cheap if you are young and healthy.

Replies from: mattnewport
comment by mattnewport · 2010-08-27T01:55:07.935Z · LW(p) · GW(p)

If I am diagnosed with a terminal illness then I won't be needing my retirement savings so I'd use those to pay for cryonics if I decided it was the right choice at the time.

Replies from: James_Miller
comment by James_Miller · 2010-08-27T02:14:57.598Z · LW(p) · GW(p)

This doesn't work if you have (or will get) a family that is financially dependent on you or you get a financially draining illness.

In the U.S. (I think) if you are less than 65 years old the federal government requires you to spend most of your own money before it starts paying for some kinds of treatments. Even if you have health insurance you can lose it or run into its lifetime cap.

Also, you need to factor in mental illness. Getting depression might cost you your job, drain your savings and make it really expensive for you to get life insurance.

Finally, you could lose your retirement savings due to a civil lawsuit, paternity suit, divorce or criminal conviction.

Replies from: mattnewport
comment by mattnewport · 2010-08-27T03:08:50.919Z · LW(p) · GW(p)

Are you a life insurance salesman?

I don't currently have any dependents. If I have dependents in the future I think it would likely make more sense to ensure their financial security in case of my untimely death with term life insurance and still defer a decision on paying for cryonics.

I'm a British citizen and a permanent resident in Canada so health insurance issues are less of a concern for me than they might be for a US citizen. I have no family history of mental illness.

You can assume I will take appropriate steps to protect my assets from the threats you describe and others as I judge necessary and prudent.

comment by Unknowns · 2010-08-29T19:21:56.891Z · LW(p) · GW(p)

From the statistics I've seen, 1 in 1,000 over a 10-year period is definitely overconfident. It's closer to 1 in 1,000 over a one-year period.
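A minimal sketch of how the two estimates compare once compounded over the decade (the per-year figure here is just the claim above, not checked against actuarial tables):

```python
# Sketch only: compound an assumed annual probability of sudden,
# cryonics-compatible death over a 10-year horizon.
annual_p = 1 / 1000                  # claimed per-year rate (unverified assumption)

ten_year_p = 1 - (1 - annual_p) ** 10
print(round(ten_year_p, 4))          # ~0.01, i.e. roughly 1 in 100 over the decade
```

So the disagreement amounts to roughly a factor of ten in the assumed annual rate: a 1-in-1,000-per-decade figure corresponds to about 1 in 10,000 per year.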

comment by jacob_cannell · 2010-08-27T05:57:45.760Z · LW(p) · GW(p)

Quick note: I found it mildly distracting that the explanations (which all start with 'Answering' as the first word) were right under each question. I kept finding myself tempted to read the 'answers' first. I'd personally prefer all the explanations at the end.

comment by A1987dM (army1987) · 2012-10-01T10:02:43.975Z · LW(p) · GW(p)

Answering yes to [“Were you alive 20 years ago?”] means you have a relatively loose definition of what constitutes “you” and so you shouldn’t object to cryonics because you fear that the thing that would be revived wouldn’t be you.

Not necessarily. My definition of “me” may depend on the context. If someone asks me that question, I assume that by “you” they mean ‘a human with your DNA who has since grown into present-you’, regardless of how much or how little I identify with him.

comment by CronoDAS · 2010-08-27T23:24:28.417Z · LW(p) · GW(p)

Twenty years ago, I was eight years old. I think that I can honestly say that if you somehow replaced me with my eight-year-old self, it would be the same as killing me. (To a great extent, I'm still mostly the same person I was at fourteen. I'm not at all the person I was at eight.)

Replies from: Pavitra
comment by Pavitra · 2010-08-27T23:34:22.978Z · LW(p) · GW(p)

In order for this to be an objection to immortality, you would have to believe that the immortality process halts the processes of intellectual and emotional maturation.

Replies from: CronoDAS
comment by CronoDAS · 2010-08-27T23:36:20.099Z · LW(p) · GW(p)

Good point.

On the other hand, I don't know how likely it was for eight-year-old me to end up as the person I am now; for all I know, I could have ended up someone very different.

comment by JoshuaZ · 2010-08-27T04:37:45.585Z · LW(p) · GW(p)

Even if someone answers yes to all six questions, they could still rationally not sign up for cryonics. Aside from issues like weirdness signaling, they might see no single one of the six issues raised as a sufficient objection on its own but consider all of them together to be enough. For example, one might combine (1) and (2), where the two concerns together (being sent into a possibly unpleasant future and having to pay a lot for an operation) add up to enough of a worry even if neither does by itself. It seems unlikely that this would actually be the case for someone, and one would be deservedly skeptical of that sort of claim if it rested on only two of the questions, but the combination of all six makes it more plausible.

comment by TobyBartels · 2010-08-27T03:30:52.185Z · LW(p) · GW(p)

I object to (2). I'm not at all sure that I would take that job. If I did, it would be because the NASA guys got me interested in it (the NASA job, not the bit about returning to Earth in the far future) before I had to make a final decision. If they only tell me what you said (or if the job sounds really boring and useless), then I wouldn't do it. Being cryogenically frozen isn't exactly boring, but it is useless.

And in light of that, I also object to cryonics on the basis of cost. Instead of

Answering yes to (1) means you shouldn’t object to cryonics because of costs or logistics.

it would be better to say

Answering yes to (1) means you shouldn’t object to cryonics because of costs or logistics if you have no other objection.

If it were free and easy (and I knew that I was useless as an organ donor, which is an opportunity cost), then I might sign up on a whim, but high cost means that I won't. But this comes into play only after I decide that I don't want cryonics, on grounds analogous to (2).

I answer yes to (1,3,6). I'm a little worried about (5); I want to ask what else I know about this imminent singularity. But if it's just what you say in the question, then … yes. I haven't become too pessimistic about the singularity yet!

As for (4), I don't want to answer; one reason that I'm reading this site is to find out! So far, however, I'm leaning towards no, but also I don't think that it matters very much; who cares how long it takes? Except that this affects (2); if I believed that a friendly singularity was likely this decade, then we should rewrite (2) to refer to a decade-long trip, and then I lean towards yes! (The point is that people that I know will still be alive and remember me.)

Thanks for an interesting set of questions.

comment by ilzolende · 2014-11-29T20:16:22.570Z · LW(p) · GW(p)

My answer to (3) is "no" for rather trivial reasons, as my state 20 years ago is most comparable to someone who died and was not a cryonics patient: the thing that existed and was most similar to "me" was the DNA of people who are related to me. I don't count that as "alive", and I doubt that most people would.

Ask me (3) in the future, and I will probably have a different answer. (Wait until I'm 24, though, because I don't really identify so well with infants.)

comment by taw · 2010-08-27T15:13:28.845Z · LW(p) · GW(p)
  1. What is "non-trivial but far from certain"? If the operation's chances were as low as my estimate of cryonics' chances, I wouldn't bother, so "no". With a high enough chance, "yes".

  2. Maybe. I don't really trust my ability to place myself in such hypothetical scenarios and I expect my answer to result more from framing effects than anything else.

  3. Sort of.

  4. Definitely not.

  5. Framing effects etc. I don't think I can reason about this clearly enough.

  6. Definitely yes.

So there's one yes. It shouldn't surprise you that I consider cryonics a waste of money with a negligible chance of success, but I'm a huge fan of SENS, which has a realistic chance of at least significantly reducing the worst effects of aging.

And back to your arguments:

  • 1 - costs/logistics only matter relative to the chances of success, so this point fails hard.
  • 2 - waking up in the future is still worse than waking up now, so it works as a partial objection even if you prefer it to never waking up.
  • 3 - the magnitude of change matters, and a future "you" can easily be far outside what you'd still consider "you", so your argument fails.
  • 4, 5 - I'll leave these to the people who believe this; I consider the entire line of thought delusional.
comment by MartinB · 2010-08-27T08:24:00.525Z · LW(p) · GW(p)

The article assumes that people make such decisions rationally, which is just not the case. If you ask someone 'which argument or fact could possibly convince you to sign up, or, let's say, at least treat the cryo option favorably?' you do not get a well-reasoned argument about the chances of it working or about personal preferences, but more counterarguments. Throwing more logic at the problem does not help! If you find a magic argument that suddenly convinces someone who is not convinced yet - or makes the signing process more immediate than planned - then you have probably learned something useful about human nature that can be applied in other areas as well.

comment by [deleted] · 2014-12-04T08:31:36.462Z · LW(p) · GW(p)

Shouldn't you be asking things like, "So you're pro-cryonics. Why would you change your mind?"

comment by A1987dM (army1987) · 2012-10-06T12:00:01.099Z · LW(p) · GW(p)

Answering yes to (2) means you shouldn't object to cryonics because of the possibility of waking up in the far future.

An astronaut after coming back to Earth would likely have much higher social status than a cryonic patient after being revived.

Replies from: gwern
comment by gwern · 2012-10-06T18:41:37.517Z · LW(p) · GW(p)

And yet, they don't sound too happy after coming back. The conclusion I draw from this is Franklin-style.

comment by A1987dM (army1987) · 2012-10-03T01:00:42.743Z · LW(p) · GW(p)

Answering yes to (1) means you shouldn’t object to cryonics because of costs or logistics.

The fact that I'm willing to spend $X in order to die at 75 rather than at 25 doesn't necessarily imply that I must be willing to spend $X to die at [large number] rather than at 75.

comment by UnchartedPower · 2010-08-28T02:34:12.179Z · LW(p) · GW(p)

I say yes to 2, 5, and 6. I'd personally prefer not to be tortured or wake up in a future where humans may have been wiped out by another sentient race (I doubt it).

Replies from: Alicorn
comment by Alicorn · 2010-08-28T04:03:10.255Z · LW(p) · GW(p)

If you wake up, humans haven't been wiped out.

Replies from: Strange7
comment by Strange7 · 2014-11-30T08:34:56.352Z · LW(p) · GW(p)

There might be an 'extinct in the wild, building up a viable breeding population in captivity' situation, though.

Replies from: Capla
comment by Capla · 2014-11-30T18:42:01.125Z · LW(p) · GW(p)

That doesn't sound too bad. It's just humans living as humans have.

comment by FAWS · 2010-08-26T23:51:42.046Z · LW(p) · GW(p)

The conclusion from 6. doesn't follow.

Replies from: James_Miller
comment by James_Miller · 2010-08-27T00:59:55.014Z · LW(p) · GW(p)

I agree it doesn't follow in the sense of a mathematical proof. But someone who answered yes to (6) but claimed that she doesn't value her life enough to do cryonics would sort of be contradicting herself.

Replies from: FAWS, Nisan, TobyBartels
comment by FAWS · 2010-08-27T07:52:54.017Z · LW(p) · GW(p)

I mean that signing up for cryonics and making an attempt to avoid being butchered are so completely different that there are thousands of possible ways for one to do (6) and still consistently claim not to value one's life enough to sign up for cryonics. I don't claim not to value my life highly enough, and I personally think that's a completely ridiculous reason, but any rational cryonics objector who is actually making that excuse would rightfully consider (6) a straw man.

For example, you might think that not bothering to save your life in (6) would be the rational thing to do given your values, but expect instinct to take over. Or you strongly object to violence and would fight just to spite your would-be murderer. Or to send a signal that makes murder less and resistance more appealing for others. Perhaps you are afraid of any pain involved in dying, but not of death itself, and consider cryonics useless because it doesn't prevent pain. Perhaps you hate bureaucracy and don't value your life highly enough to fill out all the forms you expect to be necessary for cryonics, but don't mind physical activity.

I just think that (6) doesn't add anything useful at all given (1), and is so obviously less well thought out that it makes the whole thing weaker.

comment by Nisan · 2010-08-27T03:56:05.897Z · LW(p) · GW(p)

Perhaps what FAWS is getting at is that saying "yes" to 6 doesn't mean that you think your life is worth the financial cost of cryonics. But that was addressed in question 1. Saying "yes" to 6 really means that you can't pretend that your life isn't worth saving at all.

comment by TobyBartels · 2010-08-27T03:58:02.425Z · LW(p) · GW(p)

Like I argued for the others, (6) should say something like ‘if no other objections already apply’. For instance, you might not value your life as much as cryonics costs, but that's question (1); etc.

comment by [deleted] · 2014-11-30T10:17:04.243Z · LW(p) · GW(p)

But what if I don't sign up for cryonics because I simply don't want to live in another time, without my friends, my family, the people I owe duties to, ...? What if I simply think it a dishonest way out? (I mean, I'm okay with cryopreserving other people, especially the terminally ill. I don't mind the weirdness, either. But myself, no; I have a life, why would I decide to give it up?)

Replies from: gjm
comment by gjm · 2014-11-30T11:39:43.222Z · LW(p) · GW(p)

I have a life, why would I decide to give it up?

What do you mean by "give it up"? No one (so far as I know) gets cryopreserved until they are on the point of death. (I think generally not until they are actually, by conventional definitions, dead -- because otherwise there's the legal risk that the cryopreservation gets treated as a murder. This may differ across jurisdictions; I don't know.)

Replies from: None
comment by [deleted] · 2014-11-30T12:21:09.755Z · LW(p) · GW(p)

It's just that in the OP there were questions about operations etc. that made it sound like there was time to decide, that is, that a person got to choose to be cryopreserved or not. Like it was not really urgent. Of course, if the person cannot decide because they are unconscious, it's another matter.

Replies from: gjm
comment by gjm · 2014-11-30T17:19:18.473Z · LW(p) · GW(p)

What usually happens is that a person decides, while relatively young and healthy, that they want to be cryopreserved, and at that point they sign up with an organization that provides cryopreservation services and arrange for them to be paid (e.g., by buying a life insurance policy that pays out as much as the organization charges). Later, when they die, the organization sends people to do the cryopreservation. No last-minute panicked decisions are generally involved, other than maybe "so, should we call the cryo people now?".

I have not heard of anyone deciding while still young and healthy that they want to get frozen[1] right now this minute. Not least because pretty much everyone agrees that there's at least a considerable chance that they will never get revived, and giving up the rest of your life now for the sake of some unknown-but-maybe-quite-small chance of getting revived in an unknown-but-maybe-quite-bad future doesn't seem like a good tradeoff. And also because the next thing to happen might be a murder charge against the people doing the cryopreservation.

[1] "Frozen" is not actually quite the right word given current cryopreservation methods, but it'll do.

Replies from: None, dxu
comment by [deleted] · 2014-12-02T07:22:17.867Z · LW(p) · GW(p)
  1. Marriage, which compared to waking up in some distant future is a walk on the beach in terms of adjustment, comes as a shock and, well, a sometimes depressing change for many people. I would think at least some adults would be unwilling to risk cryopreservation not because of fear of the unknown, but exactly because of the unpleasantness of a known.
  2. About disgust. My sister once worked at a sanitary-epidemiological station (I don't know what they are called in your area), and there was a mother who bribed a doctor to diagnose her child not with the scabies that he/she had, but with some other, socially acceptable illness. The kid got the kindergarten quarantined for some considerable time. So it might be that people are appalled by the illness (again, I don't say there's any justification. It's just how people think, and they don't even need to know the reason why a person would choose to be cryopreserved. Now, if it were a last-minute desperate attempt at a miracle cure, that would be more respectable.)
Replies from: gjm
comment by gjm · 2014-12-02T11:47:23.860Z · LW(p) · GW(p)

I concede that there are probably some people who, if they could, would get cryopreserved while still young and healthy in the hope of escaping a world they find desperately unpleasant for a possibly-better one.

(I would guess that actually doing this would be rare even if it were legal. We're looking at someone unhappy enough to do something that on most people's estimates is probably a complicated and expensive method of suicide -- despite being young, reasonably healthy, able to afford cryopreservation, and optimistic enough about the future that they expect a better life if they get thawed. That's certainly far from impossible, but I can't see it ever being common unless the consensus odds of cryo success go way up.)

But unless I'm very confused, it seems like the subject has changed here. The answer to the question "Why not sign up for cryopreservation when you die?" can't possibly be "I have a life, why would I decide to give it up?".

I'm not sure I understand your point about disgust. Would you like to fill in a couple more of the steps in your reasoning?

Replies from: None
comment by [deleted] · 2014-12-02T15:00:55.870Z · LW(p) · GW(p)

Er, no. I meant that people who have experienced change might be less willing to choose a greater change, though it was very nice of you to understand it so. Clarifying about the latter: people might think, not quite clearly, that someone who wants a cure so early in life might have done something to need it, for example got himself an unmentionable disease. Like scabies, only worse.

Replies from: gjm
comment by gjm · 2014-12-02T16:21:48.378Z · LW(p) · GW(p)

I meant that people who have experienced change might be less willing to choose a greater change

OK. Then I have even less clue how this relates to the discussion I thought we were originally having.

I think we are all agreed that there are plenty of reasons why someone might choose not to get cryopreserved while still young and healthy. James_Miller's questions were not (I'm about 98% sure) intended to be relevant to that question; only to the question "why not arrange to be cryopreserved at the point of death?".

Everything you've been saying has (I think) been answering the question "why not get cryopreserved right now, while your life is still going on normally and you're reasonably healthy?". Which is fine, except that that isn't a question that needs answering, because to an excellent first approximation no one is thinking of getting cryopreserved while still young and healthy, and no one here is trying to convince anyone that they should.

Clarifying about the latter [...]

OK, so this was yet another reason why some people might choose not to get cryopreserved while still young and reasonably healthy. Fine, but (see above) I think this rather misses the point.

Replies from: None
comment by [deleted] · 2014-12-02T17:30:50.193Z · LW(p) · GW(p)

Yes, sorry, I think I misread the questions for two reasons: 1, I saw no reason to be cryopreserved when old and maybe going senile, only to wake to an alien universe with almost no desire to truly adapt to it and no real drive to understand it; and 2, I might put a higher probability on young and healthy people dying abruptly than you do. There are enough wars for it to happen. Cryopreservation might be awfully handy.

comment by dxu · 2014-11-30T17:28:10.576Z · LW(p) · GW(p)

Of course, "cryocrastination" is a thing too.

comment by complexmeme · 2010-09-01T06:27:34.028Z · LW(p) · GW(p)

Some of your analogies strike me as quite strained:

(1) I wouldn't call the probability of being revived post near-future cryogenic freezing "non-trivial but far from certain", I would call it "vanishingly small, if not zero". If sick and dying and offered a surgery as likely to work as I think cryonics is, I might well reject it in favor of more conventional death-related activities.

(3) My past self has the same relation to me as a far-future simulation of my mind reconstructed from scans of my brain-sicle? Could be, but that's far from intuitive. Also, there's no reason to use "fear" to characterize the opposing view when "think" would work just as well.

(6) What Yvain said.