Costs to (potentially) eternal life

post by bgrah449 · 2010-01-21T21:46:31.316Z · LW · GW · Legacy · 111 comments

Imagine Omega came to you and said, "Cryonics will work; it will be possible for you to be resurrected and have the choice between a simulation and a new healthy body, and I can guarantee you live for at least 100,000 years after that. However, for reasons I won't divulge, your surviving to experience this is wholly contingent upon you killing the next three people you see. I can also tell you that the next three people you see, should you fail to kill them, will die childless and will never sign up for cryonics. There is a knife on the ground behind you."

You turn around and see someone. She says, "Wait! You shouldn't kill me because ... "

What does she say that convinces you?

[Cryonics] takes the mostly correct idea "life is good, death is bad" to such an extreme that it does violence to other valuable parts of our humanity (sorry, but I can't be more specific).

That's a quote from a comment in a post about cryonics. "I can't be more specific" is not doing this comment any favors, and overall the comment was rebutted pretty well. But I did try to imagine these other valuable parts, and I realized something that remains unresolved for me.

Guaranteed death places a limit on the value of my life to myself. Parents shield children with their bodies; Casey Jones happens more often. People run into burning buildings more often. (Suicide bombers happen more often, too, I realize.)

I think this is a valuable part of humanity, and I think that an extreme "life is good, death is bad" view does do violence to it. You can argue we should effect a world that makes this willingness unnecessary, and I'll support that; but separate from making the willingness useless, eliminating that willingness does violence to our humanity. You can argue that our humanity is overrated and there's something better over the horizon, i.e. the cost is worth it.

But the incentive to save 1+X lives at the cost of your own just got weaker. How do you put a price on heaven? orthonormal suggests that we should rely on human irrationality here to keep us moral: that thankfully we are too stupid and slow to actually change the decisions we make after recognizing that the expected value of our options has changed, despite the opportunity cost of those decisions growing considerably. I think this a) underestimates humans' ability to react to incentives and b) underestimates the reward the universe bestows on those who do react to incentives.

I don't see a good "solution" to this problem, other than to rely on cognitive dissonance to make this seem, in the future, less offensive than it does now. The people for whom this presents a problem will eventually die out anyway, as there is a clear advantage to favoring it. I guess that's the (ultimately anticlimactic) takeaway: morals change in the face of progress.

So, which do you favor more - your life, or identity?

EDIT: Well, it looks like this is getting fast-tracked for disappeared status. I think it's interesting that people seem to think I'm making a statement about a moral code. I'm not; I'm talking about incentives and what would happen, not what the right thing to do is.

Let's say Eliezer gets his wish and many, many parents sign up for cryonics and sign their children up for cryonics. Does anyone really expect that this population would not respond to its incentives to avoid more danger? Anecdotes aside; do you expect them to join the military with the same frequency, be firemen with the same frequency, to be doctors administering vaccinations in jungles with the same frequency? I don't think it's possible to say that with a straight face and mean it; populations respond to incentives, and the incentives just changed for that population.

111 comments

Comments sorted by top scores.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-21T22:09:44.853Z · LW(p) · GW(p)

Guaranteed death places a limit on the value of my life to myself

It puts a limit on the value of other lives, too.

Whatever a life is worth, so long as it's the same factor affecting the potential worth of all lives, the dilemma of altruism or selfishness is the same.

Replies from: bgrah449
comment by bgrah449 · 2010-01-21T23:50:19.723Z · LW(p) · GW(p)

A standard measure of how much a life is worth is an estimate of its expected time until death.

comment by Psychohistorian · 2010-01-22T00:48:18.374Z · LW(p) · GW(p)

Your overall point seems to be: "If some people live a really, really long time, and others don't, we won't value the lives of the 'mortals' as much as we do those of the 'immortals.'"

But don't we value saving nine-year-olds more than ninety-year-olds? The real question is, "If I'm immortal, why aren't they?"

You also miss the obvious positive effects of valuing life more greatly. War would be virtually impossible between immortal nations, at least insofar as it requires public support and soldiers. It would also be (to some degree) morally defensible for immortal nations to value citizen-lives higher than they value the lives of mortal-nations, which means they would be more willing to use extreme force, which means mortal nations would be much more hesitant to provoke immortal nations. Also, our expenditures on safety and disaster preparedness would probably increase exponentially, and our risk-taking would also decrease dramatically.

In other words, I'm not sure this post clearly communicates your point, and, to the extent it does, your point seems underdeveloped and quite probably bad.

Replies from: soreff, bgrah449
comment by soreff · 2010-01-23T00:03:42.447Z · LW(p) · GW(p)

Also, our expenditures on safety and disaster preparedness would probably increase exponentially, and our risk-taking would also decrease dramatically.

This depends to an extent on the nature of the immortalizing technology. I agree with you if the technology doesn't permit backups, but I disagree with you if backups can be done (at least with respect to the risk of local death). In particular an uploading-based technology, with an easy way to make backups, might result in the average person taking more risks (at least risks of one copy being killed - but not the whole ensemble of backups) than they do now.

Replies from: Psychohistorian, mattnewport
comment by Psychohistorian · 2010-01-24T02:57:17.949Z · LW(p) · GW(p)

I'm not yet sold on the perfect substitutability of backups, but the point, while interesting, is quite irrelevant in this context. If backups aren't perfect substitutes, they won't affect people's behaviour. If they are, then increased risk is essentially immaterial. If I don't care about my mortality because I can be easily resurrected, then the fundamental value of me taking risks changes; thus, the fact that I take more risks is not a bad thing.

Now, there may be a problem that people are less concerned with other people's lives, because, since those people are backed up, they are expendable. The implications there are a bit more complex, and that issue may result in problems, though such is not necessarily the case.

comment by mattnewport · 2010-01-23T00:46:11.432Z · LW(p) · GW(p)

Richard Morgan's sci-fi trilogy, Altered Carbon, Broken Angels and Woken Furies, has an entertaining take on the implications of universal backups.

Replies from: soreff
comment by soreff · 2010-01-23T01:33:35.444Z · LW(p) · GW(p)

Many thanks!

comment by bgrah449 · 2010-01-22T00:57:56.179Z · LW(p) · GW(p)

We value saving people who have a high expected time until death, so yes, we value saving nine-year-olds more than ninety-year-olds. This would presumably become reversed if the child had 1/10th the expected time until death of the old man.

The real answer is it doesn't matter - not everyone will enroll.

Our expenditures on safety and disaster preparedness would increase, but you're probably overrating the relative benefit, because the tragedy from accidents would increase suddenly while our ability to mitigate them lags - we would be playing catch-up on safety measures for a long time.

Replies from: Nick_Tarleton, pengvado
comment by Nick_Tarleton · 2010-01-22T01:36:29.687Z · LW(p) · GW(p)

We value saving people who have a high expected time until death, so yes, we value saving nine-year-olds more than ninety-year-olds. This would presumably become reversed if the child had 1/10th the expected time until death of the old man.

At least to the extent that this preference comes from deliberative knowledge, rather than free-floating norms about the value of children, or instinct.

Replies from: bgrah449
comment by bgrah449 · 2010-01-22T01:44:23.446Z · LW(p) · GW(p)

Yes, to that extent. The amount that we value the child's life does start with an advantage against the amount we value the old man's life, which is why I chose a drastic ratio.

comment by pengvado · 2010-01-23T10:27:22.731Z · LW(p) · GW(p)

the tragedy from accidents would increase suddenly while our ability to mitigate them lags

If you mean that humans intuitively measure things on a comparative scale, and thus increasing the value of an outcome that you failed to get can make you feel worse than not having had the chance in the first place -- yes, I agree that it is descriptively true. But the consequentialist in me says that that emotion runs skew to reality. On reflection, I won't choose to discount the value of potential-immortality just because it increases the relative tragedy of accidental death.

comment by Psychohistorian · 2010-01-21T23:21:15.824Z · LW(p) · GW(p)

What does she say that convinces you?

"The entity that gave you instruction did not provide you adequate evidence in support of its claims! The odds that it's just messing with you are more orders of magnitude than you can count more likely than the truth of its statement."

comment by JulianMorrison · 2010-01-22T13:17:08.003Z · LW(p) · GW(p)

What does she say that convinces you?

She doesn't have to say anything - she would have had to push herself well out of the norm and into the range of "people whose richly deserved death would improve the world" before I would even consider it.

I would just say "Omega, you're a bastard", and continue living normally.

Replies from: bgrah449
comment by bgrah449 · 2010-01-22T21:21:20.056Z · LW(p) · GW(p)

Imagine Omega said, "The person behind you will live for 30 seconds if you don't kill her. If you kill her, you will continue leading a long and healthy life. If you don't, you will also die in 30 seconds."

Do you say the same thing to Omega and continue enjoying your 30 seconds of life?

Replies from: JulianMorrison, Cyan, mattnewport, Morendil
comment by JulianMorrison · 2010-01-23T02:10:26.457Z · LW(p) · GW(p)

No difference. I won't buy my life with murder at any price. (Weighing one-against-many innocents is a different problem.)

And I'd be calling Omega a bastard because, as an excellent predictor, he'd know that, but decided to ruin my day by telling me anyway.

Replies from: bgrah449
comment by bgrah449 · 2010-01-23T05:08:19.754Z · LW(p) · GW(p)

Can you explain, then, how this is different from suicide, since your theft of her life is minimal, yet your sacrifice of your own life is large?

Replies from: JulianMorrison
comment by JulianMorrison · 2010-01-23T20:26:29.569Z · LW(p) · GW(p)

It's not suicide, I'm just bumping into a moral absolute - I won't murder under those circumstances, so outcomes conditional on "I commit murder" are pruned from the search tree. If the only remaining outcome is "I die", then drat.

comment by Cyan · 2010-01-22T22:11:34.348Z · LW(p) · GW(p)

For 30 seconds, I kill her. For an hour, we both die. I think my indecision point is around 15 minutes.

Replies from: Eliezer_Yudkowsky, bgrah449
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-22T22:21:05.469Z · LW(p) · GW(p)

...thank you for your honest self-reporting, but I do feel obliged to point out that this does not make any sense.

Replies from: Cyan
comment by Cyan · 2010-01-23T01:33:11.783Z · LW(p) · GW(p)

I didn't think it through for any kind of logical consistency -- it's pure snap judgment. I think my instinct when presented with this kind of ethical dilemma is to treat my own QALYs (well, QALY-seconds) as far less valuable than those of another person. Or possibly I'm just paying an emotional surcharge for actually taking the action of ending another person's life. There was some sense of "having enough time to do something of import (e.g., call loved ones)" in there too.

comment by bgrah449 · 2010-01-22T22:18:48.717Z · LW(p) · GW(p)

But isn't this time relative to lifespan? What if your entire lifespan were only 30 minutes?

comment by mattnewport · 2010-01-22T21:38:01.793Z · LW(p) · GW(p)

I think my reaction would be "fuck you Omega". If an omniscient entity decides to be this much of a douchebag then dying giving them the finger seems the only decent thing to do.

Replies from: bgrah449
comment by bgrah449 · 2010-01-22T21:41:12.088Z · LW(p) · GW(p)

My implied assumption was Omega was an excellent predictor, not an actor - I thought this was a standard assumption, but maybe it isn't.

comment by Morendil · 2010-01-22T21:42:28.585Z · LW(p) · GW(p)

Showing that the original question had little to do with cryonics...

Replies from: bgrah449
comment by bgrah449 · 2010-01-22T21:45:17.834Z · LW(p) · GW(p)

This question is a highly exaggerated example to display the incentives, but cryonics subscribers will be facing choices of this kind, with much more subtle probabilities and payoffs.

comment by wedrifid · 2010-01-21T23:41:13.724Z · LW(p) · GW(p)

What does she say that convinces you?

  • I am wired with explosives triggered by an internal heart rate monitor.
  • My husband, right next to me, is 100 kg of raw muscle and armed.
  • I was the lead developer of an AGI that is scheduled to hit start in three weeks. I quit when I saw that the 'Friendliness' intended is actually a dystopia and my protests were suppressed. I have just cancelled my cryonics membership and the reason your cryonic revival is dependent on killing me is that I am planning to sabotage the AI.
  • A catch all: Humans can always say with sincerity that they would never do something so immoral under any circumstances without it necessarily changing their behaviour in the moment.
  • Awareness of the above tendency in oneself often comes with the (necessary) willingness to explicitly lie about one's values, for the same reasons one would otherwise have lied to oneself.
  • Related to the above, it is a natural instinct to speak out in outrage against anyone who doesn't condemn such immoral actions, or even against those who don't imply that the answer should be known a priori.
  • This plays a part in the votes your post has received, which is unfortunate. I thank you for making it and hope the magnified downvotes do not put you under the threshold for posting.
Replies from: Technologos, Kevin
comment by Technologos · 2010-01-22T02:58:34.877Z · LW(p) · GW(p)

I was the lead developer of an AGI that is scheduled to hit start in three weeks. I quit when I saw that the 'Friendliness' intended is actually a dystopia and my protests were suppressed. I have just cancelled my cryonics membership and the reason your cryonic revival is dependent on killing me is that I am planning to sabotage the AI.

Is it weird that my first reaction is to ask her specific questions about the Sequences to test the likelihood of that statement's veracity?

comment by Kevin · 2010-01-22T10:40:07.911Z · LW(p) · GW(p)

Upvoted for being the only one to actually answer that question. I'm not comfortable answering, but let's just say that I would have an eternity to atone for my sins.

comment by jimrandomh · 2010-01-23T19:54:43.060Z · LW(p) · GW(p)

Imagine Omega came to you and said, "Cryonics will work; it will be possible for you to be resurrected and have the choice between a simulation and a new healthy body, and I can guarantee you live for at least 100,000 years after that. However, for reasons I won't divulge, your surviving to experience this is wholly contingent upon you killing the next three people you see.

This offer could have positive expected value in terms of number of lives if, for example, you were a doctor who expected to save more than three lives during the next 100,000 years. However, no matter what any decision theory or expected utility calculation says, Omega's offer falls into several reference classes which mean it cannot be accepted without formal safeguards.

First, it involves trading for a resource (years of life) in an amount several orders of magnitude different from what we normally deal with. An entity which accepts offers in that class is likely to be a paperclipper. Second, it involves a known immoral act - killing people, as opposed to failing to save them. And third, it is so implausible that confusion, deception, brain damage or misprogramming are more likely than the offer being valid. Omega can remove statements from this last reference class in thought experiments, but no entity can do so in real life.
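
As a toy illustration of the naive expected-lives arithmetic that this comment argues should nonetheless be refused (every number below is made up for the example):

```python
# Toy expected-lives calculation for Omega's offer. All figures are invented;
# the parent comment's point is that this arithmetic should NOT settle the question.
p_offer_genuine = 0.5            # assumed probability Omega's claim is valid
lives_saved_if_revived = 10      # e.g. a doctor's marginal saves over 100,000 years
lives_taken_now = 3              # the certain cost of accepting

expected_net_lives = p_offer_genuine * lives_saved_if_revived - lives_taken_now
print(expected_net_lives)        # 2.0 > 0: "positive expected value", yet still refused
```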

comment by Richard_Kennaway · 2010-01-22T12:59:00.152Z · LW(p) · GW(p)

Once upon a time, there was a policeman, called John Perry.

comment by Tiiba · 2010-01-22T03:33:28.035Z · LW(p) · GW(p)

I just feel like saying this:

FOOLISH MORTAL!

Sorry. (I don't mean anyone here, I just had to say it.)

comment by Zachary_Kurtz · 2010-01-21T21:55:17.670Z · LW(p) · GW(p)

Cryonics is good because life is good. The subjective value of my life doesn't make it ok to kill someone I perceive as less valuable.

Here's another argument against: if murder suddenly becomes a defensible position in support of cryonics, then how do you think society, and therefore societal institutions, will respond if murder becomes the norm? I think it becomes less likely that cryonic institutions will succeed, thus jeopardizing everyone's chances of living 100,000+ years.

Replies from: bgrah449, knb
comment by bgrah449 · 2010-01-22T00:08:12.797Z · LW(p) · GW(p)

It's not about what's okay; it's about what people will actually do when their life expectancy goes up drastically.

Replies from: Zachary_Kurtz
comment by Zachary_Kurtz · 2010-01-22T15:17:57.227Z · LW(p) · GW(p)

That's the point I'm trying to make. An action that could appear to increase life expectancy drastically could actually have the opposite effect (in the situation I propose, by affecting the institutional structure required for cryonics to succeed).

Replies from: bgrah449
comment by bgrah449 · 2010-01-22T15:40:21.907Z · LW(p) · GW(p)

Yes, once cryopreservation is widespread across the globe. But when only some people have access and others don't, and we have a decent shot of actually being revived, the tragedy from a cryonics subscriber losing their life is much greater than when a non-cryonics subscriber loses their life.

comment by knb · 2010-01-21T23:24:25.078Z · LW(p) · GW(p)

Also known as the Categorical Imperative.

comment by blogospheroid · 2010-01-23T18:46:20.984Z · LW(p) · GW(p)

People's willingness to sacrifice their own lives might change drastically, agreed.

But there are counteracting factors.

People will think far more long term and save more. They might even put more thought into planning. The extra saving might result in an extra safety widget that saves more lives. You can't really disregard that.

They will be more polite and more honest, because life's too long and the world's too small. Think ten times before cheating anyone. The extra business that will generate and the prosperity that will bring might save more lives than firemen and missionary doctors ever could.

Presently our finite lifespan does violence to these aspects, which we all consider moral. Will non-aging humans really ignore climate change, peak oil, supervolcanoes and asteroids? I don't think so.

So, I'm not sure which part of the human utility calculus will weigh in here, but it's my hunch that at least in my country, India, we would drastically improve matters if we thought a little more long-term.

comment by roland · 2010-01-22T20:53:20.864Z · LW(p) · GW(p)

There are other similar dilemmas like: why do you go to the cinema if that money could be spent saving one of 16,000 children who die from hunger every day?

My answer is: we are all selfish beings, but whereas in our primitive (caveman) environment the disparities wouldn't have been that great, for lack of technology, nowadays those who have access to the latest technology can leverage much more advantage for themselves. But unfortunately, if you have to make the decision between cryonics for yourself vs. saving N children from starvation: if you still want to be alive in 100,000 years you will have to be selfish, and I don't blame you.

I don't think it is correct to make moral judgements; it's just the way we are.

Morals change in the face of progress.

Morals are what you can get away with without being reprimanded by your tribe.

So, which do you favor more - your life, or identity?

What do you mean with identity?

Replies from: bgrah449
comment by bgrah449 · 2010-01-22T21:14:48.898Z · LW(p) · GW(p)

I agree that I don't think moral judgments are the issue.

I don't think that's what mores are.

Values are critical to identity; changing them to increase life expectancy changes your identity.

comment by Richard_Kennaway · 2010-01-22T11:33:01.681Z · LW(p) · GW(p)

I would be interested to hear from those who are actually signed up for cryonics. In what ways, if any, have you changed your willingness to undertake risks?

For example, when flying, do you research the safety records of the airlines that you might travel with, and always choose the best? Do you ride a motorbike? Would you go skydiving or mountaineering? Do you cross the road more carefully since discovering that you might live a very long time? Do you research diet, exercise, and other health matters? Do you always have at the back of your mind the thought: if I had a heart attack right now, what plans are in place for a controlled deanimation? And so on.

The same question applies to those who, whether or not they take cryonics seriously, do take seriously the possibility of radical life-extension coming soon enough to radically extend their own lives. How strenuously are you trying to stay alive and healthy long enough for your fragile vessel to reach the promised land?

ETA: bgrah449 writes in a comment below:

It's not about what's okay; it's about what people will actually do when their life expectancy goes up drastically.

Someone who takes cryonics seriously and is signed up already believes their life expectancy has gone up drastically. Or at least, gone up by a drastic amount times the probability of revival.

Replies from: scotherns, soreff, AngryParsley
comment by scotherns · 2010-01-22T15:01:45.871Z · LW(p) · GW(p)

I haven't significantly changed my willingness to take risks, but then again I have always been very risk averse.

I would never ride a motorbike or go mountaineering etc. I eat well, don't smoke, try to avoid stress and exercise regularly.

I did all these things even before I took cryonics seriously. This is because it was obvious that being alive is better than being dead, and these things seemed like obvious ways in which to preserve my life as long as possible.

If I found out tomorrow that cryonics was proven to NOT work, I'd still continue crossing the road very carefully.

Replies from: Morendil
comment by Morendil · 2010-01-22T15:26:44.725Z · LW(p) · GW(p)

That matches my intuition, which I'd express as: it's a particular disposition toward life risks that makes someone interested in cryonics, rather than signing up for cryonics which makes someone more prudent. (Just a hunch, I'm not saying I've thought this through.)

There are some activities I like which seem riskier than they are, such as treetop courses; the equipment makes them perfectly safe but I enjoy the adrenalin rush. When I travel by plane I enjoy takeoff and landing for similar reasons, and flying in general whenever there is a clear view of land. (Not everything about flying is enjoyable.)

comment by soreff · 2010-01-23T00:22:34.964Z · LW(p) · GW(p)

I'm another one who is signed up, has always been risk-averse, and hasn't changed risk-averse behavior as a result of cryonics membership.

One general comment: To my mind, infinite life has something like a net present value with a finite interest rate. I probably don't apply a consistent discount rate (yes, I've read the hyperbolic discounting article). For an order-of-magnitude guess, assume that I discount at 1% annually and treat a billion-year life expectancy as being roughly as valuable as 100 years of life - and a 1% chance of that as being roughly equal to gaining an extra year. I'm now 51, so adding 1 year to say 25 years is a 4% gain. Not trivial, but not a drastic increase either.

Replies from: bgrah449
comment by bgrah449 · 2010-01-23T00:31:43.487Z · LW(p) · GW(p)

I'm not following this. Is the billion year life expectancy roughly as valuable as 100 years certain?

Replies from: soreff
comment by soreff · 2010-01-23T01:42:24.522Z · LW(p) · GW(p)

I'm saying that, roughly speaking, I value next year at 99% of this year, the year after that at (99%)^2 of this year, the year after that at (99%)^3 and so on. The integral of this function out to infinity gives 100 times the value of one immediate year. I'm not quite sure what you mean by "certain". Could you elaborate? I'm not trying to calculate the probability that I will actually get to year N, just to very grossly describe a utility function for how much I'd value year N from a subjective view from the present moment.
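
For readers who want to check the arithmetic, here is a minimal sketch (not from the original thread) of the geometric discounting being described; the 1% rate comes from the comment, and the 30-year remaining-lifespan figure below is a rough assumption for illustration.

```python
# Present value of future life-years under geometric discounting at 1%/year:
# sum_{t=0}^{n-1} (1 - rate)**t = (1 - (1 - rate)**n) / rate.
def discounted_years(n_years, rate=0.01):
    return (1 - (1 - rate) ** n_years) / rate

print(discounted_years(100))             # ~63.4 current-year equivalents
print(discounted_years(10 ** 9))         # ~100.0: a billion years ~ 100 immediate years
print(0.01 * discounted_years(10 ** 9))  # a 1% chance of that ~ 1 extra year's worth

# Rough version of "adding 1 year to say 25 years is a 4% gain":
remaining = discounted_years(30)         # ~26 discounted years left (assumes ~30 more years at 51)
print(1 / remaining)                     # ~0.04, i.e. roughly a 4% gain
```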

Replies from: bgrah449
comment by bgrah449 · 2010-01-23T05:10:47.142Z · LW(p) · GW(p)

But you can't value year N at a time-discounted rate, because the unit you are discounting is time itself. Why discount 1,000,000,000 years if you won't discount 100 years? I don't understand why one can be discounted, yet the other cannot.

Replies from: Jordan
comment by Jordan · 2010-01-23T05:14:57.757Z · LW(p) · GW(p)

It's not time you're discounting; it's the experience of living that quantity of time.

Replies from: bgrah449
comment by bgrah449 · 2010-01-23T05:23:49.703Z · LW(p) · GW(p)

I don't see the difference. Money can be spent at once, or over a period of time, so the time value of money makes sense. But you can't live 100 years over any period other than 100 years. I don't understand the time value of the experience of living.

Replies from: Jordan, soreff
comment by Jordan · 2010-01-23T05:35:57.452Z · LW(p) · GW(p)

Me-in-100 years is not me-now. I can barely self-identify with me-10-years-ago. Why should I value a year someone else will live as much as I value this year that I'm living?

Replies from: jhuffman, bgrah449
comment by jhuffman · 2010-01-26T20:37:46.125Z · LW(p) · GW(p)

I have a pretty different set of values than I had fifteen years ago, but I still consider all those experiences to have happened to me. That is, they didn't happen to a different identity just because I was 18 instead of 33.

It is possible that if the me of 18 and the me of 33 talked through IRC we wouldn't recognize each other, or have much at all to talk about (assuming we avoid the subject of personal biography that would give it away directly).

The me of 67 years from now, at 100 years of age, I can also expect to be very different from the me of now, even more different from the me of now than the me of 18. We might have the same difficulty recognizing ourselves in IRC.

Yet I'm confident I'll always say I'm basically the same person I was yesterday, and that all prior experiences happened to me, not some other person; regardless of how much I may have changed. I have no reason not to think I'd go on saying that for 100,000 years.

Replies from: Jordan
comment by Jordan · 2010-01-27T03:43:35.727Z · LW(p) · GW(p)

I remember all (well, most) of my prior experiences, but memory is a small aspect of personal identity, in my opinion. Compared to my old self 10 years ago, I react differently, feel differently, speak differently, have different insights, different philosophical ideas, different outlooks. It comes down to a definition though, which is arbitrary. What's important is: do you identify with your future self enough to have a 1-1 trade off between utility for yourself (you-now) and utility for them (you-in-the-future)?

Replies from: jhuffman
comment by jhuffman · 2010-01-27T14:57:14.073Z · LW(p) · GW(p)

It is not just memory of experiences from fifteen years ago that make me consider it may be the same identity but the fact that those experiences shaped my identity today, and it happened slowly and contiguously. Every now and then I'd update based on new experiences, data or insights but that didn't make me a different person the moment it happened.

If identity isn't maintained through contiguous growth and development then there really is no reason to have any regard at all for your possible future, because it isn't yours. So smoke 'em if you got 'em.

Replies from: Jordan
comment by Jordan · 2010-01-29T19:51:33.531Z · LW(p) · GW(p)

I think you're making things artificially binary. You offer these two possibilities:

  • Contiguous experience implies I completely identify with my past and future selves

  • Identity isn't maintained through contiguous growth so there is no reason to have any regard at all for my future

Why can't contiguous experience lead one to partially identify with their past and future selves?

Replies from: jhuffman
comment by jhuffman · 2010-01-29T20:37:20.724Z · LW(p) · GW(p)

Good point. Maybe my future self isn't exactly me, but it's enough like me that I still value it.

It doesn't really matter though, because I never get to evaluate my future self. I can only reflect on my past. And when I do, I feel like it is all mine...

comment by bgrah449 · 2010-01-23T05:42:44.784Z · LW(p) · GW(p)

So can you state the discount rate equivalencies in terms of Me-in-an-amount-of-years?

comment by soreff · 2010-01-23T05:46:23.936Z · LW(p) · GW(p)

Jordan states it correctly. bgrah449, to put it in terms of decisions: I sometimes have to make decisions which trade off the experience at one point in time vs the experience at another. As you noted in your most recent post, money can be discounted in this way, and money is useful because it can be traded for things that improve the experience of any given block of time. By discounting money exponentially, I'm really discounting the value of experience - say eating a pizza - at say 5 years from now vs. now.

Now I also need to be alive (I'm taking that to include an uploaded state) to have any experiences. I make choices (including having set up my Alcor membership) that affect the probability of being alive at future times, and I trade these off against goods that enhance my experience of life right now. When I say that I value a year of time N years from now at around .99^N of the value I place on my current year, it means that I would trade the same goods for the same probability change with that difference of weights for those two years.

If I, say, skip a pizza to improve my odds of surviving this year (and experiencing all of the events of the year) by 0.001%, I would only be willing to skip half a pizza to improve my odds of surviving from year 72 to year 73 (and having the more distant experiences of that year) by 0.001%. Is that clearer?
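
A quick numeric check of that example (not part of the original comment, and assuming the same 1% annual discount): the weight placed on year N is 0.99^N, which at year 72 comes out close to one half, matching the half-a-pizza trade-off.

```python
# Illustrative check of the "half a pizza" figure under a 1% annual discount.
def year_weight(n, rate=0.01):
    """Weight placed on experiences n years from now, relative to this year."""
    return (1 - rate) ** n

print(year_weight(72))  # ~0.485, i.e. roughly half the value of a current-year pizza
```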

Replies from: bgrah449
comment by bgrah449 · 2010-01-23T15:01:21.449Z · LW(p) · GW(p)

Yes - thanks!

comment by AngryParsley · 2010-01-23T00:39:44.402Z · LW(p) · GW(p)

In what ways, if any, have you changed your willingness to undertake risks?

I don't think I have. Compared to most people I'm probably a bit of a risk-taker.

I don't research airline safety records. I ride a motorcycle. I haven't gone skydiving or mountaineering but both of those sound fun. I don't cross the road more carefully. I exercise (I've always enjoyed running), but don't diet (I've always enjoyed ice cream).

Unless I'm discussing cryonics with someone else I mostly don't think about it.

Replies from: Blueberry
comment by Blueberry · 2010-01-23T01:06:44.276Z · LW(p) · GW(p)

Most of this is just scope insensitivity and the availability bias: for instance, air travel is ridiculously safe, but airplane accidents are well publicized. Researching airline safety records is a little silly given how safe air travel is. I ride a motorcycle too, and it's much safer than it appears to some people, as long as you are properly trained, wear a helmet, and don't drink and drive. Same goes for skydiving or mountaineering.

Replies from: AngryParsley
comment by AngryParsley · 2010-01-23T01:41:04.998Z · LW(p) · GW(p)

I agree with you except for when it comes to motorcycles. Motorcycles are 4-5 times more deadly than cars in terms of death rate per vehicle and 30 times more deadly in terms of deaths per mile traveled. (See http://en.wikipedia.org/wiki/Motorcycle_safety#Accident_rates ) Some of that is due to unsafe behavior by riders, but the fact is that a metal cage protects you better than a metal horse. Also, losing tire grip in a car means you slide. Losing grip on a bike means you fall.

comment by rwallace · 2010-01-22T06:06:11.638Z · LW(p) · GW(p)

My actual reaction in the scenario you describe would be to say "Piss off" before I turned around.

But cryonics is a wash as far as taking risks goes. First, nobody is sure it will work, only that it gives you better odds than burial or cremation. Second, even if you were sure it would work, becoming a fireman looks like a better deal -- die in the line of duty and be immortal anyway. Granted, something might happen to destroy your brain instantly, but there's no reason to believe that's more likely than the scenario where you live to be old and your brain disintegrates cell by cell while the doctors prolong your agony as far as they can.

Replies from: arundelo, rwallace
comment by arundelo · 2010-01-22T14:37:23.503Z · LW(p) · GW(p)

"I don't want to achieve immortality through [dying while rushing into a burning building]; I want to achieve immortality through not dying."

comment by rwallace · 2010-01-22T12:28:56.077Z · LW(p) · GW(p)

I'm curious, what's the flaw in my logic that the downvoters are seeing? Or is there some other reason for the downvotes?

Replies from: AdeleneDawner, JGWeissman, Vladimir_Gritsenko
comment by AdeleneDawner · 2010-01-22T14:26:42.526Z · LW(p) · GW(p)

The only logic flaw I see is that dying in a fire doesn't seem conducive to having a well-preserved brain - being on fire is sure to cause some damage, and as I understand it buildings that are on fire are prone to collapsing (*squish*). (There is an upside: If cryo was common, there'd likely be a cryo team on standby for casualties during a fire that was being fought. That wasn't obvious to me when I first thought about it, though.)

comment by JGWeissman · 2010-01-22T22:28:07.717Z · LW(p) · GW(p)

Are you serious? You conflated the fame of a firefighter who dies in the line of duty (which doesn't even last very long) with the immortality of actually living forever.

Replies from: rwallace
comment by rwallace · 2010-01-22T23:53:41.117Z · LW(p) · GW(p)

Ah! Thanks for the clarification -- I don't know why people thought I was talking about fame, but given that they did, that would certainly account for the down votes!

What I mean is that in most cases where you die in the line of duty, your body will be recoverable and brain preservable. Yes, there are ways for this to not happen -- but there are also ways for it to not happen when you die of old age. Any claim that cryonics makes taking hazardous jobs irrational from a self-preservation viewpoint would have to provide some basis for believing the latter to have better odds than the former.

Replies from: JGWeissman
comment by JGWeissman · 2010-01-23T00:10:44.147Z · LW(p) · GW(p)

Ah, your original comment makes more sense with that explanation.

I had originally interpreted your statement

But cryonics is a wash as far as taking risks goes.

as meaning that the risks/costs and rewards of cryonics was a wash, and with that framing, I misinterpreted the rest of it.

comment by Vladimir_Gritsenko · 2010-01-22T21:51:39.856Z · LW(p) · GW(p)

First, the only certainties in life are death and taxes. Cryonics aside, we should talk in probabilities, not certainties, and this is true of pretty much everything, including god, heliocentrism, etc.

Second, cryonics may have a small chance of succeeding - say, 1% (number pulled out of thin air) - but that's still enormously better than the alternative 0% chance of being revived after dying in any other way. Dying in the line of duty or after great accomplishment is similar to leaving a huge estate behind - it'll help somebody, just not you.

Third, re senile dementia, there is the possibility of committing suicide and undergoing cryonics. (Terry Pratchett spoke of a possible assisted suicide, although I see no indication he considered cryonics.)

If cryonics feels like a wash, that's a problem with our emotions. The math is pretty solid.

Replies from: bgrah449, pdf23ds
comment by bgrah449 · 2010-01-22T21:54:30.482Z · LW(p) · GW(p)

Cryonics aside, we should talk in probabilities, not certainties, and this is true of pretty much everything, including god, heliocentrism, etc. Second, cryonics may have a small chance of succeeding - say, 1% (number pulled out of thin air) - but that's still enormously better than the alternative 0% chance of being revived after dying in any other way.

Did these two sentences' adjacency stick out to anybody else?

Replies from: RobinZ, Vladimir_Gritsenko
comment by RobinZ · 2010-01-22T22:04:03.579Z · LW(p) · GW(p)

Good eyes! And it drills down to the essential problem with the but-it's-a-chance argument for cryonics: is it enough of a chance relative to the alternatives to be worth the cost?

comment by Vladimir_Gritsenko · 2010-01-22T22:06:43.400Z · LW(p) · GW(p)

Pardon me, now I'm the one feeling perplexed: where did I screw up?

Replies from: RobinZ, Zack_M_Davis, bgrah449
comment by RobinZ · 2010-01-22T22:10:20.248Z · LW(p) · GW(p)

0% is a certainty.

comment by bgrah449 · 2010-01-22T22:10:49.314Z · LW(p) · GW(p)

Expressing certainty ("0% chance of being revived after dying in any other way").

Replies from: Vladimir_Gritsenko, pdf23ds
comment by Vladimir_Gritsenko · 2010-01-22T22:34:46.812Z · LW(p) · GW(p)

You are strictly correct, but after brain disintegration, probability of revival is infinitesimal. You should have challenged me on the taxes bit instead :-)

Replies from: JGWeissman
comment by JGWeissman · 2010-01-22T22:45:32.060Z · LW(p) · GW(p)

If you represent likelihoods in the form of log odds, it is clear that this makes no sense. Probabilities of 0 or infinitesimal both are equivalent to having infinite evidence against a proposition. Infinitesimal is really the same as 0 in this context.
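
A small sketch (with assumed probabilities) of the log-odds point: as p shrinks toward zero the log-odds fall without bound, so calling the probability "infinitesimal" rather than zero still amounts to claiming effectively unlimited evidence.

```python
import math

def log_odds(p):
    """Log-odds of probability p, in bits."""
    return math.log2(p / (1 - p))

for p in (0.5, 0.01, 1e-9, 1e-100):
    print(p, round(log_odds(p), 1))   # 0.0, -6.6, -29.9, -332.2 bits
# As p -> 0 the log-odds go to minus infinity; a probability of exactly 0
# would correspond to infinite evidence against the proposition.
```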

Replies from: Vladimir_Gritsenko
comment by Vladimir_Gritsenko · 2010-01-22T22:54:49.520Z · LW(p) · GW(p)

I accept this correction as well. Let me rephrase: the probability, while being positive, is so small as to be on the order of being able to reverse the flow of time and sample the world state at arbitrary points.

This doesn't actually change the gist of my argument, but does remind me to double-check myself for nitpicking possibilities...

Replies from: RobinZ
comment by RobinZ · 2010-01-23T01:02:40.171Z · LW(p) · GW(p)

I like epsilon and epsilon-squared to represent too-small-to-be-worth-calculating quantities.

comment by pdf23ds · 2010-01-23T12:31:44.457Z · LW(p) · GW(p)

I don't have a problem with that usage. 0% or 100% can be used as a figure of speech when the proper probability is 0+x or 1-x with x < 0.1^n for some suitably large n (say n > 4). If others are correct that probabilities that small or large don't really have much human meaning, getting x closer to 0 in casual conversation is pretty much pointless.

Of course, a "~0%" would be slightly better, if only to avoid the inevitable snarky rejoinder.

comment by pdf23ds · 2010-01-23T12:22:21.757Z · LW(p) · GW(p)

Third, re senile dementia, there is the possibility of committing suicide and undergoing cryonics.

http://lesswrong.com/lw/1mh/that_magical_click/1hp5

comment by PlaidX · 2010-01-22T08:21:05.094Z · LW(p) · GW(p)

http://angryflower.com/evilbu.gif

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2010-01-22T14:04:49.851Z · LW(p) · GW(p)

"What weighs seventy kilos?"

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2010-01-22T14:29:16.235Z · LW(p) · GW(p)

I remember that vividly! Though I tend to prefer to quote the next line, much as I think "twenty seconds to comply" is less cool than "you now have fifteen seconds to comply"...

comment by MichaelGR · 2010-01-22T03:23:34.233Z · LW(p) · GW(p)

Guaranteed death places a limit on the value of my life to myself. Parents shield children with their bodies; Casey Jones happens more often. People run into burning buildings more often. (Suicide bombers happen more often, too, I realize.)

I'm not sure I interpret this the same way you do.

My understanding is that parents are willing to risk their lives for their children mostly because that's how we've been programmed by evolution by natural selection, not because we consciously or unconsciously feel that our death is putting a limit on the value of our lives. We could have the very same genes even if we became more or less immortal (say by curing aging), and the same actions would result.

Killing yourself for religious reasons is a whole other problem, but IMO it is more an example of valuing life (in a really misguided way) rather than feeling that the value of life is limited by a future death. By this I mean, people willing to kill themselves for religious reasons have usually been convinced that they aren't really killing themselves, but will rather keep living in some supernatural afterlife.

Replies from: bgrah449
comment by bgrah449 · 2010-01-22T15:49:32.047Z · LW(p) · GW(p)

We're trying to develop the means to overcome weaknesses evolution has left in us. As life expectancy grows much higher, there will be an incentive to overcome those instincts that cause us to risk our life, no matter what our current moral instincts say about the reason for doing so.

Replies from: MichaelGR
comment by MichaelGR · 2010-01-22T16:09:51.712Z · LW(p) · GW(p)

That's possible. But that wouldn't happen in a vacuum.

Fewer people might risk their lives for others, but at the same time, society would probably put a lot more resources into making everything much safer, so the overall effect would be that fewer people will end up in situations where someone else would have to risk their lives to save them (something that isn't reliable even now, which is why it's so heavily publicized when it happens).

Replies from: thomblake
comment by thomblake · 2010-01-22T16:42:48.349Z · LW(p) · GW(p)

making everything much safer

When I hear the word 'safer' I reach for my gun.

Replies from: RobinZ
comment by RobinZ · 2010-01-22T16:48:06.767Z · LW(p) · GW(p)

I assume "safer" means things like "Click It or Ticket" - what are you referring to?

Replies from: MichaelGR
comment by MichaelGR · 2010-01-22T17:03:31.547Z · LW(p) · GW(p)

Yeah, I was mostly thinking about things like safer cars (more safety testing and more stringent tests, better materials, next generation 'vehicle stability control' and laser cruise control used for emergency braking, mesh networking, 4-point seatbelts, etc), better design of sidewalks and bike paths, the hardening of buildings in earthquake and hurricane-prone areas, automatic monitoring systems on swimming pools to prevent accidental drowning, etc.

But really, what do people die from in stable rich countries? It's really the diseases of aging that we need to cure (see SENS.org). After that, your chances of dying from an accident are already very low as things stand, and there are still lots of low-hanging fruit ways to make things safer...

I don't think making us very safe in the near-term requires a Big Brother state keeping us in plastic bubbles.

And if we take care of aging, most people will probably live long enough without dying in an accident to either see Friendly AI or some form of brain backup technology further reduce risk, or they'll all die from an existential risk that we've failed to prevent.

comment by Technologos · 2010-01-22T03:00:37.919Z · LW(p) · GW(p)

Does anyone really expect that this population would not respond to its incentives to avoid more danger? Anecdotes aside; do you expect them to join the military with the same frequency, be firemen with the same frequency, to be doctors administering vaccinations in jungles with the same frequency?

Agreed--indeed, I suspect that one of the first steps to fundamentally altering the priorities of society may be the invention of methods to materially prolong life, such that it really does become an unspeakable tragedy to lose somebody permanently.

comment by whpearson · 2010-01-22T00:00:40.359Z · LW(p) · GW(p)

Humans risk their lives for less noble causes as well. Extreme sports and experimental aircraft being some examples. I have a romantic streak in me that says that yes, death is worse than life, but worrying overly about death also devalues life.

Should I pore over actuarial statistics and only select activities that do not increase my chance of death?

Replies from: bgrah449, scotherns
comment by bgrah449 · 2010-01-22T00:07:12.202Z · LW(p) · GW(p)

The question isn't should you; the question is whether you would, especially considering that people do it now.

Replies from: whpearson
comment by whpearson · 2010-01-22T00:24:19.336Z · LW(p) · GW(p)

As I said I don't like to worry about death, not that I find my death unpleasant to talk about, just that valuable brain cycles/space will be used doing so. And I'd much rather be thinking about how people can live well/efficiently/happily than obsess over extending my life. So I wouldn't.

In my question I was trying to gauge the activism of the community. I already have people trying to convince me to freeze myself, will they also be campaigning against mountain climbing/hiking in the wilderness?

ETA: I do worry about the destruction of the human species, but that has less impact on my life than worrying about death would.

Replies from: bgrah449
comment by bgrah449 · 2010-01-22T00:34:04.198Z · LW(p) · GW(p)

1) You're being recruited to sign up for cryonics because it makes the recruiters' own cryonics investment a) better and b) less weird. A large population of frozen people encourages more investment than a small number of frozen cranks.

2) Probably to the same extent that people discourage smoking and riding a bike without a helmet - subtly try to make their own safety precautions seem less timid by trying to label those who disregard them as stupid.

comment by scotherns · 2010-01-22T15:11:51.006Z · LW(p) · GW(p)

Surely you already take into account how dangerous various activities are before deciding to do them?

Everyone has different thresholds for how much risk they are willing to take. Anyone that does not take risk into account at all will die very rapidly.

Replies from: Morendil, whpearson
comment by Morendil · 2010-01-22T15:28:59.392Z · LW(p) · GW(p)

And anyone who obsesses over risk too much will have a life not worth living, which - compared to the risk of injury from mundane activities - is the greater risk.

“Life is not measured by the number of breaths we take but by the moments that take our breath away.” That's perhaps a slight exaggeration; a long life of small pleasures would compare favorably to a shorter life filled with ecstatic experiences, but the point is that a warm breathing body does not a life make.

Replies from: bgrah449
comment by bgrah449 · 2010-01-22T15:41:15.822Z · LW(p) · GW(p)

People's actions reveal that they do not measure life this way.

comment by whpearson · 2010-01-22T22:47:19.235Z · LW(p) · GW(p)

There is a difference between conscious thought and gut feeling. I'm quite happy to rely on my gut feeling for danger (as I get it for free), but do not want to promote it to conscious worrying in my everyday life.

Replies from: scotherns, bgrah449
comment by scotherns · 2010-01-25T11:03:14.063Z · LW(p) · GW(p)

I'm kind of the opposite. My 'gut' feelings tend to rate most things as being dangerous, and I rely on my awareness of actual risk to be able to do pretty much anything.

I don't think I obsess over risk either - but that's maybe because I have been doing this all my life :-). I also don't think my life has not been worth living - quite the opposite, or I wouldn't have signed up for cryonics!

comment by bgrah449 · 2010-01-22T22:54:21.807Z · LW(p) · GW(p)

Your gut feeling is informed by what you consciously choose to read.

comment by jhuffman · 2010-01-26T20:56:31.238Z · LW(p) · GW(p)

An alternate scenario: Omega forms an army and conscripts three people into it, and orders them to kill you. Omega then hands you a knife, with which you can certainly dispatch the unarmed, untrained conscripts who obediently follow their commander's wishes (despite vague apprehensions about war and violence and a lack of any specific understanding of why they are to kill you).

Unfortunately, Omega is a very compelling commander and no surrender or retreat is possible. It's kill or be killed.

What do you do?

comment by Stuart_Armstrong · 2010-01-25T12:19:45.795Z · LW(p) · GW(p)

The debate already exists, for altruists who care about future generations as well: would you kill three people to stop an asteroid/global warming/monster of the week from killing more in future?

This is just the same question, made slightly more immediate and selfish by including yourself in that future.

comment by Psychohistorian · 2010-01-22T00:30:07.748Z · LW(p) · GW(p)

ETA: To the extent that your post is asking about personal behaviour, you perhaps should have made that point clear. You appear to be making a general point about morality, and your "kill three people" hypothetical appears to distract from your actual point, and is probably a large part of why you're getting downvoted, as it's rather antagonistic. I'll keep the rest of my comment intact, as I believe it to be generally relevant.

This would be more constructive were it not self-centered, i.e. if the question were, "I'll grant so-and-so 100,000 years of life, but you need to kill the next three people you see, and you will suffer no legal or reciprocal consequences for doing so." (I assume the no-jail-time part was part of your hypothetical.) This gets more at the utilitarian weight of lives, as opposed to individual selfishness.

I can have a perfectly consistent moral framework that is strictly selfish, i.e. "I am the ultimate important thing, and I should do anything that will provide me with a net benefit." Similarly, and less objectionably, "I should weight my utility somewhat more heavily than I weight that of others, because (A) I am at best making uninformed guesses as to what other people want, (B) if I don't look out for me, no one else will, and (C) other people have a term in their utility function describing 'me minding my own business.'"

Your question as phrased is thus more of a measure of the selfishness of one's value system than of the consistency of being pro-cryopreservation.

Replies from: bgrah449
comment by bgrah449 · 2010-01-22T00:35:38.627Z · LW(p) · GW(p)

I assumed rational readers would know that they are not immune to incentives that affect "other people."

comment by knb · 2010-01-21T23:16:53.149Z · LW(p) · GW(p)

You turn around and see someone. She says, "Wait! You shouldn't kill me because ... "

UTILITARIAN

She says, "Wait! You shouldn't kill me because I'm signed up for cryonics too! This means that the total utility change will be negative if you kill me and the other people!"

VIRTUE ETHICS

"Wait! You shouldn't kill me because selfishly murdering others for personal gain is not a characteristic of a virtuous man!"

DEONTOLOGICAL ETHICS

"Wati! You shouldn't kill me because it's against the rules! Against the Categorical Imperative! Against the Law! Against the Social Contract!"

Of course, if the guy is a sociopath, no ethical argument will work. But if the guy is a true sociopath, he'd kill you for a much smaller reward.

Replies from: Alicorn
comment by Alicorn · 2010-01-21T23:19:00.752Z · LW(p) · GW(p)

The utilitarian justification doesn't work because Omega said the victims aren't signed up for cryonics.

Replies from: knb
comment by knb · 2010-01-21T23:23:33.883Z · LW(p) · GW(p)

Thanks for pointing that out. Comment deleted.

Replies from: Blueberry
comment by Blueberry · 2010-01-21T23:41:44.310Z · LW(p) · GW(p)

I wish you'd kept the rest of that comment: the other justifications were good. There are other utilitarian justifications as well, based on the harm that murder does to society. (See Zachary_Kurtz's comment above.)

comment by Kevin · 2010-01-22T06:09:25.477Z · LW(p) · GW(p)

I don't see what there is to learn from this question.

comment by LucasSloan · 2010-01-21T22:51:06.354Z · LW(p) · GW(p)

If I kill the next three people, are they cryogenically preserved? Or is the next sentence implying an upper bound to the value of their life as opposed to contrasting with what would happen should you kill them?

comment by MatthewB · 2010-01-26T09:49:05.017Z · LW(p) · GW(p)

I can also tell you that the next three people you see, should you fail to kill them, will die childless and will never sign up for cryonics. There is a knife on the ground behind you."

So, if you fail to kill them, they wind up childless and without cryonics.

Does this mean that if you do kill them, they will get Cryonics and Children?