"Immortal But Damned to Hell on Earth"

post by Bound_up · 2015-05-29T19:55:33.681Z · LW · GW · Legacy · 20 comments

http://www.theatlantic.com/technology/archive/2015/05/immortal-but-damned-to-hell-on-earth/394160/

 

With such long periods of time in play (if we succeed), the improbable hellish scenarios which might befall us become increasingly probable.

With the probability of death never quite reaching 0, despite advanced science, death might yet be inevitable.

But the same applies to a hellish life in the meantime. And the longer the life, the more likely it is that the survivors will envy the dead. Is there any safety in this universe? What's the best we can do?

20 comments


comment by James_Miller · 2015-05-29T21:22:55.581Z · LW(p) · GW(p)

Imagine that for the last thousand years governments had the ability to send to eternal hell anyone whose body they controlled. How big would hell now be?

Edit: Imagine that it was possible to create a hell ICBM that would strike a large area sending everyone in it to eternal hell. Given this technology, I bet the U.S. and Russia would have numerous launch-on-warning hell ICBMs aimed at each other.

comment by advancedatheist · 2015-05-30T14:34:55.940Z · LW(p) · GW(p)

Actually this sounds worse than hell. In the traditional Christian hell, you know that your life has meaning and purpose as one of God's creatures, so at least you have a transcendent reason for your suffering.

By contrast, the Atlantic article describes a nihilistic hell where your experience has no redeeming features whatsoever.

comment by Unknowns · 2015-05-29T22:50:49.143Z · LW(p) · GW(p)

Robert Ettinger wrote a story in 1948 called "The Penultimate Trump" in which the main character, the first cryonicist, awakes seemingly triumphantly, only to be sent to Hell as punishment for his crimes.

comment by Lumifer · 2015-05-29T20:45:21.398Z · LW(p) · GW(p)

In one of Iain Banks' books, didn't some civilizations run Hells, that is, computer simulations into which they uploaded the sinners (from their point of view) and made eternal hell real for them?

Replies from: gwern
comment by gwern · 2015-05-29T21:18:34.338Z · LW(p) · GW(p)

https://en.wikipedia.org/wiki/Surface_Detail

See also Rebecca Roache's discussion of the topic: http://blog.practicalethics.ox.ac.uk/2013/08/enhanced-punishment-can-technology-make-life-sentences-longer/

(Kind of relevant because Ross Ulbricht was just sentenced to maximum charges - two concurrent life sentences and so on - and he's young enough with many favorable demographic traits, 31/white/well-educated/intelligent/fit, that he could easily live another 50 years to 2065, and who knows what will happen by then?)

Replies from: Ander
comment by Ander · 2015-05-29T22:05:55.837Z · LW(p) · GW(p)

Indeed, Surface Detail was an excellent book, one of the best Culture novels, imo.

Yes, technology which made immortality possible could also make torturing/punishing people forever possible, but this does not mean that death is good. Rather, it means that it's important for people to have empathy, and that we need to evolve away from retributive justice.

I usually find articles like this from the deathists annoying, and this wasn't an exception.

comment by RomeoStevens · 2015-05-30T09:27:22.134Z · LW(p) · GW(p)

If we get only one thing right, it should be the right to exit.

comment by shminux · 2015-05-30T01:25:17.449Z · LW(p) · GW(p)

What's the best we can do?

Learn to recognize Pascal's mugging (the scenario you are describing is an instance of it) and ignore it.

Replies from: None, Bound_up
comment by [deleted] · 2015-06-04T15:27:15.760Z · LW(p) · GW(p)

It's not just a Mugging. It's also a model that takes no account of agents trying to alter the chances of Hells. Sure, if Hell has a finite probability at any given time, then eventually it should happen, except that an agent is deliberately exerting continual optimization pressure to push that probability down over time.

P(Hell) exists, but its derivative over time is negative.
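A minimal numerical sketch of that point (the parameters here are purely illustrative, not anything from the comment): if the per-period chance of a hell outcome stays constant, the cumulative chance of it happening at least once tends toward 1, but if an agent keeps pushing the per-period chance down fast enough that the chances sum to something finite, the cumulative chance stays bounded below 1 forever.

```python
# Illustrative only: compare a constant per-period hazard with a decaying one.
# p0 and decay are made-up parameters; the period n probability is p0 * decay**n.

def cumulative_hell_probability(p0, decay, periods):
    """Probability that the hell outcome happens at least once over `periods`
    periods, when the probability in period n is p0 * decay**n."""
    survival = 1.0
    for n in range(periods):
        survival *= 1.0 - p0 * decay ** n  # probability of avoiding it every period so far
    return 1.0 - survival

# Constant risk: 1% per period, forever -> cumulative probability approaches 1.
print(cumulative_hell_probability(0.01, 1.0, 10_000))   # ~1.0
# Decaying risk: 1% now, halved each period -> cumulative probability stays ~2%.
print(cumulative_hell_probability(0.01, 0.5, 10_000))   # ~0.02
```

The underlying point is just that a convergent sum of per-period probabilities leaves the survival product bounded away from zero, so "nonzero probability at every time" does not by itself imply "inevitable eventually".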

comment by Bound_up · 2015-05-30T21:44:00.773Z · LW(p) · GW(p)

I'm familiar with Pascal's wager. Is mugging just its application for manipulative purposes?

For me, the convincing counter to Pascal is the recognition of an infinity of third possible explanations. There is no more reason to subscribe to one possible God in mindspace than to another in the face of no evidence.

But this situation is different. That counter doesn't work, and no others have come immediately to my mind. Do you know of one?

Replies from: shminux
comment by shminux · 2015-05-31T03:15:15.567Z · LW(p) · GW(p)

Pascal's mugging. You are doing a similar thing to yourself: focusing on one possible scary outcome to the neglect of other, equally probable ones, e.g. eternal bliss, or creating multiple clones of yourself, all enjoying an eternal happy life beyond your wildest dreams. The odds are tiny and unknown, but both risks and rewards are potentially huge. You have no frame of reference, no intuition and no brain capacity to make a rational choice in this case. So don't bother.

comment by artemium · 2015-06-01T22:14:55.954Z · LW(p) · GW(p)

I don't think that we should worry about this specific scenario. Any society advanced enough to develop mind uploading technology would have an excellent understanding of the brain, consciousness and the structure of thought. In these circumstances retributive punishment would seem to be totally useless, as they could just change properties of the perpetrator's brain to make him non-violent and eliminate the cause of any anti-social behaviour.

It might be a cultural thing though, as America seems to be quite obsessed with retribution. I absolutely refuse to believe any advanced society with mind uploading technology would be so petty as to use it in such a horrible way. At that point I expect they would treat bad behaviour as a software bug.

comment by gurugeorge · 2015-05-29T20:25:23.472Z · LW(p) · GW(p)

Isn't suicide always an option? When it comes to imagining immortality, I'm like Han Solo, but limits are conceivable and boredom might become insurmountable.

The real question is whether intelligence has a ceiling at all - if not, then even millions of years wouldn't be a problem.

Charlie Brooker's Black Mirror TV show played with the punishment idea - a mind uploaded to a cube experiencing subjectively hundreds of years in a virtual kitchen with a virtual garden, as punishment for murder (the murder was committed in the kitchen). In real time, the cube is just left on casually overnight by the "gaoler" for amusement. Hellish scenario.

(In another episode - or it might be the same one? - a version of the same kind of "punishment" - except just a featureless white space for a few years - is also used to "tame" a copy of a person's mind that's trained to be a boring virtual assistant for the person.)

Replies from: Bound_up, Lumifer
comment by Bound_up · 2015-05-29T20:31:05.192Z · LW(p) · GW(p)

The truly worrying scenarios are the ones which disallow escape of any kind, including suicide.

In an advanced society, anyone that wanted to do that could come up with a lot of ways to make it happen.

comment by Lumifer · 2015-05-29T20:43:40.773Z · LW(p) · GW(p)

Isn't suicide always an option?

Not if you're an upload.

Replies from: None, gurugeorge
comment by [deleted] · 2015-06-01T07:47:34.512Z · LW(p) · GW(p)

Perhaps it was discussed in more depth before I joined LW, but I think far, far more caution should be exercised before considering that an upload could ever be you.

If you can reduce personhood to information representable in bits, it also means each and every part of it is changeable and replaceable; thus there is no lasting essence of individualhood. (My former Buddhist training is really kicking in here, although it is possible I am looking it up in a cache.) Thus there is an infinite number of potential lumps of information, each of which is "more you" or "less you" depending on the difference. Basically, from the second you think a new thought or see something new, you are not the same you anymore.

Fortunately, our lack of infinite brain plasticity protects us now from every experience radically rewiring what we are; we have an illusion of unchanging selfhood more or less due to this lack of plasticity.

Uploads are infinitely plastic. Probably nobody will care about keeping you intact just for the sentimental and nostalgic value of being attached to your former, meat-based, unplastic self. You will be changed so radically that it will not be you in any meaningful sense. Also, there is no promise they will bother with uploading many meat minds. They may well figure that uploading one Really Nice Person and making a hundred billion copies delivers more utility.

And quite frankly, if we give up all our last shreds of illusory attachment to having souls, I am not sure we will care about utility anyway. I find it hard to care about whether a mere algorithm feels joy or suffering. After all, a mere algorithm can put the label "joy" or "suffering" on anything. For an algorithm, what is even the difference between "real" suffering and simply putting the word, the label, the referent "suffering" on certain things? I need the illusion of some scrap of a not-literally-supernatural-but-it-feels-so type of soul to know the difference between suffering and "suffering". A software function that basically goes print("OUCH! Augh! Nooo!...") does not actually suffer, and I think the "actualness" is where the supernaturalistic illusion is necessary.

Otherwise, we would just engineer out the ability to suffer from the upload, and/or find the function that takes experiences as an input, judges them, and emits joy as an output, and change it so that it always emits joy. We would from that point on not care about the world.

Replies from: Lumifer
comment by Lumifer · 2015-06-01T15:08:50.238Z · LW(p) · GW(p)

I am ignoring here all the problems with the concept of an upload (or an "em" in Hanson's terminology) -- that's a separate subject altogether.

For the record, I don't subscribe to the Hansonian view of a society of ems.

comment by gurugeorge · 2015-05-30T21:38:57.661Z · LW(p) · GW(p)

Oh, true for the "uploaded prisoner" scenario; I was just thinking of someone who'd deliberately uploaded themselves and wasn't restricted - clearly suicide would be possible for them.

But even for the "uploaded prisoner", given sufficient time it would be possible - there's no absolute impermeability to information anywhere, is there? And where there's information flow, control is surely ultimately possible? (The image that just popped into my head was something like training mice via flashing lights to gnaw the wires :) )

But that reminds me of the problem of trying to isolate an AI once built.

Replies from: Lumifer
comment by Lumifer · 2015-06-01T15:07:00.864Z · LW(p) · GW(p)

I was just thinking of someone who'd deliberately uploaded themselves and wasn't restricted - clearly suicide would be possible for them.

That is not self-evident to me at all. If you don't control the hardware (and the backups), how exactly would that work? As a parallel, imagine yourself as a sole mind, without a body. How will your sole mind kill itself?

And where there's information flow, control is surely ultimately possible?

Huh? Of course not. Information is information and control is control. Don't forget that as you accumulate information, so do your jailers.

comment by OrphanWilde · 2015-06-01T15:49:39.477Z · LW(p) · GW(p)

Is there a situation so terrible you could never adapt to it?