Hacking Quantum Immortality

post by CasioTheSane · 2012-05-08T07:06:06.748Z · LW · GW · Legacy · 17 comments

Quantum immortality sounds exactly like the mythical hell: living forever in perpetual agony, unable to die, alone in your own branch of existence, separate from everyone you ever knew.

What if we can hack quantum immortality to force continued good health, and the mutual survival of our loved ones in the same branch of the universe as us?

It seems like one would "simply" need a device which monitors your health with biosensors and, if anything goes out of range, instantly kills you in a manner with an extremely low probability of failure. All of your friends and family would wear a similar device, and the devices would be coupled such that if any one person becomes "slightly unhealthy," you all die instantly, keeping you all alive and healthy together.
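A minimal sketch of the coupling logic in Python, purely to make the proposal concrete. Everything here is invented for illustration: the sensor names, the "healthy" ranges, and the stubbed read_biosensors() standing in for hardware that doesn't exist.

```python
import random

# Hypothetical sketch of the coupled kill-switch logic. The sensor
# names, thresholds, and the stubbed sensor readout are all made up.
HEALTHY_RANGES = {
    "heart_rate": (50, 90),   # beats per minute
    "spo2": (95, 100),        # % blood oxygen saturation
}

def read_biosensors():
    """Stub standing in for biosensor hardware that doesn't exist yet."""
    return {"heart_rate": random.gauss(70, 8),
            "spo2": min(100.0, random.gauss(98, 1))}

def healthy(readings):
    """True only if every monitored value sits inside its healthy range."""
    return all(lo <= readings[k] <= hi for k, (lo, hi) in HEALTHY_RANGES.items())

class Person:
    def __init__(self, name):
        self.name = name
        self.alive = True

def tick(group):
    """One monitoring cycle. If anyone is out of range, every coupled
    device fires at once, so the only branches containing survivors are
    those where the whole group stayed healthy together."""
    if any(p.alive and not healthy(read_biosensors()) for p in group):
        for p in group:
            p.alive = False

group = [Person("you"), Person("friend"), Person("parent")]
tick(group)
print({p.name: p.alive for p in group})
```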

We nearly have the technology to build such a thing now. Would you install one in your own body? If not, why not?

 

 

Who wants to invest in my new biotech startup which promises to stop all disease and human suffering within the next decade? Just joking: there is a serious technical problem here that makes it considerably more difficult than it sounds. For such a device to work, the probability of its failure must be much, much less than the probability of your continued healthy survival. You also never get to test the design before you use it.
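To put rough numbers on that condition (all figures below are invented for illustration): what matters is what fraction of your surviving branches are actually healthy, and that fraction stays near 1 only if the device's miss rate sits many orders of magnitude below your body's failure rate.

```python
# Illustrative numbers only. Suppose in a given year you have a 1% chance
# of a health event the device should respond to, and the device misses
# (fails to fire on) such an event with probability p_miss.
p_ill = 0.01

for p_miss in (1e-2, 1e-6, 1e-12):
    p_alive_healthy = 1 - p_ill      # branch: stayed healthy, device stayed silent
    p_alive_ill = p_ill * p_miss     # branch: got ill, device failed to fire
    frac = p_alive_healthy / (p_alive_healthy + p_alive_ill)
    print(f"p_miss={p_miss:.0e}: healthy fraction of surviving branches = {frac}")
```

With a miss rate around 1e-12, essentially every surviving branch is a healthy one; with a miss rate comparable to the illness rate, the device barely helps.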

17 comments

Comments sorted by top scores.

comment by Thomas · 2012-05-08T08:14:02.543Z · LW(p) · GW(p)

If you wish to play with so-called QI, make sure it is a real thing.

Instead of committing suicide, just swallow a big sleeping pill every time a Geiger counter ticks. Say it has a 50% probability of ticking in any 5-minute period. If you are still awake hours after you have started this experiment, because there was no gamma ray emission in your branch, then it works! You can try it at home.

Replies from: CasioTheSane, David_Gerard
comment by CasioTheSane · 2012-05-08T15:43:46.978Z · LW(p) · GW(p)

Yes, but after the experiment is over, the copies of me in the universes where I fell asleep would also wake up, and the probability of remembering that I stayed awake through the entire experiment will always be 0.5^(minutes/5).

You actually have to kill yourself in the branches where the Geiger counter ticks to make observations about the validity of QI that persist afterwards, which you can then use to inform future decisions.
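To put numbers on it (just evaluating the formula above):

```python
# Probability that a post-experiment copy remembers staying awake the
# whole time, using the 0.5**(minutes/5) formula from the comment above.
for minutes in (5, 30, 60, 120):
    print(f"{minutes:>3} min: {0.5 ** (minutes / 5):.2e}")
# An hour in, the remembering copies are already only ~1 in 4000.
```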

Replies from: dlthomas
comment by dlthomas · 2012-05-08T18:40:31.473Z · LW(p) · GW(p)

But note that no one else can do this experiment for you.

Replies from: CasioTheSane
comment by CasioTheSane · 2012-05-08T22:07:06.737Z · LW(p) · GW(p)

I think I'm actually going to try it... it should be fun during the process; too bad I won't remember the results afterwards.

comment by David_Gerard · 2012-05-08T11:23:49.021Z · LW(p) · GW(p)

Indeed. Does the standard philosophy literature on quantum immortality (don't tell me there isn't any) point out that, if it's real, there is a you who doesn't go to sleep at night, and will never go to sleep again? To me this seems of qualitatively less import than quantum immortality, but I can't see how it would actually work differently.

Replies from: CasioTheSane
comment by CasioTheSane · 2012-05-08T15:48:19.212Z · LW(p) · GW(p)

Yes, if QI is real, every night there's a copy of you which spontaneously develops something like fatal familial insomnia and never goes to sleep again... however, you would be unable to observe this without killing yourself in all the universe branches where you sleep soundly. Don't try it at home.

comment by Manfred · 2012-05-08T08:53:54.572Z · LW(p) · GW(p)

> Quantum immortality sounds exactly like the mythical hell

And is about as real.

The idea of quantum suicide is that we should act as if we won't die, because there will be some future "us"es who don't die. But this argument is wrong because the person who makes the decisions isn't one of those future people - it's the present person, who is completely entitled to not want to die.

We can demonstrate that this fails even classically. Imagine a pill that, when taken, causes the taker to believe that taking the pill is a good idea. A rationalization drug. And suppose that this pill also causes violent diarrhea. Is taking this pill a good idea? The person who takes the pill would certainly say so! If one is to avoid this sort of entirely classical silliness, one has to reject appeals to what future people will think as a basis for deciding, and instead just make decisions according to the present person's preferences.

One response I've seen to this was "dying is not like a pill that modifies your judgement" - the trouble with this is that as modifying the output of your brain goes, putting a big hole through it is a highly effective example. Another possible response is that the present person is not "rationally allowed" to not want to die, for some philosophical reason. But that position is inconsistent with all our shiny theoretical models of what "rational choice" means, not to mention human behavior.

Ultimately, understanding quantum suicide boils down to understanding the difference between future utility and expected utility.

Replies from: dxu, AndHisHorse
comment by dxu · 2015-04-21T02:50:54.327Z · LW(p) · GW(p)

Or you could simply consider all of those versions of you with holes in their brains sufficiently different from yourself as to be no longer considered "you", making it so that you only care about the "you" who lives.

The above seems to me like a fairly defensible position. After all, as modifying the output of your brain goes, putting a big hole through it is a highly effective example, as you yourself noted.

comment by AndHisHorse · 2013-11-16T11:17:14.473Z · LW(p) · GW(p)

It seems to me as if you aren't arguing that the phenomenon to which we give the name "quantum immortality" doesn't exist, but rather that it is undesirable to the point that we have clearly misnamed it. In which case, I would suggest that what we really need to ask ourselves is why we don't want to die, and whether or not this aversive impulse would apply to the chain of events we describe as "quantum immortality."

I don't have an answer to either question.

Replies from: Manfred
comment by Manfred · 2013-11-16T16:41:05.562Z · LW(p) · GW(p)

Well, the historical reason for us not wanting to jump off a cliff is that you don't have many offspring (or amplitude of having offspring) if you go around jumping off cliffs. So that's a neat anthropic demonstration that we come from a high-amplitude world.

Replies from: AndHisHorse
comment by AndHisHorse · 2013-11-16T17:05:02.843Z · LW(p) · GW(p)

I believe that I've misphrased my question. The question I should have asked is: what makes death undesirable to us as we are now, regardless of why (causally, historically) we have this preference, and does this phenomenon of being aware of only those branches which we can observe qualify as undesirable or not?

Replies from: Manfred
comment by Manfred · 2013-11-16T19:18:44.005Z · LW(p) · GW(p)

You may as well ask why I don't like being hit on the head with a baseball bat.

I note that you're using the word "undesirable" as a property of dying, or of the anthropic principle, rather than a fact about human preferences. Not sure if this is linguistic convenience or the mind projection fallacy.

Replies from: AndHisHorse
comment by AndHisHorse · 2013-11-16T19:59:16.198Z · LW(p) · GW(p)

But there is a question to dissect, even in this most basic of preferences. For example, I suspect that a large portion of your dislike for the prospect of being hit in the head with a baseball bat is the pain. Your objection, I assume, is not to the fact that wood (or aluminum, or whatever) will be touching your head, or to the proximity of an artificial object to your brain.

There is an aspect of the experience - being hit in the head with a baseball bat - which makes it undesirable where similar experiences are not. For example, I have far fewer objections to being hit in the head by an inflatable baseball bat, particularly if I am forewarned and the situation is appropriate.

Similarly, I would guess that persons like us (I clarify to distinguish rationalists from persons who may attach more superstition to death; we do not fear, for example, having our hearts weighed against a feather by the Egyptian deity Osiris) would find particular parts of the experience of dying more undesirable than others.

In this case, I find it valuable to clarify: do we wish to avoid the experience of dying (which occurs with increasingly high probability in one's lifetime, even assuming quantum immortality), the limitations on the amount of fun we can have in a finite lifespan, or some combination of these and other properties of dying?

And yes, I did attach the adjective "undesirable" to death as a matter of linguistic convention. Thank you for pointing that out.

comment by Sly · 2012-05-08T07:08:08.090Z · LW(p) · GW(p)

I would not because I am not a big fan of suicide.

comment by IlyaShpitser · 2012-05-08T21:19:58.010Z · LW(p) · GW(p)

Your proposed solution does not work. As long as there is a chance of failure of this device (and any device will have such a chance), you will not avoid the problems assumed by your premises.

Replies from: CasioTheSane
comment by CasioTheSane · 2012-05-11T04:02:03.034Z · LW(p) · GW(p)

I disagree: the failure rate of the device doesn't have to be zero for this to work; it just has to be many orders of magnitude lower than the natural failure rate of your body, such that you're vastly more likely to keep living in good health than to experience a device failure.

It's a difficult, but not inherently impossible, engineering problem. Only "false negative" failures (where you're in poor health but the device fails to kill you) count here, so making the device err on the side of killing you for no reason during a suspected failure would actually be "safer" from a QI perspective.
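As a toy illustration of that "err on the side of firing" policy (made-up names and thresholds; a sketch of the asymmetry, not a real safety design):

```python
# Under the QI premise a false positive (firing on a healthy person)
# only prunes branches you won't experience, while a false negative
# (a missed illness, or a silent device fault) strands you in a bad
# branch. So every ambiguous state resolves toward firing.
def should_trigger(self_test_ok: bool, sensors_readable: bool,
                   p_healthy: float) -> bool:
    if not self_test_ok:
        return True  # suspected device failure: fire rather than risk a miss
    if not sensors_readable:
        return True  # can't verify health: treat as unhealthy
    return p_healthy < 0.999999  # demand near-certainty of good health

# Even a healthy-looking reading fires unless it's near-certain:
print(should_trigger(self_test_ok=True, sensors_readable=True, p_healthy=0.99))  # True
```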

comment by vi21maobk9vp · 2012-05-08T19:18:04.478Z · LW(p) · GW(p)

Well, with the "all friends and close family" clause, this will probably have cascading effects...

It looks like even people who consider MWI the best model would still prefer to survive in all the branches where it is possible and more or less comfortable.

If someone creates such a device, it will probably be set with truly crippling damage as the threshold, and it will lack synchronisation. That way it will gain its market - but the users won't consider the branch where they survive as relevant... QI stories will be used simply to paint accusations of assisting suicide as religious intolerance.