Quantum immortality: Is decline of measure compensated by merging timelines?
post by avturchin · 2018-12-11T19:39:28.534Z · LW · GW · 8 comments
I wrote an article about quantum immortality, which I know is a controversial topic, and I would like to get comments on it. The interesting twist suggested in the article is the idea of a measure increase which could compensate for the declining measure in quantum immortality. (There are other topics in the article, such as the history of QM, its relation to multiverse immortality, the utility of cryonics, the impossibility of euthanasia, and the relation of QI to different decision theories.)
The standard argument against quantum immortality in MWI runs as follows. One should calculate the expected utility by multiplying the expected gain by one's measure of existence (roughly equal to one's share of the world's timelines). In that case, if someone expects to win 10,000 USD in a quantum suicide lottery with a 0.01 chance of survival, her actual expected utility is 100 USD (ignoring the negative utility of death). So the rule of thumb is that the measure declines very quickly over a series of quantum suicide experiments, and thus this improbable timeline should be ignored. The following equation could be used: U(total) = mU, where m is the measure and U is the expected win in the lottery.
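A minimal sketch of this bookkeeping, using the numbers above (the function name is just for illustration):

```python
def measure_weighted_utility(measure: float, payoff: float) -> float:
    """U(total) = m * U: weight the payoff by the measure that survives."""
    return measure * payoff

# Quantum suicide lottery: a 10,000 USD prize, but only 0.01 of the measure survives.
print(measure_weighted_utility(0.01, 10_000))  # -> 100.0
```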
However, if everything possible exists in the multiverse, there are many pseudo-copies of me which differ from me by only a few bits: for example, they have a different phone number or a different random childhood memory. The difference is small, but just large enough that I do not regard them as my copies.
Imagine that this differing childhood memory is 1 kb in size (when compressed). Now, one morning, I and all my pseudo-copies forget this memory, and we all become exactly identical copies. In some sense, our timelines have merged. This could be interpreted as a jump in my measure by a factor of about 2^1024 ≈ 10^308 (treating 1 kb as 1024 bits). If I use the equation U(total) = mU, I get an extreme jump in my utility: for example, if I have 100 USD and my measure has increased trillions of trillions of times, I supposedly get the same utility as if I had become a multi-trillionaire.
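The same arithmetic as a rough sketch (treating the 1 kb memory as 1024 bits, as above; all numbers are illustrative):

```python
forgotten_bits = 1024                    # a 1 kb (compressed) childhood memory, read as 1024 bits
measure_factor = 2 ** forgotten_bits     # pseudo-copies that collapse into one exact copy

base_utility = 100                       # USD I actually hold
naive_total = measure_factor * base_utility   # U(total) = m * U

print(f"measure factor ~ 10^{len(str(measure_factor)) - 1}")  # roughly 10^308
print(naive_total)                       # an absurdly large "utility" for the same 100 USD
```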
Following this absurd conclusion, I could spend the evening hitting my head with a stone, losing more and more memories and thereby gaining higher and higher measure. This is obviously absurd behaviour for a human being, but it could be a failure mode for an AI which uses this equation to calculate expected utility.
In the case of the quantum suicide experiment, I can supplement the bomb, which kills me with 0.5 probability, with a laser which (if I survive) kills just one neuron in my brain; let us assume this is equivalent to forgetting 1 bit of information. In that case, the quantum suicide cuts my measure in half, but forgetting one bit doubles it, so the two effects cancel. Obviously, if I play the game for too long, the laser will damage my brain, but brain cells die so often in an aging brain anyway (millions per day) that the effect will be completely unobservable.
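A toy bookkeeping of the bomb-plus-laser game (assuming, as above, that forgetting 1 bit exactly doubles the measure):

```python
# Each round: the bomb halves the surviving measure, forgetting one bit doubles it back.
measure = 1.0
neurons_lost = 0

for _ in range(20):          # play 20 rounds of the modified quantum suicide game
    measure *= 0.5           # survival keeps half of the original measure
    measure *= 2.0           # one forgotten bit merges timelines, doubling the measure
    neurons_lost += 1        # the laser's cumulative damage

print(measure)               # 1.0 -- the measure never declines
print(neurons_lost)          # 20  -- but the brain damage quietly accumulates
```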
BTW, Pereira suggested a similar idea as an anthropic argument against the existence of any superintelligence: https://arxiv.org/abs/1705.03078
8 comments
comment by TheWakalix · 2018-12-11T23:02:24.174Z · LW(p) · GW(p)
There is a quantum mechanical property which you may not be aware of. It is incredibly hard, if not impossible, to cause two worlds to merge. (Clarification: It may be possible when dealing with microscopic superposed systems, but I suspect it would take a god to merge two worlds which have varied as much as you describe.) There is regular merging of "worlds", but that occurs on the quantum level between "worlds" that don't differ macroscopically.
This is because any macroscopic difference between two worlds is sufficient for them to not be the same, and it's not feasible to put everything back the way it was.
However, this does not apply to merging people. I suspect that most theories of personhood that allow this are not useful, though. (Why are the two near-copies not the same person, but you remain the same person after losing the small memory? That's a strange definition of personhood.) That is, you can define personhood any way you like, and you can hack this - for example, to remove "yourself" from bad worlds. (Simply define any version of you in a bad world to not be a person, or to be a different person.) But that doesn't mean that you can actually expect the world to suddenly become good. (Or you can, but at that point everyone starts ignoring you.)
Replies from: avturchin
↑ comment by avturchin · 2018-12-11T23:42:39.194Z · LW(p) · GW(p)
You are right - I didn't mean the merging of quantum worlds, but the merging of personhoods, which could happen even in a classical but infinitely large universe. I wanted to show that assuming the possibility of such a merger has absurd consequences. However, this absurdity also applies to other calculations where the "measure of an observer" changes, as happens in one of the important objections to quantum immortality.
In other words, we can't kill the quantum immortality idea by saying "the measure will decline to infinitesimally small values", since in fact the measure could even grow if we calculate it properly.
What could be done to resolve this conundrum? We could ignore changes of absolute measure and look only at relative measure, that is, at the ratio between the shares of the different outcomes in which the observer is alive. For example, if a quantum suicide thought experiment has 3 outcomes: a) non-existence with probability 0.9, b) winning 1,000 USD with probability 0.09, c) losing 100,000 USD (e.g. injury) with probability 0.01, we should ignore (a) and compare the expected utilities of (b) and (c), which are +90 and -1,000, so the game in this case has a negative utility of -910.
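A minimal sketch of this relative-measure calculation, using the numbers above (here the surviving branches are renormalized, which changes the magnitude but preserves the sign of the answer):

```python
outcomes = {
    "non_existence": (0.90, 0),         # ignored: no surviving observer
    "win_1000":      (0.09, 1_000),
    "lose_100000":   (0.01, -100_000),
}

alive = {k: v for k, v in outcomes.items() if k != "non_existence"}
alive_measure = sum(p for p, _ in alive.values())

# Expected utility over the surviving branches only.
relative_eu = sum((p / alive_measure) * u for p, u in alive.values())
print(relative_eu)   # -9100.0: still negative, so the game is a bad deal
```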
Another solution is to accept that we could change our measure in the world by forgetting things, and to build something like magic on top of it. (This idea was discussed on LW as the "flux universe" in a series of posts by a person who had panic attacks based on the idea that if he forgot parts of his personality before sleep, he would never be able to return to his initial self.) This "magic" might look like the following: (1) a person learns that he has a rare deadly disease; (2) he meditates and removes from his mind all clues about this fact; (3) his observer-moment becomes so simple that it is equal to zillions of observer-moments of other people; (4) if a human mind is only numbers, they "merge"; (5) now he returns to the waking state, but his probability of being in a world-line where he has the rare disease is equal to the incidence of the disease in the population, which is very low; (6) profit. But this seems even more absurd than quantum immortality.
Another possible solution is to get rid of the idea of identity in favor of some form of open individualism. The problem here is that most human preferences are formulated in a way that assumes the existence of some form of identity: "I want a cake".
comment by Pattern · 2018-12-11T20:51:09.708Z · LW(p) · GW(p)
Just because you don't remember something, doesn't mean it doesn't affect you. (Just because you don't know X about yourself, doesn't mean the sentence "you possess the property X" isn't true.)
Replies from: avturchin
↑ comment by avturchin · 2018-12-11T21:03:24.316Z · LW(p) · GW(p)
For the sake of argument, we assume that the information is erased well enough for the two pseudo-copies to become exact copies, and that this information is not immediately restored by interaction with the outside world. If we were digital minds, it would be simpler: I could generate a random string of data in advance and then delete it bit by bit.
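A toy illustration of this digital-mind case (the byte strings are only a stand-in for mind-states, not a model of real minds):

```python
import secrets

core_mind = b"shared memories and personality"
random_tag_a = secrets.token_bytes(128)   # 1024 bits of private, meaningless data
random_tag_b = secrets.token_bytes(128)

copy_a = core_mind + random_tag_a
copy_b = core_mind + random_tag_b
print(copy_a == copy_b)                   # False: they are only pseudo-copies

# "Forgetting" means deleting the random tag from both copies.
copy_a, copy_b = core_mind, core_mind
print(copy_a == copy_b)                   # True: now they are exact copies
```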
comment by Donald Hobson (donald-hobson) · 2018-12-11T23:10:26.499Z · LW(p) · GW(p)
One way to avoid the absurd conclusion is to say that it doesn't matter if another mind is you.
Suppose I have a utility function over the entire quantum wave function. This utility function is mostly focused on beings that are similar to myself. So I consider the alternate me, who differs only in phone number, getting £100 to be about as good as the original me getting £100. As far as my utility function goes, both versions of me would just be made worse off by forgetting the number.
Replies from: avturchin
↑ comment by avturchin · 2018-12-11T23:56:41.936Z · LW(p) · GW(p)
Most human preferences have an embedded idea of identity as the receiver of the benefit. However, the idea of "beings similar to me" assumes that there are beings which are not similar enough to me to be regarded as me, but which still have some of my traits. In other words, any definition of identity creates the possibility of "pseudo-copies": if we define identity more widely, the circle of pseudo-copies around it also becomes wider, but it will not disappear until we include all possible beings and end up with open individualism.
If we assume total "open individualism", it results in perfect effective altruism, and the utility function becomes something like "I prefer that the total wellbeing of all sentient beings in the universe increase by 100 pounds". However, this is not how most human preferences work, and there is also a risk of starvation.
So playing with the definition of identity will not help us escape the problem of the existence of pseudo-copies, which could become the "real me" if some information is erased from both of us.
comment by AprilSR · 2018-12-12T00:45:13.550Z · LW(p) · GW(p)
This assumes that there's some point where things sharply cut off between being me and not being me. I think it makes more sense for my utility function to care more about something the more similar it is to me. The existence of a single additional memory means pretty much nothing, and I still care a lot about most human minds. Something entirely alien I might not care about at all.
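A minimal sketch of such a graded utility function (the similarity metric and the trait sets are purely illustrative assumptions):

```python
def similarity(other_traits: set, my_traits: set) -> float:
    """Jaccard similarity between two sets of memories/traits (a stand-in metric)."""
    return len(other_traits & my_traits) / len(other_traits | my_traits)

def graded_utility(minds: list, my_traits: set) -> float:
    """Each mind's payoff, weighted by how similar that mind is to me."""
    return sum(similarity(traits, my_traits) * payoff for traits, payoff in minds)

me = {"memories", "values", "phone_number"}
minds = [
    ({"memories", "values", "phone_number"}, 100),   # an exact copy
    ({"memories", "values", "other_number"}, 100),   # a pseudo-copy with one differing detail
    ({"alien", "concepts"}, 100),                    # an entirely alien mind
]
print(graded_utility(minds, me))   # 150.0: the copy counts fully, the pseudo-copy partly, the alien not at all
```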
Even if this actually raises my utility, it does so by changing my utility function. Instead of helping the people I care about, it makes me care about different people.
Replies from: avturchin
↑ comment by avturchin · 2018-12-19T11:01:50.929Z · LW(p) · GW(p)
Even if the cut-off is not sharp, something which was previously completely not-me may become partly me after my simplification. Adding a diffuse personality border is a correct step towards proper calculations of "me" and "not-me", if we ever get to that level, but it doesn't change the idea of the post: simplifying the definition of "me" results in a wider "bell curve" and thus a larger share of all possible observers who count as me.