Quantum Immortality: A Perspective if AI Doomers are Probably Right
post by avturchin, James_Miller · 2024-11-07T16:06:08.106Z · LW · GW · 53 comments
Epistemic status: This text presents a thought experiment suggested by James Miller, along with Alexey Turchin's musings on possible solutions. While our thoughts are largely aligned (we both accept high chances of quantum immortality and the timeline selection principle), some ideas in Part 2 are more personal (e.g., Turchin's "transcendental advantage").
TL;DR: If quantum immortality is true, I will survive AI doom either by unlikely luck or because P(doom) is small. Knowing that I will survive anyway, can I bet that P(doom) is small? Can we now observe a "future anthropic shadow", such as a Taiwan war, which would slow AI development?
Part 1. Thought experiment
Guessing the digit of π via quantum immortality
Before sleeping, I try to guess the 10th digit of π, presently a mystery to me. After falling asleep, seven coins will be flipped. Assume quantum uncertainty affects how the coins land. I survive the night only if I correctly guess the 10th digit of π and/or all seven coins land heads, otherwise I will be killed in my sleep.
Convinced of quantum immortality, I am confident of surviving the night. How then should I expect my future self to likely rationalize this survival? According to simple Bayesian reasoning, the most probable cause for my survival would be accurately guessing the 10th digit of π, because I face a 10% chance of correctly guessing a digit of π but only a 1 in 128 chance of surviving because all seven coins land heads. However, this suggests that before sleeping, I ought to consider my guess regarding the 10th digit of π as probably correct, a concept that appears nonsensical.
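For concreteness, here is a minimal sketch of this naive Bayesian update, using only the 1/10 and 1/128 figures from the setup (illustrative only, not an endorsement of the conclusion):

```python
# Naive Bayesian update on survival: how likely is it that I survived
# because my pi-digit guess was correct? (1/10 and 1/128 as in the setup.)
p_guess = 1 / 10          # chance the guess of the 10th digit of pi is right
p_coins = 1 / 2 ** 7      # chance all seven quantum coins land heads

# I survive if the guess is right and/or all seven coins are heads.
p_survive = p_guess + (1 - p_guess) * p_coins

# Posterior probability that the guess was correct, given survival.
print(p_guess / p_survive)  # ~0.934, i.e. 128/137
```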
Quantum immortality should influence my belief about whether the future me will think all the coins came up heads because my consciousness is more likely to persist in the branches of the multiverse where this happens. But quantum immortality should not affect whether future me thinks I have already guessed the 10th digit of π correctly because the accuracy of my guess is consistent across the multiverse. By this chain of logic, if I am convinced future me will survive, I should think it far more likely I will survive because of the coin flips than guessing the 10th digit of π correctly.
Now imagine that I am an AI doomer who thinks there are two ways I will survive: (a) if I am wrong about AI existential risk, (b) if humanity gets extremely lucky. Furthermore, assume that (a) is not influenced by quantum luck, but (b) is. Imagine I estimate (a) at 10% and (b) at 1/128. If I am convinced of quantum immortality, I assume that (a) and/or (b) will occur. Which possibility should I consider more probable?
In short, we have three basic ways of handling the paradox:
(1) Give up on estimating probabilities (or just ignore QI).
(2) Bite the Bayesian Bullet and accept I can use quantum immortality to have a very accurate prediction of a digit of π, and
(3) “Anthropic future shadow”: future events can manifest themselves now if they help my future survival, e.g. the current development of life extension technologies. In AI Doom, future anthropic shadow can manifest itself, for example, as higher chances of war around Taiwan which presumably would slow AI development.
(The difference between 2 and 3 is that they give different interpretations, while giving similar predictions)
(3) should be seriously considered because (1) and (2) are so unsatisfactory.
Yudkowsky used [LW · GW] a similar experiment with guessing a digit of π to argue that anthropics in general is inconsistent.
Giving Up on Estimating Probabilities
Perhaps the notion of quantum immortality makes it impossible to estimate probabilities, and so AI doomers who believe in quantum immortality should not seek to estimate their likely cause of survival. But giving up has practical consequences compared to going with straightforward Bayesian probabilities.
Assume I am very status-conscious and would only publicly support AI doomers if it bolsters my future reputation for wisdom. If humanity survives just due to quantum luck, validating the AI doomers' accuracy, future generations may well perceive them as wise, as it will be apparent that we only survived because of amazing luck. On the other hand, if AI doomers are proven incorrect, they will be deemed foolish by posterity. Thus, demonstrating that simplistic Bayesian estimation often overreaches might persuade status-conscious individuals to endorse the AI doomer viewpoint.
This issue might also be relevant to investment strategies. Imagine that, if the AI doomers are right, AI will likely become much more powerful in the short run, whereas a primary reason the doomers could be mistaken is that AI might only reach human-level intelligence in several vital areas. If the doomers are correct yet humanity survives through quantum luck, a long-term investment in an AI-heavy company like Microsoft would yield the highest returns. Since I will only benefit from my long-term investment if humanity survives, giving up on estimating the likely causes of my survival would make it nearly impossible to develop an optimal investment strategy.
Part 2. Anthropic Reasoning
(The rest of the article is mostly the work of Alexey Turchin.)
1. It is actually a dilemma
Yudkowsky made a similar argument in his famous The Anthropic Trilemma [LW · GW] about manipulating future observed probabilities by creating many copies and later merging them. The trilemma is the following:
1. Bite the bullet: "You could say, 'There's no reason why you shouldn't be able to exert anthropic psychic powers.'"
2. You will be a winner in 5 seconds but a loser in 15.
3. No-continuing-personal identity: "That there's any meaningful sense in which I can anticipate waking up as myself tomorrow, rather than Britney Spears."
And two additional possibilities: "The fourth horn of the trilemma... would be denying that two copies of the same computation had any more weight of experience than one" and the use of the quantum measure procedure which does not allow cheating this way.
The solution of the paradox discussed here can also be presented as a trilemma:
(1) Give up on estimating probabilities.
(2) Bite the Bayesian Bullet and accept that I can use quantum immortality and suicide to have a very accurate prediction of a digit of π.
(3) "Anthropic future shadow": future events can manifest themselves now if they help my future survival, e.g., the current development of life extension technologies, but it works only for non-deterministic events.
There is an obvious similarity between our trilemma and Yudkowsky's. Actually, both trilemmas boil down to dilemmas: we either bite the bullet in some form or accept inconsistency in probabilities and/or personal identity, that is, we are paying a theoretical cost. In our case, (3) is also a type of accepting that something weird is happening, and all counterarguments boil down to (1).
In Yudkowsky's trilemma, he either bites the bullet that he can manipulate probabilities, OR accepts that either probability updating or consciousness continuity is wrong.
Biting the bullet in our case means seriously considering (3): that I observe the world where I have higher future survival chances.
2. War in Taiwan and “future anthropic shadow”
It was suggested (by gwern) that a possible war in Taiwan would cause hardware shortages which would pause AI development globally, and that US sanctions on China's AI tech increase the chances of such a war. Moreover, commentators suggested that the fact that we are in such a timeline can be explained by quantum immortality (they incorrectly use the wording "anthropic shadow", which was originally used by Bostrom to denote something like survivorship bias – that is, the underestimation of past risks – rather than a change in future probabilities caused by quantum immortality; let's call this modified idea "future anthropic shadow"):
‘This is also related to the concept of an anthropic shadow: if artificial intelligence was to cause human extinction but required a lot of computing power, you would be more likely to find yourself in world lines in which the necessary conditions for cheap computing are not met. In such world lines, crypto miners causing a GPU shortage, supply chain disruptions due to a pandemic, and a war between the United States and China over Taiwan in which important chip fabrication plants are destroyed are more likely to occur in world lines that are not wiped out. An anthropic shadow hides evidence in favour of catastrophic and existential risks by making observations more likely in worlds where such risks did not materialize, causing an underestimation of actual risk’ https://twitter.com/XiXiDu/status/1582440301716992000
We can define "future anthropic shadow" as finding evidence now that you will survive an impending future catastrophe via QI.
Note that there is a significant difference between "AI Doomers are wrong globally because alignment is easy" and this idea. An AI hardware shortage will happen only in some worlds: it is not a deterministic outcome.
However, hardware shortages are not completely equivalent to the random coins in our thought experiment: hardware shortages may already be happening, but the coins will be tossed only in the future. Thus, hardware shortages are more deterministic in the sense that we already know whether they are here (assuming for the sake of the argument that such shortages are real – it looks like NVIDIA will produce 3 million H100 GPUs in 2024 – but the risk of war in Taiwan remains high, and recent earthquake swarms indicate a high risk of natural disasters hindering AI progress).
In some sense, the future anthropic shadow is a reversed version of the Doomsday argument: instead of "I live in a world which will end soon", we have "I live in the world best suited for survival".
We may address this in future writings, but there is an important difference between the AI Doomers' thought experiment and the war in Taiwan – the first predicts a universal distribution, while the second is only about local circumstances. This type of difference appears many times in discussions about anthropics, such as those about SIA, the Presumptuous Philosopher, and the local vs. universal Doomsday argument.
3. The counterargument based on path-dependent identity
One can suggest the following counterargument to the proposed thought experiment:
• If my π-guess is wrong, my only chance to survive is getting all-heads.
• With 0.9 probability, my π-guess is wrong (but I will survive anyway), so I will survive because of all-heads.
• The low chances of all-heads don't matter, as quantum immortality will "increase" the probability to 1.
• So, I should expect my guess about π to be wrong and be more likely to survive because of random tossing of all-heads.
The argument is based on counting not the final states of the experiments, but the paths to the final states: if I am in the path with a wrong π digit, I will survive anyway, but by another mechanism.
Path dependence often appears in thought experiments about copies. Another example where the way of calculating copies affects the result: if 10 copies are created from me simultaneously, my chances of being each will be 0.1. But if each copy is created from a previous copy, then the last copy will have only 1 in 1024 chances of being me. The difference here is similar – we either follow paths or calculate probabilities by comparing the pools of resulting copies. The difference depends on the nature of personal identity – is it path-dependent (continuity as a carrier of identity) – or state-dependent?
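A toy calculation of the two counting rules (a sketch; it assumes ten copy operations, which is what the 1/1024 figure above corresponds to):

```python
# Two ways of assigning "which copy am I?" probabilities, as discussed above.
n = 10

# State-dependent counting: all copies exist at once and are weighted equally.
state_prob_last_copy = 1 / n          # 0.1

# Path-dependent counting: each copy is made from the previous one, and at
# every split "I" continue down either side with probability 1/2, so reaching
# the last copy requires taking the "new copy" branch n times in a row.
path_prob_last_copy = (1 / 2) ** n    # 1/1024
print(state_prob_last_copy, path_prob_last_copy)
```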
Note that quantum immortality based on MWI is path-dependent, but big-world immortality based on chaotic inflation is state-dependent. Calculating probabilities in big-world immortality is more difficult as we don't know the distribution of all possible worlds, including simulations and non-exact copies. A deeper answer here would require an understanding of the relationship of continuity, qualia, and identity, which is a difficult question outside the scope of this paper.
In this thought experiment, we get different probabilities depending on the order in which we compute anthropic effects, which is a rather typical situation for anthropic paradoxes – e.g., Sleeping Beauty.
In other words:
- From the outside point of view: among the copies that survive, the overwhelming majority (about 93%, i.e., 128/137) survive because they guessed the π-digit correctly (a numerical sketch follows below).
- From my point of view: there is only a 1 in 10 chance of surviving by guessing π correctly; if I guess incorrectly, I am sure to survive because of the coins.
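A quick Monte Carlo sketch of that outside-view counting (illustrative only; the analytic value of the surviving fraction that guessed correctly is 128/137):

```python
import random

# Outside view: repeat the experiment many times and ask what fraction of
# the surviving copies owe their survival to a correct pi-digit guess.
trials = 1_000_000
survivors = correct_survivors = 0
for _ in range(trials):
    guess_correct = random.random() < 1 / 10
    all_heads = all(random.random() < 0.5 for _ in range(7))
    if guess_correct or all_heads:
        survivors += 1
        if guess_correct:
            correct_survivors += 1
print(correct_survivors / survivors)  # ~0.93, close to 128/137
```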
The Self-sampling assumption states that I am randomly selected from all my copies (in some reference class). If applied to survivors, it supports the outside view, but not an inside-path-dependent view. But Bostrom suggested the Strong SSA, in which not observers, but observer-moments are selected. SSSA is not path-dependent. Bostrom applied it to his "hybrid solution" in Sleeping Beauty. SSSA also creates strange anthropic effects – see Turchin's recent post "Magic by forgetting." [LW · GW]
However, abandoning SSSA also has a serious theoretical cost:
If observed probabilities have a hidden subjective dimension (because of path-dependency), all hell breaks loose. If we agree that probabilities of being a copy are distributed not in a state-dependent way, but in a path-dependent way, we agree that there is a 'hidden variable' in self-locating probabilities. This hidden variable does not play a role in our π experiment but appears in other thought experiments where the order of making copies is defined.
In other words, both views produce strange probability shifts: SSSA over future states provides the ability to guess a digit of π, and the path-dependent view gives strange probabilities based on the way copies are created.
An interesting question arises: Are path-dependent and state-dependent views similar to the SSA and SIA dichotomy? The state-dependent view clearly looks like SSA. SIA uses the mere fact of my existence as evidence (of a larger group), so there appears to be a similarity between SIA and path-dependent identity, which assumes an externally invisible "measure" of existence.
It is tempting to apply this line of reasoning to the Sleeping Beauty problem – in a nutshell, SB is about path-dependency: at the first step, two copies are created using a coin, and after that, the tails copy is split by choosing the day of the week (halfers). Or all three copies are created simultaneously (thirders).
Conclusion: In the state-dependent model, we get a paradoxical ability to predict the future, but this is a well-known feature of SSA: even the Doomsday Argument, which is based on SSA, predicts the future.
The hidden subjective (path-dependent) part of probability makes "future anthropic shadow" hypothetically possible. But we haven't proved it yet, as it is still not clear how measure could move back in time.
One way this could happen is if the subjects of selection are not observer-moments, but whole paths: in that case, "fatter" paths with more observers are more likely, and I should find myself in the observer path which has more observers in the future.
I call this view the "two-thirder position" in SB: in that case, I update on the fact that there are more tails-awakenings than heads-awakenings, but later do not update on the fact that today is Monday. I will write a separate post about this idea.
4. God incubator and logical anthropic shadow
Another difference between P(doom) and π-digit guessing is that across the whole universe there will be many π-guessing experiments and there will always be survivors, but in the case of "easy alignment" the claim applies to any civilization, and there are no regions of the multiverse with different results.
Surviving through "easy alignment" is different from surviving via guessing the correct digit of π, as the whole world history will be different in the easy-alignment world; for example, neural networks will be more effective than symbolic AI. Thus, my surviving variant will not be an exact copy of me in the worlds where I do not survive, as I will know what is going on in the AI field. But type-me, that is, my psychological sameness, will be the same, as my identity core is not affected by my knowledge of the news in the AI field. (This may not be true for a scientist who has devoted herself to a particular field of AI that has become part of her personality core, like neural nets for Hinton.) Here we want to say that quantum immortality works not only for exact copies but for type-copies too, when some known information is not used for self-identification, and this is not a problem for our thought experiment.
The problem with the π experiment is that in other regions of the universe there are worlds absolutely similar to ours, but the experiment is performed on another digit of π. Thus, there is a class of my type-copies who win in similar worlds even if I lose here. It is difficult to imagine a non-arbitrary variable that affects the distribution of my copies across all possible worlds – and AI alignment difficulty is one such variable (see more in the section on other x-risks).
This is similar to Bostrom's Incubator gedankenexperiment (the God incubator thought experiment), discussed by Perrira, in the sense that the number of copies is pre-defined but you just don't know how many: in this experiment, God creates some number of copies and nothing else exists anywhere, so I should not think about my copies in other variants of the experiment. God flips a coin and creates either 1 copy on heads or 1000 copies on tails. What should my estimate of the result of the coin toss be, based on the fact that I exist at all? It either remains one-half (as the fact of my existence doesn't provide any new information – the non-updating position) or is 1000/1001 (as I am more likely to be in the position where I am one of many copies – the updating position).
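A minimal sketch of the two answers in the incubator case (the updating position weights each side of the coin by the number of copies it creates):

```python
# God incubator: heads -> 1 copy of me, tails -> 1000 copies of me.
p_heads = p_tails = 0.5
copies_heads, copies_tails = 1, 1000

# Non-updating position: my existence carries no information about the coin.
p_tails_non_updating = p_tails                      # 0.5

# Updating position: weight each outcome by how many copies of me it contains.
p_tails_updating = (p_tails * copies_tails) / (
    p_heads * copies_heads + p_tails * copies_tails
)
print(p_tails_non_updating, p_tails_updating)       # 0.5 and 1000/1001
```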
Expecting a low a priori probability of AI doom based on QI is similar to the updating position in the God incubator thought experiment. It is a much stronger claim than just the future anthropic shadow, which acts "locally" and says that only in our timeline do I observe more chances to survive. In other words, the future anthropic shadow predicts only the random part of survival – the 7 coin tosses in our initial thought experiment, as if I had a premonition about how the coins will land. Observing increasing chances of war in Taiwan is an example.
If I survive because P(AI doom) is low universally, there is no need for coincidence-based anthropic shadow, like wars: alignment will be easy and/or AI will be inherently limited. Though there can be a logical anthropic shadow: I will observe that AI is producing diminishing returns or that some alignment method works better than expected. If I were Gary Marcus, I would say that this is what happens with neural nets and RLHF.
Note that both shadows may work together, if P(AI doom) is small but not zero.
5. Other universal x-risks similar to AI doom
There are several deterministic x-risk-related factors which affect the survival chances of any civilization (if they are bad, they will also help to explain the Fermi paradox, as they apply to all civilizations):
- Timing of AI relative to the timing of other disruptive technologies (likely bad if it is too long).
- General tendency of different x-risks to interact in a complex and chaotic manner. Chaos is bad for x-risk prevention.
- The general ability to prevent x-risks by a civilization and more generally, the ability to cooperate.
- Some more concrete but universal things: false vacuum decay, biological risk easiness.
If this reasoning holds, we should expect the universal AI doom probability to be low.
6. Transcendental advantage
Generalized QI and no free lunch
The idea of quantum immortality in a nutshell is that the personal history of me, the observer, is different from the average person's history – I will achieve immortality. But there is no free lunch here – it could be a bad quantum immortality, like eternal aging without the ability to die.
What we have suggested here could be called "generalized quantum immortality" – and it is even better news, at first glance, than normal QI. Generalized QI says that I am more likely to be born in a world in which life extension technologies will be successfully developed in my lifetime, so bad QI like eternal aging is unlikely. It is a "future anthropic shadow", but for immortality.
However, even generalized QI doesn't provide a free lunch, as it doesn't exclude s-risk worlds.
I am most likely to be born in the universe where life extension is possible
If we think that updating the probability that I correctly guessed π before going to sleep is the right line of reasoning, then all hypotheses about the nature of the universe which increase my survival chances must also be updated toward being true. For example, if I survive for 10,000 years, I shouldn't be surprised to have been born into a world conducive to my survival.
For example, there are two main theories of aging, and one of them makes life extension easier. This is the theory that aging is a program (and that this is a general evolutionary principle everywhere in the multiverse), in which case it will be much easier to stop aging for any type of organism just by finding the correct switch. Alternatively, aging may have many mechanisms, pre-synchronized by evolution, and in that case, fighting aging will be extremely difficult. (See more in Turchin's [No theory for old man].) Applying the same logic as for AI doom, we should conclude that the first theory is more likely, as then aging will be defeated sooner, and more likely during my lifetime.
An alternative view is that some local properties increase my personal chances of survival:
- I was born in a period of history when life extension technologies are likely to appear (this could be explained by confounders, like thinking about anthropics naturally coinciding with the development of life extension technologies).
- Or my personal life history already has elements which increase my chances of survival (interest in life extension, a cryocontract – but all this, again, can be confounders).
This includes not just beliefs but also unknown circumstances. This leads to a situation which I term 'transcendental advantage': if all unknown factors favor me, I should be in a highly advantageous position for extended survival. I should find myself in a world where life extension and mind uploading are imminent, where AI doomsday scenarios are false, and where I will eventually merge with an eternal AI. Some of these conditions may already be true.
Transcendental advantage: attractor in the space of measure
We can take one more step beyond generalized quantum immortality (QI). For this, we need to remind the reader of the idea of 'measure'. This concept originated in quantum mechanics, initially denoting something like blobs of probability or amplitude, but it ultimately settled into meaning the amount of existence of an observer, or 'reality fluid'.
The measure can be defined as follows: If there are two identical observers, but one has a higher measure, I am more likely to find myself to be the one with a higher measure. It is similar to the Ebborians described by Eliezer Yudkowsky – two-dimensional beings with different thicknesses: thickness here represents the measure.
If my timeline splits, my measure declines.
Now I can state the following: An observer who lives longer has a higher measure in time. Therefore, QI can be reformulated: I will have a future with the highest measure in time. However, we can drop 'in time' here, as the measure is not necessarily accumulated only in time. If there are several copies of me in the future, I am more likely to be the one with the highest level of reality fluid or measure, by definition.
This means that my personal life history has an attractor – a being with the highest possible measure among all possible Many-Worlds Interpretation (MWI) timelines. Who is it? Is it God? Not necessarily. Another idea is that I will merge with a future superintelligent AI which will also be able to cooperate between MWI branches and thus increase its measure – in a way described by Scott Alexander in The hour I first believed. In some theories, measure can grow if MWI timelines fail to split: If theories that consciousness causes wave function collapse are true (as proposed by David Chalmers), an observer may accumulate measure (e.g., by withholding the act of measurement and not splitting its timeline) – but this is purely speculative.
I call this idea transcendental advantage – the notion that the observer's fate will slowly but inevitably bend towards becoming god-like. I call it "transcendental" because it is only observable from the first-person perspective, not objectively. This may sound absurd, but it is similar to the anthropic principle, projected into the future. In the anthropic principle, the whole set of properties of the observable universe, including the existence of neutron stars, supernovas, and Jupiter, as well as the entire history of biological evolution and civilization, are necessary conditions for my appearance as an observer who can think about anthropics.
53 comments
Comments sorted by top scores.
comment by DaemonicSigil · 2024-11-17T21:08:08.031Z · LW(p) · GW(p)
Anthropic shadow isn't a real thing, check this post: https://www.lesswrong.com/posts/LGHuaLiq3F5NHQXXF/anthropically-blind-the-anthropic-shadow-is-reflectively [LW · GW]
Also, you should care about worlds proportional to the square of their amplitude.
↑ comment by Christopher King (christopher-king) · 2024-11-26T16:59:56.102Z · LW(p) · GW(p)
Also, you should care about worlds proportional to the square of their amplitude.
It's actually interesting to consider why this must be the case. Without it, I concede that maybe some sort of Quantum Anthropic Shadow could be true. I'm thinking it would lead to lots of wacky consequences.
↑ comment by avturchin · 2024-11-17T22:18:19.432Z · LW(p) · GW(p)
I know this post and have two problems with it: what they call "anthropic shadow" is not the proper term, as Bostrom defined anthropic shadow as the underestimation of past risks based on the fact of survival, in his article of the same name. But it's OK.
The more serious problem is that quantum immortality and angel immortality eventually merge: for example, if we survive 10 LHC failures because of QI, we most likely survive only in those timelines where some alien stops the LHC. So both QI and angel immortality can be true and support one another, and there is no contradiction.
↑ comment by Christopher King (christopher-king) · 2024-11-22T18:47:14.627Z · LW(p) · GW(p)
The more serious problem is that quantum immortality and angel immortality eventually merges
An interesting observation, but I don't see how that is a problem with Anthropically Blind? I do not assert anywhere that QI and anthropic angel are contradictory. Rather, I give QI as an example of an anthropic angel.
↑ comment by avturchin · 2024-11-22T19:18:38.495Z · LW(p) · GW(p)
I understood your argument as follows: anything which is an argument for QI can also be an argument for an alien saving us. Thus, nothing is evidence for QI.
However, the a priori probabilities of QI and the alien are not mutually independent. QI increases the chances of the alien with every round. We can't observe QI directly. But we will observe the alien, and this is what is predicted by QI.
↑ comment by Christopher King (christopher-king) · 2024-11-22T20:43:06.892Z · LW(p) · GW(p)
No, the argument is that the traditional (weak) evidence for anthropic shadow is instead evidence of anthropic angel. QI is an example of anthropic angel, not anthropic shadow.
So for example, a statistically implausible number of LHC failures would be evidence for some sort of QI and also other related anthropic angel hypotheses, and they don't need to be exclusive.
↑ comment by avturchin · 2024-11-22T21:09:40.929Z · LW(p) · GW(p)
Past LHC failures are just civilization-level QI. (BTW, there are real things like this related to the history of Earth's atmosphere, in which CO2 content was anti-correlated with the Sun's luminosity, which resulted in stable temperatures.) But it is not clear to me what other anthropic effects there are which are not QI – what do you mean here? Can you provide one more example?
↑ comment by Christopher King (christopher-king) · 2024-11-22T22:27:26.734Z · LW(p) · GW(p)
A universe with classical mechanics, except that when you die the universe gets resampled, would be anthropic angelic.
Beings who save you are also anthropic angelic. For example, the fact that you don't die while driving is because the engineers explicitly tried to minimize your chance of death. You can make inferences based on this. For example, even if you have never crashed, you can reason that during a crash you will endure less damage than other parts of the car, because the engineers wanted to save you more than they wanted to save the parts of the car.
↑ comment by avturchin · 2024-11-23T15:04:27.461Z · LW(p) · GW(p)
The first idea seems similar to Big World immortality: the concept that due to chaotic inflation, many copies of me exist somewhere, and some of them will not die in any situation. While the copies are the same, the worlds around them could be different, which opens other options for survival: in some worlds, aliens might exist who could save me. The simulation argument can also act as such an anthropic angel, as there will be simulations where I survive. So there can be different observation selection effects that ensure my survival, and it may be difficult to observationally distinguish between them.
Therefore, survival itself is not evidence of MWI, Big World, or simulation. Is that your point?
Regarding the car engineers situation, it is less clear. I know that cars are designed to be safe, so there is no surprise. Are you suggesting they are anthropic because we are more likely to be driving later in the car evolution timeline, when cars are safer?
↑ comment by Christopher King (christopher-king) · 2024-11-26T16:55:28.221Z · LW(p) · GW(p)
I suppose the main point you should draw from "Anthropic Blindness" to QI is that:
- Quantum Immortality is not a philosophical consequence of MWI, it is an empirical hypothesis with a very low prior (due to complexity).
- Death is not special. Assuming you have never gotten a Fedora up to this point, it is consistent to assume that "Quantum Fedoralessness" is true. That is, if you keep flipping a quantum coin that has a 50% chance of giving you a Fedora, the universe will only have you experience the path that doesn't give you the Fedora. Since you have never gotten a Fedora yet, you can't rule this hypothesis out. The silliness of this example demonstrates why we should likewise be skeptical of Quantum Immortality.
↑ comment by avturchin · 2024-11-26T19:52:38.156Z · LW(p) · GW(p)
It is not clear to me why you call QI
an empirical hypothesis with a very low prior
If MWI is true, there will be timelines where I survive any risk. This claim is factual and equivalent to MWI, and the only thing that prevents me from regarding it as immortality are questions related to decision theory. If MWI is true, QI has high a priori probability and low associated complexity.
The Fedora case has high complexity and no direct connection to MWI, hence a low a priori probability.
Now for the interesting part: QI becomes distinct from the Fedora case only when the chances are 1 in a trillion.
First example:
When 1000 people play Russian roulette and one survives (10 rounds at 0.5), they might think it's because of QI. (This probability is equivalent to surviving to 100 years old according to the Gompertz law.)
When 1000 people play Quantum Fedora (10 rounds at 0.5), one doesn't get a Fedora, and they think it's because they have a special anti-Fedora survival capability. In this case, it's obvious they're wrong, and I think this first example is what you're pointing to.
(I would note that even in this case, one has to update more for QI than for Fedora. In the Fedora case, there will be, say, 1023 copies of me with Fedora after 10 flips of a quantum coin versus 1 copy without Fedora. Thus, I am very unlikely to find myself without a Fedora. This boils down to difficult questions about SSA and SIA and observer selection. Or, in other words: can I treat myself as a random sample, or should I take the fact that I exist without a Fedora as axiomatic? This question arises often in the Doomsday argument, where I treat myself as a random sample despite knowing my date of birth.)
However, the situation is different if one person plays Russian roulette 30 times. In that case, externalization of the experiment becomes impossible: only 8 billion people live on Earth, and there are no known aliens. (This probability is equivalent to surviving to 140 years old according to the Gompertz law.) In this case, even if the entire Earth's population played Russian roulette, there would be only a 1 percent chance of survival, and the fact of surviving would be surprising. But if QI is true, it isn't surprising. That is, it's not surprising to survive to 100 years old, but surviving to 140 is.
Now if I play Fedora roulette 30 times and still have no Fedora, this can be true only in MWI. So if there's no Fedora after 30 rounds, I get evidence that MWI is true and thus QI is also true. But I am extremely unlikely to find myself in such a situation.
comment by Dagon · 2024-11-07T19:36:35.008Z · LW(p) · GW(p)
If quantum immortality is true
This is a big if. It may be true (though it also implies that events as unlikely as Boltzmann Brains are true as well), but it's not true in a way that has causal impact on my current predicted experiences. If so, then the VAST VAST MAJORITY of universes don't contain me in the first place, and the also-extreme majority of those that do will have me die.
Assume quantum uncertainty affects how the coins land. I survive the night only if I correctly guess the 10th digit of π and/or all seven coins land heads, otherwise I will be killed in my sleep.
In a literal experiment, where a human researcher kills you based on their observations of coins and calculation of pi, I don't think you should be confident of surviving the night. If you DO survive, you don't learn much about uncorrelated probabilities - there's a near-infinite number of worlds, and fewer and fewer of them will contain you.
I guess this is a variant of option (1) - Deny that QI is meaningful. You don't give up on probability - you can estimate a (1/2)^7 * 1/10 = 0.00078 chance of surviving.
↑ comment by avturchin · 2024-11-07T20:10:55.911Z · LW(p) · GW(p)
If QI is false, this must come at a theoretical cost.
Either:
- The universe is finite and there is no MWI
- Personal identity is based on some kind of destructible soul, and my copy is not me
- We use some variant of updateless decision theory, specifically designed to prevent this type of problem (which is a rather circular counterargument)
↑ comment by Dagon · 2024-11-07T22:00:19.240Z · LW(p) · GW(p)
I'm not sure why
- Universe is finite (or only countably infinite), and MWI is irrelevant (makes it larger, but doesn't change the cardinality). When you die, you die. There may or may not exist near-but-not-exact duplicates outside of current-you's lightcone.
is not one of your considerations. This seems most likely to me.
↑ comment by avturchin · 2024-11-07T22:55:21.094Z · LW(p) · GW(p)
A more interesting counterargument is "distribution shift." My next observer-moments have some probability distribution P of properties – representing what I am most likely to do in the next moment. If I die, and MWI is false, but chaotic inflation is true, then there are many minds similar to me and to my next observer-moments everywhere in the multiverse. However, they have a distribution of properties P2 – representing what they are more likely to observe. And maybe P ≠ P2. Or maybe we can prove that P = P2 based on typicality.
↑ comment by avturchin · 2024-11-07T22:38:41.400Z · LW(p) · GW(p)
If there is no identity substance, then copies even outside the light cone matter. And even non-exact copies matter if the difference is almost unobservable. So I think that countable infinity is enough.
↑ comment by Dagon · 2024-11-07T22:53:07.660Z · LW(p) · GW(p)
I suspect we don't agree on what it means for something to matter. If outside the causal/observable cone (add dimensions to cover MWI if you like), the difference or similarity is by definition not observable.
And the distinction between "imaginary" and "real, but fully causally disconnected" is itself imaginary.
There is no identity substance, and only experience-reachable things matter. All agency and observation is embedded, there is no viewpoint from outside.
↑ comment by avturchin · 2024-11-07T23:02:43.255Z · LW(p) · GW(p)
The problem with observables here is that there is another copy of me in another light cone, which has the same observables. So we can't say that another light cone is unobservable - I am already there and observing it. This is a paradoxical property of big world immortality: it requires actually existing but causally disconnected copies, which contradicts some definitions of actuality.
BTW, can you reply below to Vladimir Nesov, who seems to think that the first-person perspective is an illusion and only the third-person perspective is real?
↑ comment by Vladimir_Nesov · 2024-11-08T00:36:58.484Z · LW(p) · GW(p)
who seems to think that first-person perspective is illusion and only third-person perspective is real
The taste of cheese is quite real, it's just not a technical consideration relevant for chip design. Concepts worth noticing are usually meaningful in some way, but most of them are unclear and don't offer a technical foothold in any given endeavor.
↑ comment by green_leaf · 2024-11-13T11:17:02.394Z · LW(p) · GW(p)
When you die, you die.
The interesting part of QI is that the split happens at the moment of your death. So the state-machine-which-is-you continues being instantiated in at least one world. The idea of your consciousness surviving a quantum suicide doesn't rely on it continuing in implementations of similar state machines, merely in the causal descendant of the state machine which you already inhabit.
It's like your brain being duplicated, but those other copies are never woken up and are instantly killed. Only one copy is woken up. Which guarantees that prior to falling asleep, you can be confident you will wake up as that one specific copy.
There is no alternative to this, unless personal identity requires something other than the continuity of pattern.
↑ comment by avturchin · 2024-11-13T12:19:33.836Z · LW(p) · GW(p)
In big world immortality, there are causally disconnected copies which survive in very remote regions of the universe. But if identity requires not continuity but only similarity of minds, that is enough.
↑ comment by green_leaf · 2024-11-16T05:47:11.473Z · LW(p) · GW(p)
I don't know about similarity... but I was just making a point that QI doesn't require it.
↑ comment by dirk (abandon) · 2024-11-08T00:58:02.656Z · LW(p) · GW(p)
I would argue that personal identity is based on a destructible body.
comment by FireStormOOO · 2024-11-08T05:11:17.529Z · LW(p) · GW(p)
Your examples seem to imply that believing QI means such an agent would in full generality be neutral on an offer to have a quantum coin tossed, where they're killed in their sleep on tails, since they only experience the tosses they win. Presumably they accept all such trades offering epsilon additional utility. And presumably other agents keep making such offers since the QI agent doesn't care what happens to their stuff in worlds they aren't in. Thus such an agent exists in an ever more vanishingly small fraction of worlds as they continue accepting trades.
I should expect to encounter QI agents approximately never as they continue self-selecting out of existence in approximately all of the possible worlds I occupy. For the same reason, QI agents should expect to see similar agents almost never.
From the outside perspective, this seems to be in a similar vein to the fact that all computable agents exist in some strained sense (every program, and more generally every possible piece of data, is encodable as some integer, and exists exactly as much as the integers do), even if they're never instantiated. For any other observer, this QI concept is indistinguishable in the limit.
Please point out if I misunderstood or misrepresented anything.
↑ comment by avturchin · 2024-11-08T12:04:50.650Z · LW(p) · GW(p)
I think you are right. We will not observe QI agents, and it is a bad policy to recommend, as I will end up in an empty world soon. Now the caveats. My measure declines very quickly because of branching anyway, so this is not an additional problem. There is an idea of civilization-level quantum suicide by Paul Almond: the whole civilization performs the QI coin trick, and there is no problem with an empty world – but it can explain the Fermi paradox. QI makes sense from the first-person perspective, but not from the third.
comment by green_leaf · 2024-11-21T10:35:09.735Z · LW(p) · GW(p)
(This comment is written in the ChatGPT style because I've spent so much time talking to language models.)
Calculating the probabilities
The calculation of the probabilities consists of the following steps:
The epistemic split
Either we guessed the correct digit of π, with probability 1/10 (branch A), or we didn't, with probability 9/10 (branch B).
The computational split
On branch A, all of your measure survives (branch A1) and none dies (branch A2); on branch B, 1/128 of your measure survives (branch B1) and 127/128 dies (branch B2).
Putting it all together
Conditional on us subjectively surviving (which QI guarantees), the probability that we guessed the digit of π correctly is (1/10) / (1/10 + (9/10)·(1/128)) = 128/137 ≈ 0.93.
The probability of us having guessed the digit of π prior to us surviving is, of course, just 1/10.
Verifying them empirically
For the probabilities to be meaningful, they need to be verifiable empirically in some way.
Let's first verify that prior to us surviving, the probability of us guessing the digit correctly is 1/10. We'll run many experiments, guessing a digit each time and instantly verifying it. We'll learn that we're successful, indeed, just 1/10 of the time.
Let's now verify that conditional on us surviving, we'll have a probability of 128/137 of guessing correctly. We perform the experiments again, and this time, every time we survive, other people will check if the guess was correct. They will observe that we guess correctly, indeed, about 128/137 of the time.
Conclusion
We arrived at the conclusion that the probability jumps at the moment of our awakening. That might sound incredibly counterintuitive, but since it's verifiable empirically, we have no choice but to accept it.
↑ comment by avturchin · 2024-11-21T13:08:31.714Z · LW(p) · GW(p)
Thanks. By the way, the "chatification" of the mind is a real problem. It's an example of reverse alignment: humans are more alignable than AI (we are gullible), so during interactions with AI, human goals will drift more quickly than AI goals. In the end, we get perfect alignment: humans will want paperclips.
comment by Christopher King (christopher-king) · 2024-11-07T17:25:41.527Z · LW(p) · GW(p)
Believing QI is the same as a Bayesian update on the event "I will become immortal".
Imagine you are a prediction market trader, and a genie appears. You ask the genie "will I become immortal" and the genie answers "yes" and then disappears.
Would you buy shares on a Taiwan war happening?
If the answer is yes, the same thing should apply if a genie told you QI is true (unless the prediction market already priced QI in). No weird anthropics math necessary!
↑ comment by avturchin · 2024-11-07T18:17:31.510Z · LW(p) · GW(p)
It is a good summary, but the question is whether we can generalize it to the idea that "I am more likely to be born in a world where life extension technologies are developing and alignment is easy". A simple Bayesian update does not support this.
However, if future measure can somehow "propagate" back in time, it increases my chances of being born in a world where there is a logical chance of survival: alignment is easy.
A simple example of a world-model where measure "propagates" back in time is the simulation argument: if I survive into a world with human-interested AI, there will be many more copies of me in the future who think that they are in the past.
However, there could be more interesting ways for measure to propagate back in time. One of them is that the object of anthropic selection is not observer-moments but whole observer-timelines. Another is the two-thirder solution to Sleeping Beauty.
↑ comment by Christopher King (christopher-king) · 2024-11-07T19:56:57.166Z · LW(p) · GW(p)
"I am more likely to be born in the world where life extensions technologies are developing and alignment is easy". Simple Bayesian update does not support this.
I mean, why not?
P(Life extension is developing and alignment is easy | I will be immortal) = P(Life extension is developing and alignment is easy) * (P(I will be immortal | Life extension is developing and alignment is easy) / P(I will be immortal))
↑ comment by avturchin · 2024-11-07T21:25:08.558Z · LW(p) · GW(p)
Typically, this reasoning doesn't work because we have to update once again based on our current age and on the fact that such technologies do not yet exist, which compensates for the update in the direction of "Life extension is developing and alignment is easy."
This is easier to understand through the Sleeping Beauty problem. She wakes up once on Monday if it's heads, and on both Monday and Tuesday if it's tails. The first update suggests that tails is two times more likely, so the probability becomes 2/3. However, as people typically argue, after learning that it is Monday, she needs to update back to 1/3, which yields the same probability for both tails and heads.
But in the two-thirders' position, we reject the second update because Tails-Monday and Tails-Tuesday are not independent events (as was recently discussed on LessWrong in the Sleeping Beauty series).
comment by Ape in the coat · 2024-11-16T08:11:59.654Z · LW(p) · GW(p)
If my π-guess is wrong, my only chance to survive is getting all-heads.
Your other chance for survival is that whatever means are used to kill you somehow do not succeed due to quantum effects. And this is what the QI + path-based identity approach actually predicts. The universe isn't going to retroactively change the digit of pi, but neither is it going to influence the probability of the coin tosses just because someone may die. QI influence will trigger only at the moment of your death, turning it into a near-death. And then again for the next attempt. And the next one. Potentially locking you in a state of eternal torture.
However, abandoning SSSA also has a serious theoretical cost:
If observed probabilities have a hidden subjective dimension (because of path-dependency), all hell breaks loose. If we agree that probabilities of being a copy are distributed not in a state-dependent way, but in a path-dependent way, we agree that there is a 'hidden variable' in self-locating probabilities. This hidden variable does not play a role in our π experiment but appears in other thought experiments where the order of making copies is defined.
I fail to see this cost. Yes, we agree that there is an additional variable, namely my causal history. It's not necessarily hidden, but it can be. So what? What is so hell-breaking about it? This is exactly how probability theory works in every other case. Why should it have a special case for conscious experience?
If there are two bags, one with 1 red ball and another with 1000 blue balls, and a coin is tossed so that based on the outcome I get a ball from either the first or the second bag, I expect to receive the red ball with 50% chance. I'm not supposed to assume out of nowhere that every ball has to have an equal probability of being given to me, and therefore postulate a ball-shadow that modifies the fairness of the coin.
↑ comment by avturchin · 2024-11-16T17:45:03.802Z · LW(p) · GW(p)
As we assume that the coin tosses are quantum, and I will be killed if (I didn't guess pi) and (the coin tosses are not all heads), there is always a branch with 1/128 measure where all coins are heads, and this is more probable than surviving via some error in the setup.
"All hell breaks loose" refers here to a hypothetical ability to manipulate perceived probability – that is, magic. The idea is that I can manipulate such probability by changing my measure.
One way to do this is described in Yudkowsky's " The Anthropic Trilemma [LW · GW]," where an observer temporarily boosts their measure by increasing the number of their copies in an uploaded computer.
I described a similar idea in "Magic by forgetting [LW · GW]," where the observer boosts their measure by forgetting some information and thus becoming similar to a larger group of observers.
Hidden variables also appear depending on the order in which copies are made: if each copy is made from the previous copy, the original will have a 0.5 probability, the first copy 0.25, the next 0.125, and so on.
"Anthropic shadow" appears only because the number of observers changes in different branches.
↑ comment by Ape in the coat · 2024-11-16T19:41:53.261Z · LW(p) · GW(p)
As we assume that the coin tosses are quantum, and I will be killed if (I didn't guess pi) and (the coin tosses are not all heads), there is always a branch with 1/128 measure where all coins are heads, and this is more probable than surviving via some error in the setup.
Not if we assume QI+path-based identity.
Under them, the chance for you to find yourself in a branch where all coins are heads is 1/128, but your overall chance to survive is 100%. Therefore the low chance of a failed execution doesn't matter; quantum immortality will "increase" the probability to 1.
"All hell breaks loose" refers here to a hypothetical ability to manipulate perceived probability – that is, magic. The idea is that I can manipulate such probability by changing my measure.
One way to do this is described in Yudkowsky's " The Anthropic Trilemma [LW · GW]," where an observer temporarily boosts their measure by increasing the number of their copies in an uploaded computer.
I described a similar idea in "Magic by forgetting [LW · GW]," where the observer boosts their measure by forgetting some information and thus becoming similar to a larger group of observers.
None of these tricks works with path-based identity. That's why I consider it to be true – it seems to add up to normality completely. No matter how many clones of you exist along a different path, only your path matters for your probability estimate.
It seems that path-based identity is the only approach according to which all hell doesn't break loose. So what counterargument do you have against it?
Hidden variables also appear depending on the order in which I make copies: if each copy is made from subsequent copies, the original will have a 0.5 probability, the first copy 0.25, the next 0.125, and so on.
Why do you consider it a problem? What kind of counterintuitive consequences does it imply? It seems to be exactly how we reason about anything else.
Suppose there is the original ball, and then an indistinguishable copy of it is created. Then one of these two balls is picked randomly and put into bag 1, while the other ball is put into bag 2, and then 999 indistinguishable copies of this ball are also put into bag 2.
Clearly we are supposed to expect that the ball from bag 1 has a 50% chance to be the original ball, while a random ball from bag 2 has only a 1/2000 chance to be the original ball. So what's the problem?
"Anthropic shadow" appear only because the number of observers changes in different branches.
By the same logic "Ball shadow" appears because the number of balls is different in different bags.
↑ comment by avturchin · 2024-11-17T20:38:20.030Z · LW(p) · GW(p)
Under them, the chance for you to find yourself in a branch where all coins are heads is 1/128, but your overall chance to survive is 100%. Therefore the low chance of a failed execution doesn't matter; quantum immortality will "increase" the probability to 1.
You are right, and it's a serious counterargument to consider. Actually, I invented path-dependent identity as a counterargument to Miller's thought experiment.
You are also right that the Anthropic Trilemma and Magic by Forgetting do not work with path-dependent identity.
However, we can almost recreate the magic machine from the Anthropic Trilemma using path-based identity:
Imagine that I want to guess in which room I will be if there are two copies of me in the future, red or green.
I go into a dream. A machine creates my copy and then one more copy of that copy, which will result in 1/4 and 1/4 chances each. The second copy then merges with the first one, so we end up with only two copies, but I have a 3/4 chance to be the first one and 1/4 to be the second. So we've basically recreated a machine that can manipulate probabilities and got magic back.
The main problem of path-dependent identity is that we assume the existence of a "global hidden variable" for any observer. It is hidden as it can't be measured by an outside viewer and only represents the subjective chances of the observer to be one copy and not another. And it is global as it depends on the observer's path, not their current state. It therefore contradicts the view that mind is equal to a Turing computer (functionalism) and requires the existence of some identity carrier which moves through paths (qualia, quantum continuity, or soul).
Also, path-dependent identity opens the door to backward causation and premonition, because if we normalize the outputs of some black box where paths are mixed, similar to the magic machine discussed above, we get a shift in its input probability distribution in the past. This becomes similar to the 'timeline selection principle' (which I discussed in a longer version of this blog post but cut to fit the format), in which not observer-moments but whole timelines are selected, without updating on my position in the timeline. This idea formalizes the future anthropic shadow, as I am more likely to be in the timeline that is fattest and longest in the future.
↑ comment by Ape in the coat · 2024-11-18T07:28:02.541Z · LW(p) · GW(p)
You are right, and it's a serious counterargument to consider.
You are also right that the Anthropic Trilemma and Magic by Forgetting do not work with path-dependent identity.
Okay, glad we are on the same page here.
However, we can almost recreate the magic machine from the Anthropic Trilemma using path-based identity
I'm not sure I understand your example and how it recreates the magic. Let me try to describe it in my own words, and then correct me if I got something wrong.
You are put to sleep. Then you are split into two people. Then, at random, one of them is put into a red room and one into a green room. Let's say that person 1 is in the red room and 2 in the green room. Then person 2 is split into two people: 21 and 22. Both of them are kept in green rooms. Then everyone is awakened. What should be your credence of waking up in a red room?
Here there are three possibilities: 50% to be 1 in a red room and 25% chance to be either 21 or 22 in green rooms. No matter how much a person in a green room is split, the total probability for greenness stays the same. All is quite normal and there is no magic.
Now let's add a twist.
Instead of putting both 21 and 22 in green rooms, one of them - let it be 21 - is put in a red room.
In this situation, the total probability of the red room is P(1) + P(21) = 75%. And if we split 2 further and put more of its parts in red rooms, we get a higher and higher probability of being in a red room. Therefore we get a magical ability to manipulate probability.
Am I getting you correctly?
I do not see anything problematic with such "manipulation of probability". We do not change our estimate just because more people with the same experience are created. We change the estimate because a different fraction of the people gets a different experience. This is no more magical than putting both 1 and 2 into red rooms and noticing that suddenly the probability of being in a red room has reached 100%, compared to the initial formulation where it was a mere 50%. Of course it did! That's completely lawful behaviour of probability-theoretic reasoning.
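(Spelling out this arithmetic as a small sketch, with the 1/2, 1/4, 1/4 path-weights described above; the helper function is just for illustration.)

```python
# Path-weights from the splitting story above: 1 gets 1/2, 21 and 22 get 1/4 each.
weights = {"1": 0.5, "21": 0.25, "22": 0.25}

def p_red(rooms):
    # rooms maps each person to the colour of the room they wake up in
    return sum(w for person, w in weights.items() if rooms[person] == "red")

# Original setup: 1 in red, 21 and 22 both in green.
print(p_red({"1": "red", "21": "green", "22": "green"}))  # 0.5

# The twist: 21 is put into a red room as well.
print(p_red({"1": "red", "21": "red", "22": "green"}))    # 0.75
```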
Notice that we can't actually recreate the anthropic trilemma and be certain to win lottery this way. Because we can't move people between branches. Therefore everything adds up to normality.
Also, path-dependent identity opens the door to back-causation and premonition, because if we normalize outputs of some black box where paths are mixed, similar to the magic machine discussed above
We just need to restrict the mixing of the paths, which is the restriction of QM anyway. Or maybe I'm missing something? Could you give me an example with such backwards causality? Because as far as I see, everything is quite straightforward.
The main problem of path-dependent identity is that we assume the existence of a "global hidden variable" for any observer. It is hidden as it can't be measured by an outside viewer and only represents the subjective chances of the observer to be one copy and not another. And it is global as it depends on the observer's path, not their current state. It therefore contradicts the view that mind is equal to a Turing computer (functionalism) and requires the existence of some identity carrier which moves through paths (qualia, quantum continuity, or soul).
Seems like we are just confused about this "identity" thingy and therefore don't know how to correctly reason about it. In such situations we are supposed to
- Acknowledge that we are confused
- Stop speculating on top of our confusion and jumping to conclusions based on it
- Outline the possible options to the best of our understanding and keep an open mind until we manage to resolve the confusion
It's already clear that "mind" and "identity" are not the same thing. We can talk about identities of things that do not possess a mind, and identities are unique while there can exist copies of the same mind. So minds can very well be Turing computers, but identities are something else, or even not a thing at all.
Our intuitive desire to drag in consciousness/qualia/soul also appears completely unhelpful after thinking about it for the first five minutes. Non-conscious minds can do the same probability-theoretic reasoning as conscious ones. Nothing changes if 1, 21 and 22 from the problem above are not humans but programs executed on different computers.
Whatever extra variable we need, it seems to be something that a Laplace's demon would know. It's knowledge about whether a mind was split into n instances simultaneously or through multiple steps. It indeed means that something other than the immediate state of the mind is important for "identity" considerations, but this something can very well be completely physical: just the past history of causes and effects that led to this state of the mind.
Replies from: avturchin↑ comment by avturchin · 2024-11-19T11:36:12.649Z · LW(p) · GW(p)
Thanks for your thoughtful answer.
To achieve magic, we need the ability to merge minds, which can be easily done with programs and doesn't require anything quantum. If we merge 21 and 1, both will be in the same red room after awakening. If awakening in the red room means getting 100 USD, and the green room means losing it, then the machine will be profitable from the subjective point of view of the mind which enters it. Or we can just turn off 21 without awakening, in which case we will get 1/3 and 2/3 chances for green and red.
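(A sketch of the renormalization this turn-off variant seems to rely on, which the reply below disputes: drop 21's path-weight and condition on waking up at all.)

```python
# Path-weights before anything is turned off: 1 -> 1/2, 21 -> 1/4, 22 -> 1/4.
weights = {"1": 0.5, "21": 0.25, "22": 0.25}

# Turn 21 off before awakening and condition on waking up at all,
# i.e. renormalize the remaining path-weights.
surviving = {k: v for k, v in weights.items() if k != "21"}
total = sum(surviving.values())
print({k: round(v / total, 3) for k, v in surviving.items()})
# {'1': 0.667, '22': 0.333}  -> 2/3 red, 1/3 green
```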
The interesting question here is whether this can be replicated at the quantum level (we know there is a way to get quantum magic in MWI, and it is quantum suicide with money prizes, but I am interested in a more subtle probability shift where all variants remain). If yes, such ability may naturally evolve via quantum Darwinism because it would give an enormous fitness advantage – I will write a separate post about this.
Now the next interesting thing: If I look at the experiment from outside, I will give all three variants 1/3, but from inside it will be 1/4, 1/4, and 1/2. The probability distribution is exactly the same as in Sleeping Beauty, and likely both experiments are isomorphic. In the SB experiment, there are two different ways of "copying": first is the coin and second is awakenings with amnesia, which complicates things.
Identity is indeed confusing. Interestingly, the art world uses path-based identity: the identity of an artwork is defined by its provenance, i.e. its history of ownership. Blockchain is also an example of path-based identity. Also, under path-based identity, the Ship of Theseus remains the same ship.
Replies from: Ape in the coat↑ comment by Ape in the coat · 2024-11-20T06:35:48.785Z · LW(p) · GW(p)
To achieve magic, we need the ability to merge minds, which can be easily done with programs and doesn't require anything quantum.
I don't see how merging minds not across branches of the multiverse produces anything magical.
If we merge 21 and 1, both will be in the same red room after awakening.
Which is isomorphic to simply putting 21 to another red room, as I described in the previous comment. The probability shift to 3/4 in this case is completely normal and doesn't lead to anything weird like winning the lottery with confidence.
Or we can just turn off 21 without awakening, in which case we will get 1/3 and 2/3 chances for green and red.
This actually shouldn't work. Without QI, we simply have 1/2 for red, 1/4 for green and 1/4 for being turned off.
With QI, the last outcome simply becomes "failed to be turned off", without changing the probabilities of the other outcomes.
The interesting question here is whether this can be replicated at the quantum level
Exactly. Otherwise I don't see how path based identity produces any magic. For now I think it doesn't, which is why I expect it to be true.
Now the next interesting thing: If I look at the experiment from outside, I will give all three variants 1/3, but from inside it will be 1/4, 1/4, and 1/2.
Which events are you talking about when looking from the outside? What statements have 1/3 credence? It's definitely not "I will awaken in a red room", because it's not you who is to be awakened. For the observer this has probability 0.
On the other hand, the event "At least one person is about to be awakened in a red room" has probability 1 for both the participant and the observer. So what are you talking about? Try to be rigorous and formally define such events.
The probability distribution is exactly the same as in Sleeping Beauty, and likely both experiments are isomorphic.
Not at all! In Sleeping Beauty, on Tails you will be awakened both on Monday and on Tuesday. While here, if you are in a green room, you are either 21 or 22, not both.
Suppose that 22 gets their arm chopped off before awakening. Then you have a 25% chance to lose an arm by participating in such an experiment. While in Sleeping Beauty, if your arm is chopped off on Tails before the Tuesday awakening, you have a 50% chance to lose it by participating in such an experiment.
Interestingly, in the art world, path-based identity is used to define identity
Yep. This is just how we reason about identities in general. That's why SSSA appears so bizarre to me - it assumes we should be treating personal identity in a different way, for no particular reason.
Replies from: avturchin↑ comment by avturchin · 2024-11-20T18:11:50.413Z · LW(p) · GW(p)
For the outside view: Imagine that an outside observer uses a fair coin to observe one of two rooms (assuming merging in the red room has happened). They will observe either a red room or a green room, with a copy in each. However, the observer who was copied has different chances of observing the green and red rooms. Even if the outside observer has access to the entire current state of the world (but not the character of mixing of the paths in the past), they can't determine the copied observer's subjective chances. This implies that subjective unmeasurable probabilities are real.
Even without merging, an outside observer will observe three rooms with equal 1/3 probability for each, while an insider will observe room 1 with 1/2 probability. In cases of multiple sequential copying events, the subjective probability for the last copy becomes extremely small, making the difference between outside and inside perspectives significant.
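(A small illustration of how the inside and outside numbers diverge under sequential copying, assuming the newest copy's path-weight halves at each split while an outside observer picks among the resulting copies uniformly.)

```python
# After k sequential splits (each time splitting only the newest copy),
# the newest copy's path-weight is 1/2**k, while an outside observer who
# picks one of the k+1 resulting copies at random assigns it 1/(k+1).
for k in range(1, 11):
    inside = 0.5 ** k
    outside = 1 / (k + 1)
    print(f"k={k:2d}  inside={inside:.6f}  outside={outside:.4f}")
```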
When I spoke about the similarity with the Sleeping Beauty problem, I meant its typical interpretation. It's an important contribution to recognize that Monday-tails and Tuesday-tails are not independent events.
However, I have an impression that this may result in a paradoxical two-thirder solution: In it, Sleeping Beauty updates only once – recognizing that there are two more chances to be in tails. But she doesn't update again upon knowing it's Monday, as Monday-tails and Tuesday-tails are the same event. In that case, despite knowing it's Monday, she maintains a 2/3 credence that she's in the tails world. This is technically equivalent to the 'future anthropic shadow' or anti-doomsday argument – the belief that one is now in the world with the longest possible survival.
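(For comparison, a sketch of the standard thirder bookkeeping this "two-thirder" departs from; the two-thirder accepts the first number but refuses the conditionalization on Monday.)

```python
# Thirder bookkeeping: uniform credence over the three awakening-events.
p = {"Heads-Mon": 1/3, "Tails-Mon": 1/3, "Tails-Tue": 1/3}

p_tails = p["Tails-Mon"] + p["Tails-Tue"]                                   # 2/3
p_tails_given_monday = p["Tails-Mon"] / (p["Tails-Mon"] + p["Heads-Mon"])   # 1/2

print(p_tails, p_tails_given_monday)
# The "two-thirder" described above accepts the 2/3 but refuses the second
# update on learning it is Monday, because it treats Tails-Mon and Tails-Tue
# as one event, and so keeps 2/3.
```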
Replies from: Ape in the coat↑ comment by Ape in the coat · 2024-11-21T14:07:36.243Z · LW(p) · GW(p)
Imagine that an outside observer uses a fair coin to observe one of two rooms (assuming merging in the red room has happened). They will observe either a red room or a green room, with a copy in each. However, the observer who was copied has different chances of observing the green and red rooms.
Well obviously. The observer and the person being copied participate in non-isomorphic experiments with different sampling. There is nothing surprising about it [LW · GW]. On the other hand, if we make the experiments isomorphic:
Two coins are tossed and the observer is brought into the green room if both are Heads, and into the red room otherwise.
Then both the observer and the person being copied will have the same probabilities.
Even without merging, an outside observer will observe three rooms with equal 1/3 probability for each, while an insider will observe room 1 with 1/2 probability.
Likewise, nothing prevents you from designing an experimental setting where an observer has a 1/2 probability for room 1, just like the person who is being copied.
When I spoke about the similarity with the Sleeping Beauty problem, I meant its typical interpretation.
I'm not sure what use there is in investigating a wrong interpretation. It's a common confusion that one has to reason about problems involving amnesia the same way as about problems involving copying. Everyone just seems to assume it for no particular reason and therefore gets stuck.
However, I have an impression that this may result in a paradoxical two-thirder solution: In it, Sleeping Beauty updates only once – recognizing that there are two more chances to be in tails. But she doesn't update again upon knowing it's Monday, as Monday-tails and Tuesday-tails are the same event. In that case, despite knowing it's Monday, she maintains a 2/3 credence that she's in the tails world.
This seems to be the worst of both worlds. Not only do you update on a completely expected event, you then keep this estimate, expecting to be able to guess a future coin toss better than chance. An obvious way to lose all your money via betting.
comment by Vladimir_Nesov · 2024-11-07T20:11:55.700Z · LW(p) · GW(p)
If quantum immortality is true...
To discuss truth of a claim, it's first crucial to clarify what it means. What does it mean for quantum immortality to be true or not? The only relevant thing that comes to mind is whether MWI is correct. Large quantum computers might give evidence to that claim (though ASI very likely will be here first, unless there is a very robust AI Pause).
Once we know there are physical branching worlds, there is no further fact of "quantum immortality" to figure out. There are various instances of yourself in various world branches, a situation that doesn't seem that different from multiple instances that can occur within a single world. Decision theory then ought to say how to weigh the consequences of possible influences and behaviors spread across those instances.
Replies from: avturchin↑ comment by avturchin · 2024-11-07T20:35:22.809Z · LW(p) · GW(p)
QI is a claim about first-person perspective observables – that I will always observe the next observer moment. This claim is stronger than just postulating that MWI is true and that there are many me-like minds in it from a third-person perspective. This difference can be illustrated by some people's views about copies. They say: "I know that somewhere there will be my copy, but it will not be me, and if I die, I will die forever." So they agree with the factual part but deny the perspectival part.
I agree that the main consideration here is decision-theoretic. However, we need to be suspicious of any decision theory that was designed specifically to prevent paradoxes like QI, or we end up with circular logic: "QI is false because our XDT, which was designed to prevent things like QI, says that we should ignore it."
There is a counterargument (was it you who suggested it?) that there is no decision difference regardless of whether QI is valid or not. But this argument only holds for altruistic and updateless theories. For an egoistic EDT agent, QI would recommend playing Russian roulette for money.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2024-11-07T20:47:19.168Z · LW(p) · GW(p)
A person is a complicated machine; we can observe how this machine develops, or could develop, through processes that we could set up in the world or hypothetically. This is already quite clear, and things like "first person perspective" or "I will observe" don't make it any clearer.
So I don't see a decision theory proclaiming "QI is false!", it's just not a consideration it needs to deal with at any point, even if somehow there was a way of saying more clearly what that consideration means. Like a chip designer doesn't need to appreciate the taste of good cheese to make better AI accelerators.
Replies from: avturchin↑ comment by avturchin · 2024-11-10T12:17:56.246Z · LW(p) · GW(p)
We can escape the first-person perspective question by analyzing the optimal betting strategy of a rational agent regarding the most likely way of survival.
In the original thought experiment, there are 10 similar timelines where 10 otherwise identical agents guess a digit of pi (each guessing a different digit). Each agent has a 1/128 chance to survive via a random coin toss.
The total survival chances are either 1/10 via guessing pi correctly (one agent survives) or approximately 10/128 via random coin tosses (ignoring here the more complex equation for combining probabilities). 1/10 is still larger.
The experiment can be modified to use 10 random coins to get more decisive results.
Therefore, any agent can reasonably bet that if they survive, the most likely way of survival will be through correctly guessing the pi digit. (All the usual caveats about the limits of betting apply here.)
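(A rough Monte Carlo of this betting setup, using the 1/10 and 1/128 numbers above; the overlap case that the comment sets aside is counted automatically.)

```python
import random

TRIALS = 50_000
pi_only = coins_only = both = 0

for _ in range(TRIALS):
    # 10 parallel timelines: agent i guesses digit i, so exactly one is right.
    correct_digit = random.randrange(10)
    for digit in range(10):
        survived_pi = (digit == correct_digit)
        survived_coins = all(random.random() < 0.5 for _ in range(7))
        if survived_pi and survived_coins:
            both += 1
        elif survived_pi:
            pi_only += 1
        elif survived_coins:
            coins_only += 1

survivors = pi_only + coins_only + both
print("P(survived via the pi digit | survived) ~", (pi_only + both) / survivors)
# roughly 0.93
```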
Whether to call this "immortality" is more of an aesthetic choice, but the fact remains that some of my copies survive any risk in the Many-Worlds Interpretation. The crux is whether the agent should treat their declining measure as a partial death.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2024-11-10T12:52:16.576Z · LW(p) · GW(p)
Death/survival/selection have the might makes right issue, of maintaining the normativity/actuality distinction. I think a major use of weak orthogonality thesis is in rescuing these framings. That is, for most aims, there is a way of formulating their pursuit as "maximally ruthless" without compromising any nuance of the aims/values/preferences, including any aspects of respect for autonomy or kindness within them. But that's only the strange framing adding up to normality, useful where you need that framing for technical reasons.
Making decisions in a way that ignores declining measure of influence on the world due to death in most eventualities doesn't add up to normality. It's a bit like saying that you can be represented by a natural number, and so don't need to pay attention to reality at all, since all natural numbers are out there somewhere, including those representing you. I don't see a way of rescuing this kind of line of argument.
Replies from: avturchin↑ comment by avturchin · 2024-11-11T11:40:40.309Z · LW(p) · GW(p)
Indeed, whether QI matters depends on what I care about. If a mother cares about her child, quantum suicide would be a stupid act for her, as in most worlds the child would be left alone. If a person cares only about what they feel, quantum suicide makes more sense (the same way euthanasia makes sense only if quantum immortality is false).
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2024-11-12T23:37:17.153Z · LW(p) · GW(p)
A decision theory needs to have orthogonality, otherwise it's not going to be applicable. Decisions about the content of values are always wrong; the only prudent choice is to defer them.
Replies from: avturchin, avturchin↑ comment by avturchin · 2024-11-16T12:13:08.561Z · LW(p) · GW(p)
Orthogonality between goals and DT makes sense only if I don't have preferences about the type of DT or the outcomes which one of them necessitates.
In the case of QI, orthogonality works if we use QI to earn money or to care about relatives.
However, humans have preferences about existence and non-existence beyond normal money utility. In general, people strongly don't want to die. It means that I have a strong preference that some of my copies survive anyway, even if it is not very useful for some other preferences under some other DT.
Another point is the difference between Quantum suicide and QI. QS is an action, but QI is just a prediction of future observations and because of that it is less affected by decision theories. We can say that those copies of me who survive [high chance of death event] will say that they survived because of QI.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2024-11-16T13:02:31.930Z · LW(p) · GW(p)
Having preferences is very different from knowing them. There's always a process of reflection that refines preferences, so any current guess is always wrong at least in detail. For a decision theory to have a shot at normativity, it needs to be able to adapt to corrections and ideally anticipate their inevitability (not locking in the older guess and preventing further reflection; instead facilitating further reflection and being corrigible).
Orthogonality asks the domain of applicability to be wide enough that both various initial guesses and longer term refinements to them won't fall out of scope. When a theory makes assumptions about value content, that makes it a moral theory rather than a decision theory. A moral theory explores particular guesses about preferences of some nature.
So in the way you use the term, quantum immortality seems to be a moral theory, involving claims that quantum suicide can be a good idea. For example "use QI to earn money" is a recommendation that depends on this assumption about preferences (of at least some people in some situations).
comment by weightt an (weightt-an) · 2024-11-10T14:23:58.129Z · LW(p) · GW(p)
Before sleeping, I assert that the 10th digit of π equals the number of my eyes. After falling asleep, seven coins will be flipped. Assume quantum uncertainty affects how the coins land. I survive the night only if the number of my eyes equals the 10th digit of π and/or all seven coins land heads; otherwise I will be killed in my sleep.
Will you wake up with 3 eyes?
Like, your decisions to name some digit are not equally probable. Maybe you are the kind of person who would name 3 only if 10^12 cosmic rays hit you in a precise sequence or whatever, and you name 7 with 99% probability.
AND if you are very unlikely to name the correct digit, you will be unlikely to enter this experiment at all, because you will die in the majority of timelines. I.e. at t1 you decide whether to enter. At t2 the experiment happens, or you just waste time doomscrolling. At t3 you look up the digit. Your distribution at t3 is like 99% of you who chickened out.
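(A sketch of this conditioning point, with made-up numbers for the chance of naming the correct digit and the chance of deciding to enter at t1.)

```python
# Made-up illustrative numbers: 1% chance you name the correct digit,
# 50% chance you decide to enter the experiment at t1.
p_correct = 0.01
p_enter = 0.5
p_coins = 1 / 128

p_survive_if_enter = p_correct + (1 - p_correct) * p_coins
p_alive_entered = p_enter * p_survive_if_enter
p_alive_chickened = 1 - p_enter   # those who doomscroll all survive the night

total = p_alive_entered + p_alive_chickened
print("P(chickened out | alive at t3) =", round(p_alive_chickened / total, 3))
# ~0.98 with these numbers
```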