Looking for answers about quantum immortality.
post by Fivehundred · 2019-09-09T02:16:03.435Z · LW · GW · 4 comments
This is a question post.
Contents
Answers: Viliam (10), Adele Lopez (8), avturchin (6), gilch (5), Slider (2), rosyatrandom (2), shminux (1), superads91 (0)
4 comments
I've recently been obsessing over the risks of quantum torment, and in the course of my research I downloaded this article: https://philpapers.org/rec/TURFAA-3
Here's a quote:
"4.3 Long-term inescapable suffering is possible
If death is impossible, someone could be locked into a very bad situation where she can’t die, but also can’t become healthy again. It is unlikely that such an improbable state of mind will exist for too long a period, like millennia, as when the probability of survival becomes very small, strange survival scenarios will dominate (called “low measure marginalization” by Almond (2010)). One such scenario might be aliens arriving with a cure for the illness, but more likely, the suffering person will find herself in a simulation or resurrected by superintelligence in our world, perhaps following the use of cryonics.
Aranyosi summarized the problem: “David Lewis’s point that there is a terrifying corollary to the argument, namely, that we should expect to live forever in a crippled, more and more damaged state, that barely sustains life. This is the prospect of eternal quantum torment” (Aranyosi 2012; Lewis 2004). The idea of outcomes infinitely worse than death for the whole of humanity was explored by Daniel (2017), who called them “s-risks”. If MI is true and there is no high-tech escape on the horizon, everyone will experience his own personal hell.
Aranyosi suggested a comforting corollary (Aranyosi 2012), based on the idea that multiverse immortality requires not remaining in the “alive state”, but remaining in the conscious state, and thus damage to the brain should not be very high. It means, according to Aranyosi, that being in the nearest vicinity of death is less probable than being in just “the vicinity of the vicinity”: the difference is akin to the difference between constant agony and short-term health improvement. However, it is well known that chronic states of bad health which don’t affect consciousness are possible, e.g. cancer, whole-body paralysis, depression, and locked-in syndrome. However, these bad outcomes become less probable for people living in the 21st century, as developments in medical technology increase the number of possible futures in which any disease can be cured, or where a person will be put in cryostasis, or wake up in the next level of a nested simulation. Aranyosi suggested several other reasons why eternal suffering is less probable:
1) Early escape from a bad situation: “According to my line of thought, you should rather expect to always luckily avoid life-threatening events in infinitely many such crossing attempts, by not being hit (too hard) by a car to begin with. That is so because according to my argument the branching of the world, relevant from the subjective perspective, takes place earlier than it does according to Lewis. According to him, it takes place just before the moment of death, according to my reasoning it takes place just before the moment of losing consciousness”
(Aranyosi 2012, p.255).
2) Limits of suffering. “The more damage your brain suffers, the less you are able to suffer”
(Aranyosi 2012, p.257).
3) Inability to remember suffering. “Emergence from coma or the vegetative state followed by amnesia is not an eternal life of suffering, but rather one extremely brief moment of possibly painful self-awareness – call it the ‘Momentary Life’ scenario.” (Aranyosi 2012, p.257).
4.4 Bad infinities and bad circles
Multiverse immortality may cause one to be locked into a very stable but improbable world – much like the scenario in the episode “White Christmas” of the TV series “Black Mirror” (Watkins 2014), in which a character is locked into a simulation of a room for a subjective 30 million years. Another bad option is a circular chain of observer-moments. Multiverse immortality does not require that the “next” moment will be in the actual future, especially in the timeless universe, where all moments are equally actual. Thus a “Groundhog Day” scenario becomes possible. The circle could be very short, like several seconds, in which a dying consciousness repeatedly returns to the same state as several seconds ago, and as it doesn’t have any future moments it resets to the last similar moment. Surely, this could happen only in a very narrow state of consciousness, where the internal clock and memory are damaged."
Look, I'm not at all knowledgeable in these matters (besides having read Permutation City and The Finale of the Ultimate Meta Mega Crossover). Based on what I've read online about the possibility of quantum immortality, I don't think it is probable, and quantum torment less so. But there's something about a published article giving serious consideration to us suffering eternally, or going through 'The Jaunt' from that Stephen King story, which is creating a nice little panic attack (in addition to the already scary David Lewis article).
I plan to die and have no intention of signing up for cryonics. (EDIT: I meant dying naturally. I have no desire to expedite the process; it's just that I'm not on board with the techno-immortalism popular around here.) All I want to know is, is this stuff just being pulled out of his butt? Like, an extremely unlikely hypothetical that nonetheless carries huge negative utility? I'd be okay with that, as I'm not a utilitarian. Or have these scenarios actually been considered plausible by AI theorists?
I'm also desperate to get in contact with someone who's studied quantum mechanics and can answer questions of this nature. An actual physicist (especially a believer in MWI) would be great. I'd think an understanding of neuroscience would also be very important for analyzing the risks, but how many people have studied both fields? With some exceptions, the only ones I do see discussing it are philosophers.
I'm in a bad place right now; any help would go a long way.
Answers
I read the first link, and to me it seems that the author actually stumbles upon the right answer in the middle of the paper, only to dismiss it immediately with "we have no good way to justify it" and proceed towards things that make less sense. I am talking about what he calls the "intensity rule" in the paper.
Assuming a non-collapse interpretation, the entire idea is that literally everything happens all the time, because every particle has a non-zero amplitude at every place, but it all adds up to normality anyway, because what matters is the actual value of the amplitude, not just whether it is zero or non-zero. (Theoretically, epsilon is not zero. Practically [LW · GW], the difference between zero and epsilon is epsilon.) Outcomes with larger amplitudes are the normal ones; the ones we should expect more. Outcomes with epsilon amplitudes are the ones we should only pay epsilon attention to.
Is it possible that the furniture in my room will, due to some very unlikely synchronized quantum tunneling, transform into a hungry tiger? Yes, it is theoretically possible. (Both in the Copenhagen and many-worlds interpretations, by the way.) How much time should I spend contemplating such a possibility? Just by mentioning it, I already spent many orders of magnitude more than would be appropriate.
The paper makes some automatic assumption about time, which I am going to ignore for the moment. Let's assume that, because of quantum immortality, you will be alive 1000000 years from now. Which path is most likely to get you from "here" to "there"?
In any case, some kind of miracle is going to happen. But we should still expect the smallest necessary miracle. In absolute numbers, the chances of "one miracle" and "dozen miracles" are both pretty close to zero, but if we are going to assume that some miracle happened, and normalize the probabilities accordingly, "one miracle" is almost certainly what happened, and the probability of "dozen miracles" remains pretty close to zero even after the normalization. (Assuming the miracles are of comparable size, mutually independent, et cetera.)
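Here's a quick toy calculation (the numbers are entirely made up; the only point is how normalization behaves):

```python
# Toy numbers, purely for illustration of how conditioning on survival works.
p_miracle = 1e-9                    # assumed probability of one independent "miracle"
p_one_miracle = p_miracle           # survival path needing a single miracle
p_dozen_miracles = p_miracle ** 12  # survival path needing 12 independent miracles

# Condition on "you survived somehow" by normalizing over the surviving paths:
total = p_one_miracle + p_dozen_miracles
print(p_one_miracle / total)        # ~1.0
print(p_dozen_miracles / total)     # ~1e-99, still negligible after normalization
```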
Comparing likelihoods of different miracles is, by definition, outside of our usual experience, so I may be wrong here. But it seems to me that the horror scenario envisioned by the author requires too many miracles. (In other words, it seems optimized for shock value, not relative probability.) Suppose that in 10 years you get hit by a train, and by a miracle, a horribly disfigured fragment of you survives in an agony beyond imagination. Okay, technically possible. So, what is going to happen during the following 999990 years? It seems that further surviving in this state would require more miracles than further surviving as a healthy person. (The closer to death you are, the more unlikely it is for you to survive another day, or year.) And both these paths seem to require more miracles than being frozen now, and later resurrected and made forever young using advanced futuristic technology. Even just dying now, and being resurrected 1000000 years later, would require only one miracle, albeit a large one. If you are going to be alive in 1000000 years, you are most likely to get there by the relatively least miraculous path. I am not sure what exactly it is, but being constantly on the verge of death and surviving anyway seems too unlikely (and being frozen and later unfrozen, or uploaded to a computer, seems almost ordinary in comparison).
Now, let's take a bit more timeless [LW · GW] perspective here. Let's look at the universe in its entirety. According to quantum immortality, there are you-moments in the arbitrarily distant future. Yes; but most of them are extremely thin. Most of the mass of the you-moments is here, plus or minus a few decades. (Unless there is a lawful process, such as cryonics, that would stretch a part of the mass into the future enough to change the distribution significantly. Still not as far as quantum immortality, which can probably overcome even the heat death of the universe and get so far that time itself stops making sense.) So, according to the anthropic principle, whenever you find yourself existing, you most likely find yourself in the now -- I mean, in your ordinary human lifespan. (Which is, coincidentally, where you happen to find yourself right now, don't you?) There are a few you-moments in very exotic places, but most of them are here. Most of your life happens before your death; most instances of you experiencing yourself are the boring human experience.
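To put toy numbers on "most of the mass is here": a crude sketch, assuming made-up survival rates of 99% per year up to age 90 and 50% per year afterwards (the exact numbers don't matter, only the shape of the result):

```python
# Crude sketch of where the measure of you-moments sits, with invented survival
# rates. The per-year survival never reaches zero, so some measure exists at
# every future age, but the tail is extremely thin.
measure = 1.0
within_lifespan = 0.0
after_lifespan = 0.0
for age in range(1000):
    if age < 90:
        within_lifespan += measure
        measure *= 0.99   # assumed: 99% chance of surviving each ordinary year
    else:
        after_lifespan += measure
        measure *= 0.5    # assumed: 50% chance per year past a normal lifespan
print(within_lifespan / (within_lifespan + after_lifespan))  # ~0.99
```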
↑ comment by TAG · 2019-09-12T12:47:14.185Z · LW(p) · GW(p)
Now, let’s take a bit more timeless perspective here. Let’s look at the universe in its entirety. According to quantum immortality, there are you-moments in the arbitrarily distant future. Yes; but most of them are extremely thin. Most of the mass of the you-moments is here, plus or minus a few decades.
Why does that matter?
Under single universe assumptions, there is no quantum immortality or torment, because low probability things generally don't happen.
Under the single-mind multi-universe view -- where there is one "real" you that switches tracks in proportion to their measure -- you are also unlikely to find yourself immortal or tormented. But it's a form of dualism -- it assumes that mind and matter operate by different rules.
Under actual multiversal assumptions, multi-mind + multi-world, everything that is unlikely but has above-zero measure is real, from an objective point of view. The question is what happens from a subjective POV. If it's at all possible for consciousness to transfer between worlds, then the subjective probability of ending up very old in a low-measure world is actually high, because as you age past a normal human lifespan, you run out of high-measure worlds where you are alive. The assumption that beings in low-measure worlds have a faint, zombie-like consciousness can still stave off QI and QT, but it lacks independent motivation. Physics doesn't say how consciousness works.
Replies from: Viliam↑ comment by Viliam · 2019-09-13T21:37:48.481Z · LW(p) · GW(p)
If it's at all possible for consciousness to transfer between worlds
I suppose it's not.
Physics doesn't say how consciousness works.
It exists in brains, brains are made of atoms, and physics has a story or two about the atoms.
Replies from: TAG↑ comment by TAG · 2019-09-14T17:25:03.317Z · LW(p) · GW(p)
If it's at all possible for consciousness to transfer between worlds
I suppose it’s not.
Then you don't need the lengthy detour about measure.
Physics doesn't say how consciousness works.
It exists in brains, brains are made of atoms, and physics has a story or two about the atoms.
And consciousness isn't a common-or-garden higher-level phenomenon. Again, the point of reductionism is to understand the higher-level phenomena in terms of lower-level activity, not just to notice that big things are made of little things.
[Note: potential info hazard, but probably good to read if you already read the question.]
[Epistemic status: this stuff is all super speculative due to the nature of the scenarios involved. Based on my understanding of physics, neuroscience, and consciousness, I haven't seen anything that would rule this possibility out.]
All I want to know is, is this stuff just being pulled out of his butt? Like, an extremely unlikely hypothetical that nonetheless carries huge negative utility? I'd be okay with that, as I'm not a utilitarian. Or have these scenarios actually been considered plausible by AI theorists?
FWIW, I've thought about this a lot and independently came up with and considered all the scenarios mentioned in the Turchin excerpt. It used to really really freak me out, and I believed it on a gut level. Avoiding this kind of outcome was my main motivation for actually getting the insurance for cryonics (the part I was previously cryocrastinating on). However, I now believe that QI is not an s-Risk and don't feel personally worried about the possibility anymore.
One thing to note is that this is a potential problem in any sufficiently large universe, and doesn't depend on a many-worlds style interpretation being correct. Tegmark has a list of various multiverses, which are different and affect what scenarios we might face. I do believe in many-worlds (as a broad category of interpretations) though.
Lots of the comments here seem confused about how this works, so I'll recap. If I'm at the point of death where I'm still conscious, the next moment I'll experience will be (in expectation) whatever conscious state has the highest probability mass in the multiverse, which is also a valid next conscious moment from the previous moment. Note that this next conscious moment is not necessarily in the future of the previous moment. If the multiverse contains no such moments, then we would just die the normal way. If the multiverse includes lots of humans doing ancestor simulations, you potentially could end up in one of those, etc... The key is that out of all conscious beings in the multiverse who feel like this just happened to them, those are (tautologically) the ones having the subjective experience of the next valid conscious moment. And it's valid to care about these potential beings; this is, AFAICT, the same reason I care about my future selves (who do not exist yet) in the normal sense.
Regarding cryonics, it seems like the best way to preserve a significant amount of information about my last conscious moment. To whatever extent information about this is lost, a civilization that cares about this could optimize for likelihood of being a valid next conscious moment. I think this is the main actionable thing you can do for this. Of course, this only passes the buck to the future, since there is still the inevitable heat death of the universe to contend with.
Another thing that seems especially plausible for sudden deaths is Aranyosi's scenario 1. In this case, the next conscious moment with the highest probability mass will be a moment based on the moment from a few seconds before, but with a "false" memory of having survived a sudden death. This has relatively high probability because people sometimes report having this kind of experience when they have a close call. But this again simply passes the buck to the future, where you're most likely to die from a gradual decline.
However, I think that by far, the most likely situation is common to death by aging, illness, or heat death of the universe. At the last moment of consciousness, the only next conscious moments that will be left will be in highly improbable worlds. But which world you are most likely to "wake up" in is still determined by Occam's razor. People seem to imagine that these improbable worlds will be ones where your consciousness remains in a similar state to the one you died in, but I think this is wrong.
Think carefully about what things are actually happening to support a conscious experience. Some minimal set of neurons would need to be kept functional -- but beyond that, we should expect entropy to affect things which are not causally upstream of the functionality of this set of neurons. Since strokes happen often, and don't always cause loss of consciousness, we can expect them to eventually occur for every non-essential (for consciousness) region of the brain. Because people can experience nerve damage to their sensory neurons without losing consciousness, we can expect that the ability to experience physical pain will decay. Emotional pain doesn't seem to be that qualitatively different from physical pain (e.g. is also mitigated by NSAIDs), so I expect this will be true for pain in general.
So most of your body and most of your mind will still decay as normal; only the absolutely essential neuronal circuitry (and whatever else, perhaps blood circulation) needed to induce a valid next conscious moment will miraculously survive. Anesthesia works by globally reducing synapse activity, so the initial stages of this would likely feel like going under anesthesia, but where you never quite go out. Because anesthetics stop pain (remember this is still true if applied locally), and because by default we do not experience pain, I'm now pretty sure that, even if QI is real, infinite agony is very unlikely.
I wrote the article quoted above. I think I understand your feelings: when I came to the idea of QI, I realised, after a first period of excitement, that it implies the possibility of eternal suffering. However, in the current situation of quick technological progress such eternal suffering is unlikely, as in 100 years some life-extending and pain-reducing technologies will appear. Or, if our civilization crashes, some aliens (or the owners of the simulation) will eventually bring pain-reduction techniques.
If you have thoughts about non-existence, it may be some form of suicidal ideation, which could be a side effect of antidepressants or bad circumstances. I had it, and I am happy that it is in the past. If such ideation persists, seek professional help.
While death is impossible in the QI setup, a partial death is still possible, when a person loses those parts of him- or herself which want to die. Partial death has already happened many times to the average adult person, who has forgotten her childhood personality.
↑ comment by Fivehundred · 2019-09-09T11:58:30.998Z · LW(p) · GW(p)
However, in the current situation of quick technological progress such eternal suffering is unlikely, as in 100 years some life-extending and pain-reducing technologies will appear. Or, if our civilization crashes, some aliens (or the owners of the simulation) will eventually bring pain-reduction techniques.
What if I don't agree?
If you have thoughts about non-existence, it may be some form of suicidal ideation, which could be a side effect of antidepressants or bad circumstances. I had it, and I am happy that it is in the past. If such ideation persists, seek professional help.
I only meant that I plan to die naturally, with no attempt at cryogenic freezing. I've no wish to die before the end of my natural lifespan.
While death is impossible in the QI setup, a partial death is still possible, when a person loses those parts of him- or herself which want to die. Partial death has already happened many times to the average adult person, who has forgotten her childhood personality.
I'm afraid I don't understand what you're saying here.
Replies from: avturchin↑ comment by avturchin · 2019-09-09T12:28:58.935Z · LW(p) · GW(p)
If QI is true, then no matter how small the share of worlds where radical life extension is possible, I will eventually find myself in one of them, if not in 100 years then maybe in 1000.
Replies from: Fivehundred
↑ comment by Fivehundred · 2019-09-09T13:01:08.256Z · LW(p) · GW(p)
What was that talk about 'stable but improbable' worlds? If someone cares enough to revive me (I assume my measure would mostly enter universes where I was being simulated), then that doesn't seem likely. I also can't fathom that an AI wanting to torture humans would take up a more-than-tiny share of such universes. Do you think such things are likely, or is it that their downsides are so bad that they must be figured into the utilitarian calculus?
Replies from: avturchin↑ comment by avturchin · 2019-09-09T13:05:05.999Z · LW(p) · GW(p)
The world where someone wants to revive you has low measure (maybe not, but let's assume it does), but if they do revive you, they will preserve you there for a very long time. For example, some semi-evil AI may want to revive you only to show you red fish for the next 10 billion years. It is a very unlikely world, but still possible. And if you are in it, it is very stable.
Replies from: Fivehundred↑ comment by Fivehundred · 2019-09-09T13:47:44.826Z · LW(p) · GW(p)
But wait, doesn't that require the computational theory of mind and 'unification' of identical experiences? If they don't hold, then we can't go into other universes regardless of whether MWI is true (if they do, then we could even if MWI is false). I would have to already be simulated, and if I am, then there's no reason to suppose it is by the sort of AI you describe.
Your suggestion was based on the assumption of an AI doing it, correct? It isn't something we can naturally fall into? Also, even if all your other assumptions are true, why suppose that 'semi-evil' AIs, which you apparently think have low measure, take the lion's share of highly degraded experiences? Why wouldn't a friendly (or at least friendlier) AI try to rescue them?
Replies from: avturchin↑ comment by avturchin · 2019-09-09T14:18:20.005Z · LW(p) · GW(p)
QI works only if at least three main assumptions hold, but we don't know for sure whether they are true. One is the very large size of the universe, the second is the "unification of identical experiences", and the third one is that we could ignore the decline of measure corresponding to survival in MWI. So the validity of QI is uncertain. Personally I think it is more likely to be true than untrue.
It was just a toy example of a rare but stable world. If friendly AIs dominate the measure, you will most likely be resurrected by a friendly AI. Moreover, a friendly AI may try to dominate the total measure in order to increase humans' chances of being resurrected by it, and it could try to rescue humans from evil AIs.
Replies from: Fivehundred
↑ comment by Fivehundred · 2019-09-09T14:57:18.621Z · LW(p) · GW(p)
the second is "unification of identical experiences"
I disagree. Quantum immortality can still exist without it; it's only this supposition of the AI 'rescuing you' that requires it. Also, if AIs are trying to grab as many humans as possible, there's no special reason to focus on dying ones. They could just simulate all sorts of brain states with memories and varied experiences, and then immediately shut down the simulation.
If we assume that we cannot apply self-locating belief to our experience of time (and assume AIs are indeed doing this), we should expect at every moment to enter an AI-dominated world. If we can apply self-locating beliefs, then the simulation would almost certainly be already shut down and we would be in that world. Since we aren't, there's no reason to suppose that these AIs exist or that they can 'grab a share of our souls' at all.
The question is, can we apply self-locating belief to our experience of time?
and the third one is that we could ignore the decline of measure corresponding to survival in MWI
How would measure affect this? If you're forced to follow certain paths due to not existing in any others, then why does it matter how much measure it has?
Replies from: avturchin↑ comment by avturchin · 2019-09-09T16:21:55.475Z · LW(p) · GW(p)
How would measure affect this? If you're forced to follow certain paths due to not existing in any others, then why does it matter how much measure it has?
Agree, but some don't.
We could be (and probably are) in an AI-created simulation; maybe it is a "resurrectional simulation". But if friendly AIs dominate, there will be no drastic changes.
Replies from: Fivehundred↑ comment by Fivehundred · 2019-09-09T16:33:28.811Z · LW(p) · GW(p)
Why? Surely they're trying to rescue us. Maintaining the simulation would take away resources from grabbing even more human-measure.
Replies from: avturchin↑ comment by avturchin · 2019-09-09T16:55:43.732Z · LW(p) · GW(p)
To avoid creating just random minds, the future AI has to create a simulation of the history of the whole of humanity, and it is still running, not merely being maintained. I explored the topic of resurrectional simulations here: https://philpapers.org/rec/TURYOL
Replies from: Fivehundred↑ comment by Fivehundred · 2019-09-09T18:23:50.729Z · LW(p) · GW(p)
Why wouldn't it create random minds if it's trying to grab as much 'human-space' as possible?
EDIT: Why focus on the potential of quantum immortality at all? There's no special reason to focus on what happens when we *die*, in terms of AI simulation.
If quantum torment is real, attempting suicide would only get you there faster. It would restrict your successor observer-moments to only those without enough control to choose to die (since those who succeed in the attempt have no successors). Locked-in syndrome and the like.
Signing up for cryonics, on the other hand, would probably be a good idea, since it would increase the diversity of possible future observer moments to include cases where you get revived.
Enlightenment (in the Buddhist sense) might possibly be an escape. Some rationalists seem to take the possibility seriously and say you don't have to believe in anything supernatural. Meditation is just happening in your brain. If you do reach Nirvana, perhaps you can decide not to suffer at all, even if you do get locked-in (or worse). This kind of sounds like wireheading to me, but if the alternative is Literally Hell, then maybe you should take the deal. (Epistemic status: I'm not enlightened or anything. I've just heard people talk about it.)
↑ comment by Spiracular · 2019-09-15T21:06:24.769Z · LW(p) · GW(p)
As someone who has had stream-entry, and the change-in-perception called Enlightenment... I endorse your read of it as being potentially useful in this case?
I'm going to give more details in a sub-comment, to give people who are already rolling their eyes a chance to skip over this.
Replies from: Spiracular, Spiracular, Spiracular↑ comment by Spiracular · 2019-09-15T21:07:20.004Z · LW(p) · GW(p)
So, here's the specific thing I can think of that seems like it might be helpful...
I try to be cautious about using meditation-based wire-heading or emotional-dulling, but at minimum, there's a state one step down from enlightenment (equanimity) that perceives suffering as merely "dissonance" in vibrations. The judging/negative-connotation gets dropped, and internal perception of emotional affect is pretty flat. (Note of caution: the emotions probably aren't gone, it's more like you perceive them differently. I'm not 100% sure how it works, myself. While it might sound similar, it's not quite the same as dissociation; the movement is more like you lean into your experience rather than out of it. Also, I read in a paper that its painkiller properties are apparently not based on opioids? Weird, right? So neurologically, I don't really know how it works, although I might develop theories if I researched it a bit harder.)
Enlightenment/fruition proper doesn't even form memories, although I've never been able to sustain that state for longer than a few seconds. But when it drops, it usually drops back into equanimity... so I guess between the two, it'd be a serious improvement on "eternal conscious suffering"?
Unfortunately, to get into Enlightenment territory, there's a series of intermediate steps that tend to set off existential crises of widely-varying severity. Any book or teacher that doesn't take this and the wireheading potential seriously is probably less good than one that does. That said, I still recommend it, especially for people who seem to keep having existential crises anyway. But it's a perception-alteration workbench; its sub-skills can sometimes be used to detrimental ends if people aren't careful about what they install.
↑ comment by Spiracular · 2019-09-15T21:09:36.798Z · LW(p) · GW(p)
Relatedly: I would bet someone money that Greg Egan does something insight-meditation-adjacent.
I started reading his work after someone noted my commentary on "the unsharableness of personal qualia" bore a considerable resemblance to Closer. And since then, whenever I read his stuff, I keep seeing him giving intelligent commentary and elaboration on things I had perceived and associated with deep meditation or LSD (the effects are sometimes similar for me). He's obviously a big physics fan, but I suspect insight meditation is another one of his big "creativity" generators. (Before someone inevitably asks: No, I don't say that about everything.)
To me, Egan's viewpoint reads as very atheist, but also very Buddhist. If you shear off all the woo and distill the remainder, Buddhism is very into seeing through "illusions" (even reassuring ones), and he seems to have a particular interest in this.
I can make up a plausible story that developing an obsession with how we coordinate-and-manifest the illusion of continuity from disparate brain-parts... could be a pretty natural side-effect of sometimes watching the mental sub-processes that generate the illusion of "a single, conscious, continuous self" fall apart from one another? (Meditation can do that, and it's very unsettling the first time you see it.)
↑ comment by Spiracular · 2019-09-15T21:06:43.824Z · LW(p) · GW(p)
Here's one plus-side that you don't need the additional context to understand: I kinda suspect that at least most people would eventually find the right combination of insights and existential-crises to bumble into enlightenment by themselves, if they had an eternity of consecutive experiences to work with. Especially given that there seem to be multiple simple practices that get around to it eventually (although it might take a couple of lifetimes for some people).
↑ comment by Fivehundred · 2019-09-10T03:14:49.180Z · LW(p) · GW(p)
Actually, I just realized there's no reason you would remain conscious in QI. Surely the damage to your brain and body would put you into a coma - a fate I'd like to avoid, but definitely better than Literally Hell.
Also, what is all this talk about suicide? All I said was that I plan to die normally. You guys are reading weird things into that...
Replies from: gilch, gilch↑ comment by gilch · 2019-09-10T03:33:31.579Z · LW(p) · GW(p)
It was mostly just for contrast with the cryonics bit. Also, Quantum Suicide is another name for the same thought experiment. The others might be reacting to the "I'm in a bad place right now" combined with all this talk of death.
And I don't see how a death being "natural" makes it OK. Death is Bad.
if people got hit on the head by a baseball bat [LW · GW] every week, pretty soon they would invent reasons why getting hit on the head with a baseball bat was a good thing.
If you want to live today, and expect to feel the same way tomorrow, then by induction, why not at 80? Ill health? Medicine might be more advanced by then.
Replies from: Fivehundred↑ comment by Fivehundred · 2019-09-10T04:50:19.639Z · LW(p) · GW(p)
And I don't see how a death being "natural" makes it OK.
That's not what I said (though it is a good reason to be suspicious of attempts to remove it.) I'll just leave it that I have some philosophical opinions which lead me to believe it is not annihilation.
Also, the baseball example is not a natural phenomenon. If it were, I'd consider it rational to accept it as a good thing.
Replies from: gilch↑ comment by gilch · 2019-09-10T14:54:24.372Z · LW(p) · GW(p)
I wouldn't consider it rational even if natural. You know what else is natural? Smallpox. The Appeal to Nature is generally considered a weak argument. A "natural life" is a stone-age life. You could certainly do worse, but it's not setting the bar very high.
Replies from: Slider↑ comment by Slider · 2019-09-15T23:25:57.539Z · LW(p) · GW(p)
If you think something is bad, you are likely to oppose it or suffer experiencing it.
If you have opposed it for quite a while, then there is inductive proof that opposing it is not effective. Those resources are then not producing anything. You are better off moving resources from opposition to other tasks.
If you experience it often without opposition, thinking that it should not happen to you might make you suffer more. There you can cut your losses by making the adverse event hurt you as little as possible.
With magic baseball bats it is ambiguous how easy they would be to oppose. Smallpox clearly does admit effective opposition.
↑ comment by gilch · 2019-09-10T03:40:19.056Z · LW(p) · GW(p)
A coma where you're semiconscious maybe. You can't get a successor observer-moment to the current one without the "observer". And have you considered more exotic possibilities like Boltzmann brains?
Replies from: Fivehundred↑ comment by Fivehundred · 2019-09-10T04:55:21.294Z · LW(p) · GW(p)
But you still experience things when you sleep, hence are observing. Also, quantum insomnia should exist if you're correct, but it doesn't.
I don't see how a Boltzmann brain spontaneously forming could ever be more likely than existing in a universe with all the infrastructure necessary to support a natural brain - even if that infrastructure beats some amazing odds, it only has to maintain itself. The theory further requires that mind unification be true.
Replies from: gilch↑ comment by gilch · 2019-09-10T15:00:05.246Z · LW(p) · GW(p)
As I said elsewhere [LW(p) · GW(p)], observer-moments need not be contiguous. And I agree that you could count as an observer if you're dreaming ("semiconscious maybe"), but not if you're anesthetized or similarly unconscious. This is probably the case in deep sleep and likely in comatose states.
Replies from: Fivehundred↑ comment by Fivehundred · 2019-09-10T16:38:53.315Z · LW(p) · GW(p)
I've been anesthetized twice. I don't remember any dreams whatsoever, but I had the distant feeling that I did dream upon waking (though they may have happened as the drug was loosening its hold).
The survivors are living in an area that shatters the illusion of classical reality. The survival probabilities favour classical outcomes, so you should expect things to get classical again. In a non-classical universe it might be possible to get a very rapid regeneration that might bounce you far from the torment zone for a long time. Even if you do not get a particularly stellar regeneration, you will constantly be tunneling out of the torment zone too. At some point the tunneling into torment and the tunneling into relief should balance out, where you have a 50% chance of being in a bad scenario and a 50% chance of being in a good scenario. That is, the longer you are sustained in a scenario that classically would be considered bad, the less faith you can have that the mechanics of the scenario will continue to work. It will either resolve into a classical situation different from the current one, or it is such a jumbled mess that "being stuck in a bad place" is not representative.
In general the jumbling might also target your personality, and then the question of how much of your alteration really counts as you starts to get relevant. If you escape via a cunning deduction you made because you thought you were Sherlock Holmes, because cosmic rays fabricated your memories of being him, does that count as Sherlock Holmes waking up, or you, or both? One might need classical mechanics to maintain a sense of identity stability (that is, the you of the next second has a very similar personality), and when that is taken away it is not clear the concept applies with the same strength. Sure, somebody conscious will get to experience a bunch of stuff, and it will be structurally reminiscent of you. But will it really be you?
I used to be heavily into this area, and after succumbing somewhat to an 'it all adds up to normality' shoulder-shrug, my feeling on this is that it's not just the 'environment' that is subject to radical changes, but the mind itself. It might be that there's a kind of mind-state attractor, by which minds tend to move along predictable paths and converge upon weirdness together. All of consciousness may, by different ways of looking at it, be considered as fragments of that end state.
↑ comment by Szymon Kucharski · 2021-01-19T00:08:04.167Z · LW(p) · GW(p)
Imagine a benevolent AI on a universal scale that simulates the greatest achievable number of copies of one specific "life": namely, continuous states from the emergence of consciousness to some form of nirvana. If we assume that during brain death experience gets simpler, eventually reaching the simplest observer-moment (which would be identical for all dying minds), we can ask ourselves what the next observer-moment should be; and if we already have the simplest one, the next should be more complex. If complexity had a tendency to grow, the next moments would be ones in the emerging mind of some creature (a form of multiverse reincarnation, though there is no way to keep memory in such a scenario). We could imagine that some benevolent AI would create a greater measure of one simple, suffering-free life (from the simplest state up to a computationally achievable, simple nirvanic state) in order to minimize the amount of suffering, which would be a form of mind attractor.
Nevertheless, after considering the idea I think it faces a serious objection: namely, it would not be a way to save anyone, because there would be no "person" to be saved.
(Excuse my English.)
You've been basilisked. There is no empirical evidence for MWI, but a number of physicists do believe that it can be something related to reality, with some heavy modifications, since, as stated, it contradicts General Relativity. Sean Carroll, an expert in both Quantum Mechanics and General Relativity, is one of them. Consider reading his blog. His latest article, about the current (pitiful) state of fundamental research in Quantum Mechanics, can be found in the New York Times. His book on the topic is coming out in a couple of days, and it is guaranteed to be a highly entertaining and insightful read, which might also alleviate some of your worries.
↑ comment by Fivehundred · 2019-09-09T12:18:12.410Z · LW(p) · GW(p)
You've been basilisked.
Yes, but how plausible are such scenarios considered? If I die naturally? I don't find AI superintelligence very plausible.
What about that talk of being 'locked in a very unlikely but stable world'? Where is he getting that from?
There is no empirical evidence for MWI, but a number of physicists do believe that it can be something related to reality, with some heavy modifications, since, as stated, it contradicts General Relativity. Sean Carroll, an expert in both Quantum Mechanics and General Relativity, is one of them. Consider reading his blog. His latest article, about the current (pitiful) state of fundamental research in Quantum Mechanics, can be found in the New York Times. His book on the topic is coming out in a couple of days, and it is guaranteed to be a highly entertaining and insightful read, which might also alleviate some of your worries.
Thanks, but I need someone who specifically addresses quantum immortality. Or better yet, a non-celebrity physicist who I can talk to.
EDIT: You claim here [LW(p) · GW(p)] to have a PhD in Physics, so aren't you at least as qualified?
Replies from: shminux↑ comment by Shmi (shminux) · 2019-09-09T14:43:24.828Z · LW(p) · GW(p)
I do have a PhD in Physics, classical General Relativity specifically. But you wanted someone who adheres to MWI, and that is not me.
Some thoughts from Sean Carroll on the topic of Quantum Immortality:
https://www.reddit.com/r/seancarroll/comments/9drd25/quantum_immortality/e5l663t/
And this one from Scott Aaronson:
https://www.scottaaronson.com/blog/?p=2643#comment-1001030
Celebrity or not, both are quite likely to reply to a polite yet anxious email, since they can actually relate to your worries, if maybe not on the same topic.
Replies from: Fivehundred
↑ comment by Fivehundred · 2019-09-09T15:14:39.962Z · LW(p) · GW(p)
But you wanted someone who adheres to MWI, and that is not me.
That would be optimal, but I still would like to hear your thoughts.
Some thoughts from Sean Carroll on the topic of Quantum Immortality:
https://www.reddit.com/r/seancarroll/comments/9drd25/quantum_immortality/e5l663t/
And this one from Scott Aaronson:
https://www.scottaaronson.com/blog/?p=2643#comment-1001030
Celebrity or not, both are quite likely to reply to a polite yet anxious email, since they can actually relate to your worries, if maybe not on the same topic.
Unfortunately, neither of them seems to grasp the argument - the whole point of it is that, as a conscious being, you cannot experience any outcome where you die. So even if your survival is ridiculously improbable in the universal wavefunction, you can't 'wake up dead'. Hence you will always find your subjective self in that improbable branch.
Another terrible thought: what if it doesn't depend on you dying as a whole? What if no part of your consciousness can be removed or degrade?
EDIT: Sleep doesn't refute that as there is no real proof that you experience less when unconscious (rather, you may simply just not be self-aware). But it would imply people with brain damage are P-zombies, so that seems untenable.
Replies from: Vladimir_Nesov, gilch↑ comment by Vladimir_Nesov · 2019-09-09T15:33:01.752Z · LW(p) · GW(p)
Hence you will always find your subjective self in that improbable branch.
The meaning of "you will always find" has a connotation of certainty or high probability, but we are specifically talking about essentially impossible outcomes. This calls for tabooing [LW · GW] "you will always find" to reconcile an intended meaning with extreme improbability of the outcome. Worrying about such outcomes might make sense when they are seen as a risk on the dust speck side of Torture vs. Dust Specks [LW · GW] (their extreme disutility overcomes their extreme improbability). But conditioning on survival seems to be a wrong way of formulating values [LW · GW] (see also [LW · GW]), because the thing to value is the world, not exclusively subjective experience, even if subjective experience manages to get significant part of that value.
Replies from: Fivehundred, TAG↑ comment by Fivehundred · 2019-09-09T15:44:06.814Z · LW(p) · GW(p)
The meaning of "you will always find" has a connotation of certainty or high probability, but we are specifically talking about essentially impossible outcomes.
Why? Nothing is technically impossible with quantum mechanics. It is indeed possible for every single atom of our planet to spontaneously disappear.
This could make sense as a risk on the dust speck side of Torture vs. Dust Specks [LW · GW], but conditioning on survival seems to be just wrong [LW · GW] as a way of formulating values (see also [LW · GW]).
You're not understanding that all of our measure is going into those branches where we survive.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2019-09-09T15:56:34.318Z · LW(p) · GW(p)
Nothing is technically impossible with quantum mechanics.
By "essentially impossible" I meant "extremely improbable". The word "essentially" was meant to distinguish this from "physically impossible".
You're not understanding that all of our measure is going into those branches where we survive.
There is a useful distinction between knowing the meaning of an idea and knowing its truth. I'm disagreeing with the claim that "all of our measure is going into those branches where we survive", understood in the sense that only those branches have moral value (see What Are Probabilities, Anyway? [LW · GW]), in particular the other branches taken together have less value. See the posts linked from the grandparent comment for a more detailed discussion (I've edited it a bit).
This meaning could be different from one you intend, in which case I'm not understanding your claim correctly, and I'm only disagreeing with my incorrect interpretation of it. But in that case I'm not understanding what you mean by "all of our measure is going into those branches where we survive", not that "all of our measure is going into those branches where we survive" in the sense you intend, because the latter would require me to know the intended meaning of that claim first, at which point it becomes possible for me to fail to understand its truth.
Replies from: Fivehundred↑ comment by Fivehundred · 2019-09-09T16:39:11.661Z · LW(p) · GW(p)
By "essentially impossible" I meant "extremely improbable". The word "essentially" was meant to distinguish this from "physically impossible".
I don't see how it refutes the possibility of QI, then.
There is a useful distinction between knowing the meaning of an idea and knowing its truth. I'm disagreeing with the claim that "all of our measure is going into those branches where we survive", understood in the sense that only those branches have moral value (see What Are Probabilities, Anyway? [LW · GW]), in particular the other branches taken together have less value. See the posts linked from the grandparent comment for a more detailed discussion (I've edited it a bit).
This meaning could be different from one you intend, in which case I'm not understanding your claim correctly, and I'm only disagreeing with my incorrect interpretation of it. But in that case I'm not understanding what you mean by "all of our measure is going into those branches where we survive", not that "all of our measure is going into those branches where we survive" in the sense you intend, because the latter would require me to know the intended meaning of that claim first, at which point it becomes possible for me to fail to understand its truth.
According to QI, we (as in our internal subjective experience) will continue on only in branches where we stay alive. Since I care about my subjective internal experience, I wouldn't want it to suffer (if you disagree, press a live clothes iron to your arm and you'll see what I mean).
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2019-09-09T17:06:53.861Z · LW(p) · GW(p)
I don't see how it refutes the possibility of QI, then.
See the context of that phrase. I don't see how it could be about "refuting the possibility of QI". (What is "the possibility of QI"? I don't find anything wrong with QI scenarios themselves, only with some arguments about them, in particular the argument that their existence has decision-relevant implications because of conditioning on subjective experience. I'm not certain that they don't have decision-relevant implications that hold for other reasons.)
[We] (as in our internal subjective experience) will continue on only in branches where we stay alive.
This seems tautologously correct. See the points about moral value in the grandparent comment and in the rest of this comment for what I disagree with, and why I don't find this statement relevant.
Since I care about my subjective internal experience, I wouldn't want it to suffer
Neither would I. But this is not all that people care about. We also seem to care about what happens outside our subjective experience, and in quantum immortality scenarios that component of value (things that are not personally experienced) is dominant.
Replies from: Fivehundred↑ comment by Fivehundred · 2019-09-09T18:29:43.455Z · LW(p) · GW(p)
No, it isn't. The same thing will happen to everyone in your branch (you don't see it, of course, but it will subjectively happen to them).
Perhaps you don't understand what the argument says. You, as in the person you are right now, are going to experience that. Not an infinitesimal proportion of other 'yous' while the majority die. Your own subjective experience, 100% of it.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2019-09-09T18:58:39.025Z · LW(p) · GW(p)
You, as in the person you are right now, are going to experience that.
This has the same issue with "are going to experience" as the "you will always find" I talked about in my first comment [LW(p) · GW(p)].
Not an infinitesimal proportion of other 'yous' while the majority die. Your own subjective experience, 100% of it.
Yes. All of the surviving versions of myself will experience their survival. This happens with extremely small probability. I will experience nothing else. The rest of the probability goes to the worlds where there are no surviving versions of myself, and I won't experience those worlds. But I still value those worlds more than the worlds that have surviving versions of myself. The things that happen to all of my surviving subjective experiences matter less to me than the things that I won't experience happening in the other worlds. Furthermore, I believe that not as a matter of unusual personal preference, but for general reasons about the structure of valuing of things that I think should convince most other people, see the links in the above comments.
Replies from: Fivehundred↑ comment by Fivehundred · 2019-09-09T19:57:18.253Z · LW(p) · GW(p)
To be clear: your argument is that every human being who has ever lived may suffer eternally after death, and there are good reasons for not caring...?
That requires an answer that, at the very least, you should be able to put in your own words. How does our subjective suffering improve anything in the worlds where you die?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2019-09-09T20:19:03.185Z · LW(p) · GW(p)
To be clear: your argument is that every human being who has ever lived may suffer eternally after death, and there are good reasons for not caring...?
It's not my argument, but it follows from what I'm saying, yes. Even if people should care about this, there are probably good reasons not to, just not good enough to tilt the balance. There are good reasons for all kinds of wrong conclusions, it should be suspicious when there aren't [LW · GW]. Note that caring about this too much is the same as caring about other things too little. Also, as an epistemic principle, appreciation of arguments shouldn't depend on consequences of agreeing with them.
How does our subjective suffering improve anything in the worlds where you die?
Focusing effort on the worlds where you'll eventually die (as well as the worlds where you survive in a normal non-QI way) improves them at the cost of neglecting the worlds where you eternally suffer for QI reasons.
Replies from: Fivehundred↑ comment by Fivehundred · 2019-09-09T21:02:20.579Z · LW(p) · GW(p)
...and here's about when I realize what a mistake it was setting foot in Lesswrong again for answers.
Replies from: gilch↑ comment by gilch · 2019-09-10T00:36:25.980Z · LW(p) · GW(p)
I don't buy that argument about sleep, but what about anesthesia? I see no reason why successor observer-moments have to be contiguous in either time or space. They likely will be due to the laws of physics, but we're talking about improbable outcomes here. Your unconscious body is not your successor. It's an inanimate object that has a high probability of generating a successor observer-moment at a later time. (That is, it might wake up as you.)
Some thought experiments that can contradict QI:
- Quantum anesthesia. If the premise of QI is that you can't branch into a non-conscious state, then anesthesia would be impossible as well. And mind you: anesthesia is not sleep, where consciousness is simply diminished. Anesthesia is the total annihilation of consciousness (when properly given, at least). Those of us who have had surgery know this: you don't feel any passage of time during anesthesia. When you wake up, after several hours, you feel like you had just been put under a second ago.
1b) Quantum temporary death. Coming out of temporary unconscious states like temporary death (where your heart can stop beating, and you remain unconscious, for up to several hours) would be impossible, since you would always branch into states of consciousness before any unconsciousness could set in.
- Entropy. Dodging death infinitely is impossible. There might be branches where you die and others where you survive, but even in the ones where you survive, your body is still decaying. To keep decaying forever and never die would simply contradict biology. After a certain threshold of damage, death is inevitable. But would this prolong itself into thousands of years of agony? No. It would occur on normal timelines. Maybe, say, if you get shot in the head, you die instantly in branch A and survive damaged in branch B, but that doesn't mean you're not on your way to death in branch B if the damage is severe enough. If it isn't, you go to the hospital and make a recovery. In short: to keep suffering for eternity simply contradicts biology. No one can live forever, and damage will always lead to death (sooner or later, but on a normal biological timescale related to the degree of damage in each branch).
- Maybe MWI is just bs, lol.
I think, in short, QI is Zeno's paradox. The ancient Greek philosopher Zeno reasoned that for any distance between A and B, you first have to reach the halfway point between A and B before getting to B. Therefore, you can never reach B, since you always have to reach the halfway point first, and there is always a halfway point, no matter how small the distance. This led Zeno to conclude that movement is impossible. In reality, we know that it isn't: you eventually will reach B, and on a normal timescale.
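For what it's worth, the standard resolution is that the infinitely many half-steps take geometrically shrinking amounts of time, so they sum to a finite total. A quick numeric check, assuming a 1-metre distance walked at 1 m/s:

```python
# Each successive half-interval takes half as long as the previous one,
# so infinitely many steps still add up to a finite travel time.
distance, speed = 1.0, 1.0   # assumed: 1 metre, 1 metre per second
remaining = distance
total_time = 0.0
for _ in range(100):         # 100 halvings is "infinity" for all practical purposes
    step = remaining / 2
    total_time += step / speed
    remaining -= step
print(total_time)            # -> 1.0 second, i.e. distance / speed
```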
Even if MWI is true, you eventually will die (in normal timescales), just as you will eventually reach point B (in normal timescales).
4 comments
Comments sorted by top scores.
comment by Fivehundred · 2019-09-09T12:21:24.028Z · LW(p) · GW(p)
What about Tegmark's argument that dying would have to be a binary event in order to experience immortality? If so, wouldn't your consciousness just dissolve? Or can no iota of consciousness be lost?
Replies from: Pattern↑ comment by Pattern · 2019-09-09T17:46:11.117Z · LW(p) · GW(p)
Is winning the lottery a binary event?
Replies from: Fivehundred↑ comment by Fivehundred · 2019-09-09T18:30:25.928Z · LW(p) · GW(p)
Yes, you either lose or you win. Two choices.
Replies from: akram-choudhary↑ comment by Akram Choudhary (akram-choudhary) · 2023-02-04T13:40:22.696Z · LW(p) · GW(p)
it's a 2x2 matrix if you are married tho