Does immortality imply eternal existence in linear time?
post by turchin · 2016-04-17T23:17:52.486Z · LW · GW · Legacy · 37 comments
The question is important, as it is often used as an argument against the idea of immortality, both on the level of desirability and of feasibility. It may result in less interest in radical life extension, since "the result will be the same": we will die. Religion, on the other hand, is not afraid to "sell" immortality, as it has God, who will resolve all contradictions in the implementation of immortality. As a result, religion wins on the market of ideas.
Immortality is, by definition, about not dying. Eternal linear existence seems to follow from it by a very simple and obvious theorem:
“If I exist at moment N and do not die between moments N and N+1, then I exist at moment N+1; hence I exist at every moment N.”
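Spelled out as a simple induction (a minimal sketch, reading "moments" as discrete steps): let $E(N)$ stand for "I exist at moment $N$". Then

$$E(0) \;\wedge\; \forall N\,\big(E(N) \rightarrow E(N+1)\big) \;\Rightarrow\; \forall N\, E(N),$$

where the inductive step $E(N) \rightarrow E(N+1)$ is exactly the assumption that I do not die between moments $N$ and $N+1$.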
If, instead, we prove that immortality is impossible, then any life would look like: now + some unknown, very long time + death. Death would be inevitable, and the only difference would be the unknown time until it happens.
This is an unpleasant prospect, by the way.
So we face either “bad infinity” or inevitable death. Both look unappealing. Both also look logically problematic. “Infinite linear existence” requires an observer with infinite memory, for example. “Death of the observer” implies the idea of an ending of the stream of experiences, which cannot be verified empirically and which, from a logical point of view, is an unproven hypothesis.
But we can change our point of view if we abandon the idea of linear time.
Physics suggests that closed timelike curves may be possible near black holes: https://en.wikipedia.org/wiki/Closed_timelike_curve (Nietzsche’s idea of “eternal recurrence” is an example of such circular immortality.)
If I am on such a curve, my experiences may recur after, say, one billion years. In this case, I am immortal but have a finite duration of experience.
This may not be very appealing, but it is just a starting point for considerations that lead us away from the linear time model.
There may be other configurations in non-linear time. Another obvious one is the merging of different personal timelines.
Another is the circular attractor.
Another is a combination of attractors, merges and circular timelines, which may result in complex geometry.
Another is two-dimensional (or many-dimensional) time, with a second time arrow perpendicular to the first; this gives time a non-trivial topology. Time could also include singularities, in which one has an infinite number of experiences in finite time.
We could also add the idea of splitting time in the quantum multiverse.
We could also add the idea that a possible path exists between any two observer-moments; given that infinitely many such paths exist in a splitting multiverse, any observer has a non-zero probability of becoming any other observer, which results in a tangle of time-like curves in the space of all possible minds.
Ideas from timeless physics give yet another view of “time”, in which we do not have “infinite time”, not because infinity is impossible, but because there is no such thing as time at all.
TL;DR: The idea of time is complex enough that we cannot state that immortality implies eternal linear existence. The two ideas may be true or false independently.
I also have a question for readers: if you think a superintelligence will be created, do you think it will be immortal, and why?
37 comments
Comments sorted by top scores.
comment by Kyre · 2016-04-20T04:46:08.193Z · LW(p) · GW(p)
If we take "immortality" to mean "infinitely many distinct observer moments that are connect to me through moment-to-moment identity", then yes, by Konig's Lemma.
(Every infinite graph with finite-degree verticies has an infinite path)
(edit: hmmm, does many-worlds give you infinite-branching into distinct observer moments ?)
Replies from: woodchopper↑ comment by woodchopper · 2016-04-24T17:38:30.583Z · LW(p) · GW(p)
Can you elaborate on the concept of a connection through "moment-to-moment identity"? Would for example "mind uploading" break such a thing?
Replies from: Kyre↑ comment by Kyre · 2016-04-26T05:42:13.850Z · LW(p) · GW(p)
Heh, that was really just me trying to come up with a justification for shoe-horning a theory of identity into a graph formalism so that König's Lemma applied :-)
If I were to try to make a more serious argument it would go something like this.
Defining identity, that is, whether two entities are 'the same person', is hard. People have different intuitions. But most people would say that 'your mind now' and 'your mind a few moments later' do constitute the same person. So we can define a directed graph with vertices as mind states (mind states would probably have been better than 'observer moments') and outgoing edges leading to mind states a few moments later.
That is kind of what I meant by "moment-by-moment" identity. By itself it is a local but not a global definition of identity. The transitive closure of that relation gives you a global definition of identity. I haven't thought about whether it's a good one.
In the ordinary course of events these graphs aren't very interesting; they're just chains coming to a halt upon death. But if you were to clone a mind-state and put it into two different environments, then that would give you a vertex with out-degree greater than one.
So mind-uploading would not break such a thing, and in fact without being able to clone a mind-state, the whole graph-based model is not very interesting.
Also, you could have two mind states that lead to the same successor mind state - for example, where two mind states differ only in a few memories, which are then forgotten. The possibility of splitting and merging gives you a general (directed) graph-structured identity.
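A minimal sketch of that graph model in code (the mind-state names and transitions below are purely illustrative, not any real representation of a mind):

```python
from collections import defaultdict

class IdentityGraph:
    """Directed graph of mind states: an edge u -> v means v is a mind
    state a few moments after u (the moment-to-moment identity relation)."""

    def __init__(self):
        self.successors = defaultdict(set)

    def add_transition(self, state, next_state):
        self.successors[state].add(next_state)

    def out_degree(self, state):
        return len(self.successors[state])


g = IdentityGraph()

# Ordinary life: a simple chain of mind states that halts at death.
g.add_transition("m0", "m1")

# Splitting: cloning a digitized mind state gives out-degree > 1.
g.add_transition("m1", "m2a")
g.add_transition("m1", "m2b")

# Merging: two states that differ only in soon-forgotten memories
# lead to the same successor state, giving in-degree > 1.
g.add_transition("m2a", "m3")
g.add_transition("m2b", "m3")

print(g.out_degree("m1"))  # 2: a split
```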
(On a side-note, I think people generally treat splitting and merging of mind states in a way that is way too symmetrical. Splitting seems far easier - trivial once you can digitize a mind-state. Merging would be like a complex software version-control problem, and you'd need to apply selective amnesia very carefully to achieve it.)
So, if we say "immortality" is having an identity graph with an infinite number of mind-states all connected through the "moment-by-moment identity" relation (stay with me here), and mind states only have a finite number of successor states, then there must be at least one infinite path, and therefore "eternal existence in linear time".
Rather contrived, I know.
Replies from: woodchopper↑ comment by woodchopper · 2016-04-27T15:41:44.185Z · LW(p) · GW(p)
So, the graph model of identity sort of works, but I feel it doesn't quite get to the real meat of identity. I think the key is in how two vertices of the identity graph are linked and what it means for them to be linked. Because I don't think the premise that a person is the same person they were a few moments ago is necessarily justified, and in some situations it doesn't meld with intuition. For example, a person's brain is a complex machine; imagine it were (using some extremely advanced technology) modified seriously while a person was still conscious. So, it's being modified all the time as one learns new information, has new experiences, takes new substances, etc, but let's imagine it was very dramatically modified. So much so that over the course of a few minutes, one person who once had the personality and memories of, say, you, ended up having the rough personality and memories of Barack Obama. Could it really be said that it's still the same identity?
Why is an uploaded mind necessarily linked by an edge to the original mind? If the uploaded mind is less than perfect (and it probably will be; even if it's off by one neuron, one bit, one atom) and you can still link that with an edge to the original mind, what's to say you couldn't link a very, very dodgy 'clone' mind, like for example the mind of a completely different human, via an edge, to the original mind/vertex?
Some other notes: firstly, an exact clone of a mind is the same mind. This pretty much makes sense. So you can get away from issues like 'if I clone your mind, but then torture the clone, do you feel it?' Well, if you've modified the state of the cloned mind by torturing it, it can no longer be said to be the same mind, and we would both presumably agree that me cloning your mind in a far away world and then torturing the clone does not make you experience anything.
comment by OrphanWilde · 2016-04-18T15:28:59.398Z · LW(p) · GW(p)
Given that the human mind has a finite number of states, any given linear-immortal being must, at some point in time, start repeating states, and so becomes indistinguishable from immortality with finite time duration.
Replies from: turchin↑ comment by turchin · 2016-04-18T19:00:03.338Z · LW(p) · GW(p)
I could become non-human, with an arbitrarily large mind. But if we assume that a finite upper limit on possible mind size exists, say no more than 1 exabyte, then your objection works.
If we assume that there is no upper limit on the complexity of possible minds and AIs, then your objection doesn't work.
But some form of it may still be true, because the larger the mind, the slower it works: a galaxy-sized mind would take around 100,000 years to think a single thought. So this may result in some kind of limit on the actual size of minds, but more calculation is needed.
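A rough illustration of that limit, taking the Milky Way's diameter of about 100,000 light-years as the size of the mind: a signal crossing the whole mind needs at least

$$t \;\ge\; \frac{D}{c} \;\approx\; \frac{10^{5}\ \text{light-years}}{c} \;=\; 10^{5}\ \text{years},$$

so even a single globally coordinated thought could not complete faster than roughly 100,000 years.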
comment by Deku-shrub · 2016-04-23T19:20:14.863Z · LW(p) · GW(p)
Immortalism is losing popularity within H+ due to its popular interpretation as 'forced immortality', a trope well explored in fiction. :/
Replies from: turchin↑ comment by turchin · 2016-04-23T20:35:33.908Z · LW(p) · GW(p)
And what is gaining traction? To become a God using FAI?
Replies from: Deku-shrub↑ comment by Deku-shrub · 2016-04-24T09:06:20.360Z · LW(p) · GW(p)
Asking me to sum up the current state of the H+ movement is tricky because I track the entire thing on H+Pedia.
Atheism is growing very strongly, due to the influence of Zoltan Istvan, whereas I believe Aubrey de Grey is toning down some of his immortalist rhetoric in favour of alternative terms like "Regenerative Medicine" and the like.
There are all kinds of trends I could comment on.
Replies from: turchin
comment by woodchopper · 2016-04-23T11:58:00.085Z · LW(p) · GW(p)
What does it mean to be immortal? We haven't solved key questions of personal identity yet. What is it for one personal identity to persist?
Replies from: turchin↑ comment by turchin · 2016-04-23T13:14:34.985Z · LW(p) · GW(p)
It is a good question. The problem of personal identity is one of the most complex, like aging. I am working on a map of identity solutions, and it is very large.
If we decide that identity has definition I, then death is the abrupt disappearance of I, and immortality is the idea that death never happens. It seems that this definition of immortality doesn't depend on the definition of identity.
But practically, the more fragile identity is, the more probable death is.
Replies from: woodchopper↑ comment by woodchopper · 2016-04-24T17:31:58.589Z · LW(p) · GW(p)
The thing is, I'm just not sure if it's even a reasonable thing to talk about 'immortality' because I don't know what it means for one personal identity ('soul') to persist. I couldn't be sure if a computer simulated my mind it would be 'me', for example. Immortality will likely involve serious changes to the physical form our mind takes, and once you start talking about that you get into the realm of thought experiments like the idea that if you put someone under a general anaesthetic, take out one atom from their brain, then wake them up, you have a similar person but not the one who originally went under the anaesthetic. So from the perspective of the original person, undergoing their operation was pointless, because they are dead anyway. The person who wakes from the operation is someone else entirely.
I guess I'm just trying to say that immortality makes heaps of sense if we can somehow solve the question of personal identity, but if we can't, then 'immortality' may be pretty nonsensical to talk about, simply because if we cannot say what it takes for a single 'soul' to persist over time, the very concept of 'immortality' may be ill-defined.
I like your post about the heat death of the universe; if you ever figure anything out regarding the persistence of personal identity, I'd like you to message me or something.
Replies from: qmotus↑ comment by qmotus · 2016-04-24T18:15:42.607Z · LW(p) · GW(p)
Isn't it purely a matter of definition? You can say that a version of you differing by one atom is you or that it isn't, or that a simulation of you either is or isn't you, but there's no objective right answer. It is worth noting, though, that if you don't tell the different-by-one-atom version, or the simulated version, about the fact, they would probably never question being you.
Replies from: woodchopper↑ comment by woodchopper · 2016-04-25T07:35:49.611Z · LW(p) · GW(p)
If there's no objective right answer, then what does it mean to seek immortality? For example, if we found out that a simulation of 'you' is not actually 'you', would seeking immortality mean we can't upload our minds to machines and have to somehow figure out a way to keep the pink fleshy stuff that is our current brains around?
If we found out that there's a new 'you' every time you go to sleep and wake up, wouldn't it make sense to abandon the quest for immortality as we already die every night?
(Note, I don't actually think this happens. But I think the concept of personal identity is inextricably linked to the question of how separate consciousnesses, each feeling their own qualia, can arise.)
Replies from: qmotus↑ comment by qmotus · 2016-04-25T10:18:41.485Z · LW(p) · GW(p)
If there's no objective right answer, you can just decide for yourself. If you want immortality and decide that a simulation of 'you' is not actually 'you', I guess you ('you'?) will indeed need to find a way to extend your biological life. If you're happy with just the simulation existing, then maybe brain uploading or FAI is the way to go. But we're not going to "find out" the right answer to those questions if there is no right answer.
But I think the concept of personal identity is inextricably linked to the question of how separate consciousnesses, each feeling their own qualia, can arise.
Are you talking about the hard problem of consciousness? I'm mostly with Daniel Dennett here and think that the hard problem probably doesn't actually exist (but I wouldn't say that I'm absolutely certain about this), but if you think that the hard problem needs to be solved, then I guess this identity business also becomes somewhat more problematic.
Replies from: woodchopper↑ comment by woodchopper · 2016-04-25T10:48:07.036Z · LW(p) · GW(p)
I think consciousness arises from physical processes (as Dennett says), but that's not really solving the problem or proving it doesn't exist.
Anyway, I think you are right that if you decide being mind-uploaded does or does not constitute continuing your personal identity, it's hard to say you are wrong. However, what if I don't actually know whether it does, yet I want to be immortal? Then we have to study the question to figure out which things we can do keep the real 'us' existing and which don't.
What if the persistence of personal identity is a meaningless pursuit?
Replies from: qmotus↑ comment by qmotus · 2016-04-25T11:27:36.424Z · LW(p) · GW(p)
Let's suppose that the contents of a brain are uploaded to a computer, or that a person is anesthetized and a single atom in their brain is replaced. What exactly would it mean to say that personal identity doesn't persist in such situations?
Replies from: woodchopper↑ comment by woodchopper · 2016-04-25T14:59:17.736Z · LW(p) · GW(p)
So, let's say you die, but a superintelligence reconstructs your brain (using new atoms, but almost exactly to specification), but misplaces a couple of atoms. Is that 'you'?
If it is, let's say the computer then realises what it did wrong and reconstructs your brain again (leaving its first prototype intact), this time exactly. Which one is 'you'?
Let's say the second one is 'you', and the first one isn't. What happens when the computer reconstructs yet another exact copy of your brain?
If the computer told you it was going to torture the slightly-wrong copy of you (the one with a few atoms missing), would that scare you?
What if it was going to torture the exact copy of you, but only one of the exact copies? There's a version of you not being tortured, what's to say that won't be the real 'you'?
Replies from: qmotus↑ comment by qmotus · 2016-04-25T17:17:05.557Z · LW(p) · GW(p)
Maybe; it would probably think so, at least if it wasn't told otherwise.
Both would probably think so.
All three might think so.
I find that a bit scary.
Wouldn't there, then, be some copies of me not being tortured and one that is being tortured?
↑ comment by woodchopper · 2016-04-27T13:30:58.140Z · LW(p) · GW(p)
Wouldn't there, then, be some copies of me not being tortured and one that is being tortured?
If I copied your brain right now, but left you alive, and tortured the copy, you would not feel any pain (I assume). I could even torture it secretly and you would be none the wiser.
So go back to the scenario - you're killed, there are some exact copies made of your brain and some inexact copies. It has been shown that it is possible to torture an exact copy of your brain while not torturing 'you', so surely you could torture one or all of these reconstructed brains and you would have no reason to fear?
Replies from: qmotus↑ comment by qmotus · 2016-04-27T15:29:41.851Z · LW(p) · GW(p)
If I copied your brain right now, but left you alive, and tortured the copy, you would not feel any pain (I assume). I could even torture it secretly and you would be none the wiser.
Well.. Let's say I make a copy of you at time t. I can also make them forget which one is which. Then, at time t + 1, I will tickle the copy a lot. After that, I go back in time to t - 1, tell you of my intentions and ask you if you expect to get tickled. What do you reply?
Does it make any sense to you to say that you expect to experience both being and not being tickled?
comment by Chacreton190 · 2016-04-20T04:40:36.640Z · LW(p) · GW(p)
The idea of immortality always brings up the logical holes in religious beliefs. For example, if God is immortal and all-powerful, existing in the past, present and future, he would by definition be outside of time. Why would a being outside of time care when or where we die, or about anything else that could happen to us? To it, we would at once never die and always be dead.
Plus, the first 100 years in heaven sound good, but the next 2 billion...
comment by turchin · 2016-04-19T10:22:23.891Z · LW(p) · GW(p)
There is another important difference between the words "immortality" and "indefinite life extension": their relation to a hypothetical event of resurrection, such as my reconstruction by a future AI. Immortality includes it, but life extension seems to speak only of continuous existence.
comment by WalterL · 2016-04-18T16:59:26.470Z · LW(p) · GW(p)
Seems like nothing will ever be immortal. Second Law of Thermodynamics and all.
Replies from: Deku-shrub, turchin↑ comment by Deku-shrub · 2016-04-23T19:21:16.925Z · LW(p) · GW(p)
Immortality comes with all kinds of definitions; literal immortality sounds more like a Type-X civilization problem to me :)
↑ comment by turchin · 2016-04-18T19:01:46.298Z · LW(p) · GW(p)
I discussed ways to survive the end of the universe elsewhere: http://lesswrong.com/lw/mfa/a_roadmap_how_to_survive_the_end_of_the_universe/
comment by entirelyuseless · 2016-04-18T15:02:11.039Z · LW(p) · GW(p)
I think "both look unappealing" because neither one makes a good story.
The right answer is to stop hoping that your life is going to make a good story: life is not a story at all.
comment by ike · 2016-04-18T00:55:42.279Z · LW(p) · GW(p)
I've heard it argued that given the assumption of infinitely divisible time, one can theoretically achieve all the purported benefits of immortality in a finite amount of time, using a derivative of Zeno's paradox.
Replies from: turchin, MrMind↑ comment by turchin · 2016-04-18T01:22:38.938Z · LW(p) · GW(p)
I think you may be referring to Tipler's Omega Point. John Smart also had similar ideas, as I remember, when he said that civilization would evolve into smaller and smaller entities running at higher and higher speeds, so that the technological singularity would also be a physical singularity.
For now we can't say whether time is infinitely divisible; Planck time may be the limit.
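One way to spell out the Zeno-style construction: if the $n$-th subjective moment is run in $T/2^n$ of physical time, then infinitely many subjective moments fit into

$$\sum_{n=1}^{\infty} \frac{T}{2^n} = T$$

of physical time; a Planck-time floor of about $5.4 \times 10^{-44}$ s would cut the series off after finitely many steps.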
comment by Shmi (shminux) · 2016-04-18T00:13:11.539Z · LW(p) · GW(p)
A more interesting question for me is that of a silent 't': Does immortality imply immorality?
Replies from: turchin↑ comment by turchin · 2016-04-18T00:33:17.685Z · LW(p) · GW(p)
We may try to quantify it. If an agent creates a virus which has a 10 per cent chance of giving him immortality and a 1 per cent chance of resulting in human extinction, is it moral for him to proceed? Clearly not, even from a selfish point of view. And if we have 1000 such agents, extinction is practically inevitable.
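A quick check of that claim, assuming the 1 per cent risks are independent: the probability that none of the 1000 viruses causes extinction is

$$0.99^{1000} \approx 4.3 \times 10^{-5},$$

so extinction happens with probability of about 99.996 per cent.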
So, clearly, an aggressive and selfish quest for immortality is immoral and would turn a person into a social cancer cell. But in reality the situation is the opposite.
You need to give immortality to as many people as possible if you want it to be a tested, cheap, and predictable technology. Think about the iPhone: it is cheap, high quality, and reliable because of economies of scale.
So I think that fighting for life extension is the second most important and positive thing after the prevention of x-risks (and it seems to be underestimated by EA).