Identity and Death

post by Tenoke · 2014-02-18T11:35:49.393Z · LW · GW · Legacy · 56 comments

This recent SMBC comic illustrates the old question of what exactly 'you' is by referencing the Star Trek Teleporter Problem: do you actually get teleported, or does the teleporter just kill you and then make a copy of you somewhere else?

Well, the answer that a lot of rationalists seem to accept is Pattern Identity Theory, proposed by Hans Moravec (skim the link or do a Google search for the theory if you have no idea what I am referring to). I am very sympathetic to this view, and it definitely ties in with my limited understanding of physics and biology - elementary particles are interchangeable and do not have 'identity', at least some of the atoms in your body (including some of those that form neurons) get replaced over time, etc.


This is all fine and dandy, but if you take this view to its logical extreme, it looks like a sufficiently modified version of you shouldn't actually qualify as you - the difference in the pattern might be as great as or greater than the difference between the patterns of any two random people.

Let's say something happens to Eliezer and he gets successfully cryo-preserved in 2014. Then 80 years later the singularity hasn't arrived yet, but the future is still pretty good - everyone is smart and happy due to enhancements, ageing is a thing of the past, and we have the technology to wake cryopreserved people up. The people in that future build Eliezer a new body, restore the information from his brain, apply all the standard enhancements to him, and then wake him up. The person who wakes up remembers all that good old Eliezer did and seems to act like you would expect an enhanced Eliezer to act. However, if you examine things closely the difference between 2014!Eliezer and 2094!Eliezer is actually bigger than the difference between 2014!Eliezer and let's say 2014!Yvain due to having all the new standard enhancements. Does that person really qualify as the same person according to Pattern Identity Theory, then? Sure, he originates from Eliezer, and arguably the difference between the two is similar to the difference between kid!Eliezer and adult!Eliezer, but is it really the same pattern? If you believe that you really are the pattern, then how can you not think of 2014!Eliezer as a dead man?

Sure, you could argue that continual change (as opposed to the sudden change in the cryo!Eliezer scenario) or 'evolution of the pattern' is in some way relevant, but why would that be? The only somewhat reasonable argument for that I've seen is 'because it looks like this is what I care about'. That's fine with me, but my personal preference is closer to 'I want to continue existing and experiencing things'; I don't care if anything that looks like me or thinks it's me is experiencing stuff - I want me (whatever that is) to continue living and doing stuff. And so far it looks really plausible that 'me' is the pattern, which sadly leaves me to think that maybe changing the pattern is a bad idea.

I know that this line of thinking can damn you to eternal stagnation, but it seems worth exploring before teleporters, uploading, big self-enhancements etc. come along, which is why I am starting this discussion. Additionally, part of the problem might be that there is some confusion about definitions going on, but I'd like to see where. Furthermore, 'the difference in the pattern' seems hard to quantify and, more importantly, it doesn't look like something that could have a clear cut-off, as in 'if the pattern differs by more than 10% you are a different person'. At any rate, whatever that cut-off is, it still seems pretty clear that tenoke!2000 differs enough from me to be considered dead.

As an exercise at home I will leave you to think about what this whole line of thinking implies if you combine it with MWI-style quantum immortality.

56 comments

Comments sorted by top scores.

comment by cousin_it · 2014-02-18T23:44:44.902Z · LW(p) · GW(p)

Most people imagine their past and future selves as links in a continuous chain, each one connected to the previous one and the next one. It seems to me that the structure might be more like a singly linked list: each moment has a "pointer" to the previous one, by virtue of remembering it, but there's no corresponding pointer to the next moment, and in fact there's no unique next moment, because we are branching all the time.

(In functional programming languages, different linked lists can "share structure" by having pointers to the same tail. That idea also works for other data structures like trees, as described in Chris Okasaki's book "Purely Functional Data Structures". That's a digression, but a really interesting one.)
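For illustration, here is a minimal sketch (in Haskell, with made-up names) of the structural sharing idea from the digression above: two lists "branch" off the same tail without copying it, much like two future moments sharing one remembered past.

```haskell
-- A minimal sketch of structural sharing with Haskell's built-in singly
-- linked lists. The names (past, futureA, futureB) are illustrative only.

-- A shared "past": a list of remembered moments, most recent first.
past :: [String]
past = ["moment 3", "moment 2", "moment 1"]

-- Two "futures" branching off the same past. Consing a new head onto an
-- existing list copies nothing: both futures point at the very same tail.
futureA :: [String]
futureA = "moment 4a" : past

futureB :: [String]
futureB = "moment 4b" : past

main :: IO ()
main = do
  print futureA  -- ["moment 4a","moment 3","moment 2","moment 1"]
  print futureB  -- ["moment 4b","moment 3","moment 2","moment 1"]
```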

So why do we feel that we have a pointer to the next moment, or a probability distribution over next moments? I think it's because we learn by induction. Looking at our memory, we see that each moment had a previous one and a next one, so we think the same must be true for the present moment as well. But it seems to me that the whole picture can be understood by just looking at links to previous moments, and nature doesn't need to have any special laws about links to next moments, "observer fluid" and the like.

One of the benefits of such a "memory identity theory", compared to "pattern identity theory", is that it makes it easier to imagine merging two creatures. A merged creature is simply a creature that remembers being both of its predecessors. Questions about teleportation and cloning get similarly simple-minded answers. The big remaining problem is where observed frequencies come from. Maybe there's a big probability distribution over all observer-moments ("ASSA"), or maybe there's something else.

Replies from: torekp
comment by torekp · 2014-02-19T00:59:13.518Z · LW(p) · GW(p)

Your comment contains an excellent point that can stand independently of Many Worlds (branching all the time). Namely, memory explains anticipation. Anticipation feels like a pointer to the next moment, but it's just an inference based on a long sequence of memories.

There is nothing wrong with anticipating a future experience, but there is also no constraint against anticipating other future experiences as well. And most of us often do; we call that "empathy". We have a much more reliable history of knowing how correct/incorrect our anticipations about our own body were, though, and much less ability to ignore those outcomes. So self-concern feels different. But in some sense, self-concern is empathy for our future selves.

Replies from: cousin_it
comment by cousin_it · 2014-02-19T13:18:57.451Z · LW(p) · GW(p)

Just a nitpick, you don't need many-worlds to have branching. Even in a classical world, if it's large enough, there will be creatures with identical memories but different futures.

Replies from: torekp
comment by torekp · 2014-02-19T22:17:51.929Z · LW(p) · GW(p)

Right, as long as you don't require causal connection for branching. To my mind "branching" suggests a causal connection, but the OP favors a pattern identity theory, so causal connection may be irrelevant.

comment by Brillyant · 2014-02-18T15:47:15.001Z · LW(p) · GW(p)

There is no reason I see to believe in the 'identity' of physical things—especially not at the level of a human being. We're conscious, and aware of that consciousness. We enjoy/suffer from a persistent delusion of 'self'. We know we're gonna die and we've evolved in such a way that we are programmed to fear and avoid it.

Our mortality + 'self' delusion + consciousness makes us say stuff like:

I want me (whatever that is) to continue living and doing stuff.

comment by jobe_smith · 2014-02-18T16:05:10.039Z · LW(p) · GW(p)

I think that most people are able to at least implicitly bite the bullet of continuous personal change. And that is why they apply some reasonable temporal discounting. Here is an SMBC that explains the basic concept. One of the very weird things about LWers is their aversion to discounting. Eliezer even wrote an emotional post about it once. Normal people can discount the preferences of their future selves, current others, and future others in a way that saves them a lot of ethical/philosophical headaches.

comment by chaosmage · 2014-02-18T12:28:50.057Z · LW(p) · GW(p)

All that confusion arises from the attempt to find identity between physical objects. I don't think there's any identity outside of minds. Identity is a "="-relation between two representations, not two actual physical things.

That has its own problems. If a Tibetan thinks the Dalai Lama is in some essential way identical to some previous holder of that office, that's the same kind of judgement as myself believing I'm identical to the memory of myself from yesterday. I lose any objective character of identity.

But at least I don't need to have identity "emerge" (or something) from patterns. And many of the questions of the type in the SMBC comic are easily answered this way: Whether persons are "identical" before and after teleportation (or any other change, including flossing) becomes an entirely subjective judgement, with no objective truth that might be violated.

Replies from: Viliam_Bur, Tenoke
comment by Viliam_Bur · 2014-02-18T16:10:10.799Z · LW(p) · GW(p)

Nice thing about the Dalai Lama example is that he probably gets strong social pressure to believe he is a continuation of his previous incarnation. I don't know about the details of his office, but if I really believed in reincarnation, I would prepare a lot of notes for my future incarnation... something like what every reasonable person would prepare for themselves if they expected to have a sudden memory loss at some moment in the near future, assuming they would want to follow their original plans. (Perhaps how much of this Dalai Lama really does could be used as a measure of how much he literally believes in his reincarnation.)

There are also social pressures in the opposite direction, just smaller ones. People are supposed to "change" during various rituals, not necessarily religious ones: finishing university and having a title added to your name is a secular example. At work or in the military, when you change your position in the hierarchy, it's not only about what you do, but also how you behave towards others, and how the others behave towards you: so it's like a minor surgery on your personality.

I am curious whether (as a part of a mad science experiment) it would be possible to create an opposite of the Dalai Lama effect; to create a Monday Man who would believe he is a different person than the Sunday Man, despite having the same body. More precisely, how far you could get, using only beliefs and social pressure on a neurotypical person.

To some degree this experiment was already done, by various cults. The cultists are supposed to believe they are someone more or less different than they were before; they even use "born again" to describe the change. But it seems like the old personality continues to exist -- at least some descriptions of "deprogramming" claim that if a cultist is kidnapped, prevented from contacting other cultists, and prevented from doing their mental rituals (e.g. if you remove all the external and internal pressure towards the cult), that is usually enough for the old personality to reappear. If this is true, I would consider this an unsuccessful change. A successful change would be one where the new person is completely free to do whatever they want, and they still naturally remain the new person. Even better, if the Sunday Man could regularly become a Monday Man every week, and then regularly change back again, both personalities having their own lives.

One obvious problem is memory. It stays with the body. The Dalai Lama can partially copy it from the old body to the new body, using notes. But you cannot keep two separate instances for the Sunday Man and Monday Man. Could people believe (if this is what their culture would tell them) that they can have a memory of another person in their head, but it doesn't make them identical with the person? Would the Monday Man accept that he has Sunday Man's memories, but he is a different person? The environment could be different, e.g. the Monday Man would live in a different house, wear a different uniform, meet different people, and the people would behave differently...

In other words, if you are not insane, you are probably not experimenting enough.

Replies from: chaosmage
comment by chaosmage · 2014-02-18T17:06:55.294Z · LW(p) · GW(p)

Something like the Monday Man already exists. In African and Afro-American religions such as Voodoo and Candomblé, people who get possessed say that whatever was moving their body in that time was not them, but some other entity. They frequently claim amnesia about the event, saying their normal non-possessed selves were "not there" to even notice what was happening. I don't know if anyone has done experiments to see whether they actually lack access to memories of that period, or whether they're merely denying them.

Of course, these religions, and particularly possession states, often involve great amounts of strong alcohol, so maybe the amnesia thing isn't so far-fetched.

Replies from: None
comment by [deleted] · 2014-02-19T01:19:18.762Z · LW(p) · GW(p)

A lot of the time the only language that really exists to talk about these things is the language of shamanism, occultists, and ritual magic, because psychology doesn't really go there much in professional medicine, and I'm not sure that the concept of 'genuine amnesia' is terribly useful in this context. These things happen; it doesn't matter that there's 'just' a material human brain and nervous system doing it - people can and do come under the control and influence of what is experienced as agency that is not 'their own' under many circumstances. I know someone who is as reductionist/materialist as they get, but who through various methods has been known to channel Carl Sagan (and have interesting conversations with him about how odd it is to speak with him that way) and has been repeatedly briefly possessed by Hindu gods. She knows it's entirely coming from the operation of her own nervous system, but she thinks that if she has a complete enough identity to 'invite in', it is actually meaningful to say that something that comes from her mouth when she is in such an altered state comes 'from' that identity and not from her. To loosely quote her, "I'm pretty darn sure they're a function of my nervous system weirdness, but it just doesn't matter what the gods are when they come calling."

comment by Tenoke · 2014-02-18T12:36:30.784Z · LW(p) · GW(p)

Whether persons are "identical" before and after teleportation (or any other change, including flossing) becomes an entirely subjective judgement, with no objective truth that might be violated.

Sure. I'm fine with learning the subjective answer - would I continue to experience things in this situation? The teleported me certainly will, but does that mean that the pre-teleportation me will actually subjectively experience anything? (If you believe in Pattern Identity Theory, then substitute the teleportation with immense self-enhancement.)

Replies from: chaosmage
comment by chaosmage · 2014-02-18T13:07:49.917Z · LW(p) · GW(p)

Sort of, but it's the other way around: It will not first "be you" and then have subjective experiences. Instead, it will have subjective experiences and then consider itself identical to you.

Suppose the teleportation (or enhancement or flossing or whatever) leaves a functioning brain. A functioning brain will have experiences. If it also has memories of previous experiences, it will correlate the new ones with the previous ones and come up with the complex network of circular representations we like to simplify into a quasi-monolithic object we call the mind - like it does when you awaken from deep sleep. If in those memories is contained a habit of referring to itself as you, the post-teleportation (or post-whatever) mind will continue to refer to itself as you. It'll think it is you, which is nothing less than what you do.

Replies from: Tenoke
comment by Tenoke · 2014-02-18T13:20:12.123Z · LW(p) · GW(p)

It'll think it is you, just like you do.

Yeah, no doubt there. However, I don't really care if something identical to me thinks it's me - I care whether the me right now (which is to be teleported/copied in some years) will itself continue to experience things after the teleportation occurs (and the answer is yes if you believe in Pattern Identity Theory and no if you believe Folk Identity Theory).

Replies from: TheOtherDave, chaosmage
comment by TheOtherDave · 2014-02-18T18:00:58.005Z · LW(p) · GW(p)

The you-right-now (which I label U1 for clarity) won't even continue to experience things after you finish reading this comment. Some other entity, very similar to but not quite identical to U1, will be experiencing things instead... call it U2.

Fortunately, neither U1 nor U2 consider the difference between U1 and U2 particularly important, so both entities will agree that identity was preserved.

And I agree with chaosmage here: that's all there is to say about that. There is no special objectively correct essence of youness that can be preserved or fail to be preserved; there are simply two systems at different times that share some properties and fail to share others.

Which of those properties we consider definitive of usness is not the sort of thing we can be wrong about, any more than we can be wrong about our own aesthetic judgments.

Replies from: Tenoke
comment by Tenoke · 2014-02-18T18:49:15.224Z · LW(p) · GW(p)

Fortunately, neither U1 nor U2 consider the difference between U1 and U2 particularly important, so both entities will agree that identity was preserved.

We both do, actually, and we are both not very impressed that such a large number of entities similar to us (including us) are dying. And if I really accept that this is the case (as it seems to be) then most of the reason for wanting to stay alive at all seems to logically vanish.

Replies from: torekp, TheOtherDave
comment by torekp · 2014-02-19T00:32:19.196Z · LW(p) · GW(p)

And if I really accept that this is the case (as it seems to be) then most of the reason for wanting to stay alive at all seems to logically vanish.

Not exactly. What's called for is a reinterpretation of your values, given that you have previously couched them in incoherent terms (insofar as those terms presuppose a metaphysical fact of "identity" that got shaved away by Occam's Razor).

A good place to start in that reinterpretation is with TheOtherDave's questions about the vast set of possible U2 states.

comment by TheOtherDave · 2014-02-18T22:20:18.450Z · LW(p) · GW(p)

I wouldn't exactly describe what U1 and U2 are doing as "dying", any more than if U1 could somehow continue to exist in perpetuity -- if you were frozen in stasis forever, for example, such that you never got to the end of this comment -- I would exactly describe that as "living". Our normal understanding of life and death is defined by the continual transition between one state and another; those terms don't apply too readily to conditions like indefinite stasis.

But, terminology notwithstanding, if the passage of time constitutes the destruction of the same value that using a hypothetical transporter does, I'm not sure how your original thought experiment holds up. Why not use the transporter, in that case? Refusing to doesn't actually preserve anything.

As for reasons to stay alive... well, that depends on what we value.

There's a vast set of possible histories. In some of them U1 ceases to exist and U2 comes into existence (what we normally call "you stay alive"), in others U1 ceases to exist and U2 doesn't come into existence (what we normally call "you die"). Do you have a preference between those?

A different way of putting it: there's a vast set of possible U2s. Some of them are living beings and some of them are corpses. Do you have a preference between those?

EDIT: Whoops! I just realized that I got your OP confused with someone else's comment. Ignore the stuff about the transporter...

comment by chaosmage · 2014-02-18T13:56:21.669Z · LW(p) · GW(p)

How do you distinguish between "me" and "something identical to me"? You're implying it can be done, but I really don't see how. As soon as you find a difference, that something isn't identical to you anymore.

Replies from: Tenoke
comment by Tenoke · 2014-02-18T14:04:16.866Z · LW(p) · GW(p)

How do you distinguish between "me" and "something identical to me"?

We are just going in circles now. Yes, I believe that too... which is why this post is about arguing whether changing yourself sufficiently is 'killing yourself'... since there are some observable differences between 'me' and 'enhanced me'.

Or to put it in another way (a bit of a false dichotomy) - you either kill yourself when you 'teleport' as the 'original' is no longer there or alternatively you are data and you kill yourself when you change that data significantly.

Replies from: chaosmage
comment by chaosmage · 2014-02-18T14:54:37.848Z · LW(p) · GW(p)

I don't think we're going in circles. It is just that problems related to the Anthropic Trilemma aren't easy.

Pattern Identity Theory does not have a distinction between "me" and "something identical to me". You believe in the existence of such a distinction, so you want Pattern Identity Theory to not be true. So you are, quite rightly, pointing out the absurdities of Pattern Identity Theory: sufficient changes being like "killing yourself" and other such nonsense.

I agree Pattern Identity Theory is false, if for entirely different reasons. I do not agree that the falsehood of Pattern Identity Theory means that the distinction exists.

Replies from: Tenoke
comment by Tenoke · 2014-02-18T14:58:29.443Z · LW(p) · GW(p)

You believe in the existence of such a distinction, so you want Pattern Identity Theory to not be true.

I do? Since when?

comment by buybuydandavis · 2014-02-19T02:53:40.720Z · LW(p) · GW(p)

the difference between 2014!Eliezer and 2094!Eliezer is actually bigger than the difference between 2014!Eliezer and let's say 2014!Yvain due to having all the new standard enhancements.

Which is similarly true of all the enhanced people who didn't cryopreserve.

Is he the same? That's up to you. One of the hallmarks of sanity is knowing that you're not a slave to the concepts you happen to have, and can craft your concepts to serve your values. Don't ask whether a label applies, ask yourself what you want to do in this situation.

comment by Manfred · 2014-02-19T02:47:35.143Z · LW(p) · GW(p)

The person sitting in my chair in one minute is me as much as anyone is going to be me, and we're both okay with that.

Before I grappled with this issue, I was totally okay with that person being me, and didn't feel any existential dread. Then I realized that that person was very different from current me, and felt some dread. My pattern now will be gone forever before I post this; your pattern reading this will be gone before you begin the next sentence. But you know what? That's what we've got. That's what, really, we are. That person who slept in my bed 10 years ago was fine with just making the most of it, and so am I.

We can react to finding out that things are weirder than we thought either by being weirded out all the time, or by increasing our standard for weirdness.

Replies from: niceguyanon
comment by niceguyanon · 2014-02-20T19:36:07.642Z · LW(p) · GW(p)

...your pattern reading this will be gone before you begin the next sentence.

This echoes the same feeling I have about all of this. If presented with an opportunity to make use of an instantaneous teleporter, I would. The pattern that ceases to exist during teleportation is no different from the patterns that cease to exist every second of our lives.

Would I be mistaken if I said that the teleporter death problem does not present any philosophical quandary to you and you would use a teleporter?

Replies from: Manfred
comment by Manfred · 2014-02-21T01:33:57.709Z · LW(p) · GW(p)

I wouldn't say it's a quandary, but I'd still pay a few dollars to avoid it.

comment by Squark · 2014-02-18T18:57:19.303Z · LW(p) · GW(p)

I suggest you perform the following thought experiment. Imagine that you(t+1) is not the same person as you(t) for any value of t=time. Every moment you die. The "you" of a moment ago died, being replaced by you (of now). You are going to die within a moment, being replaced by the "you" of a moment from now.

In that reality, would you still have non-trivial preferences? If the answer is "yes", consider adopting those as your actual preferences. Suddenly you won't need answers to (IMO) meaningless questions anymore.

Replies from: Tenoke, Viliam_Bur
comment by Tenoke · 2014-02-18T19:00:26.973Z · LW(p) · GW(p)

The problem here is that I am not convinced that this is just a thought experiment - it looks like something that might be more or less true.

And yes, I have non-death related preferences but those are way less important to me.

Replies from: shminux
comment by Shmi (shminux) · 2014-02-18T21:57:11.462Z · LW(p) · GW(p)

The problem here is that I am not convinced that this is just a thought experiment - it looks like something that might be more or less true.

Seems like a non-central fallacy. The usual definition of death assumes a permanent (and irreversible) cessation of all detectable signs of life. Your definition of death would have none of that. Feel free to clarify your definition, but be aware that it is non-standard and so you are better off picking a different word for it.

You might also find A Human's Guide to Words of some use.

Replies from: Tenoke
comment by Tenoke · 2014-02-18T22:07:36.871Z · LW(p) · GW(p)

The definition I was going with was 'ceasing to exist' or if you are referring to something in the post then the more accurate definition there is probably something along the line of 'no longer having subjective experiences'.

Replies from: shminux
comment by Shmi (shminux) · 2014-02-18T22:16:44.017Z · LW(p) · GW(p)

How do you know if something ceases to exist if there are no outward signs of this? What measurable and testable definition of "ceasing to exist" do you use? If "ceasing to exist" is only in your mind, how is it different from being afraid of a monster under your bed?

comment by Viliam_Bur · 2014-02-19T09:01:34.327Z · LW(p) · GW(p)

My first reaction is that if I consider the person at t+1 to be someone different, these are the reactions that make sense:

a) selfish behavior, including selfishness against the future me. For example, when I am in the shop, I would take the most tasty thing and start eating it, because I want some pleasure now, and I don't care about the future person getting in trouble.

b) altruist behavior, but considering the future me completely equal to future anyone-else. For example, I would donate all my money to someone else if I thought they need it just a little more than me, because I am simply choosing between two strangers.

c) some mix of the former two behaviors.

The important thing here is that the last option doesn't add up to normality. My current behavior is partially selfish and partially altruist, but is not a linear combination of the first two options: both of them care about the future me exactly the same as about the future someone-else; but my current behavior doesn't.

A possible way to fix this is to assume that I care equally about future-anyone intrinsically, but I care more about future-me instrumentally. What I do now has a larger impact on what my future-me will do than on what future someone-else will do; especially because by "doing" in this context I also mean things like "adopting beliefs" etc. Simply said: I am a thousand times more efficient at programming the future-me than programming future someone-else, so my paths to create more utility in the future naturally mostly go through my future-self. --- However, this whole paragraph smells like a rationalization for a given bottom line.

Replies from: Squark
comment by Squark · 2014-02-19T20:26:04.675Z · LW(p) · GW(p)

For me, the obvious answer is b. This is the answer for all forms of consequentialism which treat all people symmetrically, e.g. utilitarianism. However, you can adopt the "personal identity isn't real" viewpoint and still prefer people who are similar to yourself (e.g. future you).

comment by [deleted] · 2014-02-18T16:22:38.340Z · LW(p) · GW(p)

One thing I am noting about some of the philosophical quandaries raised above about both teleportation and enhancements is that it only considers a single life, without linking it to others.

For instance, assume you are attempting to save your child from a burning building. You can either teleport in, grab your child, and teleport out with a near perfect success rate (although both you and your child will have teleported, you twice, your child once) or you can attempt to run into the building to do a manual rescue at some lower percent success rate X. Other incidental costs and risks are roughly the same and are trivial.

The obvious answer to me appears to be "I pick the Teleporter instead of the lower X."

And If I consider the alternative:

You are attempting to save your child from a burning building. You can either take standard enhancements, and then run in, grab your child, enhance them, and then run out with a near perfect success rate (although both you and your child will be enhanced, permanently), or you can attempt to run into the building to do a manual rescue at some lower percent success rate X. Other incidental costs and risks are roughly the same and are trivial.

The obvious answer to me still appears to be "I pick the enhancements instead of the lower X."

It seems like if a person were worried about either teleportation or enhancements, they would have to at least have a counterargument such as "Well, X is lower, but it's still pretty high and above some threshold level, so in a case like that I think I'm going to either not have me and my child teleport or not have me and my child take the enhancements: I'll go for the manual rescue."

That argument just doesn't seem convincing to me. I've tried mentally steelmanning it to get a better one, but I can't seem to get anywhere, particularly when considering the perspective of being the person inside the building who needed help, and the possibility that, given a strong enough stance against the procedures, the person outside the building could plausibly think within their value system "It would be better to let the person burn to death than for me to risk my life to save them at such a low X, or to use procedures that will harm us both according to my values."

Am I missing something that makes these types of counterargument more persuasive than I am giving them credit for?

Replies from: TheOtherDave
comment by TheOtherDave · 2014-02-18T18:13:04.895Z · LW(p) · GW(p)

My understanding of what makes these types of counterargument persuasive is a belief system that goes roughly like this:

I comprise a set of attributes. For convenience, I categorize those attributes into two sets, S1 and S2, such that the "teleporter" preserves S1 but destroys S2. What comes out of the "teleporter," S1, is similar to me but not identical; the difference is S2. S2 is so valuable that even an X% chance of preserving (S1 + S2) for some very low X is more valuable than a 100% chance of preserving S1.

My own response to this is that I see no reason to value S2 at all.

But I accept that other people do.

comment by Armok_GoB · 2014-02-19T23:11:53.321Z · LW(p) · GW(p)

To me the obvious answer in this thought experiment is that 2014!Eliezer and 2014!Yvain are also the same person. Seems pretty likely to actually be the case looking at them as well.

comment by Richard_Kennaway · 2014-02-18T17:00:59.709Z · LW(p) · GW(p)

Consider all of the enhancements that you have had since you were, say, five years old. Are you the "same" person as that five year old? Do you need an answer to that question?

Everything you do changes who you are. Is that a reason for doing as little as possible, seeking no new experiences, learning no new things, in order to cling onto the person "you" are right now, a single three-dimensional slice of a four-dimensional entity?

Personally, my answer is no.

comment by [deleted] · 2014-02-18T15:31:12.612Z · LW(p) · GW(p)

"However, if you examine things closely the difference between 2014!Eliezer and 2094!Eliezer is actually bigger than the difference between 2014!Eliezer and let's say 2014!Yvain due to having all the new standard enhancements."

I question how well your delta-person-function corresponds to an intuitive notion of "similar people".

"Same person" is a slightly-blurry blob of feature space concentrated heavily around a-person-as-they-are-at-the-moment. It generally includes the person they were a second ago, and (to a lesser extent) the person they were a year ago. But it's a continuous function, not a binary one - there's not a sharp cutoff between "you" and "not you" in possible-person-space.

Replies from: Tenoke
comment by Tenoke · 2014-02-18T15:42:06.741Z · LW(p) · GW(p)

Well, to quote myself:

Furthermore, 'the difference in the pattern' seems hard to quantify and, more importantly, it doesn't look like something that could have a clear cut-off, as in 'if the pattern differs by more than 10% you are a different person'. At any rate, whatever that cut-off is, it still seems pretty clear that tenoke!2000 differs enough from me to be considered dead.

Or in this example - enhanced Eliezer (imagine that the number of enhancements is massive) and non-enhanced Eliezer who can't even begin to think about the stuff that enhanced Eliezer does.

comment by Matthew_Opitz · 2014-05-02T12:08:59.283Z · LW(p) · GW(p)

I wish there were an article that dealt more precisely with "Experience and Death," rather than "identity and death," because maintaining an experience is what really interests me. After all, we already don't stay the same person from moment to moment. We acquire new neural structures, associations, and memories (and lose some old ones that we aren't even aware of losing), and that doesn't particularly bother me. So maintaining a particular "identity" does not seem to me to be the problem worth really worrying about.

In fact, let's suppose that I was about to die due to some brain tumor, and there was one medical procedure that could save me, but it would entail destroying a lot of my existing memories, incidentally creating some new ones, re-arranging a lot of neural associations, and generally changing my whole personality in a drastic way.

If death and non-experience were not threatening me, then all other things being equal (meaning, assuming this personality will be relatively as happy and functional as my current self), my current self would NOT prefer to undergo this procedure (although the preference is not a particularly strong one. It's more of an "avoiding Buridan's Ass, risk-averse, all other things being equal, might as well stick with this current personality that I already know about" preference). However, if this procedure involving a lot of relatively neutral changes to my personality meant the difference between having future experiences of any sort and not having future experiences of any sort, then I would absolutely jump onboard with the procedure.

Let's kick it up one notch further, though. Let's say there's a brain procedure that will change a lot of memories and associations in such a way that I will be a happier and more successful/functional person. Let's say the procedure will raise my IQ by 100 points, increase my willpower, and so on. Then my current self would absolutely elect to undergo the procedure, even without being threatened with death otherwise.

When it comes to the classic teleporter thought-experiment, what really interests me is not the usual question people focus on of "will society recognize the duplicate as me," or "will I 'identify' with my future duplicate self," but rather, "will I experience my future duplicate self." I do not want the teleporter to be a suicide machine as far as my first-person experience is concerned.

When people usually try to address this question, I often hear things like, "your first-person experience will continue if you want to identify with that duplicate" or statements that imply something similar. This just doesn't make any sense to me.

Here's why: imagine a teleporter experiment, except in this case when your first body steps into the teleporter chamber and gets vaporized, two duplicates get re-constructed in different neighboring rooms.

The first duplicate gets re-constructed in a torture chamber, where it will get tortured for the rest of its life.

The second duplicate gets re-constructed in a normal waiting room, gets handed a billion dollars, and is set free back into society.

Now, if it is at all possible, I would like to experience the experiences of the second duplicate. How can I make sure that that happens? From what I have read, people make it sound like it is as easy as making your pre-teleporter self pre-commit to not caring about duplicates of yourself that get materialized in the torture chamber rather than the waiting room.

That just doesn't make sense to me. Normally reality doesn't work like that. I can try to pre-commit to not caring about the pain signals coming from my finger before I smash it with a hammer, but pain I will feel nonetheless. Granted, as of now I don't have full control over the self-modification of my own source code / nerves and neurons. If I did, I suppose I could re-program myself to not feel pain or care about pain in that circumstance.

Still, this only goes so far. If I wanted to experience Neil deGrasse Tyson's experience, or experience his brain (because maybe I perceived him as having higher IQ than me or more interesting memories than me or more wealth than me), I cannot just go to sleep tonight and pre-commit to caring only about Neil deGrasse Tyson and expect to wake up tomorrow morning experiencing Neil deGrasse Tyson's reality, with all of his memories, feeling as if I had always been Neil deGrasse Tyson, with no memory of ever having experienced anything different.

Or maybe I can? How would I know that I have not repeatedly done this? How do I know that I did not just do this 5 seconds ago? I guess I don't know. But...it just doesn't FEEL LIKE I have.

Okay, NOW I am experiencing Matthew Opitz. And...NOW I am experiencing Matthew Opitz. And...NOW I am experiencing Matthew Opitz.

How could I seriously believe that I really just started experiencing Matthew Opitz after the 2nd "NOW" just now, and the first two "NOWs" are just false memories that I now have?

But still, it could be possible, when I look at the issue from the vantage point of this new NOW.

Really, when you think about it, the experience of time does not make any sense at all...

comment by diegocaleiro · 2014-02-19T21:17:21.985Z · LW(p) · GW(p)

Read up to number three, and the links; ignore the rest: http://lesswrong.com/lw/jke/how_not_to_make_money/ - my view on Selves and identity.

comment by erikbjare · 2014-02-18T22:44:53.849Z · LW(p) · GW(p)

Personally, I wouldn't care if my body was changed (for the better); I'd still call it me. To me, pattern identity theory is only interesting because it could be used as a theory which would allow us to copy our consciousness. If the difference is merely additions, I see no harm to the original pattern, as long as the original me is in the set {me, enhancements}.

comment by moridinamael · 2014-02-18T17:53:36.863Z · LW(p) · GW(p)

This kind of thing used to bother me a lot more before I read The Quantum Thief. The cavalier attitude of the characters toward copying and modification rubbed off on me, I guess.

comment by gjm · 2014-02-18T16:42:03.725Z · LW(p) · GW(p)

The person who wakes up remembers all that good old Eliezer did and seems to act like you would expect an enhanced Eliezer to act. However, if you examine things closely the difference between 2014!Eliezer and 2094!Eliezer is actually bigger than the difference between 2014!Eliezer and let's say 2014!Yvain due to having all the new standard enhancements.

I'm not convinced that that's possible, at least according to my idea of what count as big differences.

Further: I think the usual opinion around here is that questions of the form "is this the same person as this?" generally don't have well-defined answers, and that if you find yourself asking one you should figure out what thing it is you really care about. (And, yes, that "does this have cognitive structure and memories very similar to this?" is often quite close to what you really care about when asking that question. But that's not the same as endorsing any particular positive theory of personal identity.) If so, then saying

I don't care if anything that looks like me or thinks it's me is experiencing stuff - I want me (whatever that is) to continue living and doing stuff.

may be a preference that's no more meaningful than, e.g., "I want my immortal soul to continue living and doing stuff".

[EDITED to fix a typo.]

Replies from: Tenoke
comment by Tenoke · 2014-02-18T16:47:12.799Z · LW(p) · GW(p)

may be a preference that's no more meaningful than, e.g., "I want my immortal soul to continue living and doing stuff".

Could be, I will definitely be happy to see any solid reasons as to why that might be the case.

comment by Thomas · 2014-02-18T17:20:58.373Z · LW(p) · GW(p)

Imagine, everyone is your incarnation. Co-incarnation. Then, there is no logical problem, suddenly.

Replies from: byrnema, Tenoke
comment by byrnema · 2014-02-18T23:08:43.109Z · LW(p) · GW(p)

To try steel-manning your perspective, if I'm not misrepresenting it, the idea is that every identity feels the same from the inside; it doesn't matter which one you have or which one is you.

I agree with this.

However (in response to Tenoke below) the situations of identities, and relationships between identities, do matter, so it doesn't follow that you can change situations (or kill people) without creating value differences.

Replies from: Viliam_Bur, Thomas
comment by Viliam_Bur · 2014-02-19T08:36:45.033Z · LW(p) · GW(p)

every identity feels the same from the inside

What exactly does it mean to "feel the same" in this context? The same memories? No. The same plans? No. The same emotions? No.

Seems to me the "same" things are: (1) being a consciousness, and (2) having the traits that all humans have. In the latter sense we are the same as all humans, and in the former sense, we are the same as all conscious beings.

But it seems like stretching the meaning of the word "same" extremely far, almost to its opposite.

Replies from: byrnema
comment by byrnema · 2014-02-19T10:36:17.873Z · LW(p) · GW(p)

What exactly does it mean to "feel the same" in this context? The same memories? No. The same plans? No. The same emotions? No.

Memories, plans, emotions and even qualities of what it feels like to be a general or specific human are all aspects I would bundle with an identity's 'situation'. For example, in philosophies that assert there is 'one shared consciousness', they don't mean we all think the same thoughts or have the same plans.

Rather, there would be something in common with specifically the ways it feels on the inside to be an 'I', to be an observer looking out through all the details of their situation. There seems to be some sort of intuition (possibly false) that there is something qualitatively identical about any computation that results in feeling like you are a person. (For example, it would feel different, and 'identity' would be different, if you instead were part of a hive mind, maybe.)

Exploiting this particular intuition, it can't possibly matter if you're copied and destroyed. The situational details are the same for a copy, and then the elusive second part ("identity") is identical for everyone, including adjacent copies, and so they are the same.

...looking at it from the other way, if someone didn't think a copy was identical, what part would be different? This thing is what I'm suggesting is the same for all persons.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2014-02-19T11:14:11.630Z · LW(p) · GW(p)

in philosophies that assert there is 'one shared consciousness', they don't mean we all think the same thoughts or have the same plans

So it's like the same algorithm operating on different data?

On some level of abstraction this is both trivial and meaningless: at the bottom we all are just "particles following the same laws of physics". The question is, can we make it more specific while it still remains true? How far?

if someone didn't think a copy was identical, what part would be different? This thing is what I'm suggesting is the same for all persons.

That's a great way to put it!

Replies from: byrnema
comment by byrnema · 2014-02-19T14:17:37.599Z · LW(p) · GW(p)

So it's like the same algorithm operating on different data?

To be clear, just the part about feeling like I'm "me". I think it would feel very different to be an alien, but I expect I would feel the same way about being myself.

On some level of abstraction this is both trivial and meaningless: at the bottom we all are just "particles following the same laws of physics"

I agree about the triviality. Especially for the thesis that we all share one consciousness -- that we are all a physical computation is both obvious and meaningless (everything is a physical computation) but it also means it doesn't matter if that particular computation is displaced in space or time or copied -- there's nothing unique that doesn't get carried over (if it's true that all our senses of self are essentially the same computation).

comment by Thomas · 2014-02-19T07:12:53.898Z · LW(p) · GW(p)

To try steel-manning your perspective

Please, continue!

comment by Tenoke · 2014-02-18T17:52:13.041Z · LW(p) · GW(p)

Imagine, everyone is your incarnation. Co-incarnation. Then, there is no logical problem, suddenly.

So, since you can imagine that, you have no problem whatsoever with someone killing you?

Replies from: savageorange, Thomas
comment by savageorange · 2014-02-19T02:56:47.664Z · LW(p) · GW(p)

That would be equivalent to self-sabotage, or an attempt to systematically deny that 'you' possess some particular attribute A (e.g. homosexuality, atheism...) which you do in fact possess, so... no.

comment by Thomas · 2014-02-18T19:24:11.150Z · LW(p) · GW(p)

It doesn't follow at all!