I notice that I am confused about Identity and Resurrection
post by ialdabaoth · 2013-11-14T20:38:22.845Z · LW · GW · Legacy · 80 comments
I've spent quite a bit of time trying to work out how to explain the roots of my confusion. I think, in the great LW tradition, I'll start with a story.
[Editor's note: The original story was in 16th century Mandarin, and used peculiar and esoteric terms for concepts that are just now being re-discovered. Where possible, I have translated these terms into their modern mathematical and philosophical equivalents. Such terms are denoted with curly braces, {like so}.]
Once upon a time there was a man by the name of Shen Chun-lieh, and he had a beautiful young daughter named Ah-Chen. She died.
Shen Chun-lieh was heartbroken, more so, he thought, than any man who had lost a daughter, and so he struggled and scraped and misered until he had amassed a great fortune, and brought that fortune before me - for he had heard it told that I could resurrect the dead.
I frowned when he told me his story, for many things are true after a fashion, but wisdom is in understanding the nature of that truth - and he did not bear the face of a wise man.
"Tell me about your daughter, Ah-Chen.", I commanded.
I frowned, for my suspicions were confirmed.
"You wish for me to give you this back?", I asked.
He nodded and dried his tears. "More than anything in the world."
"Then come back tomorrow, and I will have for you a beautiful daughter who will do all the things you described."
His face showed a sudden flash of understanding. Perhaps, I thought, this one might see after all.
"But", he said, "will it be Ah-Chen?"
I smiled sagely. "What do you mean by that, Shen Chun-lieh?"
"I mean, you said that you would give me 'a' daughter. I wish for MY daughter."
I bowed to his small wisdom. "Indeed I did. If you wish for YOUR daughter, then you must be much, much more precise with me."
He frowned, and I saw in his face that he did not have the words.
"You are wise in the way of the Tao", he said, "surely you can find the words in my heart, so that even such as me could say them?"
I nodded. "I can. But it will take a great amount of time, and much courage from you. Shall we proceed?"
He nodded.
I am wise enough in the way of the Tao. The Tao whispers things that have been discovered and forgotten, and things that have yet to be discovered, and things that may never be discovered. And while Shen Chun-lieh was neither wise nor particularly courageous, his overwhelming desire to see his daughter again propelled him with an intensity seldom seen in my students. And so it was, many years later, that I judged him finally ready to discuss his daughter with me, in earnest.
"Shen", I said, "it is time to talk about your Ah-Chen."
His eyes brightened and he nodded eagerly. "Yes, Teacher."
"Do you understand why I said on that first day, that you must be much, much more precise with me?"
"Yes, Teacher. I had come to you believing that the soul was a thing that could be conjured back to the living, rather than a {computational process}."
"Even now, you are not quite correct. The soul is not a {computational process}, but a {specification of a search space} which describes any number of similar {computational processes}. For example, Shen Chun-lieh, would you still be Shen Chun-lieh if I were to cut off your left arm?"
"Of course, Teacher. My left arm does not define who I am."
"Indeed. And are you still the same Shen Chun-lieh who came to me all those years ago, begging me to give him back his daughter Ah-Chen?"
"I am, Teacher, although I understand much more now than I did then."
"That you do. But tell me - would you be the same Shen Chun-lieh if you had not come to me? If you had continued to save and to save your money, and craft more desperate and eager schemes for amassing more money, until finally you forgot the purpose of your misering altogether, and abandoned your Ah-Chen to the pursuit of gold and jade for its own sake?"
"Teacher, my love for Ah-Chen is all-consuming; such a fate could never befall me."
"Do not be so sure, my student. Remember the tale of the butterfly's wings, and the storm that sank an armada. Ever-shifting is the Tao, and so ever-shifting is our place in it."
Shen Chun-lieh understood, and in a brief moment he glimpsed his life as it could have been, as an old Miser Shen hoarding gold and jade in a great walled city. He shuddered and prostrated himself.
"Teacher, you are correct. And even such a wretch as Miser Shen, that wretch would still be me. But I thank the Buddha and the Eight Immortal Sages that I was spared that fate."
I smiled benevolently and helped him to his feet. "Then suppose that you had died and not your daughter, and one day a young woman named Ah-Chen had burst through my door, flinging gold and jade upon my table, and described the caring and wonderful father that she wished returned to her? What could she say about Shen Chun-lieh that would allow me to find his soul amongst the infinite chaos of the Nine Hells?"
"I..." He looked utterly lost.
"Tell me, Shen Chun-lieh, what is the meaning of the parable of the {Ship of Theseus}?"
"That personal identity cannot be contained within the body, for the flow of the Tao slowly strips away and the flow of the Tao slowly restores, such that no single piece of my body is the same from one year to the next; and within the Tao, even the distinction of 'sameness' is meaningless."
"And what is the relevance of the parable of the {Shroedinger's Cat} to this discussion?"
"Umm... that... let me think. I suppose, that personal identity cannot be contained within the history of choices that have been made, because for every choice that has been made, if it was truly a 'choice' at all, it was also made the other way in some other tributary of the Great Tao."
"And the parable of the tiny {Paramecium}?"
"That neither is the copy; there are two originals."
"So, Shen. Can you yet articulate the dilemma that you present to me?"
"No, Teacher. I fear that yet again, you must point it out to your humble student."
"You ask for Ah-Chen, my student. But which one? Of all the Ah-Chens that could be brought before you, which would satisfy you? Because there is no hard line, between {configurations} that you would recognize as your daughter and {configurations} which you would not. So why did my original offer, to construct you a daughter that would do all the things you described Ah-Chen as doing, not appeal to you?"
Shen looked horrified. "Because she would not BE Ah-Chen! Even if you made her respond perfectly, it would not be HER! I do not simply miss my six-year-old girl; I miss what she could have become! I regret that she never got to see the world, never got to grow up, never got to..."
"In what sense did she never do these things? She died, yes; but even a dead Ah-Chen is still an Ah-Chen. She has since experienced being worms beneath the earth, and flowers, and then bees and birds and foxes and deer and even peasants and noblemen. All these are Ah-Chen, so why is it so important that she appear before you as YOU remember her?"
"Because I miss her, and because she has no conscious awareness of those things."
"Ah, but then which conscious awareness do you wish her to have? There is no copy; all possible tributaries of the Great Tao contain an original. And each of those originals experience in their own way. You wish me to pluck out a {configuration} and present it to you, and declare "This one! This one is Ah-Chen!". But which one? Or do you leave that choice to me?"
"No, Teacher. I know better than to leave that choice to you. But... you have shown me many great wonders, in alchemy and in other works of the Tao. If her brain had been preserved, perhaps frozen as you showed me the frozen koi, I could present that to you and you could reconstruct her {configuration} from that?"
I smiled sadly. "To certain degrees of precision, yes, I could. But the question still remains - you have only narrowed down the possible {configurations}. And what makes you say that the boundary of {configurations} that are achievable from a frozen brain is the correct one? If I smash that brain with a hammer, melt it, and paint a portrait of Ah-Chen with it, is that not a {configuration} that is achievable from that brain?"
Shen looked disgusted. "You... how can you be so wise and yet not understand such simple things? We are talking about people! Not paintings!"
I continued to smile sadly. "Because these things are not so simple. 'People' are not things, as you said before. 'People' are {sets of configurations}; they are {specifications of search spaces}. And those boundaries are so indistinct that anything that claims to capture them is in error."
Now it was Shen's turn to look animated. "Just because the boundary cannot be drawn perfectly, does not make the boundary meaningless!"
I nodded. "You have indeed learned much. But you still have not described the purpose of your boundary-drawing. Do you wish for Ah-Chen's resurrection for yourself, so that you may feel less lonely and grieved, or do you wish it for Ah-Chen's sake, so that she may see the world anew? For these two purposes will give us very different boundaries for what is an acceptable Ah-Chen."
Shen grimaced, as war raged within his heart. "You are so wise in the Tao; stop these games and do what I mean!"
And so it was that Miser Shen came to live in the walled city of Ch'in, and hoarded gold and jade, and lost all memory and desire for his daughter Ah-Chen, until it was that the Tao swept him up into another tale.
So, there we are. My confusion is in two parts:
1. When I imagine resurrecting loved ones, what makes me believe that even a perfectly preserved brain state is any more 'resurrection' than an overly sophisticated wind-up toy that happens to behave in ways that fulfill my desire for that loved one's company? In a certain sense, avoiding true 'resurrection' should be PREFERABLE - since it is possible that a "wind-up toy" could be constructed that provides a superstimulus version of that loved one's company, while an actual 'resurrection' will only be as good as the real thing.
2. When I imagine being resurrected "myself", how different from this 'me' can it be and still count? How is this fundamentally different from "I will for the future to contain a being like myself", which is really just "I will for the future to contain a being like I imagine myself to be"? In which case, we're back to the superstimulus option (which is perhaps a little weird in this case, since I'm not there to receive the stimulus).
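(To make the confusion concrete: a minimal sketch, assuming purely for illustration that a person is modeled the way the Teacher describes - as a {specification of a search space}, i.e. a bag of predicates over candidate {configurations}. Which predicates belong in the bag is exactly what I can't pin down.)

```python
# Illustrative only: a "soul" as a {specification of a search space} -
# a set of named predicates that a candidate configuration must satisfy.

from typing import Callable, Dict

Configuration = Dict[str, object]           # crude stand-in for a full brain/world state
Predicate = Callable[[Configuration], bool]

class PersonSpec:
    """A person as a specification: a bag of constraints over configurations."""
    def __init__(self, **predicates: Predicate):
        self.predicates = predicates

    def matches(self, config: Configuration) -> bool:
        return all(p(config) for p in self.predicates.values())

# A purely behavioral spec - satisfied by a sufficiently good "wind-up toy":
behavioral_ah_chen = PersonSpec(
    answers_to_name=lambda c: c.get("answers_to") == "Ah-Chen",
    acts_as_father_remembers=lambda c: c.get("behavior") == "as Shen described",
)

# Adding a causal-history constraint narrows the space, but the line stays arbitrary:
continuous_ah_chen = PersonSpec(
    **behavioral_ah_chen.predicates,
    causally_descended_from_original=lambda c: c.get("history") == "continuous with the girl who died",
)

print(behavioral_ah_chen.matches({"answers_to": "Ah-Chen", "behavior": "as Shen described"}))  # True
print(continuous_ah_chen.matches({"answers_to": "Ah-Chen", "behavior": "as Shen described"}))  # False
```

On this toy model, confusion (1) is the question of whether the behavioral predicates alone exhaust the specification, and confusion (2) is the question of how many predicates a candidate can fail before it no longer counts as me.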
I'd really like to discuss this.
Comments sorted by top scores.
comment by Nisan · 2013-11-15T00:05:26.918Z · LW(p) · GW(p)
"I suppose, that personal identity cannot be contained within the history of choices that have been made, because for every choice that has been made, if it was truly a 'choice' at all, it was also made the other way in some other tributary of the Great Tao."
This might sound like a nitpick and a pet peeve, but in this case I think it's important and essential: Your decisions do not split you. At least, not in the way one would naively expect.
See Thou Art Physics: To the extent one makes choices at all, one does so in a deterministic manner. When one is on a knife's edge, it's natural to feel like one's decision is indeterminate until one actually makes a decision, but that doesn't mean it's not determined by one's decision process. I don't know to what degree typical decisions are deterministic. Reasons can move one to action, but one's true reasons for action are obscured by later rationalization. It may be possible to control the degree to which one's decisions depend on quantum indeterminacy. If there's a lot of indeterminacy, it might be best to think of identity as a probabilistic computation instead of a deterministic one.
One's decisions can also depend on quantum indeterminacy in the environment, some of which might be mediated by millisecond delays in one's nerve firings. I don't know very much about this. This is the kind of thing that might turn Shen into a miser. But note that Shen's environment might deterministically make Shen a miser, depending on their disposition.
There is also a chance that otherwise deterministic computations in one's head can be occasionally frustrated by freak quantum tunneling events. But this accounts for a very small amount of amplitude, and you could think of it as overriding one's decision process (like an aneurysm would), rather than part of one's personality.
This subject warrants a lot of discussion — I'm not an expert, please correct me if I said something incorrect — but I don't think it has much bearing on the question of what identity is.
Replies from: ialdabaoth
↑ comment by ialdabaoth · 2013-11-15T00:37:32.864Z · LW(p) · GW(p)
This might sound like a nitpick and a pet peeve, but in this case I think it's important and essential: Your decisions do not split you. At least, not in the way one would naively expect.
I think I see it the opposite way: The splits forge your decisions.
When Shen said:
"I suppose, that personal identity cannot be contained within the history of choices that have been made, because for every choice that has been made, if it was truly a 'choice' at all, it was also made the other way in some other tributary of the Great Tao."
his Teacher did not correct him, although the Teacher might have said it quite differently:
"Personal identity cannot be contained within the nature of choices that might be made, because for every you that grew to choose one way, another you was grown to choose differently in some other tributary of the Great Tao."
Certainly, we can make deterministic choices - and the sorts of choices that are predetermined for us to make define who we are. But events conspired to combine particular bits of meat in particular ways, and those events could have conspired differently - and in each universe that they did so, there is another possible 'you'.
But "measure" is actually at the heart of this: when we talk about resurrecting someone, we're talking about pulling from something like their notional measure a distinct instantiation, and I would like to understand what makes one particular instantiation more 'them' than another. Or even better - what makes a particular instantiation of 60 kg or so of matter part of my 'measure', while another instantiation of 60ish kg of matter NOT part of my measure, but part of some other notional thing's measure?
Replies from: hairyfigment
↑ comment by hairyfigment · 2013-11-16T23:14:24.754Z · LW(p) · GW(p)
But events conspired to combine particular bits of meat in particular ways, and those events could have conspired differently - and in each universe that they did so, there is another possible 'you'.
And I would identify less with them to the extent that their memories/histories differ.
Nevertheless, the "superstimulus" version of #2 might tempt me if it didn't seem like a guaranteed failure in practice.
comment by So8res · 2013-11-14T23:15:26.078Z · LW(p) · GW(p)
"Ah, but then which conscious awareness do you wish her to have? There is no copy; all possible tributaries of the Great Tao contain an original. And each of those originals experience in their own way. You wish me to pluck out a {configuration} and present it to you, and declare "This one! This one is Ah-Chen!". But which one? Or do you leave that choice to me?"
The one where she miraculously recovered from smallpox unscathed, all those years ago.
But which one?
What do you mean? Are you yourself, right now, one person? You are not a fully constrained decision process. An infinitude of possibilities lie before you.
Why, then, do you insist that I pick out one Ah-Chen? She was, like you are, a fuzzy process. Do not limit her possibilities and strip her of her choices! I do not ask for a single point in process-space, I ask for Ah-Chen, as she was before the disease, brimming with opportunity.
Replies from: hyporational, DanielLC
↑ comment by hyporational · 2013-11-15T12:43:57.290Z · LW(p) · GW(p)
The one where she miraculously recovered from smallpox unscathed, all those years ago.
Which one of those? I also fail to see why that particular time coordinate is important.
What do you mean? Are you yourself, right now, one person? You are not a fully constrained decision process. An infinitude of possibilities lie before you. Why, then, do you insist that I pick out one Ah-Chen? She was, like you are, a fuzzy process.
I think this is the whole point of the discussion, and you seem to be dodging the hard parts. How fuzzy is acceptable? Do you suggest you want to pick the herd of all possible Ah-Chens? How would you define what "all possible" means? Where do you draw the line between those and someone else?
Replies from: So8res, DaFranker
↑ comment by So8res · 2013-11-15T16:54:39.222Z · LW(p) · GW(p)
I also fail to see why that particular time coordinate is important.
Because I asked you for it? I mean, I'd also be happy with her before she contracted the disease, and any time during which she had the disease (assuming she's brought back sans smallpox), and probably everything up till about a week after she's cured of the disease (assuming she's been in a coma-state since), under reasonable assumptions. But you asked for one, and that's my strongest preference (and an easy one to describe).
How fuzzy is acceptable?
This fuzzy. [points at head] Give or take.
More specifically, the present state of the world determines many histories, but all of them are very similar (from our perch, way up here above physics). I want her within the bounds that are forced by the present.
(I suspect that the present is entangled enough such that the So8reses who hallucinated Ah-Chen are distinguishable from myself, and that in all histories forced by now, she was truly there. If this is not the case, and the variance of history is wider than expected, then you should choose the median Ah-Chen within the boundaries forced by the present.)
Do you suggest you want to pick the herd of all possible Ah-Chens?
No more than I am currently the herd of all possible So8res.
How would you define what all possible means?
Need I? I'm asking for a girl, as she was when she died, as if she had (counter-factually) recovered from smallpox. Given adequate knowledge of the world (enough to let me deduce the precise state of the universe in her vicinity when she died) and sufficient ability (to reconstruct that state of matter, sans-virus) I could construct this unambiguously (enough so for my satisfaction, again given our perch towering above physics). I don't see why the facts that "there are many processes that she could have become" or "there are many other ways she could have been before the sickness" make this unclear.
As for the fact that there is quantum fuzziness and I cannot get the "precise state" of her when she died, I am perfectly happy with anything within the boundaries forced by the present.
The argument seems to be along the lines of "there are many exact configurations that you call Ah-Chen, which one do you want?" But it's not as if you're going to build me an exact configuration. It's not like the past forces that the electron had amplitude in this fuzzy area, and you have to pick an exact place to put it.
I do not need you to pick a precise place to put each atom, when history does not constrain you to do so. Rather, when history says "the amplitude for this electron was in this fuzzy circle", I need you to build something where there's an electron with amplitude in that fuzzy circle.
If you cannot deduce the boundaries on history forced by the present, if you must rely upon me for an exact description of the way she was, then I cannot give you enough precision to reconstruct her. But I heard you could bring back the dead.
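(If I'm reading So8res correctly, the request can be put as a consistency check rather than a demand for one exact past - a toy sketch, with the particular evidence predicates invented purely for illustration:)

```python
# Toy sketch of "anything within the boundaries forced by the present":
# a reconstruction is acceptable iff no presently-available evidence rules it out.
# The evidence predicates below are invented for illustration only.

from typing import Callable, Dict, Iterable

Reconstruction = Dict[str, object]               # a proposed Ah-Chen configuration
Evidence = Callable[[Reconstruction], bool]      # "is this candidate consistent with this trace?"

def acceptable(candidate: Reconstruction, present_evidence: Iterable[Evidence]) -> bool:
    """No single exact past is demanded - any candidate the present cannot rule out will do."""
    return all(consistent(candidate) for consistent in present_evidence)

present_evidence = [
    lambda r: r.get("name") == "Ah-Chen",
    lambda r: r.get("remembers_father") is True,
    lambda r: r.get("smallpox") is False,        # the one counterfactual edit the request allows
]

print(acceptable({"name": "Ah-Chen", "remembers_father": True, "smallpox": False}, present_evidence))  # True
```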
↑ comment by DaFranker · 2013-11-15T16:54:29.128Z · LW(p) · GW(p)
I think this is the whole point of the discussion, and you seem to be dodging the hard parts. How fuzzy is acceptable? Do you suggest you want to pick the herd of all possible Ah-Chens? How would you define what "all possible" means? Where do you draw the line between those and someone else?
If I'm reading So8res correctly, he doesn't particularly dodge the hard part.
At a timepoint X, which is when she fell sick or some other Schelling point for avoidance of fatal illness, there exists a vector matrix/machine state of all the interactions that, at that point in time within the reality observed by this Shen, together are the essence of the {computational process} that this Ah-Chen was then, along with all the possibilities and movements there.
So8res!Shen wants to copy that particular set of computational process state vectors and transplant it into a different point in spacetime, on a medium (functioning human brain within functioning human body, preferably) that is sufficiently similar to the old one to hold at least the same computational instructions that led to that Ah-Chen-state.
The copied state of interaction vectors encodes all the possibilities of then-Ah-Chen's future, yet will play out differently as per a not-exactly-identical environment and different "flows of the Tao". One of those environmental differences is, as per the request specifications, that the body housing the brain on which she is then computed is not fatally ill.
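(A toy rendering of that reading - snapshot the state, transplant it, let it run. The dataclass fields and the idea of a clean `transplant` step are illustrative assumptions, not anything physics actually hands us:)

```python
# Toy sketch of "copy the state vectors, transplant them, and let them play out differently".
# Everything here is illustrative: real minds are not neat (state, program) pairs.

from copy import deepcopy
from dataclasses import dataclass, field
from typing import Dict

@dataclass
class ProcessState:
    """Snapshot of the {computational process} at the chosen Schelling point in time."""
    memories: Dict[str, str] = field(default_factory=dict)
    dispositions: Dict[str, float] = field(default_factory=dict)
    fatally_ill: bool = True

def transplant(snapshot: ProcessState, new_environment: Dict[str, str]) -> ProcessState:
    """Copy the state onto a new substrate; it then diverges because its environment differs."""
    revived = deepcopy(snapshot)
    revived.fatally_ill = False                                  # the one change the request specifies
    revived.memories["awoke_in"] = new_environment.get("place", "somewhere new")
    return revived

ah_chen_at_onset = ProcessState(memories={"father": "Shen Chun-lieh"},
                                dispositions={"curiosity": 0.9})
ah_chen_restored = transplant(ah_chen_at_onset, {"place": "Shen's home, years later"})
```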
↑ comment by DanielLC · 2013-11-16T21:55:13.526Z · LW(p) · GW(p)
Why, then, do you insist that I pick out one Ah-Chen?
I got the impression that the problem was the opposite. As you've already shown, it's easy to pick one Ah-Chen that's definitely her. The hard part is deciding if an arbitrary being is Ah-Chen. I just decided to pretend that the thought experiment was better designed, and that deciding whether an arbitrary being is her was the important part.
comment by DataPacRat · 2013-11-14T22:33:02.895Z · LW(p) · GW(p)
What is the purpose to making any sort of distinction between the identity of one person, and the identity of another?
On a practical level, it often seems to have something to do with people being more willing to work harder for the benefit of people they identify as 'themselves' than they would work for 'other people', such as being willing to do things that are unpleasant now so their 'future selves' will enjoy less unpleasantness.
Out of the various people in the future who might or might not fall under the category of 'yourself', for which of them would you be willing to avoid eating a marshmallow now, so that those people could enjoy /two/ marshmallows?
Replies from: AlanCrowe, ialdabaoth, hyporational
↑ comment by AlanCrowe · 2013-11-15T17:45:30.533Z · LW(p) · GW(p)
I think that is the right question, and I'll plunge ahead and give a specific answer: basically, that "the self" is an instinct, not a thing.
The self is the verbal behaviour that results from certain instincts necessary to the functioning of a cognitive architecture with intelligence layered on top of a short term reward system. We can notice how slightly different instincts give rise to slightly different senses of self and we can ask engineers' questions about which instincts, and hence which sense-of-self, give the better functioning cognitive architecture. But these are questions of better or worse, not true or false.
But I express myself too tersely. I long for a spell of good health, so that I can expand the point to an easy-read length.
↑ comment by ialdabaoth · 2013-11-14T23:02:46.972Z · LW(p) · GW(p)
On a practical level, it often seems to have something to do with people being more willing to work harder for the benefit of people they identify as 'themselves' than they would work for 'other people', such as being willing to do things that are unpleasant now so their 'future selves' will enjoy less unpleasantness.
Out of the various people in the future who might or might not fall under the category of 'yourself', for which of them would you be willing to avoid eating a marshmallow now, so that those people could enjoy /two/ marshmallows?
It seems like abstracting that a bit could lead to a memetic equivalent to kin selection. I am intrigued, and will meditate on this further.
Replies from: NancyLebovitz, niceguyanon
↑ comment by NancyLebovitz · 2013-11-15T14:54:04.942Z · LW(p) · GW(p)
I think I'd just eat an ordinary marshmallow now, but (for myself or someone else) make the effort to get two marshmallows if it was something like the artisanal marshmallow with a delicate maple sugar crust (caramelized maple syrup?) that I had recently.
And that's one of the ways you can tell whether it's me or not.
↑ comment by niceguyanon · 2013-11-15T07:56:51.994Z · LW(p) · GW(p)
What is the purpose to making any sort of distinction between the identity of one person, and the identity of another?
Here is what Parfit had to say:
This appeals to me; however, like you mentioned, on a practical level there might be a desire to make distinctions. Your example of forgoing a marshmallow now, so that those like you can have two, is a good example of that.
↑ comment by hyporational · 2013-11-15T12:56:46.417Z · LW(p) · GW(p)
What is the purpose to making any sort of distinction between the identity of one person, and the identity of another?
Say you have a perfect copy of yourself excluding your spatial coordinates. You're faced with a choice of terminating either yourself or your copy. How do you make that choice?
The intellectually honest answer to this question seems easy, but I'm inclined to believe that if you claim not to have conflicting intuitions, you're lying and/or signalling.
Out of the various people in the future who might or might not fall under the category of 'yourself', for which of them would you be willing to avoid eating a marshmallow now, so that those people could enjoy /two/ marshmallows?
EDIT: umm, never mind.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-11-15T14:49:07.745Z · LW(p) · GW(p)
Say you have a perfect copy of yourself excluding your spatial coordinates. You're faced with a choice of terminating either yourself or your copy. How do you make that choice? The intellectually honest answer to this question seems easy, but I'm inclined to believe that if you claim not to have conflicting intuitions, you're lying and/or signalling.
Like a lot of the rarefied hypotheticals that come up here, I find that it helps clarify my thinking about this one to separate the epistemological confusion from the theoretical question.
That is... OK, say I (hereafter TheOtherDave, or TOD) have a perfect copy of myself (hereafter TheOtherOtherDave, or TOOD). If TOD is given a choice between (C1) terminating TOD + giving TOOD $N, and (C2) terminating TOOD, for what N (if any) does TOD choose C1? The "intellectually honest" answer is that this depends critically on TOD's confidence that TOOD is a perfect copy of TOD.
But if we assert that TOD is 1-minus-epsilon confident, which seems to be what you have in mind, then I think I can honestly say (no lying or signaling involved) that TOD chooses C1 for any N that TOD would bother to bend over and pick up off the street. Maybe not a penny, but certainly a dollar.
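(To spell out the arithmetic behind "any N that TOD would bother to bend over and pick up" - a toy expected-value sketch, where V is an illustrative stand-in for however much TOD values the survival of this particular instance:)

```python
# Toy expected-value sketch; V and the probabilities are illustrative numbers, not anyone's real values.

def prefers_c1(p: float, V: float, N: float) -> bool:
    """
    C1: terminate TOD, give TOOD $N.   C2: terminate TOOD; TOD keeps living.
    With probability p, TOOD really is a perfect copy, so a Dave continues either way.
    With probability 1 - p, the copy is worthless as a Dave.
    """
    eu_c1 = p * (V + N) + (1 - p) * N    # a Dave survives only if the copy is genuine
    eu_c2 = V                            # TOD survives regardless
    return eu_c1 > eu_c2                 # algebraically: N > (1 - p) * V

print(prefers_c1(p=1 - 1e-9, V=10_000_000, N=1))   # True - at 1-minus-epsilon confidence, a dollar suffices
print(prefers_c1(p=0.8, V=10_000_000, N=1))        # False - at the ~0.8 confidence mentioned later in the thread, it doesn't
```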
I don't understand this question. Is it assuming some privileged hypothesis of how MWI works?
My understanding of the question does not depend on any MWI-theorizing.
I expect there to exist ~7B people in an hour, who might or might not qualify as "myself" (I expect one and only one of them to do so, though there's a small non-zero chance that none will do so, and a much smaller chance that more than one will). Of that set, for which ones would I forego a marshmallow so they could have two? (The actual answer to that question is "almost all of them"; I don't care for marshmallows and I far prefer the warm-fuzzy feeling of having been generous. I'd answer differently if you replaced the marshmallow with something I actually want.)
Replies from: hyporational
↑ comment by hyporational · 2013-11-15T17:30:03.531Z · LW(p) · GW(p)
The "intellectually honest" answer is that this depends critically on TOD's confidence that TOOD is a perfect copy of TOD.
This is not what I had in mind; I assumed the certainty is a given. I really need some kind of tabooing software to remind me not to use value-laden expressions...
But if we assert that TOD is 1-minus-epsilon confident, which seems to be what you have in mind, then I think I can honestly say (no lying or signaling involved) that TOD chooses C1 for any N that TOD would bother to bend over and pick up off the street. Maybe not a penny, but certainly a dollar.
This is what I meant by an intellectually honest answer, and I don't disagree with it at all, if I look at it from a safe distance. If you actually imagine being in that situation, do you have no intuitions/fears siding with preserving TOD? If you do, are they zero evidence/value to you? If you don't, should I believe you don't, considering what's typical for humans? What is TOD's confidence that the problem of personal identity has been dissolved? Is it 1-minus-epsilon? Does $1 represent this confidence also?
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2013-11-15T19:54:29.874Z · LW(p) · GW(p)
If you actually imagine being in that situation, do you have no intuitions/fears siding with preserving TOD? If you do, are they zero evidence/value to you?
You're inviting me to imagine having 1-minus-epsilon confidence that this guy I'm looking at, TOOD, really is a perfect copy of me.
My first question is: how am I supposed to have arrived at that state? I can't imagine it, personally. It seems utterly implausible... I can't think of any amount of observation that would raise my confidence that high.
I haven't given a huge amount of thought to this, but on casual thought I don't think I can get above .8 confidence or so. Possibly not even close to that high.
But if I ignore all of that, and imagine as instructed that I really am that confident... somehow... then yeah, I expect that the evidentiary value of my intuitions/fears around siding with preserving TOD are sufficiently negligible that multiplied by the value of me they work out to less than a dollar.
should I believe you don't, considering what's typical for humans?
How confident do you think it's reasonable to be of the typical behavior for a human in a situation that no human has ever actually found themselves in? How confident do you think it's reasonable to be of the typical behavior for a human in a situation that I cannot imagine arriving at even a reasonable approximation of?
Implausible situations ought to produce implausible behavior.
What is TOD's confidence that the problem of personal identity has been dissolved?
I am not sure enough of what this question means to essay an answer.
EDIT: Or are you asking how confident I am, given 1-epsilon confidence that TOOD is a perfect copy of me, that there isn't some other imperceptible aspect of me, X, which this perfect copy does not contain which would be necessary for it to share my personal identity? If that's what you mean, I'm not sure how confident I am of that, but I don't think I care about X enough for it to affect my decisions either way. I wouldn't pay you $10 to refrain from using your X-annihilator on me, either, if I were 1-epsilon confident that I would not change in any perceptible way after its use.
Replies from: hyporational
↑ comment by hyporational · 2013-11-16T09:23:54.166Z · LW(p) · GW(p)
Well, it seems I'm utterly confused about subjective experience, even more so than I thought before. Thanks for calling my bs, again.
My first question is: how am I supposed to have arrived at that state? I can't imagine it, personally. It seems utterly implausible... I can't think of any amount of observation that would raise my confidence that high [...] Implausible situations ought to produce implausible behavior.
I can't imagine it either. This could be an argument against thought experiments in general.
EDIT: Or are you asking how confident I am, given 1-epsilon confidence that TOOD is a perfect copy of me, that there isn't some other imperceptible aspect of me, X, which this perfect copy does not contain which would be necessary for it to share my personal identity?
If I copied myself, I expect HR1 and HR2 would both think they're the real HR1. HR1 wouldn't have the subjective experience of HR2, and vice versa. Basically they cease to be copies when they start receiving different sensory information. For HR1, the decision to terminate his own subjective experience seems like suicide, and for HR2, termination of subjective experience seems like being murdered. I can't wrap my head around this stuff, and I can't even reliably pinpoint where my source of confusion lies. Thinking about TOD and TOOD is much easier, since I haven't experienced being either one, so they seem perfectly isomorphic to me.
It seems that if you make a perfect physical copy, whatever makes your subjective experience personal should be part of it, since it must be physics, but I can't imagine what copying it would be like. Will there be some kind of unified consciousness of two subjective experiences at once?
I'm not sure English is sufficient to convey my meaning, if you have no idea of what I'm talking about. In that case it's probably better not to make this mess even worse.
comment by Ishaan · 2013-11-14T20:59:32.194Z · LW(p) · GW(p)
When I imagine resurrecting loved ones, what makes me believe that even a perfectly preserved brain state is any more 'resurrection' than an overly sophisticated wind-up toy that happens to behave in ways that fulfill my desire for that loved one's company?
Nothing. It's just a question of definition, and social consensus hasn't set one yet. My answer is that, if the past version of said loved one would have considered this being as themselves, then I too can consider this being as them (at least in part).
When I imagine being resurrected "myself", how different from this 'me' can it be and still count?
Again, that's up to you - this is a question of what you desire, not of what reality is like. My quick answer is that the resurrected being must have all the first order desires and values of my current self, as well as retention of key knowledge and memories, for me to consider it "myself". Any changes in desires and values must be changes which could potentially be brought about in my current self strictly via non-neurologically damaging experiences, for it to still be "me" (and I'd hesitate to define these mutable sorts of desires and values as "first order"). Additionally, the cognitive style must either be roughly similar to my current cognitive style, or superior to it, but not inferior in any major metric.
So basically, I define myself by my preferences ("utility function"? emotions?), my experiences ("priors"?, memory, knowledge, acquired skill?), and my mental abilities ("rationality"?, innate talents, cognitive style?). I think "personality" is probably contained in those three...
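(Written out as a checklist, roughly - a sketch under the stated criteria; the field names are invented, and "not inferior in any major metric" hides most of the real difficulty:)

```python
# Rough sketch of the criteria above; all field names are illustrative.

from dataclasses import dataclass
from typing import Dict, Set

@dataclass
class Mind:
    first_order_values: Set[str]
    key_memories: Set[str]
    abilities: Dict[str, float]                     # metric name -> level

def counts_as_me(current: Mind, candidate: Mind) -> bool:
    values_ok = current.first_order_values <= candidate.first_order_values   # all first-order desires/values retained
    memories_ok = current.key_memories <= candidate.key_memories             # key knowledge and memories retained
    abilities_ok = all(candidate.abilities.get(k, 0.0) >= level              # roughly similar or superior, never worse
                       for k, level in current.abilities.items())
    return values_ok and memories_ok and abilities_ok

me = Mind({"honesty", "curiosity"}, {"first day of school"}, {"reasoning": 0.7})
revived = Mind({"honesty", "curiosity", "gardening"}, {"first day of school"}, {"reasoning": 0.9})
print(counts_as_me(me, revived))   # True - extra values and better abilities don't disqualify it
```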
Replies from: ialdabaoth
↑ comment by ialdabaoth · 2013-11-14T21:10:22.752Z · LW(p) · GW(p)
if the past version of said loved one would have considered this being as themselves, then I too can consider this being as them.
How do you distinguish this from "If I can convince myself that the past version of said loved one would have considered this being as themselves, then I too can consider this being as them"?
My answer is that the resurrected being must have all the first order desires and values of my current self, as well as retention of a few key memories, for me to consider it "myself".
If that fails, but the so-called "resurrected" being BELIEVES it has all your first order desires and values, and BELIEVES it retains a few key memories, but this "you" is no longer there to verify that it does not, how is that different?
Replies from: passive_fist, Ishaan
↑ comment by passive_fist · 2013-11-14T21:33:04.433Z · LW(p) · GW(p)
How do you distinguish this from "If I can convince myself that the past version of said loved one would have considered this being as themselves, then I too can consider this being as them"?
You're missing the point. That's exactly what Ishaan is saying. We cannot make the distinction, therefore the answer to your question as it was phrased: "what makes me believe" is: "Nothing."
If that fails, but the so-called "resurrected" being BELIEVES it has all your first order desires and values, and BELIEVES it retains a few key memories, but this "you" is no longer there to verify that it does not, how is that different?
Here you're getting into questions about consciousness, and I don't believe we are at the level of understanding of it to be able to give a satisfactory answer. At the very least, I'd like a unified theory of everything before attempting to attack the consciousness question. The reason I'm saying this is because a lot of people seem to try to attack consciousness using quantum theory (Eliezer included) but we don't even know if quantum theory is the fundamental theory of everything or just some approximation to a deeper theory.
Replies from: Ishaan, None
↑ comment by Ishaan · 2013-11-15T20:50:37.690Z · LW(p) · GW(p)
You're missing the point. That's exactly what Ishaan is saying. We cannot make the distinction, therefore the answer to your question as it was phrased: "what makes me believe" is: "Nothing."
Yes, confirming that this is a correct interpretation of what I was saying.
Here you're getting into questions about consciousness, and I don't believe we are at the level of understanding of it to be able to give a satisfactory answer. At the very least, I'd like a unified theory of everything before attempting to attack the consciousness question. The reason I'm saying this is because a lot of people seem to try to attack consciousness using quantum theory (Eliezer included) but we don't even know if quantum theory is the fundamental theory of everything or just some approximation to a deeper theory.
This, however, makes me grumpy. I don't think we need to know physics before we understand consciousness. We merely need to pin down some definitions as to what we mean when we say "conscious". Our definition should be sufficiently vague as to be applicable within a wide spectrum of mathematical systems.
That is, we should be able to construct a mathematical system in which conscious beings exist even without understanding the precise mathematical architecture of our own reality, so why is physics relevant?
It's just like "free will"...as we looked at the brain more, the answers to some of the philosophical questions became more glaringly obvious, but the solutions were there all along, accessible without any particular empirical knowledge.
Replies from: passive_fist
↑ comment by passive_fist · 2013-11-15T22:12:08.936Z · LW(p) · GW(p)
I'm referring to the Hard Problem of Consciousness. I agree with you that a theory of everything might not be necessary to understand consciousness, which is why I said I'd like a unified theory of everything before attempting to attack it. The reason for this preference is twofold: 1. I've seen many attempts at trying to fit quantum physics and consciousness together, which makes me uneasy, and 2. I really think we will arrive at a theory of everything in the Universe before we crack the consciousness question.
That is, we should be able to construct a mathematical system in which conscious beings exist even without understanding the precise mathematical architecture of our own reality,
It makes me sad that you would say this. A Bayesian brain could definitely feel conscious from the inside, but we cannot tell if it's conscious from the outside. It's entirely possible that we could come up with theories of, say, styles of message-passing or information processing that exist in conscious systems (like human beings) but not in unconscious ones (unconscious people or simple automatons) and use this as a meter stick for determining consciousness. But until we crack the hard problem, we will never be 100% sure whether our theories are correct or not. I have a feeling that we cannot solve this problem by simply doing more of the same.
It's just like how you cannot prove the consistency of Principia Mathematica from inside the theory. You have to 'step out' of the box, like Gödel did, and once you do you realize that the consistency of the theory cannot be proven from the inside. Similarly, I have a feeling (again, just a feeling, not supported by any rigorous argument) that to solve the hard problem we have to 'step outside'. Which is why I'd like a unified theory of everything, because once we know how the Universe really works, it becomes much easier to do that. It was the rigorous formulation of PM itself that gave Gödel the framework to work outside of it.
Replies from: Ishaan
↑ comment by Ishaan · 2013-11-15T22:49:25.521Z · LW(p) · GW(p)
It makes me uneasy as well when I see people fitting together quantum physics and consciousness, but the reason it makes me uneasy is that there is no need to introduce physics into the conversation. There are those (myself included) who consider the "Hard Problem of Consciousness" more or less solved (or perhaps dissolved), so I naturally disagree that we'll arrive at a ToE first. Indeed, I think the problem has already been posed and solved multiple times in human history, with varying degrees of rigor. The trouble is that the solution is really hard to put into words, and human intuition really tends to fight it.
It's entirely possible that we could come up with theories of, say, styles of message-passing or information processing that exist in conscious systems (like human beings) but not in unconscious ones (unconscious people or simple automatons) and use this as a meter stick for determining consciousness. But until we crack the hard problem, we will never be 100% sure that our theories are correct or not. I have a feeling that we cannot solve this problem by simply doing more of the same.
To rephrase: "we might define a set of information processing systems under the label "conscious" but when we say "conscious" we are also talking about qualia-having, and we can't know whether information processing systems have qualia so therefore we can't know if they are really conscious", is that correct?
But this statement is predicated on the assumption that a certain notion of how qualia works (personalized, totally inaccessible, separate boxes for "my qualia" and "your qualia", the notion that some things "have qualia" and other things do not...pretty much dualist "souls" by another name) actually corresponds to something in the universe.
There's a whole lot of implicit assumptions that we just instinctively make in this area, as a result of our instinctive attraction towards dualism. The Hard Problem of Consciousness is just the hole left behind when you remove souls as an explanation without addressing the underlying dualist assumptions.
That is, we should be able to construct a mathematical system in which conscious beings exist even without understanding the precise mathematical architecture of our own reality,
I aim to convince you of this. My argument:
If we can understand consciousness after constructing the ToE, then we should be able to understand consciousness after constructing systems which are similar enough to the ToE but do not in fact correspond to reality. If you can agree with this statement, then you might also agree that if we can understand consciousness after constructing the ToE, then there is in fact a space of mathematical systems which would provide an adequate framework to understand consciousness.
Does it not follow that we should theoretically be able to construct a mathematical system within which conscious beings could exist?
This basically is all part of the notion that we don't need to know empirical stuff about physics to tackle this question.
Replies from: passive_fist, passive_fist↑ comment by passive_fist · 2013-11-15T23:41:54.196Z · LW(p) · GW(p)
To see why I find your argument unconvincing, replace 'consciousness' with 'transistors', and 'ToE' with 'quantum theory'.
"If we can understand transistors after constructing quantum theory, then we should be able to understand transistors after constructing systems which are similar enough to quantum theory but do not in fact correspond to reality. If you can agree with this statement,"
I do,
"then you might also agree that If we can understand transistors after constructing quantum theory, then there is in fact a space of mathematical systems which would provide an adequate framework to understand transistors"
There is. But this says nothing, because even though people could have understood transistors before quantum theory, there would have been many competing hypotheses but no prior to help them sort out the hypotheses. Quantum physics provided a prior which allowed people to selectively narrow down hypotheses.
"Does it not follow that we should theoretically be able to construct a mathematical system within which transistors could exist?"
We could, but there would be so many mathematical systems with this property that finding the correct one would be hopeless.
Replies from: Ishaan
↑ comment by Ishaan · 2013-11-16T03:55:07.670Z · LW(p) · GW(p)
"Does it not follow that we should theoretically be able to construct a mathematical system within which transistors could exist?"
We could, but there would be so many mathematical systems with this property that finding the correct one would be hopeless.
Oh okay. I previously misunderstood your argument and thought you were saying it's impossible, but I think we both agree that it's possible to do this for consciousness.
I guess the definition of consciousness as constructed in my own head is broad enough to exist within many different systems (certainly almost every system that contains computers, and that seems a broad enough set). So via the definition I'm working off of, it seems practical as well as possible.
Replies from: passive_fist
↑ comment by passive_fist · 2013-11-16T06:58:39.302Z · LW(p) · GW(p)
I think we agree on plausibility, but disagree on practicality. Anyway, it's been an unexpectedly enlightening conversation; I'm sad that you got downvoted (it wasn't me!)
Replies from: Ishaan
↑ comment by passive_fist · 2013-11-15T23:19:45.993Z · LW(p) · GW(p)
I see what you're saying, and I agree with you that human intuition tends to fight these things and physics is often used when it is unnecessary. You make a lot of valid points.
But, as I'm emphasizing, neither you nor I can give a rigorous logical explanation of why either of our viewpoints is correct. Or, failing that, even ascribe a meaningful probability or likelihood to our viewpoints.
Replies from: Ishaan
↑ comment by Ishaan · 2013-11-16T03:58:30.790Z · LW(p) · GW(p)
But, as I'm emphasizing, neither you nor I can give a rigorous logical explanation of why either of our viewpoints is correct. Or, failing that, even ascribe a meaningful probability or likelihood to our viewpoints.
Wait, why is that? The viewpoint that I have stated here is primarily that the hard problem of consciousness isn't an empirical question in the first place, but a philosophical one. If I add a definition of consciousness into the mix, isn't that a claim that could be logically proven or refuted by someone?
Additionally, neither of us have really given our definitions of consciousness, but couldn't quite a few definitions of consciousness be refuted solely on the basis of internal inconsistency?
Replies from: passive_fist
↑ comment by passive_fist · 2013-11-16T07:00:24.032Z · LW(p) · GW(p)
I hope my reply to your question above answers the question. If not, I'll be glad to explain.
↑ comment by [deleted] · 2013-11-15T00:43:58.203Z · LW(p) · GW(p)
Here you're getting into questions about consciousness, and I don't believe we are at the level of understanding of it to be able to give a satisfactory answer. At the very least, I'd like a unified theory of everything before attempting to attack the consciousness question.
An idea occurs, of an Extrapolated Philosophical Disposition. If I were to ask a djinni to resurrect a loved one, I'd tell it to do it in a way I would want if I knew and understood everything there is to know about physics, neuroscience, philosophy of consciousness, etc.
a lot of people seem to try to attack consciousness using quantum theory (Eliezer included)
Huh. Where did that happen?
Replies from: Leonhart
↑ comment by Leonhart · 2013-11-15T11:44:11.201Z · LW(p) · GW(p)
Huh. Where did that happen?
It didn't. EY has consistently said the opposite.
From here, among many other places:
In retrospect, supposing that quantum physics had anything to do with consciousness was a big mistake.
Replies from: passive_fist
↑ comment by passive_fist · 2013-11-15T22:32:15.223Z · LW(p) · GW(p)
I didn't mean to say Eliezer thought consciousness was created due to some quantum mechanism. If that's what it seemed like I was saying, I apologize for the misunderstanding. I am referring to, for example, this: http://lesswrong.com/lw/pv/the_conscious_sorites_paradox/
However the whole debate over consciousness turns out, it seems that we see pretty much what we should expect to see given decoherent physics. What's left is a puzzle, but it's not a physicist's responsibility to answer. ...is what I would like to say. But unfortunately there's that whole thing with the squared modulus of the complex amplitude giving the apparent "probability" of "finding ourselves in a particular blob".
As Eliezer himself admitted, his interpretation of the question hinges on MWI. If the Copenhagen interpretation is taken, it breaks down. Unfortunately we have no idea if MWI is the correct interpretation to take. There are other interpretations, like the Bohmian interpretation, that also lack all the nasty properties of Copenhagen but avoid many-worlds.
↑ comment by Ishaan · 2013-11-14T23:01:41.267Z · LW(p) · GW(p)
How do you distinguish this from "If I can convince myself that the past version of said loved one would have considered this being as themselves, then I too can consider this being as them"?
You can't, but that's true by definition for any empirical question. You can only do the best you can with the information you've got.
If that fails, but the so-called "resurrected" being BELIEVES it has all your first order desires and values, and BELIEVES it retains a few key memories, but this "you" is no longer there to verify that it does not, how is that different?
I'm not sure what you mean. I'd say that, most importantly from your perspective, it's different because you failed to achieve your objective of creating a future being which falls under the criteria that you defined as ialdabaoth. The fact that neither you nor the resurrected being will be aware of the failure happening doesn't change that a failure happened.
comment by Baughn · 2013-11-14T22:37:19.895Z · LW(p) · GW(p)
You say this is adapted from a 16th century story.
I find this story strange and unusual for that age, but you have adjusted it to fit LessWrong. Is there a more direct translation available?
Replies from: ialdabaoth
↑ comment by ialdabaoth · 2013-11-14T22:47:07.418Z · LW(p) · GW(p)
Sorry, this part:
[Editor's note: The original story was in 16th century Mandarin, and used peculiar and esoteric terms for concepts that are just now being re-discovered. Where possible, I have translated these terms into their modern mathematical and philosophical equivalents. Such terms are denoted with curly braces, {like so}.]
was Watsonian in nature. The Doylist equivalent would have been to say "This is a story set in 16th century China, but pretend that the speaker is a wise Daoist sage who has independently come up with everything we talk about here on LW, and uses the same terms for them that we do."
Replies from: hyporational
↑ comment by hyporational · 2013-11-15T13:51:17.760Z · LW(p) · GW(p)
From this moment, all my lies will be Watsonian in nature ;)
comment by [deleted] · 2013-11-15T00:29:47.943Z · LW(p) · GW(p)
"Umm... that... let me think. I suppose, that personal identity cannot be contained within the history of choices that have been made, because for every choice that has been made, if it was truly a 'choice' at all, it was also made the other way in some other tributary of the Great Tao."
I don't like that part at all. As far as I understand those things, just now almost all of your measure/amplitude/whatever went into the version of you that didn't spontaneously stand up and jump out of the window, and the difference between that and what would happen in a completely deterministic universe (in which you also wouldn't have jumped) doesn't seem very important.
In fact, I think that the less determined your actions are, the less they are 'choices'. Not jumping out of the window because a quantum coin came up heads may be more 'free' in some sense, but if it's independent of your past mental states then it's not really something 'you' do.
comment by [deleted] · 2013-11-15T14:54:56.000Z · LW(p) · GW(p)
I think war raged within Shen's heart right at a key point of this.
I think to resolve resurrection, you may first have to resolve mind control. Because in cases like this, the person who is doing the resurrection is given mind control powers over the person who is dead. As a good example of this, I think I can regenerate most of the dilemma without any physical death at all:
Example: Assume that you and a loved one are the only people who can stop a terrible apocalypse. There is a mad scientist in the next room, the final obstacle to saving everyone. He has a ray gun that makes people simpletons.
Thankfully, there are two of you, and one of him. You've been planning strategies, and after discarding several that are distinctly suboptimal, here is your best bet:
1: You and your loved one enter the room, both begin aiming at the scientist.
2: The mad scientist fires his quicker simpleton gun, hitting one of you. The other one shoots the scientist, stopping him, and then stops the apocalypse.
3: However, the unhit person is left with the duty of reestablishing their loved one's thought processes from a simpleton.
In this case, physical death doesn't even seem to enter into it, and you still seem to have to resolve what feels to me like an incredibly similar set of conundrums, most of it focused on "Someone needs to rebuild someone else's utility function, metautility function, etc... and you may be either of the two people in this scenario."
Does that sound reasonable?
comment by Dentin · 2013-11-14T21:13:01.386Z · LW(p) · GW(p)
Thank you for the story. It succinctly describes my stance on identity, and similarly describes my frustration with people who do not understand the lessons in the story.
1) Who cares if it's a wind-up toy or not, if it provides indistinguishable outputs for a given set of inputs? Does it really matter if the result of a mathematical calculation is computed on an abacus, a handheld calculator, in neural wetware, or on a supercomputer?
2) Where you draw the line is up to you. If you have a stroke and lose a big chunk of your brain, are you still you? If you're reduced to an unthinking blob due to massive brain damage, is that still you? It's up to you to decide where you draw the line, so long as you recognize that you're putting it in an arbitrary place determined by you, and that other people may decide to put it elsewhere.
A good set of thought experiments that helped me work through this is to imagine that you have a magical box you can step into that will create a perfect copy of you. Said box will also magically destroy copies that enter it and press the 'destruct' button.
What mindset would you need to have to be able to properly use the box?
Under what circumstances would you be able to create a copy, then enter the box and press the destruct button yourself?
↑ comment by ialdabaoth · 2013-11-14T21:19:55.257Z · LW(p) · GW(p)
Where you draw the line is up to you. If you have a stroke and lose a big chunk of your brain, are you still you? If you're reduced to an unthinking blob due to massive brain damage, is that still you?
Personally, I have trouble accepting that I'm still the same "me" that went to bed last night, when I wake up in the morning.
Replies from: lmm, Dentin
↑ comment by Dentin · 2013-11-14T22:02:05.962Z · LW(p) · GW(p)
Whereas I'm the same "me" that I was a year ago. The "me" of five and ten years ago are farther from that, while the "me" I was at age 10 is probably not very close at all. I'd allow a pretty big amount of slop to exist in different copies of myself.
↑ comment by niceguyanon · 2013-11-15T06:41:55.699Z · LW(p) · GW(p)
...a magical box you can step into that will create a perfect copy of you. Said box will also magically destroy copies that enter it and press the 'destruct' button.
This thought always gets me thinking. When I come across variations of the above thought experiment it makes me wonder if a magical box is even necessary. Are copies of me being destroyed as I type? Haven't I died an infinite number of deaths from the time I started typing till now? Couldn't me hitting the return key at the end of this sentence be sufficient to replicate the copy/kill box a la MWI?
I am having a hard time distinguishing between what MWI says about my death at branch points and what happens when you simultaneously copy and kill yourself in a copy machine.
Was that also your point or am I mistaken?
Replies from: Dentin↑ comment by Dentin · 2013-11-15T17:46:51.556Z · LW(p) · GW(p)
I think having an explicit box, which allows for two or more simultaneous copies of you to exist and look at each other, is pretty important. Just being created and destroyed in the normal course of things, when everything looks normal, doesn't have the same impact.
My interpretation is that MWI says precisely nothing about you at branch points, because you don't die there - or rather, I don't necessarily consider a single branch point change to be sufficient to make me not 'me'. Further, creating a copy, or multiple copies, doesn't mean anything died in my view.
↑ comment by hyporational · 2013-11-15T13:15:45.167Z · LW(p) · GW(p)
Where you draw the line is up to you.
Where do you draw the line as in not caring about destroying yourself versus your copy? How did you make that decision?
Replies from: Dentin↑ comment by Dentin · 2013-11-15T18:18:35.793Z · LW(p) · GW(p)
For me, whether or not I'm me is an arbitrary line in the sand, a function of the mental and physical 'distance' or difference between copies. I think that's part of the point of the story - which version of the daughter is the daughter? Which one is close enough? You can't get it exact, so draw a line in the sand somewhere, according to your personal preferences and/or utility functions.
My line is apparently pretty unusual. I'm not sure I can define exactly where it is, but I can give you some use cases that are in the 'clear and obvious' category. Understand that the below is predicated on: 1) I have extremely high confidence that the box creates 'good enough' copies and will not fail; 2) the box has a failsafe that prevents me from destroying the last copy, if only one copy exists; and 3) it's better if there's a small number of copies, from a resource-conservation standpoint.
I step in the box and create another copy. I lose a coin toss, which means I get to do the bills and take out the trash, whereas the copy gets to do interesting work that is expected to be of value in the long run. In this case, I do the bills and take out the trash, then return to the box and destroy myself.
In the above situation, I win the coin toss and begin doing interesting work. Later, my copy returns and tells me that he witnessed a spectacular car crash, rushed to the scene to aid people, and probably saved somebody's life. His accumulated experience exceeds what I gained from my work, so I write down or tell him the most critical insights I uncovered, then return to the box and destroy myself.
I step into the box and create a copy. One of us wins the coin toss and begins a major fork: the winner will dedicate the next ten years to music and performance. In a year, the two of us meet and discuss things. We've both had incredible experiences, but they're not really comparable. Neither of us is willing to step into the box to terminate, and neither asks the other to do so.
Upon losing a coin toss, I take a trip to a third world country and am imprisoned unfairly and indefinitely for reasons beyond my control. The cost, time, and effort to fix the situation are prohibitive, and I do not have access to a destruction box. If possible, I communicate my status to my other copies, then commit suicide using whatever means necessary.
There are much more questionable cases between these, where the question of which one to destroy comes down to weighing one against the other as best I can - but frankly, if I had said box, I'd be very careful and strict about it, so as to keep the situations as clear as possible.
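One way to read the "line in the sand" above is as a threshold applied to some distance measure between copies. Here is a toy sketch, purely illustrative - the features, weights, and threshold are hypothetical, invented for this example rather than anything Dentin actually proposes:

```python
# Toy model: identity as an arbitrary threshold on "distance" between copies.
# Every feature and weight below is made up purely for illustration.
def identity_distance(copy_a, copy_b):
    """Weighted difference over a few arbitrary features of two copies."""
    weights = {"memories": 0.6, "values": 0.3, "habits": 0.1}
    return sum(w * abs(copy_a[k] - copy_b[k]) for k, w in weights.items())

def is_still_me(me, candidate, threshold=0.2):
    """Where the threshold sits is a personal choice, not a fact about the world."""
    return identity_distance(me, candidate) < threshold

me_now = {"memories": 1.0, "values": 1.0, "habits": 1.0}
fresh_copy = {"memories": 0.98, "values": 1.0, "habits": 0.95}
ten_year_fork = {"memories": 0.4, "values": 0.7, "habits": 0.5}

print(is_still_me(me_now, fresh_copy))     # True: distance is about 0.017
print(is_still_me(me_now, ten_year_fork))  # False: distance is about 0.50
```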
Replies from: None↑ comment by [deleted] · 2013-11-16T16:31:06.520Z · LW(p) · GW(p)
You, sir, have a very strange sense of identity. I'm not sure I'd give my copy anything more than the time of day. And I certainly don't extend self-preservation to be inclusive of him. I'm not even going to touch the suicide. A line of thinking which leads you to suicide should be raising all sorts of red flags, IMHO.
Replies from: Dentin↑ comment by Dentin · 2013-11-16T16:54:51.949Z · LW(p) · GW(p)
Imagine that you're a program, and creating a new copy of you is as simple as invoking fork().
Voluntarily stepping into the box is no different than suicide, and frankly if you're resource constrained, it's a better option than murdering a copy. IMHO, you shouldn't be allowed to make copies of yourself unless you're willing to suicide and let it take your place. People unable to do that lack the mindset to properly manage copy creation and destruction.
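For concreteness, here is a minimal, POSIX-only sketch of the fork() analogy; mapping parent and child onto "original" and "copy" is just the analogy above, not something the code itself establishes:

```python
import os
import sys

# fork() yields two processes that are identical at the instant of the call
# and begin to diverge immediately afterward, based on what each does next.
pid = os.fork()

if pid == 0:
    # Child process: the "copy" goes off and lives its own branch.
    print(f"copy     (pid {os.getpid()}): diverging from the original")
    sys.exit(0)  # the copy ending itself is the analogue of stepping into the box
else:
    # Parent process: the "original" carries on and waits for the copy to finish.
    os.waitpid(pid, 0)
    print(f"original (pid {os.getpid()}): continuing after the copy exited")
```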
Replies from: None↑ comment by [deleted] · 2013-11-16T17:00:18.513Z · LW(p) · GW(p)
I think you misunderstand me. It doesn't matter how easy it is to do, if you're a program. I wouldn't step into the box any more than I would commit suicide, and either one would be tantamount to murder.
I guess parents should be ready to kill themselves when their kids reach 18, to make sure there's room for them in the world? No, that's a terrible line of reasoning.
Replies from: Dentin↑ comment by Dentin · 2013-11-16T17:10:54.790Z · LW(p) · GW(p)
The fact that you considered that parent/kid question to be a valid argument indicates strongly to me that you don't have the mindset or understanding to make copies safely.
Replies from: None↑ comment by [deleted] · 2013-11-16T19:07:56.376Z · LW(p) · GW(p)
How does it not follow from what you said?
IMHO, you shouldn't be allowed to make copies of yourself unless you're willing to suicide and let it take your place. People unable to do that lack the mindset to properly manage copy creation and destruction.
Sexual reproduction is a form of reproduction. Anyone who is a parent knows that children are a limited means of carrying identity in the form of drives, goals, likes & dislikes, etc. into the future, even if vicariously (both because of your influence on them, and their influence on you). If inputs/outputs are all that matter in determining identity, then identity is a fuzzy concept and a continuous scale, as we are all constantly changing. Your children carry on some part of your personal identity, even if in nothing but their internal simulations of you. The same arguments apply.
If we're going to talk about societal proscriptions, then I would say those who think their sentient creations should be prepared to commit suicide for any reason are the ones who shouldn't be dabbling in creation...
Replies from: Dentin↑ comment by Dentin · 2013-11-16T21:17:36.409Z · LW(p) · GW(p)
Yes, sexual reproduction is a form of reproduction, one which we were explicitly not talking about. We were talking about perfect copies.
You may continue beating at the straw man if you wish, but don't expect me to respond.
Replies from: None↑ comment by [deleted] · 2013-11-16T22:41:07.093Z · LW(p) · GW(p)
There is no such thing as a perfect copy. That's what the OP is about! Even if there were some sort of magical philosophy box that made perfect replicas, you would cease to be perfect copies of each other as soon as you exited the box and started receiving different percepts - you would become different physical sentient entities leading separate lives. If you want to believe that these two clones are in fact the same identity, then you have to provide a specific reason - for example: related histories, similarity of behavior, motivation & drives, etc. Furthermore, it would have to be a fuzzy comparison, because as soon as you exit the box you start to diverge. How much change does it take until you can no longer claim that you and your clone are the same person? A week? A year? One hundred years? At that point you and your clone will have lived separately for longer than your shared history. Do you still have the right to claim the other as a direct extension of yourself? What if a million years pass? I am quite confident that in a million years, you will have less in common with your clone than you currently do with your own children (assuming you have children).
So no, it's not a strawman. It's a direct conclusion from where your reasoning leads. And when a line of reasoning leads to absurd outcomes, it's often time to revisit the underlying assumptions.
Replies from: hairyfigment↑ comment by hairyfigment · 2013-11-16T23:22:26.761Z · LW(p) · GW(p)
This looks like an argument for extreme time preference, not an argument against copies. Why identify with one million-years-later version of yourself and exclude the other, unless we beg the question?
Replies from: None↑ comment by [deleted] · 2013-11-17T01:06:41.385Z · LW(p) · GW(p)
That's what I'm saying. I myself wouldn't identify with any of the copies, no matter how near or distant. My clone and I have a lot in common, but we are separate sentient beings (hence: requesting suicide of the other is tantamount to murder). But if you do identify with clones (as in: they are you, not merely other beings that are similar to you), then at some point you and they must cross the line of divergence where they are no longer identifiable as you, or else the argument reduces to absurdity. Where is that line? I see no non-arbitrary way of defining it.
EDIT: which led me to suspect that other than intuition I have no reason to think that my clone and I share the same identity, which led me to consider other models for consciousness and identity. My terseness isn't just because of the moral repugnance of asking others to suicide, but also because this is an old, already hashed argument. I first encountered it in philosophy class 10+ years ago. If there is a formal response to the reduction to absurdity I gave (which doesn't also throw out consciousness entirely), I have yet to see it.
Replies from: hairyfigment↑ comment by hairyfigment · 2013-11-18T02:10:43.065Z · LW(p) · GW(p)
Maybe you already got this part, but time preference is orthogonal to copies vs originals.
Eliezer says he defines personal identity in part by causal connections, which exist between you and the "clone" as well as between you and your "original" in the future. This definition also suggests a hole in your argument for strong time preference.
↑ comment by [deleted] · 2013-11-18T16:26:30.254Z · LW(p) · GW(p)
You are misreading me. I don't have time preference. If an exact perfect replica of me were made, it would not be me even at the moment of duplication.
I have continuation-of-computation preference. This is much stricter than Eliezer's causal connection based identity, but also avoids many weird predictions which arise from that.
And yes, you would need a bright line in this case. Fuzziness is in the map, not the territory on this item.
comment by DanielLC · 2013-11-15T07:51:18.724Z · LW(p) · GW(p)
If you want your resurrected self to be "you," then it's up to you to decide if your values are satisfied. A soul is something you made up. You are not dimly perceiving something simple and fundamental. The heuristic you use is not an approximation of something nicer. It's just what it is. If you value having simple and elegant values, there's not much you can do besides abandoning that one altogether.
If you just want to know what you would consider your resurrected self, because you have never been in that situation so you're not sure what you'd think, I'd say most of it is about memory. If someone with a completely different personality inherits your memory, I think they'd probably think of themselves as "you". Similarly, if your memory was replaced with someone else's, you'd start thinking of yourself as them. You would have a different personality, but you'd likely think of it as having changed, not being someone else altogether. That's just my guess, though.
If you want to resurrect a loved one, you should probably restrict them to also having a similar personality.
comment by Pentashagon · 2013-11-16T03:09:59.592Z · LW(p) · GW(p)
"You have indeed learned much. But you still have not described the purpose of your boundary-drawing. Do you wish for Ah-Chen's resurrection for yourself, so that you may feel less lonely and grieved, or do you wish it for Ah-Chen's sake, so that she may see the world anew? For these two purposes will give us very different boundaries for what is an acceptable Ah-Chen."
Poor Shen Chun-lieh should have just said he wanted the {CEV} of Ah-Chen + Shen Chun-lieh.
I don't think anything less than CEV or equivalent will actually pinpoint individual identity sufficiently well that we will have no complaints about resurrecting people from the best information available. If I had to manually pinpoint my own identity I would accidentally carve off entire swaths that I don't even know I have and distort the other parts. The physical description of my body at the Planck scale is too limited; I would never change. Ah-Chen brought back from the dead perfectly would be a little girl dying of smallpox; curing the smallpox would destroy her Planck-scale identity. I don't know what level of accuracy is sufficient. I can only assume that it's somewhere between Planck-scale (or even atomic-scale) accuracy and the mess that is ~80 years of gene-driven growth and change of a human body.
Replies from: ialdabaoth↑ comment by ialdabaoth · 2013-11-16T03:28:49.580Z · LW(p) · GW(p)
Ah-Chen brought back from the dead perfectly would be a little girl dying of smallpox; curing the smallpox would destroy her Planck-scale identity.
You have captured the essence of the problem, here.
comment by ephion · 2013-11-15T20:20:08.727Z · LW(p) · GW(p)
I've got nothing to contribute, other than that this story really helped resolve some personal crises about identity. This part especially:
"Even now, you are not quite correct. The soul is not a {computational process}, but a {specification of a search space} which describes any number of similar {computational processes}. For example, Shen Chun-lieh, would you still be Shen Chun-lieh if I were to cut off your left arm?"
Thank you for writing this.
comment by NancyLebovitz · 2013-11-15T15:17:06.178Z · LW(p) · GW(p)
I may be kidding myself, but I think of my identity as being at least as much tied up in something about how my experience usually feels as it's tied up with my memory.
I do care a lot about my knowledge of golden age sf, and was upset when I lost access to it after trying Wellbutrin briefly. (I don't know how often this sort of thing happens, but it damaged my access to long-term memory for months. It was bad for my short-term memory, too.) However, I think I'd still be me in some important sense if I cared about something else the way I care about sf, and wouldn't be me if I cared about sf in some other way. This is getting hard to define, because when I think about it, I'm not sure about other ways of caring about sf. There are other people with much better memories of the details, and I wouldn't mind having that. I'm pretty sure I'd still be me if I could put a lot of work into trying to figure out who Severian's parents are. (Gene Wolfe, Book of the New Sun). I'm not sure I'd be me if I developed a huge preference for science fiction vs. fantasy or vice versa.
Here's one: a major thing I want from sf is the feeling of spending some time in a world which is different from and more interesting than this world. I can enjoy nitpicking the world-building, but it's not a primary pleasure.
A while ago, I tried D-phenylalanine, and I dropped it because I didn't feel like me. Sorry, too long ago to remember details.
I have a sense of rightness which drives the way I do calligraphy. I wouldn't want to lose that, but having a sense of rightness is an important part of how I approach creativity and I'd want to have something else it applied to. I'm not sure everyone else does it that way.
It's not that memory or physical continuity are nothing to me, but I can tell I'm me because I feel like me. If I became someone who found their identity in their memories, I'd be someone else. And if you resurrected someone who looked like me and did calligraphy like me, but who found their identity in their memory, you've gotten it wrong, at least by my standards. Not that the pseudo-me would necessarily care, and I'm not sure about whether you're obligated to care.
Possibly one of the ways you can tell I'm me is that I'm not taking a crack at the possibly harder question of what you'd want from resurrecting someone else.
comment by Brillyant · 2013-11-14T21:59:45.872Z · LW(p) · GW(p)
Excellent job on this post! It is very well written, with some awesome & very memorable passages. (And it's going to make me think about the nature of identity way too much over the next few days... :)
I watched a couple of lectures from this course. It really helped me approach the issue of identity (and death) from a new perspective. Specifically, I think memories are the defining characteristic of identity.
From my recall, Kagan gave the example of someone who lived forever, but whose memory was fully erased every X years. Who would they be at any given moment? It seems to me, in that case, they would lose identity each time their memories were fully erased. You'd have a completely new person after each "reboot". Even if personality and every other aspect of identity were perfectly preserved, memories are the key in my view. (It seems really obvious to me now [and maybe it is really obvious to most people], but I remember it shifting my understanding pretty significantly at the time I first encountered the idea.)
Replies from: ialdabaoth, Gunnar_Zarncke↑ comment by ialdabaoth · 2013-11-14T22:14:35.314Z · LW(p) · GW(p)
From my recall, Kagan gave the example of someone who lived forever, but whose memory was fully erased every X years. Who would they be at any given moment? It seems to me, in that case, they would lose identity each time their memories were fully erased. You'd have a completely new person after each "reboot".
What if, instead of perfect erasure, those memories were simply altered slightly, every time they recalled them - thus creating an ever-shifting continuum of identities, each subtly different from the last? When is someone a completely new person, then?
Replies from: Brillyant↑ comment by Brillyant · 2013-11-14T22:42:31.072Z · LW(p) · GW(p)
I don't know. I suppose that would feel a lot like what we feel in our current state, since, as you point out, memory recall isn't flawless. I guess we are always shifting identities; re-engaging with our memories, preserved at whatever level of fidelity they may be, in each present moment.
The question of "when we become a new person" seems to be asking for something that may not be possible to define or answer. It feels like the only answer that makes sense is that we are a completely new person perpetually, and that static identity, of any kind, is only a (pretty damn persistent) illusion.
↑ comment by Gunnar_Zarncke · 2013-11-14T22:22:50.674Z · LW(p) · GW(p)
Identity seems to be a bit more than memory. Consider this post, which explicitly tries to bridge a memory gap:
http://lesswrong.com/lw/itx/how_do_i_backup_myself/
One avenue to this end is collective memory which is the topic of these comments:
http://lesswrong.com/lw/itx/how_do_i_backup_myself/9we3
Replies from: Brillyant↑ comment by Brillyant · 2013-11-15T14:41:00.925Z · LW(p) · GW(p)
Identity seems to be a bit more than memory. Consider this post, which explicitly tries to bridge a memory gap:
I'm not sure that is "identity" in the way I'm defining it.
If pre-memory-erase me writes down everything he can about himself in extreme detail -- events he has experienced, how they made him feel, what goals he was interested in, what he had learned about how to optimize his pursuits, etc. -- and then post-memory-erase me is given those volumes, it still seems to me the essence of identity is lost.
I'm not sure if I'm using it correctly, but the term that comes to mind would be qualia -- the pre-erase me and post-erase me will be experiencing fundamentally different conscious subjective experiences.
The process described above (manual memory back-up plan) would go a long way to making the two mes seem the same from the outside, I think. But they'd be different from the perspective of each me. I can imagine pre-erase me saying, "I'll write this stuff for him, so he can be like me -- so he can become me", where post-erase me might say, "I'm glad he wrote this stuff down because it is interesting and helpful...we sure do have a lot in common. But it's weird: It's almost like he wants to become, and take over, me and my body. That can't really happen though, because I am me, and he cannot be me."
comment by jockocampbell · 2013-11-19T18:00:43.912Z · LW(p) · GW(p)
Excellent post.
I have pondered the same sort of questions. Here is an excerpt from my 2009 book.
My father is 88 years old and a devout Christian. Before he became afflicted with Alzheimer's he expected to have an afterlife where he would be reunited with his deceased daughter and other departed loved ones. He doesn't talk of this now and would not be able to comprehend the question if asked. He is now almost totally unaware of who he is or what his life was. I sometimes tell him the story of his life, details of what he did in his working life, stories of his friends, the adventures he undertook. Sometimes these accounts stir distant memories. I have recently come to understand that there is more of 'him' alive in me than there is in him. When he dies, were he to enter the afterlife in his present state and be reunited with my sister, he would not recognize or remember her. Would he be restored to some state earlier in his life? Would he be the same person at all?
I originally wrote this to illustrate problems with the religious idea of resurrection. I now believe that this problem of identity is common to all complex evolving systems, including 'ourselves'. For example, species evolve over their lifetimes, and although we intuitively know that we are identifying something distinct when we name a species such as Homo sapiens, the exact nature of the distinction is slippery. The debate in biology over the definition of species has been long, heated, and unresolved. Some definitions of species are attempts along the lines of "interbreeding populations that do not overlap with other populations." However, this is a leaky definition. For example, it has recently been found that modern human populations contain some Neanderthal DNA. Our 'species' interbred in the past; should we still be considered separate species?
Replies from: buybuydandavis↑ comment by buybuydandavis · 2014-04-21T19:55:39.077Z · LW(p) · GW(p)
I sometimes tell him the story of his life, details of what he did in his working life, stories of his friends, the adventures he undertook.
That seems like a good thing for people to do for themselves. Make a bunch of videos recounting your life. Useful if the mind falters, and useful even if it doesn't falter so much. Our recollections no doubt wander over time. Even without any claim about which recollection is more/less accurate, they're all useful data. At least to someone looking to reminisce.
Replies from: Baughn↑ comment by Baughn · 2015-02-13T16:33:13.426Z · LW(p) · GW(p)
A rather large fraction of my discussions happen via IRC; I log every bit of it, and carefully back the logs up.
Occasionally, I go back and read some random fraction of the logs. It is usually a valuable experience. I am doing so right now, albeit without IRC.
comment by buybuydandavis · 2013-11-15T19:55:41.628Z · LW(p) · GW(p)
"Is it really Ah-Chen?" is a question of value, which is up to Shen Chun-lieh in the first place.
That he, or we, have value algorithms that get confused and contradictory in situations that humans have never faced is hardly surprising.
Values are choices. Identity masquerades as a fact, but it is fundamentally about value, and therefore choice as well.
Replies from: ialdabaoth↑ comment by ialdabaoth · 2014-04-21T16:34:45.730Z · LW(p) · GW(p)
"Is it really Ah-Chen?" is a question of value, which is up to Shen Chun-lieh in the first place.
That he, or we, have value algorithms that get confused and contradictory in situations that humans have never faced is hardly surprising.
Values are choices. Identity masquerades as a fact, but it is fundamentally about value, and therefore choice as well.
This is brilliantly succinct, and I am stealing this explanation. Thank you for articulating it.
comment by linkhyrule5 · 2013-11-21T08:03:41.276Z · LW(p) · GW(p)
Both questions seem to boil down to the hard question of continuity-of-consciousness. When I say I want someone resurrected, I mean that I want the do-what-I-mean equivalent of pressing "play" on a paused movie: someone resuming their life as if they had never left it.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-21T14:37:19.664Z · LW(p) · GW(p)
Can you provide some examples of what "resuming their life as if they had never left it" looks like?
Right now, the image in my mind is (e.g.) I wake up in the morning, make lunch plans with my husband, start working on the presentation for my client, die, am resurrected ten years later, finish the presentation for that client, and have lunch with my husband... is that what you have in mind as well?
Replies from: linkhyrule5↑ comment by linkhyrule5 · 2013-11-22T21:32:45.409Z · LW(p) · GW(p)
That would be ideal. In practice, I would settle for "die, am resurrected ten years later, suffer a week's worth of culture shock, am re-hired (or, if we're past the Singularity, go do something interesting), have lunch with my cooperating husband", etc.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-11-22T23:14:02.919Z · LW(p) · GW(p)
Fair enough; thanks for clarifying.
For my own part, I think the "ideal" version would terrify me, but the settle-for version I could tolerate.
comment by hyporational · 2013-11-15T13:32:27.896Z · LW(p) · GW(p)
Even assuming just one possible past, I wouldn't care to cryonically preserve my ten-years-younger self. It's possible that somewhere between that point in time and this present moment lies a sweet spot, but I can't really figure out where it is. Even if cryonics works, it's too likely to work so roughly that it wouldn't really matter to present-me who was vitrified in the first place.
With perfect altruism and infinite resources, I have a vision of how this problem could go away completely. Too bad I was born with a brain heavily predisposed to egoism, and grew out of a relatively poor world.
comment by DSherron · 2013-11-15T05:23:32.393Z · LW(p) · GW(p)
After considering this for quite some time, I came to a conclusion (imprecise though it is) that my definition of "myself" is something along the lines of:
- In short form, a "future evolution of the algorithm which produces my conscious experience, which is implemented in some manner that actually gives rise to that conscious experience"
- In order for a thing to count as me, it must have conscious experience; anything which appears to act like it has conscious experience will count, unless we somehow figure out a better test.
- It also must have memory, and that memory must include a stream of consciousness which leads back to the stream of consciousness I am experiencing right now, to approximately the same fidelity as I currently have memory of a continuous stream of consciousness going back to approximately adolescence.
Essentially, the idea is that in order for something to count as being me, it must be the sort of thing which I can imagine becoming in the future (future relative to my conscious experience; I feel like I am progressing through time), while still believing myself to be me the whole time. For example, imagine that, through some freak accident, there existed a human living in the year 1050 AD who passed out and experienced an extremely vivid dream which just so happens to be identical to my life up until the present moment. I can imagine waking up and discovering that to be the case; I would still feel like me, even as I incorporated whatever memories and knowledge he had so that I would also feel like I was him. That situation contains a "future evolution" of me in the present, which just means "a thing which I can become in the future without breaking my stream of consciousness, at least not any more than normal sleep does today".
This also implies that anything which diverged from me at some point in the past does not count as "me", unless it is close enough that it eventually converges back (this should happen within hours or days for minor divergences, like placing a pen in a drawer rather than on a desk, and will never happen for divergences with cascading effects (particularly those which significantly alter the world around me, in addition to me)).
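Restated as a predicate, just to make the structure of these criteria explicit - every function below is a hypothetical placeholder, since the real tests (especially for conscious experience) are exactly the unsolved part:

```python
# Toy restatement of the criteria above; each check is an unimplemented placeholder.
def appears_conscious(candidate):
    """Stand-in for 'acts as if it has conscious experience'."""
    raise NotImplementedError

def memory_chain_reaches_back(candidate, reference_self):
    """Stand-in for 'remembers a stream of consciousness leading back to me now'."""
    raise NotImplementedError

def no_unreconverged_divergence(candidate, reference_self):
    """Stand-in for 'never diverged from me, or any divergence washed back out'."""
    raise NotImplementedError

def counts_as_me(candidate, reference_self):
    # A candidate must pass all three tests to count as a future "me".
    return (appears_conscious(candidate)
            and memory_chain_reaches_back(candidate, reference_self)
            and no_unreconverged_divergence(candidate, reference_self))
```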
Obviously I'm still confused too. But I'm less confused than I used to be, and hopefully after reading this you're a little less confused too. Or at least, hopefully you will be after reflecting a bit, if anything resonated at all.