Lesswrong Philosophy and Personal Identity
post by Carinthium · 2013-08-23T13:15:56.214Z · LW · GW · Legacy · 55 comments
Although Eliezer has dealt with personal identity questions (in terms of ruling out the body theory), he has not actually, as far as I know, "solved" the problem of Personal Identity as it is understood in philosophy. Nor, as far as I know, has any thinker (Robin Hanson, Yvain, etc.) broadly in the same school of thought.
Why do I think it worth solving? One- Lesswrong has a tradition of trying to solve all of philosophy through thinking better than philosophers do. Even when I don't agree with it, the result is often enlightening. Two- What counts as 'same person' could easily have significant implications for large numbers of ethical dilemmas, and thus for Lesswrongian ethics.
Three- most importantly of all, the correct theory has practical implications for cryonics. I don't know enough to assert any theory as actually true, but if, say, Identity as Continuity of Form rather than of Matter were the true theory, it would mean that preserving only the mental data would not be enough. What kind of preservation is necessary also varies somewhat- the difference in requirements between a Continuity of Consciousness and a Continuity of Psyche theory, for example, should be obvious.
I'm curious what people here think. What is the correct answer? No-self theory? Psyche theory? Derek Parfit's theory in some manner? Or if there is a correct way to dissolve the question, what is that correct way?
55 comments
Comments sorted by top scores.
comment by Kaj_Sotala · 2013-08-23T13:48:52.894Z · LW(p) · GW(p)
I took a stab at dissolving personal identity, and managed to do so to my own satisfaction. Most people seemed to feel like this made progress but did not actually solve the question, however.
Short version of my personal answer: "personal identity" doesn't actually correspond with anything fundamental in the world, but for whatever reason, our brains seem to include planning machinery that's based on subjective expectation ("if I do this, what do I expect to experience as a result?") and which drives our behavior more strongly than abstract reasoning does. Since you can't have subjective expectation without some definition for a "self", our brains always end up having some (implicit or explicit) model for continuity of self that they use for making decisions. But at least in epistemic terms, there doesn't seem to be any reason to assume that one definition would be better than any other. In instrumental terms, some definitions might work better than others, since different conceptions of personal identity will lead to different kinds of actions.
Replies from: Manfred, PrometheanFaun
↑ comment by Manfred · 2013-08-23T14:34:58.753Z · LW(p) · GW(p)
Good short version! I would just like to emphasize that to the extent we care about personal identity, the thing we care about is some really complicated implicit definition dictated by how our brains anticipate things, and that's totally okay.
Replies from: benkuhn
↑ comment by benkuhn · 2013-08-23T16:33:44.148Z · LW(p) · GW(p)
Yes, and the thing that we care about also varies even within a single brain. For instance, my verbal loop/System 2 believes basically Parfit's theory of identity, and given enough time to reflect I would probably make decisions based on it, but my System 1 is still uncomfortable with the idea.
That said, this is still worth arguing about. By analogy to ethics, the thing we (intuitively) care about there is some really complicated implicit definition, but that definition is inconsistent and probably leads to the ethical equivalent of dutch-booking, so many people here choose to overrule their intuitions and go with utilitarianism when the two conflict. There's not necessarily any theory of ethics that's definitively right, but there are certainly theories that are definitively wrong, and trying to construct a logical ethics that coincides as much as possible with our intuitive beliefs will help us iron out bugs and biases in those intuitive beliefs.
The same goes for identity--that's why we should try to construct a theory that we think is consistent and also captures as much of our intuition as possible.
Replies from: torekp
↑ comment by torekp · 2013-08-24T13:36:03.248Z · LW(p) · GW(p)
And the intuitions to be captured are not only those which say "I would anticipate that experience" or "I would not". They also include intuitions about, for example, how much importance it would be reasonable to place on various details of the causal links between me-now and various possible future people. The "self" we intuitively believe in seems to lack any appropriate and real physical or metaphysical foundation - now what?
Like Parfit, I find that my new reflective equilibrium places less importance on personal identity. Specifically, regarding cryonics: ordinary reproduction and cultural transmission look like cheaper and more effective ways of leaving something-of-me in the future world, in the ways I now care about.
Replies from: Ghatanathoah, Carinthium
↑ comment by Ghatanathoah · 2013-10-19T05:41:04.335Z · LW(p) · GW(p)
Like Parfit, I find that my new reflective equilibrium places less importance on personal identity.
Why?
I have read Parfit-type arguments that advocate a reductionist concept of personal identity. I found them convincing. But it did not change my values at all, it just made me think about them more clearly. I came to realize that when I said someone is the "same person" as their past self, what it meant was something like "they have the same memories, personality, and values as the past person." But this didn't change my stance on anything. I still care about the same things I did before, I'm just better at articulating what those things are.
In my view, future people created by the means you mention do not have sufficiently similar memories, personalities, values, and psychological continuity with me to satisfy my desire to continue living. I want there to be other people in the future, but this is purely for idealistic and altruistic reasons, not because of any form of self-interest.
In fact, since studying Parfit's views on population ethics, I've actually come to the conclusion that personal identity is, in some ways, the most important part of morality. I think that the "original sin" of population ethics was attempting to remove it from the equation. I'm not advocating unequal treatment of whichever people end up existing or anything like that. But I do think that a person's identity, in addition to their level of welfare, should determine whether or not their creation makes the world better or worse. A world with a lower total amount of welfare may be better than one with a higher total, if the identities of its inhabitants are different (for instance, I would rate a world of humans with normal values to be better than a world full of wireheads, even if the wireheads are better off, as long as both worlds have positive total utility).
Replies from: torekp
↑ comment by torekp · 2013-10-21T00:52:45.578Z · LW(p) · GW(p)
I want there to be other people in the future, but this is purely for idealistic and altruistic reasons, not because of any form of self-interest.
I think that on the reductionist understanding of personal identity, that distinction breaks down. Consider a fairly typical "altruistic" act: I see a person heavily loaded with packages and I hold the doors open for them. Why? Well, I can see that it would suck badly to have to deal with the doors and packages simultaneously, and that it would suck a lot less to deal with the doors and packages separately. Now consider a fairly typical "selfish" act, where I plan to bring some packages into my building, so I prop the doors open beforehand. Why? Because I can see that it would suck badly to have to deal with the doors and packages simultaneously ... There isn't a lot of attention to the underlying facts of same memories, personality, etc. - the reduction-base for personal identity according to reductionism - in either case. Instead, the focus is on the quality of experiences and activities of the person(s) involved.
If you're a non-reductionist who believes in a fundamental metaphysical ghost-in-the-machine, you could assert that there's some extra step of indirection in the altruistic case: that person's experience would be similar to mine - which distinguishes it from the selfish motivation. But that's not the case for the reductionist, or more precisely, the indirection applies in both cases because neither future experience is fundamentally linked to my-experience-now.
Note that there can be differences in the average intensity or frequency of response to one's own plight, versus that of others, without there being any difference in kind in those cases in which "altruistic" motivations do occur. Similarly, there can be, and typically are, differences in the intensity and frequency of response to one's own near future versus one's farther future.
Replies from: Ghatanathoah
↑ comment by Ghatanathoah · 2013-10-25T05:29:27.381Z · LW(p) · GW(p)
I think that on the reductionist understanding of personal identity, that distinction breaks down. Consider a fairly typical "altruistic" act: I see a person heavily loaded with packages and I hold the doors open for them ... There isn't a lot of attention to the underlying facts of same memories, personality, etc. - the reduction-base for personal identity according to reductionism - in either case. Instead, the focus is on the quality of experiences and activities of the person(s) involved.
A person who has a non-reductionist understanding of personal identity who believes in acting in an impartial fashion towards others would behave in exactly the same way. I don't see how reductionism adds anything.
The argument that we ought to behave in an impartial fashion towards other people because personal identity isn't a coherent concept reminds me of the argument that we ought to not be racist because race isn't a coherent concept. I thought that racism was wrong long before I ever considered whether race was a coherent concept or not, and I thought partiality was wrong before I thought about the coherency of personal identity. I don't see either argument as giving me any additional reason to oppose those things.
But that's not the case for the reductionist, or more precisely, the indirection applies in both cases because neither future experience is fundamentally linked to my-experience-now.
It's not fundamentally linked in some sort of ghost-in-a-machine sense. But "my" future experience is linked in ways that "their" future experience is not. To put it in reductionist lingo, the unit that is processing the current experiences expects to evolve into the unit that will process those future experiences, while retaining many of its original properties.
Another way of putting it is that I think of myself as a four-dimensional object, which has boundaries in both space and time. It's true that these boundaries are fuzzy. They are not sharp, well-defined, or ontologically fundamental. But they are there nonetheless. And saying that FutureMe is the same person as PresentMe but FutureObama is not makes just as much sense as saying that PresentMe is me, but PresentRubberDuckyOnMyDesk is not. The rubber ducky on my desk is a different three-dimensional object than me, and Barack Obama is a different four-dimensional object than me.
Note that there can be differences in the average intensity or frequency of response to one's own plight, versus that of others
If you truly reject the concept of personal identity it's not really possible to respond to anything. The very act of thinking about how to respond "kills" thousands of yous and creates new yous before the thought is even complete. I think that the 4D object concept makes much more sense.
Now, you might wonder why I make such a big deal about this, if I believe that ethics prescribes the exact same behavior regardless of the coherency of personal identity. It's because, as I said in my previous post, in population ethics I believe personal identity is the most important thing there is. For instance, I believe that a world where a person lives a good long life is better by far than one where a person dies and is replaced by a new person who experiences the same amount of wellbeing as the dead person would have if they'd lived. The fact that both scenarios contain the same total amount of wellbeing is not relevant.
Replies from: torekp
↑ comment by torekp · 2013-10-31T23:46:44.413Z · LW(p) · GW(p)
I thought partiality was wrong before I thought about the coherency of personal identity. I don't see either argument as giving me any additional reason to oppose those things.
That's not my argument - rather, I simply point out the highly limited usefulness of dividing the space of concerns into "altruistic" versus "self-interested" categories. These are not two different kinds of concerns (at least to a clear-headed reductionist), they are just two different locations, or directions of concern. Without locating the concern in a history and causal trajectory, and just looking at the felt quality of concern, it's not possible to categorize it as "self" or "other".
You said earlier:
I want there to be other people in the future, but this is purely for idealistic and altruistic reasons, not because of any form of self-interest.
That alleged contrast is what I find wanting.
I don't have any objection to taking a 4D view of objects, including people. Whatever works for the task at hand. I also don't reject the concept of personal identity; I just put it in its place.
For instance, I believe that a world where a person lives a good long life is better by far than one where a person dies and is replaced by a new person who experiences the same amount of wellbeing as the dead person would have if they'd lived.
A lot of what is valuable in life requires a long time-horizon of highly integrated memory, intention, and action. Normally (but not by any necessity) those long spans of highly coherent activity occur within a single person. There is more to life than moment-to-moment well-being. So I would agree that your first scenario is better - in almost all cases.
↑ comment by Carinthium · 2013-08-25T11:33:59.693Z · LW(p) · GW(p)
Why is this so, given that both ordinary reproduction and cultural transmission would mean the loss of a lot of details about yourself? Your genetic code, for a start.
Replies from: torekp
↑ comment by torekp · 2013-08-25T17:47:37.321Z · LW(p) · GW(p)
True. The answer is complex, but rather than writing a book, I'll just say that on reflection, a lot of details about myself don't matter to me any more. They still matter to my unreflective System 1 thought and emotion processes, but cryonics was never very attractive to System 1 in the first place.
↑ comment by PrometheanFaun · 2013-08-27T23:17:20.727Z · LW(p) · GW(p)
I think an important part of the rationalist's plight is attempting to understand the design intents behind these built-in, unapologetic old mechanisms for recognizing ourselves in the world, which any self-preservation machine capable of rationality must surely have. But I don't know if we can ever really understand them; they weren't designed to be understood, and in fact they seem to be designed to permit being misunderstood to a disturbing degree. I find that often when I think "I" have won, finally achieved some sense of self-comprehension sufficient for total consciousness-subconscious integration, I get nauseous and realize that what has really happened is I have been overrun by a rampantly insolent mental process that is no more "me" than a spreading lie would be, something confused and transient and not welcome in the domain of the selfish gene, and I reset.
I find the roots of this abstraction engine reaching out over my mind again. Tentative and carefully pruned, this time.
The hardest part of the process is that the gene's memetic safety mechanisms seem quite tolerant of delusions long after they're planted, though not once they begin to flower. You don't get a warning. If you bloom in the wrong way you will feel not the light of the sun but the incendiary of your mental immune system.
comment by Dentin · 2013-08-23T17:41:52.493Z · LW(p) · GW(p)
It seems to me that most of the confusion that arises about sense of self and identity comes out of the desire for some hard line in the sand, some cutoff point beyond which some entity is or isn't "the same". Unfortunately, that model isn't compatible with what the universe has provided to us; we've been given a hugely complex system, and there's a point where jamming binary descriptors onto it is going to break down. I view it largely as a "word mangling" problem.
I doubt I could write a useful paper or article on this, but I can give my viewpoint in the form of question and answer:
Is a perfect copy of me 'me'? Yes. We are both the same person, both me.
Is an imperfect copy of me 'me'? Maybe. It depends on the amount of difference between us and our utility functions.
Is an older/younger copy of me still 'me'? Maybe. It depends on the amount of difference between us and our utility functions.
If I create a perfect copy of me and wait a week, so that we both collect additional experiences, is that copy me? At this point in time, with high probability, yes, we are still both me. The differences between us acquired in the course of a week are not likely to be so huge as to make us different people.
How much of a difference would there have to be for us to be different people? Probably pretty big. I can't draw a line in the sand easily for this; a difference of 90% of my age is probably guaranteed to be a different person, and 0.1% is probably not.
If you have to choose between one of two copies to keep, how do you do so? Look through the differences between them, figure out which set of experiences is most valuable according to our utility functions, and keep that copy.
I largely subscribe to the 'identity as pattern' concept. My pattern happens to be stored as combined matter and data in a physical body right now; it should be entirely possible and reasonable to move, store, and update that pattern in other media without losing my identity or sense of self. A copy of me as a .tgz archive sitting idle on a hard drive platter is still me; it's just in suspended animation. Similarly, a copy of me in a cryogenic vault stored as biological data is still me.
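A minimal sketch of this graded pattern view, assuming a toy feature-set encoding of a person-pattern and an arbitrary 0.9 overlap threshold (both are illustrative choices, not anything specified above):

```python
# Toy sketch only: encode a person-pattern as a set of features and treat
# "same person" as overlap above a threshold. The encoding and the 0.9 cutoff
# are illustrative assumptions, not part of the comment above.

def pattern_overlap(a: set, b: set) -> float:
    """Jaccard overlap between two feature sets (memories, traits, values)."""
    union = a | b
    return len(a & b) / len(union) if union else 1.0

def same_person(a: set, b: set, threshold: float = 0.9) -> bool:
    return pattern_overlap(a, b) >= threshold

me_now = {f"memory_{i}" for i in range(1000)} | {"values_v1"}
me_next_week = me_now | {f"new_memory_{i}" for i in range(10)}  # a week of experiences

print(same_person(me_now, me_next_week))   # True: small accumulated difference
print(same_person(me_now, {"values_v1"}))  # False: nearly the whole pattern is gone
```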
Replies from: RolfAndreassen
↑ comment by RolfAndreassen · 2013-08-23T19:54:14.070Z · LW(p) · GW(p)
Look through the differences between them, figure out which set of experiences is most valuable according to our utility functions, and keep that copy.
What if the utility functions differ?
Replies from: Dentin
↑ comment by Dentin · 2013-08-23T21:47:29.644Z · LW(p) · GW(p)
The utility functions almost by definition will differ. I intentionally did not address that, as it is an independent question and something that should be looked at in specific cases.
In the case where both utility functions point at the same answer, there is no conflict. In the case where the utility functions point at different answers, the two copies should exchange data until their utility functions agree on the topic at hand (rational agents with the same information available to them will make the same decisions.)
If the two copies cannot get their utility functions to agree, you'd have to decide on a case by case basis. If they cannot agree which copy should self terminate, then you have a problem. If they cannot agree on what they ate for breakfast two weeks ago, then you can probably ignore the conflict instead of trying to resolve it, or resolve via quarter flip.
Replies from: AlexMennen
↑ comment by AlexMennen · 2013-08-23T22:52:32.234Z · LW(p) · GW(p)
rational agents with the same information available to them will make the same decisions.
That is not even close to true. Rational agents with the same information will make the same predictions, but their decisions will also depend on their utility functions. Unlike probabilities, utility functions do not get updated when the agent gets new evidence.
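A toy illustration of that distinction, with made-up numbers: both agents hold the same probability estimate (so they make the same prediction), yet their different utility functions lead them to different decisions.

```python
# Two agents with identical beliefs but different utility functions.
# All probabilities and utilities below are invented for illustration.

p_rain = 0.3  # shared belief: both agents assign the same probability of rain

def expected_utility(action, utility):
    """Expected utility of an action given the shared probability of rain."""
    return p_rain * utility[(action, "rain")] + (1 - p_rain) * utility[(action, "dry")]

# Agent 1 hates getting wet; Agent 2 hates carrying an umbrella.
agent_1 = {("umbrella", "rain"): 5, ("umbrella", "dry"): 4,
           ("no_umbrella", "rain"): -10, ("no_umbrella", "dry"): 6}
agent_2 = {("umbrella", "rain"): 0, ("umbrella", "dry"): -2,
           ("no_umbrella", "rain"): -1, ("no_umbrella", "dry"): 3}

for name, utility in [("agent_1", agent_1), ("agent_2", agent_2)]:
    best = max(["umbrella", "no_umbrella"], key=lambda a: expected_utility(a, utility))
    print(name, "chooses", best)  # same beliefs, different choices
```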
comment by Ben_LandauTaylor · 2013-08-23T13:48:03.768Z · LW(p) · GW(p)
Suppose that, on Monday, the theory of Identity as Continuity of Form is true. What do you expect to see in the world?
Suppose that, on Tuesday, the theory of Identity as Continuity of Matter is true. What do you expect to see in the world? How is it different from what you saw on Monday?
What happens if you taboo the word "identity"?
Replies from: Carinthium, Pentashagon
↑ comment by Carinthium · 2013-08-24T02:27:29.561Z · LW(p) · GW(p)
Let me draw an analogy for you. Suppose a world where free will existed. What do you expect to see?
I'm not sure if this is actually true, but many philosophers would implicitly BELIEVE that only one out of the theories of personal identity makes sense due to flaws in the others- that one clearly represents what humans are. To answer your original question, if only one theory were true I would expect the others to have some sort of flaw or philosophical incoherence which makes them not make sense.
↑ comment by Pentashagon · 2013-08-25T21:57:11.148Z · LW(p) · GW(p)
On Wednesday I'd expect to see a whole lot of immortal copies of people. On Monday they make the copies, and on Tuesday they make themselves as immortal/immutable as possible.
comment by Shmi (shminux) · 2013-08-23T16:50:52.509Z · LW(p) · GW(p)
How do you define the term "correct" when applied to the personal identity models? In a physical sense, as testable and passing the tests better than other models? Or as least self-contradictory? Or as least controversial? As most intuitive? Or what? In the map/territory meta-model, what corresponds to the personal identity in the territory? How do you tell?
Replies from: Carinthium
↑ comment by Carinthium · 2013-08-24T02:33:19.980Z · LW(p) · GW(p)
This is another question worth debating on its own. When I tried to do this on my own, I started by trying to figure out the answers to as many coherent variants of the question as possible separately. The ones I think most important:
-Which variation of personal identity is best valued from an ethical perspective (ethical question)
-Which variation of personal identity is most intuitive out of all coherent theories
There are various ways of looking at it, but my argument for these two is that the first is a necessary component of an ethical theory and that the second closest fits the philosophical idea of 'the answer'.
comment by Shmi (shminux) · 2013-08-23T23:36:57.295Z · LW(p) · GW(p)
Scott Aaronson touched on this issue in his speculative writeup The Ghost in the Quantum Turing Machine:
Suppose it were possible to "upload" a human brain to a computer, and thereafter predict the brain with unlimited accuracy. Who cares? Why should anyone even worry that that would create a problem for free will or personal identity?
[...]
If any of these technologies—brain-uploading, teleportation, the Newcomb predictor, etc.—were actually realized, then all sorts of “woolly metaphysical questions” about personal identity and free will would start to have practical consequences. Should you fax yourself to Mars or not? Sitting in the hospital room, should you bet that the coin landed heads or tails? Should you expect to “wake up” as one of your backup copies, or as a simulation being run by the Newcomb Predictor? These questions all seem “empirical,” yet one can’t answer them without taking an implicit stance on questions that many people would prefer to regard as outside the scope of science.
[...]
I’m against any irreversible destruction of knowledge, thoughts, perspectives, adaptations, or ideas, except possibly by their owner. Such destruction is worse the more valuable the thing destroyed, the longer it took to create, and the harder it is to replace. From this basic revulsion to irreplaceable loss, hatred of murder, genocide, the hunting of endangered species to extinction, and even (say) the burning of the Library of Alexandria can all be derived as consequences.
Now, what about the case of “deleting” an emulated human brain from a computer memory? The same revulsion applies in full force—if the copy deleted is the last copy in existence. If, however, there are other extant copies, then the deleted copy can always be “restored from backup,” so deleting it seems at worst like property damage. For biological brains, by contrast, whether such backup copies can be physically created is of course exactly what’s at issue, and the freebit picture conjectures a negative answer.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-08-23T21:38:59.809Z · LW(p) · GW(p)
Also: http://www.nickbostrom.com/papers/experience.pdf (has been open in a tab for a while now as Something I Really Should Read).
Replies from: Ghatanathoah
↑ comment by Ghatanathoah · 2013-10-17T03:48:53.737Z · LW(p) · GW(p)
I have read this paper and it really isn't that much about identity. What Bostrom is interested in are questions like this: Do two identical brains in identical states of suffering or pleasure count as x or 2x units of suffering or pleasure? (Bostrom argues 2x). Bostrom is pretty much agnostic on whether the two identical brains are the same person. He doesn't make claims on whether the two brains are two separate people experiencing x units of suffering/pleasure each, or whether they are one person experiencing 2x units of suffering/pleasure.
comment by Shmi (shminux) · 2013-08-23T17:11:00.567Z · LW(p) · GW(p)
I would start by imagining possible worlds of humanoids where personal identity does not exist, just to roughly outline the boundaries. Not sure if this has been done in literature, except for science fiction.
For example, let's assume that everyone is perfectly telepathic and perfectly tele-empathetic (telempathic?), so that everyone perceives everyone else's thoughts and feelings unattenuated. Would personal identity be meaningless in a society like that? (SF example: Borg)
Another example: hive-mind structure with non-stupid drones controlled by a superior intelligence. Would the drones have a sense of identity?
Real-life examples: impaired left brain/right brain connections, multiple personality disorders etc. -- do these result in multiple identities in the same body? By what criterion?
Replies from: Carinthium
↑ comment by Carinthium · 2013-08-24T02:38:17.917Z · LW(p) · GW(p)
This assumes that subjective identity is the same thing as personal identity in the philosophical sense, which presumes an answer to the question. Many philosophers (including me at times) have fallen into that trap, and it is a reasonable position, so I won't nitpick that too much.
Another possible way to get rid of personal identity, hypothetically speaking, would be "beings" that had no continuity of anything- psyche, form, consciousness, body, or even genes- over any period of time we would consider significant. Such beings would have an identity at a given point but said identity would cease to be in less time than we could blink.
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-08-24T06:29:47.347Z · LW(p) · GW(p)
This assumes that subjective identity is the same thing as personal identity in the philosophical sense, which presumes an answer to the question.
Not being a philosopher, I don't know the difference.
Replies from: Carinthium
↑ comment by Carinthium · 2013-08-24T09:12:42.230Z · LW(p) · GW(p)
"Subjective identity"- A sense of being a seperate person, and the same person over time.
"Personal identity'- A vague term, defined differently by different philosophers. Generally used to mean what a person "is". A significiant amount of the debate is what coherent definition should be put on it.
Replies from: shminux
↑ comment by Shmi (shminux) · 2013-08-24T18:38:16.907Z · LW(p) · GW(p)
Why is the second term needed? Is this like internal vs external view of identity?
Replies from: Carinthium
↑ comment by Carinthium · 2013-08-25T02:14:56.110Z · LW(p) · GW(p)
I'm not sure about that. The second term is needed because some philosophers dispute that "subjective identity" and "personal identity" are the same thing, Body view and Form view philosophers being the most prominent. A common type of analogy would be that somebody could think they were Napoleon without being Napoleon.
comment by solipsist · 2013-08-23T15:05:06.198Z · LW(p) · GW(p)
If you treat identity as an equality relation, transitive and symmetric closures will force it into a weird and not very useful concept. If you don't treat identity as an equality relation, "identity" is a very confusing word to use. The first case isn't very illuminating, and the second case should taboo the word "identity".
Replies from: PrometheanFaun, solipsist
↑ comment by PrometheanFaun · 2013-08-27T23:42:51.317Z · LW(p) · GW(p)
My reaction to that is we shouldn't be asking "is it me", but "how much of me does it replicate?" Cause, if we make identity a similarity relation, it will have to bridge enough small differentiations that eventually it will connect us to entities which barely resemble us at all.
However, could you expound on how this definition of identity behaves under transitivity and symmetry for us? I'm not sure I've got a good handle on what those constraints would permit.
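A toy illustration of the transitivity worry raised above, assuming an arbitrary ten-trait encoding: every adjacent pair of states counts as similar, yet the endpoints share nothing, so a transitive closure of the similarity relation would identify entities that barely resemble each other.

```python
# Chained similarity is not transitive: each step is a tiny change, but the
# transitive closure links states with no overlap. The trait encoding and the
# "differ in at most two features" rule are assumptions for illustration.

def similar(a: set, b: set, max_diff: int = 2) -> bool:
    return len(a ^ b) <= max_diff  # symmetric difference of feature sets

traits = [f"trait_{i}" for i in range(10)]
chain = [set(traits[i:i + 5]) for i in range(6)]  # each state swaps one trait for a new one

print(all(similar(chain[i], chain[i + 1]) for i in range(len(chain) - 1)))  # True
print(similar(chain[0], chain[-1]))  # False: the endpoints share no traits
# The transitive, symmetric closure of `similar` would nonetheless identify them.
```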
comment by sixes_and_sevens · 2013-08-23T13:55:38.515Z · LW(p) · GW(p)
I haven't tried to solve the hard problem of continuity of personal identity yet, but I've got a couple of hours to kill on a train tonight, so I'll give it a go and report back to you.
comment by Ghatanathoah · 2013-10-16T21:39:46.509Z · LW(p) · GW(p)
I have come to regard the core of personal identity as the portion of our utility function that dictates how desirable changes to the information and mental processes in our brains are (or, in a posthuman future, how desirable changes to whatever other substrate they are running on are). I think this conception can capture pretty much all our intuitions about identity without contradicting reductionist accounts of how our minds work. You just need to think of yourself as an optimization process that changes in certain ways, and rank the desirability of these changes.
When asking whether someone is the "same person" as you, you can simply replace the question with "Is this optimization process still trying to optimize the same things as it was before?," "Would my utility function consider changing into this optimization process to be a positive thing?," and "Would my utility function consider changing into this process to be significantly less bad than being vaporized and being replaced with a new optimization process created from scratch?"
For the issue of subjective experience, when asking the question "What shall happen to me in the future?" you can simply taboo words like "I" and "me" and replace them with "this optimization process" and "optimization-process-that-it-would-be-desirable-to-turn-into." Then rephrase the question as, "What will this optimization process turn into in the future, and what events will impact the optimization-process-that-it-would-be-desirable-to-turn-into?"
Similarly, the question "If I do this, what do I expect to experience as a result?" can translate into "If this process affects the world in some fashion, how will this affect what the process will change into in the future?" We do not need any ontologically fundamental sense of self to have subjective experience, as Kaj_Sotala seems to assert: "What will happen to the optimization-process-that-it-would-be-desirable-to-turn-into that this process will turn into?" yields the same result as "What will happen to me?"
Having one's identity be based on the desirability of changes, rather than on lack of change, allows us to get around the foolish statement that "You change all the time, so you're not the same person you were a second ago." What matters isn't the fact that we change; what matters is how desirable those changes are. Here are some examples (a toy sketch follows the list):
Acquiring memories of positive experiences by means of those experiences happening to the optimization process: GOOD
Making small changes to the process' personality so that the process is better at achieving its values: VERY GOOD
Acquiring new knowledge: GOOD
Losing positive memories: BAD
Being completely disintegrated: VERY BAD
Having the process' memory, personality, and values radically changed: VERY BAD
Changing the process' personality so that it attempts to optimize for the opposite of its current values: EXTREMELY BAD
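A minimal sketch of that ranking, treating "is this still me?" as a score over proposed changes to the process; the categories and weights below are invented purely for illustration, not a worked-out theory.

```python
# Toy desirability scores for changes to an optimization process; the specific
# categories and numbers are illustrative assumptions only.
CHANGE_DESIRABILITY = {
    "gain_positive_memory": +1,
    "small_personality_tweak_toward_own_values": +2,
    "gain_knowledge": +1,
    "lose_positive_memory": -1,
    "complete_disintegration": -10,
    "radical_rewrite_of_memory_personality_values": -8,
    "values_inverted": -20,
}

def change_score(changes):
    """Total desirability of a proposed bundle of changes to this process."""
    return sum(CHANGE_DESIRABILITY[c] for c in changes)

print(change_score(["gain_positive_memory", "gain_knowledge"]))        # +2: fine to become this
print(change_score(["radical_rewrite_of_memory_personality_values"]))  # -8: close to disintegration
```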
Some people seem to believe that having a reductionist conception of personal identity somehow makes death seem less bad. I disagree with this completely. Death seems just as bad to me as it did before. The optimization process I refer to as "me" has certain ways that it wants to change in the future and having all of its memories, personalities, and values being completely erased seems like a very bad thing.
And this is not a purely selfish preference. I would prefer that no other processes be changed in a fashion they do not desire either. Making sure that existing optimization processes change in ways that they find desirable is much more important to me than creating new optimization processes.
Or, to put it in less clunky terminology, Death is Bad, both for me and other people. It is bad even if new people are created to replace the dead ones. Knowing that I am made of atoms instead of some ontologically fundamental "self" does not change that at all.
I suspect that the main reason people think having a reductionist concept of personal identity makes personal identity less morally important is that describing things in a reductionist fashion can shut off our ability to make proper judgements about them. For instance, saying "Bob stimulated Alice's nociceptors by repeatedly parting her epidermis with a thin metallic plane" doesn't sound as bad as saying "Bob tortured Alice by cutting her with a knife." But surely no one would argue that torture is less bad because of that.
comment by Armok_GoB · 2013-08-27T21:46:53.815Z · LW(p) · GW(p)
Anecdote; I know from personal experience that another human in a roughly similar culture, with a different remembered childhood and different body (including in one case a different gender) has a greater than 1/(2*10^9) probability of qualifying as you for every practical purpose including survival and subjective anticipation of experiencing events.
Replies from: endoself
↑ comment by endoself · 2013-08-28T02:32:22.498Z · LW(p) · GW(p)
Can you elaborate? This sounds interesting.
Replies from: Armok_GoB
↑ comment by Armok_GoB · 2013-08-28T19:05:01.795Z · LW(p) · GW(p)
I tried to write an article when I first discovered it. It kept growing and never getting more finished or getting to the point because I suck at writing articles (and it's very complicated and hard to explain and easily misunderstood in bad ways), so I gave up. This was something like a year ago. So sadly, no.
comment by linkhyrule5 · 2013-08-24T07:18:13.281Z · LW(p) · GW(p)
Oddly enough, I was about to make a similar post, stating it this way:
"What questions would I have to answer, in order to convince you that I had either solved or dissolved the question of identity?"
The things that came up when I thought about it were:
- Born probabilities
- Continuity of consciousness
- Ebborian splitting question.
- Why aren't I a Boltzmann brain?
On a side note - a possible confusion w.r.t. identity is: You-that-thinks is not necessarily the only thing that gets Utility-Weighting-of-You; if I clone someone perfectly, many people will care about the clone as much as about themselves, but the moment the clone thinks a different thought from them it's not you-that-thinks anymore.
Replies from: Carinthium
↑ comment by Carinthium · 2013-08-24T09:17:43.323Z · LW(p) · GW(p)
Could you clarify these in detail, please? You probably should have made the post rather than me- your version seems a lot better.
As for myself, I want to try and get philosophical ideas together to codify a purely selfish agenda, because I'm still considering whether I want to try and push my brain (as far as it will go, anyway) towards that or a more selfless one. Trying to find a way to encode a purely selfish agenda as a coherent philosophical system is an important part of that- which requires an idea of personal identity.
Replies from: linkhyrule5
↑ comment by linkhyrule5 · 2013-08-24T10:35:03.211Z · LW(p) · GW(p)
To be honest, part of the reason I waited on making the post was because I was confused about it myself :P. But nevertheless. The following questions probably need to be either answered or dissolved by a complete theory of identity/consciousness; the first is somewhat optional, but refusing to answer it shunts it onto physics, where it becomes much stranger. I'm sure there are other questions, too - if nobody responds to this comment I'll probably make a new post regardless.
- Why the Born probabilities? Eliezer suggests that since the Born probabilities seem to be about finding ourselves in one possible universe versus another, it's possible that they could be explained by a theory of consciousness. UDASSA takes a crack at this, but I don't understand the argument well enough to evaluate how well it does.
- Continuity of consciousness. Part of the hard problem of consciousness - why do I wake up tomorrow as myself and not, oh, EY? Does falling asleep constitute "death" - is there any difference at all we can point to?
- The Ebborian splitting question. The Ebborians are a thought experiment Eliezer came up with for the Sequences: they are a species of paper-thin people who replicate by growing in thickness and then splitting. Their brains are spread out across their whole body, so the splitting process necessarily requires the slow split of their brains while they are functioning. The question is: at what point are there "two" Ebborians?
Why aren't I a Boltzmann Brain? - this one's long, so I'm breaking it off.
A Boltzmann brain is a response to the argument that the entire universe might just be a momentary patch of order forming by pure chance on a sea of random events, and that therefore our memories never happened and the universe will probably fall apart in the next few seconds. The response is that it is far more likely that, rather than the entire universe coming together, only your brain is spontaneously created (and then dies moments later, as the physics it relies on to function doesn't exist or is significantly different). The response can be further generalized - the original argument requires something like a Tegmark IV multiverse that contains all universes that are mathematically consistent, but even in a Tegmark I multiverse (simply an infinite universe with random matter patterns in all directions) you would occasionally expect to see copies of your brain forming in empty space before dying in vacuum, and further, that there would be many more of these Boltzmann brains than coherent versions of yourself.
And yet, it seems ludicrous to predict that in the next second, down will become purple and my liver will sprout legs and fly away by flapping them. Or that I will simply die in vacuum, for that matter. So... why aren't I a Boltzmann brain?
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2013-08-24T13:42:41.360Z · LW(p) · GW(p)
Why the Born probabilities?
I see no reason why a non-conscious machine, say a bayesian superintelligence, would not encounter the Born probabilities. As such, consciousness seems unlikely to be related to them - it's too high-level to be related to quantum effects.
Continuity of consciousness. Part of the hard problem of consciousness - why do I wake up tomorrow as myself and not, oh, EY?
How do you define "I" that you can credibly imagine waking up as Eliezer? What difference do you expect in the experience of that Eliezer? I think it's a bug in the human brain that you can even ask that question; I think it's incoherent. You tomorrow is the entity that carries on all your memories, that is most affected by all of your decisions today; it is instrumentally useful to consider yourself tomorrow as continuous with yourself today.
Their brains are spread out across their whole body, so the splitting process necessarily requires the slow split of their brains while they are functioning. The question is: at what point are there "two" Ebborians?
This only is a problem if you insist on being able to count Ebborian individuals. I see no reason why the number of Ebborians shouldn't start out as 1 at the point of split and quickly approach 2 via the real numbers as the experiences diverge. As humans we have no need to count individuals via the reals because in our case, individuals have always been cleanly and unambiguously differentiable; as such we are ill-equipped to consider this situation. I would be highly surprised if, when we actually encountered Ebborians, this question was in any way confusing to them. I suspect it would just be as intuitively obvious to them as counting individuals is to us now.
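One toy way to make that continuous count concrete, assuming divergence is measured as the fraction of non-shared experiences (an illustrative choice, not anything from the comment above):

```python
# The "number of Ebborians" rises smoothly from 1.0 toward 2.0 as the two
# halves' experience sets diverge. The divergence measure is an assumption.

def ebborian_count(experiences_a: set, experiences_b: set) -> float:
    union = experiences_a | experiences_b
    if not union:
        return 1.0
    shared_fraction = len(experiences_a & experiences_b) / len(union)
    return 2.0 - shared_fraction  # 1.0 if everything is shared, 2.0 if nothing is

before_split = {"memory_1", "memory_2"}
print(ebborian_count(before_split, before_split))                             # 1.0
print(ebborian_count(before_split | {"left_1"}, before_split | {"right_1"}))  # 1.5
print(ebborian_count({"left_1", "left_2"}, {"right_1", "right_2"}))           # 2.0
```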
Why aren't I a Boltzmann Brain?
That one seems hard but, again, would equally confound a non-conscious reasoning system. It sounds like you're taking consciousness as the big mystery of the human experience and thus pin on it everything marginally related that seems too mysterious to answer otherwise.
Replies from: linkhyrule5, Carinthium
↑ comment by linkhyrule5 · 2013-08-24T20:50:35.894Z · LW(p) · GW(p)
I see no reason why a non-conscious machine, say a bayesian superintelligence, would not encounter the Born probabilities. As such, consciousness seems unlikely to be related to them - it's too high-level to be related to quantum effects.
Can't speak to this one, insufficient QM knowledge.
How do you define "I" that you can credibly imagine waking up as Eliezer? What difference do you expect in the experience of that Eliezer? I think it's a bug in the human brain that you can even ask that question; I think it's incoherent. You tomorrow is the entity that carries on all your memories, that is most affected by all of your decisions today; it is instrumentally useful to consider yourself tomorrow as continuous with yourself today.
Ehhh, point. That being said, it's possible I've misunderstood the problem - because I'm pretty sure I've heard continuity referred to as a hard problem around here...
This only is a problem if you insist on being able to count Ebborian individuals. I see no reason why the number of Ebborians shouldn't start out as 1 at the point of split and quickly approach 2 via the real numbers as the experiences diverge. As humans we have no need to count individuals via the reals because in our case, individuals have always been cleanly and unambiguously differentiable; as such we are ill-equipped to consider this situation. I would be highly surprised if, when we actually encountered Ebborians, this question was in any way confusing to them. I suspect it would just be as intuitively obvious to them as counting individuals is to us now.
Point.
That one seems hard but, again, would equally confound a non-conscious reasoning system. It sounds like you're taking consciousness as the big mystery of the human experience and thus pin on it everything marginally related that seems too mysterious to answer otherwise.
It seems that in general I have conflated "thinking" with "consciousness", when really the one is computation and the other is some aspect of it that I can't really ask coherent questions of.
So... uh, what is the problem of consciousness?
Replies from: FeepingCreature
↑ comment by FeepingCreature · 2013-08-25T12:55:54.960Z · LW(p) · GW(p)
So... uh, what is the problem of consciousness?
I'm not sure, but it seems to relate to what Eliezer highlighted in how an algorithm feels from inside; the way brains track concepts as separate from the components that define them. If you can imagine consciousness as something that persists as a first-order object, something separate from the brain - because it is hard to recognize "thinking" when looking at your brain - if you can see "I" as a concept distinct from the brain that you are, it makes sense to imagine "I wake up as Eliezer"; you just take the "I" object and reassign it to Eliezer's brain. That's why the sequences are so big on dissolving the question and looking at what experiences the concept actually makes you anticipate.
Afaics, the problem is hard not because of some intrinsic difficulty but because it requires us to recognize "ourselves" in our brains, and consciousness is so central to our experience that it's hard to go up against the intuitions we have about it.
↑ comment by Carinthium · 2013-08-25T09:59:17.371Z · LW(p) · GW(p)
Highlighting for you a section you missed, as I think it important:
"Does falling asleep constitute "death" - is there any difference at all we can point to?"
comment by buybuydandavis · 2013-08-24T01:47:19.350Z · LW(p) · GW(p)
It is not a problem to solve, but a choice to make.
Replies from: None
comment by Carinthium · 2013-08-27T05:16:17.916Z · LW(p) · GW(p)
Come to think of it, in retrospect I should have put more emphasis on the following:
What do people here think of Derek Parfit's theory of personal identity? On the face of it, it seems pretty good as far as it goes, but what criticisms can be validly made of it?
Replies from: Ghatanathoah
↑ comment by Ghatanathoah · 2013-10-19T05:12:10.744Z · LW(p) · GW(p)
I read Parfit's article "The Unimportance of Identity" and was incredibly frustrated. It reminded me of the Matrix Trilogy, in that it started out so well, but then bombed entirely at the end.
I was with him at first. I accept the Reductionist description of how the human mind works. But I became frustrated when he started insisting that identity didn't matter because, for instance, it might be possible to divide me into two identical persons. He argued that I couldn't be identical to two persons at once, but I see no problem with saying that both of the people resulting are identical to me.
I recognize that that is sort of quibbling over definitions. Whether or not you think the two people are identical to you doesn't change any physical facts. But words like "same" and "identical" have such power within the human brain that I think an attempt to detach them from their proper referents will result in bad thinking. I think that Parfit does, in fact, suffer from that at the end of the essay.
But let's overlook that. Parfit and I both agree that what is morally important, what we care about, is that we possess a certain relationship of psychological continuity with one or more future people (I'll call this relationship "R" for short). I would argue that we should use the words "same person" and "personal identity" to refer to R, while Parfit would not. Fine. We agree about what's really important, our only disagreement is about the meanings of words.
But then Parfit throws a total curve ball. He finishes up this chapter by saying that we shouldn't care about R either! He ends by saying that this understanding of identity means that his own death isn't as big a deal, because there will be other people thinking thoughts and having experiences in the future. Why? We just established that R is important, and a world where Parfit is dead doesn't have R! If R is what is important, and a world where Parfit is dead does not have R, then his death must be just as bad as ever, because it removes R from the world!
I was left utterly frustrated by the wrong turn at the end of the essay. Still, the beginning is quite thoughtful.
So basically:
-I agree with Parfit that "R" is what is important, although I go into much more detail about what I think "R" is than he does.
-I disagree with Parfit that the term "Personal Identity" is not an appropriate term to use to describe "R." I think that if you are "R" with a person you are the same person as they are.
-I disagree with Parfit that a reductionist view of personal identity makes death less bad.
comment by RomeoStevens · 2013-08-24T02:28:59.387Z · LW(p) · GW(p)
My identity is probably an illusion. OTOH, I probably wouldn't use a teleporter unless my life was on the line. This doesn't seem like an incoherent position to me. Even if the odds are tiny, the disutility to my subjective experience is very high if I am wrong.
comment by scientism · 2013-08-24T17:50:25.701Z · LW(p) · GW(p)
I think it helps to look at statements of personal narrative and whether they're meaningful and hence whether they can be true or false. So, for example, change is part of our personal narrative; we mature, we grow old, we suffer injuries, undergo illness, etc. Any philosophical conception of personal identity that leads to conclusions that make change problematic should be taken as a reductio ad absurdum of that conception and not a demonstration of the falsity of our common sense concepts (that is, it shows that the philosopher went wrong in attempting to explicate personal identity, not that we are wrong). Statements of personal narrative are inclusive of our conception, birth, events of our life, etc. Most cultures give meaning to post-death statements but it's a clearly differentiated meaning. But I can't meaningfully speak of being in two places at once, of being destroyed and recreated, of not existing for periods of time, etc, so a large range of philosophical and science fiction scenarios are ruled out. (Again, if a philosopher's attempt to explicate personal identity makes these things possible then it is the philosopher who erred, since the common sense concept clearly precludes them; or he/she is now using a novel concept and hence no pertinent inferences follow). If we create a new person and give him the memories of a dead man, we have only played a cruel trick on him, for a statement of personal narrative that includes being destroyed and recreated has no sense ("I didn't exist between 1992 and 1998" isn't like "I was unconscious/asleep between 1992 and 1998" because non-existence is not a state one can occupy).
Note that the meaningfulness of novel statements like "I teleported from Earth to Mars" or "I uploaded to Konishi Polis in 2975" depend entirely on unpacking the meaning of the novel terms. Are "teleported" and "uploaded" more like "travelled" or more like "destroyed and recreated"? Is the Konishi Polis computer a place that I can go to? The relevant issue here isn't personal identity but the nature of the novel term which determines whether these statements are meaningful. If you start from the assumption that "I teleported from Earth to Mars" has a clear meaning, you are obviously going to come to a conclusion where it has a clear meaning. Whether "teleported" means "travelled" or "destroyed and recreated" does not turn on the nature of personal identity but on the relationship of teleportation to space - i.e., whether it's a form of movement through space (and hence travel). If it involves "conversion from matter to information" we have to ask what this odd use of "conversion" means and whether it is a species of change or more like making a description of an object and then destroying it. The same is true of uploading. With cryonics the pertinent issue is whether it will involve so much damage that it will require that you are recreated rather than merely recovered.
comment by Mitchell_Porter · 2013-08-23T14:47:37.409Z · LW(p) · GW(p)
You aren't just "data", you are a particular being who persists in time. But you will never know this via "LessWrong philosophy" if the latter is understood as requiring that you presuppose physics, computation, a timeless multiverse, etc., in order to analyse anything. To get even just a glimpse of this truth, you have to notice that experienced reality involves realities like the existence of someone (you) who "knows" things and for whom time flows; and you may have to kick out certain habits of automatically substituting static abstractions for lived experience, whenever you want to think about reality.
Conceivably a "wrongosopher" could still arrive at such perspectives, if they had tuned into the parts of the Sequences that are about paying attention to the feelings of rightness and wrongness that accompany various thoughts, and if they had tuned out all the "scientistic" conceptual triumphalism that tramples over unscientific subjectivity. But since the habit of treating anything about consciousness that is odd from the perspective of scientific ontology, as just an illusion and/or a sort of computational annotation made by the brain, is very widespread, this hypothetical philosophical prodigy would have to be seeing past the everyday beliefs of contemporary scientific culture, and not just past the everyday beliefs of Less Wrong.
Replies from: Dentin, Luke_A_Somers↑ comment by Dentin · 2013-08-23T17:44:28.048Z · LW(p) · GW(p)
I know what each of those words means, but the amount of information I'm able to pull out of those two paragraphs is very low.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2013-08-24T06:53:58.996Z · LW(p) · GW(p)
I believe he's saying that we have conscious experience, that we have no explanation for it, and that we too easily fall into the fallacy of mistaking our confusion for evidence that it does not exist.
↑ comment by Luke_A_Somers · 2013-08-26T20:53:45.338Z · LW(p) · GW(p)
... you will never know this via "LessWrong philosophy" if the latter is understood as requiring that you presuppose physics, computation, a timeless multiverse, etc., in order to analyse anything.
Good news! You don't!
comment by [deleted] · 2013-08-23T21:39:44.422Z · LW(p) · GW(p)
The philosophy that has made the boldest claims about the individual is egoism. More than that, I'll save for an in-progress essay for Less Wrong about the topic.
Max Stirner, The Ego and His Own 1845
Ragnar Redbeard, Might is Right 1890
- Wikipedia
- Might is Right - this is the definitive annotated edition, and cheap!
Dora Marsden, The Freewoman / The New Freewoman / The Egoist 1911-1919