The Useful Definition of "I"
post by plex (ete) · 2014-05-28T11:44:23.789Z · LW · GW · Legacy · 47 comments
aka The Fuzzy Pattern Theory of Identity
Background reading: Timeless Identity, The Anthropic Trilemma
Identity is not based on continuity of physical material.
Identity is not based on causal links to previous/future selves.
Identity is not usefully defined as a single point in thingspace. An "I" which only exists for an instant (i.e. zero continuity of identity) does not even remotely correspond to what we're trying to express by the word "I" in general use, and refers instead to a single snapshot. Consider the choice between putting yourself in stasis for eternity and living normally; a definition of "I" which prefers self-preservation by literally preserving a snapshot of one instant is massively unintuitive and uninformative compared to a definition which leads us to preserve "I" by letting it keep living, even if that involves change.
Identity is not the current isolated frame.
So if none of those are what "I"/Identity is based on, what is?
Some configurations of matter I would consider to be definitely me, and some definitely not me. Between the two extremes there are plenty of border cases wherever you try to draw a line. As an exercise: five minutes in the past ete, 30 years in the future ete, alternate branch ete brought up by different parents, ete's identical twin, ete with different genetics/body but a mindstate near-identical to current ete, sibling raised in same environment with many shared memories, random human, monkey, mouse, bacteria, rock. With sufficiently advanced technology, it would be possible to change me between those configurations one atom at a time. Without appeals to physical or causal continuity, there's no way to cleanly draw a hard binary line without violating what we mean by "I" in some important way or allowing, at some point, a change vastly below perceptible levels to flip a configuration from "me" to "not-me" all at once.
Or, put another way, identity is not binary, it is fuzzy like everything else in human conceptspace.
It's interesting to note that common language use suggests this is, in some sense, already widely known. When someone has been changed by an experience, or is acting in a way that doesn't fit your model of them, it's common to say something along the lines of "he's like a different person" or "she's not acting like herself". These phrases, along with the qualifier!person nomenclature that is becoming a bit more frequent, all hint at different versions of a person having only partially the same identity.
Why do we have a sense of identity?
For something as universal as the feeling of having an identity, there's likely to be some evolutionary purpose, and it's fairly straightforward to see why it would increase fitness. The brain learns by reward and punishment, connecting behaviours to their helpful or harmful consequences. This works well for many things but struggles with long-term goals, since the reward for a right decision (or punishment for a wrong one) arrives long after the choice and so is only weakly connected and reinforced. Creatures which can readily identify future and past continuations of themselves using an "I" concept have a ready-built way to handle delayed-gratification situations: evolution only needs to connect "doing this means the future 'I' can expect a reward" to some present reward in order to encourage longer-term thinking, rather than separately connecting each possible long-term benefit to each behaviour. Kaj_Sotala's attempt to dissolve subjective expectation and personal identity contains another approach to understanding why we have a sense of identity, as well as many other interesting thoughts.
So what is it?
If you took yourself from right now and changed your entire body into a hippopotamus, or uploaded yourself into a computer, but still retained full memories/consciousness/responses to situations, you would likely consider yourself a more central example of the fuzzy "I" concept than if you made the physically relatively small change of removing your personality and memories. General physical structure is not a core feature of "I", though it is a relatively minor part of it.
Your "I"/identity is a concept (in the conceptspace/thingspace sense), centred on current you, with configurations of matter being considered more central to the "I" cluster the more similar they are to current you in the ways which current you values.
To give some concrete examples: Most people consider their memories to be very important to them, so any configuration without a similar set of memories is going to be distant. Many people consider some political/social/family group/belief system to be extremely important to them, so an alternate version of themselves in a different group would be considered moderately distant. An Olympic athlete or model may put an unusually large amount of importance on their body, so changes to it would move a configuration away from their idea of self quicker than for most.
This fits very nicely with the intuition that changing core beliefs or things you care about deeply (e.g. an athlete becoming disabled, a large change in personal circumstances) makes you, in at least some sense, a different person, and as far as I can tell it does not fall apart or prove useless in the ways the alternative definitions do.
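To make the shape of this definition concrete, here is a minimal sketch of one way it could be operationalized as a weighted similarity score. The feature names, weights, and simple linear distance are hypothetical illustrations chosen for this sketch, not anything specified in the post.

```python
# A minimal illustrative sketch (not the post's own method): one way to operationalize
# "more central to the 'I' cluster the more similar it is to current-you in the
# ways which current-you values". All features and weights below are hypothetical.

def identity_score(current_self: dict, candidate: dict, weights: dict) -> float:
    """Return a fuzzy 'is this me?' score in [0, 1].

    current_self / candidate: feature -> value in [0, 1] along some axis
    weights: feature -> how much current-you values that axis (>= 0)
    """
    total_weight = sum(weights.values())
    if total_weight == 0:
        return 1.0  # nothing is valued, so everything counts as "me"
    weighted_distance = sum(
        w * abs(current_self.get(f, 0.0) - candidate.get(f, 0.0))
        for f, w in weights.items()
    )
    return 1.0 - weighted_distance / total_weight


# Hypothetical example: memories and personality matter far more than hair length.
current = {"memories": 1.0, "personality": 1.0, "hair_length": 0.5}
weights = {"memories": 10.0, "personality": 8.0, "hair_length": 0.1}

tomorrow_me = {"memories": 0.98, "personality": 0.99, "hair_length": 0.5}
amnesiac_me = {"memories": 0.05, "personality": 0.7, "hair_length": 0.5}

print(identity_score(current, tomorrow_me, weights))  # close to 1: clearly "me"
print(identity_score(current, amnesiac_me, weights))  # much lower: only "kinda me"
```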
What consequences does this theory have for common issues with identity?
- Moment-to-moment identity is almost entirely, but not perfectly, retained.
- You will wake up as yourself after a night's sleep in a meaningful sense, but not quite as central an example of current-you's "I" as you would be after a few seconds.
- The teleporter to Mars does not kill you in the most important sense (unless somehow your location on Earth is a particularly core part of your identity).
- Any high-fidelity clone can be usefully considered to be you, however it originated, until it diverges significantly.
- Cryonics or plastination does present a chance of bringing you back (conditional on the information being preserved with reasonable fidelity), especially if you consider your mind rather than your body as core to your identity (and so would not consider being an upload a huge change).
- Suggest more in comments!
Why does this matter?
Flawed assumptions and confusion about identity seem to underlie several notable difficulties in decision theory, anthropic issues, and less directly problems understanding what morality is, as I hope to explore in future posts.
Thanks to Skeptityke for reading through this and giving useful suggestions, as well as writing this, which meant there was a lot less background I needed to explain.
47 comments
Comments sorted by top scores.
comment by Stuart_Armstrong · 2015-12-14T12:20:43.395Z · LW(p) · GW(p)
Most people consider their memories to be very important to them, so any configuration without a similar set of memories is going to be distant. Many people consider some political/social/family group/belief system to be extremely important to them, so an alternate version of themselves in a different group would be considered moderately distant. An Olympic athlete or model may put an unusually large amount of importance on their body, so changes to it would move a configuration away from their idea of self quicker than for most.
This hints at the interesting idea that the concept of I could be strongly influenced by culture and personality.
comment by David_Gerard · 2014-05-29T17:01:03.752Z · LW(p) · GW(p)
How about the Sequences-favoured definition of identity that says that you should feel like sufficiently high-fidelity copies of you are actually the same you, rather than being near-twins who will thenceforth diverge? (As espoused in Timeless Identity and Identity Isn't In Specific Atoms.) This has always struck me as severely counterintuitive, and its consistency doesn't remedy that; if two instances can fork they will, barring future merges being a meaningful concept (something that I don't think anything in the Sequences shows).
Replies from: Matthew_Opitz, torekp, ete
↑ comment by Matthew_Opitz · 2014-05-31T12:59:14.505Z · LW(p) · GW(p)
When you (or the sequences) say that two copies of me "should" feel the same to me, is that word "should" being used in a normative or a descriptive sense? What I mean is, am I being told that I "ought" to adopt this perspective that the other copy is me, or am I being told that I will naturally experience that other copy's sensory input as if it were the input going into my original body?
Replies from: David_Gerard
↑ comment by David_Gerard · 2014-05-31T20:44:57.677Z · LW(p) · GW(p)
It reads to me like an "ought".
Replies from: Matthew_Opitz
↑ comment by Matthew_Opitz · 2014-06-03T12:16:26.497Z · LW(p) · GW(p)
Okay, thank you for clarifying that. In that case, though, I fail to see the support for that normative claim. Why "should" my copy of me feel like me (as if I have any control over that in the first place)? As far as I see it, even if my copy of me "should" feel like me in a normative sense, then that won't matter in a descriptive sense because I have no way of affecting which copy of me I experience. Descriptively, I either experience one or the other, right? Either the teleporter is a suicide machine or it isn't, right?
Things can be in a super-position of states while there is still uncertainty, but at some point it will come down to a test. Let's say the teleporter makes a copy of me on Mars, but doesn't destroy the original. Instead, scientists have to manually shoot the original a few minutes after the teleportation experiment. What do I experience? Do I immediately wake up on Mars? Do I wake up still on Earth and get shot, and my experience of anything ceases? Do I wake up on Earth and get shot, and then my subjective experiencing instantly "teleports" to the only copy of me left in the universe, and I wake up on Mars with no memory or knowledge that the other me just got shot? What do you predict that I will experience?
Replies from: CAE_Jones, David_Gerard
↑ comment by CAE_Jones · 2014-06-04T17:38:03.521Z · LW(p) · GW(p)
If it is possible for the copying to be nondestructive, then why make it destructive?
At the moment, all we get is multiple versions from the same base entity, separated by time but not space. So it's frustrating that most of the thought experiments avoid the complement: multiple versions of the same base entity, separated by space but not time. The "Kill you to teleport a new you" scenario comes across as contrived. Let's look at "scan you to teleport a new you, but no one dies", and see what model of identity comes out.
Well, I'd expect that spatially-but-not-temporally-separated copies would lack the memory/body method of communication/continuity that temporally-but-not-spatially-separated instances have. They'd probably share the same sense of identity at the moment of copying, but would gradually diverge.
If it is objectively impossible to determine which is the original (for example, replication on the quantum level makes the idea of an original meaningless), that would differ from the version where a copy gets teleported to Mars, and thus the Martian copy knows it is a copy, and the original knows that it is the original. I don't really know what to make of either scenario, only that I'd expect Martian Me to be kinda upset in the second, but still to prefer it to destructive copying.
Replies from: Matthew_Opitz
↑ comment by Matthew_Opitz · 2014-06-05T13:56:13.984Z · LW(p) · GW(p)
In the case of non-destructive copying, which copy will I end up experiencing? If it is a 50/50 chance of experiencing either copy, then in cases where the copy would inhabit a more advantageous spatial location than the one I was currently in (such as, if I were stuck on Mars and wanted to go back to Earth), it would be in my interest to copy myself many many times via a Mars-Earth teleporter in order to give myself a good probability that I would end up back on Earth where I wanted to be.
Let's say I valued being back home on Earth more than anything else, and I was willing to split whatever legal property I had back on Earth with 100 other copies of me. Then it would make sense for my original self on Mars to tell the scientists: "Copy me 100 times onto Earth. No more, no less, regardless of whatever I, the copy on Mars, say after this, and regardless of whatever the copies on Earth say after this."
I would end up with a very high probability of experiencing one of those copies back on Earth. Of course, all of the copies on Earth would insist that THEY were the successful case of subjective teleportation and that no further teleportation would be required. But they would always say that, regardless of whether I was really one of those experiencing them. That is why I pre-committed to copying 100 times, even if the first copy reports, "Yay! The teleportation was a success! No need to make the other 99 copies!" Because at that point, there is still a 50% chance that I am still experiencing the copy back on Mars—too high for my tastes.
Likewise, the pre-commitment to copy myself no more than 100 times is important because you have to draw the line somewhere. If I had $100,000 in a bank account back on Earth, I'd like to start out with at least $1,000 of that. If you leave it up to the original Mars copy to decide, then the teleportation copying will go on forever. Even after the 100th copying (by which point I might have already been fortunate to get my subjective experience transferred onto maybe the 55th Earth copy or the 78th Earth copy or the 24th Earth copy or the 3rd Earth copy, who knows?), the copy on Mars will still insist, "No! no! no! The entire experiment has been a huge stroke of bad luck! 100 times I have tried to copy myself, and 100 times the coin has landed on tails, so to speak. We must make some more copies until my subjective experience gets transferred over!" At this point, the other copies would say, "That's just what we would expect you to always say. You will never say that the experiment was a success. Very likely the original Matthew Opitz's subjective experience got transferred over to one of us. Which one, nobody can tell from the outside by any experiment, as we will all claim to be that success. But the odds are in favor of one of us being the one that the original Matthew Opitz is subjectively experiencing right now, which is what he wanted all along when he set up this experiment. Sorry!"
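To spell out the arithmetic behind the coin-flip picture: under the assumption of an independent 50/50 "transfer" per copy (an assumption questioned below), the chance of still being the Mars instance shrinks geometrically, as in this illustrative sketch; it describes only that assumed model, not how subjective experience actually works.

```python
# Purely illustrative sketch of the 50/50-per-copy model assumed above.
# If each non-destructive copy is an independent coin flip for "where I wake up",
# the chance of still finding yourself as the Mars instance after n copies is 0.5**n.

p_after_1_copy = 0.5 ** 1        # 0.5, "too high for my tastes"
p_after_100_copies = 0.5 ** 100  # roughly 8e-31 under this model

print(p_after_1_copy, p_after_100_copies)
```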
But then, what if tails really had come up 100 times in a row? What if one's subjective experience really was still attached to the Martian copy? Or what if this idea of a 50/50 chance is total bunk in the first place, and subjective experience simply cannot transfer to spatially-separated copies? That would suck.
What if, as the original you on Mars before any of the teleportation copying, you had a choice between using your $100,000 back on Earth to fund a physical rescue mission that would have a 10% chance of success, versus using that $100,000 back on Earth to fund a probe mission that would send a non-destructive teleportation machine to Mars that would make a copy of you back on Earth? If you believe that such an experiment would give you a 50/50 chance of waking up as the Earth copy, then it would make more sense to do that. However, if you believe that such an experiment would give you a 0% chance of waking up as the Earth copy, then it would make more sense just to do the physical rescue mission attempt.
These questions really do have practical significance. They are not just sophistry.
↑ comment by David_Gerard · 2014-06-04T13:14:41.132Z · LW(p) · GW(p)
Yeah, it doesn't work for me either. But apparently there are people for whom it does.
Replies from: Matthew_Opitz
↑ comment by Matthew_Opitz · 2014-06-04T16:41:31.819Z · LW(p) · GW(p)
For the people for whom it does seem to make sense to identify with copies of themselves, do those people come to that conclusion because they anticipate being able to experience the input going into all of those copies somehow? Or is there some other reason that they use?
Replies from: David_Gerard
↑ comment by David_Gerard · 2014-06-04T22:27:38.803Z · LW(p) · GW(p)
I don't understand it. I hypothesise that they take on the idea ("you suggest I should think that? oh, OK") and have a weak sense of self, but I don't have much data to go on except handling emails from distressed Basilisk victims (who buy into this idea).
↑ comment by torekp · 2014-05-30T01:03:49.953Z · LW(p) · GW(p)
should feel like [...]
Interesting that you put it this way, rather than "should think that". If indeed the Sequences say "should feel like", I agree with them. But if they say we "should think that" the copies are the same you, that's mistaken (because it either violates transitivity of identity, or explodes so many practices of re-identification that it would be much better to coin a new word).
A few words on "feel" and "like": by "feel" I take it we mean both that one experiences certain emotions, and that one is generally motivated to protect and enhance the welfare of this person. By "like" we mean that the emotions and motivations are highly similar, and clearly cluster together with self-concern simpliciter, even if there are a few differences.
↑ comment by plex (ete) · 2014-05-29T17:45:57.443Z · LW(p) · GW(p)
Fuzzy Pattern Identity agrees with the ideas put forward in the posts you link to.
It is counterintuitive, but our intuitions can be faulty, and on close inspection the other candidates (physical and causal continuity) for a useful definition of I break down at important edge cases.
Consider: Imagine you are about to be put into a cloning device which will destructively scan your current body and create two perfect copies. Beforehand, both of the expected results of this procedure are reasonably referred to as "you", just as you would normally refer to a version of yourself from a day in the future. Immediately after the procedure, "you" share vastly more in common with your clone than with past or future versions along your physical continuity, and your responses are more strongly entangled in a decision-theoretic sense.
Replies from: David_Gerard, David_Gerard
↑ comment by David_Gerard · 2014-05-29T23:24:48.676Z · LW(p) · GW(p)
Yes, but this leads to trivially obvious problems like this one (aliens attempt blackmail by making and torturing thousands of copies of you). I submit that the proposed solution fails intuition badly enough and obviously enough that it would require removing people's intuition to be acceptable to them, and you're unlikely to swing this merely on the grounds of consistency. You'd need convincing, non-contrived real-life examples of why this is obviously a superior solution as a practical philosophy.
Replies from: ete
↑ comment by plex (ete) · 2014-05-30T08:13:53.524Z · LW(p) · GW(p)
That problem is almost as strong when it's other humans being simulated; I'm not sure considering same pattern = same person makes it notably worse.
Additionally, if I had strong reason to believe that my decision to surrender was not in some way entangled (even acausally) with their decision to mass-torture simulations, I might surrender in either case, since I don't see a strong reason to prefer the preferences of the real humans to the simulated ones in the least convenient possible world.
However, in general, it's handy to have a pre-commitment to fighting back as strongly as possible in these kinds of blackmail situations, because it discourages extreme harm being used as leverage. If I think that my disposition to surrender would make those tactics more likely to have been used against me, that provides a basis to not surrender despite it being "better" in the current situation.
I don't think it fails intuition quite as thoroughly as you're suggesting, but I take the point that good examples of how it works would help. However, real-life examples are going to be very hard to come by, since fuzzy pattern theory only works differently from other common identity theories in situations which are not yet technologically possible and/or involve looking at other Everett branches. In every normal everyday scenario it acts just like causal continuity, but unlike causal or physical continuity it does not fail the consistency test under the microscope (and, in my opinion, does less badly on intuition) when you extend it to handle important edge cases which may well be commonplace, or at least possible, in the future. The best I've done is link to things which show how other ways of thinking about identity fall apart, and that this way, as far as I have been able to tell, does not, but I'll keep looking for better ways to show its usefulness.
↑ comment by David_Gerard · 2014-05-30T13:12:58.906Z · LW(p) · GW(p)
I'll note also that intuitively, the two instances of me will have more in common with each other than with me the day before ... but they immediately diverge, and won't remerge, so I think that each would intuit the other as its near-identical twin but nevertheless a different person, rather than the same "I".
If remerging was a thing that could happen, that would I think break the intuition.
(I haven't looked, but I would be surprised if this hadn't been covered in the endless discussions on the subject on Extropians in the 1990s that Eliezer notes as the origin of his viewpoint.)
comment by chaosmage · 2014-05-28T14:25:55.480Z · LW(p) · GW(p)
Are you suggesting a concept can conceive of concepts?
If so, I'd like to see a discussion of how a concept can do that, and what separates a concept that can from one that can't.
If not, that which is considered to be me (a concept), and that which considers something to be me, are two separate things. You seem to be suggesting they can be the same, implying the latter can also be a concept.
I disagree with that implication. It seems to me that some things (such as higher mammal brains) can somehow create local conceptspaces, and identify concepts inside that local conceptspace with each other, i.e. identify an image on a retina with a concept of some particular person. So my concept of myself is what is referred to as myself inside the conceptspace of the brain that types this.
Importantly, this does not need concepts to be actors in any way - concepts such as ourselves (the concept of you and the concept of me, meeting in the local conceptspaces of everyone who comprehends this string of letters in the way the brain that typed this intended it) can simply be data.
This may have unsatisfactory implications - for example, if some brain believes it is allowed to make decisions for itself and then decides its concept of itself is identical to its concept of you, it will believe itself allowed to make decisions for itself/you.
However, it helps look past personal identity into the question of what those conceptspaces are, and if those can in any way be considered to persist from one moment to the next. They're evidently able to have an effect upon the concepts inside themselves (allowing or disallowing them to be identified with each other, at least), so they can't be simply data, which in my book means they can't be concepts. But if they're things, not concepts, that makes them fundamentally different from our concepts of ourselves.
Replies from: ete
↑ comment by plex (ete) · 2014-05-28T15:55:05.851Z · LW(p) · GW(p)
Are you suggesting a concept can conceive of concepts?
That's not what I'm meaning to suggest, at least not directly. A configuration/thing in thingspace can contain a representation of a concept in conceptspace (concrete example: a computer running a program which sorts images into color or black/white - a thing containing a physical representation of a concept).
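To make that parenthetical example concrete, here is a hypothetical sketch of such a sorting program; the pixel format and tolerance are invented for illustration, but the point is that the running program is a thing physically embodying the concept it sorts by.

```python
# Hypothetical sketch of the example above: a program that sorts images into
# "color" vs "black/white" is a thing containing a runnable representation of a concept.

def is_color_image(pixels, tolerance=8):
    """Classify an image, given as a list of (R, G, B) tuples, as color or not.

    A pixel counts as grey when its R, G and B values are within `tolerance`
    of each other; the image is "color" if any pixel is not grey.
    """
    return any(max(p) - min(p) > tolerance for p in pixels)


greyscale = [(10, 10, 10), (128, 130, 127), (255, 255, 255)]
colorful = [(10, 10, 10), (200, 40, 40), (255, 255, 255)]

print(is_color_image(greyscale))  # False: sorted into "black/white"
print(is_color_image(colorful))   # True: sorted into "color"
```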
Current-you is a thing which contains a representation of "I", which is used in the algorithm to determine "is this me". "I" seems most usefully defined as a fuzzy concept, rather than a specific instance or "single frame" of self, physical continuity, or causal continuity.
The near-central examples (things) of the "I" concept will all have their own slightly different concept of "I" with themselves at the center, so in a sense the concept is approximately self-referential (in that core examples of each concept of "I" contain representations of very similar concepts of "I"), but ultimately each version of the "I" concept is defined by a thing, not a concept.
my concept of myself is what is referred to as myself inside the conceptspace of the brain that types this
I think our reasonings are compatible; I agree pretty much entirely with the above.
I don't think concepts are directly actors - a concept without physical embodiment can have no effect on the physical world.
Maybe it would help if I differentiated more clearly between the "I" of "this is my identity, here are things I identify as me" and the more messy consciousness/subjective experience issues which I'm not attempting to address directly here?
Replies from: chaosmage, None
↑ comment by chaosmage · 2014-05-28T16:24:40.561Z · LW(p) · GW(p)
Interesting.
If "Current-you is a thing", then it can't be identical with any other thing, because things cannot be identical to things. You can place an equality operator between two concepts, but not between two things. Identity is a property of concepts, not of things. Do you agree?
Replies from: ete
↑ comment by plex (ete) · 2014-05-28T16:39:57.243Z · LW(p) · GW(p)
hm, to be more clear, current-me is a point in thingspace which just happens to exist in the physical universe as I'm typing this, not necessarily a physical object.
One point in thingspace cannot be identical to any different object in thingspace, yes. I'm not sure I understand how the last sentence follows?
A single "frame" of ete five minutes ago is not equal to the frame "ete now", but both fall near or on the center of ete now's physically embodied concept of "I", therefore both can be meaningfully described as "I" even if they are slightly different from each other.
Replies from: chaosmage
↑ comment by chaosmage · 2014-05-28T16:52:32.987Z · LW(p) · GW(p)
I do not understand how a point in thingspace can be not necessarily a physical object. I thought the point of thingspace was that it contained nothing but things (and especially no identities).
Other than that, we seem to pretty much agree. I'm merely saying that the judgement that your two frames can be meaningfully described as "I" is happening inside your conceptspace, and in the conceptspaces of those frames that agree, rather than anywhere in thingspace.
Replies from: ete
↑ comment by plex (ete) · 2014-05-28T18:26:13.484Z · LW(p) · GW(p)
Should have said existent physical object. Each point in thingspace is a possible configuration of matter, but not all possible configurations of matter necessarily exist (see the post on Logical Zombies). It's a Big Universe so maybe all possibles exist, but my point was to differentiate the abstract "this is an abstract possible configuration of matter" from the "this is a particular instance of this configuration of matter".
For the last part.. yes, I think, kind of? The concepts are being processed by a physical brain, which means in a significant sense the judgement must be being made by a physical object, but at the same time the physical embodiment of that concept in the brain is key to the judgement so the concept is vital.
↑ comment by [deleted] · 2014-05-28T16:15:34.059Z · LW(p) · GW(p)
I think Chaos' question is the right one to ask: the claim that 'I' am a concept runs into difficulties when we note, for example, that I'm in my office right now, or that I'm going to be at home in six hours. Concepts don't have places. Nor are they born, as I was, nor can they die, as I (most likely) will.
Also, I get that discussions of the self are going to involve self-reference, but it seems to me problematic to suggest that I contain the concept of myself, which just is myself. So I contain myself? What does it mean for something to contain itself? Surely, if I contain myself, then what I contain also contains itself, else I wouldn't fully contain myself, but...actually, I'm lost, so I'm going to stop there.
Replies from: chaosmage, ete
↑ comment by chaosmage · 2014-05-28T16:26:17.016Z · LW(p) · GW(p)
Try to taboo the word "I", use terms like "what the brain that types this refers to as itself", and you won't be lost anymore.
Replies from: None
↑ comment by [deleted] · 2014-05-28T17:22:19.706Z · LW(p) · GW(p)
That sounds much, much worse than just using the word 'I'. And why do you think that's the correct taboo-replacement?
Edit: I worked out what bothers me about this advice.
A: I'm lost, can you help me find the hospital?
B: Just call this street corner 'the hospital'. Now you're there!
Replies from: chaosmage
↑ comment by chaosmage · 2014-05-28T18:06:09.735Z · LW(p) · GW(p)
Because it helps me not be confused, and I imagine it would help you to not be confused either.
It is unfortunate that the more precise terms are hard to express in the languages developed by our tribes of hominids, but it appears nature wasn't written in those.
Replies from: ete
↑ comment by plex (ete) · 2014-05-28T18:20:00.988Z · LW(p) · GW(p)
I think this is useful. "I" seems to refer to two quite different coherent things: current-me (a specific thing) and general-me (the collection of things I consider to be in the group "I"), plus sometimes a few others which fall apart at edge cases, like the physical- and causal-continuity "I"s. Consciously going over exactly what you mean by "I" makes it much easier to not skip around between different definitions, though it is super-clunky in English.
↑ comment by plex (ete) · 2014-05-28T17:00:59.450Z · LW(p) · GW(p)
I think this is maybe useful... it seems like there are two meanings of "I" which are generally tricky to differentiate between, with other options for how identity could work (physical and causal continuity) dismissed by things linked in the post.
When you ask "was I at work yesterday", what I think you're asking is "do I believe that a configuration of matter which was I would identify as me was at work yesterday", essentially asking about a category of objects which is a fuzzy category over thingspace.
You're right, concepts don't have places and are not themselves created or destroyed. But they can gain and lose embodiments. An example: Removing every physical example of the concept "object containing a clear representation of the Platonic solids" from the universe does not destroy the Platonic solids, but it does destroy all representations of them, which will reduce the impact of the properties of the Platonic solids on the universe until someone makes new representations, because a concept without physical examples cannot causally interact with the universe. Evolution needs objects to interact with the environment a lot to make more of themselves, so we care not about the higher concept being destroyed, but about physical embodiments of the concept.
As for self-reference.. I don't think you contain yourself fully, but I do think you contain a compressed and fuzzy representation of your current self which blurs into nearby thingspace along axes you care less about (e.g. length of your hair, exact molecular weight of your spleen) but is fairly focused along axes you care about greatly (e.g. memories, social groups).
Replies from: None
↑ comment by [deleted] · 2014-05-28T17:26:41.203Z · LW(p) · GW(p)
As for self-reference.. I don't think you contain yourself fully, but I do think you contain a compressed and fuzzy representation of your current self
Well, this is tricky. If I can't completely contain a representation of myself, then we have to distinguish on the one hand the containing I, and the contained, fuzzy I. If the containing I is not identical to the fuzzy I, then it seems to me you've been talking about the fuzzy I in the above post. But what we're really interested in is the containing I.
Replies from: ete
↑ comment by plex (ete) · 2014-05-28T18:13:07.050Z · LW(p) · GW(p)
Sorry, I need to be more careful with words. Let me rephrase:
I don't think current-you contains all of the details of every example of you-the-concept, or a representation of all of the details about current-you, but I do think current-you contains a very compressed and fuzzy representation of your current self which is treated by the brain as a concept for sorting points in thingspace into "me" and "not me", with some middle ground of "kinda me".
I don't think the single snapshot current-me is the only interesting part, the fuzzy concept of "me"ness which it contains seems useful in many more situations where you need to work out how to act (which will come up more clearly when I get to decision theory).
Replies from: None
↑ comment by [deleted] · 2014-05-28T20:30:40.393Z · LW(p) · GW(p)
I don't think the single snapshot current-me is the only interesting part,
No, I agree, the containing I isn't the only interesting part. But it is an interesting part and it remains undefined. One of the most interesting elements of it is that nothing in the fuzzy me-ness really sticks: I can say I'm a student, or an atheist, or a man, but none of those things is really and essentially me. Those are contingent facts. I can cease to be all of those things (to one degree or another) without ceasing to exist, and I can cease to be them at will.
But the question is really this: what do you think the containing I is, given that it's not a concept?
Replies from: ete
↑ comment by plex (ete) · 2014-05-28T20:50:43.568Z · LW(p) · GW(p)
hm, the way I see it "I" am a sum of my parts. If you remove or change any one non-core aspect, current!me still considers the result to be me, but remove or change a large number of aspects, or any particularly core ones, and the result is only slightly me from current!me's point of view.
I think that the idea of a containing "I" outside of current!me's physical representation of the fuzzy me concept is essentially a confusion, caused by evolution hardwiring a sense of self in an environmentally effective but epistemologically incoherent way.
In a sense, though a not particularly useful one, "I" am my parts right now (single frame view).
In another sense, which I consider more useful, "I" am the high-level pattern in the arrangement of those parts.
How exactly that higher level pattern generates subjective experience is beyond what I'm trying to cover in this post, but so far the general idea that my conscious experience of being "me" is purely generated by the physical embodiment of a moderately enduring pattern is the only one which has held up to my inspection.
Replies from: None
↑ comment by [deleted] · 2014-05-28T21:18:12.015Z · LW(p) · GW(p)
hm, the way I see it "I" am a sum of my parts. If you remove or change any one non-core aspect, current!me still considers the result to be me, but remove or change a large number of aspects, or any particularly core ones, and the result is only slightly me from current!me's point of view.
It seems to me that there are two problems with this: first, we use indexicals like 'I' to say things like 'I was never the same after that', or 'I was born 30 years ago' or 'One day I'll be dead' all of which seem to assume a radical independence of the I from particular facts about it (even being alive!). I don't mean to say that all of these sentences necessarily express truths, but just that they all seem to be grammatical, that is, they're not nonsense.
The second problem is that so many of the facts about me are actually constituted by what I take the facts about me to be. I mean that I am an atheist because I take myself to be an atheist, a student because (with some paperwork) I take myself to be a student, etc. These things are true of me because my believing them to be true makes it so. And I often think of these kinds of facts about me as the most essential and important ones. The point is that I can give up on such things, and just cease to be a student, or an atheist, etc. And the one giving up on these things will be me, and in virtue of my power to do so. My giving up on these things won't (indeed can't) just be something that happens to me.
I think that the idea of a containing "I" outside of current!me's physical representation of the fuzzy me concept is essentially a confusion, caused by evolution hardwiring a sense of self in an environmentally effective but epistemologically incoherent way.
Maybe, but then it's a confusion fundamental to the proposal you're offering: the 'containing I' is that 'current!me' that's representing itself (imperfectly) to itself. So if the containing I is a confusion, isn't your proposal in the same situation?
Replies from: ete
↑ comment by plex (ete) · 2014-05-28T23:08:19.457Z · LW(p) · GW(p)
I think the first part can be answered by considering that those common language uses refer to the causal-continuum "I", which has issues when you look closely. It's fine and handy as a shortcut for most situations because we don't have cloning and reloading available, but when you have more complex situations, talking about "I" in that way can become incoherent.
The second.. if I'm understanding you correctly, you're saying that you have the ability to change parts of yourself you consider to be important, and it would be you making those changes? If so, I agree with that, and would add that those changes would likely be bit by bit with each step moving further away from current!you. I'm not sure how this is a problem?
And the last part, I don't think so, because current!me is a distinct snapshot. A physical instance of a central example of current!me's concept of "I" which also happens to contain a representation of a concept it fits within, which anchors it to reality in a concrete way. Thinking of a "me" other than the physical and the concept seems like a confusion, but you can reason about "I" with just a physical instance and a concept.
Replies from: None
↑ comment by [deleted] · 2014-05-29T14:07:39.150Z · LW(p) · GW(p)
A physical instance of a central example of current!me's concept of "I" which also happens to contain a representation of a concept it fits within, which anchors it to reality in a concrete way.
...er, can I make a suggestion? I'm not sure if this has been taken up elsewhere, but maybe it's just a mistake to look for some physical or metaphysical explanation of personal identity. Maybe we should try thinking about it as an ethical category. I mean, maybe what counts as me, and what doesn't, is an ethical question sensitive to ethical contexts. Like, my arm counts as me when we're talking about someone assaulting me, but it doesn't when we're talking about my accidentally whacking someone on the bus.
I'm not trying to say that personal identity is somehow non-physical, but just that asking for a physical explanation is a bit like trying to find a physical explanation of what it is to be a good bargain at the grocery store. Observing that there's no good physical explanation of something doesn't commit us to denying its reality, or to any kind of super-naturalism.
Replies from: ete
↑ comment by plex (ete) · 2014-05-29T17:58:03.107Z · LW(p) · GW(p)
Excellent. That's actually something I hope to explore more later.
I agree that "I" as a concept is very importantly viewed as an ethical or moral category, but was hoping to do a detour through some evolution before trying to tackle it in full.
Yes, I think "is this me" is an ethical question, and I also think ethics is purely physical (or more specifically a concept which only affects reality via physical representations of itself). This post is mostly about trying to establish a foundation that "I" is necessarily a caused by a physical thing in our brain running a concept to approximately sort things into "me" and "not-me", with some blurriness. More details of how "I" works and its importance are planned :).
Replies from: None
↑ comment by [deleted] · 2014-05-29T20:00:26.973Z · LW(p) · GW(p)
Yes, I think "is this me" is an ethical question, and I also think ethics is purely physical (or more specifically a concept which only affects reality via physical representations of itself).
Well, let's take for granted that there's nothing that's super-natural or anything like that. But I guess I'd still caution against looking for certain kinds of physical explanations when they might not be appropriate to the subject matter. Let me explain by way of a couple of clear cases, so we can try to figure out where 'I' stand relative to each.
So if we want an explanation of hydrogen, I think we'd do well to look into a physical explanation. For every case of hydrogen, we can observe the relevant physical system and its properties, and these physical observations will directly inform an explanation (even a complete explanation) of hydrogen in general. Hydrogen is an ideal case of physical explanation.
But what about the 'rook' in the game of Chess? Every rook and every chess game is a physical system. Indeed, we could go about and look for cases of a rook, and we will always find some physical object.
But we won't learn much about chess rooks that way. For one thing, we won't see that much in common in how rooks are physically instantiated. Some rooks will be little plastic castles, others will be made of wood or stone. Some will be computer code, others will be just neurological. And even if we did come up with a complete list of the physical instances of chess rooks, that wouldn't do much to explain them: in principle, I can use anything as a rook: a penny, a wad of paper, a patch of colored light, a vintage Porsche 959, anything so long as I can move it around a chess board. The rook has to have some physical properties, but observations about these properties just aren't very interesting. We can do a physics of chess rooks, but we won't get very much out of it.
I think ethics, politics, economics, etc. are all more like chess rooks than they are like hydrogen. There's nothing supernatural about the ethical, but that doesn't mean physics, or even biology, is a good place to go looking for an explanation.
Replies from: ete
↑ comment by plex (ete) · 2014-05-30T08:32:06.930Z · LW(p) · GW(p)
Okay, I think I see where you're coming from. Let me sum it up to see if I'm getting this right:
The important aspects of some categories of objects (aka concepts) which humans recognize are not easily reducible to constituent parts (e.g. disassemble a plastic Rook, a wooden Rook, and a memory of a Rook and there's nothing "Rooklike" to link them).
Even not-easily reducible concepts are technically reducible (they are still physical), but looking at the smallest structure is a hilariously inefficient and ineffective way to approach understanding them.
Identity is not a category easily reducible by the disassemble-into-parts method, and like ethics, politics, etc. it is vastly more sensible to understand it by the higher-level patterns which the human brain is good at recognizing thanks to a few billion years of evolution.
If that's what your point is, I agree with you entirely, and I think it's compatible with fuzzy pattern theory. I don't think it would be sane to try and work out what to identify as current!ete by disassembling my brain and trying to construct the pattern of the "is this me" algorithm, but it is important to realize that that algorithm exists. I know I'm saying it's useful/important a lot without showing how and why, but that is coming. I just think it requires a full post to explain and justify at all properly.
Replies from: None
↑ comment by [deleted] · 2014-05-30T16:40:26.738Z · LW(p) · GW(p)
You have my point exactly. But...
the human brain is good at recognizing thanks to a few billion years of evolution.
It's got something to do with evolution, but I'd say much more to do with a few thousand years of cultural maturation, science, and philosophy. I don't expect the brains of Babylonians to be much different from ours, but I also don't think we'd get very far trying to explain ethics as we understand it to the slaves of a god-king (and certainly not to the god-king).
I know I'm saying it's useful/important a lot without showing how and why, but that is coming. I just think it requires a full post to explain and justify at all properly.
Fair enough. I'll look forward to your future posts.
comment by Douglas_Reay · 2014-07-18T10:23:36.226Z · LW(p) · GW(p)
You might be interested in this Essay about Identity, that goes into how various conceptions of identity might relate to artificial intelligence programming.
comment by Matthew_Opitz · 2014-05-30T01:41:36.363Z · LW(p) · GW(p)
"The teleporter to Mars does not kill you in the most important sense (unless somehow your location on Earth is a particularly core part of your identity)."
My location on Earth is not a particularly core part of my identity. If I traveled to Mars in a space shuttle in several months, I would still consider myself to be the same person.
But my being able to experience what my body is experiencing, or what replicas of my body are experiencing, is a core part of my identity—perhaps the core part of my identity, personally.
If I teleport to Mars, do "I" get to experience what that body then experiences on Mars? If not, then yes, I would consider that teleporter to be a suicide machine.
comment by Thomas · 2014-05-28T15:36:56.236Z · LW(p) · GW(p)
The only view which makes sense is this:
You are a co-incarnation of me, and of everybody else, who is conscious.
It's quite scary and it's a big memetic hazard as well, but nothing else makes sense.
Replies from: ete, Slider, chaosmage
↑ comment by plex (ete) · 2014-05-28T15:57:36.587Z · LW(p) · GW(p)
Could you unpack that? In particular, what do you mean by co-incarnation?
It's potentially related to some things I'm hoping to write later on in the series, but I'm not sure if you mean the same thing.
Replies from: Thomas
↑ comment by Thomas · 2014-05-28T19:09:37.261Z · LW(p) · GW(p)
what do you mean by co-incarnation
I am today's re-incarnation of yester-me. Tomorrow, I'll be a re-incarnation of today-me.
It works this way, very well. I could even die and be recreated; there is no natural law against that. That would be a proper reincarnation, wouldn't it? I mean, Alcor promises it.
Well, but I could also be split into two or more. Those would be co-incarnations.
How else should we call it?
↑ comment by Slider · 2014-05-28T17:56:08.377Z · LW(p) · GW(p)
Why exclude the unconscious?
And you still have the very analogous problem of separating different incarnations. I guess the plus side is you can be content when experience doesn't neatly factor into multiple agents, instead of treating it as a problem.
Replies from: Thomas
↑ comment by Thomas · 2014-05-28T18:58:46.234Z · LW(p) · GW(p)
Who said anything about separation?
Consciousness operates inside a mental architecture, with memories, emotions and whatever else those surroundings might be around it.
A techno-telepathic link should be enlightening. The narrow bandwidth between brains we have now creates this illusion of uniqueness and of dependence on certain memories.
It's counter-intuitive, but the lack of an absolute up-down direction is also counter-intuitive!
↑ comment by chaosmage · 2014-05-28T16:36:20.991Z · LW(p) · GW(p)
If you're going to identify with anything, you might as well identify with everything. So what? How is that scary? Some people have had beliefs extremely similar to that since before written history, and it drove them (what we call) religious, but not outright nuts.
Replies from: Thomas