You'll be who you care about
post by Stuart_Armstrong · 2011-09-20T17:52:44.113Z · LW · GW · Legacy · 24 comments
Eliezer wonders about the thread of conscious experience: "I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground."
Instead of wondering whether we should be selfish towards our future selves, let's reverse the question. Let's define our future selves as agents that we can strongly influence, and that we strongly care about. There are other aspects that round out our intuitive idea of future selves (such as having the same name and possessions, and a thread of conscious experience), but this seems the most fundamental one.
In future, this may help clarify issues of personal identity once copying is widespread:
These two future copies, Mr. Jones, are they both 'you'? "Well yes, I care about both, and can influence them both."
Mr Jones Alpha, do you feel that Mr Jones Beta, the other current copy, is 'you'? "Well no, I only care a bit about him, and have little control over his actions."
Mr Evolutionary-Jones Alpha, do you feel that Mr Evolutionary-Jones Beta, the other current copy, is 'you'? "To some extent; I care strongly about him, but I only control his actions in an updateless way."
Mr Instant-Hedonist-Jones, how long have you lived? "Well, I don't care about myself in the past or in the future, beyond my current single conscious experience. So I'd say I've lived a few seconds, a minute at most. The other Mr Instant-Hedonist-Joneses are strangers to me; do with them what you will. Though I can still influence them strongly, I suppose; tell you what, I'll sell my future self into slavery for a nice ice-cream. Delivered right now."
24 comments
Comments sorted by top scores.
comment by Vaniver · 2011-09-20T20:58:57.542Z · LW(p) · GW(p)
Eliezer wonders about the thread of conscious experience: "I don't think that, if I were really selfish, I could jump off a cliff knowing smugly that a different person would experience the consequence of hitting the ground."
Consider: people drink and get hangovers.
Replies from: Stuart_Armstrong, Manfred
↑ comment by Stuart_Armstrong · 2011-09-21T11:21:25.627Z · LW(p) · GW(p)
...and may pre-buy hangover cures the night before. And the hungover you may feel that the night before was worth it, at least to some extent. Being somewhat altruistic towards your future self has to be balanced against your current enjoyment.
And people aren't rational and consistent and all the usual caveats.
comment by Wei Dai (Wei_Dai) · 2011-09-21T21:17:35.787Z · LW(p) · GW(p)
If the definition of "future selves" will not be used to determine who to care about and how much, then it has no consequences on one's decisions, so you might as well say that there is no such thing as future selves, that there is only how much we care about various person-moments, which is essentially arbitrary.
This position seems analogous to position 4 in What Are Probabilities, Anyway?, which says there is no such thing as "reality fluid" or "measure" in an objective sense, that there's only how much we care about various universes.
But what if there is such a thing as future selves, or reality fluid? It seems to me in that case we probably want to care more about our future selves, and about universes that have more reality fluid. Shouldn't we keep these questions open until we have better arguments one way or another?
Replies from: Stuart_Armstrong, cousin_it
↑ comment by Stuart_Armstrong · 2011-09-22T06:31:27.297Z · LW(p) · GW(p)
I don't see what reality fluid or similar ideas have to do with it. If you don't care about your future self, I see no reality fluid or measure-based argument that would convince you otherwise.
I just note that "caring about them" is a strong characteristic of our current concept of future selves, and should probably be part of any definition.
comment by Oscar_Cunningham · 2011-09-21T19:49:59.086Z · LW(p) · GW(p)
It occurs to me that this definition isn't symmetric. I have influence over my future self, but they have little influence over me. So they are me, but I am not they.
Replies from: None, pedanterrific
↑ comment by pedanterrific · 2011-09-21T20:31:17.863Z · LW(p) · GW(p)
This assumes your future self will not invent or obtain a method of time travel.
Edit: Let me rephrase - your 'future self' is someone you have power over. So no, current!You is presumably not future!You's future self.
Editedit: What Misha said.
comment by Oscar_Cunningham · 2011-09-21T06:48:44.495Z · LW(p) · GW(p)
If I had a slave whom I happened to care about, then by this definition they would be me, which isn't true.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2011-09-21T11:19:19.982Z · LW(p) · GW(p)
There are other aspects to the current idea of identity, as I said - I wasn't claiming this was the total solution to the problem.
But if I had a willing slave that I cared deeply about, I'd say that it would be fair to consider them as an extension of myself. Especially if we communicated a lot.
comment by Scott Alexander (Yvain) · 2011-09-21T11:58:17.570Z · LW(p) · GW(p)
Let's define our future selves as agents that we can strongly influence, and that we strongly care about.
This predicts our children are to some degree our future selves. I'm not sure if that's a plus or a minus for this theory.
I don't think there's any metaphysical meaning to "X is the same person as Y", but our mental programs take it as a background assumption that we're the same as our future selves for the obvious evolutionary reasons. Identity with our future selves is on the bottom level of human values, like preferring pleasure to pain, and there's no need to justify it further.
I don't know if this is the same as your theory or not.
Replies from: wedrifid
↑ comment by wedrifid · 2011-09-21T13:30:18.054Z · LW(p) · GW(p)
Let's define our future selves as agents that we can strongly influence, and that we strongly care about.
This predicts our children are to some degree our future selves.
Predictions seem to be different in nature from definitions. The definition is terrible, but it, well, by definition doesn't make a prediction.
Replies from: Yvain, Stuart_Armstrong
↑ comment by Scott Alexander (Yvain) · 2011-09-21T18:42:26.710Z · LW(p) · GW(p)
Should I have used a different word? Probably! But I will now proceed to a complex justification of my word choice anyway!
A lot of philosophy seems to be coming up with explicit definitions that fit our implicit mental categories - see Luke's post on conceptual analysis (which I might be misunderstanding). Part of this project is the hope that our implicit mental categories are genuinely based off, or correspond to, an explicit algorithmizable definition. For example, one facet of utilitarianism is the hope that the principle of utility is a legitimate algorithmization of our fuzzy mental concept of "moral".
This kind of philosophy usually ends up in a give-and-take, where for example Plato defines Man as a featherless biped, and Diogenes says that a plucked chicken meets the definition. Part of what Diogenes is doing is saying that if Plato's definition were identical to our implicit mental category, we would implicitly common-sensically identify a chicken as human. But we implicitly common-sensically recognize a chicken is not human, therefore our minds cannot be working off the definition "featherless biped".
This is the link between defining and predicting. Plato has proposed a theory, that when the mind evaluates humanity, it uses a featherless-biped detector. Diogenes is pointing out Plato's theory makes a false prediction: that people implicitly recognize chickens as humans. This disproves Plato's theory, and so the definition is wrong.
I suppose this must be my mental concept of what we're doing when defining a term like "self", which is what impels me to use "define" and "predict" in similar ways.
Replies from: wedrifid
↑ comment by wedrifid · 2011-09-21T20:46:14.912Z · LW(p) · GW(p)
Was the irony intentional? If not that is just priceless!
Humans being what they are, how they define things will inevitably tend to influence what predictions they make. Where a boundedly rational agent prescribed a terrible definition would merely be less efficient, a human will also end up with biased predictions when reasoning from the definition. Also, as you note, declaring a definition can sometimes imply that a prediction is being made: that the definition matches the mental concept while also carving reality effectively at its joints.
The above being the case, definitions can and should be dismissed as wrong. This is definitely related to the predictions that accompany them. This is approximately the non-verbal reasoning that flashed through my mind, prompting my rejection of the 'self as future folks you care about and can influence' definition. It is also why I must reject any definition of 'define' and 'predict' that doesn't keep the two words distinct. Just because 'human' is closely related to 'featherless biped' doesn't mean they are the same thing!
I suppose this must be my mental concept of what we're doing when defining a term like "self", which is what impels me to use "define" and "predict" in similar ways.
Just so long as you don't mind if you mislabel a whole lot of plucked chickens.
Understanding the various relationships between definitions and predictions is critical for anyone trying to engage in useful philosophy. But it isn't helpful just to mush the two concepts together. Instead we can let our understanding of the predictions involved govern how we go about proposing and using definitions.
↑ comment by Stuart_Armstrong · 2011-09-22T06:35:33.279Z · LW(p) · GW(p)
I don't agree that the definition is terrible. I agree it's incomplete. My point boils down to this: we should include "caring about" in our intuitive definition of future selves, rather than using some other definition and wondering whether we can deduce caring from it. Humans do generally care about their future selves, so if we omit that from the definition, we're talking about something else.
comment by wedrifid · 2011-09-21T13:46:56.598Z · LW(p) · GW(p)
Let's define our future selves as agents that we can strongly influence, and that we strongly care about.
Technical implication: My worst enemy is an instance of my self.
Actual implication: Relationships that don't include a massive power differential or a complete lack of emotional connection are entirely masturbatory.
It is critical to consider that thing which is "future agents that we strongly care about and can influence", but calling those things our 'future selves' makes little sense unless they are, well, actually our future selves.
Replies from: pedanterrific
↑ comment by pedanterrific · 2011-09-21T14:18:43.378Z · LW(p) · GW(p)
Technical implication: My worst enemy is an instance of my self.
This explains so much.
Actual implication: Relationships that don't include a massive power differential [...] are entirely masturbatory.
The other way around, surely? If your 'future self' is defined as something you have power over, how could a relationship of equals be masturbatory?
Replies from: wedrifid
↑ comment by wedrifid · 2011-09-21T18:56:12.424Z · LW(p) · GW(p)
The other way around, surely? If your 'future self' is defined as something you have power over, how could a relationship of equals be masturbatory?
Equal or greater power implies influence, therefore they are yourself. If you have much, much less power than them, then perhaps you have no influence, and so they may not be yourself.
comment by Owen · 2011-09-20T19:48:47.797Z · LW(p) · GW(p)
Upvoted because I like to see this kind of brainstorming, although I feel like the "strongly care about" criterion is a bit ad hoc and maybe unnecessary. To me it sounds more correct to say that Mr. IHJ doesn't care about his future selves, not that he doesn't have any.
Replies from: Stuart_Armstrong
↑ comment by Stuart_Armstrong · 2011-09-21T11:23:59.496Z · LW(p) · GW(p)
Currently, I'd agree with you. But when copying, especially imperfect copying, becomes available, then "strongly care about" may become a better guide to what a future self is.
comment by [deleted] · 2011-09-21T16:11:06.720Z · LW(p) · GW(p)
Hello there, Mr. Heidegger. I didn't realize the zombie plague had started. Any chance you saw where Erdos went? I have a paper for him to cosign...
comment by MinibearRex · 2011-09-20T20:59:08.364Z · LW(p) · GW(p)
but I only control his actions in an updatless way."
*updateless