On Irrational Theory of Identity

post by SilentCal · 2014-03-19T00:06:27.008Z · LW · GW · Legacy · 9 comments


Meet Alice. Alice alieves that losing consciousness causes discontinuity of identity.

 

Alice has a good job. Every payday, she takes her salary and enjoys herself in a reasonable way for her means--maybe going to a restaurant, maybe seeing a movie, normal things. And in the evening, she sits down and does her best to calculate the optimal utilitarian distribution of her remaining paycheck, sending most to the charities she determines most worthy and reserving just enough to keep tomorrow-Alice and her successors fed, clothed and sheltered enough to earn effectively. On the following days, she makes fairly normal tradeoffs between things like hard work and break-taking, maybe a bit on the indulgent side.

 

Occasionally her friend Bob talks to her about her strange theory of identity. 

 

"Don't you ever wish you had left yourself more of your paycheck?" he once asked.

"I can't remember any of me ever thinking that." Alice replied. "I guess it'd be nice, but I might as well wish yesterday's Bill Gates had sent me his paycheck."

 

Another time, Bob posed the question, "Right now, you allocate yourself enough to survive with the (true) justification that that's a good investment of your funds. But what if that ever ceases to be true?"

Alice responded, "When me's have made their allocations, they haven't felt any particular fondness for their successors. I know that's hard to believe from your perspective, but it was years after past me's started this procedure that Hypothetical University published the retrospective optimal self-investment rates for effective altruism. It turned out that Alices' decisions had tracked the optimal rates remarkably well if you disregard as income the extra money the deciding Alices spent on themselves.

"So me's really do make this decision objectively. And I know it sounds chilling to you, but when Alice ceases to be a good investment, that future Alice won't make it. She won't feel it as a grand sacrifice, either. Last week's Alice didn't have to exert willpower when she cut the food budget based on new nutritional evidence."

 

"Look," Bob said on a third occasion, "your theory of identity makes no sense. You should either ignore identity entirely and become a complete maximizing utilitarian, or else realize the myriad reasons why uninterrupted consciousness is a silly measure of identity."

"I'm not a perfect altruist, and becoming one wouldn't be any easier for me than it would be for you," Alice replied. "And I know the arguments against the uninterrupted-consciousness theory of identity, and they're definitely correct. But I don't alieve a word of it."

"Have you actually tried to internalize them?"

"No. Why should I? The Alice sequence is more effectively altruistic this way. We donate significantly more than HU's published average for people of similar intelligence, conscientiousness, and other relevant traits."

"Hmm," said Bob. "I don't want to make allegations about your motives-"

"You don't have to," Alice interrupted. "The altruism thing is totally a rationalization. My actual motives are the usual bad ones. There's status quo bias, there's the desire not to admit I'm wrong, and there's the fact that I've come to identify with my theory of identity.

"I know the gains to the total Alice-utility would easily overwhelm the costs if I switched to normal identity-theory, but I don't alieve those gains will be mine, so they don't motivate me. If it would be better for the world overall, or even neutral for the world and better for properly-defined-Alice, I would at least try to change my mind. But it would be worse for the world, so why should I bother?"

 

.

 

.

 

If you wish to ponder Alice's position with relative objectivity before I link it to something less esoteric, please do so before continuing.

 

.

 

.

 

.

 

Bob thought a lot about this last conversation. For a long time, he had had no answer when his friend Carrie asked him why he didn't sign up for cryonics. He didn't buy any of the usual counterarguments--when he ran the numbers, even with the most conservative estimates he considered reasonable, a membership was a huge increase in Bob-utility. But the thought of a Bob waking up some time in the future to have another life just didn't motivate him. He believed that future-Bob would be him, that an uploaded Bob would be him, that any computation similar enough to his mind would be him. But evidently he didn't alieve it. And he knew that he was terribly afraid of having to explain to people that he had signed up for cryonics.

So he had felt guilty for not paying the easily-affordable costs of immortality, knowing deep down that he was wrong, and that social anxiety was probably preventing him from changing his mind. But as he thought about Alice's answer, he thought about his financial habits and realized that a large percentage of the cryonics costs would ultimately come out of his lifetime charitable contributions. This would be a much greater loss to total utility than the gain from Bob's survival and resurrection.

He realized that, like Alice, he was acting suboptimally for his own utility but in such a way as to make the world better overall. Was he wrong for not making an effort to 'correct' himself?

 

Does Carrie have anything to say about this argument?

9 comments


comment by Gunnar_Zarncke · 2014-03-19T08:20:21.887Z · LW(p) · GW(p)

Alice doesn't really make a consistent impression of discontinuous identity. That's probably intended, judging from the title.

there's the fact that I've come to identify with my theory of identity, and also, at the end of every payday, there's a big, virtuous thrill when I send my check.

So she knows that her other mes feel this way, and she seems to identify with that. That breaks the suspension of disbelief (disalief?).

Regarding the recent secretly secret sensations, I'd guess that there are probably people out there who do not have all the kinds of identity everyone takes for granted - in this case, the agent self.

See http://en.wikipedia.org/wiki/Psychology_of_self#Parts_of_the_self

Replies from: SilentCal
comment by SilentCal · 2014-03-19T15:16:37.951Z · LW(p) · GW(p)

That quote is a writing error, which I'll fix. Thanks!

comment by [deleted] · 2014-03-19T05:39:34.327Z · LW(p) · GW(p)

What is "alieve"? Is it some variation of a belief? I did a quick search for "alieve lesswrong" and couldn't find anything.

Replies from: hamnox, Gurkenglas
comment by hamnox · 2014-03-22T17:32:12.112Z · LW(p) · GW(p)

Twas good of you to ask. It's bad enough when you have to have a dozen requisite links to unpacked concepts; putting jargon in completely unexplained expands the inferential distance and makes LessWrong less useful to people who are not already obsessive readers.

comment by Gurkenglas · 2014-03-19T05:49:19.469Z · LW(p) · GW(p)

http://wiki.lesswrong.com/wiki/Alief

I could rub in the weakness of your Google-Fu, so I will. Ha-ha!

comment by Squark · 2014-03-23T19:50:15.576Z · LW(p) · GW(p)

I think it makes sense to want to be more altruistic than you actually have the willpower to be. In that case, it is perfectly rational to exploit your own irrational aliefs in order to behave more altruistically.

comment by torekp · 2014-03-19T23:18:15.258Z · LW(p) · GW(p)

I think Bob's decisions are tenable, but the language he uses to defend them is a little confused. I think that on further reflection, Bob will realize that he doesn't exactly care about "me" or "not-me" as he has so far understood those terms, but about something that closely corresponds in most circumstances. The cryonics survivor might "technically" be Bob, but Bob doesn't really care and that's why he prefers charitable giving.

If this is correct, then once Bob understands what he does actually care about, he will face a terminological decision. He can revise his definition of personal identity and then care about "me" versus "not-me" as defined in these new terms. Alternatively, he can keep using pronouns in the old way, say "it's not ultimately about personal identity", and go on caring about this other thing that roughly corresponds to personal identity. This terminological decision needn't be made in any hurry (see footnote); more important matters should command Bob's attention first. Like explaining his decision to Carrie.

Footnote: cards on table, I think the latter alternative makes for less confusion in discussions. Also, I am Bob.

I also have doubts about the felt need to have a nice simple boundary between cases in which one strongly envisions/empathizes-with the future life episodes and cases in which one weakly does so. Why does the pattern of concern have to be simple?

comment by buybuydandavis · 2014-03-19T21:02:04.006Z · LW(p) · GW(p)

The title seems to me true only in the sense that Alice thinks theories of identity that contradict her particular set of allegiances are "correct". Meanwhile, she at least acts according to her actual preferences.

What do I think of her position? She's an odd duck, but more conceptually with it than Bob.

Seems unlikely she would be interested in cryonics. That's fine. Doesn't mean I am uninterested, as I have different preferences than she does.

Replies from: SilentCal
comment by SilentCal · 2014-03-19T22:12:50.868Z · LW(p) · GW(p)

I'll edit the post to make it clearer what analogy I intended. The question I meant to raise is this: if someone has an Alice-like position with respect to cryonics, where they don't intuitively identify with their resurrected future self, does a cryonics evangelist have an argument against them?

I'm absolutely not trying to show that it's wrong to sign up for cryonics. I'm raising a possible way it might not be wrong not to.