What makes you YOU? For non-deists only.

post by cleonid · 2009-11-10T19:59:41.456Z · LW · GW · Legacy · 93 comments

From the dawn of civilization, humans have believed in eternal life. The flesh may rot, but the soul will be reborn. To save the soul from potentially adverse living conditions (e.g., hell), the body, being the transient and thus less important part, was expected to make sacrifices. To accumulate the best possible karma, the pleasures of the flesh had to be given up, or at least heavily curtailed.

Naturally, the wisdom of this trade-off was questioned by many skeptical minds. The idea of reincarnation may have a strong appeal to the imagination, but in the absence of any credible evidence, Occam's razor mercilessly cuts it to pieces. Instead of sacrificing for the sake of future incarnations, a rationalist should live for the present. But does he really?

Consider the "incarnations" of the same person at different ages. Upon reaching the age of self-awareness, the earlier "incarnations" start making sacrifices for the benefit of the later ones. Dreams of becoming an astronaut at 25 may prompt a child of nine to exercise or study instead of playing. Upon reaching the age of 25, the same child may take a job at a bank and start saving for retirement. Of course, legally all these "incarnations" are just the same person. But beyond jurisprudence, what is it that makes you who you are at the age of nine, twenty-five, or seventy?

Over the years, your body, tastes, goals, and whole worldview are likely to undergo dramatic change. The one thing that remains essentially constant through your entire life is your DNA sequence. Through natural selection, evolution has ensured that we preferentially empathize with those whose DNA sequence is most similar to our own, i.e. our children, siblings and, most importantly, ourselves. But, instinct aside, is there a reason why a rational, self-conscious being must obey a program implanted in it by the unconscious force of evolution? If you identify more with your mind (personality/views/goals/…) than with your DNA sequence, why should you care more for someone who, living many years from now, will resemble you less than some people actually living today?

P.S. I am aware that the meaning of "self" has been debated by philosophers for many years, but I am really curious about the personal answers of "ordinary" rationalists to this question.

93 comments

Comments sorted by top scores.

comment by DanArmak · 2009-11-10T21:58:02.822Z · LW(p) · GW(p)

It amazes me that many people (not just in this post) appear to completely ignore the existence of the subjective thread of experience.

Consciousness is real. It's a real problem to be solved, and it's a real fact to live with. If there were a million atomically precise copies of me on the other side of the planet, I wouldn't care about them. I would be interested, because I would see a way to learn more about myself or to cooperate on shared goals. But I wouldn't care about their own lives and experiences and happiness any more than those of randomly chosen strangers.

I care about myself because my life is what I experience. I care about my future self because that's the life I will inescapably experience in the future. However different the future me will be from the current me, I will still experience his life. His life will inevitably be the totality of my future experience. How can you possibly ask, then, why I would care about him?

Replies from: Furcas, Tyrrell_McAllister
comment by Furcas · 2009-11-10T22:14:28.439Z · LW(p) · GW(p)

No one here is denying that subjective experience is real, but that's not enough to conclude that there is a magical thread linking DanArmak, DanArmak-10, and DanArmak+10. In other words, that DanArmak, DanArmak-10, and DanArmak+10 are all conscious doesn't mean there's anything connecting them besides similarity.

Also, if you try specifying what you mean by 'I', as you used it in your post, I'm pretty sure you'll run into a few problems.

Replies from: DanArmak
comment by DanArmak · 2009-11-10T22:42:15.770Z · LW(p) · GW(p)

Subjective experience is that thread, and it is the precise and complete meaning of I. And it is also the same thing I, and some but not all other people, mean by the word consciousness.

You can't distinguish between two DanArmaks appearing in sequence (one is the future version of the other), and two appearing together (no necessary relation between them, causal or otherwise). But I can because I am one and will in due time become the other.

Yes, it's magic. In the sense that we don't understand it. We can't even properly define or describe it - that is, we can't describe it in the same framework and terms we use for the physical universe. (Or in any other framework either.) It's not required to explain my behavior in any way. It's completely subjective. That doesn't mean I can just ignore it.

I don't know whether there is or even can be a legitimate explanation or reduction of the phenomenon of consciousness and subjective experience. But I do know the phenomenon is there. The experience is there. So is the essential difference between the subjective and the objective (and there exists a perfectly valid objective viewpoint of myself, which I can apply to myself - the two aren't exclusive).

Replies from: DanArmak
comment by DanArmak · 2009-11-10T23:08:17.928Z · LW(p) · GW(p)

A challenge to all doubters.

Omega comes to you with a proposition - heck, I may be able to do it myself in a few decades. I offer to create N atomically precise clones of yourself on the other side of the planet, and give each one the dollar value of all your assets. You can set N as high as you like, provided there's a value that will make you accept the bargain. The price is that I'll kill you ten seconds later.

I assure you that, from past experience, you will not notice the creation of these clones during the ten seconds you have to live. Your extreme similarity will not cause any magical sharing of experiences. And the clones will not notice anything when, ten seconds from their creation, you die; no empirically measurable (including self-reported) consciousness transfers from you to them.

Do you accept the offer?

If you do, and you don't care about your own death, and you don't give as the reason the accomplishment of some external goals that are more important to you than life - then there's a fundamental disconnect between me and you. Indeed, between you and what I naively consider to be universal human ways of thinking. (Of course, if this is the case, the fault lies in my understanding; I'm not trying to denigrate anyone who responds.)

If someone thinks the external goals thingy is a problem (e.g., that all decisions are ultimately taken to satisfy external goals, because you believe a continuous self does not exist) then I can try to formulate a version of the scenario where this is not a consideration.

Replies from: pengvado, AngryParsley, Zack_M_Davis, loqi, RobinZ, kpreid, Johnicholas, pengvado, CannibalSmith
comment by pengvado · 2009-11-10T23:47:23.930Z · LW(p) · GW(p)

Yes, I accept the offer. If it's repeatable and I get to pick where the copy ends up, I'd do it for N=1: that's a teleporter. Getting rich in the process for N>1 (we would of course pool our assets for any large project; only a small fraction of that is needed to cover the increased living expenses), and obtaining some backups (this world is too dangerous to keep only one copy of anything important) are gravy.

Replies from: Jordan
comment by Jordan · 2009-11-11T00:36:53.426Z · LW(p) · GW(p)

If death were instantaneous I'd have fewer qualms with using this as a teleporter, but 10 seconds is a long time to lock in to a particular "string of consciousness". I'd also be curious about the way I'd be killed.

comment by AngryParsley · 2009-11-13T21:06:06.632Z · LW(p) · GW(p)

I would accept the offer and try to make N as large as possible. When you create the copies, I have an N/(N+1) chance of being one of them and only a 1/(N+1) chance of being the poor schmuck who gets killed.
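
A rough sketch of the arithmetic behind this position - assuming, as the comment does, that each of the N+1 resulting persons is an equally likely "location" for one's future experience (the very assumption the rest of this thread disputes) - might look like the following; the function name and trial counts are purely illustrative:

```python
import random

def chance_of_being_killed(n_copies, trials=100_000):
    """Estimate the probability of 'waking up as' the original (killed) body,
    under the assumption that all n_copies + 1 resulting persons are equally
    likely successors. Index 0 stands for the original; 1..n_copies are clones."""
    killed = sum(1 for _ in range(trials) if random.randint(0, n_copies) == 0)
    return killed / trials

print(chance_of_being_killed(9))    # roughly 0.10, i.e. 1/(N+1) with N = 9
print(chance_of_being_killed(999))  # roughly 0.001
```

With N = 9 the estimate comes out near 0.1, matching the 1/(N+1) figure above; as N grows, it shrinks toward zero.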

comment by Zack_M_Davis · 2009-11-10T23:41:20.841Z · LW(p) · GW(p)

I assure you that, from past experience, you will not notice the creation of these clones during the ten seconds you have to live.

Of course, and neither will they; that's the point. From the duplicate's perspective (I'm going to avoid the word clone, which I usually take to mean something else), it just seems like being magically teleported to the other side of the world.

Do you accept the offer?

Eagerly! Say N equals, oh, I don't know, nine. There's a lot of stuff I want to do with my life, and if I don't get the chance to do it all sequentially, then I might as well do it in parallel.

Replies from: DanArmak
comment by DanArmak · 2009-11-10T23:46:17.954Z · LW(p) · GW(p)

But why exactly do you apply your I to these duplicates?

Replies from: GuySrinivasan
comment by GuySrinivasan · 2009-11-11T01:57:11.898Z · LW(p) · GW(p)

Because I anticipate that 15 seconds from now, if I reject then there will be 1 thing with the subjective experience I anticipate I will have 15 seconds from now, and if I accept then there will be 9 things with the subjective experience I anticipate I will have 15 seconds from now.

That's not exactly why, but I think it's not bad for the amount of time I spent thinking about how to phrase it...

Replies from: DanArmak
comment by DanArmak · 2009-11-11T09:31:15.454Z · LW(p) · GW(p)

But each separate person will only experience one subjective experience. Why does it matter how many persons there will be?

comment by loqi · 2009-11-11T04:28:49.423Z · LW(p) · GW(p)

I accept. After which I may possibly decide to execute a coordinated suicide of something like N-3 or N-4 copies so as to increase the subjective payoff.

I certainly do acknowledge a subjective experience of consciousness. However, I don't understand why you treat its interruption as so catastrophic. Sleep does it all the time. If you define "I" solely in terms of consciousness, do you agree that "you" have only truly existed since you last awoke?

The intuition behind my acceptance of the local death you propose is based very much in my subjective experience of consciousness. Have you ever awakened in the middle of the night, had a verbal exchange with someone, and then fallen asleep and forgotten the entire experience, even when reminded of it? This seems entirely analogous to local death: you experience a few minutes of subjective existence, but then your thread gets "reverted" to a previous state and continues from there.

That's also why I'd have no problem suiciding most of the copies. If someone came to me right now and offered me a large sum of money in exchange for reverting all trace and memory of the previous two hours of my existence, I'd accept in a heartbeat.

Replies from: DanArmak
comment by DanArmak · 2009-11-11T10:28:46.595Z · LW(p) · GW(p)

I certainly do acknowledge a subjective experience of consciousness. However, I don't understand why you treat its interruption as so catastrophic. Sleep does it all the time. If you define "I" solely in terms of consciousness, do you agree that "you" have only truly existed since you last awoke?

It does seem probable. Certainly if I didn't need to sleep, I wouldn't agree to start sleeping, on these grounds - it looks much too dangerous!

As I just wrote in another reply, I am convinced that my way of looking at things is incomplete and does not extend to new types of experiences (e.g., cloning), but I am not at all convinced that any of the alternative theories proposed in this thread are better.

comment by RobinZ · 2009-11-10T23:19:56.880Z · LW(p) · GW(p)

Omega comes to you with a proposition - heck, I may be able to do it myself in a few decades. I offer to create N atomically precise clones of yourself on the other side of the planet, and give each one the dollar value of all your assets. You can set N as high as you like, provided there's a value that will make you accept the bargain. The price is that I'll kill you ten seconds later.

Quick confirmation: you'll kill the dude on this side of the planet - not any of the ones on the other side. Right?

Assuming that's the case, and assuming I can quell my guilt at ripping you off for a substantial sum of money, and assuming proper guarantees are put into effect so that I may be certain that the clones are properly created and financed before the original is destroyed ... N = 2. RobinX can give RobinY the money for a plane ticket back to the States and use the remainder to sign up for philosophy classes at Adelaide, and all shall be well in my book.

I think you'll find many of us willing to bite this particular bullet. I had the same reaction when Dennett and Hofstadter described a similar thought experiment in The Mind's I.

Replies from: Alicorn, DanArmak
comment by Alicorn · 2009-11-10T23:24:34.968Z · LW(p) · GW(p)

The price is that I'll kill you ten seconds later.

Will you do it while I'm unconscious? If so, N=2 here also (provided "the other side of the planet" is not a hostile environment).

Replies from: RobinZ
comment by RobinZ · 2009-11-10T23:28:40.338Z · LW(p) · GW(p)

Ooh, another good caveat - maybe we should just put in the general good-genie clause: "Assuming you're not ripping us off..."

(Actually, I'm not that bothered about the while-I'm-unconscious part.)

comment by DanArmak · 2009-11-10T23:36:12.176Z · LW(p) · GW(p)

I'm really surprised that so many people here think this way. Even a bit shocked. (Especially pengvado with N=1(!)). Which on the whole is a great experience :-)

Could someone try to explain to me, please, why you feel this way? How you came to feel this way? I have never seen any reason to stop thinking in terms of a thread of consciousness.

Replies from: Zack_M_Davis, RobinZ, RobinZ
comment by Zack_M_Davis · 2009-11-11T00:46:46.028Z · LW(p) · GW(p)

Could someone try to explain to me, please, why you feel this way? How you came to feel this way?

Feel probably isn't the right word here. I'm sure we all feel the same kind of irreducible subjective thread of consciousness that you do. Just like we all have the same illusions of free will and objective morality. But on careful reflection, taking into account all the science you know, and performing a few relevant thought experiments, it gradually becomes clear that these folk concepts just don't make sense. The species-typical intuitions don't go away; you just learn to stop trusting them.

Replies from: DanArmak
comment by DanArmak · 2009-11-11T01:24:13.558Z · LW(p) · GW(p)

Free will and objective morality are claims about how the universe works, objectively. On reflection it becomes clear that they are contradictory and false, respectively.

But the subjective thread of consciousness isn't a fact about the universe. It's a fact about my experience. It makes no sense to say, as you seem to be suggesting, "I may feel conscious, but really it's an illusion". Because if I deny it, then the whole concept of feeling is undefined, and consequently, the concept of illusion is undefined. The idea of illusions, after all, implies that we might instead experience or believe something else which is not an illusion but is true.

You can't claim that the subjective thread of consciousness is "wrong", because it's not an objective, empirical claim. We can imagine experiencing counterfactuals, but what would it be like not to have experiences? It's not a meaningful question, so there's no answer.

What I was asking is how, due to objective, physical events (you had ideas, read books...) you came to adopt this belief - although I don't quite understand the belief yet, either.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2009-11-11T12:17:08.165Z · LW(p) · GW(p)

Just look at all this "reality" business as a framework for understanding experience: how do you know that "reality" is "out there"? Why do you believe such claims? You are not entitled to your subjective experience, any more than to believing that Venus the planet is a goddess of love and beauty.

comment by RobinZ · 2009-11-11T04:26:08.874Z · LW(p) · GW(p)

I'm going to pull my parenthetical about spells of unconsciousness out in a separate comment, because the point seems to have been lost in the course of the discussion:

DanArmak, you seem to propose that the key process which defines the identity of a person at one time with a person at the other time is the thread of consciousness trailing through spacetime from one to the other. How does your model deal with human beings - individual human organisms! - which undergo literal loss of consciousness? I do not refer to sleep, but to the actual shutdown of conscious perception, such as occurred to Jo Walton (papersky) when she hit her head and (I have heard) to many people under general anesthesia. In these cases, the persons describing their memories explicitly state that there is a finite period of time during which no conscious recollection occurs. Does that imply that the speculative fiction writer named Jo Walton living in Montreal on February 20th, 2006 is a different individual than the speculative fiction writer named Jo Walton living in Montreal on February 22nd, 2006? If not, why not?

Replies from: DanArmak
comment by DanArmak · 2009-11-11T10:24:42.261Z · LW(p) · GW(p)

To address the loss of consciousness scenario: I can't speak from experience, and as I said, my theory is not formal and strict and provable enough to be sure of things outside my experience.

The basic problem here is discontinuity. If the loss of consciousness is brief enough (a fraction of a second) and does not affect my future mental processes, it seems likely "I" will continue to exist. Any other boundary (of length and severity of unconsciousness) would be arbitrary, and so unlikely.

But my experience is discrete. I always have exactly one experience at a time (if conscious), so I always experience being "me", not three-quarters me. "Me" is whatever "I" experience at the time :-) This is no good as a definition, but it's a description every human understands. How to reconcile the two? I have no clear idea.

As I said before, my theory is far from complete - it's more a list of facts than a structured model. It only describes those things that happen in typical human life. It may not be extensible to events like loss of consciousness, let alone cloning. In fact, I've been pretty much convinced by this whole thread that my naive model probably can't be fixed and extended to describe the entire space of physical and experiential possibilities. I'll drop it happily for a better alternative - please give me one!

Any new theory has got to include the fact that I have actual experiences of being me. The theories being proposed here, of anticipating equally "becoming" any one of my future clones, smack to me of just doing away with the concept of anticipation entirely.

Replies from: RobinZ
comment by RobinZ · 2009-11-11T14:29:33.246Z · LW(p) · GW(p)

As I said before, my theory is far from complete - it's more a list of facts than a structured model. It only describes those things that happen in typical human life. It may not be extensible to events like loss of consciousness, let alone cloning. In fact, I've been pretty much convinced by this whole thread that my naive model probably can't be fixed and extended to describe the entire space of physical and experiential possibilities. I'll drop it happily for a better alternative - please give me one!

I think we have, at least in sketch form - here's Alicorn's nutshell summary, and here's mine. Both of our theories, if they are distinct, fit this intuition of yours - that a person is not destroyed and a new person created after a spell of unconsciousness - better than the thread of consciousness approach.

As for the rest, quite frankly you should expect to get weird results in weird situations like duplication. One weird result I expect is that, if you are duplicated, there will be two people afterwards, both of whose experiences suggest that they are DanArmak.

Replies from: DanArmak
comment by DanArmak · 2009-11-13T23:52:39.915Z · LW(p) · GW(p)

I took the time to think all this through before replying. I think I now grok your, Alicorn's, and the other posters' theory (or theories). And I pretty much accept it now. Thanks for your explanations.

The problem with my old approach, as I now see it, is the impossibility of empirically distinguishing it from infinitely many other possible theories. In such a situation, it is indeed best to choose an approach that optimizes outcome over all my configuration-descendants, because I might subjectively become any of them.

Of course, if I give up personal continuity, then the above statement becomes merely "because each of them will have memories indicating it is my descendant". But I am forced to this point of view due to the apparent impossibility of describing a personal continuity in terms of physics, which does not break down in the face of (arbitrarily short) lapses of consciousness.

Thanks again to everyone else who participated and helped convince me.

comment by RobinZ · 2009-11-10T23:52:55.489Z · LW(p) · GW(p)

I don't see any reason to privilege the thread of consciousness - I'm confident it doesn't actually work the way you're supposing. My personal instinct is that I at every instant am identical to this particular configuration of particles, and given that such a configuration of particles will persist after the experiment (though on the other side of the world), it doesn't seem particularly as if I've been killed in any permanent way. (I'm fairly sure I couldn't collect on my estate, for example.) Sure, it's risky, but if sufficient safeguards are in place, it's teleporting, as pengvado said (?).

A note: even if I hadn't had this instinct before, the idea of a persistent and real thread of consciousness is brought into doubt in a number of ways by Daniel Dennett's revolutionary work, Consciousness Explained. My copy is on my shelf at home at the moment, but Dennett describes several instances in which the naive perception of consciousness is shown to be unreliable. I don't think it's a valid marker of identity.

(Besides, what of spells of unconsciousness? Should someone whose thread of consciousness is interrupted be considered to have been literally killed and reborn as a facsimile?)

Replies from: DanArmak, DanArmak, eirenicon
comment by DanArmak · 2009-11-11T01:29:13.559Z · LW(p) · GW(p)

I don't see any reason to privilege the thread of consciousness

I offer you a choice: either you suffer torture for an hour, or I create a clone of you, torture it for ten hours, and then kill it. In the second option, you are not affected in any way.

From what you've said, I gather you'll choose the first option. You won't privilege what you actually experience. But... I truly cannot understand why.

It's often useful to think as if you're deciding for all copies of yourself. You can maximize each copy's expected outcome that way in many situations. You can also optimize your expected experience if you don't know in advance which of ten thousand copies you'll be. This kind of argument has often been made on LW. Perhaps I mistook some instances of arguments like yours, which truly don't privilege experience, for this milder version (which I endorse).

Replies from: Alicorn, RobinZ
comment by Alicorn · 2009-11-11T03:12:40.598Z · LW(p) · GW(p)

You're begging a very important question when you use "you" to refer only to the template for subsequent duplication. On my wacky view, if you duplicate me perfectly, she's also me. If it's time T1 and you're going to duplicate Alicorn-T1 at T2, then Alicorn-T1 has two futures - Alicorn-T2 and Alicorn-T2* - and Alicorn-T1 will make advance choices for them both just as if no duplication occurred. If you speak to Alicorn-T1 about the future in which duplication occurs, "you" is plural.

Replies from: RobinZ, DanArmak
comment by RobinZ · 2009-11-11T04:02:25.953Z · LW(p) · GW(p)

Your "wacky view" sounds quite similar to mine - I would be interested to read that thesis when it is published.

comment by DanArmak · 2009-11-11T09:42:43.602Z · LW(p) · GW(p)

The 'you' I used referred only to the pre-duplication person, who is making the choice, and who is singular.

The view you describe is the view I described in the last paragraph of my previous comment (the one you replied to). I understand and agree that you decide things for all identical copies of you who may appear in the next second - because they're identical, they preserve your decisions. But you can only anticipate experiencing some one thing, not a plurality.

If someone creates a clone (or several) of me, I may not even know about it; and I do not expect to experience anything differently due to the existence of that clone.

If someone destroys my body, I presume I'll stop experiencing, although I have no idea what that would be like.

By inference, if someone creates a precise clone of me elsewhere and destroys my body at the same moment, I won't suddenly start experiencing the clone's life. I.e., I don't expect to suddenly experience a complete shift in location. Rather, I would experience the same thing (or lack of it) that I would experience if someone killed me without creating a clone.

Yes, this begs the question of why I experience continuity in this body, since physics has no concept of a continuous body. And why do I experience continuity across sleep and unconsciousness? I don't have an answer, but neither does the alternative you're proposing. The only real answer I've seen is the timeless hypothesis: that at each moment I have a separate moment-experience, which happens to include memories of previous experiences, but those memories are not necessarily true - they are just the way my brain makes sense of the universe, and it highlights or even invents continuity. But this is too much like the Boltzmann brain conjecture - consistent and with explanatory power, but unsatisfying.

Replies from: Alicorn
comment by Alicorn · 2009-11-11T15:32:49.155Z · LW(p) · GW(p)

"You" sure seemed like it referred to only one of the postduplication individuals here:

In the second option, you are not affected in any way.

(This when one of me seems to be quite seriously affected, in that you plan to torture and kill that one.)

And here:

You won't privilege what you actually experience.

(This when I do privilege what I actually experience, and simply think of "I" in these futures as a plural.)

you don't know in advance which of ten thousand copies you'll be

(But I'll be all of them! It's not as though 9,999 of these people are p-zombies or strangers or even just brand-new genetically identical twins! They're my futures!)

comment by RobinZ · 2009-11-11T01:46:22.577Z · LW(p) · GW(p)

No, I wouldn't. I'd choose the second option so as to prevent my torture from being compounded with my total death.

Replies from: DanArmak
comment by DanArmak · 2009-11-11T01:52:59.100Z · LW(p) · GW(p)

Er, the second option is the one where I kill the clone. In both options, only you remain alive after 10 hours, no clone.

How about this cleaner version: I create a clone of you (no choice here). Then I torture you for an hour, OR the clone for ten hours, after which you're both free to go.

Replies from: Kyre, RobinZ
comment by Kyre · 2009-11-11T05:42:27.329Z · LW(p) · GW(p)

I'd choose one hour, I think.

How about this: choose between 59 minutes of torture for you and 10 hours for the clone, vs 1 hour for you and the clone, with the experience for both you and the clone being indistinguishable for the first 59 minutes.

If you choose the 59min/10hrs, what's going through your mind in minute 59 ? Is it "this is all about to stop, but some other poor bastard is going to have a rough 9 hours" ? Or is it "ohgodohgodohgod I hope I'm not the clone" ?

Replies from: jasonmcdowell, DanArmak
comment by jasonmcdowell · 2009-11-11T07:18:42.674Z · LW(p) · GW(p)

I'd choose one hour also.

In your new formulation, I'd choose the 1 hour for both of us and we (both copies of me) would both expect it to be over soon at the 59th minute. My copies and I would be in agreement in our expectations of each other's behavior.

I identify closely with anything sufficiently similar to me - including close past and future versions of me. For instance, if there was a copy of me made an hour ago (whether or not the copy had runtime during that hour), and he or I were given the choice during your test, we would choose the same thing, as mentioned above.

comment by DanArmak · 2009-11-11T10:31:39.953Z · LW(p) · GW(p)

If you choose the 59min/10hrs, what's going through your mind in minute 59 ? Is it "this is all about to stop, but some other poor bastard is going to have a rough 9 hours" ? Or is it "ohgodohgodohgod I hope I'm not the clone" ?

It's true that both the original, and the clone, don't know if they're the clone or not at minute 59. But the original, who really made the decision before the clone was created, correctly optimized his own future experience to have 59 minutes of torture instead of an hour. The original doesn't care about the clone's experiences. (That is, I wouldn't care.)

Replies from: Kyre
comment by Kyre · 2009-11-12T05:45:16.258Z · LW(p) · GW(p)

Sounds consistent. Forgive me if I probe a bit further: I'm not trying to be rude, I'm interested in the boundaries of your theory.

In an unconsciousness - clone - destroy original brain - wake scenario, do you anticipate surviving?

In an unconsciousness - clone twice - destroy original brain - wake scenario, do you identify with / anticipate being zero, one, or two of the clones?

Replies from: DanArmak
comment by DanArmak · 2009-11-14T00:07:56.751Z · LW(p) · GW(p)

I changed my mind since then. So I would make different decisions now... more in line with what others here have been proposing.

I would try to optimize for all projected future clones. But in a scenario where I know some clones are going to die no matter what they do (your previous question), I would partially discount the experiences such clones have before they die and try to optimize more for the experiences of the survivor. That's just my personal preference: the lifelong memory of the survivor matters more than the precise terminal existence of the killed clone.

Regarding your new questions about anticipation, under the new theory that has no concept of personal continuity, there doesn't seem to be such a thing as personal survival where duplication&termination are involved.

comment by RobinZ · 2009-11-11T03:50:09.526Z · LW(p) · GW(p)

...I see. That doesn't change my answer, as it happens; my clone dies, yes, and you bear moral culpability for it, but it is better for one to make it out (relatively) unscathed than for the only survivor to be traumatized. In the new version, I would prefer there be only one hour of torture between us, and accept the first option.

See, the thing is: in my utility function, I don't have a special ranking for "my" experiences over everyone else's. When I do the math, I come out paying a lot more attention to my own situation for purely pragmatic reasons.

Replies from: DanArmak
comment by DanArmak · 2009-11-11T10:00:59.572Z · LW(p) · GW(p)

Even given your theory and your utility function, I don't see how your clone's 10 hours of torture and subsequent death would leave you traumatized. Isn't it best for the survivor to have experienced no torture at all (so we'd torture the clone)?

Replies from: RobinZ, RobinZ, DanArmak
comment by RobinZ · 2009-11-11T14:53:41.561Z · LW(p) · GW(p)

Also, regarding Scenario B: imagine that I decided to get in on it, only with a variation. Instead of duplicating you and offering those two options - either your original be tortured for an hour or your duplicate be tortured for ten - I created an entirely new individual, no more similar to you than any other human being, and gave you the choice between being tortured yourself for an hour and the new guy being tortured for ten.

Which would you choose? Me, I think it's perfectly obvious that it is better for less torture to occur.

Replies from: DanArmak
comment by DanArmak · 2009-11-13T23:59:15.965Z · LW(p) · GW(p)

Better for whom? To me it's perfectly obvious that it's better for me to have someone else tortured, and I would choose that. I would only choose to be hurt myself if the tradeoff was very unequal (a speck in the eye for me, ten hours of torture for him), and even then I would soon stop agreeing to be hurt if I had to face such a choice repeatedly.

If someone were to mount a campaign to stop your entire torturing project, and if I could participate by being hurt (but not by endangering my life), then I would agree to pay a much higher price. But that's because such participation is a form of social capital and also helps enforce social norms elsewhere (this is both an evolutionary and a personal reason).

comment by RobinZ · 2009-11-11T14:20:55.485Z · LW(p) · GW(p)

I'm sorry - in Scenario A (I suffer torture, or duplicate suffers more torture and is killed), I would choose the second option for essentially the reasons you propose. In Scenario B (I'm duplicated, then either I suffer torture or duplicate suffers more torture, then we both live), I would choose the first option because that's less torture. I don't see the complexity.

comment by DanArmak · 2009-11-13T23:55:11.539Z · LW(p) · GW(p)

Making sure I was understood correctly: after the clone is killed (which happens in both scenarios), the only identifiable "you" is the survivor. Therefore any considerations of lasting trauma should apply to the survivor, or not at all. So to minimize trauma (without minimizing the amount of torture), we should ensure that the survivor - who is known even before the clone is killed - is the one who is not tortured.

Replies from: RobinZ
comment by RobinZ · 2009-11-14T00:33:53.744Z · LW(p) · GW(p)

I was under the impression that the clone survived in the second scenario - that that was the difference between the two scenarios. This might explain some confusion about my answers, if this was a confusion.

Replies from: DanArmak
comment by DanArmak · 2009-11-14T00:57:43.586Z · LW(p) · GW(p)

No, the scenarios I originally proposed were:

1. A clone is made.

2a. If you choose, you're tortured for an hour.

2b. Or, if you so choose, the clone is tortured for ten hours.

3. Ten hours from now, the clone is destroyed in any case.

comment by DanArmak · 2009-11-11T01:16:31.327Z · LW(p) · GW(p)

I'm confident it doesn't actually work the way you're supposing.

How do you think it does work? That is, are you suggesting there is a thread of consciousness that sometimes works differently from how I've experienced it so far? I haven't seen a good model so far.

I've read Dennett's book. It does a good job of deconstructing and disproving existing models, but I don't remember that it proposed a good new model, just some interesting ideas and pointers.

Meanwhile, your model:

I at every instant am identical to this particular configuration of particles

has its own share of problems. For instance, you have no idea how many configurations identical or epsilon-similar to yours exist elsewhere at any given moment. You can't know when they're created or destroyed or modified. How can you not privilege the pattern-instance that right now is posting on LW, if you have no idea if it's the only instance or one of a million, the others being clones I just created in my basement?

Replies from: RobinZ
comment by RobinZ · 2009-11-11T01:44:26.227Z · LW(p) · GW(p)

How can you not privilege the pattern-instance that right now is posting on LW, if you have no idea if it's the only instance or one of a million, the others being clones I just created in my basement?

Okay, I'll grant you that I privilege the one I am at the moment, but the nine hundred ninety-nine thousand nine hundred ninety-nine duplicates will each privilege themselves - and if I knew in advance that they would be created, I would be concerned for what they would experience, for the same reason I care about any other future experience of mine.

comment by eirenicon · 2009-11-11T00:10:51.829Z · LW(p) · GW(p)

Would you still say yes if there was more than 10 seconds between copying you and killing you - say, ten hours? Ten years? What's the maximum amount of time you'd agree to?

Replies from: RobinZ
comment by RobinZ · 2009-11-11T00:30:45.073Z · LW(p) · GW(p)

...no, I don't think so. It would change what the original RobinZ would do, but not a lot else.

Replies from: eirenicon
comment by eirenicon · 2009-11-11T01:10:43.124Z · LW(p) · GW(p)

So ten seconds isn't enough time to create a significant difference between the RobinZs, in your opinion. What if Omega told you that in the ten seconds following duplication, you, the original RZ, would have an original thought that would not occur to the other RZs (perhaps as a result of different environments)? Would that change your mind? What if Omega qualified it as a significant thought, one that could change the course of your life - maybe the seed of a new scientific theory, or an idea for a novel that would have won you a Pulitzer, had original RZ continued to exist?

I think the problem with this scenario is that saying "ten seconds" isn't meaningfully different from saying "1 Planck time", which becomes obvious when you turn down the offer that involves ten hours or years. Our answers are tied to our biological perception of time - if an hour felt like a second, we'd agree to the ten hour option. I don't think they're based on any rational observation of what actually happens in those ten seconds. A powerful AI would not agree to Omega's offer - how many CPU cycles can you pack into ten seconds?

Replies from: DanArmak, RobinZ
comment by DanArmak · 2009-11-11T01:34:56.142Z · LW(p) · GW(p)

I don't quite understand the idea that someone who accepted the original offer (timespan = 10 seconds) would turn down the offer for any greater timespan. Surely more lifespan for the original (or for any one copy) is a good thing? If you favor creation of clones at cost of your life, why wouldn't you favor creation of clones at no immediate cost at all?

Replies from: jasonmcdowell
comment by jasonmcdowell · 2009-11-11T07:30:38.655Z · LW(p) · GW(p)

I like your point. I think I would accept such an offer with a greater time span, if N were > 1, if I knew how long I had, and if I could be with my copies.

  1. The psychological stress of anticipating dying wouldn't be worth it to me for just N=1.
  2. Not knowing when that one would die (only that he would) would be too psychologically stressful to be worth it.
  3. The one of me with an expiration date would live his remaining time differently than those who kept going. The ones of me who kept going would do things to honor him and fulfill his needs. The doomed one would expect them to do this.
  4. For longer expiration dates, we collectively would need greater compensation (more copies) to make it worth it.

Replies from: jasonmcdowell
comment by jasonmcdowell · 2009-11-11T07:35:45.616Z · LW(p) · GW(p)

For short time spans (1 second), I would accept N=1 for teleportation.

comment by RobinZ · 2009-11-11T01:32:36.454Z · LW(p) · GW(p)

So ten seconds isn't enough time to create a significant difference between the RobinZs, in your opinion.

I don't know if I'll claim that.

comment by kpreid · 2009-11-11T12:51:43.260Z · LW(p) · GW(p)

I'd accept the offer. (With some irrelevant-to-the-point-of-the-thought-experiment qualifications.)

The “subjective thread of experience” is a useful shortcut in thinking (note I don't mean that we consciously invent it), not an essential.

comment by Johnicholas · 2009-11-11T11:51:48.335Z · LW(p) · GW(p)

I think in practice I'd probably set N pretty high (10? 100? 10k?) - it's hard to know what one will do in extreme situations, particularly such unlikely ones.

But an alternative question might be: what should a rational entity do? The answer to this alternative is much easier to compute, and I think it's where the N=1 or N=2 answers are coming from. Would you agree that a creature evolved in an environment with such teleporter-and-duplicators would casually use them at N=1 and eagerly use them at N=2?

Replies from: DanArmak, RobinZ
comment by DanArmak · 2009-11-11T12:02:20.167Z · LW(p) · GW(p)

Yes, of course such a creature would agree at N=2 and 1. It's a direct way to maximize number of descendants.

Don't describe it as the rational choice though. Rationality has nothing to do with goals. It's the right thing to do only if your goal is to maximize the number of descendants, or clones.

Replies from: Johnicholas
comment by Johnicholas · 2009-11-11T16:56:27.848Z · LW(p) · GW(p)

I agree with you that an entity with different goals would behave differently, and that evolution's "goal" isn't (entirely) the same as my goals.

However, there's a sense of coherence with the physical world that I admire about evolution's decisions, and I want to emulate that coherence in choosing my own goals.

The fact that evolution values "duplicate perfectly, then destroy original" equivalently to "teleport" isn't a conclusive argument I should value them equivalently, but it's a suggestive argument towards that conclusion. The fact that my evolutionary environment never contained anything like that is suggestive that my gut feeling about it isn't likely to be helpful.

The balance of evidence seems to be against any such thing as continuous experience existing - an adaptive illusion analogous to the blind spot. Valuing continuous experience highly just doesn't seem to cut nature at its joints.

comment by RobinZ · 2009-11-11T15:02:44.181Z · LW(p) · GW(p)

I think you run into logistical problems when N gets large, by the way.

comment by pengvado · 2009-11-10T23:32:37.019Z · LW(p) · GW(p)

Yes, I accept the offer. If it's repeatable and I get to pick where the clone ends up, I'd do it for N=1: that's a teleporter. Getting rich in the process for N>1 (my clones would of course pool their resources for any large project; only a small fraction of that is needed for the increased living expenses), and obtaining some backups (this world is too dangerous to keep only one copy of anything important) are gravy.

comment by CannibalSmith · 2009-11-11T13:24:14.590Z · LW(p) · GW(p)

I accept. N=34. They have an orgy. I die happy.

comment by Tyrrell_McAllister · 2009-11-11T01:35:25.148Z · LW(p) · GW(p)

I posted the following comment in this thread, but it almost seems more appropriate here:

What would be the consequence of giving up the idea of a subjective thread of consciousness?

I wonder if believers in subjective threads of consciousness can perform a thought experiment like Chalmers' qualia-zombie thought experiment. I gather that advocates of the subjective thread hold that it is something more than just having certain clumps of matter existing at different times holding certain causal relationships with one another. (Otherwise you couldn't decide which of two future copies of yourself gets to inherit your subjective thread). So, advocates, does this mean that you can imagine an alternate universe in which matter is arranged in the same way as in our own throughout time, but in which no subjective threads bind certain clumps together? That is, do you think that "subjective-thread zombies" are possible in principle?

Just as in the Chalmers thought experiment, subjective-thread zombies would go around insisting that they have subjective threads. After all, their brains and lips would be participating in the same causal processes that lead you to say such things in this universe. And yet they would be wrong. They would not be saying these things because they have subjective threads, since they don't. And so, it seems, your insistence that you have a subjective thread also cannot have anything to do with whether you in fact do.

It seems that the idea of subjective-thread zombies is subject to all the problems that qualia zombies have. How do advocates of the subjective thread address or evade these problems?

Replies from: DanArmak
comment by DanArmak · 2009-11-11T01:51:13.022Z · LW(p) · GW(p)

The experience of a subjective thread of consciousness - let's call it STC - is pretty much the experience of experience, or qualia. So Chalmers' experiment is apt.

The relevant fact is that STC is purely subjective. The reason I think you and everyone else have STCs is because you say you do, and it's a simpler hypothesis than the one saying I'm different from everyone else. But the reason I think I have an STC is completely different: I know it from direct experience, which is pretty much by definition incontrovertible. (I may be mistaken about the real world, my experience may not match reality, I may even have false memories, but I can't be literally mistaken about what I'm experiencing right now. Unless you suppose I constantly have false memories of the last half-second and that the induction hypothesis is therefore inapplicable to subjective experience.)

So yes, I can imagine a universe without STCs. By construction, I won't exist in that universe. This is exactly the same question as: can I rule out that no-one else but me in this universe has STCs, or qualia, or consciousness? No, I can't; I can't test the suggestion. But I also can't proceed to the idea that I have no STC, because that would be denying my actual moment-to-moment experience. It would be as pointless as saying I'm colorblind when I'm not.

So to recap, I have no idea how to solve the hard problem of consciousness. I have no idea how to explain the subjective experience and connect it to the objective world, or to test for subjective experiences in somebody else. But I can't pretend consciousness and experience don't exist, either, at least not for me. I hope someone comes along and solves these problems and explains the solution to me - I don't know if this is actually possible, but I hope so - but lack of a solution doesn't make me ignore the problem.

Replies from: Tyrrell_McAllister, pengvado
comment by Tyrrell_McAllister · 2009-11-11T04:45:40.688Z · LW(p) · GW(p)

The experience of a subjective thread of consciousness - let's call it STC - is pretty much the experience of experience, or qualia. So Chalmers' experiment is apt.

There's a difference between the questions of STC and of qualia. We might live in a universe where you can have multiple causal descendants, each of them related to you-now in the same way that you-now are to you-one-year-ago, and none of them distinct from the others in a morally relevant way. Of course, each of these descendants would go on to experience qualia. The question of whether they have qualia is distinct from the question of whether one of them must be the unique inheritor of your STC.

So yes, I can imagine a universe without STCs.

Well, you can't really. None of us can really imagine a universe. We can imagine a bundle of properties which don't, so far as we can see, contradict each other. But the caveat "so far as we can see" is important.

Replies from: DanArmak
comment by DanArmak · 2009-11-11T09:46:34.527Z · LW(p) · GW(p)

The question of whether they have qualia is distinct from the question of whether one of them must be the unique inheritor of your STC.

Yes, you're right. You're describing a "branching thread of consciousness" model. Next, can there be a branching-and-merging model? Does it even make sense to ask the question?

If STC is a purely subjective experience unobservable from the outside, then we can't really count the STCs. We may not be able to actually assert that the number of STCs at time n+1 is greater than at time n. In other words, we don't really know if STCs can branch and/or merge even in our actual universe.

comment by pengvado · 2009-11-11T03:28:41.708Z · LW(p) · GW(p)

So you knowingly execute an inference process that has no correlation with the truth, because the possible-universes where it's wrong aren't morally significant, so you don't care what consequences the incorrect conclusions have here/there? (Is there a demonstrative pronoun that doesn't specify whether the object is near or far?) ("You" refers to whatever replies to this comment, not any epiphenomenon that may or may not be associated with it.) In the absence of a causal relation, where did you get the idea that your morality has anything to do with STCs?

If you have a single STC, why do you hypothesize that the bridging law associates it with the copy of you that's going to die rather than one of the ones that survives? Or would you decline in the thought-experiment, not due to certainty of death, but due to uncertainty?

What about consciousness that isn't arranged in threads? e.g. a branching structure, like the causal graph of the physics involved. If you frequently branch and rarely or never merge, then any given instance of you will still remember only a single history, so introspection (even epiphenomenal introspection, if you think there is such a thing, let alone introspection by a biological instantiation) can't distinguish this from the STC theory.

Replies from: DanArmak
comment by DanArmak · 2009-11-11T09:58:46.645Z · LW(p) · GW(p)

So you knowingly execute an inference process that has no correlation with the truth, because the possible-universes where it's wrong aren't morally significant, so you don't care what consequences the incorrect conclusions have here/there?

I don't see what morals have to do with it. I didn't talk about morals.

If you have a single STC, why do you hypothesize that the bridging law associates it with the copy of you that's going to die rather than one of the ones that survives? Or would you decline in the thought-experiment, not due to certainty of death, but due to uncertainty?

I am indeed uncertain, and so won't risk death. However, I do hypothesize that it's much more likely that, if a copy is created, my STC will remain with the original - insofar as an original is identifiable. And when it is not identifiable, I fear that my STC will die entirely and not possess any of the clones.

What about consciousness that isn't arranged in threads? e.g. a branching structure, like the causal graph of the physics involved.

This is perfectly possible. In fact it's very likely. Because, if we create a clone, and if it necessarily has an STC of its own that is (in terms of memories and personality) a clone of the original's STC, then it makes at least as much sense to say that the STC branched than to say we somehow "created" a new STC to specification.

In this case I would anticipate becoming some one of the branches (clones). However, I do not know yet the probability weight of becoming each one, and AFAICS this can only be established empirically - and to test it at all I would have to risk the aforementioned chance of death.

In this scenario I wouldn't want people to torture clones of me in case I became them - but as long as a clearly identifiable original body remains intact, I very strongly expect my STC to remain associated with it, and not pass to a random clone, so if I'm not threatened I can mistreat my clones. And I certainly would never accept destruction of the original body no matter how many clones were created in exchange.

comment by LauraABJ · 2009-11-11T14:46:25.784Z · LW(p) · GW(p)

I think this is an important question that is all too easy to gloss over. I, for one, don't care all that much about myself at 90, especially if I try to put away biases and actually imagine what I will be like at 90 (assuming life extension/uploading fail). Maybe I care a little more about her than I did about my grandparents - and maybe even a little less, since the idea of me being 90 is revolting - and I admittedly don't do a lot of things that would preserve my health over such a long period of time at the cost of pleasure to my current self, and no, I haven't done an expected utility calculation involving the imminence of uploading.

Also, the first time I was seriously informed about uploading, I was told that the digital copy of myself could quickly evolve since it wouldn't be shackled by the slowness of neurons, and that it would rapidly become unrecognizable as it correlated its own contents and repaired inconsistencies and set off to fulfill its ultimate utility function.

My reaction was, "Such an abomination must not be! Surely the extreme conclusions of my presently disjoint and unconscious preferences will be something that present me will find horrid. How disgusting to have a perversion of one's own self 'evolve' in a box moments after uploading... Why should present Laura give a damn about boxed Laura if she's so amazingly completely different?" I have since softened this reaction in light of other reflections and other ideas of uploading, but I still don't have a great answer to this basic question.

comment by Johnicholas · 2009-11-11T09:36:08.711Z · LW(p) · GW(p)

I think it is valuable to consider continuous identity to be something that we build, rather than something that we have automatically, or a percept of some kind. Those of us who are good at building continuous identity don't notice the effort involved, and perceive it as more-or-less automatic. Those of us who are bad at building continuous identity may, as the original poster suggests, identify more with similar peers than with a future self.

Movies like Memento (anterograde amnesia) and 50 First Dates (no memory from one day to the next) have pointed out that it's still possible to live, even without the inbuilt facility (memory) that we normally use for building continuous identity. Essentially, the protagonists in those movies practice stigmergy, shaping their environment (tattoos and photographs, letters to their future selves) in order to shape their future actions and act continuously.

I've been trying to write a post on how to amplify one's intertemporal rationality (including akrasia) by deliberate stigmergic manipulation of one's personal space, but real life keeps getting in the way.

comment by anonym · 2009-11-10T21:27:09.688Z · LW(p) · GW(p)

I think of myself as the 'software' that operates in my brain (i.e., a functionalist view of mind as a particular pattern that could be instantiated in many different ways, on different substrates). Hardware is important too, but I'm focusing on software here.

The software that I am changes quite slowly, and thus I see lots of continuity with the software that operated 24 hours ago, less with that from a year ago, and much, much less with that from 20+ years. My future self will be reached through a long sequence of incremental changes from my present self, so if there is something like a transitivity rule regarding caring about a future self, then I should care about all my future selves.

More pragmatically, who I am right now is partly responsible for who I will be tomorrow and for the experience of my incrementally different future self, whose experience will be processed by only a slightly different me, so present me has a vested interest in making sure future me is able to accomplish present me's deepest goals and the goals that I anticipate future me will have.

Replies from: cleonid
comment by cleonid · 2009-11-10T22:29:43.895Z · LW(p) · GW(p)

I think you make a nice point. Since the goals of our future selves tend to be similar to our own, we should care about our future selves' ability to accomplish their (and by extension our own) goals. However, I see two potential problems with this argument.

1) A large part of our investment in the future is concerned with general well-being rather than with abstract goals. For instance, the fate of most intellectual endeavors (publishing a book, making a scientific discovery, etc.) is usually determined long before the age of 65. Still, most educated people try to save money for a comfortable retirement.

2) If life is transient, it seems hard to motivate any goals which are not immediately related to our present well-being. Naturally, our present well-being depends on our ability to believe in the existence of such goals. However, fooling oneself into believing something false seems like a classic example of "irrationality".

Replies from: anonym
comment by anonym · 2009-11-14T21:32:10.742Z · LW(p) · GW(p)

1) Isn't general well-being a precondition for many of our other goals, and one that we know we will value in and of itself about as highly in retirement as we do now? The aspect of ourselves that is concerned with general well-being is probably one of the most invariant, so I can be sure with very high probability that all of my future selves will value it highly. Perhaps I'm not understanding your point here, but it seems to me that though I will be different in 30 years, I will still (justifiably) perceive myself as the same person, so I should care now about what I will experience then. There are also lots of very important present goals (like better understanding the human mind, the universe, mathematics, etc.) that I am nearly certain I will consider very important in 30 years. Saving money for retirement will help me continue to pursue those goals as well as appreciate and enjoy the results of the goals I'll have pursued and achieved in the intervening years.

2) I don't see how life being transient -- by which I assume you mean that mind/self is transient -- implies not caring about future well being. I perceive my future self as being the same person as I am now, so I pursue goals that I anticipate my future self will have as well as those I presently have (although many are the same).

I think I am justified in conceiving of my future self as being "me" and thus in having a special interest in that future self. Every mind is unique by virtue of the genes and the physical environment that created the initial substrate and the sum total of experience that has changed the mind to this point. The chance of a mind equivalent to "me in 30 years" naturally existing without being based on my genes and my life experience thus far is so close to zero as to be practically impossible.

"I" trace out a path in mind-space over time, but that path is unique to me; in a sense, "I" just am that path, the entire path. There is a part of mind that creates a representation of itself, and there is a part that gives special value to itself; these are both operative across all points of the path (all self models), even if they distinguish points on the path that "have happened" from those "happening now" and those that "haven't happened yet" differently and reason about them differently. They're still all "me", and I care about them all.

comment by CannibalSmith · 2009-11-11T13:08:20.020Z · LW(p) · GW(p)

My GUID servant does. I've hired a servant whose only task is to follow me around and point at me with an arrow-shaped sign bearing the GUID I've assigned myself. Any copies of me that Omega makes won't have my GUID and therefore won't be me.
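A minimal sketch of the identifier idea, assuming Python's standard uuid module (the variable names are hypothetical): a freshly generated GUID is, for all practical purposes, unique, so a copy forced to mint its own won't share mine.

```python
import uuid

# The GUID the servant's sign displays (assigned once, kept forever).
my_guid = uuid.uuid4()

# A copy made by Omega has to mint its own identifier...
copy_guid = uuid.uuid4()

# ...and with overwhelming probability it won't match mine.
print(my_guid == copy_guid)  # almost surely False
```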

comment by abigailgem · 2009-11-14T10:15:14.156Z · LW(p) · GW(p)

Deism in the 17th century was a move towards rationalism, away from the idea of a God who interfered in the world. Rationalists now will not be deists, but deists during the Enlightenment were more rational than society in general, and were moving towards atheism. I suggest that you use the word "atheists" rather than "non-deists" in the title.

Replies from: whpearson
comment by whpearson · 2009-11-14T11:00:10.687Z · LW(p) · GW(p)

I'd use "materialist". I'm sure there are some philosophies out there that don't believe in gods but do have an idea of the immortal soul.

Replies from: RobinZ
comment by RobinZ · 2009-11-14T16:55:12.798Z · LW(p) · GW(p)

I'm sure there are some philosophies out there that don't believe in gods but do have an idea of the immortal soul.

Do we have any Buddhists in the house?

(Yes, I know the Buddhist idea of the soul is not particularly similar to the Christian, but there is the idea of something surviving death under some circumstances...)

comment by [deleted] · 2009-11-11T18:04:16.739Z · LW(p) · GW(p)

Operant conditioning. If I pull a lever and someone in Antarctica gets struck by lightning, nothing happens to my brain. If I pull a lever and I get struck by lightning, I instantly receive a strong desire not to let that happen again. The fact that nobody but me is conditioned by my experiences is what makes me me. If I suddenly began having the experiences of another person as well as my own, that person and I would both become me; if I accidentally wandered into a giant helium balloon and died, nobody would be me. If I for some reason developed anterograde amnesia, "me" would have a very short lifespan; there would be no reason to care about my long-future self any more than any other person.

Replies from: Vladimir_Nesov, torekp
comment by Vladimir_Nesov · 2009-11-11T18:51:01.131Z · LW(p) · GW(p)

If I pull a lever and someone in Antarctica gets struck by lightning, nothing happens to my brain. If I pull a lever and I get struck by lightning, I instantly receive a strong desire not to let that happen again.

Unless you are a scientist!

Replies from: RobinZ
comment by RobinZ · 2009-11-11T18:53:53.921Z · LW(p) · GW(p)

Exception to be made for professional scientists.

comment by torekp · 2009-11-15T16:53:34.189Z · LW(p) · GW(p)

Operant conditioning is an excellent answer as to why you do care more about your future self than about a random future person. But the original post asks why you should care more.

Of course, it's open to you to argue that there's less room in between "should care" and "do care" than most people think. Perhaps when it comes to both whom and when we care about, there isn't much room at all.

Even going by what people do care about, however, I doubt that anterograde amnesia generally leads to disregard of one's next-day fate. Should it?

Replies from: None
comment by [deleted] · 2009-11-15T22:01:20.048Z · LW(p) · GW(p)

Even if we have anterograde amnesia, we certainly shouldn't disregard our future selves more than we disregard other people.

I think that we should care about ourselves over other people for whatever is the simplest reason consistent with the cases in which we do care more. It seems like the simplest reason to care about ourselves is operant conditioning.

comment by Morendil · 2009-11-10T23:50:44.097Z · LW(p) · GW(p)

I identify with, and care about, the person I will be tomorrow because we share a large fraction of our plans and sub-plans. The same goes, though to a lesser extent, about the person I will be one month from now, and so on.

I identify with the person I used to be yesterday because we share a lot of past experiences, and these experiences are resources I (and my future selves) may use to fulfill our plans. Even if I'm now very different from the kid who read Heinlein and Hofstadter, his experiences - my past - are what I draw upon. No one else has quite the same - I would dispute the assertion that "many years from now [a future me] will resemble [me] less than some actual people living today".

Some people do undergo large discontinuities; luck or catastrophe changes their plans to such an extent that they have less in common with their past selves than most people do. Some joke about it: "In a previous life I used to be a high-powered executive; now I run a small restaurant." (More realistically, I could say of myself that in a previous life I was a programmer, and now I organize conferences and sell my services as a consultant.)

Even if my future changes dramatically, my past will still be the same, and my past will continue to supply me with resources for bringing about futures I desire. If my desires about the future change dramatically, I still need to weave a consistent story about "myself", in order to make effective use of those resources. The CxO's experience is repurposed as preparation for the running of a restaurant. His expensive health club, symbol of wealth, has become excellent preparation for a job that requires robust health. (In the less dramatic example, my project management experience turns out to be applicable to conferences and consulting work.)

What makes me me is this combination of a fixed past, and plans for the future.

comment by Jordan · 2009-11-11T00:50:45.861Z · LW(p) · GW(p)

By past and current experience, I'm pretty sure my future self isn't going to like me very much. I find it hard to justify doing something for some jerk that doesn't even like me, just because he might share some vestiges of my utility function and memories.

I'm guessing the reason I do anything with the future in mind is simply because that's how my 'utility function' was set up. I don't think there's anything irrational about that, though.

comment by Furcas · 2009-11-12T03:15:38.433Z · LW(p) · GW(p)

Does the nature of time (discrete or continuous) have any relevance to this problem?

Replies from: cleonid
comment by cleonid · 2009-11-12T13:29:37.009Z · LW(p) · GW(p)

I don't think so. In practice, even if time is continuous, our consciousness is already divided into discrete spans separated by sleep.

comment by Thomas · 2009-11-10T20:41:16.054Z · LW(p) · GW(p)

Nothing special makes me me. It is just an illusion that I am unique and unrepeatable.

I am just a copy of yesterme.

Replies from: RobinZ
comment by RobinZ · 2009-11-10T20:48:50.460Z · LW(p) · GW(p)

Furthermore, the consequences of biting this particular bullet aren't as severe as cleonid implies. As David Morgan-Mar said:

Sometimes I think of my future self in the third person. And sometimes when I do things I don't like, but which I know will benefit me in the future, I like to think of that as giving a gift to the person who is "my future self". I don't see any benefit right now, but that guy will.

I'm pretty lazy. Given the choice between exercise or sitting in front of a TV or computer, I'll go for the latter. But I make sure I do some exercise fairly regularly, because I know the future me will be the healthier for it. This is a gift to that guy. I hope he appreciates it.

P.S. To the guy who has exercised regularly for the past few years, yes, I appreciate it. Thanks.

Replies from: cleonid
comment by cleonid · 2009-11-10T22:42:07.105Z · LW(p) · GW(p)

The question is – why send the gift to that particular guy rather than anyone else?

Replies from: RobinZ
comment by RobinZ · 2009-11-10T23:05:53.624Z · LW(p) · GW(p)

Two reasons spring quickly to mind:

  1. Future-me is relatively nearby - when it comes to the well-being and prosperity of an individual, it is a universally-accepted principle that the first responsibility for this falls upon those closest to the person in question. This is why most children are raised by their parents, rather than a selection of random strangers from across the country.*

  2. Present-me is unusually aware of the needs, desires, and circumstances of future-me; has an unusually strong degree of control over the development of all three; and has an unusually strong influence over all three even when not acting with future-me directly in mind.

Either of these would provide strong reasons for present-me to pay particular attention to the plight of future-me.

* Edit 2009-11-12: This is known generally as the principle of subsidiarity.

Replies from: torekp
comment by torekp · 2009-11-15T17:18:24.510Z · LW(p) · GW(p)

While those are good reasons, I suggest that the biggest reason is simply that present-me cares about "me". And "me" is a temporally extended person who includes future-me. There doesn't seem to be anything particularly irrational about self-concern, any more than there is anything irrational about being a Red Wings fan if you happen to live in Detroit. (Or particularly rational, for that matter.)

comment by Vladimir_Nesov · 2009-11-10T20:22:02.348Z · LW(p) · GW(p)

The font in this post is wrong.

Replies from: Jonathan_Graehl
comment by Jonathan_Graehl · 2009-11-10T21:00:24.313Z · LW(p) · GW(p)

Also, every paragraph ends in a question mark.

Also, the useful facts seem to be missing.