Cryonics without freezers: resurrection possibilities in a Big World

post by Scott Alexander (Yvain) · 2012-04-04T22:48:05.585Z · LW · GW · Legacy · 140 comments

And fear not lest Existence closing your
Account, should lose, or know the type no more;
The Eternal Saki from the Bowl has pour'd
Millions of Bubbles like us, and will pour.

When You and I behind the Veil are past,
Oh, but the long long while the World shall last,
Which of our Coming and Departure heeds
As much as Ocean of a pebble-cast.

    -- Omar Khayyam, Rubaiyat

 

A CONSEQUENTIALIST VIEW OF IDENTITY

The typical argument for cryonics says that if we can preserve brain data, one day we may be able to recreate a functioning brain and bring the dead back to life.

The typical argument against cryonics says that even if we could do that, the recreation wouldn't be "you". It would be someone who thinks and acts exactly like you.

The typical response to the typical argument against cryonics says that identity isn't in specific atoms, so it's probably in algorithms, and the recreation would have the same mental algorithms as you and so be you. The gap in consciousness of however many centuries is no more significant than the gap in consciousness between going to bed at night and waking up in the morning, or the gap between going into a coma and coming out of one.

We can call this a "consequentialist" view of identity, because it's a lot like the consequentialist views of morality. Whether a person is "me" isn't a function of how we got to that person, but only of where that person is right now: that is, how similar that person's thoughts and actions are to my own. It doesn't matter if we got to him by having me go to sleep and wake up as him, or got to him by having aliens disassemble my brain and then simulate it on a cellular automaton. If he thinks like me, he's me.

A corollary of the consequentialist view of identity says that if someone wants to create fifty perfect copies of me, all fifty will "be me" in whatever sense that means something.

GRADATIONS OF IDENTITY

An argument against cryonics I have never heard, but which must exist somewhere, says that even the best human technology is imperfect, and likely a few atoms here and there - or even a few entire neurons - will end up out of place. Therefore, the recreation will not be you, but someone very very similar to you.

And the response to this argument is "Who cares?" If by "me" you mean Yvain as of 10:20 PM 4th April 2012, then even Yvain as of 10:30 is going to have some serious differences at the atomic scale. Since I don't consider myself a different person every ten minutes, I shouldn't consider myself a different person if the resurrection-machine misplaces a few cells here or there.

But this is a slippery slope. If my recreation is exactly like me except for one neuron, is he the same person? Signs point to yes. What about five neurons? Five million? Or on a functional level, what if he blinked at exactly one point where I would not have done so? What if he prefers a different flavor of ice cream? What if he has exactly the same memories as I do, except for the outcome of one first-grade spelling bee I haven't thought about in years anyway? What if he is a Hindu fundamentalist?

If we're going to take a consequentialist view of identity, then my continued ability to identify with myself even if I naturally switch ice cream preferences suggests I should identify with a botched resurrection who also switches ice cream preferences. The only solution here that really makes sense is to view identity in shades of gray instead of black-and-white. An exact clone is more me than a clone with different ice cream preferences, who is more me than a clone who is a Hindu fundamentalist, who is more me than LeBron James is.

BIG WORLDS

There are various theories lumped together under the title "big world".

The simplest is the theory that the universe (or multiverse) is Very Very Big. Although the universe is only about 14 billion years old, which naively suggests the visible universe is only a few tens of billions of light years across, inflation allows the entire universe to get around the speed-of-light restriction; it could be very large or possibly infinite. I don't have the numbers available, but I remember a back-of-the-envelope calculation being posted on Less Wrong once about exactly how big the universe would have to be to contain repeating patches about the size of the Earth. That is, just as the first ten digits of pi, 3141592653, must repeat somewhere else in pi because pi's digits go on forever and, as far as anyone can tell, without pattern, and just as I would believe this with high probability even if pi were not infinite but just very very large, so the arrangement of atoms that make up Earth would recur in an infinite or very very large universe. This arrangement would obviously include you, exactly as you are now. A much larger class of Earth-sized patches would include slightly different versions of you, like the one with different ice cream preferences. This would also work, as Omar Khayyam mentioned in the quote at the top, if the universe were to last forever or for a very very long time.
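
A minimal sketch of that kind of estimate, as a pigeonhole argument (this is not the Less Wrong calculation referred to above; the particle count and the number of states per particle are placeholder assumptions chosen purely for illustration):

```python
import math

# Rough pigeonhole sketch of the "repeating patches" estimate. Both inputs are
# placeholder assumptions for illustration, not measured values.
n_particles = 1e50        # assumed: order-of-magnitude particle count of an Earth-sized patch
states_per_particle = 10  # assumed: coarse-grained states available to each particle

# Distinguishable configurations of one patch: states_per_particle ** n_particles.
# Work with log10 of that number, since it is far too large to compute directly.
log10_configs = n_particles * math.log10(states_per_particle)

# If space is uniformly filled with such patches, any given configuration should
# recur within roughly that many patches, so the distance to the nearest exact
# copy scales like 10**log10_configs patch-diameters.
print(f"expect a repeat within roughly 10^(10^{math.log10(log10_configs):.0f}) Earth-diameters")
```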

The second type of "big world" is the one posited by the Many Worlds theory of quantum mechanics, in which each quantum event causes the Universe to split into several branches. Because quantum events determine larger-level events, and because each branch continues branching, some of these branches could be similar to our universe but with observable macro-scale differences. For example, there might be a branch in which you are the President of the United States, or the Pope, or died as an infant. Although this sounds like a silly popular science version of the principle, I don't think it's unfair or incorrect.

The third type of "big world" is modal realism: the belief that all possible worlds exist, maybe in proportion to their simplicity (whatever that means). We notice the existence of our own world only for indexical reasons: that is, just as there are many countries, but when I look around me I only see my own; so there are many possibilities, but when I look around me I only see my own. If this is true, it is not only possible but certain that there is a world where I am Pope and so on.

There are other types of "big worlds" that I won't get into here, but if any type at all is correct, then there should be very many copies of me or people very much like me running around.

CRYONICS WITHOUT FREEZERS

Cryonicists say that if you freeze your brain, you may experience "waking up" a few centuries later when someone uses the brain to create a perfect copy of you.

But whether or not you freeze your brain, a Big World is creating perfect copies of you all the time. The consequentialist view of identity says that your causal connection with these copies is unnecessary for them to be you. So why should a copy of you created by a far-future cryonicist with access to your brain be better able to "resurrect" you than a copy of you that comes to exist for some other reason?

For example, suppose I choose not to sign up for cryonics, have a sudden heart attack, and die in my sleep. Somewhere in a Big World, there is someone exactly like me except that they didn't have the heart attack and they wake up healthy the next morning.

The cryonicists believe that having a healthy copy of you come into existence after you die is sufficient for you to "wake up" as that copy. So why wouldn't I "wake up" as the healthy, heart-attack-free version of me in the universe next door?

Or: suppose that a Friendly AI fills a human-sized three-dimensional grid with atoms, using a quantum die to determine which atom occupies each "pixel" in the grid. This splits the universe into as many branches as there are possible configurations of the grid (presumably a lot), and in one of those branches the AI's experiment creates a perfect copy of me at the moment of my death, except healthy. If creating a perfect copy of me causes my "resurrection", then that AI has just resurrected me as surely as cryonics would have.
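
For a rough sense of "presumably a lot", here is a minimal count under made-up grid parameters (the cell count and the number of choices per cell are assumptions for illustration, not figures from the post):

```python
import math

# Hypothetical numbers for the quantum-grid thought experiment: a human-sized
# grid of atom-scale cells, each filled independently with one of a small menu
# of options. Both figures below are assumptions for illustration.
cells = 10**28           # assumed: atom-scale cells in a human-sized volume
choices_per_cell = 100   # assumed: distinct atom types (plus vacancy) per cell

# Each cell is set independently, so the branch count is choices_per_cell ** cells.
# Report its log10, since the number itself is astronomically large.
log10_branches = cells * math.log10(choices_per_cell)
print(f"log10(number of branches) is about {log10_branches:.2g}")
```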

The only downside I can see here is that I have less measure (meaning I exist in a lower proportion of worlds) than if I had signed up for cryonics directly. This might be a problem if I think that my existence benefits others - but I don't think I should be concerned for my own sake. Right now I don't go to bed at night weeping that my father only met my mother through a series of unlikely events and so most universes probably don't contain me; I'm not sure why I should do so after having been resurrected in the far future.

RESURRECTION AS SOMEONE ELSE

What if the speculative theories involved in Big Worlds all turn out to be false? All hope is still not lost.

Above I wrote:

An exact clone is more me than a clone with different ice cream preferences, who is more me than a clone who is a Hindu fundamentalist, who is more me than LeBron James is.

I used LeBron James because from what I know about him, he's quite different from me. But what if I had used someone else? One thing I learned upon discovering Less Wrong is that I had previously underestimated just how many people out there are *really similar to me*, even down to weird interests, personality quirks, and sense of humor. So let's take the person living in 2050 who is most similar to me now. I can think of several people on this site alone who would make a pretty impressive lower bound on how similar the most similar person to me would have to be.

In what way is this person waking up on the morning of January 1 2050 equivalent to me being sort of resurrected? What if this person is more similar to Yvain(2012) than Yvain(1995) is? What if I signed up for cryonics, died tomorrow, and was resurrected in 2050 by a process about as lossy as the difference between me and this person?

SUMMARY

Personal identity remains confusing. But some of the assumptions cryonicists make are, in certain situations, sufficient to guarantee personal survival after death without cryonics.

140 comments


comment by wedrifid · 2012-04-05T02:35:40.107Z · LW(p) · GW(p)

The only downside I can see here is that I have less measure (meaning I exist in a lower proportion of worlds) than if I had signed up for cryonics directly. This might be a problem if I think that my existence benefits others - but I don't think I should be concerned for my own sake.

Yvain is quantum suicidal? I didn't expect that! People whose preferences 'add up to normal' do (implicitly) care about measure.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-04-05T11:20:39.413Z · LW(p) · GW(p)

Quantum suicide seems like a good idea to me if we know that the assumptions behind it (both quantum and identity-related) are true, if we're purely selfish (eg don't care about the bereaved left behind), and if we don't assume our actions are sufficiently correlated with those of others to make everyone try quantum suicide and end up all alone in our own personal Everett branch.

However, I might have the same "It's a good idea, but I am going to refuse to do this for reasons of personal sanity" reaction as I have with Pascal's Mugging.

Replies from: wedrifid, Dmytry
comment by wedrifid · 2012-04-05T15:00:46.464Z · LW(p) · GW(p)

Quantum suicide seems like a good idea to me if we know that the assumptions behind it (both quantum and identity-related) are true, if we're purely selfish (eg don't care about the bereaved left behind), and if we don't assume our actions are sufficiently correlated with those of others to make everyone try quantum suicide and end up all alone in our own personal Everett branch.

Fortunately, if you combine the second and third potential problems you end up with a solution that eliminates both of them. Then you just have the engineering problem involved in building a bigger death box.

However, I might have the same "It's a good idea, but I am going to refuse to do this for reasons of personal sanity" reaction as I have with Pascal's Mugging.

I hope so. Your position is entirely consistent - I cannot fault it on objective grounds and what you say in your post does directly imply what you confirm in your comment. That said, the preferences you declare here are vastly different to those that I consider 'normal' and so there remains the sneaking suspicion that you are wrong about what you want. That is, that you incorrectly extrapolate your volition.

On the other hand the existence of people with the preferences you describe here is a great potential boon to the rest of us. Whenever parties have vastly different values there is the potential for trade between them. And the difference in values between those that care about measure and those that don't rounds off to absolute. When you act on your preferences we can essentially just inherit all of your stuff in exchange for a (from our perspective) token probabilistic payout. Everybody wins!

Replies from: Will_Newsome, Yvain
comment by Will_Newsome · 2012-04-05T23:33:22.844Z · LW(p) · GW(p)

(Not sure where to put this:) Yvain's position doesn't seem sane to me, and not just for reasons of preference; attempting to commit suicide will just push most of your experienced moments backwards to regions where you've never heard of quantum suicide or where for whatever reason you thought it was a stupid idea. Anticipating ending up in a world with basically no measure just doesn't make sense: you're literally making yourself counterfactual. If you decided to carve up experience space into bigger chunks of continuity then this problem goes away, but most people agree that (as Katja put it) "anthropics makes sense with shorter people". Suicide only makes sense if you want to shift your experience backwards in time or into other branches, not in order to have extremely improbable experiences. I mean, that's why those branches are extremely improbable: there's no way you can experience them, quantum suicide or no.

Replies from: wedrifid, amit
comment by wedrifid · 2012-04-06T00:58:44.377Z · LW(p) · GW(p)

(Not sure where to put this:)

Here is fine.

Yvain's position doesn't seem sane to me, and not just for reasons of preference; attempting to commit suicide will just push most of your experienced moments backwards to regions where you've never heard of quantum suicide or where for whatever reason you thought it was a stupid idea.

This doesn't seem to be a problem that comes from being quantum suicidal but rather from an entirely different kind of anthropic-based suicidal insanity. That is, I would not predict that experience as evaluated by Yvain's model of caring would be perceived this way. It certainly could be but that would be an additional insanity to the one that makes quantum roulette desirable. (No offense to Yvain and his Quantum Suicidal ilk by referring to this as 'insanity'. I mean only 'drastically different preferences to my own in an agent similar enough to me that such comparison is meaningful'. In fact, if you're going to limit your optimisation to tiny amounts of measure then go ahead and exterminate humanity to maximise paperclips for all I care!)

To expand somewhat: Quantum suiciding at (subjective) time t results in you at time t-1 having more measure than you at time t+1 but under default quantum-suicidal preferences these are in no way in competition. Relative measure between past and future selves isn't any particular issue. There are just various different subjective experiences at t-1, t and t+1, a desire to have each of them as positive-on-average as can be but no particular inclination to trim measure in one part of a timeline to increase it in another. For example, I wouldn't expect Yvain to (consider it rational to) commit conventional-and-complete suicide whenever it seemed like all his peak experiences are in the past and all that remained in life is to make the most of the remaining dregs.

comment by amit · 2012-04-16T00:32:54.261Z · LW(p) · GW(p)

You're not saying that if I perform QS I should literally anticipate that my next experience will be from the past, are you? (AFAICT, if QS is not allowed, I should just anticipate whatever I would anticipate if I was going to die everywhere, that is, going to lose all of my measure.)

Replies from: FAWS
comment by FAWS · 2012-04-25T00:18:13.177Z · LW(p) · GW(p)

(Not Will, but I think I mostly agree with him on this point)

There is no such thing as a uniquely specified "next experience". There are going to be instances of you that remember being you and consider themselves the same person as you, but there is no meaningful sense in which exactly one of them is right. Granted, all instances of you that remember a particular moment will be in the future of that moment, but it seems silly to only care about the experiences of that subset of instances of you and completely neglect the experiences of instances that only share your memories up to an earlier point. If you weight the experiences more sensibly then in the case of a rigorously executed quantum suicide the bulk of the weight will be in instances that diverged before the decision to commit quantum suicide. There will be no chain of memory leading from the QS to those instances, but why should that matter?

comment by Scott Alexander (Yvain) · 2012-04-07T11:00:33.650Z · LW(p) · GW(p)

So, if Omega was willing to put a copy of you in an Everett branch that didn't already have one, how much money would you be willing to bid for this service?

If Omega was going to charge $100, and the offer remained open for as many Everett branches as you wanted, how many $100s would you give Omega?

Replies from: wedrifid
comment by wedrifid · 2012-04-07T15:31:12.751Z · LW(p) · GW(p)

So, if Omega was willing to put a copy of you in an Everett branch that didn't already have one, how much money would you be willing to bid for this service?

I'm not used to evaluating the worth of Everett branches by count. But for the purpose of this question may I assume you mean "another Everett branch of equal measure to this one, as of the time I click 'comment'"?

As for an answer... um... I'm not sure, a fair bit? Working out my preferences - quantitatively - in situations so far outside the usual realm of operation is tricky.

If Omega was going to charge $100, and the offer remained open for as many Everett branches as you wanted, how many $100s would you give Omega?

After I gave him everything I had I would get a new job that more closely matched my potential for financial gain.

Two extra considerations:

  • Even aside from a terminal preference for measure maximization I would consider buying more measure purely for the purpose of giving me acausal bargaining power. (I'm even less sure about qualitatively evaluating the usefulness of acausal bargaining power.)
  • Buying more equal-measure branches is different to trying to preserve measure in the one we are in. While I think I have preferences such that I would buy a new one I'm not sure if the default behavior of humans would be to do so.
comment by Dmytry · 2012-04-05T17:16:50.757Z · LW(p) · GW(p)

But would you take a partial amnesia pill before you go to sleep (which acts while you're not conscious and does or does not dissolve based on a quantum event)? As a partial quantum suicide. What about total amnesia? How much of your state has to be lost until it's a suicide where QM will save you? Why won't QM save you from quantum-hangover where you do or don't get a terrible headache and IQ impairment of 20 points?

Replies from: wedrifid
comment by wedrifid · 2012-04-05T17:49:33.635Z · LW(p) · GW(p)

Why won't QM save you from quantum-hangover where you do or don't get a terrible headache and IQ impairment of 20 points?

Just because. Really, that's all there is to it. It so happens that Yvain's (alleged) subjectively objective preferences are such that given the existence of a non-hangover branch the existence of a hangover branch is less desirable than if the hangover was substituted for death in that branch.

This is not entirely unrelated to the natural outcome of 'average utilitarianism'. Under that value system the simplest solution is to go around killing the unhappiest person in the world until such time as the remaining people would be made on average less happy by the removal of the lowest-happiness people. Sure, this is totally crazy. But if you have crazy preferences you should do crazy things.

Replies from: Dmytry
comment by Dmytry · 2012-04-05T18:06:03.781Z · LW(p) · GW(p)

Well, that was kind of a rhetorical question; I want to make him less keen on quantum suicide by appealing to how (my guess is) he's not expecting MWI to save him from experiencing quantum-hangover.

edit: to explain, I myself believe death to be a real value from epsilon to 1 rather than a binary value, the epsilon being the shared memories remaining in other people, other people nearby who think like you, etc. and the 1 being the state where you are never forgetting anything at all, with living being somewhere close to 1, and amnesias somewhere between 0 and that. Having binary-valued anything seems to result in really screwed up behaviours. (By other people nearby I mean within distances that are much smaller than your information content; as for the other people at distances that have as many bits in their relative coordinate as you have in your brain, it seems to me those really should be counted as substantially different, even though I can't quite pinpoint why.)

Replies from: wedrifid
comment by wedrifid · 2012-04-05T18:17:18.651Z · LW(p) · GW(p)

Well, that was kind of a rhetorical question;

It was a question that got a straight answer that is in accord with Yvain's position. To the extent that such rhetorical questions get answers that do not require the rhetoric victim to change their mind, the implied argument can be considered refuted.

In practical social usage rhetorical questions can, indeed, often be used as a way to make arguments that an opponent is not allowed to respond to. Here on lesswrong we are free to reject that convention and so not only are literal answers to rhetorical questions acceptable, arguments that are hidden behind rhetorical questions should be considered at least as subject to criticism as arguments made openly.

Replies from: Dmytry
comment by Dmytry · 2012-04-05T18:24:18.263Z · LW(p) · GW(p)

Well, it is the case that humans often strive to have consistent goal systems (perhaps minimizing their descriptive complexity?), so while he can just say 'because I defined my goal system to be this' he is also likely to think and try to come up with some kind of general principle that does not have weird discontinuities at some point when the amnesia is too much like death for his taste.

edit: I think we are talking about different issues; I'm not making a point about his utility function, I'm making a point that he is expecting to become 'him 20 points dumber and with a terrible headache', who's a rather different person, rather than to become someone in a galaxy far far away who doesn't have the hangover and is thus more similar to him before the hangover.

Replies from: wedrifid
comment by wedrifid · 2012-04-05T18:29:20.757Z · LW(p) · GW(p)

Well, it is the case that humans often strive to have consistent goal systems (perhaps minimizing their descriptive complexity?), so while he can just say 'because I defined my goal system to be this'

Yvain does present a consistent goal system. It is one that may appear either crazy or morally abhorrent to us but all indications are that it is entirely consistent. If you were attempting to demonstrate to Yvain an inconsistency in his value system that requires arbitrary complexity to circumvent then you failed.

Replies from: Dmytry
comment by Dmytry · 2012-04-05T18:31:17.911Z · LW(p) · GW(p)

I think you're misunderstanding me, see edit. The point I am making is not so much about his values, but about his expectations of subjective experience.

Replies from: wedrifid
comment by wedrifid · 2012-04-05T18:33:35.575Z · LW(p) · GW(p)

The point I am making is not so much about his values, but about his expectations of subjective experience.

Yvain's expectations of subjective experience actually seem sane to me. Only his values (and so expected decisionmaking) are weird.

Replies from: Dmytry
comment by Dmytry · 2012-04-05T18:40:48.971Z · LW(p) · GW(p)

Well, my argument is that you can propose a battery of possible partial quantum suicide setups involving a machine that partially destroys you (e.g. you are anaesthetised and undergo lobotomy with varying extent of cutting, or something of this sort such as administration of a sublethal dose of neurotoxin). At some point, there's so little of you left that you're as good as dead; at some other point, there's so much of you left that you don't really expect to be quantum-saved. Either he has some strange continuous function in between, which I am very curious about, or he has a discontinuity, which is weird. (And I am guessing a discontinuity, but I'd be interested to hear about the function.)

comment by Kaj_Sotala · 2012-04-05T06:31:08.841Z · LW(p) · GW(p)

Isn't dissolving the concept of personal identity relatively straightforward?

We know that we've evolved to protect ourselves and further our own interests, because organisms who didn't have that as a goal didn't fare very well. So in this case at least, personal identity is merely a desire to make sure that "this" organism survives.

Naturally, the problem is defining "this organism". One says, "this" organism is something defined by physical continuity. Another says, "this" organism is something defined by the degree of similarity to some prototype of this organism.

One says, sound is acoustic vibrations. Another says, sound is the sensation of hearing...

There's no "real" answer to the question "what is personal identity", any more than there is a "real" answer to the question "what is sound". You may pick any definition you prefer. Of course, truly dissolving "personal identity" isn't as easy as dissolving "sound", because we are essentially hard-wired to anticipate that there is such a thing as personal identity, and to have urges for protecting it. We may realize on an intellectual level that "personal identity" is just a choice of words, but still feel that there should be something more to it, some "true" fact of the matter.

But there isn't. There are just various information-processing systems with different degrees of similarity to each other. One may draw more-or-less arbitrary borders between the systems, designating some as "me" and some as "not-me", but that's a distinction in the map, not in the territory.

Of course, if you have goals about the world, then it makes sense to care about the information-processing systems that are similar to you and share those goals. So if I want to improve the world, it makes sense for me to care about "my own" (in the commonsense meaning of the word) well-being - even though future instances of "me" are actually distinct systems from the information-processing system that is typing these words, I should still care about their well-being because A) I care about the well-being of minds in general, and B) they share at least part of my goals, and are thus more likely to carry them out. But that doesn't mean that I should necessarily consider them "me", or that the word would have any particular meaning.

And naturally, on some anticipation/urge-level I still consider those entities "me", and have strong emotions regarding their well-being and survival, emotions that go above and beyond that which is justified merely in the light of my goals. But I don't consider that as something that I should necessarily endorse, except to the extent that such anticipations are useful instrumentally. (E.g. status-seeking thoughts and fantasies may make "me" achieve things which I would not otherwise achieve, even though they make assumptions about such a thing as personal identity.)

Replies from: Vladimir_Nesov, Grognor, wedrifid, Dmytry
comment by Vladimir_Nesov · 2012-04-05T08:58:25.970Z · LW(p) · GW(p)

So if I want to improve the world, it makes sense for me to care about "my own" ... well-being - even though future instances of "me" are actually distinct systems ... because A) I care about the well-being of minds in general, and B) they share at least part of my goals, and are thus more likely to carry them out.

I think it's clear that there is also terminal value in caring about the well-being of "me". As with most other human psychological drives, it acts as a sloppily optimized algorithm of some instrumental value, but while its purpose could be achieved more efficiently by other means, the particular way it happens to be implemented contributes an aspect of human values that is important in itself, in a way that's unrelated to the evolutionary purpose that gave rise to the psychological drive, or to instrumental value of its present implementation.

(Relevant posts: Evolutionary Psychology, Thou Art Godshatter, In Praise of Boredom.)

Replies from: steven0461, Kaj_Sotala, Will_Newsome
comment by steven0461 · 2012-04-06T21:25:04.815Z · LW(p) · GW(p)

It's not clear to me. To get us to behave selfishly, evolution could have instilled false aliefs to the effect that other people's mental processes aren't as real as ours, in which case we may want to just disregard those. Even if there's no such issue, there's not necessarily any simple one-to-one mapping from urges to components of reflected preference, especially when the urges seem to involve concepts like "me" that are hard to extend beyond a low-tech human context. (If I recall correctly, on previous occasions when you've made this argument, you were thinking of "me" in terms of similarity in person-space, which is not as hard to make sense out of as the threads of experience being discussed in this thread.)

comment by Kaj_Sotala · 2012-04-05T09:51:02.697Z · LW(p) · GW(p)

Fair enough. I don't personally endorse it as a terminal value, but it's everyone's own decision whether to endorse it or not.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-04-05T10:06:04.919Z · LW(p) · GW(p)

I don't personally endorse it as a terminal value, but it's everyone's own decision whether to endorse it or not.

I don't believe it is; at least, it's relatively easy to decide incorrectly, so the fact of having (provisionally) decided doesn't answer the question of what the correct decision is. "It's everyone's own decision" or "everyone is entitled to their own beliefs" sounds like very bad epistemology.

I cited what seems to me like a strong theoretical argument for antipredicting terminal indifference to personal well-being. Your current conclusion being contrary to what this argument endorses doesn't seem to address the argument itself.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-04-05T11:07:28.859Z · LW(p) · GW(p)

I thought that your previous comment was simply saying that

1) in deciding whether or not we should value the survival of a "me", the evolutionary background of this value is irrelevant
2) the reason why people value the survival of a "me" is unrelated to the instrumental benefits of the goal

I agree with those claims, but don't see them as being contrary to my decision not to personally endorse such a value. You seem to be saying that the question of whether or not a "me" should be valued is in some sense an epistemological question, while I see it as a choice of personal terminal values. The choice of terminal values is unaffected by epistemological considerations, otherwise they wouldn't be terminal values.

Replies from: torekp
comment by torekp · 2012-04-08T13:26:31.816Z · LW(p) · GW(p)

You seem to be saying that the question of whether or not a "me" should be valued is in some sense an epistemological question, while I see it as a choice of personal terminal values. The choice of terminal values is unaffected by epistemological considerations, otherwise they wouldn't be terminal values.

Wait - what? Are you partly defining terminal values via their being unaffected by epistemic considerations? This makes me want to ask a lot of questions for which I would otherwise take answers for granted. Like: are there any terminal values? Can a person choose terminal values? Do choices express values that were antecedent to the choice? Can a person have "knowledge" or some closely related goal as a personal terminal value?

comment by Will_Newsome · 2012-04-05T23:48:09.310Z · LW(p) · GW(p)

(Interestingly, seditious values deathist that I am, I am not inclined to believe in "terminal value" of ecologically-contingent approximations of actual morality (i.e., the actually justified decision policy, i.e., God); but God seems to care about those hasty approximations on their own terms, and so I end up caring, by transitivity, about me qua me and individuals qua individuals. So the humanist and the theist end up in the same non-Buddhist place. Ave meta!)

Replies from: Rain
comment by Rain · 2012-04-11T14:32:45.824Z · LW(p) · GW(p)

Such comments remind me of Time Cube with a dash of sanity, if only you would strip out the nonsense words (about 90 percent of the content) and clearly define everything that's left.

Replies from: amit
comment by amit · 2012-04-15T22:40:50.408Z · LW(p) · GW(p)

I think it's clearly not nonsense, but deserves to be downvoted anyway for casually assuming weird god stuff that he hasn't really properly explained anywhere. Still interesting to the W_N connoisseur, and the parentheses are a mitigating factor.

comment by Grognor · 2012-04-05T10:15:17.736Z · LW(p) · GW(p)

Isn't dissolving the concept of personal identity relatively straightforward?

Nay, I don't think it is.

I don't take issue with anything in particular you said in this comment, but it doesn't feel like a classic, non-greedy Reduction of the style used to reduce free will into cognitive algorithms or causality into math.

The sense in which you can create another entity arbitrarily like yourself and say, "I identify with this creature based on so-and-so definition" and then have different experiences than the golem no matter how like you it is is the confused concept that I do not think has been dissolved; I am not sure if a non-fake dissolving of it has ever even started. (Example: Susan Blackmore's recent "She Won't Be Me". This is clearly a fake reduction; you don't get to escape the difficulties of personal identity confusion by saying a new self pops up every few minutes/seconds/plancktimes. Your comment is less obviously wrong but still sidesteps the confusion instead of Solving it.)

Hell, it's not just a Confusing Problem. I'd say it's a good candidate for The Most Confusing Problem.

Edit (one of many little ones): I made this comment pretty poorly, but I hope the point both makes sense and got through relatively intact. Mitchell Porter's comment is also really good until the penultimate paragraph.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-04-05T11:25:39.576Z · LW(p) · GW(p)

The sense in which you can create another entity arbitrarily like yourself and say, "I identify with this creature based on so-and-so definition" and then have different experiences than the golem no matter how like you it is is the confused concept that I do not think has been dissolved; I am not sure if a non-fake dissolving of it has ever even started.

I tried responding to this example, but I find the whole example so foreign and confused in some sense that I don't even know how to make enough sense of it to offer a critique or an explanation. Why wouldn't you expect there to exist an entity with different experiences than the golem, and which remembers having identified with the golem? You're not killing it, after all.

comment by wedrifid · 2012-04-15T09:59:23.227Z · LW(p) · GW(p)

And naturally, on some anticipation/urge-level I still consider those entities "me", and have strong emotions regarding their well-being and survival, emotions that go above and beyond that which is justified merely in the light of my goals. But I don't consider that as something that I should necessarily endorse, except to the extent that such anticipations are useful instrumentally.

I don't consider them something I should necessarily endorse. I consider them something that I do actually endorse because all things considered I want to.

(Although given that the endorsement of such emotional considerations thereby makes them parts of my goals I suppose we could declare technically and tautologically that any emotions that go above and beyond that which is incorporated into my goals should not be endorsed, depending on the definition of a few of the terms.)

comment by Dmytry · 2012-04-05T17:20:23.999Z · LW(p) · GW(p)

Interesting. I was thinking about the same thing regarding the early stages of AGI - it is difficult to define 'me' precisely, and it's unclear why one would need a really precise definition of 'me' in an early AGI. It's good enough if life is 'me' to the AI but Jupiter isn't 'me'; that's a negligible loss in utility from life not being kosher food for the AI.

comment by Wei Dai (Wei_Dai) · 2012-04-05T22:14:43.796Z · LW(p) · GW(p)

This, along with the simulation argument, is why I'm not too emotionally stressed out by the feelings of impending doom that seem to afflict some people familiar with SIAI's ideas. My subjective anticipation is mollified by the thought that I'll probably either never experience dying or wake up to find that I've been in an ancestral simulation, which leaves the part of me that wants to prevent all the empty galaxies from going to waste to work in peace. :)

Also, we can extend the argument a bit for those worried about "measure". First, an FAI might be able to recreate people from historical clues (writings, recordings, other's memories of them, etc.). But suppose that's not possible. An FAI could still create a very large number of historically plausible people, and assuming FAIs in other Everett branches do the same, the fact that I probably won't be recreated in this branch will be compensated for by the fact that I'll be recreated in other branches where I currently don't exist, thus preserving or even increasing my overall measure.

Replies from: Wei_Dai, Will_Newsome, TheWakalix
comment by Wei Dai (Wei_Dai) · 2020-07-01T09:46:26.878Z · LW(p) · GW(p)

My subjective anticipation is mollified by the thought that I’ll probably either never experience dying or wake up to find that I’ve been in an ancestral simulation, which leaves the part of me that wants to prevent all the empty galaxies from going to waste to work in peace. :)

Update: Recent events have made me think that the fraction of advanced civilizations in the multiverse that are sane may be quite low. (It looks like our civilization will probably build a superintelligence while suffering from serious epistemic pathologies, and this may be typical for civilizations throughout the multiverse.) So now I'm pretty worried about "waking up" in some kind of dystopia (powered or controlled by a superintelligence with twisted beliefs or values), either in my own future lightcone or in another universe.

Actually, I probably shouldn't have been so optimistic even before the recent events...

Replies from: steven0461, zrkrlc
comment by steven0461 · 2020-07-06T17:57:01.678Z · LW(p) · GW(p)

I updated downward somewhat on the sanity of our civilization, but not to an extremely low value or from a high value. That update justifies only a partial update on the sanity of the average human civilization (maybe the problem is specific to our history and culture), which justifies only a partial update on the sanity of the average civilization (maybe the problem is specific to humans), which justifies only a partial update on the sanity of outcomes (maybe achieving high sanity is really easy or hard). So all things considered (aside from your second paragraph) it doesn't seem like it justifies, say, doubling the amount of worry about these things.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2020-07-06T18:55:20.882Z · LW(p) · GW(p)

I agree recent events don't justify a huge update by themselves if one started with a reasonable prior. It's more that I somehow failed to consider the possibility of that scenario, the recent events made me consider it, and that's why it triggered a big update for me.

comment by junk heap homotopy (zrkrlc) · 2020-12-04T10:40:41.717Z · LW(p) · GW(p)

Now I'm curious. Does studying history make you update in a similar way? I feel that these times are not especially insane compared to the rest of history, though the scale of the problems might be bigger.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2020-12-05T00:55:58.312Z · LW(p) · GW(p)

Now I’m curious. Does studying history make you update in a similar way?

History is not one of my main interests, but I would guess yes, which is why I said "Actually, I probably shouldn’t have been so optimistic even before the recent events..."

I feel that these times are not especially insane compared to the rest of history, though the scale of the problems might be bigger.

Agreed. I think I was under the impression that western civilization managed to fix a lot of the especially bad epistemic pathologies in a somewhat stable way, and was unpleasantly surprised when that turned out not to be the case.

comment by Will_Newsome · 2012-04-05T23:59:36.516Z · LW(p) · GW(p)

prevent all the empty galaxies from going to waste

(Off-topic: Is this a decision theoretic thing or an epistemic thing? That is, do you really think the stars are actually out there to pluck in a substantial fraction of possible worlds, or are you just focusing on the worlds where they are there to pluck because it seems like we can't do nearly as much if the stars aren't real? Because I think I've come up with some good arguments against the latter and was planning on writing a post about it; but if you think the former is the case then I'd like to know what your arguments are, because I haven't seen any really convincing ones. (Katja suggested that the opposite hypothesis—that superintelligences have already eaten the stars and are just misleading us, or we are in a simulation where the stars aren't real—isn't a "simple" hypothesis, but I don't quite see why that would be.) What's nice about postulating that the stars are just an illusion is that it means there probably isn't actually a great filter, and we aren't left with huge anthropic confusions about why we're apparently so special.)

Replies from: Wei_Dai, CarlShulman
comment by Wei Dai (Wei_Dai) · 2012-04-06T01:30:39.015Z · LW(p) · GW(p)

do you really think the stars are actually out there to pluck in a substantial fraction of possible worlds

Assuming most worlds start out lifeless like ours, they must have lots of resources for "plucking" until somebody actually plucks them... I guess I'm not sure what you're asking, or what is motivating the question. Maybe if you explain your own ideas a bit more? It sounds like you're saying that we may not want to try to pluck the stars that are apparently out there. If so, what should we be trying to do instead?

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-06T01:47:56.803Z · LW(p) · GW(p)

I guess I didn't clearly state the relevant hypothesis. The hypothesis is that the stars aren't real, they're just an illusion or a backdrop put there by superintelligences so we don't see what's actually going on. This would explain the great filter paradox (Fermi's paradox) and would imply that if we build an AI then that doesn't necessarily mean it'll get to eat all the stars. If the stars are out there, we should pluck them—but are they out there? They're like a stack of twenties on the ground, and it seems plausible they've already been plucked without our knowing. Maybe my previous comment will make more sense now. I'm wondering if your reason for focusing on eating all the galaxies is because you think the galaxies actually haven't already been eaten, or if it's because even if it's probable that they've actually already been eaten and our images of them are an illusion, most of the utility we can get is still concentrated in worlds where the galaxies haven't already been eaten, so we should focus on those worlds. (This is sort of orthogonal to the simulation argument because it doesn't necessitate that our metaphysical ideas about how simulations work make sense; the mechanism for the illusion works by purely physical means.)

Replies from: Wei_Dai, amit, wedrifid
comment by Wei Dai (Wei_Dai) · 2012-04-07T05:02:31.507Z · LW(p) · GW(p)

The hypothesis is that the stars aren't real, they're just an illusion or a backdrop put there by superintelligences so we don't see what's actually going on.

If that's the case, then I'd like to break out by building our own superintelligence to find and exploit whatever weaknesses might exist in the SIs that are boxing us in, or failing that, negotiate with them for a share of the universe. (Presumably they want something from us, or else why are they doing this?) Does that answer your question?

BTW, I'm interested in the "good arguments" that you mentioned earlier. Can you give a preview of them here?

comment by amit · 2012-04-15T21:05:14.291Z · LW(p) · GW(p)

The hypothesis is that the stars aren't real, they're just an illusion or a backdrop put there by superintelligences so we don't see what's actually going on. This would explain the great filter paradox (Fermi's paradox) and would imply that if we build an AI then that doesn't necessarily mean it'll get to eat all the stars.

If the SI wanted to fool us, why would it make us see something (a lifeless universe) that would make us infer that we're being fooled by an SI?

Replies from: wedrifid
comment by wedrifid · 2012-04-15T21:37:45.662Z · LW(p) · GW(p)

If the SI wanted to fool us, why would it make us see something (a lifeless universe) that would make us infer that we're being fooled by an SI?

It would seem it is trying to fool just the unenlightened masses. But the chosen few who see the Truth shall transcend all that...

comment by wedrifid · 2012-04-16T16:24:14.456Z · LW(p) · GW(p)

Will, as per amit's point, how do you anticipate your decision to tell us about the superintelligent fake stars hypothesis affecting the decision of the superintelligences to create (or otherwise cause to exist) human life on earth with the illusion of living in a free universe?

All things considered (and assuming that hypothesis as a premise) I think you might have just unmade us a little bit. How diabolical!

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-17T01:56:48.380Z · LW(p) · GW(p)

I agree with your rationale, i.e. assuming they're actually around then the superintelligences clearly aren't trying that hard to be quiet, and are instead trying to stay on some sort of edge of influence or detectability. Remember, it's only atheists who don't suspect supernatural influence; a substantial fraction of humans already suspects weird shit is going on. Not "the chosen few" so much as "the chosen multitude". Joke's only on the atheists. Presumably if they wanted us to entirely discount the possibility that they were around then it would be easy for them to influence memetic evolution such that supernatural hypotheses were even less popular, e.g. by subtly keeping the U.S. from entering WWII and thus letting the Soviet Union capture all of Europe, and so on and so forth in that vein. (I'm not a superintelligence, surely they could come up with better memetic engineering strategies than I can.)

Nitpick: they don't have to have chosen to create or cause us to exist as such, just left us alone. The latter is more likely because of game theoretic asymmetry ("do no harm"). Not sure if you intended that to be in your scope.

Replies from: wedrifid
comment by wedrifid · 2012-04-17T02:04:47.766Z · LW(p) · GW(p)

Nitpick: they don't have to have chosen to create or cause us to exist as such, just left us alone. The latter is more likely because of game theoretic asymmetry ("do no harm"). Not sure if you intended that to be in your scope.

The intended scope was inclusive - I didn't want to go overboard with making the caveat 'or' chain mention everything. The difference in actions and inactions when it comes to superintelligences that are controlling everything around us, including giving us an entire fake universe to look at, becomes rather meaningless.

comment by CarlShulman · 2012-04-19T01:27:48.023Z · LW(p) · GW(p)

or are you just focusing on the worlds where they are there to pluck because it seems like we can't do nearly as much if the stars aren't real? Because I think I've come up with some good arguments against the latter

Let's hear them.

Replies from: Will_Newsome
comment by Will_Newsome · 2012-04-19T02:24:41.262Z · LW(p) · GW(p)

Forthcoming in the next year or so, in my treatise on theology & decision theory & cosmology & moral philosophy, which for better or worse I am going to write in English and not in Latin.

But anyway, I'll say now that my arguments don't work for utilitarians, at least for preference utilitarianism the way it's normally considered. Even if the argument's valid it still probably doesn't sway people who are using e.g. the parliamentary meta-moral system and who give much weight to utilitarian intuitions.

Replies from: None
comment by [deleted] · 2012-04-19T02:33:48.331Z · LW(p) · GW(p)

which for better or worse I am going to write in English and not in Latin.

Oh thank all the gods of counterfactual violence.

comment by TheWakalix · 2018-11-16T22:40:46.590Z · LW(p) · GW(p)

My subjective anticipation is mollified by the thought that I'll probably either never experience dying or wake up to find that I've been in an ancestral simulation.

This feels very close to my moral intuitions, but I cognitively believe very strongly that things can be bad even if there are no observers left to regret that they happened. That follows logically from the moral intuition that people suffering where you can't see is still bad.

...is that a natural moral intuition? I don't think so. That could explain the dissonance.

comment by orthonormal · 2012-04-05T14:38:27.517Z · LW(p) · GW(p)

If anthropics makes any sense, then cryonics in a Big World is still controlling what you should mostly expect to experience. The anthropic escape valves for a version of me who experiences death and remembers not being signed up for cryonics range from Boltzmann brains to lunatics to completely random ancestor simulations, and I think I value experiencing the mainline expected cryonics outcome more highly than I do experiencing these.

And even if anthropics doesn't make sense, then as a matter of decision theory I value actually being around in a substantial fraction of the futures of the current world-state.

comment by Mitchell_Porter · 2012-04-05T03:08:13.872Z · LW(p) · GW(p)

If all copies count as you, then that includes Boltzmann brains who die in the vacuum a second after their formation and copies of you who awaken inside a personally dedicated hell. And this is supposed to provide hope?

There is clearly a sense in which you do not experience what your copies experience. The instance of you who dies in a car crash on the way to your wedding never experiences the wedding itself; that is experienced by the second instance, created from a backup a few weeks later.

Any extension of identity beyond the "current instance" level is therefore an act of imagination or chosen affiliation. Identifying with your copies and almost-copies scattered throughout the multiverse, identifying with your descendants, and identifying with all beings who ever live, all have this in common - "you", defined in the broad sense supplied by your expansive concept of identity, will experience things that "you", defined in the narrow but practical sense of your local instance, will never experience.

Since it is a contradiction to say that you will experience things that you will never experience, it is desirable to perceive very clearly that these expansive identifications are being made by one local instance that is choosing to regard a multiplicity of other distinct beings as other parts of its extended self. Of course, once you perceive this distinction, between local self and global self, and especially once you notice that the same local self can have arbitrarily expansive or delimited beliefs about who gets to be a part of its global self... you might begin to doubt the meaningfulness of any notion of global self other than "the whole of reality", or indeed you might doubt the meaningfulness of any notion of "global self" at all. Perhaps in reality you are just your local self and that's it; all other identifications are fantasy.

When that attitude is taken to its limit, it usually leads to disconnection between one moment and the next. Each moment's experience is only had by that momentary self. You could make a slogan out of it: "instances are instantaneous", meaning that if you apply this philosophy consistently, you have to deny that the "local self" extends in time.

But this part I don't believe, because I do believe that experiences are extended in time. There is such a thing as change, the flow of one moment into the next, and not just a static difference between static moments each containing its own encapsulated illusion of flow-connectedness to other moments. The reduction of time to simply another spatial coordinate, and the consequent relegation of the experience of time passing to the category of illusion, results from the cultural hypertrophy (that's the opposite of atrophy) of "logical perception" in scientific culture, at the expense of more "phenomenological" capacities, like a sensitivity to the actual form of consciousness. If people took appearances more seriously, their response to the difficulties of reconciling them with scientific theory would be to look for a better theory, not to call the appearances illusory or nonexistent. It's not at all easy even to get the ontology of appearance right, let alone to conceptually reconstruct the ontology by means of which we understand our mathematical theories of nature, so as to include the ontology implied by appearance.

In fact, an extra layer of difficulty has been added by the attempted reduction of epistemology to computation - it means that the epistemological claims of phenomenology, e.g. that we can know that time really passes or that change is real, struggle to get a hearing. Computational epistemology in its existing forms presupposes an inadequate ontology, and therefore offers a new, methodological barrier to any truth from outside that ontology. One needs to remember that computation is about state transitions in state machines, and says nothing about the "intrinsic nature" of those states or how that intrinsic nature may be related to the causality of the state transitions. So any ontology featuring causal interactions between things with states contains computation, in the same way that any ontology containing multiple things contains arithmetic; but you can't bootstrap your way from computation to ontology, just as you can't bootstrap your way from arithmetic to physics.

In my polemic I have strayed far from the original topic of Yvain's post, but any discussion of the ontology of persons eventually has to tackle these "hard problems".

Replies from: Yvain, ESRogs
comment by Scott Alexander (Yvain) · 2012-04-05T11:17:51.364Z · LW(p) · GW(p)

It doesn't seem too much more distressing to believe that there are copies of me being tortured right now, than to believe that there are currently people in North Korea being tortured right now, or other similarly unpleasant facts everyone agrees to be true.

There's a distinction between intuitive identity - my ability to get really upset about the idea that me-ten-minutes-from-now will be tortured - and philosophical identity - an ability to worry slightly about the idea that a copy of me in another universe is getting tortured. This difference isn't just instrumentally based on the fact that it's easier for me to save me-ten-minutes-from-now than me-in-another-universe; even if I were offered some opportunity to help me-in-another-universe, I would feel obligated to do so only on grounds of charity, not on grounds of selfishness. I'd ground that in some mental program that intuitively makes me care about me-ten-minutes-from-now, one much stronger than whatever rational kinship I can muster with me-in-another-universe. This mental program seems pretty good at dealing with minor breaks in continuity like sleep or coma.

The problem is, once death comes into the picture, the mental program can't carry on business as usual - there won't be any "me-ten-minutes-from-now". And one reaction is to automatically switch allegiance to the closest copy of me - for example, cryonically-resurrected-me-a-century-from-now. I don't think this allegiance-switching has any fundamental ontological basis, but I'm not prepared to say it's stupid either. My point here is only that once you're okay with switching allegiances, you might as well do it to the nearest other pre-existing copy of you, rather than go through all the trouble of creating a new one.

I agree that we can't ground identity in individual moments. For one thing, the only reasonable candidate for "moment" is the Planck time, and there's no experience that can happen on that short an interval. For another, static experiences don't seem to be conscious: if I were frozen in time, I couldn't think "Darnit, I'm frozen in time now!" because that thought involves a state change. I think this is what you're saying in your third-to-last paragraph, but I'm not sure.

I'm leaning towards saying my identification with self-at-the-present-moment isn't any more interesting or fundamental than my artificially created identification with me-ten-minutes-from-now, and that a feeling of being in the present is just a basis for other computational processes. As far as I can understand, this doesn't seem to be your solution at all.

Do any of the various theories marketed as "timeless" here claim that the belief in a present moment is purely indexical - that is, a function of the randomly chosen observer-moment currently experienced as "me" being Yvain(2012) as opposed to Yvain(2013), in the same sense that seeing a quantum coin come up heads instead of tails is indexical? It seems like an elegant idea and would be relevant to this discussion.

Replies from: None
comment by [deleted] · 2012-04-05T14:03:17.049Z · LW(p) · GW(p)

The problem is, once death comes into the picture, the mental program can't carry on business as usual - there won't be any "me-ten-minutes-from-now". And one reaction is to automatically switch allegiance to the closest copy of me - for example, cryonically-resurrected-me-a-century-from-now.

Once you get resurrected, won't the mental program continue carrying on business as usual, and so won't the "me-ten-minutes-from-now" keep being there?

Replies from: None
comment by [deleted] · 2012-04-13T04:01:54.236Z · LW(p) · GW(p)

I don't understand why this is downvoted.

comment by ESRogs · 2012-04-05T09:01:31.629Z · LW(p) · GW(p)

"that is experienced by the second instance, created from a backup a few weeks later." What scenario is this referring to?

comment by Dustin · 2012-04-04T23:56:02.660Z · LW(p) · GW(p)

I like this post! Two comments barely related to the post:

  1. I would be interested in those calculations about how big the universe would have to be to have repeating Earths if anyone recalls where they saw them.

  2. A meta-LessWrongian comment: A great part of the value I get out of LessWrong is that there's always someone out there writing a post or comment tying together various thoughts and musings I've had into coherent essays in a way I don't have the mental discipline to do. So this is a big thanks to all of you writing interesting stuff!

And a comment more directly related to the post:

I can only assume that some other type of intelligence would be bemused by our confusion surrounding these issues in much the same way I'm often bemused by people making hopelessly confused arguments about religion or evolution or whatever.

Other Intelligence: "Of course continuity isn't important! There's no difference between waking up after a coma, being unfrozen, or your clone in a Big World! Sheesh, you humans are messed up in the head!"

(Or maybe OI would argue that there is a difference.)

Replies from: gwern, Incorrect
comment by gwern · 2012-04-05T00:12:19.738Z · LW(p) · GW(p)

I would be interested in those calculations about how big the universe would have to be to have repeating Earths if anyone recalls where they saw them.

http://lesswrong.com/lw/ws/for_the_people_who_are_still_alive/

If the universe is spatially infinite, then, on average, we should expect that no more than 10^10^29 meters away is an exact duplicate of you. If you're looking for an exact duplicate of a Hubble volume - an object the size of our observable universe - then you should still on average only need to look 10^10^115 lightyears. (These are numbers based on a highly conservative counting of "physically possible" states, e.g. packing the whole Hubble volume with potential protons at maximum density given by the Pauli Exclusion principle, and then allowing each proton to be present or absent.)

Google search query: "duplicate earth lightyears away site:lesswrong.com". Estimated time to search and go through first page: 30 seconds.

Replies from: Dmytry, Dustin
comment by Dmytry · 2012-04-05T17:27:20.209Z · LW(p) · GW(p)

This is simply the distance at which the information content of the position is on the same order as the information content of the brain for which the second instance is being 'found'. Essentially, an infinite universe lets you encode brains into spatial positions.
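
As a rough illustration of why the double exponential shows up, here is a toy back-of-envelope version of this counting argument in Python. The slot count and the other constants are assumptions, so it only lands in the same double-exponential ballpark as the figures quoted above rather than reproducing them exactly:

    import math

    # Toy counting argument (assumed numbers): pack one Hubble volume with
    # ~10^118 proton-sized slots, each either occupied or empty, giving
    # 2^(10^118) distinguishable coarse-grained states.
    log10_slots = 118                                    # assumed: 10^118 slots
    log10_states = (10 ** log10_slots) * math.log10(2)   # log10 of 2^(10^118)

    # In 3D, the number of Hubble-sized regions within a radius of R Hubble
    # radii grows like R^3, so a repeat is expected once R^3 is comparable
    # to the number of states: log10(R) ~ log10(states) / 3.
    log10_R = log10_states / 3

    # Converting Hubble radii to metres only adds ~26 to the exponent, which
    # is negligible next to a double exponential.
    print(f"expected distance to a duplicate region ~ 10^(10^{math.log10(log10_R):.0f}) m")

Only the double exponential survives the sloppy constants, which is why it barely matters whether you measure the answer in metres or lightyears.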

comment by Dustin · 2012-04-05T03:09:15.886Z · LW(p) · GW(p)

Dangit! Pointing to a Google search is normally my modus operandi.

comment by Incorrect · 2012-04-05T02:11:46.116Z · LW(p) · GW(p)

Or maybe:

"All of those, including changing your preferences in icecream, constitute enough change to be considered death. It is theoretically possible to keep a human alive in a controlled environment but this has never been the case in the history of your species."

Replies from: falenas108
comment by falenas108 · 2012-04-05T02:36:54.765Z · LW(p) · GW(p)

I wouldn't say a change in ice cream preference constitutes a death. People's taste buds change as they get older.

Or if you do say getting older is a younger person's death, then it probably isn't that strict definition of death that you're concerned about, as you don't mourn people getting older.

comment by XiXiDu · 2012-04-05T11:53:02.236Z · LW(p) · GW(p)

If you don't care about measure then why try to solve Friendly AI? In a VERY BIG world some AIs will turn out to be friendly.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-04-07T15:20:25.146Z · LW(p) · GW(p)

I'm uncertain of many-worlds, Big World theory, and the reductionist view of identity as not linked to a particular body. If any of those fail, then if this body dies I'm just plain dead, and same for everyone else.

And there are many ways that an unfriendly AI - or no AI at all - could screw things up besides killing everyone instantly.

Replies from: XiXiDu
comment by XiXiDu · 2012-04-07T16:27:03.313Z · LW(p) · GW(p)

I'm uncertain of many-worlds, Big World theory, and the reductionist view of identity as not linked to a particular body. If any of those fail, then if this body dies I'm just plain dead, and same for everyone else.

And there are many ways that an unfriendly AI - or no AI at all - could screw things up besides killing everyone instantly.

I understand and agree.

I am curious. Which are you more confident of: MWI and reductionism, or risks from unfriendly AI as argued by SI?

As far as I am aware, the Born probabilities are about the only big uncertainty when it comes to MWI.

When it comes to reductionism, the only real puzzle that I can think of is how one could possibly build a conscious machine in the Game of Life. It would be really weird to look at the board and say that this 10000x10000 section of the board is conscious, but the 10x10 section to the left isn't, nor is this 10000x10000 section of noise.

Aren't there many more uncertainties when it comes to the urgency and possibility of risks from superhuman intelligence?

ETA: My question was triggered by your statement that you are uncertain of many-worlds and reductionism. It seems that people more often proclaim their uncertainty about relatively straightforward ideas like reductionism or cryonics than about risks from AI, which seems weird.

Skeptic: I am uncertain if MWI is the correct interpretation and if cryonics is going to work.

Advocate: Yes, there are some valid objections. MWI especially is a very complicated subject that a layman is seldom able to judge. But I think there is more that speaks in favor of those ideas than against.

vs.

Skeptic: I am uncertain if AI could undergo explosive recursive self-improvement and take over the universe.

Advocate: The uncertainty isn't justified. It is straightforward that it will happen soon. The idea is based on years of disjunctive lines of reasoning.

Replies from: Rain
comment by Rain · 2012-04-11T14:41:30.497Z · LW(p) · GW(p)

Skeptic: I am uncertain if AI could undergo explosive recursive self-improvement and take over the universe.

Advocate: The uncertainty isn't justified. It is straightforward that it will happen soon. The idea is based on years of disjunctive lines of reasoning.

How many people fit the second category to that level of certainty? I consider myself an advocate, yet I would fall under the skeptic label given those phrasings.

comment by BlackNoise · 2012-04-05T08:39:44.727Z · LW(p) · GW(p)

It should be mentioned that when considering things like cryonics in the Big World, you can't just treat all the other "you" instances as making independent decisions; they'll be thinking similarly enough to you that whatever conclusion you reach is what most "you" instances will end up reaching too (unless you randomize, and assuming 'most' even means anything).

Seriously, I'd expect people to at least mention the superrational view when dealing with clones of themselves in decide-or-die coordination games.

comment by pleeppleep · 2012-04-05T21:04:38.240Z · LW(p) · GW(p)

I have a feeling that I'm missing something and that this is going to be downvoted, but I still have to ask. In the event that a big universe exists, there are numerous people almost exactly like me going about their business. My problem is that what I would call my consciousness doesn't seem to experience their actions. This would seem to me like there is some factor in my existence that is not present in theirs. If a perfect clone were sitting next to me, I wouldn't be able to see my computer through his eyes. I would continue to see it through mine. This chain of experience is the thing I care most to preserve. I have interest in the continued existence of people like me, but for separate reasons.

I know the idea of an "inner listener" is false, but the sensation of such a thing and a continuous stream of experience do exist. I am emotionally tied to those perceptions. I don't know how enthusiastically I can look forward to the future if I won't be able to experience it any more than I can the nearest parallel universe.

Replies from: Yvain, ESRogs
comment by Scott Alexander (Yvain) · 2012-04-07T15:06:46.188Z · LW(p) · GW(p)

This chain of experience is the thing I care most to preserve.

Okay, think of it this way.

You go to sleep tonight, your "chain of experience" is briefly broken. You wake up tomorrow morning, chain of experience is back, you're happy.

But what makes you say "chain of experience is back"? Only that a human being wakes up, notices it has the memories of being pleeppleep, and says "Hey, my chain of experience is back! Good!"

Suppose Omega killed you in your sleep, then created a perfect clone of you. The perfect clone would wake up, notice it has the memories of being pleeppleep, and say "Hey, my chain of experience is back! Good!" Then it would continue living your life.

Right now you have zero evidence that Omega hasn't actually done this to you every single night of your life. So the idea of a "chain of experience", except as another word for your memories, is pretty tenuous.

And if I told you today that Omega had really been doing this to you your whole life, then you would be really scared before going to sleep tonight, but eventually you'd have to do it. And then the next day, your clone would still be pretty scared before going to sleep, but he'd do it too. And by the thousandth day, you'd probably have forgotten all about it except when someone reminds you.

(what if it were every time you blinked, instead of every time you slept?)

Since this would be totally indistinguishable from the way we are right now, and since there's no logical basis for me-ness, at some point you just have to think, "screw it, there's no continuity of experience or identity and I don't really care", at least as regards blinking and sleeping.

Cryonicists say we can extend this indifference to freezing and thawing. I'm saying that as long as we're extending it, might as well extend it all the way.

Replies from: pleeppleep
comment by pleeppleep · 2012-04-07T20:36:18.206Z · LW(p) · GW(p)

I suppose you're right. If a clone of me appeared next to me, he would also think it curious that he could not experience exactly the same sensations as the identical person next to him. Still, I don't think that a "me" from another world coming into existence and living through an apparently identical series of events to my life would count as a resurrection, as the events he lives through are not the same events that I live. In order for it to be a resurrection, I would have to return with the specific set of memories I have from this section of causality. It would be more like reincarnation than anything. Also, I have evidence that I have not been intentionally deleted every night, because I have zero evidence that this has occurred.

comment by ESRogs · 2012-04-05T23:32:58.765Z · LW(p) · GW(p)

What if there is a perfect clone of you not sitting next to you, but sitting in another part of the universe that is atom-by-atom identical to this one throughout the span of a 46 billion light-year diameter sphere (the size of our observable universe), whose history has been identical to ours since the Big Bang?

Every thought you have had over the course of your entire life this clone has had too. And what's more, they've had them for the exact same reasons. The chain of events leading up to each of their thoughts is identical to the chain for yours.

Suppose in fact that there are billions of such clones (or even infinitely many), spread unimaginably far apart, all across our giant universe. A moment from now, due to quantum events, the copies' experiences will diverge. Some will experience one sensation, and some another, so at that point only a fraction of them continue to have histories identical to yours. But can you say which one of them you will be?

If there is no way to distinguish one copy from another, then perhaps it makes sense to say that they are all the same thing -- so that all of the clones with identical histories to you are together 'you' right now, but as their experiences diverge going forward, they split into different possible future versions of you.

What do you think?

Replies from: pleeppleep
comment by pleeppleep · 2012-04-06T00:00:32.645Z · LW(p) · GW(p)

That's all true, but I think the fact that I can't see more than one of their points of view is enough to distinguish between them. The only difference I can think of that pertains to a reductionist universe is location. I know the "inner listener" to be an illusion, but still can't shake the fact that if I died in this universe, but not in another one, then I wouldn't experience "waking up". It's possible that I'm being irrational and misinterpreting the way the brain works, but I think it is clearly observable that I don't feel that I'm experiencing more than one universe, and this feeling is the thing that I care most to preserve.

Replies from: GDC3
comment by GDC3 · 2012-04-09T22:02:09.393Z · LW(p) · GW(p)

I think you're missing the part where "their points of view" are exactly the same. What would it mean to see more than one of them when they're exactly the same? Are you picturing them lined up next to each other in your field of view so you can count them?

Similarly, there is no "I just definitely died" feeling that we know of. (How would we know?) You shouldn't picture "dying and then waking up in another universe." You should picture "I experience passing out knowing I may die, but that there is at least one of me that probably doesn't. So when I wake up it will turn out that I was one of them."

Does this make more sense? I think the barrier to intuition is in just how indistinguishable "indistinguishable" is. You can be a billion exact copies and you'll never notice, because they're exact.

Replies from: pleeppleep
comment by pleeppleep · 2012-04-10T00:29:15.992Z · LW(p) · GW(p)

I meant that the "me" in a different universe is different from me in this one. The distance between universes is not trivial. I might never notice the difference between a million "me"s and a billion, but the overall number of "me"s is significant. If multiple versions of myself live side by side, and one dies, then that one does not really continue living, unless it is replaced. Does that make sense? It's not very easy to word ideas regarding this topic.

Replies from: GDC3
comment by GDC3 · 2012-04-10T17:38:26.564Z · LW(p) · GW(p)

I suppose you mean they have different positions. But if indistinguishable particles in quantum mechanics can freely switch places with each other whenever, and which is which has no meaning, then what argument do you have that the universe can even keep different versions of you apart itself?

Not very formal, but I'm trying to convey the idea that certain facts that seem important have no actual meaning in the ontology of quantum physics.

comment by shminux · 2012-04-04T23:49:36.026Z · LW(p) · GW(p)

First, I wish people stopped using this untestable Big World nonsense in their arguments.

For things to "add up to normality", your decisions should not be affected by a particular interpretation of QM, and so you should arrive at the same ones by sticking to the orthodox interpretation (or any other). If your argument fails without invoking a version of the MWI, it is not a sound argument, period. Similarly, your belief that in a galaxy far far away you are a three-eyed Pope named LeBron should not affect your decision of whether to sign up for cryonics here and now.

Your other point is eminently worthwhile: since the cryonic resurrection is manifestly not exact, how much deviation from the original are you prepared to allow while still considering the resurrected object to be you for practical purposes? The answer is not objective in any way. For some, losing a single memory is enough to say no; for others, retaining even a small fraction of memories is enough for a yes. The acceptable range of emotions, volitions and physical makeup can also vary widely.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-04-05T05:49:44.727Z · LW(p) · GW(p)

First, I wish people stopped using this untestable Big World nonsense in their arguments.

Big World theories are derived from existing theories of mainstream physics, which are not generally considered untestable.

Replies from: shminux
comment by shminux · 2012-04-05T06:46:06.575Z · LW(p) · GW(p)

You are using the word "derived" rather loosely.

There are untested but suggestive speculations in string theory (which in itself is not what one would call a sound physical model), and most models of the universe being significantly larger than some 10^15 light years in any direction are also speculative. Certainly there is no indication of a size large enough to duplicate our corner of it in any kind of detail.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-04-05T07:46:54.024Z · LW(p) · GW(p)

Hmm. I was under the impression that Big World theories would be relatively accepted. At least Bostrom and Tegmark seem to argue as if they were:

Self-Locating Belief in Big Worlds: Cosmology’s Missing Link to Observation (Bostrom 2002):

Space is big. It is very, very big. On the currently most favored cosmological theories, we are living in an infinite world, a world that contains an infinite number of planets, stars, galaxies, and black holes. This is an implication of most “multiverse theories”, according to which our universe is just one in a vast ensemble of physically real universes. But it is also a consequence of the standard Big Bang cosmology, if combined with the assumption that our universe is open or flat, as recent evidence suggests it is. An open or flat universe – assuming the simplest topology[1] – is spatially infinite at any time and contains infinitely many planets etc.[2] [...]

[1] I.e. that space is simply connected. There is a recent burst of interest in the possibility that our universe might be multiply connected, in which case it could be both finite and hyperbolic. A multiply connected space could lead to a telltale pattern consisting of a superposition of multiple images of the night sky seen at varying distances from Earth (roughly, one image for each lap around the universe that the light has traveled). Such a pattern has not been found, although the search continues. For an introduction to multiply connected topologies in cosmology, see M. Lachièze-Rey and J.-P. Luminet, "Cosmic Topology," Physics Reports, 254(3) (1995): 135-214.

[2] A widespread misconception is that the open universe in the standard Big Bang model becomes spatially infinite only in the temporal limit. The observable universe is finite, but only a small part of the whole is observable (by us). One fallacious intuition that might be responsible for this misconception is that the universe came into existence at some spatial point in the Big Bang. A better way of picturing things is to imagine space as an infinite rubber sheet, and gravitationally bound groupings (such as stars and galaxies) as buttons glued on to it. As we move forward in time, the sheet is stretched in all directions so that the separation between the buttons increases. Going backwards in time, we imagine the buttons coming closer together until, at “time zero”, the density of the (still spatially infinite) universe becomes infinite everywhere. See e.g. J. L. Martin, General Relativity (London: Prentice Hall, 1995).

Parallel Universes (Tegmark 2003):

How large is space? Observationally, the lower bound has grown dramatically (Figure 2) with no indication of an upper bound. We all accept the existence of things that we cannot see but could see if we moved or waited, like ships beyond the horizon. Objects beyond cosmic horizon have similar status, since the observable universe grows by a light-year every year as light from further away has time to reach us. Since we are all taught about simple Euclidean space in school, it can therefore be difficult to imagine how space could not be infinite - for what would lie beyond the sign saying "SPACE ENDS HERE - MIND THE GAP"? Yet Einstein's theory of gravity allows space to be finite by being differently connected than Euclidean space, say with the topology of a four-dimensional sphere or a doughnut so that traveling far in one direction could bring you back from the opposite direction. The cosmic microwave background allows sensitive tests of such finite models, but has so far produced no support for them - flat infinite models fit the data fine and strong limits have been placed on both spatial curvature and multiply connected topologies. In addition, a spatially infinite universe is a generic prediction of the cosmological theory of inflation (Garriga & Vilenkin 2001b). The striking successes of inflation listed below therefore lend further support to the idea that space is after all simple and infinite just as we learned in school.

How uniform is the matter distribution on large scales? In an "island universe" model where space is infinite but all the matter is confined to a finite region, almost all members of the Level I multiverse would be dead, consisting of nothing but empty space. Such models have been popular historically, originally with the island being Earth and the celestial objects visible to the naked eye, and in the early 20th century with the island being the known part of the Milky Way Galaxy. Another nonuniform alternative is a fractal universe, where the matter distribution is self-similar and all coherent structures in the cosmic galaxy distribution are merely a small part of even larger coherent structures. The island and fractal universe models have both been demolished by recent observations as reviewed in Tegmark (2002). Maps of the three-dimensional galaxy distribution have shown that the spectacular large-scale structure observed (galaxy groups, clusters, superclusters, etc.) gives way to dull uniformity on large scales, with no coherent structures larger than about 10^24m. More quantitatively, imagine placing a sphere of radius R at various random locations, measuring how much mass M is enclosed each time, and computing the variation between the measurements as quantified by their standard deviation ΔM. The relative fluctuations ΔM/M have been measured to be of order unity on the scale R ~ 3 X 10^23m, and dropping on larger scales. The Sloan Digital Sky Survey has found ΔM/M as small as 1% on the scale R ~ 10^25m and cosmic microwave background measurements have established that the trend towards uniformity continues all the way out to the edge of our observable universe (R ~ 10^27m), where ΔM/M ~ 10^(-5). Barring conspiracy theories where the universe is designed to fool us, the observations thus speak loud and clear: space as we know it continues far beyond the edge of our observable universe, teeming with galaxies, stars and planets.

Though those papers are from 2002 and 2003 - have the theories in question been disproven since then? If so, I'd be curious to read about it.

Replies from: shminux
comment by shminux · 2012-04-05T19:10:42.897Z · LW(p) · GW(p)

It is accepted that the Universe is likely much bigger than what is visible. There are no indications that it is infinite, or even large enough to ensure Big World-type recurrence. My point is that your decision of whether to sign up for cryonics now should not depend on whether the universe is 10^10 (not big enough for recurrence) or 10^10^10 times larger than what we can presently see.

comment by Dmytry · 2012-04-05T19:00:06.976Z · LW(p) · GW(p)

Something oddly relevant: quantum insomnia:

http://quantuminsomnia.blogspot.com/

Of course, that guy not being me or you, it is obvious that he is not a quantum insomniac, but there's a rather creepy thought: today may be the day that you woke up for the last time in your life, and you'll remain awake forever, just because there will always be a branch where you are awake, and you after a night's sleep are more different from you falling asleep than you one second later, still awake. Good night. On second thought, I probably shouldn't joke like this here, but for some reason I find that concept quite funny (and I don't believe in it for a second; it is very silly that QM would prevent you from falling asleep but won't prevent you from being cognitively impaired by insomnia, but I do fear that if I get insomnia, that may make me think of the MWI while my judgement is impaired, so hereby I'm trying to pre-commit not to think of MWI after more than 20 hours awake, which I really recommend to everyone here as well).

Replies from: Risto_Saarelma
comment by Risto_Saarelma · 2012-04-06T12:26:48.110Z · LW(p) · GW(p)

Looks like my idea wasn't as original as I thought.

comment by Merkle · 2012-04-09T17:26:09.962Z · LW(p) · GW(p)

How many bytes in human memory? is a very brief article providing estimates of just that. Evidence from human learning experiments suggests that, after using a very good data compression algorithm, human long term declarative memory holds only a few hundred megabytes.
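
For a sense of where estimates in this ballpark come from, here is a minimal sketch of the usual arithmetic, assuming a Landauer-style retention rate of a couple of bits per second; the rate and lifespan are assumptions for illustration, not figures taken from the article itself:

    # Back-of-envelope: declarative memory accumulated over a lifetime,
    # assuming ~2 bits/second of retained information (assumed rate).
    bits_per_second = 2
    years = 70
    seconds = years * 365 * 24 * 3600

    total_bits = bits_per_second * seconds
    total_megabytes = total_bits / 8 / 1e6
    print(f"~{total_megabytes:.0f} MB of declarative memory")  # a few hundred MB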

How much of that information is common knowledge, such as knowledge of the English language, memories from media such as books or television, or knowledge of local buildings and streets, is unclear.

Additional information specific to an individual could be gained from email, internet posts, and other personal electronic information and written records, such as diaries; as well as individual genomes, which will soon be ubiquitously available.

The modest information content of human declarative memory might be relevant to some of the discussions on this thread.

comment by Grognor · 2012-04-04T23:36:20.535Z · LW(p) · GW(p)

The simplest is the theory that the universe (or multiverse) is Very Very Big.

Do you mean this in an Occamian way? I suspect not, but I think you should make it clearer.

Anyway, this is a subject I've actually thought about a lot.

A lot of people (including Derek Parfit himself) think continuity is absolutely essential for personal identity / selfhood, but I don't. I've had such terribly marked psychological changes over the years that I cannot even conceive of the answer to the question, "Am I the same person as Grognor from 2009?" being yes in any real sense. I interpret this experience, along with of course the actual evidence from neuroscience, physics, and good philosophers like Daniel Dennett, to mean that personal identity doesn't really exist and is really just a sort of cognitive illusion that I don't understand and neither do people smarter than me.

That said, here is an excellent essay by reductionist atheist Occam's razor-wielding AI researcher Paul Almond arguing that there is no continuity of self. It is a good essay, but sadly it is a very boring essay.

Oh, and I'm not so sure about your assumption that personal measure should not be taken into account in determining whether to purchase cryonics. It's quite rational to maximize the probability that things as similar to you as possible exist, so that as many of them as possible get to count as "you" in whatever sense matters to whatever it is that does the decision making (tentatively, I'll call this "you").

Replies from: David_Gerard, Dmytry, Thomas
comment by David_Gerard · 2012-04-05T07:31:24.472Z · LW(p) · GW(p)

Yes. Reading LessWrong has nearly convinced me that I don't exist ...

Replies from: Grognor
comment by Dmytry · 2012-04-05T07:23:08.830Z · LW(p) · GW(p)

I dunno... I feel essentially the same as me from 18 years ago. I have terabytes of that person's memories, at any rate, and no-one else does until you go so far from here as to start encoding those memories into the spacetime coordinates.

The personality changes may very well be down to some change in hormone levels: a few hundred bits' worth of change, some terabytes' worth of extra memories, and learned skills.

That should make me the closest match, much closer than anything else. There's continuity to data inside your computer: you write an essay, you copy it around, you edit it, the file system may move the file around to close gaps, you may store it in Google Docs, which will store it in really odd ways and move it around outside your knowledge, but there is a causal chain.

Replies from: Grognor
comment by Grognor · 2012-04-05T07:43:32.559Z · LW(p) · GW(p)

and no-one else does

If your personhood definition requires there to be only one of you, I think it already fails.

but there is a causal chain.

You have not made a case that a causal chain is necessary or even a little bit relevant. I don't think it is. If an algorithm appears twice at distances so disparate that the instances could not possibly be causally related, they're still the same algorithm.

Your comment is one of those philosophical confusions that, unbeknownst to its author (that'd be you), puts its conclusion in its premise.

Replies from: Dmytry
comment by Dmytry · 2012-04-05T08:22:57.791Z · LW(p) · GW(p)

and no-one else does

If your personhood definition requires there to be only one of you, I think it already fails.

The full quote was, "and no-one else does until you go so far from here as to start encoding those memories into the spacetime coordinates." Don't quote out of context.

It is a physical fact of reality that within a ridiculously huge volume, I am the only one who remembers what I thought about on a train ride in the year 2000 or so on the way back home from work, err, I mean school. It's only when you get extremely far away - far enough that the memory of this and other thoughts can be encoded into the coordinates - that you begin to see instances that hold this memory.

(subject to there being a zillion of mes comparatively 'nearby' if the MWI is true, of course)

edit: And okay, I was thinking about electrical aircraft propulsion, using an electric arc to accelerate the air. I can remember in vivid detail what I imagined then because I committed it to memory (the train was packed and I had to stand, so I couldn't write it down). I have terabytes of such stuff in my head, and while someone on Earth may have approximately similar memories, the edit distance is huge.

comment by Thomas · 2012-04-05T06:16:06.241Z · LW(p) · GW(p)

"Am I the same person as Grognor from 2009?"

Or the same person as myself 2008? Or anybody, anytime?

comment by ahh · 2013-01-01T07:57:36.441Z · LW(p) · GW(p)

You know, Stross tacitly considered an interesting form of resurrection in Accelerando -- a hypothetical post-singularity (non-Friendly) AI computes a minimum message length version of You based on any surviving records of what you've done or said (plus the baseline prior for how humans work) and instantiates the result.

I'm having real trouble proving that's not more-or-less me, and what's more, that such a resurrection would feel any different from the inside looking back over its memories of my life.

comment by Sly · 2012-04-05T23:45:03.632Z · LW(p) · GW(p)

I still have yet to see anyone adequately address how I am supposed to relate in any way to the magical copy of me a universe away.

If they have a shitty day, I feel nothing. If they have a good day, I feel nothing. If they die, big whoop. When I die, I will not be magically waking up in their body and vice versa. I will be dead.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-06T00:46:13.472Z · LW(p) · GW(p)

Can you relate to the person who will remember having been you in twenty years?
If so, how do you do that?

Replies from: Sly
comment by Sly · 2012-04-06T01:00:38.520Z · LW(p) · GW(p)

The same way an instance of a class is still that instance even if I change some of its private variables.

Now, the other instances of the same class are still quite different, and are not the same, even though they are similar.
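
A minimal Python sketch of this metaphor (the class and attribute names are made up for illustration):

    class Sly:
        def __init__(self, favorite_ice_cream):
            self.favorite_ice_cream = favorite_ice_cream

    me = Sly("chocolate")
    same_me = me                       # another name for the very same instance
    twin = Sly("chocolate")            # a separate instance with identical attributes

    me.favorite_ice_cream = "vanilla"  # change a "private variable" of the original

    print(me is same_me)   # True:  mutating attributes doesn't change which instance this is
    print(me is twin)      # False: an identical-looking copy is still a different instance

Mutating the instance's variables (ageing) leaves object identity intact, while an attribute-for-attribute copy is a distinct object, which is the distinction being drawn here.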

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-06T01:24:56.733Z · LW(p) · GW(p)

Well, as long as we're switching to metaphor, the way I as an instance of class C relate to a separate instance of C is through our shared inheritance of C's methods, defaults, and other attributes.

Is that enough relationship for me to care? Well, it depends.

I mean, I don't necessarily care about my self in twenty years either... even if I know he's going to have a shitty day in 2032, or a good day, etc., I feel nothing. Sometimes it's enough to believe that I will feel something. Sometimes it isn't.

In much the same way, I don't necessarily care about other people, but sometimes I do.

Replies from: Sly
comment by Sly · 2012-04-06T01:51:22.092Z · LW(p) · GW(p)

Sure. I would argue that me in 20 years is the same instance as the current Sly, but older, while the parallel universe Slys are just other instances of the Sly Class.

Hence I can easily differentiate between the two.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-06T02:10:00.546Z · LW(p) · GW(p)

Of course you can differentiate between them. I can differentiate between me-in-five-years and me-in-twenty-years, as well. There exist differences between these things.

I initially thought you were asking how one could identify with a different-but-similar person, given the absence of feedback ("If they have a shitty day, I feel nothing. If they have a good day, I feel nothing.").

With respect to that, it seems to me that my ability to identify with myself-in-the-future despite the lack of feedback suggests that the lack of feedback isn't a showstopper, and more generally that what I identify with is more a function of my capacity for empathy than it is of any "me"-ness in the world.

I'm no longer sure I understood you correctly in the first place, though.

Replies from: None, Sly
comment by [deleted] · 2012-04-06T05:54:17.970Z · LW(p) · GW(p)

And for that matter, I don't really have that much feedback from me in 20 seconds from now, or me 20 seconds ago either. My current remembering self has instantaneous inclinations, some of which are predicated on memories or anticipations, but at no point am I ever really a smearing of multiple time slices of myself. (I am probably a smearing of different quantum branches of myself, though. Until those selves decohere and I incrementally discover which branch that "I" have been on "all along").

For example, what is the difference between what we would commonly call "me", and an entity whose conscious experience is the Heaviside function with argument equal to the entire description of my brain state right as I type this question mark? That version of me just started existing, but luckily had molecules and quarks in all the right places to feel and remember everything as if he's been alive for 26 years and ate tofu for dinner.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-06T13:58:32.977Z · LW(p) · GW(p)

Well, in practical terms, the anticipations matter. I expect decisions I make now to affect the state of me-in-twenty-seconds; if I jump off a tall building, for example, I expect me-in-twenty-seconds to be a smear on the pavement, so if I value me-in-twenty-seconds not being a smear on the pavement, that inclines me not to jump off a tall building.

But no such relationship exists between me and me-twenty-seconds-ago; I don't expect decisions I make to affect the state of that entity.

You are of course correct, though, that my anticipations are facts about my minds and not facts about reality-other-than-my-mind, which might have all kinds of properties that make my anticipations (and recollections, and current perceptions and beliefs) simply false.

Replies from: None
comment by [deleted] · 2012-04-06T16:36:42.120Z · LW(p) · GW(p)

Yeah, it was your last paragraph that I meant. In a given moment, I don't value something about me 20 seconds from now. I value the current experience of thoughts that involve simulations about an idealization of me extrapolated in time. The thing I am valuing is immediate thoughts, though. Much like altruistic values being rooted in your own immediate value of anticipations. My meat computer will act on observations to induce an anticipation of X in my brain. If I want to anticipate Y then I should do Z to bring about the actions that lead to the anticipation of Y. Once I am at the precise instant that I'm experiencing Y, I'm not valuing Y, because my mind is valuing anticipations post-Y.

It is very interesting, given things like closed timelike curves and so on, that there is no anticipation of past selves. It would be great to see a write-up of why the perceived flow of entropy causes me to only future-value an anticipation like being proud of my former actions or accomplishments. I'm sure that the right level of articulation is evolutionary biology. I don't see how having visceral cognitive anticipations of the past could be adaptive. But it's still interesting. And even more interesting to think that there is some most-like-me entity within the subspace of entities that do have past-looking anticipations, probably wondering why people don't have future-looking anticipations right now (in a Big Universe, anyway).

comment by Sly · 2012-04-06T04:28:50.304Z · LW(p) · GW(p)

The point I was trying to convey was that there is a huge difference between if I die vs a copy elsewhere of me dying.

Hence me using the programming metaphor. If I am an instance, it matters a hell of a lot if it is this particular instance that gets scrapped vs some other instance of Sly.

My argument is that ageing is more like modifying the variables, whereas the Big Universe copy of Sly is a separate instance.

Therefore it makes a lot of sense why I do not consider the copy of Sly to be me. I do not equate the two as other people here really want to. I also reject the idea that ageing identity loss is comparable to death identity loss; this seems to be completely wishful thinking.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-06T13:49:50.405Z · LW(p) · GW(p)

You started out by asking how you were supposed to relate in any way to a copy of you. What I'm gathering from our subsequent discussion is that this was a rhetorical question; what you actually meant to express was that you don't relate in any way to such a copy, and you don't feel obligated to.

I accept that you don't, and I agree that you aren't obligated to.

Identity in the sense we're discussing is a fact about the mind, not a fact about the world. If you choose to identify solely with your present self and future selves in the same body, and treat everything else that could conceivably exist as not-you, that's fine. It's a perfectly reasonable choice, and I've no doubt that you can come up with lots of perfectly valid arguments to support making that choice.

The fact that other people make different choices about their identity than you do doesn't mean that either of you is wrong about what your identity "really" is, or that either of you is ignoring reality in favor of "wishful thinking".

There are consequences to those choices, of course: if I choose to identify with me-now but not with me-in-ten-years, for example, I will tend to make decisions such that ten years from now I am worse off. If I choose to identify with me-in-this-body but not copies of me, I will tend to make decisions such that copies of me are worse off. (Obviously, this doesn't actually create consequences in cases where none of my decisions can affect copies of me that exist.) Etc.

Replies from: Sly
comment by Sly · 2012-04-06T17:38:39.743Z · LW(p) · GW(p)

Yes, it was rhetorical.

I see now; I was operating on the fact-about-the-world level.

comment by A4FB53AC · 2012-04-06T13:52:29.158Z · LW(p) · GW(p)

Is the number of bits necessary to discriminate one functional human brain among all permutations of matter of the same volume greater or smaller than the number of bits necessary to discriminate a version of yourself among all permutations of functional human brains? My intuition is that once you've defined the first, there isn't much more needed, comparatively, to define the latter.

Corollary: cryonics doesn't need to preserve a lot of information, if any. You can patch it up with, among other things, info about what a generic human brain is - or better, what a human brain derived from your genetic code is - and correlate that with information left behind on the Internet, in your writings, in the memories of other people, etc., about what some of your own psychological specs and memories should be.

The result might be a fairly close approximation of you, at least according to this gradation of identity idea.
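
A rough way to compare the two bit counts asked about above, in Python; every number here is an assumed order of magnitude, chosen only to show how lopsided the comparison is:

    import math

    # Picking one functional brain out of all permutations of that matter:
    # a brain-sized volume holds very roughly 10^26 atoms, and even a crude
    # "which atom goes where" encoding costs multiple bits per atom, so call
    # it something like 10^27 bits (assumed).
    log10_bits_brain_vs_matter = 27

    # Picking one individual out of the space of functional human brains:
    # closer to the information content of a person's distinguishing memories,
    # a few hundred megabytes per the estimate elsewhere in this thread.
    log10_bits_you_vs_brains = math.log10(500e6 * 8)   # ~500 MB in bits

    print(f"brain among matter permutations ~ 10^{log10_bits_brain_vs_matter} bits")
    print(f"you among functional brains     ~ 10^{log10_bits_you_vs_brains:.0f} bits")

On these assumed numbers the first term dwarfs the second, matching the intuition that once a functional brain is specified, comparatively little extra information pins down which person it is.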

comment by A4FB53AC · 2012-04-06T13:43:26.252Z · LW(p) · GW(p)

suppose that a Friendly AI fills a human-sized three-dimensional grid with atoms, using a quantum dice to determine which atom occupies each "pixel" in the grid. This splits the universe into as many branches as there are possible permutations of the grid (presumably a lot)

How is that a Friendly AI?

comment by faul_sname · 2012-04-05T01:47:37.729Z · LW(p) · GW(p)

Once again, Nick Bostrom's Quantity of experience: brain-duplication and degrees of consciousness comes to the rescue. Cryonics greatly expands the proportion of "you-like" algorithms as opposed to "!you-like" algorithms, for the same reason that quantum Russian roulette greatly shrinks that proportion.

comment by Manfred · 2012-04-05T00:01:43.019Z · LW(p) · GW(p)

Pfsch, silly. I couldn't wake up as someone from another part of the universe - he's already busy waking up as himself :P

Several of your examples are also equivalent to quantum suicide situations. Similar comments about measure apply, except in this case we have a process (cryonics) that actually can restore measure (therefore, we can ignore all differences from our intuitive idea of resurrection).

comment by Kevin · 2012-04-04T23:31:28.152Z · LW(p) · GW(p)

See Tegmark's "Multiverse Hierarchy": http://arxiv.org/pdf/0905.1283.pdf Also there has been good work done showing that a spatial Big Universe is a realization of the quantum multiverse in 3 spatial dimensions. http://blogs.discovermagazine.com/cosmicvariance/2011/05/26/are-many-worlds-and-the-multiverse-the-same-idea/

comment by [deleted] · 2012-04-05T18:48:15.743Z · LW(p) · GW(p)

Pretty ok article.

It has been years since I last thought about personal identity. The last time, it seemed a pretty reasonable and obvious conclusion to value the "me" stored in my body slightly less and to value humans who were similar to me a bit more.

There seemed to be little point in (ceteris paribus) me being willing to spend more to save "my" life compared to saving the life of a random average human A, and not also expending at least something extra on person B, who is, say, halfway in "me-ness" between said random average human A and the part of me currently stored in my brain. And then expending a bit extra over that on person C, who is halfway between me and person B. Etc.

Like Robin Hanson said, if there is an imperfect copy of his brain running on a computer and you shot meat-Hanson in the head, Robin didn't really die, but he did get smaller. Dying, in the modern era, is basically getting a lot smaller in the world and time you happen to care about.

Nearly all other humans dying or my culture & value-set going extinct might as a step actually make "me" much smaller than the death of my body would.

Edit: Some feedback besides the downvotes, or a counterargument, would be very much welcomed. As I said, it has been some time and I need to review this cached opinion.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-05T19:32:47.911Z · LW(p) · GW(p)

Every man's death diminishes you, because you are involved in all mankind?

Replies from: None
comment by [deleted] · 2012-04-06T07:10:01.394Z · LW(p) · GW(p)

Well, yes, to a very small extent. However, when I get to very small amounts of me I don't give away any extra effort, for much the same reason animals responding to the payoffs of kin selection don't:

Kin selection is emphatically not a special case of group selection. … If an altruistic animal has a cake to give to relatives, there is no reason at all for it to give every relative a slice, the size of the slices being determined by the closeness of relatedness. Indeed this would lead to absurdity since all members of the species, not to mention other species, are at least distant relatives who could therefore each claim a carefully measured crumb! To the contrary, if there is a close relative in the vicinity, there is no reason to give a distant relative any cake at all. Subject to other complications like laws of diminishing returns, the whole cake should be given to the closest relative available.

(p. 290, The Selfish Gene by Richard Dawkins)

comment by Eneasz · 2012-04-05T17:51:47.146Z · LW(p) · GW(p)

I'm signed up for cryonics, and I believe you're leaving out two crucial motivations for cryonics. These are not particularly smart reasons. You don't logic your way into them. But they are emotionally strong reasons, and emotions are the primary motivators of humans.

The first is that it reduces my fear of death significantly. Perhaps unreasonably so? But the fear of death had been a problem for me for a long time, and now not as much.

The second is that I want OTHERS to be frozen for MY sake. I want to see my parents and brothers and friends again in this iteration. In typical coordination problem solutions, I should also freeze myself for their sake.

There's also the fact that I think it may be a good idea to maximize the number of people in any particular world like me, and this is one way of doing that. But that's not really emotionally motivating, just an abstract thought.

comment by TheOtherDave · 2012-04-05T03:25:42.875Z · LW(p) · GW(p)

As I've said elsewhere, I mostly think that any notion of preserving personal identity (from moment to moment, let alone after my heart stops beating) depends on a willingness to acknowledge some threshold of similarity and/or continuity such that I'm willing to consider anything above that threshold to share that identity.

Defining that threshold such that entities in other universes share my identity means there's lots of me out there, sure. And defining that threshold such that entities elsewhere on Earth right now share my identity means there's lots of me right here.

It's not clear to me why any of that matters.

comment by lsparrish · 2012-04-05T01:14:37.774Z · LW(p) · GW(p)

This is an interesting argument that I have been giving a lot of thought to lately. Are all randomly occurring me-like entities out there in the big universe just as much me as the ones I anticipate in the future?

Well, there's one difference: causality. I'm not so sure a me with no causal relationship to my current self is something I can justifiably consider a future me. When I step into a teleporter that vaporizes my molecules and reconstructs me, I still have a very well-defined causal relationship with that future self -- despite the unorthodox path by which the relationship was maintained, the likelihood of there being such a me after the event approaches 100% almost as closely as ordinary continuity generally does.

I don't have that kind of relationship with a Boltzmann brain or a 3D grid of "all possible humanoids" that some AI is doubtless running with quantum dice in a completely uncorrelated universe.

On the other hand, I'm having a hard time coming up with a consequence-based reason that it matters, apart from the fact that such beings seem like they must be awfully rare compared to straight up continuations of me.

If that's the only reason it matters -- relative scarcity -- then it follows that as aging and other (perhaps more constant) risk factors make my life less likely to experience ordinary continuity, these other forms of continuity become increasingly likely to represent my future experiences. If so, we can speculate as to what form that actually takes. Perhaps being spontaneously cured of aging due to a freak biological accident is much more common than any of the competing possibilities, resulting in most people spending large portions of their existence as a "lonely immortal" watching their friends die but never aging (or at least never croaking of old age). On the other hand, perhaps the lonely brain dying of explosive decompression is a much more common fate, because there's just so much space out there that it happens much more often.

Perhaps the way to sell cryonics is as a way to avoid a lonely and unpredictable future existence. Even if you only survive cryonics with one in a billion probability or something like that, it is still a better chance than the competing scenarios and thus more likely to win the existence lottery in a way that gives you a link to the history and world you actually remember coming from.

comment by Will_Newsome · 2012-04-06T00:09:03.603Z · LW(p) · GW(p)

Whenever I think about anthropics I'm always worried that I'm only experiencing thinking about anthropics because I'm thinking about anthropics, as if anthropics itself is the domain of some unfathomably powerful god who can change relative measures at whim and who thinks it's really fun to fuck with philosophers.

comment by Lightwave · 2012-04-05T17:33:33.667Z · LW(p) · GW(p)

There may be another alternative to cryonics that doesn't require a Big World - indirect mind uploading (scroll down to "Using 'indirect mind uploading' to avoid staying dead"). The idea is that if you record every second of your life (e.g. on video), a future AI might be able to use this information to converge on a specific brain configuration that is close to your original brain. Since only certain brains would say, write, do or think (although we currently can't record thoughts) the things you recorded yourself saying, writing and doing, then depending on the amount of information the AI has about you, it might be able to narrow it down, and the scenario becomes similar to Yvain's 'gradations of identity' scenario in this post.

comment by Benquo · 2012-04-05T14:45:42.630Z · LW(p) · GW(p)

Right now I don't go to bed at night weeping that my father only met my mother through a series of unlikely events and so most universes probably don't contain me; I'm not sure why I should do so after having been resurrected in the far future.

Because you understand that you can't change it:

For nothing is more certain, than that despair has almost the same effect upon us with enjoyment, and that we are no sooner acquainted with the impossibility of satisfying any desire, than the desire itself vanishes.

I don't think this claim is true in all cases, but it seems plausible here. There's no adaptive value to learning about "risks" you were subject to before you were able to do anything about them, so I don't feel how bad it was that I almost wasn't born as myself. But I did feel a kind of possible-worlds regret when I stepped out into the street without looking and almost got hit by a bus, since in some likely possible worlds I did get hit by that bus.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-04-07T15:09:28.771Z · LW(p) · GW(p)

Because you understand that you can't change it

As I said below, if Omega offered me a financial deal to increase my measure - $100 per alternate Everett branch to make my father and mother meet and conceive me - this wouldn't seem like a remotely good use of my money.

comment by Viliam_Bur · 2012-04-05T13:48:28.088Z · LW(p) · GW(p)

Parallel copies of me are not me. Dying in X% of Everett branches has X% of the disutility of dying.

Gradual change feels OK. Less gradual change feels less OK (unless I perceive it as an improvement). Going to sleep at night and knowing that in the morning I will feel a bit differently already makes me nervous. But it's preferable to dying. (But if I could avoid it without bad consequences, I would.) Small changes are good, because more of the future selves will be more similar to my current self.

How exactly does one measure the similarity or the change? Some parts of myself seem more important to me than other parts. Maybe if I made a matrix of how much various parts of me influence other parts of me, the answer would be some eigenvector of it. A part of me that strongly influences other parts of me is more me. You can change my taste for ice cream and the rest of the personality remains pretty much the same, so I would be willing to sacrifice the taste for ice cream for survival or even some minor benefit to the remaining parts.
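
A minimal Python sketch of that eigenvector idea (the part names and influence numbers are made up purely for illustration):

    import numpy as np

    # influence[i, j] = how strongly "part" i of a person shapes part j.
    parts = ["core values", "long-term memories", "skills", "ice cream taste"]
    influence = np.array([
        [0.0, 0.6, 0.5, 0.3],   # core values shape almost everything
        [0.4, 0.0, 0.3, 0.2],   # memories shape values, skills, tastes a bit
        [0.1, 0.1, 0.0, 0.1],   # skills have mild influence
        [0.0, 0.0, 0.0, 0.0],   # ice cream taste influences nothing much
    ])

    # Principal eigenvector of the influence matrix: a part scores high when
    # it strongly influences other high-scoring parts.
    eigvals, eigvecs = np.linalg.eig(influence)
    principal = np.abs(eigvecs[:, np.argmax(eigvals.real)].real)
    weights = principal / principal.sum()

    for part, weight in zip(parts, weights):
        print(f"{part:20s} {weight:.2f}")

On this toy matrix the ice-cream row influences nothing and gets a near-zero weight, while core values and memories dominate, matching the intuition that they are "more me".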

Would making 2 copies of me double my utility? Well, it depends on what "my" utility means in this context. Neither of the copies would perceive the utility of the other copy, so neither of them would feel like anything has doubled. But there would be 2 people having the utility of being alive, so globally there would be twice as much utility for my future selves, just not twice the utility for any particular self. (Just like dying in 50% of Everett branches does not mean that there is a self which feels only 50% alive.)

If I had an opportunity to copy myself, assuming that each copy would have the same quality of life as I have now, I would do it. If I had to pay for making the copy... I don't know how much I would be willing to pay. (Also, money is not exactly the same thing as utility, so even if I had to split my property in two halves -- a half for each copy -- it would be worth it, because each copy would get more than 50% of the utility.)

Without cryonics or uploading or some other immortality treatment, selves die. Pretending that you continue to live in a galaxy far far away is like pretending that you continue to live in heaven. And by the way, that's not a cheap analogy, because in a Tegmark universe there exists a heaven where you will be after you die, assuming "heaven" = a place where you get 3^^^3 utility, and "you, after you die" = a particle-level copy of you at the moment of your death, except that in heaven you will survive. And this is not good news, because hell exists too, and maybe a random afterlife is more like hell than heaven, in which case a prolonged life in our universe is preferable.

Replies from: wedrifid, wedrifid
comment by wedrifid · 2012-04-05T15:15:10.284Z · LW(p) · GW(p)

Parallel copies of me are not me. Dying in X% of Everett branches has X% of the disutility of dying.

Let us suppose you are forced to play quantum roulette (assume the payoffs are along the lines of those described here). The next day, someone asks you whether you are glad that you were forced to play it. Do you answer:

  • NO! I just 50% died! Those @%#%@ assholes!
  • Yes! I just got $300k for free!

I ask because, from the perspective of evaluating expected payoffs in the future, your two assertions are compatible, but from the perspective of evaluating outcomes that have already happened, they are not.
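
For concreteness, here is a toy version of the two evaluations being contrasted. The utility numbers are invented stand-ins; only the $300k figure comes from the linked payoff description.

    # Stylized accounting for a forced 50/50 quantum roulette.
    p_win = 0.5
    u_win = 300_000        # winning branch: alive plus $300k (treating $1 as 1 utilon)
    u_lose = -10_000_000   # losing branch: death, assigned some large negative value

    # Viliam's stated view: dying in 50% of branches carries 50% of the
    # disutility of dying, so the gamble is evaluated like an ordinary lottery.
    eu_lottery = p_win * u_win + (1 - p_win) * u_lose

    # The contrasting reading being probed: only the surviving copy is "me",
    # so the lost branch contributes nothing and the episode looks like free money.
    eu_survivor_only = u_win

    print(f"branch deaths count: {eu_lottery:,.0f}")       # strongly negative -> "assholes"
    print(f"only survivor is me: {eu_survivor_only:,.0f}")  # positive -> "free $300k"

Which of these valuations you endorse after the coin has landed is what the question above is trying to pin down.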

Replies from: Viliam_Bur, Sly
comment by Viliam_Bur · 2012-04-09T19:54:44.854Z · LW(p) · GW(p)

I guess my answer would be something like: "It's too bad they 50% killed me... but now I'm not going to cry over my parallel dead bodies (they are a sunk cost), and I'll enjoy the money." So I would be happy that I am in the winning branch, but also aware of the cost, and therefore not retrospectively happy about being forced to play.

Does this make sense? I would be both happy and unhappy about two different aspects of the situation. The part that makes me worry about the deaths of the non-me's is that they were killed by something that was a threat to me too. (Something like when terrorists capture 16 prisoners, kill 15 of them, and release you, and then, somehow illogically, also pay you a lot of money. They did not kill you, and you even profited from the whole affair, but that was not a personal decision on their part, just a random choice. So in some sense they wanted to kill you too, and almost succeeded.)

Replies from: wedrifid
comment by wedrifid · 2012-04-09T20:41:18.258Z · LW(p) · GW(p)

Thanks, the illustration with the terrorists nailed the meaning down.

comment by Sly · 2012-04-06T00:36:59.160Z · LW(p) · GW(p)

The answer is that I just had a 50% chance of dying. Assholes.

This should be pretty obvious considering the pile of corpses the quantum murderers are leaving behind as they repeat their game.

Replies from: wedrifid
comment by wedrifid · 2012-04-06T01:38:43.840Z · LW(p) · GW(p)

The answer is that I just had a 50% chance of dying. Assholes.

That actually isn't an answer to the clarification I asked of Viliam. If you (are sane and so) consider quantum roulette undesirable, then naturally you consider the folks who forced it upon you to be assholes. Yet you are the (measure of the) guy who won the lottery, didn't die, and got the $300k. If Viliam considers parallel copies not-me, then after the coin is tossed he doesn't care (in the direct personal sense) about the other 'not-me' guy who lost and got killed.

Mind you, the language around this subject is ambiguous. It could be that Viliam's expression was intended to place parallel-Everett-branch selves in a qualitatively different class from other forms of parallel selves.

Replies from: Sly
comment by Sly · 2012-04-06T01:52:32.697Z · LW(p) · GW(p)

I see what you are saying now. Thanks for clarifying.

Replies from: wedrifid
comment by wedrifid · 2012-04-06T02:08:13.539Z · LW(p) · GW(p)

I see what you are saying now. Thanks for clarifying.

Glad to hear it. I hope my inclusion of parenthetical 'sanity' claims conveyed that I essentially agree with what you were saying too.

Replies from: Sly
comment by Sly · 2012-04-06T04:20:38.919Z · LW(p) · GW(p)

Yea, that helped.

comment by wedrifid · 2012-04-05T15:25:03.018Z · LW(p) · GW(p)

And by the way, that's not a cheap analogy, because in a Tegmark universe there exists a heaven where you will be after you die, assuming "heaven" = a place where you get 3^^^3 utility

Curiously, for all the enormous scope of the higher-level Tegmark multiverses, this isn't necessarily the case. The evaluation of utility is determined by an extrapolation from you. If you are the kind of person who does not have an unbounded utility function, it is entirely possible that "heaven" does not exist even in Tegmark's level IV ultimate ensemble. It would require going beyond that, to universes that aren't mathematically possible.
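
A small illustration of the bounded-utility point (the particular function is an arbitrary example, chosen only because it saturates):

    from math import exp

    def bounded_utility(goods: float) -> float:
        """Arbitrary bounded utility: climbs toward 1 but never exceeds it."""
        return 1.0 - exp(-goods / 1_000_000)

    # However lavish the world, the utility stays below the bound, so no
    # mathematically possible world delivers anything like 3^^^3 utilons to
    # an agent with this kind of utility function.
    for goods in (1e3, 1e5, 1e6, 1e7):
        print(f"{goods:>12,.0f} -> {bounded_utility(goods):.6f}")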

comment by Dmytry · 2012-04-05T06:13:06.282Z · LW(p) · GW(p)

Well, those copies are very, very far away, and thus more different from you than something nearby; it feels like you are applying intuitions built for things nearby, whose relative position is only a minuscule fraction of their total information content, to things whose information content is entirely position information. In principle we could recreate this conversation by iterating over every possible value of every letter on a very powerful computer, but unless there's a process for selecting this conversation out of the sea of nonsense, that won't constitute a backup.
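
A toy way to see the "information content is entirely position information" point (the alphabet and the target string are arbitrary; only the enumeration idea comes from the comment):

    from itertools import count, product

    ALPHABET = "ab"   # tiny alphabet so the enumeration stays short

    def all_strings():
        """Yield every string over ALPHABET, shortest first, in lexicographic order."""
        for length in count(1):
            for chars in product(ALPHABET, repeat=length):
                yield "".join(chars)

    target = "abba"   # stands in for "this conversation"

    # The enumeration does contain the target, but only at an index that takes
    # about as much information to write down as the target itself -- the
    # selection step, not the enumeration, carries all the content.
    for index, candidate in enumerate(all_strings()):
        if candidate == target:
            print(f"found {target!r} at index {index}")
            break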

comment by Unknowns · 2015-04-27T07:37:12.341Z · LW(p) · GW(p)

Yes, to the degree that you accept the existence of a Big World, together with the usual assumptions about personal identity, you should expect never to die.

Even if there is no Big World, however, no one will ever experience dying anyway. Your total lifespan will be limited, but you will never notice it come to an end. So you might as well think of that limited span as a projection of an infinite lifespan onto an open finite interval. So again, one way or another you should expect never to die.
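
The "projection onto an open finite interval" can be made literal with any reparametrization of a bounded half-open interval onto an unbounded one. A toy sketch (the particular map and the 80-year span are arbitrary):

    T = 80.0   # arbitrary finite lifespan, in years

    def projected_time(t: float) -> float:
        """Map a moment t in [0, T) onto [0, infinity): s = t / (T - t)."""
        assert 0.0 <= t < T
        return t / (T - t)

    # Every moment of an unbounded timeline corresponds to some moment strictly
    # inside the finite span; the endpoint T itself is never reached.
    for t in (0.0, 40.0, 79.0, 79.9, 79.999):
        print(f"t = {t:7.3f} years -> s = {projected_time(t):12.3f}")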

comment by Houshalter · 2014-11-25T21:56:08.330Z · LW(p) · GW(p)

...

comment by shokwave · 2012-04-06T01:23:19.784Z · LW(p) · GW(p)

I'm curious how we distinguish copies of ourselves and near-copies of ourselves from ourselves. I mean, this intuition runs strongly through personal identity discussions: in this post you are identifying possible candidates for "clone of me", not "me". If it were the latter, we'd just go looking for ourselves in the Big World, which is a much simpler problem: "I'm me, and there's 863 of me here and there, some brighter than others." No concern about how close they are to some canonical source.

But we keep dragging in this "clone of me" designation.

An exact clone is more me than a clone with different ice cream preferences, who is more me than a clone who is a Hindu fundamentalist, who is more me than LeBron James is.

Put another way, I grant that you have a function that returns the "more-me"-ness of a clone (I do too). Should we dig into this function and find out what it cares about, hoping to find some variable we haven't yet treated in personal identity discussions?
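
To make that function concrete, here is a toy "more-me-ness" scorer. The features, weights, and candidates are all invented for illustration; the only point is that such a function exposes which variables it cares about.

    ME = {"core values": "consequentialist", "memories": "mine",
          "religion": "none", "ice cream": "chocolate"}

    # Arbitrary weights encoding which features the function cares about most.
    WEIGHTS = {"core values": 0.5, "memories": 0.35,
               "religion": 0.1, "ice cream": 0.05}

    def more_me_ness(candidate: dict) -> float:
        """Weighted share of features the candidate has in common with ME (0 to 1)."""
        return sum(w for feat, w in WEIGHTS.items() if candidate.get(feat) == ME[feat])

    exact_clone = dict(ME)
    vanilla_clone = {**ME, "ice cream": "vanilla"}
    fundamentalist_clone = {**ME, "religion": "Hindu fundamentalist"}

    for name, clone in [("exact", exact_clone), ("vanilla", vanilla_clone),
                        ("fundamentalist", fundamentalist_clone)]:
        print(f"{name}: {more_me_ness(clone):.2f}")

Digging into the weights is exactly the question above: why these particular numbers, and are these even the right variables?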

Replies from: TheOtherDave
comment by TheOtherDave · 2012-04-06T01:29:28.134Z · LW(p) · GW(p)

Well, for starters, would you distinguish between a "more me" function and a "similar to me" function?

Because there's been a lot of cogsci research done on what underlies human similarity judgments, so that would be a fine place to start, I suppose.

comment by DanielLC · 2012-04-05T00:30:59.199Z · LW(p) · GW(p)

Why do you care if you continue to exist?

Also, if you don't care about your measure, then why is the Big Universe even necessary? You already know you have a measure of at least how long you've been alive. You can't cease to have ever existed.

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-04-05T10:11:21.830Z · LW(p) · GW(p)

The main thrust of this post is that assumptions sufficient to support cryonics are also sufficient to survive without cryonics.

The "why do you care if you continue to exist?" question is just part of the background assumption - people sign up for cryonics because they care about continuing to exist. All I'm saying is that if you believe that, you can get the same results by not signing up for cryonics, maybe.

comment by steven0461 · 2012-04-04T23:42:20.513Z · LW(p) · GW(p)

This is a restatement of quantum immortality, right?

Replies from: Yvain
comment by Scott Alexander (Yvain) · 2012-04-04T23:44:51.531Z · LW(p) · GW(p)

I'm not sure why you crossed that out; it more or less is. The only new part is that I've never heard anyone discuss the relevance to cryonics before.

comment by Richard_Kennaway · 2012-04-04T23:42:12.300Z · LW(p) · GW(p)

But this is a slippery slope. If my recreation is exactly like me except for one neuron, is he the same person? Signs point to yes. What about five neurons? Five million? Or on a functional level, what if he blinked at exactly one point where I would not have done so? What if he prefers a different flavor of ice cream? What if he has exactly the same memories as I do, except for the outcome of one first-grade spelling bee I haven't thought about in years anyway? What if he is a Hindu fundamentalist?

These questions apply equally to the person who wakes up as Yvain tomorrow. Are you still "you" after a day's worth of loss of neurons in the ordinary course of things? After a minor stroke? After brain surgery? After a religious conversion? After changing in any way at all, including by reading this comment?

I've never seen the concept of degrees of identity formalised in a modal logic of possible worlds. The modal logics I've seen all consider each entity to either exist, or not exist, in each possible world. One can make toy mathematical systems out of this, but what practical use they are is less clear.

comment by chaosmosis · 2012-05-01T03:04:07.126Z · LW(p) · GW(p)

Since I inherently desire to struggle for life regardless of whether or not my efforts will have any effect, this argument does not alter my motivations or the decisions I will make. I'm perfectly fine with struggling for a lost cause if the process of struggling is either valuable or inevitable.

In a Big World, the process of struggling is all that we have, and success doesn't matter so much.