Timeless Identity

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-03T08:16:33.000Z · LW · GW · Legacy · 249 comments


Followup to: No Individual Particles, Identity Isn't In Specific Atoms, Timeless Physics, Timeless Causality

People have asked me, "What practical good does it do to discuss quantum physics or consciousness or zombies or personal identity?  I mean, what's the application for me in real life?"

Before the end of today's post, we shall see a real-world application with practical consequences, for you, yes, you in today's world.  It is built upon many prerequisites and deep foundations; you will not be able to tell others what you have seen, though you may (or may not) want desperately to tell them.  (Short of having them read the last several months of OB.)

In No Individual Particles we saw that the intuitive conception of reality as little billiard balls bopping around, is entirely and absolutely wrong; the basic ontological reality, to the best of anyone's present knowledge, is a joint configuration space.  These configurations have mathematical identities like "A particle here, a particle there", rather than "particle 1 here, particle 2 there" and the difference is experimentally testable.  What might appear to be a little billiard ball, like an electron caught in a trap, is actually a multiplicative factor in a wavefunction that happens to approximately factor.  The factorization of 18 includes two factors of 3, not one factor of 3, but this doesn't mean the two 3s have separate individual identities—quantum mechanics is sort of like that.  (If that didn't make any sense to you, sorry; you need to have followed the series on quantum physics.)

In Identity Isn't In Specific Atoms, we took this counterintuitive truth of physical ontology, and proceeded to kick hell out of an intuitive concept of personal identity that depends on being made of the "same atoms"—the intuition that you are the same person, if you are made out of the same pieces.  But because the brain doesn't repeat its exact state (let alone the whole universe), the joint configuration space which underlies you, is nonoverlapping from one fraction of a second to the next.  Or even from one Planck interval to the next.  I.e., "you" of now and "you" of one second later do not have in common any ontologically basic elements with a shared persistent identity.

Just from standard quantum mechanics, we can see immediately that some of the standard thought-experiments used to pump intuitions in philosophical discussions of identity, are physical nonsense.  For example, there is a thought experiment that runs like this:

"The Scanner here on Earth will destroy my brain and body, while recording the exact states of all my cells.  It will then transmit this information by radio.  Travelling at the speed of light, the message will take three minutes to reach the Replicator on Mars.  This will then create, out of new matter, a brain and body exactly like mine.  It will be in this body that I shall wake up."

This is Derek Parfit in the excellent Reasons and Persons, p. 199—note that Parfit is describing thought experiments, not necessarily endorsing them.

There is an argument which Parfit describes (but does not himself endorse), and which I have seen many people spontaneously invent, which says (not a quote):

Ah, but suppose an improved Scanner were invented, which scanned you non-destructively, but still transmitted the same information to Mars.  Now, clearly, in this case, you, the original, have simply stayed on Earth, and the person on Mars is only a copy.  Therefore this teleporter is actually murder and birth, not travel at all—it destroys the original, and constructs a copy!

Well, but who says that if we build an exact copy of you, one version is the privileged original and the other is just a copy?  Are you under the impression that one of these bodies is constructed out of the original atoms—that it has some kind of physical continuity the other does not possess?  But there is no such thing as a particular atom, so the original-ness or new-ness of the person can't depend on the original-ness or new-ness of the atoms.

(If you are now saying, "No, you can't distinguish two electrons yet, but that doesn't mean they're the same entity -" then you have not been following the series on quantum mechanics, or you need to reread it.  Physics does not work the way you think it does.  There are no little billiard balls bouncing around down there.)

If you further realize that, as a matter of fact, you are splitting all the time due to ordinary decoherence, then you are much more likely to look at this thought experiment and say:  "There is no copy; there are two originals."

Intuitively, in your imagination, it might seem that one billiard ball stays in the same place on Earth, and another billiard ball has popped into place on Mars; so one is the "original", and the other is the "copy".  But at a fundamental level, things are not made out of billiard balls.

A sentient brain constructed to atomic precision, and copied with atomic precision, could undergo a quantum evolution along with its "copy", such that, afterward, there would exist no fact of the matter as to which of the two brains was the "original".  In some Feynman diagrams they would exchange places, in some Feynman diagrams not.  The two entire brains would be, in aggregate, identical particles with no individual identities.

Parfit, having discussed the teleportation thought experiment, counters the intuitions of physical continuity with a different set of thought experiments:

"Consider another range of possible cases: the Physical Spectrum.  These cases involve all of the different possible degrees of physical continuity...

"In a case close to the near end, scientists would replace 1% of the cells in my brain and body with exact duplicates.  In the case in the middle of the spectrum, they would replace 50%.  In a case near the far end, they would replace 99%, leaving only 1% of my original brain and body.  At the far end, the 'replacement' would involve the complete destruction of my brain and body, and the creation out of new organic matter of a Replica of me."

(Reasons and Persons, p. 234.)

Parfit uses this to argue against the intuition of physical continuity pumped by the first experiment: if your identity depends on physical continuity, where is the exact threshold at which you cease to be "you"?

By the way, although I'm criticizing Parfit's reasoning here, I really liked Parfit's discussion of personal identity.  It really surprised me.  I was expecting a rehash of the same arguments I've seen on transhumanist mailing lists over the last decade or more.  Parfit gets much further than I've seen the mailing lists get.  This is a sad verdict for the mailing lists.  And as for Reasons and Persons, it well deserves its fame.

But although Parfit executed his arguments competently and with great philosophical skill, those two particular arguments (Parfit has lots more!) are doomed by physics.

There just is no such thing as "new organic matter" that has a persistent identity apart from "old organic matter".  No fact of the matter exists, as to which electron is which, in your body on Earth or your body on Mars.  No fact of the matter exists, as to how many electrons in your body have been "replaced" or "left in the same place".  So both thought experiments are physical nonsense.

Parfit seems to be enunciating his own opinion here (not Devil's advocating) when he says:

"There are two kinds of sameness, or identity.  I and my Replica are qualitatively identical, or exactly alike.  But we may not be numerically identical, one and the same person.  Similarly, two white billiard balls are not numerically but may be qualitatively identical.  If I paint one of these balls red, it will cease to be qualitatively identical with itself as it was.  But the red ball that I later see and the white ball that I painted red are numerically identical.  They are one and the same ball." (p. 201.)

In the human imagination, the way we have evolved to imagine things, we can imagine two qualitatively identical billiard balls that have a further fact about them—their persistent identity—that makes them distinct.

But it seems to be a basic lesson of physics that "numerical identity" just does not exist.  Where "qualitative identity" exists, you can set up quantum evolutions that refute the illusion of individuality—Feynman diagrams that sum over different permutations of the identicals.
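To make "sum over permutations of the identicals" concrete, here is the standard two-particle textbook illustration (my addition, not from the original post).  For two identical bosons occupying distinct one-particle states phi_A and phi_B, the only physically allowed joint state is the symmetrized one:

```latex
\psi(x_1, x_2) \;=\; \frac{1}{\sqrt{2}}\,\bigl[\,\phi_A(x_1)\,\phi_B(x_2) \;+\; \phi_A(x_2)\,\phi_B(x_1)\,\bigr]
```

Swapping the labels x_1 and x_2 gives back exactly the same state (for identical fermions, the antisymmetric combination changes only by an overall sign, which is physically meaningless).  No observable answers the question "which particle is in phi_A"; the question corresponds to nothing in the formalism.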

We should always have been suspicious of "numerical identity", since it was not experimentally detectable; but physics swoops in and drop-kicks the whole argument out the window.

Parfit p. 241:

"Reductionists admit that there is a difference between numerical identity and exact similarity.  In some cases, there would be a real difference between some person's being me, and his being someone else who is merely exactly like me."

This reductionist admits no such thing.

Parfit even describes a wise-seeming reductionist refusal to answer questions as to when one person becomes another, when you are "replacing" the atoms inside them.  P. 235:

(The reductionist says:)  "The resulting person will be psychologically continuous with me as I am now.  This is all there is to know.  I do not know whether the resulting person will be me, or will be someone else who is merely exactly like me.  But this is not, here, a real question, which must have an answer.  It does not describe two different possibilities, one of which must be true.  It is here an empty question.  There is not a real difference here between the resulting person's being me, and his being someone else.  This is why, even though I do not know whether I am about to die, I know everything."

Almost but not quite reductionist enough!  When you master quantum mechanics, you see that, in the thought experiment where your atoms are being "replaced" in various quantities by "different" atoms, nothing whatsoever is actually happening—the thought experiment itself is physically empty.

So this reductionist, at least, triumphantly says—not, "It is an empty question; I know everything that there is to know, even though I don't know if I will live or die"—but simply, "I will live; nothing happened."

This whole episode is one of the main reasons why I hope that when I really understand matters such as these, and they have ceased to be mysteries unto me, I will be able to give definite answers to questions that seem like they ought to have definite answers.

And it is a reason why I am suspicious, of philosophies that too early—before the dispelling of mystery—say, "There is no answer to the question."  Sometimes there is no answer, but then the absence of the answer comes with a shock of understanding, a click like thunder, that makes the question vanish in a puff of smoke.  As opposed to a dull empty sort of feeling, as of being told to shut up and stop asking questions.

And another lesson:  Though the thought experiment of having atoms "replaced" seems easy to imagine in the abstract, anyone knowing a fully detailed physical visualization would have immediately seen that the thought experiment was physical nonsense.  Let zombie theorists take note!

Additional physics can shift our view of identity even further:

In Timeless Physics, we looked at a speculative, but even more beautiful view of quantum mechanics:  We don't need to suppose the amplitude distribution over the configuration space is changing, since the universe never repeats itself.  We never see any particular joint configuration (of the whole universe) change amplitude from one time to another; from one time to another, the universe will have expanded.  There is just a timeless amplitude distribution (aka wavefunction) over a configuration space that includes compressed configurations of the universe (early times) and expanded configurations of the universe (later times).

Then we will need to discover people and their identities embodied within a timeless set of relations between configurations that never repeat themselves, and never change from one time to another.

As we saw in Timeless Beauty, timeless physics is beautiful because it would make everything that exists either perfectly global—like the uniform, exceptionless laws of physics that apply everywhere and everywhen—or perfectly local—like points in the configuration space that only affect or are affected by their immediate local neighborhood.  Everything that exists fundamentally, would be qualitatively unique: there would never be two fundamental entities that have the same properties but are not the same entity.

(Note:  The you on Earth, and the you on Mars, are not ontologically basic.  You are factors of a joint amplitude distribution that is ontologically basic.  Suppose the integer 18 exists: the factorization of 18 will include two factors of 3, not one factor of 3.  This does not mean that inside the Platonic integer 18 there are two little 3s hanging around with persistent identities, living in different houses.)

We also saw in Timeless Causality that the end of time is not necessarily the end of cause and effect; causality can be defined (and detected statistically!) without mentioning "time".  This is important because it preserves arguments about personal identity that rely on causal continuity rather than "physical continuity".
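Here is a toy illustration of "causality detected statistically" (my own sketch, not the construction from Timeless Causality): two causal graphs with the same undirected skeleton, A - B - C, but oppositely pointed arrows at B, can be told apart purely from sample frequencies.  No timestamps appear anywhere in the data; the direction of the arrows is read off from which conditional independencies hold.  This is the same v-structure trick used by causal-discovery algorithms such as PC.

```python
# A toy sketch (my construction, not the diagrams from "Timeless Causality"):
# a fork A <- B -> C and a collider A -> B <- C share the skeleton A - B - C,
# but leave different statistical fingerprints:
#   fork:     A and C are dependent, and become independent given B
#   collider: A and C are independent, and become dependent given B

import random

def flip(p):
    return 1 if random.random() < p else 0

def sample_fork():
    b = flip(0.5)
    a = flip(0.9) if b else flip(0.1)      # A is a noisy copy of B
    c = flip(0.9) if b else flip(0.1)      # C is a noisy copy of B
    return a, b, c

def sample_collider():
    a, c = flip(0.5), flip(0.5)
    b = flip(0.9) if a ^ c else flip(0.1)  # B is a noisy XOR of A and C
    return a, b, c

def dependence(pairs):
    """Crude dependence measure: |P(A=1, C=1) - P(A=1) * P(C=1)|."""
    n = len(pairs)
    pa = sum(a for a, c in pairs) / n
    pc = sum(c for a, c in pairs) / n
    pac = sum(a and c for a, c in pairs) / n
    return abs(pac - pa * pc)

def diagnose(sampler, n=200_000):
    data = [sampler() for _ in range(n)]
    marginal = dependence([(a, c) for a, b, c in data])
    conditional = (dependence([(a, c) for a, b, c in data if b == 0]) +
                   dependence([(a, c) for a, b, c in data if b == 1])) / 2
    if marginal > 0.01 and conditional < 0.01:
        return "looks like a fork: A <- B -> C"
    if marginal < 0.01 and conditional > 0.01:
        return "looks like a collider: A -> B <- C"
    return "ambiguous"

random.seed(0)
print("fork data:    ", diagnose(sample_fork))      # looks like a fork
print("collider data:", diagnose(sample_collider))  # looks like a collider
```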

Previously I drew this diagram of you in a timeless, branching universe:

[Diagram: a branching structure of heads: a gold head in the present, green heads in its remembered past, and two branches of possible futures.]

To understand many-worlds:  The gold head only remembers the green heads, creating the illusion of a unique line through time, and the intuitive question, "Where does the line go next?"  But it goes to both possible futures, and both possible futures will look back and see a single line through time.  In many-worlds, there is no fact of the matter as to which future you personally will end up in.  There is no copy; there are two originals.

To understand timeless physics:  The heads are not popping in and out of existence as some Global Now sweeps forward.  They are all just there, each thinking that now is a different time.

In Timeless Causality I drew this diagram:

[Diagram: right-flowing causality between configurations, from Timeless Causality.]

This was part of an illustration of how we could statistically distinguish left-flowing causality from right-flowing causality—an argument that cause and effect could be defined relationally, even in the absence of a changing global time.  And I said that, because we could keep cause and effect as the glue that binds configurations together, we could go on trying to identify experiences with computations embodied in flows of amplitude, rather than having to identify experiences with individual configurations.

But both diagrams have a common flaw: they show discrete nodes, connected by discrete arrows.  In reality, physics is continuous.

So if you want to know "Where is the computation?  Where is the experience?" my best guess would be to point to something like a directional braid:

[Diagram: a directional braid of interactions.]

This is not a braid of moving particles.  This is a braid of interactions within close neighborhoods of timeless configuration space.

[Diagram: the braid with a red line slicing across it.]

Every point intersected by the red line is unique as a mathematical entity; the points are not moving from one time to another.  However, the amplitude at different points is related by physical laws; and there is a direction of causality to the relations.

You could say that the amplitude is flowing, in a river that never changes, but has a direction.

Embodied in this timeless flow are computations; within the computations, experiences.  The experiences' computations' configurations might even overlap each other:

[Diagram: two overlapping rectangles, labeled 1 and 2, drawn over successive regions of the braid.]

In the causal relations covered by the rectangle 1, there would be one moment of Now; in the causal relations covered by the rectangle 2, another moment of Now.  There is a causal direction between them: 1 is the cause of 2, not the other way around.  The rectangles overlap—though I really am not sure if I should be drawing them with overlap or not—because the computations are embodied in some of the same configurations.  Or if not, there is still causal continuity because the end state of one computation is the start state of another.

But on an ontologically fundamental level, nothing with a persistent identity moves through time.

Even the braid itself is not ontologically fundamental; a human brain is a factor of a larger wavefunction that happens to factorize.

Then what is preserved from one time to another?  On an ontologically basic level, absolutely nothing.

But you will recall that I earlier talked about any perturbation which does not disturb your internal narrative, almost certainly not being able to disturb whatever is the true cause of your saying "I think therefore I am"—this is why you can't leave a person physically unaltered, and subtract their consciousness.  When you look at a person on the level of organization of neurons firing, anything which does not disturb, or only infinitesimally disturbs, the pattern of neurons firing—such as flipping a switch from across the room—ought not to disturb your consciousness, or your personal identity.

If you were to describe the brain on the level of neurons and synapses, then this description of the factor of the wavefunction that is your brain, would have a very great deal in common, across different cross-sections of the braid.  The pattern of synapses would be "almost the same"—that is, the description would come out almost the same—even though, on an ontologically basic level, nothing that exists fundamentally is held in common between them.  The internal narrative goes on, and you can see it within the vastly higher-level view of the firing patterns in the connection of synapses.  The computational pattern computes, "I think therefore I am".  The narrative says, today and tomorrow, "I am Eliezer Yudkowsky, I am a rationalist, and I have something to protect."  Even though, in the river that never flows, not a single drop of water is shared between one time and another.

If there's any basis whatsoever to this notion of "continuity of consciousness"—I haven't quite given up on it yet, because I don't have anything better to cling to—then I would guess that this is how it works.

Oh... and I promised you a real-world application, didn't I?

Well, here it is:

Many throughout time, tempted by the promise of immortality, have consumed strange and often fatal elixirs; they have tried to bargain with devils that failed to appear; and done many other silly things.

But like all superpowers, long-range life extension can only be acquired by seeing, with a shock, that some way of getting it is perfectly normal.

If you can see the moments of now braided into time, the causal dependencies of future states on past states, the high-level pattern of synapses and the internal narrative as a computation within it—if you can viscerally dispel the classical hallucination of a little billiard ball that is you, and see your nows strung out in the river that never flows—then you can see that signing up for cryonics, being vitrified in liquid nitrogen when you die, and having your brain nanotechnologically reconstructed fifty years later, is actually less of a change than going to sleep, dreaming, and forgetting your dreams when you wake up.

You should be able to see that, now, if you've followed through this whole series.  You should be able to get it on a gut level—that being vitrified in liquid nitrogen for fifty years (around 3e52 Planck intervals) is not very different from waiting an average of 2e26 Planck intervals between neurons firing, on the generous assumption that there are a hundred trillion synapses firing a thousand times per second.  You should be able to see that there is nothing preserved from one night's sleep to the morning's waking, which cryonic suspension does not preserve also.  Assuming the vitrification technology is good enough for a sufficiently powerful Bayesian superintelligence to look at your frozen brain, and figure out "who you were" to the same resolution that your morning's waking self resembles the person who went to sleep that night.
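For the curious, here is the back-of-the-envelope arithmetic behind those two figures (my own check, not from the original post; it assumes the standard Planck time of roughly 5.4e-44 seconds and the synapse numbers stated above):

```python
# Rough check of the two figures above.  The Planck time (~5.4e-44 s) and the
# synapse count / firing rate are the assumptions stated in the post.

PLANCK_TIME = 5.4e-44        # seconds per Planck interval
YEAR = 3.15e7                # seconds per year (approximate)

# Fifty years of cryonic suspension, in Planck intervals:
suspension = 50 * YEAR / PLANCK_TIME
print(f"50 years   ~ {suspension:.1e} Planck intervals")    # ~2.9e52, i.e. ~3e52

# Average gap between successive firings anywhere in the brain, assuming
# 1e14 synapses each firing 1e3 times per second:
firings_per_second = 1e14 * 1e3
gap = (1 / firings_per_second) / PLANCK_TIME
print(f"firing gap ~ {gap:.1e} Planck intervals")           # ~1.9e26, i.e. ~2e26
```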

Do you know what it takes to securely erase a computer's hard drive?  Writing it over with all zeroes isn't enough.  Writing it over with all zeroes, then all ones, then a random pattern, isn't enough.  Someone with the right tools can still examine the final state of a section of magnetic memory, and distinguish the state, "This was a 1 written over by a 1, then a 0, then a 1" from "This was a 0 written over by a 1, then a 0, then a 1".  The best way to securely erase a computer's hard drive is to destroy it with thermite.

I really don't think that carefully vitrifying a brain to prevent ice crystal formation and then freezing it in liquid nitrogen is going to be a secure erase procedure, if you can examine atomic-level differences in the synapses.

Someone hears about cryonics and thinks for 10 seconds and says, "But if you're frozen and then revived, are you really the same person?"

And if they happened to know all about quantum physics and could apply the abstract knowledge to real life, and they had followed the whole debate about zombies and resolved it against epiphenomenalism in general, then they would be able to visualize the braids in the river that never flows, and say, "Yes."

But this knowledge is not common.

So they die.

There are numerous other reasons that people seize on, when they search for a rationalization for a negative initial flinch against cryonics.  And numerous other knowledges that would be required to answer those objections.  "But wouldn't it be boring to live such a long time?"  (Can be answered if you know hedonic psychology, and have developed a theory of fun, and can visualize accessible fun spaces that increase in volume with increasing intelligence.)  "Why would future civilizations bother to revive me?"  (Requires understanding either economic growth diminishing the cost, or knowledge of history and how societies have become kinder over time, or knowing about Friendly AI.)  "Isn't it wrong to live so long?"  (Requires knowing about the "sour grapes" bias.  See also transhumanism as simplified humanism and the meaning that immortality gives to life.)  Then there's the meta-knowledge of how to question all these deeply wise cached thoughts that pop into your head about the futility of life; and the ability to do things that might make people look at you weird, and so on...

Some of these are series of posts I haven't done yet.  But if you anticipate updating your probabilities when you read those future posts, then you should update them now.  Or, if you prefer, trust me:

If you would rather live happily ever after, than die, and you are willing to spend between $300 and $2000 per year(*) to express this preference, then sign up for cryonics.

If you've been cryocrastinating, putting off signing up for cryonics "until later", don't think that you've "gotten away with it so far".  Many worlds, remember?  There are branched versions of you that are dying of cancer, and not signed up for cryonics, and it's too late for them to get life insurance.

See, knowing about many worlds can help you visualize probabilities as frequencies, because they usually are.

It might encourage you to get around to getting health insurance, too, or wearing a helmet on your motorcycle, or whatever: don't think you've gotten away with it so far.

And if you're planning to play the lottery, don't think you might win this time.  A vanishingly small fraction of you wins, every time.  So either learn to discount small fractions of the future by shutting up and multiplying, or spend all your money on lottery tickets—your call.
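A minimal "shut up and multiply" sketch (the numbers here are illustrative, not from the post): the expected value of a ticket is just the payout weighted by the tiny fraction of branches in which you win, and it comes out well below the ticket price.

```python
# Illustrative numbers only (a generic large-jackpot lottery, not from the post).
ticket_price = 2.00              # dollars
jackpot = 100_000_000            # dollars
p_win = 1 / 300_000_000          # the fraction of future "yous" that win

expected_return = p_win * jackpot
print(f"expected return per ticket: ${expected_return:.2f}")                  # ~$0.33
print(f"expected loss per ticket:   ${ticket_price - expected_return:.2f}")   # ~$1.67
```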

It is a very important lesson in rationality, that at any time, the Environment may suddenly ask you almost any question, which requires you to draw on 7 different fields of knowledge.  If you missed studying a single one of them, you may suffer arbitrarily large penalties up to and including capital punishment.  You can die for an answer you gave in 10 seconds, without realizing that a field of knowledge existed of which you were ignorant.

This is why there is a virtue of scholarship.

150,000 people die every day.  Some of those deaths are truly unavoidable, but most are the result of inadequate knowledge of cognitive biases, advanced futurism, and quantum mechanics.(**)

If you disagree with my premises or my conclusion, take a moment to consider nonetheless, that the very existence of an argument about life-or-death stakes, whatever position you take in that argument, constitutes a sufficient lesson on the sudden relevance of scholarship.


(*)  The way cryonics works is that you get a life insurance policy, and the policy pays for your cryonic suspension.  The Cryonics Institute is the cheapest provider, Alcor is the high-class one.  Rudi Hoffman set up my own insurance policy, with CI.  I have no affiliate agreements with any of these entities, nor, to my knowledge, do they have affiliate agreements with anyone.  They're trying to look respectable, and so they rely on altruism and word-of-mouth to grow, instead of paid salespeople.  So there's a vastly smaller worldwide market for immortality than lung-cancer-in-a-stick.  Welcome to your Earth; it's going to stay this way until you fix it.

(**)  Most deaths?  Yes:  If cryonics were widely seen in the same terms as any other medical procedure, economies of scale would considerably diminish the cost; it would be applied routinely in hospitals; and foreign aid would enable it to be applied even in poor countries.  So children in Africa are dying because citizens and politicians and philanthropists in the First World don't have a gut-level understanding of quantum mechanics.

Added:  For some of the questions that are being asked, see Alcor's FAQ for scientists and Ben Best's Cryonics FAQ (archived snapshot).

 

Part of The Quantum Physics Sequence

Next post: "Thou Art Physics"

Previous post: "Timeless Causality"

249 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Roland2 · 2008-06-03T09:03:28.000Z · LW(p) · GW(p)

Where can I sign up for cryonics if I live outside the United States and Europe?

Replies from: TimFreeman, accolade
comment by TimFreeman · 2011-04-19T17:11:08.980Z · LW(p) · GW(p)

Kriorus might be worth a try.

Be aware that some jurisdictions, such as British Columbia and France, go out of their way to outlaw it.

comment by mitchell_porter2 · 2008-06-03T09:23:53.000Z · LW(p) · GW(p)

The argument that "there is no such thing as a particular atom, therefore neither duplicate has a preferred status as the original" looks sophistical, and it may even be possible to show that it is within your preferred quantum framework. Consider a benzene ring. That's a ring of six carbon atoms. If it occurs as part of a larger molecule, there will be covalent bonds between particular atoms in the ring and atoms exterior to it. Now suppose I verify the presence of the benzene ring through some nondestructive procedure, and then create another benzene ring elsewhere, using other atoms. In fact, suppose I have a machine which will create that second benzene ring only if the investigative procedure verifies the existence of the first. I have created a copy, but are you really going to say there's no fact of the matter about which is the original? There's even a hint of how you can distinguish between the two given your ontological framework, when I stipulated that the original ring is bonded to something else; something not true of the duplicate. If you insist on thinking there is no continuity of identity of individual particles, at least you can say that one of the carbon atoms in the first ring is entangled with an outside atom in a way that none of the atoms in the duplicate ring is, and distinguish between them that way. You may be able to individuate atoms within structures by looking at their quantum correlations; you won't be able to say 'this atom has property X, that atom has property Y' but you'll be able to say 'there's an atom with property X, and there's an atom with property Y'.

Assuming that this is on the right track, the deeper reality is going to be field configurations anyway, not particle configurations. Particle number is frame-dependent (see: Unruh effect), and a quantum particle is just a sort of wavefunction over field configurations - a blob of amplitude in field configuration space.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-03T09:30:59.000Z · LW(p) · GW(p)

Roland, I do not know. There is an organization in Russia. The Cryonics Institute accepts bodies shipped to them packed in ice. I'm not sure about Alcor, which tries to do on-scene suspension. Alcor lists a $25K surcharge (which would be paid out of life insurance) for suspension outside the US/UK/Canada, but I'm not sure how far abroad they'd go. Where are you?

Mitchell: You may be able to individuate atoms within structures by looking at their quantum correlations; you won't be able to say 'this atom has property X, that atom has property Y' but you'll be able to say 'there's an atom with property X, and there's an atom with property Y'.

Certainly. That's how we distinguish Eliezer from Mitchell.

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2012-08-15T19:21:23.544Z · LW(p) · GW(p)

Eliezer...the main issue that keeps me from cryonics is not whether the "real me" wakes up on the other side.

The first question is about how accurate the reconstruction will be. When you wipe a hard drive with a magnet, you can recover some of the content, but usually not all of it. Recovering "some" of a human, but not all of it, could easily create a mentally handicapped, broken consciousness.

But let's set that aside, as it is a technical problem. There is a second issue. If and when immortality and AI are achieved, what value would my revived consciousness contribute to such a society?

You've thus far established that death isn't a bad thing when a copy of the information is preserved and later revived. You've explained that you are willing to treat consciousness much like you would a computer file - you've explained that you would be willing to destroy one of two redundant duplicates of yourself.

Tell me, why exactly is it okay to destroy a redundant duplicate of yourself? You can't say that it's okay to destroy it simply because it is redundant, because that also destroys the point of cryonics. There will be countless humans and AIs that will come into existence, and each of those minds will require resources to maintain. Why is it so important that your, or my, consciousness be one among this swarm? Is that not similarly redundant?

For the same reasons that you would be willing to destroy one of two identical copies of yourself because having two copies is redundant, I am wondering just how much I care that my own consciousness survives forever. My mind is not exceptional among all the possible consciousnesses that resources could be devoted to. Keeping my mind preserved through the ages seems to me just as redundant as making twenty copies of yourself and carefully preserving each one.

I'm not saying I don't want to live forever...I do want to. I'm saying that I feel one ought to have a reason for preserving one's consciousness that goes beyond the simple desire for at least one copy of one's consciousness to continue existing.

When we deconstruct the notion of consciousness as thoroughly as we are doing in this discussion, the concepts of "life" and "death" become meaningless over-approximations, much like "free will". Once society reaches that point, we are going to have to deconstruct those ideas and ask ourselves why it is so important that certain information never be deleted. Otherwise, it's going to get a little silly...a "21st century human brain maximizer" is not that much different from a paperclip maximizer, in the grand scheme of things.

Replies from: None, hairyfigment
comment by [deleted] · 2013-09-30T16:21:55.821Z · LW(p) · GW(p)

The main issue that keeps me from cryonics is not whether the "real me" wakes up on the other side.

How do you go to sleep at night, not knowing if it is the "real you" that wakes up on the other side of consciousness?

Replies from: TheOtherDave, someonewrongonthenet
comment by TheOtherDave · 2013-09-30T17:32:15.019Z · LW(p) · GW(p)

Your comment would make more sense to me if I removed the word "not" from the sentence you quote. (Also, if I don't read past that sentence of someonewrongonthenet's comment.)

That said, I agree completely that the kinds of vague identity concerns about cryonics that the quoted sentence with "not" removed would be raising would also arise, were one consistent, about routine continuation of existence over time.

Replies from: None, army1987
comment by [deleted] · 2013-09-30T18:37:14.926Z · LW(p) · GW(p)

Hrm.. ambiguous semantics. I took it to imply acceptance of the idea but not elevation of its importance, but I see how it could be interpreted differently. And yes, the rest of the post addresses something completely different. But if I can continue for a moment on the tangent, expanding my comment above (even if it doesn't apply to the OP):

You actually continue functioning when you sleep; it's just that you don't remember details once you wake up. A more useful example for such discussion is general anesthesia, which shuts down the regions of the brain associated with consciousness. If personal identity is in fact derived from continuity of computation, then it is plausible that general anesthesia would result in a "different you" waking up after the operation. The application to cryonics depends greatly on the subtle distinction of whether vitrification (and more importantly, the recovery process) slows down or stops computation. This has been a source of philosophical angst for me personally, but I'm still a cryonics member.

More troubling is the application to uploading. I haven't done this yet, but I want my Alcor contract to explicitly forbid uploading as a restoration process, because I am unconvinced that a simulation of my destructively scanned frozen brain would really be a continuation of my personal identity. I was hoping that “Timeless Identity” would address this point, but sadly it punts the issue.

Replies from: TheOtherDave, shminux
comment by TheOtherDave · 2013-09-30T19:01:54.552Z · LW(p) · GW(p)

Well, if the idea is unimportant to the OP, presumably that also helps explain how they can sleep at night.

WRT the tangent... my own position wrt preservation of personal identity is that while it's difficult to articulate precisely what it is that I want to preserve, and I'm not entirely certain there is anything cogent I want to preserve that is uniquely associated with me, I'm pretty sure that whatever does fall in that category has nothing to do with either continuity of computation or similarity of physical substrate. I'm about as sanguine about continuing my existence as a software upload as I am about continuing it as this biological system or as an entirely different biological system, as long as my subjective experience in each case is not traumatically different.

Replies from: None
comment by [deleted] · 2013-10-01T17:03:19.689Z · LW(p) · GW(p)

I wrote up about a page-long reply, then realized it probably deserves its own posting. I'll see if I can get to that in the next day or so. There's a wide spectrum of possible solutions to the personal identity problem, from physical continuity (falsified) to pattern continuity and causal continuity (described by Eliezer in the OP), to computational continuity (my own view, I think). It's not a minor point, though: whichever view turns out to be correct has immense ramifications for morality and timeless decision theory, among other things...

Replies from: TheOtherDave, pengvado
comment by TheOtherDave · 2013-10-01T17:08:09.988Z · LW(p) · GW(p)

When you write up the post, you might want to say a few words about what it means for one of these views to be "correct" or "incorrect."

Replies from: None
comment by [deleted] · 2013-10-01T17:58:35.541Z · LW(p) · GW(p)

Ok I will, but that part is easy enough to state here: I mean correct in the reductionist sense. The simplest explanation which resolves the original question and/or associated confusion, while adding to our predictive capacity and not introducing new confusion.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-01T18:56:39.249Z · LW(p) · GW(p)

Mm. I'm not sure I understood that properly; let me echo my understanding of your view back to you and see if I got it.

Suppose I get in something that is billed as a transporter, but which does not preserve computational continuity. Suppose, for example, that it destructively scans my body, sends the information to the destination (a process which is not instantaneous, and during which no computation can take place), and reconstructs an identical body using that information out of local raw materials at my destination.

If it turns out that computational or physical continuity is the correct answer to what preserves personal identity, then I in fact never arrive at my destination, although the thing that gets constructed at the destination (falsely) believes that it's me, knows what I know, etc. This is, as you say, an issue of great moral concern... I have been destroyed, this new person is unfairly given credit for my accomplishments and penalized for my errors, and in general we've just screwed up big time.

Conversely, if it turns out that pattern or causal continuity is the correct answer, then there's no problem.

Therefore it's important to discover which of those facts is true of the world.

Yes? This follows from your view? (If not, I apologize; I don't mean to put up strawmen, I'm genuinely misunderstanding.)

If so, your view is also that if we want to know whether that's the case or not, we should look for the simplest answer to the question "what does my personal identity comprise?" that does not introduce new confusion and which adds to our predictive capacity. (What is there to predict here?)

Yes?

EDIT: Ah, I just read this post where you say pretty much this. OK, cool; I understand your position.

Replies from: None, Eliezer_Yudkowsky
comment by [deleted] · 2013-10-01T19:16:05.862Z · LW(p) · GW(p)

Yes, that is not only 100% accurate, but describes where I'm headed.

I am looking for the simplest explanation of the subjective continuity of personal identity, which either answers or dissolves the question. Further, the explanation should either explain which teleportation scenario is correct (identity transfer, or murder+birth), or satisfactorily explain why it is a meaningless distinction.

What is there to predict here?

If I, the person standing in front of the transporter door, will experience walking on Mars, or oblivion.

Yes, it is perhaps likely that this will never be experimentally observable. That may even be a tautology since we are talking about subjective experience. But still, a reductionist theory of consciousness could provide a simple, easy to understand explanation for the origin of personal identity (e.g., what a computational machine feels like from the inside) which predicts identity transfer or murder + birth. That would be enough for me, at least as long as there aren't competing, equally simple theories.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-01T19:43:57.552Z · LW(p) · GW(p)

What is there to predict here?
If I, the person standing in front of the transporter door, will experience walking on Mars, or oblivion.

Well, you certainly won't experience oblivion, more or less by definition. The question is whether you will experience walking on Mars or not.

But there is no distinct observation to be made in these two cases. That is, we agree that either way there will be an entity having all the observable attributes (both subjective and objective; this is not about experimental proof, it's about the presence or absence of anything differentially observable by anyone) that Mark Friendebach has, walking on Mars.

So, let me rephrase the question: what observation is there to predict here?

Replies from: None
comment by [deleted] · 2013-10-01T19:58:06.958Z · LW(p) · GW(p)

So, let me rephrase the question: what observation is there to predict here?

That's not the direction I was going with this. It isn't about empirical observation, but rather aspects of morality which depend on subjective experience. The prediction is under what conditions subjective experience terminates. Even if not testable, that is still an important thing to find out, with moral implications.

Is it moral to use a teleporter? From what I can tell, that depends on whether the person's subjective experience is terminated in the process. From the utility point of view the outcomes are very nearly the same - you've murdered one person, but given “birth” to an identical copy in the process. However if the original, now destroyed person didn't want to die, or wouldn't have wanted his clone to die, then it's a net negative.

As I said elsewhere, the teleporter is the easiest way to think of this, but the result has many other implications from general anesthesia, to cryonics, to Pascal's mugging and the basilisk.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-01T20:00:48.463Z · LW(p) · GW(p)

OK. I'm tapping out here. Thanks for your time.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2013-10-01T21:51:37.484Z · LW(p) · GW(p)

Suppose I get in something that is billed as a transporter, but which does not preserve computational continuity. Suppose, for example, that it destructively scans my body, sends the information to the destination (a process which is not instantaneous, and during which no computation can take place), and reconstructs an identical body using that information out of local raw materials at my destination.

I don't know what "computation" or "computational continuity" means if it's considered to be separate from causal continuity, and I'm not sure other philosophers have any standard idea of this either. From the perspective of the Planck time, your brain is doing extremely slow 'computations' right now, it shall stand motionless a quintillion ticks and more before whatever arbitrary threshold you choose to call a neural firing. Or from a faster perspective, the 50 years of intervening time might as well be one clock tick. There can be no basic ontological distinction between fast and slow computation, and aside from that I have no idea what anyone in this thread could be talking about if it's distinct from causal continuity.

Replies from: TheOtherDave, None
comment by TheOtherDave · 2013-10-01T22:29:46.079Z · LW(p) · GW(p)

(shrug) It's Mark's term and I'm usually willing to make good-faith efforts to use other people's language when talking to them. And, yes, he seems to be drawing a distinction between computation that occurs with rapid enough updates that it seems continuous to a human observer and computation that doesn't. I have no idea why he considers that distinction important to personal identity, though... as far as I can tell, the whole thing depends on the implicit idea of identity as some kind of ghost in the machine that dissipates into the ether if not actively preserved by a measurable state change every N microseconds. I haven't confirmed that, though.

comment by [deleted] · 2013-10-02T01:30:10.615Z · LW(p) · GW(p)

Hypothesis: consciousness is what a physical interaction feels like from the inside.

Importantly, it is a property of the interacting system, which can have various degrees of coherence - a different concept than quantum coherence, which I am still developing: something along the lines of negative-entropic complexity. There is therefore a deep correlation between negentropy and consciousness. Random thermodynamic motion in a gas is about as minimum-conscious as you can get (lots of random interactions, but all short lived and decoherent). A rock is slightly more conscious due to its crystalline structure, but probably leads a rather boring existence (by our standards, at least). And so on, all the way up to the very negentropic primate brain which experiences a high degree of coherent experience that we call “consciousness” or “self.”

I know this sounds like making thinking an ontologically basic concept. It's rather the reverse - I am building the experience of thinking up from physical phenomena: consciousness is the experience of organized physical interactions. But I'm not yet convinced of it either. If you throw out the concept of coherent interaction (what I have been calling computational continuity), then it does reduce to causal continuity. But causal continuity does have its problems, which make me suspect it is not the final, ultimate answer...

Replies from: shminux, Richard_Kennaway
comment by Shmi (shminux) · 2013-10-02T03:53:47.953Z · LW(p) · GW(p)

Hypothesis: consciousness is what a physical interaction feels like from the inside.

I would imagine that consciousness (in a sense of self-awareness) is the ability to introspect into your own algorithm. The more you understand what makes you tick, rather than mindlessly following the inexplicable urges and instincts, the more conscious you are.

comment by Richard_Kennaway · 2013-10-02T07:00:12.099Z · LW(p) · GW(p)

Hypothesis: consciousness is what a physical interaction feels like from the inside.
...
consciousness is the experience of organized physical interactions.

How do you explain the existence of the phenomenon of "feeling like" and of "experience"?

Replies from: Kawoomba, None
comment by Kawoomba · 2013-10-02T07:54:43.799Z · LW(p) · GW(p)

I agree that the grandparent has circumvented addressing the crux of the matter; however, I feel (heh) that the notion of "explain" often comes with unrealistic expectations. It bears remembering that we merely describe relationships as succinctly as possible, and then that description is the "explanation".

While we would e.g. expect/hope for there to be some non-contradictory set of descriptions applying to both gravity and quantum phenomena (for which we'd eat a large complexity penalty, since complex but accurate descriptions always beat out simple but inaccurate descriptions; Occam's Razor applies only to choosing among fitting/not yet falsified descriptions), as soon as we've found some pinned-down description in some precise language, there's no guarantee -- or strictly speaking, need -- of an even simpler explanation.

A world running according to currently en-vogue physics, plus a box which cannot be described as an extension of said physics, but only in some other way, could in fact be fully explained, with no further explanans for the explanandum.

It seems pretty straightforward to note that there's no way to "derive" phenomena such as "feeling like" in the current physics framework, except of course to describe which states of matters/energy correspond to which qualia.

Such a description could be the explanation, with nothing further to be explained:

If it empirically turned out that a specific kind of matter needs to be arranged in the specific pattern of a vertebrate brain to correlate to qualia, that would "explain" consciousness. If it turned out (as we all expect) that the pattern alone suffices, then certain classes of instantiated algorithms (regardless of the hardware/wetware) would be conscious. Regardless, either description (if it turned out to be empirically sound) would be the explanation.

I also wonder, what could any answer within the current physics framework possibly look like, other than an asterisk behind the equations with the addendum of "values n1 ... nk for parameters p1 ... pk correlate with qualia x"?

comment by [deleted] · 2013-10-02T07:56:23.871Z · LW(p) · GW(p)

How do you explain "feeling like" and "experience" in general? This is LW so I assume you have a reductionist background and would offer an explanation based on information patterns, neuron firings, hormone levels, etc. But ultimately all of that reduces down to a big collection of quarks, each taking part in mostly random interactions on the scale of femtoseconds. The apparent organization of the brain is in the map, not the territory. So if subjective experience reduces down to neurons, and neurons reduce down to molecules, and molecules reduce to quarks and leptons, where then does the consciousness reside? "Information patterns" alone is an inadequate answer - that's at the level of the map, not the territory. Quarks and leptons combine into molecules, molecules into neural synapses, and the neurons connect into the 3lb information processing network that is my brain. Somewhere along the line, the subjective experience of "consciousness" arises. Where, exactly, would you propose that happens?

We know (from our own subjective experience) that something we call "consciousness" exists at the scale of the entire brain. If you assume that the workings of the brain are fully explained by its parts and their connections, and those parts explained by their sub-components and designs, etc., you eventually reach the ontologically basic level of quarks and leptons. Fundamentally the brain is nothing more than the interaction of a large number of quarks and leptons. So what precise interaction of fundamental particles is the basic unit of consciousness? What level of complexity is required before simple organic matter becomes a conscious mind?

It sounds ridiculous, but if you assume that quarks and leptons are "conscious," or rather that consciousness is the interaction of these various ontologically primitive, fundamental particles, a remarkably consistent theory emerges: one which dissolves the mystery of subjective consciousness by explaining it as the mere aggregation of interdependent interactions. Besides being simple, this is also predictive: it allows us to assert for a given situation (e.g. a teleporter or halted simulation) whether loss of personal identity occurs, which has implications for morality of real situations encountered in the construction of an AI.

Replies from: Richard_Kennaway, lavalamp
comment by Richard_Kennaway · 2013-10-02T08:52:12.760Z · LW(p) · GW(p)

How do you explain "feeling like" and "experience" in general? This is LW so I assume you have a reductionist background and would offer an explanation based on information patterns, neuron firings, hormone levels, etc.

I indeed have a reductionist background, but I offer no explanation, because I have none. I do not even know what an explanation could possibly look like; but neither do I take that as proof that there cannot be one. The story you tell surrounds the central mystery with many physical details, but even in your own account of it the mystery remains unresolved:

Somewhere along the line, the subjective experience of "consciousness" arises.

However much you assert that there must be an explanation, I see here no advance towards actually having one. What does it mean to attribute consciousness to subatomic particles and rocks? Does it predict anything, or does it only predict that we could make predictions about teleporters and simulations if we had a physical explanation of consciousness?

comment by lavalamp · 2013-10-02T17:51:50.034Z · LW(p) · GW(p)

The apparent organization of the brain is in the map, not the territory.

What do you mean by this? Are fMRIs a big conspiracy?

Fundamentally the brain is nothing more than the interaction of a large number of quarks and leptons.

This description applies equally to all objects. When you describe the brain this way, you leave out all its interesting characteristics, everything that makes it different from other blobs of interacting quarks and leptons.

Replies from: None
comment by [deleted] · 2013-10-02T19:57:33.606Z · LW(p) · GW(p)

What I'm saying is that the high-level organization is not ontologically primitive. When we talk about organizational patterns of the brain, or the operation of neural synapses, we're talking about very high-level abstractions. Yes, they are useful abstractions, primarily because they ignore unnecessary detail. But that detail is how they are actually implemented. The brain is a soup of organic particles with very high rates of particle interaction due simply to thermodynamic noise. At the nanometer and femtosecond scale, there is very little signal to noise; however, at the micrometer and millisecond scale general trends start to emerge, phenomena which form the substrate of our computation. But these high-level abstractions don't actually exist - they are just average approximations over time of lower-level, noisy interactions.

I assume you would agree that a normal adult brain in a human experiences a subjective feeling of consciousness that persists from moment to moment. I also think it's a fair bet that you would not think that a single electron bouncing around in some part of a synaptic pathway or electronic transistor has anything resembling a conscious experience. But somehow, a big aggregation of these random motions does add up to you or me. So at what point in the formation of a human brain, or the construction of an AI, does it become conscious? At what point does mere dead matter transform into sentience? Is this a hard cutoff? Is it gradual?

Speaking of gradations, certain animals can't recognize themselves in a mirror. If you use self-awareness as a metric as was argued elsewhere, does that mean they're not conscious? What about insects, which operate with a more distributed neural system. Dung beetles seem to accomplish most tasks by innate reflex response. Do they have at least a little, tiny subjective experience of consciousness? Or is their existence no more meaningful than that of a stapler?

Yes, this objection applies equally to all objects. That's precisely my point. Brains are not made of any kind of “mind stuff” - that's substance dualism which I reject. Furthermore, minds don't have a subjective experience separate from what is physically explainable - that's epiphenomenalism, similarly rejected. "Minds exist in information patterns" is a mysterious answer - information patterns are themselves merely evolving expressions in the configuration space of quarks & leptons. Any result of the information pattern must be explainable in terms of the interactions of its component parts, or else we are no longer talking about a reductionist universe. If I am coming at this with a particular bias, it is this: all aspects of mind including consciousness, subjective experience, qualia, or whatever you want to call it are fundamentally reducible to forces acting on elementary particles.

I see only two reductionist paths forward to take: (1) posit a new, fundamental law by which, at some aggregate level of complexity or organization, a computational substrate becomes conscious. How & why is not explained, and as far as I can tell there is no experimental way to determine where this cutoff is. But assume it is there. Or, (2) accept that, like everything else in the universe, consciousness reduces down to the properties of fundamental particles and their interactions (it is the interaction of particles). A quark and a lepton exchanging a photon is some minimal quantum Planck-level of conscious experience. Yes, that means that even a rock and a stapler have some level of conscious experience - barely distinguishable from thermal noise, but nonzero - but the payoff is a more predictive reductionist model of the universe. In terms of biting bullets, I think accepting many-worlds took more gumption than this.

Replies from: lavalamp
comment by lavalamp · 2013-10-02T20:19:41.020Z · LW(p) · GW(p)

I also think it's a fair bet that you would not think that a single electron bouncing around in some part of a synaptic pathway or electronic transistor has anything resembling a conscious experience. But somehow, a big aggregation of these random motions does add up to you or me. So at what point in the formation of a human brain, or the construction of an AI, does it become conscious? At what point does mere dead matter transform into sentience? Is this a hard cutoff? Is it gradual?

This is a Wrong Question. Consciousness, whatever it is, is (P=.99) a result of a computation. My computer exhibits a microsoft word behavior, but if I zoom in to the electrons and transistors in the CPU, I see no such microsoft word nature. It is silly to zoom in to quarks and leptons looking for the true essence of microsoft word. This is the way computations work-- a small piece of the computation simply does not display behavior that is like the entire computation. The CPU is not the computation. It is not the atoms of the brain that are conscious, it is the algorithm that they run, and the atoms are not the algorithm. Consciousness is produced by non-conscious things.

"Minds exist in information patterns" is a mysterious answer - information patterns are themselves merely evolving expressions in the configuration space of quarks & leptons. Any result of the information pattern must be explainable in terms of the interactions of its component parts, or else we are no longer talking about a reductionist universe. If I am coming at this with a particular bias, it is this: all aspects of mind including consciousness, subjective experience, qualia, or whatever you want to call it are fundamentally reducible to forces acting on elementary particles.

Minds exist in some algorithms ("information pattern" sounds too static for my taste). Your desire to reduce things to forces on elementary particles is misguided, I think, because you can do the same computation with many different substrates. The important thing, the thing we care about, is the computation, not the substrate. Sure, you can understand Microsoft Word at the level of quarks in a CPU executing assembly language, but it's much more useful to understand it in terms of functions and algorithms.

Replies from: None
comment by [deleted] · 2013-10-02T21:20:41.586Z · LW(p) · GW(p)

You've completely missed / ignored my point, again. Microsoft Word can be functionally reduced to electrons in transistors. The brain can be functionally reduced to biochemistry. Unless you resort to some form of dualism, the mind (qualia) is also similarly reduced.

just as computation can be brought down to the atomic scale (or smaller, with quantum computing), so too can conscious experiences be constructed out of such computational events. Indeed they are one and the same thing, just viewed from different perspectives.

Replies from: lavalamp
comment by lavalamp · 2013-10-02T21:37:51.930Z · LW(p) · GW(p)

The brain can be functionally reduced to biochemistry. Unless you resort to some form of dualism, the mind (qualia) is also similarly reduced.

I thought dualism meant you thought that there was ontologically basic consciousness stuff separate from ordinary matter?

I think the mind should be reduced to algorithms, and biochemistry is an implementation detail. This may make me a dualist by your usage of the word.

I think that it's equally silly to ask, "where is the microsoft-word-ness" about a subset of transistors in your CPU as it is to ask "where is the consciousness" about a subset of neurons in your brain. I see this as describing how non-ontologically-basic consciousness can be produced by non-conscious stuff.

You've completely missed / ignored my point, again.

Apologies; does the above address your point? If not I'm confused about your point.

Replies from: None
comment by [deleted] · 2013-10-02T22:04:16.381Z · LW(p) · GW(p)

I'm arguing that if you think the mind can be reduced to algorithms implemented on computational substrate, then it is a logical consequence from our understanding of the rules of physics and the nature of computation that what we call subjective experience must also scale down as you reduce a computational machine down to its parts. After all, the algorithms themselves are also reducible down to stepwise axiomatic logical operations, implemented as transistors or interpretable machine code.

The only way to preserve the common intuition that “it takes (simulation of) a brain or equivalent to produce a mind” is to posit some form of dualism. I don't think it is silly to ask “where is the microsoft-word-ness” about a subset of a computer - you can for example point to the regions of memory and disk where the spellchecker is located, and say “this is the part that matches user input against tables of linguistic data,” just like we point to regions of the brain and say “this is your language processing centers.”

The experience of having a single, unified me directing my conscious experience is an illusion - it's what the integration process feels like from the inside, but it does not correspond to reality (we have psychological data to back this up!). I am in fact a society of agents, each simpler but also relying on an entire bureaucracy of other agents in an enormous distributed structure. Eventually, though, things reduce down to individual circuits, then ultimately to the level of individual cell receptors and chemical pathways. At no point along the way is there a clear division where it is obvious that conscious experience ends and what follows is merely mechanical, electrical, and chemical processes. In fact, as I've tried to point out, the divisions between higher-level abstractions and their messy implementations are in the map, not the territory.

To assert that "this level of algorithmic complexity is a mind, and below that is mere machines" is a retreat to dualism, though you may not yet see it that way. What you are asserting is that there is this ontologically basic mind-ness which spontaneously emerges when an algorithm has reached a certain level of complexity, but which is not the aggregation of smaller phenomena.

Replies from: lavalamp
comment by lavalamp · 2013-10-02T22:31:33.602Z · LW(p) · GW(p)

I think we have really different models of how algorithms and their sub-components work.

it is a logical consequence from our understanding of the rules of physics and the nature of computation that what we call subjective experience must also scale down as you reduce a computational machine down to its parts.

Suppose I have a computation that produces the digits of pi. It has subroutines which multiply and add. Is it an accurate description of these subroutines that they have a scaled-down property of computes-pi-ness? I think this is not a useful way to understand things. Subroutines do not have a scaled-down percentage of the properties of their containing algorithm; they do a discrete chunk of its work. It's just madness to say that, e.g., your language processing center is 57% conscious.
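
To make that concrete, here's a toy sketch in Python (the divide() helper is an extra beyond the adders and multipliers mentioned above, just so the series works; none of this is meant as more than an illustration):

```python
# A program whose overall behavior approximates pi, built from subroutines
# that individually have no "computes-pi-ness" at all.

def add(a, b):
    # Just adds two numbers. No pi in here.
    return a + b

def multiply(a, b):
    # Just multiplies two numbers. Still no pi.
    return a * b

def divide(a, b):
    # Just divides. (Added so the series below can be written.)
    return a / b

def leibniz_pi(terms=100_000):
    # pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    total = 0.0
    sign = 1.0
    for k in range(terms):
        denominator = add(multiply(2.0, float(k)), 1.0)  # 2k + 1
        total = add(total, multiply(sign, divide(1.0, denominator)))
        sign = multiply(sign, -1.0)
    return multiply(4.0, total)

print(leibniz_pi())  # ~3.14158...
```

Zoom in on add() or multiply() and you won't find 57% of a pi-computation; the property only exists at the level of the whole algorithm.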

The experience of having a single, unified me directing my conscious experience is an illusion...

I agree with all this. Humans probably are not the minimal conscious system, and there are probably subsets of our component circuitry which maintain the property of consciousness. But yes, I maintain that eventually, you'll get to an algorithm that is conscious while none of its subroutines are.

If this makes me a dualist then I'm a dualist, but that doesn't feel right. I mean, the only way you can really explain a thing is to show how it arises from something that's not like it in the first place, right?

Replies from: None
comment by [deleted] · 2013-10-03T00:13:38.216Z · LW(p) · GW(p)

I think we have different models of what consciousness is. In your pi example, the multiplier has multiply-ness, and the adder has add-ness properties, and when combined together in a certain way you get computes-pi-ness. Likewise our minds have many, many, many different components which - somehow, someway - each have some small experiential quale, and these, summed together, yield the human condition.

Through brain damage studies, for example, we have descriptions of what it feels like to live without certain mental capabilities. I think you would agree with this, but for others reading, take this thought experiment: imagine that I were to systematically shut down portions of your brain, or, in simulation, delete regions of your memory space. For the purpose of the argument I do it slowly over time, in relatively small amounts, cleaning up dangling references so the whole system doesn't shut down. Certainly as time goes by your mental functionality is reduced, and you stop being capable of having experiences you once took for granted. But at what point, precisely, do you stop experiencing any qualia at all? When you're down to just a billion neurons? A million? A thousand? When you're down to just one processing region? Is one tiny algorithm on a single circuit enough?

Humans probably are not the minimal conscious system, and there are probably subsets of our component circuitry which maintain the property of consciousness. But yes, I maintain that eventually, you'll get to an algorithm that is conscious while none of its subroutines are.

What is the minimal conscious system? It's easy and perhaps accurate to say "I don't know." After all, neither one of us knows enough neural and cognitive science to make this call, I assume. But we should be able to answer this question: "If presented with criteria for a minimally conscious system, what would convince me of their validity?"

If this makes me a dualist then I'm a dualist, but that doesn't feel right. I mean, the only way you can really explain a thing is to show how it arises from something that's not like it in the first place, right?

Eliezer's post on reductionism is relevant here. In a reductionist universe, anything and everything is fully defined by its constituent elements - no more, no less. There's a popular phrase that has no place in reductionist theories: "the whole is greater than the sum of its parts." Typically what this actually means is that you failed to count the "parts" correctly: a parts list should also include spatial configurations and initial conditions, which together imply the dynamic behaviors as well. For example, a pulley is more than a hunk of metal and some rope, but it is fully defined if you specify how the metal is shaped, how the rope is threaded through it and fixed to objects with knots, how the whole contraption is oriented with respect to gravity, and the procedure for applying rope-pulling force. Combined with the fundamental laws of physics, this is a fully reductive explanation of a rope-pulley system, which is the sum of its fully-defined parts.
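
To make the pulley example concrete under the standard idealization (massless, frictionless pulley; massless, inextensible rope - an assumption of mine, not part of the original comment): hang a weight W from a single movable pulley and thread the rope so that two segments support the load. An ideal rope has uniform tension T, so the configuration alone already dictates

\[ 2T = W \quad\Rightarrow\quad T = \frac{W}{2} \]

The factor of two isn't an extra "whole greater than its parts" fact; it falls straight out of how the rope is threaded, i.e., out of the spatial configuration counted in the parts list.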

And so it goes with consciousness. Unless we are comfortable with the mysterious answers provided by dualism - or empirical evidence like confirmation of psychic phenomena compels us to go there - then we must demand an explanation that accounts for consciousness fully as the aggregation of smaller processes.

When I look at explanations of the workings of the brain, starting with the highest-level psychological theories and neural structure, and working my way all the way down the abstraction hierarchy to individual neural synapses and biochemical pathways, nowhere along the way do I see an obvious place to stop and say "here is where consciousness begins!" Likewise, I can start from the level of mere atoms and work my way up to the full neural architecture, without finding any step that adds something which could be consciousness, but which isn't fundamentally like the levels below it. Yet when you get to the highest level, you've described the full brain without finding consciousness anywhere along the way.

I can see how this leads otherwise intelligent philosophers like David Chalmers to epiphenomenalism. But I'm not going to go down that path, because the whole situation is the result of mental confusion.

The Standard Rationalist Answer is that mental processes are information patterns, nothing more, and that consciousness is an illusion, end of story. But that still leaves me confused! It's not like free will, for example, where because of the mind projection fallacy I think I have free will due to how a deterministic decision theory algorithm feels from the inside. I get that. No, the answer of "that subjective experience of consciousness isn't real, get over it" is unsatisfactory, because if I don't have consciousness, how am I experiencing thinking in the first place? Cogito ergo sum.

However there is a way out. I went looking for a source of consciousness because I, like nearly every other philosopher, assumed that there was something special and unique which sets brains aside as having minds, which other, more mundane objects - like rocks and staplers - do not possess. That seems so obviously true, but honestly I have no real justification for that belief. So let's try negating it. What is possible if we don't exclude mundane things from having minds too?

Well, what does it feel like to be a quark and a lepton exchanging a photon? I'm not really sure, but let's call that approximately the minimum possible "experience", and for the duration of their continuous interaction over time, the two particles share a "mind". Arrange a number of these objects together and you get an atom, which itself also has a shared/merged experience so long as the particles remain in bonded interaction. Arrange a lot of atoms together and you get an electrical transistor. Now we're finally starting to get to a level where I have some idea of what the "shared experience of being a transistor" would be (rather boring, by my standards), and more importantly, it's clear how that experience is aggregated together from its constituent parts. From here, computing theory takes over as more complex interdependent systems are constructed, each merging experiences together into a shared hive mind, until you reach the level of the human being or AI.

Are you at least following what I'm saying, even if you don't agree?

Replies from: lavalamp
comment by lavalamp · 2013-10-03T00:57:33.962Z · LW(p) · GW(p)

That was a very long comment (thank you for your effort) and I don't think I have the energy to exhaustively go through it.

I believe I follow what you're saying. It doesn't make much sense to me, so maybe that belief is false.

I think the fact that you can start with a brain, which is presumably conscious, and zoom in all the way looking for the consciousness boundary, then start with a quark, which is presumably not conscious, and zoom all the way out to the entire brain, in both cases without finding a consciousness boundary, means that the best we can do at the moment is set upper and lower bounds.

A minimally conscious system-- say, something that can convince me that it thinks it is conscious. "echo 'I'm conscious!'" doesn't quite cut it, things that recognize themselves in mirrors probably do, and I could go either way on the stuff in between.

I think your reductionism is a little misapplied. My pi-calculating program develops a new property of pi-computation when you put the adders and multipliers together right, but is completely described in terms of adders and multipliers. I expect consciousness to be exactly the same; it'll be completely described in terms of qualia generating algorithms (or some such), which won't themselves have the consciousness property.

This is hard to see because the algorithms are written in spaghetti code, in the wiring between neurons. In computer terms, we have access to the I/O system and all the gates in the CPU, but we don't currently know how they're connected. Looking at more or fewer of the gates doesn't help, because the critical piece of information is how they're connected and what algorithm they implement.

My guess (P=.65) is that qualia are going to turn out to be something like vectors in a feature space. Under this model, clearly systems incapable of representing such a vector can't have any qualia at all. Rocks and single molecules, for example.
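
Purely as a toy illustration of what "vectors in a feature space" could mean (the dimension names here are invented; this isn't a claim about which features real qualia would use):

```python
import math

# A "quale" as a point in a small, made-up feature space.
FEATURES = ("brightness", "warmth", "sharpness")

def quale(brightness, warmth, sharpness):
    return (brightness, warmth, sharpness)

def similarity(q1, q2):
    # On this toy model, nearer points in feature space = more similar experiences.
    return -math.dist(q1, q2)

red_ish = quale(0.8, 0.9, 0.3)
orange_ish = quale(0.9, 0.8, 0.4)
blue_ish = quale(0.6, 0.1, 0.7)

print(similarity(red_ish, orange_ish) > similarity(red_ish, blue_ish))  # True
```

A rock has no machinery for representing or updating anything like these vectors, which is the sense in which it would get no qualia at all on this model.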

comment by pengvado · 2013-10-01T18:09:33.703Z · LW(p) · GW(p)

What relevance does personal identity have to TDT? TDT doesn't depend on whether the other instances of TDT are in copies of you, or in other people who merely use the same decision theory as you.

Replies from: None
comment by [deleted] · 2013-10-01T18:33:08.762Z · LW(p) · GW(p)

It has relevance for the basilisk scenario, which I'm not sure I should say any more about.

comment by Shmi (shminux) · 2013-09-30T23:00:59.789Z · LW(p) · GW(p)

I want my Alcor contract to explicitly forbid uploading as a restoration process, because I am unconvinced that a simulation of my destructively scanned frozen brain would really be a continuation of my personal identity.

Like TheOtherDave (I presume), I consider my identity to be adequately described by whatever Turing machine can emulate my brain, or at least its prefrontal cortex + relevant memory storage. I suspect that a faithful simulation of just my Brodmann area 10 coupled with a large chunk of my memories would restore enough of my self-awareness to be considered "me". This sim-me would probably lose most of my emotions without the rest of the brain, but it is still infinitely better than none.

Replies from: TheOtherDave, someonewrongonthenet
comment by TheOtherDave · 2013-10-01T00:12:21.271Z · LW(p) · GW(p)

Like TheOtherDave (I presume), I consider my identity to be adequately described by whatever Turing machine can emulate my brain, or at least its prefrontal cortex + relevant memory storage.

There's a very wide range of possible minds I consider to preserve my identity; I'm not sure the majority of those emulate my prefrontal cortex significantly more closely than they emulate yours, and the majority of my memories are not shared by the majority of those minds.

Replies from: shminux
comment by Shmi (shminux) · 2013-10-01T00:53:14.734Z · LW(p) · GW(p)

Interesting. I wonder what you would consider a mind that preserves your identity. For example, I assume that the total of your posts online, plus whatever other information is available without some hypothetical future brain scanner, all running as a process on some simulator, is probably not enough.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-01T02:18:51.890Z · LW(p) · GW(p)

At one extreme, if I assume those posts are being used to create a me-simulation by me-simulation-creator that literally knows nothing else about humans, then I'm pretty confident that the result is nothing I would identify with. (I'm also pretty sure this scenario is internally inconsistent.)

At another extreme, if I assume the me-simulation-creator has access to a standard template for my general demographic and is just looking to customize that template sufficiently to pick out some subset of the volume of mindspace my sufficiently preserved identity defines... then maybe. I'd have to think a lot harder about what information is in my online posts and what information would plausibly be in such a template to even express a confidence interval about that.

That said, I'm certainly not comfortable treating the result of that process as preserving "me."

Then again I'm also not comfortable treating the result of living a thousand years as preserving "me."

comment by someonewrongonthenet · 2013-10-01T03:03:07.266Z · LW(p) · GW(p)

a large chunk of my memories

You'll need the rest of the brain because these other memories would be distributed throughout the rest of your cortex. The hippocampus only contains recent episodic memories.

If you lost your temporal lobe, for example, you'd lose all non-episodic knowledge concerning what the names of things are, how they are categorized, and what the relationships between them are.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-01T03:07:43.899Z · LW(p) · GW(p)

That said, I'm not sure why I should care much about having my non-episodic knowledge replaced with an off-the-shelf encyclopedia module. I don't identify with it much.

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-10-01T03:30:51.852Z · LW(p) · GW(p)

If you only kept the hippocampus, you'd lose your non-recent episodic memories too. But technical issues aside, let me defend the "encyclopedia":

Episodic memory is basically a cassette reel of your life, along with a few personalized associations and maybe memories of thoughts and emotions. Everything that we associate with the word knowledge is non-episodic. It's not just verbal labels - that was just a handy example that I happened to know the brain region for. I'd actually care more about that stuff - the non-episodic memories - than the episodic stuff.

Things like "what is your wife's name and what does her face look like" are non-episodic memory. You don't have to think back to a time when you specifically saw your wife to remember what her name and face are, and that you love her - that information is treated as a fact independent of any specific memory, indelibly etched into your model of the world. Cognitively speaking, "I love my wife Stacy, she looks like this" is as much of a fact as "grass is a green plant" and they are both non-episodic memories. Your episodic memory reel wouldn't even make sense without that sort of information. I'd still identify someone with memory loss, but retaining my non-episodic memory, as me. I'd identify someone with only my episodic memories as someone else, looking at a reel of memory that does not belong to them and means nothing to them.

(Trigger Warning: the link contains diary writing which is sad, horrifying, and nonfiction.) This is what complete episodic memory loss looks like. Patients like this can still remember the names and faces of people they love.

Ironically... that area (Brodmann area 10) might actually be replaceable. I'm not sure whether any personalized memories are kept there - I don't know what that specific region does, but it's in an area that mostly deals with executive function - which is important for personality, but not necessarily individuality.

Replies from: TheOtherDave, shminux
comment by TheOtherDave · 2013-10-01T03:46:42.128Z · LW(p) · GW(p)

I take it you're assuming that information about my husband, and about my relationship to my husband, isn't in the encyclopedia module along with information about mice and omelettes and your relationship to your wife.

If that's true, then sure, I'd prefer not to lose that information.

Replies from: someonewrongonthenet, CynicalOptimist
comment by someonewrongonthenet · 2013-10-01T04:05:26.005Z · LW(p) · GW(p)

I take it you're assuming

Well...yeah, I was. I thought the whole idea of having an encyclopedia was to eliminate redundancy through standardization of the parts of the brain that were not important for individuality?

If your husband and my husband, your omelette and my omelette, are all stored in the encyclopedia, it wouldn't be an "off-the-shelf encyclopedia module" anymore. It would be an index containing individual people's non-episodic knowledge. At that point, it's just an index of partial uploads. We can't standardize that encyclopedia to everyone: if the thing that stores your omelette and your husband went around viewing my episodic reel and knowing all the personal stuff about my omelette and husband... that would be weird, and the resulting being would be very confused (let alone if the entire human race was in there - I'm not sure how that would even work).

(Also, going back into the technical stuff, there may or may not be a solid dividing line between very old episodic memory and non-episodic memory.)

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-01T05:07:07.493Z · LW(p) · GW(p)

Sure, if your omelette and my omelette are so distinct that there is no common data structure that can serve as a referent for both, and ditto for all the other people in the world, then the whole idea of an encyclopedia falls apart. But that doesn't seem terribly likely to me.

Your concept of an omelette probably isn't exactly isomorphic to mine, but there's probably a parametrizable omelette data structure we can construct that, along with a handful of parameter settings for each individual, can capture everyone's omelette. The parameter settings go in the representation of the individual; the omelette data structure goes in the encyclopedia.

And, in addition, there's a bunch of individualizing episodic memory on top of that... memories of cooking particular omelettes, of learning to cook an omelette, of learning particular recipes, of that time what ought to have been an omelette turned into a black smear on the pan, etc. And each of those episodic memories refers to the shared omelette data structure, but is stored with and is unique to the uploaded agent. (Maybe. It may turn out that our individual episodic memories have a lot in common as well, such that we can store a standard lifetime's memories in the shared encyclopedia and just store a few million bits of parameter settings in each individual profile. I suspect we overestimate how unique our personal narratives are, honestly.)
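
A minimal sketch of the data layout I'm gesturing at, in Python (every name and field here is invented; the point is only the shape: one shared template, small per-person parameter settings, and episodic memories that point at the shared structure):

```python
from dataclasses import dataclass, field

@dataclass
class ConceptTemplate:
    """Lives in the shared encyclopedia; stored exactly once."""
    name: str
    defaults: dict

@dataclass
class IndividualProfile:
    """Per-upload data: small parameter overrides plus private episodes."""
    owner: str
    overrides: dict = field(default_factory=dict)  # this individual's small residue
    episodes: list = field(default_factory=list)   # each episode references a template by name

    def concept(self, template: ConceptTemplate) -> dict:
        # The individual's concept = shared defaults + personal overrides.
        return {**template.defaults, **self.overrides.get(template.name, {})}

encyclopedia = {
    "omelette": ConceptTemplate("omelette", {"base": "beaten egg", "shape": "folded", "filling": "unspecified"}),
}

dave = IndividualProfile(
    owner="dave",
    overrides={"omelette": {"filling": "mushroom"}},
    episodes=[("omelette", "the time it turned into a black smear on the pan")],
)

print(dave.concept(encyclopedia["omelette"]))
# {'base': 'beaten egg', 'shape': 'folded', 'filling': 'mushroom'}
```

The encyclopedia entry is stored once; each profile carries only its overrides and its private episodes.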

Similarly, it may be that our relationships with our husbands are so distinct that there is no common data structure that can serve as a referent for both. But that doesn't seem terribly likely to me. Your relationship with your husband isn't exactly isomorphic to mine, of course, but it can likely similarly be captured by a common parameterizable relationship-to-husband data structure.

As for the actual individual who happens to be my husband, well, the majority of the information about him is common to all kinds of relationships with any number of people. He is his father's son and his stepmother's stepson and my mom's son-in-law and so on and so forth. And, sure, each of those people knows different things, but they know those things about the same person; there is a central core. That core goes in the encyclopedia, and pointers to what subset each person knows about him goes in their individual profiles (along with their personal experiences and whatever idiosyncratic beliefs they have about him).

So, yes, I would say that your husband and my husband and your omelette and my omelette are all stored in the encyclopedia. You can call that an index of partial uploads if you like, but it fails to incorporate whatever additional computations create first-person experience. It's just a passive data structure.

Incidentally and unrelatedly, I'm not nearly as committed as you sound to preserving our current ignorance of one another's perspective in this new architecture.

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-10-01T09:04:39.061Z · LW(p) · GW(p)

I'm really skeptical that parametric functions which vary on dimensions concerning omelettes (Egg species? Color? Ingredients? How does this even work?) are a more efficient or more accurate way of preserving what our wetware encodes, compared to simulating the neural networks devoted to dealing with omelettes. I wouldn't even know how to start working on the problem of mapping a conceptual representation of an omelette into parametric functions (unless we're just using the parametric functions to model the properties of individual neurons - that's fine).

Can you give an example concerning what sort of dimension you would parametrize so I have a better idea of what you mean?

Incidentally and unrelatedly, I'm not nearly as committed as you sound to preserving our current ignorance of one another's perspective in this new architecture.

I was more worried that it might break stuff (as in, resulting beings would need to be built quite differently in order to function) if one-another's perspectives would overlap. Also, that brings us back to the original question I was raising about living forever - what exactly is it that we value and want to preserve?

Replies from: TheOtherDave, TheOtherDave
comment by TheOtherDave · 2013-10-01T14:53:26.192Z · LW(p) · GW(p)

Can you give an example concerning what sort of dimension you would parametrize so I have a better idea of what you mean?

Not really. If I were serious about implementing this, I would start collecting distinct instances of omelette-concepts and analyzing them for variation, but I'm not going to do that. My expectation is that if I did, the most useful dimensions of variability would not map to any attributes that we would ordinarily think of or have English words for.

Perhaps what I have in mind can be said more clearly this way: there's a certain amount of information that picks out the space of all human omelette-concepts from the space of all possible concepts... call that bitstring S1. There's a certain amount of information that picks out the space of my omelette-concept from the space of all human omelette-concepts... call that bitstring S2.

S2 is much, much, shorter than S1.

It's inefficient to have 7 billion human minds each of which is taking up valuable bits storing 7 billion copies of S1 along with their individual S2s. Why in the world would we do that, positing an architecture that didn't physically require it? Run a bloody compression algorithm, store S1 somewhere, have each human mind refer to it.

I have no idea what S1 or S2 are.

And I don't expect that they're expressible in words, any more than I can express which pieces of a movie are stored as indexed substrings... it's not like MPEG compression of a movie of an auto race creates an indexed "car" data structure with parameters representing color, make, model, etc. It just identifies repeated substrings and indexes them, and takes advantage of the fact that sequential frames share many substrings in common if properly parsed.

But I'm committed enough to a computational model of human concept storage that I believe they exist. (Of course, it's possible that our concept-space of an omelette simply can't be picked out by a bit-string, but I can't see why I should take that possibility seriously.)
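
Here's a toy demonstration of that compression point in Python, using zlib's preset-dictionary feature (the strings standing in for S1 and the personal residue are made up; only the storage pattern matters):

```python
import zlib

# S1: the information shared by every human omelette-concept, stored exactly once.
shared_s1 = (b"an omelette is beaten egg cooked flat in a hot pan, often folded "
             b"around a filling such as cheese, herbs, mushrooms, or ham; "
             b"associated with frying pans, whisks, butter, and breakfast. ")

# One individual's full concept: the shared part plus a small personal residue (the S2 analogue).
dave_concept = shared_s1 + b"dave: prefers mushrooms; once turned one into a black smear."

def stored_size_alone(data):
    # Each mind keeps its own compressed copy of everything.
    return len(zlib.compress(data))

def stored_size_with_shared_dict(data, zdict):
    # Each mind stores only what the shared dictionary (S1) can't supply.
    comp = zlib.compressobj(zdict=zdict)
    return len(comp.compress(data) + comp.flush())

print("raw bytes:", len(dave_concept))
print("compressed alone:", stored_size_alone(dave_concept))
print("compressed against shared S1:", stored_size_with_shared_dict(dave_concept, shared_s1))

# Reconstruction just needs a pointer to the shared S1.
comp = zlib.compressobj(zdict=shared_s1)
blob = comp.compress(dave_concept) + comp.flush()
assert zlib.decompressobj(zdict=shared_s1).decompress(blob) == dave_concept
```

The per-individual blob should come out noticeably smaller than the standalone compressed copy, which is the whole argument: keep S1 in one place, and let each mind store only a pointer plus its own small residue.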

comment by TheOtherDave · 2013-10-01T15:02:12.208Z · LW(p) · GW(p)

Oh, and agreed that we would change if we were capable of sharing one another's perspectives.
I'm not particularly interested in preserving my current cognitive isolation from other humans, though... I value it, but I value it less than I value the ability to easily share perspectives, and they seem to be opposed values.

comment by CynicalOptimist · 2016-11-17T20:08:01.244Z · LW(p) · GW(p)

I think I've got a good response for this one.

My non-episodic memory contains the "facts" that Buffy the Vampire Slayer was one of the best television shows ever made, and that Pink Floyd aren't an interesting band. My boyfriend's non-episodic memory contains the facts that Buffy was boring, unoriginal, and repetitive (and that Pink Floyd's music is transcendentally good).

Objectively, these are opinions, not facts. But we experience them as facts. If I want to preserve my sense of identity, then I would need to retain the facts that were in my non-episodic memory. More than that, I would also lose my sense of self if I gained contradictory memories. I would need to have my non-episodic memories and not have the facts from my boyfriend's memory.

That's the reason why "off the shelf" doesn't sound suitable in this context.

Replies from: TheOtherDave
comment by TheOtherDave · 2016-11-18T22:22:01.063Z · LW(p) · GW(p)

So, on one level, my response to this is similar to the one I gave (a few years ago) [http://lesswrong.com/lw/qx/timeless_identity/9trc]... I agree that there's a personal relationship with BtVS, just like there's a personal relationship with my husband, that we'd want to preserve if we wanted to perfectly preserve me.

I was merely arguing that the bitlength of that personal information is much less than the actual information content of my brain, and there's a great deal of compression leverage to be gained by taking the shared memories of BtVS out of both of your heads (and the heads of thousands of other viewers), replacing them with pointers to a common library representation of the show, and then having your personal relationship refer to the common library representation rather than your private copy.

The personal relationship remains local and private, but it takes up way less space than your mind currently does.

That said... coming back to this conversation after three years, I'm finding I just care less and less about preserving whatever sense of self depends on these sorts of idiosyncratic judgments.

I mean, when you try to recall a BtVS episode, your memory is imperfect... if you watch it again, you'll uncover all sorts of information you either forgot or remembered wrong. If I offered to give you perfect eidetic recall of BtVS -- no distortion of your current facts about the goodness of it, except insofar as those facts turn out to be incompatible with an actual perception (e.g., you'd have changed your mind if you watched it again on TV, too) -- would you take it?

I would. I mean, ultimately, what does it matter if I replace my current vague memory of the soap opera Spike was obsessively watching with a more specific memory of its name and whatever else we learned about it? Yes, that vague memory is part of my unique identity, I guess, in that nobody else has quite exactly that vague memory... but so what? That's not enough to make it worth preserving.

And for all I know, maybe you agree with me... maybe you don't want to preserve your private "facts" about what kind of tie Giles was wearing when Angel tortured him, etc., but you draw the line at losing your private "facts" about how good the show was. Which is fine, you care about what you care about.

But if you told me right now that I'm actually an upload with reconstructed memories, and that there was a glitch such that my current "fact" about BtVS being a good show for its time was mis-reconstructed, and Dave before he died thought it was mediocre... well, so what?

I mean, before my stroke, I really disliked peppers. After my stroke, peppers tasted pretty good. This was startling, but it posed no sort of challenge to my sense of self.

Apparently (Me + likes peppers) ~= (Me + dislikes peppers) as far as I'm concerned.

I suspect there's a million other things like that.

comment by Shmi (shminux) · 2013-10-01T06:28:16.675Z · LW(p) · GW(p)

Ironically... that area (Brodmann area 10) might actually be replaceable. I'm not sure whether any personalized memories are kept there - I don't know what that specific region does, but it's in an area that mostly deals with executive function - which is important for personality, but not necessarily individuality.

What's the difference between personality and individuality?

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-10-01T09:29:20.746Z · LW(p) · GW(p)

In my head:

Personality is a set of dichotomous variables plotted on a bell curve. "Einstein was extroverted, charismatic, nonconforming, and prone to absent-mindedness" describes his personality. We all have these traits in various amounts. You can turn some of these personality knobs really easily with drugs. I can't specify Einstein out of every person in the world using only his personality traits - I can only specify individuals similar to him.

Individuality is stuff that's specific to the person. "Einstein's second marriage was to his cousin and he had at least 6 affairs. He admired Spinoza, and was a contemporary of Tagore. He was a socialist and cared about civil rights. He had always thought there was something wrong about refrigerators." Not all of these are dichotomous variables - you either spoke to Tagore or you didn't. And it makes no sense to put people on a "satisfaction with refrigerators" spectrum, even though I suppose you could if you wanted to. And all this information together points specifically to Einstein, and no one else in the world. Everyone in the world has a set of unique traits, like fingerprints - and it doesn't even make sense to ask what the "average" is, since most of the variables don't exist on the same dimension.
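
In toy code terms (my own made-up framing and values, nothing rigorous), the distinction looks something like this:

```python
# Personality: scores on shared, roughly bell-curved trait dimensions
# that everyone has a value on.
personality = {
    "extraversion": +1.8,       # standard deviations from the population mean
    "nonconformity": +2.1,
    "absent_mindedness": +1.2,
}

# Individuality: an open-ended set of facts specific to one person,
# with no common axes to average over.
individuality = {
    "second_marriage_to_cousin": True,
    "admired": ["Spinoza"],
    "contemporary_of": ["Tagore"],
    "worried_about_refrigerators": True,
}

# Personality narrows you down to a cluster of similar people;
# enough individuality facts pick out exactly one person.
```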

And...well, when it comes to Area 10, just intuitively, do you really want to define yourself by a few variables that influence your executive function? Personally I define myself partially by my ideas, and partially by my values...and the former is definitely in the "individuality" territory.

Replies from: shminux
comment by Shmi (shminux) · 2013-10-01T16:49:14.257Z · LW(p) · GW(p)

OK, I understand what you mean by personality vs individuality. However, I doubt that the functionality of BA10 can be described "by a few variables that influence your executive function". Then again, no one knows anything definite about it.

comment by A1987dM (army1987) · 2013-10-01T00:14:43.493Z · LW(p) · GW(p)

That said, I agree completely that the kinds of vague identity concerns about cryonics that the quoted sentence with "not" removed would be raising would also arise, were one consistent, about routine continuation of existence over time.

There are things that when I go to bed to wake up eight hours later are very nearly preserved but if I woke up sixty years later wouldn't be, e.g. other people's memories of me (see I Am a Strange Loop) or the culture of the place where I live (see Good Bye, Lenin!).

(I'm not saying whether this is one of the main reasons why I'm not signed up for cryonics.)

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-01T01:48:07.267Z · LW(p) · GW(p)

Point.

comment by someonewrongonthenet · 2013-10-01T02:51:03.050Z · LW(p) · GW(p)

Because the notion of "me" is not an ontologically basic category and the question of whether the "real me" wakes up is a question that ought to be un-asked.

I'm a bit confused at the question...you articulated my intent with that sentence perfectly in your other post.

Hrm.. ambiguous semantics. I took it to imply acceptance of the idea but not elevation of its importance, but I see how it could be interpreted differently.

and, as TheOtherDave said,

presumably that also helps explain how they can sleep at night.

EDIT: Nevermind, I now understand which part of my statement you misunderstood.

I'm not accepting-but-not-elevating the idea that the 'Real me" doesn't wake up on the other side. Rather, I'm saying that the questions of personal identity over time do not make sense in the first place. It's like asking "which color is the most moist"?

You actually continue functioning when you sleep; it's just that you don't remember the details once you wake up. A more useful example for such a discussion is general anesthesia, which shuts down the regions of the brain associated with consciousness. If personal identity is in fact derived from continuity of computation, then it is plausible that general anesthesia would result in a "different you" waking up after the operation. The application to cryonics depends greatly on the subtle distinction of whether vitrification (and more importantly, the recovery process) slows down or stops computation. This has been a source of philosophical angst for me personally, but I'm still a cryonics member.

More troubling is the application to uploading. I haven't done this yet, but I want my Alcor contract to explicitly forbid uploading as a restoration process, because I am unconvinced that a simulation of my destructively scanned frozen brain would really be a continuation of my personal identity. I was hoping that “Timeless Identity” would address this point, but sadly it punts the issue.

The root of your philosophical dilemma is that "personal identity" is a conceptual substitution for soul - a subjective thread that connects you over space and time.

No such thing exists. There is no specific location in your brain which is you. There is no specific time point which is you. Subjective experience exists only in the fleeting present. The only "thread" connecting you to your past experiences is your current subjective experience of remembering them. That's all.

Replies from: None, AFinerGrain_duplicate0.4555006182262571
comment by [deleted] · 2013-10-01T17:15:55.281Z · LW(p) · GW(p)

The root of your philosophical dilemma is that "personal identity" is a conceptual substitution for soul - a subjective thread that connects you over space and time.

No such thing exists. There is no specific location in your brain which is you. There is no specific time point which is you. Subjective experience exists only in the fleeting present. The only "thread" connecting you to your past experiences is your current subjective experience of remembering them. That's all.

I have a strong subjective experience of moment-to-moment continuity, even if only in the fleeting present. Simply saying “no such thing exists” doesn't do anything to resolve the underlying confusion. If no such thing as personal identity exists, then why do I experience it? What is the underlying insight that eliminates the question?

This is not an abstract question either. It has huge implications for the construction of timeless decision theory and utilitarian metamorality.

Replies from: shminux
comment by Shmi (shminux) · 2013-10-01T17:24:29.989Z · LW(p) · GW(p)

"a strong subjective experience of moment-to-moment continuity" is an artifact of the algorithm your brain implements. It certainly exists in as much as the algorithm itself exists. So does your personal identity. If in the future it becomes possible to run the same algorithm on a different hardware, it will still produce this sense of personal identity and will feel like "you" from the inside.

Replies from: None
comment by [deleted] · 2013-10-01T17:53:06.374Z · LW(p) · GW(p)

Yes, I'm not questioning whether a future simulation / emulation of me would have an identical subjective experience. To reject that would be a retreat to epiphenomenalism.

Let me rephrase the question, so as to expose the problem: if I were to use advanced technology to have my brain scanned today, then got hit by a bus and cremated, and then 50 years from now that brain scan is used to emulate me, what would my subjective experience be today? Do I experience “HONK Screeeech, bam” then wake up in a computer, or is it “HONK Screeeech, bam” and oblivion?

Yes, I realize that both cases result in a computer simulation of Mark in 2063 claiming to have just woken up in the brain scanner, with a subjective feeling of continuity. But is that belief true? In the two situations there's a very different outcome for the Mark of 2013. If you can't see that, then I think we are talking about different things, and maybe we should taboo the phrase “personal/subjective identity”.

Replies from: shminux, TheOtherDave, lavalamp
comment by Shmi (shminux) · 2013-10-01T18:32:09.245Z · LW(p) · GW(p)

if I were to use advanced technology to have my brain scanned today, then got hit by a bus and cremated, and then 50 years from now that brain scan is used to emulate me, what would my subjective experience be today? Do I experience “HONK Screeeech, bam” then wake up in a computer, or is it “HONK Screeeech, bam” and oblivion?

Ah, hopefully I'm slowly getting what you mean. So, there was the original you, Mark 2013, whose algorithm was terminated soon after it processed the inputs “HONK Screeeech, bam”, and the new you, Mark 2063, whose experience is “HONK Screeeech, bam” then "wake up in a computer". You are concerned with... I'm having trouble articulating what exactly... something about the lack of experiences of Mark 2013? But, say, if Mark 2013 was restored to life in mostly the same physical body after a 50-year "oblivion", you wouldn't be?

Replies from: None
comment by [deleted] · 2013-10-01T18:55:52.857Z · LW(p) · GW(p)

Ah, hopefully I'm slowly getting what you mean. So, there was the original you, Mark 2013, whose algorithm was terminated soon after it processed the inputs “HONK Screeeech, bam”, and the new you, Mark 2063, whose experience is “HONK Screeeech, bam” then "wake up in a computer".

Pretty much correct. To be specific, if computational continuity is what matters, then Mark!2063 has my memories, but was in fact “born” the moment the simulation started, 50 years in the future. That's when his identity began, whereas mine ended when I died in 2013.

This seems a little more intuitive when you consider switching on 100 different emulations of me at the same time. Did I somehow split into 100 different persons? Or were there in fact 101 separate subjective identities, 1 of which terminated in 2013 and 100 new ones created for the simulations? The latter is a more straightforward explanation, IMHO.

You are concerned with... I'm having trouble articulating what exactly... something about the lack of experiences of Mark 2013? But, say, if Mark 2013 was restored to life in mostly the same physical body after a 50-year "oblivion", you wouldn't be?

No, that would make little difference, as it's pretty clear that physical continuity is an illusion. If pattern or causal continuity were correct, then it'd be fine, but both theories introduce other problems. If computational continuity is correct, then a reconstructed brain wouldn't be me any more than a simulation would. However, it's possible that my cryogenically vitrified brain would preserve identity, if it were slowly brought back online without interruption.

I'd have to learn more about how general anesthesia works to decide if personal identity would be preserved on the operating table (until then, it scares the crap out of me). Likewise, an AI or emulation running on a computer that is powered off and then later resumed would also break identity, but depending on the underlying nature of computation and subjective experience, task switching and online suspend/resume may or may not result in cycling identity.

I'll stop there because I'm trying to formulate all these thoughts into a longer post, or maybe a sequence of posts.

Replies from: TheOtherDave, lavalamp, shminux
comment by TheOtherDave · 2013-10-01T19:02:09.148Z · LW(p) · GW(p)

Did I somehow split into 100 different persons? Or were there in fact 101 separate subjective identities, 1 of which terminated in 2013 and 100 new ones created for the simulations? The latter is a more straightforward explanation, IMHO.

I would say that yes, at T1 there's one of me, and at T2 there's 100 of me.
I don't see what makes "there's 101 of me, one of which terminated at T1" more straightforward than that.

Replies from: None
comment by [deleted] · 2013-10-01T19:47:11.912Z · LW(p) · GW(p)

I don't see what makes "there's 101 of me, one of which terminated at T1" more straightforward than that.

It's wrapped up in the question of what happened to that original copy that (maybe?) terminated at T1. Did that original version of you terminate completely and forever? Then I wouldn't count it among the 100 copies that were created later.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-01T19:54:44.300Z · LW(p) · GW(p)

Sure, obviously if it terminated then it isn't around afterwards.
Equally obviously, if it's around afterwards, it didn't terminate.

You said your metric for determining which description is accurate was (among other things) simplicity, and you claimed that the "101 - 1" answer is more straightforward (simpler?) than the "100" answer.
You can't now turn around and say that the reason it's simpler is because the "101-1" answer is accurate.

Either it's accurate because it's simpler, or it's simpler because it's accurate, but to assert both at once is illegitimate.

Replies from: None
comment by [deleted] · 2013-10-01T20:29:23.721Z · LW(p) · GW(p)

I'll address this in my sequence, which hopefully I will have time to write. The short answer is that what matters isn't which explanation of this situation is simpler, requires fewer words, a smaller number, or whatever. What matters is: which general rule is simpler?

Pattern or causal continuity leads to all sorts of weird edge cases, some of which I've tried to explain in my examples here, and in other cases fails (mysterious answer) to provide a definitive prediction of subjective experience. There may be other solutions, but computational continuity at the very least provides a simpler model, even if it results in the more "complex" 101-1 answer.

It's sorta like wave collapse vs. many-worlds. Wave collapse is simpler (single world), right? No. Many-worlds is the simpler theory because it requires fewer rules, even though it results in a mind-bogglingly more complex and varied multiverse. In this case I think computational continuity, in the way I formulated it, reduces consciousness down to a simple general explanation that dissolves the question with no residual problems.

Kinda like how free will is what a decision algorithm feels like from the inside, consciousness / subjective experience is what any computational process feels like from the inside. And therefore, when the computational process terminates, so too does the subjective experience.

comment by lavalamp · 2013-10-01T19:13:06.624Z · LW(p) · GW(p)

Can you taboo "personal identity"? I don't understand what important thing you could lose by going under general anesthesia.

Replies from: None
comment by [deleted] · 2013-10-01T19:40:55.457Z · LW(p) · GW(p)

It's easier to explain in the case of multiple copies of yourself. Imagine the transporter were turned into a replicator - it gets stuck in a loop reconstructing the last thing that went through it, namely you. You step off and turn around to find another version of you just coming out. And then another, and another, etc. Each one of you shares the same memories, but from that moment on you have diverged. Each clone continues life with their own subjective experience until that experience is terminated by that clone's death.

That sense of subjective experience separate from memories or shared history is what I have been calling “personal identity.” It is what gives me the belief, real or illusory, that I am the same person from moment to moment, day to day, and what separates me from my clones. You are welcome to suggest a better term.

The replicator / clone thought experiment shows that “subjective experience of identity” is something different from the information pattern that represents your mind. There is something, although at this moment that something is not well defined, which makes you the same “you” that will exist five minutes in the future, but which separates you from the “you”s that walked out of the replicator, or exist in simulation, for example.

The first step is recognizing this distinction. Then turn around and apply it to less fantastical situations. If the clone is “you” but not you (meaning no shared identity, and my apologies for the weak terminology), then what's to say that a future simulation of “you” would also be you? What about cryonics - will your unfrozen brain still be you? That might depend on what they do to repair damage from vitrification. What about general anesthesia? Again, I need to learn more about how general anesthesia works, but if they shut down your processing centers and then restart you later, how is that different from the teleportation or simulation scenario? After all, we've already established that whatever provides personal identity, it's not physical continuity.

Replies from: TheOtherDave, lavalamp
comment by TheOtherDave · 2013-10-01T19:48:45.430Z · LW(p) · GW(p)

That sense of subjective experience separate from memories or shared history is what I have been calling “personal identity.” It is what gives me the belief, real or illusory, that I am the same person from moment to moment, day to day, and what separates me from my clones.

Well, OK. So suppose that, after I go through that transporter/replicator, you ask the entity that comes out whether it has the belief, real or illusory, that it is the same person in this moment that it was at the moment it walked into the machine, and it says "yes".

If personal identity is what creates that belief, and that entity has that belief, it follows that that entity shares my personal identity... doesn't it?

Replies from: None
comment by [deleted] · 2013-10-01T20:17:54.045Z · LW(p) · GW(p)

Well, OK. So suppose that, after I go through that transporter/replicator, you ask the entity that comes out whether it has the belief, real or illusory, that it is the same person in this moment that it was at the moment it walked into the machine, and it says "yes".

If personal identity is what creates that belief, and that entity has that belief, it follows that that entity shares my personal identity... doesn't it?

Not quite. If You!Mars gave it thought before answering, his thinking probably went like this: “I have memories of going into the transporter, just a moment ago. I have a continuous sequence of memories, from then until now. Nowhere in those memories does my sense of self change. Right now I am experiencing the same sense of self I always remember experiencing, and laying down new memories. Ergo, by backwards induction, I am the same person that walked into the teleporter.” However, for that - or any - line of meta reasoning to hold, (1) your memories need to accurately correspond with the true and full history of reality, and (2) you need to trust that what occurs in the present also occurred in the past. In other words, it's kinda like saying “my memory wasn't altered, because I would have remembered that.” It's not a circular argument per se, but it is a meta loop.

The map is not the territory. What happened to You!Earth's subjective experience is an objective, if perhaps not empirically observable fact. You!Mars' belief about what happened may or may not correspond with reality.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-01T21:04:40.189Z · LW(p) · GW(p)

What if me!Mars, after giving it thought, shakes his head and says "no, that's not right. I say I'm the same person because I still have a sense of subjective experience, which is separate from memories or shared history, which gives me the belief, real or illusory, that I am the same person from moment to moment, day to day, and which separates me from my clones"?

Do you take his word for it?
Do you assume he's mistaken?
Do you assume he's lying?

Replies from: None
comment by [deleted] · 2013-10-01T21:26:03.905Z · LW(p) · GW(p)

Assuming that he acknowledges that clones have a separate identity, or in other words he admits that there can be instances of himself that are not him, then by asserting the same identity as the person that walked into the teleporter, he is making an extrapolation into the past. He is expressing a belief that, by whatever definition he is using, the person walking into the teleporter meets a standard of me-ness that the clones do not. Unless the definition under consideration explicitly references You!Mars' mental state (e.g. "by definition" he has shared identity with people he remembers having shared identity with), the validity of that belief is external: it is either true or false. The map is not the territory.

Under an assumption of pattern or causal continuity, for example, it would be explicitly true. For computational continuity it would be false.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-01T22:43:15.049Z · LW(p) · GW(p)

If I understood you correctly, then on your account, his claim is simply false, but he isn't necessarily lying.

Yes?

It seems to follow that he might actually have a sense of subjective experience, which is separate from memories or shared history, which gives him the belief, real or illusory (in this case illusory), that he is the same person from moment to moment, day to day, and the same person who walked into the teleporter, and which separates him from his clones.

Yes?

Replies from: None
comment by [deleted] · 2013-10-01T22:57:29.737Z · LW(p) · GW(p)

If I understood you correctly, then on your account, his claim is simply false, but he isn't necessarily lying.

Yes, in the sense that it is a belief about his own history which is either true or false, like any historical fact. Whether it is actually false depends on the nature of “personal identity”. If I understand the original post correctly, I think Eliezer would argue that his claim is true. I think Eliezer's argument lacks sufficient justification, and there's a good chance his claim is false.

It seems to follow that he might actually have a sense of subjective experience, which is separate from memories or shared history, which gives him the belief, real or illusory (in this case illusory), that he is the same person from moment to moment, day to day, and the same person who walked into the teleporter, and which separates him from his clones.

Yes. My question is: is that belief justified?

If your memory were altered so as to make you think you won the lottery, that wouldn't make you any richer. Likewise, You!Mars' memory was constructed by the transporter machine, following the transmitted design, in such a way as to make him remember stepping into the transporter on Earth as you did, and walking out of it on Mars in seamless continuity. But just because he doesn't remember the deconstruction, information transmission, and reconstruction steps doesn't mean they didn't happen. Once he learns what actually happened during his transport, his decision about whether he remains the same person that entered the machine on Earth depends greatly on his model of consciousness and personal identity/continuity.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-01T23:34:31.686Z · LW(p) · GW(p)

It seems to follow that he might actually have a sense of subjective experience, which is separate from memories or shared history, which gives him the belief, real or illusory (in this case illusory), that he is the same person from moment to moment, day to day, and the same person who walked into the teleporter, and which separates him from his clones.
Yes. My question is: is that belief justified?

OK, understood.

Here's my confusion: a while back, you said:

That sense of subjective experience separate from memories or shared history is what I have been calling “personal identity.” It is what gives me the belief, real or illusory, that I am the same person from moment to moment, day to day, and what separates me from my clones.

And yet, here's Dave!Mars, who has a sense of subjective experience separate from memories or shared history which gives him the belief, real or illusory (in this case illusory), that he is the same person from moment to moment, day to day, and the same person who walked into the teleporter, and which separates him from his clones.

But on your account, he might not have Dave's personal identity.

So, where is this sense of subjective experience coming from, on your account? Is it causally connected to personal identity, or not?

Once he learns what actually happened during his transport, his decision about whether he remains the same person that entered the machine on Earth depends greatly on his model of consciousness and personal identity/continuity.

Yes, that's certainly true. By the same token, if I convince you that I placed you in stasis last night for... um... long enough to disrupt your personal identity (a minute? an hour? a millisecond? a nanosecond? how long a period of "computational discontinuity" does it take for personal identity to evaporate on your account, anyway?), you would presumably conclude that you aren't the same person who went to bed last night. OTOH, if I placed you in stasis last night and didn't tell you, you'd conclude that you're the same person, and live out the rest of your life none the wiser.

comment by lavalamp · 2013-10-01T20:21:59.936Z · LW(p) · GW(p)

That experiment shows that "personal identity", whatever that means, follows a time-tree, not a time-line. That conclusion also must hold if MWI is true.

So I get that there's a tricky (?) labeling problem here, where it's somewhat controversial which copy of you should be labeled as having your "personal identity". The thing that isn't clear to me is why the labeling problem is important. What observable feature of reality depends on the outcome of this labeling problem? We all agree on how those copies of you will act and what beliefs they'll have. What else is there to know here?

Replies from: None
comment by [deleted] · 2013-10-01T20:44:31.807Z · LW(p) · GW(p)

Would you step through the transporter? If you answered no, would it be moral to force you through the transporter? What if I didn't know your wishes, but had to extrapolate? Under what conditions would it be okay?

Also, take the more vile forms of Pascal's mugging and acausal trades. If something threatens torture to a simulation of you, should you be concerned about actually experiencing the torture, thereby subverting your rationalist impulse to shut up and multiply utility?

Replies from: lavalamp
comment by lavalamp · 2013-10-01T20:59:11.612Z · LW(p) · GW(p)

Would you step through the transporter? If you answered no, would it be moral to force you through the transporter? What if I didn't know your wishes, but had to extrapolate? Under what conditions would it be okay?

I don't see how any of that depends on the question of which computations (copies of me) get labeled with "personal identity" and which don't.

Also, take the more vile forms of Pascal's mugging and acausal trades. If something threatens torture to a simulation of you, should you be concerned about actually experiencing the torture, thereby subverting your rationalist impulse to shut up and multiply utility?

Depending on specifics, yes. But I don't see how this depends on the labeling question. This just boils down to "what do I expect to experience in the future?" which I don't see as being related to "personal identity".

Replies from: None
comment by [deleted] · 2013-10-01T21:17:51.914Z · LW(p) · GW(p)

This just boils down to "what do I expect to experience in the future?" which I don't see as being related to "personal identity".

Forget the phrase "personal identity". If I am a powerful AI from the future and I come back to tell you that I will run a simulation of you so we can go bowling together, do you or do you not expect to experience bowling with me in the future, and why?

Replies from: lavalamp, TheOtherDave, linkhyrule5, shminux
comment by lavalamp · 2013-10-01T21:29:25.100Z · LW(p) · GW(p)

I'll give a 50% chance that I'll experience that. (One copy of me continues in the "real" world, another copy of me appears in a simulation and goes bowling.)

(If you ask this question as "the AI is going to run N copies of the bowling simulation", then I'm not sure how to answer-- I'm not sure how to weight N copies of the exact same experience. My intuition is that I should still give a 50% chance, unless the simulations are going to differ in some respect, then I'd give a N/(N+1) chance.)
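A minimal sketch of the two weighting schemes being contrasted here (identical copies counted as one branch vs. each copy counted separately); the function names are purely illustrative:

```python
# Two candidate ways to weight N bowling-simulation copies
# against one "real world" continuation.

def branch_weight(n_copies: int) -> float:
    """Treat 'simulation' and 'real world' as two branches; identical
    copies inside the simulation branch add no extra weight."""
    return 0.5

def per_copy_weight(n_copies: int) -> float:
    """Weight every copy separately: n simulated observers plus one
    original gives n/(n+1) odds of finding yourself bowling."""
    return n_copies / (n_copies + 1)

for n in (1, 2, 10):
    print(n, branch_weight(n), round(per_copy_weight(n), 3))
```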

Replies from: None
comment by [deleted] · 2013-10-01T23:19:45.799Z · LW(p) · GW(p)

I need to think about your answer, as right now it doesn't make any sense to me. I suspect that whatever intuition underlies it is the source of our disagreement/confusion.

@linkhyrule5 had an answer better than the one I had in mind. The probability of us going bowling together is approximately equal to the probability that you are already in said simulation, if computational continuity is what matters.

If there were a 6th Day-like service I could sign up for, where if anything were to happen to me a clone/simulation of me with my memories would be created, I'd sign up for it in a heartbeat. Because if something were to happen to me, I wouldn't want to deprive my wife of her husband, or my daughters of their father. But that is purely altruistic: I would have P(~0) expectation that I would actually experience that resurrection. Rather, some doppelganger twin that in every outward way behaves like me would take up my life where I left off. And that's fine, but let's be clear about the difference.

If you are not the simulation the AI was referring to, then you and it will not go bowling together, period. Because when said bowling occurs, you'll be dead. Or maybe you'll be alive and well and off doing other things while the simulation is going on. But under no circumstances should you expect to wake up as the simulation, as we are assuming them to be causally separate.

At least from my way of thinking. I'm not sure I understand yet where you are coming from well enough to predict what you'd expect to experience.

Replies from: lavalamp
comment by lavalamp · 2013-10-01T23:37:51.561Z · LW(p) · GW(p)

@linkhyrule5 had an answer better than the one I had in mind. The probability of us going bowling together is approximately equal to the probability that you are already in said simulation, if computational continuity is what matters.

You could understand my 50% answer to be expressing my uncertainty as to whether I'm in the simulation or not. It's the same thing.

I don't understand what "computational continuity" means. Can you explain it using a program that computes the digits of pi as an example?

Rather, some doppelganger twin that in every outward way behaves like me will take up my life where I left off. And that's fine, but let's be clear about the difference.

I think you're making a distinction that exists only in the map, not in the territory. Can you point to something in the territory that this matters for?

comment by TheOtherDave · 2013-10-01T22:51:12.327Z · LW(p) · GW(p)

Suppose that my husband and I believe that while we're sleeping, someone will paint a blue dot on either my forehead, or my husband's, determined randomly. We expect to see a blue dot when we wake up... and we also expect not to see a blue dot when we wake up. This is a perfectly reasonable state for two people to be in, and not at all problematic.

Suppose I believe that while I'm sleeping, a powerful AI will duplicate me (if you like, in such a way that both duplicates experience computational continuity with the original) and paint a blue dot on one duplicate's forehead. I expect to see a blue dot when I wake up... and I also expect not to see a blue dot when I wake up. This is a perfectly reasonable state for a duplicated person to be in, and not at all problematic.

Similarly, I both expect to experience bowling with you, and expect to not experience bowling with you (supposing that the original continues to operate while the simulation goes bowling).

Replies from: None
comment by [deleted] · 2013-10-01T23:29:35.249Z · LW(p) · GW(p)

The situation isn't analogous, however. Let's posit that you're still alive when the simulation is run. In fact, aside from technology there's no reason to put it in the future or involve an AI. I'm a brain-scanning researcher who shows up at your house tomorrow, with all the equipment to do a non-destructive mind upload and whole-brain simulation. I tell you that I am going to scan your brain, start the simulation, then don VR goggles and go virtual-bowling with “you”. Once the scanning is done you and your husband are free to go to the beach or whatever, while I go bowling with TheVirtualDave.

What probability would you put on you ending up bowling instead of at the beach?

Replies from: lavalamp, TheOtherDave
comment by lavalamp · 2013-10-01T23:42:28.760Z · LW(p) · GW(p)

Prediction: TheOtherDave will say 50%, Beach!Dave and Bowling!Dave would both consider both to be the "original". Assuming sufficiently accurate scanning & simulating.

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-01T23:48:31.754Z · LW(p) · GW(p)

Here's what TheOtherDave actually said.

Replies from: lavalamp
comment by lavalamp · 2013-10-01T23:55:21.026Z · LW(p) · GW(p)

Yes, looks like that prediction is falsified. At least the first sentence. :)

comment by TheOtherDave · 2013-10-01T23:46:48.299Z · LW(p) · GW(p)

Well, let's call P1 my probability of actually going to the beach, even if you never show up. That is, (1-P1) is the probability that traffic keeps me from getting there, or my car breaks down, or whatever. And let's call P2 my probability of your VR/simulation rig working. That is, (1-P2) is the probability that the scanner fails, etc. etc.

In your scenario, I put a P1 probability of ending up at the beach, and a P2 probability of ending up bowling. If both are high, then I'm confident that I will do both.

There is no "instead of". Going to the beach does not prevent me from bowling. Going bowling does not prevent me from going to the beach. Someone will go to the beach, and someone will go bowling, and both of those someones will be me.

Replies from: lavalamp, shminux
comment by lavalamp · 2013-10-01T23:54:43.710Z · LW(p) · GW(p)

Your probabilities add up to more than 1...

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-01T23:59:46.804Z · LW(p) · GW(p)

Of course they do. Why shouldn't they?

What is your probability that you will wake up tomorrow morning?
What is your probability that you will wake up Friday morning?
I expect to do both, so my probabilities of those two things add up to ~2.

In Mark's scenario, I expect to go bowling and I expect to go to the beach.
My probabilities of those two things similarly add up to ~2.
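A rough sketch of the arithmetic, with purely illustrative values for P1 and P2: since each probability concerns a different future self, the sum behaves like an expected count of experiences rather than the probability of a single exclusive outcome.

```python
p_beach = 0.95    # P1: traffic etc. doesn't stop the original reaching the beach
p_bowling = 0.95  # P2: the scan/simulation rig works and the copy goes bowling

# Summing them gives the expected number of future selves having one of the
# two experiences -- legitimately close to 2 once there are two of them.
print(p_beach + p_bowling)  # ~1.9
```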

Replies from: lavalamp
comment by lavalamp · 2013-10-02T00:13:58.408Z · LW(p) · GW(p)

I think we have the same model of the situation, but I feel compelled to normalize my probability. A guess as to why:

I can rephrase Mark's question as, "In 10 hours, will you remember having gone to the beach or having bowled?" (Assume the simulation will continue running!) There'll be a you that went bowling and a you that went to the beach, but no single you that did both of those things. Your successive wakings example doesn't have this property.

I suppose I answer 50% to indicate my uncertainty about which future self we're talking about, since there are two possible referents. Maybe this is unhelpful.

Replies from: TheOtherDave, None
comment by TheOtherDave · 2013-10-02T00:44:38.819Z · LW(p) · GW(p)

Yes, that seems to be what's going on.

That said, normalizing my probability as though there were only going to be one of me at the end of the process doesn't seem at all compelling to me. I don't have any uncertainty about which future self we're talking about -- we're talking about both of them.

Suppose that you and your husband are planning to take the day off tomorrow, and he is planning to go bowling, and you are planning to go to the beach, and I ask the two of you "what's y'all's probability that one of y'all will go bowling, and what's y'all's probability that one of y'all will go to the beach?" It seems the correct answers to those questions will add up to more than 1, even though no one person will experience bowling AND going to the beach. In 10 hours, one of you will remember having gone to the beach, and one will remember having bowled.

This is utterly unproblematic when we're talking about two people.

In the duplication case, we're still talking about two people, it's just that right now they are both me, so I get to answer for both of them. So, in 10 hours, I (aka "one of me") will remember having gone to the beach. I will also remember having bowled. I will not remember having gone to the beach and having bowled. And my probabilities add up to more than 1.

I recognize that it doesn't seem that way to you, but it really does seem like the obvious way to think about it to me.

Replies from: lavalamp
comment by lavalamp · 2013-10-02T00:59:53.110Z · LW(p) · GW(p)

I recognize that it doesn't seem that way to you, but it really does seem like the obvious way to think about it to me.

I think your description is coherent and describes the same model of reality I have. :)

comment by [deleted] · 2013-10-02T00:52:47.992Z · LW(p) · GW(p)

I can rephrase Mark's question as, "In 10 hours, will you remember having gone to the beach or having bowled?"

Yes. Probabilities aside, this is what I was asking.

I suppose I answer 50% to indicate my uncertainty about which future self we're talking about, since there are two possible referents.

I was asking a disguised question. I really wanted to know: "which of the two future selfs do you identify with, and why?"

Replies from: lavalamp
comment by lavalamp · 2013-10-02T00:55:33.298Z · LW(p) · GW(p)

I was asking a disguised question. I really wanted to know: "which of the two future selfs do you identify with, and why?"

Oh, that's easy. Both of them, equally. Assuming accurate enough simulations etc., of course.

ETA: Why? Well, they'll both think that they're me, and I can't think of a way to disprove the claim of one without also disproving the claim of the other.

Replies from: None
comment by [deleted] · 2013-10-02T20:00:20.465Z · LW(p) · GW(p)

ETA: Why? Well, they'll both think that they're me, and I can't think of a way to disprove the claim of one without also disproving the claim of the other.

Any of the models of consciousness-as-continuity would offer a definitive prediction.

Replies from: lavalamp
comment by lavalamp · 2013-10-02T20:24:16.174Z · LW(p) · GW(p)

Any of the models of consciousness-as-continuity would offer a definitive prediction.

IMO, there literally is no fact of the matter here, so I will bite the bullet and say that any model that supposes there is one is wrong. :) I'll reconsider if you can point to an objective feature of reality that changes depending on the answer to this. (So-and-so will think it to be immoral doesn't count!)

Replies from: None
comment by [deleted] · 2013-10-02T21:10:41.115Z · LW(p) · GW(p)

I won't because that's not what I'm arguing. My position is that subjective experience has moral consequences, and therefore matters.

PS: The up/down karma vote isn't a record of what you agree with, but whether a post has been reasonably argued.

Replies from: lavalamp, TheOtherDave, wedrifid
comment by lavalamp · 2013-10-02T21:23:44.429Z · LW(p) · GW(p)

I won't because that's not what I'm arguing. My position is that subjective experience has moral consequences, and therefore matters.

OK, that's fine, but I'm not convinced-- I'm having trouble thinking of something that I consider to be a moral issue that doesn't have a corresponding consequence in the territory.

PS: That downvote wasn't me. I'm aware of how votes work around here. :)

Replies from: None
comment by [deleted] · 2013-10-02T21:35:11.390Z · LW(p) · GW(p)

Example: is it moral to power-cycle (hibernate, turn off, power on, restore) a computer running a self-aware AI? Will future machine intelligences view any less-than-necessary AGI experiments I run the same way we do Josef Mengele's work in Auschwitz? Is it a possible failure mode that an unfriendly/not-provably-friendly AI that experiences routine power cycling might uncover this line of reasoning and decide it doesn't want to “die” every night when the lights go off? What would it do then?

Replies from: lavalamp, shminux
comment by lavalamp · 2013-10-02T21:50:52.084Z · LW(p) · GW(p)

OK, in a hypothetical world where somehow pausing a conscious computation--maintaining all data such that it could be restarted losslessly--is murder, those are concerns. Agreed. I'm not arguing against that.

My position is that pausing a computation as above happens to not be murder/death, and that those who believe it is murder/death are mistaken. The example I'm looking for is something objective that would demonstrate this sort of pausing is murder/death. (In my view, the bad thing about death is its permanence, that's most of why we care about murder and what makes it a moral issue.)

comment by Shmi (shminux) · 2013-10-02T22:08:08.873Z · LW(p) · GW(p)

As Eliezer mentioned in his reply (in different words), if power cycling is death, what's the shortest suspension time that isn't? Currently most computers run synchronously off a common clock. The computation is completely suspended between clock cycles. Does this mean that an AI running on such a computer is murdered billions of times every second? If so, then morality leading to this absurd conclusion is not a useful one.

Edit: it's actually worse than that: digital computation happens mostly within a short time of the clock level switch. The rest of the time between transitions is just to ensure that the electrical signals relax to within their tolerance levels. Which means that the AI in question is likely dead 90% of the time.

Replies from: None
comment by [deleted] · 2013-10-02T22:24:49.270Z · LW(p) · GW(p)

What Eliezer and you describe is more analogous to task switching on a timesharing system, and yes, my understanding of computational continuity theory is that such a machine would not be sent to oblivion 120 times a second. Rather, such a computer would be strangely schizophrenic, but also completely self-consistent at any moment in time.

But computational continuity does have a different answer in the case of intermediate non-computational states. For example, saving the state of a whole brain emulation to magnetic disk, shutting off the machine, and restarting it sometime later. In the meantime, shutting off the machine resulted in decoupling/decoherence of state between the computational elements of the machine, and a general reversion back to a state of thermal noise. This does equal death-of-identity, and is similar to the transporter thought experiment. The relevance may be more obvious when you think about taking the drive out and loading it in another machine, copying the contents of the disk, or running multiple simulations from a single checkpoint (none of these change the facts, however).

Replies from: lavalamp, shminux
comment by lavalamp · 2013-10-02T22:50:42.386Z · LW(p) · GW(p)

I really have a hard time imagining a universe where there exists a thing that is preserved when 10^-9 seconds pass between computational steps but not when 10^3 pass between steps (while I move the harddrive to another box).

comment by Shmi (shminux) · 2013-10-02T23:06:55.889Z · LW(p) · GW(p)

In the mean time, shutting off the machine resulted in decoupling/decoherence of state between the computational elements of the machine, and general reversion back to a state of thermal noise.

It is probably best for you to stay away from the physics/QM point of view on this, since you will lose: the states "between the computational elements", whatever you may mean by that, decohere and relax to "thermal noise" much quicker than the time between clock transitions, so there is no difference between a nanosecond and an hour.

Maybe what you mean is more logic-related? For example, when a self-aware algorithm (including a human) expects one second to pass and instead measures a full hour (because it was suspended), it interprets that discrepancy of inputs as death? If so, shouldn't any unexpected discrepancy, like sleeping past your alarm clock, or day-dreaming in class, be treated the same way?

This does equal death-of-identity, and is similar to the transporter thought experiment.

I agree that forking a consciousness is not a morally trivial issue, but that's different from temporary suspension and restarting, which happens all the time to people and machines. I don't think that conflating the two is helpful.

Replies from: None
comment by [deleted] · 2013-10-03T00:22:18.281Z · LW(p) · GW(p)

It is probably best for you to stay away from the physics/QM point of view on this, since you will lose: the states "between the computational elements", whatever you may mean by that, decohere and relax to "thermal noise" much quicker than the time between clock transitions, so there is no difference between a nanosecond and an hour.

Maybe what you mean is more logic-related?...

No, I meant the physical explanation (I am a physicist, btw). It is possible for a system to exhibit features at certain frequencies, whilst only showing noise at others. Think standing waves, for example.

I agree that forking a consciousness is not a morally trivial issue, but that's different from temporary suspension and restarting, which happens all the time to people and machines. I don't think that conflating the two is helpful.

When does it ever happen to people? When does your brain, or even just a region of it, ever stop functioning entirely? You do not remember deep sleep because you are not forming memories, not because your brain has stopped functioning. What else could you be talking about?

Replies from: shminux
comment by Shmi (shminux) · 2013-10-03T04:42:18.237Z · LW(p) · GW(p)

Hmm, I get a feeling that none of these are your true objections and that, for some reason, you want to equate suspension to death. I should have stayed disengaged from this conversation. I'll try to do so now. Hope you get your doubts resolved to your satisfaction eventually.

Replies from: None
comment by [deleted] · 2013-10-03T07:53:12.347Z · LW(p) · GW(p)

I don't want to, I just think that the alternatives lead to absurd outcomes that can't possibly be correct (see my analysis of the teleporter scenario).

comment by TheOtherDave · 2013-10-02T22:22:38.736Z · LW(p) · GW(p)

For many people, the up/down karma vote is a record of what we want more/less of.

comment by wedrifid · 2013-10-03T06:03:28.202Z · LW(p) · GW(p)

PS: The up/down karma vote isn't a record of what you agree with, but whether a post has been reasonably argued.

It is neither of those things. This isn't debate club. We don't have to give people credit for finding the most clever arguments for a wrong position.

I make no comment about what the subject of debate is in this context (I don't know or care which party is saying crazy things about 'consciousness'). I downvoted the parent specifically because it made a normative assertion about how people should use the karma mechanism which is neither something I support nor an accurate description of an accepted cultural norm. This is an example of voting being used legitimately in a way that has nothing to do with whether the post has been reasonably argued.

Replies from: None
comment by [deleted] · 2013-10-03T06:53:58.741Z · LW(p) · GW(p)

I did use the term "reasonably argued" but I didn't mean clever. Maybe "rationally argued"? By my own algorithm a cleverly argued but clearly wrong argument would not garner an up vote.

I gave you an upvote for explaining your down vote.

Replies from: wedrifid
comment by wedrifid · 2013-10-03T11:19:17.470Z · LW(p) · GW(p)

I did use the term "reasonably argued" but I didn't mean clever. Maybe "rationally argued"? By my own algorithm a cleverly argued but clearly wrong argument would not garner an up vote.

You are right, 'clever' contains connotations that you wouldn't intend. I myself have used 'clever' as a term of disdain and I don't want to apply that to what you are talking about. Let's stick with either of the terms you used and agree that we are talking about arguments that are sound, cogent and reasonable rather than artful rhetoric that exploits known biases in human social behaviour to score persuasion points. I maintain that even then down-votes are sometimes appropriate. Allow me to illustrate.

There are two outwardly indistinguishable boxes with buttons that display heads or tails when pressed. You know that one of the boxes returns heads 70% of the time, and the other returns heads 40% of the time. A third party, Joe, has experimented with the first box three times and tells you that each time it returned heads. This represents an argument that the first box is the "70%" box. Now, assume that I have observed the internals of the boxes and know that the first box is, in fact, the 40% box.
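For concreteness, a minimal sketch of the update Joe's report licenses, assuming a 50/50 prior over which box is which and an honest report:

```python
# Posterior that the first box is the 70%-heads box, given three reported heads.
prior = 0.5
likelihood_70 = 0.7 ** 3  # 0.343
likelihood_40 = 0.4 ** 3  # 0.064

posterior_70 = prior * likelihood_70 / (prior * likelihood_70 + prior * likelihood_40)
print(round(posterior_70, 3))  # ~0.843
```

So Joe's report supports roughly 84% credence in what is, by stipulation, the wrong conclusion - a reasonably argued case for a wrong answer, which is the situation the rest of the comment considers.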

Whether I downvote Joe's comment depends on many things. Obviously, tone matters a lot, as does my impression of whether Joe's bias is based on disingenuousness or more innocent ignorance. But even in the case when Joe is arguing in good faith, there are some cases where a policy attempting to improve the community will advocate downvoting the contribution. For example, if there is a significant selection bias in what kind of evidence people like Joe have exposed themselves to, then popular perception after such people share their opinions will tend to be even more biased than the individuals alone. In that case downvoting Joe's comment improves the discussion. The ideal outcome would be for Joe to learn to stfu until he learns more.

More simply I observe that even the most 'rational' of arguments can be harmful if the selection process for the creation and repetition of those arguments is at all biased.

comment by Shmi (shminux) · 2013-10-02T00:09:21.872Z · LW(p) · GW(p)

As I alluded to in another reply, assuming perfectly reliable scanning, and assuming that you hate losing in bowling to MarkAI, how do you decide whether to go practice bowling or to do something else you like more?

Replies from: TheOtherDave
comment by TheOtherDave · 2013-10-02T00:31:06.993Z · LW(p) · GW(p)

If it's important to me not to lose in bowling, I practice bowling, since I expect to go bowling. (Assuming uninteresting scanning tech.)
If it's also important to me to show off my rocking abs at the beach, I do sit-ups, since I expect to go to the beach.
If I don't have the time to do both, I make a tradeoff, and I'm not sure exactly how I make that tradeoff, but it doesn't include assuming that the going to the beach somehow happens more or happens less or anything like that than the going bowling.

Admittedly, this presumes that the bowling-me will go on to live a normal lifetime. If I know the simulation will be turned off right after the bowling match, I might not care so much about winning the bowling match. (Then again, I might care a lot more.) By the same token, if I know the original will be shot tomorrow morning I might not care so much about my abs. (Then again, I might care more. I'm really not confident about how the prospect of upcoming death affects my choices; still less how it does so when I expect to keep surviving as well.)

comment by linkhyrule5 · 2013-10-01T22:56:03.222Z · LW(p) · GW(p)

Yes, with probability P(simulation), or no, with probability P(not simulation), depending.

comment by Shmi (shminux) · 2013-10-01T23:53:55.231Z · LW(p) · GW(p)

I come back to tell you that I will run a simulation of you so we can go bowling together

Presumably you create a sim-me which includes the experience of having this conversation with you (the AI).

do you or do you not expect to experience bowling with me in the future, and why?

Let me interpret the term "expect" concretely as "I better go practice bowling now, so that sim-me can do well against you later" (assuming I hate losing). If I don't particularly enjoy bowling and would rather do something else, how much effort is warranted vs. doing something I like?

The answer is not unambiguous and depends on how much I (meat-me) care about future sim-me having fun and not embarrassing sim-self. If sim-me continues on after meat-me passes away, I care very much about sim-me's well being. On the other hand, if the sim-me program is halted after the bowling game, then I (meat-me) don't care much about that sim-loser. After all, meat-me (who will not go bowling) will continue to exist, at least for a while. You might feel differently about sim-you, of course. There is a whole range of possible scenarios here. Feel free to specify one in more detail.

TL;DR: If the simulation will be the only copy of "me" in existence, I act as if I expect to experience bowling.

comment by Shmi (shminux) · 2013-10-01T20:13:50.467Z · LW(p) · GW(p)

I'd have to learn more about how general anesthesia works to decide if personal identity would be preserved on the operating table

Hmm, what about across dreamless sleep? Or fainting? Or falling and hitting your head and losing consciousness for an instant? Would these count as killing one person and creating another? And so be morally net-negative?

Replies from: None
comment by [deleted] · 2013-10-01T20:38:04.803Z · LW(p) · GW(p)

If computational continuity is what matters, then no. Just because you have no memory doesn't mean you didn't experience it. There is in fact a continuous experience throughout all of the examples you gave, just no new memories being formed. But from the last point you remember (going to sleep, fainting, hitting your head) to when you wake up, you did exist and were running a computational process. From our understanding of neurology you can be certain that there was no interruption of subjective experience of identity, even if you can't remember what actually happened.

Whether this is also true of general anesthesia depends very much on the biochemistry going on. I admit ignorance here.

Replies from: shminux
comment by Shmi (shminux) · 2013-10-01T20:46:51.390Z · LW(p) · GW(p)

OK, I guess I should give up, too. I am utterly unable to relate to whatever it is you mean by "because you have no memory doesn't mean you didn't experience it" or "subjective experience of identity, even if you can't remember what actually happened".

comment by TheOtherDave · 2013-10-01T19:09:33.639Z · LW(p) · GW(p)

Clearly, your subjective experience today is HONK-screech-bam-oblivion, since all the subjective experiences that come after that don't happen today in this example... they happen 50 years later.

It is not in the least bit clear to me that this means those subjective experiences aren't your subjective experiences. You aren't some epiphenomenal entity that dissipates in the course of those 50 years and therefore isn't around to experience those experiences when they happen... whatever is having those subjective experiences, whenever it is having them, that's you.

maybe we should taboo the phrase “personal/subjective identity”.

Sounds like a fine plan, albeit a difficult one. Want to take a shot at it?

EDIT: Ah, you did so elsethread. Cool. Replied there.

comment by lavalamp · 2013-10-01T19:17:20.306Z · LW(p) · GW(p)

Do I experience “HONK Screeeech, bam” then wake up in a computer, or is it “HONK Screeeech, bam” and oblivion?

Non-running algorithms have no experiences, so the latter is not a possible outcome. I think this is perhaps an unspoken axiom here.

Replies from: None
comment by [deleted] · 2013-10-01T19:25:01.805Z · LW(p) · GW(p)

Non-running algorithms have no experiences, so the latter is not a possible outcome. I think this is perhaps an unspoken axiom here.

No disagreement here - that's what I meant by oblivion.

Replies from: lavalamp
comment by lavalamp · 2013-10-01T19:31:37.197Z · LW(p) · GW(p)

OK, cool, but now I'm confused. If we're meaning the same thing, I don't understand how it can be a question-- "not running" isn't a thing an algorithm can experience; it's a logical impossibility.

comment by AFinerGrain_duplicate0.4555006182262571 · 2017-10-03T01:54:37.016Z · LW(p) · GW(p)

I always wonder how I should treat my future self if I reject the continuity of self. Should I think of him like a son? A spouse? A stranger? Should I let him get fat? Not get him a degree? Invest in stock for him? Give him another child?

Replies from: Elo
comment by Elo · 2017-10-03T07:48:49.876Z · LW(p) · GW(p)

I think it matters insofar as it assists your present trajectory. Otherwise it might as well be an unfeeling entity.

comment by hairyfigment · 2013-09-30T16:55:39.597Z · LW(p) · GW(p)

It seems you place less value on your life than I do on mine. I'm glad we've reached agreement.

Replies from: someonewrongonthenet
comment by someonewrongonthenet · 2013-10-01T03:14:40.875Z · LW(p) · GW(p)

I agree, it's quite possible that someone might deconstruct "me" and "life" and "death" and "subjective experience" to the same extent that I have and still value never deleting certain information that is computationally descended from themselves more than all the other things that would be done with the resources that are used to maintain them.

Hell, I might value it to that extent. This isn't something I'm certain about. I'm still exploring this. My default answer is to live forever - I just want to make sure that this is really what I want after consideration and not just a kicking, screaming survival instinct (AKA a first order preference)

Replies from: CynicalOptimist
comment by CynicalOptimist · 2016-11-17T20:29:45.541Z · LW(p) · GW(p)

This seems to me like an orthogonal question. (A question that can be entirely extricated and separated from the cryonics question).

You're talking about whether you are a valuable enough individual that you can justify resources being spent on maintaining your existence. That's a question that can be asked just as easily even if you have no concept of cryonics. For instance: if your life depends on getting medical treatment that costs a million dollars, is it worth it? Or should you prefer that the money be spent on saving other lives more efficiently?

(Incidentally, I know that utilitarianism generally favours the second option. But I would never blame anyone for choosing the first option if the money was offered to them.)

I would accept an end to my existence if it allowed everyone else on earth to live for as long as they wished, and experience an existentially fulfilling form of happiness. I wouldn't accept an end to my existence if it allowed one stranger to enjoy an ice cream. There are scenarios where I would think it was worth using resources to maintain my existence, and scenarios where I would accept that the resources should be used differently. I think this is true when we consider cryonics, and equally true if we don't.

The cryonics question is quite different.

For the sake of argument, I'll assume that you're alive and that you intend to keep on living for at least the next 5 years. I'll assume that if you experienced a life-threatening situation tomorrow, and someone was able to intervene medically and grant you (at least) 5 more years of life, then you would want them to.

There are many different life-threatening scenarios, and many different possible interventions. But for decision making purposes, you could probably group them into "interventions which extend my life in a meaningful way" and interventions that don't. For instance, an intervention that kept your body alive but left you completely brain-dead would probably go in the second category. Coronary bypass surgery would probably go in the first.

The cryonics question here is simply: "If a doctor offered to freeze you, then revive you 50 years later" would you put this in the same category as other "life-saving" interventions? Would you consider it an extension of your life, in the same way as a heart transplant would be? And would you value it similarly in your considerations?

And of course, we can ask the same question for a different intervention, where you are frozen, then scanned, then recreated years later in one (or more) simulations.

comment by Frank_Hirsch · 2008-06-03T09:32:58.000Z · LW(p) · GW(p)
[Eliezer says:] And if you're planning to play the lottery, don't think you might win this time. A vanishingly small fraction of you wins, every time.

I think this is, strictly speaking, not true. A more extreme example: while I was recently talking with a friend, he asserted that "In one of the future worlds, I might jump up in a minute and run out onto the street, screaming loudly!" I said: "Yes, maybe, but only if you are already strongly predisposed to do so. MWI means that every possible future exists, not every arbitrary imaginable future." Although your assertion in the case of the lottery is much weaker, I don't believe it's strictly true.

comment by devicerandom · 2008-06-03T09:38:58.000Z · LW(p) · GW(p)

I have two (unrelated) comments:

1) I very much enjoyed the concept of "timeless physics", and in an MWI framework it sounds particularly elegant and intuitive. How does relativity fit into the picture? What I mean is, the speed of light, c, somehow gives an intrinsic measure of time to us. What does c translate to in timeless terms?

2) About your argument for cryonics, what about irreversible processes? Is quantum physics giving you a chance to beat entropy? When you die, a lot of irreversible processes happen in your brain (e.g. proteins and membranes break down). It is true that there are probably fewer changes in a cryopreserved brain than in a sleeping one, but it's the nature of the changes that's fundamentally different. Of course no information is really lost, but it's irreversibly dispersed in the environment well before you have a chance to get it back. Cryonics to me looks like an attempt to unscramble an egg - only with the egg being frozen when it starts to scramble, but already a bit scrambled. I admit it is better than rotting in a grave (and I'd like to sign up for it) but has anyone tried to quantify the odds?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-03T09:39:03.000Z · LW(p) · GW(p)

Frank, it's not logically necessary but it seems highly likely to be true - the spread in worlds including "you" seems like it ought to include worlds where each combination of lottery balls turns up. Possibly even worlds where your friend screams and runs out of the room, though that might be a vanishingly small fraction unless predisposed.

Roland, the Cryonics Institute seems to accept patients from anywhere that can be arranged to be shipped: http://www.cryonics.org/euro.html. Not sure about Alcor.

Devicerandom, see the Added links to FAQs.

comment by mitchell_porter2 · 2008-06-03T09:47:00.000Z · LW(p) · GW(p)

Eliezer: That's how we distinguish Eliezer from Mitchell.

Isn't that then how we distinguish a nondestructive copy from the original? If the original has been copied nondestructively, why shouldn't we continue to regard it as the original?

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-03T09:50:19.000Z · LW(p) · GW(p)

Isn't that then how we distinguish a nondestructive copy from the original?

Not unless you postulate an imperfect copy, with covalent bonds in different places and so on.

Once you begin to postulate a theoretically imperfect copy, the Generalized version of the Anti-Zombie Principle has to take over and ask whether the differences have more impact on your internal narrative / memories / personality etc. than the corresponding effects of thermal noise at room temperature.

comment by mitchell_porter2 · 2008-06-03T10:01:57.000Z · LW(p) · GW(p)

Covalent bonds with external atoms are just one form of "correlation with the environment".

I wish to postulate a perfect copy, in the sense that the internal correlations are identical to the original, but in which the correlations to the rest of the universe are different (e.g. "on Mars" rather than "on Earth").

There is some confusion here in the switching between individual configurations, and configuration space. An atom is already a blob in configuration space (e.g. "one electron in the ground-state orbital") rather than a single configuration, with respect to a particle basis.

While we cannot individuate particles in a relative configuration, we can individuate wave packets traveling in relative configuration space. And since even an atom already exists at that level, it is far from clear to me that the attempt to abandon continuity of identity carries over to complicated structures.

comment by MichaelAnissimov · 2008-06-03T10:44:18.000Z · LW(p) · GW(p)

Eliezer, why are all of your posts so long? I understand how most of them would be -- because you're trying to convey complex ideas -- but how come none of the ideas you convey are concise? Some of them seem like attempts to pad with excessive "background" material when simple tell-it-like-it-is brevity would suffice.

I thought this post was legitimately long, but this just came to mind when reflecting on past posts.

comment by Robin_Brandt · 2008-06-03T11:10:21.000Z · LW(p) · GW(p)

Great summary! I have sent the link to all my friends! While we wait for some kind of TOC, this is the best link yet to send people concerning this series.

I would like to know your opinion on Max Tegmark's ultimate ensemble theory! Or if someone knows Eli's opinion on this wonderful theory, please tell me!

Are other bright scientists and philosophers aware of this blog? Do you send links to people when there is a topic that relates to them? Do you send links to the people you mention? Do Chalmers, Dennett, Pinker, Deutsch, Barbour, Pearl, Tegmark, Dawkins, Vinge, Egan, Hofstadter, McCarthy, Kurzweil, Smolin, Witten, Taleb, Shermer, Kahneman, Tooby, Cosmides, Aumann, Penrose, Hameroff, etc. know about all this?

They may all be wrong in one way or another, but they are certainly not stupid blind people.

And I think your writing would definitely interest all of these people and contribute to their work and journey towards the truth. So it would both be altruistic to send them the links, and exciting if they would comment!

It would be especially nice if these people would comment on the posts where you show your disagreement!

comment by IL · 2008-06-03T11:32:12.000Z · LW(p) · GW(p)

I still don't get the point of timeless physics. It seems to me like two different ways of looking at the same thing, like classical configuration space vs relational configuration space. Sure, it may make more sense to formulate the laws of physics without time, and it may make the equations much simpler, but how exactly does it change your expected observations? In what ways does a timeless universe differ from a timeful universe?

Also, I don't think it's necessary to study quantum mechanics in order to understand personal identity. I've reached the same conclusions about identity without knowing anything about QM; I feel they're just simple deductions from materialism.

comment by Will_Pearson · 2008-06-03T11:52:11.000Z · LW(p) · GW(p)

The Rudi Hoffman link is broken.

Is there any literature on the likely energy costs of large scale cryonics?

And do you have a cunning plan whereby adding an extra 150k vitrified people per day to maintain does not drive up the already heavenward-bound energy prices, reducing the quality of life of the poorest? This could lead to conflict and more death; see recent South Africa for an example of poor energy planning.

A prerequisite for large-scale cryonics seems to me to be a stable and easily growable energy supply, which we just don't seem to be able to manage at the moment.

comment by Unknown · 2008-06-03T12:34:46.000Z · LW(p) · GW(p)

Eliezer, your account seems to give people two new excuses for not signing up for cryonics:

1) It seems to imply Quantum Immortality anyway.

2) Since there is nothing that persists on a fundamental level, the only reason new human beings in the future aren't "me" is that they don't remember me. But I also don't remember being two years old, and the two year old who became me didn't expect it. So the psychological continuity between my past self and my present self is no greater, in the case of my two year old self, than between myself and future human beings. This doesn't bother me in the case of the two year old, so it seems like it might not bother me in my own case. In other words, why should I try to live forever? There will be other human beings anyway, and they will be just as good as me, and there will be just as much identity on a fundamental level.

You may think that these arguments don't work, but that doesn't matter. The point is that because cryonics is "strange" to people, they are looking for reasons not to do it. So given that these arguments are plausible, they will embrace them immediately.

comment by Kaj_Sotala · 2008-06-03T12:46:01.000Z · LW(p) · GW(p)

Frank, it's not logically necessary but it seems highly likely to be true - the spread in worlds including "you" seems like it ought to include worlds where each combination of lottery balls turns up. Possibly even worlds where your friend screams and runs out of the room, though that might be a vanishingly small fraction unless predisposed.

Eliezer,

I have to ask now, because this is a topic that's been bothering me for months, and occasionally been making it real hard for me to take pleasure in anything.

How strongly does MWI imply that worlds will show up where I even do things that I consider immensely undesirable - for instance, stab somebody I love with a knife, and then, when they lie there dying and look at me, I honestly can't tell them or myself why I did it - because what happened was caused by a very-low-probability event that momentarily caused my brain to give my arm that command? (I know I'm not using anywhere near the correct QM terminology, but you know what I mean.) Or that my brain would spontaneously reconfigure parts of itself so that I ended up coldly abandoning somebody who had trusted me and whom I'd promised to always be with, etc.

The thought that I - and yes, since there are no originals or copies, the very I writing this - am guaranteed to end up doing that causes me so much anguish that I can't help thinking that, if it's true, humanity should be destroyed in order to minimize the number of branches where people end up in such situations. I find little comfort in the prospect of the "betrayal branches" being vanishingly small in frequency - in absolute numbers, their amount is still unimaginably large, and more are born every moment.

comment by Ian_Maxwell · 2008-06-03T13:52:08.000Z · LW(p) · GW(p)

This argument makes no sense to me:

If you've been cryocrastinating, putting off signing up for cryonics "until later", don't think that you've "gotten away with it so far". Many worlds, remember? There are branched versions of you that are dying of cancer, and not signed up for cryonics, and it's too late for them to get life insurance.

This is only happening in the scenarios where I didn't sign up for cryonics. In the ones where I did sign up, I'm safe and cozy in my very cold bed. These universes don't exist contingent on my behavior in this one; what possible impact could my choice here to sign up for cryonics have on my alternate-universe Doppelgängers?

Replies from: propater
comment by propater · 2011-03-08T10:38:36.175Z · LW(p) · GW(p)

Same here. This does not strike me as a good argument at all... We can reverse it to argue against signing up for cryonics:

"Even if I sign up for cryonics, there will still be some other worlds in wich I didn't and in wich "I" am dying of cancer."

Or

"Even if don't sign up, there are still other worlds in wich I did."

Maybe there is something about me actually making the choice to sign up in this world altering/constraining the overall probability distribution and making some outcomes less and less probable in the overall distribution...

I am new to this site and I still have to search through it more thoroughly, but I really don't think I can let that argument fly by without reaction. I apologize in advance if I make some really dumb mistake here.

Edit:

Okay, I thought this over a little bit and I can see a point: the earlier I sign up, the more future "me"s there will be getting cryonised. I do not see how much it matters in the grand scheme of things (I am just choosing a branch, I am not destroying the branch in which I choose not to sign up), but I guess there can be something along the lines of "I cannot do much about the past but my decisions can influence the 'future'" or "my responsibility is about my future 'me's; I should not worry about the worlds I cannot 'reach'".

The argument still sounds rather weak to me (and the many-worlds view a bit nihilistic - not that that makes it wrong, but I find it rather weird that you manage to get some sort of positive drive from it).

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-05-29T15:20:08.410Z · LW(p) · GW(p)

I am just choosing a branch, I am not destroying the branch in which I choose not to sign up.

Actually... you are. The physical implementation of making the choice involves shifting weight from not-signed-up branches to signed-up branches (note, the 'not-signed-up-yet' branch is defined in a way that lets it leak amplitude). That implementation is contained within you, and it involves processes we describe as applying operators on that branch which reduce its amplitude. This totally counts as destroying the branch.

Replies from: Zaq
comment by Zaq · 2013-10-10T21:58:04.992Z · LW(p) · GW(p)

Okay, we need to be really careful about this.

If you sign up for cryonics at time T1, then the not-signed-up branch has lower amplitude after T1 than it had before T1. But this is very different from saying that the not-signed up branch has lower amplitude after T1 than it would have had after T1 if you had not signed up for cryonics at T1. In fact, the latter statement is necessarily false if physics really is timeless.

I think this latter point is what the other posters are driving at. It is true that if there is a branch at T1 where some yous go down a path where they sign up and others don't, then the amplitude for not-signed-up is lower after T1. But this happens even if this particular you doesn't go down the signed-up branch. What matters is that the branch point occurs, not which one any specific you takes.

In other words, amplitude is always being seeped from the not-signed-up branch, even if some particular you keeps not leaving that branch.

comment by Doug_S. · 2008-06-03T14:46:46.000Z · LW(p) · GW(p)

Random question for Eliezer:

If, today, you had the cryopreserved body of Genghis Khan, and had the capacity to revive it, would you? (Remember, this is the guy who, according to legend, said that the best thing in life was to crush your enemies, see them driven before you, and hear the lamentation of the women.)

(As Unknown suggests, I'd rather have a "better" person exist in the future than have "me" exist in the future. What I'd do in a post-Singularity future is sign up to be a wirehead. The future doesn't need more wireheads.)

Replies from: gwern
comment by gwern · 2009-06-21T01:07:28.811Z · LW(p) · GW(p)

If, today, you had the cryopreserved body of Genghis Khan, and had the capacity to revive it, would you? (Remember, this is the guy who, according to legend, said that the best thing in life was to crush your enemies, see them driven before you, and hear the lamentation of the women.)

Absolutely. You could raise millions just from the historians of Europe & Asia who would kill to talk to Genghis Khan; and then there's all the genetic and medical research one could do on an authentic living/breathing person of a millennium ago. (More tests of Sapir-Whorf, anyone?)

comment by TGGP4 · 2008-06-03T14:47:26.000Z · LW(p) · GW(p)

Kaj Sotala, it seems you have stumbled upon the apocalyptic imperative.

comment by iwdw · 2008-06-03T14:53:06.000Z · LW(p) · GW(p)

@Ian Maxwell: It's not about the yous in the universes where you have signed up -- it's about all of the yous that die when you're not signed up. I.e., none of the yous that die on your way to work tomorrow are going to get frozen.

(This is making me wonder if anyone has developed a corresponding grammar for many worlds yet...)

comment by Brandon_Reinhart · 2008-06-03T15:24:53.000Z · LW(p) · GW(p)

I'm a member of Alcor. I wear my id necklace, but not the bracelet. I sometimes wonder how much my probability of being successfully suspended depends on wearing my id tags and whether I have a significantly higher probability from wearing both. I've assigned a very high (70%+) probability to wearing at least one form of Alcor id, but it seems an additional one doesn't add as much, assuming emergency response personnel are trained to check the neck & wrists for special case ids. In most cases where I could catastrophically lose one form of id (such as dismemberment!) I would probably not be viable for suspension. What do you other members think?

comment by David_Solomon · 2008-06-03T15:36:13.000Z · LW(p) · GW(p)

I can understand why creating a reconstruction of a frozen brain might still be considered 'you'. But what happens if multiple versions of 'you' are created? Are they all still 'you'? If I create 4 reconstructions of a brain and put them in four different bodies, punching one in the arm will not create nerve impulses in the other three. And the punched brain will begin to think different thoughts ('who is this jerk punching me?').

In that case, all 4 brains started as 'you', but will not experience the same subsequent thoughts, and will be as disconnected from each other as identical twins.

This is basically the first Parfit example, which I note you don't actually address. Is the 'you' on Mars the same as the 'you' on Earth? And what exactly does that mean if the 'you' on Earth doesn't get to experience the other one's sensations first-hand? Why should I care what happens to him/me?

comment by Constant2 · 2008-06-03T16:32:32.000Z · LW(p) · GW(p)

note that Parfit is describing thought experiments, not necessarily endorsing them.

I spy with my little eye something beginning with D.

comment by poke · 2008-06-03T16:39:38.000Z · LW(p) · GW(p)

I knew this was where we were headed when you started talking about zombies and I knew exactly what the error would be.

Even if I accept your premises of many-worlds and timeless physics, the identity argument still has exactly the same form as it did before. Most people are aware that atomic-level identity is problematic even if they're not aware of the implications of quantum physics. They know this because they consume and excrete material. Nobody who's thought about this for more than a few seconds thinks their identity lies in the identity of the atoms that make up their bodies.

Your view of the world actually makes it easier to hold a position of physical identity. If you can say "this chunk of Platonia is overlapping computations that make up me" I can equally say "this chunk of Platonia is overlapping biochemical processes that make up me." Or I can talk about the cellular level or whatever. Your physics has given us freedom to choose an arbitrary level of description. So your argument reduces to to the usual subjectivist argument for psychological identity (i.e., "no noticeable difference") without the physics doing any work.

Replies from: None
comment by [deleted] · 2020-01-10T15:16:33.865Z · LW(p) · GW(p)

How I wish this could be made into a giant disclaimer at the beginning of the posts of this entire sequence...

comment by Robin_Brandt · 2008-06-03T16:42:05.000Z · LW(p) · GW(p)

from http://www.edge.org/q2008/q08_5.html#smolin

"Other physicists argue that aspects of time are real, such as the relationships of causality, that record which events were the necessary causes of others. Penrose, Sorkin and Markopoulou have proposed models of quantum spacetime in which everything real reduces to these relationships of causality."

I guess Eliezer is already aware of these theories...

comment by michael_vassar3 · 2008-06-03T17:13:00.000Z · LW(p) · GW(p)

Kaj: No, more aren't born every minute; they are all simply there, and if one cannot tolerate vanishingly small frequencies or probabilities, then there will always be things other than your brain spontaneously configuring themselves into "your brain resolved to abandon those you had resolved to help" for every real or hypothetical "someone" you might resolve to help. For what it's worth, though, if "you" is the classical computation approximated by your neurons, then it isn't "you", in the personal-continuity-relevant sense, that does any given highly improbable thing. The causal relations that cause unlikely behaviors exist only in the configuration space of the universe. They differ from the causal relations that exist in the abstract deterministic computation that you probably experience being.

Frank: See Kaj

Eliezer: What's up with continuous physics from an infinite set atheist?

Unknown: 2 seems plausible but it's definitely not an argument that most people would accept

Will Pearson: Shut up and multiply. 150K/day adds up to about 3B after 60 years, which is a conservatively high estimate for how long we need. Heads have a volume of a few liters, call it 3.33 for convenience, so that's 10M cubic meters. Cooling involves massive economies of scale, as only surfaces matter. All we are talking about is, assuming a hemispherical facility, 168 meters of radius and 267,200 square meters of surface area. Not a lot to insulate. One small power plant could easily power the maintenance of such a facility at liquid nitrogen temperatures.
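
A quick sketch of that arithmetic, for anyone who wants to check it (the 3.33-litre head volume, the rounding to 3 billion, and the hemispherical shape are all assumptions from the estimate above, not measured figures):

```python
import math

heads = 150_000 * 365 * 60            # ~3.3 billion deaths over 60 years
head_volume_m3 = 3.33 / 1000          # assumed 3.33 litres per head

volume = 3e9 * head_volume_m3         # ~1e7 m^3, using the rounded 3B figure
radius = (3 * volume / (2 * math.pi)) ** (1 / 3)   # hemisphere: V = (2/3) * pi * r^3
surface = 3 * math.pi * radius ** 2   # curved dome (2*pi*r^2) plus flat floor (pi*r^2)

print(f"{heads:,} heads, radius ~{radius:.0f} m, surface ~{surface:,.0f} m^2")
# -> roughly 168 m of radius and ~267,000 m^2 of surface to keep insulated
```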

comment by Wiseman · 2008-06-03T17:14:54.000Z · LW(p) · GW(p)

Err, how can two copies of a person be exactly the same when the gravitational forces on each will be different? Isn't the very idea that you can transfer actual atoms in the universe to a new location, while somehow ensuring that this transfer doesn't deterministically guarantee being able to determine which person "caused" the copy to exist (i.e., the original), physical nonsense?

While molecules may not have invisible "unique ID" numbers attached to them, they are unique in the sense of quantum evolution, preserving the "importance" of one atom as distinguished from another.

comment by RobinHanson · 2008-06-03T17:22:06.000Z · LW(p) · GW(p)

Is there really anyone who would sign up for cryonics except that they are worried that their future revived self wouldn't be made of the same atoms and thus would not be them? The case for cryonics (a case that persuades me) should be simpler than this.

Replies from: bliumchik, None
comment by bliumchik · 2012-06-06T04:27:40.893Z · LW(p) · GW(p)

I agree. I'd be more worried about civilisation collapsing in the interim between being frozen and the point when people would have worked out how to revive me.

Replies from: Kenny
comment by Kenny · 2013-03-25T23:43:49.363Z · LW(p) · GW(p)

Why would you worry about that? Wouldn't you worry instead about the opportunity costs of signing-up for cryonics?

comment by [deleted] · 2020-01-10T15:11:32.341Z · LW(p) · GW(p)

I am. I am very concerned that an organization like Alcor might decide to perform a cheaper destructive scan and brain emulation rather than revival. I'm willing to choose my cryonics organization based on their willingness to abide by my wishes to be revived, not copied.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-03T17:33:26.000Z · LW(p) · GW(p)

@Kaj:

I find little comfort in the prospect of the "betrayal branches" being vanishingly few in frequency - in absolute numbers, their amount is still unimaginably large, and more are born every moment.

Kaj, you have to learn to take comfort in this. Not taking comfort in it is not a viable option.

I'm serious. Otherwise you'll buy lottery tickets because some version of you wins, make inconsistent choices on the Allais paradox, choose SPECKS over TORTURE...

Shut up and multiply. In a Big World there is no other choice.

comment by Unknown · 2008-06-03T17:55:59.000Z · LW(p) · GW(p)

Kaj didn't suggest that there is any other viable option. He suggested killing off the human race.

This strategy would fail too, however, since it would not succeed on every branch.

comment by Sebastian_Hagen2 · 2008-06-03T17:57:40.000Z · LW(p) · GW(p)

Is the 'you' on Mars the same as the 'you' on Earth?
There's one of you on earth, and one on mars. They start out (by assumption) the same, but will presumably increasingly diverge due to different input from the environment. What else is there to know? What does the word 'same' mean for you?

And what exactly does that mean if the 'you' on Earth doesn't get to experience the other one's sensations first hand? Why should I care what happens to him/me?
That's between your world model and your values. If this happened to me, I'd care because the other instance of myself happens to have similar values to the instance making the judgement, and will therefore try to steer the future into states which we will both prefer.

comment by Roland2 · 2008-06-03T17:58:01.000Z · LW(p) · GW(p)

Eliezer,

ok, thanks for the link.

comment by LazyDave · 2008-06-03T17:59:53.000Z · LW(p) · GW(p)

I have been seriously considering cryonics; if the MWI is correct, I figure that even if there is a vanishingly small chance of it working, "I" will still wake up in one of the worlds where it does work. Then again, even if I do not sign up, there are plenty of worlds out there where I do. So signing up is not so much an attempt to live forever as an attempt to line up my current existence with the memory of the person who is revived, if that makes any sense. To put it another way, if there is a world where I procrastinate signing up until right before I die, the person who is revived will have 99.9% of the same memories as someone who did not sign up at all, so if I don't end up signing up I do not lose much.

FWIW, I sent an email to Alcor a while ago that was never responded to, which makes me wonder if they have their act together enough to preserve me for the long haul.

On a related note, is there much agreement on what is "possible" as far as MWI goes? For example, in a classical universe, if I know the position/momentum of every particle, I can predict the outcome of a coin flip with 1.0 probability. If we throw quantum events into the mix, how much does this change? I figure the answer should be somewhere between (1.0 - tiny number) and (0.5 + tiny number).

comment by HalFinney · 2008-06-03T18:16:46.000Z · LW(p) · GW(p)

I wrote a comment this morning on the monthly open thread which addresses some of the questions that have been raised above, but I will copy it here since that is a stale thread.

A couple of people asked about the relationship between quantum randomness and the macroscopic world.

Eliezer wrote a long essay here, http://www.sl4.org/wiki/KnowabilityOfFAI, about (among other things) the difference between unpredictability of intelligent decisions, and randomness. Decisions we or someone else make may be unpredictable beforehand, but that doesn't mean they are random. It may well be that even for a close and difficult decision where it felt like we could have gone either way, that in the vast majority of the MWI branches, we would have decided the same way.

At the same time, it is clear that there would be at least some branches where we would have decided differently. The brain ultimately depends on chemical processes like diffusion that have a random component, and this randomness will be influenced by quantum effects as molecules interact. So there would be some quantum fluctuations that could cause neurons to behave differently, and ultimately lead to different brain activities. This means that at the philosophical level, we do face the fact that every decision we make goes "both ways" in different branches. Our decision making is then a matter of what fraction of the branches go which way, and our mental efforts can be thought of as maximizing the fraction of good outcomes.

It would be interesting to try to figure out the degree to which quantum effects influence other macroscopic sources of randomness. Clearly, due to the butterfly effect, storms will be highly influenced by quantum randomness. If we reset the world to 5 years ago and put every molecule on the same track, New Orleans would not have been destroyed in almost all cases. How about a coin flip? If it comes up heads, what fraction of the branches would have seen tails? My guess is that the major variable will be the strength with which the coin is thrown by the thumb and arm. At the molecular level this will have two influences: the actin and myosin fibers in the muscles, activated by neurotransmitter packets; and the friction between the thumbnail and the forefinger which determines the exact point at which the coin is released. The muscle activity will have considerable quantum variation in individual fiber steps, but there would be a huge number of fibers involved, so I'd guess that will average out and be pretty stable. The friction on the other hand would probably be nonlinear and chaotic, an avalanche effect where a small change in stickiness leads to a big change in overall motion. I can't come up with a firm answer on this basis, but my guess would be that there is a substantial but not overwhelming quantum effect, so that we would see close to a 50-50 split among the branches. I wonder if anyone has attempted a more quantitative analysis.

One thing I will add, I imagine that ping-pong ball based lottery machines would be substantially affected by quantum randomness. The many bounces will lead to chaotic behavior, sensitive dependence on initial conditions, and even very small randomness due to quantum effects during collisions will almost certainly IMO be amplified to produce macroscopically different circumstances after several seconds.

comment by iwdw · 2008-06-03T18:36:20.000Z · LW(p) · GW(p)

Is there really anyone who would sign up for cryonics except that they are worried that their future revived self wouldn't be made of the same atoms and thus would not be them? The case for cryonics (a case that persuades me) should be simpler than this.

I think that's just a point in the larger argument that whatever the "consciousness we experience" is, it's at a sufficiently high level that it does survive massive changes at the quantum level over the course of a single night's sleep. If worry about something as seemingly disastrous as having all the molecules in your body replaced with identical twins can be shown to be unfounded, then worrying about the effects of being frozen for a few decades on your consciousness should seem similarly unfounded.

comment by Patrick_(orthonormal) · 2008-06-03T18:59:46.000Z · LW(p) · GW(p)

Dave,

Well, if you resolve not to sign up for cryonics and if the thinking on Quantum Immortality is correct, you might expect a series of weird (and probably painful) events to prevent you indefinitely from dying; while if you're signed up for it, the vast majority of the worlds containing a later "you" will be the ones revived after a peaceful death. So there's a big difference in the sort of experience you might anticipate, depending on whether you've signed up.

comment by Roko · 2008-06-03T19:13:30.000Z · LW(p) · GW(p)

Just thought I'd mention that if one wants to consider Parfit's thought experiment (the brain scanner that non-destructively copies you) alongside the underlying quantum mechanical nature of reality, one has to remember the no-cloning theorem.

http://en.wikipedia.org/wiki/No_cloning_theorem

Thus, if you consider yourself to be a specific quantum state, Parfit's machine cannot possibly exist. Of course there are subtleties here, but I just thought I'd throw that in for people to consider.

comment by Nick_Tarleton · 2008-06-03T19:25:48.000Z · LW(p) · GW(p)

Also, if you count on quantum immortality alone, the measure of future-yous surviving through freakish good luck will be much smaller than the measure that would survive with cryonics. I'm not sure how this matters, though, because naive weighting seems to imply a very steep discount rate to account for constant splitting, which seems absurd.

comment by Kaj_Sotala · 2008-06-03T19:46:54.000Z · LW(p) · GW(p)

I suppose I'll just have to deal with it, then. Sigh - I was expecting there to be some more cheerful answer, which I'd just failed to realize. Vassar's response does help a bit.

comment by Wiseman · 2008-06-03T20:13:24.000Z · LW(p) · GW(p)

Kaj - there is a more cheerful answer. And this is it: Many-Worlds isn't true. Although Eliezer may be confident, the final word on the issue is still a long way off. Eliezer has been illogical in enough of his reasoning that there is reason to question that confidence.

comment by Nick_Tarleton · 2008-06-03T20:20:53.000Z · LW(p) · GW(p)

Only if Many-Worlds isn't true and the universe is finite or repeats with a finite period and Tegmark's ultimate ensemble theory is false. Personally, I find that prospect more disturbing for some reason.

comment by Kaj_Sotala · 2008-06-03T20:31:26.000Z · LW(p) · GW(p)

Wiseman: Yes, that's a possibility. But even if I only gave MWI a, say, 30% probability of being true, the thought of it being even that likely would continue to bother me. In order to avoid feeling the anguish through that route, I'd need to make myself believe the chance of MWI being true was far lower than what's rational. In addition to that being against my principles, I'm not sure it would be ethical, either - if MWI really is true, or even if there's a chance of it being true, then that should influence my behavior somehow, e.g. by avoiding having offspring so there'd at least be fewer sentients around to experience the horror of MWI (not that I'd probably be having kids pre-Singularity anyway, but that was the first example that came to mind - avoiding situations where I'm in a position to harm somebody else would probably also be good).

Thanks for trying to help, though.

comment by Wiseman · 2008-06-03T20:58:19.000Z · LW(p) · GW(p)

In that case I don't think MWI says anything we didn't already know: specifically, that 'stuff happens' outside of our control, which is something we have to deal with even in non-quantum lines of thought. Trying to make different choices upon acknowledging that MWI is true will probably result in no utility gain at all, since saying that x number of future worlds out of the total will result in some undesirable state is the same as saying, under Copenhagen, that the chance of it happening to you is x out of the total. And that lack of meaningful difference should be a clue as to MWI's falsehood.

In the end the only way to guide our actions is to abide by rational ethics, and seek to improve those.

comment by Recovering_irrationalist · 2008-06-03T21:35:03.000Z · LW(p) · GW(p)

I think the entire post makes sense, but what if...

Adam signs up for cryonics.

Brian flips a coin ten times, and in the quantum branches where he gets all tails he signs up for cryonics. Each surviving Brian makes a few thousand copies of himself.

Carol takes $1000 and plays 50/50 bets on the stock market till she crashes or makes a billion. Winning Carols donate and invest wisely to make a positive singularity more likely and a negative singularity less likely, and sign up for cryonics. Surviving Carols run off around a million copies each, but adjusted upwards or downwards based on how nice a place to live they ended up in.

Assuming Brian and Carol aren't in love (most of her won't get to meet any of him at the Singularity Reunion), who's better off here?
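
One crude way to put numbers on that question (a sketch only: it assumes fair bets, so Carol's chance of turning $1,000 into $1B is 1,000/1,000,000,000 by gambler's ruin; it uses 2,000 and 1,000,000 as stand-ins for the loose copy counts above; and it assumes copying multiplies future selves but not quantum measure, which is precisely the contested premise):

```python
# Comparing the three strategies by branch measure and by measure-weighted copies.
# Every number here is a placeholder for the loosely specified scenario above.

scenarios = {
    "Adam":  {"measure": 1.0, "copies": 1},                            # signs up, no tricks
    "Brian": {"measure": 0.5 ** 10, "copies": 2_000},                  # ten tails in a row
    "Carol": {"measure": 1_000 / 1_000_000_000, "copies": 1_000_000},  # fair-bet ruin odds
}

for name, s in scenarios.items():
    print(f"{name}: branch measure {s['measure']:.2e}, "
          f"measure-weighted copies {s['measure'] * s['copies']:.2f}")
```

On this accounting the three come out roughly even, so who counts as "better off" seems to turn entirely on whether you weight by measure alone or by measure times copies.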

comment by Recovering_irrationalist · 2008-06-03T21:39:13.000Z · LW(p) · GW(p)

(Assume Adam's a Xeroxphobe)

comment by Brandon_Reinhart · 2008-06-03T22:07:02.000Z · LW(p) · GW(p)

RI - Aren't Surviving Brian Copies [1-1000] each their own entity? Brian-like entities? "Who is better off" covers any Brian-like entities that managed to survive, any Adam-like entities that managed to survive, and any Carol-like entities that managed to survive. All in various infinite forms of "better off" based on lots of other splits from entirely unrelated circumstances. Saying or implying that Carol-Current-Instant-Prime is better off because more future versions of her survived than of Adam-Current-Instant-Prime seems mistaken, because future versions of Adam or Carol are all their own entities. Aren't Adam-Next-Instant-N and Adam-Current-Instant-Prime also different entities?

And isn't multiplying infinities by finite integers to prove values through quantitative comparison an exercise doomed to failure?

All this trying to compare the qualitative values of the fates of infinities of uncountable infinite-infinities seems somewhat pointless. Also: it seems to be an exercise in ignoring probability and causality to make strange points that would be better made in clear statements.

:(

I might just misunderstand you.

comment by David_Solomon · 2008-06-03T22:13:18.000Z · LW(p) · GW(p)

Sebastian:

I see your point that given the atoms are what they are, they are 'the same person', but can't get around the sense that it still matters on some level.

What if cryonics were phrased as the ability to create an identical twin from your brain at some point in the future, rather than 'you' waking up? If all versions of people are the same, this distinction should be immaterial. But do you think it would have the same appeal to people?

Suppose you do a cryonics brain scan and create a second version of yourself while you're still alive. Each twin might feel strong regard for the other, but there's no way they would actually be completely indifferent between pain for themselves and pain for their twin. They share a past up to a certain point, and were identical when created, but that's it. If another 'me' were created on Mars and then got a bullet in the head, this would be sad, but no more so than any other death. It wouldn't feel like a life-extending boon when he was created, nor a horrible blow to my immortality when he was destroyed. How is cryonics different from this?

comment by JulianMorrison · 2008-06-03T22:32:19.000Z · LW(p) · GW(p)

Quantum non-sameness of the configurations from moment to moment, and quantum absolute equality of "the same sorts of particles in the same arrangement" are both illustrative as extremes, but the question looks much simpler to me. Since I have every reason to suppose "the me of me" is informational, I can simply apply what I know of information: that it exists as patterns independent of a particular substrate, and that it can be copied and still be the original. If I'm copied then the two mes will start diverging and become distinguishable, but neither has a stronger claim.

comment by Patrick_(orthonormal) · 2008-06-03T22:36:23.000Z · LW(p) · GW(p)

David,

You're right not to feel a 'blow to your immortality' should that happen; but consider an alternate story:

You step into the teleport chamber on Earth and, after a weird glow surrounds you, you step out on Mars feeling just fine and dandy. Then somebody tells you that there was a copy of you left in the Earth booth, and that the copy was just assassinated by anti-cloning extremists.

The point of the identity post is that there's really no difference at all between this story and the one you just told, except that in this story you subjectively feel you've traveled a long way instead of staying in the booth on Earth.

Both of the copies are you (or, more precisely, before you step into the booth each copy is a future you); and to each copy, the other copy is just a clone that shares their memories up to time X.

comment by David_Solomon · 2008-06-03T22:41:21.000Z · LW(p) · GW(p)

Sebastian:

Take this as a further question. One of the key distinctions between the 'you you' and the 'identical twin you' is the types of sacrifice I'll make for each one. Notwithstanding that I can't tell you why I'm still the same person when I wake up tomorrow, I'll sacrifice for my future self in ways that I won't for an atom-exact identical twin.

If you truly believe that 'the same atoms means it's 'you' in every sense', suppose I'm going to scan you and create an identical copy of you on Mars. Would you immediately transfer half your life savings to a bank account only accessible from Mars? What if I did this a hundred times? If the same atoms make it the same person, why wouldn't you?

And if you don't really have the same regard for a 'copy' of yourself while you're still alive, why should this change when the original brain stays cryogenically frozen and a copy is created?

comment by Caledonian2 · 2008-06-03T22:54:45.000Z · LW(p) · GW(p)
If you truly believe that 'the same atoms means it's 'you' in every sense', suppose I'm going to scan you and create an identical copy of you on Mars. Would you immediately transfer half your life savings to a bank account only accessible from Mars?

Even assuming that I could confirm where my money was actually going, I don't think a copy of myself left on Mars would have much use for money. So, no.

comment by [deleted] · 2008-06-03T23:08:00.000Z · LW(p) · GW(p)
If you truly believe that 'the same atoms means it's 'you' in every sense', suppose I'm going to scan you and create an identical copy of you on Mars. Would you immediately transfer half your life savings to a bank account only accessible from Mars?

Absolutely, as there is a 50% chance that after the copy "I" will be the one ending up on Mars. If 100 copies were going to be made, I would be pretty screwed; I think I would move to a welfare state first :)

Alternatively, I would ask that they pick one of the copies at random and give him the money and kill the other 99. Of course, this would have the same effect as the copies never being made (in a sense).

comment by Recovering_irrationalist · 2008-06-03T23:19:00.000Z · LW(p) · GW(p)
Brandon: And isn't multiplying infinities by finite integers to prove values through quantitative comparison an exercise doomed to failure?

Infinities? OK, I'm fine with my mind smeared frozen in causal flowmation over countlessly splitting wave patterns but please, no infinite splitting. It's just unnerving.

comment by John_Faben · 2008-06-04T00:33:00.000Z · LW(p) · GW(p)

I get the feeling a lot of proponents of cryonics are a bit like those who criticize prediction markets, but refuse to bet on them. If you really believe that signing up for cryonics is so important, why aren't you being frozen now? Surely there are large numbers of branches in which your brain gets irretrievably destroyed tomorrow - if the reward for being frozen is so big, why wait?

comment by Phil_Goetz · 2008-06-04T01:25:00.000Z · LW(p) · GW(p)

The thought of I - and yes, since there are no originals or copies, the very I writing this - having a guaranteed certainty of ending up doing that causes me so much anguish that I can't help but thinking that if true, humanity should be destroyed in order to minimize the amount of branches where people end up in such situations. I find little comfort in the prospect of the "betrayal branches" being vanishingly few in frequency - in absolute numbers, their amount is still unimaginably large, and more are born every moment.
To paraphrase:

Statistically, it is inevitable that someone, somewhere, will suffer. Therefore, we should destroy the world.

Eli's posts, when discussing rationality and communication, tend to focus on failures to communicate information. I find that disagreements that I have with "normal people" are sometimes because they have some underlying bizarre value function, such as Kaj's valuation (a common one in Western culture since about 1970) that Utility(good things happening in 99.9999% of worlds - bad things happening in 0.0001% of worlds) < 0. I don't know how to resolve such differences rationally.

comment by Court · 2008-06-04T01:27:00.000Z · LW(p) · GW(p)

As a matter of historical coherence, as it were, see Nagarjuna's Mūlamadhyamaka-kārikā (Fundamental Verses of the Middle Way). Concerning the point that 'nothing happens,' you have more or less arrived at the same conclusions, though needless to say his version lacks the fancy mathematical footwork. I tend to think that your fundamental position regarding the physical nature of existence, insofar as I understand it, is probably correct. It's where you go from there that's a little more troubling.

Nagarjuna extrapolates from his views that via the Law of Karma we can reach Nirvana; Eliezer extrapolates from his views that via the Laws of Physics we can reach the Singularity. Both hold that their Law(s) do not require our assent; they continue to operate whether we believe in them or not, and furthermore, their operation is inevitable. I am very skeptical that this follows in either case.

As regards cryonics, it seems to me what Eliezer is doing is fairly simple: he's taking Pascal's Wager. Pascal wagered on gaining immortality via God, Eliezer wagers on gaining immortality via the Singularity. There's no harm in it, per se, any more than there was in Pascal's being a believing Christian. But one of the major fallacies in Pascal's Wager is the assumption that we know God's characteristics, e.g., that if I believe in Him, He will reward me with eternal life. The same fallacy seems to apply to Eliezer's Wager - even if the Singularity is true, how can we know its characteristics, e.g., that some future benevolent AI will re-animate his frozen brain?

Perhaps, Eliezer, you could in future posts fill in the gaps.

comment by Peter6 · 2008-06-04T02:30:00.000Z · LW(p) · GW(p)

"Will Pearson: Shut up and multiply. 150K/day adds up to about 3B after 60 years, which is a conservatively high estimate for how long we need. Heads have a volume of a few liters, call it 3.33 for convenience, so that's 10M cubic meters. Cooling involves massive economies of scale, as only surfaces matter. All we are talking about is, assuming a hemispherical facility, 168 meters of radius and 267,200 square meters of surface area. Not a lot to insulate. One small power plant could easily power the maintenance of such a facility at liquid nitrogen temperatures.

Michael Vassar - you've also assumed here that the number "150K/day" is going to remain constant over the next 60 years: it's going to increase.

I'm serious. Otherwise you'll buy lottery tickets because some version of you wins, make inconsistent choices on the Allais paradox, choose SPECKS over TORTURE...

Eliezer - I'm largely unconvinced by MWI, or at least by your interpretation of it. But I'm not going to try to argue it here.

You're a great writer, you're clever, and very quick. But you haven't got a clue about morality. Your torture-over-specks conclusion, and the line of argument which was used to reach it, are cripplingly flawed. And every time you repeat it, you delude minds.

comment by Nick_Tarleton · 2008-06-04T02:55:00.000Z · LW(p) · GW(p)

Phil: What makes you say negative utilitarianism is "a common view in Western culture since 1970"?

Pascal wagered on gaining immortality via God, Eliezer wagers on gaining immortality via the Singularity.... The same fallacy seems to apply to Eliezer's Wager - even if the Singularity is true, how can we know its characteristics, e.g., that some future benevolent AI will re-animate his frozen brain?

See Artificial Intelligence as a Positive and Negative Factor in Global Risk.

comment by Allan_Crossman · 2008-06-04T03:06:00.000Z · LW(p) · GW(p)

John Faben: "If you really believe that signing up for cryonics is so important, why aren't you being frozen now?"

I'm not sure anyone's claimed that cryonics is 100% guaranteed to work. So committing suicide just to get frozen would be an odd thing to do, given such uncertainty.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-06-04T03:56:00.000Z · LW(p) · GW(p)

Cryonicists have a saying: "Being cryonically suspended is the second worst thing that can happen to you."

comment by Ben_Wraith · 2008-06-04T04:03:00.000Z · LW(p) · GW(p)

Michael Anissimov raises a good question about post length. Eliezer, I think some of your posts could benefit from being shorter. You have to say what you need to, but people are more likely to read shorter blog posts.

Even before I'd read the series on quantum physics, I couldn't imagine worries about whether I'd still be the same person as a reason not to sign up for cryonics. My understanding was that all the atoms making up your body change many times in a lifetime anyway, and while that used to distress me, I wouldn't have seen it as a problem that would be exacerbated greatly by signing up for cryonics. The only reason I haven't signed up for cryonics yet is money, but hopefully I'll be able to overcome that soon.

comment by Erik_Mesoy · 2008-06-04T05:54:00.000Z · LW(p) · GW(p)

Something's been bugging me about MWI and scenarios like this: am I performing some sort of act of quantum altruism by not getting frozen since that means that "I" will be experiencing not getting frozen while some other me, or rather set of world-branches of me, will experience getting frozen?

comment by Sebastian_Hagen2 · 2008-06-04T07:28:00.000Z · LW(p) · GW(p)

What if cryonics were phrased as the ability to create an identical twin from your brain at some point in the future, rather than 'you' waking up. If all versions of people are the same, this distinction should be immaterial. But do you think it would have the same appeal to people?
I don't know, and unless you're trying to market it, I don't think it matters. People make silly judgements on many subjects; blindly copying the majority in this society isn't particularly good advice.

Each twin might feel strong regard for the other, but there's no way they would actually be completely indifferent between pain for themselves and pain for their twin.
Any reaction of this kind is either irrational, based on divergence which has already taken place, or based on value systems very different from my own. In real life, you'd probably get a mix of the first two, and possibly also the last, from most people.

If another 'me' were created on mars and then got a bullet in the head, this would be sad, but no more so than any other death. It wouldn't feel like a life-extending boon when he was created, nor a horrible blow to my immortality when he was destroyed.
For me, this would be a quantitative judgement: it depends on how much both instances have changed since the split. If the time lived before the split is significantly longer than that after, I would consider the other instance a near-backup, and judge the relevance of its destruction accordingly. Aside from the aspect of valuing the other person as a human like any other that also happens to share most of your values, it's effectively like losing the only (and somewhat out-of-date) backup of a very important file: No terrible loss if you can keep the original intact until you can make a new backup, but an increased danger in the meantime.

If you truly believe that 'the same atoms means its 'you' in every sense', suppose I'm going to scan you and create an identical copy of you on mars. Would you immediately transfer half your life savings to a bank account only accessible from mars? What if I did this a hundred times?
Maybe, maybe not, depends on the exact strategy I'd mapped out beforehand for what each of the copies will do after the split. If I didn't have enough foresight to do that beforehand, all of my instances would have to agree on the strategy (including allocation of initial resources) over IRC or wiki or something, which could get messy with a hundred of them - so please, if you ever do this, give me a week of advance warning. Splitting it up evenly might be ok in the case of two copies (assuming they both have comparable expected financial load and income in the near term), but would fail horribly for a hundred; there just wouldn't be enough money left for any of them to matter at all (I'm a poor university student, currently; I don't really have "life savings" in transferrable format).

comment by Will_Pearson · 2008-06-04T07:31:00.000Z · LW(p) · GW(p)

Michael Vassar, thanks for the start of the calculation. Shame you didn't actually finish it by giving the energy needed to maintain the temperature per square metre. This could be anywhere from 1 watt to 1000 watts; I don't personally have a good estimate of insulation/nitrogen loss at this temperature.

Taking into account how much energy will be needed to take 150k heads down to -200 degrees C would also be good. I am pressed for time, so I may not get around to it.

comment by Court · 2008-06-04T08:58:00.000Z · LW(p) · GW(p)

Nick,

Nothing about cryonics there. That was what I was referring to specifically in bringing up Pascal's Wager. Or am I missing something?

comment by Tim_Tyler · 2008-06-04T10:11:00.000Z · LW(p) · GW(p)

Re: "In reality, physics is continuous."

That has yet to be established.

The universe could turn out to be finite and discrete - e.g. see my site:

http://finitenature.com/

It is a confusion to argue from the continuity of the wave equation to the continuity of the underlying physics - since there is no compelling reason to think that the wave equation is the final word on the issue - and discrete phenomena often look continuous if you observe them from a sufficiently great distance - e.g. see lattice gases.

http://en.wikipedia.org/wiki/Loop_quantum_gravity is an example of a more modern discrete theory.

comment by Günther_Greindl · 2008-06-04T10:33:00.000Z · LW(p) · GW(p)

@Kaj: There are more cheerful prospects. I think you are still too much caught up in an "essence" of you which acts. There is no such thing. There is no dichotomy between you and the universe.

The anguish you feel is anguish about your own (the universe's!) suffering. Try to be happy; you will increase happiness overall.

Eastern philosophy helps; it merges well with materialism. You are only disturbed if you can't get rid of deeply-conditioned Western philosophical assumptions.

Reading recommendations:

Anything by Alan Watts (start with "The Way of Zen").
Raymond Smullyan's "The Tao is Silent"
Joseph Goldstein's "One Dharma: The Emerging Western Buddhism" is excellent also.

On my reading list (looks highly relevant) is this book:
Kolak, Daniel. "I Am You: The Metaphysical Foundations for Global Ethics"

Maybe you want to check that out too.

Some inspiration from Lao Tse's Dao de jing (verse two):

Under heaven all can see beauty as beauty only because there is ugliness.
All can know good as good only because there is evil.

Therefore having and not having arise together.
Difficult and easy complement each other.
Long and short contrast each other:
High and low rest upon each other;
Voice and sound harmonize each other;
Front and back follow one another.

Therefore the sage goes about doing nothing, teaching no-talking.
The ten thousand things rise and fall without cease,
Creating, yet not possessing.
Working, yet not taking credit.
Work is done, then forgotten.
Therefore it lasts forever.


Cheers,
Günther


comment by steven · 2008-06-04T13:10:00.000Z · LW(p) · GW(p)

I think Kaj's concerns are silly and I'm all for shutting up and multiplying, but is there a strong argument why the expected utility of better-than-death outcomes outweighs the expected negative utility of worse-than-death outcomes (boots stamping on human faces forever and the like)?

comment by Will_Pearson · 2008-06-04T22:13:00.000Z · LW(p) · GW(p)

Hmm, assuming Dewar levels of insulation* and a few other guesstimated numbers, like the energy required to create a litre of N2 (2 kWh/l), I got 7 litres lost per second and a 50 kW supply for energy.

* I'm not sure this is a safe assumption. A Dewar is fully sealed; we are putting 495,000 litres of material in per day.

It looks like the cost to freeze the heads would dwarf this as well: 137 litres per second of denser material with a higher specific heat capacity to cool down to liquid nitrogen temperatures. Probably up in the megawatt range, if not more. That's not taking into account travel energy costs or freezing costs while travelling.
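
As one concrete instantiation of this back-of-envelope exercise (every input below is a guess, and the wall heat-leak figure in particular could easily be off by an order of magnitude either way, which swings all the downstream numbers with it):

```python
# Rough maintenance and cool-down estimate for the hypothetical head store.
# All constants are assumptions: the surface area comes from the hemisphere
# upthread, the heat leak and liquefaction cost are ballpark guesses, and the
# tissue properties are generic textbook-ish values.

surface_m2 = 267_000                 # hemisphere surface from the earlier comment
heat_leak_w_per_m2 = 4.0             # assumed wall performance (good vacuum panels manage ~1 W/m^2)
ln2_heat_of_vap_kj_per_l = 161       # ~199 kJ/kg * 0.808 kg/l
liquefaction_kwh_per_l = 2.0         # guesstimated electrical cost to make a litre of LN2

leak_kw = surface_m2 * heat_leak_w_per_m2 / 1000               # heat flowing into the store
boiloff_l_per_s = leak_kw / ln2_heat_of_vap_kj_per_l           # litres of N2 boiled off per second
reliquefy_mw = boiloff_l_per_s * liquefaction_kwh_per_l * 3.6  # kWh/s -> MW of electrical input

# Cooling the daily intake (~495,000 litres of tissue) from body temperature to 77 K:
intake_kg_per_day = 495_000 * 1.05                                  # ~1.05 kg per litre of tissue
cooldown_mw = intake_kg_per_day * 3.5 * (310 - 77) / 86_400 / 1000  # c_p ~3.5 kJ/(kg*K)

print(f"boil-off ~{boiloff_l_per_s:.1f} l/s, re-liquefaction ~{reliquefy_mw:.0f} MW(e)")
print(f"cooling the daily intake ~{cooldown_mw:.1f} MW of heat removal (before refrigerator losses)")
```

With these particular guesses, both the steady-state boil-off replacement and the cool-down of the daily intake land in the megawatt-and-up range, and the answer is dominated by whatever you assume about the insulation and the liquefier efficiency.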

It would be interesting to see whether it is better to have 1 giant store or many smaller ones. Anyone up for brainstorming a design? I am not too interested in personal survival but if it can be done for minimal-ish cost it would be very worthwhile from an archival of humanity point of view.

comment by Nick_Tarleton · 2008-06-04T22:32:00.000Z · LW(p) · GW(p)

Court, that paper addresses the general question of what we can know about the outcome of the Singularity.

Something's been bugging me about MWI and scenarios like this: am I performing some sort of act of quantum altruism by not getting frozen since that means that "I" will be experiencing not getting frozen while some other me, or rather set of world-branches of me, will experience getting frozen?

Not really, since your decision determines the relative sizes of the sets of branches.

comment by Psy-Kosh · 2008-06-04T22:34:00.000Z · LW(p) · GW(p)

Eliezer: "my own insurance policy, with CI."? I thought you had said you were signed up as a neuro rather than full body. As far as I know, CI only does full body rather than neuro.

(Isn't neuro supposed to be better, anyways? That is, better chance of "clean" brain vitrification?)

comment by Aaron7 · 2008-06-05T00:05:00.000Z · LW(p) · GW(p)
Clearly, due to the butterfly effect, storms will be highly influenced by quantum randomness. If we reset the world to 5 years ago and put every molecule on the same track, New Orleans would not have been destroyed in almost all cases.

No, no, no. The vast majority will have New Orleans destroyed, but in slightly different ways. Yes, weather is chaotic, but it evolves in fairly set ways. The original Lorenz attractor is chaotic, but it has a definite shape that recurs.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-05-29T15:33:20.143Z · LW(p) · GW(p)

... therefore, you can predict that some hurricanes will occur in the area, but not precisely where they will be. The original statement is correct.

comment by Hopefully_Anonymous3 · 2008-06-12T08:09:00.000Z · LW(p) · GW(p)

I don't think you've proven what you claim to have proven in this post, but it might work as propaganda to increase cryonics enrollment, which should be good for both of us.
Specifically, I don't think it's clear that (1) current cryonics technology prevents information-theoretic death, (2) if I'm "revived" from cryonics in a way that fools the discernment technology of that era, I'm actually having a subjective conscious experience of being alive and conscious - and perhaps discernment technology 30 years later will tragically demonstrate otherwise, and show what could've been done differently to preserve me as a subjective conscious entity - or (3) future societies with the technology to revive us will choose to.

Separate from propaganda, I think 1-3 are important areas to focus on in terms of research and innovation. We don't want to be fooled by our own propaganda and thus fail to rationally maximize our persistence odds. We don't want to be prisoners of our own myths.

comment by Eymer · 2008-06-24T18:33:00.000Z · LW(p) · GW(p)

Question for Eliezer and everyone else :

Would you really not care about dying if you knew you had a full backup body (with an up-to-date version of your brain) just waiting to be woken up?

Replies from: ata
comment by ata · 2011-01-06T02:02:14.445Z · LW(p) · GW(p)

"up-to-date" as of the moment before I died? Yes, I would not mind.

comment by Jonii · 2009-10-08T19:46:33.028Z · LW(p) · GW(p)

Why does timeless physics require the absence of repeating? How would things change even if the universe repeated itself?

Replies from: ata, Viliam_Bur
comment by ata · 2011-01-08T01:44:59.024Z · LW(p) · GW(p)

Bumping an old comment because I was wondering this too.

comment by Viliam_Bur · 2011-09-16T00:10:04.144Z · LW(p) · GW(p)

How would things change even if the universe repeated itself?

Even then there would be no difference between a repeating and a non-repeating universe.

As an example, try to imagine 100 universes, each one exactly the same as ours, in every last detail. Is it somehow different from having only 1 universe? No. Even infinitely many universes, as long as they are exactly the same, don't make any difference.

Now try to imagine one universe that somehow (despite the second law of thermodynamics) repeats. It follows the same laws, so it repeats exactly the same way, in every last detail. Is it somehow different from a universe that runs only once? No.

Replies from: Baughn, jvdubois
comment by Baughn · 2012-10-24T09:50:53.108Z · LW(p) · GW(p)

I'm inclined to think "yes", actually. I think redundancy matters.

Before expounding on that, though, could you point me at any material that says it doesn't?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-10-24T14:10:31.610Z · LW(p) · GW(p)

I have no material that directly says that.

Indirectly, though, what experience do you expect if there are 100 universes exactly the same as ours in every detail, as opposed to if there is only 1 such universe?

Replies from: Baughn
comment by Baughn · 2012-10-24T18:51:32.166Z · LW(p) · GW(p)

The same, if that was the entirety of existence.

Since we're postulating multiple universes, that's probably pretty unlikely though. I would expect to have an increased probability of existing in a universe with more copies, proportionally to the number of copies.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-10-25T07:33:25.097Z · LW(p) · GW(p)

So, let's suppose there is a universe A which exists only once, and a universe B which exactly repeats forever. Those universes are different, but a situation of "me, now" can happen in both of them. (Both universes happen to contain the same configuration of particles in a limited space at some time.) Then I should expect to be in universe B, because that is infinitely more probable.

Unfortunately, I don't know whether I just wrote a nonsensical sequence of words, or whether there is some real meaning in them.

Replies from: Baughn
comment by Baughn · 2012-10-26T23:34:17.404Z · LW(p) · GW(p)

No, it sounds pretty meaningful to me.

I'm modeling this as if we have an (unbounded) computer executing all possible programs, some of which involve intelligences, usually embedded in a universe. The usual dovetailer model, that is.

In the case you described, there would be two programs involved. One computes my life once, and then halts. The second runs the same program as the first, but in an infinite loop. And yes, in this case I would expect to find myself in program B in most samples. (Although, as I wrote that, there's no way to tell the difference - make the obvious correction to fix that.)

I described it as all possible programs, though, which would certainly include things such as Boltzmann brains. The reason I don't see that as a problem is that the computational density of (my?) mind, strictly speaking, is what matters, not just the total number of instantiations, which over an infinite runtime is a nonsensical thing to ask about. Certainly there would be an infinite number of Boltzmann brains, but they're rare; much rarer than, say, a cyclic universe.

Well. That said, the apparent scarcity of life in this universe, as opposed to computation-hungry but boring things like stars, seems to be a decent counterargument. I'm not sure how it'd work out, really. :O

comment by jvdubois · 2013-10-28T17:39:11.853Z · LW(p) · GW(p)

I think this sentence does not make sense. If a universe has some configuration, then it IS the UNIVERSE. It does not make sense to say that there are 100 of them.

I imagine it like a sequence of numbers. There is 0, then there is 1, etc. It does not make sense to say that, if you have the sequence:

1,2,8,5,1

there are two different occurrences of the thing "number one". No matter how "many times" the number is used, it is still fundamentally the same number.

I think Eliezer himself used a very good example of how things work. Imagine that everything you know about our universe can be coded into a sequence of numbers. Everything - all its history, etc. Now, what meaning does it have, from inside this universe, that some powerful alien race with a lot of computing power can take this sequence and load it into the memory of some supercomputer? What if they load it and delete it twice, or a million times? What if they load it on two computers simultaneously? It does not matter from within the universe. It just is.

comment by David_Gerard · 2011-01-25T22:25:43.325Z · LW(p) · GW(p)

Do you know what it takes to securely erase a computer's hard drive? Writing it over with all zeroes isn't enough. Writing it over with all zeroes, then all ones, then a random pattern, isn't enough. Someone with the right tools can still examine the final state of a section of magnetic memory, and distinguish the state,

Minor note: this claim is obsolete and should not be used to make the point you're trying to make.

Peter Gutmann's original list of steps to erase a hard drive is obsolete. Gutmann himself is particularly annoyed that it appears to have taken on the status of a voodoo ritual. As that Wikipedia article notes, "There is yet no published evidence as to intelligence agencies' ability to recover files whose sectors have been overwritten, although published Government security procedures clearly consider an overwritten disk to still be sensitive. Companies specializing in recovery of damaged media (e.g., media damaged by fire, water or otherwise) cannot recover completely overwritten files. No private data recovery company currently claims that it can reconstruct completely overwritten data." Overwriting with random data is enough in practice in 2011, and was in 2008 for that matter.

Replies from: wedrifid
comment by wedrifid · 2011-01-26T00:45:10.031Z · LW(p) · GW(p)

Scientists have played with electron microscopes and established that in principle someone with the right tools could examine the final state of a section of magnetic memory and distinguish an earlier state. It's just that nobody has said tools in practice, and the engineering task of creating tools that work reliably for the job is an absolute nightmare.

One could argue that the quoted claim is technically correct.

Replies from: David_Gerard
comment by David_Gerard · 2011-01-26T01:52:18.081Z · LW(p) · GW(p)

Citation needed, one talkiing about hard disks as of 2008 at the earliest, or an equivalent magnetic problem.

A supporting claim needing to be stretched as far as "well, it's not technically false!" still strikes me as not being a good example to try to persuade people with.

Replies from: wedrifid
comment by wedrifid · 2011-01-26T02:14:07.435Z · LW(p) · GW(p)

Citation needed, one talkiing about hard disks as of 2008 at the earliest, or an equivalent magnetic problem.

I am reluctant to comply with demands for citations on something that is not particularly controversial and, more importantly, does not contradict the references you yourself provided. Apart from reading your own references (Gutmann and Wikipedia), you can look at the most substantial criticism of the idea that there are real-world agencies who could recover your overwritten data, the one by Daniel Feenberg.

Gutmann mentions that after a simple setup of the MFM device, bits start flowing within minutes. This may be true, but the bits he refers to are not from disk files, but pixels in the pictures of the disk surface. Charles Sobey has posted an informative paper "Recovering Unrecoverable Data" with some quantitative information on this point. He suggests that it would take more than a year to scan a single platter with recent MFM technology, and tens of terabytes of image data would have to be processed.

His general point is that while there has been some limited success with playing with powerful microscopes, the current process is so ridiculously impractical and unreliable that there is no chance any existing intelligence agency would be able to pull it off.

A supporting claim needing to be stretched as far as "well, it's not technically false!" still strikes me as not being a good example to try to persuade people with.

Not a position I have argued against, nor would I be inclined to.

Replies from: David_Gerard
comment by David_Gerard · 2011-01-27T11:00:57.365Z · LW(p) · GW(p)

Fair enough!

comment by TimFreeman · 2011-04-19T17:09:04.164Z · LW(p) · GW(p)

So what's timeless identity?

I read this article with the title "Timeless Identity", and there was a bunch of statements of the form "identity isn't this" and "identity isn't that", and at the end I didn't see a positive statement about how timeless identity works. Does the article fail to solve the problem it set out to solve, or did I read too fast?

Personally, I think the notion of identity is muddled and should be discarded. There is a vague preference about which way the world should be moved; there's presently one blob of protoplasm (wearing a badge with "Tim Freeman" written on it, as I write) that does a sloppy job of making that happen; and if cryonics or people-copying or an AI apocalypse or uploading happens, there will be a different number of blobs of something taking action to make it happen. The vague preference is more likely to be enacted if things exist in the world that are trying to make it happen, hence self-preservation is rational. No identity needed. The Buddhists are right -- there is a transient collection of skandhas, not an indwelling essence, so there is no identity, timeless or otherwise.

So I'm not concerned about the possibility of there being no such thing as timeless identity, but I am slightly concerned that either the article has something good I missed, or groupthink is happening to the extent that none of the upvoted comments on this article are screaming "The Emperor has no clothes!", and I don't know which.

Thanks for the pointer to Parfit's work. I've added it to my reading list. Upvoted the article because of the reference to Parfit and the idea that maybe the interminable debates on the various transhumanist mailing lists actually didn't make significant progress on the issue.

Nitpick 1: if the odds of actual implementations of cryonics working are less than 50%, then maybe most of those 150K deaths actually are unavoidable, on the average. One failure mode is cryonics not working because we will lose an AI apocalypse, for example.

Nitpick 2: If the forces that prevent food and clean water from getting to the dying children in Africa would also prevent delivery of cryonics, then we can't blame ignorant first-worlders for their deaths.

Nitpick 3: I think cryonics would still make just as much sense in a deterministic world, so IMO you don't have to understand quantum mechanics to properly evaluate it.

I call these nitpicks because the essence of the argument is that there are many, many avoidable deaths happening every day on the average, and I agree with that.

Replies from: shokwave
comment by shokwave · 2011-04-19T18:29:48.328Z · LW(p) · GW(p)

The Buddhists are right

I always cringe at statements like this. I'm quite familiar with the Buddhist notion of no self, but I don't think for a second that study of Buddhist philosophy would convince anyone that a cryonically frozen person will wake up as themselves - in fact, given the huge stretch of time between freeze and unfreeze, there is a strong (but wrong) argument from Buddhist philosophy that cryonics wouldn't work.

And so if it bears a superficial similarity but doesn't output the same answers ... it is about as right as a logic gate that looks like AND but performs ALWAYS RETURN FALSE.

Replies from: TimFreeman
comment by TimFreeman · 2011-04-19T22:53:48.017Z · LW(p) · GW(p)

I'm quite familiar with the Buddhist notion of no self, but I don't think for a second that study of Buddhist philosophy would convince anyone that a cryonically frozen person will wake up as themselves

If there is no self, then cryonics obviously neither works nor doesn't work at making a person wake up as themselves, since they don't have a self to wake up as. From this point of view, cryonics works if someone wakes up, and the person who originally signed up for cryonics would have preferred for that person to wake up over not having that person wake up, given the opportunity costs incurred when doing cryonics.

Cryonics is similar in kind to sleep or the passage of time in that way.

Whether most Buddhists are able to figure that out is another question. I agree that I'm not describing the Buddhist consensus on cryonics, and I agree that Buddhist philosophy does not motivate doing cryonics. My only points are that they're consistent, and that Buddhist philosophy frees me from urgently trying to puzzle out what "Timeless Identity" is supposed to mean.

I'm slightly concerned that the OP apparently doesn't say how timeless identity is supposed to work, and nobody seems to have noticed that.

Replies from: shokwave
comment by shokwave · 2011-04-20T06:45:20.515Z · LW(p) · GW(p)

I'm slightly concerned that the OP apparently doesn't say how timeless identity is supposed to work, and nobody seems to have noticed that.

The explanation of identity starts when he kicks off around the many-worlds heads diagram. Specifically the part that makes timeless identity work (as long as you accept most reductionist physical descriptions of identity - configurations of neurons and synapses and such) is this:

We also saw in Timeless Causality that the end of time is not necessarily the end of cause and effect; causality can be defined (and detected statistically!) without mentioning "time". This is important because it preserves arguments about personal identity that rely on causal continuity rather than "physical continuity".

Replies from: TimFreeman
comment by TimFreeman · 2011-04-20T12:44:01.454Z · LW(p) · GW(p)

Ah. The assumption that identity = consciousness was essential to recognizing that this was an attempt to answer the question of how timeless identity works. He only mentions identity = consciousness in passing once, and I missed it the first time around, so the problem was that I was reading too fast. Thanks.

If you need a notion of identity, I agree that identity = consciousness is a reasonable stand to take.

comment by handoflixue · 2011-05-25T00:31:47.007Z · LW(p) · GW(p)

"If cryonics were widely seen in the same terms as any other medical procedure, economies of scale would considerably diminish the cost"

To what degree are these economies of scale assumed? Is it really viable, both practically and financially, to cryogenically preserve 150,000 people a day?

Is there any particular reason to suspect that investing this sort of funding in to cryonics research is the best social policy? What about other efforts to "cure death" by keeping people from dying in the first place (for instance, those technologies that would be the necessary foundations for restoring people from cryonics in the first place)?

I see cryonics hyped a lot here, and in rationalist / transhuman communities at large, and it seems like an "applause light", a social signal of "I'm rationalist; see, I even have the Mandatory Transhumanist Cryogenics Policy!"

Replies from: Eliezer_Yudkowsky, ciphergoth, bcoburn
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2011-05-25T01:46:35.004Z · LW(p) · GW(p)

Liquid nitrogen is cheap, and heat loss scales as the 2/3 power of volume. Cryonically preserving 150,000 people per day would, I fully expect, be vastly cheaper than anything else we could do to combat death.
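
A toy illustration of that scaling claim, holding the shape of the facility and the quality of the insulation fixed (both idealizations):

```python
# Surface area (and hence steady-state heat leak) grows as volume^(2/3),
# so cooling cost per stored head falls off roughly as N^(-1/3).

def cooling_cost_per_head(n_heads: float) -> float:
    """Cooling cost per head, relative to a facility that stores one head."""
    return n_heads ** (2 / 3) / n_heads

for n in (1, 1_000, 1_000_000, 3_000_000_000):
    print(f"{n:>13,} heads stored -> relative per-head cooling cost {cooling_cost_per_head(n):.2g}")
```

So, before any fixed costs are even spread, the liquid-nitrogen upkeep per person shrinks rapidly as the stored population grows.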

comment by Paul Crowley (ciphergoth) · 2011-05-25T07:53:52.728Z · LW(p) · GW(p)

Could you tell us what you see in the way that cryonics is "hyped" that you would be less likely to see if people praised it simply because it was a good idea?

Replies from: handoflixue
comment by handoflixue · 2011-05-25T19:18:44.424Z · LW(p) · GW(p)

I would expect to see a rational discussion of the benefits and trade-offs involved, in such a way as to let me evaluate, based on my utility function, whether this is a good investment for me.

Instead, I primarily see almost a "reversed stupidity" discussion, combined with what seems like in-group signalling: "See all these arguments against cryonics? They are all irrational, as I have now demonstrated. QED cryonics is rational, and you should signal your conformity to the Rationality Tribe by signing up today!"

I can totally understand why it's presented this way, but it comes off as "hype" because I almost never encounter anything else. It all seems to just naively assume that "preserving my individual life at any cost is a perfectly rational decision." Maybe that really is all the thought that goes into it; if your utility function places a suitably high value on self-preservation, then there's not really a lot of further discussion required.

But I get the sense that there are deeper thoughts that just never get discussed, because everyone is busy fighting against the nay-sayers. There's a deep absence of arguments for cryonics, especially ones that actually take into consideration social policy, and what else could be accomplished for $200K.

(Eliezer hinted at it, with his comments about economies of scale, but it was a mere footnote, and quite possibly the first time I've seen anyone discuss the issue from that perspective even briefly)

Replies from: ciphergoth
comment by Paul Crowley (ciphergoth) · 2011-05-26T13:07:52.645Z · LW(p) · GW(p)

Looks like you've just found another way of saying "you're all irrational!" without providing evidence.

Replies from: handoflixue
comment by handoflixue · 2011-05-26T16:49:00.942Z · LW(p) · GW(p)

It's more that all the arguments I see are aimed at a different audience (cryonics skeptics). I do not take this as very strong evidence of irrationality. On the other hand, anyone who posts here, I take that as decent evidence of rationality, especially people like Eliezer. So I assume with a high probability that either the people espousing it have a different utility function than I do, or are simply not talking about the other half of the argument. I'm assuming that there is a rational reason, but objecting because I don't feel anyone is trying to rationally explain it to me :)

Loosely, in my head, there's the idea of a "negative" argument, which is just rebutting your opponent, or a "positive" argument which actually looks at the advantages of your position. I see hype, in-group signalling, and "negative" arguments. I'm interested in seeing some "positive" ones.

As far as evidence, I did actually just put up a post discussing specifically the "economies of scale" argument. It is thus far the only "positive" argument I've heard for it, aside from the (IMO) very weak argument of "who doesn't want immortality?" (I find it weak specifically because it ignores both availability and price, and glosses over how reliability is affected by those two factors as well)

Hopefully that was clearer!

comment by bcoburn · 2011-05-25T23:17:34.298Z · LW(p) · GW(p)

Mandatory link on cryonics scaling that basically agrees with Eliezer:

http://lesswrong.com/lw/2f5/cryonics_wants_to_be_big/

Replies from: handoflixue
comment by handoflixue · 2011-05-26T01:20:37.542Z · LW(p) · GW(p)

Unless modern figures have drifted dramatically, free storage would give you a whopping 25% off coupon.

This is based on the 1990 rates I found for Alcor. And based on Alcor's commentary on those prices, this is an optimistic estimate.

Source: http://www.alcor.org/Library/html/CostOfCryonicsTables.txt

  • Cost of cryogenic suspension (neuro-suspension only): $18,908.76
  • Cost of fund to cover all maintenance costs: $6,600
  • Proportional cost of maintenance: 25.87%
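
A quick arithmetic check of the "25% off coupon" reading of those figures (the two dollar amounts are quoted from the Alcor table above; the one-line calculation is mine):

```python
# Alcor 1990 neuro-suspension figures quoted above.
suspension_cost = 18_908.76   # one-time suspension procedure
maintenance_fund = 6_600.00   # fund whose income is meant to cover storage

total = suspension_cost + maintenance_fund
print(f"Maintenance share of total: {maintenance_fund / total:.2%}")
# -> 25.87%, i.e. even free storage would cut only about a quarter of the bill.
```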


I'd also echo ciphergoth's request for any sort of actual citation on the numbers in that post; the entire post strikes me as making some absurdly optimistic assumptions (or some utterly trivial ones, if the author was talking about neuro-suspension instead of whole-body...)

comment by orthonormal · 2011-06-14T18:59:35.544Z · LW(p) · GW(p)

The Ben Best Cryonics FAQ link is dead, or at least frozen.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2011-06-15T00:44:24.335Z · LW(p) · GW(p)

Added a link to a snapshot on the Internet Archive (the last snapshot is from 31 Dec 2009, so the original page has possibly been unavailable for some time now, but maybe not).

comment by someonewrongonthenet · 2012-08-15T19:05:31.072Z · LW(p) · GW(p)

Eliezer...the main issue that keeps me from cryonics is not whether the "real me" wakes up on the other side. Most smart people would agree that this is a non-issue, a silly question arising from the illusion of mind-body duality.

The first question is about how accurate the reconstruction will be. When you wipe a hard drive with a magnet, you can recover some of the content, but usually not all of it. Recovering "some" of a human, but not all of it, could easily create a mentally handicapped, broken consciousness.

Setting that aside, there is a second problem. If and when immortality and AI are achieved, what value would my revived consciousness contribute to such a society?

You've thus far understood that death isn't a bad thing when a copy of the information is preserved and later revived. You've explained that you are willing to treat consciousness much like you would a computer file - you've explained that you would be willing to destroy one of two redundant duplicates of yourself.

Tell me, why exactly is it okay to destroy a redundant duplicate of yourself? You can't say that it's okay to destroy it simply because it is redundant, because that also destroys the point of cryonics. There will be countless humans and AIs that will come into existence, and each of those minds will require resources to maintain. Why is it so important that your, or my, consciousness be one among this swarm? Isn't that...well...redundant?

For the same reasons that you would be willing to destroy one of two identical copies, you should be willing to destroy all the copies given that the software - the consciousness - that runs within is not exceptional among all the possible consciousnesses that those resources could be devoted to.

comment by Wei Dai (Wei_Dai) · 2012-12-20T09:56:37.397Z · LW(p) · GW(p)

A sentient brain constructed to atomic precision, and copied with atomic precision, could undergo a quantum evolution along with its "copy", such that, afterward, there would exist no fact of the matter as to which of the two brains was the "original".

On the other hand, an ordinary human brain could undergo 100 years worth of ordinary quantum evolution along with its "copy", and probably 99 out of 100 naive human observers would still agree which one is the "original" and which is the "copy". It seems there must be a fact of the matter in this case, or else how did they reach agreement? By magic?

Given that physical continuity is an obvious fact of daily life, in our EEA and now, why can't "caring about physical continuity" be a part of our preferences/morality? In other words, if the above specially constructed sentient brain were to host a human mind, it doesn't seem implausible that it would consider both post-evolution versions of itself to be less valuable "copies" (due to loss of clear physical continuity) and would choose to avoid undergoing such quantum evolution if it could. This "physical continuity" may not have a simple definition in terms of fundamental physics, but then nobody said our values had to be simple...

EDIT: I've expanded this criticism into a discussion post.

"Consider another range of possible cases: the Physical Spectrum. These cases involve all of the different possible degrees of physical continuity...

"In a case close to the near end, scientists would replace 1% of the cells in my brain and body with exact duplicates. In the case in the middle of the spectrum, they would replace 50%. In a case near the far end, they would replace 99%, leaving only 1% of my original brain and body. At the far end, the 'replacement' would involve the complete destruction of my brain and body, and the creation out of new organic matter of a Replica of me."

(Reasons and Persons, p. 234.)

Parfit uses this to argue against the intuition of physical continuity pumped by the first experiment: if your identity depends on physical continuity, where is the exact threshold at which you cease to be "you"?

Isn't this just a variant of the Sorites paradox? (I can use it to argue that identity can't have anything to do with synapse connections: suppose I destroy your synapses one at a time, where is the exact threshold at which you cease to be "you"?) I'm surprised at Parfit's high reputation if he made arguments like this one.

comment by Indon · 2013-04-17T23:28:21.335Z · LW(p) · GW(p)

Since you're a computer guy (and I imagine many people you talk to are also computer-savvy), I'm surprised you don't use file/process analogues for identity.

  • If I move a file's physical location on my hard drive, it's obviously still the same file, because it has handle and data continuity. This is analogous to existing in different locations, being expressed with different atoms.
  • If I change the content of the file, it's obviously still the same file, because it has handle and location continuity. This is analogous to changing over not-technically-time-but-causal-effect-chains-that-we-may-as-well-call-time-for-convenience.
  • If I delete the file (actually just removing its file handle in most modern systems) and use a utility to recover it, it's obviously still the same file, because it has location and data continuity. This is analogous to cryonics.

Identity is thus describable with three components: handle, data, and location continuity, only two of which are required at any given point. As for having just one:

  • If you have only handle continuity, you have two distinct objects with the same name.
  • If you have only data continuity, then you have duplicate work.
  • If you have only location continuity, you've reformatted.

All three of these single-continuity cases break file identity.
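
A toy formalization of that two-out-of-three rule, just to make the analogy concrete (the class, the threshold, and the examples are my own sketch of the scheme above, not anything canonical):

```python
from dataclasses import dataclass

@dataclass
class FileSnapshot:
    handle: str     # name/path the system knows the file by
    data: bytes     # contents
    location: int   # physical position on disk (e.g. starting block)

def same_file(before: FileSnapshot, after: FileSnapshot) -> bool:
    """Sketch of the rule above: identity survives if at least two of the
    three continuities (handle, data, location) are preserved."""
    preserved = sum([
        before.handle == after.handle,
        before.data == after.data,
        before.location == after.location,
    ])
    return preserved >= 2

original = FileSnapshot("notes.txt", b"hello", location=42)
moved    = FileSnapshot("notes.txt", b"hello", location=99)    # defragmented
edited   = FileSnapshot("notes.txt", b"hello!", location=42)   # contents changed
rewrite  = FileSnapshot("old.txt",   b"hello", location=99)    # only data continuity

print(same_file(original, moved))    # True  -- handle + data continuity
print(same_file(original, edited))   # True  -- handle + location continuity
print(same_file(original, rewrite))  # False -- one continuity is not enough
```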

As for cryonics, I would sign up if I could be convinced that I would not become obsolete or even detrimental to a society that resurrects me. And looking at some of the problems already being caused in my country merely by having a normally aging population, at the current rate of social development I don't even think it's a given that I could contribute meaningfully to society during the twilight of my at-present-natural life.

comment by Zaq · 2013-10-10T22:32:03.495Z · LW(p) · GW(p)

Eliezer, why no mention of the no-cloning theorem?

Also, some thoughts this has triggered:

Distinguishability can be shown to exist for some types of objects in just the same way that it can be shown not to exist for electrons. Flip two coins. If the coins are indistinguishable, then the HT state is the same as the TH state, and you only have three possible states. But if the coins are distinguishable, then HT is not TH, and there are four possible states. You can experimentally verify that the probabilities obey the latter situation, and not the former. And of course, you can experimentally verify that electron pairs obey the former situation, and not the latter. This is probably just because the coins are qualitatively distinct, while the electrons are not.
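
A quick simulation of the distinguishable-coin case (standard probability, nothing specific to this thread): two fair coins land with one head and one tail about half the time, which is what four equally likely ordered outcomes (HH, HT, TH, TT) predict; three equally likely unordered outcomes would predict 1/3 instead.

```python
import random

random.seed(0)
trials = 100_000
one_of_each = sum(
    random.choice("HT") != random.choice("HT")   # flip two distinguishable coins
    for _ in range(trials)
)
print(f"P(one head, one tail) ~ {one_of_each / trials:.3f}")
# ~0.500, matching the 4-ordered-state count; the 3-state count would give ~0.333.
```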

But it seems that if you did make a quantum copy (no-cloning theorem be damned!) then after a bit of interaction with the different environments, the two would become distinguishable (on the basis of developing different qualitative identities) and start behaving more like the coins than the electrons. In fact, if you're actually using the lightspeed limit then the reconstructed you would be several years younger, and immediately distinguishable from what the scanned you has since evolved into. At the time of reconstruction, the two are already acting like coins and not electrons. Does this break the argument? I'm not really sure, because the reconstructed you at the time of reconstruction would still be indistinguishable from the you at the time of scanning, if you could somehow get them both around at the same time.

Bonus! The reconstructed you could be seen to have a very qualitatively different time-evolution. The scanned you evolves throughout its entire history via a Hamiltonian which itself changes continuously as scanned-you moves continuously through your environment. Reconstructed you, however, has a clear discontinuity in its Hamiltonian at the time of reconstruction (the state is effectively instantly moved from one environment into a completely different environment). The state of the reconstructed you would still evolve continuously, it would just have a discontinuous derivative. So I'm not really sure if reconstructed you would fail to pass the bar of having a "continuity of identity" that a lot of people talk about when dealing with the concept of self. My gut says no, but I'm not sure why.
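
To state that last point a bit more precisely (my own sketch, using nothing beyond the textbook Schrödinger equation): if the Hamiltonian jumps at the moment of reconstruction,

$$ i\hbar\,\frac{d}{dt}\lvert\psi(t)\rangle = H(t)\,\lvert\psi(t)\rangle, \qquad H(t) = \begin{cases} H_{\text{old environment}}, & t < t_0 \\ H_{\text{new environment}}, & t \ge t_0, \end{cases} $$

then integrating over a shrinking window around $t_0$ shows $\lvert\psi(t_0+\epsilon)\rangle - \lvert\psi(t_0-\epsilon)\rangle \to 0$ (a bounded Hamiltonian can only change the state by an amount proportional to the window), while the derivative jumps by $-\tfrac{i}{\hbar}\,(H_{\text{new}} - H_{\text{old}})\,\lvert\psi(t_0)\rangle$. So the state is continuous with a kinked derivative, as the comment says.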

Replies from: deciplex
comment by deciplex · 2016-04-27T10:35:19.015Z · LW(p) · GW(p)

Eliezer, why no mention of the no-cloning theorem?

Indeed. It is disappointing to see this buried at the bottom of the page. I don't think the no-cloning and no-teleportation theorems have any serious implications for Eliezer's arguments for life extension (although they might have some implications for how he anticipates being recovered later). But they do have some implications for the ideas about identity presented here. Here is the relevant text:

Are you under the impression that one of these bodies is constructed out of the original atoms—that it has some kind of physical continuity the other does not possess? But there is no such thing as a particular atom, so the original-ness or new-ness of the person can't depend on the original-ness or new-ness of the atoms.

In fact, having read the entire QM sequence, I am not under the impression that I am made out of atoms at all! I am an ever-decohering configuration of amplitude distributions. Furthermore, since I know my configuration can never be decomposed and transmitted via classical means, I also know that the scanner/teleporter so defined can't possibly exist.

Now, if you want to talk about entangling my body at point A with some matter at point B and, via some additional information transmitted through normal channels, moving me from point A to point B that way - now we have something to talk about. But the original proposition, of a teleporter which can move me from point A to point B, but which can also, with some minor tweaking, be turned into a scanner that "merely" creates a copy of me at point A, is an absurdity. It is impossible to copy the configuration that makes up "me". The original classical teleporter kills the people who use it, because the configuration of amplitude constructed at point B can't possibly match, even in principle, the one destroyed at point A.
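
For readers who haven't met it, here is the standard linearity/unitarity sketch behind the no-cloning theorem that these comments invoke (a textbook argument, not anything specific to this thread). Suppose some unitary $U$ could copy arbitrary unknown states:

$$ U\bigl(\lvert\psi\rangle \otimes \lvert 0\rangle\bigr) = \lvert\psi\rangle \otimes \lvert\psi\rangle \quad \text{for all } \lvert\psi\rangle. $$

Since unitaries preserve inner products, applying this to any two states $\lvert\psi\rangle$ and $\lvert\varphi\rangle$ gives

$$ \langle\varphi\vert\psi\rangle = \langle\varphi\vert\psi\rangle^{2} \;\Longrightarrow\; \langle\varphi\vert\psi\rangle \in \{0, 1\}, $$

so only states drawn from one fixed orthogonal set can be copied. No device can duplicate an arbitrary unknown quantum state, which is why the "non-destructive scanner" version of the thought experiment can't be realized at the quantum level.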

comment by [deleted] · 2017-05-22T11:37:25.278Z · LW(p) · GW(p)

There is no situation where two identical objects can be observed in the same place at the same time.

If we ignore their physical location and watch the action unfold, time will split them the moment one is copied: their first experiences will differ, creating two different identities.

If we instead ignore location and observe them both at a single moment in time, it would be like looking at two identical photos of the same person: we would not be able to spot a difference in their identities unless we pressed the "Play" button again.

I assume there is no identity without time. And where there is time, there are no exact copies.

comment by SafeAtLast · 2018-01-05T15:48:22.074Z · LW(p) · GW(p)

I cannot experience what future me will experience, not even what past me experienced. I cannot experience what my hypothetical copy experiences. The configuration that leads to my identity is not important. The only thing I can value and preserve is what I experience now.

Why should I care about a copy of me? Why invest in a resurrected version of myself?

comment by Elias (Eliyoole) · 2022-02-22T02:20:42.285Z · LW(p) · GW(p)

If there's any basis whatsoever to this notion of "continuity of consciousness"—I haven't quite given up on it yet, because I don't have anything better to cling to—then I would guess that this is how it works.

Why "cling to"? It all adds up to normality, right? What you are saying sounds like someone resisting the "winds of evidence" (in this case added complexity, I am guessing).

I tried to come up with ways to explain my observations of consciousness, but they all seem incomplete too, so far. But I don't see how that impacts your argument here. I'm not saying "stop asking". I just don't see the reason to "cling" to this "notion of continuity".

And if you think there is a reason, and I don't see it, I am somewhat worried.

Best regards

Replies from: cole-killian
comment by Cole Killian (cole-killian) · 2022-12-10T21:20:30.043Z · LW(p) · GW(p)

My response is to say that sometimes it doesn't all add up to normality. Sometimes you learn something which renders your previous way of living obsolete.

It's similar to the idea of thinking of yourself as having free will even if it isn't the case: It can be comforting to think of yourself as having continuity of consciousness even if it isn't the case.

Wei Dai posts here (https://www.lesswrong.com/posts/uXxoLPKAdunq6Lm3s/beware-selective-nihilism [LW · GW]) suggesting that we "keep all of our (potential/apparent) values intact until we have a better handle on how we're supposed to deal with ontological crises in general". So basically, favor the status quo until you develop an alternative and understand its implications.

What do you think?