An argument for personal identity transfer.

post by Gadersd · 2020-12-12T21:03:13.552Z · LW · GW · 32 comments

This is a question post.

I am very concerned by the prevailing attitude towards cryonics and body preservation in general. As far as I can tell, people who reject these as worthwhile fall into two primary camps: those who think the probability of revival is too low to justify the monetary sacrifice, and those who believe personal identity is not transferred in the revival process. The first issue does not worry me much. Restoring brain function or some equivalent is an engineering problem, a practical problem. Monetary cost is an unfortunate problem, but it too is a practical one. The other issue, however, is more philosophical. Even if the technology to restore a preserved brain or upload it into a simulation becomes viable technologically and monetarily, people may still reject it for philosophical reasons. Practical problems can be solved through sufficient research and design, but philosophical problems may never go away.

Regarding synthetic brains or brain simulations, I have heard time and time again people claiming that any brain created in such a way will not have the same identity as the original. If someone's brain is scanned while he or she is alive and a synthetic or simulated brain is created and run, then I agree that two separate identities will form. The problem, I think, is that people imagine this particular situation and generalize its conclusion to all possible scenarios regardless of context. Obviously if the scan is performed after the original brain ceases to function there will not be any parallel consciousnesses to diverge from each other.

Some people will then argue that a synthetic brain or simulation cannot even in principle carry over the original consciousness, that personal identity is not transferred. I will try to provide here an informal sketch of a proof of the contrary: that personal identity, for all intents and purposes, can be transferred to a synthetic or simulated brain.

Assumptions:

#1 There is a brain device that manifests consciousness using neurons or some functional equivalent. It may be a natural biological brain, a synthetic brain, a simulated brain, or a mixture of these.

#2 There is a procedure that is to be performed on the brain device that will replace some neurons with functional equivalents, such that neurons in the unaltered regions of the brain device will not behave any differently over time in the presence of the replaced neurons than they would if no neurons were replaced, as long as the external stimuli (sight, touch, smell, etc.) are the same in both cases. This procedure, even if every neuron is replaced in one go, is completed faster than the individual neurons can react, so that it won't lag behind and cause syncing issues between the unreplaced and replaced neurons. For the case of uploading, one can imagine that neurons are removed and sensors are placed there to record what would have been the inputs to the removed neurons. A computer calculates what the outputs of the removed neurons would have been and sends this output to a biological interface connected to the remaining neurons.

#3 There is a placebo procedure that gives the subject the appearance of the actual procedure having been performed without any neurons actually being altered.

#4 There exists a number N such that if any N neurons of a brain device without any degraded consciousness are altered while not affecting any other neurons, then the brain device will not suffer any significant cognitive impairment. This basically means that a small portion of the brain device can be altered without a significant loss to consciousness or identity, even if those portions are completely removed.

~~#5 Science and observation is necessary and sufficient to evaluate claims regarding the physical world and the mind.~~

#5 Consciousness can observe and evaluate all aspects of itself relevant to itself.
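Assumption #2's notion of functional equivalence can be illustrated with a toy model (entirely hypothetical; the "units" and update rule are stand-ins for illustration, not a claim about real neurons): replacing one unit with a functionally equivalent one leaves every other unit's behavior unchanged under the same stimulus.

```python
def run(units, stimulus, steps=5):
    """Synchronously update a tiny network of units for a few steps."""
    state = {name: 0.0 for name in units}
    for _ in range(steps):
        # Every unit reads the previous state snapshot, as in assumption #2.
        state = {name: f(state, stimulus) for name, f in units.items()}
    return state

original = {
    "a": lambda s, x: x,                # sensory unit: passes the stimulus on
    "b": lambda s, x: 2 * s["a"] + 1,   # the unit to be replaced
    "c": lambda s, x: s["b"] - s["a"],  # downstream unit we observe
}

# Swap "b" for a "synthetic" unit computing the same input-output map.
replaced = dict(original)
replaced["b"] = lambda s, x: s["a"] + s["a"] + 1  # functionally equivalent

out_original = run(original, stimulus=3.0)
out_replaced = run(replaced, stimulus=3.0)
```

Under the same stimulus, the observed unit "c" behaves identically in both networks, which is what the argument relies on: the unaltered parts cannot register any difference.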

Proof:

Suppose the procedure is performed on N neurons of the original brain device. By #4 the subject does not incur any significant impairment. The subject does not notice any degradation in consciousness or identity or any change at all compared with the placebo procedure, for if it did then it would cause a behavior change to reflect this which is impossible since the replaced neurons are functionally equivalent to the originals and the unaltered neurons will behave the same as if no neurons were replaced.

There is not, even in principle, a method for observing a degradation of consciousness or identity after N neurons are replaced by the procedure, since the replaced neurons are functionally equivalent to the originals. If the subject noticed any change whatsoever then the subject could, for example, raise a finger to signify this. But the subject's behavior is the same whether the actual procedure or the placebo was carried out. As long as the subject is given the same external sensory information, the subject cannot distinguish which procedure took place. From an internal point of view the consciousness cannot distinguish any degradation or change of any kind in itself. By #5, there must not have been any alteration relevant to consciousness. Assuming that identity is an aspect of consciousness, there is no degradation of either.

Assume that the procedure will not degrade the mind if performed on kN neurons, where k is some positive integer. Suppose the procedure is performed on kN neurons of the original brain device. The resulting brain device does not have degraded consciousness. Perform the procedure on an additional N neurons with a negligible lapse in time since the former replacement. By assumption #4, altering N neurons of a non-degraded brain device will not cause any significant effect on its mind, so the mind is still capable of evaluating any potential changes to its consciousness. Furthermore, since the N neurons just replaced are functionally equivalent to the originals, the behavior of the brain device cannot differ from that under the placebo procedure that gives the subject the appearance that the N neurons were replaced. Since the behavior is indistinguishable from the placebo, the subject cannot have noticed a change or degradation in consciousness, for if it had, a difference in its behavior would signify this. As explained previously, there is no method even in principle for the subject to observe any degradation since its behavior is unaltered in any case. By #5, the procedure of replacing (k + 1)N neurons will not cause any degradation or change of consciousness or identity.

By mathematical induction, the procedure performed on kN neurons will not cause any degradation to consciousness or identity for all positive integers k where kN is less than or equal to the total number of neurons in the brain device.
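Stated compactly, with D(m) as shorthand (my notation, not part of the original argument) for "replacing m neurons at once causes no observable degradation of consciousness or identity", the induction has the shape:

```latex
\[
D(N) \;\land\; \forall k \ge 1 \,\bigl( D(kN) \rightarrow D((k+1)N) \bigr)
\;\;\Longrightarrow\;\;
\forall k \ge 1 \; D(kN).
\]
```

The two preceding paragraphs supply the base case and the inductive step respectively; the conclusion then follows for every multiple of N up to the whole brain.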

I do not know how high N can be for human brains, but based on brain damage survivors it is likely to be quite high. N is at least 1. Therefore any number of neurons can be replaced by the procedure in a single iteration without any observable degradation. This implies that the entire brain device can be replaced in one go without any degradation.

This informal proof can be made much more general and rigorous, for example by replacing closed volume regions instead of individual neurons, since the brain uses more than just neurons to function. Regions could be replaced with devices that interact with the region boundaries in functionally the same way as the original material. One can go into arbitrary detail and specialize the argument for cryonically preserved people, but I think the general point of the argument is clear. The argument can be extended to neurons that have partially random behavior. The conclusion would be the same regardless.

Imagine that someone developed such a procedure. How would one evaluate the claim that the procedure does or does not degrade consciousness or identity? A philosophical or metaphysical system could be applied to generate an absolute conclusion. But how could one know that the philosophical or metaphysical system used corresponds to the actual universe and actual minds? Observation must decide this. If one accepts that different philosophies of mind with different conclusions each have a probability of being true, then observation must be what narrows down the probabilities. If one is less than certain of one's own philosophical conviction, then one must observe to decide. My proof was a thought experiment of what would occur if one were to experimentally test whether the procedure affects consciousness. Consciousness itself is used as the standard for evaluating claims regarding consciousness.

Do you all find this reasonable? Crucially, do you all think this might convince the people who reject synthetic and simulated brains for philosophical reasons not to choose death for the sake of philosophy? Dying for philosophy is, in my opinion, no better than dying for religious dogma. Science, observation, and grounded reason should be the evaluators of physical and mental claims, as I hope my arguments reflect.

Update

After reading the comments and thinking over the matter I can see how people can justifiably disagree with this.

I used the term consciousness vaguely. Replacing a part of the brain with a functional equivalent does not alter the future behavior of the neurons that are unreplaced. However, the unaltered part of the brain not being able to tell a difference does not necessarily imply that consciousness was not altered. One can conceive that the removed part had consciousness inherent in it that may not be manifested in the same way in the new replacement part even though the rest of the brain does not react differently.

A corpus callosotomy severs the connection between the two halves of the brain. People seem to retain consciousness after the procedure and each side of the brain then acts independently, presumably with independent consciousness. This implies that consciousness is manifested throughout the brain.

If the right side of the brain is replaced with a synthetic one that interacts with the left side of the brain in the same way, then the left side doesn't notice a difference. However, the left side does not necessarily know if the consciousness in the right side is now manifested in the same way or manifested at all.

Answers

answer by Stuart Anderson · 2020-12-13T20:49:55.293Z · LW(p) · GW(p)

-

comment by Gadersd · 2020-12-13T21:00:35.236Z · LW(p) · GW(p)

The philosophical problems people have with identity may seem silly, but many people are affected by it. Some people who may otherwise have no problems with cryonics or other preservation techniques will choose guaranteed death because they don't intuitively think their consciousness will persist. That is why I think it is so significant. People who doubt for practical reasons can be convinced in time if the technology comes up to speed, but those who deny it for philosophical reasons may never be convinced regardless of technological advancements.

An easy practical way to verify that the recreation process was done correctly is to revive someone shortly after they die. Their close friends and family members could perform a sort of Turing test to see whether the revived person matches the person they knew. The likelihood of something going significantly wrong in the process without the close friends and family members noticing would be very small. This method should be convincing as long as one takes a practical and empirical perspective on identity.

Replies from: stuart-anderson
answer by ChristianKl · 2020-12-13T14:43:04.312Z · LW(p) · GW(p)

> #5 Science and observation is necessary and sufficient to evaluate claims regarding the physical world and the mind.

We clearly know that this is false. Heisenberg's uncertainty principle clearly demonstrates that there are claims about the physical world that we can't evaluate as true or false through observation and science.

Eliezer wrote in the sequences about how he believes it to be false with regards to the Many-World Hypothesis. 

If observation is not a sufficient standard, then what is?

Eliezer wrote the sequences to lay out a standard. 

comment by Gadersd · 2020-12-13T17:09:21.922Z · LW(p) · GW(p)

Ok, #5 was a bit strong for this, though I must argue that Heisenberg's uncertainty principle itself was discovered through observation. Using a claim justified by observation and experiment to undermine the sufficiency of observation with regards to evaluating claims in general seems off to me.

If a change or thing has no observable effects, how can one claim that that change or thing exists? Eliezer himself believes that the Many-Worlds Hypothesis has observable effects, namely anthropic immortality, which can be tested if one is willing.

Bayesian updating works by updating priors with observation. As Eliezer has mentioned, all possibilities should be given probability greater than 0. Claiming that observation is insufficient in general to evaluate claims implies that there are true statements that are literally impossible to justify beyond a priori belief which requires that one must always to some extent appeal to a belief that is never further justified. I personally don't accept this.

Replies from: ChristianKl
comment by ChristianKl · 2020-12-13T19:47:38.346Z · LW(p) · GW(p)

> Using a claim justified by observation and experiment to undermine the sufficiency of observation with regards to evaluating claims in general seems off to me.

Proof by contradiction is a standard way to make proofs in mathematics.

> true statements that are literally impossible to justify beyond a priori belief which requires that one must always to some extent appeal to a belief that is never further justified. I personally don't accept this.

The fact that there are truths that one can't verify means that there are things one doesn't know (and Gödel showed that on a very fundamental level). Being a skeptic and not believing things without evidence means that there are things that one doesn't know and can't know.

Replies from: Gadersd
comment by Gadersd · 2020-12-13T20:36:27.128Z · LW(p) · GW(p)

> Heisenberg's uncertainty principle clearly demonstrates that there are claims about the physical world that we can't evaluate as true or false through observation and science.

What you are saying implies, for example, that a particle's momentum has a precise value but cannot be known by observation if the particle's position is known with certainty. How do you know this? It could just as well be that the particle's position and momentum are mutually exclusive to a degree such that if the position is known with high certainty, then the momentum does not have a precise objective value. This is the standard view of most physicists. Rejecting this requires there to be non-local effects such as in Bohmian Mechanics.

Gödel proved that formal systems of sufficient expressive capabilities cannot prove all true statements regarding themselves. Relatedly, there are possible situations where people cannot know their future actions because they could have the determination to do the opposite of what an oracle machine that analyzes their brain may say, to guarantee that the oracle is wrong. This is a limitation or feature of systems with sufficient recursive capability. This says nothing of what can be known in general. An outside observer could analyze the oracle machine and subject system and know the subject's future action as long as the outside observer does not interfere with the system and become bound up in it. A personal knowledge limitation is not an absolute limitation, whether it be a formal system or person. What is unprovable in the domain of one formal system need not be unprovable by other formal systems.

Not all theorems can be proven with respect to a single formal system or person, but the key word here is "proven." Any mathematical claim can be justified by observation. One can test a theorem by testing many cases. One can for example test whether the addition of even numbers always results in an even number by trying out many cases. With each case one's probability belief of the theorem being true increases. The halting problem, related to Gödel's incompleteness theorems, can be solved in the limit this way. A computer can run a program and assume that it does not halt. If the program does halt then the computer changes its claim. This way the computer is guaranteed to be right eventually, but it is unknown how long it will take for it to be correct. This corresponds to Bayesian updating where knowledge is increased throughout time with observation. One converges to correctness in the limit.
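The two ideas in this paragraph, raising confidence by testing cases and guessing the halting problem "in the limit", can be sketched in code (a toy model; the function names and numbers are illustrative, not from the comment):

```python
# 1. Raising confidence in "even + even = even" by checking many cases.
all_even = all((2 * a + 2 * b) % 2 == 0 for a in range(100) for b in range(100))

# 2. Guessing the halting problem in the limit: claim "does not halt"
#    until the simulated program is observed to halt, then switch.
def guesses_in_the_limit(halts_at, observe_steps):
    """halts_at: step at which the simulated program halts, or None if never.
    Returns the sequence of provisional claims, one per observed step."""
    claims = []
    for step in range(1, observe_steps + 1):
        if halts_at is not None and step >= halts_at:
            claims.append("halts")          # the claim is now correct forever
        else:
            claims.append("does not halt")  # provisional, correct in the limit
    return claims

# A program that halts at step 50: the guesses are wrong at first,
# but from step 50 onward they are correct for all time.
claims = guesses_in_the_limit(halts_at=50, observe_steps=100)
```

The catch, as the comment says, is that no finite prefix of the guess sequence tells you whether the current claim is final.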

Besides, my argument was regarding claims of the universe and mind, not mathematics. If you have a better way than experimentation and observation to justify claims of consciousness and identity, then I would be ecstatic to hear it. Justifying worldly and consciousness claims with complexity and a priori probabilities is fine and even necessary as a starting point, but if there is no way even in principle to further justify them, then I am skeptical. Even math, which people say is above observation itself, can be justified in the limit by observation.

Replies from: ChristianKl
comment by ChristianKl · 2020-12-13T20:54:27.136Z · LW(p) · GW(p)

There are things we don't know. There are questions where we don't know the answer. Both saying "I know that identity survives cryonics" and saying "I know that identity doesn't survive cryonics" require justification. The position of not knowing doesn't.

Replies from: Gadersd
comment by Gadersd · 2020-12-13T21:25:44.105Z · LW(p) · GW(p)

How do you know there are things you cannot know eventually?

Replies from: ChristianKl
comment by ChristianKl · 2020-12-14T13:36:48.755Z · LW(p) · GW(p)

Gödel. And no, the halting problem is separate from Gödel's arguments.

Replies from: Gadersd
comment by Gadersd · 2020-12-14T14:28:49.089Z · LW(p) · GW(p)

Gödel established fundamental limits on a very specific notion of "knowing": a proof, that is, a sequence of statements that together justify a theorem to be true with absolute certainty.

If one relaxes the definition of knowing by removing the requirement of absolute certainty within a finite time, then one is not so restricted by Gödel's theorem. Theorems regarding nonfractional numbers, such as those Gödel used, can be known to be true or false in the limit by checking whether the theorem holds for each number.

Theorems of the nature "there exists a number x such that" can be initially set false. If such a theorem is true then one will know eventually by checking each case. If it is not true, then one is correct from the start. Theorems of the nature "for all numbers x P(x) holds" can be initially set true. If such a theorem is false then one will know eventually by checking the cases. If such a theorem is true then one is correct from the start.

The limitation here is absolute certainty within a finite time. One can be guaranteed to be correct eventually, but not know at which point in time correctness will occur.
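The default-then-flip scheme described above can be sketched as follows (hypothetical helper names; a sketch of the idea, not a full decision procedure):

```python
def exists_in_the_limit(predicate, cases):
    """'There exists x such that P(x)': start with False, flip on a witness.
    Returns the verdict held after each checked case."""
    verdict, history = False, []
    for x in range(cases):
        if predicate(x):
            verdict = True
        history.append(verdict)
    return history

def forall_in_the_limit(predicate, cases):
    """'For all x, P(x)': start with True, flip on a counterexample."""
    verdict, history = True, []
    for x in range(cases):
        if not predicate(x):
            verdict = False
        history.append(verdict)
    return history

# A true existential: once the witness x = 7 is found, the verdict is
# correct forever. A false universal: the counterexample x = 10 flips it.
ex = exists_in_the_limit(lambda x: x == 7, 20)
fa = forall_in_the_limit(lambda x: x < 10, 20)
```

In both cases the verdict is guaranteed to be correct from some point on; what cannot be known in advance is when that point has been reached.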

answer by Rafael Harth · 2020-12-14T15:41:11.012Z · LW(p) · GW(p)

My honest take on this is that it's completely missing the point. There is an assumption here that your future self shares an identity with your current self that other people don't, which is called Closed Individualism. People tend to make this assumption without questioning it, but personally, I assign it < 1% of being true.

I think it's fair to say that, if you accept reasoning of the kind you made in this post (which I'm not claiming is wrong), you can prove arbitrarily absurd things about identity with equal justification. Just imagine a procedure for uploads that does not preserve identity, one that is perfect and does, and then gradually change one into the other. Either identity is a spectrum (???), or it can shift on a single atom (which I believe would contradict #4).

The hypothesis that you share identity with everyone (Open Individualism) is strictly simpler, equally consistent with everyday experience, has no compatibility issues with physics, and is resistant to thought experiments.

I'm not saying that Open Individualism is definitely true, but Closed Individualism is almost certainly not true, and that's enough to be disinterested in cryonics. Maybe you share identity with your upload, maybe you don't, but the idea that you share identity with your upload and not with other future people is extremely implausible. My impression is that most people agree with the difficulty of justifying Closed Individualism, but have a hard-coded assumption that it must be true and therefore think of it as an inexplicably difficult problem that must be solved, rather than drawing the conclusion that it's untrue.

comment by Gadersd · 2020-12-14T16:04:23.438Z · LW(p) · GW(p)

"There is an assumption here that your future self shares an identity with your current self that other people don't, which is called Closed Individualism."

I actually wrote the argument for people who believe in Closed Individualism. I myself subscribe to Open Individualism. The purpose was to convince people who subscribe to Closed Individualism to not reject cryonics on the basis that their identity will be lost. Some people, even if revived after cryonics, may worry that their identity has fundamentally changed which can lead to an existential crisis.

"I'm not saying that Open Individualism is definitely true, but Closed Individualism is almost certainly not true, and that's enough to be disinterested in cryonics."

Why would believing Open Individualism to be true cause disinterest in cryonics? I would be ecstatic to continue working on what I love after my natural lifespan has ended. A few centuries of life experience could give so much depth to art that cannot be gained through only 80 years.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2020-12-14T16:18:43.199Z · LW(p) · GW(p)

That's interesting, both because I wouldn't expect an Open Individualist to be interested in cryonics, and because I wouldn't expect an OI to make this argument. Do you agree that you could prove much stronger claims about identity with equal validity?

It feels strange to me, somewhat analogous to arguing that Bigfoot can't do magic while neglecting to mention that he also doesn't exist. But I'm not saying that arguing under an assumption you don't believe in isn't valuable.

> Why would believing Open Individualism to be true cause disinterest in cryonics? I would be ecstatic to continue working on what I love after my natural lifespan has ended.

I enthusiastically agree with Eliezer Yudkowsky that the utilitarian argument against cryonics is weak under the assumption of Closed Individualism. Even committed EA's enjoy so many luxuries that there is no good reason why you can't pay for cryonics if that's what you value, especially if it helps you live with less fear (in which case it's an investment).

However, if you're an open individualist, there is no reason to be afraid anyway, so I don't see why you would spend the ~$200,000 on cryonics when you can use it for higher-priority causes instead. I don't have any moral qualms with it, I just don't see the motivation. I don't think I'm happy or smart enough for it to be worth it, and I don't really care if my identity is preserved in this particular form. I just care about having positive experiences.

(I still approve of advertising cryonics for practical reasons. It may change the behavior of powerful people if they believe they have skin in the game.)

Replies from: Gadersd
comment by Gadersd · 2020-12-14T17:29:54.896Z · LW(p) · GW(p)

"It feels strange to me, somewhat analogous to arguing that Bigfoot can't do magic while neglecting to mention that he also doesn't exist."

I assumed that the assumptions used would resonate with people. I used to believe in a rigid, soul-like concept of identity when I was a child, likely stemming from my religious upbringing. Thinking of an argument similar to what I wrote is what relaxed my once rigid view of identity.

"...I don't really care if my identity is preserved in this particular form. I just care about having positive experiences."

I think this is where we differ. I don't value my life mostly for positive experiences. There are many others who enjoy the same things I enjoy and their experience is no more or less valuable in an objective sense than mine. However, I value the unique things that I can imprint on the world. Others may create art similar to mine, but it is unlikely to be exactly the same. The potentiality of the universe is limited if I am not around to affect things. The more unique agents doing unique things the more personally interesting variation there is in the universe. I care about potentiality and variation. What I most despise in the universe is a loss of potential. Forced or chosen death is a loss of potential. I can accept chosen death, but only from ones who are fully aware of the consequences such as you. I cannot accept death chosen from ignorance, which is why I exert so much effort trying to convince people that identity is not such a rigid matter.

One can argue that potentiality is relative, that the ceasing of myself would allow others to do things that they would not be able to do if I were alive. When I say potentiality I mean potential variations of the universe that excite me personally.

answer by Polytopos · 2020-12-15T14:50:11.236Z · LW(p) · GW(p)

I can't say anything on this subject that Derek Parfit didn't say better in Reasons and Persons. To my mind, this book is the starting point for all such discussions. Without awareness of it, we are just reinventing the wheel over and over again.

answer by Viliam · 2020-12-17T23:14:12.158Z · LW(p) · GW(p)

> Even if the technology to restore a preserved brain or upload it into a simulation becomes viable technologically and monetarily people may still reject it for philosophical reasons. Practical problems can be solved through sufficient research and design, but philosophical problems may never go away.

Bah. Philosophical problems can easily go away with peer pressure.

Suppose the technology for uploading is reliable and cheap. Some people will try it. Then some of their friends will. At some moment, a celebrity will upload, followed by many fans and copycats.

When someone in your social circle has uploaded, you can either accept them as being "them", or stop interacting with them... the remaining option, meeting them regularly and saying: "You are not the real John; the real John is dead, and left you as a fake ghost" is emotionally difficult. So most people will accept the uploads.

(This is true regardless of whether the philosophical problems are right or wrong.)

answer by Richard_Kennaway · 2020-12-13T19:07:40.672Z · LW(p) · GW(p)

This is the argument of the beard. You can pluck one hair from a bearded man and he still has a beard, therefore by induction you can pluck all the hairs and he still has a beard.

Or if you stipulate that replacing N neurons not merely causes no "significant" change, but absolutely no change at all, even according to observations that we don't yet know we would need to make, then you've baked the conclusion into the premises.

comment by Gadersd · 2020-12-13T19:56:57.489Z · LW(p) · GW(p)

If I continually pluck hairs from my beard then I have noticeably less of a beard. Eventually I will have no beard. Replacing some neurons with the given procedure does not change behavior, so the subject cannot notice a change. If the subject noticed a change then there would be a change in behavior. If you assert that a change in consciousness occurred, then you assert that consciousness changed in a way that consciousness itself cannot notice.

We can fall asleep without noticing, but there is always a way to notice the changes. One can decide to be vigilant and use self awareness to prevent oneself from falling asleep, for example. After the procedure of replacing any arbitrary number of neurons, one cannot notice an internal change at all regardless of any self evaluation of consciousness one decides to do. What standard of deciding claims of consciousness can possibly supersede consciousness evaluating itself? If I had a million neurons replaced and could not possibly notice a difference, how could you honestly justify a claim that my identity was degraded?

Replies from: alahonua
comment by alahonua · 2020-12-13T22:33:56.444Z · LW(p) · GW(p)

Almost all gradual-brain-to-device replacement arguments are indeed sorites arguments. You assume:

Plucking 2 hairs from a beard that has 10000 hairs is too small an action to change the beard visibly (true)

Plucking 2 hairs from a beard with 9998 hairs is too small a change to see (true)

Plucking 2 hairs from a beard with 9996 hairs is too small a change to see (true)

...

Plucking 2 hairs 4000 times from a beard is too small a change to see (false)

Replies from: Gadersd
comment by Gadersd · 2020-12-14T00:29:57.974Z · LW(p) · GW(p)

If plucking hairs changes my beard then there will be a point at which it is noticeable before it is completely gone. My beard does not go from existing to not existing in a single pluck.

My consciousness does not go from existing to not existing in a single neuron pluck. My identity does not radically change in a single pluck. There is a continuum of small changes that lead to large changes. There will come a point at which the changes accumulate that can be noticed.

Note that I'm not referring to gradual changes through time, but a single procedure occurring once that replaces N neurons in one go.

Assume that the procedure does produce a significant change, significant meaning noticeable but not crippling, to consciousness at some number of replacements U. There is a number of replacements 0 < N <= U such that N-1 replacements is not noticeable by the subject. Noticing is a yes or no binary matter, the subject can be asked to say yes or no to whether a change is noticed.

The crucial part of the argument is that one cannot in any way notice any difference regardless of how many neurons are altered during the procedure because the specified procedure preserves behavior. Conscious awareness corresponds with behavior. If behavior cannot change when the procedure alters a third of the brain, then consciousness cannot noticeably change. If consciousness is noticeably changed from an internal perspective then a difference in behavior can be produced.

Replies from: alahonua, Richard_Kennaway
comment by alahonua · 2020-12-14T01:36:35.295Z · LW(p) · GW(p)

One advantage of a thought experiment is that it can be scaled without cost. Instead of your sorites series, let us posit a huge number of conscious humans. We alter each human to correspond to a single step in your gradual change over time, so that we wind up performing in parallel what you posit as a series of steps. Line our subjects up in "stage of alteration" order.

Now the conclusion of your series of steps corresponds to the state of the last subject in our lineup. Is this subject's consciousness the same as at the start? If we assume yes, then we have assumed our conclusion, and the argument is circular.

If we assume for sake of argument the subject's consciousness at the end of our lineup differs from the start of the lineup, then we can walk along the line and locate where we first begin to notice a change. This might vary with groups of subjects, but we can certainly then find a mean for where the change may start. This is possible even if in series we cannot perceive a difference between the subject from one step to another.

comment by Richard_Kennaway · 2020-12-14T11:24:20.442Z · LW(p) · GW(p)

Note that I'm not referring to gradual changes through time, but a single procedure occurring once that replaces N neurons in one go.

You refer to doing this k times. There is your gradual process, your argument by the beard.

If A is indistinguishable from B, and B is indistinguishable from C, it does not follow that A is indistinguishable from C.

Replies from: Gadersd
comment by Gadersd · 2020-12-14T13:50:12.297Z · LW(p) · GW(p)

Where did I say "times"? I meant that kN neurons are effectively replaced at once. I said in the argument that the neurons are replaced with a negligible time difference.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2020-12-14T15:11:37.016Z · LW(p) · GW(p)

Doing them all at once doesn't help. You are still arguing that if kN neurons make no observable difference, then neither do (k+1)N, for any k. This is not true, and the underlying binary concept that it either does, or does not, make an observable difference does not fit the situation.

Replies from: Gadersd
comment by Gadersd · 2020-12-14T15:51:33.777Z · LW(p) · GW(p)

Let P(n) designate the proposition that the procedure does not alter current or future consciousness if n neurons are replaced at once.

1. P(0) is true.

2. Suppose P(k) is true for some number k. Then replacing k neurons does not change consciousness for the present or future. Replace a single extra neuron a negligible amount of time after the former replacement, such as the reaction time of a single neuron divided by the total number of neurons in the brain. #Replacing a single neuron on an unaltered consciousness with a functional replacement produces no change in current or future consciousness.# Therefore P(k+1) is true.

By mathematical induction, P(n) is true for all n >= 0.

The proof uses mathematical induction, so the only way to argue against it is to show that 1 or 2 is false. P(0) is obviously true. The supposition in 2 is valid because P(k) holds for at least one k, namely k = 0. One must then demonstrate that the statement between the hashtags is false. As I implied in my update, the statement between the hashtags is not necessarily true.
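The logical skeleton here is just the standard induction schema, which is valid and can even be checked mechanically. Here is a sketch in Lean 4, where `P` is an abstract predicate standing in for "replacing n neurons at once leaves consciousness unchanged"; the formalization makes clear that everything hinges on whether the `step` hypothesis (the statement between the hashtags) is actually granted:

```lean
-- Induction schema: from the base case and the single-step premise,
-- conclude P n for every n. The schema itself is uncontroversial;
-- the philosophical dispute is over whether `step` holds.
example (P : Nat → Prop) (base : P 0)
    (step : ∀ k, P k → P (k + 1)) : ∀ n, P n := by
  intro n
  induction n with
  | zero => exact base
  | succ k ih => exact step k ih
```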

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2020-12-15T09:12:06.529Z · LW(p) · GW(p)

One must then demonstrate that the statement between the hashtags is false. As I implied in my update, the statement between the hashtags is not necessarily true.

Then that undercuts the whole argument. That is exactly the argument by the beard. It depends on indistinguishability being a transitive property, but it is not. If A and B are, for example, two colours that you cannot tell apart, and also B and C, and also C and D, you may see a clear difference between A and D.

You cannot see grass grow from one minute to the next. But you can see it grow from one day to the next.
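The non-transitivity can be sketched concretely. Purely for illustration, model "indistinguishable" as differing by less than some fixed perceptual threshold (the threshold value and the numeric "shades" are hypothetical):

```python
# Indistinguishability modeled as "differs by less than a just-noticeable
# difference" is not transitive: each adjacent pair is indistinguishable,
# yet the endpoints clearly differ.

THRESHOLD = 2  # hypothetical just-noticeable difference

def indistinguishable(a, b):
    return abs(a - b) < THRESHOLD

shades = [0, 1, 2, 3]  # stand-ins for colours A, B, C, D

adjacent_all_same = all(
    indistinguishable(x, y) for x, y in zip(shades, shades[1:])
)
endpoints_same = indistinguishable(shades[0], shades[-1])
print(adjacent_all_same)  # True: A~B, B~C, C~D
print(endpoints_same)     # False: A and D differ by 3
```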

Replies from: Gadersd
comment by Gadersd · 2020-12-15T13:47:15.436Z · LW(p) · GW(p)

"Indistinguishability" in my original argument was meant as a behavior change that reflects the subject's awareness of a change in consciousness. The replacement indistinguishability is not transitive. Regardless of how many are replaced in any order there cannot be a behavior change, even if it goes as A to B, A to C, A to D...

I think we differ in that I assumed that a change in consciousness can be manifested in a behavior change. You may disagree with this and claim that consciousness can change without the behavior being able to change.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2020-12-15T15:51:10.703Z · LW(p) · GW(p)

The replacement indistinguishability is not transitive.

I assume that's a typo for "is transitive".

Regardless of how many are replaced in any order there cannot be a behavior change, even if it goes as A to B, A to C, A to D.

Why not? If you assume absolute identity of behaviour, you're assuming the conclusion. But absolute identity is unobservable. The best you can get is indistinguishability under whatever observations you're making, in which case it is not transitive. There is no way to make this argument work without assuming the conclusion.

Replies from: Gadersd
comment by Gadersd · 2020-12-15T18:25:45.120Z · LW(p) · GW(p)

All proofs at least implicitly contain the conclusion in the assumptions or axioms. That's because proofs don't generate information; they just unravel what one has already assumed by definition or axiom.

So yes, I'm implicitly assuming the conclusion in the assumptions. The point of the proof was to convince people who agreed with all the assumptions but did not believe the conclusion. There are people who do believe the assumptions but do not accept the conclusion, which, as you say, is contained in the assumptions.


comment by shminux · 2020-12-13T23:42:14.706Z · LW(p) · GW(p)

Consider reading Scott Aaronson's The Ghost in the Quantum Turing Machine; it goes into considerable depth stating and answering questions such as

Could there exist a machine, consistent with the laws of physics, that “non-invasively cloned” all the information in a particular human brain that was relevant to behavior—so that the human could emerge from the machine unharmed, but would thereafter be fully probabilistically predictable given his or her future sense-inputs, in much the same sense that a radioactive atom is probabilistically predictable?

The answers are not obvious and certainly not amenable to a "simple proof".

comment by Mitchell_Porter · 2020-12-14T11:25:17.279Z · LW(p) · GW(p)

You assume that the conscious part of the brain consists of interacting but independent subunits, whose only property of significance is how they interact with their neighbors. 

This is not the only ontological option. For example, there is the quantum notion of entanglement. There may exist a situation in which there are nominally two entities, but the overall quantum state cannot be reduced to one entity being in one state, and the other entity in a second state. 

Consider a state of two qubits. If the overall state is |01>, that can be decomposed into |0>|1>. But a superposition like |01>+|10> cannot. 
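This separability claim can be checked numerically: a two-qubit pure state is a product state exactly when its 2x2 coefficient matrix has rank 1 (a single nonzero Schmidt coefficient). A minimal sketch using NumPy (the helper name `schmidt_rank` is my own):

```python
import numpy as np

def schmidt_rank(state):
    # state: length-4 amplitude vector in the |00>, |01>, |10>, |11> basis.
    # Reshaping to 2x2 gives the coefficient matrix; its rank equals the
    # Schmidt rank. Rank 1 means the state factors into |a>|b>.
    return np.linalg.matrix_rank(np.asarray(state, dtype=float).reshape(2, 2))

product = np.array([0.0, 1.0, 0.0, 0.0])               # |01> = |0>|1>
bell = np.array([0.0, 1.0, 1.0, 0.0]) / np.sqrt(2)     # (|01> + |10>)/sqrt(2)

print(schmidt_rank(product))  # 1: separable
print(schmidt_rank(bell))     # 2: entangled, no product decomposition
```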

There is a further issue of whether one ascribes reality to quantum states. In the (original, true) Copenhagen interpretation, quantum states are not real things, that role is reserved only for observables and only when they take definite values. 

However, if one's ontology says quantum states are things that exist, and if the conscious part of the brain is one big entangled state, then you can't just replace the parts independently. There may be other operations you can perform, like quantum teleportation, but what they signify or allow, in the way of identity transfer, is unclear (at least, in the absence of a definite quantum theory of consciousness).