All proofs at least implicitly contain the conclusion in the assumptions or axioms. That's because proofs don't generate information; they just unravel what one has already assumed by definition or axiom.
So yes, I'm implicitly assuming the conclusion in the assumptions. The point of the proof was to convince people who agreed with all the assumptions in the first place but who did not believe in the conclusion. There are people who do believe the assumptions but do not agree with the conclusion, which, as you say, is in the assumptions.
"Indistinguishability" in my original argument was meant as a behavior change that reflects the subject's awareness of a change in consciousness. The replacement indistinguishability is not transitive. Regardless of how many are replaced, in any order, there cannot be a behavior change, even if it goes as A to B, A to C, A to D...
I think we differ in that I assumed that a change in consciousness can be manifested in a behavior change. You may disagree with this and claim that consciousness can change without the behavior being able to change.
"It feels strange to me, somewhat analogous to arguing that Bigfoot can't do magic while neglecting to mention that he also doesn't exist."
I assumed that the assumptions used would resonate with people. I used to believe in a rigid soul like concept of identity when I was a child, likely stemming from my religious upbringing. Thinking of an argument similar to what I wrote is what relaxed my once rigid view of identity.
"...I don't really care if my identity is preserved in this particular form. I just care about having positive experiences."
I think this is where we differ. I don't value my life mostly for positive experiences. There are many others who enjoy the same things I enjoy, and their experience is no more or less valuable in an objective sense than mine. However, I value the unique things that I can imprint on the world. Others may create art similar to mine, but it is unlikely to be exactly the same. The potentiality of the universe is limited if I am not around to affect things. The more unique agents doing unique things, the more personally interesting variation there is in the universe. I care about potentiality and variation. What I most despise in the universe is a loss of potential. Forced or chosen death is a loss of potential. I can accept chosen death, but only from those who are fully aware of the consequences, such as you. I cannot accept death chosen from ignorance, which is why I exert so much effort trying to convince people that identity is not such a rigid matter.
One can argue that potentiality is relative, that the ceasing of myself would allow others to do things that they would not be able to do if I were alive. When I say potentiality I mean potential variations of the universe that excite me personally.
"There is an assumption here that your future self shares an identity with your current self that other people don't, which is called Closed Individualism."
I actually wrote the argument for people who believe in Closed Individualism. I myself subscribe to Open Individualism. The purpose was to convince people who subscribe to Closed Individualism to not reject cryonics on the basis that their identity will be lost. Some people, even if revived after cryonics, may worry that their identity has fundamentally changed which can lead to an existential crisis.
"I'm not saying that Open Individualism is definitely true, but Closed Individualism is almost certainly not true, and that's enough to be disinterested in cryonics."
Why would believing Open Individualism to be true cause disinterest in cryonics? I would be ecstatic to continue working on what I love after my natural lifespan has ended. A few centuries of life experience could give so much depth to art that cannot be gained through only 80 years.
Let P(n) designate the proposition that the procedure does not alter current or future consciousness if n neurons are replaced at once.
1. P(0) is true.
2. Suppose P(k) is true for some number k. Then replacing k neurons does not change consciousness for the present or future. Replace a single extra neuron a negligible amount of time after the previous replacement, such as the reaction time of a single neuron divided by the total number of neurons in the brain. #Replacing a single neuron on an unaltered consciousness with a functional replacement produces no change in current or future consciousness.# Therefore P(k+1) is true.
By mathematical induction, P(n) is true for all n >= 0.
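For reference, the standard induction schema being applied here is:

```latex
\Big( P(0) \;\land\; \forall k \ge 0\;\big(P(k) \rightarrow P(k+1)\big) \Big) \;\Longrightarrow\; \forall n \ge 0\; P(n)
```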
The proof uses mathematical induction. The only way to argue against this is to show that 1. or 2. is false. P(0) is obviously true. The supposition is valid because P(k) is true for at least one k, k = 0. One must then demonstrate that the statement between the hashtags is false. As I implied in my update, the statement between the hashtags is not necessarily true.
Gödel established fundamental limits on a very specific notion of "knowing": a proof, that is, a sequence of statements that together justify a theorem to be true with absolute certainty.
If one relaxes the definition of knowing by removing the requirement of absolute certainty within a finite time, then one is not so restricted by Gödel's theorem. Theorems about whole numbers, such as those Gödel used, can be known to be true or false in the limit by checking, number by number, whether the theorem holds.
Theorems of the nature "there exists a number x such that" can be initially set false. If such a theorem is true then one will know eventually by checking each case. If it is not true, then one is correct from the start. Theorems of the nature "for all numbers x P(x) holds" can be initially set true. If such a theorem is false then one will know eventually by checking the cases. If such a theorem is true then one is correct from the start.
The limitation here is absolute certainty within a finite time. One can be guaranteed to be correct eventually, but not know at which point in time correctness will occur.
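This limit procedure can be sketched in code. A minimal illustration in Python, where `limit_decide_exists`, `predicate`, and the case bound are my own hypothetical names (the true procedure runs forever; a finite bound stands in for "checking each case" so the sketch can run):

```python
def limit_decide_exists(predicate, max_cases):
    """Limit procedure for a claim of the form
    "there exists a number x such that predicate(x)".

    The guess starts as False and flips to True the moment a
    witness is found while checking each case in turn.  The guess
    is guaranteed to be correct eventually, but no finite prefix
    of the search certifies that it already is.
    """
    guess = False
    for x in range(max_cases):
        if predicate(x):
            guess = True
            break
    return guess

# "There exists x with x * x == 49": the witness x = 7 is
# eventually reached, so the guess flips to True.
print(limit_decide_exists(lambda x: x * x == 49, 100))
```

A "for all x" claim is handled symmetrically: start with the guess True and flip to False if a counterexample turns up.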
Where did I say "times"? I meant that kN neurons are effectively replaced at once. I said in the argument that the neurons are replaced with a negligible time difference.
If plucking hairs changes my beard then there will be a point at which it is noticeable before it is completely gone. My beard does not go from existing to not existing in a single pluck.
My consciousness does not go from existing to not existing in a single neuron pluck, and my identity does not radically change in a single pluck. There is a continuum of small changes that lead to large changes. There will come a point at which the accumulated changes can be noticed.
Note that I'm not referring to gradual changes through time, but a single procedure occurring once that replaces N neurons in one go.
Assume that the procedure does produce a significant change to consciousness, significant meaning noticeable but not crippling, at some number of replacements U. Then there is a number of replacements 0 < N <= U such that N replacements is noticeable by the subject but N-1 replacements is not. Noticing is a binary, yes-or-no matter: the subject can be asked to say yes or no to whether a change is noticed.
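If noticing really is a binary matter, and assuming (hypothetically) that once a change is noticeable it stays noticeable for larger replacement counts, the threshold N could in principle be located by binary search. A sketch, with `notices` standing in for asking the subject:

```python
def first_noticed(notices, upper):
    """Smallest N in 1..upper for which notices(N) is True,
    assuming notices is monotone: once a replacement count is
    noticeable, every larger count is too.
    """
    lo, hi = 1, upper
    while lo < hi:
        mid = (lo + hi) // 2
        if notices(mid):
            hi = mid      # noticeable, so the threshold is at or below mid
        else:
            lo = mid + 1  # not noticeable, so the threshold is above mid
    return lo

# Toy subject who notices once 1000 or more neurons are replaced.
print(first_noticed(lambda n: n >= 1000, 10**6))  # → 1000
```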
The crucial part of the argument is that one cannot in any way notice any difference regardless of how many neurons are altered during the procedure because the specified procedure preserves behavior. Conscious awareness corresponds with behavior. If behavior cannot change when the procedure alters a third of the brain, then consciousness cannot noticeably change. If consciousness is noticeably changed from an internal perspective then a difference in behavior can be produced.
How do you know there are things you cannot know eventually?
The philosophical problems people have with identity may seem silly, but many people are affected by it. Some people who may otherwise have no problems with cryonics or other preservation techniques will choose guaranteed death because they don't intuitively think their consciousness will persist. That is why I think it is so significant. People who doubt for practical reasons can be convinced in time if the technology comes up to speed, but those who deny it for philosophical reasons may never be convinced regardless of technological advancements.
An easy practical way to verify that the recreation process was done correctly is to revive someone shortly after they die. Close friends and family members could do a sort of Turing test to see if the revived person matches who they knew. The likelihood of something going significantly wrong in the process without close friends and family noticing would be very small. This method should be convincing as long as one takes a practical and empirical perspective on identity.
>Heisenberg's uncertainty principle clearly demonstrates that there are claims about the physical world that we can't evaluate as true or false through observation and science.
What you are saying implies, for example, that a particle's momentum has a precise value but cannot be known by observation if the particle's position is known with certainty. How do you know this? It could just as well be that the particle's position and momentum are mutually exclusive to a degree such that if the position is known with high certainty, then the momentum does not have a precise objective value. This is the standard view of most physicists. Rejecting this requires there to be non-local effects such as in Bohmian Mechanics.
Gödel proved that formal systems of sufficient expressive capability cannot prove all true statements regarding themselves. Relatedly, there are possible situations in which people cannot know their own future actions: they could be determined to do the opposite of whatever an oracle machine that analyzes their brain predicts, guaranteeing that the oracle is wrong. This is a limitation, or feature, of systems with sufficient recursive capability. It says nothing of what can be known in general. An outside observer could analyze the oracle-plus-subject system and know the subject's future action, as long as the outside observer does not interfere with the system and become bound up in it. A personal knowledge limitation is not an absolute limitation, whether for a formal system or a person. What is unprovable in the domain of one formal system need not be unprovable by other formal systems.
Not all theorems can be proven with respect to a single formal system or person, but the key word here is "proven." Any mathematical claim can be justified by observation. One can test a theorem by testing many cases; for example, one can test whether the sum of two even numbers is always even by trying out many cases. With each case, one's credence that the theorem is true increases. The halting problem, which is related to Gödel's incompleteness theorems, can be solved in the limit this way: a computer runs a program while claiming that it does not halt, and if the program does halt, the computer changes its claim. The computer is then guaranteed to be right eventually, but it is unknown how long that will take. This corresponds to Bayesian updating, where knowledge is increased through time with observation. One converges to correctness in the limit.
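The limit procedure for halting can be sketched as follows. The names `limit_halts`, `step`, and the `fuel` bound are my own illustrative choices, and the finite fuel stands in for running the program indefinitely:

```python
def limit_halts(step, state, fuel):
    """Limit guess at whether a program halts.

    `step` advances the simulated program by one step, returning
    the next state, or None once the program has halted.  The guess
    starts as "does not halt" and flips to "halts" if halting is
    observed.  In the limit of unbounded fuel the guess is
    eventually correct, though we never know at what point it
    became so.
    """
    guess = "does not halt"
    for _ in range(fuel):
        state = step(state)
        if state is None:
            guess = "halts"
            break
    return guess

# A toy program that counts down from 5 and halts at zero.
countdown = lambda n: None if n == 0 else n - 1
print(limit_halts(countdown, 5, 100))  # → halts
```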
Besides, my argument was regarding claims of the universe and mind, not mathematics. If you have a better way than experimentation and observation to justify claims of consciousness and identity, then I would be ecstatic to hear it. Justifying worldly and consciousness claims with complexity and a priori probabilities is fine and even necessary as a starting point, but if there is no way even in principle to further justify them, then I am skeptical. Even math, which people say is above observation itself, can be justified in the limit by observation.
If I continually pluck hairs from my beard then I have noticeably less of a beard, and eventually I will have no beard. But replacing some neurons with the given procedure does not change behavior, so the subject cannot notice a change. If the subject noticed a change, there would be a change in behavior. If you assert that a change in consciousness occurred anyway, then you assert a change in consciousness that produces no change by which it could be noticed.
We can fall asleep without noticing, but there is always a way to notice the changes. One can decide to be vigilant and use self awareness to prevent oneself from falling asleep, for example. After the procedure of replacing any arbitrary number of neurons, one cannot notice an internal change at all regardless of any self evaluation of consciousness one decides to do. What standard of deciding claims of consciousness can possibly supersede consciousness evaluating itself? If I had a million neurons replaced and could not possibly notice a difference, how could you honestly justify a claim that my identity was degraded?
Ok, #5 was a bit strong for this, though I must argue that Heisenberg's uncertainty principle itself was discovered through observation. Using a claim justified by observation and experiment to undermine the sufficiency of observation with regards to evaluating claims in general seems off to me.
If a change or thing has no observable effects, how can one claim that that change or thing exists? Eliezer himself believes that the Many-Worlds Hypothesis has observable effects, namely anthropic immortality, which can be tested if one is willing.
Bayesian updating works by updating priors with observation. As Eliezer has mentioned, all possibilities should be given probability greater than 0. Claiming that observation is insufficient in general to evaluate claims implies that there are true statements that are literally impossible to justify beyond a priori belief, which means one must always, to some extent, appeal to a belief that is never further justified. I personally don't accept this.