The Generalized Anti-Zombie Principle

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-04-05T23:16:30.000Z · LW · GW · Legacy · 64 comments

"Each problem that I solved became a rule which served afterwards to solve other problems."
        —René Descartes, Discours de la Méthode

"Zombies" are putatively beings that are atom-by-atom identical to us, governed by all the same third-party-visible physical laws, except that they are not conscious.

Though the philosophy is complicated, the core argument against zombies is simple:  When you focus your inward awareness on your inward awareness, soon afterward your internal narrative (the little voice inside your head that speaks your thoughts) says "I am aware of being aware"; then you say it out loud, then you type it into a computer keyboard, creating a third-party-visible blog post.

Consciousness, whatever it may be—a substance, a process, a name for a confusion—is not epiphenomenal; your mind can catch the inner listener in the act of listening, and say so out loud.  The fact that I have typed this paragraph would at least seem to refute the idea that consciousness has no experimentally detectable consequences.

I hate to say "So now let's accept this and move on," over such a philosophically controversial question, but it seems like a considerable majority of Overcoming Bias commenters do accept this.  And there are other conclusions you can only get to after you accept that you cannot subtract consciousness and leave the universe looking exactly the same.  So now let's accept this and move on.

The form of the Anti-Zombie Argument seems like it should generalize, becoming an Anti-Zombie Principle.  But what is the proper generalization?

Let's say, for example, that someone says:  "I have a switch in my hand, which does not affect your brain in any way; and iff this switch is flipped, you will cease to be conscious."  Does the Anti-Zombie Principle rule this out as well, with the same structure of argument?

It appears to me that in the case above, the answer is yes.  In particular, you can say:  "Even after your switch is flipped, I will still talk about consciousness for exactly the same reasons I did before.  If I am conscious right now, I will still be conscious after you flip the switch."

Philosophers may object, "But now you're equating consciousness with talking about consciousness!  What about the Zombie Master, the chatbot that regurgitates a remixed corpus of amateur human discourse on consciousness?"

But I did not equate "consciousness" with verbal behavior.  The core premise is that, among other things, the true referent of "consciousness" is also the cause in humans of talking about inner listeners.

As I argued (at some length) in the sequence on words, what you want in defining a word is not always a perfect Aristotelian necessary-and-sufficient definition; sometimes you just want a treasure map that leads you to the extensional referent.  So "that which does in fact make me talk about an unspeakable awareness" is not a necessary-and-sufficient definition.  But if what does in fact cause me to discourse about an unspeakable awareness, is not "consciousness", then...

...then the discourse gets pretty futile.  That is not a knockdown argument against zombies—an empirical question can't be settled by mere difficulties of discourse.  But if you try to defy the Anti-Zombie Principle, you will have problems with the meaning of your discourse, not just its plausibility.

Could we define the word "consciousness" to mean "whatever actually makes humans talk about 'consciousness'"?  This would have the powerful advantage of guaranteeing that there is at least one real fact named by the word "consciousness".  Even if our belief in consciousness is a confusion, "consciousness" would name the cognitive architecture that generated the confusion.  But to establish a definition is only to promise to use a word consistently; it doesn't settle any empirical questions, such as whether our inner awareness makes us talk about our inner awareness.

Let's return to the Off-Switch.

If we allow that the Anti-Zombie Argument applies against the Off-Switch, then the Generalized Anti-Zombie Principle does not say only, "Any change that is not in-principle experimentally detectable (IPED) cannot remove your consciousness."  The switch's flipping is experimentally detectable, but it still seems highly unlikely to remove your consciousness.

Perhaps the Anti-Zombie Principle says, "Any change that does not affect you in any IPED way cannot remove your consciousness"?

But is it a reasonable stipulation to say that flipping the switch does not affect you in any IPED way?  All the particles in the switch are interacting with the particles composing your body and brain.  There are gravitational effects—tiny, but real and IPED.  The gravitational pull from a one-gram switch ten meters away is around 6 × 10⁻¹⁶ m/s².  That's around half a neutron diameter per second per second, far below thermal noise, but way above the Planck level.
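
That figure is easy to check.  Here is a minimal Python sketch, using standard textbook values for Newton's constant and the neutron diameter (those two constants are inputs I am supplying, not numbers from the argument itself):

    # Newtonian pull of a one-gram switch ten meters away: a = G * m / r^2
    G = 6.674e-11             # gravitational constant, m^3 kg^-1 s^-2
    m_switch = 1e-3           # one gram, in kilograms
    r = 10.0                  # ten meters

    a = G * m_switch / r**2
    print(f"acceleration: {a:.1e} m/s^2")                # ~6.7e-16 m/s^2

    # A neutron diameter is ~1.7e-15 m, so this is roughly half a neutron
    # diameter per second per second.
    neutron_diameter = 1.7e-15
    print(f"in neutron diameters per s^2: {a / neutron_diameter:.2f}")   # ~0.39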

We could flip the switch light-years away, in which case the flip would have no immediate causal effect on you (whatever "immediate" means in this case, and assuming the Standard Model of physics is correct).

But it doesn't seem like we should have to alter the thought experiment in this fashion.  It seems that, if a disconnected switch is flipped on the other side of a room, you should not expect your inner listener to go out like a light, because the switch "obviously doesn't change" that which is the true cause of your talking about an inner listener.  Whatever you really are, you don't expect the switch to mess with it.

This is a large step.

If you deny that it is a reasonable step, you had better never go near a switch again.  But still, it's a large step.

The key idea of reductionism is that our maps of the universe are multi-level to save on computing power, but physics seems to be strictly single-level.  All our discourse about the universe takes place using references far above the level of fundamental particles.

The switch's flip does change the fundamental particles of your body and brain.  It nudges them by whole neutron diameters away from where they would have otherwise been.

In ordinary life, we gloss a change this small by saying that the switch "doesn't affect you".  But it does affect you.  It changes everything by whole neutron diameters!  What could possibly be remaining the same?  Only the description that you would give of the higher levels of organization—the cells, the proteins, the spikes traveling along a neural axon.  As the map is far less detailed than the territory, it must map many different states to the same description.

Any reasonable sort of humanish description of the brain that talks about neurons and activity patterns (or even the conformations of individual microtubules making up axons and dendrites) won't change when you flip a switch on the other side of the room.  Nuclei are larger than neutrons, atoms are larger than nuclei, and by the time you get up to talking about the molecular level, that tiny little gravitational force has vanished from the list of things you bother to track.

But if you add up enough tiny little gravitational pulls, they will eventually yank you across the room and tear you apart by tidal forces, so clearly a small effect is not "no effect at all".

Maybe the tidal force from that tiny little pull, by an amazing coincidence, pulls a single extra calcium ion just a tiny bit closer to an ion channel, causing it to be pulled in just a tiny bit sooner, making a single neuron fire infinitesimally sooner than it would otherwise have done, a difference which amplifies chaotically, finally making a whole neural spike occur that otherwise wouldn't have occurred, sending you off on a different train of thought, that triggers an epileptic fit, that kills you, causing you to cease to be conscious...

If you add up a lot of tiny quantitative effects, you get a big quantitative effect—big enough to mess with anything you care to name.  And so claiming that the switch has literally zero effect on the things you care about, is taking it too far.

But with just one switch, the force exerted is vastly less than thermal uncertainties, never mind quantum uncertainties.  If you don't expect your consciousness to flicker in and out of existence as the result of thermal jiggling, then you certainly shouldn't expect to go out like a light when someone sneezes a kilometer away.

The alert Bayesian will note that I have just made an argument about expectations, states of knowledge, justified beliefs about what can and can't switch off your consciousness.

This doesn't necessarily destroy the Anti-Zombie Argument.  Probabilities are not certainties, but the laws of probability are theorems; if rationality says you can't believe something on your current information, then that is a law, not a suggestion.

Still, this version of the Anti-Zombie Argument is weaker.  It doesn't have the nice, clean, absolutely clear-cut status of, "You can't possibly eliminate consciousness while leaving all the atoms in exactly the same place."  (Or for "all the atoms" substitute "all causes with in-principle experimentally detectable effects", and "same wavefunction" for "same place", etc.)

But the new version of the Anti-Zombie Argument still carries.  You can say, "I don't know what consciousness really is, and I suspect I may be fundamentally confused about the question.  But if the word refers to anything at all, it refers to something that is, among other things, the cause of my talking about consciousness.  Now, I don't know why I talk about consciousness.  But it happens inside my skull, and I expect it has something to do with neurons firing.  Or maybe, if I really understood consciousness, I would have to talk about an even more fundamental level than that, like microtubules, or neurotransmitters diffusing across a synaptic channel.  But still, that switch you just flipped has an effect on my neurotransmitters and microtubules that's much, much less than thermal noise at 310 Kelvin.  So whatever the true cause of my talking about consciousness may be, I don't expect it to be hugely affected by the gravitational pull from that switch.  Maybe it's just a tiny little infinitesimal bit affected?  But it's certainly not going to go out like a light.  I expect to go on talking about consciousness in almost exactly the same way afterward, for almost exactly the same reasons."

This application of the Anti-Zombie Principle is weaker.  But it's also much more general.  And, in terms of sheer common sense, correct.

The reductionist and the substance dualist actually have two different versions of the above statement.  The reductionist furthermore says, "Whatever makes me talk about consciousness, it seems likely that the important parts take place on a much higher functional level than atomic nuclei.  Someone who understood consciousness could abstract away from individual neurons firing, and talk about high-level cognitive architectures, and still describe how my mind produces thoughts like 'I think therefore I am'.  So nudging things around by the diameter of a nucleon, shouldn't affect my consciousness (except maybe with very small probability, or by a very tiny amount, or not until after a significant delay)."

The substance dualist furthermore says, "Whatever makes me talk about consciousness, it's got to be something beyond the computational physics we know, which means that it might very well involve quantum effects.  But still, my consciousness doesn't flicker on and off whenever someone sneezes a kilometer away.  If it did, I would notice.  It would be like skipping a few seconds, or coming out of a general anesthetic, or sometimes saying, "I don't think therefore I'm not."  So since it's a physical fact that thermal vibrations don't disturb the stuff of my awareness, I don't expect flipping the switch to disturb it either."

Either way, you shouldn't expect your sense of awareness to vanish when someone says the word "Abracadabra", even if that does have some infinitesimal physical effect on your brain—

But hold on!  If you hear someone say the word "Abracadabra", that has a very noticeable effect on your brain—so large, even your brain can notice it.  It may alter your internal narrative; you may think, "Why did that person just say 'Abracadabra'?"

Well, but still you expect to go on talking about consciousness in almost exactly the same way afterward, for almost exactly the same reasons.

And again, it's not that "consciousness" is being equated to "that which makes you talk about consciousness".  It's just that consciousness, among other things, makes you talk about consciousness.  So anything that makes your consciousness go out like a light, should make you stop talking about consciousness.

If we do something to you, where you don't see how it could possibly change your internal narrative—the little voice in your head that sometimes says things like "I think therefore I am", whose words you can choose to say aloud—then it shouldn't make you cease to be conscious.

And this is true even if the internal narrative is just "pretty much the same", and the causes of it are also pretty much the same; among the causes that are pretty much the same, is whatever you mean by "consciousness".

If you're wondering where all this is going, and why it's important to go to such tremendous lengths to ponder such an obvious-seeming Generalized Anti-Zombie Principle, then consider the following debate:

Albert:  "Suppose I replaced all the neurons in your head with tiny robotic artificial neurons that had the same connections, the same local input-output behavior, and analogous internal state and learning rules."

Bernice:  "That's killing me!  There wouldn't be a conscious being there anymore."

Charles:  "Well, there'd still be a conscious being there, but it wouldn't be me."

Sir Roger Penrose:  "The thought experiment you propose is impossible.  You can't duplicate the behavior of neurons without tapping into quantum gravity.  That said, there's not much point in me taking further part in this conversation."  (Wanders away.)

Albert:  "Suppose that the replacement is carried out one neuron at a time, and the swap occurs so fast that it doesn't make any difference to global processing."

Bernice:  "How could that possibly be the case?"

Albert:  "The little robot swims up to the neuron, surrounds it, scans it, learns to duplicate it, and then suddenly takes over the behavior, between one spike and the next.  In fact, the imitation is so good, that your outward behavior is just the same as it would be if the brain were left undisturbed.  Maybe not exactly the same, but the causal impact is much less than thermal noise at 310 Kelvin."

Charles:  "So what?"

Albert:  "So don't your beliefs violate the Generalized Anti-Zombie Principle?  Whatever just happened, it didn't change your internal narrative!  You'll go around talking about consciousness for exactly the same reason as before."

Bernice:  "Those little robots are a Zombie Master.  They'll make me talk about consciousness even though I'm not conscious.  The Zombie World is possible if you allow there to be an added, extra, experimentally detectable Zombie Master—which those robots are."

Charles:  "Oh, that's not right, Bernice.  The little robots aren't plotting how to fake consciousness, or processing a corpus of text from human amateurs.  They're doing the same thing neurons do, just in silicon instead of carbon."

Albert:  "Wait, didn't you just agree with me?"

Charles:  "I never said the new person wouldn't be conscious.  I said it wouldn't be me."

Albert:  "Well, obviously the Anti-Zombie Principle generalizes to say that this operation hasn't disturbed the true cause of your talking about this me thing."

Charles:  "Uh-uh!  Your operation certainly did disturb the true cause of my talking about consciousness.  It substituted a different cause in its place, the robots.  Now, just because that new cause also happens to be conscious—talks about consciousness for the same generalized reason—doesn't mean it's the same cause that was originally there."

Albert:  "But I wouldn't even have to tell you about the robot operation.  You wouldn't notice.  If you think, going on introspective evidence, that you are in an important sense "the same person" that you were five minutes ago, and I do something to you that doesn't change the introspective evidence available to you, then your conclusion that you are the same person that you were five minutes ago should be equally justified.  Doesn't the Generalized Anti-Zombie Principle say that if I do something to you that alters your consciousness, let alone makes you a completely different person, then you ought to notice somehow?"

Bernice:  "Not if you replace me with a Zombie Master.  Then there's no one there to notice."

Charles:  "Introspection isn't perfect.  Lots of stuff goes on inside my brain that I don't notice."

Albert:  "You're postulating epiphenomenal facts about consciousness and identity!"

Bernice:  "No I'm not!  I can experimentally detect the difference between neurons and robots."

Charles:  "No I'm not!  I can experimentally detect the moment when the old me is replaced by a new person."

Albert:  "Yeah, and I can detect the switch flipping!  You're detecting something that doesn't make a noticeable difference to the true cause of your talk about consciousness and personal identity.  And the proof is, you'll talk just the same way afterward."

Bernice:  "That's because of your robotic Zombie Master!"

Charles:  "Just because two people talk about 'personal identity' for similar reasons doesn't make them the same person."

I think the Generalized Anti-Zombie Principle supports Albert's position, but the reasons shall have to wait for future posts.  I need other prerequisites, and besides, this post is already too long.

But you see the importance of the question, "How far can you generalize the Anti-Zombie Argument and have it still be valid?"

The makeup of future galactic civilizations may be determined by the answer...

64 comments

Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by Tom_McCabe2 · 2008-04-05T23:32:44.000Z · LW(p) · GW(p)

"But you see the importance of the question, "How far can you generalize the Anti-Zombie Argument and have it still be valid?""

Hmmm... I can see three different possible generalizations:

1). Any Turing-equivalent device which implements the same algorithms that you do is you, in every ethical and philosophical sense of the word.

2). There are no mysterious "properties" in the universe which can exist or not exist independently of what the quarks and leptons are doing.

3). Physics, and all the larger-scale mental structures based on physics, are topologically continuous (no large-scale effects for arbitrarily small causes).

Replies from: rkyeun
comment by rkyeun · 2012-07-28T05:17:21.733Z · LW(p) · GW(p)

1) Quantum phenomena -- i.e., the universe and any given subset of it you care to name -- are not Turing-equivalent. The universe has no problem factoring quantum configurations which may or may not represent prime numbers in linear time into amplitude distributions that overlap whenever they aren't prime.

2) There is nothing mysterious about the universe, correct. It is lawful. There are things mysterious about our crudely hand-drawn maps of the territory.

3) Arbitrarily small cause: the Big Bang. Large-scale effect: the universe.

Replies from: wedrifid, Oscar_Cunningham
comment by wedrifid · 2012-07-28T06:13:51.987Z · LW(p) · GW(p)

Quantum phenomena -- i.e., the universe and any given subset of it you care to name -- are not Turing-equivalent.

(Probably. We can't be sure of this.)

comment by Oscar_Cunningham · 2012-07-28T09:34:03.749Z · LW(p) · GW(p)

Turing-equivalent usually means "able to simulate and be simulated by a Turing machine". In this sense (almost) all the current theories of quantum physics are Turing-equivalent. The only thing that quantum computers might be able to do is go exponentially faster. But you can still simulate quantum events on a classical computer; it just takes a long time.
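
A minimal sketch of what "simulate on a classical computer" looks like in practice (plain numpy; the three-qubit size and the dense-operator construction are illustrative choices, not anything specific to a particular quantum theory):

    import numpy as np

    # Classical state-vector simulation of n qubits: 2**n complex amplitudes.
    n = 3
    state = np.zeros(2**n, dtype=complex)
    state[0] = 1.0                      # start in |000>

    H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)

    # Apply a Hadamard gate to each qubit by building the full 2**n x 2**n operator.
    for q in range(n):
        op = np.array([[1.0]], dtype=complex)
        for k in range(n):
            op = np.kron(op, H if k == q else np.eye(2, dtype=complex))
        state = op @ state

    print(np.round(state, 3))           # uniform superposition: 8 amplitudes of ~0.354
    # Each added qubit doubles the state vector and quadruples the dense operator,
    # so the classical cost grows exponentially: slow, but perfectly computable.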

comment by Roland2 · 2008-04-06T00:36:21.000Z · LW(p) · GW(p)

I hate to say "So now let's accept this and move on," over such a philosophically controversial question, but it seems like a considerable majority of Overcoming Bias commenters do accept this.

Yep, I accept this, let's move on!

comment by Frank_Hirsch · 2008-04-06T01:42:22.000Z · LW(p) · GW(p)

[Warning: Here be sarcasm] No! Please let's spend more time discussing dubious non-disprovable hypotheses! There's only a gazillion more to go, then we'll have convinced everyone!

comment by JulianMorrison · 2008-04-06T01:49:04.000Z · LW(p) · GW(p)

Doesn't this change pure reductionism into something else?

Everything above the level of fundamental physics is essentially informational in nature. It has interfaces upwards (its behaviors) and it has interfaces downwards (the necessary behaviors of its substrate). Something like an electron may plug straight into fundamental physics, but an atom plugs into electrons and a molecule plugs into atoms.

This layering means you could lift something right off its substrate and run it on anything else that provides the same interfaces. So for example you can do protein chemistry on a computer atom-simulator. At that point, is it really fair to say "quarks fully describe a hand" when it would be equally interface-valid (if not in this case true) to say "a sufficiently powerful simulator fully describes a hand"? The quarks become less a reduction and more a circumstantial fact: "this hand is implemented using quarks".

Replies from: ramana-kumar
comment by Ramana Kumar (ramana-kumar) · 2009-10-31T12:24:15.335Z · LW(p) · GW(p)

It makes sense to humans (modelers), who can recognize hands, to say "this hand is implemented using quarks", and "that hand is implemented using sand (which, incidentally, is implemented using quarks)". But when we say "quarks fully describe a hand" I think part of the meaning is an acknowledgment that reducing to quarks gets you closer to the territory. (Hands are only in our maps.)

comment by Phil_Goetz2 · 2008-04-06T03:03:35.000Z · LW(p) · GW(p)

Consciousness, whatever it may be - a substance, a process, a name for a confusion - is not epiphenomenal; your mind can catch the inner listener in the act of listening, and say so out loud. The fact that I have typed this paragraph would at least seem to refute the idea that consciousness has no experimentally detectable consequences.

Eliezer, I'm shocked to see you write such nonsense. This only shows that you don't understand the zombie hypothesis at all. Or, you suppose that intelligence requires consciousness. This is the spiritualist, Searlian stuff you usually oppose.

The zombie hypothesis begins by asserting that I have no way of knowing whether you are conscious, no matter what you write. You of all people I expect to accept this, since you believe that you are Turing-computable. You haven't made an argument against the zombie hypothesis; you've merely asserted that it is false and called that assertion an argument.

The only thing I can imagine is that you have flipped the spiritualist argument around to its mirror image. Instead of saying that "I am conscious; Turing machines may not be conscious; therefore I am not just a Turing machine", you may be saying, "I am conscious; I am a Turing machine; therefore, all Turing machines that emit this sequence of symbols are conscious."

comment by Caledonian2 · 2008-04-06T03:13:45.000Z · LW(p) · GW(p)

Eliezer, I'm shocked to see you write such nonsense. This only shows that you don't understand the zombie hypothesis at all. Or, you suppose that intelligence requires consciousness. This is the spiritualist, Searlian stuff you usually oppose.

Um, no. What it IS is a radically different meaning of the word than what the p-zombie nonsense uses. Chalmers' view requires stripping 'consciousness' of any consequence, while Eliezer's involves leaving the standard usage intact.

'Consciousness' in that sense refers to self-awareness or self-modeling, the attempt of a complex computational system to represent some aspects of itself, in itself. It has causal implications for the behavior of the system, can potentially be detected by an outside observer who has access to the mechanisms underlying that system, and is fully part of reality.

Do not confuse two totally different concepts just because the same word is used for both.

comment by poke · 2008-04-06T03:46:49.000Z · LW(p) · GW(p)

This isn't your clearest essay, and I'm not completely sure of the point you're making, but I think you make quite a leap at the end. It seems like you want to equate my awareness of changes to myself with my identity; so any change that's imperceptible to my awareness of myself would not change my identity. This seems odd. For one thing, if you tell me you're going to change my neurons to robotic equivalents (or if I study the procedure), aren't I then aware of the change? I think you'd have a hard time defining consciousness as something I could detect a change in.

comment by Hopefully_Anonymous · 2008-04-06T04:51:51.000Z · LW(p) · GW(p)

"Zombies" are putatively beings that are atom-by-atom identical to us, governed by all the same third-party-visible physical laws, except that they are not conscious.

That seems to me to be a bit beyond current technical ability (whether or not 2 things on the scale of a human being are atom-by-atom identical).

I'm not sure there's huge value in spending a lot of time on that "problem", except a very small fraction of our energy as a persistence-maximizing hedge, sort of like spending a very small amount of time (if any) on planning how to beat proton decay trillions of years from now. http://www.pbs.org/wgbh/nova/universe/historysans.html

I've seen the term zombie used, however, in ways other than your definition in this piece. For example, mental "uploads" that profess to be the person uploaded. That could be a little trickier, because just because something fools an observer into thinking that it is a particular subjective conscious entity (for example, that it's me, HA) doesn't mean that it is. And since our technology can't currently do atom-by-atom comparisons of humans, it requires less than that to fool almost any current observer. That to me is the more relevant problem currently. In attempting to maximize my persistence odds, I don't want to minimize my chances of being replaced by a "zombie" in that sense: something that meets current discernment technology but doesn't actually preserve my subjective conscious experience. Practically, it seems to me this results in giving somewhat greater weight to persistence strategies that are more conservative in keeping my subjective consciousness in something closer to its current wet-brain-in-a-human-body experience (as opposed to 'uploading', etc.)

comment by GNZ · 2008-04-06T05:05:37.000Z · LW(p) · GW(p)

I always took it to be that a zombie could catch itself thinking (as a listener) in the same way a non-zombie could; the zombie doesn't lack inner speech in that sense. The whole causal chain remains and doesn't create an issue; well, at least not for a certain set of definitions and understanding of logic.

which really just reflects my agreement with Frank's "Please let's spend more time discussing dubious non-disprovable hypotheses!"

however I like "Thou shalt provide operational definitions for your terms!"

comment by michael_vassar3 · 2008-04-06T05:41:43.000Z · LW(p) · GW(p)

Phil Goetz: Have you been reading the recent posts and comments? I'm very surprised to see you being surprised if you have.

Caledonian: Has anyone ever suggested to you that you look into early-mid 20th century refutations to "positivism"? Operational definitions etc are good heuristics, not divine edicts.

comment by Will_Pearson · 2008-04-06T07:22:42.000Z · LW(p) · GW(p)

Albert: "The little robot swims up to the neuron, surrounds it, scans it, learns to duplicate it, and then suddenly takes over the behavior, between one spike and the next. In fact, the imitation is so good, that your outward behavior is just the same as it would be if the brain were left undisturbed. Maybe not exactly the same, but the causal impact is much less than thermal noise at 310 Kelvin."

I find this physically implausible. By "behaviour" you would have to include all interactions it has with mind-altering substances (caffeine to acid), how it reacts to acceleration and lack of blood (e.g. seeing stars). You have to build new neurons and modify existing connections. To be as identical as makes no difference you would also have to imitate all possible brain-affecting diseases, from CJD to Alzheimer's. All while generating roughly the same electromagnetic radiation so that our brain waves are somewhat similar.

Replies from: rkyeun
comment by rkyeun · 2011-05-10T08:49:16.932Z · LW(p) · GW(p)

Let's call that little robot a neuron, and build it out of protoplasm. How many ATOMS do we have to swap out before you aren't you? When does this change have a more significant impact on you-ness than the jiggling of your brainmeats inside your car when you go over a speedbump?

comment by Latanius2 · 2008-04-06T10:22:00.000Z · LW(p) · GW(p)

Eliezer, does this whole theory cause us to anticipate something different after thinking about it? For example, after I upload, will I (personally) feel anything, or will only death-like dark nothingness come?

I think I did find such a thing, involving copying yourself in parts varying in size. (Well, it's leading to a contradiction, by the way, but maybe that's why it's even more worthwhile to talk about.)

comment by Caledonian2 · 2008-04-06T13:23:31.000Z · LW(p) · GW(p)

Caledonian: Has anyone ever suggested to you that you look into early-mid 20th century refutations to "positivism"? Operational definitions etc are good heuristics, not divine edicts.

They are neither heuristics nor edicts. They're what's necessary for a definition to be functional and make sense - if you cannot divide the world into A and ~A based on a provided definition, it is invalid.

As for positivism, the 'refutations' made certain assumptions critical to their validity that I assert do not hold. With a whole field dominated by Richards, why would you assume that long-standing consensuses are valid?

The concept of logical positivism is certainly wrong... but it's the 'logical' part that's the problem.

comment by Frank_Hirsch · 2008-04-06T14:24:01.000Z · LW(p) · GW(p)

Will Pearson [about tiny robots replacing neurons]: "I find this physically implausible."

Um, well, I can see it would be quite hard. But that doesn't really matter for a thought experiment. To ask "What would it be like to ride on a light beam?" is quite as physically implausible as it gets, but seems to have produced a few rather interesting insights.

comment by Meta_and_Meta · 2008-04-06T14:38:56.000Z · LW(p) · GW(p)

How can we possibly move on when there are still people who are wrong on the Internet?

comment by Caledonian2 · 2008-04-06T15:11:24.000Z · LW(p) · GW(p)

One of the very many problems with today's world is that, instead of confronting the root issues that underlie disagreement, people simply split into groups and sustain themselves on intragroup consensus.

If we do this every time we run up against a persistent disagreement, we will never actually resolve any issue; we'll just winnow down the number of people we're willing to listen to until we're secure in a safe and comfortable echo chamber, with our own opinions bouncing back at us forever.

That is an extraordinarily bad way to overcome bias.

comment by michael_vassar3 · 2008-04-06T15:57:23.000Z · LW(p) · GW(p)

Caledonian: It doesn't look to me like Philosophy has always been dominated by people with such a weak grip on the opposing positions, at least with respect to reduction. Russell, for instance, is a clear counterexample, though he was weak in his understanding of the economic mindset. Definitions are a useful tool for thought, not the whole thing. The classical disproof of positivism is that it is self-contradictory. "Only the empirical can be true", but that statement is not empirical.

Will Pearson: I share some of your suspicion that replacing neurons may not be possible. Reactions to mind-altering substances should be fairly easy. Ditto reaction to acceleration etc. I don't think that any of those, nor the diseases nor the EM radiation, really need to be copied according to the "generalized anti-zombie principle" in any event. Would you really worry about a pill that eliminated all of them possibly also eliminating phenomenal consciousness? Seems far more likely that it would preserve it from those things. OTOH, mimicking the construction of new neurons and connections sounds very tough. It doesn't seem very likely to me that after replacing my brain with these robots I wouldn't still be "conscious" and "me", but it seems not unlikely that I would fairly soon be a brain-damaged version of "me", possibly in a manner that was opaque from outside, possibly in a manner that was at first opaque from outside and later not.

comment by michael_vassar3 · 2008-04-06T16:00:07.000Z · LW(p) · GW(p)

Caledonian: Good point about echo chambers, but it's far from clear to me how to fix it. It's a fairly clear empirical fact that most people are not receptive to arguments on many topics. Since they persist in disagreement, at some point we have to stop listening to some of them if we are ever to get on to doing anything else.

comment by Caledonian2 · 2008-04-06T16:39:43.000Z · LW(p) · GW(p)

but it's far from clear to me how to fix it.

Well, the first thing we have to do is stop talking about the argument as though it were a matter of possibilities and probabilities: "it seems unlikely that", "I suspect that", "this is unlike what we would probably expect", et cetera, need to be abolished. The argument must be logically resolved, not merely trail off with stated positions that one side feels are "reasonable".

The p-zombie advocates are confusing physics-as-it-is and physics-as-we-understand. It is entirely possible that there are phenomena that our current understanding of physics and limited powers of observation might not include. But those hypothetical new things would be detected IF AND ONLY IF we noticed that the world did not act as our model said it should, given the available conditions. That would be the evidence we'd need to conclude that our model was missing some parts - perhaps our representation of the conditions was wrong and our rules were right, or perhaps our rules were inadequate.

If we had 'souls', 'consciousness', 'experiences', 'qualia', whatever we wish to call the hypothesized "new things", they would bring about changes in the world that the models that did not include them could not account for. The p-zombie advocates explicitly rule out this possibility: the p-zombie world acts precisely as ours does in all respects, not just the ways we can currently see.

Ergo, the properties that they postulate, that make p-zombies different from non-p-zombies, do not exist. Imagining a p-zombie as distinct from a 'conscious entity' is not possible, because the two things are the same. They have precisely the same properties, it's just that the labels that point to them are different.

Eliezer doesn't go far enough. Chalmers' idea of consciousness isn't just unnecessary, it's incoherent. It's not merely improbable, it is wrong. The people postulating effective epiphenomena aren't fiddling with trivia, they are logically contradictory.

If we cannot perceive a logical contradiction of this simplicity and directness, how do we expect to resolve subtler questions?

comment by GreedyAlgorithm · 2008-04-06T17:27:30.000Z · LW(p) · GW(p)

The only way I can see p-zombieness affecting our world is if

a) we decide we are ethically bound to make epiphenomenal consciousnesses happier, better, whatever;

b) our amazing grasp of physics and how the universe exists leads our priors to indicate that even though it's impossible to ever detect them, epiphenomenal consciousnesses are likely to exist; and

c) it turns out doing this rather than that gives the epiphenomenal consciousnesses enough utility that it is ethical to help them out.

comment by Frank_Hirsch · 2008-04-06T17:56:52.000Z · LW(p) · GW(p)

Caledonian:

One of the very many problems with today's world is that, instead of confronting the root issues that underlie disagreement, people simply split into groups and sustain themselves on intragroup consensus. [...] That is an extraordinarily bad way to overcome bias.

I disagree. What do we have to gain from bringing all-and-everyone in line with our own beliefs? While it is arguably a good thing to exchange our points of view, and how we are rationalising them, there will always be issues where the agreed evidence is just not strong enough to refute all but one way to look at things. I believe that sometimes you really do have to agree to disagree (unless all participants espouse bayesianism, that is), and move on to more fertile pastures. And even if all participants in a discussion claim to be rationalists, sometimes you'll either have to agree that someone is wrong (without agreeing on who it is, naturally) or waste time you could have spent on more promising endeavours.

comment by Caledonian2 · 2008-04-06T18:37:42.000Z · LW(p) · GW(p)

there will always be issues where the agreed evidence is just not strong enough

Then no one arguing can justify their positions, and everyone is incorrect in their assertions.

In any argument there can be at most one correct side. There's no principle saying that any of the sides involved must be right - only that only one can be.

There's also no principle mandating that any of the sides must be wrong. Incoherent arguments aren't wrong. They would have to go up in ontological status to be wrong. It would take a great deal of work and some serious improvement for them to be wrong.

P-zombies aren't right. They aren't wrong. They are merely nonsense.

comment by Phil_Goetz2 · 2008-04-06T19:22:25.000Z · LW(p) · GW(p)

Caledonian writes:

Um, no. What it IS is a radically different meaning of the word than what the p-zombie nonsense uses. Chalmers' view requires stripping 'consciousness' of any consequence, while Eliezer's involves leaving the standard usage intact.

'Consciousness' in that sense refers to self-awareness or self-modeling, the attempt of a complex computational system to represent some aspects of itself, in itself. It has causal implications for the behavior of the system, can potentially be detected by an outside observer who has access to the mechanisms underlying that system, and is fully part of reality.

What Eliezer wrote is consistent with that definition of consciousness. But that is not "the standard usage". It's a useless usage. Self-representation is trivial and of no philosophical interest. The interesting philosophical question is why I have what the 99% of the world who doesn't use your "standard usage" means by "consciousness". Why do I have self-awareness? - and by self-awareness, I don't mean anything I can currently describe computationally, or know how to detect the consequences of.

This is the key unsolved mystery of the universe, the only one that we have really no insight into yet. You can't call it "nonsense" when it clearly exists and clearly has no explanation or model. Unless you are a zombie, in which case what I interpret as your stance is reasonable.

There is a time to be a behaviorist, and it may be reasonable to say that we shouldn't waste our time pursuing arguments about internal states that we can't detect behaviorally, but it is Silly to claim to have dispelled the mystery merely by defining it away.

There have been too many attempts by scientists to make claims about consciousness that sound astonishing, but turn out to be merely redefinitions of "consciousness" to something trivial. Like this, for instance. Or Crick's "The Astonishing Hypothesis", or other works by neuroscientists on "consciousness" when they are actually talking about focus of attention. I have developed an intellectual allergy to such things. Going on about zombies and consciousness as if you were addressing philosophical issues, when you have redefined consciousness to mean a particular easily-comprehended computational or graph-theoretic property, falls squarely into the category of ideas that I consider Silly.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-04-06T19:24:03.000Z · LW(p) · GW(p)

Poke: It seems like you want to equate my awareness of changes to myself with my identity; so any change that's imperceptible to my awareness of myself would not change my identity.

As Charles says, many real cognitive processes are not accessible to introspection, or are real and substantial without rising to the point of producing an immediate effect on the internal narrative. Maybe, as you were reading just now, your brain forgot some small fact about your thirteenth birthday. That would be a real change to your personal identity; if enough such changes accumulated, you would cease to exist; and you didn't notice it as it happened.

Albert's reply is that we aren't talking about a small change here; Charles has just postulated that your personal continuity was absolutely interrupted - one person died, another person was born, and neither of them noticed. This is what Albert thinks the GAZP prohibits, on account of it strongly resembling the notion that consciousness can be eliminated entirely without your internal narrative noticing; in other words, if you don't notice yourself dying and being born, you probably didn't - that's what makes Albert think that something epiphenomenal is being postulated.

As a side note, Bernice might say that you don't notice the change because the area responsible for noticing it has been damaged (destroyed, actually) like an anosognosic patient who can't believe that their left arm is paralyzed. But Albert and probably even Charles would agree that this kind of specific functional brain damage to awareness-areas, is not occurring here.

comment by Phil_Goetz2 · 2008-04-06T19:25:34.000Z · LW(p) · GW(p)

Going on about zombies and consciousness as if you were addressing philosophical issues, when you have redefined consciousness to mean a particular easily-comprehended computational or graph-theoretic property, falls squarely into the category of ideas that I consider Silly.

Although, ironically, I'm in the process of doing exactly that. I will try to come up with a rationalization for why it is Not Silly when I do it.

Replies from: diegocaleiro
comment by diegocaleiro · 2010-09-17T05:50:31.784Z · LW(p) · GW(p)

It probably doesn't feel silly when you do it because you unconsciously have two epistemic subjects in your model of the world. One is the conscious you, and the other is the brainy speaky, from Wernicke to mouth to the word "consciousness" you.

Since the model your physical self has made of the world includes both the physical you, and the chalmersian-conscious-you, and the physical self does not know it has this division, the model constantly switches between representations, allowing for silly things to happen. In fact, except for Chalmers, who is really skilled at dodging this mistake (because he invented it and made a career out of it), most smart people do this. (It was so hard to find where Chalmers cheated in his "The Content and Epistemology of Phenomenal Belief" I wrote an article pointing it out.)

If you want to add a few bits to the model of what feels like you, the chalmersian-conscious-you, Tononi (http://www.biolbull.org/cgi/content/abstract/215/3/216) will give you a little information; it will explain only (don't put high hopes on it) why colors are different from sounds.

I have never read anything else that improves the brute model of chalmersian-conscious-me with which we are equipped naturally....

comment by Will_Pearson · 2008-04-06T19:31:26.000Z · LW(p) · GW(p)

Frank Hirsch: Riding a light beam is well specified. Silicon neurons having the same behaviour as protein ones is not well specified and not likely to be fruitful until it is so.

michael vassar: My statement was not about eliminating phenomenal consciousness, it was about changing personal identity. I don't personally, but other people strongly associate with altering their consciousness through drugs. Someone who couldn't get stoned with their mates or have a night out on the town drinking while enjoying the altered mind state might think that something was wrong with them, and that they weren't the same person as they were before.

And even if personal identity is not affected, social identity might be. That is, other people would see you as a different person even if you didn't. If your personal identity is founded upon your relationships with others, this may be a problem.

comment by Stirling_Westrup · 2008-04-06T19:54:37.000Z · LW(p) · GW(p)

I must admit I found the previous articles on Zombies somewhat tedious as I find the entire concept of philosophical Zombies to be specious. Still, now I'm glad I read through it all as I can see why you were so careful to lay down the foundations you did.

The question of what changes one can make to the brain while maintaining 'identity' has been discussed many times on the Extropians list, and seldom with any sort of constructive results.

Today's article has already far exceeded the signal-to-noise ratio of any other discussion on the same topic that I've ever seen, so I am really looking forward to seeing where you go from here.

comment by komponisto2 · 2008-04-06T21:46:38.000Z · LW(p) · GW(p)

Michael Vassar:

The classical disproof of positivism is that it is self-contradictory. "Only the empirical can be true", but that statement is not empirical.

I have always been mystified at how this glib dismissal has been taken as some kind of definitive refutation. To the contrary, it should be perfectly obvious that a meta-statement like (*) "a statement is nonsense unless it describes an empirically observable phenomenon" is not meant to be self-referential. What (*) does is to lay down a rule of discourse (not meta-discourse). Its purpose is to banish invisible dragons from the discussion.

You cannot appeal to the "legitimacy" of sentences like (*) in order to argue on behalf of your favorite invisible dragon. But this is exactly what is going on in exchanges like the following: A: "The concept of consciousness is meaningless because it has no empirical consequences." B: "Silly amateur! Don't you know that logical positivism has been refuted?"

comment by Caledonian2 · 2008-04-06T22:22:07.000Z · LW(p) · GW(p)

Expecting an argument to be able to justify itself is unreasonable, to my mind. Nothing can justify itself; everything must be justified by referring to something else, and the references cannot be circular.

Sure, you can always reference a deeper, more fundamental set of assertions to justify any particular claim, but what justifies those? You could construct an infinite chain that way, and still not explain how "assertions you make" can be justified, because you must always assume that the latest claims are justified themselves in order for them to support everything you claimed before them.

The key, I think, is to recognize that you can justify your claims only by pointing to something outside yourself. This applies as much to the totality of humanity as it does to an individual. You can construct an argument, but what validates your ability to construct is not yourself, but something greater; no argument can be constructed that validates that thing.

"It" is empirical reality, and the justification for human claims is observation of that reality. It does not need our support to function, and its functioning is unaffected by the arguments we make. The truth points to itself.

But this is another argument and will be argued another time.

comment by Fly2 · 2008-04-07T03:35:31.000Z · LW(p) · GW(p)

For this discussion I use "consciousness" to refer to the mind's internal awareness of qualia. Consciousness may be an inherent property of whatever makes up the universe, i.e., even individual photons may have some essence of consciousness. Human type consciousness might then arise whenever sufficient elements group together in the right pattern. Other groupings into other patterns might generate other types of consciousness. Consciousness may have no purpose. Or perhaps certain types of consciousness somehow enhance intelligence and provide an evolutionary advantage.

If I don't trust that other people have a self awareness much like mine, then I have no reason to trust any of my senses or memories or beliefs. So I trust the evidence that other humans look like me, act like me, have brains like mine, and express internal thoughts in language as I do. I am only slightly less certain that mammals, birds, reptiles, and fish are conscious as they share common ancestry, have similar brain structures, and exhibit similar behavior. I am less certain about insects or worms. As I don't know the physical correlates of consciousness, the further from myself an entity is in structure and behavior, the less certain I am that it has an internal awareness similar to my own.

Animal consciousness can be explored by experimentation on humans, primates, mice, and fruit flies. The boundaries of consciousness can be mapped in the neural tissue of the brain. Cognitive scientists can explore what stimuli provoke a conscious response, what provoke an unconscious response, and what don't provoke any response. Scientists can observe what brain tissue is active when we say we experience qualia and what is active when we say we don't experience qualia. Studying brain injury patients provides a wealth of information concerning the brain's generation of consciousness...split brain, phantom limbs, aphasias, personality changes, delusions, etc.

Such experimentation indicates that our internal concept of self is largely an illusion. The mind tries to make sense out of whatever is available. If both brain hemispheres are strongly connected then there is a strong illusion of one internal person. If the brain hemispheres are disconnected, then experiments show two different personalities inhabiting the same brain. Each personality has no awareness of the other personality. When the second personality acts independently, the first personality rationalizes why the first personality "chose" to perform the action. It is possible that many such self aware personalities co-exist in our brains, each with its own illusion of being in control and each with its own perception of qualia. (In some brain injuries, a person no longer believes that their own arm is part of self. Even though they can control the arm and feel what the arm touches, they think it is someone else's arm. The brain function that creates the illusion of self is broken.) These illusions of self may not be necessary to experience qualia but probably are necessary for a human to describe or relate the experience of qualia.

Speculation about zombies should take into account what science has already discovered. I.e., our internal concept of ourselves is only a blurred reflection of reality. "Self" is manufactured on the fly out of bits and pieces that change with every experience, with every hormonal change, with every drug we take, or with every injury we experience.

What would our internal "self" experience as each neuron were gradually replaced by a nano computer simulator? If the simulator generated a similar essence of qualia (i.e., simulating a brain pattern is sufficient to generate the experience of qualia) then the internal experience should be the same. If the simulator produced no such experience of qualia, then our internal self would be unable to recognize that our internal awareness was shrinking. We would not be able to remember that we could once hear more sounds or see more colors as memory itself depends on that internal awareness. Our internal self would fade away unnoticed by that internal self. (In some cases of dementia, the patient doesn't comprehend that his mind is failing. He don't understand why his family has brought him to see the doctor.) With nano-simulators mental function would continue, but internal awareness might disappear.

comment by Hopefully_Anonymous · 2008-04-07T04:07:33.000Z · LW(p) · GW(p)

I envy that your blog has attracted a much richer discussion on this topic than has mine (see for instance Phil Goetz's & Fly's recent posts).

comment by Z._M._Davis · 2008-04-07T04:53:20.000Z · LW(p) · GW(p)

"But if you add up enough tiny little gravitational pulls, they will eventually yank you across the room and tear you apart by tidal forces, so clearly a small effect is not 'no effect at all'."

When I reread this passage, I can't help but think of the combined gravitational pull of 3^^^3 dust specks.

comment by Nick_Tarleton · 2008-04-08T21:33:43.000Z · LW(p) · GW(p)

The anti-epiphenomenalist argument makes me think that if substance dualism is true, introspection alone can't provide an epistemic warrant for it, any more than introspection could tell an AI what its processors are made of. Substance dualism makes the prediction that certain loci in the brain behave in a physics-violating but regular way with a significant impact on behavior, but the brain doesn't have any ability to notice this. Since the brain is of finite complexity, there would have to be some computer that, wired in the right way, would produce the same behavior as the 'soul', in which case the brain would have the same belief (or at least 'z-belief', informationally identical but lacking in phenomenal content) in the soul... you see where this is going.

Actually, that might better be said to show that there's no such thing as the "supernatural", it's all one web of causality, in which case the impossibility of introspective warrant for 'dualism' (= our model of physics being incomplete in some way that affects the brain's behavior) is even more obvious.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-06-21T15:28:07.079Z · LW(p) · GW(p)

The concept of a supercausal cause is nonsense of the highest order; e.g. "God speaks to me in my heart, and you can't scientifically refute that because it has no experimental consequences". But if you define the "supernatural" as "ontologically basic mental stuff not reducible to non-mental parts, like the Force in Star Wars", then it is much less obviously nonsense; nonsense of a lower order, which is harder to detect.

comment by Matthew4 · 2008-04-09T00:07:39.000Z · LW(p) · GW(p)

Such experimentation indicates that our internal concept of self is largely an illusion.

It is relatively easy to discover that the self / me is only a thought, not the reality it is assumed to be. Some basic inquiry into the nature of the assumed "self" will dissolve the illusion rather quickly.

It is often assumed that this is some kind of "religious" belief, but in fact it is easily available to atheists as well, such as Susan Blackmore and Sam Harris. I suspect Nick Tarleton would also include himself in this category.

comment by Ralph · 2008-05-19T10:53:17.000Z · LW(p) · GW(p)

Wittgenstein's post-Tractatus work concentrated on the role played by language in the kind of talk we sometimes call "philosophical discussions." In this later portion of Wittgenstein's work, his central activity consisted of pointing out, by means of numerous distinct examples, that one cannot stretch the use of words and phrases into new, synthetic realms, without incurring a significant risk of ending up talking nonsense.

The trap against which Wittgenstein warns us is something like writing "Where has a wireless mouse's tail gone? Possibly it is still somehow attached, but invisible."

In this case it is obvious that the word "mouse" can be used in a number of different contexts, and that these various contexts are not actually linked together by anything but the trivial fact of each containing the word "mouse."

In the case of words such as "conscious," "aware," "thought" and the like, the lack of connection between contexts is less apparent. Likewise, the danger of conflating and distorting those contexts is significantly greater for such (actually quite narrow) words.

comment by nawitus · 2009-07-30T13:42:34.702Z · LW(p) · GW(p)

The problem with this argument is that it doesn't explain anything, nor does it solve the hard problem of consciousness. You simply redefine consciousness to mean something experimentally detectable, and then use that to claim p-zombies are impossible. You can move on, but that leaves the original problem unanswered.

"Consciousness, whatever it may be - a substance, a process, a name for a confusion - is not epiphenomenal; your mind can catch the inner listener in the act of listening, and say so out loud." That's simply a fact about human brains, and is of course empirically detectable, and we can in principle write out algorithms and then create a consciousness detector. That doesn't explain anything about qualia though, and that's the hard problem.

Replies from: thomblake, Eliezer_Yudkowsky
comment by thomblake · 2009-07-30T13:59:20.056Z · LW(p) · GW(p)

No, the problem with the zombie argument, the notion of 'qualia', and anything postulating mysterious entities, is that they don't explain / predict anything. This post mostly just explains that for people who don't feel like reading Dennett.

Replies from: nawitus
comment by nawitus · 2009-08-01T22:39:26.986Z · LW(p) · GW(p)

There are many valid arguments or reasons to believe in the existence of qualia; you can't simply say that because we cannot use qualia to predict anything at this point, then you can just ignore qualia. Qualia is "mysterious" in the same way the universe is: we don't know its properties fully.

Replies from: thomblake
comment by thomblake · 2009-08-01T23:24:38.478Z · LW(p) · GW(p)

you can't simply say that because we cannot use qualia to predict anything at this point, then you can just ignore qualia

In fact, I can and did. Furthermore, if a hypothesis doesn't predict anything, then it is a meaningless hypothesis; it cannot be tested, and it is not useful even in principle. An explanation that does not suggest a prediction is no explanation at all.

Avoid mysterious answers to mysterious questions

Replies from: nawitus, Juno_Watt
comment by nawitus · 2009-08-02T08:31:34.727Z · LW(p) · GW(p)

Qualia is not a full explanation as of yet; you can think of it as a philosophical problem. There are many arguments to believe in the existence of qualia. It might be possible to show all of them to be false; in fact, Dennett has attempted this. After you've shown them all to be false, it's okay to say "qualia doesn't exist". However, it's irrational to claim that since the concept/problem of qualia doesn't predict anything, qualia therefore doesn't exist.

Replies from: thomblake
comment by thomblake · 2009-08-02T20:52:45.361Z · LW(p) · GW(p)

However, it's irrational to claim that since the concept/problem of qualia doesn't predict anything, qualia therefore doesn't exist.

Nope. It's irrational to claim that qualia does exist when the hypothesis that qualia exists does not entail any predictions. I am not aware of any good arguments in favor of the existence of qualia, and already have a good reason to reject the hypothesis that it exists.

comment by Juno_Watt · 2013-08-26T01:11:04.729Z · LW(p) · GW(p)

"qualia" labels part of the explanandum, not the explanation.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-08-02T01:43:55.704Z · LW(p) · GW(p)

The essay isn't trying to solve the hard problem of consciousness. It is trying to demonstrate the impossibility of p-zombies. Consciousness is not "redefined" as something experimentally detectable; it is simply pointed out that consciousness defined the usual way is, in fact, experimentally detectable, since we can catch ourselves in the act of listening and visibly move our lips to report it.

comment by FrankAdamek · 2010-07-02T00:36:52.513Z · LW(p) · GW(p)

Admittedly, few are likely to read this comment, the post being so old. Also, I must apologize: having read the first few comments, marginal value seems to suggest that I not read through all the others looking for something like this point. That said, a few thoughts.

While I'm not sure whether I agree with Albert, I don't believe the GAZP applies wholly to this topic. It's actually somewhat summarized by the amusing line "I don't think therefore I'm not". Or rather, "I'm not, therefore I don't think". Nobody ever has (reliable) evidence that they don't exist. I can never know, based on direct experience, that I haven't just died and somehow been replaced with "another identical person", and am not now the person I was moments ago. Identity is complicated and I don't find this likely to happen, but the point still stands. It's not that there is no subjective evidence of the switch, but that the entity who had the experience no longer exists. One may still have intuitions regarding experiences they won't notice, or rather, experiences that will remove any further noticing of anything ever, the simplest case being traditional death. You just can't use your past memories of existing as evidence for what might cause that.

Replies from: lessdazed
comment by lessdazed · 2011-07-24T12:12:48.096Z · LW(p) · GW(p)

I can never know, based on direct experience, that I haven't just died and somehow been replaced with "another identical person", and am not now the person I was moments ago.

Not to be crass (at this point, good Bayesians should bear in mind Cromwell's rule; it's still logically possible that what follows won't be something crass), but there is eating and pooping.

comment by rosyatrandom · 2011-04-15T16:07:52.196Z · LW(p) · GW(p)

Another comment added a few years after the original post, and hence probably pretty useless:

My thoughts are that consciousness (as in the experience of it) is a kind of epiphenomenon:

The sensation is derived from cognitive processes that map isomorphically to an abstract model of consciousness in mindspace (and I do not make any distinction or hierarchy between realspace and mindspace in terms of privileged levels of existence).

It does this because the brain is doing exactly what it feels like consciousness does - integrating various inputs into a representation of self and environment, making plans and telling a consistent story about it all. And the mapping, by being possible, is also real.

comment by googleplexius · 2011-09-03T19:56:48.232Z · LW(p) · GW(p)

I have to say I agree with Charles' proposition. I mean, if one thinks "I am thinking", the neurons have to fire off in your head A) to think, B) to say "I am thinking", C) to realize one is saying he or she is thinking, D) to determine the cause and thought process of all of the above, and E) to rationalize the behavior of our brains in an inductive-reasoning-based processing sense of the word.

So, if all of the above are true, as is the aforementioned butterfly effect that causes a misplaced neuron to trigger a seizure, then if one's neurons were replaced by other completely identical neurons, you would have consciousness, but not the same consciousness, and not necessarily a human consciousness. (The argument also depends on whether one believes that randomness is really not random at all. If it is truly random, then the robot neurons could not replicate that process in an algorithm, since in that case there isn't one; the randomness of human consciousness would then constitute the difference between the robot consciousness and the human one, if the robot consciousness is actually considered "conscious" at all, which would mean zombies COULD exist due to the lack of randomness in a robot-neuron-composed brain.) BUT if randomness isn't actually random at all, and such quantities as pi merely consist of a very complex pattern, then who's to say robots cannot replicate the pattern? In that case human existence would be replicable, and there would be no difference between conscious robots and conscious humans, but the unconscious would not be unconscious unless they were dead, thus proving the GAZP. Does anyone else agree, or am I missing something?
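
One small, concrete illustration of the "pattern that only looks random" half of that fork: two physically distinct machines running the same deterministic generator from the same starting state produce identical "random-looking" output. (A toy sketch only; the seed and sequence length are arbitrary choices, not anything from the original comment.)

```python
import random

# Two separate generator instances stand in for two distinct "brains"
# that implement the same deterministic rule with the same starting state.
brain_a = random.Random(42)
brain_b = random.Random(42)

seq_a = [brain_a.random() for _ in range(5)]
seq_b = [brain_b.random() for _ in range(5)]

# The sequences pass casual inspection as "random", yet are perfectly replicable.
print(seq_a)
print(seq_a == seq_b)  # True
```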

comment by Linx · 2012-05-24T04:13:18.505Z · LW(p) · GW(p)

I'm sorry to comment on such an old post, but I'm really new to rationality and especially bayesianism, and this discussion got me confused about something.

Non-reductionists such as Richard say there is a non-physical "thingy" called a consciousness, and that it is epiphenomenal. That means it has no consequences in the physical world.

Wouldn't this be a model that doesn't anticipate anything, as you described in your first posts? If one argues that consciousness has no effect on the observable world, isn't one arguing that there might not be any consciousness at all? That the whole argument is pointless?

comment by A1987dM (army1987) · 2013-09-15T09:16:22.215Z · LW(p) · GW(p)

The gravitational pull from a one-gram switch ten meters away is around 6 * 10^-16 m/s^2. That's around half a neutron diameter per second per second, far below thermal noise, but way above the Planck level.

[...]

The switch's flip does change the fundamental particles of your body and brain. It nudges them by whole neutron diameters away from where they would have otherwise been.

But when you flip the switch, it doesn't disappear altogether; it just gets displaced by a few millimetres, so the number you care about is the change in its pull due to that displacement, which is hundreds of times smaller.
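
For what it's worth, the arithmetic behind both numbers is easy to check. A minimal sketch in Python, assuming a 1-gram switch 10 metres away that moves about 3 millimetres when flipped, and a neutron diameter of roughly 1.7 * 10^-15 m; the exact ratio depends on how far the switch actually travels:

```python
# Sanity check of the figures above (all inputs are assumed round numbers).
G = 6.674e-11               # gravitational constant, m^3 kg^-1 s^-2
m = 1e-3                    # switch mass: one gram, in kg
r = 10.0                    # distance to the switch, m
dr = 3e-3                   # assumed displacement when the switch is flipped, m
neutron_diameter = 1.7e-15  # approximate, m

# Pull of the switch on you: g = G*m / r^2
g_switch = G * m / r**2
print(f"pull of the switch: {g_switch:.1e} m/s^2")                        # ~6.7e-16
print(f"...in neutron diameters/s^2: {g_switch / neutron_diameter:.2f}")  # ~0.4

# Change in that pull when the switch moves by dr << r:
# |d(G*m/r^2)/dr| * dr = 2*G*m*dr / r^3
delta_g = 2 * G * m * dr / r**3
print(f"change from flipping it: {delta_g:.1e} m/s^2")
print(f"smaller by a factor of about {g_switch / delta_g:.0f}")           # ~1700 for dr = 3 mm
```

With a displacement closer to a centimetre, the factor drops into the hundreds, which seems to be the regime the comment has in mind.
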
comment by Articulator · 2013-11-13T17:56:39.668Z · LW(p) · GW(p)

If we assume Reductionism and Naturalism, the concept of the Zombie is a paradox.

The two premises I have just outlined are mutually exclusive with the premise "beings that are atom-by-atom identical to us... except that they are not conscious."

That is like saying that there are two gears that mesh together, yet when one turns, the other does not. Paradox. There is no solving it. The only difference is the layers of complexity. We cannot, with only our own minds, find or prove prime numbers with many digits, but that doesn't mean that they do not exist.

If you truly believe that there is no external, supernatural cause to consciousness, then Zombies are a true paradox that cannot exist.

Since an argument like this rests on several necessary premises, one should really just attack the one with the least support.

I have noticed that Eliezer favors synthetic over analytic arguments, but sometimes the latter is much more efficient than the former.

Replies from: pjeby
comment by pjeby · 2013-11-13T19:44:43.483Z · LW(p) · GW(p)

If we assume Reductionism and Naturalism, the concept of the Zombie is a paradox.

I don't understand, unless by "paradox" you mean "contradiction" or "nonsense" or "impossible".

Replies from: Articulator
comment by Articulator · 2013-11-13T19:50:42.422Z · LW(p) · GW(p)

Apologies.

I have indeed used paradox incorrectly. Your latter definitions are more appropriate. My confusion arose from the apparent possibility, but I see now that 'paradox' would only be correct if my argument still held that the existence of the zombie was possible.

However, I hope that despite that minor terminology quibble, you were still able to understand the thrust of my argument. If my argument is unclear from the line you quoted, it is worth noting that I explain it in the following paragraphs.

comment by higurashimerlin · 2014-08-28T02:05:43.175Z · LW(p) · GW(p)

Albert's position is similar to how you know that two calculators will have the same output despite having different physical configurations. If you have an idealized abstract model of, say, addition, you can draw a boundary around different designs that perform addition despite being different. You will know that something like an unconnected switch won't be enough to make it stop matching the model of addition.
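
A minimal sketch of that calculator point in Python (the function names and test cases here are just illustrative): two implementations with very different internals both fall inside the boundary drawn by the abstract model of addition, because only the input/output relation matters.

```python
# Two structurally different "calculators" that both satisfy the same
# abstract specification of addition.

def add_builtin(a: int, b: int) -> int:
    """Addition as the language primitive."""
    return a + b

def add_by_counting(a: int, b: int) -> int:
    """Addition as repeated increment/decrement -- a very different mechanism."""
    result = a
    step = 1 if b > 0 else -1
    for _ in range(abs(b)):
        result += step
    return result

# The abstract model doesn't care which design is underneath, only that
# the input/output relation matches.
for a, b in [(2, 3), (10, -4), (0, 0), (7, 7)]:
    assert add_builtin(a, b) == add_by_counting(a, b)
print("Different mechanisms, same behaviour: both count as 'addition'.")
```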

If we take the reason that Albert talks about being Albert, and talks about a person from five minutes ago as himself, and build an abstract idealized model, we will see that a lot of physical differences can take place without affecting the real reason for his report. There is a range of physical designs that match the model's prediction, and it includes ones where his brain is made of tiny robots. The cause of his report will remain the same regardless of the tiny robots replacing his neurons with themselves.

The implication I see is mind uploading. Where must we draw the boundary to capture the referent of "Albert"? How that question is answered may determine the future.

comment by Alex_Arendar · 2015-12-03T15:54:41.802Z · LW(p) · GW(p)

And if the tiny gravitational pull of the little 1-gram switch can turn off consciousness, then imagine what would happen in a crowded city when a large lorry loaded with 20 tons of lead moved across it :) People would go zombiiiiiies and there would be total chaos.

Replies from: gjm
comment by gjm · 2015-12-03T18:07:09.094Z · LW(p) · GW(p)

Nope, because the notion of "zombie" here is a weird one cooked up by philosophers with the property that whether someone is a zombie has no effect at all on how they behave. So there would be exactly the same amount of chaos as before the switch or lorry had its effect.

Replies from: Alex_Arendar
comment by Alex_Arendar · 2015-12-03T18:55:19.023Z · LW(p) · GW(p)

yes, you are right

comment by Vmax · 2020-08-08T14:16:14.451Z · LW(p) · GW(p)

This is the most I've read on this Zombieism concept, and now I can see it may not be the first thing I've read that touches on it. There is a fantasy series called Skulduggery Pleasant by Derek Landy. In one of its many side plots, two characters become zombies and even eventually get their brains fully replaced by genetically modified plant matter. They retain their consciousness and their personalities the entire time. They also continued functioning without a hitch after their Zombie Master died (the term was used in the books).

So I suppose the author would agree with this principle, and I find myself inclined to as well. It just makes so much sense, although I personally feel Eliezer could have been a bit more concise.

I will take this opportunity to recommend the series to all rationalists. It's the most rational piece of fiction I've read aside from HPMoR.