Zombies! Zombies?
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-04-04
Your "zombie", in the philosophical usage of the term, is putatively a being that is exactly like you in every respect—identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion—except that your zombie is not conscious.
It is furthermore claimed that if zombies are "possible" (a term over which battles are still being fought), then, purely from our knowledge of this "possibility", we can deduce a priori that consciousness is extra-physical, in a sense to be described below; the standard term for this position is "epiphenomenalism".
(For those unfamiliar with zombies, I emphasize that this is not a strawman. See, for example, the SEP entry on Zombies. The "possibility" of zombies is accepted by a substantial fraction, possibly a majority, of academic philosophers of consciousness.)
I once read somewhere, "You are not the one who speaks your thoughts—you are the one who hears your thoughts". In Hebrew, the word for the highest soul, that which God breathed into Adam, is N'Shama—"the hearer".
If you conceive of "consciousness" as a purely passive listening, then the notion of a zombie initially seems easy to imagine. It's someone who lacks the N'Shama, the hearer.
(Warning: Long post ahead. Very long 6,600-word post involving David Chalmers ahead. This may be taken as my demonstrative counterexample to Richard Chappell's Arguing with Eliezer Part II, in which Richard accuses me of not engaging with the complex arguments of real philosophers. Edit December 2019: There now exists a shorter edited version of this post.)
When you open a refrigerator and find that the orange juice is gone, you think "Darn, I'm out of orange juice." The sound of these words is probably represented in your auditory cortex, as though you'd heard someone else say it. (Why do I think this? Because native Chinese speakers can remember longer digit sequences than English-speakers. Chinese digits are all single syllables, and so Chinese speakers can remember around ten digits, versus the famous "seven plus or minus two" for English speakers. There appears to be a loop of repeating sounds back to yourself, a size limit on working memory in the auditory cortex, which is genuinely phoneme-based.)
Let's suppose the above is correct; as a postulate, it should certainly present no problem for advocates of zombies. Even if humans are not like this, it seems easy enough to imagine an AI constructed this way (and imaginability is what the zombie argument is all about). It's not only conceivable in principle, but quite possible in the next couple of decades, that surgeons will lay a network of neural taps over someone's auditory cortex and read out their internal narrative. (Researchers have already tapped the lateral geniculate nucleus of a cat and reconstructed recognizable visual inputs.)
So your zombie, being physically identical to you down to the last atom, will open the refrigerator and form auditory cortical patterns for the phonemes "Darn, I'm out of orange juice". On this point, epiphenomenalists would willingly agree.
But, says the epiphenomenalist, in the zombie there is no one inside to hear; the inner listener is missing. The internal narrative is spoken, but unheard. You are not the one who speaks your thoughts, you are the one who hears them.
It seems a lot more straightforward (they would say) to make an AI that prints out some kind of internal narrative, than to show that an inner listener hears it.
The Zombie Argument is that if the Zombie World is possible—not necessarily physically possible in our universe, just "possible in theory", or "imaginable", or something along those lines—then consciousness must be extra-physical, something over and above mere atoms. Why? Because even if you somehow knew the positions of all the atoms in the universe, you would still have to be told, as a separate and additional fact, that people were conscious—that they had inner listeners—that we were not in the Zombie World, as seems possible.
Zombie-ism is not the same as dualism. Descartes thought there was a body-substance and a wholly different kind of mind-substance, but Descartes also thought that the mind-substance was a causally active principle, interacting with the body-substance, controlling our speech and behavior. Subtracting out the mind-substance from the human would leave a traditional zombie, of the lurching and groaning sort.
And though the Hebrew word for the innermost soul is N'Shama, that-which-hears, I can't recall hearing a rabbi arguing for the possibility of zombies. Most rabbis would probably be aghast at the idea that the divine part which God breathed into Adam doesn't actually do anything.
The technical term for the belief that consciousness is there, but has no effect on the physical world, is epiphenomenalism.
Though there are other elements to the zombie argument (I'll deal with them below), I think that the intuition of the passive listener is what first seduces people to zombie-ism. In particular, it's what seduces a lay audience to zombie-ism. The core notion is simple and easy to access: The lights are on but no one's home.
Philosophers are appealing to the intuition of the passive listener when they say "Of course the zombie world is imaginable; you know exactly what it would be like."
One of the great battles in the Zombie Wars is over what, exactly, is meant by saying that zombies are "possible". Early zombie-ist philosophers (the 1970s) just thought it was obvious that zombies were "possible", and didn't bother to define what sort of possibility was meant.
Because of my reading in mathematical logic, what instantly comes into my mind is logical possibility. If you have a collection of statements like (A->B),(B->C),(C->~A) then the compound belief is logically possible if it has a model—which, in the simple case above, reduces to finding a value assignment to A, B, C that makes all of the statements (A->B),(B->C), and (C->~A) true. In this case, A=B=C=0 works, as does A=0, B=C=1 or A=B=0, C=1.
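The toy example above can be checked mechanically. A minimal sketch in Python (the names A, B, C follow the text; everything else is illustrative):

```python
from itertools import product

# The statements from the text: (A -> B), (B -> C), (C -> ~A).
# "P -> Q" is just "(not P) or Q".
def implies(p, q):
    return (not p) or q

statements = [
    lambda a, b, c: implies(a, b),
    lambda a, b, c: implies(b, c),
    lambda a, b, c: implies(c, not a),
]

# A model is a value assignment making every statement true;
# brute force over all 2**3 assignments.
models = [(a, b, c) for a, b, c in product([0, 1], repeat=3)
          if all(s(a, b, c) for s in statements)]
print(models)  # [(0, 0, 0), (0, 0, 1), (0, 1, 1)]
```

All three assignments named above show up as models, so the compound belief is logically possible—and, as it happens, those three are the only models.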
Something will seem possible—will seem "conceptually possible" or "imaginable"—if you can consider the collection of statements without seeing a contradiction. But it is, in general, a very hard problem to see contradictions or to find a full specific model! If you limit yourself to simple Boolean propositions of the form ((A or B or C) and (B or ~C or D) and (D or ~A or ~C) ...), conjunctions of disjunctions of three variables, then this is a very famous problem called 3-SAT, which is one of the first problems ever to be proven NP-complete.
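Small 3-SAT instances can still be brute-forced; the exponential blow-up in the number of variables is exactly what makes the general problem hard. A sketch using the text's example clauses (encoding literals as signed integers is my assumption, borrowed from the DIMACS convention, not anything in the text):

```python
from itertools import product

# Clauses as lists of literals: i means variable i, -i means its negation.
# The text's example: (A or B or C) and (B or ~C or D) and (D or ~A or ~C),
# with A=1, B=2, C=3, D=4.
clauses = [[1, 2, 3], [2, -3, 4], [4, -1, -3]]

def satisfiable(clauses, n_vars):
    """Brute-force SAT check: try all 2**n_vars assignments."""
    for bits in product([False, True], repeat=n_vars):
        # bits[i] is the truth value of variable i+1.
        if all(any(bits[abs(lit) - 1] == (lit > 0) for lit in clause)
               for clause in clauses):
            return True
    return False

print(satisfiable(clauses, 4))  # True
```

Four variables means 16 assignments; a hundred variables means 2**100, which is the leap from "I tried a few cases" to actually exhibiting a model.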
So just because you don't see a contradiction in the Zombie World at first glance, it doesn't mean that no contradiction is there. It's like not seeing a contradiction in the Riemann Hypothesis at first glance. From conceptual possibility ("I don't see a problem") to logical possibility in the full technical sense, is a very great leap. It's easy to make it an NP-complete leap, and with first-order theories you can make it arbitrarily hard to compute even for finite questions. And it's logical possibility of the Zombie World, not conceptual possibility, that is needed to suppose that a logically omniscient mind could know the positions of all the atoms in the universe, and yet need to be told as an additional non-entailed fact that we have inner listeners.
Just because you don't see a contradiction yet, is no guarantee that you won't see a contradiction in another 30 seconds. "All odd numbers are prime. Proof: 3 is prime, 5 is prime, 7 is prime..."
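The "proof" collapses on the fourth case, and a one-liner finds the contradiction:

```python
# First odd number >= 3 that has a nontrivial divisor, i.e. is not prime.
print(next(n for n in range(3, 100, 2)
           if any(n % d == 0 for d in range(3, n))))  # 9
```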
So let us ponder the Zombie Argument a little longer: Can we think of a counterexample to the assertion "Consciousness has no third-party-detectable causal impact on the world"?
If you close your eyes and concentrate on your inward awareness, you will begin to form thoughts, in your internal narrative, that go along the lines of "I am aware" and "My awareness is separate from my thoughts" and "I am not the one who speaks my thoughts, but the one who hears them" and "My stream of consciousness is not my consciousness" and "It seems like there is a part of me which I can imagine being eliminated without changing my outward behavior."
You can even say these sentences out loud, as you meditate. In principle, someone with a super-fMRI could probably read the phonemes out of your auditory cortex; but saying it out loud removes all doubt about whether you have entered the realms of testability and physical consequences.
This certainly seems like the inner listener is being caught in the act of listening by whatever part of you writes the internal narrative and flaps your tongue.
Imagine that a mysterious race of aliens visit you, and leave you a mysterious black box as a gift. You try poking and prodding the black box, but (as far as you can tell) you never succeed in eliciting a reaction. You can't make the black box produce gold coins or answer questions. So you conclude that the black box is causally inactive: "For all X, the black box doesn't do X." The black box is an effect, but not a cause; epiphenomenal; without causal potency. In your mind, you test this general hypothesis to see if it is true in some trial cases, and it seems to be true—"Does the black box turn lead to gold? No. Does the black box boil water? No."
But you can see the black box; it absorbs light, and weighs heavy in your hand. This, too, is part of the dance of causality. If the black box were wholly outside the causal universe, you couldn't see it; you would have no way to know it existed; you could not say, "Thanks for the black box." You didn't think of this counterexample, when you formulated the general rule: "All X: Black box doesn't do X". But it was there all along.
(Actually, the aliens left you another black box, this one purely epiphenomenal, and you haven't the slightest clue that it's there in your living room. That was their joke.)
If you can close your eyes, and sense yourself sensing—if you can be aware of yourself being aware, and think "I am aware that I am aware"—and say out loud, "I am aware that I am aware"—then your consciousness is not without effect on your internal narrative, or your moving lips. You can see yourself seeing, and your internal narrative reflects this, and so do your lips if you choose to say it out loud.
I have not seen the above argument written out that particular way—"the listener caught in the act of listening"—though it may well have been said before.
But it is a standard point—which zombie-ist philosophers accept!—that the Zombie World's philosophers, being atom-by-atom identical to our own philosophers, write identical papers about the philosophy of consciousness.
At this point, the Zombie World stops being an intuitive consequence of the idea of a passive listener.
Philosophers writing papers about consciousness would seem to be at least one effect of consciousness upon the world. You can argue clever reasons why this is not so, but you have to be clever.
You would intuitively suppose that if your inward awareness went away, this would change the world, in that your internal narrative would no longer say things like "There is a mysterious listener within me," because the mysterious listener would be gone. It is usually right after you focus your awareness on your awareness, that your internal narrative says "I am aware of my awareness", which suggests that if the first event never happened again, neither would the second. You can argue clever reasons why this is not so, but you have to be clever.
You can form a propositional belief that "Consciousness is without effect", and not see any contradiction at first, if you don't realize that talking about consciousness is an effect of being conscious. But once you see the connection from the general rule that consciousness has no effect, to the specific implication that consciousness has no effect on how philosophers write papers about consciousness, zombie-ism stops being intuitive and starts requiring you to postulate strange things.
One strange thing you might postulate is that there's a Zombie Master, a god within the Zombie World who surreptitiously takes control of zombie philosophers and makes them talk and write about consciousness.
A Zombie Master doesn't seem impossible. Human beings often don't sound all that coherent when talking about consciousness. It might not be that hard to fake their discourse, to the standards of, say, a human amateur talking in a bar. Maybe you could take, as a corpus, one thousand human amateurs trying to discuss consciousness; feed them into a non-conscious but sophisticated AI, better than today's models but not self-modifying; and get back discourse about "consciousness" that sounded as sensible as most humans, which is to say, not very.
But this speech about "consciousness" would not be spontaneous. It would not be produced within the AI. It would be a recorded imitation of someone else talking. That is just a holodeck, with a central AI writing the speech of the non-player characters. This is not what the Zombie World is about.
By supposition, the Zombie World is atom-by-atom identical to our own, except that the inhabitants lack consciousness. Furthermore, the atoms in the Zombie World move under the same laws of physics as in our own world. If there are "bridging laws" that govern which configurations of atoms evoke consciousness, those bridging laws are absent. But, by hypothesis, the difference is not experimentally detectable. When it comes to saying whether a quark zigs or zags or exerts a force on nearby quarks—anything experimentally measurable—the same physical laws govern.
The Zombie World has no room for a Zombie Master, because a Zombie Master has to control the zombie's lips, and that control is, in principle, experimentally detectable. The Zombie Master moves lips, therefore it has observable consequences. There would be a point where an electron zags, instead of zigging, because the Zombie Master says so. (Unless the Zombie Master is actually in the world, as a pattern of quarks—but then the Zombie World is not atom-by-atom identical to our own, unless you think this world also contains a Zombie Master.)
When a philosopher in our world types, "I think the Zombie World is possible", his fingers strike keys in sequence: Z-O-M-B-I-E. There is a chain of causality that can be traced back from these keystrokes: muscles contracting, nerves firing, commands sent down through the spinal cord, from the motor cortex—and then into less understood areas of the brain, where the philosopher's internal narrative first began talking about "consciousness".
And the philosopher's zombie twin strikes the same keys, for the same reason, causally speaking. There is no cause within the chain of explanation for why the philosopher writes the way he does, which is not also present in the zombie twin. The zombie twin also has an internal narrative about "consciousness", that a super-fMRI could read out of the auditory cortex. And whatever other thoughts, or other causes of any kind, led to that internal narrative, they are exactly the same in our own universe and in the Zombie World.
So you can't say that the philosopher is writing about consciousness because of consciousness, while the zombie twin is writing about consciousness because of a Zombie Master or AI chatbot. When you trace back the chain of causality behind the keyboard, to the internal narrative echoed in the auditory cortex, to the cause of the narrative, you must find the same physical explanation in our world as in the zombie world.
As the most formidable advocate of zombie-ism, David Chalmers, writes:
Think of my zombie twin in the universe next door. He talks about conscious experience all the time—in fact, he seems obsessed by it. He spends ridiculous amounts of time hunched over a computer, writing chapter after chapter on the mysteries of consciousness. He often comments on the pleasure he gets from certain sensory qualia, professing a particular love for deep greens and purples. He frequently gets into arguments with zombie materialists, arguing that their position cannot do justice to the realities of conscious experience.
And yet he has no conscious experience at all! In his universe, the materialists are right and he is wrong. Most of his claims about conscious experience are utterly false. But there is certainly a physical or functional explanation of why he makes the claims he makes. After all, his universe is fully law-governed, and no events therein are miraculous, so there must be some explanation of his claims.
...Any explanation of my twin’s behavior will equally count as an explanation of my behavior, as the processes inside his body are precisely mirrored by those inside mine. The explanation of his claims obviously does not depend on the existence of consciousness, as there is no consciousness in his world. It follows that the explanation of my claims is also independent of the existence of consciousness.
Chalmers is not arguing against zombies; those are his actual beliefs!
This paradoxical situation is at once delightful and disturbing. It is not obviously fatal to the nonreductive position, but it is at least something that we need to come to grips with...
I would seriously nominate this as the largest bullet ever bitten in the history of time. And that is a backhanded compliment to David Chalmers: A lesser mortal would simply fail to see the implications, or refuse to face them, or rationalize a reason it wasn't so.
Why would anyone bite a bullet that large? Why would anyone postulate unconscious zombies who write papers about consciousness for exactly the same reason that our own genuinely conscious philosophers do?
Not because of the first intuition I wrote about, the intuition of the passive listener. That intuition may say that zombies can drive cars or do math or even fall in love, but it doesn't say that zombies write philosophy papers about their passive listeners.
The zombie argument does not rest solely on the intuition of the passive listener. If this was all there was to the zombie argument, it would be dead by now, I think. The intuition that the "listener" can be eliminated without effect, would go away as soon as you realized that your internal narrative routinely seems to catch the listener in the act of listening.
No, the drive to bite this bullet comes from an entirely different intuition—the intuition that no matter how many atoms you add up, no matter how many masses and electrical charges interact with each other, they will never necessarily produce a subjective sensation of the mysterious redness of red. It may be a fact about our physical universe (Chalmers says) that putting such-and-such atoms into such-and-such a position, evokes a sensation of redness; but if so, it is not a necessary fact, it is something to be explained above and beyond the motion of the atoms.
But if you consider the second intuition on its own, without the intuition of the passive listener, it is hard to see why it implies zombie-ism. Maybe there's just a different kind of stuff, apart from and additional to atoms, that is not causally passive—a soul that actually does stuff, a soul that plays a real causal role in why we write about "the mysterious redness of red". Take out the soul, and... well, assuming you just don't fall over in a coma, you certainly won't write any more papers about consciousness!
This is the position taken by Descartes and most other ancient thinkers: The soul is of a different kind, but it interacts with the body. Descartes's position is technically known as substance dualism—there is a thought-stuff, a mind-stuff, and it is not like atoms; but it is causally potent, interactive, and leaves a visible mark on our universe.
Zombie-ists are property dualists—they don't believe in a separate soul; they believe that matter in our universe has additional properties beyond the physical.
"Beyond the physical"? What does that mean? It means the extra properties are there, but, unlike properties such as electrical charge or mass, they don't influence the motion of the atoms. The extra properties are not experimentally detectable by third parties; you know you are conscious, from the inside of your extra properties, but no scientist can ever directly detect this from outside.
So the additional properties are there, but not causally active. The extra properties do not move atoms around, which is why they can't be detected by third parties.
And that's why we can (allegedly) imagine a universe just like this one, with all the atoms in the same places, but the extra properties missing, so that everything goes on the same as before, but no one is conscious.
The Zombie World may not be physically possible, say the zombie-ists—because it is a fact that all the matter in our universe has the extra properties, or obeys the bridging laws that evoke consciousness—but the Zombie World is logically possible: the bridging laws could have been different.
But, once you realize that conceivability is not the same as logical possibility, and that the Zombie World isn't even all that intuitive, why say that the Zombie World is logically possible?
Why, oh why, say that the extra properties are epiphenomenal and indetectable?
We can put this dilemma very sharply: Chalmers believes that there is something called consciousness, and this consciousness embodies the true and indescribable substance of the mysterious redness of red. It may be a property beyond mass and charge, but it's there, and it is consciousness. Now, having said the above, Chalmers furthermore specifies that this true stuff of consciousness is epiphenomenal, without causal potency—but why say that?
Why say that you could subtract this true stuff of consciousness, and leave all the atoms in the same place doing the same things? If that's true, we need some separate physical explanation for why Chalmers talks about "the mysterious redness of red". That is, there exists both a mysterious redness of red, which is extra-physical, and an entirely separate reason, within physics, why Chalmers talks about the "mysterious redness of red".
Chalmers does confess that these two things seem like they ought to be related, but really, why do you need both? Why not just pick one or the other?
Once you've postulated that there is a mysterious redness of red, why not just say that it interacts with your internal narrative and makes you talk about the "mysterious redness of red"?
Isn't Descartes taking the simpler approach, here? The strictly simpler approach?
Why postulate an extramaterial soul, and then postulate that the soul has no effect on the physical world, and then postulate a mysterious unknown material process that causes your internal narrative to talk about conscious experience?
Why not postulate the true stuff of consciousness which no amount of mere mechanical atoms can add up to, and then, having gone that far already, let this true stuff of consciousness have causal effects like making philosophers talk about consciousness?
I am not endorsing Descartes's view. But at least I can understand where Descartes is coming from. Consciousness seems mysterious, so you postulate a mysterious stuff of consciousness. Fine.
But now the zombie-ists postulate that this mysterious stuff doesn't do anything, so you need a whole new explanation for why you say you're conscious.
That isn't vitalism. That's something so bizarre that vitalists would spit out their coffee. "When fires burn, they release phlogiston. But phlogiston doesn't have any experimentally detectable impact on our universe, so you'll have to go looking for a separate explanation of why a fire can melt snow." What?
Are property dualists under the impression that if they postulate a new active force, something that has a causal impact on observables, they will be sticking their necks out too far?
Me, I'd say that if you postulate a mysterious, separate, additional, inherently mental property of consciousness, above and beyond positions and velocities, then, at that point, you have already stuck your neck out as far as it can go. To postulate this stuff of consciousness, and then further postulate that it doesn't do anything—for the love of cute kittens, why?
There isn't even an obvious career motive. "Hi, I'm a philosopher of consciousness. My subject matter is the most important thing in the universe and I should get lots of funding? Well, it's nice of you to say so, but actually the phenomenon I study doesn't do anything whatsoever." (Argument from career impact is not valid, but I say it to leave a line of retreat.)
Chalmers critiques substance dualism on the grounds that it's hard to see what new theory of physics, what new substance that interacts with matter, could possibly explain consciousness. But property dualism has exactly the same problem. No matter what kind of dual property you talk about, how exactly does it explain consciousness?
When Chalmers postulated an extra property that is consciousness, he took that leap across the unexplainable. How does it help his theory to further specify that this extra property has no effect? Why not just let it be causal?
If I were going to be unkind, this would be the time to drag in the dragon—to mention Carl Sagan's parable of the dragon in the garage. "I have a dragon in my garage." Great! I want to see it, let's go! "You can't see it—it's an invisible dragon." Oh, I'd like to hear it then. "Sorry, it's an inaudible dragon." I'd like to measure its carbon dioxide output. "It doesn't breathe." I'll toss a bag of flour into the air, to outline its form. "The dragon is permeable to flour."
One motive for trying to make your theory unfalsifiable, is that deep down you fear to put it to the test. Sir Roger Penrose (physicist) and Stuart Hameroff (anesthesiologist) are substance dualists; they think that there is something mysterious going on in quantum mechanics, that Everett is wrong and that the "collapse of the wave-function" is physically real, and that this is where consciousness lives and how it exerts causal effect upon your lips when you say aloud "I think therefore I am." Believing this, they predicted that neurons would protect themselves from decoherence long enough to maintain macroscopic quantum states.
This is in the process of being tested, and so far, prospects are not looking good for Penrose—
—but Penrose's basic conduct is scientifically respectable. Not Bayesian, maybe, but still fundamentally healthy. He came up with a wacky hypothesis. He said how to test it. He went out and tried to actually test it.
As I once said to Stuart Hameroff, "I think the hypothesis you're testing is completely hopeless, and your experiments should definitely be funded. Even if you don't find exactly what you're looking for, you're looking in a place where no one else is looking, and you might find something interesting."
So a nasty dismissal of epiphenomenalism would be that zombie-ists are afraid to say the consciousness-stuff can have effects, because then scientists could go looking for the extra properties, and fail to find them.
I don't think this is actually true of Chalmers, though. If Chalmers lacked self-honesty, he could make things a lot easier on himself.
(But just in case Chalmers is reading this and does have falsification-fear, I'll point out that if epiphenomenalism is false, then there is some other explanation for that-which-we-call consciousness, and it will eventually be found, leaving Chalmers's theory in ruins; so if Chalmers cares about his place in history, he has no motive to endorse epiphenomenalism unless he really thinks it's true.)
Chalmers is one of the most frustrating philosophers I know. Sometimes I wonder if he's pulling an "Atheism Conquered". Chalmers does this really sharp analysis... and then turns left at the last minute. He lays out everything that's wrong with the Zombie World scenario, and then, having reduced the whole argument to smithereens, calmly accepts it.
Chalmers does the same thing when he lays out, in calm detail, the problem with saying that our own beliefs in consciousness are justified, when our zombie twins say exactly the same thing for exactly the same reasons and are wrong.
On Chalmers's theory, Chalmers saying that he believes in consciousness cannot be causally justified; the belief is not caused by the fact itself. In the absence of consciousness, Chalmers would write the same papers for the same reasons.
On epiphenomenalism, Chalmers saying that he believes in consciousness cannot be justified as the product of a process that systematically outputs true beliefs, because the zombie twin writes the same papers using the same systematic process and is wrong.
Chalmers admits this. Chalmers, in fact, explains the argument in great detail in his book. Okay, so Chalmers has solidly proven that he is not justified in believing in epiphenomenal consciousness, right? No. Chalmers writes:
Conscious experience lies at the center of our epistemic universe; we have access to it directly. This raises the question: what is it that justifies our beliefs about our experiences, if it is not a causal link to those experiences, and if it is not the mechanisms by which the beliefs are formed? I think the answer to this is clear: it is having the experiences that justifies the beliefs. For example, the very fact that I have a red experience now provides justification for my belief that I am having a red experience...
Because my zombie twin lacks experiences, he is in a very different epistemic situation from me, and his judgments lack the corresponding justification. It may be tempting to object that if my belief lies in the physical realm, its justification must lie in the physical realm; but this is a non sequitur. From the fact that there is no justification in the physical realm, one might conclude that the physical portion of me (my brain, say) is not justified in its belief. But the question is whether I am justified in the belief, not whether my brain is justified in the belief, and if property dualism is correct then there is more to me than my brain.
So—if I've got this thesis right—there's a core you, above and beyond your brain, that believes it is not a zombie, and directly experiences not being a zombie; and so its beliefs are justified.
But Chalmers just wrote all that stuff down, in his very physical book, and so did the zombie-Chalmers.
The zombie Chalmers can't have written the book because of the zombie's core self above the brain; there must be some entirely different reason, within the laws of physics.
It follows that even if there is a part of Chalmers hidden away that is conscious and believes in consciousness, directly and without mediation, there is also a separable subspace of Chalmers—a causally closed cognitive subsystem that acts entirely within physics—and this "outer self" is what speaks Chalmers's internal narrative, and writes papers on consciousness.
I do not see any way to evade the charge that, on Chalmers's own theory, this separable outer Chalmers is deranged. This is the part of Chalmers that is the same in this world, or the Zombie World; and in either world it writes philosophy papers on consciousness for no valid reason. Chalmers's philosophy papers are not output by that inner core of awareness and belief-in-awareness, they are output by the mere physics of the internal narrative that makes Chalmers's fingers strike the keys of his computer.
And yet this deranged outer Chalmers is writing philosophy papers that just happen to be perfectly right, by a separate and additional miracle. Not a logically necessary miracle (then the Zombie World would not be logically possible). A physically contingent miracle, that happens to be true in what we think is our universe, even though science can never distinguish our universe from the Zombie World.
Or at least, that would seem to be the implication of what the self-confessedly deranged outer Chalmers is telling us.
I think I speak for all reductionists when I say Huh?
That's not epicycles. That's, "Planetary motions follow these epicycles—but epicycles don't actually do anything—there's something else that makes the planets move the same way the epicycles say they should, which I haven't been able to explain—and by the way, I would say this even if there weren't any epicycles."
I have a nonstandard perspective on philosophy because I look at everything with an eye to designing an AI; specifically, a self-improving Artificial General Intelligence with stable motivational structure.
When I think about designing an AI, I ponder principles like probability theory, the Bayesian notion of evidence as differential diagnostic, and above all, reflective coherence. Any self-modifying AI that starts out in a reflectively inconsistent state won't stay that way for long.
If a self-modifying AI looks at a part of itself that concludes "B" on condition A—a part of itself that writes "B" to memory whenever condition A is true—and the AI inspects this part, determines how it (causally) operates in the context of the larger universe, and the AI decides that this part systematically tends to write false data to memory, then the AI has found what appears to be a bug, and the AI will self-modify not to write "B" to the belief pool under condition A.
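As a toy illustration of this bug-hunting step (emphatically not a real AI design; the rule, the `audit` procedure, and the 0.5 reliability threshold are all invented for this sketch):

```python
# Toy model: an agent audits one of its own belief-writing rules.
# If the rule systematically writes false data to the belief pool,
# the agent "self-modifies" by discarding the rule.

def faulty_rule(world):
    # Writes belief "B" whenever condition A holds -- regardless of
    # whether B is actually true in that world.
    return "B" if world["A"] else None

def audit(rule, observed_worlds):
    """Fraction of the rule's written beliefs that match ground truth."""
    hits, total = 0, 0
    for world in observed_worlds:
        belief = rule(world)
        if belief is not None:
            total += 1
            if world.get(belief, False):
                hits += 1
    return hits / total if total else None

def self_modify(rules, observed_worlds, threshold=0.5):
    """Keep only the rules that look reliable on the evidence so far."""
    return [r for r in rules if (audit(r, observed_worlds) or 0) >= threshold]

# In the observed history, A holds every time but B is usually false,
# so the rule is systematically writing false data to memory.
history = [{"A": True, "B": False}, {"A": True, "B": False}, {"A": True, "B": True}]
remaining = self_modify([faulty_rule], history)
print(len(remaining))  # -> 0: the faulty rule has been removed
```

The point is only the shape of the operation: the agent treats its own belief-writing machinery as one more causal system to be inspected and corrected.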
Any epistemological theory that disregards reflective coherence is not a good theory to use in constructing self-improving AI. This is a knockdown argument from my perspective, considering what I intend to actually use philosophy for. So I have to invent a reflectively coherent theory anyway. And when I do, by golly, reflective coherence turns out to make intuitive sense.
So that's the unusual way in which I tend to think about these things. And now I look back at Chalmers:
The causally closed "outer Chalmers" (that is not influenced in any way by the "inner Chalmers" that has separate additional awareness and beliefs) must be carrying out some systematically unreliable, unwarranted operation which in some unexplained fashion causes the internal narrative to produce beliefs about an "inner Chalmers" that are correct for no logical reason in what happens to be our universe.
But there's no possible warrant for the outer Chalmers or any reflectively coherent self-inspecting AI to believe in this mysterious correctness. A good AI design should, I think, look like a reflectively coherent intelligence embodied in a causal system, with a testable theory of how that selfsame causal system produces systematically accurate beliefs on the way to achieving its goals.
So the AI will scan Chalmers and see a closed causal cognitive system producing an internal narrative that is uttering nonsense. Nonsense that seems to have a high impact on what Chalmers thinks should be considered a morally valuable person.
This is not a necessary problem for Friendly AI theorists. It is only a problem if you happen to be an epiphenomenalist. If you believe either the reductionists (consciousness happens within the atoms) or the substance dualists (consciousness is causally potent immaterial stuff), people talking about consciousness are talking about something real, and a reflectively consistent Bayesian AI can see this by tracing back the chain of causality for what makes people say "consciousness".
According to Chalmers, the causally closed cognitive system of Chalmers's internal narrative is (mysteriously) malfunctioning in a way that, not by necessity, but just in our universe, miraculously happens to be correct. Furthermore, the internal narrative asserts "the internal narrative is mysteriously malfunctioning, but miraculously happens to be correctly echoing the justified thoughts of the epiphenomenal inner core", and again, in our universe, miraculously happens to be correct.
Oh, come on!
Shouldn't there come a point where you just give up on an idea? Where, on some raw intuitive level, you just go: What on Earth was I thinking?
Humanity has accumulated some broad experience with what correct theories of the world look like. This is not what a correct theory looks like.
"Argument from incredulity," you say. Fine, you want it spelled out? The said Chalmersian theory postulates multiple unexplained complex miracles. This drives down its prior probability, by the conjunction rule of probability and Occam's Razor. It is therefore dominated by at least two theories which postulate fewer miracles, namely:
- Substance dualism:
- There is a stuff of consciousness which is not yet understood, an extraordinary super-physical stuff that visibly affects our world; and this stuff is what makes us talk about consciousness.
- Not-quite-faith-based reductionism:
- That-which-we-name "consciousness" happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious.
- Your intuition that no material substance can possibly add up to consciousness is incorrect. If you actually knew exactly why you talk about consciousness, this would give you new insights, of a form you can't now anticipate; and afterward you would realize that your arguments about normal physics having no room for consciousness were flawed.
Compare to:
- Epiphenomenal property dualism:
- Matter has additional consciousness-properties which are not yet understood. These properties are epiphenomenal with respect to ordinarily observable physics—they make no difference to the motion of particles.
- Separately, there exists a not-yet-understood reason within normal physics why philosophers talk about consciousness and invent theories of dual properties.
- Miraculously, when philosophers talk about consciousness, the bridging laws of our world are exactly right to make this talk about consciousness correct, even though it arises from a malfunction (drawing of logically unwarranted conclusions) in the causally closed cognitive system that types philosophy papers.
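The Occam penalty here is just multiplication of probabilities. A minimal sketch with made-up numbers (none of these values come from the post; only the ordering they produce matters):

```python
# Illustrative numbers only. By the conjunction rule,
# P(M1 and M2 and ...) <= min over i of P(Mi), so each additional
# independent unexplained miracle a theory postulates can only drive
# its prior probability down.
from math import prod

# One miracle: causally potent consciousness-stuff, not yet understood.
substance_dualism = prod([0.01])

# One gap: an ordinary-physics account of consciousness, not yet found.
reductionism = prod([0.05])

# Three independent posits: epiphenomenal properties, a separate
# physical cause of consciousness-talk, and the bridging-law
# coincidence that makes the talk come out correct.
epiphenomenalism = prod([0.01, 0.05, 0.001])

print(epiphenomenalism < substance_dualism)  # True
print(epiphenomenalism < reductionism)       # True
```

Whatever the individual numbers, a theory that needs all three posits at once starts with a lower prior than a theory that needs only one of them.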
I know I'm speaking from limited experience, here. But based on my limited experience, the Zombie Argument may be a candidate for the most deranged idea in all of philosophy.
There are times when, as a rationalist, you have to believe things that seem weird to you. Relativity seems weird, quantum mechanics seems weird, natural selection seems weird.
But these weirdnesses are pinned down by massive evidence. There's a difference between believing something weird because science has confirmed it overwhelmingly—
—versus believing a proposition that seems downright deranged, because of a great big complicated philosophical argument centered around unspecified miracles and giant blank spots not even claimed to be understood—
—in a case where even if you accept everything that has been told to you so far, afterward the phenomenon will still seem like a mystery and still have the same quality of wondrous impenetrability that it had at the start.
The correct thing for a rationalist to say at this point, if all of David Chalmers's arguments seem individually plausible—which they don't seem to me—is:
"Okay... I don't know how consciousness works... I admit that... and maybe I'm approaching the whole problem wrong, or asking the wrong questions... but this zombie business can't possibly be right. The arguments aren't nailed down enough to make me believe this—especially when accepting it won't make me feel any less confused. On a core gut level, this just doesn't look like the way reality could really really work."
Mind you, I am not saying this is a substitute for careful analytic refutation of Chalmers's thesis. System 1 is not a substitute for System 2, though it can help point the way. You still have to track down where the problems are specifically.
Chalmers wrote a big book, not all of which is available through free Google preview. I haven't duplicated the long chains of argument where Chalmers lays out the arguments against himself in calm detail. I've just tried to tack on a final refutation of Chalmers's last presented defense, which Chalmers has not yet countered to my knowledge. Hit the ball back into his court, as it were.
But, yes, on a core level, the sane thing to do when you see the conclusion of the zombie argument, is to say "That can't possibly be right" and start looking for a flaw.
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by anonymous7 · 2008-04-04T10:37:01.000Z · LW(p) · GW(p)
If a theory does not make you 'less confused', that doesn't mean the theory is wrong or bad. It could just be the way the world really functions: some things may be truly unknowable. Consciousness might be one of those things that will never be solved (yes, I know that a statement like this is dangerous, but this time there are real reasons to believe this). Of course, it is always a good thing to try to find flaws with the theory.
"Separately, there exists a not-yet-understood reason within normal physics why philosophers talk about consciousness and invent theories of dual properties." Minds also have the delusion of 'free will', so I don't see that argument as a major one.
"But based on my limited experience, the Zombie Argument may be a candidate for the most deranged idea in all of philosophy." It is irrational to reject an argument because it seems absurd. However, it is a good reason to study the argument to find flaws.
Your first argument is that zombie worlds might not actually be logically possible. Fine, it is a possibility, but if you accept that minds can work by computation and that zombie worlds are impossible, it would mean that certain algorithms cannot logically exist without some kind of consciousness popping up into existence.
↑ comment by Insert_Idionym_Here · 2011-12-16T18:29:59.861Z · LW(p) · GW(p)
You haven't said anything. Make a relevant point.
↑ comment by thomblake · 2011-12-16T19:29:50.361Z · LW(p) · GW(p)
You responded to an anonymous comment from nearly 4 years ago. I don't think they're going to take your advice.
↑ comment by Insert_Idionym_Here · 2011-12-16T19:42:18.088Z · LW(p) · GW(p)
Better late than never.
comment by Tim_Tyler · 2008-04-04T11:02:10.000Z · LW(p) · GW(p)
Re: I think the hypothesis you're testing is completely hopeless, and your experiments should definitely be funded.
How hopeless does a hypothesis have to be before the funding gets cut? ;-)
Re: Richard Chappell, David Chalmers, and the foes of reductionism.
Is this really your battle? It reminds me of Richard Dawkins getting sucked into debating with creationists. I can't help thinking that Richard is getting distracted from real science by the opinions of the masses - and that preventing scientists from doing sensible work and advancing scientific materialism is actually one of the things on his opponents' agenda.
↑ comment by learned · 2013-02-04T05:12:19.325Z · LW(p) · GW(p)
Hi, this is me, Max Raoy Gron, of the Adelaide city and its suburbs, I studied philosophy and have made experiments on how good my life would be morally, socially, emotionally and spiritually, even physiologically, and my philosophical method/s is the result of 3 years of learning philosophy from experience from what I have learned on the internet, therefore my painstaking study in solipsism tells me that a p-zombie looks exactly like a human, for example a behavioural zombie could be a human brain in a shark's body, even though it looks exactly like a human, and thinks it is itself a human, and is similar in behaviour to a human. However a soulless zombie is a ghost that seems to be a physically real person and no person can distinguish this from a real person, it's the result of removing Satan or some demon out of the person's body, so it physically feels like a human, and physically acts like a human, when it's a demon let loose. If everyone was conscious of this then it wouldn't be possible, only a single person can possibly be conscious of this, thus Intelligent Design, not the human psyche, created the natural things surrounding the seemingly supernatural beings, and I didn't create this mistake, nor did God, it's the result of my mother removing the soul from my body, and when it turned human it was therefore a soulless zombie and it was hassling my patriotic Australian neighbour at night.
↑ comment by wizzwizz4 · 2019-08-05T14:23:45.464Z · LW(p) · GW(p)
This is incoherent, reader.
↑ comment by Charles Paul (charles-paul) · 2021-06-25T18:35:03.100Z · LW(p) · GW(p)
Understatement of the year
comment by Mikko_Rauhala · 2008-04-04T11:17:54.000Z · LW(p) · GW(p)
All pretty much in prior agreement here (though no, I haven't stated "the listener caught in the act of listening" quite so eloquently either).
Personally I just go by the prior that zombies are simply not logically possible. Postulating that they are "seems" to lead to quite contrived and/or internally inconsistent scenarios, as you lay out.
comment by Paul_Crowley · 2008-04-04T11:34:03.000Z · LW(p) · GW(p)
Heterophenomenology!
Sorry, I thought it needed saying.
comment by Sebastian_Hagen2 · 2008-04-04T12:04:51.000Z · LW(p) · GW(p)
"Consciousness might be one of those things that will never be solved (yes, I know that a statement like this is dangerous, but this time there are real reasons to believe this)." What real reasons? I don't see any. I don't consider "because it seems really mysterious" a real reason; most of the things that seemed really mysterious to some people at some point in history have turned out to be quite solvable.
↑ comment by ship_shlap (Bluestorm_321) · 2022-05-23T01:47:16.492Z · LW(p) · GW(p)
-Conceivability vs Actual Logical Possibility
-Mysteriousness is our projection of it/how we view it, nothing is inherently mysterious - reductionism
comment by Caledonian2 · 2008-04-04T12:07:29.000Z · LW(p) · GW(p)
A lesser mortal would simply fail to see the implications, or refuse to face them, or rationalize a reason it wasn't so.
But that's precisely what he's done, not with the implications, but the implications of the implications. He's simply denied them.
But, yes, on a core level, the sane thing to do when you see the conclusion of the zombie argument, is to say "That can't possibly be right" and start looking for a flaw.
No, what we say is "That argument is wrong". We've already found the flaw. Our emotional response is irrelevant - the logical contradiction has already been found. P-zombies-as-a-hypothesis is trying to both possess a cake and eat it, have unbroken eggs and make omelets with them at the same time. It is postulating a causative agent that does not and cannot cause anything and so cannot be tested by looking at consequences.
People like Chalmers don't have that negative emotional response! They are convinced, in the sense of possessing conviction, that the hypothesis works, and they have that sensation because the logical short-circuit does indeed satisfy their desire to believe they have minds that are magical, not subject to logic and causality. If we go by what feels satisfying, and we have such a desire, we'll accept p-zombies, because logical consistency is less important than satisfying our deeper desires.
People who have a deeper desire for logical consistency will note that p-zombies are a stupid, self-contradictory idea, and reject it on those grounds. No uncovering further flaws is necessary, no intuitive sense that the conclusion "doesn't look right" and needs more investigation. Those people reject p-zombies immediately and for the obvious reasons alone, because that's all you need.
comment by Caledonian2 · 2008-04-04T12:17:03.000Z · LW(p) · GW(p)
To put it even more simply:
When we find a logical contradiction in an argument, we first check to make sure that we haven't made any errors in derivation. If not, then we conclude that there is a problem with the assumptions we started the argument with, and begin trying to generate ways to test those assumptions.
People like Chalmers are psychologically incapable of rejecting the idea that there is something 'special' about minds. They cannot doubt that assumption! And so they do not look for ways to test it, because bringing an assertion into question requires admitting the possibility that it could be invalid.
Creationists, for a variety of reasons, cannot emotionally accept that the world we see was not designed by a powerful and intelligent entity. They also do not have a desire to be right that is stronger than their desire to believe themselves right. Thus, they will reject lines of reasoning that lead to a designer-less conclusion no matter how valid they are, and accept lines that produce the conclusion they want no matter how invalid they are.
Chalmers is a "Creationist". He is a True Believer. He will never admit that he is wrong, because he cannot perceive that he is wrong, because his reason is wielded by the desire to reach specific conclusions. When his reason contradicts that desire, it is abandoned.
comment by anonymous7 · 2008-04-04T12:45:45.000Z · LW(p) · GW(p)
"We've already found the flaw."
What exactly is the logical flaw you've found? The zombie argument tells among other things that there can be no test that will tell if a person is really conscious or just a zombie. You might "know" that you're conscious yourself, but there can be no rational argument that proves this.
"What real reasons? I don't see any." If Zombie Worlds are possible, we might be living in it and therefore there can be no argument that proves otherwise. Your brain assumes that you have qualia, but I make no such assumption.
↑ comment by bigjeff5 · 2011-02-03T06:42:05.084Z · LW(p) · GW(p)
The zombie argument tells among other things that there can be no test that will tell if a person is really conscious or just a zombie.
I believe you just found the flaw.
If it looks like a duck, walks like a duck, quacks like a duck, it's probably a duck.
↑ comment by DYLAN ONEAL (dylan-oneal) · 2022-03-09T16:19:38.815Z · LW(p) · GW(p)
That is exactly what I was thinking. If it is indifferentiable, then how do we know that we aren't 'zombies' ? It's like the antimatter problem. We can't prove A or B, so we assume A, which we have always preferred.
↑ comment by DYLAN ONEAL (dylan-oneal) · 2022-03-09T16:20:35.336Z · LW(p) · GW(p)
is indifferentiable a word?
comment by Sebastian_Hagen2 · 2008-04-04T14:44:57.000Z · LW(p) · GW(p)
"Your brain assumes that you have qualia." Actually, currently my brain isn't particularly interested in the concepts some people call "qualia"; it certainly doesn't assume it has them. If you got the idea that it did because of discussions it participated in in the past, please update your cache: This doesn't hold for my present-brain.
If qualia-concepts are shown at some point in the future to be useful in understanding the real world, i.e. to specify a compact border around a high-density region of thingspace, my brain will likely become interested in them when that happens. However, this will necessarily mean that they're shown to refer to things that are actually measurable. Possibly clusters of atoms, but many kinds of exotic physical entities postulated by substance dualists would also work.
As Eliezer Yudkowsky mentioned, epiphenomenalism includes parts in a prediction program which are known to be dead code. That dead code won't ever interest my brain, except possibly to figure out where exactly the design fault lies in the human brain that causes some people to become epiphenomenalists.
comment by Will_Pearson · 2008-04-04T15:27:35.000Z · LW(p) · GW(p)
People might find this site interesting:
http://www.macrovu.com/CCTGeneralInfo.html
Whenever I come across this subject, I tend to leave it with a feeling of, "not enough information". It is a good thing AI designers only need to worry about creating physical properties.
comment by Credulous · 2008-04-04T15:36:17.000Z · LW(p) · GW(p)
Hmm. So, on the Chalmers view, when the AI concludes that it has no way of knowing whether it is epiphenomenally conscious and abandons the belief that it is mysteriously so, would the consciousness 'evaporate,' or are there qualia of not being aware of any qualia? It seems that Chalmers might say that in non-zombie worlds the epiphenomenal-AI would still be conscious of various things (like the 'redness' of red) but just not conscious of its consciousness. [Given our 'bridging laws' the epiphenomenal self can only think "cogito ergo sum" when the physical self does.]
comment by Richard4 · 2008-04-04T15:37:54.000Z · LW(p) · GW(p)
Eliezer - thanks for this post, it's certainly an improvement on some of the previous ones. A quick bibliographical note: Chalmers' website offers his latest papers, and so is a much better source than Google Books. A terminological note (to avoid unnecessary confusion): what you call 'conceivable', others of us would merely call "apparently conceivable". That is, your view would be characterized as a form of Type-A materialism, the view that zombies are not even (genuinely) conceivable, let alone metaphysically possible. On to the substantive points:
(1) You haven't, so far as I can tell, identified any logical contradiction in the description of the zombie world. You've just pointed out that it's kind of strange. But there are many bizarre possible worlds out there. That's no reason to posit an implicit contradiction. So it's still completely mysterious to me what this alleged contradiction is supposed to be.
(2) It's misleading to say it's "miraculous" (on the property dualist view) that our qualia line up so neatly with the physical world. There's a natural law which guarantees this, after all. So it's no more miraculous than any other logically contingent nomic necessity (e.g. the constants in our physical laws). That is, it's "miraculous" in the same sense that it's "miraculous" that our universe is fit to support life. Atheists and other opponents of fine-tuning arguments are not usually so troubled by this kind of alleged 'miracle'. Just because things logically could have been different, doesn't mean that they easily could have been different. Natural laws are pretty safe and dependable things. They are primitive facts, not explained by anything else, but that doesn't make them chancy.
(3) I'd also dispute the following characterization: "talk about consciousness... arises from a malfunction (drawing of logically unwarranted conclusions) in the causally closed cognitive system that types philosophy papers."
No, typing the letters 'c-o-n-s-c-i-o-u-s-n-e-s-s' arises from a causally closed cognitive system. Whether these letters actually mean anything (and so constitute a contentful conclusion that may or may not follow from other contentful premises) arguably depends on whether the agent is conscious. (Utterances express beliefs, and beliefs are partly constituted by the phenomenal properties instantiated by their neural underpinnings.) That is, Zombie (or 'Outer') Chalmers doesn't actually conclude anything, because his utterances are meaningless. A fortiori, he doesn't conclude anything unwarrantedly. He's just making noises; these are no more susceptible to epistemic assessment than the chirps of a bird. (You can predict the zombie's behaviour by adopting the Dennettian pretense of the 'intentional stance', i.e. interpreting the zombie as if it really had beliefs and desires. But that's mere pretense.)
(4) I'm all for 'reflective coherence' (at least if that means what I think it means). I don't see how it counts against this view, unless you illicitly assume a causal theory of knowledge (which I obviously don't).
P.S. Note that while I'm a fan of epiphenomenalism myself, Chalmers doesn't actually commit to the view. See his response to Perry for more detail. (It also addresses many of the other points you raise in this post.)
↑ comment by Lachouette · 2013-08-11T09:27:20.179Z · LW(p) · GW(p)
Replying to (1):
That misses the point. No one can possibly show any logical contradiction in the hypothesis that zombies exist, because those who postulate it have not made their claim falsifiable. As in, there is no observable difference between a world with zombies versus one without them. Similarly, I could claim my room is filled with scientifically undetectable, invisible fairies and you would not be able to logically refute this claim. I don't believe your inability to disprove it would make it any less laughable, however. The fact that the hypothesis is unfalsifiable says something about Chalmers, not about Eliezer.
To be honest, I wonder why a philosopher would go to such lengths to argue for something that has no impact on the world whatsoever.
↑ comment by wafflepudding · 2016-03-04T04:16:36.929Z · LW(p) · GW(p)
On (3), if Zombie Chalmers can't be correct or incorrect about consciousness -- as in, he's just making noise when he says "consciousness" -- does the same hold for his beliefs on anything else? Like, Zombie Chalmers also (probably) says "the sun will rise tomorrow," but would you also question whether these letters actually mean anything? In both the cases of the sun's rising and epiphenomenalism's truth, Zombie Chalmers is commenting on an actual way that reality can be. Is there a difference? Or, does Zombie Chalmers have no beliefs about anything? I'd think that a zombie could be thought to have beliefs as far as some advanced AI could.
comment by Cyan2 · 2008-04-04T16:06:58.000Z · LW(p) · GW(p)
That is, Zombie (or 'Outer') Chalmers doesn't actually conclude anything, because his utterances are meaningless.
You have a curious definition of "conclude" and "meaningless"... or possibly "actually conclude" and "actually meaningless". If Outer/Zombie Chalmers convinces me, Conscious Cyan (haha), that property dualism is correct (something no chirping bird could manage), whence came the meaning?
comment by anonymous7 · 2008-04-04T16:14:14.000Z · LW(p) · GW(p)
"However, this will necessarily mean that they're shown to refer to things that are actually measurable."
Things that cannot be measured can still be very important, especially in regard to ethics. One may claim for example that it is ok to torture philosophical zombies, since after all they aren't "really" experiencing any pain. If it could be shown that I'm the only conscious person in this world and everybody else are p-zombies, then I could morally kill and torture people for my own pleasure.
"Actually, currently my brain isn't particularly interested in the concepts some people call "qualia"; it certainly doesn't assume it has them. If you got the idea that it did because of discussions it participated in in the past, please update your cache: This doesn't hold for my present-brain."
Does your brain assume/think it creates sensory experiences (or what people often call consciousness)?
comment by mtraven · 2008-04-04T16:26:24.000Z · LW(p) · GW(p)
Have to agree about Chalmers's ideas about zombies being the most deranged around, and I guess that is a polite way of putting it. They make no sense whatsoever. However, his view is not the only alternative to reductionism, and you would do yourself and your project a favor if you engaged with some of the more plausible forms, such as emergentism.
Consider "squareness". It is a property of many physical objects or systems, but it doesn't depend on what those objects are made of. It relies on the physical configuration of the object's components, but not on the physical properties of the components. If you had a quantum-level simulation of the universe, it wouldn't tell you when squares appeared (unless you also had, within the simulation or outside of it, something with about the same computational power as the human visual system). It is a non-physical concept, but implemented, incarnated, and intimately tied to the physical. If you removed one of the sticks or pencils or iron bars making up the square, it wouldn't be a square any more. But it wouldn't make sense to talk about a zombie-square, which would be a physical object in the same physical configuration that somehow is not a square.
↑ comment by TAG · 2022-05-09T12:53:13.056Z · LW(p) · GW(p)
Squareness is clearly reducible -- given fine-grained information about the locations of the atoms in an object, it is no problem to figure out that it is square. If consciousness were a common-or-garden higher-level property, it would be reducible, and there would be no hard problem.
comment by Z._M._Davis · 2008-04-04T16:43:29.000Z · LW(p) · GW(p)
mtraven, I don't think that really counts as an alternative to reductionism. We just say "Squareness is in the map, not the ..." &c.
comment by michael_vassar3 · 2008-04-04T17:08:53.000Z · LW(p) · GW(p)
I think that Leibniz's monadology holds that this world actually contains a zombie master, which we call god, who does his manipulation through careful set-up of the initial conditions. This view doesn't seem to be very compelling to most contemporary philosophers. I'm also of the impression that it wasn't considered plausible in his time and that many people doubt that he really believed it.
With respect to "argument from career impact", it seems highly plausible to me that within many academic circles one best advances a career precisely by making outlandish claims, the more outlandish the better, and then by defending them as well as one can.
comment by PK · 2008-04-04T17:19:16.000Z · LW(p) · GW(p)
Humans have a metaphysical consciousness which is outside the mere physical world. I know this is true because this means I'm special and I feel special so it must be true. If you say human consciousness is not something metaphysical and special then you are saying humans are no more special than animals or mere matter. You are saying that if you arrange mere matter in a certain way it will be just as special as me. Well, for your information, I'm really really special: therefore I'm right and you are wrong. In fact, I'm so special that there must be some way in which the universe says I'm special. Also, anyone attempting to take my specialness feelings about myself away from me is evil.
comment by poke · 2008-04-04T17:56:55.000Z · LW(p) · GW(p)
It seems to me that Chalmers does not just believe in epiphenomenal consciousness. Chalmers posits a non-physical concept of "direct access" and a non-physical notion of "having an experience." I can't see how one can give an account of "direct access" and "having an experience" as dual properties. But if "direct access" and "having an experience" are given physical accounts then the whole argument for epiphenomenalism falls apart; a physical system cannot gain "direct access" because physical systems are always mediated (and it's this "direct access" that premises his entire argument; if he could be wrong about being conscious the argument would not go through) and "having an experience" couldn't be a counterpart to having a causal link between your experience and your beliefs (as Chalmers uses it) if it was identical with such a causal link.
comment by Richard4 · 2008-04-04T18:10:34.000Z · LW(p) · GW(p)
Cyan - think of the million monkeys at typewriters eventually outputting a replica of Chalmers' book. The monkeys obviously haven't given an argument. There's just an item there that you are capable of projecting a meaningful interpretation onto. But the meaning obviously comes from you, not the monkeys.
Credulous - I'm not entirely sure what you're asking. I think an agent could still have qualia without believing that this is so on a theoretical level. (Dennett springs to mind!) But I guess if you tinkered with the internal computational processes enough, you might eventually succeed in ridding the agent of [the neural underpinnings of] phenomenal representations (e.g. of pain) altogether. It would then behave very differently.
PK - Yep, you're so very special that you're the only discussant in this conversation who's made entirely of straw!
comment by Sebastian_Hagen2 · 2008-04-04T18:35:56.000Z · LW(p) · GW(p)
Things that cannot be measured can still be very important, especially in regard to ethics. One may claim, for example, that it is OK to torture philosophical zombies, since after all they aren't "really" experiencing any pain. If it could be shown that I'm the only conscious person in this world and everyone else is a p-zombie, then I could morally kill and torture people for my own pleasure. For there to be a possibility that this "could be shown", even in principle, there would have to be some kind of measurable difference between a p-zombie and a "conscious" entity. In worlds where it is impossible to measure a difference in principle, it shouldn't have any impact on what's the correct action to take, for any sane utility function. My ethics are by necessity limited to valuing things whose presence/absence I have some way to measure, at least in principle. If they weren't, I'd have to worry about epiphenomenal pink unicorns all the time.
Does your brain assume/think it creates sensory experiences (or what people often call consciousness)? It thinks that it receives data from its environment and processes it, maintaining a (somewhat crude) model of that environment, to create output that manipulates the environment in a predictable manner. It doesn't think that there's any non-measurable consequences of that data processing (once again: that'd be dead code in the model). If that doesn't answer your query, please state it more clearly; specifically rationalist-taboo the word "experience".
comment by bstark2 · 2008-04-04T18:43:26.000Z · LW(p) · GW(p)
Richard, I'm a little confused by your use of "natural law". Natural laws as I know them have, you know, consequences.
Replies from: bigjeff5↑ comment by bigjeff5 · 2011-02-03T06:52:25.700Z · LW(p) · GW(p)
He has redefined the term (I don't know if it was actually him who did it; it feels a bit charlatan to me), and uses "physical laws" for the old-school version and "natural laws" for his version, which has absolutely nothing to do with the physical world.
comment by Unknown · 2008-04-04T18:52:21.000Z · LW(p) · GW(p)
Eliezer's argument could have been made in a much simpler way; there is no difference between pointing to a human being and a zombie each saying, "I am conscious," and pointing to a human being and a zombie each saying, "I see the color red," or "I plan to post this comment on the blog to see how people respond."
In other words, the causally closed process that results in the words "I see the color red," is not based in any way on the color red, just as it is not based on consciousness. And the causally closed process that results in posting a comment on a blog has nothing whatever to do with people reacting, since the laws that govern quarks do not have purposes such as seeing how people respond.
Unfortunately for Eliezer, seeing these parallel cases should also show why he is not giving anything remotely close to a reductio, nor showing that the position is improbable (after someone thinks about these cases for a bit, this should become clear). And he has given no direct response whatever to Chalmers' arguments, except by saying that the position doesn't make sense to him.
comment by Nick_Tarleton · 2008-04-04T19:12:52.000Z · LW(p) · GW(p)
Sebastian, I'll try. Is there some property E such that: (1) an entity without E can have identical outward behavior to an entity with E (but possibly different physical structure); and (2) you assign intrinsic value to at least some entities with E, but none without it? If so, do you have property E?
comment by Unknown · 2008-04-04T19:13:35.000Z · LW(p) · GW(p)
Also, one other thing: if the possibility of zombies is accepted by a majority, or even a substantial minority, of philosophers who study consciousness, it seems highly unlikely that this position is as insane as Eliezer suggests. So on a core level, the sane thing to do when you see the conclusion of Eliezer's argument is to say "That can't possibly be right" and start looking for a flaw.
Replies from: bigjeff5↑ comment by bigjeff5 · 2011-02-03T06:59:11.243Z · LW(p) · GW(p)
Most people believe airplanes fly because air going over the top of the wing must "catch up" to the air going underneath the wing. This belief doesn't seem to prevent them from being wrong, why would it prevent the majority of philosophers from being wrong on the subject of consciousness?
To put it another way, a long long time ago the vast majority of scientists believed that fire was caused by a physical material called phlogiston. Did this consensus make them right? Did the universe somehow change to use a chemical reaction with oxygen to produce fire later, when previously it had always released phlogiston to produce fire?
I hope you can see how the majority of philosophers can be wrong.
comment by michael_vassar3 · 2008-04-04T19:17:30.000Z · LW(p) · GW(p)
Bstark: seconded
Sebastian Hagen: You don't need a measurable difference between a p-zombie and a "conscious" entity. At least in principle you can also start from priors, not update except regarding your own consciousness, and estimate the probabilities, given that you are conscious, that you inhabit a world where a given entity is a zombie. In Chalmers' framework you ask "given that there exist bridging laws between this experience here now and this configuration of atoms, what is the probability that there are more general bridging laws relating matter to consciousness, and if there are such laws, what is the probability distribution over their content?" Sane utility functions pay attention to base rates, not just evidence, so even if it's impossible to measure a difference in principle one can still act according to a probability distribution over differences. Of course, it's a physical system that is estimating these base probabilities, so it's not at all clear that it should calculate that it is more likely that "I am conscious but you are not" than "you are conscious but I am not", or even than "tooth decay is conscious but humans are not". All it has to look at is the space of bridging laws. Chalmers comes close to intellectual honesty by embracing panpsychism on similar grounds, but he fails to zero out morality by recognizing that "outer Chalmers", while right by coincidence to take into account the interests of "inner Chalmers" in this one case, does violence to the "inner Chalmers" in the infinite universes where "bridging laws" connect a flourishing life for "outer Chalmers" with unspeakable agony for "inner Chalmers". I personally think that this is a confused way of thinking about what sort of thing natural laws are in the first place, but I can play the game if that's the cost of entry to being taken seriously by some genuinely bright if deeply deluded people.
Jotedem: I'm astounded. A decent theological view that many top priests agree with is once again getting equated with property dualism? I mean, seriously. I would laugh if it weren't so serious and depressing. Pity the poor souls of those innocent heathen scientists who damn themselves by doubting creationism, fortified in their hubristic confidence that only scientists know anything by the ease with which they demolish the arguments of philosophers. Still, they should know better. It's a huge leap in arrogance to step from rejecting a tiny and little-respected academic discipline like Philosophy or Women's Studies to rejecting Theology, the "King of the Sciences", the propagation and development of which was the very reason for the creation of academia in the first place.
comment by conchis · 2008-04-04T19:34:53.000Z · LW(p) · GW(p)
"For there to be a possibility that this "could be shown", even in principle, there would have to be some kind of measurable difference between a p-zombie and a "conscious" entity. In worlds where it is impossible to measure a difference in principle, it shouldn't have any impact on what's the correct action to take, for any sane utility function."
Thank you. It doesn't seem to me that zombies are impossible. But I'm rather confused as to why anyone should care at a practical level, even if whatever "consciousness" is supposed to mean in this discussion is supposed to be morally salient.
comment by michael_vassar3 · 2008-04-04T19:38:28.000Z · LW(p) · GW(p)
If the above comment doesn't clarify it, I think that our basic problem here is still that we don't know how to properly use Aumann Agreement without falling into Majoritarianism. No one would, after thinking through the arguments, take zombies seriously (or, it seems to me, most recent claims of eminent philosophers) without an argument from authority behind them; but given the argument from authority, it's natural to try to strengthen the argument with "he could have meant" claims or simply accept it as "profound". Because some people who we call "great philosophers" or "great scientists" actually made fairly subtle, surprising and sound arguments in the past, we may be too quick to transfer the credit that they earned to those who are currently called the same, or we may not be; but majoritarianism and Pascal's Wager seem to be the two hardest stumbling blocks that our attempts at Overcoming Bias have turned up, and I think that we may need to get back to them.
comment by anonymous7 · 2008-04-04T20:06:01.000Z · LW(p) · GW(p)
"In worlds where it is impossible to measure a difference in principle, it shouldn't have any impact on what's the correct action to take, for any sane utility function."
Wrong, since it may be possible to estimate the probability of being in a p-zombie world, or more generally the probability that such a difference exists.
comment by Latanius2 · 2008-04-04T20:54:15.000Z · LW(p) · GW(p)
We have that "special" feeling: we are distinct beings from all the others, including zombie twins. I think we tend to use only one word for two different concepts, which causes a lot of confusion... Namely: 1) the ability of intelligent physical systems to reflect on themselves, imagine what we think, or whatever makes us think that whoever we are talking to is "conscious"; 2) that special feeling that somebody is listening in there. AGI research tries to solve the first problem, Chalmers the second one.
So let's try to create zombies then! I don't see why this seems logically so difficult, we only need some nanotechnology... So consider the following thought experiment.
You enter room A. Some equipment scans your atoms, and after scanning each, replaces it with one of the same element, same position. Meanwhile, the original atoms are assembled in room B, resulting in a zombie twin of you. You were conscious all along, and noticed nothing except some buzz coming from the walls... So you wouldn't be worried about the experiment even if your zombie is killed afterward, or sent to the stone mines of Mars for a lifelong sentence, etc.
You enter room A. Now, the copy process goes cell by cell. Scanning every cell, making an atom-by-atom perfect copy of it, then replacing, original goes into room B, assembled. You still notice nothing.
You enter room A. Your whole brain is grabbed, scanned, and then placed into room B. The body with the copied brain and other organs walks happily out of room A, while you go to the stone mines. A bit more depressing than the original version.
So, if we copy only atoms or cells (which is regularly done in our bodies), we stay in room A. If we copy whole organs or bodies, we go to room B. It wouldn't be intuitive to postulate that consciousness can be divided; it's either in room A or room B. But the quantity of atoms to be moved in one step is almost continuous... it would be weird to assume that there is some magic number of them which allows consciousness to transfer.
The conclusion: to differentiate between "conscious beings" and "zombies" leads to contradiction even from a subjective viewpoint. (Where would that mysterious "inner Chalmers" be in the above cases?)
I think we are used to our consistent self-image too much, and can't imagine how anything else would feel. An example: using brain-computer interfaces, we construct a camera which watches our surroundings, even as we sleep. As we wake up, we could "remember" what happened while we slept, because of the connection the camera hardware made with our memories. (The right images just "popped into our minds".) But how would it feel? Were we conscious at night? If not, why do we remember certain things? If we were, why did we just watch as those thieves got away with all our stuff?
All we need to understand that is some experience. If it were given, we wouldn't ask questions like "why am I so special", I think.
Replies from: AspiringRationalist↑ comment by NoSignalNoNoise (AspiringRationalist) · 2013-05-14T02:30:35.384Z · LW(p) · GW(p)
Why do you assume that the replica would be a zombie?
comment by mtraven · 2008-04-04T20:57:08.000Z · LW(p) · GW(p)
Z M Davis - my point is that there are versions of non-reductionism or weak reductionism that do not depend on or imply supernatural forces. That's the sort I'm interested in, anyway. The zombie argument is a paradigm of how not to explore the conceptual space between strict reductionism and outright religious dualism.
I'll say again that the zombie argument is inane...and the fact that people who expound it have fame and tenure indicates that the quarks are cruel, arbitrary, and capricious.
comment by Brent_S · 2008-04-04T21:45:51.000Z · LW(p) · GW(p)
While I've read the SEP article and Eliezer's discussion, I don't understand much more than the basics of the theory. My biggest question is why Occam's razor cannot be used to eliminate the zombie theory.
The core of the zombie argument states that it can never be proved, even with perfect information. This is a perfect, stereotypical, textbook, etc. example of what Occam's razor is used against. From Wikipedia: "...eliminating those that make no difference in the observable predictions of the explanatory hypothesis or theory."
Occam's razor states that the central thesis of zombie theory should be eliminated, thus destroying the rest of it.
What is even more confusing to me is the fact that Occam's razor started as a key tenet of philosophy not science, yet it doesn't seem to apply here.
Replies from: bigjeff5↑ comment by bigjeff5 · 2011-02-03T07:06:28.292Z · LW(p) · GW(p)
That was the point Eliezer was making at the end of the post.
Occam's Razor makes epiphenomenalism the least likely of all possibilities by a huge margin. It can very safely be ignored.
And you know, if we figure out how everything works, and there is still something actually missing, well then epiphenomenalism will be vindicated. It still doesn't mean anything real, by its own definition, though, so what's the friggin' point of it?
comment by JulianMorrison · 2008-04-04T21:47:05.000Z · LW(p) · GW(p)
I'd come at it from a different direction. Reality is defined by interaction. A real something that doesn't interact at all, ever, is a straightforward contradiction in terms.
comment by Matthew_J. · 2008-04-04T22:30:05.000Z · LW(p) · GW(p)
From reading your article, it seems that the flaw of epiphenomenalism goes beyond what you have stated, Eliezer. The epiphenomenalist position is that, say, a zombie sensation ZS can cause a zombie belief, ZB, while ZS causes MS, the mental sensation, and ZB causes MB, the mental belief. There is supposed to be no relation between MS and MB. Surely then, this means that all beliefs, language and logic in the mental universe, or whatever it is, are both unjustified and unjustifiable. The connection normally assumed in justifying things is necessarily absent. Beliefs must not be caused by argument, or experience, experiment, or anything else. If mental beliefs even resemble true beliefs which exist in a world where beliefs are actually related to some sort of justification, it cannot be known. The epiphenomenalist position thus seems to resemble the skeptical hypothesis of the evil demon espoused by Descartes. All truth only seems to be true by virtue of the composition of experience. Particular phenomena are merely arranged as such to provide only the illusion of a coherent world, where in fact the illusion may be all there is. Just as Descartes' demon would deceive us such that we think we know 2+2=5, we, the epiphenomenal consciousnesses, could be deceived by the nature of our feeling into believing it. There can be no way to know whether 2+2 is or is not 5. There are only sensations arranged such that it seems that way. Thus, an epiphenomenalist must admit that all meaning and truth apparent to us is probably meaningless and false; it cannot be known to be otherwise. So, by simply attempting to use language or logic to establish epiphenomenalism, there is a contradiction. Epiphenomenalism must either be true, and so all we 'know' is probably nonsensical, or epiphenomenalism is false and the three paragraphs I just typed are a meaningful refutation.
comment by Stephen · 2008-04-04T23:41:00.000Z · LW(p) · GW(p)
While I don't necessarily endorse epiphenomenalism, I think there may exist an argument in favor of it that has not yet been discussed in this thread. Namely, if we don't understand consciousness and consciousness affects behavior then we should not be able to predict behavior. So it seems like we're forced to choose between:
a) consciousness has no effect on behavior (epiphenomenalism)
or
b) a completely detailed simulation of a person based on currently known physics would fail to behave the same as the actual person
Both seem at least somewhat surprising. (b would seem impossible rather than merely surprising to a person who thinks physics is completely known and makes deterministic predictions. In the nineteenth century, most people believed the latter, and some the former. Perhaps this explains how epiphenomenalism originally arose as a popular belief.)
comment by LazyDave · 2008-04-04T23:53:37.000Z · LW(p) · GW(p)
I haven't read Chalmers' book, so I am just going by what I read here, but at the beginning of the post you promise to show the zombie world as logically impossible, but never deliver; you show that it is improbable enough to perhaps be considered practically impossible, but since we are just dealing with a "thought experiment," that is irrelevant. For example, I do not think that everyone around me is a zombie. In fact, I'd bet all the money I have that they aren't. But I still don't KNOW they aren't, the way I KNOW that I am not.
On another note, I'm surprised at some of the ad hominem-type statements on this thread (people that don't agree with me are like creationists, people that don't agree with me just don't want to see the truth). On most blogs, it's expected, but it is interesting to see it here.
comment by Caledonian2 · 2008-04-05T00:07:13.000Z · LW(p) · GW(p)
1) You haven't, so far as I can tell, identified any logical contradiction in the description of the zombie world.
You seem to be missing the point, Richard. Eliezer isn't concerned with the "zombie world" so much as the very idea of "consciousness" that the zombie thought experiment presupposes.
Let's make this really, really simple:
Various entities have asserted the existence of a phenomenon that cannot be examined by any physical test and that has no effect on any physical process; they claim to have direct experience of this phenomenon.
However, the entities making the proclamation are physical, as are the means by which they make the proclamation. If the asserted phenomenon really couldn't affect physical systems, they could have no experience of it at all.
By their own claims, they can possess no knowledge about the thing they're making the claims about. Whatever experiences they may be experiencing, they are necessarily wrong about the specific assertions they're making about them.
The contradiction is that Chalmers is a physical being claiming to possess knowledge that he claims cannot be derived by physical beings.
Replies from: bigjeff5↑ comment by bigjeff5 · 2011-02-03T07:16:59.497Z · LW(p) · GW(p)
To put it even more simply:
In Eliezer's zombie world, the zombies have consciousness (and therefore are not zombies), because they are in no physical way different from us.
The assumed priors make zombies impossible for Eliezer, and give Chalmers and Richard no way of actually knowing if the zombies in Zombie World are really zombies, other than that Chalmers and Richard both say that they are zombies.
My question to Richard (should he ever come back, I'm two years late after all) is this: how do you know you aren't a zombie?
comment by Meta_and_Meta · 2008-04-05T00:18:18.000Z · LW(p) · GW(p)
Imagine a minimally complete physical duplicate of our cosmos. (So, e.g., the earth travels round the sun consistent with Kepler's laws, etc.) But: There's no gravity.
comment by Rolf_Nelson2 · 2008-04-05T00:27:29.000Z · LW(p) · GW(p)
* That-which-we-name "consciousness" happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious.
not yet understood? Is your position that there's a mathematical or physical discovery waiting out there, that will cause you, me, Chalmers, and everyone else to slap our heads and say, "of course, that's what the answer is! We should have realized it all along!"
Question for all: How do you apply Occam's Razor to cases where there are two competing hypotheses:
1. A and B are independently true.
2. A is true, and implies B, but in some mysterious way we haven't yet determined. (For example, "heat is caused by molecular motion" or "quarks are caused by gravitation", to pick two inferences at opposite ends of the plausibility spectrum.)
I don't know what the best answer is. Maybe the practical answer is a variant of Solomonoff induction: somehow compare "P(A)·P(B)" with "P(A)·P(B follows logically from A, and we were too dumb to realize that)", where the P's are some type of Solomonoff-ish a-priori "2^-(shortest program length)" probabilities. But the best answer certainly isn't, "A is simpler than A + B, so we know hypothesis 2 is correct, without even having to glance at the likelihood that B follows from A." Otherwise, you would have to conclude that, logically, quarks are caused by gravitation, in some currently-mysterious way that future mathematicians will be certain to discover.
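The comparison above can be made concrete with a toy calculation. This is only a sketch: the bit-lengths below are invented for illustration, and real Solomonoff induction is uncomputable.

```python
# Toy comparison of the two hypotheses under a 2^-(program length) prior.
# All bit-lengths are made-up numbers for illustration only.

def prior(length_bits):
    # Solomonoff-style prior: probability halves with each extra bit.
    return 2.0 ** -length_bits

len_A = 100       # bits needed to specify hypothesis A
len_B = 80        # bits needed to specify B from scratch
len_bridge = 5    # bits needed to specify "B follows from A"

# Hypothesis 1: A and B are independently true.
p_independent = prior(len_A) * prior(len_B)

# Hypothesis 2: A is true and implies B via some short, undiscovered bridge.
p_implied = prior(len_A) * prior(len_bridge)

# With a short bridge, hypothesis 2 dominates; as len_bridge approaches
# len_B, the advantage vanishes.
assert p_implied > p_independent
```

On this toy model, "B follows from A" wins exactly when the bridge is cheaper to specify than B itself, which matches the point above: the razor cannot settle the question without glancing at how plausible the bridge is.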
For the record, my belief is that many of the debaters have beliefs that are isomorphic to their opponents' beliefs. When I hear things like, "You said this is a physical law without material consequences, but I define physical laws as things that have material consequences, so you're wrong QED!" then that's a sign that we're in "does a tree falling in the forest make a noise" territory. Does a consciousness mapping rule "actually exist"? Does the real world "actually exist"? Does pi "actually exist"? Why should I care?
In the end, I care about actions and outcomes, and the algorithms that produce those actions. I don't care whether you label consciousness as "part of reality" (because it's something you observe), or "part of your utility function" (because it's not derivable by an intelligence-in-general), or "part of this complete nutritious breakfast" (because, technically, anything that's not poisonous can be combined with separate unrelated nutritious items to form a complete nutritious breakfast.)
comment by Caledonian2 · 2008-04-05T00:36:03.000Z · LW(p) · GW(p)
"You said this is a physical law without material consequences, but I define physical laws as things that have material consequences!"
If the law has no material consequences, it doesn't matter whether we assert it to be true or false. The two states are identical in every way. Asserting that the law is true, or false, is therefore incorrect. It is neither; it is incoherent and thus can not be true or false.
This is not a matter of personal definition.
comment by Z._M._Davis · 2008-04-05T01:04:14.000Z · LW(p) · GW(p)
"not yet understood? Is your position that there's a mathematical or physical discovery waiting out there, that will cause you, me, Chalmers, and everyone else to slap our heads and say, 'of course, that's what the answer is! We should have realized it all along!'"
I would actually suppose something like this. I found this post to be a compelling knockdown of property dualism, and substance dualism is untenable until we (say) observe the pineal gland disobeying the laws of physics because it's being pushed on by the soul. Practically the only alternative left is that there's something about physical brains that explains consciousness that we're just too stupid or ignorant to understand.
Unless there's some other alternative of which I am too stupid or ignorant to understand.
comment by Frank_Hirsch · 2008-04-05T01:10:45.000Z · LW(p) · GW(p)
I must say I found this rather convincing (but I might just be confirmation biased). Also, I have a question on the topic: The zombiists assume that the universe U of existing things is split into two exclusive parts, physical things P and epiphenomenal things E. The physical things P probably develop something like P(t+1)=f(P(t),noise), as we have defined that E does not influence P. But what does E develop like? Is it E(t+1)=f(P(t)[,noise]), or is it E(t+1)=f(P(t),E(t)[,noise])? I have somehow always assumed the first, but I do not remember having read it spelled out so unmistakably.
comment by Nick_Tarleton · 2008-04-05T02:14:09.000Z · LW(p) · GW(p)
Stephen:
So it seems like we're forced to choose between: a) consciousness has no effect on behavior (epiphenomenalism) or b) a completely detailed simulation of a person based on currently known physics would fail to behave the same as the actual person
c) a completely detailed simulation of a person would behave like the actual person, and have "consciousness", which actually refers to some complex physical property.
comment by Cyan2 · 2008-04-05T02:18:45.000Z · LW(p) · GW(p)
Cyan - think of the million monkeys at typewriters eventually outputting a replica of Chalmers' book. The monkeys obviously haven't given an argument. There's just an item there that you are capable of projecting a meaningful interpretation onto. But the meaning obviously comes from you, not the monkeys.
Suppose I expend the energy to search through the monkeys' output, discarding random strings and the large set of near-copies of Chalmers' book that degenerate into gibberish or have Shakespeare spliced into the middle. I decide that the string that is Chalmers' book has meaning, and its arguments convince. In this situation, the meaning obviously comes from me. Zombie/Outer Chalmers' book is not the outcome of such a process, so I'm not convinced that the meaning can sensibly be located where you put it.
And "expend the energy" is not just a figure of speech here -- my efforts will increase entropy in a fashion related to the amount of information I incorporate into my brain structure.
comment by Stephen · 2008-04-05T02:48:27.000Z · LW(p) · GW(p)
Nick,
Thanks for your comment. If I understand correctly, by c) you are suggesting that consciousness is something like temperature or pressure, a property of physical systems, but one which you don't need to know about if you are doing a completely detailed simulation. I was lumping this in with epiphenomenalism, since in that case, consciousness does not affect physical systems, it is a descriptor of them. However, I guess the key point is that one can subscribe to epiphenomenalism in this sense without concluding that zombies are logically possible. Because we understand temperature, it is obvious to us that imagining our world exactly as it is except without temperature is nonsensical. To make an even starker example, it would be like saying there are two identical universes that contain five things, but in one of the universes they don't have the property of fiveness. Maybe if we understood in what way consciousness is a descriptor of physical systems, we would see that our world exactly as it is except without consciousness is a non-sequitur in the same way.
comment by Frank_Hirsch · 2008-04-05T13:40:00.000Z · LW(p) · GW(p)
Apart from Occam's Razor (multiplying entities beyond necessity) and Bayesianism (arguably low prior and no observation possible), how about the identity of indiscernibles:
Anything inconsequential is indiscernible from anything that does not exist at all; therefore inconsequential equals nonexistent.
Admittedly, zombiism is not really falsifiable... but that's only yet another reason to be sceptical about it! There are gazillions of that kind of theory floating around in the observational vacuum. You can pick any one of those, if you want to indulge your need to believe that kind of stuff, and watch those silly rationalists try to disprove you. A great pastime for boring parties!
Also, the concept of identity is twisted beyond recognition by zombiism:
The physical me causes the existence of something outside of the physical me, which I define to be the single most important part of me. Huh?
Btw, anyone to answer my question further above?
I asked: Can epiphenomenal things cause nothing at all, or can they (as physical things can) cause other epiphenomenal things?
Maybe Richard, as our expert zombiist, might want to relieve me of my ignorance?
comment by Nicholas_Hundley · 2008-04-05T17:55:00.000Z · LW(p) · GW(p)
I happened to write an article about this the other day (click my name to read it), but from a different vantage point. I am enjoying your article (still reading it!)
comment by mitchell_porter2 · 2008-04-06T02:49:00.000Z · LW(p) · GW(p)
I agree with the spirit of what Eliezer has written here: The possibility of p-zombies would suggest epiphenomenalism. If you believe in redness, you should expect it to be causally efficacious.
However, subjective redness manifestly exists; subjective redness manifestly does not exist in any physics known to us; yet the physics we already have appears to be fantastically predictive in detail, and quite capable of producing intelligent behavior in principle.
The usual way out of this is to deny my second proposition, and say that redness is a type of brain state or property, and therefore as much a part of physics as any other material thing. But the elementary properties one finds in physics are of a very limited nature: quantitative, geometric, causal, probabilistic. How can piling up number and shape, even when glued together by causal relations, create color?
The answer is that it cannot, at least if you restrict yourself to logical, set-theoretic, and other relations truly intrinsic to arithmetic and geometry. This is why people become property dualists, or believers in "strong emergence".
Another possibility canvassed by Chalmers is a reinterpretation of the mathematical formalism of physics, so that it is about something other than what it appears to be about, namely entities starkly devoid of the "secondary qualities" revealed in conscious perception. This is his panpsychism or panprotopsychism. It's "pan-" because current physics has monistic tendencies, a homogeneity of kind among its fundamental entities; if any of them are mind-like or qualia-like, one might expect all of them to participate in or at least to approach that condition.
Given the problems of epiphenomenalism, I find this monistic approach more appealing. However, it seems that even those few philosophers who pursue this avenue are hindered by a rather crude notion of mind. People talk as if consciousness is nothing but sensation, and as if sensation is nothing but a pile of pixel-like elementary sensations, and as if these elementary qualia then just need to be identified with something physically elementary. There is a tendency, for example, to deny that there is any phenomenology of thinking, or any such thing as conscious perception of meaning; cognition is unconscious computation, and it's just the raw feels of sensation which exist and constitute conscious experience.
But in reality, raw feels are just the part of consciousness which is hardest to deny. Also, this mode of thought is the least removed from billiard-ball materialism: an object is a heap of particles, a conscious experience is a heap of qualia. Alas, it is not so simple.
If anyone out there really wants to engage in phenomenology, I have a few recommendations. First, you must overcome the conceptual reflex which rejects the reality of "reifications" and "abstractions". Err in the other direction, see what picture you build up, and then have a go at paring it back. My training curriculum is as follows. First, Chapter IX of Russell's The Problems of Philosophy, which makes the case for the reality of properties and relations, as entities which exist just as much as "things" do. Then, ideally I would recommend Reinhardt Grossman's The Categorial Structure of the World, for a mind-stretching example of a systematic ontology in which abstract entities exist just as much as concrete ones do, and in which the relationship between them is also analysed at length. But the book may be hard to obtain, so some other contemporary system of categories might have to do. Then, one should tackle Husserl and Kant, for epistemologically systematic ontology. And the final step - but by now I'm just describing my own research program - is to interpret the world of appearances which one has been describing as interior to a single entity, and to embed that in physics. As I remarked here last month, quantum entanglement suggests an ontology in which there are fundamental entities with arbitrarily many degrees of freedom. It is the one indication we have from physics that old-fashioned atomism (according to which all fundamental entities are simple and perpetually encapsulated) may be radically wrong.
Returning to the theme of zombies - in a monistic ontology like this, you can't subtract the mental properties and leave the causal relations unchanged, so the critique of epiphenomenalism no longer holds. The best one can do is to imagine a possible world in which the causal laws are isomorphic, but the elementary states they relate are different: are not 'mind-like', are not states of consciousness. However - and this is relevant for AI, Friendly or otherwise - there is nothing to guarantee that every causal simulation of consciousness even in this world will itself be conscious, if consciousness is indeed this deeply grounded in "substance" rather than "algorithm". For example - if conscious states are only ever states of a single elementary entity (e.g. a single tensor factor such as I mention in the previous link), then any distributed simulation of consciousness will not be conscious, even though it will exhibit all the same problem-solving capacities. (The correspondence between these two would begin to break down if they set about investigating their own physical constitution, which by hypothesis is different, so there is a limit to the duplication involved with this sort of "zombie".)
Note 1: Chalmers does acknowledge the challenge of epiphenomenalism in The Conscious Mind, and discusses it at length. He proposes a number of ways in which the phenomenal properties might still be causally relevant, but does say that if forced to a choice between epiphenomenalism and eliminativism, we should prefer the former. (He develops the monistic alternative to property dualism in later papers.)
Note 2: A few commenters on this blog have tried the old dodge according to which "redness" is just a word. I would agree that it is a categorization whose scope is vague and varies with the individual. But is it not clear that the individual patches of color and shades of red to which it is applied do exist, and that they pose a challenge to the colorless ontology of physics, regardless of how we group them by name?
↑ comment by SiglNY · 2010-11-15T02:13:51.232Z · LW(p) · GW(p)
Mitchell -
I agree totally. As someone who has read Chalmers' entire book, it's frustrating to read so many people misinterpreting his views. Chalmers is in no way committed to a strict epiphenomenal dualism; with the Zombie argument Chalmers is merely demonstrating the intellectual bankruptcy of traditional reductionist materialism. He hesitantly endorses epiphenomenalism only because he is granting the materialist as many premises as he can to make his point. The materialist would seemingly want to hold to these facts:
1) The physical world is causally closed. All physical activity can be explained fully by the laws of physics.
2) The fundamental constituents are in no way conscious and exist as solely extrinsic entities in space and time.
3) The mind is fully reducible to physical activity.
Chalmers grants premises 1 and 2 but then asks: if this is true, why not Zombies? Why couldn't the universe, made as it is of unconscious entities, simply allow for arrangements of matter exactly like us but without any internal subjective reality? This basic argument seems to me irrefutable; an argument in the face of which materialism must wither and die. Chalmers is fully aware that the most problematic aspect of this argument is the Zombie's ability to "think" about consciousness. And it is here that Yudkowsky finds Chalmers' theories descend into absurdity. But why, if the materialist's premises are correct? Indeed it seems, according to a classical view of physics, that if a super-mathematician had perfect knowledge of all arrangements and properties of matter from a very early time in the universe, he could predict perfectly the future existence of a book called "'The Conscious Mind' written by David Chalmers", and he could make this prediction without any reference to any of Chalmers' mental states. If this is possible (and a materialist would seemingly HAVE to agree), then it seems wholly possible that a Zombie could indeed write books on consciousness without consciousness being at all present. Throughout his rebuttal to Chalmers, Yudkowsky tacitly endorses an interactionist view of consciousness where the behavior of a philosopher musing on the vagaries of mind cannot be understood without reference to subjective experience as part of the causal chain. But of course, this gives up the ghost! Chalmers, in fact, does not rule out interactionist dualism, a view which, if true, would make Zombies impossible. Unfortunately, this rules out not only materialism but the causal closure of the physical world! Chalmers doesn't want to go out on such a limb. The "defense" of epiphenomenalism Chalmers mounts in his Zombie argument is an attempt to grant as much to the materialist as logically possible.
As counter-intuitive as epiphenomenalist dualism is, it's still preferable to materialism, a view which is simply impossible. But, again, there are still OPTIONS available to us other than such a strange counter-intuitive position as epiphenomenalism; options which Chalmers ENDORSES! A point Yudkowsky fails to acknowledge.
To speculate further (and digress somewhat), it does seem some sort of panpsychist monism is the most satisfying of all possible psychophysical theories; not only does it have the virtue of maintaining the causal closure of the physical world, it, like the most far-reaching theories of theoretical physics, unites different phenomena into one structure. It is still problematic, however, that our thoughts about consciousness, while impossible to "divorce" from conscious experience (zombies are impossible in a monist view of reality), aren't caused by consciousness in the same way that the existence of zebras "causes" our beliefs about zebras. It seems the role of consciousness in the causal chain is deeper and more non-linear in any view of consciousness that isn't full-blown interactionist dualism. Perhaps this is actually a virtue of monism. Maybe we should expect that subjectivity itself, the thing that allows us to have "beliefs" at all (the Zombie has no beliefs, as a belief is wholly defined as a subjective intentional state), should play a more "grounded" role in our belief formation mechanism, including beliefs about consciousness. Consciousness doesn't "cause" our beliefs about consciousness, it merely makes them true. Weird, but not impossible.
If I had to bet, an ultimate theory of reality would be an information theory in which the fundamental constituents of the universe are subjective "monads" undergoing computational processes; I favor this view metaphysically because subjectivity has irreducible "intrinsic" qualities, whereas physical entities are wholly defined by external structure and function. It tells us what the universe is made of "in and of itself," perhaps penetrating the world of unknowable reality that Kant believed we could never see. And if, as Tononi and Koch's recent scientific work suggests, consciousness IS integrated information (a view I first heard fuzzily articulated in, that's right, Chalmers' The Conscious Mind), then such a theory could explain ALL the phenomena of the universe, from the cosmological to the phenomenological (because it turns out the former is merely a version of the latter). But this is highly speculative.
Anyway, I don't think Zombies can exist. Neither, probably, does Chalmers. But it's the materialist who has the impossible task here: deny the zombie but maintain that the physical world is closed and fundamentally unconscious. He can't do it.
Replies from: abandon↑ comment by dirk (abandon) · 2024-05-27T00:06:41.839Z · LW(p) · GW(p)
Why couldn't the universe, made as it is of unconscious entities, simply allow for arrangements of matter exactly like us but without any internal subjective reality?
Because an arrangement of matter exactly like us would (if under the same set of physical laws as us) be conscious.
comment by Nick_Tarleton · 2008-04-06T03:46:00.000Z · LW(p) · GW(p)
Mitchell: Very interesting. But, in what sense is the brain not already a distributed simulation?
comment by mitchell_porter2 · 2008-04-06T04:37:00.000Z · LW(p) · GW(p)
Nick, that final step leads to a quantum-mind theory in which the Cartesian theater of consciousness is a single irreducible tensor factor of very high dimension. The notion may make more sense if you look at a paper like this. Physical reality is modelled as a "quantum causal history" in which one has a partially ordered set of events, each event being characterized by a Hilbert space and a state therein. The partial order gives you causality, but if you take a spacelike cross-section of the history, it need not reduce to a set of spatially localized events. In a formalism like that, you can represent an EPR pair (of spatially distant entangled particles) as a single entity, and the quantum state of the universe on any such spacelike cross-section is just the direct product of all the individual states. So I would propose to recast physics in that fashion, and then ontologically interpret it as I described. To be distributed here would mean to be implemented using many of those tensor factors at once, and the whole idea is that the instantaneous global state of consciousness is (formally) described by just one, very big, tensor factor. The rest of the brain would indeed be performing distributed computations, which (in physically ultimate terms) would consist of large numbers of simpler (lower-dimensional) tensor factors; but they would be causally coupled to the big central one, which would be an objectively existing self.
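The claim that an EPR pair is "a single entity", not decomposable into two local states, can be checked directly in the formalism. A minimal numpy sketch (my own illustration, not from the paper or the comment; it uses the standard Schmidt-rank test for separability):

```python
import numpy as np

# A Bell state: two spin-1/2 particles in the joint state (|00> + |11>)/sqrt(2).
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

# If the joint state were a product of two one-particle states (a,b) (x) (c,d),
# the 2x2 matrix of its amplitudes would have rank 1. For the Bell state:
amplitudes = bell.reshape(2, 2)
rank = np.linalg.matrix_rank(amplitudes)  # rank 2: no local factorization exists

# Contrast with a genuinely separable state |0> (x) |1>:
product = np.kron([1.0, 0.0], [0.0, 1.0])
product_rank = np.linalg.matrix_rank(product.reshape(2, 2))  # rank 1
```

A rank greater than 1 is exactly what it means for the pair to be representable only as one entity of the larger Hilbert space, which is the formal hook for the "single tensor factor" ontology described above.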
The standard notion of consciousness in neuroscience, insofar as there is a standard notion, is that some (possibly changing) subset of the brain is the "neural correlate of consciousness"; it's the physical entity whose state is my current state of consciousness, and somehow it contains that representation of external reality which is the direct object of my experience. What I write above is exactly the same, except that I propose that the place where it all comes together is one of these physically elementary tensor factors, rather than some loose set of coarse-grained mesoscopic neurocomputational states. The advantage of my idea is that there is no fuzziness regarding the boundary of the "correlate of consciousness". The disadvantage is that it would seem to require that macroscopic quantum coherence exists in the brain and is functionally relevant, something for which there is no evidence. (I should add in passing that Tegmark's well-known calculations are far from conclusive, as there are many other ways to look for entanglement in the brain, and a variety of possible coherence-protecting mechanisms.)
It is unlikely that my particular ideas are correct, but it's vitally important to show that detailed alternatives to the existing way of thinking are possible, because even people who agree that consciousness is somehow a problem are conceptually frozen in place, and more inclined to think that the problem arises from some personal incapacity rather than from a defect in the conventional wisdom.
comment by GNZ · 2008-04-06T04:52:00.000Z · LW(p) · GW(p)
Richard,
Is the burden of proof always upon the person proposing that something is impossible when it looks like it might be possible? You are, after all, trying to use this item to prove something, and it is a scenario constructed carefully by your own side of the argument to be almost impossible to disprove. I would think you carry the burden of showing that it is extremely likely to be ideally conceivable - something I think you are very far from doing, because of more or less the sort of argument made in the main article above.
2) That is, it's "miraculous" in the same sense that it's "miraculous" that our universe is fit to support life.
Is it not conceivable (in the sense that you refer to) that you could have qualia that don't line up with the outside world? That is, a situation where we are conscious in such a way as we might reasonably be in that world (in the same way that we obviously can't be in a world where there is no consciousness), just not in a sense that looks like it has causality - i.e. you are a watcher of some actions, but they are different from your desires - like how some people describe hypnosis, but much more extreme.
anon,
> If it could be shown that I'm the only conscious person in this world and everybody else are p-zombies, then I could morally kill and torture people for my own pleasure.
That would seem to imply that if I don't believe in qualia, I can kill and torture people... cripes... it sounds a bit like the "if there was no god we could all do anything" argument.
comment by Andrew8 · 2008-04-06T07:36:00.000Z · LW(p) · GW(p)
What bothers me is that, from reading the Chalmers quotes, it seems like he assumes that this world is the world with the listener, the conscious people. But because 'this' world and the zombie world are physically indistinguishable from one another, this world could equally well be the zombie world, with the other world the one full of conscious people.
Or better yet of course, the argument is absurd and there is no external part that is what we call 'consciousness'.
comment by Nick_Tarleton · 2008-04-06T17:49:00.000Z · LW(p) · GW(p)
That would seem to imply that if I don't believe in qualia, I can kill and torture people... cripes... it sounds a bit like the "if there was no god we could all do anything" argument.
Not really. If something doesn't feel pain, or pleasure, or anything else, it's not a moral object.
comment by Tom_Breton · 2008-04-07T03:17:00.000Z · LW(p) · GW(p)
To put it much more briefly, under the Wesley Salmon definition of "explanation" the epiphenomenal picture is simply not an explanation of consciousness.
comment by Daniel_Humphries2 · 2008-04-07T04:42:00.000Z · LW(p) · GW(p)
There, pretty as a picture.
comment by David_Chalmers · 2008-04-08T19:52:00.000Z · LW(p) · GW(p)
Someone e-mailed me a pointer to these discussions. I'm in the middle of four weeks on the road at conferences, so just a quick comment. It seems to me that although you present your arguments as arguments against the thesis (Z) that zombies are logically possible, they're really arguments against the thesis (E) that consciousness plays no causal role. Of course thesis E, epiphenomenalism, is a much easier target. This would be a legitimate strategy if thesis Z entails thesis E, as you appear to assume, but this is incorrect. I endorse Z, but I don't endorse E: see my discussion in "Consciousness and its Place in Nature", especially the discussion of interactionism (type-D dualism) and Russellian monism (type-F monism). I think that the correct conclusion of zombie-style arguments is the disjunction of the type-D, type-E, and type-F views, and I certainly don't favor the type-E view (epiphenomenalism) over the others. Unlike you, I don't think there are any watertight arguments against it, but if you're right that there are, then that just means that the conclusion of the argument should be narrowed to the other two views. Of course there's a lot more to be said about these issues, and the project of finding good arguments against Z is a worthwhile one, but I think that such an argument requires more than you've given us here.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2008-04-08T21:56:00.000Z · LW(p) · GW(p)
David, thanks for commenting!
It seems to me that there is a direct, two-way logical entailment between "consciousness is epiphenomenal" and "zombies are logically possible".
If and only if consciousness is an effect that does not cause further third-party detectable effects, it is possible to describe a "zombie world" that is closed under the causes of third-party detectable effects, but lacks consciousness.
Type-D dualism, or interactionism, or what I've called "substance dualism", makes it impossible - by definition, though I hate to say it - that a zombie world can contain all the causes of a neuron's firing, but not contain consciousness.
You could, I suppose, separate causes into (arbitrary-seeming) classes of "physical causes" and "extraphysical causes", but then a world-description that contains only "physical causes" is incompletely specified, which generally is not what people mean by "ideally conceivable"; i.e., the zombies would be writing papers on consciousness for literally no reason, which sounds more like an incomplete imagination than a coherent state of affairs. If you want to give an experimental account of the observed motion of atoms, on Type-D dualism, you must account for all causes whether labeled "physical" or "extraphysical".
Type-F monism is a bit harder to grasp, but presumably, on this view, it is not possible for anything to be real at all without being made out of the stuff of consciousness, in which case the zombie world is structurally identical to our own but contains no consciousness by virtue of not being real, nothing to breathe fire into the equations. If you can subtract the monist consciousness of the electron and leave behind the electron's structure and have the structure still be real, then that is equivalent to property dualism or E. This gets us into a whole separate set of issues, really; but I wonder if this isn't isomorphic to what most materialists believe. After all, presumably the standard materialist theory says that there are computations that could exist, but don't exist, and therefore aren't conscious. Though this is an issue on which I confess to still being confused.
I understand that you have argued that epiphenomenalism is not equivalent to zombieism, enabling them to be argued separately; but I think this fails. Consciousness can be subtracted from the world without changing anything third-party-observable, if and only if consciousness doesn't cause any third-party-observable differences. Even if philosophers argue these ideas separately, that does not make them ideally separable; it represents (on my view) a failure to see logical implications.
Replies from: None↑ comment by [deleted] · 2012-04-10T18:53:03.367Z · LW(p) · GW(p)
This is a misunderstanding of the role being played by the zombie argument in considerations of consciousness. The question is whether a zombie world is logically possible (whether it is conceivable), not whether it is coextensive with an epiphenomenalist view of consciousness. That is a critical distinction.
To see the difference, consider the "four-sidedness" of squares. Is it possible to conceive of a world in which squares happen to be other than four-sided? The answer, of course, is no. It would be logically incoherent for someone to ask that we discuss the kinds of universe where this might be possible, because all such discussion would be a waste of time: squares have four sides by definition, so there is no empirical or conceivable fact about any universe that could make it different.
By contrast, we could conceive all kinds of outlandish variations on the physical reality in this universe, and ask questions about those conceptions. We could imagine, for example, a universe in which only the most wise and intelligent people spontaneously rose to the top of all organizations. However bizarre and physically impossible the conception, it is still conceivable ... unlike the non-four-sided-square universe.
So the question addressed by the zombie argument is whether a zombie universe is conceivable in that particular sense. Is a zombie universe logically impossible, for the same kind of reasons that the non-four-sided-square universe is impossible? And if so, on what grounds? Given the terms of the definition of "logically possible", it is meaningless to try to introduce arguments about the contingencies of science, the "deranged" character of a theory that allows zombies to write sincere papers on the subject of consciousness, or the merits of epiphenomenalism, in this context. It is not a question of empirical or theoretical science that makes non-four-sided squares impossible; it is something much deeper, and the question about the zombie world is whether it too deserves to be categorized as a logical impossibility.
So, the two phrases "consciousness is epiphenomenal" and "zombies are logically possible" cannot be compared: the former is a statement of how consciousness actually plays a role in the universe, whereas the latter is asking about an entirely different KIND of distinction.
For the record, the reason to ask whether zombies are logically possible is that if a thing can be present in the real world, but not present in a logically possible world, it is then meaningful to ask about the nature of the thing that differs between the two cases. That is the goal of positing a zombie world: the goal is not to say that zombie worlds actually do exist, and certainly not to say that zombie worlds are coextensive with epiphenomenalism.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-04-10T19:42:40.165Z · LW(p) · GW(p)
For the record, the reason to ask whether zombies are logically possible is that if a thing can be present in the real world, but not present in a logically possible world, it is then meaningful to ask about the nature of the thing that differs between the two cases.
As long as you're recording, can you also explain the reason to ask about the nature of the thing that differs between a conscious system in the real world (A) and its logically possible physically identical nonconscious analog in the zombie world (B)?
Relatedly: if it turns out that no such B can exist in the real world even in principle, what depends on the nature of the thing that differs between A and B?
comment by Caledonian2 · 2008-04-09T11:16:00.000Z · LW(p) · GW(p)
It seems to me that although you present your arguments as arguments against the thesis (Z) that zombies are logically possible, they're really arguments against the thesis (E) that consciousness plays no causal role. Of course thesis E, epiphenomenalism, is a much easier target. This would be a legitimate strategy if thesis Z entails thesis E, as you appear to assume, but this is incorrect.
If 'consciousness' plays a causal role, then we cannot imagine a world in which it is removed, yet everything behaves precisely as it did when it was present.
In that case, p-zombies are impossible, because the whole point of their hypothesized existence is that they're identical in every way to people with 'consciousness' except that they lack it. If you remove a causal factor from people, they behave differently in some way, even if the difference is not immediately obvious to a limited observer. If the p-zombies really do behave precisely the same as they would with 'consciousness', and nothing else has been changed, then there is no difference between having consciousness and lacking it.
comment by Marius_Gedminas · 2008-06-11T10:31:00.000Z · LW(p) · GW(p)
The link under "Bayesian" is wrong: it points to yudowsky.net instead of yudkowsky.net
comment by Jack_Mallah · 2009-03-06T22:35:00.000Z · LW(p) · GW(p)
Hi. Eliezer, very interesting post - I have been thinking along the same lines against epiphenomenalism, though I don't think the point is as clear as you thought.
Richard wrote:
"(2) It's misleading to say it's "miraculous" (on the property dualist view) that our qualia line up so neatly with the physical world. There's a natural law which guarantees this, after all. So it's no more miraculous than any other logically contingent nomic necessity (e.g. the constants in our physical laws). That is, it's "miraculous" in the same sense that it's "miraculous" that our universe is fit to support life. Atheists and other opponents of fine-tuning arguments are not usually so troubled by this kind of alleged 'miracle'."
Actually the fine tuning of physical laws to support life is a big deal. Pretty much the only way to explain it is that there are a large variety of physical universes with different laws, so it's not improbable that some would support life, and (by anthropic selection) we'd be in one of those.
The fine tuning of the hypothesized qualia bridge laws would likewise need an explanation. If there are a wide variety of bridge laws, does anthropic selection explain why ours are fine tuned to make our qualia the same as our mathematically describable brains would say they are? It seems evident that it does not - we could have very odd qualia yet still be conscious. So it would indeed be a strange coincidence to say the least.
To me it would be an even stranger coincidence if 'qualia-properties' need not logically exist, but do exist in nature, and happen to be just like the qualia that material beings would (falsely) say they have.
I don't think that is the case; I think it's much more likely that either we are material and material beings must be on to something when they think that they have qualia, or we are not on to something when we think we have them, or some middle ground between the two (we are material and have some kind of qualia, but they are not what we think they are).
comment by Hopefully_Anonymous3 · 2009-03-07T06:07:00.000Z · LW(p) · GW(p)
Eliezer, why do you give more attention to Chalmers than to Professor Koch of Caltech?
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2009-03-07T06:15:00.000Z · LW(p) · GW(p)
Chalmers is an enormously more adept philosopher than Koch. I don't even understand this question.
comment by Hopefully_Anonymous3 · 2009-03-07T16:39:00.000Z · LW(p) · GW(p)
Well, a lot of what you're doing here regarding consciousness and "zombies" seems to me like philosophy-of-the-gaps. If I'm not mistaken, Prof. Koch and his peers are the most literate (and adept) on where and how empirically derived evidence is filling in these gaps. I confess I'm not reading these ginormous posts from you carefully - but I'm curious why (as far as I can tell) you're not mucking around with cutting-edge empiricism on consciousness with the same glee Robin's mucking around with cutting-edge empiricism on behavioral economics, decision theory, etc.
comment by DanielLC · 2010-01-06T00:56:11.910Z · LW(p) · GW(p)
1. Consciousness has no effect: the zombie hypothesis is true.
2. Consciousness has an effect: there are two possibilities:
a. It's logically possible for a universe to exist in which something else has the same effect: the zombie hypothesis is true.
b. It's logically impossible for a universe to exist in which something else has the same effect.
2b is the only possibility in which the zombie hypothesis is false. I'll examine that. I will not, however, make a pun about it.
Since nothing else can have the effect consciousness has, it can be defined by its effect. In other words, consciousness is whatever has that effect.
In other words, there's some function, f(x), that's conscious. No matter what physical method you use to compute f(x), if you put the right input into it, there will be qualia.
f(x) might involve mathematics more complex than we can make. Essentially, it would require an infinitely large lookup table. If it does, it can still be approximated within our universe to arbitrary accuracy using only a finite lookup table. I don't see why the idea of a p-zombie that can only act arbitrarily close to how you act would be any less unsettling. If they can't do that, that means that f(x) is something you can write down using nothing more than basic arithmetic.
What function is it? Is there really any reason to believe that the result of f(x) will be "I am self-aware" rather than, say, "I am not self-aware"? How can you even begin to figure out what f(x) is?
Have you begun to figure out what f(x) is? If so, by all means, tell us. If not, you're guessing. Not estimating; guessing. You are using some method that is simply wrong. In other words, "The internal narrative is mysteriously malfunctioning, but miraculously happens to be correctly echoing the justified thoughts of the epiphenomenal inner core".
Does it matter whether it works in only our universe or all universes? Does it matter if it can be proven? You haven't proven humans are conscious. It's entirely possible that within the next 30 seconds, you will finally figure it out and elegantly prove, once and for all, that humans are not conscious.
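The finite-lookup-table step in the argument above can be made concrete. A toy sketch (the function and names are mine, chosen purely for illustration; any computable stand-in for f would do):

```python
import math

def f(x):
    # Stand-in for whatever input-output behavior is at issue.
    return math.sin(x)

def make_table(func, lo, hi, n):
    """Precompute func at n evenly spaced points: a finite lookup table."""
    step = (hi - lo) / (n - 1)
    return [func(lo + i * step) for i in range(n)], lo, step

def lookup(table, lo, step, x):
    """Answer queries using only the table, via nearest-neighbor lookup."""
    i = round((x - lo) / step)
    i = max(0, min(len(table) - 1, i))
    return table[i]

# With enough entries, the table's answers approach f's to arbitrary accuracy,
# even though the table contains no trace of how the answers were generated.
table, lo, step = make_table(f, 0.0, 1.0, 10_000)
assert abs(lookup(table, lo, step, 0.5) - f(0.5)) < 1e-4
```

Shrinking the step size drives the error down without bound, which is the sense in which a finite table can stand in for f "to arbitrary accuracy" within a bounded domain.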
Replies from: orthonormal, Blueberry↑ comment by orthonormal · 2010-01-06T05:36:02.885Z · LW(p) · GW(p)
I can't follow you at all. I don't think that this is my fault, and I don't think I'm alone.
Replies from: DanielLC↑ comment by DanielLC · 2010-01-06T18:24:51.013Z · LW(p) · GW(p)
It's more of a rant than a structured argument. I guess my main points are:
The only alternative to consciousness arising from some unknown and unknowable process is it being some unknown process.
Since you don't know what consciousness is, you still have no more evidence that you're actually conscious than you would if it was unknowable.
Also, the p-zombie argument isn't just that we don't see how lack of consciousness would lead to a contradiction. It's that it's totally unrelated to anything else. There is no axiom you can use to conclude something is conscious without already knowing what's conscious. It's similar to the is-ought problem.
↑ comment by Blueberry · 2010-01-11T04:57:32.575Z · LW(p) · GW(p)
Since nothing else can have the effect consciousness has, it can be defined by its effect. In other words, consciousness is whatever has that effect.
This is essentially behaviorism, which is now considered outdated and has largely been replaced by functionalism.
Consciousness is the result of many complicated processes working together in the brain. Even if you could create the function f(x) which has the same output as a human, it wouldn't have the same structural organization that gives rise to that output: it would just be a giant lookup table. Consciousness is the result of all that structural organization.
The function f(x) with the same output as a human wouldn't be a zombie, because it's not physically identical to the human. It's just a summary of the human's behavior without the actual process that generates the behavior, and it's the process that creates consciousness.
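The distinction drawn here, identical output with entirely different internal organization, can be sketched in a few lines (a toy illustration of mine, not from the comment):

```python
# Two systems with identical input-output behavior over a shared domain,
# but entirely different internal organization.

def by_process(n):
    # The actual generative process: computes the answer.
    return n * n

# A giant (here, small) lookup table: the behavior recorded, the process absent.
BY_TABLE = {n: n * n for n in range(1000)}

def by_table(n):
    return BY_TABLE[n]

# Behaviorally indistinguishable over the domain...
assert all(by_process(n) == by_table(n) for n in range(1000))
# ...yet only by_process contains the structure that generates the output.
```

On the functionalist view being invoked, it is this difference in generative structure, invisible at the level of outputs, that is supposed to matter for consciousness.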
comment by [deleted] · 2010-04-03T20:55:44.621Z · LW(p) · GW(p)
.
comment by red75 · 2010-06-06T14:19:05.365Z · LW(p) · GW(p)
Well, I just can't comprehend why this zombie fuss might have any practical consequences for AGI (other than supporting/disproving the "soulless machine" cliché). Either it's epiphenomenal or it's not - either way there will be neuronal correlates of the "mysterious listener". Let philosophers debate whether it should always listen or not, and just act as it should.
comment by [deleted] · 2010-08-24T19:40:46.311Z · LW(p) · GW(p)
#include <stdio.h>
int main() { printf("Following my stream of thought... I feel pretty conscious!\n"); return 42; }
Yeah - We made it!
Also check out this.
Edit: As I realized not everyone here might be familiar with programming: I meant that, if I understand Eliezer right, his POV would imply that a computer program that simply outputs "I can follow my stream of consciousness." is conscious.
Replies from: RobinZ, JGWeissman↑ comment by RobinZ · 2010-08-24T20:23:52.907Z · LW(p) · GW(p)
Two things:
First, welcome to Less Wrong! As you may have gathered, this is a blog devoted to the question of human rationality - techniques we can use to more accurately draw correct conclusions from what we see. If you want to introduce yourself to everyone, feel free to do so on the latest Welcome Thread.
Second, regarding the content of your post, I don't think that's quite what Eliezer Yudkowsky was talking about. He was talking about the sort of evidence we use right here and now to decide that we are conscious, not the sort of evidence sufficient to determine that some other thing is conscious. You can tell if an ordinary human being has a headache by hearing them say, "I've got a headache", but that doesn't mean a computer which runs speech-synthesis to emit audio recognizable as the English words "I've got a headache" has any such thing. It's not supposed to be a general any-context test, and it doesn't have to be.
comment by VKS · 2012-03-21T13:52:40.384Z · LW(p) · GW(p)
Huh. I guess I got lucky then.
Last year, I had a very strange dream, whose details I have unfortunately forgotten. It had a plot which involved some mysteries needing explanations, and I remember feeling tension as details about whatever was happening accumulated and the answers to the various questions the plot had posed seemed about to be revealed.
But then, the story tried to execute a twist. It failed miserably; I only remember vaguely, something incredibly lazy along the lines of some characters having had psychic powers all along, or some such. At which point I remember very clearly saying to myself "Oh, come ON!", but not being able to continue, having jerked awake, as if from a nightmare - but instead of cold sweat and horror, confusion and the feeling that leads one to say "well, let's see YOU do better".
I guess somebody was listening...
(where by somebody, I mean me.)
comment by [deleted] · 2012-04-02T14:25:48.800Z · LW(p) · GW(p)
You have misunderstood the argument completely. You say "I know I'm speaking from limited experience, here. But based on my limited experience, the Zombie Argument may be a candidate for the most deranged idea in all of philosophy." Melodrama, this, but I would advise focusing on the first part of the phrase ("But based on my limited experience....") if you want to make progress.
The main point of the zombie argument is that if science is so completely helpless that it can say nothing -- even in principle -- about the subjective phenomenology of consciousness (and by widespread consensus, this appears to be the case), then the possibility of a parallel universe in which that particular aspect is missing (i.e. the Zombie universe) cannot be ruled out. This Can't-Rule-It-Out aspect is what Chalmers is deploying.
He is NOT saying that we should believe in a parallel zombie universe (a common misunderstanding among amateur philosophers), he is saying that IF science decides to do a certain kind of washing-its-hands on the whole phenomenology of consciousness idea THEN it follows that philosophers can declare that it is logically possible for there to be a parallel universe in which the thing is missing. It is that logical entailment that is being exploited as a way to come to a particular conclusion about the nature of consciousness.
Specifically, Chalmers then goes on to say that the very nature of subjective phenomenology is that we have privileged access to it, and we are able to assert its existence in some way. It is the conflict between privileged access and logical possibility of absence, that drives the various zombie arguments.
But notice what I said about science washing its hands. If science declares that there really is absolutely nothing it can say about pure subjective phenomenology, science cannot then try to have its cake and eat it too. Science (or rather you, with remarks like "I think I speak for all reductionists when I say Huh?") cannot turn right back around and say "That's preposterous!" when faced with the idea that a zombie universe is conceivable. Science cannot say:
a) "We can say NOTHING about the nature of subjective conscious experience, and
b) "Oh, sorry, I forgot: there is one thing we can say about it after all: it is Preposterous that a world could exist in which subjective conscious experience did not exist, but where everything else was the same!"
Your misunderstanding comes from not appreciating that this is the conundrum on which the whole argument is based.
Instead, you just fell into the trap and tried to use "Huh!?" as a scientific response.
Finally, in case the point needs to be explained: why does the "Huh!" response not work? Try to apply it to this parallel case. Suppose you are trying to tell whether there is a possibility of a liar faking their emotions. You know: kid suspected of stealing cookies, and kid cries and emotes and pleads with Mother to believe that she didn't do it. Is it logically possible for the kid to give a genuine-looking display of innocence, while at the same time being completely guilty inside? If all liars had an equal facility with this kind of fake emotion, would philosophers be justified in saying that it is nevertheless LOGICALLY POSSIBLE for there to be all the outward signs of innocence, but with none of the internal innocence?
According to your approach, you could just simply laugh and say "Huh?", and then declare that "the Fake-Innocence Argument may be a candidate for the most deranged idea in all of philosophy."
Replies from: APMason, TheOtherDave, None, wedrifid, Bugmaster, abramdemski↑ comment by APMason · 2012-04-02T14:52:54.587Z · LW(p) · GW(p)
Eliezer's article is actually quite long, and not the only article he's written on the subject on this site - it seems uncharitable to decide that "Huh?" is somehow the most crucial part of it. Also, whether or not there is widespread consensus that science can in principle say nothing about subjective phenomenology, there is certainly no such consensus amongst reductionists - it simply wouldn't be very reductionist, would it?
Replies from: None↑ comment by [deleted] · 2012-04-06T17:48:56.085Z · LW(p) · GW(p)
The "Huh?" part was then elaborated, but the elaboration itself added nothing to the basic "Huh?" argument: he simply appealed to the idea that this is self-evidently preposterous. He did also pursue other arguments (as you say: there were many more words), but the rest involved extrapolations and extensions, all of which were either strawmen or irrelevant.
If you disagree, you should really find the supporting arguments of his that you believe I overlooked. I see none.
↑ comment by TheOtherDave · 2012-04-02T14:53:39.591Z · LW(p) · GW(p)
Nicely argued.
So I suppose the question, for someone who wishes to rescue their opposition to zombies as logically possible entities, is what else they open the door to if they concede "You're right, science does have something to say about conscious experience after all. One thing science has to say about conscious experience is that a given physical state of the world either gives rise to conscious experience, or it doesn't; the same state of the world cannot do both."
That seems a relatively safe move to me.
All of that said, your analogy to Fake-Innocence is a bit of a bait-and-switch. The idea that two different systems (including the same individual at different times) can demonstrate identical behavior that is in one case the result of a specified mental state (innocence, consciousness, pain, what-have-you) and in the other case is not, is very different from the idea that two identical systems ("identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion") can have the mental state in one case and not in the other.
It's not clear to me that incredulity is inappropriate with respect to the second claim, except in the sense that it's impolite.
Replies from: None↑ comment by [deleted] · 2012-04-06T18:08:59.930Z · LW(p) · GW(p)
About Science making the claim "You're right, science does have something to say about conscious experience after all ... [namely] ... that a given physical state of the world either gives rise to conscious experience, or it doesn't; the same state of the world cannot do both."
This would just be Solution By Fiat. Hardly a very dignified thing for Science to do.
And don't forget: Chalmers' goal is to say "IF there is a logical possibility that in another imaginable kind of universe a thing X does not exist (where it exists in this one), THEN this thing X is a valid subject of questions about its nature."
That is a truly fundamental aspect of epistemology -- one of the bedrock assumptions accepted by philosophers -- so all Chalmers is doing is employing it. Chalmers did not invent that line of argument.
About the analogy. It only looks like a bait and switch because I did not spell out the implications properly. I should have asked what would happen if there was no possible way for internal inspection of mental state to be done. If, for some reason, we could not do any physics to say what went on inside the mind when it was either telling the truth or lying, would it be valid to deploy that appeal to preposterousness? You must keep my assumption in order to understand the analogy, because I am asking about a situation in which we cannot ever distinguish the physical state of a lying human brain and a truthtelling human brain, but where we nevertheless had privileged access to our own mental states, and knew for sure that sometimes we lied when we made a genuine protest of innocence. (Imagine, if you will, a universe in which the crucial mental process that determined intention to tell the truth versus intention to deceive was actually located inside some kind of quantum field subject to an uncertainty principle, in such a way that external knowledge of the state was forbidden).
My point is that if we lived in such a universe, and if Eliezer poured scorn on the idea of Appearance-Of-Innocence without Intention-To-Be-Genuine, his appeal would be transparently empty.
Replies from: TheOtherDave, abramdemski, MarsColony_in10years↑ comment by TheOtherDave · 2012-04-06T18:34:58.316Z · LW(p) · GW(p)
I have no idea what dignity has to do with anything here.
As for the analogy... sure, if we discard the assertion that the two systems are physically identical, then there's no problem. Agreed. The idea that two systems can demonstrate the same behavior at some level of analysis (e.g., they both utter "Hey! I'm conscious!"), where one of them is conscious and one isn't, isn't problematic at all.
It's also not the claim the essay you're objecting to was objecting to.
That's why I classed it as a Bait and Switch.
↑ comment by abramdemski · 2012-04-09T19:22:08.403Z · LW(p) · GW(p)
This would just be Solution By Fiat. Hardly a very dignified thing for Science to do.
It isn't solution by fiat; the idea isn't to add just that statement to science. Rather, the idea is that such a statement already seems probable from basic scientific considerations such as those discussed in the post.
EDIT:
I see now that this is not relevant. The point of the zombie argument is not to refute such considerations, but rather, to illustrate the difference between "the hard problem of consciousness" and other sorts of consciousness.
↑ comment by MarsColony_in10years · 2015-03-22T13:24:44.445Z · LW(p) · GW(p)
I am asking about a situation in which we cannot ever distinguish the physical state of a lying human brain and a truthtelling human brain, but where we nevertheless had privileged access to our own mental states, and knew for sure that sometimes we lied when we made a genuine protest of innocence.
So, if we have knowledge that cannot possibly be observed in the physical world, then that proves that there is something else going on? Are you saying, for example, that we somehow know both the position and momentum of a particle with a precision greater than that allowed by the Heisenberg Uncertainty Principle, and that this gives rise to us either knowing that we are lying or knowing the we are telling the truth?
Well sure, if you start out with the given premise that breaks the laws of physics as we know them, of course you are going to conclude that there is something beyond "mere atoms". Suppose we know that the sky is actually green, even though all of physics says it should be blue. Clearly our map (aka the laws of physics as we currently know them) doesn't match the territory (the stuff that's causing our observations). But it doesn't seem to be necessary to resort to such wild hypotheses, because it is still quite plausible that consciousness emerges from "mere atoms". We just don't know the details of how yet, but we're working on it. If someday we have a full understanding of the brain, and there doesn't seem to be anything there to give rise to consciousness, then such wild speculation will be warranted. Today though, the substance dualism argument has no evidence behind it, and therefore an infinitesimally small probability of being true.
Replies from: curiousone↑ comment by curiousone · 2018-02-04T18:56:04.555Z · LW(p) · GW(p)
Hello. You state that "it is still quite plausible that consciousness emerges from "mere atoms" ", but you do not explain why you make that statement. In fact you say that one day it will all be totally clear, even if it isn't yet right now.
I might be wrong, but that's why I'm asking: Is it not possible to say that about anything?
↑ comment by [deleted] · 2012-04-02T15:25:38.540Z · LW(p) · GW(p)
But isn't it the point that Science specifically IS actually going around saying things about subjective consciousness? Namely that apparently it is a causal result of the way your cerebral neurons interact, to paraphrase Yudkowsky "Consciousness is made of atoms." You cannot take away consciousness and still have the same thing. Consciousness-testing is a one place function.
Quine's view of philosophy, which appears to be generally accepted here on LW, says that ultimately all philosophy is psychology, so is it not a better and more productive idea to ask "Why do we talk so passionately of this strange property called consciousness?"
Replies from: None↑ comment by [deleted] · 2012-04-06T18:11:43.417Z · LW(p) · GW(p)
This is not correct. Science is not making any claims about subjective consciousness. It makes claims about other meanings of the term "consciousness", but about subjective phenomenology it is silent or incoherent. For example, the claim "Consciousness is made of atoms" is just silliness. What type of atoms? Boron? Carbon? Hydrogen? And in virtue of what feature of atoms, is red the way it is?
Replies from: None, hairyfigment↑ comment by [deleted] · 2012-04-06T18:52:37.463Z · LW(p) · GW(p)
Your consciousness is made of atoms. Not a single kind of atom, but many different kinds. I cannot recite the entirety of human biochemistry by heart, but I am sure it is readily available somewhere in peer-reviewed publications. The fact of the matter is that your consciousness is a program running on the specialized wetware that is your brain. It might be possible to run your consciousness in a microanatomical computer simulation, but such a sim is still run on a computer made of atoms.
Now, information-theoretically, it must be possible to say something about this consciousness property that some programs exhibit and others don't - or maybe there isn't a hard and fast point where consciousness is defined, and it is in fact a continuous spectrum. I don't know, but if I had to bet, I'd say the latter.
There must also then be some way of making definite statements about how that conscious program will act if it is copied from one medium (a human) to another (a microanatomical sim).
The information-theoretic facts do not change the fact that the computer or the brain that runs the conscious program is still a real physical thing. So with science we can say something about the computational substrate, which is made of atoms; about the consciousness property, which is information theory; and about the nature of copying a mind, which is also information theory.
Now, are you telling me that information theory, chemistry and electrical engineering are not sciences?
↑ comment by hairyfigment · 2012-04-07T16:19:29.523Z · LW(p) · GW(p)
If you read this mini-sequence and say you can imagine a Zombie Mary in this kind of detail, then I declare your intuition broken. By which I mean, we'd have to drop the topic or ask if one type of intuition has more reason to work (given what science tells us).
↑ comment by wedrifid · 2012-04-02T17:12:02.514Z · LW(p) · GW(p)
You have misunderstood the argument completely. You say "I know I'm speaking from limited experience, here. But based on my limited experience, the Zombie Argument may be a candidate for the most deranged idea in all of philosophy." Melodrama, this, but I would advise focusing on the first part of the phrase ("But based on my limited experience....") if you want to make progress.
The 'limited experience' caveat serves to allow that Eliezer may be unfamiliar with something in philosophy that is even more deranged than the Zombie argument - a necessary concession if he is to make the claim 'most deranged'. It isn't intended to concede any ignorance of the zombie argument itself, which he quite clearly understands.
Replies from: None↑ comment by [deleted] · 2012-04-06T18:15:48.301Z · LW(p) · GW(p)
Your claim ("... the zombie argument itself, which he quite clearly understands....") is entirely unsupported. I know many philosophers, on both sides of the debate about zombies, and consciousness in general, who would say that Eliezer's claims are in a standard class of amateur misconstruals of the zombie argument.
Old, old counterarguments, in other words, that were dealt with a long time ago.
Your arbitrary declaration that he "quite clearly understands" the zombie argument does nothing to show that he does.
Replies from: wedrifid↑ comment by wedrifid · 2012-04-07T02:03:47.681Z · LW(p) · GW(p)
Your arbitrary declaration that he "quite clearly understands" the zombie argument does nothing to show that he does.
This is true. My arbitrary declaration of comprehension is very nearly as meaningless as your claim to the contrary. The two combined do serve to at least establish controversy. That means readers are reminded to think critically about what they read and arrive at their own judgement through whatever evidence gathering mechanisms they have in place.
I know many philosophers, on both sides of the debate about zombies, and consciousness in general, who would say that Eliezer's claims are in a standard class of amateur misconstruals of the zombie argument.
I know many philosophers who would indeed dismiss Eliezer's position as naive. And to be fair, the position is utterly naive. The question is whether the sophisticated alternative is a load of rent-seeking crock founded on bullshit. (And, on the other hand, I also know some philosophers whose thinking I do respect!)
↑ comment by Bugmaster · 2012-04-07T02:32:28.897Z · LW(p) · GW(p)
If all liars had an equal facility with this kind of fake emotion, would philosophers be justified in saying that it is nevertheless LOGICALLY POSSIBLE for there to be all the outward signs of innocence, but with none of the internal innocence?
Logically possible, yes. But in practice, you could not use outward signs of emotion to determine whether anyone was lying. If, somehow, there were no other ways to determine whether people other than yourself were lying (preposterous, yes, but bear with my thought experiment for a moment), then the best you could do is to say, "well, I know that I sometimes lie, but everyone else has no capacity for lies at all, as far as I can ever know". In other words, you'd have arrived at a sort of deception-solipsism. Would you agree?
Replies from: abramdemski↑ comment by abramdemski · 2012-04-08T00:35:57.720Z · LW(p) · GW(p)
I would think that the better analogy would be "Well, I know that I sometimes tell the truth, but so far as I can ever know, the utterances of other people bear no special relationship to the truth". I find it to be a better analogy because, in this view, we could try to introduce "philosophical liars": people who appear to be truthful in every way, but are merely putting up facades, with no inherent truth-connection behind their words.
↑ comment by abramdemski · 2012-04-09T19:57:33.848Z · LW(p) · GW(p)
Upvoted for pointing out that the post fails to address a basic issue.
However, I don't think anything said in the post is really wrong. Your characterization of the zombie argument appears to be this:
A1: Science can say nothing about the nature of subjective experience.
A2: If science can say nothing about the nature of subjective experience, then science must leave open the possibility of zombies.
Conclusion: Science leaves open the possibility of zombies.
The "long version" of the zombie argument has much to say in order to establish A1 and A2. However, the essence of A1 was (in my understanding) established as a philosophical idea long before the zombie argument. If I understand your complaint, it is that Eliezer is not really addressing A2 at all, which is the meat of the zombie argument; rather, in rejecting the conclusion, he is rejecting A1. So, for a more complete argument, he could have directly addressed the idea of the "hard problem of consciousness" and its relationship to empirical science. (Perhaps he does this in other posts; I haven't read 'em all...)
EDIT:
I now have a different understanding (thanks to talking to Richard elsewhere). The point of the zombie argument, in this understanding, is to distinguish "the hard problem of consciousness" from other problems (especially, the neurological problem). Eliezer argues by identifying belief in Zombies with epiphenomenalism; but this seems to require the wrong form of "possible".
If the zombie argument is meant to establish that given an explanation for the neurological problem, we would still need an explanation for the hard problem, then the notion of "possible" that is relevant is "possible given a theory explaining neurological consciousness". The zombie argument relies on our intuitions to conclude that, given such a theory, we could still not rule out philosophical zombies.
This does not imply epiphenomenalism because it does not imply that zombies are causally possible. It only argues the need for more statements to rule them out.
That said-- if Eliezer is simply denying the intuition that the zombie argument relies on (the intuition that there is something about consciousness that would be left unexplained after we had a physical theory of consciousness, so that such a theory leaves open the possibility of zombies), then that's "fair game".
Replies from: Bugmaster↑ comment by Bugmaster · 2012-04-09T20:17:26.436Z · LW(p) · GW(p)
So, for a more complete argument, he could have directly addressed the idea of the "hard problem of consciousness" and its relationship to empirical science.
He could have, but, logically speaking, he doesn't need to. If he rejects the premise A1, he can then reject the conclusion as well, even if A2 is logically valid -- since rejecting A1 renders the argument unsound.
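The logical structure at issue is just modus ponens. As a minimal sketch in Lean (the propositions P and Q are stand-ins invented here for A1's claim and the conclusion; nothing in the thread names them this way):

```lean
-- P stands for A1: "science can say nothing about subjective experience"
-- Q stands for the conclusion: "science leaves open the possibility of zombies"
-- The inference A2 : P → Q is valid, but without the premise a1 : P,
-- Q cannot be derived -- which is Bugmaster's validity/soundness point.
example (P Q : Prop) (a1 : P) (a2 : P → Q) : Q := a2 a1
```

Dropping `a1` from the context leaves `Q` underivable, so rejecting A1 blocks the conclusion without any need to dispute A2.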
comment by FeepingCreature · 2012-12-09T23:26:25.358Z · LW(p) · GW(p)
Late note, and apologies if this is obvious: could Chalmers' missing piece be that his epiphenomenal-Chalmers is actually the model Chalmers has of himself? Ie. not that dual-Chalmers causes a physical effect, or that they're causally distinct, but that the physics of physical-Chalmers' cognition cause the epiphenomenal-Chalmers-model to be created in Chalmers' physical head? And that that's the reason physical-Chalmers talks about consciousness? (Which would make zombieChalmers correct, of course) And that it looks like there'd be an epiChalmers because Chalmers doesn't correctly identify modelChalmers as a product of physicalChalmers' cognitive algorithm?
In other words, he'd need to read How an Algorithm Feels on the Inside.
comment by MrLovingKindness · 2013-01-31T18:13:57.759Z · LW(p) · GW(p)
The following paragraph from the article is not a sound argument against epiphenomenalism.
If you can close your eyes, and sense yourself sensing—if you can be aware of yourself being aware, and think "I am aware that I am aware"—and say out loud, "I am aware that I am aware"—then your consciousness is not without effect on your internal narrative, or your moving lips.
The above argument is conflating a you, some kind of agent which can cause thinking, moving lips, etc., with consciousness, which does not necessarily have any agency. As I understand it, modern neurological research has some pretty convincing evidence that there is no you controlling the show.
If consciousness has no agency, then that is consistent with epiphenomenalism. Perhaps substituting "consciousness" for "you", where "you" is either stated or implied, will make it clearer.
If consciousness can close eyes, and consciousness can sense consciousness sensing—if consciousness can be aware of consciousness being aware, and consciousness can think "I am aware that I am aware"—and consciousness can say out loud, "I am aware that I am aware"—then consciousness is not without effect on internal narrative, or moving lips.
The above argument simply asserts that if consciousness can cause thinking or saying, then it affects the physical world, which is a tautology, because thinking and saying are physical phenomena. In order to argue against epiphenomenalism, you would have to show that consciousness can cause thinking or saying.
Replies from: AkCarl↑ comment by AkCarl · 2013-06-24T13:14:24.340Z · LW(p) · GW(p)
It seems to me that if research shows that there is no "you" running the show, and consciousness has no agency, then the current state of affairs in the universe is not only consistent with the idea of epiphenomenal consciousness, but also with the idea that consciousness is nonexistent.
comment by WedgeOfCheese (DiamondSoul) · 2014-09-22T17:45:40.246Z · LW(p) · GW(p)
comment by 3p1cd3m0n · 2015-01-10T16:15:56.946Z · LW(p) · GW(p)
If I understand correctly, Yudkowsky finds philosophical zombies implausible because they would require consciousness to have no causal influence on reality. On his view, this entails that if there were philosophical zombies, it would be purely coincidental that accurate discussions of consciousness are conducted by beings who are conscious—which is very improbable, and thus philosophical zombies are very implausible. This reasoning seems flawed: discussing and thinking about consciousness could cause consciousness to exist, while that consciousness had no effect on anything else. For philosophical zombies to exist, thinking about consciousness would only need to bring about consciousness in certain substrates.
comment by Document · 2015-03-24T23:24:25.960Z · LW(p) · GW(p)
I would seriously nominate this as the largest bullet ever bitten
Why would anyone bite a bullet that large?
No, the drive to bite this bullet
This has bugged me for a while: is there a definition of "biting" or "dodging" a "bullet"? It seems to be used here in a way exactly opposite how I've seen it used elsewhere.
Replies from: None↑ comment by [deleted] · 2015-03-24T23:36:35.173Z · LW(p) · GW(p)
"Biting a bullet" means taking a position you would not otherwise have wanted to take, out of necessity. Maybe something you are uncomfortable with, but which the logic demands is better than the alternative, etc.
"Dodging a bullet" is totally unrelated and means there was a close call. Maybe some argument appeared to undo everything except for one accidental and unexpected technicality, and it very nearly went the other way.
Replies from: Document↑ comment by Document · 2015-03-24T23:51:34.514Z · LW(p) · GW(p)
Interesting. It sounds like "dodging" and "swallowing" are equally misused in Science Doesn't Trust Your Rationality, but in different ways.
comment by Alex_Arendar · 2015-12-02T14:52:30.162Z · LW(p) · GW(p)
Eliezer, I am wondering why you bother disputing with people who profess the zombie argument :) Do you hope that some of them will change their way of thinking? I can hardly believe they visit this site often. In general, have you personally seen someone who seriously operates by things like the zombie argument transform into a more rational type of person?
Replies from: entirelyuseless↑ comment by entirelyuseless · 2015-12-02T17:08:38.941Z · LW(p) · GW(p)
Scott Aaronson says here that the zombie argument that science cannot explain consciousness is completely convincing to him. Do you find Aaronson especially lacking in rationality?
Replies from: Morendil, Alex_Arendar, Alex_Arendar↑ comment by Morendil · 2015-12-02T19:25:56.184Z · LW(p) · GW(p)
Where in that (long) post does he say that?
Replies from: gjm↑ comment by gjm · 2015-12-02T22:58:01.674Z · LW(p) · GW(p)
In the paragraph beginning "The most obvious thing". But it is worth reading the paragraphs that follow. He says it's "perfectly reasonable" to reject that argument on the basis that the "hard problem" (as Chalmers calls it) is mere sophistry -- that being roughly what I think most people here on LW would be inclined to do. But he objects to the combination of (1) doing that with (2) saying that some theory in neuroscience will solve the "hard problem".
That seems to me like a reasonable objection, but I'm not sure his diagnosis is correct; I suspect at least some of the people he's objecting to actually (1) say that the "hard problem" is mere sophistry but (2) say that some theory in neuroscience gives an answer to the question "what is consciousness?" that doesn't involve that sort of sophistry; an answer not to the question "what is this further extra thing that constitutes consciousness, above and beyond people's behaviour and how their brains work?" but to "what exactly is it about people's behaviour and how their brains work that constitutes this thing we call consciousness?".
He goes on to accept that this sort of question is reasonable, and in fact that's the question he focuses on in the rest of what he writes.
↑ comment by Alex_Arendar · 2015-12-02T22:55:16.692Z · LW(p) · GW(p)
Honestly, I don't know who Scott Aaronson is, so it's hard for me to say. And regarding the claim that science cannot explain consciousness—it probably cannot explain it exhaustively YET. But the same was true of a lot of other things in the past, which were not explainable at some moment in time but were explained completely clearly after some work had been done. So be patient; science will explain consciousness some day as well (at least I want to believe this).
↑ comment by Alex_Arendar · 2015-12-02T22:57:06.012Z · LW(p) · GW(p)
Now I know who Scott Aaronson is, so your comment was useful.
comment by curiousone_duplicate0.944435470947067 · 2017-07-06T10:57:00.472Z · LW(p) · GW(p)
The said Chalmersian theory postulates multiple unexplained complex miracles. This drives down its prior probability, by the conjunction rule of probability and Occam's Razor. It is therefore dominated by at least two theories which postulate fewer miracles, namely:
Substance dualism: There is a stuff of consciousness which is not yet understood, an extraordinary super-physical stuff that visibly affects our world; and this stuff is what makes us talk about consciousness.
Not-quite-faith-based reductionism: That-which-we-name "consciousness" happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious. Your intuition that no material substance can possibly add up to consciousness is incorrect. If you actually knew exactly why you talk about consciousness, this would give you new insights, of a form you can't now anticipate; and afterward you would realize that your arguments about normal physics having no room for consciousness were flawed.
The second theorie seems odd to me. Since it seems to postulate a solution that will happen in the future, of which we have no possible knowledge right now. And therefore any counter argument (like the mentioned intuition) fails, because as soon as the solution is revealed, of course all that opposes it is proven flawed.
Which means that this position is quite bullet proof. Anything opposing it is automatically wrong, we just need to wait until we can see that for ourselves. I have a slight unwillingness to follow that instruction.
But okay: What gives the permission to put such an amount of trust into the field of physics? Mentioned is that this situation happened "three thousand times" before: That people would see no solution (or rather wrong solutions) until physics cleared the case. That is true enough to me. But does it give the possibility to anticipate it happening in the future? On a topic that has not been cleared three thousand times before by physics?
But as we can see in the quote, the arguments against "normal physics" being incapable of the solution are invalid - will be proven invalid - too! For the then "new" physics must be of a completely new structure. Which cannot be anticipated now as well.
I can see how this argument is perfectly bullet proof. But I still don't trust it, and that's because it's so bulletproof. With this structure of "It will be proven in the future!" I can make anything bullet proof.
So: Does our case have any special properties so that it is more fit than "anything" to be made bullet proof? The only possibility I see would be: "It is more probable to be true". This "three thousand times"-sentence is a way to make it look probable. So we are now looking at the question if we rate the solution of the problem of consciousness through physics in the future more probable than - for example - the also mentioned substance dualism. Now how could we get any useful measurement on the probability of something we have not the slightest amount of knowledge about? If it was, as proposed, the fact that physics cleared not understood cases before, that would also count for any other well working and developed system such as psychology, philosophy, biology, mathmatics, etc. We could claim the soon-to-be-there-solution for any theorie claiming to be close to it.
Therefore it doesn't seem to be a useful theory to me. When it's appliable to more or less anything, how can we know where it is applied correctly? All I can see in this case is the intuition that material substance CAN possibly add up to consciousness. And why would this intuition be more reliable than the one opposing it?
comment by curiousone · 2018-02-04T18:18:37.077Z · LW(p) · GW(p)
The said Chalmersian theory postulates multiple unexplained complex miracles. This drives down its prior probability, by the conjunction rule of probability and Occam's Razor. It is therefore dominated by at least two theories which postulate fewer miracles, namely:
Substance dualism: There is a stuff of consciousness which is not yet understood, an extraordinary super-physical stuff that visibly affects our world; and this stuff is what makes us talk about consciousness.
Not-quite-faith-based reductionism: That-which-we-name "consciousness" happens within physics, in a way not yet understood, just like what happened the last three thousand times humanity ran into something mysterious. Your intuition that no material substance can possibly add up to consciousness is incorrect. If you actually knew exactly why you talk about consciousness, this would give you new insights, of a form you can't now anticipate; and afterward you would realize that your arguments about normal physics having no room for consciousness were flawed.
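(The "conjunction rule" invoked in the quoted passage is just the standard fact that adding postulates can only lower a theory's prior. As a sketch, for any postulates $A_1, \dots, A_n$,
$$P(A_1 \wedge A_2 \wedge \dots \wedge A_n) \;\le\; \min_i P(A_i),$$
so a theory that needs $n$ roughly independent unexplained "miracles" starts with a prior of at most $\prod_{i=1}^{n} P(A_i)$, which shrinks with every added miracle.)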
The second theory seems odd to me, since it seems to postulate a solution that will happen in the future, of which we can have no knowledge right now. Therefore any counter-argument (like the mentioned intuition) fails, because as soon as the solution is revealed, of course everything that opposed it is proven flawed.
Which means that this position is quite bulletproof: anything opposing it is automatically wrong, and we just need to wait until we can see that for ourselves. I have a slight unwillingness to follow that instruction.
But okay: what gives us permission to put such an amount of trust in the field of physics? It is mentioned that this situation has happened "three thousand times" before: people would see no solution (or rather wrong solutions) until physics cleared the case. That is true enough for me. But does it let us anticipate it happening in the future, on a topic that has not been cleared three thousand times before by physics?
And as we can see in the quote, the arguments that "normal physics" is incapable of the solution are invalid - will be proven invalid - too! For the then-"new" physics must be of a completely new structure, which cannot be anticipated now either.
I can see how this argument is perfectly bulletproof. But I still don't trust it, and that's precisely because it's so bulletproof. With this structure of "It will be proven in the future!" I can make anything bulletproof.
So: does our case have any special properties that make it more fit than "anything" to be made bulletproof? The only possibility I see would be: "It is more probable to be true." The "three thousand times" sentence is a way to make it look probable. So we are now asking whether we rate a future solution of the problem of consciousness through physics as more probable than - for example - the also-mentioned substance dualism. But how could we get any useful measurement of the probability of something we have not the slightest knowledge about? If it were, as proposed, the fact that physics has cleared not-understood cases before, that would also count for any other well-working and developed system such as psychology, philosophy, biology, mathematics, etc. We could claim the soon-to-be-there solution for any theory claiming to be close to it.
Therefore it doesn't seem to be a useful theory to me. When it's applicable to more or less anything, how can we know where it is applied correctly? All I can see in this case is the intuition that material substance CAN possibly add up to consciousness. And why would this intuition be more reliable than the one opposing it?
Replies from: curiousone↑ comment by curiousone · 2018-02-05T14:57:27.848Z · LW(p) · GW(p)
I see that people have rated my comment above negatively. I hope it isn't offensive or anything, for that was not my intention; if there is a mistake in it, I would like to know about it and learn from it!
Replies from: habryka4↑ comment by habryka (habryka4) · 2018-02-05T18:50:02.675Z · LW(p) · GW(p)
Your argument mostly just strikes me as logically flawed. There are clear and easy ways of falsifying the hypothesis that "the process of natural physics will in short order explain all the things you find mysterious": namely, every second that passes without physics doing so is evidence against that theory.
The argument that Eliezer makes is that physics has a strong enough track record that quite a few seconds will need to pass before you should really consider alternative hypotheses.
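A toy Bayesian sketch of that track-record argument (all the numbers here are made up for illustration, not measurements):

```python
# Two hypotheses about mysteries eventually getting physical explanations:
#   H1: physics suffices (physicalism broadly right)
#   H2: something outside physics is needed
p_h1, p_h2 = 0.5, 0.5        # even priors, purely illustrative
lik_h1, lik_h2 = 0.99, 0.5   # assumed chance, under each hypothesis, that any
                             # one past mystery ends up physically explained

# Update on 20 past mysteries, each of which physics did eventually explain.
for _ in range(20):
    p_h1 *= lik_h1
    p_h2 *= lik_h2

# Normalize to get posteriors.
total = p_h1 + p_h2
p_h1, p_h2 = p_h1 / total, p_h2 / total

print(f"posterior for 'physics suffices': {p_h1:.6f}")
```

Even a modest per-case edge, compounded over a long track record, pushes the posterior very close to 1; that is why waiting "quite a few seconds" is the rational response, even though each further unexplained second does push a little the other way.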
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2018-02-05T19:25:53.382Z · LW(p) · GW(p)
I definitely second this response, but want to add the following nitpick:
It’s not quite time passing that’s the metric here, I think, but rather effort invested into the attempt to explain a thing (with physics). (Suppose that tomorrow, all humanity suddenly lost all interest in explaining how consciousness works, and abandoned all work on the problem. A thousand years could pass thus, and yet we wouldn’t thereby learn anything too interesting about how explainable-by-physics consciousness is.)
Replies from: habryka4, curiousone↑ comment by habryka (habryka4) · 2018-02-05T19:57:56.433Z · LW(p) · GW(p)
Ah, yes. I agree. Effort invested is more accurate.
Replies from: curiousone↑ comment by curiousone · 2018-02-05T20:16:00.212Z · LW(p) · GW(p)
Thanks for the replies!
So Eliezer basically says to me (as the reader) that physics has solved so many problems in the past ("track record") that I should really give it some time before I start to doubt and search for other explanations. Do I have this right?
So: How much time would you recommend as an appropriate waiting time; and why? How much is "quite a few seconds"?
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2018-02-05T21:46:36.608Z · LW(p) · GW(p)
Concerning the track record of physics (or “science” more generally)—my philosophy professor in college, Robert Lurz, had a wonderful analogy / intuition pump about this (which I will slightly extend here).
There is a sort of children’s puzzle, which consists of a set of flat plastic or wooden pieces, all of abstract geometric shapes, that fit together in various ways along the edges; and also a set of cards, which have, on the obverse side, an outline shape—that’s the puzzle—and on the reverse side, the solution to the puzzle—i.e., the same outline, but filled in to show how the pieces may be fitted together to form the desired outline.
So you take a puzzle card from the box, and you look at the obverse side, and you look at the pieces, and you say to yourself: gosh, I just can’t see how these pieces could possibly fit together to form this shape. You turn the pieces this way and that, but you can’t make it work; so you conclude that there’s got to be some mistake—perhaps a manufacturing defect, or maybe the wrong puzzle cards were mistakenly put into the box with the wrong pieces.
Then you turn the card over, to the side with the solution, and—Ohhh! So they go together like that! I wouldn’t have guessed… Yes, the solution works, and is obvious in retrospect.
Reassured, you take the next puzzle card from the box, and this time you give it a good deal of thought. You turn the pieces this way and that, you rotate the puzzle card, but… you just can’t make it work. You know, I bet this particular card was mis-printed, you think; the other one had a solution, but this one, well, I just don’t see how it possibly could…
And then you turn the card over, and—Ohhh…! Like that… and that one goes there… wow, yeah, that makes sense.
Reassured, you take the third puzzle card from the box…
…
…
… you take the fifteenth puzzle card from the box, and this one is a real stumper. You spend days on it. You call your friends for help. There just isn’t any possible way those darn pieces could fit together in the way they’d need to, to make this confounded shape! In fact, you think you can even prove that it’s impossible… well, you think you know what a proof would look like, anyway… you can see it, vaguely, in your mind. Yes, the last fourteen cards all turned out to have solutions, but this one has just got to be a misprint, or something…
Of course, in our case, we can’t turn the cards over to reveal the true solution; we’ve no recourse but to keep looking for it on our own. And yet—has there ever not been a solution? That is: have we ever encountered a problem which “science” (i.e. the physics-based, materialistic, reductionist [in the LW sense, not the ‘naive’ sense] view of the world) could definitely not solve—but that some other approach could?
A final point:
It’s not just that our current approach has a good track record. It has a perfect track record, and all other approaches have a perfect track record of failure.
Replies from: curiousone↑ comment by curiousone · 2018-02-06T10:07:09.861Z · LW(p) · GW(p)
Thanks again for the answer! I understand the analogy to my problem like this: in our case, we have the brain and consciousness as the pieces of the puzzle, and the explanation of consciousness as based on the brain as the solution. But we cannot see the solution as easily as by flipping a card, for it has not yet been found.
Now, I wonder at this: when I am solving this children's puzzle and am, just as in your example, sure that it has no solution, it is quite possible that the puzzle card really doesn't have one. For example, the game designer could have made one card unsolvable, or there could be a mistake, as I might assume. And there are actually ways to prove such problems solvable or unsolvable, with proofs that are not just vaguely in the mind. But in our consciousness problem, we have only the vague intuition of a proof, for the real proof is yet to be revealed. So we obviously need to trust in the solvability of our problem (through physics) from the very beginning.
It seems to me that one argument against that trust might be the disanalogy between (1) the differences among the problems on the cards and (2) the differences between physical problems and our problem. The cards are all of the same kind; they present the same form of problem. Whereas physics usually takes care of the natural laws affecting the world around us, not the structures of consciousness “in ourselves”. So one might say that the track record is set on a different track than the track currently in question. Also, even solving hundreds of cards does not lead to knowledge about the solvability of the next one unless one finds mathematical ways of proving it. And it is such proof that my trust would rely on, not what was found in the past.
But you state that there has not been a single problem that “science” in the mentioned sense did not solve (except our problem, obviously). Even more fascinating: every other approach to solutions ever made has failed. I am really impressed by your knowledge capacity. But I must admit that I'm not entirely persuaded here. I mean: anyone can state that; but can you prove it, too?
Replies from: curiousone↑ comment by curiousone · 2018-02-13T18:38:37.105Z · LW(p) · GW(p)
Well, how should I interpret this? One week without an answer to my questions. Is there no answer? And - if that is so - is the theory proposed by Eliezer Yudkowski here not right?
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2018-02-13T19:33:40.469Z · LW(p) · GW(p)
Eliezer Yudkowski
It’s “Yudkowsky”, fyi.
As for your questions… to be honest, they are rather repetitive, and cover well-trod ground—both in that you’re going over things that have already been addressed in this very thread, and also recapitulating arguments about, e.g., the “problem of induction”, of which so much has been said, over so many years (and decades, and indeed centuries), that to bring it up as if it’s a fresh concern is… not, shall we say, a productive use of anyone’s time.
To inspire further interest, among the commentariat here, in discussing this with you, you would have to, at least, show that you’re already well-familiar with existing commentary on relevant philosophical topics.
Replies from: curiousone↑ comment by curiousone · 2018-02-14T15:00:07.715Z · LW(p) · GW(p)
Thanks for answering again! And thanks for correcting my misspelling.
Okay. So, I read the whole thread, and I did not find the answers to the questions I asked you. If these questions have been solved, they are not fresh (obviously), but they are fresh to me. Of course, you can say that explaining anything to me is not worth your – or anyone's – time (for whatever reason). But you did answer once again. So why did you tell me all that, instead of answering my questions that are – to you – already solved (or telling me where to find the solutions)?
Replies from: curiousone↑ comment by curiousone · 2018-02-15T10:50:45.161Z · LW(p) · GW(p)
To just try to state what I understood so far (and hopefully thereby inspire further interest): in the comments section of the post on “a priori”, Eliezer Yudkowsky claims to be a “material monist”. That would mean he thinks there is only matter, and that anything that could be described as “non-material” must therefore actually be material. Which fits the section of this “Zombies” post that I commented on in the first place. The argumentation seems to be as follows: the world can be described using physical laws, and one does not need any “mind” or “consciousness” to explain why – for example – the lips of a human move. There is causality, from the processes in the brain to the muscles in the lips, that explains why the lips have to move as they do. And since this causal chain starts with something we might call “thought” in our normal language, and that starting link needs to influence the next link, it must be material and within the laws of physics as well. That means that – although we do not yet know the exact form of that physical “thought” property – we are allowed to take it as given.
What we have to presuppose is that the only possible influence on a physical object is from another physical object. Of course, some sorts of “dualists” would never let that pass: if “thought” had no influence on the physical world, that would go against our experience that, for example, we think and our body follows those “orders”. So thought must have an influence on the material world.
That's exactly the section of Eliezer Yudkowsky's “Zombies” post that I commented on above: he presents “substance dualism”, where we have a not-yet-understood “thought” affecting our world, and he presents “not-quite-faith-based reductionism”, similar to “material monism”, where we have a not-yet-understood “material substance”. One relies on the intuition that no material substance can possibly add up to consciousness, the other on the intuition that material substance can possibly add up to consciousness.
So: which intuition is more reliable? Both admit that there is some kind of difference between “thought” and “material substance” at first. Then dualism says that this difference is unbridgeable, while monism states the opposite (material monism stating that the bridge would be built from “material substance” to “thought”). What kind of difference do we encounter here? Why is “thought” not the same as “material substance” right away? Because we cannot see, touch, or generally sense “thought” with our sense perception, whereas we can sense “material substance”. We can say that “material substance” must follow laws, whereas “thought” has a degree of freedom. “Material substance” is three-dimensional, whereas “thought” is not.
Of course, the material monists' answer to that could be that all these properties of “thought” are not what they seem to be. But, as I have seen here, the argument of the material monists Eliezer Yudkowsky, habryka, and Said Achmiz is that they do not know how “thought” is actually “material substance”, but trust physics to solve that question in the future because of its “track record”. And besides that, no other reason has arisen.
In the comments section here, mitchell_porter2 pointed (10 years ago) at Bertrand Russell's “The Problems of Philosophy”, chapter IX. There, Russell points at Plato's theory of forms, stating that not only “material substance” has being. So, when I adopt a neutral position, I still have both sides standing in front of me, even in this quite material-monistic commentariat.
This is about the very foundation of the mindset of material monism. If these repetitive questions of mine really do cover well-trod ground, as I was told above, and so much has been said already – did all of that solve the questions, or am I repeating them because they're still valid?
↑ comment by curiousone · 2018-02-05T20:35:41.489Z · LW(p) · GW(p)
Thanks for the comment. I had assumed that when someone argues that physics will reveal something after a period of time, physicists must of course put effort into their work for that to happen. But it is better to actually state it.
Do you think that, when we replace "time passed" with "effort invested", there is any way to tell "now enough effort has been invested without any outcome, so we have to look for another solution"?
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2018-02-05T21:19:28.748Z · LW(p) · GW(p)
It’s a matter of judgment, I should think; and whether we ought to look for a solution elsewhere seems to me to depend on three variables:
- How much effort has been invested already, in the existing direction of research (i.e., physics, materialism, science—which is, of course, a very broad category of effort).
- How important it is that the problem be solved. We’ve gone several thousand years without “solving” the “Hard Problem of consciousness” (though we’ve made what seems to me like quite a bit of progress); is it terribly urgent that we solve it ASAP?
- How likely it seems that whatever alternate approach is available, will make any real progress (and how much it costs us to engage in such an approach). (Of course, this consideration suggests that it might be profitable to look for heretofore-unknown approaches—but then again, it might not be. Unknown unknowns, and all that.)
From an epistemic perspective—what we should think about whether the problem will be solved at some point if we continue on our current course—only point #1 really matters. From an instrumental perspective—what, if anything, should we do, or what should we change about what and how we do things—points #2 and #3 seem to me to be at least as important.
Replies from: curiousone↑ comment by curiousone · 2018-02-06T10:01:45.635Z · LW(p) · GW(p)
Thanks for the answer! So my judgement should follow these questions you propose. Now I ask myself: “There seems to be much effort invested in explaining the hard problem of consciousness through physics. Does that make sense?” But I need to find out (1) HOW much effort has ACTUALLY been invested already, (2) HOW important it is to find a solution there, and (3) WHICH alternate approaches are available. Right?
But how do you measure effort? And why is it important to know how much has already been invested? I don't understand that yet...
comment by bvoq · 2020-05-28T12:33:45.221Z · LW(p) · GW(p)
I think of consciousness more like the heat given off by a motor. The motor does the work, and consciousness is a by-product of the motor. If the motor accelerates, then the consciousness will think it has caused the acceleration, even if it did not. Modern neuroscience hints that decisions are made before we are conscious of them.
comment by philanthropy · 2023-11-29T00:41:18.643Z · LW(p) · GW(p)
I find it likely that you will never read this, and probably no one else on this very old thread will either, but I will give a statement that you may like, since I believe you are actually correct in framing this issue:
Firstly, I think zombies are unavoidable unless you are a flat-out dualist, and that Chalmers framing this thing called the Hard Problem was a way of getting his own question answered. That also means you can't really say the Hard Problem is answerable from his framing either, with zombies being the true point of it. It's already set up for failure via a category error of its own, from someone who is basically using it to get a circular argument out of anyone who didn't understand the logic to begin with. So if you are not simply a dualist, you can create this problem for any ideology, which is basically silly; you can say that any non-physicalism would also have a zombie-effect counterpart, unanswerable not on reductive grounds but still on conceptual grounds. Which is a problem with conceptual arguments.
Secondly, Penrose is not a dualist. He is basically a physicalist, though in his book Shadows of the Mind he refers to his view as “Three Worlds”, based on a Platonic version of Karl Popper's metaphysics. It is not dualism, however, and his version of quantum state reduction places the reduction inside the physical world, using a version of gravity and quantum oscillators that randomize non-computable events, so it is basically still a physicalist view of consciousness. As I understand it, his view does not even take the Hard Problem into consideration and never mentions it, and by the end of his book the idea itself was unfinished. Whether it is coherent enough to be truly tested or understood as a theory is something else (which only some physicalists consider). The confusion over dualism, I think, stems from the fact that everyone basically seems not to understand the idea, or never read the book fully, and from how confusing the book is. And Hameroff, with whom he was working on this, also referred to the idea as an “identity theory” of consciousness, which has perhaps added to the confusion between Hameroff's and Penrose's statements.