Zombies Redacted

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2016-07-02T20:16:33.687Z · LW · GW · Legacy · 172 comments

I looked at my old post Zombies! Zombies? and it seemed to have some extraneous content.  This is a redacted and slightly rewritten version.


Your "zombie", in the philosophical usage of the term, is putatively a being that is exactly like you in every respect—identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion—except that your zombie is not conscious.

It is furthermore claimed that if zombies are "conceivable" (a term over which battles are still being fought), then, purely from our knowledge of this "conceivability", we can deduce a priori that consciousness is extra-physical, in a sense to be described below.

See, for example, the SEP entry on Zombies.  The "conceivability" of zombies is accepted by a substantial fraction, possibly a majority, of academic philosophers of consciousness.


I once read somewhere, "You are not the one who speaks your thoughts—you are the one who hears your thoughts".

If you conceive of "consciousness" as a quiet, passive listening, then the notion of a zombie initially seems easy to imagine.  It's someone who lacks the inner hearer.

Sketching out that intuition in a little more detail:

When you open a refrigerator and find that the orange juice is gone, you think "Darn, I'm out of orange juice."  The sound of these words is probably represented in your auditory cortex, as though you'd heard someone else say it.

Why do I think the sound of your inner thoughts is represented in the auditory cortex, as if it were a sound you'd heard?  Because, for example, native Chinese speakers can remember longer digit sequences than English speakers.  Chinese digits are all single syllables, and so Chinese speakers can remember around ten digits, versus the famous "seven plus or minus two" for English speakers.  There appears to be a loop that repeats sounds back to yourself, a size limit on working memory in the auditory cortex, and it is genuinely phoneme-based.
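That loop can be turned into a crude back-of-the-envelope model (the "phonological loop" account of digit span; all durations below are illustrative assumptions, not measured values):

```python
# Rough sketch of the phonological-loop account of digit span:
# span ~= seconds of sound the rehearsal loop holds / seconds per digit.
# The loop length (~2 s) and the per-digit articulation times are
# illustrative assumptions, not measurements.

LOOP_SECONDS = 2.0  # assumed capacity of the rehearsal loop

def digit_span(seconds_per_digit: float) -> int:
    """How many digits fit in the loop at a given articulation rate."""
    return round(LOOP_SECONDS / seconds_per_digit)

# Chinese digits are single short syllables; English digits average longer
# (e.g. "seven" is two syllables).
chinese_span = digit_span(0.2)   # assumed ~200 ms per spoken digit
english_span = digit_span(0.3)   # assumed ~300 ms per spoken digit

print(chinese_span, english_span)  # prints: 10 7
```

Shorter syllables, more digits in the same loop; that is the whole effect.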

It's not only conceivable in principle, but quite possibly feasible within the next couple of decades, that surgeons will lay a network of neural taps over someone's auditory cortex and read out their internal narrative.  Researchers have already tapped the lateral geniculate nucleus of a cat and reconstructed recognizable visual inputs.

So your zombie, being physically identical to you down to the last atom, will open the refrigerator and form auditory cortical patterns for the phonemes "Darn, I'm out of orange juice".  On this point, p-zombie advocates agree.

But in the Zombie World, allegedly, there is no one inside to hear; the inner listener is missing.  The internal narrative is spoken, but unheard.  You are not the one who speaks your thoughts, you are the one who hears them.

The Zombie Argument is that if the Zombie World is possible—not necessarily physically possible in our universe, just "possible in theory", or "conceivable"—then consciousness must be extra-physical, something over and above mere atoms.  Why?  Because even if you knew the positions of all the atoms in the universe, you would still have to be told, as a separate and additional fact, that people were conscious—that they had inner listeners—that we were not in the Zombie World.

The technical term for the belief that consciousness is there, but has no effect on the physical world, is epiphenomenalism.

Though there are other elements to the zombie argument (I'll deal with them below), I think that the intuition of the inner listener is what first persuades people to zombie-ism.  The core notion is simple and easy to access:  The lights are on but nobody's home.

Philosophers are appealing to the intuition of the quiet, passive inner listener when they say "Of course the zombie world is imaginable; you know exactly what it would be like."

But just because you don't see a contradiction in the Zombie World at first glance doesn't mean no contradiction is there.  Seeing no internal contradiction yet within some set of generalizations is no guarantee that you won't see one in another 30 seconds.  "All odd numbers are prime.  Proof:  3 is prime, 5 is prime, 7 is prime..."
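The failure mode is easy to make concrete: a generalization can pass every case you happen to check and still be false.  A minimal sketch:

```python
def is_prime(n: int) -> bool:
    """Trial-division primality test."""
    if n < 2:
        return False
    return all(n % d != 0 for d in range(2, int(n ** 0.5) + 1))

# "All odd numbers are prime" survives the cases we happened to check...
assert all(is_prime(n) for n in [3, 5, 7])

# ...but fails on odd numbers we didn't think to test.
counterexamples = [n for n in range(3, 20, 2) if not is_prime(n)]
print(counterexamples)  # prints: [9, 15]
```

Nothing about checking 3, 5, and 7 ruled out the counterexample sitting at 9.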

So let us ponder the Zombie Argument a little longer:  Can we think of a counterexample to the assertion "Consciousness has no third-party-detectable causal impact on the world"?

If you close your eyes and concentrate on your inward awareness, you will begin to form thoughts, in your internal narrative, along the lines of "I am aware" and "My awareness is separate from my thoughts" and "I am not the one who speaks my thoughts, but the one who hears them" and "My stream of consciousness is not my consciousness" and "It seems like there is a part of me which I can imagine being eliminated without changing my outward behavior."

You can even say these sentences out loud.  In principle, someone with a super-fMRI could probably read the phonemes right out of your auditory cortex; but saying it out loud removes all doubt about whether you have entered the realms of physically visible consequences.

It certainly seems that the inner listener is being caught in the act of listening, by whatever part of you writes the internal narrative: a causally potent neural pattern in your auditory cortex, which can eventually move your lips and flap your tongue.

Imagine that a mysterious race of aliens visit you, and leave you a mysterious black box as a gift.  You try poking and prodding the black box, but (as far as you can tell) you never elicit a reaction.  You can't make the black box produce gold coins or answer questions.  So you conclude that the black box is causally inactive:  "For all X, the black box doesn't do X."  The black box is an effect, but not a cause; epiphenomenal, without causal potency.  In your mind, you test this general hypothesis to see if the generalization is true in some trial cases, and it seems to be true in every one—"Does the black box repair computers?  No.  Does the black box boil water?  No."

But you can see the black box; it absorbs light, and weighs heavy in your hand.  This, too, is part of the dance of causality.  If the black box were wholly outside the causal universe, you wouldn't be able to see it; you would have no way to know it existed; you could not say, "Thanks for the black box."  You didn't think of this counterexample, when you formulated the general rule:  "All X: Black box doesn't do X".  But it was there all along.

(Actually, the aliens left you another black box, this one purely epiphenomenal, and you haven't the slightest clue that it's there in your living room.  That was their joke.)

If something has no causal effect, you can't know about it.  The territory must be causally entangled with the map for the map to correlate with the territory.  To 'see' something is to be affected by it.  If an allegedly physical thing or property has absolutely no causal impact on the rest of our universe, there's a serious question about whether we can even talk about it, never mind justifiably knowing that it's there.

It is a standard point—which zombie-ist philosophers accept!—that the Zombie World's philosophers, being atom-by-atom identical to our own philosophers, write identical papers about the philosophy of consciousness.

At this point, the Zombie World stops being an intuitive consequence of the idea of an inner listener.

Philosophers writing papers about consciousness would seem to be at least one effect of consciousness upon the world.  You can argue clever reasons why this is not so, but you have to be clever.  You are no longer playing straight to the intuition.

Let's say you'd never heard of the Zombie World and never formed any explicit generalizations about how zombies are supposed to exist.  The thought might spontaneously occur to you that, as you stand and watch a beautiful sunset, your awareness of your awareness could be subtracted from you without changing your outward smile.  But then ask whether you still think "I am aware of my inner awareness", as a neural pattern in your auditory cortex, and then say it out loud, after the inner awareness has been subtracted.  I would not expect the generalization "my inner awareness has no effect on physical things" to still seem intuitive past that point, if you'd never been explicitly indoctrinated with p-zombieism.

Intuitively, we'd suppose that if your inward awareness vanished, your internal narrative would no longer say things like "There is a mysterious listener within me," because the mysterious listener would be gone and you would not be thinking about it.  It is usually immediately after you focus your awareness on your awareness that your internal narrative says "I am aware of my awareness"; which suggests that if the first event never happened again, neither would the second.

Once you see the collision between the general rule that consciousness has no effect, and the specific implication that consciousness has no effect on how you think about consciousness (in any way that affects your internal narrative that you could choose to say out loud), zombie-ism stops being intuitive.  It starts requiring you to postulate strange things.

One strange thing you might postulate is that there's a Zombie Master, a god within the Zombie World who surreptitiously takes control of zombie philosophers and makes them talk and write about consciousness.

Human beings often don't sound all that coherent when talking about consciousness.  It might not be that hard to fake.  Maybe you could take, as a corpus, one thousand human amateurs trying to discuss consciousness; feed them into a sufficiently powerful but non-reflective machine learning algorithm; and get back discourse about "consciousness" that sounded as sensible as most humans, which is to say, not very.
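A minimal version of such a non-reflective imitator is a word-level Markov chain: it echoes the surface patterns of its corpus with no model of what any word refers to.  (The toy corpus and code below are illustrative, not a serious language model.)

```python
import random
from collections import defaultdict

def train_bigram(corpus: str) -> dict:
    """Map each word to the list of words that followed it in the corpus."""
    words = corpus.split()
    model = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        model[prev].append(nxt)
    return model

def babble(model: dict, start: str, length: int, seed: int = 0) -> str:
    """Emit a surface-plausible continuation with no understanding behind it."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length):
        followers = model.get(out[-1])
        if not followers:
            break
        out.append(rng.choice(followers))
    return " ".join(out)

# Toy corpus standing in for "a thousand amateurs discussing consciousness".
corpus = ("consciousness is mysterious and consciousness is awareness "
          "and awareness is not the brain and the brain is mysterious")
model = train_bigram(corpus)
print(babble(model, "consciousness", 6))
```

Every word it emits was copied from somewhere in the corpus; the generator contains no referent for any of them.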

But this speech about "consciousness" would not be produced within the AI.  It would be an imitation of someone else talking.  You might as well observe that you can make a video recording of David Chalmers (the most formidable advocate of zombieism) and play back the recording.  The cause that shaped the pattern of the words in the video recording was Chalmers's consciousness moving his lips; that shaping cause is merely being transmitted through a medium, like sounds passing through air.

A separate, extra Zombie Master is not what the philosophical Zombie World postulates.  It's asserting that the atoms in the brain are quark-by-quark identical, moving under exactly the same physical laws we know; there's no separate, additional Zombie Master AI chatbot making the lips move in ways copied off the real David Chalmers.  Your zombie's lips talk about consciousness for the same causal reason your lips talk about consciousness.

As David Chalmers writes:

Think of my zombie twin in the universe next door. He talks about conscious experience all the time—in fact, he seems obsessed by it. He spends ridiculous amounts of time hunched over a computer, writing chapter after chapter on the mysteries of consciousness. He often comments on the pleasure he gets from certain sensory qualia, professing a particular love for deep greens and purples. He frequently gets into arguments with zombie materialists, arguing that their position cannot do justice to the realities of conscious experience.

And yet he has no conscious experience at all! In his universe, the materialists are right and he is wrong. Most of his claims about conscious experience are utterly false. But there is certainly a physical or functional explanation of why he makes the claims he makes. After all, his universe is fully law-governed, and no events therein are miraculous, so there must be some explanation of his claims.

...Any explanation of my twin’s behavior will equally count as an explanation of my behavior, as the processes inside his body are precisely mirrored by those inside mine. The explanation of his claims obviously does not depend on the existence of consciousness, as there is no consciousness in his world. It follows that the explanation of my claims is also independent of the existence of consciousness.

Chalmers is not arguing against zombies; those are his actual beliefs!  He continues:

This paradoxical situation is at once delightful and disturbing.  It is not obviously fatal to the nonreductive position, but it is at least something that we need to come to grips with...

I would seriously nominate this as the largest bullet ever bitten in the history of time.  And that is a backhanded compliment to David Chalmers:  A lesser mortal would simply fail to see the implications, or refuse to face them, or rationalize a reason it wasn't so.

Why would anyone bite a bullet that large?  Why would anyone postulate unconscious zombies who write papers about consciousness for exactly the same reason that our own genuinely conscious philosophers do?

Not because of the first intuition I wrote about, the intuition of the quiet inner listener.  That intuition may say that zombies can drive cars or do math or even fall in love, but it doesn't say that zombies write philosophy papers about their quiet inner listeners.

No, the drive to bite this bullet comes from an entirely different intuition—the intuition that no matter how many atoms you add up, no matter how many masses and electrical charges interact with each other, they will never necessarily produce a subjective sensation of the mysterious redness of red.  It may be a fact about our physical universe (Chalmers says) that putting such-and-such atoms into such-and-such a position, evokes a sensation of redness; but if so, it is not a necessary fact, it is something to be explained above and beyond the motion of the atoms.

But if you consider the second intuition on its own, without the intuition of the quiet listener, it is hard to see why irreducibility implies zombie-ism.  Maybe there's just a different kind of stuff, apart from and additional to atoms, that is not causally passive—a soul that actually does stuff.  A soul that plays a real causal role in why we write about "the mysterious redness of red".  Take out the soul, and... well, assuming you just don't fall over in a coma, you certainly won't write any more papers about consciousness!

This is the position taken by Descartes and most other ancient thinkers:  The soul is of a different kind, but it interacts with the body.  Descartes's position is technically known as substance dualism—there is a thought-stuff, a mind-stuff, and it is not like atoms; but it is causally potent, interactive, and leaves a visible mark on our universe.

Zombie-ists are property dualists—they don't believe in a separate soul; they believe that matter in our universe has additional properties beyond the physical.

"Beyond the physical"?  What does that mean?  It means the extra properties are there, but they don't influence the motion of the atoms, like the properties of electrical charge or mass.  The extra properties are not experimentally detectable by third parties; you know you are conscious, from the inside of your extra properties, but no scientist can ever directly detect this from outside.

So the additional properties are there, but not causally active.  The extra properties do not move atoms around, which is why they can't be detected by third parties.

And that's why we can (allegedly) imagine a universe just like this one, with all the atoms in the same places, but the extra properties missing, such that every atom moves the same as before, but no one is conscious.

The Zombie World might not be physically possible, say the zombie-ists—because it is a fact that all the matter in our universe has the extra properties, or obeys the bridging laws that evoke consciousness—but the Zombie World is logically possible: the bridging laws could have been different.

But why, oh why, say that the extra properties are epiphenomenal and undetectable?

We can put this dilemma very sharply:  Chalmers believes that there is something called consciousness, and this consciousness embodies the true and indescribable substance of the mysterious redness of red.  It may be a property beyond mass and charge, but it's there, and it is consciousness.  Now, having said the above, Chalmers furthermore specifies that this true stuff of consciousness is epiphenomenal, without causal potency—but why say that?

Why say that you could subtract this true stuff of consciousness, and leave all the atoms in the same place doing the same things?  If that's true, we need some separate physical explanation for why Chalmers talks about "the mysterious redness of red".  That is, there exists both a mysterious redness of red, which is extra-physical, and an entirely separate reason, within physics, why Chalmers talks about the "mysterious redness of red".

Chalmers does confess that these two things seem like they ought to be related, but why do you need to assert two separate phenomena?  Why not just assert one or the other?

Once you've postulated that there is a mysterious redness of red, why not just say that it interacts with your internal narrative and makes you talk about the "mysterious redness of red"?

Isn't Descartes taking the simpler approach, here?  The strictly simpler approach?

Why postulate an extramaterial soul, and then postulate that the soul has no effect on the physical world, and then postulate a mysterious unknown material process that causes your internal narrative to talk about conscious experience?

Why not postulate the true stuff of consciousness which no amount of mere mechanical atoms can add up to, and then, having gone that far already, let this true stuff of consciousness have causal effects like making philosophers talk about consciousness?

I am not endorsing Descartes's view.  But at least I can understand where Descartes is coming from.  Consciousness seems mysterious, so you postulate a mysterious stuff of consciousness.  Fine.

But now the zombie-ists postulate that this mysterious stuff doesn't do anything, so you need a whole new explanation for why you say you're conscious.

That isn't vitalism.  That's something so bizarre that vitalists would spit out their coffee.  "When fires burn, they release phlogiston.  But phlogiston doesn't have any experimentally detectable impact on our universe, so you'll have to go looking for a separate explanation of why a fire can melt snow."  What?

Are property dualists under the impression that if they postulate a new active force, something that has a causal impact on physics, they will be sticking their necks out too far?

Me, I'd say that if you postulate a mysterious, separate, additional, inherently mental property of consciousness, above and beyond positions and velocities, then, at that point, you have already stuck your neck out.  To postulate this stuff of consciousness, and then further postulate that it doesn't do anything—for the love of cute kittens, why?

There isn't even an obvious career motive.  "Hi, I'm a philosopher of consciousness.  My subject matter is the most important thing in the universe and I should get lots of funding?  Well, it's nice of you to say so, but actually the phenomenon I study doesn't do anything whatsoever."

Chalmers is one of the most frustrating philosophers I know.  He does this really sharp analysis... and then turns left at the last minute.  He lays out everything that's wrong with the Zombie World scenario, and then, having reduced the whole argument to smithereens, calmly accepts it.

Chalmers does the same thing when he lays out, in calm detail, the problem with saying that our own beliefs in consciousness are justified, when our zombie twins say exactly the same thing for exactly the same reasons and are wrong.

On Chalmers's theory, Chalmers saying that he believes in consciousness cannot be causally justified; the belief is not caused by the fact itself, the way your seeing an actual sock is the cause of your saying there's a sock.  In the absence of consciousness, Chalmers would write the same papers for the same reasons.

On epiphenomenalism, Chalmers saying that he believes in consciousness cannot be justified as the product of a process that systematically outputs true beliefs, because the zombie twin writes the same papers using the same systematic process and is wrong.

Chalmers admits this.  Chalmers, in fact, explains the argument in great detail in his book.  Okay, so Chalmers has solidly proven that he is not justified in believing in epiphenomenal consciousness, right?  No.  Chalmers writes:

Conscious experience lies at the center of our epistemic universe; we have access to it directly.  This raises the question: what is it that justifies our beliefs about our experiences, if it is not a causal link to those experiences, and if it is not the mechanisms by which the beliefs are formed?  I think the answer to this is clear: it is having the experiences that justifies the beliefs. For example, the very fact that I have a red experience now provides justification for my belief that I am having a red experience...

Because my zombie twin lacks experiences, he is in a very different epistemic situation from me, and his judgments lack the corresponding justification.  It may be tempting to object that if my belief lies in the physical realm, its justification must lie in the physical realm; but this is a non sequitur. From the fact that there is no justification in the physical realm, one might conclude that the physical portion of me (my brain, say) is not justified in its belief. But the question is whether I am justified in the belief, not whether my brain is justified in the belief, and if property dualism is correct then there is more to me than my brain.

So—if I've got this thesis right—there's a core you, above and beyond your brain, that believes it is not a zombie, and directly experiences not being a zombie; and so its beliefs are justified.

But Chalmers just wrote all that stuff down, in his very physical book, and so did the zombie-Chalmers.

The zombie Chalmers can't have written the book because of the zombie's core self above the brain; there must be some entirely different reason, within the laws of physics.

It follows that even if there is a part of Chalmers hidden away that is conscious and believes in consciousness, directly and without mediation, there is also a separable subspace of Chalmers—a causally closed cognitive subsystem that acts entirely within physics—and this "outer self" is what speaks Chalmers's internal narrative, and writes papers on consciousness.

I do not see any way to evade the charge that, on Chalmers's own theory, this separable outer Chalmers is deranged.  This is the part of Chalmers that is the same in this world, or the Zombie World; and in either world it writes philosophy papers on consciousness for no valid reason.  Chalmers's philosophy papers are not output by that inner core of awareness and belief-in-awareness, they are output by the mere physics of the internal narrative that makes Chalmers's fingers strike the keys of his computer.

And yet this deranged outer Chalmers is writing philosophy papers that just happen to be perfectly right, by a separate and additional miracle.  Not a logically necessary miracle (then the Zombie World would not be logically possible).  A physically contingent miracle, that happens to be true in what we think is our universe, even though science can never distinguish our universe from the Zombie World.

I think I speak for all reductionists when I say Huh? 

That's not epicycles.  That's, "Planetary motions follow these epicycles—but epicycles don't actually do anything—there's something else that makes the planets move the same way the epicycles say they should, which I haven't been able to explain—and by the way, I would say this even if there weren't any epicycles."

According to Chalmers, the causally closed system of Chalmers's internal narrative is (mysteriously) malfunctioning in a way that, not by necessity, but just in our universe, miraculously happens to be correct.  Furthermore, the internal narrative asserts "the internal narrative is mysteriously malfunctioning, but miraculously happens to be correctly echoing the justified thoughts of the epiphenomenal inner core", and again, in our universe, miraculously happens to be correct.

Oh, come on!

Shouldn't there come a point where you just give up on an idea?  Where, on some raw intuitive level, you just go:  What on Earth was I thinking?

Humanity has accumulated some broad experience with what correct theories of the world look like.  This is not what a correct theory looks like.

"Argument from incredulity," you say.  Fine, you want it spelled out?  The said Chalmersian theory postulates multiple unexplained complex miracles.  This drives down its prior probability, by the conjunction rule of probability and Occam's Razor.  It is therefore dominated by at least two theories which postulate fewer miracles, namely:

Compare to:
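The conjunction point can be made numeric: each additional independent postulate multiplies another factor less than one into a theory's prior.  (The probabilities below are purely illustrative, not estimates of anything.)

```python
# Conjunction rule: P(A and B) <= P(A), so every additional independent
# postulate can only lower, never raise, a theory's prior probability.
# The numbers are purely illustrative.

def prior_of_conjunction(miracle_priors):
    """Prior of a theory requiring all listed independent postulates."""
    p = 1.0
    for m in miracle_priors:
        p *= m
    return p

one_miracle = prior_of_conjunction([0.5])
two_miracles = prior_of_conjunction([0.5, 0.5])

assert two_miracles <= one_miracle  # more postulates, lower prior
print(one_miracle, two_miracles)  # prints: 0.5 0.25
```

A theory demanding two unexplained miracles starts out strictly behind one demanding only one.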

I know I'm speaking from limited experience, here.  But based on my limited experience, the Zombie Argument may be a candidate for the most deranged idea in all of philosophy.

There are times when, as a rationalist, you have to believe things that seem weird to you.  Relativity seems weird, quantum mechanics seems weird, natural selection seems weird.

But these weirdnesses are pinned down by massive evidence.  There's a difference between believing something weird because science has confirmed it overwhelmingly—

—versus believing a proposition that seems downright deranged, because of a great big complicated philosophical argument centered around unspecified miracles and giant blank spots not even claimed to be understood—

—in a case where even if you accept everything that has been told to you so far, afterward the phenomenon will still seem like a mystery and still have the same quality of wondrous impenetrability that it had at the start.

The correct thing for a rationalist to say at this point, if all of David Chalmers's arguments seem individually plausible, is:

"Okay... I don't know how consciousness works... I admit that... and maybe I'm approaching the whole problem wrong, or asking the wrong questions... but this zombie business can't possibly be right.  The arguments aren't nailed down enough to make me believe this—especially when accepting it won't make me feel any less confused.  On a core gut level, this just doesn't look like the way reality could really really work."

But this is not what I say, for I don't think the arguments are plausible.  "In general, all odd numbers are prime" looked "conceivable" when you had only thought about 3, 5, and 7.  It stopped seeming reasonable when you thought about 9.

Zombies looked conceivable when you looked out at a beautiful sunset and thought about the quiet inner awareness inside you watching that sunset, which seemed like it could vanish without changing the way you walked or smiled; obedient to the plausible-sounding generalization, "the inner listener has no outer effects".  That generalization should stop seeming possible when you say out loud, "But wait, I am thinking this thought right now inside my auditory cortex, and that thought can make my lips move, translating my awareness of my quiet inner listener into a motion of my lips, meaning that consciousness is part of the minimal closure of causality in this universe."  I can't think of anything else to say about the conceivability argument.  The zombies are dead.


comment by Elo · 2016-07-02T21:19:33.570Z · LW(p) · GW(p)

Welcome back!

comment by namespace (ingres) · 2016-07-02T21:52:53.266Z · LW(p) · GW(p)

Seconded.

comment by kilobug · 2016-07-05T12:27:37.371Z · LW(p) · GW(p)

Sorry to go meta, but could someone explain to me how "Welcome back!" can be at -1 (0 after my upvote) and yet "Seconded." at +2?

Doesn't sound like very consistent scoring...

comment by Elo · 2016-07-05T14:17:53.915Z · LW(p) · GW(p)

I am plagued by our resident troll Eugine because I am the mod that keeps banning him. Working on alternative solutions.

comment by NancyLebovitz · 2016-07-05T18:10:25.184Z · LW(p) · GW(p)

We have a karma troll.

comment by Rob Bensinger (RobbBB) · 2016-07-03T20:30:32.285Z · LW(p) · GW(p)

The "conceivability" of zombies is accepted by a substantial fraction, possibly a majority, of academic philosophers of consciousness.

This can be made precise. According to the 2009 PhilPapers Survey (sent to all faculty at the top 89 Ph.D-granting philosophy departments in the English-speaking world as ranked by the Philosophical Gourmet Report, plus 10 high-prestige non-Anglophone departments), about 2/3 of professional philosophers of mind think zombies are conceivable, though most of these think physicalism is true anyway. Specifically, 91 of the 191 respondents (47.6%) said zombies are conceivable but not metaphysically possible; 47 (24.6%) said they were inconceivable; 35 (18.3%) said they're (conceivable and) metaphysically possible; and the other 9.4% were agnostic/undecided or rejected all three options.

Looking at professional philosophers as a whole in the relevant departments, including non-philosophers-of-mind, 35.6% say zombies are conceivable, 16% say they're inconceivable, 23.3% say they're metaphysically possible, 17% say they're undecided or insufficiently familiar with the issue (or they skipped the question), and 8.2% rejected all three options. So the average top-tier Anglophone philosopher of mind is more likely to reject zombies than is the average top-tier Anglophone philosopher. (Relatedly, 22% of philosophers of mind accept or lean toward 'non-physicalism', vs. 27% of philosophers in general.)

There is a stuff of consciousness which is not yet understood, an extraordinary super-physical stuff that visibly affects our world; and this stuff is what makes us talk about consciousness.

Chalmers' core objection to interactionism, I think, is that any particular third-person story you can tell about the causal effects of consciousness could also be told without appealing to consciousness. E.g., if you think consciousness intervenes on the physical world by sometimes spontaneously causing wavefunctions to collapse (setting aside that Chalmers and most LWers reject collapse...), you could just as easily tell a story in which wavefunctions just spontaneously collapse without any mysterious redness getting involved; or a story in which they mysteriously collapse when mysterious greenness occurs rather than redness, or when an alien color occurs.

Chalmers thinks any argument for thinking that the mysterious redness of red is causally indispensable for dualist interactionism should also allow that the mysterious redness of red is an ordinary physical property that's indispensable for physical interactions. Quoting "Moving Forward on the Problem of Consciousness":

The real "epiphenomenalism" problem, I think, does not arise from the causal closure of the physical world. Rather, it arises from the causal closure of the world! Even on an interactionist picture, there will be some broader causally closed story that explains behavior, and such a story can always be told in a way that neither includes nor implies experience. Even on the interactionist picture, we can view minds as just further nodes in the causal network, like the physical nodes, and the fact that these nodes are experiential is inessential to the causal dynamics. The basic worry arises not because experience is logically independent of physics, but because it is logically independent of causal dynamics more generally.

The interactionist has a reasonable solution to this problem, I think. Presumably, the interactionist will respond that some nodes in the causal network are experiential through and through. Even though one can tell the causal story about psychons without mentioning experience, for example, psychons are intrinsically experiential all the same. Subtract experience, and there is nothing left of the psychon but an empty place-marker in a causal network, which is arguably to say there is nothing left at all. To have real causation, one needs something to do the causing; and here, what is doing the causing is experience.

I think this solution is perfectly reasonable; but once the problem is pointed out this way, it becomes clear that the same solution will work in a causally closed physical world. Just as the interactionist postulates that some nodes in the causal network are intrinsically experiential, the "epiphenomenalist" can do the same.

This brings up a terminology-ish point:

The technical term for the belief that consciousness is there, but has no effect on the physical world, is epiphenomenalism.

Chalmers denies that he's an epiphenomenalist. Rather he says (in "Panpsychism and Panprotopsychism"):

I think that substance dualism (in its epiphenomenalist and interactionist forms) and Russellian monism (in its panpsychist and panprotopsychist forms) are the two serious contenders in the metaphysics of consciousness, at least once one has given up on standard physicalism. (I divide my own credence fairly equally between them.)

Quoting "Moving Forward" again:

Here we can exploit an idea that was set out by Bertrand Russell (1926), and which has been developed in recent years by Grover Maxwell (1978) and Michael Lockwood (1989). This is the idea that physics characterizes its basic entities only extrinsically, in terms of their causes and effects, and leaves their intrinsic nature unspecified. For everything that physics tells us about a particle, for example, it might as well just be a bundle of causal dispositions; we know nothing of the entity that carries those dispositions. The same goes for fundamental properties, such as mass and charge: ultimately, these are complex dispositional properties (to have mass is to resist acceleration in a certain way, and so on). But whenever one has a causal disposition, one can ask about the categorical basis of that disposition: that is, what is the entity that is doing the causing?

One might try to resist this question by saying that the world contains only dispositions. But this leads to a very odd view of the world indeed, with a vast amount of causation and no entities for all this causation to relate! It seems to make the fundamental properties and particles into empty placeholders, in the same way as the psychon above, and thus seems to free the world of any substance at all. It is easy to overlook this problem in the way we think about physics from day to day, given all the rich details of the mathematical structure that physical theory provides; but as Stephen Hawking (1988) has noted, physical theory says nothing about what puts the "fire" into the equations and grounds the reality that these structures describe. The idea of a world of "pure structure" or of "pure causation" has a certain attraction, but it is not at all clear that it is coherent.

So we have two questions: (1) what are the intrinsic properties underlying physical reality?; and (2) where do the intrinsic properties of experience fit into the natural order? Russell's insight, developed by Maxwell and Lockwood, is that these two questions fit with each other remarkably well. Perhaps the intrinsic properties underlying physical dispositions are themselves experiential properties, or perhaps they are some sort of proto-experiential properties that together constitute conscious experience. This way, we locate experience inside the causal network that physics describes, rather than outside it as a dangler; and we locate it in a role that one might argue urgently needed to be filled. And importantly, we do this without violating the causal closure of the physical. The causal network itself has the same shape as ever; we have just colored in its nodes.

This idea smacks of the grandest metaphysics, of course, and I do not know that it has to be true. But if the idea is true, it lets us hold on to irreducibility and causal closure and nevertheless deny epiphenomenalism. By placing experience inside the causal network, it now carries a causal role. Indeed, fundamental experiences or proto-experiences will be the basis of causation at the lowest levels, and high-level experiences such as ours will presumably inherit causal relevance from the (proto)-experiences from which they are constituted. So we will have a much more integrated picture of the place of consciousness in the natural order.

This is also (a more honest name for) the non-physicalist view that sometimes gets called "Strawsonian physicalism." But this view seems to be exactly as vulnerable to your criticisms as traditional epiphenomenalism, because the "causal role" in question doesn't seem to be a difference-making role -- it's maybe "causal" in some metaphysical sense, but it's not causal in a Bayesian or information-theoretic sense, a sense that would allow a brain to nonrandomly update in the direction of Strawsonian physicalism / Russellian monism by computing evidence.

I'm not sure what Chalmers would say to your argument in detail, though he's responded to the terminological point about epiphenomenalism. If he thinks Russellian monism is a good response, then either I'm misunderstanding how weird Russellian monism is (in particular, how well it can do interactionism-like things), or Chalmers is misunderstanding how general your argument is. The latter is suggested by the fact that Chalmers thinks your argument weighs against epiphenomenalism but not against Russellian monism in this old LessWrong comment.

It might be worth e-mailing him this updated "Zombies" post, with this comment highlighted so that we don't get into the weeds of debating whose definition of "epiphenomenalism" is better.

comment by ike · 2016-07-03T02:23:29.113Z · LW(p) · GW(p)

Are you planning on doing this for more of the sequences? I think that would be great.

comment by Furcas · 2016-07-02T22:20:24.151Z · LW(p) · GW(p)

Nice.

So, when are you going to tell us your solution to the hard problem of consciousness?

Edited to add: The above wasn't meant as a sarcastic objection to Eliezer's post. I'm totally convinced by his arguments, and even if I wasn't I don't think not having a solution to the hard problem is a greater problem for reductionism than for dualism (of any kind). I was seriously asking Eliezer to share his solution, because he seems to think he has one.

Replies from: kilobug, TheAncientGeek
comment by kilobug · 2016-07-05T12:22:15.339Z · LW(p) · GW(p)

Not having a solution doesn't prevent one from criticizing a hypothesis or theory on the subject. I don't know what the prime factors of 4567613486214 are, but I know that "5" is not a valid answer (numbers with 5 among their prime factors end in 5 or 0) and that "blue" doesn't have the shape of a valid answer. So saying p-zombism and epiphenomenalism aren't valid answers to the "hard problem of consciousness" doesn't require having a solution to it.
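kilobug's divisibility check can be run directly; a minimal Python sketch (the full factorization is beside the point, we only rule out 5):

```python
# A number is divisible by 5 exactly when its last decimal digit is 0 or 5,
# so "5" can be ruled out as a prime factor without factoring anything.
n = 4567613486214
print(n % 5)        # remainder 4, not 0: 5 does not divide n
print(str(n)[-1])   # '4': the last digit is neither 0 nor 5
```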

Replies from: gjm, TheAncientGeek
comment by gjm · 2016-07-05T14:33:08.237Z · LW(p) · GW(p)

Quite true, but if you follow the link in Furcas's last paragraph (which may not have been there when you wrote your comment) you will see Eliezer more or less explicitly claiming to have a solution.

Replies from: Furcas
comment by Furcas · 2016-07-05T18:58:52.696Z · LW(p) · GW(p)

Yeah, I edited my comment after reading kilobug's.

comment by TheAncientGeek · 2016-07-05T15:09:06.627Z · LW(p) · GW(p)

There's saying it and saying it. If you say that a particular solution is particularly bad, that kind of implies a better solution somewhere. If you wanted to say that all known solutions are bad, you would presumably say something about all known solutions.

comment by TheAncientGeek · 2016-07-04T13:57:21.385Z · LW(p) · GW(p)

Snarky, but pertinent.

This re-posting was prompted by a Sean Carroll article that argued along similar lines...epiphenomenalism (one of a number of possible alternatives to physicalism) is incredible, therefore no zombies.

There are a number of problems with this kind of thinking.

One is that there may be better dualisms than epiphenomenalism.

Another is that criticising epi. doesn't show that there is a workable physical explanation of consciousness. There is no see-saw (teeter-totter) effect whereby the wrongness of one theory implies the correctness of another. For one thing, there are more than two theories (see above). For another, an explanation has to explain... there are positive, absolute standards for explanation... you cannot say some Y is an explanation, that it actually explains, just because some X is wrong and Y is different from X. (The idea that physicalism is correct as an incomprehensible brute fact is known as the "new mysterianism" and probably isn't what reductionists, physicalists, and rationalists are aiming at.)

Carroll and others have put forward a philosophical version of a physical account of consciousness, one stating in general terms that consciousness is a high-level, emergent outcome of fine-grained neurological activity. The zombie argument (Mary's room, etc.) and its kin are intended as handwaving philosophical arguments against that sort of argument. If the physicalist side had a scientific version of a physical account of consciousness, there would be no point in arguing against them philosophically, any more than there is a point in arguing philosophically against gravity. Scientific, as opposed to philosophical, theories are detailed and predictive, which allows them to be disproven or confirmed and not merely argued for or against.

And, given that there is no detailed, predictive explanation of consciousness, zombies are still imaginable, in a sense. If someone claims they can imagine (in the sense of picturing) a hovering rock, you can show that it is not possible by writing down some high school physics. Zombies are imaginable in a stronger sense: not only can they be pictured, but the picture cannot be refuted.

Replies from: Furcas, Houshalter
comment by Furcas · 2016-07-04T14:48:57.448Z · LW(p) · GW(p)

Ahh, it wasn't meant to be snarky. I saw an opportunity to try and get Eliezer to fess up, that's all. :)

comment by Houshalter · 2016-07-05T17:43:10.487Z · LW(p) · GW(p)

Another is that criticising epi. doesn't show that there is a workable physical explanation of consciousness.

I feel like it gets halfway there, though. Once you accept epiphenomenalism is nonsense, you are left with something like nonmaterial "souls" at best: some real force that actually interacts with the world, and could, in principle, be observed, experimented with, and modelled in something like a computer simulation. Some chain of causes and effects leads you to say you "feel conscious", and that chain could, in principle, be understood.

That seems to take all the magic out of it though. It's no longer something that's "beyond science". It's some set of laws that could be understood just like physics, just not the physics we know currently. If you are uncomfortable with the idea that we are "just atoms", and don't feel like that explains qualia or experience, just getting new laws of physics isn't going to help. Then you have to confront the idea that maybe physics can explain experience.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-07-05T18:25:55.618Z · LW(p) · GW(p)

Once you accept epiphenomenalism is nonsense, you are left with something like nonmaterial "souls" at best

Perhaps, so long as you have also refuted physicalist monism.

That seems to take all the magic out of it though. It's no longer something that's "beyond science". It's some set of laws that could be understood just like physics, just not the physics we know currently. If you are uncomfortable with the idea that we are "just atoms", and don't feel like that explains qualia or experience, just getting new laws of physics isn't going to help.

If you have a reason for thinking that no physics can possibly explain consciousness, then you would reject the Extra Physics family of theories, but if your beef is just with the present state of physics, then you might not.

comment by timujin · 2016-07-06T14:40:54.688Z · LW(p) · GW(p)

This argument is not going to win over their heads and hearts. It's clearly written for a reductionist reader, who accepts concepts such as Occam's Razor and knowing-what-a-correct-theory-looks-like. But such a person would not have any problems with p-Zombies to begin with.

If you want to persuade someone who's been persuaded by Chalmers, you should debunk the argument itself, not bring it to your own epistemological ground where the argument is obviously absurd. Because you and the Chalmers-supporter are not on the same epistemological ground, and probably never will be.

Here's how you would do that.

---- START ARGUMENT ----

Is it conceivable that the 5789312365453423234th digit of Pi is 7?

No, don't look it up just yet. Is it conceivable to you, right now, that it's 7?

For me, yes, it is. If I look it up, and it turns out to be 7, I would not be surprised at all. It's a perfectly reasonable outcome, with predictable consequences. It's not that hard for me to imagine me running a program that calculates and prints the number, and it printing out 7.

Yet, until you look it up, you don't really know if it's 7 or not. It could be 5. It would also be a reasonable, non-surprising and conceivable outcome.

Yet at least one of those outcomes is logically impossible. The exact value of Pi is logically determined, and, if you believe that purely logical conclusions apply universally, then one of those values of the 5789312365453423234th digit of Pi is universally impossible.

And yet both are conceivable.

So logical impossibility does not imply inconceivability. This is logically equivalent to saying "conceivability does not imply logical possibility" (A->B => ~B->~A).

If conceivability does not imply logical possibility, then even if you can imagine a Zombie world, it does not mean that the Zombie world is logically possible. It may be the case that the Zombie world is logically impossible. Chalmers' argument does not rule that out. For example, it may be the case that certain atomic configurations necessarily imply consciousness. Or it may be any other case of logical impossibility. What matters is that consciousness as an additional nonphysical entity is not implied by its conceivability.
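The contraposition step (A->B being equivalent to ~B->~A) can be checked mechanically; a minimal Python sketch, enumerating every truth assignment:

```python
from itertools import product

def implies(a, b):
    # Material implication: "a implies b" is false only when a is true and b is false.
    return (not a) or b

# A -> B agrees with ~B -> ~A under all four truth assignments.
for a, b in product([False, True], repeat=2):
    assert implies(a, b) == implies(not b, not a)
print("contraposition holds")
```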

---- END ARGUMENT ----

Replies from: UmamiSalami
comment by UmamiSalami · 2016-07-06T20:29:09.546Z · LW(p) · GW(p)

This argument is not going to win over their heads and hearts. It's clearly written for a reductionist reader, who accepts concepts such as Occam's Razor and knowing-what-a-correct-theory-looks-like.

I would suggest that people who have already studied this issue in depth would have other reasons for rejecting the above blog post. However, you are right that philosophers in general don't use Occam's Razor as a common tool and they don't seem to make assumptions about what a correct theory "looks like."

If conceivability does not imply logical possibility, then even if you can imagine a Zombie world, it does not mean that the Zombie world is logically possible.

Chalmers does not claim that p-zombies are logically possible, he claims that they are metaphysically possible. Chalmers already believes that certain atomic configurations necessarily imply consciousness, by dint of psychophysical laws.

The claim that certain atomic configurations just are consciousness is what the physicalist claims, but that is what is contested by knowledge arguments: we can't really conceive of a way for consciousness to be identical with physical states.

Replies from: timujin, RobbBB
comment by timujin · 2016-07-07T10:37:05.464Z · LW(p) · GW(p)

Chalmers does not claim that p-zombies are logically possible, he claims that they are metaphysically possible. Chalmers already believes that certain atomic configurations necessarily imply consciousness, by dint of psychophysical laws.

Okay. In that case, I peg his argument as proving too much. Imagine a cookie that is exactly like an Oreo, down to the last atom, except it's raspberry flavored. This situation is semantically the same as a p-Zombie, so it's exactly as metaphysically possible, whatever that means. Does it prove that raspberry flavor is an extra, nonphysical fact about cookies?

Replies from: ChristianKl, UmamiSalami
comment by ChristianKl · 2016-07-08T18:49:19.470Z · LW(p) · GW(p)

Via hypnosis it's perfectly possible to make someone perceive a raspberry flavor when eating an Oreo. There's no problem with saying that an Oreo has a flavor that depends on the person eating it (an observer).

The flavor qualia of an Oreo are not predetermined by its physical makeup.

Replies from: gjm
comment by gjm · 2016-07-08T20:54:49.556Z · LW(p) · GW(p)

In the presence of hypnosis, hallucination, olfactory damage, etc., the different flavour qualia of the Oreo are not properties of the Oreo at all. This doesn't seem to me at all analogous to the p-zombie or "inverted spectrum" thought experiments, where the point is that the people are the same and the qualia are unchanged.

Replies from: ChristianKl
comment by ChristianKl · 2016-07-09T08:29:04.466Z · LW(p) · GW(p)

the different flavour qualia of the Oreo are not properties of the Oreo at all.

Why isn't how the Oreo tastes a property of the Oreo? It's just not a physical property of it in the sense that you can investigate it by investigating the physical makeup of the Oreo.

It's similar to the qualia of conscious experience that a p-zombie might lack.

Replies from: gjm
comment by gjm · 2016-07-10T09:27:28.736Z · LW(p) · GW(p)

Sorry, I was a little inexact. The way an Oreo tastes to someone whose tasting-system has been interfered with is a property of both the Oreo and the interference, and in some cases (e.g., someone hypnotized to think they're eating a raspberry) it may be a property of only the interference; in any case, the differences between different such tasters' experiences are largely a matter of the interference rather than the Oreo.

Something a bit like this is true even without interference, of course. Different people have different experiences on tasting the same foods. Very different, sometimes.

But, again, none of this is a good analogy for the p-zombie or inverted-spectrum experiments. The way the analogy is meant to work is:

  • Person : inverted spectrum :: Oreo : tastes of raspberries.
  • Inversion is a difference in the person :: raspberry taste is a difference in the Oreo.
  • In both cases the change is purely internal.
    • "Inverted spectra" pose no sort of difficulty for physicalism if what's actually happening is that person 1 sees red and person 2 sees green because person 1 is looking at a tomato and person 2 is looking at a cabbage.
  • In both cases the only change is supposed to be non-physical.
    • Otherwise there's no argument against physicalism here.

And timujin is suggesting that the Oreo version of this is obviously silly, and that we should apply the same intuitions to the other side of the analogy.

Your introduction of hypnosis breaks the analogy, because now (1) the change is no longer "internal": the raspberry taste is only there if a particular person is eating the Oreo, and that person has changed; and (2) the change is no longer non-physical: hypnosis involves physical processes and so far as we know is a physical process.

comment by UmamiSalami · 2016-07-07T15:18:23.981Z · LW(p) · GW(p)

Yes, this is called qualia inversion and is another common argument against physicalism. There's a detailed discussion of it here: http://plato.stanford.edu/entries/qualia-inverted/

Replies from: timujin
comment by timujin · 2016-07-07T20:01:04.576Z · LW(p) · GW(p)

It's not about qualia. It's about any arbitrary property.

Imagine a cookie like Oreo to the last atom, except that it's deadly poisonous, weighs 100 tons and runs away when scared.

Replies from: kilobug, UmamiSalami
comment by kilobug · 2016-07-08T08:42:25.889Z · LW(p) · GW(p)

Imagine a cookie like Oreo to the last atom, except that it's deadly poisonous, weighs 100 tons and runs away when scared.

Well, I honestly can't. When you tell me that, I picture a real Oreo, and then at its side a cartoonish Oreo with all those weird properties; but then trying to assume that the microscopic structure of the cartoonish Oreo is the same as that of a real Oreo just fails.

It's like if you tell me to imagine an equilateral triangle which is also a right triangle. Knowing non-euclidean geometry I sure can cheat around it, but assuming I don't know about non-euclidean geometry, or you explicitly add the constraint of keeping it, it just fails. You can hold the two sets of properties next to each other, but not reunite them.

Or if you tell me to imagine an arrangement of 7 small stones as a rectangle which isn't a line of 7x1. I can hold the image of 7 stones and the image of a 4x2 rectangle side by side, but reuniting the two just fails. Or leads to 4 stones in a line with 3 stones in a line below, which is no longer a rectangle.

When you multiply constraints to the point of logical impossibility, imagination just breaks - it holds the properties in two side-by-side sets, unable to reconcile them into a single coherent entity.

That's what your weird Oreo or zombies do to me.

Replies from: gjm, Good_Burning_Plastic
comment by gjm · 2016-07-08T12:55:05.388Z · LW(p) · GW(p)

My impression was that this was pretty much timujin's point: saying "imagine something atom-for-atom identical to you but with entirely different subjective experience" is like saying "imagine something atom-for-atom identical to an Oreo except that it weighs 100 tons etc.": it only seems imaginable as long as you aren't thinking about it too carefully.

Replies from: timujin
comment by timujin · 2016-07-08T13:36:09.021Z · LW(p) · GW(p)

Confirm.

comment by Good_Burning_Plastic · 2016-07-09T09:17:47.833Z · LW(p) · GW(p)

Or if you tell me to imagine an arrangement of 7 small stones as a rectangle which isn't a line of 7x1.

O.O.O.O
O..O..O
comment by UmamiSalami · 2016-07-07T20:18:23.363Z · LW(p) · GW(p)

Flavor is distinctly a phenomenal property and a type of qualia.

It is metaphysically impossible for distinctly physical properties to differ between two objects which are physically identical. We can't properly conceive of a cookie that is physically identical to an Oreo yet contains different chemicals, is more massive or possessive of locomotive powers. Somewhere in our mental model of such an item, there is a contradiction.

comment by Rob Bensinger (RobbBB) · 2016-07-07T04:13:57.247Z · LW(p) · GW(p)

Chalmers doesn't think 'metaphysical possibility' is a well-specified idea. He thinks p-zombies are logically possible, but that the purely physical facts in our world do not logically entail the phenomenal facts; the phenomenal facts are 'further facts.'

comment by MockTurtle · 2016-07-04T12:17:58.118Z · LW(p) · GW(p)

I wonder what probability epiphenomenalists assign to the theory that they are themselves conscious, if they admit that belief in consciousness isn't caused by the experiences that consciousness brings.

The more I think about it, the more absurdly self-defeating it sounds, and I have trouble believing that ANYONE could hold such views after having thought about it for a few minutes. The only reason I continue to think about it is because it's very easy to believe that some people, no matter how an AI acted and for how long, would never believe the AI to be conscious. And that bothers me a lot, if it affects their moral stance on that AI.

Replies from: kilobug
comment by kilobug · 2016-07-05T11:50:40.117Z · LW(p) · GW(p)

Another, more directly worrying, question is why, or whether, the p-zombie philosopher postulates that other persons have consciousness.

After all, if you can speak about consciousness exactly like we do and yet be a p-zombie, why doesn't Chalmers assume he's the only one who isn't a zombie, and therefore let go of all forms of caring for others and all morality?

The fact that Chalmers and people like him still behave as if they consider other people to be as conscious as they are probably points to the fact that they have belief-in-belief, more than actual belief, in the possibility of zombieness.

Replies from: buybuydandavis, UmamiSalami
comment by buybuydandavis · 2016-07-09T12:09:43.260Z · LW(p) · GW(p)

Another, more directly worrying, question is why, or whether, the p-zombie philosopher postulates that other persons have consciousness.

A wonderful way to dehumanize.

therefore let go of all forms of caring for others and all morality?

The meat bag you ride will let go of caring, or not.

Under the theory, the observer chooses nothing in the physical world. The meatbag produces experiences of caring for you, or not, according to his meatbag reasons for action in the world.

comment by UmamiSalami · 2016-07-06T20:33:35.844Z · LW(p) · GW(p)

is why, or whether, the p-zombie philosopher postulates that other persons have consciousness.

Because consciousness supervenes upon physical states, and other brains have similar physical states.

Replies from: kilobug
comment by kilobug · 2016-07-07T07:20:11.751Z · LW(p) · GW(p)

Because consciousness supervenes upon physical states, and other brains have similar physical states.

But why, and how? If consciousness is not a direct product of physical states, if p-zombies are possible, how can you tell apart the hypotheses "every other human is conscious", "only some humans are conscious", "I'm the only one conscious, by luck", and "everything, including rocks, is conscious"?

Replies from: UmamiSalami
comment by UmamiSalami · 2016-07-07T15:21:14.011Z · LW(p) · GW(p)

Chalmers does believe that consciousness is a direct product of physical states. The dispute is about whether consciousness is identical to physical states.

Chalmers does not believe that p-zombies are possible in the sense that you could make one in the universe. He only believes it's possible that under a different set of psychophysical laws, they could exist.

Replies from: dxu
comment by dxu · 2016-07-18T04:30:54.744Z · LW(p) · GW(p)

I claim that it is "conceivable" for there to be a universe whose psychophysical laws are such that only the collection of physical states comprising my brainstates are conscious, and the rest of you are all p-zombies. Note that this argument is exactly as plausible as the standard Zombie World argument (which is to say, not very) since it relies on the exact same logic; as such, if you accept the standard Zombie World argument, you must accept mine as well. Now then: I claim that by sheer miraculous coincidence, this universe that we are living in possesses the exact psychophysical laws described above (even though there is no way for my body typing this right now to know that), and hence I am the only one in the universe who actually experiences qualia. Also, I would say this even if we didn't live in such a universe.

Prove me wrong.

Replies from: entirelyuseless, UmamiSalami
comment by entirelyuseless · 2016-07-18T13:35:53.113Z · LW(p) · GW(p)

No one can prove you wrong. But your pretended belief is unreasonable, in the same way that it is unreasonable to believe that the sun will not rise tomorrow, even though no one can prove that it will.

It is also for the same reasons; the argument that the sun will rise tomorrow is inductive, and similarly the argument that others are conscious.

It may even be the case that infants originally believe your argument, and then come to the opposite conclusion through induction. I know someone who says that he clearly remembers that when he was three years old, he believed that he alone was conscious, because the behavior of others was too dissimilar to his own, e.g. his parents did not go and eat the ice cream in the freezer, even though there was no one to stop them.

Replies from: dxu
comment by dxu · 2016-07-18T15:49:39.174Z · LW(p) · GW(p)

No one can prove you wrong. But your pretended belief is unreasonable, in the same way that it is unreasonable to believe that the sun will not rise tomorrow, even though no one can prove that it will.

In that case, the Zombie World argument is just as unreasonable--which is what I was getting at in the first place.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-07-19T04:50:45.932Z · LW(p) · GW(p)

I don't know what you mean by the "Zombie World argument." No one thinks that the real world is a zombie world.

Replies from: dxu
comment by dxu · 2016-07-19T19:55:26.357Z · LW(p) · GW(p)

Okay, here's the Zombie World argument, paraphrased:

  1. It is "conceivable" (whatever that means) for there to be a universe with physical laws exactly identical to ours, but without the "bridging psychophysical laws" that cause certain physical configurations of atoms to produce subjective awareness, i.e. "consciousness".
  2. By assumption, the universe described above is physically identical to ours, right down to the last quark. As a result, there is a planet called "Earth" in this universe, and this planet is populated by humans identical to ourselves; each of us has a counterpart in this other universe. Moreover, each of those counterparts behaves exactly like you or I would, talking to each other, laughing at jokes, and even falling in love.
  3. However, since this hypothetical "conceivable" universe lacks the "bridging psychophysical laws" that are necessary for true consciousness to exist, each of those people in that universe, despite acting exactly like you'd expect a conscious being to act, aren't actually conscious, i.e. they don't experience qualia or possess any sense of self-awareness at all. They are, for all intents and purposes, automatons.
  4. Since by definition, there is no physical experiment you can perform to distinguish our universe from the Zombie Universe, any observer would have to be told, as a separate and independent fact, that "yes, this universe is not the Zombie World--there is actually consciousness in this universe". This is then taken as proof that consciousness must be extra-physical, i.e. epiphenomenal.
  5. In both the Zombie World and our universe, people write philosophy papers about consciousness, since (again) the Zombie World and our universe are stipulated to be physically identical, and the act of writing a philosophy paper is a physical act. Incidentally, by the way, this means that the philosophers in the Zombie World are being absolutely crazy, since they're talking about a phenomenon that they have no way of knowing exists, by definition.
  6. However, it turns out that our universe's philosophers (whose beliefs about consciousness are no more justified than the Zombie World's philosopher's beliefs) actually are correct about consciousness, because by sheer miraculous coincidence, they happen to be living in a universe with the correct "psychophysical laws" that produce consciousness. They are correct, not because of any logical reasoning on their part (indeed, the reasoning they used must be flawed, since they somehow deduced the existence of a phenomenon they literally have no way of knowing about), but because they just happen to be living in a universe where their statements are true. Yay for them (and us)!
  7. Oh, and by the way, we really are living in a universe with consciousness, not the Zombie World. I know that there's literally no way for me to prove this to you (in fact, there's no way for me to know this myself), but just trust me on this one.

And now here's my argument, paraphrased:

  1. It is "conceivable" (whatever that means) for there to be a universe with physical laws exactly identical to ours, but whose "bridging psychophysical laws" are such that only those physical configurations of atoms corresponding to my (dxu's) brainstates produce consciousness; nothing else is or can ever be conscious.
  2. By assumption, the universe described above is physically identical to ours, right down to the last quark. As a result, there is a planet called "Earth" in this universe, and this planet is populated by humans identical to ourselves; each of us has a counterpart in this other universe. Moreover, each of those counterparts behaves exactly like you or I would, talking to each other, laughing at jokes, and even falling in love. One of those people is a counterpart to me; we'll call him "dxu-2".
  3. However, since this hypothetical "conceivable" universe has a different set of "bridging psychophysical laws", the people in that universe (with one exception), despite acting exactly like you'd expect conscious beings to act, aren't actually conscious, i.e. they don't experience qualia or possess any sense of self-awareness at all. They are, for all intents and purposes, automatons. Of course, I said there was one exception, and that exception should be obvious: dxu-2 is the only person in this universe who possesses consciousness.
  4. Since, by definition, there is no physical experiment you can perform to distinguish our universe from the Modified Zombie Universe, any observer would have to be told, as a separate and independent fact, that "yes, this universe is not the Modified Zombie World--everyone here is conscious, not just dxu-2". This is then taken as proof that consciousness must be extra-physical, i.e. epiphenomenal.
  5. In both the Modified Zombie World and our universe, people write philosophy papers about consciousness, since (again) the Modified Zombie World and our universe are stipulated to be physically identical, and the act of writing a philosophy paper is a physical act. Incidentally, by the way, this means that the philosophers in the Modified Zombie World are being absolutely crazy, since they're talking about a phenomenon that they have no way of knowing exists, by definition.
  6. Dxu-2, by the way, isn't a professional philosopher, but he's fond of making comments on the Internet that assert he's conscious and that no one else is. Of course, when he makes these comments, his physical self is being exactly as crazy as the other philosophers in the Modified Zombie World, but luckily for dxu-2, the drivel that his physical self types just happens to be exactly right, because by sheer miraculous coincidence, he lives in a universe with the correct "psychophysical laws" that cause him to be conscious.
  7. Oh, and by the way, the Modified Zombie World is our universe, and "dxu-2" is actually me. I know I can't prove this to you, but just trust me on this one.

If you accept the Zombie World argument, you have to accept my argument; the two are exactly analogous. Of course, the contrapositive of the above statement is also true: if you reject my argument, you must reject the Zombie World argument. In effect, my argument is a reductio ad absurdum of the Zombie World argument; it shows that given the right motivation, you can twist the Zombie World argument to include/exclude anything you want as conscious. Just say [insert-universe-here] is "conceivable" (whatever that means), and the rest of the logic plays out identically.

P. S. One last thing--this part of your comment here?

No [one] thinks that the real world is a zombie world.

If the Zombie World exists (which I don't believe it does--but if it did), all of the people in that universe (who don't think their world is a zombie world) are dead wrong.

Replies from: entirelyuseless, UmamiSalami, entirelyuseless
comment by entirelyuseless · 2016-07-20T04:56:26.444Z · LW(p) · GW(p)

While I disagree with Eliezer's post, I also disagree with the Zombie world argument as you have presented it. That said, it is not true that your argument is completely analogous with it. One difference is in number 7. In the first argument, we believe we are living in a world where everyone is conscious for inductive reasons. The fact that other human beings have bodies and actions similar to mine gives me reason to think that others are conscious just as I am. In your argument, there is simply no reason to accept your #7, since there is no analogy that would lead you to that conclusion.

Replies from: dxu
comment by dxu · 2016-07-20T19:03:44.175Z · LW(p) · GW(p)

While I disagree with Eliezer's post

Where? How?

I also disagree with the Zombie world argument as you have presented it.

Well, I disagree with the Zombie World argument, period, so it's possible I may have misrepresented it somehow (though naturally, I don't believe I did). Is there something you specifically disagree with about my phrasing of the Zombie World argument, i.e. some objection that applies to my phrasing, but not to (what you consider) the original?

That said, it is not true that your argument is completely analogous with it. One difference is in number 7.

Okay, so it seems like this is the meat of your objection. This being the case, I'm going to devote a rather larger amount of effort to answering this objection than to what you wrote above. If you feel I didn't focus enough on what you wrote above, again, please feel free to expand on any objections you may have there.

In the first argument, we believe we are living in a world where everyone is conscious for inductive reasons. The fact that other human beings have bodies and actions similar to mine gives me reason to think that others are conscious just as I am. In your argument, there is simply no reason to accept your #7, since there is no analogy that would lead you to that conclusion.

Well, first off, I personally think the Zombie World is logically impossible, since I treat consciousness as an emergent phenomenon rather than a mysterious epiphenomenal substance; in other words, I reject the argument's premise: that the Zombie World's existence is "conceivable". (That's why I believe every human on the planet is conscious--given the structure of their brains, there's no way for them not to be.)

That being said, if you do accept the Zombie World argument, then there's no reason to believe we live in a universe with any conscious beings. The Zombie World (the one that has no consciousness in it, period) is far simpler than both (1) a universe in which I'm the only conscious one, and (2) a universe in which everyone is conscious. In both of the latter cases, you're saying that there's a mysterious epiphenomenal substance called consciousness that isn't there by necessity; it just happens to be there in order to make all the philosophers of consciousness (and dxu-2) right. Let's repeat that for emphasis: there is literally no reason for consciousness to exist in our universe other than to make David Chalmers right when he writes about consciousness.

If you accept that the Zombie World is conceivable, in other words, the next logical step is not to conclude that by sheer luck, we somehow ended up in a universe with consciousness--no, the next logical step would be to conclude that we ourselves are actually living in the Zombie World. There's no reason to believe that you're conscious, or that I'm conscious, or that anyone is conscious; the Zombie World (assuming it's possible) is strictly simpler than all of those cases.

Remember how, in both arguments, step 7 contained the phrase "just trust me on this one"? That wasn't by accident. In order to accept that we live in a universe with any consciousness at all, you need an absolutely tremendous amount of faith. True, a universe in which I'm the only conscious being might be slightly more complicated than one where everyone is conscious, but that slight increase in complexity is nothing compared with the huge complexity penalty both hypotheses receive relative to the Zombie World hypothesis (assuming, once again, that you admit the Zombie World hypothesis as a valid hypothesis).

Quoting the last part of your comment once more:

In your argument, there is simply no reason to accept your #7, since there is no analogy that would lead you to that conclusion.

If you reject step 7 of my argument because you feel it is unjustified ("there is no analogy that would lead you to that conclusion"), then you must reject step 7 of (my phrasing of) the original Zombie World argument as well, because compared to the Zombie World itself, the latter claim is virtually just as unjustified as the former. Your objection is acknowledged, but it plays no role in determining the conclusion of the original discussion: you must either accept both arguments as I presented them, or accept neither.

TL;DR: I concede that the final steps of each argument were not exactly analogous. However, this does not change the fact that if you accept one argument, you must accept the other, and hence, my original contention remains unchallenged.

Replies from: UmamiSalami
comment by UmamiSalami · 2016-07-21T03:28:31.701Z · LW(p) · GW(p)

Well, first off, I personally think the Zombie World is logically impossible, since I treat consciousness as an emergent phenomenon rather than a mysterious epiphenomenal substance; in other words, I reject the argument's premise: that the Zombie World's existence is "conceivable".

And yet it seems really quite easy to conceive of a p zombie. Merely claiming that consciousness is emergent doesn't change our ability to imagine the presence or absence of the phenomenon.

That being said, if you do accept the Zombie World argument, then there's no reason to believe we live in a universe with any conscious beings.

But clearly we do have such a reason: that we are conscious, and know this fact through direct experience of consciousness.

The confusion in your post is grounded in the idea that Chalmers or I would claim that the proof for consciousness is people's claims that they are conscious. We don't (although it could be evidence for it, if we had prior expectations against p-zombie universes which talked about consciousness). The claim is that we know consciousness is real due to our experience of it. The fact that this knowledge is causally inefficacious does not change its epistemic value.

Replies from: dxu
comment by dxu · 2016-07-21T05:37:02.323Z · LW(p) · GW(p)

And yet it seems really quite easy to conceive of a p zombie. Merely claiming that consciousness is emergent doesn't change our ability to imagine the presence or absence of the phenomenon.

Not too long ago, it would also have been quite easy to conceive of a world in which heat and motion were two separate things. Today, this is no longer conceivable. If something seems conceivable to you now, that might just be because you don't yet understand how it's actually impossible. To make the jump from "conceivability" (a fact about your bounded mind) to "logically possible" (a fact about reality) is a misstep, and a rather enormous one at that.

But clearly we do have such a reason: that we are conscious, and know this fact through direct experience of consciousness.

By stipulation, you would have typed the above sentence regardless of whether or not you were actually conscious, and hence your statement does not provide evidence either for or against the existence of consciousness. If we accept the Zombie World as a logical possibility, our priors remain unaltered by the quoted sentence, and continue to be heavily weighted toward the Zombie World. (Again, we can easily get out of this conundrum by refusing to accept the logical possibility of the Zombie World, but this seems to be something you refuse to do.)

The claim is that we know consciousness is real due to our experience of it.

This exact statement could have been emitted by a p-zombie. Without direct access to your qualia, I have no way of telling the difference based on anything you say or do, and as such this sentence provides just as much evidence that you are conscious as the earlier quoted statement does--that is to say, no evidence at all.

The fact that this knowledge is causally inefficacious does not change its epistemic value.

Oh, but it does. In particular, for a piece of knowledge to have epistemic value to me (or anyone else, for that matter), I need to have some way of acquiring that knowledge. For me to acquire that knowledge, I must causally interact with it in some manner. If that knowledge is "causally inefficacious", as you put it, by definition I have no way of knowing about it, and it can hardly be called "knowledge" at all, much less have any epistemic value.

Allow me to spell things out for you. Your claims, interpreted literally, would imply the following statements:

  1. There exists a mysterious substance called "consciousness" that does not causally interact with anything in the physical universe.
  2. Since this substance does not causally interact with anything in the physical universe, and you are part of the physical universe, said substance does not causally interact with you.
  3. This means, among other things, that when you use your physical fingers to type on your physical keyboard the words, "we are conscious, and know this fact through direct experience of consciousness", the cause of that series of physical actions cannot be the mysterious substance called "consciousness", since (again) that substance is causally inactive. Instead, some other mysterious process in your physical brain is occurring and causing you to type those words, operating completely independently of this mysterious substance. Moreover, this physical process would occur and cause you to type those same words regardless of whether the mysterious epiphenomenal substance called "consciousness" was actually present.
  4. Nevertheless, for some reason you appear to expect me to treat the words you type as evidence of this mysterious, causally inactive substance's existence. This, despite the fact that those words and that substance are, by stipulation, completely uncorrelated.

...Yeah, no. Not buying it, sorry. If you can't see the massive improbabilities you're incurring here, there's really not much left for me to say.

Replies from: UmamiSalami
comment by UmamiSalami · 2016-07-22T17:21:13.162Z · LW(p) · GW(p)

Not too long ago, it would also have been quite easy to conceive of a world in which heat and motion were two separate things. Today, this is no longer conceivable.

But it is conceivable for thermodynamics to be caused by molecular motion. No part of that is (or ever was, really) inconceivable. It is inconceivable for the sense qualia of heat to be reducible to motion, but that's just another reason to believe that physicalism is wrong. The blog post you linked doesn't actually address the idea of inconceivability.

If something seems conceivable to you now, that might just be because you don't yet understand how it's actually impossible.

No, it's because there is no possible physical explanation for consciousness (whereas there are possible kinetic explanations for heat, as well as possible sonic explanations for heat, and possible magnetic explanations for heat, and so on. All these nonexistent explanations are conceivable in ways that a physical description of sense datum is not).

By stipulation, you would have typed the above sentence regardless of whether or not you were actually conscious, and hence your statement does not provide evidence either for or against the existence of consciousness.

And I do not claim that my statement is evidence that I have qualia.

This exact statement could have been emitted by a p-zombie.

See above. No one is claiming that claims of qualia prove the existence of qualia. People are claiming that the experience of qualia proves the existence of qualia.

In particular, for a piece of knowledge to have epistemic value to me (or anyone else, for that matter), I need to have some way of acquiring that knowledge.

We're not talking about whether a statement has "epistemic value to [you]" or not. We're talking about whether it's epistemically justified or not - whether it's true or not.

There exists a mysterious substance called "consciousness" that does not causally interact with anything in the physical universe.

Neither I nor Chalmers describe consciousness as a substance.

Since this substance does not causally interact with anything in the physical universe, and you are part of the physical universe, said substance does not causally interact with you.

Only if you mean "you" in the reductive physicalist sense, which I don't.

This means, among other things, that when you use your physical fingers to type on your physical keyboard the words, "we are conscious, and know this fact through direct experience of consciousness", the cause of that series of physical actions cannot be the mysterious substance called "consciousness", since (again) that substance is causally inactive. Instead, some other mysterious process in your physical brain is occurring and causing you to type those words, operating completely independently of this mysterious substance.

Of course, although physicalists believe that the exact same "some other mysterious process in your physical brain" causes us to type, they just happen to make the assertion that consciousness is identical to that other process.

Nevertheless, for some reason you appear to expect me to treat the words you type as evidence of this mysterious, causally inactive substance's existence.

As I have stated repeatedly, I don't, and if you'd taken the time to read Chalmers you'd have known this instead of writing an entirely impotent attack on his ideas. Or you could have even read what I wrote. I literally said in the parent comment,

The confusion in your post is grounded in the idea that Chalmers or I would claim that the proof for consciousness is people's claims that they are conscious. We don't (although it could be evidence for it, if we had prior expectations against p-zombie universes which talked about consciousness). The claim is that we know consciousness is real due to our experience of it.

Honestly. How deliberately obtuse could you be to write an entire attack on an idea which I explicitly rejected in the comment to which you replied. Do not waste my time like this in the future.

comment by UmamiSalami · 2016-07-21T03:16:55.959Z · LW(p) · GW(p)

4 is not a correct summary, because consciousness being extra-physical doesn't imply epiphenomenalism; the argument is specifically against physicalism, so it leaves other forms of dualism and panpsychism on the table.

5 and onwards is not correct, Chalmers does not believe that. Consciousness being nonphysical does not imply a lack of knowledge of it, even if our experience of consciousness is not causally efficacious (though again I note that the p zombie argument doesn't show that consciousness is not causally efficacious, Chalmers just happens to believe that for other reasons).

No part of the zombie argument really makes the claim that people or philosophers are conscious or not, so your analogous reasoning along 5-7 is not a reflection of the argument.

comment by entirelyuseless · 2016-07-21T03:03:59.454Z · LW(p) · GW(p)

I'm not going to respond to all of this, because I don't have the time or energy for it, and I think you are very confused here about a large number of issues; resolving them would take much, much more than a comment.

But I will point out one thing. I agree that zombies are impossible, and therefore that a zombie world is impossible. That says nothing about what is conceivable; we know what we mean by a zombie or a zombie world, so it is quite conceivable.

But the thing you are confused about is this: just because a zombie world is impossible, does not mean that we have a syllogistic proof from first principles that it is impossible. We do not. And so if someone thinks it is possible, you can never refute that. You can only give reasons, that is, non-conclusive reasons, for thinking that it is probably impossible. And the reasons for thinking that are very similar to the reason I gave for thinking that other people are conscious. Your comment confuses two different ideas, namely whether zombies are possible, and what we know about zombies and how we know it, which are two different things.

Replies from: dxu
comment by dxu · 2016-07-21T06:06:37.125Z · LW(p) · GW(p)

just because a zombie world is impossible, does not mean that we have a syllogistic proof from first principles that it is impossible. We do not.

True.

And so if someone thinks it is possible, you can never refute that.

False.

You can only give reasons, that is, non-conclusive reasons, for thinking that it is probably impossible. And the reasons for thinking that are very similar to the reason I gave for thinking that other people are conscious. Your comment confuses two different ideas, namely whether zombies are possible, and what we know about zombies and how we know it, which are two different things.

This is not a matter of knowledge, but of expectation. Basically, the question boils down to whether I, personally, believe that consciousness will eventually be explained in reductionistic, lower level terms, just as heat was explained in reductionistic, lower level terms, even if such an explanation is currently unavailable. And the answer to that question is yes. Yes, I do.

I do not believe that consciousness is magic, and I do not believe that it will remain forever inexplicable. I believe that although we do not currently have an explanation for qualia, we will eventually discover such an explanation, just as I believe there exists a googol-th digit of pi, even if we have not yet calculated that digit. And finally, I expect that once such an explanation is discovered, it will make the entire concept of "p-zombies" seem exactly as possible as "heat" somehow being different from "motion", or biology being powered by something other than chemistry, or the third digit of pi being anything other than 4.

This is, it seems to me, the only reasonable position to take; anything else would, in my opinion, require a massive helping of faith. I have attempted to lay out my arguments for why this is so on multiple occasions, and (if you'll forgive my immodesty) I think I've done a decent job of it. I have also asked you several questions in order to help clarify your objections so that I might be able to better address said objections; so far, these questions of mine have gone unanswered, and I have instead been presented with (what appears to me to be) little more than vague hand-waving in response to my carefully worded arguments.

As this conversation has progressed, all of these things have served to foster a feeling of increasing frustration on my part. I say this, not to start an argument, but to express my feelings regarding this discussion directly in the spirit of Tell Culture. Forgive me if my tone in this comment seems a bit short, but there is only so much dancing around the point I am willing to tolerate before I deem the conversation a frustrating and fruitless pursuit. I don't mean to sound like I'm giving an ultimatum here, but to put it bluntly: unless I encounter a point I feel is worth addressing in detail, this will likely be my last reply to you on this topic. I've laid out my case; I leave the task of refuting it to others.

Replies from: entirelyuseless, entirelyuseless
comment by entirelyuseless · 2016-07-21T10:28:47.192Z · LW(p) · GW(p)

"I do not believe etc."

That is my point. It is a question of your beliefs, not of proofs. In essence, in your earlier comment, you asserted that you do not depend on an inductive argument to tell you that other people are conscious, because zombies are impossible. But my point is that without the inductive argument, you would have no reason to believe that zombies are impossible.

Replies from: dxu
comment by dxu · 2016-07-21T16:13:58.585Z · LW(p) · GW(p)

No, I don't believe zombies are impossible because of some nebulously defined "inductive argument". I believe zombies are impossible because I am experiencing qualia, and I don't believe those qualia are the result of some magical consciousness substance that can be added or subtracted from a universe at will.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-07-22T03:04:38.093Z · LW(p) · GW(p)

Those are not the only possibilities (that either zombies are impossible or that qualia are the result of magic), but even if they were, your reasons for disbelieving in magic are inductive.

comment by entirelyuseless · 2016-07-21T10:36:12.094Z · LW(p) · GW(p)

Also, regarding the personal things here, I am not surprised that you find it hard to understand me, for two reasons. First, as I have said, I haven't been trying to lay out an entire position anyway, because it is not something that would fit into a few comments on Less Wrong. Second, you are deeply confused about a large number of things.

Of course, you suppose that I am the one who is confused. This is normal for disagreements. But I have good evidence that it is you who are confused, rather than me. You admit that you do not understand what I am saying, calling it "vague hand-waving." In contrast, I understand both what I am saying, and what you are saying. I understand your position quite well, and all of its reasons, along with the ways that you are mistaken. This is a difference that gives me a reason to think that you are the one who is confused, not me.

I agree that it would not be productive to continue a discussion along those lines, of course.

Replies from: dxu, UmamiSalami
comment by dxu · 2016-07-21T16:18:54.024Z · LW(p) · GW(p)

...Your comment, paraphrased:

"You think I'm wrong, but actually you're the one who's wrong. I'm not going to give any reasons you're wrong, because this margin is too narrow to contain those reasons, but rest assured I know for a fact that I'm right and you're wrong."

This is, frankly, ridiculous and a load of drivel. Sorry, but I have no intention of continuing to argue with someone who doesn't even bother to present their side of the argument and insults my intelligence on top of that. Tapping out.

comment by UmamiSalami · 2016-07-22T19:47:28.561Z · LW(p) · GW(p)

You should take a look at the last comment he made in reply to me, where he explicitly ascribed to me and then attacked (at length) a claim which I clearly stated that I didn't hold in the parent comment. It's amazing how difficult it is for the naive-eliminativist crowd to express cogent arguments or understand the positions which they attack, and a common pattern I've noticed across this forum as well as others.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-07-23T03:11:02.452Z · LW(p) · GW(p)

Yes, I noticed he overlooked the distinction between "I know I am conscious because it's my direct experience" and "I know I am conscious because I say 'I know I am conscious because it's my direct experience.'" And those are two entirely different things.

Replies from: hairyfigment
comment by hairyfigment · 2016-07-27T01:20:45.437Z · LW(p) · GW(p)

The first of those things is incompatible with the Zombie Universe Argument, if we take 'knowledge' to mean a probability that one could separate from the subjective experience. You can't assume that direct experience is epiphenomenal, meaning it doesn't cause any behavior or calculation directly, and then also assume, "I know I am conscious because it's my direct experience".

If it seems unfair to suggest that Chalmers doesn't know he himself is conscious, remember that to our eyes Chalmers is the one creating the problem; we say that consciousness is a major cause of our beliefs about consciousness.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-07-27T03:53:14.906Z · LW(p) · GW(p)

I don't think experience is epiphenomenal. As I said, I disagree with the Zombie world argument as proposed.

Nonetheless, it is not true that the first of those things is incompatible with the Zombie argument, even taken in that way. Because knowing I am conscious, not the saying of the words but the knowing itself, would itself be epiphenomenal, according to that theory. So direct experience could be the cause of someone knowing that he was conscious, because both of those (experience and knowing) would be epiphenomenal, so that experience would not be the cause of anything physical (e.g. producing sounds that sound like someone saying "I know I am conscious because it's my direct experience.")

Replies from: dxu
comment by dxu · 2016-07-27T05:28:48.904Z · LW(p) · GW(p)

I don't intend to get involved in another discussion, but a brief note:

if we take 'knowledge' to mean a probability that one could separate from the subjective experience.

This definition is from hairyfigment's comment. Since you didn't challenge his/her definition, I assume this means you agree with it. However, if we use this definition of "knowledge", the second paragraph of your comment becomes irrelevant. (This, incidentally, was also the point I was making in my response to UmamiSalami.)

comment by UmamiSalami · 2016-07-22T17:07:20.741Z · LW(p) · GW(p)

I claim that it is "conceivable" for there to be a universe whose psychophysical laws are such that only the collection of physical states comprising my brainstates are conscious, and the rest of you are all p-zombies.

Yes. I agree that it is conceivable.

Now then: I claim that by sheer miraculous coincidence, this universe that we are living in possesses the exact psychophysical laws described above (even though there is no way for my body typing this right now to know that), and hence I am the only one in the universe who actually experiences qualia. Also, I would say this even if we didn't live in such a universe.

Sure, and I claim that there is a teapot orbiting the sun. You're just being silly.

comment by turchin · 2016-07-03T12:58:21.575Z · LW(p) · GW(p)

I know people who claim that they don't have qualia. I doubt that it is true, but based on their words they should be considered zombies. ))

I would like to suggest a zombie of the second kind: a person with an inverted spectrum. It could even be my copy, who speaks all the same philosophical nonsense as me, but any time I see green, he sees red, yet names it green. Is he possible? I can imagine such an atom-exact copy of me, but with an inverted spectrum. And if such second-type zombies are possible, it is an argument for epiphenomenalism. Now I will explain why.

Phenomenological judgments (PJs) about one's own consciousness, that is, the ability to say something about your own consciousness, will be the same in me and in my zombie of the second type.

But there are two types of PJ: quantitative (like "I have consciousness") and qualitative, which describes exactly what type of qualia I am experiencing now.

The qualitative type of PJ is impossible: I can't put my knowledge of "green" into words.

This means that the mere existence of phenomenological judgments doesn't help in the case of second-type zombies.

So, after some upgrading, the zombie argument still works as an argument for epiphenomenalism.

I would also recommend the following article, which introduces the "PJ" term and many problems surrounding it (though I do not agree with it completely): "Experimental Methods for Unraveling the Mind-body Problem: The Phenomenal Judgment Approach", Victor Argonov, http://philpapers.org/rec/ARGMAA-2

Replies from: Houshalter, VAuroch, kilobug, DefectiveAlgorithm, Riothamus
comment by Houshalter · 2016-07-05T17:27:49.523Z · LW(p) · GW(p)

I don't believe that I experience qualia. But I recall that in my childhood, I was really fascinated by the question "is my blue your blue?" Apparently this is a really common thing.

But I think it can be resolved by imagining that our brains work sort of like artificial neural networks. We can train a neural network to recognize objects from raw pixel data. There is nothing special about red or blue; they are just different numbers. And there is nothing magic going on in the NN, it's just a bunch of multiplications and additions.

But what happens is that those weights change to recognize features useful for identifying objects. The network builds a complicated internal model of a world of objects. This model will associate "blue" with objects that are commonly blue, like water and bleggs.

From inside the neural network, blue doesn't feel like it's just a number in its input, or that its thoughts are just a bunch of multiplications and additions. Blue would feel like, well, an indescribable phenomenon. It lights up its "blue" neurons, and everything associated with them. It could list those associations, or maybe recall memories of blue things it has seen in the past. But it wouldn't be able to articulate what blue "feels" like.

People raised in a similar environment should learn similar associations. But different cultures could have entirely different associations, and so may really have different blues than you do. Notably, many cultures don't even have a word for blue, and lump it together with green.
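Houshalter's point can be sketched concretely. The following toy Python example (a hypothetical illustration, not from the original comment) hand-builds a one-neuron "blue detector": the network behaves as if it recognizes blue, yet inspecting it reveals nothing but weights and arithmetic.

```python
# Toy sketch: to a neural network, "blue" is not a special ingredient,
# just numbers flowing through multiplications and additions.

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def step(z):
    return 1 if z > 0 else 0

# A single hand-weighted "blue detector" neuron over (r, g, b) pixel
# values in [0, 1]. The weights encode "blue channel high, red/green low".
blue_neuron = [-1.0, -1.0, 2.0]   # weights for (r, g, b)
bias = -0.5

def looks_blue(pixel):
    return step(dot(blue_neuron, pixel) + bias)

# The network "recognizes blue", but there is no extra "blueness" stored
# anywhere inside it, only arithmetic over its weights.
print(looks_blue((0.1, 0.2, 0.9)))  # a blue pixel -> 1
print(looks_blue((0.9, 0.1, 0.1)))  # a red pixel  -> 0
```

A trained network differs from this hand-weighted one only in how the numbers were chosen; the point that the internals are "just numbers" is the same.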

Replies from: TheAncientGeek, UmamiSalami
comment by TheAncientGeek · 2016-07-05T18:29:51.911Z · LW(p) · GW(p)

I don't believe that I experience qualia.

Meaning you have no experiences, or your experiences have no particular character or flavour?

From inside the neural network, blue doesn't feel like it's just a number in its input, or that its thoughts are just a bunch of multiplications and additions. Blue would feel like, well, an indescribable phenomenon. It lights up its "blue" neurons, and everything associated with them. It could list those associations, or maybe recall memories of blue things it has seen in the past. But it wouldn't be able to articulate what blue "feels" like.

Which experiences would you expect to be easier to describe: novel ones, or familiar ones?

comment by UmamiSalami · 2016-07-06T20:22:54.959Z · LW(p) · GW(p)

I don't believe that I experience qualia.

Wait, what?

comment by VAuroch · 2016-07-04T22:10:13.971Z · LW(p) · GW(p)

I don't see any difference between me and other people who claim to have consciousness, but I have never understood what they mean by consciousness or qualia to an extent that lets me conclude that I have them. So I am sometimes fond of asserting that I have neither, mostly to get an interesting response.

Replies from: turchin
comment by turchin · 2016-07-04T22:41:23.297Z · LW(p) · GW(p)

Maybe you are a p-zombie ))

I think we should add a new type of p-zombie, epistemic p-zombies: ones who claim that they don't have qualia, and we don't know why they claim it.

You are not the only one who has claimed an absence of qualia. I think there are 3 possible explanations.

a) You are p-zombie

b) You don't know where to look

c) You are a troll. "So I am sometimes fond of asserting that I have neither, mostly to get an interesting response."

Replies from: kilobug
comment by kilobug · 2016-07-05T12:03:08.990Z · LW(p) · GW(p)

Or more likely:

d) the term "qualia" isn't very well defined, and what turchin means by "qualia" isn't exactly what VAuroch means by "qualia" - basically an illusion-of-transparency/distance-of-inference issue.

Replies from: VAuroch
comment by VAuroch · 2016-07-09T23:18:57.993Z · LW(p) · GW(p)

No one defines qualia clearly. If they did, I'd have a conclusion one way or the other.

Replies from: entirelyuseless, TheAncientGeek
comment by entirelyuseless · 2016-07-10T15:48:34.760Z · LW(p) · GW(p)

Do you have a clear definition of clear definition? Or of anything, for that matter?

Replies from: VAuroch
comment by VAuroch · 2016-07-14T23:02:30.839Z · LW(p) · GW(p)

In this case, "description of how my experience will be different in the future if I have or do not have qualia" covers it. There are probably cases where that's too simplistic.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-07-15T12:58:05.893Z · LW(p) · GW(p)

That's easy to describe. If I have any experience in the future, I have qualia. If I have no experience in the future, I have no qualia. That's the difference.

Replies from: dxu, VAuroch
comment by dxu · 2016-07-18T04:34:35.358Z · LW(p) · GW(p)

Taboo "qualia", "experience", "consciousness", "awareness", and any synonyms. Now try to provide a clear definition.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-07-18T13:30:47.973Z · LW(p) · GW(p)

Please stop commenting. Now try to present your argument.

But more importantly, VAuroch defined clear definition as describing how experience would be different. Experience cannot be tabooed if that is what clear definition means.

Replies from: dxu
comment by dxu · 2016-07-18T15:57:05.248Z · LW(p) · GW(p)

As my username might imply, I am not VAuroch.

But more importantly, the point of Taboo is to describe the thing you're talking about in lower level terms, terms that don't generate the same confusion that the original concept does. It is in this manner that confusions are dissolved. If you can't do this with a certain topic, that's evidence you don't fully understand the topic yet--and as far as I'm aware, no one can do this with consciousness/qualia, which is what I was trying to get at.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-07-19T04:53:16.484Z · LW(p) · GW(p)

There is no need to link to Eliezer's posts; I have read all of them, and the ones I disagree with, I will continue to disagree with even after reading them again.

My point about "please stop commenting" is that if something is not a lower-level thing, then you cannot describe it in lower-level terms. That is not because of confusion, but because of what you are talking about.

Replies from: dxu
comment by dxu · 2016-07-19T18:59:05.537Z · LW(p) · GW(p)

There is no need to link to Eliezer's posts; I have read all of them, and the ones I disagree with, I will continue to disagree with even after reading them again.

The links are for the benefit of others who may be reading my comments. That being said, what exactly do you disagree with about dissolving the question?

if something is not a lower level thing

Assuming this "something" you're talking about is consciousness, I disagree. Strongly.

That is not because of confusion, but because of what you are talking about.

If you're claiming that you're not confused about consciousness and that you know what you're talking about, then you should be able to transmit that understanding to others through words. If you can't, I submit that you are in fact confused.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-07-20T04:50:26.866Z · LW(p) · GW(p)

I can transmit it through words. We both know what we're talking about here.

Replies from: dxu, VAuroch
comment by dxu · 2016-07-20T18:06:53.387Z · LW(p) · GW(p)

Sorry, but I don't know what we're talking about (i.e. I don't know how to define consciousness). Could you transmit your understanding to me through words? Thanks in advance.

comment by VAuroch · 2016-07-30T00:53:40.548Z · LW(p) · GW(p)

I, also, still do not know what you're talking about. I expect to have experiences in the future. I do not really expect them to contain qualia, but I'm not sure what that would mean in your terms. Please describe the difference I should expect in terms of things I can verify or falsify internally.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-07-30T13:46:19.435Z · LW(p) · GW(p)

"I will have experiences but do not expect them to contain qualia," as I understand it, means "I will have experiences but do not expect to experience things in any particular way." This is because qualia are just the ways that things are experienced.

I do not know what it would mean to expect that to happen. Asking how you can verify it is like asking, "How do I verify whether or not 2 + 2 seems to be 4 but is not?"

Replies from: gjm
comment by gjm · 2016-07-31T21:16:20.338Z · LW(p) · GW(p)

It seems like you and VAuroch have a disagreement about how to use the word "qualia".

Your usage seems very modest, in the sense of not committing you to anything much. Are you sure that the way you actually use the word is consistently that modest? If having qualia just means experiencing things in some way, then you aren't entitled to assume

  • that there are actually such things as qualia
  • anything about the "structure" of experiences -- e.g., perhaps experiences are kinda indivisible and there's no such thing as a "quale of red" that's separable from the qualia of all the vast numbers of experiences that involve red things
  • anything about the relationships between qualia and (other?) physical phenomena like electrochemical activity in the brain

without some further argument that explores the nature of qualia in more detail.

(I suspect that VAuroch may have in mind some more-specific meaning of "qualia" that does entail particular positions on some questions like those. Or perhaps he merely doubts that there is any really satisfactory way to define "qualia" and is pushing you for more detail in the expectation that doing so will reveal problems?)

Replies from: entirelyuseless
comment by entirelyuseless · 2016-07-31T23:47:37.739Z · LW(p) · GW(p)

I am not sure what you mean by "things" when you say that I can't assume there are such things as qualia. I say that there are ways we experience things, and those are qualia. They are not things in the way that apples and dogs are things, but in another way. There is nothing strange about that, because there are many kinds of things that exist in many kinds of ways.

I don't make any assumption about the structure of experiences, but try to figure it out by looking at my experiences.

I personally assume there is a direct relationship between our experiences and physical phenomena in the brain. I have no reason to think I disagree with VAuroch in that respect. I disagree that it follows that "qualia do not exist," is a reasonable description of the resulting situation.

comment by VAuroch · 2016-07-30T00:57:31.327Z · LW(p) · GW(p)

How are qualia different from experiences? If experiences are no different, why use 'qualia' rather than 'experiences'?

Replies from: entirelyuseless
comment by entirelyuseless · 2016-07-30T05:35:08.022Z · LW(p) · GW(p)

Qualia means the specific way that you experience something. And if you don't experience something in any way at all, then you don't experience it. So if there are no qualia, there are no experiences. But they don't mean the same thing, since qualia means "the ways things are experienced", not "experiences."

Replies from: gjm
comment by gjm · 2016-07-31T21:20:26.817Z · LW(p) · GW(p)

Suppose I propose that physical objects have not only "mass" but "massiness", which is "the way things have mass". I agree that we can do the usual calculations using mass and that they will tell us how particles move, but I insist that we do not know that massiness is purely physical; that doing those calculations may miss something about massiness.

I guess that you would have little sympathy for this position. Where (if at all) does the analogy "experience : qualia :: mass : massiness" fail?

Replies from: entirelyuseless
comment by entirelyuseless · 2016-07-31T23:39:46.760Z · LW(p) · GW(p)

I have quite a bit of sympathy for that position, actually. I am not sure that analogy fails at all. However, we directly notice that we experience things in particular ways; if there is a particular way that things have mass, it is not part of our direct experience, since mass itself is not.

comment by TheAncientGeek · 2016-07-14T15:50:55.804Z · LW(p) · GW(p)

Like the way we have an answer to every mathematical problem.

comment by kilobug · 2016-07-05T12:01:27.603Z · LW(p) · GW(p)

I would like to suggest zombies of a second kind: a person with an inverted spectrum. It could even be my copy, who speaks all the same philosophical nonsense as me, but any time I see green, he sees red, yet names it green. Is he possible? I can imagine such an atom-exact copy of me, but with an inverted spectrum.

I can't.

As a reductionist and materialist, it doesn't make sense - the feelings of "red" and "green" are a consequence of the way your brain is wired and structured; an atom-exact copy would have the same feelings.

But leaving aside the reductionist/materialist view (which after all is part of the debate), it still wouldn't make sense. The special quality that "red" has in my consciousness, the emotions it calls upon, the analogies it triggers, have consequences on how I would invoke the "red" color in poetry, or use the "red" color in a drawing. And on how I would feel about a poem or drawing using "red".

If seeing #ff0000 triggers exactly the same emotions, feelings, and analogies in the consciousness of your clone, then he's getting the same experience as you do, and he's seeing "red", not "green".

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-07-05T18:31:45.261Z · LW(p) · GW(p)

But leaving aside the reductionist/materialist view (which after all is part of the debate), it still wouldn't make sense

Is "it" zombies, or epiphenomenalism?

Replies from: kilobug
comment by kilobug · 2016-07-06T07:19:45.517Z · LW(p) · GW(p)

Is "it" zombies, or epiphenomenalism?

The hypothesis I was responding to, the "person with inverted spectrum".

comment by DefectiveAlgorithm · 2016-07-03T18:31:56.560Z · LW(p) · GW(p)

I would like to suggest zombies of a second kind: a person with an inverted spectrum. It could even be my copy, who speaks all the same philosophical nonsense as me, but any time I see green, he sees red, yet names it green. Is he possible?

Such an entity is possible, but would not be an atom-exact copy of you.

Replies from: turchin
comment by turchin · 2016-07-03T19:11:29.097Z · LW(p) · GW(p)

We don't know how qualia are encoded in the brain, or how to distinguish a person from his copy with an inverted spectrum.

Replies from: DefectiveAlgorithm
comment by DefectiveAlgorithm · 2016-07-03T21:51:20.190Z · LW(p) · GW(p)

I didn't say I knew which parts of the brain would differ, but to conclude from that that it wouldn't differ at all is to confuse the map with the territory.

Replies from: turchin
comment by turchin · 2016-07-03T22:32:21.929Z · LW(p) · GW(p)

We can't conclude that they would not differ. We could postulate it and then ask: could we measure whether identical copies have identical qualia? And we can't measure that. And here we return to the "hard problem": we don't know if different qualia imply different combinations of atoms.

Replies from: gjm, Gurkenglas
comment by gjm · 2016-07-06T11:25:43.875Z · LW(p) · GW(p)

we don't know if different qualia imply different combinations of atoms

Either (1) your saying "this looks red to me" versus "this looks green to me" is completely unaffected by the red/green qualia you are experiencing;

or (2) your brain works by magic instead of (or as well as) physics;

or (3) different qualia imply different physical states.

For me #1 is kinda-imaginable but would take away all actual reasons for believing in qualia; #2 is kinda-imaginable but the evidence against seems extremely strong; which leaves #3 a clear enough winner that saying "we don't know" about it is in the same sort of territory as "we don't know whether there are ghosts" or "we don't know whether the world has been secretly taken over by alien lizardmen".

[EDITED to add: Of course my argument here is basically Eliezer's argument in the OP. Perhaps turchin has a compelling refutation of that argument, but I haven't seen it yet.]

Replies from: turchin
comment by turchin · 2016-07-06T13:26:45.461Z · LW(p) · GW(p)

My point was to show that using the possibility of phenomenological judgments as an argument against epiphenomenalism does not work as intended.

Because a more subtle form of epiphenomenalism is still possible. It is conceivable, but I don't know if it is true.

But your appeal to "alien lizardmen" as an argument against "don't know" is unfair, as we have a large amount of prior knowledge against lizardmen, but we don't have any priors about the experiences of other people.

No one in all of history has been able to feel the feelings of another being (maybe craniopagus conjoined twins will be able to), so we have no prior knowledge about whether qualia are all similar or all dissimilar across different beings.

Your point (1) could also be true, in my opinion; or, more exactly, we don't have instruments to show whether it is true.

Imagine that I met my exact copy and could ask him any questions, and want to know if he has an inverted spectrum... More here: http://plato.stanford.edu/entries/qualia-inverted/

Replies from: gjm
comment by gjm · 2016-07-06T14:34:59.100Z · LW(p) · GW(p)

The alien lizardmen aren't intended as an argument against "don't know", just as an example of something else about which in some sense we "don't know" but where there's not much scope for doubt.

So, anyway, you're proposing that perhaps turchinA and turchinB have different qualia but the same behaviour because the connection between qualia and behaviour is wired differently; so for turchinA seeing red things produces red qualia which provoke saying "red", while for turchinB seeing red things produces green qualia which provoke saying "green".

So, what is the actual difference between turchinA's red qualia and turchinB's so-called green qualia? They produce the exact same behaviour. In particular, turchinB's so-called green qualia lead turchinB to say things like "that looks red to me". And they are provoked by the exact same stimuli that give turchinA red qualia. So, er, what reason is there for calling them green qualia?

I don't know about you, but my red qualia call up all kinds of specific associations. Blood, stop-signs, lipstick, sunsets. And these are observable, or at least indirectly observable, via external questioning. "What does this colour make you think of?", etc. And red, in particular, produces lower-level effects that also feed into the experience of seeing red -- IIRC, people looking at bright red things have elevated pulse rates, for instance. So turchinB's so-called green qualia have to produce these same results. Similarly, my green qualia call up associations -- grass, sickness, emeralds, etc. -- and turchinB's so-called green qualia had better not remind turchinB of those things, at least not in any way that spills over into turchinB's actions, responses to questions, etc.

Do you find it reasonable to say that these qualia of turchinB's -- evoked by seeing blood and tomatoes and the like, calling up memories of blood and tomatoes and the like, increasing arousal in the autonomic nervous system, etc., etc., etc. -- can be subjectively identical to turchinA's green qualia (calling up memories of grass, not increasing autonomic nervous system arousal, etc.)? Because I don't. What would that even mean?

Replies from: turchin
comment by turchin · 2016-07-06T15:19:54.191Z · LW(p) · GW(p)

I see it simply: turchinA sees a red object - feels a red quale - associates it with blood - calls it "red". turchinB sees a red object - feels a green quale - associates it with blood - calls it "red".

So all associations and behavior are the same; only the qualia are different. From an objective point of view there is no difference. From my subjective point of view there is a difference.

Replies from: gjm
comment by gjm · 2016-07-06T16:14:28.027Z · LW(p) · GW(p)

To my mind the following

all associations and behavior are the same; only the qualia are different

is incoherent. The associations are part of how seeing something red or green feels. So if turchinB sees something and associates it with blood, then turchinB's subjective experience is not the same as that of turchinA seeing something green.

Now, it looks as if you've retreated a bit from the full "inverted spectrum" scenario and are maybe now just saying that maybe turchinA and turchinB experience different qualia on seeing red, even though their behaviour is the same. That's not so obviously incoherent. Or is it?

Any way of probing turchinB's experience of seeing a tomato has to produce exactly the same result as for turchinA. Any question I might ask turchinB about that experience will produce the same answer. If I hook turchinB up to a polygraph machine while asking the questions, the readings will be the same as turchinA's. If I present turchinB with the tomato and then ask other questions in the hope that the answers will be subtly biased by whatever not-so-conscious influences the tomato may have had -- same results, again.

So whatever differences there are between turchinA's subjective experiences and turchinB's, they have to be absolutely undetectable by turchins A and B: any attempt at describing those experiences will produce the exact same effects; any effect of the experience on their mood will have no detectable consequences; and so on and so forth.

The situation still seems to me the way I described it before. If turchinA's and turchinB's brains run on physics rather than magic, and if their physical states are the same, then everything we can see of their subjective states by asking them questions, or attaching electrodes to them, or having sex with them, or showing them kitten pictures and seeing whether they smile, or any other kind of observation we can make, matches exactly; which means that any differences in their qualia are so subtle that they have no causal influence on turchinA's and turchinB's behaviour, mood, unconscious physiological reactions, etc.

I see no reason to believe in such subtle differences of qualia; I see no reason to think that asking about them is even meaningful; and they seem to me a violation of Ockham's razor. What am I missing? Why should we take this idea more seriously than lizardmen in the White House?

Replies from: turchin
comment by turchin · 2016-07-06T23:14:55.335Z · LW(p) · GW(p)

You say: "The situation still seems to me the way I described it before. If turchinA's and turchinB's brains run on physics rather than magic, and if their physical states are the same, then everything we can see of their subjective states by asking them questions, or attaching electrodes to them, or having sex with them, or showing them kitten pictures and seeing whether they smile, or any other kind of observation we can make, matches exactly; which means that any differences in their qualia are so subtle that they have no causal influence on turchinA's and turchinB's behaviour, mood, unconscious physiological reactions, etc."

I replied to this logic in the another comment to OP here: http://lesswrong.com/r/discussion/lw/nqv/zombies_redacted/dcwi

In short: if we postulate that physicalism is true, then there are no qualia by definition.

But if we use empiricism, that is, the idea that experience is more important than theories, then I have to look to my experience for new knowledge, and in it I have qualia. So my empirical experiences contradict my best theory of reality. And this contradiction is the essence of the so-called "hard problem".

I think that we need some updated version of physicalism, and I have some ideas about how to create one, to get rid of any form of epiphenomenalism, which of course is an ugly theory.

Returning to your point: you are arguing that if all associations are the same, the experience must be the same. I don't think this thesis has been proved. (It may happen to be true, but we need some instruments to prove it, and I haven't seen them yet.)

I can imagine myself looking at a large red field and looking at a large green field, without any associations for either, and still having different experiences of their colour.

I have had similar discussions before, and we never came to an agreement about the nature of qualia.

Replies from: gjm
comment by gjm · 2016-07-07T11:05:00.398Z · LW(p) · GW(p)

If we postulate that physicalism is true, then there are no qualia by definition

But I'm not postulating that physicalism is true. (Although it looks to me like it probably is.) I am assuming only that physics gives a correct account of what brains and bodies do. (Or, as I put it before, your brain runs on physics and not on magic.) My understanding is that e.g. David Chalmers does not disagree with this. Of course if some kind of substance dualism is true -- if some of what I think your brain does is actually being done by an immaterial soul that interfaces with your brain at the pineal gland, or something -- then there can be qualia differences without physical differences; that idea has its own problems but they are quite different ones, and the argument against that view has a very different shape.

my empirical experiences contradict my best theory of reality

How?

Your empirical experiences show that there is such a thing as an experience of seeing red. They do not show that this experience isn't built on top of physics. They do not show that your experience of seeing red could be detached from all the physical phenomena involved in actually seeing red things. It's ideas like those that contradict your best theory of reality, and those ideas are not simply a matter of observing your empirical experiences.

(At least, I can't imagine how they could be. But perhaps your empirical experiences are spectacularly unlike mine.)

You are arguing that if all associations are the same, the experience must be the same

No. I am arguing that if the physical causes are the same, and the associations are the same, and everything else we can actually observe is the same, then we cannot have good reason to think that the experience is different; and that, in view of all the evidence favouring physicalism, much the simplest and most plausible conclusion is that in fact the experience is the same when all physical things are the same.

I can imagine myself looking at a large red field and looking at a large green field, without any associations for either, and still having different experiences of their colour.

I agree. (Except that someone who really had no associations with either could hardly actually be your self or mine.) But what's required for zombies or "inverted spectrum" is quite different and goes much further.

We need, let's say, three Turchins. I'll call them R, RG, and G. R and G are versions of real-world-you; R is looking at a field of poppy flowers and G is looking at a field of grass. RG is different in some mysterious way; he is looking at a field of poppy flowers just like R, having the exact same experience as G is having, but also having the exact same behaviour as R. And whatever happens in the physical world, RG's experience is going to continue to match G's and his behaviour is going to continue to match R's.

Now, notice that this scenario as I've stated it is actually extra-specially ridiculous when taken seriously. For instance, suppose you walk with RG into the poppy field and ask about the shape of the plants he's looking at. For his behaviour to be an exact match for G's, he will need to experience seeing grass stalks and leaves while telling you about the poppy plants he sees. This is, for me, firmly in "lizardmen in the White House" territory.

It is to avoid this kind of ridiculousness that people advocating this kind of detachment of qualia from physics always pick on a very simple kind of quale, one with scarcely any "dimensions" to it, namely colour. No one ever talks about how maybe turchinB looks at a square, has the experience of looking at a hexagon, but behaves exactly as if he sees a square (including e.g. counting off its vertices correctly). No one ever talks about how maybe turchinB watches Casablanca, has the experience of watching The Matrix, but behaves exactly as if he is watching Casablanca. No one ever talks about how maybe turchinB eats a really hot Thai curry, has the experience of eating a rather bland lasagne, but behaves exactly as if he is eating a curry. Because the absurdity of those scenarios is plain to see: these are complex experiences, with parts that interrelate, and probing them more closely gives results that are preposterous if we imagine them interchanged. But colour? Well, colour's pretty simple; perhaps seeing red and seeing green could be interchanged. ... But then I say "hold on, actually colour isn't so simple because there are all these other things you associate with different colours, and these physiological things that happen when you see red, etc.", and of course the response is to say: yeah, OK, so let's imagine some kind of stripped-down version of seeing red where somehow all those associations aren't there, and then maybe they could be interchanged.

In other words, if you ignore almost all kinds of experience and focus on one particularly simple one, and then make the further simplification of assuming that all the complications even that simple kind of experience actually has ... why, then you can imagine (or at least you say you can imagine) two different experiences of this kind being interchanged without physical consequence.

I say that what you're trying to imagine is so utterly divorced from reality that thinking you can imagine it tells us nothing about how the world actually is. And that once you're considering things so simple, it becomes much harder to resist the reductionist argument: "We can, in principle, trace the physical consequences of seeing a red thing; we can see neural circuits that get activated, etc.; we can see how this differs from seeing a green thing. Why should we think this bare red-or-green quale, with no further richness to it, no associations or consequences or anything, is anything other than the activation of one or another set of neurons?"

(For more interesting qualia, I can feel the attraction of saying that they surely must be something more than neuron activations -- though I think that's actually a mistake. "How could a mere adjustment of electrical potentials and ion concentrations be so thrilling/moving/terrifying/...?" But for me, at least, that intuition entirely goes away when I consider the stripped-down minimal qualia that seem to be the only ones for which the sort of "inversion" you're talking about seems remotely plausible.)

Replies from: turchin
comment by turchin · 2016-07-07T15:35:51.688Z · LW(p) · GW(p)

I have to reply short as I am working on another long text now.

  1. I still think that the sum of your claims about the brain is physicalism in a nutshell.
  2. The fact that I see the colour red, and the fact that I can't explain in words the difference between the red feeling and the green feeling, is the empirical basis of qualia.

Even if we find that qualia correspond to some kind of neuronal state, that will disprove the existence of exact zombies and inverted-colour zombies, but it will not help us answer why neuronal state 1234 is "red". I call this "the problem of the table of correspondences": the table that maps different states of neurons to different subjective feelings. But the table is itself a mystical thing, as we could ask why 1234 is red and 4321 is green, and we could imagine many different correspondence tables. Sorry to be a little bit sketchy here.
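The "table of correspondences" worry can be put in toy form (a hypothetical illustration, not from the comment): if behaviour depends only on the neuronal state, then two incompatible state-to-quale tables fit the same behaviour equally well, so no behavioural test selects between them.

```python
# Toy sketch of the "table of correspondences" problem: behaviour is
# fixed by the neuronal state alone, while two incompatible tables of
# subjective feelings fit that behaviour equally well.

# Behaviour: what the subject says is determined by the neuronal state.
def utterance(state):
    return {"1234": "red", "4321": "green"}[state]

# Two candidate correspondence tables from neuronal states to feelings.
table_a = {"1234": "red-quale", "4321": "green-quale"}
table_b = {"1234": "green-quale", "4321": "red-quale"}  # inverted table

# Both tables are consistent with identical utterances for every state,
# so behaviour cannot tell us which table (if either) is the real one.
for state in ("1234", "4321"):
    print(state, "->", utterance(state))
```

This is only a restatement of the worry, not an argument that the tables are genuinely distinct; gjm's reply below disputes exactly that.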

Replies from: gjm
comment by gjm · 2016-07-07T16:23:48.079Z · LW(p) · GW(p)

(Please feel free to take your time if you have other things to be doing.)

I am, as I think I have already said, a physicalist. If you take everything I have said and lump it together, you doubtless get physicalism. But that doesn't mean that everything I say assumes physicalism, and in particular my main criticism of the "zombie" and "inverted spectrum" scenarios does not presuppose physicalism.

The fact that you can't explain something in words seems like awfully thin evidence on which to base the idea that they might be different in ways that have no physical foundation.

I think the question of just what physical states correspond to what subjective states, and why given physical differences correspond to the subjective-state differences they do, is a very difficult and important problem (conditional on physicalism or something like it, of course; the question doesn't arise if e.g. qualia are actually magical properties of your immaterial soul). I don't think "mystical" is the right word for it, though, and in particular I see no reason in principle why it shouldn't have (what would be for me) a satisfactory answer.

For instance, imagine that after a vast expenditure of effort and ingenuity a team of cognitive scientists, neurologists, etc., comes up with something like this:

  • A detailed neuron-level map of a typical human brain.
  • Higher-level explanations of structures within it at different scales, larger scales corresponding broadly to higher levels of abstraction.
  • Much analysis of how activity in these various structures (generally at the higher levels) corresponds to particular mental phenomena (surprise, sadness, awareness-of-red, multiplying small integers, imagining conversations with a friend, ...).
  • A detailed breakdown of what happens on a typical occasion when the person whose brain it is sees (say) a tomato:
    • Low-level description, right down to the firing of individual neurons.
    • Descriptions at higher levels, each one linked to the descriptions at slightly higher and lower levels that overlap with it and to whatever abstractions are appropriate to the level being described.
      • "So this bit here is something we see whenever he sees something he might eat in a salad. It doesn't fire for meat, though. We think it represents healthy food somehow."
      • "This one fires in that distinctive pattern for anything that makes him think of sex. It's not firing very strongly here, which is probably a good thing. You can see feeding into it some representations of lips and breasts, which are probably related to the colour and shape of the tomato, respectively. Men are weird."
      • "These circuits here, and here, and here all tend to turn on when he sees anything red. The details of their timing and relative strength depends on the brightness of the red and how it's located in his visual field. It doesn't seem to be downstream of anything more sophisticated, but if you compare with what happens when he sees someone bleeding badly you'll notice that that tends to turn this on more strongly and that it feeds into the processes that raise his autonomic arousal levels when he sees red."
      • "This is a part of the language-generating subsystem. It does things whenever he says, or thinks about saying, words like 'red' and 'crimson'. It also operates when he hears or reads those words, but in a slightly different way. The activity you see here tends to occur shortly before that happens -- it looks like his brain is getting ready to talk about the redness it sees, if it needs to."
  • A detailed breakdown of what happens when you ask him about what he's experiencing, including the links from structures like the ones sketched above into structures that are used when introspecting and when describing things.

Of course the above is an outrageous oversimplification, and the thing these people would have to produce would presumably be a vastly complicated computer model with some currently-inconceivable user interface for looking at different parts of it at different levels of abstraction, tracing what happens in the brain at different speeds, etc.

Anyway, if I were shown something like this and could use it to follow the processes by which seeing a tomato turns into saying "What a beautiful rich red colour" (etc., etc., etc.), and likewise for the processes that happen on seeing a green apple and commenting on its colour, and to observe the parallels and differences between these processes -- I would be pretty well satisfied that your "table of correspondents" problem had been adequately solved.

Replies from: turchin
comment by turchin · 2016-07-07T18:57:17.443Z · LW(p) · GW(p)

Hi! It seems like you are reinventing, or suggesting a variant of, the Mary's room thought experiment: https://en.wikipedia.org/wiki/Knowledge_argument

Anyway, for example, the fact is that there are people who are tetrachromats: they have a 4th basic colour. Even assuming I could know everything about their brain, each neuron, each connection, I still wouldn't know how they feel the 4th colour. It is not a "correspondence table", because on the right side of the table there should be experiences.

You say: "The fact that you can't explain something in words seems like awfully thin evidence on which to base the idea that they might be different in ways that have no physical foundation."

I think that for our discussion it is useful to distinguish two main theses: 1) Do qualia exist? I think: yes, sure. 2) How are they connected to the physical? I think: I don't know. All attempts to create such a connection result in ugly constructions, like zombies, the inverted spectrum, epiphenomenalism, the correspondence table - or in denial of the existence of qualia.

I also want to restate my point about the post: EY tried to prove that zombies are impossible. The way he does it doesn't work for the inverted-spectrum thought experiment. This approach doesn't work.

I think the difference between the two types of theses is clear: "the theorem is wrong" and "the proof of the theorem is wrong".

Replies from: gjm
comment by gjm · 2016-07-07T21:45:07.742Z · LW(p) · GW(p)

There may be some things in common between what I describe and the "Mary's room" experiment, but I'm certainly not recreating it -- my position is pretty much the opposite of Jackson's.

I agree that your and my experience of colour are probably importantly different from those of some tetrachromats. (How different depends on how far their 4th cone's peak is from the others.) For that matter, different trichromats have cone response functions that aren't quite the same, so even two people with "normal" colour vision don't have the exact same colour qualia. In fact, they wouldn't have even without that difference, because their past experiences and general psychological makeup aren't identical. I don't see why any of this tells us anything at all about whether qualia are physical or not, though.

I don't understand how your "corresponding table" is an "ugly construction". I mean, if for some reason you actually had to write down such a table then no doubt it would be ugly, but the same is true for all sorts of things we can all agree are real. Is there something about it that you think is a reason not to believe in physicalism?

I don't think Eliezer was exactly trying to prove that zombies are impossible. He was trying to knock down an argument (based on the idea that obviously you can imagine a world just like this one but where the people are zombies) for zombies being possible, and to offer a better set of intuitions suggesting that they probably aren't. And it doesn't seem to me that replacing zombies with inverted qualia in any way refutes Eliezer's argument because (1) his argument was about zombies, not about inverted qualia and (2) for the reasons I've already given above, I think a very similar argument does in fact apply to inverted qualia. It's not as clear-cut as for zombies, but it seems very convincing to me.

Replies from: turchin
comment by turchin · 2016-07-07T22:40:55.944Z · LW(p) · GW(p)

It seems I am starting to understand where the difference between our positions lies.

Qualia are not about cones in the eye. After all, I can see colour in dreams. So qualia are somewhere in the brain.

You said: "For that matter, different trichromats have cone response functions that aren't quite the same, so even two people with "normal" colour vision don't have the exact same colour qualia."

(I can also see visual images if I press a finger on my eye.)

So the brain uses qualia to represent colours in the outside world, but qualia are not the actual colours.

So qualia themselves are like variables in an equation. "M" represents mass, and "F" represents force, in Newton's second law.

F=Ma.

But "M" is not mass, and "F" is not force; they are just variables. And if we say that now "M" is force and "F" is mass, we will have the same equation. (It is like the inverted-spectrum experiment, btw.)

M=Fa

So, qualia are variables which the brain uses to denote external experiences, the same way "F" is a letter from the Latin alphabet which we use to denote force. The Latin alphabet is a completely different entity from physical forces. And when we discuss "F" we should always remember what we are speaking about - the Latin alphabet or the force.

EDITED: If we continue this analogy, we could imagine a computer which calculates force. It could tell us everything about the results of its calculations, but it can't tell us which variables it uses in its internal process.

What we conclude from here:

  • There could be infinitely many different variables which the computer could use. They could be inverted.
  • But it has to use some kind of variables, so it can't be a zombie. Bingo! We just got a new argument against p-zombies.

(The longer version of this new anti-p-zombie argument is the following: thinking is impossible without variables, and the variables must be qualia, because the nature of qualia is that they are simple, distinct, and unbreakable into parts - that is why I also call them "atoms of experience". But it may need longer elaboration, as it is very sketchy.)

  1. The programmer of the computer chose which variables to use in this particular computer. So he created the table of correspondence in which he stated: "F is force, and M is mass".
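The variables analogy above can be sketched in code (an editorial illustration; the function and variable names are invented for this example, not taken from the thread): relabelling the internal variables changes nothing about the input-output behaviour, which is the point of the inverted-spectrum comparison.

```python
# Sketch of the variables analogy: swapping the internal labels "F" and "M"
# leaves the input-output behaviour identical, like an "inverted spectrum".

def force(mass, acceleration):
    # Newton's second law with the usual labels: F = M * a
    F = mass * acceleration
    return F

def force_relabelled(mass, acceleration):
    # Same computation with the label swapped: "M" now plays the role of "F".
    M = mass * acceleration
    return M

# Identical behaviour from the outside, regardless of internal labels.
print(force(2, 10))             # 20
print(force_relabelled(2, 10))  # 20
```

From outside the computation, nothing distinguishes the two functions; only the "table of correspondence" between labels and roles differs.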
Replies from: gjm
comment by gjm · 2016-07-07T23:16:14.120Z · LW(p) · GW(p)

Qualia are not about cones in the eye.

But qualia (at least colour qualia) are often caused by what happens to cones in the eye, and the nature of your colour qualia (even ones that occur in dreams) will depend on how your visual system is wired up, which in turn will depend on the cones in your eyes.

qualia are not actual colours

Of course not. That would be a category error. Qualia are what happens in our brains (or our immaterial souls, or wherever we have experiences) in response to external stimulation, or similar things that arise in other ways (e.g., in dreams).

qualia are variables which the brain uses to denote external experiences

This seems like a dangerous metaphor, because the brain presumably uses kinda-variable-like things at different levels, some of which have no direct connection to experience.

They could be inverted

I think how plausible this is depends on what sort of variables you're thinking of, and is markedly less plausible for the sorts of variables that could actually correspond to qualia.

I mean, you can imagine (lots of oversimplification going on here, but never mind) taking some single neuron and inverting what happens at all its synapses, so that the activation of that neuron has the exact opposite meaning to what it used to be but everything else in the brain carries on just as before. That would be an inversion, of course. But it would be the exact opposite of what's supposed to happen in the "inverted spectrum" thought experiment. There, you have the same physical substrate somehow producing opposite experiences; but here we have the same experiences with part of the physical substrate inverted.

But a single neuron's internal state is not in any way a plausible candidate for what a quale could be. Qualia have to be things we are consciously aware of, and we are not consciously aware of the internal states of our neurons any more than a chess program is making plans by predicting the voltages in its DRAM cells.

comment by Gurkenglas · 2016-07-05T23:05:07.999Z · LW(p) · GW(p)

If the copies are different, the question is not interesting. If the copies aren't different, what causes you to label what he sees as red? It can't be the wavelength of the light that actually goes in his eye, because his identical brain would treat red's wavelength as red.

comment by Riothamus · 2016-07-08T18:58:58.486Z · LW(p) · GW(p)

I do not think we need to go as far as i-zombies. We can take two people, show them the same object under arbitrarily close conditions, and get the answer of 'green' out of both of them while one does not experience green on account of being color-blind.

Replies from: gjm
comment by gjm · 2016-07-08T20:56:04.955Z · LW(p) · GW(p)

What do you infer from being able to do this?

(Surely not that qualia are nonphysical, which is the moral Chalmers draws from thinking about p-zombies; colour-blindness involves identifiable physical differences.)

Replies from: Riothamus
comment by Riothamus · 2016-07-12T15:40:11.315Z · LW(p) · GW(p)

This gives us these options under the Chalmers scheme:

Same input -> same output & same qualia

Same input -> same output & different qualia

Same input -> same output & no qualia

I infer the ineffable green-ness of green is not even wrong. We have no grounds for thinking there is such a thing.

comment by UmamiSalami · 2016-07-03T08:08:09.381Z · LW(p) · GW(p)

This was longer than it needed to be, and in my opinion, somewhat mistaken.

The zombie argument is not an argument for epiphenomenalism, it's an argument against physicalism. It doesn't assume that interactionist dualism is false, regardless of the fact that Chalmers happens to be an epiphenomenalist.

Chalmers furthermore specifies that this true stuff of consciousness is epiphenomenal, without causal potency—but why say that?

Maybe because interactionism violates the laws of physics and is somewhat at odds with everything we (think we) know about cognition. There may be other arguments as well. It has mostly fallen out of favor. I don't know the specific reasons why Chalmers rejects it.

Once you see the collision between the general rule that consciousness has no effect, to the specific implication that consciousness has no effect on how you think about consciousness (in any way that affects your internal narrative that you could choose to say out loud), zombie-ism stops being intuitive. It starts requiring you to postulate strange things.

In the epiphenomenalist view, for whatever evolutionary reason, we developed to have discussions and beliefs in rich inner lives. Maybe those thoughts and discussions help us with being altruistic, or maybe they're a necessary part of our own activity. Maybe the illusion of interactionism is necessary for us to have complex cognition and decisionmaking.

Also in the epiphenomenalist view, psychophysical laws relate mental states to neurophysical aspects of our cognition. So for some reason there is a relation between acting/thinking of pain, and mental states which are painful. It's not arbitrary or coincidental because the mental reaction to pain (dislike/avoid) is a mirror of the physical reaction to pain (express dislike/do things to avoid it).

But Chalmers just wrote all that stuff down, in his very physical book, and so did the zombie-Chalmers.

Chalmers isn't denying that the zombie Chalmers would write that stuff down. He's denying that its beliefs would be justified. Maybe there's a version of me in a parallel universe that doesn't know anything about philosophy but is forced to type certain combinations of letters at gunpoint - that doesn't mean that I don't have reasons to believe the same things about philosophy in this universe.

Replies from: naasking, Vladimir
comment by naasking · 2016-07-04T19:31:54.065Z · LW(p) · GW(p)

This was longer than it needed to be

Indeed. The condensed argument against p-zombies:

  1. Assume consciousness has no effect upon matter, and is therefore not intrinsic to our behaviour.
  2. P-zombies that perfectly mimic our behaviour but have no conscious/subjective experience are then conceivable.
  3. Consider then a parallel Earth that was populated only by p-zombies from its inception. Would this Earth also develop philosophers that argue over consciousness/subjective experience in precisely the same ways we have, despite the fact that none of them could possibly have any knowledge of such a thing?
  4. This p-zombie world is inconceivable.
  5. Thus, p-zombies are not observationally indistinguishable from real people with consciousness.
  6. Thus, p-zombies are inconceivable.

In the epiphenomenalist view, for whatever evolutionary reason, we developed to have discussions and beliefs in rich inner lives.

Except such discussions would have no motivational impact. A "rich inner life" has no relation to any fact in a p-zombie's brain, so in what way could this term influence their decision process? What specific sorts of discussions of "inner life" do you expect in the p-zombie world? And if it has no conceivable impact, how could we have evolved this behaviour?

Replies from: UmamiSalami
comment by UmamiSalami · 2016-07-05T00:24:20.574Z · LW(p) · GW(p)

Indeed. The condensed argument against p-zombies:

I would hope not. 3 is entirely conceivable if we grant 2, so 4 is unsupported, and nothing that EY said supports 4. 5 does not follow from 3 or 4, though it's bundled up in the definition of a p-zombie and follows from 1 and 2 anyway. In any case, 6 does not follow from 5.

What EY is saying is that it's highly implausible for all of our ideas and talk of consciousness to have come to be if subjective consciousness does not play a causal role in our thinking.

Except such discussions would have no motivational impact.

Of course they would - our considerations of other people's feelings and consciousness change our behavior all the time. And if you knew every detail about the brain, you could give an atomic-level causal account of why and how.

A "rich inner life" has no relation to any fact in a p-zombie's brain, so in what way could this term influence their decision process?

The concept of a rich inner life influences decision processes.

Replies from: naasking, TheAncientGeek
comment by naasking · 2016-07-06T00:39:42.763Z · LW(p) · GW(p)

I would hope not. 3 is entirely conceivable if we grant 2, so 4 is unsupported

It's not, and I'm surprised you find this contentious. 3 doesn't follow from 2, it follows from a contradiction between 1+2.

1 states that consciousness has no effect upon matter, and yet it's clear from observation that the concept of subjectivity only follows if consciousness can affect matter, i.e. we only have knowledge of subjectivity because we observe it first-hand. P-zombies do not have first-hand knowledge of subjectivity, as specified in 2.

If there were another way to infer subjectivity without first-hand knowledge, then that inference would resolve how physicalism entails consciousness and epiphenomenalism can be discarded using Occam's razor.

Of course they would - our considerations of other people's feelings and consciousness change our behavior all the time. And if you knew every detail about the brain, you could give an atomic-level causal account of why and how.

Except the zombie world wouldn't have feelings and consciousness, so your rebuttal doesn't apply.

The concept of a rich inner life influences decision processes.

That's an assertion, not an argument. Basically, you and the epiphenomenalists are merely asserting a) that p-zombies would somehow derive the concept of subjectivity without having knowledge of subjectivity, and b) that this subjectivity would actually be meaningful to p-zombies in a way that would influence their decisions, despite their having no first-hand knowledge of any such thing or of its relevance to their lives.

So yes, EY is saying it's implausible because it seems to multiply entities unnecessarily; I'm taking it one step further and flat-out saying this position either multiplies entities unnecessarily or is inconsistent.

Replies from: UmamiSalami
comment by UmamiSalami · 2016-07-06T05:09:23.208Z · LW(p) · GW(p)

3 doesn't follow from 2, it follows from a contradiction between 1+2.

Well, first of all, 3 isn't a statement, it's saying "consider a world where..." and then asking a question about whether philosophers would talk about consciousness. So I'm not sure what you mean by suggesting that it follows or that it is true.

1 and 2 are not contradictory. On the contrary, 1 and 2 are basically saying the exact same thing.

1 states that consciousness has no effect upon matter, and yet it's clear from observation that the concept of subjectivity only follows if consciousness can affect matter,

This is essentially what epiphenomenalists deny, and I'm inclined to say that everyone else should deny it too. Regardless of what the truth of the matter is, surely the mere concept of subjectivity does not rely upon epiphenomenalism being false.

we only have knowledge of subjectivity because we observe it first-hand.

This is confusing the issue; like I said: under the epiphenomenalist viewpoint, the cause of our discussions of consciousness (physical) is different from the justification for our belief in consciousness (subjective). Epiphenomenalists do not deny that we have first-hand experience of subjectivity; they deny that those experiences are causally responsible for our statements about consciousness.

and epiphenomenalism can be discarded using Occam's razor.

There are many criteria by which theories are judged in philosophy, and parsimony is only one of them.

Except the zombie world wouldn't have feelings and consciousness, so your rebuttal doesn't apply.

Nothing in my rebuttal relies on the idea that zombies would have feelings and consciousness. My rebuttal points out that zombies would be motivated by the idea of feelings and consciousness, which is trivially true: humans are motivated by the idea of feelings and consciousness, and zombies behave in the same way that humans do, by definition.

That's an assertion, not an argument.

But it's quite obviously true, because we talk about rich inner lives as the grounding for almost all of our moral thought, and then act accordingly, and because empathy relies on being able to infer rich inner lives among other people. And as noted earlier, whatever behaviorally motivates humans also behaviorally motivates p-zombies.

Replies from: naasking
comment by naasking · 2016-07-06T15:19:43.930Z · LW(p) · GW(p)

Epiphenomenalists do not deny that we have first-hand experience of subjectivity; they deny that those experiences are causally responsible for our statements about consciousness.

Since this is the crux of the matter, I won't bother debating the semantics of most of the other disagreements in the interest of time.

As for whether subjectivity is causally efficacious, all knowledge would seem to derive from some set of observations. Even possibly fictitious concepts, like unicorns and abstract mathematics, are generalizations or permutations of concepts that were first observed.

Do you have even a single example of a concept that did not arise in this manner? Generalizations remove constraints on a concept, so they aren't a counterexample; generalization is just another form of permutation. If no such example exists, why should I accept the claim that knowledge of subjectivity can arise without subjectivity?

Replies from: UmamiSalami
comment by UmamiSalami · 2016-07-06T20:39:33.783Z · LW(p) · GW(p)

Unlike the other points which I raised above, this one is semantic. When we talk about "knowledge," we are talking about neurophysical responses, or we are talking about subjective qualia, or we are implicitly combining the two together. Epiphenomenalists, like physicalists, believe that sensory data causes the neurophysical responses in the brain which we identify with knowledge. They disagree with physicalists because they say that our subjective qualia are epiphenomenal shadows of those neurophysical responses, rather than being identical to them. There is no real world example that would prove or disprove this theory because it is a philosophical dispute. One of the main arguments for it is, well, the zombie argument.

Replies from: naasking
comment by naasking · 2016-07-16T16:31:43.531Z · LW(p) · GW(p)

Epiphenomenalists, like physicalists, believe that sensory data causes the neurophysical responses in the brain which we identify with knowledge. They disagree with physicalists because they say that our subjective qualia are epiphenomenal shadows of those neurophysical responses, rather than being identical to them. There is no real world example that would prove or disprove this theory because it is a philosophical dispute. One of the main arguments for it is, well, the zombie argument.

Which seems to suggest that epiphenomenalism either begs the question, or multiplies entities unnecessarily by accepting unjustified intuitions.

So my original argument disproving p-zombies would seem to be on just as solid footing as the original p-zombie argument itself, modulo our disagreements over wording.

Replies from: UmamiSalami
comment by UmamiSalami · 2016-07-16T17:04:16.749Z · LW(p) · GW(p)

Which seems to suggest that epiphenomenalism either begs the question,

Well, they do have arguments for their positions.

or multiplies entities unnecessarily by accepting unjustified intuitions.

It actually seems very intuitive to most people that subjective qualia are different from neurophysical responses. It is the key issue at stake with zombie and knowledge arguments and has made life extremely difficult for physicalists. I'm not sure in what way it's unjustified for me to have an intuition that qualia are different from physical structures, and rather than epiphenomenalism multiplying entities unnecessarily, it sure seems to me like physicalism is equivocating entities unnecessarily.

So my original argument disproving p-zombies would seem to be on just as solid footing as the original p-zombie argument itself, modulo our disagreements over wording.

Nothing you said indicates that p-zombies are inconceivable or even impossible. What you and EY seem to be saying is that our discussion of consciousness is a posteriori evidence that our consciousness is not epiphenomenal.

Replies from: naasking
comment by naasking · 2016-07-27T13:08:49.003Z · LW(p) · GW(p)

I'm not sure in what way it's unjustified for me to have an intuition that qualia are different from physical structures

It's unjustified in the same way that vitalism was an unjustified explanation of life: it's purely a product of our ignorance. Our perception of subjective experience/first-hand knowledge is no more proof of its accuracy than our perception that water breaks pencils.

Intuition pumps supporting the accuracy of said perception either beg the question or multiply entities unnecessarily (as detailed below).

Nothing you said indicates that p-zombies are inconceivable or even impossible.

I disagree. You've said that epiphenomenalists hold that having first-hand knowledge is not causally related to our conception and discussion of first-hand knowledge. This premise has no firm justification.

Denying it yields my original argument for inconceivability via the p-zombie world. Accepting it requires multiplying entities unnecessarily, for if such knowledge is not causally efficacious, then it serves no more purpose than the vital force in vitalism, and will inevitably be discarded given a proper scientific account of consciousness, somewhat like this one.

I previously asked for any example of knowledge that was not a permutation of properties previously observed. If you can provide one such example, this would undermine my position.

Replies from: UmamiSalami
comment by UmamiSalami · 2016-07-27T15:28:59.574Z · LW(p) · GW(p)

It's unjustified in the same way that vitalism was an unjustified explanation of life: it's purely a product of our ignorance.

It's not. Suppose that the ignorance went away: a complete physical explanation of each of our qualia - "the redness of red comes from these neurons in this part of the brain, the sound of birds flapping their wings is determined by the structure of electric signals in this region," and so on - would do nothing to remove our intuitions about consciousness. But a complete mechanistic explanation of how organ systems work would (and did) remove the intuitions behind vitalism.

I disagree. You've said that epiphenomenalists hold that having first-hand knowledge is not causally related to our conception and discussion of first-hand knowledge. This premise has no firm justification.

Well... that's just what is implied by epiphenomenalism, so the justification for it is whatever reasons we have to believe epiphenomenalism in the first place. (Though most people who gravitate towards epiphenomenalism seem to do so out of the conviction that none of the alternatives work.)

Denying it yields my original argument for inconceivability via the p-zombie world.

As I've said already, your argument can't show that zombies are inconceivable. It only attempts to show that an epiphenomenalist world is probabilistically implausible. These are very different things.

Accepting it requires multiplying entities unnecessarily, for if such knowledge is not causally efficacious

Well the purpose of rational inquiry is to determine which theories are true, not which theories have the fewest entities. Anyone who rejects solipsism is multiplying entities unnecessarily.

I previously asked for any example of knowledge that was not a permutation of properties previously observed.

I don't see why this should matter for the zombie argument or for epiphenomenalism. In the post where you originally asked this, you were confused about the contextual usage and meaning behind the term 'knowledge.'

comment by TheAncientGeek · 2016-07-05T18:17:06.967Z · LW(p) · GW(p)

What EY is saying is that it's highly implausible for all of our ideas and talk of consciousness to have come to be if subjective consciousness does not play a causal role in our thinking

Although he is also saying that our ideas about free will come about from a source other than free will.

comment by Vladimir · 2016-07-03T18:48:28.322Z · LW(p) · GW(p)

forced to type certain combinations of letters at gunpoint

Except there can't be a gunman in the zombie universe if it's the same as ours (unless... that explains everything!). This essay is trying to convince you that there's no way you can write about consciousness without something real causing you to write about consciousness. Even a mistaken belief about consciousness has to come from somewhere. Try now to imagine a zombie world with no metaphorical gunman and see what comes up.

Replies from: UmamiSalami
comment by UmamiSalami · 2016-07-03T22:32:46.437Z · LW(p) · GW(p)

Well that's answered by what I said about psychophysical laws and the evolutionary origins of consciousness. What caused us to believe in consciousness is not (necessarily) the same issue as what reasons we have to believe it.

Replies from: Vladimir
comment by Vladimir · 2016-07-04T22:54:08.832Z · LW(p) · GW(p)

I think you're smuggling the gunman into evolution. I can come up with good evolutionary reasons why people talk about God despite him not existing, but I can't come up with good evolutionary reasons why people talk about consciousness despite it not existing. It would take too long to go into detail, but I think if you try to distinguish the God example from the consciousness example, you'll see that the one false belief is in a completely different category from the other.

comment by CronoDAS · 2016-07-02T21:04:32.209Z · LW(p) · GW(p)

Can you make "something" with the same input-output behavior as a human, and have that thing not be conscious? It doesn't have to be atom-by-atom identical.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2016-07-02T21:08:53.660Z · LW(p) · GW(p)

Sure. Measure a human's input and output. Play back the recording. Or did you mean across all possible cases? In the latter case see http://lesswrong.com/lw/pa/gazp_vs_glut/
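The "play back the recording" move can be sketched as follows (an editorial illustration; the recorded questions and answers are invented): over the recorded cases, the replay is input/output-identical to the human, but nothing happens at replay time beyond a lookup.

```python
# Hypothetical recorded session: input -> output pairs measured from a human.
recording = {
    "Are you conscious?": "Yes, of course.",
    "What colour is a ripe tomato?": "Red.",
}

def playback(prompt):
    # Pure replay: identical behaviour on recorded inputs; any "thinking"
    # happened when the recording was made, not here.
    return recording[prompt]

print(playback("Are you conscious?"))  # Yes, of course.
```

This matches behaviour only over the recorded inputs; matching it across all possible cases is the GLUT scenario discussed at the link above.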

Replies from: CronoDAS
comment by CronoDAS · 2016-07-03T00:33:17.199Z · LW(p) · GW(p)

Yeah, I meant in all possible cases. Start with a Brain In A Vat. Scan that brain and implement a GLUT in Platospace, then hook up the Brain-In-A-Vat and the GLUT to identical robots, and you'll have one robot that's conscious and one that isn't, right?

Replies from: kilobug, MrMind, TheAncientGeek
comment by kilobug · 2016-07-05T14:03:21.899Z · LW(p) · GW(p)

Did you read the GAZP vs GLUT article? In the GLUT setup, the conscious entity is the conscious human (or, actually, more like a googolplex of conscious humans) that produced the GLUT, and the robot replaying the GLUT is no more conscious than a phone transmitting an answer from one conscious human to another. That is basically what it is doing: replaying the answer given by a previous, conscious human to the same input.

Replies from: Houshalter
comment by Houshalter · 2016-07-05T17:55:20.566Z · LW(p) · GW(p)

I don't think the origin of the GLUT matters at all. It could have sprung up out of pure randomness. The point is that it exists, and appears to be conscious by every outward measure, but isn't.

Replies from: kilobug
comment by kilobug · 2016-07-05T21:34:37.223Z · LW(p) · GW(p)

It definitely does matter.

If you build a human-like robot, remotely controlled by a living human (or by a brain-in-a-vat), and interact with the robot, it'll appear to be conscious but isn't; yet it wouldn't be a zombie in any way, because what actually produces the responses about being conscious would be the human (or the brain), not the robot.

If the GLUT was produced by a conscious human (or conscious human simulation), then it's akin to a telepresence robot, only slightly more remote (like the telepresence robot is only slightly more remote than a phone).

And if it "sprung up out of pure randomness"... if you are ready to accept that level of improbability, you can accept anything: the hypothesis that no human actually wrote what I'm replying to, but that it's just the product of cosmic rays hitting my computer in the exact pattern for such a text to be displayed on my browser; or that Shakespeare was actually written by monkeys typing at random. If you start accepting such ridiculous levels of improbability, something even lower than one chance in a googolplex, you are just accepting everything and anything, making all attempts to reason or discuss pointless.

Replies from: Houshalter
comment by Houshalter · 2016-07-09T06:23:00.396Z · LW(p) · GW(p)

The question is whether the GLUT is conscious. I don't believe that it is.

Perhaps it was created by a conscious process. But that process is gone now. I don't believe that torturing the GLUT is wrong, for example, because the conscious entity has already been tortured. Nothing I do to the GLUT can causally interact with the conscious process that created it.

This is why I say the origin of the GLUT doesn't matter. I'm not saying that I believe GLUTs are actually likely to exist, let alone appear from randomness. But the origin of a thing shouldn't matter to the question of whether or not it is conscious.

If we can observe every part of the GLUT, but know nothing about its origin, we should still be able to determine whether it's conscious or not. The question shouldn't depend on its past history, but only on its current state.

I believe it might be possible for a non-conscious entity to create a GLUT, or at least to fake consciousness. Like a simple machine learning algorithm that imitates human speech or text. Or AIXI with its unlimited computing power, which doesn't do anything other than brute force. I wouldn't feel bad about deleting an artificial neural network, or destroying an AIXI.

The question that bothers me is: what about a bigger, more human-like neural network? Or a more approximate, less brute-force version of AIXI? When does an intelligence algorithm gain moral weight? This question bothers me a lot, and I think it's what people are trying to get at when they talk about GLUTs.

Replies from: dxu
comment by dxu · 2016-07-18T16:21:21.530Z · LW(p) · GW(p)

So, the question being asked here appears to be, "Can a GLUT be considered conscious?" I claim that this question is actually a stand-in for multiple different questions, each of which I will address individually.

1) Do the processes that underlie the GLUT's behavior (input/output) cause it to possess subjective awareness?

Without a good understanding of what exactly "subjective awareness" is and how it arises, this question is extremely difficult to answer. At a glance, however, it seems intuitively plausible (indeed, probable) that whatever processes underlie "subjective awareness", they need to be more complex than simply looking things up in an (admittedly enormous) database. So, I'm going to answer this one with a tentative "no".

2) Does the GLUT's existence imply the presence of consciousness (subjective awareness) elsewhere in the universe?

To answer this question, let's consider the size of a GLUT that contains all possible inputs and outputs for a conscious being. Now consider the set of all possible GLUTs of that size. Of those possible GLUTs, only a vanishingly minuscule fraction encode anything even remotely resembling the behavior of a conscious being. The probability of such a GLUT being produced by accident is virtually 0. (I think the actual probability should be on the order of 2^-K, where K is the Kolmogorov complexity in bits of the brain of the being in question, but I could be wrong.)

As such, it's more or less impossible for the GLUT to have been produced by chance; it's indescribably more likely that there exists some other conscious process in the universe from which the GLUT's specifications were taken. In other words, if you ever encounter a GLUT that seems to behave like a conscious being, you can deduce with probability ~1 that consciousness exists somewhere in that universe. Thus, the answer to this question is "yes" with probability ~1.

3) Assuming that the GLUT was produced by chance and that the conscious being whose behavior it emulates does not and will not ever physically exist, can it still be claimed that the GLUT's existence implies the presence of consciousness somewhere?

This is the most ill-defined question of the lot, but hopefully I at least managed to render it into something comprehensible (if not easily answered!). To answer it, first we have to understand that while a GLUT may not be conscious itself, it certainly encodes a conscious process, i.e. you could theoretically specify a conscious process embedded in a physical medium (say, a brain, or maybe a computer) that, when run with a certain input, will produce the exact output that the GLUT produces given that input. (This is not a trivial statement, by the way; the set of GLUTs that fulfill this condition is tiny relative to the space of possible GLUTs.)

However, suppose we don't have that process available to us, only the GLUT itself. Then the question above is simply asking, "In what sense can the process encoded by the GLUT be said to 'exist'?" This is still a hard question, but it has one major advantage over the old phrasing: we can draw a direct parallel between this question and the debate over mathematical realism. In other words: if you accept mathematical realism, you should also be fine with accepting that the conscious process encoded by the GLUT exists in a Platonic sense, and if you reject it, you should likewise reject the existence of said process. Now, like most debates in philosophy, this one is unsettled--but at least now you know that your answer to the original question regarding GLUTs concretely depends on your answer to another question--namely, "Do you accept mathematical realism?", rather than nebulously floating out there in the void. (Note that since I consider myself a mathematical realist, I would answer "yes" to both questions. Your answer may differ.)

4) Under standard human values (e.g. the murder of a conscious being is generally considered immoral, etc.), should the destruction of a GLUT be considered immoral?

In my opinion, this question is actually fairly simple to answer. Recall that a GLUT, while not being conscious itself, encodes a conscious process. This means (among other things) that we could theoretically use the information contained in the look-up table to construct that conscious being, even if that being never existed beforehand. Since destroying the GLUT would remove our ability to construct said being, we can clearly classify it as an immoral act (though whether it should be considered as immoral as the murder of a preexisting conscious being is still up for debate).

It seems to me that the four questions listed above suffice to describe all of the disguised queries the original question ("Can a GLUT be considered conscious?") stood for. Assuming I answered each of them in a sufficiently thorough manner, the original question should be resolved as well--and ideally, there shouldn't even be the feeling that there's a question left. Of course, that's if I did this thing correctly.

So, did I miss anything?

comment by MrMind · 2016-07-04T10:14:40.015Z · LW(p) · GW(p)

Hmm... is that true?
The only difference is that they were conscious at different times.
Also, creating a GLUT out of a person is an extremely immoral thing to do.

comment by TheAncientGeek · 2016-07-03T10:18:52.390Z · LW(p) · GW(p)

Right, given the standard assumptions of the reductionphile/zombiephobe side. The tribe has standard intuitions about how some agents are more real (more ethically relevant) than others, and, in the face of scepticism about qualia and souls, tends to cash that out in terms of computational complexity.

ETA

It's guesswork based on opinion.

See also:

http://lesswrong.com/lw/5ot/nature_red_in_truth_and_qualia/dcog

comment by Gram_Stone · 2016-07-03T15:02:41.585Z · LW(p) · GW(p)

I skipped the stuff on p-zombies when I read R:AZ because I experienced difficulty reading 'Zombies! Zombies?' and I didn't expect it to be as useful as what came later in the book.

Now I feel silly, because this time my reading experience was fluent and I had that extra processing motivation from the content's 'recency'.

Now I think it didn't have much at all to do with how useful I thought it would be at the time. It seems more like I asked "Are there minutiae on Penrose?" rather than "Will this be useful to know?"

After all, I didn't read the rest of the book via some mantra of instrumentality; not really. Instrumental value was a nice side effect, like profits to the cheesecake industry are a nice side effect of consuming cheesecake. I really read it because it was enjoyable to read.

So, this content now seems accessible to at least one person to whom it did not seem accessible before. That seems like a plausible goal of someone rewriting something.

comment by Piecewise · 2016-07-04T16:27:44.597Z · LW(p) · GW(p)

"a being that is exactly like you in every respect—identical behavior, identical speech, identical brain; every atom and quark in exactly the same position, moving according to the same causal laws of motion—except that your zombie is not conscious."

As someone with a medical background, I find it very hard to believe this is possible. Not unless Consciousness is reduced to something so abstract and disconnected from what we consider our "Selves" as to render it almost meaningless. After all, traumatic brain injury can alter every aspect of your personality, capacity to reason, and ability to perceive. And if "consciousness" isn't bound up in any of these things, if it exists as some sort of super disconnected "Thinking thing" like Descartes seemed to think, I really can't see the value of it. It's like the Greek interpretation of the afterlife where your soul exists as a senseless shadow, lacking any concept of self or any memory of your past life. What good is an existence that lacks all the things which make it unique?

Then again, as a somewhat brutal pragmatist, I cease to see the meaning in having an argument when it seems to devolve beyond any connection to observable reality.

Replies from: kilobug, ian-wardell-1
comment by kilobug · 2016-07-05T11:47:44.794Z · LW(p) · GW(p)

I agree with your point in general, and it does speak against an immaterial soul surviving death, but I don't think it necessarily applies to p-zombies. The p-zombie hypothesis is that the consciousness "property" has no causal power over the physical world, but it doesn't say that there is no causality the other way around: that the state of the physical brain can't affect the consciousness. So a traumatic brain injury would (by some unexplained, mysterious mechanism) be reflected in that immaterial consciousness.

But sure, it's yet more epicycles.

Replies from: buybuydandavis
comment by buybuydandavis · 2016-07-09T11:58:41.751Z · LW(p) · GW(p)

You're watching a POV movie of a meat bag living out its life. When the meat bag falls apart, the movie gets crapped up.

comment by Ian Wardell (ian-wardell-1) · 2022-08-11T10:54:18.961Z · LW(p) · GW(p)

@Piecewise  You don't appear to be discussing p-zombies at all

traumatic brain injury can alter every aspect of your personality, capacity to reason, and ability to perceive.

Significant damage to the lenses in one's eyeglasses significantly impacts one's ability to see.  Doesn't mean I can't see perfectly when I whip them off.  

And I have no idea why a disembodied consciousness would have no concept of self and no memories. Consciousness and memories are properties of the self. Our recollections are impeded by the brain; that obviously doesn't apply in a disembodied state.

comment by Vladimir · 2016-07-03T00:10:40.345Z · LW(p) · GW(p)

I'm curious to see if this convinces Bryan Caplan & Sam Harris.

comment by Ian Wardell (ian-wardell-1) · 2022-08-11T10:10:45.847Z · LW(p) · GW(p)

I'm afraid none of this article is relevant.  The whole point of the p-zombie argument is to refute materialism. Now, materialists are obliged to hold that the world is physically closed, meaning that everything that ever happens is purely due to chains of physical causes and effects.

So, are they saying that consciousness therefore doesn't have any causal agency?  That it is only the underlying correlated physical processes in the brain that have causal agency?  No, because they claim that consciousness is the very same thing as the underlying physical processes.

Now, even if we accept that this somehow makes sense, i.e. that consciousness is actually one and the very same thing as physical processes, the point here is that it remains the case that the interactions of molecules, as mathematically described by the laws of physics, are sufficient in and of themselves to account for all change in the world, including everything we human beings do, say and even think.

In other words, even if consciousness is identical to physical processes, we still only need appeal to physical processes to account for all change in the world, including everything we human beings do, say and even think. Any causal efficacy of consciousness is redundant. It is superfluous.  We do not need this causal agency as well as physical causation.  But, this being so, then a world of p-zombies is at least metaphysically possible.

Your arguments against p-zombies (which I'm sympathetic towards), cannot rescue materialism since your arguments appear to be directed against physical causal closure.  But physical causal closure is precisely that which materialists cannot deny!

comment by SquirrelInHell · 2016-07-02T23:01:48.728Z · LW(p) · GW(p)

"Darn, I'm out of orange juice." The sound of these words is probably represented in your auditory cortex, as though you'd heard someone else say it.

Note for accuracy:

If something puts a significant strain on short-term memory, it does indeed end up relying on the "inner voice" for extended/improved storage.

However, extending this to "Darn, I'm out of orange juice." is unfounded and, depending on the particular person, false to a greater or lesser degree.

comment by jefftk (jkaufman) · 2016-07-12T19:58:08.745Z · LW(p) · GW(p)

I was curious about the diff, specifically what sections were being removed. This is too long for a comment, so I'll post each one as a reply to this comment.

Replies from: jkaufman, jkaufman, jkaufman, jkaufman, jkaufman, jkaufman, jkaufman, jkaufman
comment by jefftk (jkaufman) · 2016-07-12T19:59:54.556Z · LW(p) · GW(p)

Mind you, I am not saying this is a substitute for careful analytic refutation of Chalmers's thesis. System 1 is not a substitute for System 2, though it can help point the way. You still have to track down where the problems are specifically.

Chalmers wrote a big book, not all of which is available through free Google preview. I haven't duplicated the long chains of argument where Chalmers lays out the arguments against himself in calm detail. I've just tried to tack on a final refutation of Chalmers's last presented defense, which Chalmers has not yet countered to my knowledge. Hit the ball back into his court, as it were.

But, yes, on a core level, the sane thing to do when you see the conclusion of the zombie argument, is to say "That can't possibly be right" and start looking for a flaw.

comment by jefftk (jkaufman) · 2016-07-12T19:59:47.016Z · LW(p) · GW(p)

I have a nonstandard perspective on philosophy because I look at everything with an eye to designing an AI; specifically, a self-improving Artificial General Intelligence with stable motivational structure.

When I think about designing an AI, I ponder principles like probability theory, the Bayesian notion of evidence as differential diagnostic, and above all, reflective coherence. Any self-modifying AI that starts out in a reflectively inconsistent state won't stay that way for long.

If a self-modifying AI looks at a part of itself that concludes "B" on condition A—a part of itself that writes "B" to memory whenever condition A is true—and the AI inspects this part, determines how it (causally) operates in the context of the larger universe, and the AI decides that this part systematically tends to write false data to memory, then the AI has found what appears to be a bug, and the AI will self-modify not to write "B" to the belief pool under condition A.

Any epistemological theory that disregards reflective coherence is not a good theory to use in constructing self-improving AI. This is a knockdown argument from my perspective, considering what I intend to actually use philosophy for. So I have to invent a reflectively coherent theory anyway. And when I do, by golly, reflective coherence turns out to make intuitive sense.

So that's the unusual way in which I tend to think about these things. And now I look back at Chalmers:

The causally closed "outer Chalmers" (that is not influenced in any way by the "inner Chalmers" that has separate additional awareness and beliefs) must be carrying out some systematically unreliable, unwarranted operation which in some unexplained fashion causes the internal narrative to produce beliefs about an "inner Chalmers" that are correct for no logical reason in what happens to be our universe.

But there's no possible warrant for the outer Chalmers or any reflectively coherent self-inspecting AI to believe in this mysterious correctness. A good AI design should, I think, look like a reflectively coherent intelligence embodied in a causal system, with a testable theory of how that selfsame causal system produces systematically accurate beliefs on the way to achieving its goals.

So the AI will scan Chalmers and see a closed causal cognitive system producing an internal narrative that is uttering nonsense. Nonsense that seems to have a high impact on what Chalmers thinks should be considered a morally valuable person.

This is not a necessary problem for Friendly AI theorists. It is only a problem if you happen to be an epiphenomenalist. If you believe either the reductionists (consciousness happens within the atoms) or the substance dualists (consciousness is causally potent immaterial stuff), people talking about consciousness are talking about something real, and a reflectively consistent Bayesian AI can see this by tracing back the chain of causality for what makes people say "consciousness".

comment by jefftk (jkaufman) · 2016-07-12T19:59:29.185Z · LW(p) · GW(p)

... (Argument from career impact is not valid, but I say it to leave a line of retreat.)

Chalmers critiques substance dualism on the grounds that it's hard to see what new theory of physics, what new substance that interacts with matter, could possibly explain consciousness. But property dualism has exactly the same problem. No matter what kind of dual property you talk about, how exactly does it explain consciousness?

When Chalmers postulated an extra property that is consciousness, he took that leap across the unexplainable. How does it help his theory to further specify that this extra property has no effect? Why not just let it be causal?

If I were going to be unkind, this would be the time to drag in the dragon—to mention Carl Sagan's parable of the dragon in the garage. "I have a dragon in my garage." Great! I want to see it, let's go! "You can't see it—it's an invisible dragon." Oh, I'd like to hear it then. "Sorry, it's an inaudible dragon." I'd like to measure its carbon dioxide output. "It doesn't breathe." I'll toss a bag of flour into the air, to outline its form. "The dragon is permeable to flour."

One motive for trying to make your theory unfalsifiable, is that deep down you fear to put it to the test. Sir Roger Penrose (physicist) and Stuart Hameroff (neurologist) are substance dualists; they think that there is something mysterious going on in quantum mechanics, that Everett is wrong and that the "collapse of the wave-function" is physically real, and that this is where consciousness lives and how it exerts causal effect upon your lips when you say aloud "I think therefore I am." Believing this, they predicted that neurons would protect themselves from decoherence long enough to maintain macroscopic quantum states.

This is in the process of being tested, and so far, prospects are not looking good for Penrose—

—but Penrose's basic conduct is scientifically respectable. Not Bayesian, maybe, but still fundamentally healthy. He came up with a wacky hypothesis. He said how to test it. He went out and tried to actually test it.

As I once said to Stuart Hameroff, "I think the hypothesis you're testing is completely hopeless, and your experiments should definitely be funded. Even if you don't find exactly what you're looking for, you're looking in a place where no one else is looking, and you might find something interesting."

So a nasty dismissal of epiphenomenalism would be that zombie-ists are afraid to say the consciousness-stuff can have effects, because then scientists could go looking for the extra properties, and fail to find them.

I don't think this is actually true of Chalmers, though. If Chalmers lacked self-honesty, he could make things a lot easier on himself.

(But just in case Chalmers is reading this and does have falsification-fear, I'll point out that if epiphenomenalism is false, then there is some other explanation for that-which-we-call consciousness, and it will eventually be found, leaving Chalmers's theory in ruins; so if Chalmers cares about his place in history, he has no motive to endorse epiphenomenalism unless he really thinks it's true.)

comment by jefftk (jkaufman) · 2016-07-12T19:59:08.256Z · LW(p) · GW(p)

The zombie argument does not rest solely on the intuition of the passive listener. If this was all there was to the zombie argument, it would be dead by now, I think. The intuition that the "listener" can be eliminated without effect, would go away as soon as you realized that your internal narrative routinely seems to catch the listener in the act of listening.

comment by jefftk (jkaufman) · 2016-07-12T19:58:58.726Z · LW(p) · GW(p)

By supposition, the Zombie World is atom-by-atom identical to our own, except that the inhabitants lack consciousness. Furthermore, the atoms in the Zombie World move under the same laws of physics as in our own world. If there are "bridging laws" that govern which configurations of atoms evoke consciousness, those bridging laws are absent. But, by hypothesis, the difference is not experimentally detectable. When it comes to saying whether a quark zigs or zags or exerts a force on nearby quarks—anything experimentally measurable—the same physical laws govern.

The Zombie World has no room for a Zombie Master, because a Zombie Master has to control the zombie's lips, and that control is, in principle, experimentally detectable. The Zombie Master moves lips, therefore it has observable consequences. There would be a point where an electron zags, instead of zigging, because the Zombie Master says so. (Unless the Zombie Master is actually in the world, as a pattern of quarks—but then the Zombie World is not atom-by-atom identical to our own, unless you think this world also contains a Zombie Master.)

When a philosopher in our world types, "I think the Zombie World is possible", his fingers strike keys in sequence: Z-O-M-B-I-E. There is a chain of causality that can be traced back from these keystrokes: muscles contracting, nerves firing, commands sent down through the spinal cord, from the motor cortex—and then into less understood areas of the brain, where the philosopher's internal narrative first began talking about "consciousness".

And the philosopher's zombie twin strikes the same keys, for the same reason, causally speaking. There is no cause within the chain of explanation for why the philosopher writes the way he does, which is not also present in the zombie twin. The zombie twin also has an internal narrative about "consciousness", that a super-fMRI could read out of the auditory cortex. And whatever other thoughts, or other causes of any kind, led to that internal narrative, they are exactly the same in our own universe and in the Zombie World.

So you can't say that the philosopher is writing about consciousness because of consciousness, while the zombie twin is writing about consciousness because of a Zombie Master or AI chatbot. When you trace back the chain of causality behind the keyboard, to the internal narrative echoed in the auditory cortex, to the cause of the narrative, you must find the same physical explanation in our world as in the zombie world.

comment by jefftk (jkaufman) · 2016-07-12T19:58:48.321Z · LW(p) · GW(p)

One of the great battles in the Zombie Wars is over what, exactly, is meant by saying that zombies are "possible". Early zombie-ist philosophers (the 1970s) just thought it was obvious that zombies were "possible", and didn't bother to define what sort of possibility was meant.

Because of my reading in mathematical logic, what instantly comes into my mind is logical possibility. If you have a collection of statements like (A->B),(B->C),(C->~A) then the compound belief is logically possible if it has a model—which, in the simple case above, reduces to finding a value assignment to A, B, C that makes all of the statements (A->B),(B->C), and (C->~A) true. In this case, A=B=C=0 works, as does A=0, B=C=1 or A=B=0, C=1.

Something will seem possible—will seem "conceptually possible" or "imaginable"—if you can consider the collection of statements without seeing a contradiction. But it is, in general, a very hard problem to see contradictions or to find a full specific model! If you limit yourself to simple Boolean propositions of the form ((A or B or C) and (B or ~C or D) and (D or ~A or ~C) ...), conjunctions of disjunctions of three variables, then this is a very famous problem called 3-SAT, which is one of the first problems ever to be proven NP-complete.
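The toy system above, (A->B), (B->C), (C->~A), is small enough that its models can be found by brute force; a minimal sketch, encoding each implication P->Q as (not P) or Q:

```python
from itertools import product

# Brute-force model search for the example system:
#   (A -> B), (B -> C), (C -> ~A)
def implies(p, q):
    return (not p) or q

# Enumerate all 2^3 truth assignments and keep the satisfying ones.
models = [
    (a, b, c)
    for a, b, c in product([False, True], repeat=3)
    if implies(a, b) and implies(b, c) and implies(c, not a)
]
print(models)
# -> [(False, False, False), (False, False, True), (False, True, True)]
```

These are exactly the three assignments listed above (A=B=C=0; A=B=0, C=1; A=0, B=C=1). For three variables the enumeration is instant, but the same search over n variables takes 2^n steps in the worst case, which is why "I don't see a contradiction" stops being cheap evidence as the system grows.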

So just because you don't see a contradiction in the Zombie World at first glance, it doesn't mean that no contradiction is there. It's like not seeing a contradiction in the Riemann Hypothesis at first glance. From conceptual possibility ("I don't see a problem") to logical possibility in the full technical sense, is a very great leap. It's easy to make it an NP-complete leap, and with first-order theories you can make it arbitrarily hard to compute even for finite questions. And it's logical possibility of the Zombie World, not conceptual possibility, that is needed to suppose that a logically omniscient mind could know the positions of all the atoms in the universe, and yet need to be told as an additional non-entailed fact that we have inner listeners.

comment by jefftk (jkaufman) · 2016-07-12T19:58:36.380Z · LW(p) · GW(p)

Zombie-ism is not the same as dualism. Descartes thought there was a body-substance and a wholly different kind of mind-substance, but Descartes also thought that the mind-substance was a causally active principle, interacting with the body-substance, controlling our speech and behavior. Subtracting out the mind-substance from the human would leave a traditional zombie, of the lurching and groaning sort.

And though the Hebrew word for the innermost soul is N'Shama, that-which-hears, I can't recall hearing a rabbi arguing for the possibility of zombies. Most rabbis would probably be aghast at the idea that the divine part which God breathed into Adam doesn't actually do anything.

comment by jefftk (jkaufman) · 2016-07-12T19:58:28.141Z · LW(p) · GW(p)

(Warning: Long post ahead. Very long 6,600-word post involving David Chalmers ahead. This may be taken as my demonstrative counterexample to Richard Chappell's Arguing with Eliezer Part II, in which Richard accuses me of not engaging with the complex arguments of real philosophers.)

comment by turchin · 2016-07-06T12:56:10.390Z · LW(p) · GW(p)

I think that the main problem with this (and similar) reasoning is its circularity. This circularity doesn't make such reasoning untrue, but it weakens its evidence base.

It starts with the premise that (some version of) physicalism is true. And its conclusion is that there is nothing that is not physical.

If we take one definition of physicalism from the SEP:

"(1) Physicalism is true at a possible world w iff any world which is a physical duplicate of w is a duplicate of w simpliciter." http://plato.stanford.edu/entries/physicalism/#NonModDefPhy

It means that, by definition, a copy has all the same qualities (including consciousness and qualia) as the original.

We can see that zombies are impossible in a physical world by the very definition of a physical world.

So there is no need to prove anything.

So the only thing we need to do in order to disprove zombies is to prove that physicalism is true. So we can go to the end of the SEP article and check what kinds of proofs exist.

There are two:

1) The idea of the causal closure of the whole world

2) Knowledge from science about physics, and the idea that philosophy should be similar to the most successful scientific explanation of the world

Both of them have problems, for example: What is causality? And if we found ourselves in a world best described by the existence of many small gods, should we take that as proof of their metaphysical nature?

Basically, the first 50 pages of the SEP article are about problems with physicalism, and I was surprised that there are so many of them.

These ideas penetrate typical discussions about consciousness in subtle form. Someone starts with: "OK, we know that all our thoughts consist of atoms, neurons, etc., so there is no place for qualia." He is using the latest knowledge from science to prove his point, that is, using (2). But scientific knowledge about where and how we have experiences is incomplete, and he is also implicitly using science as a proof of physicalism.

I think that we need a new definition and a new proof of physicalism, and then we will be able to solve all its puzzles.

comment by curiousone_duplicate0.944435470947067 · 2017-06-29T19:11:36.516Z · LW(p) · GW(p)

This "not-quite-faith-based reductionism" stating that consciousness lies within physics and is just not yet understood - is it valid? It seems strange to me to place such trust in the solving capacity of a particular scientific field, when the opposite claim (that consciousness does not lie within physics) is dismissed merely as something that will be proven wrong once the opposite of the opposite is proven right.

comment by turchin · 2016-07-11T15:17:40.247Z · LW(p) · GW(p)

By the way, some kinds of p-zombies are possible.

It's me in different modal states. For example, a "possible me" will have all the same thoughts (possible thoughts) and possible experiences, but will not have any subjective experiences (unless we patch this with modal realism, but that comes with a price).

For example, if I am choosing between two ways home, one pleasant and one full of pain, I can imagine a copy of me which walks home with some suffering, thinking about its subjective experiences; but after I choose to walk home the pleasant way, that copy will never come into existence. So the main difference between the possible me and the real me is that the possible me doesn't have any experiences. But it can think about its experiences.

It is also similar to the counterfactual mugging argument, where we should think about the possible me as real.

Replies from: gjm
comment by gjm · 2016-07-11T15:55:17.853Z · LW(p) · GW(p)

The usual point of p-zombies is to support non-physicalism. I hope it's clear that these purely-imagined possible-yous don't do that.

(Just as the fact that I can imagine disconnecting all my computer equipment from its electricity supply but having it continue to work doesn't offer any evidence that it doesn't work by electricity; it just means that my imagination is sufficiently coarse-grained that I'm often happy to imagine things that aren't actually possible.)

If you are not a modal realist, I don't think it's at all true that "the main difference for me of possible me from real is that possible me doesn't have any experiences"; a more important difference, surely, is that possible-you exists only inside your head whereas real-you exists in the actual world. And therefore, in particular, despite the "possible" in his name, possible-you is not constrained to actually be possible, unlike real-you.

It also seems to me that saying that possible-you differs from real-you mostly in not having experiences involves a sort of level-confusion. Within your imagination, possible-you has experiences (which is why it will be complaining about the pain as it takes the horrible way home, right?). In reality, possible-you has no experiences -- but also doesn't exist at all.

Replies from: turchin
comment by turchin · 2016-07-11T20:31:29.039Z · LW(p) · GW(p)

Unfortunately, even "possible me" zombies may be used against physicalism. I read the argument for that in Kant's Critique of Pure Reason.

Imagine two full universes, one real and one possible, and ask what the difference between them is. There will be no material differences between them: the energy, atoms, and observers will be the same. Any difference between them will be something unobservable. For Kant this is not a problem, as it just shows the absurdity of asking such questions, and I am sure that in our time he would use the idea of p-zombies just to demonstrate the principal limits of our knowledge of metaphysics. Kant mentions this only very briefly, even more briefly than I explain it here. He just said that there is no difference between a thing which is possible in all aspects and a real thing.

So we need to add some kind of "vital energy" to a possible world to make it actual, or accept the timeless mathematical universe model, where every possible world is actual. LW and EY seem to accept the latter version; in it, any possible observer must have experiences, and no possible p-zombies exist (let's call them PP-zombies). The price is that I can't choose between two futures without summoning the mystical idea of "measure of existence", which is the probability that I will find myself to be one of my future copies with one type of experience.

For example, if I have two possible futures, one normal, and one where a golden meteor hits my garden, and I am a modal realist, I should think about them as both real. As this results in absurd expectations, I should add that some of them are more real than others, based on their "measure of existence". Any attempt to define measure of existence results in unexpected complexity, as we need to consider the numbers of different and equal copies in different worlds, quantum theory, cosmology, infinities, anthropics, non-normal predictions, and ethical paradoxes. This is what I meant when I said that modal realism comes with a price.

In short, in trying to kill one monster, p-zombies, we create another monster in the form of "measure of existence".

Your counterargument is based on the idea that any possible thing is only a thing which exists in my imagination. This contradicts the timeless mathematical universe hypothesis. I think there are many other possible definitions of possibility: uncertainty, the future, a quantum Schrödinger's cat, separate possible universes.

The counterfactual mugging thought experiment shows that I must take into account the behavior of a possible me (and it could be much more complex than in the experiment): https://wiki.lesswrong.com/wiki/Counterfactual_mugging

Replies from: gjm
comment by gjm · 2016-07-11T23:45:30.137Z · LW(p) · GW(p)

Before, you were comparing a(n apparently) possible universe in which you do one thing, with an actual universe in which you do something else. These are not universes with no material differences.

But now you want me to imagine two otherwise identical worlds, one actual and one merely possible. This seems doubtfully coherent to me in two different ways. First, I am not at all sure it actually makes sense to distinguish between an imagined imaginary world and an imagined actual world: both, in fact, are merely imagined. Second and worse, if some possible world is identical to the actual world then I say it is in fact the actual world.

How you get from there to needing to choose between "vital energy" and modal realism, I don't understand; it seems to me that if any theory is in need of such "vital energy" (which I'm not at all sure is the case) then it's modal realism. And the thing you say is the "price" of modal realism seems to me obviously innocuous. Indeed, the most obvious way to try to implement something like modal realism is Everett quantum mechanics -- which comes automatically with exactly the kind of measure you're talking about.

Saying that your imagined possible-you who goes home by a different route is a figment of your imagination doesn't contradict the mathematical universe hypothesis; it just means declining to identify the constructs of your imagination with portions of the (hypothetical) mathematical universe. When you say "of course I could have gone home the other way" on the basis that it just seems obvious, you are not identifying a portion of the Tegmark universe, you're just indulging in imagination. (For the avoidance of doubt, there's nothing wrong with that.)

I'm not sure that the counterfactual mugging thought experiment exactly shows that you have to do anything in particular, but by all means take into account possible ways you could behave -- doing so really doesn't have the exotic metaphysical commitments you seem to be claiming it has.

comment by buybuydandavis · 2016-07-09T11:49:36.480Z · LW(p) · GW(p)

It is furthermore claimed that if zombies are "conceivable" (a term over which battles are still being fought), then, purely from our knowledge of this "conceivability", we can deduce a priori that consciousness is extra-physical, in a sense to be described below.

Is that really the claim? Because it seems terribly silly.

Since it is conceivable (a pun, wait for it!) that Mommy fooled around on the Daddy who raised you, and that you aren't his biological son, does it follow that the identity of your "social" father and your biological father can't be true in reality?

At bottom it seems like, if two concepts are different, they're denying that in reality those concepts could be referring to the same thing. It's just odd.

It is conceivable that consciousness is something beyond brain function, but it is increasingly implausible.

And how about we turn around the proposition?

It is furthermore claimed that if non-zombies are "conceivable" (a term over which battles are still being fought), then, purely from our knowledge of this "conceivability", we can deduce a priori that consciousness is extra-subjective.

comment by Riothamus · 2016-07-08T20:21:33.985Z · LW(p) · GW(p)

What do people in Chalmers's vein of belief think of the simulation argument?

If a person is plugged into an otherwise simulated reality, do all the simulations count as p-zombies, since they match all the input-output and lack-of-qualia criteria?

Replies from: gjm, TheAncientGeek
comment by gjm · 2016-07-08T20:56:49.782Z · LW(p) · GW(p)

Do they lack qualia? How accurate are these simulations meant to be?

Replies from: Riothamus
comment by Riothamus · 2016-07-12T15:23:59.494Z · LW(p) · GW(p)

They are meant to be arbitrarily accurate, and so we would expect them to include qualia.

However, in the Chalmers vein consciousness is non-physical, which suggests it cannot be simulated through physical means. This yields a scenario very similar to the identical-yet-not-conscious p-zombie.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-07-14T15:43:08.379Z · LW(p) · GW(p)

They are meant to be arbitrarily accurate, and so we would expect them to include qualia.

Whose "we"? They are only meant to be functional (input-output) duplicates, and a large chunk of the problem of qualia is that qualia are not in any remotely obvious way functions.

However, in the Chalmers vein consciousness is non-physical, which suggests it cannot be simulated through physical means.

If you think consciousness is non-physical, you would think the sims are probably zombies. You would also think that if you are a physicalist but not a computationalist. Physicalism does not guarantee anything about the nature of computational simulations.

Chalmers's actual position is that consciousness supervenes on certain kinds of information processing, so that a sufficiently detailed simulation would be conscious: he's one of the "we".

comment by TheAncientGeek · 2016-07-14T15:35:26.905Z · LW(p) · GW(p)

They don't exactly count as p-zombies, since they are functional simulations, not atom-by-atom duplicates. I call such zombies c-zombies, for computational zombies.

comment by [deleted] · 2016-07-04T01:07:56.596Z · LW(p) · GW(p)

"Is", "is." "is"—the idiocy of the word haunts me. If it were abolished, human thought might begin to make sense. I don't know what anything "is"; I only know how it seems to me at this moment.

— Robert Anton Wilson, The Historical Illuminatus Chronicles, as spoken by Sigismundo Celine.

comment by Thomas Eisen (thomas-eisen) · 2021-03-18T17:39:08.902Z · LW(p) · GW(p)

You could use the "zombie argument" to "prove" that any kind of machine is more than the sum of its parts.

For example, imagine a "zombie car" which is the same on an atom-by-atom basis as a normal car, except it doesn't drive.

In this context, the absurdity of the zombie argument should be more obvious.

EDIT: OK, it isn't quite the same kind of argument, since the car wouldn't behave exactly the same, but it's pretty similar.

EDIT2: Another example to illustrate the absurdity of the zombie argument:
You could imagine an alternative world that's exactly the same as ours, except humans (who are also exactly the same as in our world) don't perceive light with a wavelength of 700 nanometers as red. This "proves" that there is more to redness than the wavelength of light.

Replies from: TAG
comment by TAG · 2021-03-18T19:29:44.953Z · LW(p) · GW(p)

The second example is Spectrum Inversion, which some people find quite conceivable. That's not surprising, since it operates on the same principles as p-zombiehood. There's no connection of logical necessity between having a certain configuration of quarks and having a specific subjective sensation, hence spectrum inversion; and there's no connection of logical necessity between having a certain configuration of quarks and having any subjective sensation at all, hence p-zombies are conceivable.

comment by entirelyuseless · 2016-07-10T15:12:30.517Z · LW(p) · GW(p)

I think characterizing this discussion as being about whether zombies are conceivable, as Eliezer does here, prevents productive discussion. That is not the issue, and Eliezer basically admits that in the last paragraph. Of course they are conceivable. We all know what we are talking about here.

Eliezer's basic argument is that zombies are impossible, not that they are inconceivable. And I agree that they are impossible. But the fact that he has misrepresented the nature of the argument makes it difficult to have a productive discussion of the issue.

Suppose we have a grid of pixels, with pixel #1 located at position (2,1); pixel #2 located at position (2,2); pixel #3 located at position (2,3); pixel #4 located at position (2,4); and pixel #5 located at position (2,5).

The pixels are in a straight line on the grid. Now suppose someone says, "Could there be a series of pixels, all in exactly those positions mentioned, but in such a way that the pixels are not in a straight line?"

Asking whether or not the situation is conceivable is not a helpful question here. But we do know that the situation described cannot happen. I would say that zombies are essentially the same situation -- something physically identical to a human is a human, and has all human properties, including the property of consciousness.
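(A small illustrative sketch, not part of the original comment: the pixel positions given above all share x = 2, so a standard cross-product collinearity test necessarily comes out true -- the straight line is already fixed by the positions themselves.)

```python
def collinear(points):
    """True iff all points lie on the line through the first two."""
    (x0, y0), (x1, y1) = points[0], points[1]
    dx, dy = x1 - x0, y1 - y0
    # The cross product is zero exactly when (x, y) lies on that line.
    return all(dx * (y - y0) - dy * (x - x0) == 0 for x, y in points[2:])

pixels = [(2, 1), (2, 2), (2, 3), (2, 4), (2, 5)]
print(collinear(pixels))  # True: these positions force a straight line
```

No extra "straightness" fact needs to be supplied on top of the positions; it is entailed by them, which is the point of the analogy.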

One difference though is this: we think we understand why the pixels must be in a straight line, but we do not think we know enough about the physical properties of humans to say why they must be conscious. We just know that humans are in fact conscious, and this is enough to tell us that zombie humans cannot actually happen.

Given the terms that physics usually uses, in fact, a deductive argument to the conclusion, "humans are conscious," is impossible, since "conscious" is not one of those terms. But in the same way, if we start from premises that only say things about the positions of individual pixels, and nothing else, we cannot formulate a deductive argument that the pixels must be in a straight line. That does not mean that the pixels might fail to be in a straight line, nor does it imply that a human could fail to be conscious. It simply means that our account, whether the physical account of the human, or the one about the positions of the pixels, is an incomplete account of reality.

And I suspect that this last point is my real disagreement with Eliezer. I think he believes that the physical account is a complete account, and likewise that an account of the pixels including nothing but the individual positions is a complete account of the pixels. If so, I think he would be mistaken in both cases.

comment by jmc · 2016-07-04T18:24:24.216Z · LW(p) · GW(p)

This is the most brilliant argument I have ever come across for libertarian free will and its link to wave-function collapse. Every theory of determinism necessarily implies that consciousness is epiphenomenal. If the physical motion of particles is predetermined by forces outside the control of consciousness, then consciousness has no effect on the universe. For consciousness to act on the universe, it must exert itself in a manner separate from the clockwork world of forces and precisely defined velocities and positions.

Wavefunction collapse is the only plausible mechanism.

Replies from: TAG
comment by TAG · 2021-03-18T19:36:06.448Z · LW(p) · GW(p)

Every theory of determinism necessarily implies consciousness is epiphenomenal.

Or nonexistent, or identical to the physical.