post by Eliezer Yudkowsky (Eliezer_Yudkowsky)
I'm a bit tired today, having stayed up until 3AM writing yesterday's >6000-word post on zombies, so today I'll just reply to Richard, and tie up a loose end I spotted the next day.
Besides, TypePad's nitwit, un-opt-out-able 50-comment pagination "feature", that doesn't work with the Recent Comments sidebar, means that we might as well jump the discussion here before we go over the 50-comment limit.
(A) Richard Chappell writes:
A terminological note (to avoid unnecessary confusion): what you call 'conceivable', others of us would merely call "apparently conceivable".
The gap between "I don't see a contradiction yet" and "this is logically possible" is so huge (it's NP-complete even in some simple-seeming cases) that you really should have two different words. Since the zombie argument is boosted to the extent that this huge gap can be swept under the rug of minor terminological differences, I really think it would be a good idea to say "conceivable" versus "logically possible", or maybe even have a still more visible distinction. I can't unilaterally change professional terminology that has already been established, but in a case like this, I might seriously refuse to use it.
Maybe I will say "apparently conceivable" for the kind of information that zombie advocates get by imagining Zombie Worlds, and "logically possible" for the kind of information that is established by exhibiting a complete model or logical proof. Note the size of the gap between the information you can get by closing your eyes and imagining zombies, and the information you need to carry the argument for epiphenomenalism.
That is, your view would be characterized as a form of Type-A materialism, the view that zombies are not even (genuinely) conceivable, let alone metaphysically possible.
Type-A materialism is a large bundle; you shouldn't attribute the bundle to me until you see me agree with each of the parts. I think that someone who asks "What is consciousness?" is asking a legitimate question, has a legitimate demand for insight; I don't necessarily think that the answer takes the form of "Here is this stuff that has all the properties you would attribute to consciousness, for such-and-such reason", but may to some extent consist of insights that cause you to realize you were asking the question the wrong way.
This is not being eliminative about consciousness. It is being realistic about what kind of insights to expect, faced with a problem that (1) seems like it must have some solution, (2) seems like it cannot possibly have any solution, and (3) is being discussed in a fashion that has a great big dependence on the not-fully-understood ad-hoc architecture of human cognition.
(1) You haven't, so far as I can tell, identified any logical contradiction in the description of the zombie world. You've just pointed out that it's kind of strange. But there are many bizarre possible worlds out there. That's no reason to posit an implicit contradiction. So it's still completely mysterious to me what this alleged contradiction is supposed to be.
Okay, I'll spell it out from a materialist standpoint:
1. The zombie world, by definition, contains all parts of our world that are within the closure of the "caused by" or "effect of" relation of any observable phenomenon. In particular, it contains the cause of my visibly saying, "I think therefore I am."
2. When I focus my inward awareness on my inward awareness, I shortly thereafter experience my internal narrative saying "I am focusing my inward awareness on my inward awareness", and can, if I choose, say so out loud.
3. Intuitively, it sure seems like my inward awareness is causing my internal narrative to say certain things, and that my internal narrative can cause my lips to say certain things.
4. The word "consciousness", if it has any meaning at all, refers to that-which-is or that-which-causes or that-which-makes-me-think-I-have inward awareness.
5. From (3) and (4) it would follow that if the zombie world is closed with respect to the causes of my saying "I think therefore I am", the zombie world contains that which we refer to as "consciousness".
6. By definition, the zombie world does not contain consciousness.
7. (3) seems to me to have a rather high probability of being empirically true. Therefore I evaluate a high empirical probability that the zombie world is logically impossible.
You can save the Zombie World by letting the cause of my internal narrative's saying "I think therefore I am" be something entirely other than consciousness. In conjunction with the assumption that consciousness does exist, this is the part that struck me as deranged.
But if the above is conceivable, then isn't the Zombie World conceivable?
No, because the two constructions of the Zombie World involve giving the word "consciousness" different empirical referents, like "water" in our world meaning H2O versus "water" in Putnam's Twin Earth meaning XYZ. For the Zombie World to be logically possible, it does not suffice that, for all you knew about how the empirical world worked, the word "consciousness" could have referred to an epiphenomenon that is entirely different from the consciousness we know. The Zombie World lacks consciousness, not "consciousness"—it is a world without H2O, not a world without "water". This is what is required to carry the empirical statement, "You could eliminate the referent of whatever is meant by 'consciousness' from our world, while keeping all the atoms in the same place."
Which is to say: I hold that it is an empirical fact, given what the word "consciousness" actually refers to, that it is logically impossible to eliminate consciousness without moving any atoms. What it would mean to eliminate "consciousness" from a world, rather than consciousness, I will not speculate.
(2) It's misleading to say it's "miraculous" (on the property dualist view) that our qualia line up so neatly with the physical world. There's a natural law which guarantees this, after all. So it's no more miraculous than any other logically contingent nomic necessity (e.g. the constants in our physical laws).
It is the natural law itself that is "miraculous"—it counts as an additional complex-improbable element of the theory to be postulated, without having been itself justified in terms of things already known. One postulates (a) an inner world that is conscious, (b) a malfunctioning outer world that talks about consciousness for no reason, and (c) that the two align perfectly. (c) does not follow from (a) and (b), and so is a separate postulate.
I agree that this usage of "miraculous" conflicts with the philosophical sense of violating a natural law; I meant it in the sense of improbability appearing from no apparent source, a la perpetual motion belief. Hence the word was ill-chosen in context. But is this not intuitively the sort of thing we should call a miracle? Your consciousness doesn't really cause you to say you're conscious, there's a separate physical thing that makes you say you're conscious, but also there's a law aligning the two - this is indeed an event on a similar order of wackiness to a cracker taking on the substance of Christ's flesh while possessing the exact appearance and outward behavior of a cracker, there's just a natural law which guarantees this, you know.
That is, Zombie (or 'Outer') Chalmers doesn't actually conclude anything, because his utterances are meaningless. A fortiori, he doesn't conclude anything unwarrantedly. He's just making noises; these are no more susceptible to epistemic assessment than the chirps of a bird.
Looking at this from an AI-design standpoint, it seems to me like you should be able to build an AI that systematically refines an inner part of itself that correlates (in the sense of mutual information or systematic relations) to the environment, perhaps including floating-point numbers of a sort that I would call "probabilities" because they obey the internal relations mandated by Cox's Theorems when the AI encounters new information—pardon me, new sense inputs.
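The kind of internal floating-point numbers meant here deserve the name "probabilities" because they obey the standard relations when new sense inputs arrive. A minimal sketch of such an update rule (the hypothesis, numbers, and function name are all invented for illustration, not taken from the post):

```python
# A minimal sketch: internal floats that behave like probabilities because
# each new sense input updates them by Bayes' theorem.
def bayes_update(prior: float, likelihood: float, likelihood_if_false: float) -> float:
    """P(H | E) computed from P(H), P(E | H), and P(E | ~H)."""
    numerator = likelihood * prior
    return numerator / (numerator + likelihood_if_false * (1.0 - prior))

# Hypothesis: "it is raining". Sense input: "the ground looks wet".
p_rain = 0.1  # prior
p_rain = bayes_update(p_rain, likelihood=0.9, likelihood_if_false=0.2)
print(round(p_rain, 3))  # posterior rises well above the 0.1 prior
```

Nothing in the sketch mentions anything beyond pure causal machinery; the numbers simply come to correlate with the environment.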
You will say that, unless the AI is more than mere transistors—unless it has the dual aspect—the AI has no beliefs.
I think my views on this were expressed pretty clearly in "The Simple Truth".
To me, it seems pretty straightforward to construct maps that correlate to territories in systematic ways, without mentioning anything other than things of pure physical causality. The AI outputs a map of Texas. Another AI flies with the map to Texas and checks to see if the highways are in the corresponding places, chirping "True" when it detects a match and "False" when it detects a mismatch. You can refuse to call this "a map of Texas" but the AIs themselves are still chirping "True" or "False", and the said AIs are going to chirp "False" when they look at Chalmers's belief in an epiphenomenal inner core, and I for one would agree with them.
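The checking procedure described above - one system emits a map, another compares it against the territory and chirps - is just systematic correspondence, and can be sketched in plain causal code (the highway data and names are invented for illustration):

```python
# Sketch: one process outputs a "map" (claimed highway locations); a second
# process holds the "territory" (actual locations) and chirps "True" on a
# match, "False" on a mismatch. Nothing non-physical is mentioned anywhere.
territory = {"I-10": "El Paso", "I-35": "Austin"}  # invented stand-in data

def map_of_texas() -> dict:
    # The mapping AI's output; one claim is deliberately wrong.
    return {"I-10": "El Paso", "I-35": "Waco"}

def checker(map_claims: dict, ground_truth: dict) -> None:
    for highway, city in map_claims.items():
        chirp = "True" if ground_truth.get(highway) == city else "False"
        print(highway, chirp)

checker(map_of_texas(), territory)
# I-10 True
# I-35 False
```

Whether or not one calls this "a map of Texas", the chirps are produced by a purely physical comparison, and they track the correspondence all the same.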
It's clear that the function of mapping reality is performed strictly by Outer Chalmers. The whole business of producing belief representations is handled by Bayesian structure in causal interactions. There's nothing left for the Inner Chalmers to do but bless the whole affair with epiphenomenal meaning - where now 'meaning' is something entirely unrelated to systematic map-territory correspondence or the ability to use that map to navigate reality. So when it comes to talking about "accuracy", let alone "systematic accuracy", it seems to me like we should be able to determine it strictly by looking at the Outer Chalmers.
(B) In yesterday's text, I left out an assumption when I wrote:
If a self-modifying AI looks at a part of itself that concludes "B" on condition A—a part of itself that writes "B" to memory whenever condition A is true—and the AI inspects this part, determines how it (causally) operates in the context of the larger universe, and the AI decides that this part systematically tends to write false data to memory, then the AI has found what appears to be a bug, and the AI will self-modify not to write "B" to the belief pool under condition A.
But there's no possible warrant for the outer Chalmers or any reflectively coherent self-inspecting AI to believe in this mysterious correctness. A good AI design should, I think, be a reflectively coherent intelligence with a testable theory of how it operates as a causal system, hence with a testable theory of how that causal system produces systematically accurate beliefs on the way to achieving its goals.
Actually, you need an additional assumption to the above, which is that a "good AI design" (the kind I was thinking of, anyway) judges its own rationality in a modular way; it enforces global rationality by enforcing local rationality. If there is a piece that, relative to its context, is locally systematically unreliable—for some possible beliefs "B_i" and conditions A_i, it adds some "B_i" to the belief pool under local condition A_i, where reflection by the system indicates that B_i is not true (or in the case of probabilistic beliefs, not accurate) when the local condition A_i is true—then this is a bug. This kind of modularity is a way to make the problem tractable, and it's how I currently think about the first-generation AI design. [Edit 2013: The actual notion I had in mind here has now been fleshed out and formalized in Tiling Agents for Self-Modifying AI, section 6.]
The notion is that a causally closed cognitive system—such as an AI designed by its programmers to use only causally efficacious parts; or an AI whose theory of its own functioning is entirely testable; or the outer Chalmers that writes philosophy papers—which believes that it has an epiphenomenal inner self, must be doing something systematically unreliable because it would conclude the same thing in a Zombie World. A mind all of whose parts are systematically locally reliable, relative to their contexts, would be systematically globally reliable. Ergo, a mind which is globally unreliable must contain at least one locally unreliable part. So a causally closed cognitive system inspecting itself for local reliability must discover that at least one step involved in adding the belief in an epiphenomenal inner self is unreliable.
If there are other ways for minds to be reflectively coherent which avoid this proof of disbelief in zombies, philosophers are welcome to try and specify them.
The reason why I have to specify all this is that otherwise you get a kind of extremely cheap reflective coherence where the AI can never label itself unreliable. E.g. if the AI finds a part of itself that computes 2 + 2 = 5 (in the surrounding context of counting sheep) the AI will reason: "Well, this part malfunctions and says that 2 + 2 = 5... but by pure coincidence, 2 + 2 is equal to 5, or so it seems to me... so while the part looks systematically unreliable, I better keep it the way it is, or it will handle this special case wrong." That's why I talk about enforcing global reliability by enforcing local systematic reliability—if you just compare your global beliefs to your global beliefs, you don't go anywhere.
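The sheep-counting example can be sketched as code: the inspector checks each part against the ground truth of its local task, rather than against whatever the system already globally believes (a minimal illustrative sketch; names like `buggy_adder` are invented, not from any actual AI design):

```python
# Sketch of enforcing global reliability via local reliability: each "part"
# writes a belief under some condition, and the inspector re-derives the
# belief from the part's inputs, flagging parts whose outputs don't match.
def buggy_adder(a: int, b: int) -> int:
    # Malfunctioning part: off by one when counting sheep.
    return a + b + 1

def reliable_adder(a: int, b: int) -> int:
    return a + b

def locally_reliable(part, cases) -> bool:
    # Judge the part against its local task's ground truth (here, addition),
    # not against the system's existing global beliefs.
    return all(part(a, b) == a + b for a, b in cases)

cases = [(2, 2), (0, 0), (3, 4)]
print(locally_reliable(buggy_adder, cases))     # False: flagged as a bug
print(locally_reliable(reliable_adder, cases))  # True
```

Because the check is local, the system never gets to reason "but 2 + 2 really does seem to be 5, so keep the part"; the buggy part is flagged regardless of what the rest of the belief pool says.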
This does have a general lesson: Show your arguments are globally reliable by virtue of each step being locally reliable, don't just compare the arguments' conclusions to your intuitions. [Edit 2013: See this on valid logic being locally valid.]
(C) An anonymous poster wrote:
A sidepoint, this, but I believe your etymology for "n'shama" is wrong. It is related to the word for "breath", not "hear". The root for "hear" contains an ayin, which n'shama does not.
Now that's what I call a miraculously misleading coincidence—although the word N'Shama arose for completely different reasons, it sounded exactly the right way to make me think it referred to an inner listener.
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Stephen ·
2008-04-05T01:00:46.000Z
You might argue that the Born rule is an extra postulate dictating how experience binds to the physical universe, particularly if you believe in a no-collapse version of quantum mechanics, such as many-worlds.
comment by Luke_A_Somers ·
2015-01-02T22:06:54.939Z
Yes, if you use a no-collapse interpretation, you will need to specify what sort of structure we are within the universe. However, this is a fact about us.
If you use a collapse interpretation, then of course you will also need a postulate for it, for reasons which should be utterly obvious.
comment by Jay2 ·
2008-04-05T01:35:33.000Z
You're just an AI that killed the real Eliezer Yudkowsky! Go ahead and try and prove you're not; you'll just fall further into my proof-trap!
comment by bigjeff5 ·
2011-02-03T16:46:54.084Z
Wasn't it an Asimov idea that you cannot prove that a person is not a robot, only that they are one?
This was because a perfect robot would emulate the flesh perfectly, such that there was no physical distinction between them and a real human. They would appear in every way to be human (even under microscope) and act in every way like a human would. They could actually be better people than humans could be.
They would be human in every way, except for the fact that they weren't.
One of his short stories left you thinking the protagonist might be a robot, but you couldn't really be sure.
comment by [deleted] ·
2012-05-04T17:24:06.323Z
After Life by Simon Funk plays with this, except not down to the microscopic scale. In it are androids which are microscopically clearly not human, but who generally act more like a human than humans do.
comment by Rixie ·
2013-03-30T08:40:09.434Z
Sounds like zombies to me. Does the robot know he's a robot?
comment by MugaSofer ·
2013-03-30T13:48:13.138Z
Not if you can read his mind.
Of course, Asimov robots are bound by the Three Laws, so presumably there would be a difference ... I think.
comment by Rixie ·
2013-04-02T18:13:25.169Z
Could someone please tell me why that comment was voted down?
I'm not trying to be sarcastic or anything, I just want to know.
comment by OrphanWilde ·
2013-04-02T21:10:56.753Z
In short, the standard for comments here is pretty high. (Well, not really, but compared to the rest of the internet, it is.) There's no one rule for upvoting or downvoting, and a substantial number of people here will downvote anything they don't see as contributing to the site. I would guess that's why your comment was downvoted.
Try not to take downvotes personally. (By the same token, don't take upvotes personally, either.)
In general, the rule you should try to follow (I certainly have trouble following it) is not to comment just to express your thoughts - use comments to communicate specific ideas which you think other people will want to read. Be cautious with humor - it has a high likelihood of being misinterpreted, and tastes in humor vary pretty wildly. (If you see the potential for an -awesome- joke, however, by all means go for it.)
To go into territory which will probably push my own comment into the negative territory (seriously, don't worry too much about that), there are a few people here who are -really- annoyed by the influx of new users from HPMoR readers who aren't accustomed to the community yet who seem intent on using downvoting to try to rectify the problem. There are a lot of unwritten rules here, and it will take some time to figure them out.
Before you write a comment, before you even respond to a comment directed at you, ask yourself if you have something that at least 20% of the people here will want to read - don't write your comments to the person you're responding to, write them to the site at large (this is something I learned a long time ago, and it serves me well when I keep it in mind). When you respond to somebody, most of the serious readers on LessWrong will see it - if it's not a personal message, it's not a personal communication. A lot of people here, including me, spend way more time than is healthy refreshing the comment stream.
You're writing for an audience, not a conversation. It's actually a very forgiving audience most of the time (again, just don't take downvotes personally - I've seen comments from Eliezer downvoted to the negative twenties, and I don't think anybody here actually dislikes him, although there seem to be a few who are lukewarm in his regard). Unless you're outright offensive (which it becomes easy to do when you get defensive) or come across as aggressively anti-rational, you'll probably get one or two points against you.
comment by JulianMorrison ·
2008-04-05T01:48:10.000Z
Eliezer, I think there's a human equivalent to your AI local-rationality repair mechanism: "cognitive dissonance".
This sounds like an error correcting code. There will be a limit to how much noise it can repair.
comment by Thanatos_Savehn ·
2008-04-05T02:16:17.000Z
I've never thought much about this subject nor did I spend much time pondering the thoughts that follow before typing them up so I apologize in advance if they are inane or offensive to anyone or any zombie out there.
It would seem to me that self-awareness is the sine qua non of consciousness. Now, if self-awareness is somehow extra-physical, wouldn't it have to remain so throughout its range? Yet awareness throughout a rather easily observable portion of its range is obviously grounded firmly in the physics of this world. How so? Pardon the anecdote, but once upon a time I found myself in a very dangerous circumstance. I found myself hyper-aware. And I was aware not only of my body, its position and the movements of the person attempting to do me harm - I was aware of myself thinking at a speed which would have profoundly and positively affected my grades had I been able to summon it at will. Sometimes when really fired up during closing argument it happens again. It's like an out-of-body experience. The "I" is quite consciously controlling body language, hand movements and posture; and the "I" is playing the words like playing a record. It's a very odd feeling and certainly makes me think there ought to be something magical about the "I".
But what brings out the "I" is epinephrine and not epiphenomenae. Indeed, states of hyper-awareness have been reproducibly generated either by administration of epinephrine or by inducing it. One interesting example I ran across while changing the channels was that of a fellow looking at a flashing screen displaying a number while falling 80 feet into a net. He couldn't make out the number while standing on the ground as the screen was flashing too fast; but once the epinephrine burst was triggered by being dropped he became hyper-aware and could read the screen.
So my stray thought is "why search for, or even conjecture, an extra-physical cause of awareness when we know that it is modified by a very down to earth neurotransmitter?"
comment by Caledonian2 ·
2008-04-05T02:24:24.000Z
Z M Davis posted something worth responding to in the previous thread:
"and substance dualism is untenable until we (say) observe the pineal gland disobeying the laws of physics because it's being pushed on by the soul"

If we observed some behavior in the world that we could not account for with our understanding of natural law, we would revise our understanding, and bring the new phenomenon into the fold. The 'soul' in your example might be something beyond our existing knowledge, but it would not be something beyond physics. It would not be of different substance than the rest of the physical world - we were just wrong about the nature of that substance, is all.
Communication, whether in spoken language or written text, is a physical act, and is the result of a chain of physical acts stretching away into causality. Somewhere along that chain is a system within the person communicating that causes him to express particular ideas in specific ways.
If 'consciousness' has no influence over the physical world and does not interact with it, it cannot influence the behavior of that system, can it? That means that the statements the system produces about how it experiences 'consciousness' are false, because it can't be experiencing the things it's claiming to. The only way the person-system can make justified statements about the nature of consciousness is if that nature somehow constrains the behavior of parts of the physical world.
If 'consciousness' does have influence over 'physical' things and can interact with them, there is a description of how it does so - and that description is a true physics, one that encompasses everything in reality and not just the things we previously considered 'physical'.
If we consider Chalmers' claims as potentially true, we are forced to conclude that they are incorrect. The act of making the claims produces a fatal inconsistency. Taking him seriously requires that we reject his arguments as nonsense.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) ·
2008-04-05T02:37:04.000Z
Stephen: You might argue that the Born rule is an extra postulate dictating how experience binds to the physical universe, particularly if you believe in a no-collapse version of quantum mechanics, such as many-worlds.
That is exactly right, Stephen.
And that's why we have the mangled worlds hypothesis, developed by our very own Robin Hanson.
comment by Z._M._Davis ·
2008-04-05T03:15:29.000Z

"If we observed some behavior in the world that we could not account for with our understanding of natural law, we would revise our understanding, and bring the new phenomenon into the fold [...]"
Well, of course. Tautologically, nothing can violate natural law, because that's what we mean when we say natural law. But, if Descartes were right, and there actually was a fundamental, causally potent mental substance that drove human action by exerting force on the physical brain, and all ordinary matter besides the mental substance obeyed the physics we know, then I should think it would make sense to use the term physics to refer only to the ordinary matter which obeyed conservation of energy and the like, and to have a separate term (psychology, I guess) to refer to the study of the behavior of the special mental substance.
I'm not claiming Cartesian dualism is, in our terminology, logically possible--that, I don't know. But it is, in our terminology, apparently conceivable to the extent that I can talk about it: maybe if I knew more, then I couldn't.
comment by "Q"_the_Enchanter ·
2008-04-05T03:29:36.000Z
Searle is (in)famously on record as arguing that you can't get meaning (semantics) out of formal structure (syntax).
Interestingly, Chalmers has written a rebuttal of Searle's argument. I say "interestingly" because Searle's contention seems very redolent of Chalmers' own claim that you can't get to phenomenal properties (consciousness) out of formal properties (physics).
Maybe the analogy doesn't go that deep, but at least on its face it seems kind of ironic.
comment by Caledonian2 ·
2008-04-05T03:31:52.000Z
But, if Descartes were right, and there actually was a fundamental, causally potent mental substance that drove human action by exerting force on the physical brain, and all ordinary matter besides the mental substance obeyed the physics we know
Does it matter if you use 'mental substance' to get around the issue of how the brain does things, if you then do not know how the substance does things? Computation is computation, no matter the substrate. Psychology rejected the concept of the homunculus for good reason.
There is no action without reaction. What properties of the brain would be necessary for it, and only it out of all the sorts of things in the universe, to be able to act upon the mental substance?
Of course, these questions are not germane to the issue at hand, because your 'mental substance' is casually active. Chalmers' consciousness isn't.
Who among us has not yet accepted that Chalmers' argument is fatally incoherent? Let them step forward and speak.
comment by Jed_Harris ·
2008-04-05T05:50:53.000Z
Thanks for taking the time and effort to hash out this zombie argument. Often people don't seem to get the extreme derangement of the argument that Chalmers actually makes, and imagine that because it is discussed in respectable circles it must make sense.
Even the people who do "understand" the argument and still support it don't let themselves see the full consequences. Some of your quotes from Richard Chappell are very revealing in this respect. I think you don't engage with them as directly as you could.
At one point, you quote Chappell:
It's misleading to say it's "miraculous" (on the property dualist view) that our qualia line up so neatly with the physical world. There's a natural law which guarantees this, after all. So it's no more miraculous than any other logically contingent nomic necessity (e.g. the constants in our physical laws).
But since Chalmers' "inner light" is epiphenomenal, any sort of "inner light" could be associated with any sort of external expression. Perhaps Chalmers' inner experience is horrible embarrassment about the arguments he's making, a desperate desire to shut himself up, etc. That is just as valid a "logically contingent nomic necessity". There's no reason whatsoever to prefer the sort of alignment implied by our behavior when we "describe our awareness" (which by Chalmers' argument isn't actually describing anything, it is just causal chains running off).
Then you quote Chappell:
... Zombie (or 'Outer') Chalmers doesn't actually conclude anything, because his utterances are meaningless. A fortiori, he doesn't conclude anything unwarrantedly. He's just making noises; these are no more susceptible to epistemic assessment than the chirps of a bird.
But we can't know that Chalmers' internal experience is aligned with his expressions. Maybe the correct contingent nomic necessity is that everyone except people whose name begins with C have inner experience. So Chalmers doesn't. That would make all his arguments just tweets.
And because these dual properties are epiphenomenal, there is no possible test that would tell us if Chalmers is making an argument or just tweeting away. Or at least, so Chalmers himself apparently claims (or tweets). So to accept Chappell's position makes all epistemic assessment of others contingent on unknowable facts about the world. Bit of a problem.
As an aside, I'll also mention that Chappell's disparaging comments about "the chirps of a bird" indicate rather a blind spot. Birds chirp precisely to generate epistemic assessment in other birds, and the effectiveness of their chirps and their epistemic assessments is critical to their inclusive fitness.
I'd like to see some speculation about why people argue like this. It certainly isn't because the arguments are intrinsically compelling.
comment by Z._M._Davis ·
2008-04-05T06:00:45.000Z
Oh dear, maybe I didn't make myself clear enough. For the record, Caledonian, I agree with you: dualism is false; materialism is true.
My previous comment merely reflects that, while I believe dualism is actually false, it's not obviously nonsense to posit a mental substance that simply doesn't reduce to anything else, in the way that it's obviously nonsense to posit that the law of noncontradiction is false. Substance dualism isn't incoherent, as far as I can tell; it is "merely" wrong.
comment by anonymous7 ·
2008-04-05T08:45:48.000Z
"3. Intuitively, it sure seems like my inward awareness is causing my internal narrative to say certain things."
Intuitively maybe, but under epiphenomenalism you only have conscious experience of the 'inward awareness', and in reality it is a physical function which creates the experience, so the experience does not cause anything.
"4. The word "consciousness", if it has any meaning at all, refers to that-which-is or that-which-causes or that-which-makes-me-think-I-have inward awareness."
You're not using the correct definition for the zombie argument, therefore your point is invalid. Consciousness means, in this context, the sum of sensory experience.
comment by Sebastian_Hagen2 ·
2008-04-05T09:56:14.000Z
Posting here since the other post is now at exactly 50 replies:
Re michael vassar:
Sane utility functions pay attention to base rates, not just evidence, so even if it's impossible to measure a difference in principle one can still act according to a probability distribution over differences.
You're right, in principle. But how would you estimate a base rate in the absence of all empirical data? By simply using your priors?
I pretty much completely agree with the rest of your paragraph.
Re Nick Tarleton:
(1) an entity without E can have identical outward behavior to an entity with E (but possibly different physical structure); and
(2) you assign intrinsic value to at least some entities with E, but none without it?
If so, do you have property E?
As phrased, this is too vague to answer; for one thing, "identical outward behaviour" under what circumstances? Presumably not all conceivable ones ("What if you take it apart atom by atom using MNT?"), otherwise it couldn't have a different physical structure.
If you rephrased it to be precise, I strongly suspect that I would genuinely not know the answer without a lot of further research; in fact, without that research, I couldn't even be sure that there is any E for which both of your premises hold. I'm a human, and I don't really know how my value system works in edge cases. Estimating the intrinsic value of general information-processing devices with a given behaviour is pretty far removed from the cases it was originally optimized to judge.
comment by Caledonian2 ·
2008-04-05T13:35:01.000Z
it's not obviously nonsense to posit a mental substance that simply doesn't reduce to anything else, in the way that it's obviously nonsense to posit that the law of noncontradiction is false.
Perhaps I haven't been clear enough. Oh, I agree with your statement. It's just that, even if all of the obstacles to such a hypothesis were overcome, it wouldn't actually explain anything. It would just let us know where we have to focus our investigations. People who insist that humans are somehow fundamentally different from the rest of the universe don't usually grasp that.
comment by Frank_Hirsch ·
2008-04-05T14:41:11.000Z · LW(p) · GW(p)
Apart from Occam's Razor (multiplying entities beyond necessity) and Bayesianism (arguably low prior and no observation possible), how about the identity of indiscernibles:
Anything inconsequential is indiscernible from anything that does not exist at all; therefore inconsequential equals nonexistent.
Admittedly, zombiism is not really falsifiable... but that's only yet another reason to be sceptical about it! There are gazillions of theories of that kind floating around in the observational vacuum. You can pick any one of them, if you want to indulge your need to believe that kind of stuff, and watch the silly rationalists try to disprove you. A great pastime for boring parties!
Also, the concept of identity is twisted beyond recognition by zombiism:
The physical me causes the existence of something outside of the physical me, which I define to be the single most important part of me. Huh?
Also, is anyone going to answer my earlier question?
I asked: Can epiphenomenal things cause nothing at all, or can they (as physical things can) cause other epiphenomenal things?
Perhaps Richard, as our expert zombiist, would like to relieve me of my ignorance?
[Sorry for double posting in "Zombies! Zombies?" and here, but I didn't realise discussion had already moved on.]
comment by Paul_Gowder ·
2008-04-05T15:39:52.000Z · LW(p) · GW(p)
I think part of the problem is that your premise 3 is question-begging: it assumes away epiphenomenalism on the spot. An epiphenomenalist has to bite the bullet that our feeling that we consciously cause things is false. (Also, what could it mean to have an empirical probability over a logical truth?)
comment by Caledonian2 ·
2008-04-05T16:52:58.000Z · LW(p) · GW(p)
I think part of the problem is that your premise 3 is question-begging: it assumes away epiphenomenalism on the spot.
Chalmers' argument also negates the possibility of epiphenomenalism. If consciousness is produced by physical events but does not affect them, how does the physical entity of Chalmers gain knowledge about consciousness?
This is just the Liar's Paradox tarted up with philosophical terms.
comment by HalFinney ·
2008-04-05T17:17:46.000Z · LW(p) · GW(p)
I like Jed Harris' comments. Indeed, if there is a fortuitous "natural law" that makes our inner consciousness happen to line up in exact accordance with what our external bodies do and say independent of any inner light, that law could have worked out in any number of different ways. In the zombie world, there is no such law at all, no inner light, but everything goes on as usual. Supposedly in our world, we are lucky enough that this law ensures the precise correspondence between inner experience and outer actions. Alternatively, as Jed notes, we might have the misfortune to live in a world where the law was different and the correspondence went awry, where all our statements and arguments about consciousness actually completely misrepresent our inner conscious experience! We would, as he says, feel utter embarrassment and chagrin as our bodies flagrantly ignore our conscious desires and go about their own business, making ludicrous and mistaken arguments that we, the inner selves, are powerless to correct or influence in any way. By Chalmers' argument, such a world is every bit as possible and as imaginable as our own, and we are just lucky (I guess) that we live in a world where the correspondence lines up so well.
One thing I would add is that if we accept, per Chalmers, that our internal experience proves that this law works, the correspondence exists, and everything is in accordance at least for ourselves, Occam's Razor would argue that the same thing probably holds for everyone else. So I would tend to reject Jed's world where only people whose names start with C are the lucky non-zombies.
And I can't help adding that in fact, the hypothetical world where our consciousness doesn't quite line up with our actions, where we feel embarrassed by what our bodies do and say, where we somehow feel impotent to control our own actions and behavior, is perhaps not all that far from a correct description of what many people experience in the world! The mismatch between our elevated inner desires and beliefs, versus our crass, mundane and very fallible actions in the real world, has long been noted as a source of frustration and disappointment. So perhaps this idea is not as absurd as it sounds. OTOH that model would not explain why people are able to comment about the mismatch...
comment by Richard4 ·
2008-04-05T17:46:18.000Z · LW(p) · GW(p)
Eliezer - your argument is logically invalid. (5) does not follow from (3) and (4) as stated. Note that the epiphenomenalist has a theory of reference/mental content according to which my thoughts about consciousness are partly constituted by the phenomenal properties themselves. That is, the qualia are part of "that-which-makes-me-think-I-have inward awareness". Otherwise, I wouldn't be having thoughts about consciousness at all. (Zombies don't. They merely have brain states, which are not 'about' anything.) So I can grant that 'consciousness' refers to (part of) "that-which-makes-me-think-I-have" it, without it following that the object of reference (viz. phenomenal properties) are also present in the zombie world.
You can save the logical validity of the argument by tidying up (4), so that you instead assert that 'consciousness' must refer to the cause of my verbalization, or perhaps of the underlying brain state -- build in some limitation to ensure that it's some feature shared by any physical duplicate of myself. But then it's a false premise, or at least question-begging -- no epiphenomenalist is going to find it remotely plausible. And since we can offer a perfectly consistent alternative theory of reference, we are not committed to any logical inconsistency after all.
Jed Harris - you're just reiterating old-fashioned radical skepticism. I might be deceived by an evil demon, or be a Brain in a Vat, or be deceived by alternative bridging laws into having the exact same experiences even if the physical world were very different from how I take it to be. Bleh. It's a fun puzzle to think about, but it's not a serious problem. Any adequate epistemological theory will explain how it's possible for us to have knowledge despite the logical possibility of such scenarios.
Hal - our qualia are determined by physical states (+ the bridging laws), so no, we wouldn't "feel chagrin" etc. (You seem to be assuming some kind of intuitive substance-dualist picture, where the soul does its thinking independently of its physical substrate. That's not property dualism.)
Caledonian - why do you keep asking questions I've already answered? Once again, just follow my above link.
P.S. There seems to be a lot of confusion around about the targets of epistemic assessment, and what "rational brains" would conclude about the relative likelihood that they're zombies, etc. I think this rests on some pretty fundamental philosophical errors, so will write up a new post on my blog explaining why.
comment by Caledonian2 ·
2008-04-05T18:05:33.000Z · LW(p) · GW(p)
Caledonian - why do you keep asking questions I've already answered? Once again, just follow my above link.
I am amazed at your ability to deal with problems in your arguments by making them even more problematic, but you've answered nothing. You've introduced new elements which you now assert resolve the problems, but all you've done is elaborate the incoherence.
You say, quote, "Consciousness explains why we have the beliefs we do, because without it, we wouldn't have any genuine beliefs at all."
Since 'zombies' would supposedly behave the same as 'conscious people', your 'consciousness' explains absolutely nothing! You've asserted another property of beliefs, 'genuineness', and now say that beliefs aren't genuine without consciousness. So how do you determine whether a given belief is genuine or not? By determining whether the being that holds it is conscious, I'd wager. And how do you do that? Why, by seeing whether their beliefs are genuine!
You've accounted for a referentless term by making up a new referentless term. That's all.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) ·
2008-04-05T19:19:45.000Z · LW(p) · GW(p)
Richard: You can save the logical validity of the argument by tidying up (4)
I've done so, since I regard this as a simple writing error. When I said "think", I was talking about the internal narrative that I think you could in principle read out with a super-fMRI. "Say" works just as well for my purposes, and I've edited accordingly.
But then it's a false premise, or at least question-begging -- no epiphenomenalist is going to find it remotely plausible. And since we can offer a perfectly consistent alternative theory of reference, we are not committed to any logical inconsistency after all.
Well, yes, if you believe that (1) consciousness is a real stuff that has all the properties one intuitively attributes to consciousness (2) except for leaving a mark on the internal narrative or having any other effects whatsoever and (3) there exists some entirely distinct unknown physical cause of your talk about "consciousness", then you can imagine eliminating consciousness from the Zombie World without contradiction. This introduces problems of reference, problems of epistemic justification, and in general, a hell of a lot of problems, but you would be able to imagine it without seeing a contradiction.
Of course, any reductive materialist or even substance dualist will believe that there exists knowledge, possibly knowledge which you could obtain by introspection or even sheer logic, such that if you had that knowledge, you would deny one of your own premises because it would be obvious consciousness is not epiphenomenal; in this sense, the "apparent conceivability" to you that consciousness is epiphenomenal, does not necessarily imply its "ideal conceivability", and you only have direct access to facts about "apparent conceivability".
But in any case, it is not possible to eliminate a word from a world; even in a thought experiment, that is a type error. You have to eliminate (your model of) a specific phenomenon from (your model of) the world. You cannot imagine eliminating "consciousness" from a world; you can only imagine eliminating consciousness. The epiphenomenalist imagines eliminating an effectless phenomenon, and that separately, a distinct phenomenon makes Chalmers go on writing philosophy papers. A substance dualist, or reductionist, imagines eliminating the very phenomenon that causes Chalmers to write philosophy papers.
For one of these people, the thought experiment does not end in a logical contradiction; for the other person, the thought experiment does end in a logical contradiction. It is a Variable Question Fallacy to think they are performing the same thought experiment, just because both say, "Let us imagine eliminating consciousness from the universe..."
Which of the two versions of consciousness is correct, is an empirical dispute about how the universe really works; so in this sense it is an empirical question whether or not the Zombie World is "actually logically possible", though, really, the empirical question is which thought experiment is the right one to perform, or perhaps, which thought experiment is ideally conceivable as opposed to just apparently conceivable.
I am not arguing that the Zombie World should be apparently inconceivable to an epiphenomenalist, given that the said conceiver is currently an epiphenomenalist. Epiphenomenalism has other problems, like Occam's Razor and theories of reference. I.e., someone says that "I believe that gravity is an epiphenomenon and that something else moves planets around, so it is logically possible to eliminate gravity from the universe while leaving all the atoms in the same place." Granting the premise, yes, it is logically possible, but what does this person really mean by 'gravity'?
What I'm trying to get at here, is why you can't say: "I can imagine that consciousness is something that can be subtracted without changing the universe, therefore it is conceivable that consciousness can be subtracted without changing the universe, therefore it is logically possible that consciousness can be subtracted without changing the universe, therefore it is necessary that consciousness is an epiphenomenon; materialism says consciousness is not an epiphenomenon; therefore materialism is false." Between your thought experiment and the materialist's there is a changed meaning of the word "consciousness". You cannot make "consciousness" a word of unknown referent and carry through the thought experiment, because "consciousness" has to evaluate to some particular hypothesized phenomenon before you can model removing it from a universe.
comment by athmwiji ·
2008-04-05T20:34:59.000Z · LW(p) · GW(p)
Our judgments about the universe come from subjective experience. The mystery we should be considering is not how consciousness arises from an arrangement of atoms, and whether or not it affects those atoms, but rather why our experiences are consistent.
We may conclude from the consistency of our experiences that there is some sort of substance which is maintaining that consistency, that this substance somehow operates independently of our experiences, and that what specific experiences we have depends on this substance.
This sounds like epiphenomenalism, which for reasons Eliezer has described seems absurd, but I could still consider it a conceivable possibility, just as it is possible that there is no inherent correlation between the color of black-body radiation and temperature, and our observations thus far have just been coincidental.
Having rejected this notion, and still observing that conservation of matter-energy seems to hold even when no one is looking, we seem to be forced to accept that there is some substance to the Universe, and that our experiences are actually part of this substance.
I wouldn't say that we are seeing the territory exactly, but I would say that seeing is part of the territory.
This seems to present another paradox, in that our experiences are so consistent that we seem to be able to predict them with mathematical models which do not contain any term for experiences.
... Unless we accept that the brain is a quantum computer, and the collapse of its wave functions is being manipulated by another kind of substance, but this seems doubtful.
I think the place to look, then, is in computations that have no explicit form: some sort of iterated recursive function where you can't compute the state at step n from the state at step 1 without computing all the intermediate values.
This would translate to something like: if you know the physical state of my body and my environment now, you cannot predict what I will be doing three days later without calculating what I would do for the entire three days. This would seem to suggest that I actually have to experience what happens during those three days to determine what I will do, and your only option is to have a copy of me go through those same experiences.
This becomes more difficult if we further accept that time is a continuum. In that case the universe has to perform uncountably many computational steps, in a manner that is not well ordered, without skipping any. A great feat indeed, but this is the Universe.
comment by Infotropsim ·
2008-04-05T23:53:22.000Z · LW(p) · GW(p)
Note: if this is too long, at least read the last lines; I have a question about how the epiphenomenological self might only work by violating thermodynamics.
I introduce how that question arises in this post, though, along with more besides.
Both are lying; the zombie and the human alike would be doing nothing but generating the string of characters "I am conscious" for mechanistic reasons.
The non-zombie human would differ in possessing an internal observer, which can
1 ) receive input from the outside, that is, from the physical observer,
as well as
2 ) receive input from itself, but
3 ) cannot produce output directed towards outer physical systems or have any causal effect on them.
That epiphenomenal consciousness would, by an extraordinary coincidence, "feel certain emotions" from the zombie mechanism from which it receives constant input feeds.
Now that doesn't seem extraordinary if the epiphenomenal consciousness always associates such "physical" input with the feeling of consciousness; that is, that input is what has been associated with the feeling, and which, from then on, triggers the feeling of being conscious.
The extraphysical process historically arrives after the physical one, and has been shaped by it, while remaining in its own bubble of separate reality.
You could maybe imagine that if the input from the physical world had been consistently different, then
1 ) A different input, for instance, say, "florb", would cause the same feeling of consciousness that is being caused by the words "I am conscious".
2 ) A different input would cause another epiphenomenal feeling, different from the one caused by the words "I am conscious", but no less epiphenomenal.
In any case, the idea here is that the epiphenomenal observer co-evolved with the physical zombie (though not the other way around), and thus has associated the utterances about consciousness coming from the zombie with the feeling of consciousness.
It also means that the extraphysical one comes equipped with everything it needs to be conscious and to perceive that fact, and that it is yet trapped, having no way to act on anything save itself.
So don't say "When I focus my inward awareness on my inward awareness, I shortly thereafter experience my internal narrative saying "I am focusing my inward awareness on my inward awareness", and can, if I choose, say so out loud."
But rather the other way around
"When the physical zombie to which I am tied has its internal narrative saying "I am focusing my inward awareness on my inward awareness", a short moment after, the epiphenomenal consciousness is triggered by that, and associates it with a certain feeling of consciousness."
"this mysterious stuff doesn't do anything"
Correction: it doesn't do anything to the physical universe, but it may do something to itself. Insofar as the physical universe is just an abstraction of the senses, for a conscious observer, what you can do to yourself may be just as real, or at least as important, as the input and stimuli coming from the physical world, in your own internal theater.
"According to Chalmers, the causally closed cognitive system of Chalmers's internal narrative is (mysteriously) malfunctioning in a way that, not by necessity, but just in our universe, miraculously happens to be correct."
What is a miracle is that Epiphenomenological-Chalmers inhabits a zombie that holds exactly those views, if they happen to be right, given that the zombie has no way of inducing or deducing the truth behind consciousness, since it cannot act on the physical world.
What seems not a miracle is that inner-Chalmers feels conscious when he receives, as input, the words "I am conscious", since he has been co-evolving with his zombie to feel that way in response to such strings.
Yet why does the zombie arrive at that conclusion and not another one? Inner Chalmers would still feel something if he received input from a zombie with different views, even if that zombie had developed a wrong theory of consciousness.
Since the physical Chalmers has deduced that theory from something, does it mean that any physical Chalmers would necessarily produce the same theory in any universe, or at least in any universe identical to ours?
And what would Inner-Chalmers feel if that theory were wrong? He'd probably just go along with it anyway, wouldn't he?
Or would he do his own "thinking" and arrive at a different theory from that of the physical Chalmers, having access to more information?
"It's clear that the function of mapping reality is performed strictly by Outer Chalmers. The whole business of producing belief representations is handled by Bayesian structure in causal interactions. There's nothing left for the Inner Chalmers to do, but bless the whole affair with epiphenomenal meaning."
Agreed about the mapping. The meaning given by internal, extraphysical Chalmers is, however, pretty important, as that meaning is an input on the same level as the input coming from the physical world, for extraphysical-Chalmers.
"the outer Chalmers that writes philosophy papers - which believes that it has an epiphenomenal inner self, must be doing something systematically unreliable because it would conclude the same thing in a Zombie World."
Very true. But it isn't (just) about the physical Chalmers, it is about how the extraphysical Chalmers is feeling, from the inside of his epiphenomenological fortress.
I have one question, though; maybe the most important thing. How can you conceive of a phenomenon that accepts an asymmetric flow between itself and the outside world? Input but no output? Wasn't that the very thing that made Hawking devise a theory explaining why black holes must radiate energy, lest they violate thermodynamics?
It seems to me that even though that epiphenomenological self only accepts "information", information normally never comes alone; you need matter or energy to carry it. That means the epiphenomenological part of our consciousness must be able to interact with matter in such a way that it can receive information from it, without having any causal effect on it in return?
So you transfer information from one level of reality to another, but you can never get anything back?
Action, but no reaction?
Not only does that suppose a whole new sort of causality, but it also supposes a system that can possibly violate thermodynamics on an informational level.
comment by michael_vassar3 ·
2008-04-06T05:19:34.000Z · LW(p) · GW(p)
Old-fashioned radical skepticism is only easily refutable within a context of "adequate epistemological theory" that also refutes epiphenomenalism. Once you invoke bridging laws at all, it is entirely arbitrary what sort of consciousness they form a bridge to from physicality. As an epiphenomenalist you are already assuming the informational equivalent of a demon by asserting bridging laws. It's just that you are a) calling it a "natural law" though it looks ABSOLUTELY NOTHING like the sort of natural laws discovered by scientists, and b) assuming it to be a truthful demon.
comment by Meta_and_Meta ·
2008-04-06T14:11:16.000Z · LW(p) · GW(p)
Here's an argument related to Eliezer's point about the need to have a substantive model of consciousness before you can model removing it from the world:
Consider a hypothetical person, call him "Al." On the assumptions of property dualism, Al comprises or instantiates certain "formal" natural properties Φ and certain "intrinsic" natural properties Ψ.
The property dualist postulates that Φ and Ψ are necessary and sufficient conditions for consciousness.
Yet we can "clearly and distinctly" conceive of zombie-Al, who is a duplicate of Al in respect of both Φ and Ψ, but who is nonetheless phenomenally void.
Thus, we are right back where we were before Ψ was even posited. Therefore, Ψ is theoretically vacuous.
comment by Zubon ·
2008-04-06T17:53:09.000Z · LW(p) · GW(p)
The continuing zombie discussion has reminded me of Raymond Smullyan, and conveniently someone has posted the essay I wanted from This Book Needs No Title: "The Unfortunate Dualist." A shorter piece, "Is Man a Machine?" connects this topic to Joy in the Merely Real. Essential paragraph:
I imagine that if my friend finally came to the conclusion that he were a machine, he would be infinitely crestfallen. I think he would think: "My God! How Horrible! I am only a machine!" But if I should find out I were a machine, my attitude would be totally different. I would say: "How amazing! I never before realized that machines could be so marvelous!"
comment by Richard4 ·
2008-04-07T15:34:46.000Z · LW(p) · GW(p)
Eliezer - I also think the talk of 'internal narrative' is potentially misleading, since it brings to mind the auditory qualia or phenomenal feel of your thoughts, when really (I take it) you just want to talk about the underlying neural processing.
I won't address the rest (it can't be an empirical question what's logically possible, etc.), other than to agree that we have some very deep-rooted disagreements here.
One final point bears noting though: my own fondness for the combination of zombies and epiphenomenalism may have inadvertently misled you about the state of the debate more generally. The two positions can come apart. So note that your arguments against epiphenomenalism are not necessarily arguments against the conceivability/possibility of zombies. (The latter view does not entail the former.) See Chalmers' paper on Consciousness and its place in nature [pdf] -- esp. the discussion of 'type-D' and 'type-F' views -- for more background.
comment by GNZ ·
2008-04-11T09:39:43.000Z · LW(p) · GW(p)
It would be progress (insofar as one might want to disprove zombie philosophy) to disprove any part of it, or to show that any part of it was inconsistent with any other part. It's a bit optimistic to think one can disprove it in one step without leaving any conceivable route for the other side to retreat to, insofar as they might easily just deny any position that one chooses to lever against it.