Gettier in Zombie World

post by hairyfigment · 2011-01-23T06:44:29.137Z · score: 1 (4 votes) · LW · GW · Legacy · 23 comments


A Dr. Nigel Thomas has tried to show a logical self-contradiction in Chalmers' "Zombie World" or zombiphile argument, in a way that would convince Chalmers (given perfect rationality, of course). The argument concerns the claim that we can conceive of a near-duplicate of our world sharing all of its physical laws and containing apparent duplicates of us, who copy everything we say or do for the same physical reasons that lead us to do it, but who lack consciousness. Chalmers says this shows the logical possibility of Zombie World (though nobody believes that it actually exists). He concludes from this that our world has a nonphysical "bridging law" saying that "systems with the same functional organization have the same sort of conscious experiences," and we must regard this as logically independent of the world's physical laws. Thomas' response includes the following point:

Thus zombiphiles normally (and plausibly) insist that we know of our own consciousness directly, non-inferentially. Even so, there must be some sort of cognitive process that takes me from the fact of my consciousness to my (true) belief that I am conscious. As my zombie twin is cognitively indiscernible from me, an indiscernible process, functioning in just the same way, must lead it from the fact of its non-consciousness to the equivalent mistaken belief. Given either consciousness or non-consciousness (and the same contextual circumstances: ex hypothesis, ceteris is entirely paribus) the process leads one to believe that one is conscious. It is like a stuck fuel gauge, that reads FULL whether or not there is any gas in the tank.

While I think his full response has some flaws, it seems better than anything I produced on the subject — perhaps because I didn't try very hard to find a contradiction. Thomas tries to form a trilemma by arguing that we can't regard statements made by Zombie Chalmers about consciousness as true, or false, or meaningless. (If you think they deserve the full range of probabilistic truth values, then for the moment let "true" mean at least as much confidence as we place in our own equivalent statements and let "false" mean any lesser value.) But the important lemma requires an account of knowledge in order to work. To run through the other two lemmas: we know the zombie statements have meaning or truth values for us, the people postulating them, so let the rest of the argument apply to those meanings. And if we call the statements true then they must by assumption refer to something other than consciousness — call it consciousness-Z. But then Zombie Hairyfigment will (by assumption) say, "I have real consciousness and not just consciousness-Z." (This also seems to reduce probabilistically "false" statements to ordinary falsity, still by assumption.)

The remaining lemma tries to draw a contradiction out of the zombiphile argument's assumption that Chalmers has knowledge of his own consciousness. We need some way to recognize or rule out knowledge in order for this to work. Happily, on this one question standard philosophy seems to point clearly towards the answer we want.

(One Robert Bass apparently makes a related point in his paper, Chalmers and the Self-Knowledge Problem (pdf). But I took the argument in a slightly different direction by asking what the philosophers actually meant.)

Gettier intuitions: How do they work?

The famous Gettier Problem illustrates a flaw in the verbal definition of knowledge as "justified true belief". Gettier's original case takes a situation from Plato (Theaetetus 142d) involving Socrates, and modifies it to make the situation clearer. In Gettier's version, S and his friend Jones have both applied for a job. S heard the president of the company say the job would go to Jones. S has also just counted the ten coins in Jones' pocket, and therefore believes that "The man who will get the job has ten coins in his pocket." But it turns out the job goes to S, who, unbeknownst to him, happened to have ten coins in his own pocket. He therefore had a true belief that seems justified, but I don't know of anyone who believes it should count as knowledge.

Some philosophers responded to this by saying that in addition to justification and truth, a belief needs to have no false lemmas hiding in its past in order to count as knowledge. But this led to the appearance of the following counterexample, as told in the historical summary here:

Fake Barn Country: Henry is looking at a (real) barn, and has impeccable visual and other evidence that it is a barn. He is not gettiered; his justification is sound in every way. However, in the neighborhood there are a number of fake, papier-mâché barns, any of which would have fooled Henry into thinking it was a barn.

Henry does not appear to use any false lemmas in forming his belief, at least no explicit ones like those S used in the first problem. Yet most philosophers do not believe Henry has knowledge when he says, 'Hey, a barn,' since he would have thought this whether he saw a real barn or a barn facade. Interestingly, a lot of ordinary people may not share this intuition with the philosophers or may take a different position on the matter in different contexts. I will try to spell out what the Gettier intuitions actually point towards before judging them, in the hope that they point to something useful. For now we can call their object 'G-knowledge' or 'G-nosis'. (That part doesn't seem like proper Rationalist Taboo, but as far as I can tell my laziness has no fatal consequences.)

At one time I thought we could save the verbal definition and sweep all the Gettier cases into the No False Lemmas basket by requiring S to reject any practical possibility of deception or self-deception before his or her belief could possibly count as knowledge. This however does not work. The reason why it fails gives me an excuse to quote a delightful Gettier case (also from Lycan's linked historical summary) involving an apparent AI who knows better than to take anything humans say at face value:

Noninferential Nogot (Lehrer 1965; 1970). Mr. Nogot in S’s office has given S evidence that he, Nogot, owns a Ford. By a single probabilistic inference, S moves directly (without passing through ‘Nogot owns a Ford’) to the conclusion that someone in S’s office owns a Ford. (As in any such example, Mr. Nogot does not own a Ford, but S’s belief happens to be true because Mr. Havit owns one.)
Cautious Nogot (Lehrer 1974; sometimes called ‘Clever Reasoner’). This is like the previous example, except that here S, not caring at all who it might be that owns the Ford and also being cautious in matters doxastic, deliberately refrains from forming the belief that Nogot owns it.

The Cautious AI has evidently observed a link between claims of Ford ownership and the existence of Fords which seem to 'belong' to some human in the vicinity. That is, this S believes only that humans may have a greater tendency to say they own a Ford when somebody nearby owns one. I can think of postulates that would justify this belief, but let's assume none of them hold true. Then S will modify some of its numerical 'assumptions' if it learns the truth about the link. In principle we could keep using my first attempt at a definition if not for this:

And there is the obvious sort of counterexample to the necessity of ‘no-false-lemmas’ (Saunders and Champawat 1964; Lehrer 1965). Nondefective Chain: If S has at least one epistemically justifying and non-Gettier-defective line of justification, then S knows even if S has other justifying grounds that contain Gettier gaps. For example (Lehrer), suppose S has overwhelming evidence that Nogot owns a Ford and also overwhelming evidence that Havit owns one. S then knows that someone in the office owns a Ford, because S knows that Havit does and performs existential generalization; it does not matter that one of S’s grounds (S’s belief that Nogot owns a Ford) is false.

By the time I saw this problem I'd already tried to add something about an acceptable margin of error ε, changing what my definition said about "practical possibility" to make it agree with Bayes' Theorem. But at this point I had to ask if the rest of my definition actually did anything. (No.)

From this perspective it seems clear that in each Gettier case where S lacks G-nosis, the reader has more information than S and that leads to a different set of probabilities. I'll start nailing down what that means shortly. First let's look at the claim that G-nosis obeys Bayes.

My new definition leads to a more generous view of Henry or S in the simple case of No Fake Barns. Previously I would have said that S lacked knowledge both in Fake Barn Country and in the more usual case. But assume that S has unstated estimates of probability, which would change if a pig in a cape appeared to fly out of the barn and take to the sky. (If we assume a lack of even potential self-doubt, I have no problem saying that S lacks knowledge.) It looks like in many cases the Gettier intuitions allow vague verbal or even implied estimates, so long as we could translate each into a rough number range, neither end of which differs by more than ε from the 'correct' value. S would then have G-nosis for sufficiently forgiving values of ε.

And I do mean values, one ε for each 'number' that S uses. G-nosis must include a valid chain of probabilistic reasoning that starts from S's actual starting point, and in which no value anywhere differs by more than ε from that which an omniscient human reader would assign. If you think that last part hides a problem or five, give yourself a pat on the back. But we can make it seem less circular by taking "omniscient" to mean that for every true claim which requires Bayesian adjustment, our reader uses the appropriate numbers. (I'd call the all-knowing reader 'Kyon,' except I suspect we'll wind up excluding too many people even without this.)
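The ε-tolerance test just described can be put in code. This is only a toy sketch with invented numbers and hypothetical names, not anything from the post itself:

```python
def has_g_nosis(s_values, reader_values, epsilon):
    """True if every probability in S's chain of reasoning lies
    within epsilon of the omniscient reader's value (a single
    epsilon here for simplicity; the text allows one per value)."""
    return all(abs(s - r) <= epsilon
               for s, r in zip(s_values, reader_values))

# Invented estimates for S and for the reader:
assert has_g_nosis([0.99, 0.95], [0.98, 0.97], epsilon=0.05)
assert not has_g_nosis([0.99], [0.50], epsilon=0.05)
```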

Note for newcomers: this applies to logical evidence as well. We can treat standard true/false logic as a special case or limit of probability as we approach total certainty. 'Evidence' in probability means any fact to which you assign a greater probability on the assumption of some belief A than on the assumption of not-A. This clearly applies to seeing a proof of the belief. If you assume that you would never see a valid proof of A in a world where not-A held true, then you can plug in zero for the probability of seeing the proof in the world of not-A, and seeing such a proof gives you a probability of 100% for A. So our proposed definition of knowledge seems general enough to include abstract math and concrete everyday 'knowledge'.
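A quick numerical sketch of that note, using nothing beyond Bayes' theorem (the probabilities are invented for illustration):

```python
def posterior(prior, p_e_given_a, p_e_given_not_a):
    """Bayes' theorem: P(A|E) = P(E|A)P(A) / P(E)."""
    p_e = prior * p_e_given_a + (1 - prior) * p_e_given_not_a
    return prior * p_e_given_a / p_e

# E counts as evidence for A exactly when P(E|A) > P(E|not-A):
assert posterior(0.5, 0.9, 0.3) > 0.5

# Limiting case from the note: if a valid proof of A could never
# appear in a not-A world, then P(proof|not-A) = 0, and seeing
# the proof pushes P(A) all the way to 1.
assert posterior(0.5, 1.0, 0.0) == 1.0
```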

Another note about our definition, chiefly for newcomers: the phrase "every true claim" needs close attention. In principle Gödel tells us that every useful logical system (including Bayes) will produce statements it can't prove or disprove with logical certainty. In particular it can't prove its own logical self-consistency (unless it actually contradicts itself). If we regard the statement with the greater probability for us as true, that gives us a new system that creates a new unprovable statement, and so on. But none of these new axioms would change the truth values of statements we could prove within the old system.

If we treat mathematically proven statements as overwhelmingly likely but not certain — if we say that for any real-world system that acts like math in certain basic ways, mathematical theorems have overwhelming probability — then it seems like none of the new axioms would have much effect on the probability of any statement we've been claiming to know (or have G-nosis of). In fact, that seems like our reason for wanting to call the "axioms" true. So I don't think they affect any practical case of G-nosis.

We already more or less defined our reader as the limit of a human updating by Bayes (as the set of true statements that no longer require changes approaches the set of all true statements). Hopefully for any given statement we could have G-nosis of, we can define a no-more-than-countably-infinite set of true statements or pieces of evidence that get the job done by creating such a limit. I think I'll just assert that for every such G-knowable statement A, at worst there exists a sequence of evidence such that A's new probabilities (as we take more of the sequence into account) converge to some limit, and this limit does not change for any additional truth(s) we could throw into the sequence. Humanity's use of math acts like part of a sequence that works in this way, and because of this the set of unprovable Gödel "axioms" looks unnecessary.
At this point we might not even need the part about a human reader. But I'll keep it for a little longer in case some non-human reader lacks (say) the Gettier intuition regarding counterfactuals. More on that later.

Third note chiefly for newcomers: I haven't addressed the problem of getting the numbers we plug into the equation, or 'priors'. We know that by choosing sufficiently wrong priors, we can resist any push towards the right answer by finite real-world evidence. This seems less important if we use my limit-definition but still feels like a flaw, in theory.

In practice, humans seem to form explanations by comparing the world to the output of black boxes that we carry inside of ourselves, and to which we've affixed labels such as "anger" or "computation". I don't think it matters if we call these boxes 'spiritual' or 'physical' or 'epiphenomenal', so long as we admit that stuff goes in, stuff comes out and for the most part we don't know how the boxes work. Now a vast amount of evidence suggests that if you started from the fundamental nature of reality and tried using it to duplicate the output of the "anger" box (or one of the many 'love' boxes), you'd need to add more conditions or assumptions than the effects of "computation" would require. Even if you took the easy way out and tried to copy a truly opaque box without understanding it, you'd need complicated materials and up to nine months of complex labor for the smallest model. (Also, how would a philosopher postulate love without postulating at least the number 2?)

New evidence about reality could of course change this premise. Evidence always changes some of the priors for future calculations. But right now, whenever all else seems equal, I have to assign a greater probability to an explanation which uses only my "computation" box than to one which uses other boxes. (Very likely the "beauty" box plays a role in all cases. But if you tell me you can explain the world through beauty alone, I probably won't believe you.) This means I must tentatively conclude our omniscient reader would use a similar way of assigning priors. And further differences seem unlikely to matter in the limit.
Even for real-world scenarios, the assumption of "sufficiently wrong priors" now looks implausible. (Humans didn't actually evolve to get the wrong answer. We just didn't evolve to get the right one. Our ancestors slid in under the margin of error.) All of which seems to let our reader assign a meaning to Eliezer's otherwise fallacious comment here.

(I had a line that I can't bear to remove entirely about the two worlds of Parmenides, prior probability vs the evidence, and timeless physics compared to previous attempts at reconciliation. But I don't think spelling it out further adds to this argument.)

Having established the internal consistency of our definition, we still need to look for Gettier counterexamples before we can call it an account of G-nosis. The ambiguity of Nogot allows for one test. If we assume that people do in fact show a greater chance of saying they own a Ford when somebody nearby owns one, and that S would not need to adjust any prior by more than ε, it seems to me that S does have knowledge. But we need more than hindsight to prove our case. Apparent creationist nut Robert C. Koons has three attempts at a counterexample in his book Realism Regained (though he doesn't seem to look at our specific Bayesian definition). We can dismiss one attempt as giving S an obvious false prior. Another says that S would have used a false prior if not for a blow to the head. This implies that S has no Bayes-approved chain of reasoning from his/her actual starting point to the conclusion. Finally, Koons postulates that an "all-powerful genie" controlled the evidence for reasons unrelated to the truth of the belief A, and the result happens to lead to the 'correct' value. But if our 'human reader' would not consider this result knowledge then 'correct' must not mean what I've called G-nosis. Apparently the reader imagines the genie making a different whimsical decision, and calculates different results for most of the many other possible whims it could follow. This results in a high reader-assigned probability which we call P(E|¬A), or P of E given not-A, for the evidence to appear the way it does to S even if one treats S's belief as false. And so S still would not have G-nosis by our definition.

This seems to pinpoint the disagreement between intuitions in the case of Fake Barn. People who deny S in Fake Barn knowledge believe that P(E|¬A) has a meaning even if we think P(¬A)=0 — in the limit, perhaps, or in a case where someone mistakenly set the prior probability of A at 100% because the evidence seems so directly and clearly known that they counted it twice. Obviously if we also require that no value anywhere differ by too much from the reader's, then a sufficiently 'wrong' P(E|¬A) rules out G-nosis. (This seems particularly true if it equals P(E|A), since E would no longer count as evidence at all.) If the reader adds what S might call a 'no-silliness' assumption, ruling out any effect of the barn facades on P(E|¬A), then S does have G-nosis. Though of course this would make less sense if our reader brought out the 'silly' counterfactual in the first place.
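The disagreement over Fake Barn can also be run through Bayes' theorem. The numbers below are invented purely for illustration; the point is that as the reader's P(E|¬A) climbs toward P(E|A), the barn-appearance E stops moving the posterior at all:

```python
def posterior(prior, p_e_given_a, p_e_given_not_a):
    """Standard Bayes update for P(A|E)."""
    p_e = prior * p_e_given_a + (1 - prior) * p_e_given_not_a
    return prior * p_e_given_a / p_e

prior = 0.5  # invented prior for "that object is a barn"

# Henry's implicit estimate: barn-looking things are almost never fake.
henry = posterior(prior, 0.99, 0.01)
assert henry > 0.98            # looks like knowledge to Henry

# The reader, who knows about the facades, sets P(E|not-A) high.
reader = posterior(prior, 0.99, 0.90)
assert 0.50 < reader < 0.55    # E barely moves the prior
```

If the reader's P(E|¬A) differs from Henry's implicit value by more than ε, the definition in the post denies Henry G-nosis.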

I'd love to see how the belief in knowledge for Fake Barn correlates with getting the wrong answer on a logic test due to judging the conclusion ("sharks are fish") instead of the argument or evidence ("fish live in the water" & "sharks live in the water").

Finally, the Zombie World story itself has a form similar to a Gettier case. If we just assume two 'logically possible worlds' then the probability of Chalmers arriving at the truth about his own experiences, given only the process he used to reach his beliefs, appears to equal 50%. Clearly this does not count as G-nosis, nor would we expect it to. But let's adjust the number of worlds so that the probability of consciousness, given the process by which Chalmers came to believe in his consciousness, equals 1-ε. (Technically we should also subtract any uncertainty Chalmers will admit to, but I think he agrees that if this number grows large enough he no longer has G-nosis of the belief in question, rather than the contrary belief or the division of probability between them.) His belief now fits the definition. And it intuitively seems like knowledge, since ε means the chance of error that I'd find acceptable. This seems like very good news for our definition.
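The world-counting move above amounts to simple arithmetic. A toy version, with an invented ε and an invented count of equally weighted worlds:

```python
# If n_conscious of n_total equally weighted possible worlds contain
# a conscious Chalmers, the belief-forming process gets it right with
# probability n_conscious / n_total.
def p_process_right(n_conscious, n_total):
    return n_conscious / n_total

# Two worlds (one conscious, one zombie): a coin flip, not G-nosis.
assert p_process_right(1, 2) == 0.5

# Adjust the mix until the process is right with probability 1 - epsilon:
epsilon = 0.001  # invented acceptable chance of error
assert abs(p_process_right(999, 1000) - (1 - epsilon)) < 1e-12
```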

What about the zombies?

Before going on to the obvious question, I want to make sure we double-tap. Back to the first zombie! Until now I've tried to treat "G-nosis" as a strange construct that philosophers happen to value. Now that we have the definition, however, it seems to spell out what we need in order for our beliefs to fit the facts. Even the assumption that P(E|¬A) obeys a reasonable definition of counterfactuals in the limit seems necessary in that, if we assume this doesn't happen, we suddenly have no reason to think our beliefs fit the facts. "G-nosis" illustrates the process of seeking reality. If our beliefs don't fit the definition, then extending this process will bring them into conflict with reality — unless we somehow arrived at the right answer by chance without having any reason to believe it. I think the zombiphile argument does assume that we have reason to believe in our own consciousness and no omniscient reader could disagree.

We gave Chalmers a 50% chance of having conscious experiences, what philosophers call "qualia," given his evidence and the assumption of the two worlds. But he would object that he used qualia to reach his conclusion of having them — not in the sense of causation, but in the sense of giving his conclusion meaning. When he says he knows qualia this sounds exactly like Z-Chalmers' belief, but gains added content from his actual qualia. Our definition however requires him to use probabilistic reasoning on some level before he can trust his belief. This appears to mean that some time elapses between the qualia he (seemingly) uses to form his belief, and the formally acceptable conclusion. If we assume Zombie World exists, then the evidence available when the conclusion appears seems just like the evidence available to Z-Chalmers. So it seems like this added content appears after the fact, like the content we give to statements by Z-Chalmers. And if the real Chalmers treats it as new evidence then the same argument applies. So much for Zombie #1.

So does the addition of worlds save the zombiphile argument? I don't know. (We always see one bloody zombie left at the end of the film!) But anything that could deceive me about the existence of my own qualia seems able to deceive me about '2+2=4'. I therefore argue that, in this particular case, ε should not exceed the chance that Descartes' Demon is deceiving me now, or that arithmetic contradicts itself, and 2 plus 2 can equal 3. Obviously it seems like a stretch to call this a logical possibility.

Even in the zombiphile argument, we can't regard the "bridging law" that creates consciousness as wholly independent from physical or inter-subjectively functional causes. We must therefore admit a strong a priori connection between a set of physical processes (a part of what Chalmers calls "microphysical truths") and human experience or "phenomenal truths". This brings us a lot closer to agreement, since the Bayesian forms of physicalism claim only an overwhelming probability for the claim that necessity links a particular set of causes to consciousness. And if we trust that our own consciousness exists, I argue this shows we must not believe in Zombie World as a meaningful possibility.

I feel like I should get this posted now, so I won't try to say what this means for a not-so-Giant Look-Up Table that simulates a Boltzmann Brain concluding it has qualia. Feel free to solve that in the comments.

23 comments

Comments sorted by top scores.

comment by Jack · 2011-01-24T23:06:21.500Z · score: 1 (1 votes) · LW(p) · GW(p)

So I'm having difficulty getting traction on the actual content: there is too much going on and you're not especially systematic. You need more thesis and subthesis statements. Maybe we can diagram your argument?

Chalmers claims zombies are logically possible.

Thomas tries to form a trilemma by arguing that we can't regard statements made by Zombie Chalmers about consciousness as true, or false, or meaningless.

This looks bad but I don't see yet how this implies the logical impossibility of p-zombies.

But the important lemma requires an account of knowledge in order to work.

I take it this is the part you take on in your discussion of Gettier cases.

To run through the other two lemmas: we know the zombie statements have meaning or truth values for us, the people postulating them, so let the rest of the argument apply to those meanings. And if we call the statements true then they must by assumption refer to something other than consciousness — call it consciousness-Z. But then Zombie Hairyfigment will (by assumption) say, "I have real consciousness and not just consciousness-Z."

Okay, so zombie statements about their consciousness will not be true, since they will insist they are conscious and they are not. I'm leaning toward 'those statements are false' at the moment.

The remaining lemma tries to draw a contradiction out of the zombiphile argument's assumption that Chalmers has knowledge of his own consciousness. We need some way to recognize or rule out knowledge in order for this to work. Happily, on this one question standard philosophy seems to point clearly towards the answer we want.

In what way is Chalmers's argument dependent upon this assumption?

Can you explain what your analysis of Gettier type cases is supposed to show about zombie statements about consciousness?

I need a basic outline of the primary claims before I can go farther. (Hopefully this will also help you organize and explain your argument better).

comment by hairyfigment · 2011-01-25T01:02:01.498Z · score: 0 (0 votes) · LW(p) · GW(p)

This looks bad but I don't see yet how this implies the logical impossibility of p-zombies.

Er, what? It would rule out each of the exhaustive possibilities (if we can reduce the options to three, as I tried to do briefly.) This would imply the logical impossibility of Zombie World by reductio, because the assumptions of Zombie World include a p-zombie Chalmers making statements about qualia. (I edited the post after the fact to more clearly specify Chalmers' particular version of the zombiphile argument and to state his conclusion.)

(I could have spent more time on the lemma that we can't regard Z-Chalmers' statements as meaningless. But the meanings that we the readers take from them derive from the assumptions of the zombiphile argument. As near as I can tell, an inherent contradiction in these 'apparently conceivable' meanings for Z-Chalmers' statements would mean a contradiction in the argument, therefore etc. Q.E.D.)

The important lemma says we can't regard the statements as false because that would make Chalmers' own knowledge of the existence of qualia unreliable. Now this depends on his whole argument up to the "bridging law", not just the zombie part. I have to assume that one of us knows "qualia" exist (as I think he has to assume it in order to draw any conclusion.) This would create a major flaw in my argument if the assumption seemed doubtful in any meaningful way, or if Chalmers didn't seem so certain of it.

Even granting this, my analysis of Gettier cases leads me to call the lemma open to interpretation. If we try to find the best form of the zombiphile argument then we could technically save it from contradiction. This, however, requires us to call our belief in qualia less-than-perfect evidence for their existence. Our belief counts as evidence to the extent that it actually co-exists with qualia, or the bridging law that produces them from 'matter'. But the fact of our consciousness appears as certain as any fact we know. I therefore argue that a priori (or rather, depending on that last sentence alone) we cannot assign the Zombie World more probability than any of the claims we casually reject in the field of math, like the claim that arithmetic contradicts itself, or the claim that every single mathematician or programmer who helped examine (say) the proof of Gödel's Incompleteness Theorem actually made some hidden mistake that none of us can find. Likewise, we must assign at least the amount of probability that we laughably call 'mathematical certainty' to the link leading from the 'physical' or functional causes we postulated in p-zombies, to consciousness.

At some point I'll try to spell out exactly which causes I think do the work. In particular I want to see if they exist within any process that spits out an argument for believing in qualia (probably not.) But first let's try to nail these parts down.

(edited slightly for clarity)

comment by Jack · 2011-01-25T05:44:02.737Z · score: 0 (0 votes) · LW(p) · GW(p)

Ah very good. That was clear enough for me to pinpoint exactly where I think you've gone wrong.

The analyticity of a claim does not correspond all that well to the probability humans assign to its truth. In other words, it may be true that p(hairyfigment is a zombie) is no more likely than 2+2=3. But that doesn't make 'hairyfigment is a zombie' logically impossible even though 2+2=3 is. This is rather trivial to see, actually. Right now I'm wearing Converse All-Star sneakers. I'm pretty certain of this and, in fact, I'm more certain of this than 4523199 x 66734=301851162066 (I did this with a calculator, but let's pretend I did it by hand or that someone just handed me a piece of paper with this equation on it). This is in fact true of the vast majority of mathematical and logical statements: statements whose negations are logically impossible. Nonetheless, it is not logically impossible that I am not wearing Converse All-Star sneakers. Humans are not logically omniscient and sufficiently complex truisms can involve a lot of uncertainty. In contrast, rather simple observations involve little uncertainty but are not logical truisms. It is also not logically impossible that you are being deceived by a Cartesian Demon.

Of course there is also the question of to what extent incorrect mathematical statements are actually logical possibilities and what it actually means for something to be a logical possibility (it may not be the same thing as a metaphysical possibility).

I'm not sure I see how the Gettier cases are fitting in though.

comment by hairyfigment · 2011-01-25T17:51:51.162Z · score: 0 (0 votes) · LW(p) · GW(p)

Take the assumptions that seem least convenient for my argument, as given in my previous comment. Say the probability that "arithmetic will never tell us '2+2=3'" does in fact converge to 1.

Let lim{P(x)} mean the limit of x's probability as we follow a sequence of evidence, such that no truths added to the middle of the sequence could change the limit. If A means "physical causes which duplicate Chalmers' actions and speech, and duplicate the functional process producing them, would produce qualia," and B means "arithmetic will never tell us '2+2=3'," and C means all of Chalmers' premises, I think my analysis of Gettier cases leads in part to this:

lim{P(B)} - lim{P(A|C)} <= | P(B) - lim{P(B)} |

I'd also argue that we should interpret probabilities as rational, logical degrees of belief. For this it helps if you've read the Intuitive Explanation of Bayes' Theorem. Then I can show a flaw in frequentism and perhaps in subjective-Bayesianism just by pointing out that EY's first problem has a unique answer, whether or not you choose to call it a probability. It has a unique answer even if we call the woman Martha. From there we can go straight to the problem of priors, as I did in my third note.

Going back to the math, then: if you think the right-hand side has a non-zero value, then logically you should adjust P(B). Because you must expect evidence that would lead you to change your mind, and that means you should change your mind now.

Ergo, logic tells us to believe A|C at least as much as we believe B.

comment by hairyfigment · 2011-01-25T08:21:44.485Z · score: 0 (0 votes) · LW(p) · GW(p)

I could be wrong, but I think this doesn't quite address it.

First, I don't deal directly with P(hairyfigment is a zombie). We should replace that with something like P(someone talking about qualia is a zombie| Zombie World exists & knowledge of qualia exists). And while I do get that second premise from introspection, I believe I could also get it from Chalmers. After that I follow logic to get the result.

Second, and this may bring us closer to the real disagreement, you don't know the claim "2+2=3" is logically impossible, at least not from logic itself. (If that sounds like what you wanted to say at the end there, you can skip to the next paragraph.) Gödel tells us that logic can't prove its own consistency or that of arithmetic. For all we know mathematically, it may allow us to prove both "2+2=4" and "2+2=3" from whatever set of assumptions you want to use. You can prove consistency from within a different set of assumptions, but that just pushes the problem elsewhere. In reality, we believe logic and arithmetic don't contradict themselves because they never have.

This brings me to Gettier. I think if we accept the Gettier intuitions, we have to define knowledge using the laws of probability. And this definition applies to logic. I pointed out in one of my notes that this makes sense formally. I also believe it makes practical sense for the reason I just gave. If we reject the definition I think we have to say that we'll never know if 2+2 can give us three, nor could any logically omniscient entity know.

Now, for the statement that "arithmetic will never tell us '2+2=3'," you could say we have a large probability of certainty. For the statement that "physical causes which duplicate Chalmers' actions and speech would produce qualia," you could say we have certainty of a large minimum probability. But surely the two give us equal "knowledge" of the statement. My imaginary, logically and empirically omniscient reader might give the two claims different numbers. The first might get P=1 in the limit while the second has P=1-ε by assumption. But I defined ε as the difference we feel we can ignore when it comes to the most certain claim we could possibly make! If you feel we can't ignore it then aren't you thinking of the wrong number? (Technically ε could equal zero.)

I'll have to think about this some more. For now, we can agree that I started by assuming all Chalmers' premises and arrived at an overwhelmingly high probability for the link he wants to prove logically unnecessary, yes?

comment by Jack · 2011-01-25T21:11:31.438Z · score: 1 (1 votes) · LW(p) · GW(p)

First, I don't deal directly with P(hairyfigment is a zombie). We should replace that with something like P(someone talking about qualia is a zombie| Zombie World exists & knowledge of qualia exists).

Huh? If we live in a Zombie World then by assumption everyone, whether they're talking about qualia or not, is a p-zombie.

I'm discussing whether or not you are a zombie (or that Chalmers is a zombie) because that becomes the crux of the issue once we stipulate that the cognitive mechanism by which we conclude we have qualia is not 100% reliable.

In reality, we believe logic and arithmetic don't contradict themselves because they never have.

There is no reason for us to get into that. Your claim here is the same as saying that we can never determine what is and is not logically possible. That isn't an unreasonable claim, but you can't proceed from it and show that p-zombies are logically impossible. Obviously.

We use logic and math, plus some kind of ontology, to draw our map of reality. Chalmers' claim is that our physical ontology combined with logic and math is insufficient to describe the world we live in, namely the one where we have qualia. So by necessity we're bracketing questions about the consistency of logic and mathematics and assuming they work. The question is whether or not qualia is a logical extension of our physical ontology. Chalmers claims it isn't.

This brings me to Gettier. I think if we accept the Gettier intuitions, we have to define knowledge using the laws of probability. And this definition applies to logic.

I already endorse strict Bayesian epistemology and think talk of "knowledge" is basically meaningless: beliefs just have probabilities. This applies to statements I make about logic as well. But that doesn't make logical claims the same as empirical claims! We might express our credence in them the same way, but that doesn't make them categorically indistinguishable. Chalmers' point is that qualia isn't included in our physical description of the universe, nor is it part of any logical extension of that ontology. It's a conceptually distinct phenomenon that requires our description of the universe to have an additional term.

Now, for the statement that "arithmetic will never tell us '2+2=3'," you could say we have a large probability of certainty. For the statement that "physical causes which duplicate Chalmers' actions and speech would produce qualia," you could say we have certainty of a large minimum probability. But surely the two give us equal "knowledge" of the statement.

Yes, claims based on self-observation can have similar probabilities to deductive claims. But that isn't Chalmers' point. The self-observation you use to conclude you have qualia isn't a logical extrapolation from the physical description of your brain. When Chalmers says Zombie Dave is a logical or metaphysical possibility, he isn't making a statement about the size of the set of worlds in which zombies exist. Rather, he is saying that in the set of epistemically possible worlds (which includes logically impossible worlds, since we are not logically omniscient) there are zombie worlds in the subset which is logically/metaphysically possible.

comment by hairyfigment · 2011-01-28T06:32:11.670Z · score: 0 (0 votes) · LW(p) · GW(p)

I see your objection, and I see more clearly than before the need for thesis statements.

I want to show that given only a traditional assumption of philosophy (a premise of cogito ergo sum, I think), we must believe in the claim: "physical causes which duplicate Chalmers' actions and speech, and which we could never physically distinguish from Chalmers himself, would produce qualia." Let's call this belief A for convenience. (I called a different statement A in a previous comment, but that whole version of the argument seems flawed.) It so happens that if we accept this claim we must reject the existence of p-zombies, but we care about p-zombies only for what they might tell us about A.

To that end, I argue that if we accept Chalmers' zombiphile or anti-A argument C, which includes the assumption I just mentioned, we must logically believe A.

Therefore, I don't argue that "the cognitive mechanism by which we conclude we have qualia is not 100% reliable." I argue that we would have to accept a slightly more precise form of that claim if we accept C, and then I show some of the consequences. (Poorly, I think, but I can improve that part.)

Likewise, I don't argue that "we can never determine what is and is not logically possible." I argue that we must believe certain claims about logic and math, like the claim B that "arithmetic will never tell us '2+2=3'," due to the same thought process we use to judge all rational beliefs. Now that seems less important if the argument in that previous comment fails. But I still think intuitively that if the probability of you the judge having qualia (call this belief Q) would not equal 1 in the limit, then lim{P(B)} would not equal 1. This of course seems consistent, since we don't need to assign P(B)=1 now, and we'd have to if we believed with certainty that the limit = 1. But on this line of thinking we have to call ¬B logically possible without qualification, thereby destroying any practical or philosophical use for this kind of possibility unless we supplement it with more Bayesian reasoning. The same argument leads me to view adding a new postulate, like a bridging law or a string of Gödel statements, as entirely the wrong approach.

I think this allows me to make a stronger statement than Robert Bass, who, as I mentioned near the start of the post, turns out to make a closely related argument in the linked PDF, but does not explicitly try to define what philosophers normally call "knowledge". (I don't know if this accounts for the dearth of Google or Google Scholar results for "robert bass" and either chalmers or zombie.) Once I give my definition, perhaps I should just have pointed out that P(A|C)=P(Q|C) in the limit. Thus if someone who treats Q as certain has knowledge of Q (as I think C asserts), we can only escape the conclusion that we know A when we treat it as certain by giving P(A|C) a smaller acceptable difference ε from the limit. (Edited to remove mistake in expression.) Now I can certainly think of scenarios where the exact P(A|Q) would matter a lot. But since A has more specific conditions than 'uploading' and rules out more possible problems than either this or sleep, and since P(A|Q)>P(A|C), I think knowing the latter has a margin of error no greater than P(Q) would fully reassure me. (I guess we're imagining Omega telling me he'll reset me at some later time to exactly my current physical state, which carries other worries but doesn't make me fear zombie-hood as such.) And it seems inarguable that P(A|Q)>P(A|C), since C makes the further assumption that we'll never find a contradiction in ¬A. You'll notice this works out to P(A|Q)>P(Q|C), which by assumption seems pretty fraking certain.

comment by Jack · 2011-01-28T06:57:02.049Z · score: 0 (0 votes) · LW(p) · GW(p)

To clarify before proceeding:

"physical causes which duplicate Chalmers' actions and speech, and which we could never physically distinguish from Chalmers himself, would produce qualia."

As written this is under-defined and doesn't even obviously contradict anything Chalmers says. What set of worlds does this 'would' apply to?

comment by hairyfigment · 2011-01-29T01:56:16.791Z · score: 0 (0 votes) · LW(p) · GW(p)

Better answer with slightly less snow-shoveling fatigue:

Chalmers assumes for the sake of argument that his actions and speech have physical causes. So the quoted claim A, by my argument in the fourth-to-last paragraph of the post, already stipulates the presence of the evidence that we gain from introspection and use to argue for qualia. Thus "would" applies to any logically possible world chosen at random, which may or may not have a "bridging law" to produce qualia. Chalmers doesn't seem to address the probability of it having such a law given the physical causes or ontology that we find in A. I show this chance exceeds P(that we actually have qualia | the aforementioned evidence for qualia & the assumption that we'll never prove A logically) -- long derivation at the end of this comment, since previous comments had flaws. Chalmers' argument against physicalism depends on treating this long conditional proposition as certain enough for his purpose, as its denial would leave us with no reason to think qualia exist and indeed no clear definition of qualia. Without the proposition Chalmers would not have an argument so much as an assertion. I'll look at what all of this means in a second.

In the grandparent I compare it to the situation in math. Gödel showed that we can't prove arithmetic will never contradict itself (we can't prove B logically) and I wanted to express this by saying that in any random logically possible world P(B)<1. Below you express it differently, saying the probability that we live in a logically possible world does not equal 1. According to that way of speaking, then, the chance that at least one logically possible world exists does not equal 1. It equals P(B), since if we could find a contradiction in one world then logic 'leads to' a contradiction in all worlds. And yet I argue that we know B, by means of Bayesian reasoning and the neglect of incredibly small probabilities that we have no apparent way to plan for. This would mean we know in the same way that logically possible worlds might tell us something about reality. The form of argument that Chalmers uses therefore takes its justification from Bayesian reasoning (and the neglect of incredibly small probabilities that we have no apparent way to plan for). If we can show that this process offers greater justification for A than for Chalmers' argument, then we should place more trust in A. This would of course increase the probability of physicalism, which requires A and seems to deny Chalmers' conclusion.

(In a prior comment I tried connecting the chance of A directly to the chance of B. But that version of the argument failed.)

What follows depends on the claim that our actions and speech have physical causes. If you accept my take on Chalmers' actual defense, and use C' to mean the claim that we'll never find a contradiction in ¬A, while E means the aforementioned evidence for the evidence-finder having qualia (claim Q),

P(A|E&C') = P(Q|E&C')

and conditioning A on E seems redundant, in which case

P(Q|E&C') = P(A|E&C') = P(A|C')

Now, P(Q|E) = P(Q|E&C')*P(C') + P(Q|E&¬C')*P(¬C')

and since ¬C' means P(A)=1 within logic, P(Q|E&¬C') means the chance of Q, given the evidence plus certainty that physical causes duplicating the evidence would produce qualia. So

P(Q|E&¬C') = 1

P(Q|E) = P(Q|E&C')*P(C') + P(¬C')

As for A,

P(A) = P(A|C')*P(C') + P(A|¬C')*P(¬C')

and since P(A|¬C') means the chance of A if ¬A contradicts itself, P(A|¬C') = 1, so P(A) = P(A|C')*P(C') + P(¬C')

By substitution, P(A) = P(Q|E&C')*P(C') + P(¬C')

P(A) = P(Q|E)

Conditioning on ¬C' would clearly increase the chance of Q|E and conditioning on C' would decrease it,

so P(A) > P(Q|E&C')
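The derivation above can be sanity-checked with numbers. This is only an illustration under assumed values (P(C') and P(Q|E&C') are mine, not from the comment): once P(Q|E&¬C') = P(A|¬C') = 1 and P(A|C') = P(Q|E&C'), the two mixtures collapse to the same value, and P(A) exceeds P(Q|E&C') whenever P(¬C') > 0 and P(Q|E&C') < 1.

```python
# Numerical check of the derivation. Illustrative assumed values:
p_c = 0.9             # P(C'): we never find a contradiction in ¬A
p_q_given_e_c = 0.95  # P(Q|E&C'), assumed strictly less than 1

# P(Q|E) = P(Q|E&C')*P(C') + 1*P(¬C'), since P(Q|E&¬C') = 1:
p_q_given_e = p_q_given_e_c * p_c + 1.0 * (1 - p_c)

# P(A) = P(A|C')*P(C') + 1*P(¬C'), with P(A|C') = P(Q|E&C'):
p_a = p_q_given_e_c * p_c + 1.0 * (1 - p_c)

print(abs(p_a - p_q_given_e) < 1e-12)  # True: P(A) = P(Q|E)
print(p_a > p_q_given_e_c)             # True: P(A) > P(Q|E&C')
```

Both inequalities hold for any such assignment, since P(A) is a weighted average of P(Q|E&C') and 1.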

comment by hairyfigment · 2011-01-28T08:56:40.207Z · score: 0 (0 votes) · LW(p) · GW(p)

I'm starting to think we shouldn't talk about sets of worlds at all. But in those terms: once we make that assumption of knowledge in our own world, "would" applies to the set of logically possible worlds, which may or may not have a "bridging law" to produce qualia. Chalmers doesn't seem to address the probability of them having such a law given the physical causes or ontology that we find in A. I show that it exceeds P(that we actually have qualia | we know we have qualia & we'll never prove A logically).

As I argued in the grandparent, saying we need to add a "bridging law" therefore seems like a terrible way to express the situation.

comment by Jack · 2011-01-28T10:35:23.096Z · score: 0 (0 votes) · LW(p) · GW(p)

You need to be a lot more precise. It is a chore to figure out what you're talking about in some of these comments; we've gone several rounds and I'm still not sure what your thesis is.

once we make that assumption of knowledge in our own world, "would" applies to the set of logically possible worlds, which may or may not have a "bridging law" to produce qualia. Chalmers doesn't seem to address the probability of them having such a law given the physical causes or ontology that we find in A.

I think what you're saying here is that qualia will be found in the entire set of logically possible worlds physically identical to our world (any bridging law is non-physical, we'll stipulate). Some of these worlds might have some kind of 'redundant' bridging law, but even if that isn't nonsensical we can ignore it.

Now, we can recognize that someone could discover that 2+2=3, but that doesn't mean 2+2=3 is a logical possibility in the sense that there exists a logically possible world in which 2+2=3. Rather, we would just conclude that the world we live in isn't logically possible under whatever system of logic is shown to be contradictory. If you want to do calculations that don't assume classical logic and the system of real numbers, then you need to do them over a larger set of worlds than those that are merely logically possible (what business you have doing calculations at all if those worlds are 'live' possibilities, I'll leave to you).

Now I agree with you that if we can be certain about the internal observation that leads us to conclude we have qualia, then it follows that a bridging law is unnecessary, as p-zombies also believe they have qualia for the same reasons non-p-zombies do. For p-zombies to be possible, then, requires that mechanism to be unreliable. I'd be surprised if Chalmers actually argues we can be 100% certain we are not p-zombies, but alright, let's say he does argue that. We can easily strengthen the argument as you suggest by simply saying that the cognitive mechanism that produces the belief that we possess qualia is not 100% reliable.

And in fact the worlds in which this cognitive mechanism fails are the worlds which, according to Chalmers, have no bridging law. As far as I'm concerned that solves the problem. We've answered the question, which is: are there logically possible worlds physically identical to this one where Chalmers has no qualia? The answer is yes: those worlds in which the cognitive mechanism which alerts him of his qualia fails.

Whether or not those worlds are common, that is, whether or not it is probable we are in such a world, is totally tangential to Chalmers' argument. I don't see the point of your equations, and what they mean doesn't make much sense to me; you seem to be conditionalizing on an argument and not evidence at times, which is confusing and controversial.

comment by hairyfigment · 2011-01-28T17:41:16.233Z · score: 0 (0 votes) · LW(p) · GW(p)

Ha, I wrote that answer in haste. Let me try once more.

comment by Jack · 2011-01-28T20:57:14.671Z · score: 0 (0 votes) · LW(p) · GW(p)

Cool, take your time.

comment by Jack · 2011-01-23T19:05:38.208Z · score: 1 (1 votes) · LW(p) · GW(p)

Like others said, this needs to either focus on Zombies or Gettier; maybe focus on Gettier first and then use that as a tool to address Chalmers. I would just take out the notes to newcomers. Despite the length of this post it still feels rushed to me; the ideas you're dealing with are pretty complex and you just keep piling one on top of the other. This makes it hard to follow even for someone familiar with a lot of those concepts.

I have a bit to say about the content but I'm going to make that another comment.

comment by hairyfigment · 2011-01-24T18:44:03.857Z · score: 0 (0 votes) · LW(p) · GW(p)

I can see parts that I could reasonably cut or break off. But I really want a chance of convincing some philosophers, so I have to address the points in the notes somehow. Perhaps in actual footnotes?

Do you plan to wait and address the content after I fix that?

comment by Jack · 2011-01-24T22:37:51.327Z · score: 0 (0 votes) · LW(p) · GW(p)

I can see parts that I could reasonably cut or break off. But I really want a chance of convincing some philosophers, so I have to address the points in the notes somehow. Perhaps in actual footnotes?

As someone inferentially closer to philosophers than most people here, I don't think the 'notes for newcomers' section is particularly essential to this endeavor. But the main thing is: this looks like a 15-20 page argument you've condensed to 4. So what you need to do is either forgo this medium and just write a full paper, or break down the argument into sections that can be posted separately.

Do you plan to wait and address the content after I fix that?

Nah, I just got confused and then distracted and then moved on. I'll try to do it now. Feel free to harangue me if I don't.

comment by torekp · 2011-01-23T14:54:30.686Z · score: 1 (1 votes) · LW(p) · GW(p)

This post is poorly organized. A complete recap of the overall zombie killer argument should come first. Only after that, if at all, should the Gettier cases be introduced.

The essay by Nigel Thomas is quite good, by the way. The following snippet captures a key point, and sheds much light on your discussion:

Thus zombiphiles normally (and plausibly) insist that we know of our own consciousness directly, non-inferentially. Even so, there must be some sort of cognitive process that takes me from the fact of my consciousness to my (true) belief that I am conscious. As my zombie twin is cognitively indiscernible from me, an indiscernible process, functioning in just the same way, must lead it from the fact of its non-consciousness to the equivalent mistaken belief. Given either consciousness or non-consciousness (and the same contextual circumstances: ex hypothesis, ceteris is entirely paribus) the process leads one to believe that one is conscious. It is like a stuck fuel gauge, that reads FULL whether or not there is any gas in the tank.

I recommend you edit your post and insert this quotation near the top.

comment by hairyfigment · 2011-01-23T17:53:21.043Z · score: 0 (0 votes) · LW(p) · GW(p)

Good point, I assumed most people here had read at least one post on philosophical zombies. Added explanation.

Gettier seems important if we want to convince any philosophers, though.

comment by ksagan · 2014-05-04T13:47:39.679Z · score: 0 (0 votes) · LW(p) · GW(p)

I tried, I really tried, to puzzle out what you're saying here, but at this rate, it'll be a lot quicker if someone else just confirms or denies this for me: This is what I came up with, upon reading the Wiki article on Gettier. Is this basically what you're saying?

Situation: I got the job. I believe Jones got the job. I know Jones has 10 coins in his pocket. I have 10 coins in my pocket, but I don't know that. Do I "know" the person who got the job has 10 coins in their pocket?

Classic Gettier Interpretation:

  • Belief-Jones got the job
  • Belief-Jones has 10 coins in his pocket
    • Conclusion-The guy who got the job has 10 coins in his pocket

Bayesian Gettier Interpretation (example numbers used for ease of intuition; minimal significant digits used for ease of calculation):

  • Belief-Jones probably (90%) got the job
  • Belief-Jones probably (90%) has 10 coins in his pocket
    • Conclusion- The person who got the job probably (81%) has 10 coins in his pocket
  • Belief-Jones might not have gotten the job (10%)
  • Belief-People sometimes have 10 coins in their pocket (50%)
    • Conclusion-Someone other than Jones with 10 coins in their pocket might have gotten the job (5%)
  • Belief-The person who got the job probably (81%) has 10 coins in his pocket
  • Belief-Someone other than Jones with 10 coins in their pocket might have gotten the job (5%)
    • Conclusion-The person who got the job probably (86%) has 10 coins in his pocket

In case it were unclear, I consider the answer to the initial question "Yes".
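The arithmetic in the example above (using ksagan's stated numbers, which are illustrative rather than canonical) can be reproduced as a total-probability calculation over "Jones got the job" versus "someone else did":

```python
# ksagan's example numbers for the Bayesian Gettier case.
p_jones_got_job = 0.9
p_jones_has_10_coins = 0.9
p_other_has_10_coins = 0.5   # base rate for an arbitrary other person

# Path 1: Jones got the job AND Jones has 10 coins.
via_jones = p_jones_got_job * p_jones_has_10_coins          # 0.81
# Path 2: someone else got it AND that person has 10 coins.
via_other = (1 - p_jones_got_job) * p_other_has_10_coins    # 0.05

# Total probability that the job-getter has 10 coins in their pocket.
total = via_jones + via_other                               # 0.86
print(round(via_jones, 2), round(via_other, 2), round(total, 2))
```

Note that on this reading the final 86% already folds in the possibility that the belief about Jones was wrong, which is what lets the Gettier case come out as graded credence rather than all-or-nothing "knowledge".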

comment by prase · 2011-01-24T13:28:42.178Z · score: 0 (0 votes) · LW(p) · GW(p)

The post is full of interesting ideas, but should be split into several shorter posts.

comment by anon895 · 2011-01-24T06:45:53.824Z · score: 0 (0 votes) · LW(p) · GW(p)

Partway through, I had the urge to look up a past comment saying something like "I've seen philosophers argue, in apparently total sincerity, whether a man in a desert seeing a mirage of a lake that coincidentally has a lake just beyond it "really" knows the lake is there".

Unfortunately I can't find it now; it probably either didn't use the exact word "mirage", used another metaphor entirely, or was actually on OB. Searching "mirage" brought up a similar metaphor in Righting a Wrong Question, but that's making a different point.

comment by timtyler · 2011-01-23T10:34:39.270Z · score: 0 (0 votes) · LW(p) · GW(p)

Zombies seem to be an unscientific idea. If there is no way to tell whether a particular agent is a zombie or not, what is the point of discussing the issue?

comment by timtyler · 2011-01-23T10:45:05.654Z · score: -3 (7 votes) · LW(p) · GW(p)

Zombies seem to be an unscientific idea.

If two creatures are physically identical, there is no way to tell whether they are zombies or not - and so nothing to discuss.

If two creatures are behaviourally identical (but physically different) there are no sensible grounds for claiming one to be more conscious than the other. One scientist could say that the one with the wetter brain is conscious, and another scientist could say that the one with the drier brain was conscious, but they would have no way of resolving their disagreement.

Some philosophers like to discuss ideas that can't be resolved by science. They can go on about them endlessly - and there is no danger that the scientists will hijack their ideas and decide whether they are correct or not experimentally.