Comments

Comment by doctorlogic on Repairing Yudkowsky's anti-zombie argument · 2011-10-06T15:14:09.978Z · LW · GW

> I think what you are saying is that if we possessed detailed understanding of a mind, we might discover a reductive explanation of qualia.

I think I'm saying more than this. We might find that it is impossible for beings like ourselves not to have qualia. By analogy, consider the Goldbach conjecture. It is possibly true but not provable with a finite proof. But it is also possibly false, and, if false, provably so with a finite proof, since a single counterexample would settle it. It's conceivable that the Goldbach conjecture is true, and conceivable that it is false, but only one of the two cases is logically possible.

> On the other hand, in the Mary thought experiment Mary has an incredibly large brain. Since she has by definition (yes indeed) a perfect “model” of a brain her model is in fact the brain itself, therefore her mind runs the same computations and (with extreme likelihood) produces the same qualia.

I'm afraid I don't see this. If qualia can be understood in terms of a model, then we can show that they reduce. But having a brain is not the same thing as having a model of a brain. Children have brains and can be certain of their qualia, but they have no model of their own cognition.

The qualia that Chalmers is talking about are what distinguish first-person experience from third-person experience. Even knowing everything material about how you think and behave, I still don't know what your first-person experience is like in terms of my own first-person experience. In fact, knowing another person's first-person experience in terms of my own might not even be possible, because of the indeterminacy of translation. Even being in possession of a perfect model of your brain doesn't obviously tell me exactly what your first-person experience is like. This is the puzzle that drives the zombie/anti-reductionist stance.

What I am saying beyond this is two-fold. First, even if the perfect model is of my own brain, there's still a gap between my first-person experience and my "third-person" understanding of my own brain. In other words, finding a gap isn't evidence for non-reductionism.

Second, the gap doesn't invalidate the reductive inference if the reductive inference wouldn't allow you to bridge the gap in any case.

How does this weigh on the zombie argument?

Well, frankly, we're a lot more confident in physicalism, based on the evidence, than we are that the zombie argument is free of flaws.

It's certainly possible that we're talking at cross purposes or that I don't understand your claim. Are you making a distinction between first-person experience and third-person knowledge of brains? The typical philosopher's response would be that a superintelligence has exactly the same problem as we do.

Comment by doctorlogic on Repairing Yudkowsky's anti-zombie argument · 2011-10-05T13:52:44.556Z · LW · GW

The first point, as I think Yudkowsky states, is that qualia are not very well defined. Human introspection is unreliable in many cases, and we're only consciously aware of a subset of the processes in our brains. This means that the conceivability of zombies doesn't establish that they are logically possible. When we examine what consciousness entails in terms of attention to mental processes, zombies might turn out to be logically impossible.

Second, one of the false intuitions humans have about consciousness goes something like this:

"If I draw up a schematic or simulation of my brain seeing a red field, I, personally, don't then see what it is like to see the color red. Therefore, my schematic cannot be the whole story."

Of course, this intuition is completely silly. A model of my brain doing something isn't going to produce qualia in my own mind. Nevertheless, I think this intuition drives the Mary thought experiment. In the Mary experiment, Mary is omniscient about color and human vision and cognition, but has lived in a black-and-white environment all her life. When she sees red for the first time, she knows something more than she did before. (Though Dennett would say she now simply knows that she can see the color red.)

As Bayesian reasoners, we have to ask ourselves: what might we expect if qualia do (versus do not) reduce to mechanistic processes?

If qualia do reduce to physics, then we would still find ourselves in the same situation as Mary. We don't expect models of brains to produce qualia in the brains of the modeler. At the same time, there are good reasons to expect physical brains to have qualia, as Antonio Damasio has described in Self Comes to Mind. On the other hand, if qualia could have had any conceivable value, why should they have happened to be the qualia consistent with reduction? Why couldn't seeing a red field produce qualia consistent with seeing elephants on Thursdays?

Another way of putting this is to say that reductive inference isn't expected to create qualia in the reasoner. When I model water as H2O, my model doesn't feel moist! Rather, the inference works because the model predicts facts about water that wouldn't have to hold if water didn't reduce. Similarly, reduction of minds to brains need not produce actual qualia in theorists. The theorists need only show that the alternatives get crushed in Bayesian fashion. The Mary experiment was supposed to show that reductionism was impossible, but it fails because the apparent qualia gap would exist whether or not we are mechanical.
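
To make the Bayesian shape of that argument explicit, here is a rough sketch (the labels G and R, and the approximate likelihoods, are illustrative assumptions rather than anything precise): let G be the observation of the apparent gap, i.e. that modeling a brain yields no qualia in the modeler, and let R be the hypothesis that qualia reduce to brain processes. In odds form,

$$\frac{P(R \mid G)}{P(\neg R \mid G)} \;=\; \frac{P(G \mid R)}{P(G \mid \neg R)} \cdot \frac{P(R)}{P(\neg R)} \;\approx\; 1 \cdot \frac{P(R)}{P(\neg R)}$$

since the gap is expected under reduction and non-reduction alike, P(G | R) ≈ P(G | ¬R), and observing the gap leaves the odds on reduction essentially where they were. The evidence that does the work lies elsewhere: qualia lining up with physical structure (red fields yielding red-like qualia rather than elephants-on-Thursdays qualia) is far more probable under R than under not-R.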

Comment by doctorlogic on Chicago Meetup 11/14 · 2010-11-15T20:15:38.414Z · LW · GW

Yes, great conversation. Great meeting everyone.