Seeing Red: Dissolving Mary's Room and Qualia
post by orthonormal · 2011-05-26T17:47:55.751Z · LW · GW · Legacy · 157 comments
Essential Background: Dissolving the Question
How could we fully explain the difference between red and green to a colorblind person?
Well, we could of course draw the analogy between colors of the spectrum and tones of sound; have them learn which objects are typically green and which are typically red (or better yet, give them a video camera with a red filter to look through); explain many of the political, cultural and emotional associations of red and green, and so forth... but it seems that the actual difference between our experience of redness and our experience of greenness is something much harder to convey. If we focus in on that aspect of experience, we end up with the classic philosophical concept of qualia, and the famous thought experiment known as Mary’s Room1.
Mary is a brilliant neuroscientist who has been colorblind from birth (due to a retina problem; her visual cortex would work normally if it were given the color input). She’s an expert on the electromagnetic spectrum, optics, and the science of color vision. We can postulate, since this is a thought experiment, that she knows and fully understands every physical fact involved in color vision; she knows precisely what happens, on various levels, when the human eye sees red (and the optic nerve transmits particular types of signals, and the visual cortex processes these signals, etc).
One day, Mary gets an operation that fixes her retinas, so that she finally sees in color for the first time. And when she wakes up, she looks at an apple and exclaims, "Oh! So that's what red actually looks like."2
Now, this exclamation poses a challenge to any physical reductionist account of subjective experience. For if the qualia of seeing red could be reduced to a collection of basic facts about the physical world, then Mary would have learned those facts earlier and wouldn't learn anything extra now– but of course it seems that she really does learn something when she sees red for the first time. This is not merely the god-of-the-gaps argument that we haven't yet found a full reductionist explanation of subjective experience, but an intuitive proof that no such explanation would be complete.
The argument in academic philosophy over Mary's Room remains unsettled to this day (though it has an interesting history, including a change of mind on the part of its originator). If we ignore the topic of subjective experience, the arguments for reductionism appear to be quite overwhelming; so why does this objection, in a domain in which our ignorance is so vast3, seem so difficult for reductionists to convincingly reject?
Veterans of this blog will know where I'm going: a question like this needs to be dissolved, not merely answered.
That is, rather than just rehashing the philosophical arguments about whether and in what sense qualia exist4, as plenty of philosophers have done without reaching consensus, we might instead ask where our thoughts about qualia come from, and search for a simplified version of the cognitive algorithm behind (our expectation of) Mary's reaction. The great thing about this alternative query is that it's likely to actually have an answer, and that this answer can help us in our thinking about the original question.
Eliezer introduced this approach in his discussion of classical definitional disputes and later on in the sequence on free will, and (independently, it seems) Gary Drescher relied on it in his excellent book Good and Real to account for a number of apparent paradoxes, but it seems that academic philosophers haven't yet taken to the idea. Essentially, it brings to the philosophy of mind an approach that is standard in the mathematical sciences: if there's a phenomenon we don't understand, it usually helps to find a simpler model that exhibits the same phenomenon, and figure out how exactly it arises in that model.
Modeling Qualia
Our goal, then, is to build a model of a mind that would have an analogous reaction for a genuine reason5 when placed in a scenario like Mary's Room. We don't need this model to encapsulate the full structure of human subjective experience, just enough to see where the Mary's Room argument pulls a sleight of hand.
What kinds of features might our model require in order to qualify? Since the argument relies on the notions of learning and direct experience, we will certainly need to incorporate these. Another factor which is not immediately relevant, but which I argue is vital, is that our model must designate some smaller part of itself as the "conscious" mind, and have much of its activity take place outside of that part.
Now, why should the conscious/unconscious divide matter to the experience of qualia? Firstly, we note that our qualia feel ineffable to us: that is, it seems like we know their nature very well but could never adequately communicate or articulate it. If we're thinking like a cognitive scientist, we might hypothesize that an unconscious part of the mind knows something more fully while the conscious mind, better suited to using language, lacks access to the full knowledge6.
Secondly, there's an interesting pattern to our intuitions about qualia: we only get this feeling of ineffability about mental events that we're conscious of, but which are mostly processed subconsciously. For example, we don't experience the feeling of ineffability for something like counting, which happens consciously (above a threshold of five or six). If Mary had never counted more than 100 objects before, and today she counted 113 sheep in a field, we wouldn't expect her to exclaim "Oh, so that's what 113 looks like!"
In the other direction, there's a lot of unconscious processing that goes into the process of digestion, but unless we get sick, the intermediate steps don't generally rise to conscious awareness. If Mary had never had pineapple before, she might well extol the qualia of its taste, but not that of its properties as it navigates her small intestine. You could think of these as hidden qualia, perhaps, but it doesn't intuitively feel like there's something extra to be explained the way there is with redness.
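As a caricature of these two ingredients (and nothing more), consider a toy program in which an "unconscious" layer does rich, input-specific processing while the "conscious" layer receives only an opaque summary of the result. Everything here (the hashing, the two-character summary) is an arbitrary stand-in for illustration, not a claim about brains and not the model introduced in the next post:

```python
# Toy sketch: an "unconscious" layer with rich internal states, and a
# "conscious" layer that can only access a coarse, communicable summary.
import hashlib

def unconscious_process(raw_input):
    """Rich processing the conscious layer cannot inspect.

    The hash is a stand-in for massive subconscious visual processing:
    the full internal state is detailed and highly input-specific.
    """
    return hashlib.sha256(raw_input.encode()).hexdigest()

def conscious_access(internal_state):
    """The conscious layer sees only a tiny lossy summary."""
    return internal_state[:2]  # almost all detail is lost

red_state = unconscious_process("650nm light")
green_state = unconscious_process("530nm light")

# The system can discriminate the two inputs perfectly...
assert red_state != green_state
# ...but what its conscious layer can report is a lossy two-character
# summary that conveys almost nothing of the internal difference.
print(conscious_access(red_state), conscious_access(green_state))
```

In such a system, the "conscious" layer would be in exactly the position the post describes: intimately acquainted with a difference it cannot articulate.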
Of course, there are plenty of other features we might nominate for inclusion in our model, but as it turns out, we can get a long way with just these two. In the next post, I'll introduce Martha, a simple model of a learning mind with a conscious/unconscious distinction, and in the third post I'll show how Martha reacts in the situation of Mary's Room, and how this reaction arises in a non-mysterious way. Even without claiming that Martha is a good analogue of the human mind, this will suffice to show why Mary's Room is not a logically valid argument against reductionism, since if it were then it would equally apply to Martha. And if we start to see a bit of ourselves in Martha after all, so much the better for our understanding of qualia...
TO BE CONTINUED
Disclaimer
One could reasonably ask what makes my attempt special on such a well-argued topic, given that I'm not credentialed as a philosopher. First, I'd reiterate that academic philosophers really haven't started to use the concept of dissolving a question; I don't think Daniel Dennett, for instance, ever explored this train of thought. And secondly, of those who do try to map cognitive algorithms within philosophy of mind, Eliezer hasn't tackled qualia in this way, while Gary Drescher gives them short shrift in Good and Real. (The latter essentially makes Dennett's argument that with enough self-knowledge qualia wouldn't be ineffable. But in my mind this fails to really dissolve the question; see my footnote 4.)
Footnotes:
1. The argument is called "Mary’s Room" because the original version (due to Frank Jackson) posited that Mary had perfectly normal vision but happened to be raised and educated in a perfectly grayscale environment, and one day stepped out into the colorful world like Dorothy in The Wizard of Oz. I prefer the more plausible and philosophically equivalent variant discussed above, although it drifts away from the etymology of the argument’s name.
2. Ironically, it was a green apple rather than a red one, but Mary soon realized and rectified her error. The point stands.
3. In general, an important rationalist heuristic is to not draw far-reaching conclusions from an intuitively plausible argument about a subject (like subjective experience) which you find extremely confusing.
4. Before we move on, though, one key reductionist reply to Mary’s Room is that either qualia have physical effects (like causing Mary to say "Oh!") or they don't. If they do, then either they reduce to ordinary physics or you could expect to find violations of physical law in the human brain, which few modern philosophers would dare to bet on. And if they don't have any physical effects, then somehow whatever causes her to say "Oh!" has nothing to do with her actual experience of redness, which is an exceptionally weird stance if you ponder it for a moment; read the zombie sequence if you're curious.
Furthermore, one could object (as Dennett does) that Mary’s Room, like Searle’s Chinese Room, is playing sleight of hand with impossible levels of knowledge for a human, and that an agent who could really handle such massive quantities of information really wouldn't learn anything new when finally having the experience. But to me this is an unsatisfying objection, because we don’t expect to see the effect of the experience diminish significantly as we increase her level of understanding within human bounds– and at most, this objection provides a plausible escape from the argument rather than a refutation.
5. (and not, for instance, because we programmed in that specific reaction on its own)
6. Indeed, the vast majority of visual processing- estimating distances, distinguishing objects, even identifying colors- is done subconsciously; that's why knowing that something is an optical illusion doesn't make you stop seeing the illusion. Steven Pinker's How the Mind Works contains a treasure trove of examples on this subject.
Comments sorted by top scores.
comment by [deleted] · 2011-05-26T18:52:21.457Z · LW(p) · GW(p)
A physically plausible scenario would involve growing up under a monochromatic light source.
Growing up without sensory input actually affects the brain; see Wikipedia's article on monocular deprivation. I'm actually an example of this - I was born without the Mystic Eyes Of Depth Perception so I'll never know what stereoscopic vision "feels like".
I propose that "qualia" is a word that, like "microevolution", is mainly used by people who are very confused (and dissolving the question is the appropriate approach).
↑ comment by [deleted] · 2011-05-26T19:01:32.562Z · LW(p) · GW(p)
I'm actually an example of this - I was born without the Mystic Eyes Of Depth Perception so I'll never know what stereoscopic vision "feels like".
If you turn something or move around it, even if you only use one eye to do this, your brain puts together the succeeding images to create a three-dimensional visual experience of the scene. Here is an example. If you're curious about "what it's like" to have stereo vision, in my opinion it is not far off from this, without the movement.
↑ comment by john-lawrence-aspden · 2011-05-26T19:24:24.928Z · LW(p) · GW(p)
The link is well worth following. Wow! Stereo vision with one eye closed!
↑ comment by Kaj_Sotala · 2011-05-28T11:15:42.563Z · LW(p) · GW(p)
I propose that "qualia" is a word that, like "microevolution", is mainly used by people who are very confused (and dissolving the question is the appropriate approach).
Could you expand on this? I've seen before here the notion that the term "qualia" should be gotten rid of entirely, but I've never really understood it.
For instance, asking what kinds of processes are capable of producing qualia, in order to figure out which animals are capable of feeling pain, certainly seems relevant for utilitarian ethics. (You could reword the question as "which animals can feel pain", which avoids using the term 'qualia', but you're at heart still referring to the same concept.)
↑ comment by Peterdjones · 2011-05-31T17:23:28.933Z · LW(p) · GW(p)
It's also pretty difficult to describe phenomena like synaesthesia without terms like qualia.
↑ comment by Peter_de_Blanc · 2011-05-28T13:06:46.268Z · LW(p) · GW(p)
Depth perception can be gained through vision therapy, even if you've never had it before. This is something I'm looking into doing, since I also grew up without depth perception.
↑ comment by [deleted] · 2011-05-28T14:00:40.795Z · LW(p) · GW(p)
I should have been more precise. I was born without a fully formed right eye - it has no lens and does not transmit a signal to my brain. Therefore, no "therapy" can improve my Vision Onefold. People in my situation (monocular blindness from birth) are extremely rare, so your assumption is understandable.
I can get around in 3D space just fine, and I'm extremely good at first-person shooters, so I know I'm not missing much. (Coincidentally, I have no interest in physical sports.) The wiggle images do "work" for me.
(On the other hand, "Possession of a single Eye is said to make the bearer equivalent to royalty.")
comment by Tyrrell_McAllister · 2011-05-27T16:28:00.894Z · LW(p) · GW(p)
I'm not sure that this enterprise should be called "dissolving the question".
The question at hand is, "Is there something about red things that Mary can learn only by having certain kinds of input fed into her visual cortex?" This seems like a question that should be answered, not dissolved.
If you were trying to dissolve this question, you would probably proceed by trying to show that the concept of the things that Mary can learn about red things is itself meaningless, or at least that its meaning is too vague to give meaning to the question. But I don't see why we would expect this concept to be so problematic. And I don't think that we need it to be problematic for there to be a satisfying reductionist resolution to the Mary's Room paradox.
But rather than dissolving the question, you seem to me to be "dissolving an answer". More precisely, you are trying to dissolve the intuitions that make the answer "Yes" seem so probable to so many people. This is a valuable thing to do, and I look forward to your next post.
comment by komponisto · 2011-05-26T21:33:11.102Z · LW(p) · GW(p)
The argument is called "Mary’s Room"...I prefer the more plausible and philosophically equivalent variant discussed above, although it drifts away from the etymology of the argument’s name
To be precise, the argument itself is called the Knowledge Argument (KA); "Mary's Room" is a name for the thought experiment Jackson used to present it.
Actually, it was one of two similar thought experiments in Jackson's original paper: the other one concerned a character named Fred who could see more colors than normal humans.
↑ comment by CronoDAS · 2011-05-27T04:15:49.306Z · LW(p) · GW(p)
He really should have reversed the names. Colorblindness is more common in men than women, and it actually is possible for a few women to see an additional primary color.
To summarize the linked article: Most people have three distinct kinds of photoreceptors that react to different frequencies of light: one receptor that is most sensitive to red, one receptor that is most sensitive to green, and one that is most sensitive to blue. (You can tell what color something is by the differences in the strength of the responses of the three different photoreceptors.) Genes for the red and green receptors are present on the X chromosome, so men have only one copy. Colorblind men have an abnormal version of one of these genes, so instead of getting a gene for seeing red and a gene for seeing green, they end up with a gene for seeing red and a gene for seeing a slightly different shade of red (which is why they can't tell the difference between red and green). On the other hand, if a woman has one copy of the "defective" gene and one copy of the "normal" gene, she could end up with four kinds of color receptors instead of the normal three: the one for red, the one for green, the one for blue, and the one for the slightly different red. This would let her see a difference between colors that look identical to people with normal color vision.
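The summary above can be made concrete with a schematic toy model: treat each receptor's response as a Gaussian bump around its peak wavelength, and compare the response patterns of a normal trichromat with those of a red-green colorblind observer whose "green" receptor has shifted toward red. The peak wavelengths and curve widths below are rough illustrative numbers, not real cone sensitivity data:

```python
# Schematic receptor model (made-up sensitivity curves, not real cone data).
import math

def response(peak_nm, light_nm, width=50.0):
    """Response of a receptor peaking at peak_nm to light at light_nm."""
    return math.exp(-((light_nm - peak_nm) / width) ** 2)

def receptor_outputs(peaks, light_nm):
    """The pattern of responses across a set of receptors."""
    return tuple(round(response(p, light_nm), 3) for p in peaks)

normal = (560, 530, 420)       # long-, medium-, short-wavelength peaks
colorblind = (560, 555, 420)   # "green" gene replaced by a near-red copy

red_light, green_light = 640, 540

# The normal observer's first two receptors respond quite differently,
# so red and green lights produce clearly distinct patterns:
print(receptor_outputs(normal, red_light), receptor_outputs(normal, green_light))
# The colorblind observer's first two receptors nearly agree on every
# input, so the patterns carry almost no red-vs-green information:
print(receptor_outputs(colorblind, red_light), receptor_outputs(colorblind, green_light))
```

A fourth receptor type, as in the tetrachromat case, would just add one more coordinate to the output tuple, letting its bearer distinguish lights that yield identical triples for everyone else.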
comment by XiXiDu · 2011-05-27T11:01:58.627Z · LW(p) · GW(p)
That someone knows every physical fact about gold doesn't make that person own any gold.
The Mary’s Room thought experiment explicitly claims that Mary knows every physical fact about the given phenomenon, yet at the same time implicitly suggests that some information is missing.
Mary was merely able to dissolve part of human nature by incorporating an algorithmic understanding of it. Mary wasn't able to evoke the dynamic state sequence from the human machine by computing the algorithm.
Understanding something means assimilating a model of it. Understanding something completely means being able to compute its algorithm: incorporating not just a model, a static description, but becoming the algorithm entirely. To understand something completely is to remove its logical uncertainty by computing it.
You might object that Mary won't learn anything from experiencing the algorithm, but if Mary does indeed know everything about a given phenomenon then by definition she also knows how the algorithm feels from the inside.
↑ comment by wedrifid · 2011-05-27T14:18:26.992Z · LW(p) · GW(p)
That someone knows every physical fact about gold doesn't make that person own any gold.
In the same vein:
Mary was merely able to dissolve part of human nature by incorporating an algorithmic understanding of it.
Mary doesn't dissolve any part of human nature either. She dissolves an irregularity in her map.
↑ comment by Peterdjones · 2011-05-31T17:01:50.604Z · LW(p) · GW(p)
That someone knows every physical fact about gold doesn't make that person own any gold.
That's not the point: the point is why it is necessary, in the case of experience, and only in the case of experience, to instantiate it in order to fully understand it. Obviously, it is true that a description of a brain state won't put you into that brain state. But that doesn't show that there is nothing unusual about qualia. The problem is that in no other case does it seem necessary to instantiate a brain state in order to understand something.
If another version of Mary were shut up to learn everything about, say, nuclear fusion, the question "would she actually know about nuclear fusion?" could only be answered "yes, of course... didn't you just say she knows everything?" The idea that she would have to instantiate a fusion reaction within her own body doesn't apply, any more than a description of photosynthesis will make you photosynthesise. We expect the description of photosynthesis to be complete, so that actually being able to photosynthesise would not add anything to our knowledge.
The list of things to which the standard Mary's Room intuition doesn't apply is a long one. There seem to be some edge cases: for instance, would an alternative Mary know everything about heart attacks without having one herself? Well, she would know everything except what a heart attack feels like — and what it feels like is a quale. The edge cases, like that one, are just cases where an element of knowledge-by-acquaintance is needed for complete knowledge. Even other mental phenomena don't suffer from this peculiarity. Thoughts and memories are straightforwardly expressible in words — so long as they don't involve qualia.
So: is the response "well, she has never actually instantiated colour vision in her own brain" one that lays to rest the challenge posed by the Knowledge Argument, leaving physicalism undisturbed? The fact that these physicalists feel it would be in some way necessary to instantiate colour means they subscribe to the idea that there is something epistemically unique about qualia/experience, even if they resist the idea that qualia are metaphysically unique.
Is the assumption of epistemological uniqueness to be expected given physicalism? Some argue that no matter how much you know about something "from the outside", you quite naturally wouldn't be expected to understand it from the inside. However, if physicalism is taken as the claim that everything ultimately has a possible physical explanation, that implies that everything has a description in 3rd-person, objective language — that everything reduces to the 3rd person and the objective. What that means is that there can be no irreducible subjectivity: whilst brains may be able to generate subjective views, they must be ultimately reducible to objectivity along with everything else. Since Mary knows everything about how brains work, she must know how the trick is pulled off: she must be able to understand how and why and what kind of (apparent) subjectivity is produced by brains. So the Assumption of Epistemological Uniqueness does not cleanly rescue physicalism, for all that it is put forward by physicalists as something that is "just obvious".
↑ comment by pjeby · 2011-05-31T17:53:00.650Z · LW(p) · GW(p)
if physicalism is taken as the claim that everything ultimately has a possible physical explanation, that implies that everything has a description in 3rd person, objective language — that everything reduces to the 3rd person and the objective.
The first part of your statement applies, the second doesn't. In LW jargon, "explaining is not the same as explaining away."
In other words, that you have an explanation for an experience doesn't mean the experience itself ceases to exist. You can totally have an explanation for why the sunset looks beautiful, and this doesn't in any way remove the beauty of the sunset.
The apparent ineffability of experiences is a function of the structure of the human brain. It's easy to imagine the cognitive architecture of a brain that could describe what "red" is like to another similar brain, and have the second brain be able to experience it. For such a species, Mary's Room would not be paradoxical, it'd be a stupid question nobody would even think of asking in the first place.
That philosophers are still arguing over it is a symptom of the general malaise in philosophy: that hardly anybody seems to notice when the stuff they're arguing about is directly premised on ideas that we already know (from the cognitive, physical, and information sciences) to be wrong, stupid, or just plain irrelevant.
↑ comment by Peterdjones · 2011-05-31T20:37:12.082Z · LW(p) · GW(p)
In other words, that you have an explanation for an experience doesn't mean the experience itself ceases to exist. You can totally have an explanation for why the sunset looks beautiful, and this doesn't in any way remove the beauty of the sunset.
Explaining does not in general mean explaining away, but fundamental 1st personness must be explained away in physical reduction.
The apparent ineffability of experiences is a function of the structure of the human brain.
What function and why does it apply only to experience, and not to all the other things the brain does?
It's easy to imagine the cognitive architecture of a brain that could describe what "red" is like to another similar brain, and have the second brain be able to experience it.
It's not easy to imagine with our brains and our red, so which are you changing--the brain or the red?
That philosophers are still arguing over it is a symptom of the general malaise in philosophy: that hardly anybody seems to notice when the stuff they're arguing about is directly premised on ideas that we already know (from the cognitive, physical, and information sciences) to be wrong, stupid, or just plain irrelevant.
Uh-huh. So we have a physical explanation of qualia. Where was that published?
↑ comment by pjeby · 2011-06-01T00:29:15.420Z · LW(p) · GW(p)
What function and why does it apply only to experience, and not to all the other things the brain does?
I used the term "function" in the mathematical sense, not the teleological one.
The "structure" I referred to is the absence of the ability to introspect and alter brain states at a sufficient level of detail to describe "red".
It's not easy to imagine with our brains and our red, so which are you changing--the brain or the red?
The brain: as I said, "For such a species," (i.e. not humans).
If a species existed that could communicate in neural primitives, they would not see any point to the Mary's room problem, since if they knew what "red" was, they could communicate it, and the "ineffability" would not exist.
Analogously, I've seen it said that dolphins can use sound to convey pictures to each other -- by replaying the sound of reflected sonar images, they can communicate to another dolphin what they "saw" with sound. I don't know if this is actually true, but it helps to illustrate how translating knowledge into qualia requires physical support in the host organism.
That is, if this is really true of dolphins, then it is possible for one dolphin to "show" another dolphin something it has never "seen" before (in echolocation terms), and thus knowledge of qualia is communicable.
Again, the point here is that if you have a brain and sensory organs that allow it, qualia are no longer ineffable. They only seem so because humans have limited hardware.
Uh-huh. So we have a physical explanation of qualia. Where was that published?
We understand information science well enough to understand that knowledge and computation do not work in the naive way that philosophers think about them -- and in a way that is directly applicable to dissolving this question.
Mary's Room depends on an abstract conception of knowledge -- the idea that knowledge is independent of its representation. But in the real world, knowledge is never separable from a physical representation of that knowledge, and it is always subject to computational constraints imposed by that physical representation.
Mary's brain is computationally constrained as to what physical states it can enter by way of conscious intervention, lacking any physical input from the outside world. So it should be no surprise at all there will exist mental states that can be brought about by outside input and cannot be brought about through "knowledge" of a verbal kind.
In other words, the ineffability of any given experience is a reflection of the limits of our brains, rather than representing some mystical quality of experience. And Mary's Room only seems puzzling because our inbuilt intuitions about thinking lead us to believe that we should be able to know things (experience brain states) that we aren't physically capable of.
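This reachability point can be made concrete with a toy state machine in which transitions are labeled by input channel, and the state standing in for "seeing red" happens to have no verbal in-edges. The states, labels, and transitions below are entirely hypothetical, chosen only to illustrate the structure of the argument:

```python
# Toy reachability sketch (hypothetical state machine, not a brain model):
# some "brain states" have only sensory in-edges, so no amount of verbal
# input alone can ever reach them.
from collections import deque

# transitions[state] = list of (channel, next_state)
transitions = {
    "start":              [("verbal", "knows_optics"), ("sensory", "sees_red")],
    "knows_optics":       [("verbal", "knows_neuroscience")],
    "knows_neuroscience": [],  # no verbal edge leads onward to "sees_red"
    "sees_red":           [("verbal", "names_red")],
    "names_red":          [],
}

def reachable(start, allowed_channels):
    """Breadth-first search over transitions restricted to some channels."""
    seen, frontier = {start}, deque([start])
    while frontier:
        state = frontier.popleft()
        for channel, nxt in transitions[state]:
            if channel in allowed_channels and nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return seen

verbal_only = reachable("start", {"verbal"})
with_sensation = reachable("start", {"verbal", "sensory"})
print("sees_red" in verbal_only, "sees_red" in with_sensation)  # False True
```

Under this framing, Mary's pre-operation education explores the verbal-only reachable set, which simply does not contain the state in question; nothing nonphysical is needed to explain why.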
As I said, this is a great example of where philosophers argue at length about things that have as much connection to empirical reality as angels on the head of a pin do. We have no need of nonphysical hypotheses to explain such basic matters as untranslatable or incommunicable knowledge.
Your request for a "physical explanation of qualia" is a case in point, because there isn't anything that needs explaining about qualia.
If you taboo the word "qualia", and ask what it expands to, then you get one of various possible obvious and non-contradictory explanations. Personally, for purposes of the Mary's Room discussion, I expand "qualia" as "brain states that cannot be transmitted between humans without reference to prior experience by the recipient"... which makes the paradox vanish immediately.
Of course we would not expect Mary to be able to be directed to the brain states that can represent "red" if it is a state that can't be transmitted between humans without reference to prior experience by the recipient. It is only the false implicit assumption that humans can place themselves into arbitrary brain states through conscious intervention that leads anyone to think the question's a paradox.
That's why people pushing the paradox angle keep saying, "ah, but Mary knows everything about red" -- which is hiding the assumption under the expansion of the word "know".
See, I expand "know" to mean something along the lines of, "has a representation in her brain simulating certain properties of".
Which means, Mary has a representation in her brain simulating certain properties of everything about red.
This is a requirement, because unless we posit that Mary has infinite brain capacity (i.e., not a human being), she cannot possibly have a brain simulating everything about red!
So, when you expand "know" and "red" (as an instance of qualia) with some simple clarity, the entire paradox dissolves into a stupid question that didn't need to be asked in the first place... not unlike the dissolution of the tree-sound argument in the "Proper Uses Of Words" Sequence.
↑ comment by Peterdjones · 2011-06-02T01:14:07.828Z · LW(p) · GW(p)
Again, the point here is that if you have a brain and sensory organs that allow it, qualia are no longer ineffable. They only seem so because humans have limited hardware.
And because of something about qualia, since the ineffability applies only to them.
Uh-huh. So we have a physical explanation of qualia. Where was that published?
We understand information science well enough to understand that knowledge and computation do not work in the naive way that philosophers think about them -- and in a way that is directly applicable to dissolving this question.
It is naive to suppose all philosophers think the same way.
Mary's Room depends on an abstract conception of knowledge -- the idea that knowledge is independent of its representation. But in the real world, knowledge is never separable from a physical representation of that knowledge, and it is always subject to computational constraints imposed by that physical representation.
Learning and education depend on an abstract conception of knowledge. A researcher can dump the knowledge in their brain into a book which is then absorbed by a professor and taught to students.
Mary's brain is computationally constrained as to what physical states it can enter by way of conscious intervention, lacking any physical input from the outside world. So it should be no surprise at all there will exist mental states that can be brought about by outside input and cannot be brought about through "knowledge" of a verbal kind.
No, but it should be a surprise that, out of everything she could know, only one thing depends on the instantiation of a physical brain state.
In other words, the ineffability of any given experience is a reflection of the limits of our brains, rather than representing some mystical quality of experience. And Mary's Room only seems puzzling because our inbuilt intuitions about thinking lead us to believe that we should be able to know things (experience brain states) that we aren't physically capable of.
We have that intuition because everything but qualia works that way. Why are qualia different?
As I said, this is a great example of where philosophers argue at length about things that have as much connection to empirical reality as angels on the head of a pin do. We have no need of nonphysical hypotheses to explain such basic matters as untranslatable or incommunicable knowledge.
You haven't actually explained the uniqueness of qualia at this point.
Your request for a "physical explanation of qualia" is a case in point, because there isn't anything that needs explaining about qualia.
What needs explaining is why they alone need physical instantiation to be known.
If you taboo the word "qualia", and ask what it expands to, then you get one of various possible obvious and non-contradictory explanations. Personally, for purposes of the Mary's Room discussion, I expand "qualia" as "brain states that cannot be transmitted between humans without reference to prior experience by the recipient"... which makes the paradox vanish immediately.
Why can other brain states be understood without transmission? We expect Mary to understand memory, cognition, etc.
Of course we would not expect Mary to be able to be directed to the brain states that can represent "red" if it is a state that can't be transmitted between humans without reference to prior experience by the recipient. It is only the false implicit assumption that humans can place themselves into arbitrary brain states through conscious intervention that leads anyone to think the question's a paradox.
It is the true fact that qualia alone have this epistemological uniqueness that makes it a puzzle.
That's why people pushing the paradox angle keep saying, "ah, but Mary knows everything about red"
Everything physical, ie all 3rd person descriptions.
-- which is hiding the assumption under the expansion of the word "know".
This is a requirement, because unless we posit that Mary has infinite brain capacity (i.e., not a human being), she cannot possibly have a brain simulating everything about red!
Or anything else. Why is that not a problem in the case of everything else?
So, when you expand "know" and "red" (as an instance of qualia) with some simple clarity, the entire paradox dissolves into a stupid question that didn't need to be asked in the first place... not unlike the dissolution of the tree-sound argument in the "Proper Uses Of Words" Sequence.
Hmm. So either the qualiaphiles are missing something...or you are.
Replies from: pjeby↑ comment by pjeby · 2011-06-04T14:51:56.680Z · LW(p) · GW(p)
And because of something about qualia, since the ineffability applies only to them.
Uh, no, because "qualia" is just a word applied to things we don't know how to describe without reference to experience.
In other words, it's a term about language... not a term about the experiences being described.
Learning and education depend on an abstract conception of knowledge. A researcher can dump the knowledge in their brain into a book which is then absorbed by a professor and taught to students.
And that knowledge is represented in various physical forms: books, sights, sounds, symbols. The "abstractions" themselves are then physically represented by neural patterns in brains. At no time during this process is there anything non-physical occurring.
When you, as an observer, look on this process and claim that abstractions exist, what you are saying is that in your brain, there is a physical representation of a repeating pattern in your perception. When you say, "Person A communicated idea X to Person B", you are describing representations in your head, not the physical reality.
The physical reality is, you saw a set of atoms creating certain vibrations in the air, which led to chemical changes in another chunk of atoms nearby. As part of the process, the atoms in your brain also rearranged themselves, creating a -- wait for it -- abstracted representation of the events that took place.
In other words, all "abstraction" takes place in physical brains. It doesn't exist anywhere else.
No, but it should be a surprise that, out of everything she could know, only one thing depends on the instantiation of a physical brain state.
You've got that backwards. It should be no surprise at all that we can't directly communicate experience, because we don't have any physical organs for doing that. We do have organs for transmitting and receiving symbolic communication: in other words, signals that stand for things.
And in order to communicate by signals, the referents of the signals have to be known in advance. So, it is utterly and completely unsurprising that we have to be able to point to something red to communicate the idea of red.
Why can other brain states be understood without transmission? We expect Mary to understand memory, cognition, etc.
Because she's experienced them, and thus has referents that allow symbolic communication to take place. (If she hadn't experienced them, we also likely wouldn't be able to communicate with her at all!)
[several comments/questions implying specialness or puzzlingness of qualia]
Suppose I make up a term, foogly, and claim it is special. When you ask for some examples of this word, I point to various species of non-flying birds. You then say to me, "Those are just birds that don't fly."
"But ah!" I say, "Out of all the birds in the world, there are only these species of bird that don't fly. Clearly, there is something special about fooglies. What a puzzle!"
You say, "But they're just birds that can't fly!"
"Ah, but you haven't explained why they're special!"
"There's nothing to explain! Some don't have wings big enough, or muscles strong enough, or they lived in an area where it wasn't advantageous any more to fly, or whatever."
"Ah," I retort. "But then how come it's only fooglies that don't fly! You haven't explained anything."
"But, but..." you stammer. "You just made up that word, such that it means 'birds that don't fly'. The commonality isn't in the birds -- those different species of birds have nothing to do with each other. The commonality between them is in the word, that you made up to put them together. It has no more inherent rightness of grouping than that aboriginal word for 'women, fire, and dangerous things'. You're arguing about a word."
"That's all very nice," I say, "but you still haven't explained fooglies."
At this point, you are quite likely to think I am an idiot.
I, on the other hand, merely think you have failed to understand the sequence on the Proper Uses of Words -- a bare minimum requirement for having an intelligent discussion on Less Wrong about topics like this one.
The LW standard for philosophical discussion requires reference to things in the world. That, as far as possible, we expand our terms until the symbols are grounded in physical things, where we can agree or disagree about the physical things, rather than the words being used to describe the things.
When you do that, a huge swath of philosophical "puzzles" dissolve into thin air as the mirages that they are. There is nothing special about qualia, because it's a made-up word for "things we can't communicate symbolically without experiential referent".
What's more, even that definition is still a red herring, because there is nothing we can communicate symbolically without experiential referent. All our abstract words are actually built up from more concrete ones, such that we have the illusion that there are things that we can describe without experiential referent.
Take "abstract", for example. The only way to learn what that word means is by concrete examples of abstractions! To know what "communication" is, you have to have experienced some concrete forms of communication first.
If language is a pyramid of concepts, each abstraction built up on others from more concrete concepts and experiences, then at some point there is a bottom or base to this pyramid... and the term qualia is simply pointing to all these things at the bottom of the pyramid, and claiming that they must be special somehow because, well, they're all at the bottom of the pyramid.
Yeah, they're at the bottom. So what? All it means is that they're stuff your brain has neural inputs already in place for, just like the only thing in common between birds that don't fly is that they lack the capacity to fly.
In other words, it's not a word for something special. It's a word for things that aren't special. Every animal with a brain has neural inputs, so qualia are abundant in the physical world.
It's only humans who think there's anything special about them, because humans also have the capacity to process symbols. And in fact, we are so accustomed to thinking in symbols, and being able to communicate in symbols, that we are surprised when we find ourselves unable to communicate symbolically about something.
But this is the exact same experience that we have when trying to communicate anything symbolically without a common reference point. As frustrating as it may feel, the simple truth is that you cannot communicate anything symbolically without a reference point, because symbols have to stand for something, that both parties to the communication have in common.
It's just that normally, we have no need to try to communicate something without a reference point.
Anyway, if you understand this much, then it's plain that Mary's Room is just a bunch of self-defeating words that can't happen in reality. For Mary to have "knowledge" of red, it has to have been communicated to her, either experientially or symbolically.
But, for it to have been communicated symbolically, there had to be a referent in experience... which would mean she'd have to have experienced red.
That's the physical reality, so this "thought experiment" cannot possibly take place physically.
Now, if you hypothesize a robot Mary or an alien Mary who has organs for communicating direct neural perception, or who has the ability to directly alter brain state, great. But in that case, Mary would not experience any surprise, since Mary would already have been able to induce the brain state in question.
Since a human Mary lacks either of these abilities, it should not be surprising that we cannot symbolically convey anything to her that is not grounded in something she already knows. That's just how symbolic communication works.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-05T20:53:57.918Z · LW(p) · GW(p)
And because of something about qualia, since the ineffability applies only to them.
Uh, no, because "qualia" is just a word applied to things we don't know how to describe without reference to experience.
That's vaguely phrased. "Quale" is defined as a term for sensory qualities and phenomenal feels. It is a further, non-definitional fact that the set of qualia so defined coincides with the set of ineffable things.
In other words, it's a term about language... not a term about the experiences being described.
If you look at the locus classicus, CI Lewis's definition, qualia are not defined in terms of language at all.
"There are recognizable qualitative characters of the given, which may be repeated in different experiences, and are thus a sort of universals; I call these "qualia." But although such qualia are universals, in the sense of being recognized from one to another experience, they must be distinguished from the properties of objects. Confusion of these two is characteristic of many historical conceptions, as well as of current essence-theories."
Moreover, ineffability is two-sided: a particular class of entities isn't describable in a particular language. You can't put all the blame on language L when L can describe other things adequately.
.. it should be a surprise that, out of everything she could know, only one thing depends on the instantiation of a physical brain state.
You've got that backwards. It should be no surprise at all that we can't directly communicate experience, because we don't have any physical organs for doing that. We do have organs for transmitting and receiving symbolic communication: in other words, signals that stand for things. And in order to communicate by signals, the referents of the signals have to be known in advance.
That is vaguely phrased. Of course, one has to know the meaning of signal-states in some sense. However, it is not clear that every symbol must match up one-for-one with a sensory referent. Moreover, abstract terms seem to work differently from concrete ones.
So, it is utterly and completely unsurprising that we have to be able to point to something red to communicate the idea of red.
It is only unsurprising if you have adopted a theory according to which someone would have to be acquainted by direct reference with pentagons in order to understand the string "pentagon". However, that is not the case.
Why can other brain states be understood without transmission? We expect Mary to understand memory, cognition, etc.
Because she's experienced them, and thus has referents that allow symbolic communication to take place.
Does the super-neuroscientist Mary understand dementia, psychosis, etc., in your opinion? Does she have experiences of excitation levels across her synaptic clefts?
(If she hadn't experienced them, we also likely wouldn't be able to communicate with her at all!)
It's beginning to look like all male gynecologists should be sacked.
Suppose I make up a term, foogly, and claim it is special. When you ask for some examples of this word, I point to various species of non-flying birds. You then say to me, "Those are just birds that don't fly."
[..]
"But, but..." you stammer. "You just made up that word, such that it means 'birds that don't fly'. The commonality isn't in the birds -- those different species of birds have nothing to do with each other. The commonality between them is in the word, that you made up to put them together. It has no more inherent rightness of grouping than that aboriginal word for 'women, fire, and dangerous things'. You're arguing about a word."
Again, "qualia" isn't defined as "whatever is ineffable", so the analogy isn't analogous.
"That's all very nice," I say, "but you still haven't explained fooglies."
At this point, you are quite likely to think I am an idiot.
I, on the other hand, merely think you have failed to understand the sequence on the Proper Uses of Words -- a bare minimum requirement for having an intelligent discussion on Less Wrong about topics like this one.
Do you? I think I was hacking that stuff when EY was in diapers. And you're not using "quale" properly.
That, as far as possible, we expand our terms until the symbols are grounded in physical things, where we can agree or disagree about the physical things, rather than the words being used to describe the things.
Please explain how that theory applies to mathematics.
When you do that, a huge swath of philosophical "puzzles" dissolve into thin air as the mirages that they are. There is nothing special about qualia, because it's a made-up word for "things we can't communicate symbolically without experiential referent".
I've heard it all before. Projects to Dissolve all Philosophical Problems have been tried in the past, with disappointing results.
What's more, even that definition is still a red herring, because there is nothing we can communicate symbolically without experiential referent.
So you say. That's an unproven theory, for one thing. For another, there seem to be robust counterexamples, such as the ability of physicists and mathematicians to communicate about unexperienceable higher-dimensional spaces.
If language is a pyramid of concepts, each abstraction built up on others from more concrete concepts and experiences, then at some point there is a bottom or base to this pyramid... and the term qualia is simply pointing to all these things at the bottom of the pyramid, and claiming that they must be special somehow because, well, they're all at the bottom of the pyramid.
If they are at the bottom of the pyramid, they are special. Your current argument, that what is at the bottom of the pyramid cannot be explained, relies on that. And it amounts to gainsaying the premise of Mary's Room: Mary doesn't know everything about how the brain works, because she doesn't know how qualia work, because no reductive explanation of qualia is available, because qualia cannot be reduced to simpler concepts, because they are at the bottom of the pyramid.
In other words, it's not a word for something special. It's a word for things that aren't special. Every animal with a brain has neural inputs, so qualia are abundant in the physical world.
That's vaguely phrased. You have conceded it is special with regard to its place in the conceptual hierarchy and its communicability, for all that you are holding out that a metaphysical explanation isn't required.
But this is the exact same experience that we have when trying to communicate anything symbolically without a common reference point.
So: are attempts to communicate with extraterrestrials doomed?
But, for it to have been communicated symbolically, there had to be a referent in experience... which would mean she'd have to have experienced red.
And she'd have to have a stroke to understand the effects of stroke on the brain? You need to be clearer about the difference between grounding symbol systems, and finding referents for individual symbols.
That's the physical reality, so this "thought experiment" cannot possibly take place physically.
You are taking it as a thought experiment where she successfully learns colour qualia, although the expected outcome of the original story is that she doesn't.
Replies from: pjeby, SilasBarta↑ comment by pjeby · 2011-06-06T03:27:36.763Z · LW(p) · GW(p)
You have conceded it is special with regard to its place in the conceptual hierarchy and its communicability, for all that you are holding out that a metaphysical explanation isn't required.
I've conceded that they're as special as birds that don't fly. That is, that they're things which don't require any special explanation. One of the things you learn from computer programming is that recursion has to bottom out somewhere. To me, the idea that there are experiential primitives is no more surprising than the fact that computer languages have primitive operations: that's what you make the non-primitives out of. No more surprising than the idea that at some point, we'll stop discovering new levels of fundamental particles.
Among programmers, it can be a fun pastime to see just how few primitives you can have in a language, but evolution doesn't have a brain that enjoys such games. So it's unsurprising that evolution would work almost exclusively in the form of primitives -- in other words, a very wide-bottomed pyramid.
Humans are the special ones - the only species that unquestionably uses recursive symbolic communication, and is therefore the only species that makes conceptual pyramids at all.
So, from my point of view, anything that's not a primitive neural event is the thing that needs a special explanation!
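The recursion analogy above can be made concrete with a toy sketch (purely illustrative; the concept names and the `expand` function are invented for this example, not anyone's actual model of cognition). Abstract terms are defined in terms of other terms, and expanding a definition recurses until it bottoms out in primitives — the analogue of direct experiential referents at the base of the pyramid:

```python
# Toy model of a concept pyramid: abstract terms are defined in terms of
# other terms; expansion recurses until it hits undefined primitives.
CONCEPTS = {
    "communication": ["speech", "writing"],
    "speech": ["sound"],
    "writing": ["shape"],
    # "sound" and "shape" have no definitions: they are the primitives,
    # standing in for raw experiential referents.
}

def expand(term):
    """Recursively expand a term until only primitives remain."""
    if term not in CONCEPTS:          # base case: recursion bottoms out here
        return {term}
    primitives = set()
    for part in CONCEPTS[term]:
        primitives |= expand(part)    # recursive case: expand sub-concepts
    return primitives

print(expand("communication"))  # {'sound', 'shape'}
```

The point of the sketch is just that the base case has to exist somewhere: `expand` on any defined term terminates only because some terms have no further definition, just as symbolic vocabulary terminates in referents that were never themselves defined symbolically.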
[mathematicians, male gynecologists, etc.]
You appear to be distorting my argument, by conflating experiential primitives and experiential grounding. Humans can communicate metaphorically, analogously, and in various other ways... but all of that communication takes place either in symbols (grounded in some prior experience), or through the direct analog means available to us (tone of voice, movement, drawing, facial expressions) to ground a communication in some actual, present-moment experience.
But, I expect you already knew that, which makes me think you're simply trolling.
Why are you here, exactly?
Clearly, you're not a Bayesian reductionist, nor do you appear to show any interest whatsoever in becoming one. In not one comment have I ever seen you learn something from your participation, nor do I see anything that suggests you have any interest in learning anything, or really doing anything else but generating a feeling of superiority through your ability to remain unconvinced of anything while putting on a show of your education.
Your language about arguments and concessions strongly suggests that you think this is a debating society, or that arguments are soldiers to be sent forth in support of a bottom line...
And I don't think I've ever seen you ask a single question that wasn't of the rhetorical, trying-to-score-points-off-your-opponent variety, which suggests you have very little interest in becoming... well, any less wrong than you currently are.
So, why are you here?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-06T13:16:38.500Z · LW(p) · GW(p)
I've conceded that [qualia] are as special as birds that don't fly.
That's vaguely phrased. What does "special" mean? Is my guess about metaphysical explanation correct?
That is, that they're things which don't require any special explanation. One of the things you learn from computer programming is that recursion has to bottom out somewhere.
I know. I'm a programmer.
[mathematicians, male gynecologists, etc.]
You appear to be distorting my argument, by conflating experiential primitives and experiential grounding. Humans can communicate metaphorically, analogously, and in various other ways... but all of that communication takes place either in symbols (grounded in some prior experience), or through the direct analog means available to us (tone of voice, movement, drawing, facial expressions) to ground a communication in some actual, present-moment experience.
You appear to be not even responding to my arguments.
Why are you here, exactly?
I am here to evaluate ideas and arguments.
Clearly, you're not a Bayesian reductionist,
I have studied Bayesian reductionism and I find it flawed. I can explain why. Are you saying I should not be examining it critically, or that I should accept it in spite of its flaws?
nor do you appear to show any interest whatsoever in becoming one.
What sort of effort would that involve? Do you realise how religious your language sounds -- "you need to try harder to believe"?
In not one comment have I ever seen you learn something from your participation,
Well, Sir, I haven't seen you learn anything from me.
nor do I see anything that suggests you have any interest in learning anything, or really doing anything else but generating a feeling of superiority through your ability to remain unconvinced of anything while putting on a show of your education.
Most ideas are wrong, so I remain unconvinced by most of them.
Your language about arguments and concessions strongly suggest that you think this is a debating society, or that arguments are soldiers to be sent forth in support of a bottom line...
What are these forums for, if not debate? Are participants supposed to accept things uncritically? That's not rationality where I come from.
And I don't think I've ever seen you ask a single question that wasn't of the rhetorical, trying-to-score-points-off-your-opponent variety, which suggests you have very little interest in becoming... well, any less wrong than you currently are.
Try considering the hypothesis that all that is true, and is explained by my already knowing how to be rational.
I mean, do you think LW has cornered some market in rationality? Do you think everyone who visits these boards can be assumed to be naively empty-headed? Do you think it might be a step forward to base your ad hominems on actual characteristics rather than assumed ones?
So, why are you here?
I'm the irritant that produces the pearl.
Replies from: pjeby, shokwave↑ comment by pjeby · 2011-06-06T17:19:59.986Z · LW(p) · GW(p)
What are these forums for, if not debate? Are participants supposed to accept things uncritically? That's not rationality where I come from.
Way to conflate three entirely different things to suggest various deniable conclusions. A terrific example of the sort of "Dark Arts" debating tactics we are not interested in having on LessWrong.
I think perhaps you're looking for the Argument Clinic, instead.
I'm the irritant that produces the pearl.
In other words, you admit to being a troll. Thanks for clarifying that.
Congratulations on at least not being an immediately obvious one; I originally mistook you for an educable single-topic visitor from another site (rather than a determined troll), who might actually be educable. So, I'll stop replying entirely now.
Replies from: None, Peterdjones, Peterdjones↑ comment by [deleted] · 2011-06-08T01:02:35.796Z · LW(p) · GW(p)
This and your recent other discussions about qualia and zombies are a great example of getting useful explanations thanks to trolls. It finally clicked for me that an algorithmic explanation doesn't actually "leave anything out" and that reductionism doesn't fail. I kept reading "Mind Projection Fallacy", but couldn't see how I was committing it. Thanks for your efforts, PJ!
Replies from: pjeby↑ comment by pjeby · 2011-06-08T01:17:20.541Z · LW(p) · GW(p)
I kept reading "Mind Projection Fallacy", but couldn't see how I was committing it. Thanks for your efforts, PJ!
I'm glad someone got some use out of it.
a great example of getting useful explanations thanks to trolls
You (or someone else) could have gotten just as good of an explanation out of me by saying, "I don't understand how that's committing the MPF", so that's not really evidence in favor of trolls being valuable.
What's valuable is persistence: not asking the question just once, but continuing to say, "yeah, but what I don't get about that is...", "wouldn't that then cause/mean...", etc., until you get a satisfactory answer.
Trolls are certainly persistent, but that doesn't mean the resulting conversation record will necessarily be of any use, alas.
Replies from: None↑ comment by [deleted] · 2011-06-08T01:44:29.548Z · LW(p) · GW(p)
You (or someone else) could have gotten just as good of an explanation out of me by saying, "I don't understand how that's committing the MPF", so that's not really evidence in favor of trolls being valuable.
Sure, in principle, but what really happened was that I read the first few explanations (mostly by Eliezer and Dennett) and thought, "Nah, that doesn't really work. How am I projecting anything? You are all ignoring consciousness!". When others then mentioned the position, I automatically dismissed it. Only by seeing people stubbornly bring up really bad arguments against reductionism did I finally snap, "C'mon! I'm on your side, but that's just stupid. If $belief about qualia were true, how would you ever know? What's the difference in anticipation here? ... Waitaminnit, what am I anticipating here?" and that unraveled the whole thing in the end.
(Noted, however. Need to ask more.)
↑ comment by Peterdjones · 2011-06-09T18:27:26.183Z · LW(p) · GW(p)
What are these forums for, if not debate? Are participants supposed to accept things uncritically? That's not rationality where I come from.
Way to conflate three entirely different things to suggest various deniable conclusions.
Asking a number of separate questions is not conflation. If you are not going to answer questions, I can only draw whatever conclusions I can from your silence.
Are the LW forums for debate, or not?
In other words, you admit to being a troll. Thanks for clarifying that.
Dropping out of a debate with questions unanswered and points unmet can make you look irrational -- but of course you don't have to engage with someone Innately Evil, do you?
Congratulations on at least not being an immediately obvious one; I originally mistook you for an educable
Again, you have this absolutely rigid idea that I (or everybody?) can only possibly be a learner (agreer? disciple?) here, although you actually know nothing about me, and therefore have no idea what I might have to teach. But having labelled me Innately Evil, that's never going to change. There is no fact of the matter that I am the learner and you the teacher; instead, just a bunch of stop signs in your mind.
↑ comment by Peterdjones · 2011-06-06T17:30:11.433Z · LW(p) · GW(p)
So, I'll stop replying entirely now.
You stopped making substantive replies some time ago. I suppose by "entirely" you will stop Ad Homming as well.
↑ comment by shokwave · 2011-06-06T15:18:40.639Z · LW(p) · GW(p)
I'm the irritant that produces the pearl.
Outmoded method of production for object-type [pearl]. Slow, inefficient, no quality control. Pearl only has superficial value. Synthetic pearls significantly less valued than organic: no value to actual physical configuration. Value attached to status associated with expensive or difficult-to-produce item. Recommend elimination of object-type [pearl].
(You're the irritant that produces something pjeby and I don't want.)
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-06T15:54:42.163Z · LW(p) · GW(p)
(...something pjeby and I don't want.)
Which would be understanding (=ability to explain) rather than unchallenged belief.
Replies from: MixedNuts, shokwave↑ comment by MixedNuts · 2011-06-06T16:07:21.603Z · LW(p) · GW(p)
People who view themselves as annoying others because they make them think tend to be trolls. (Other types of trolls include people who consciously troll for lulz, and people who can't stick to the local unwritten rules.)
I don't actually know any example of people consciously thinking of themselves as a pearl-producing irritant who aren't trolls. People who irritate other people and cause them to produce valuable thoughts tend to do most of the thinking themselves with pearls as a smaller side effect (controversial thinkers). The rest tend to be very poor thinkers whose arguments can be reconstructed by more skilled thinkers for interesting results (some theists; Marx), and they try not to be annoying.
Replies from: AdeleneDawner, Peterdjones↑ comment by AdeleneDawner · 2011-06-06T17:29:40.340Z · LW(p) · GW(p)
I don't actually know any example of people consciously thinking of themselves as a pearl-producing irritant who aren't trolls.
To be entirely fair, I have actually known such a person. It manifested as him showing up at a meditation meetup I went to on a regular basis, sitting quietly, not speaking unless directly asked a question, being generally ineffable when asked questions, and quietly giving up when several months (a year?) of this behavior didn't get the result he was looking for. I wouldn't even have known why he left if I hadn't tracked him down and asked.
Replies from: MixedNuts↑ comment by MixedNuts · 2011-06-06T20:35:45.034Z · LW(p) · GW(p)
Quite fair. If non-troll irritants are usually this unintrusive, there's a selection bias in my known examples.
Did he tell you what result he wanted? FWIW, I would have done what I do when communication norms break down: sit next to him, watch him, mirror him. (Learning his communication style, testing whether he's trying to teach by example, taming an animal.) Or maybe done what I do when I want to meet someone but am afraid: watch from afar, never dare approach.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2011-06-06T22:15:33.315Z · LW(p) · GW(p)
Did he tell you what result he wanted?
It's not really relevant here, but he was looking to push the group toward Advaita Vedanta.
FWIW, I would have done what I do when communication norms break down: sit next to him, watch him, mirror him. (Learning his communication style, testing whether he's trying to teach by example, taming an animal.)
This is basically what he was aiming for, but what he was trying to teach was too subtle to really come across in a situation with as many distractions as that one had (it was a rather unusual meditation group), and also the details of his ineffability raised enough warning flags that he had trouble getting people to take him seriously.
He has a blog here if you're interested, but I should note that its topic and mode of discussion are a potential memetic hazard, along the lines of nihilism but likely harder to recover from.
↑ comment by Peterdjones · 2011-06-06T16:26:55.986Z · LW(p) · GW(p)
People who view themselves as annoying others because they make them think
I wish. Making someone think is almost impossible.
Replies from: MixedNuts↑ comment by MixedNuts · 2011-06-06T18:52:48.395Z · LW(p) · GW(p)
If you set out to make people think, yeah. You just end up being a gadfly.
If you set out to produce high-quality thoughts because you need them for something else, you'll make people think. Of course they'll already be thinkers (but you're posting on LW).
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-06T18:59:57.102Z · LW(p) · GW(p)
High quality thoughts have to be able to answer objections. That's why there is a comment section underneath each post. That is why lecturers call for questions after they have finished. etc etc.
↑ comment by shokwave · 2011-06-07T05:43:13.516Z · LW(p) · GW(p)
No, which would be hard-fought for beliefs, not correct beliefs.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-07T09:49:18.701Z · LW(p) · GW(p)
There's no reason they can't be both. Of course what we ultimately want is truth. Mysticism says you can grasp the truth about everything in a flash. According to non-mystical epistemology, it's a question of tentatively building theories and revising or abandoning them if they go wrong. Justification and corroboration are our proxies for truth.
↑ comment by SilasBarta · 2011-06-06T14:53:53.529Z · LW(p) · GW(p)
It's beginning to look like all male gynecologists should be sacked.
There's an obvious joke just screaming to be made here.
↑ comment by wedrifid · 2011-05-31T17:09:00.943Z · LW(p) · GW(p)
That someone knows every physical fact about gold doesn't make that person own any gold.
I'm a little uncomfortable reading replies to myself that are based on a quote from someone else. Quote-of-a-quote can be indicated with "> >", that way it is clear that the text is from XiXi, not myself.
↑ comment by Peterdjones · 2011-05-31T16:18:54.636Z · LW(p) · GW(p)
The Mary’s Room thought experiment explicitly claims that Mary knows every physical fact about the given phenomenon but does at the same time implicitly suggest that some information is missing.
Not exactly. There is a sense in which seeing a red tomato conveys the same information as being told "there is a red tomato here". Nonetheless, there appears to be a difference between the two cases. It is not clear whether the difference consists of some missing information, or something else.
Mary was merely able to dissolve part of human nature by incorporating an algorithmic understanding of it. Mary wasn't able to evoke the dynamic state sequence from the human machine by computing the algorithm.
There is nothing to stop Mary computing any algorithms. Humans can run through algorithms in their heads or with pencil and paper, and Mary should have no problem since she is stipulated to be a super scientist. What she can't do is run the algorithm on the same hardware. She can only run it in the conscious/verbal part, not the automatic, unconscious, perceptual part.
Understanding something means to assimilate a model of what is to be understood. Understanding something completely means to be able to compute its algorithm, it means to incorporate not just a model of something, its static description, it means to become the algorithm entirely. To understand something completely means to remove its logical uncertainty by computing it.
Understanding something completely is only equivalent to understanding its algorithm if a) it can be decomposed into a software description and a hardware platform, and b) all the significant work is being done by the software, with the hardware being just a neutral platform. Those conditions are not fulfilled in all cases.
We have seen that Mary actually can run through any algorithm, and if your expectation is that she still doesn't understand what colours look like, that would be a case where the hardware is making a difference.
You might object that Mary won't learn anything from experiencing the algorithm, but if Mary does indeed know everything about a given phenomenon then by definition she also knows how the algorithm feels from the inside.
She doesn't know everything by definition; she knows everything physical by definition. That doesn't tell us whether she would be able to figure out how things seem, phenomenally, from the inside. If we had instances of being able to successfully predict qualia from physics or neurology or whatever, there would be no need for an intuition-based parable like Mary's room. As it is, how much Mary would be able to figure out about her qualia is not something we know in advance: instead, our reaction to the story tells us what we think the answer is.
Replies from: XiXiDu↑ comment by XiXiDu · 2011-05-31T18:34:03.664Z · LW(p) · GW(p)
It is not clear whether the difference consists of some missing information,or something else.
I am still at the very beginning of learning math, so maybe I am completely confused here. I do not see how there can be a difference without any difference of information content. How the information is interpreted has a bearing on the information content, because humans are not partly software and partly hardware. Brains are physical, chemical systems. Any difference in the processing of sensory information would have a bearing on the measure of the brain's Kolmogorov complexity. Therefore, even if the difference between Mary before and after her retina operation is not due to new sensory information, any difference in how previous information is processed is equivalent to a difference in the neurological makeup of her brain, and therefore in its algorithmic complexity.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-02T01:34:53.403Z · LW(p) · GW(p)
I am still at the very beginning of learning math, so maybe I am completely confused here. I do not see how there can be a difference without any difference of information content.
There can't. But that doesn't mean you can work back from information to medium.
It can hardly be disputed that qualia convey or encapsulate information. Yet the Mary story suggests something rather strange — that, although she has all the (physical) information about how colour works, she gains some extra information when she sees colours for the first time.
Information is something that is copied and transferred from place to place. That being the case, something is always left behind, namely the original basis (or format or medium or physical instantiation) of the information. Consider an epic poem in an oral tradition, that is then written down in manuscript, the text of which is then used in a printed book, which is then transferred to microfilm, which is then made into a CDROM. There is no way someone who has access to the CDROM could work backwards to the previous incarnations.
What is left behind is not just more information. Even if we had complete information about the original manuscript of our poem, we wouldn't have the book itself.
comment by thomblake · 2011-05-26T20:36:59.078Z · LW(p) · GW(p)
FWIW, I'm satisfied with Dennett's explanation. If Mary knows everything physical about color, then there's nothing for her to be surprised about when she sees red. If your intuitions tell you otherwise, then your intuitions are wrong.
This begs the question, to be sure, but think of it more like moving to a more appropriate field of battle.
Replies from: pjeby, orthonormal, dspeyer, Peterdjones↑ comment by pjeby · 2011-05-26T23:36:48.210Z · LW(p) · GW(p)
If Mary knows everything physical about color, then there's nothing for her to be surprised about when she sees red. If your intuitions tell you otherwise, then your intuitions are wrong.
Not really; it just means that our ability to imagine sensory experiences is underpowered. There are limits to what you can imagine and call up in conscious experience, even of things you have experienced. A person could imagine what it would be like to be betrayed by a friend, and yet still not be able to experience the same "qualia" as they would in the actual situation.
So, you can know precisely which neurons should fire to create a sensation of red (or anything else), and yet not be able to make them fire as a result.
Mere knowledge isn't sufficient to recreate any experience, but that's just a fact about the structure and limitations of human brains, not evidence of some special status for qualia. (It's certainly not an argument for non-materialism.)
Replies from: Nornagest, JamesAndrix, thomblake, Peterdjones↑ comment by Nornagest · 2011-05-26T23:48:52.454Z · LW(p) · GW(p)
That more or less corresponds to the way I break it down, and I'd take it a step further by saying that thinking of the problem this way reduces Mary's room to a definitional conflict. If we classify the experiential feeling of redness under "everything physical about color" -- which is quite viable given a reductionist interpretation of the problem -- then Mary by definition knows how it feels. This is probably impossible in practice if Mary has a normal human cognitive architecture, but that's okay, since we're working in the magical world of thought experiments where anything goes.
If we don't, on the other hand, then Mary can quite easily lack experiential knowledge of redness without fear of contradiction, by the process you've outlined. It's only an apparent paradox because of an ambiguity in our formulation of experiential knowledge.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-31T19:24:51.843Z · LW(p) · GW(p)
If we classify the experiential feeling of redness under "everything physical about color" -- which is quite viable given a reductionist interpretation of the problem -- then Mary by definition knows how it feels.
That's not how reduction works. You don't just declare a problem to consist only of (known) physics, and then declare it solved. You attempt to understand it in terms of known physics, and that attempt either succeeds or fails. Reductionism is not an a priori truth, or a method guaranteed to succeed. And no reduction of qualia has succeeded. Whether that means we need new explanations, new physics, non-reductionism or dualism is an open question.
Replies from: Nornagest↑ comment by Nornagest · 2011-05-31T20:31:55.577Z · LW(p) · GW(p)
I'm not sure you understand what I'm trying to say -- or, for that matter, what pjeby was trying to say. Notice how I never used the word "qualia"? That's because I'm trying to avoid becoming entangled in issues surrounding the reduction of qualia; instead, I'm examining the results of Mary's room given two mutually exclusive possible assumptions -- that such a reduction exists or that it doesn't -- and pointing out that the thought experiment generates results consistent with known physics in either case, provided we keep that assumption consistent within it. That doesn't reduce qualia as traditionally conceived to known physics, but it does demonstrate that Mary's room doesn't provide evidence either way.
↑ comment by JamesAndrix · 2011-05-27T03:16:11.714Z · LW(p) · GW(p)
Not being able to make the neurons fire doesn't mean you don't know how it would feel if they did.
I hate this whole scenario for this kind of "This knowledge is a given but wait no it is not." kind of thinking.
Whether or not all the physical knowledge is enough to know qualia is the question and as such it should not be answered in the conclusion of a hypothetical story, and then taken as evidence.
Replies from: pjeby, orthonormal, Peterdjones, wedrifid↑ comment by pjeby · 2011-05-27T18:33:40.838Z · LW(p) · GW(p)
Not being able to make the neurons fire doesn't mean you don't know how it would feel if they did.
Huh? That sounds confused to me. As I said, I can "know" how it would feel to be betrayed by a friend, without actually experiencing it. And that difference between "knowing" and "experiencing" is what we're talking about here.
Replies from: JamesAndrix↑ comment by JamesAndrix · 2011-05-28T02:32:25.465Z · LW(p) · GW(p)
From what you quoted I thought you were arguing that there was something for her to be surprised about.
Replies from: pjeby↑ comment by pjeby · 2011-05-28T03:28:26.258Z · LW(p) · GW(p)
I thought you were arguing that there was something for her to be surprised about.
Of course there's something for her to be surprised about. The non-materialists are merely wrong to think this means there's something mysterious or non-physical about that something.
Replies from: nshepperd, JamesAndrix↑ comment by nshepperd · 2011-05-28T05:40:30.786Z · LW(p) · GW(p)
It may be more accurate to say that when she sees a red object, that generates a feeling of surprise, because her visual cortex is doing something it has never done before. Not that there was ever any information missing -- but the surprise still happens as a fact about the brain.
Replies from: pjeby↑ comment by pjeby · 2011-05-28T16:16:21.556Z · LW(p) · GW(p)
It may be more accurate to say that when she sees a red object, that generates a feeling of surprise, because her visual cortex is doing something it has never done before. Not that there was ever any information missing -- but the surprise still happens as a fact about the brain.
We measure information in terms of surprise, so you're kind of contradicting yourself there.
The entire "thought experiment" hinges on getting you to accept a false premise: that "knowledge" is of a single kind. It then encourages you to follow this premise through to the seeming contradiction that Mary shouldn't be able to be surprised. It ignores the critical role of knowledge representation, and is thus a paradox of the form, "If the barber shaves everyone who doesn't shave themselves, does the barber shave him/herself?" The paradox comes from mixing two levels of knowledge, and pretending they're the same, in precisely the same way that Mary's Room does.
Replies from: nshepperd, wedrifid↑ comment by nshepperd · 2011-05-29T01:40:42.992Z · LW(p) · GW(p)
I mean surprise in the sense of the feeling, which doesn't have to be justified to be felt. Perhaps a better word is "enlightenment". Seeing red feels like enlightenment because the brain is put into a state it has never been in before, as a result of which Mary gains the ability (through memory) to put her brain into that state at will.
↑ comment by wedrifid · 2011-05-28T16:33:28.366Z · LW(p) · GW(p)
and is thus a paradox of the form, "If the barber shaves everyone who doesn't shave themselves, does the barber shave him/herself?"
That isn't a paradox. It is a simple logical question with the answer yes.
Replies from: pjeby↑ comment by pjeby · 2011-05-28T16:47:29.857Z · LW(p) · GW(p)
Hm, I guess that should probably be, "if the barber shaves only those who don't shave themselves."
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-05-28T17:44:09.692Z · LW(p) · GW(p)
"if and only if"-type language has to enter into.
If the barber shaves all and only those who don't save themselves...
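(For what it's worth, the "all and only" version really is contradictory, which a brute-force check over every possible shaving arrangement makes concrete. A toy sketch with a hypothetical two-person town; the names are just for illustration:)

```python
from itertools import product

# Brute-force check: in a two-person town, is there ANY assignment of
# "x shaves y" satisfying "the barber shaves all and only those who
# don't shave themselves"?
people = ["barber", "other"]
pairs = list(product(people, repeat=2))  # all (shaver, shavee) pairs

def consistent(shaves):
    # barber shaves x  <=>  x does not shave himself
    return all(shaves[("barber", x)] == (not shaves[(x, x)]) for x in people)

solutions = [dict(zip(pairs, bits))
             for bits in product([False, True], repeat=len(pairs))
             if consistent(dict(zip(pairs, bits)))]
print(len(solutions))  # 0: no consistent arrangement exists
```

The x = barber case forces "barber shaves himself iff he doesn't", so every assignment fails; whereas the weaker "shaves everyone who doesn't shave themselves" is satisfied by a barber who simply shaves everyone, as noted above.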
Replies from: Dorikka↑ comment by Dorikka · 2011-05-30T06:28:09.576Z · LW(p) · GW(p)
don't save themselves...
Cracked me up. I think you might mean "shave" here.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-05-30T16:23:42.300Z · LW(p) · GW(p)
Oh no! The barber of Seville is coming! I'll hold him off, you save yourself!
Replies from: Alicorn↑ comment by JamesAndrix · 2011-05-28T05:00:45.322Z · LW(p) · GW(p)
What is it that she's surprised about?
Replies from: pjeby↑ comment by pjeby · 2011-05-28T05:06:42.684Z · LW(p) · GW(p)
The difference between knowing what seeing red is supposed to feel like, and what it actually feels like.
Replies from: JamesAndrix↑ comment by JamesAndrix · 2011-05-28T06:12:28.574Z · LW(p) · GW(p)
I think the idea that "what it actually feels like" is knowledge beyond "every physical fact on various levels" is just asserting the conclusion.
I actually think it is the posited level of knowledge that is screwing with our intuitions and/or communication here. We've never traced our own algorithms, so the idea that someone could fully expect novel qualia is alien. I suspect we're also not smart enough to actually have that level of knowledge of color vision, but that is what the thought experiment gives us.
I think the chinese room has a similar problem: a human is not a reliable substrate for computation. We instinctively know that a human can choose to ignore the scribbles on paper, so the chinese speaking entity never happens.
Replies from: pjeby↑ comment by pjeby · 2011-05-28T16:32:09.395Z · LW(p) · GW(p)
I think the idea that "what it actually feels like" is knowledge beyond "every physical fact on various levels" is just asserting the conclusion.
Ah, but what conclusion?
I'm saying, it doesn't matter whether you assume they're the same or different. Either way, the whole "experiment" is another stupid definitional argument.
However, materialism does not require us to believe that looking at a menu can make you feel full. So, there's no reason not to accept the experiment's premise that Mary experiences something new by seeing red. That's not where the error comes from.
The error is in assuming that a brain ought to be able to translate knowledge of one kind into another, independent of its physical form. If you buy that implicit premise, then you seem to run into a contradiction.
However, since materialism doesn't require this premise, there's no reason to assume it. I don't, so I see no contradiction in the experiment.
I actually think it is the posited level of knowledge that is screwing with our intuitions and/or communication here. We've never traced our own algorithms, so the idea that someone could fully expect novel qualia is alien. I suspect we're also not smart enough to actually have that level of knowledge of color vision, but that is what the thought experiment gives us.
If you think that you can be "smart enough" then you are positing a different brain architecture than the ones human beings have.
But let's assume that Mary isn't human. She's a transhuman, or posthuman, or some sort of alien being.
In order for her to know what red actually feels like, she'd need to be able to create the experience -- i.e., have a neural architecture that lets her go, "ah, so it's that neuron that does 'red'... let me go ahead and trigger that."
At this point, we've reduced the "experiment" to an absurdity, because now Mary has experienced "red".
Neither with a plain human architecture, nor with a super-advanced alien one, do we get a place where there is some mysterious non-material thing left over.
I think the chinese room has a similar problem
Not exactly. It's an intuition pump, drawing on your intuitive sense that the only thing in the room that could "understand" Chinese is the human... and he clearly doesn't, so there must not be any understanding going on. If you replace the room with a computer, then the same intuition pump needn't apply.
For that matter, suppose you replace the chinese room with a brain filled with individual computing units... then the same "experiment" "proves" that brains can't possibly "understand" anything!
Replies from: JamesAndrix↑ comment by JamesAndrix · 2011-05-28T17:07:36.570Z · LW(p) · GW(p)
However, materialism does not require us to believe that looking at a menu can make you feel full.
Looking at a menu is a rather pale imitation of the level of knowledge given Mary.
In order for her to know what red actually feels like, she'd need to be able to create the experience -- i.e., have a neural architecture that lets her go, "ah, so it's that neuron that does 'red'... let me go ahead and trigger that."
That is the conclusion you're asserting. I contend that she can know, that there is nothing left for her to be surprised about when that neuron does fire. She does not say "oh wow", she says "ha, nailed it"
If she has enough memory to store a physical simulation of the relevant parts of her brain, and can trigger that simulation's red neurons, and can understand the chains of causality, then she already knows what red will look like when she does see it.
Now you might say that in that case Mary has already experienced red, just using a different part of her brain, but I think it's an automatic consequence of knowing all the physical facts.
Replies from: pjeby↑ comment by pjeby · 2011-05-28T19:04:51.544Z · LW(p) · GW(p)
Looking at a menu is a rather pale imitation of the level of knowledge given Mary.
No matter how much information is on the menu, it's not going to make you feel full. You could watch videos of the food being prepared for days, get a complete molecular map of what will happen in your taste buds and digestive system, and still die of hunger before you actually know what the food tastes like.
I contend that she can know, that there is nothing left for her to be surprised about when that neuron does fire.
In which case, we're using different definitions of what it means to know what something is like. In mine, knowing what something is "like" is not the same as actually experiencing it -- which means there is room to be surprised, no matter how much specificity there is.
This difference exists because in the human neural architecture, there is necessarily a difference (however slight) between remembering or imagining an experience and actually experiencing it. Otherwise, we could become frightened upon merely imagining that a bear was in the room with us. (IOW, at least some portion of our architecture has to be able to represent "this experience is imaginary".)
However, none of this matters in the slightest with regard to dissolving Mary's Room. I'm simply pointing out that it isn't necessary to assume perfect knowledge in order to dissolve the paradox. It's just as easily dissolved by assuming imperfect knowledge.
And all the evidence we have suggests that the knowledge is -- and possibly must -- be imperfect.
But materialism doesn't require that this knowledge be perfectable, since to a true materialist, knowledge itself is not separable from a representation, and that representation is allowed (and likely) to be imperfect in any evolved biological brain.
Replies from: Creaticity, JamesAndrix↑ comment by Creaticity · 2011-05-30T21:16:00.631Z · LW(p) · GW(p)
No matter how much information is on the menu, it's not going to make you feel full. You could watch videos of the food being prepared for days, get a complete molecular map of what will happen in your taste buds and digestive system, and still die of hunger before you actually know what the food tastes like.
Metaphysics is a restaurant where they give you a thirty thousand page menu, and no food. - Robert M. Pirsig
↑ comment by JamesAndrix · 2011-05-28T20:04:03.012Z · LW(p) · GW(p)
No matter how much information is on the menu, it's not going to make you feel full.
"Feeling full" and "seeing red" also jumbles up the question. It is not "would she see red"
In which case, we're using different definitions of what it means to know what something is like. In mine, knowing what something is "like" is not the same as actually experiencing it -- which means there is room to be surprised, no matter how much specificity there is.
But isn't your "knowing what something is like" based on your experience of NOT having a complete map of your sensory system? My whole point this that the given level of knowledge actually would lead to knowledge of and expectation of qualia.
This difference exists because in the human neural architecture, there is necessarily a difference (however slight) between remembering or imagining an experience and actually experiencing it.
Nor is the question "can she imagine red".
The question is: Does she get new information upon seeing red? (something to surprise her.) To phrase it slightly differently: if you showed her a green apple, would she be fooled?
This is a matter-of-fact question about a hypothetical agent looking at its own algorithms.
Replies from: pjeby↑ comment by pjeby · 2011-05-28T21:17:08.052Z · LW(p) · GW(p)
"Feeling full" and "seeing red" also jumbles up the question. It is not "would she see red"
If there's a difference in the experience, then there's information about the difference, and surprise is thus possible.
But isn't your "knowing what something is like" based on your experience of NOT having a complete map of your sensory system? My whole point this that the given level of knowledge actually would lead to knowledge of and expectation of qualia.
How, exactly? How will this knowledge be represented?
If "red" is truly a material subject -- something that exists only in the form of a certain set of neurons firing (or analagous physical processes) -- then any knowledge "about" this is necessarily separate from the thing itself. The word "red" is not equal to red, no matter how precisely you define that word.
(Note: my assumption here is that red is a property of brains, not reality. Human color perception is peculiar to humans, in that it allows us to see "colors" that don't correspond to specific light frequencies. There are other complications to color vision as well.)
Any knowledge of red that doesn't include the experience of redness itself is missing information, in the sense that the mental state of the experiencer is different.
That's because in any hypothetical state where I'm thinking "that's what red is", my mental state is not "red", but "that's what red is". Thus, there's a difference in my state, and thus, something to be surprised about.
Trying to say, "yeah, but you can take that into account" is just writing more statements about red on a piece of paper, or adding more dishes to the menu, because the mental state you're in still contains the label, "this is what I think it would be like", and lacks the portion of that state containing the actual experience of red.
Replies from: JamesAndrix↑ comment by JamesAndrix · 2011-05-29T00:41:25.611Z · LW(p) · GW(p)
If there's a difference in the experience, then there's information about the difference,
The information about the difference is included in Mary's education. That is what was given.
Thus, there's a difference in my state, and thus, something to be surprised about.
Are you surprised all the time? If the change in Mary's mental state is what Mary expected it to be, then there is no surprise.
The word "red" is not equal to red, no matter how precisely you define that word.
How do you know?
If "red" is truly a material subject -- something that exists only in the form of a certain set of neurons firing (or analagous physical processes)
Isn't a mind that knows every fact about a process itself an analogous physical process?
Replies from: thomblake, orthonormal↑ comment by thomblake · 2011-05-31T17:51:49.384Z · LW(p) · GW(p)
The information about the difference is included in Mary's education. That is what was given.
This is how this question comes to resemble POAT. Some people read it as a logic puzzle, and say that Mary's knowing what it's like to see red was given in the premise. Others read it as an engineering problem, and think about how human brains actually work.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-31T20:46:08.927Z · LW(p) · GW(p)
That treatment of the POAT is flawed. The question that matters is whether there is relative motion between the air and the plane. A horizontally tethered plane in a wind tunnel would rise. The treadmill is just a fancy tether.
Replies from: thomblake↑ comment by thomblake · 2011-05-31T20:55:07.484Z · LW(p) · GW(p)
What? That's the best treatment of the question I've seen yet, and seems to account for every possible angle. This makes no sense:
A horizontally tethered plane in a wind tunnel would rise.
The plane in the thought experiment is not in a wind tunnel.
The treadmill is just a fancy tether.
Treated realistically, the treadmill should not have any tethering ability, fancy or otherwise. Which interpretation of the problem were you going with?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-31T21:11:27.697Z · LW(p) · GW(p)
The plane in the thought experiment is not in a wind tunnel.
A plane can move air over its own airfoils. Or why not make it a truck on a treadmill?
↑ comment by orthonormal · 2011-05-31T15:41:01.982Z · LW(p) · GW(p)
By the way, you may not agree with my analysis of qualia (and if so, tell me), but I hope that the way this thread derailed is at least some indication of why I think the question needed dissolving after all. As with several other topics, the answer may be obvious to many, but people tend to disagree about which is the obvious answer (or worse, have a difficult time even figuring out whether their answer agrees or disagrees with someone else's).
Replies from: JamesAndrix↑ comment by JamesAndrix · 2011-05-31T17:44:17.649Z · LW(p) · GW(p)
I definitely welcome the series, though I have not finished it yet, and will need more time to digest it in any case.
↑ comment by orthonormal · 2011-05-27T19:54:03.208Z · LW(p) · GW(p)
It's at least evidence about the way our minds model other minds, and as such it might be helpful to understand where that intuition comes from.
↑ comment by Peterdjones · 2011-05-31T19:44:49.016Z · LW(p) · GW(p)
Not being able to make the neurons fire doesn't mean you don't know how it would feel if they did.
OK. Do you know that? Does Mary?
Replies from: JamesAndrix↑ comment by JamesAndrix · 2011-05-31T22:32:35.039Z · LW(p) · GW(p)
Well, through seeing red, yes ;-)
Through study, no. I think the knowledge postulated is beyond what we currently have, and must include how the algorithm feels from the inside. (edit: Mary does know through study.)
↑ comment by wedrifid · 2011-05-27T11:03:37.724Z · LW(p) · GW(p)
Whether or not all the physical knowledge is enough to know qualia is the question and as such it should not be answered in the conclusion of a hypothetical story, and then taken as evidence.
That does sound fallacious. Fortunately you don't need additional evidence.
An even better proposal: You should put the answer in the prologue and then not bother writing a story at all. Because we moved on from that kind of superstition years ago.
↑ comment by thomblake · 2011-05-27T14:52:34.908Z · LW(p) · GW(p)
So, you can know precisely which neurons should fire to create a sensation of red (or anything else), and yet not be able to make them fire as a result.
Maybe, but Mary nonetheless by hypothesis knows exactly what it would feel like if those neurons fire, since that's a physical fact about color. Like I said, that's begging the question in the direction of materialism, but assuming that fact is non-physical is begging the question in the direction of non-materialism.
Replies from: pjeby, MixedNuts↑ comment by pjeby · 2011-05-27T18:41:17.597Z · LW(p) · GW(p)
Like I said, that's begging the question in the direction of materialism
Not at all. The question is only confused because the paradox confuses "knowing what would happen if neurons fire" and "having those neurons actually fire" as being the same sort of knowledge. In the human cognitive architecture, they aren't the same thing, but that doesn't mean there's any mysterious non-physical "qualia" involved. It's just that we have different neuronal firings for knowing and experiencing.
If you taboo enough words and expand enough definitions, the qualia question is reduced to "if Mary has mental-state-representing-knowledge-of-red, but does not have mental-state-representing-experience-of-red, then what new thing does she learn upon experiencing red?"
And of course the bloody obvious answer is, the mental state representing the experience of red. The question is idiotic because it basically assumes two fundamentally different things are the same, and then tries to turn the difference between them into something mysterious. It makes no more sense than saying, "if cubes are square, then why is a sphere round? some extra mysterious thing is happening!"
So, it's not begging the question for materialism, because it doesn't matter how complete Mary's state of knowledge about neurons is. The question itself is a simple confusion of definitions, like the classic tree-forest-sound question.
Replies from: thomblake, Peterdjones↑ comment by thomblake · 2011-05-27T18:58:33.779Z · LW(p) · GW(p)
The question itself is a simple confusion of definitions, like the classic tree-forest-sound question.
I think we've at least touched upon why this question needs to be dissolved.
Reading the thought experiment as a logic problem, one should accept the conflation of the two putative mental states you've identified (calling them both 'knowing') and note that by hypothesis Mary 'knows' everything physical about color. Thus, the question is resolved entirely by determining whether the quale is non-physical. And so if you accept the premises of the thought experiment, it is not good for resolving disputes over materialism. Dennett, being a materialist, reads the question in this manner and simply agrees that Mary will not be surprised, since materialism is true.
Personally, I'm pretty okay with mental-state-representing-experience-of-red being part of "knowledge". Even if humans don't work that way, that's kind of irrelevant to the discussion (though it might explain why we have confused intuitions about this).
Replies from: pjeby↑ comment by pjeby · 2011-05-27T23:12:24.938Z · LW(p) · GW(p)
Dennett, being a materialist, reads the question in this manner and simply agrees that Mary will not be surprised, since materialism is true.
Then he is quite simply wrong. Knowledge can never be fully separated from its representation, just as one can never quite untangle a mind from the body it wears. ;-)
This conclusion is a requirement of actual materialism, since if you're truly materialist, you know that knowledge can't exist apart from a representation. Our applying the same label to two different representations is our own confusion, not one that exists in reality.
Reading the thought experiment as a logic problem, one should accept the conflation of the two putative mental states you've identified (calling them both 'knowing')
If you start from a nonsensical premise, you can prove just about anything. In this case, the premise is begging a question: you can only conflate the relevant types of knowledge under discussion if you already assume that knowledge is independent of physical form... an assumption that any sufficiently advanced materialism should hold false.
Replies from: thomblake↑ comment by thomblake · 2011-05-29T19:53:11.339Z · LW(p) · GW(p)
This conclusion is a requirement of actual materialism, since if you're truly materialist, you know that knowledge can't exist apart from a representation. Our applying the same label to two different representations is our own confusion, not one that exists in reality.
It really doesn't have to be a confusion though. We apply the label 'fruit' to both apples and oranges - that doesn't mean we're confused just because apples are different from oranges.
Then he is quite simply wrong. Knowledge can never be fully separated from its representation, just as one can never quite untangle a mind from the body it wears. ;-)
I don't think either I or Dennett made that claim. You don't need it for the premise of the thought experiment. You just need to understand that any mental state is going to be represented using some configuration of brain-stuff...
According to the thought experiment, Mary "knows" everything physical about the color red, and that will include any relevant sense of the word "knows". And so if the only way to "know" what experiencing the color red feels like is to have the neurons fire that actually fire when seeing red, then she's had those neurons fire. It could be by surgery, or hallucination, or divine intervention - it doesn't matter, it was given as a premise in the thought experiment that she knows what that's like.
One way to make such a Mary would be to determine what the configuration of neurons in Mary's brain would be after experiencing red, then surgically alter her brain to have that configuration. The premise of the thought experiment is that she has this information, and so if that's the only way she could have gotten it, then that's what happened.
Replies from: AndyWood↑ comment by AndyWood · 2011-05-29T21:41:27.037Z · LW(p) · GW(p)
And so if the only way to "know" what experiencing the color red feels like is to have the neurons fire that actually fire when seeing red, then she's had those neurons fire.
This is going way beyond what I'd consider to be a reasonable reading of the intent of the thought experiment. If you're allowed to expand the meaning of the non-specific phrase "knows everything physical" to include an exact analogue of subjective experience, then the original meaning of the thought experiment goes right out the window.
My reading of this entire exchange has thomblake and JamesAndrix repeatedly begging the question in every comment, taking great license with the intent of the thought experiment, while pjeby keeps trying to ground the discussion in reality by pinning down what brain states are being compared. So the exchange as a whole is mildly illuminating, but only because the former are acting as foils for the latter.
You can't keep arguing this on the verbal/definitional level. The meat is in the bit about brain states.
Call the set of brain states that enable Mary to recall the subjective experience of red, Set R. If seeing red for the first time imparts an ability to recall redness that was not there before, then as far as I'm concerned that's what's meant by "surprise".
We know that seeing something red with her eyes puts her brain into a state that is in Set R. The question is whether there is a body of knowledge, this irritatingly ill-defined concept of "all 'physical' knowledge about red", that places her brain into a state in Set R. It is a useless mental exercise to divorce this from how human brains and eyes actually work. Either a brain can be put into Set R without experiencing red, or it can't. It seems very unlikely that descriptive knowledge could accomplish this. If you're just going to toss direct neuronal manipulation in there with descriptive knowledge, then the whole thought experiment becomes a farce.
↑ comment by Peterdjones · 2011-05-31T20:12:43.247Z · LW(p) · GW(p)
The question is idiotic because it basically assumes two fundamentally different things are the same, and then tries to turn the difference between them into something mysterious.
On the contrary, it is uncontentious that knowledge-by-descriptions and knowledge-by-acquaintance are both knowledge.
↑ comment by MixedNuts · 2011-05-27T15:02:39.251Z · LW(p) · GW(p)
Then she knows things humans in their current form can't learn except by seeing red. Either she found a way to reprogram herself, or she has seen red, or the problem is ill-posed because it equivocates between what humans can learn at all and what they can learn from reading words in textbooks.
↑ comment by Peterdjones · 2011-05-31T19:05:47.989Z · LW(p) · GW(p)
Not really; it just means that our ability to imagine sensory experiences is underpowered.
Why does Mary need to imagine red in order to know what it looks like? If the physical understanding she already has accounts for it, then she should be able to figure it out from that, as per the Dennett response. Like several people in this thread, you are tacitly assuming that there is something special about qualia, such that they need to be imagined or instantiated in order to be known -- something that is unique about them, even though they are ultimately physical like everything else.
↑ comment by orthonormal · 2011-05-26T21:28:08.652Z · LW(p) · GW(p)
Oops, I realized during editing that surprise is not so much what we should look for as learning, but I forgot to remove the instances of "surprise" from this post. Done.
In a day or two, I can better explain why I reject Dennett's explanation, but for now it's enough to note that it doesn't dissolve the question at all.
↑ comment by dspeyer · 2011-05-27T00:36:31.328Z · LW(p) · GW(p)
This does not match my experience doing things after studying them thoroughly. Unless your definition of "everything physical about color" includes neurology far beyond the state of the art.
Replies from: thomblake, Peterdjones↑ comment by thomblake · 2011-05-27T14:48:50.003Z · LW(p) · GW(p)
Unless your definition of "everything physical about color" includes neurology far beyond the state of the art.
Indeed it does. I believe a strong reading also involves knowing the position of every colored object in the universe, and the favorite food of every person with red hair.
Replies from: Will_Sawin, dspeyer↑ comment by Will_Sawin · 2011-05-28T02:21:17.438Z · LW(p) · GW(p)
Wouldn't this include complete physical knowledge of the universe?
It is interesting to me that this is a contradiction in a finite universe. It intuitively feels like one might be able to analyze this self-reference and its source and derive a convincing argument against Mary's room, but I cannot find one right now.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-05-28T02:36:31.137Z · LW(p) · GW(p)
Consider one's brain state on the granularity that it stores information. It contains N bits of information.
What would those N bits of information be if you saw something red?
It is impossible to know this information, because it would take all your available N bits.
Yet if you don't know all this information, clearly you learn something new as soon as you see something red, at least, as long as your attention would then be drawn to the parts you didn't previously know about, which doesn't seem an unreasonable assumption at all.
Now suppose that you have that knowledge in some compressed form, of M<N bits. But then seeing red and entering that state would be like completing a calculation, which frequently produces the "Aha! I have learned something new!" response.
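The M < N point above can be made concrete with a toy sketch (using zlib as a stand-in compressor and a byte string as a stand-in for a brain state; the sizes are illustrative, not claims about actual brains): holding a compressed M-bit description is not the same as holding the N-bit state, and recovering the state is itself a computation.

```python
import zlib

# Stand-in for the N-bit "brain state after seeing red":
# highly structured, hence highly compressible.
full_state = b"red" * 10_000              # N = 240,000 bits
compressed = zlib.compress(full_state)    # M bits, with M < N

print(f"N bits: {len(full_state) * 8}, M bits: {len(compressed) * 8}")

# Holding the M-bit description is not holding the N-bit state;
# producing the state requires running a computation, analogous to
# the "Aha!" of completing a calculation.
recovered = zlib.decompress(compressed)
assert recovered == full_state
```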
↑ comment by Peterdjones · 2011-05-27T12:53:43.566Z · LW(p) · GW(p)
The Dennettian answer isn't based on anything that actually happens. It's based on having a really, really strong intuition of physicalism.
Replies from: orthonormal↑ comment by orthonormal · 2011-05-27T20:20:25.613Z · LW(p) · GW(p)
Exactly. It's not a refutation so much as a bullet-biting, which is unenlightening even when correct.
↑ comment by Peterdjones · 2011-05-31T18:35:34.144Z · LW(p) · GW(p)
FWIW, I'm satisfied with Dennett's explanation. If Mary knows everything physical about color, then there's nothing for her to be surprised about when she sees red. If your intuitions tell you otherwise, then your intuitions are wrong.
Since we don't actually have physical explanations of qualia, that is itself an intuition. It's one intuition against another, not one intuition against some fact that disproves it.
Replies from: thomblake, Will_Sawin↑ comment by thomblake · 2011-05-31T20:23:28.030Z · LW(p) · GW(p)
Since we don't actually have physical explanations of qualia, that is itself an intuition.
No, there are (or, at least can be in principle) various good reasons for thinking physicalism is true - it need not rest on mere intuition. And once you've assumed that physicalism is true, the above is a consistent, correct conclusion for the thought experiment.
If you think physicalism is false, I would not think the above explanation would feel very satisfying to you - but then, I was talking about what feels satisfying to me.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-05-31T20:30:21.342Z · LW(p) · GW(p)
No, there are (or, at least can be in principle) various good reasons for thinking physicalism is true
We still don't have a physical explanation of qualia. So it is still a kind of guess that physicalism, which has been successful in other areas, can be extended to all areas. It's an intuition based on evidence, but the intended intuition of the Mary story is based on evidence as well, since we have all had novel experiences which went beyond their descriptions.
If you think physicalism is false, I would not think the above explanation would feel very satisfying to you
Can't I just find it unsatisfying anyway?
↑ comment by Will_Sawin · 2011-05-31T19:38:53.346Z · LW(p) · GW(p)
wouldn't everything physical include, say, your complete brain state now and what your complete brain state would be if you saw something red?
and wouldn't it be impossible for you to hold that info in your brain?
Replies from: thomblake↑ comment by thomblake · 2011-05-31T20:18:20.717Z · LW(p) · GW(p)
and wouldn't it be impossible for you to hold that info in your brain?
Not if you could losslessly compress it enough and also work directly with such encodings.
Replies from: Will_Sawin↑ comment by Will_Sawin · 2011-05-31T20:41:31.509Z · LW(p) · GW(p)
It is difficult to losslessly compress something that has already been losslessly compressed, because its entropy/bit will be very high.
De-compressing something you already had losslessly encoded feels like learning or discovery, depending on whether it is externally induced.
Like, if you know the statement of a difficult math problem, then you know enough information to pin down the answer, but you do not know the answer. If I tell you the answer, you will feel like you just learned something. And you did, in some senses of the word, learn something.
Replies from: MixedNuts
comment by Zetetic · 2011-05-27T05:02:37.516Z · LW(p) · GW(p)
I was sort of toying with an idea a while ago; it is somewhat old though I've tweaked it a bit for presentation here. I don't like it so much anymore, but I still think it has some potential so I'll go ahead and share it anyway:
Suppose we alter the Mary's room scenario so that Mary isn't human, but rather comes from a race of philosophical 'Empaths' that have the ability to perfectly convey subjective experience due to some ability to transmit and read off each other's neural patterns as well as hack their own CNS.
In this altered scenario, Mary can treat her CNS like a program and convert scientific data in a way that exploits her ability to hack her own CNS, and so experience new sensations by running the computational representation of the physical changes on her CNS.
Suppose we have a world inhabited only by Empaths. Would they think up Mary's room? Wouldn't every single one of them be able to say that no new information was conveyed when Mary was exposed to light frequencies that stimulated her CNS and caused her to perceive color once she had already run the information on her CNS?
comment by Psychohistorian · 2011-05-28T04:55:25.529Z · LW(p) · GW(p)
This is somewhat circular. There isn't anyone who knows everything about the visual system. Thus, we're hypothesizing that knowing everything about the visual system is insufficient to understand what red looks like... in order to prove that knowing everything about the visual system is insufficient to understand what red looks like.
Even given this, the obvious solution seems to be that "What red looks like" is a fact about Mary's brain. She needn't have seen red light to see red; properly stimulating some neurons would result in the same effect. That the experience is itself a data point that cannot be explained through other means seems obvious. One could not experience a taste by reading about it.
Maybe the best analogy is to data translation. You can have a DVD. You could memorize (let's pretend) every zero and every one in that DVD. But if you don't have a DVD player, you can never watch it. The human brain does not appear to be able to translate zeroes and ones into a visual experience. Similarly, people can't know what sex feels like for the opposite sex; you simply don't have the equipment.
DVD players do not require magic to work; why should the brain?
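The DVD analogy can be sketched in code (a hypothetical 2x2 image; the byte layout is assumed for illustration, not taken from any real disc format): the same bytes are inert as a memorized sequence of numbers, and only a decoder, the "player", turns them into pixels.

```python
# Hypothetical raw data for a 2x2 image, three bytes (R, G, B) per pixel.
raw = bytes([255, 0, 0,   0, 255, 0,
             0, 0, 255,   255, 255, 255])

# "Memorizing the zeroes and ones": the data as an opaque number sequence.
as_numbers = list(raw)

# "Having a DVD player": a decoder that interprets byte triples as pixels.
def decode(data: bytes, width: int, height: int):
    pixels = [tuple(data[i:i + 3]) for i in range(0, len(data), 3)]
    return [pixels[row * width:(row + 1) * width] for row in range(height)]

image = decode(raw, width=2, height=2)
assert image[0][0] == (255, 0, 0)  # only the decoder makes this pixel "red"
```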
Replies from: Peterdjones, orthonormal↑ comment by Peterdjones · 2011-05-31T20:25:37.757Z · LW(p) · GW(p)
Maybe the best analogy is to data translation. You can have a DVD. You could memorize (let's pretend) every zero and every one in that DVD. But if you don't have a DVD player, you can never watch it
A better analogy would be: you have a DVD and a complete set of schematics for a DVD player, and the ability to understand both, but still can't figure out what the DVD would look like when viewed.
↑ comment by orthonormal · 2011-05-31T15:58:42.663Z · LW(p) · GW(p)
I think your analogy betrays you: an AI wouldn't need to have an actual DVD player to turn the ones and zeroes into an experience of the film, it would just need to know the right algorithm.
Let's be clear here- you're advocating an epistemically non-reductionist position, which should seem at least a little weird: if brains are made of atoms, why should the hanging questions of what an experience feels like be unanswerable from knowledge of the brain structure?
Replies from: Psychohistorian↑ comment by Psychohistorian · 2011-06-13T05:10:22.696Z · LW(p) · GW(p)
Let's be clear here - I'm advocating no such thing. My position is firmly reductionist. Also, we're talking about Mary, not an AI. That counterexample is completely immaterial and is basically shifting the goalposts, at least as I understand it.
Any experience is, basically, a firing of neurons. It's not something that "emerges" from the firing of neurons; it is the firing of neurons, followed by the firing of other neurons that record the experience in one's memory. What it feels like to be a bat is a fact about a bat brain. You neither have a bat brain nor have the capacity to simulate one; therefore, you cannot know what it feels like to be a bat. Mary has never had her red-seeing neurons fired; therefore, she does not know what red looks like.
If Mary were an advanced AI, she could reason as follows: "I understand the physics of red light. And I fully understand my visual apparatus. And I know that red would stimulate my visual sensors by activating neurons 2,839,834,843 and 12,345. But I'm an AI, so I can just fire those neurons on my own. Aha! That's what red looks like!" Mary obviously has no such capacity. Even if she knows everything about the visual system and the physics of red light, even if she knows precisely which neurons control seeing red, she cannot fire them manually. Neither can she modify her memory neurons to reflect an experience she has not had. Knowing what red looks like is a fact about Mary's brain, and she cannot make her brain work that way without actually seeing red or having an electrode stimulate specific neurons. She's only human.
Of course, she could rig some apparatus to her brain that would fire them for her. If we give her that option, it follows that knowing enough about red would in fact allow her to understand what red looks like without ever seeing it.
Replies from: novalis, Peterdjones↑ comment by novalis · 2011-06-13T19:31:32.494Z · LW(p) · GW(p)
Doesn't it follow that Mary, since she knows everything about color, must have both electrodes and the desire and ability to perform brain surgery on herself? There is a truly fabulous story, rkunyngvba ol grq puvnat, in which the protagonist does this, but since it only happens halfway through, I don't want to spoil it.
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-13T19:49:37.230Z · LW(p) · GW(p)
Once again, Mary knows everything knowable by description only. Whether that amounts to everything simpliciter is the puzzle.
↑ comment by Peterdjones · 2011-06-13T19:16:30.088Z · LW(p) · GW(p)
But you still haven't explained why she would need to fire her own neurons. She doesn't need to photosynthesise to understand photosynthesis.
comment by tyrsius · 2011-05-27T15:02:46.887Z · LW(p) · GW(p)
This thought experiment always seemed silly to me. As if somehow the experience of the visual cortex reacting to "color" input was not a piece of knowledge.
If someone has a poor ability to mentally visualize 3-dimensional objects, and is shown a set of formulas that will draw a specific and very odd object (learning everything but what the object actually looks like), and is only ever allowed to graph on paper, then of course when we finally hand them a physical model of the object we have given them new information.
I don't see this as any different. We have imposed some limitation on a subject, given them every piece of knowledge that the limitation allows for, and then called this "all knowledge" (which it clearly is not). After you remove the limitation and the final pieces of knowledge are gained, you have not demonstrated that non-physical knowledge exists. The fallacy was calling the knowledge given to the restricted subject complete, when it was in fact not.
The knowledge is only new because of a PHYSICAL limitation that previously existed. Once the PHYSICAL limitation was removed, a PHYSICAL interaction resulted in new PHYSICAL knowledge.
All you have demonstrated is that it is possible to restrict knowledge.
Replies from: orthonormal, Peterdjones↑ comment by orthonormal · 2011-05-27T20:34:20.626Z · LW(p) · GW(p)
I think you're mistaking the conclusion that the non-reductionist philosophers draw from the thought experiment. They're not generally substance dualists like Descartes. Instead, they claim that reductionism is false because it is epistemically incomplete, even within a purely physical world: a human (or an AI, or anything other than a bat) cannot ever understand the experience of being a bat, and therefore not all knowledge reduces to mathematical patterns of physical objects.
↑ comment by Peterdjones · 2011-05-31T12:26:34.074Z · LW(p) · GW(p)
The knowledge Mary has is all physical knowledge, where physical knowledge means the kind of thing that can be found in books. You deem the further, experiential knowledge she gains to be physical because sensory processing is physical, but that is a different sense of physical. If you think she learns something on exiting the room, and it seems you do, then you are conceding part of the claim, the part about the incompleteness of physical explanation, even if you insist that the epistemic problem doesn't lead to a dualistic metaphysics.
Replies from: tyrsius↑ comment by tyrsius · 2011-05-31T14:51:45.134Z · LW(p) · GW(p)
Only insofar as the definition of physical is limited to things you can find in books. I wholly reject such a definition.
@ Orthonormal. The conclusion seems to me to come very naturally from the thought experiment, if you allow for its assumptions. But that is what I think is silly, its assumptions. The thought experiment tries to define "all knowledge" in two different and contradictory ways.
If Mary has all knowledge, then there is nothing left for her to learn about red. If upon seeing red she learns something new, then she did not have all knowledge prior to seeing red.
It is their definition of knowledge, which is inconsistent, that leads to the entire thought experiment being silly.
Replies from: thomblake, orthonormal↑ comment by orthonormal · 2011-05-31T15:52:11.237Z · LW(p) · GW(p)
The practical point is that, if not all knowledge reduces to mathematical patterns of physical objects (the sort of thing that we can organize and learn from textbooks), then the actual project of reductionists becomes futile at a really early stage: we'd have to give up on fully understanding even a worm brain, since we could never have the knowledge of its worm-qualia.
I want to respond to your claim more thoroughly, but my response essentially consists of the second and third posts here. If you want to pick up this conversation on those threads, I'm all for it.
Also, welcome to Less Wrong!
Replies from: tyrsius
comment by Emile · 2011-05-27T09:28:29.128Z · LW(p) · GW(p)
Let's take an analogy:
You're writing a video game that will run on hardware that doesn't do floating-point arithmetic (like the Nintendo DS), so you have to write a library to emulate floating-point arithmetic. You then port the game to a very similar system whose hardware does handle floats, so you replace your library with simple operations.
Mary's situation is similar: Mary is perfectly capable of anticipating any experience related to color, and "seeing red" doesn't change that, but allows her to use the functionality built in her hardware much more efficiently, replacing costly and slow verbal reasoning. So Mary's functionality hasn't changed - she's been refactored, and is reacting to that.
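The emulation analogy can be sketched with a minimal fixed-point "soft float" library (the Q16.16 format and the operations chosen are illustrative): the emulated and native paths produce the same answers through different machinery, which is the sense in which Mary's functionality is unchanged by the swap.

```python
SCALE = 1 << 16  # Q16.16 fixed-point: 16 integer bits, 16 fractional bits

def to_fixed(x: float) -> int:
    return round(x * SCALE)

def fixed_mul(a: int, b: int) -> int:
    # Software emulation of multiplication, as a float-less library would do it.
    return (a * b) >> 16

def from_fixed(a: int) -> float:
    return a / SCALE

# The "ported" version uses hardware floats; the answers agree.
emulated = from_fixed(fixed_mul(to_fixed(1.5), to_fixed(2.25)))
native = 1.5 * 2.25
assert abs(emulated - native) < 1e-4  # same behaviour, different substrate
```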
Another approach: the color-perceiving hardware in Mary's brain reacts to encountering new experiences by releasing chemicals / stimulating parts of the brain that correspond to the subjective experience of "learning", hence her "Oh" of surprise, etc. The fact that she abstractly knows enough about that hardware to emulate it doesn't mean it won't react that way - it has never seen red before. Or in other words, part of Mary's brain is learning, so Mary experiences learning, even though if you consider her as a whole she hasn't acquired any new functionality.
So, your answer to "has Mary acquired new knowledge" depends on what you mean exactly, as it does for the tree falling in the forest:
Has Mary's map of the world changed? Has her anticipation of future events changed? No.
Has Mary subjectively experienced learning? Yes.
Did Mary become better at thinking about color? Yes.
↑ comment by Peterdjones · 2011-05-31T17:13:15.228Z · LW(p) · GW(p)
Mary's situation is similar: Mary is perfectly capable of anticipating any experience related to color, and "seeing red" doesn't change
"Anticipating colour" is vague. She may be able to anticipate "I will see red" without anticipating what it looks like.
that, but allows her to use the functionality built in her hardware much more efficiently, replacing costly and slow verbal reasoning. So Mary's functionality hasn't changed - she's been refactored, and is reacting to that.
Since she knows everything about neurology, why can't she figure out how the refactoring will change her phenomenology? Or can she?
comment by jsalvatier · 2011-05-26T19:36:24.857Z · LW(p) · GW(p)
Looks really interesting.
I especially like the notion that the interplay between the conscious and unconscious mind is important and the observation that some sensory inputs to our body feels ineffable and some do not.
comment by [deleted] · 2012-04-07T22:12:44.288Z · LW(p) · GW(p)
From just reading the definition of the Mary's Room problem my knee-jerk reaction was "this seems plausible." It is a textbook example of how an algorithm feels from the inside.
You might know everything there is to know intellectually about colours, but that does not induce sensations in your visual cortex. Humans don't work that way.
A bayesian AI might go "that is about what I expected" when you switch it from a black and white camera to a colour one, solely on the basis of the production parameters of the camera, a physics paper on optics and perhaps a single colour photo.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2012-04-07T22:32:21.428Z · LW(p) · GW(p)
You might know everything there is to know intellectually about colours, but that does not induce sensations in your visual cortex. Humans don't work that way.
(nods) This is likely true, but a lot of work is being done by the word "intellectually". If the conclusion of the thought experiment is that there is information to be obtained by experience that is not captured by whatever we're calling "intellectual" knowledge, or more generally that there's information in-principle-extractable from an event that isn't ordinarily extracted by particular cognitive systems, that's not really all that remarkable.
Using this thought experiment the way it is traditionally used requires a bit of sleight of hand, wherein we are encouraged to apply our intuitions about the kinds of knowledge our brains extract to all knowledge.
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-22T19:38:20.471Z · LW(p) · GW(p)
that there is information to be obtained by experience that is not captured by whatever we're calling "intellectual" knowledge, or more generally that there's information in-principle-extractable from an event that isn't ordinarily extracted by particular cognitive systems, that's not really all that remarkable.
it is not remarkable from many perspectives, but it is contradictory to some forms of physicalism, which argue that everything is understandable by the methods of physical science, ie from the outside.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-01-22T21:04:46.827Z · LW(p) · GW(p)
Can you clarify the contradiction? It seems to me there could easily be information in-principle-extractable from an event, that isn't ordinarily extracted by particular cognitive systems, that is understandable by the methods of physical science.
I mean, I certainly agree that if we posit the existence of information that is not in-principle understandable by the methods of physical science but is ordinarily extracted by particular cognitive systems, then that does contradict the forms of physicalism you're talking about here.
My point, though, is that the scenario described in Mary's Room does not give us any reason to believe that such information exists.
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-23T11:41:18.836Z · LW(p) · GW(p)
Several people on this site have responded to M's R with the claim that (in effect) such information does exist. The claim is usually expressed on the lines of an individual needing to be in a brain state, to personally instantiate it, in order to understand it. For instance: "You might know everything there is to know intellectually about colours, but that does not induce sensations in your visual cortex".
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-01-23T15:16:36.985Z · LW(p) · GW(p)
So, to repeat myself, I agree that if such information exists, then the forms of physicalism you're talking about here are false.
And I agree that several people have responded to Mary's Room with the claim that such information does exist, and I don't deny that they've done so, nor have I even denied that they've done so.
What I do deny is that Mary's Room demonstrates that such information exists, or that they are justified in believing anything different after being exposed to MR than before.
MR just invites me to generalize from my limited experience of knowing some things about color to a hypothesized state of intellectually knowing everything there is to know about color, and anticipates that I will ignorantly imagine keeping other aspects of my limited experience fixed. If I instead ignorantly imagine other aspects of my limited experience varying, it completely fails to demonstrate what it's claimed to demonstrate.
For example, if I start out believing that sensations of color are in-principle unavailable to a healthy human brain solely by virtue of being in that hypothesized state, then MR might feel like a compelling demonstration of that claim. "Oh look, there's Mary," I might say, "and I know she's never had such sensations, so clearly seeing a yellow banana is new information to her, therefore...etc. etc. etc.".
Conversely, if I don't believe that to start with, it might not. "Oh look," I might say, "there's Mary, who knows everything there is to know about color, and has probably therefore had vivid dreams of seeing color as her brain has made various connections with that information, so clearly seeing a yellow banana is not new information to her, therefore etc. etc."
There might be good reasons to reject that second intuition and embrace the first, or vice-versa, but thinking about Mary's Room is not a good reason to do either. It's just a question-begging invitation to visualize my preconceptions and treat them as confirming data.
All of that said, I certainly agree that there exist experiences which depend on certain classes of brain states to instantiate them, and that in the absence of those brain states those experiences are not possible.
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-23T15:38:20.351Z · LW(p) · GW(p)
What I do deny is that Mary's Room demonstrates that such information exists, or that they are justified in believing anything different after being exposed to MR than before.
No, it doesn't demonstrate it like a mathematical proof. It isn't intended to work that way. It is supposed to be an intuition pump.
Conversely, if I don't believe that to start with, it might not. "Oh look," I might say, "there's Mary, who knows everything there is to know about color, and has probably therefore had vivid dreams of seeing color as her brain has made various connections with that information, so clearly seeing a yellow banana is not new information to her, therefore etc. etc."
To have dreams of colour is to be in the brain state. So you are not saying Mary would have the information of what yellow looks like without ever having been in a seeing-yellow state. These kinds of loophole-finding objections are rather pointless because you can always add riders to the thought experiment to block them: Mary's skin has been bleached, she has been given drugs to prevent dreaming, etc.
There might be good reasons to reject that second intuition and embrace the first, or vice-versa,
If we have reasons for an intuition, it isn't an intuition.
All of that said, I certainly agree that there exist experiences which depend on certain classes of brain states to instantiate them, and that in the absence of those brain states those experiences are not possible.
But that isn't relevant. What is relevant is whether personally instantiating a state is necessary to understand something.
Replies from: TheOtherDave↑ comment by TheOtherDave · 2013-01-23T16:56:54.972Z · LW(p) · GW(p)
If we're agreed about the nature of Mary's Room, great.
I decline to get into a discussion of how thought experiments are supposed to work, but I certainly agree with you that they aren't supposed to be mathematical proofs.
I also decline to get into yet another discussion about the nature of conscious experience.
Replies from: Peterdjones↑ comment by Peterdjones · 2013-01-23T17:19:45.549Z · LW(p) · GW(p)
If we're agreed about the nature of Mary's Room, great.
Agreed on what about Mary's room? I don't agree that there are "right" and "wrong" intuitions about it, and I am not a fan of "M's R is bad because all thought experiments are bad".
Replies from: TheOtherDave, DaFranker↑ comment by TheOtherDave · 2013-01-23T17:41:08.974Z · LW(p) · GW(p)
Agreed that Mary's Room doesn't demonstrate the existence of information that is not in-principle understandable by the methods of physical science but is ordinarily extracted by particular cognitive systems; and that it's solely intended as an intuition pump, as you say.
I certainly don't believe that all thought experiments are bad, but again, I decline to get into a discussion of how thought experiments are supposed to work.
Replies from: shminux↑ comment by Shmi (shminux) · 2013-01-23T18:04:49.114Z · LW(p) · GW(p)
I'm surprised by your patient discussion with Peterdjones. My experience was that he is impossible to get through to, so I gave up a long time ago. Have you had any success?
Replies from: TheOtherDave, Peterdjones↑ comment by TheOtherDave · 2013-01-23T18:41:39.068Z · LW(p) · GW(p)
I'm not quite sure what success looks like.
Mostly, I've been trying to clarify my initial point about Mary's Room, which we may have made minor progress on.
↑ comment by Peterdjones · 2013-01-23T18:11:42.204Z · LW(p) · GW(p)
You mean I remained unconvinced by your claim that reality isn't real?
↑ comment by DaFranker · 2013-01-23T17:36:56.300Z · LW(p) · GW(p)
Agreed on what about Mary's room?
Its "nature".
Mary's Room is:
- A thought experiment.
- Supposed to be an intuition pump.
- Not a formal proof of anything.
Possible conditional extension:
- Of usefulness dependent upon the relevance of its premises, the things it seeks to make you think about, and the reliability of human intuition.
comment by SilasBarta · 2011-05-26T18:36:24.184Z · LW(p) · GW(p)
I like it so far! One question, though:
Our goal, then, is to build a model of a mind that would express analogous reactions in Mary's Room for a genuine reason
Are you referring the question of how to model a mind that would have analogous reactions to:
- being told about the Mary's Room argument, or to
- being in the position of a real-world Mary?
I assume the first, but I couldn't tell. In any case, both are worthwhile targets for reduction.
Edit: I had previously sketched out a reduction of qualia, but it seems more focused on the first issue -- why we have the intuition that our sensory impressions are so ineffable.
Replies from: orthonormal↑ comment by orthonormal · 2011-05-26T18:38:57.005Z · LW(p) · GW(p)
I actually mean the second question; let me see if I can make that less ambiguous in the post. The first is important too, and will come up by the end.
And thanks!
UPDATE: Edited the paragraph slightly.
comment by PaddyC · 2024-12-14T00:51:30.028Z · LW(p) · GW(p)
Does the evidence suggesting that taste-testers find Pepsi Max quite sweet when compared to regular cola disprove qualia?
"So bad, tastes like water with a bit of sugar in it," one taste-tester said of Pepsi, while another said of Pepsi Max: "Surprisingly sweet and not in a good way."
https://amp.nine.com.au/article/6e138365-a76a-46bc-816b-d6964e7078e5
comment by PaddyC · 2024-07-08T08:41:44.983Z · LW(p) · GW(p)
When Mary exclaims "Oh" as she sees red for the first time, how would an observer know what Mary is expressing? If someone who spoke a different language were to exclaim the same thing in their language, would their facial movements be the same? Obviously there would be linguistic differences that MAY prevent comprehension, but would an observer be able to deduce what an Italian-speaking Mary was trying to express based on her body language? When mentioning the subconscious and the collective subconscious, does a level of psychic powers come into play that helps us understand things that are foreign to us?
comment by Walker Vargas · 2022-08-20T17:17:37.670Z · LW(p) · GW(p)
I just had a thought. If Mary was presented with a red, a blue, and a green tile on a white background could she identify which was which without additional visual context clues like comparing them to her nails? If not, I would expect a p-zombie to have the same issue implying that that failure isn't to do with consciousness.
comment by handoflixue · 2011-07-24T13:24:56.338Z · LW(p) · GW(p)
If Mary had never counted more than 100 objects before, and today she counted 113 sheep in a field, we wouldn't expect her to exclaim "Oh, so that's what 113 looks like!"
I'd think that partly this is because 113 is an abstract internal representation - it can't be surprising, because we have to build it for ourselves. There is no "experience of 113" in normal human cognition. If I could just look at a pile of objects and go "That's 113" the way I go "that's 3!" or "that's 4!", I could imagine very well being taken aback and going "Oh, so that's what 113 looks like o.o"
Equally, while digestion happens, it's not something I have any conscious awareness of. I've had various moments in my life where I've had a sudden visceral understanding of "oh, so that's what that internal process feels like o.o" I'd certainly expect the sensation of "feeling a single food item as it passes through my digestion" would throw up that flag in short order.
comment by shokwave · 2011-06-06T15:30:45.274Z · LW(p) · GW(p)
Say I jam an electrode into Mary's optical nerve and send a pulse down it, causing the nerve to report to the brain that she sees red. In this hypothetical, her field of vision fills with red. Does this count as her experiencing the qualia of red?
If no: your concept of qualia is magical.
If yes: qualia doesn't do the work you want it to. With the word qualia, you're drawing a distinction between knowledge of the event and the event itself - the event happening as distinct from a complete understanding of the event happening. This distinction is worthless! No sane person will tell a physicist "You can't know how fast a one kilogram ball will reach the end of a 10 meter 45-degree ramp in a frictionless environment! You haven't rolled a one kilo ball down said ramp in said environment!".
Mary, with her complete knowledge of human neuroscience, will be able to predict exactly what will happen inside her head, down to her exclamation of "Oh, that's what red looks like!". She could act on this prediction in exactly the same way that someone who had seen red could act on their qualia. Yet again: the distinction is worthless.
If this doesn't solve the qualia problem for you, tell me: why not?
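The ramp prediction above really is pure computation. A quick sketch of it (this assumes the ball slides without friction, so its 1 kg mass cancels and rotational energy is ignored, consistent with the frictionless setup):

```python
import math

# Frictionless 45-degree ramp, 10 m long, from the comment above.
g = 9.81                  # m/s^2, standard gravity
length = 10.0             # m
theta = math.radians(45)

# Energy conservation: m*g*h = (1/2)*m*v^2; the mass cancels out.
height = length * math.sin(theta)
v_final = math.sqrt(2 * g * height)

# Constant acceleration a = g*sin(theta) gives the descent time.
a = g * math.sin(theta)
t = math.sqrt(2 * length / a)

print(f"final speed: {v_final:.2f} m/s after {t:.2f} s")
```

Nobody needs to roll the ball to trust these numbers; the claim in the comment is that Mary's complete neuroscience is supposed to work the same way.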
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-06T15:44:34.184Z · LW(p) · GW(p)
Say I jam an electrode into Mary's optical nerve and send a pulse down it, causing the nerve to report to the brain that she sees red. In this hypothetical, her field of vision fills with red. Does this count as her experiencing the qualia of red?
Yes.
If yes: qualia doesn't do the work you want it to.
Who are you addressing and how do you know?
With the word qualia, you're drawing a distinction between knowledge of the event and the actual event of the event - the event happening as distinct from a complete understanding of the event happening.
The intended distinction is between complete theoretical or descriptive knowledge and knowledge by acquaintance.
This distinction is worthless! No sane person will tell a physicist "You can't know how fast a one kilogram ball will reach the end of a 10 meter 45-degree ramp in a frictionless environment! You haven't rolled a one kilo ball down said ramp in said environment!".
The distinction is worthless in that case. However, that does not mean it is worthless in other cases. The intended conclusion is that qualia are a unique case.
Mary, with her complete knowledge of human neuroscience, will be able to predict exactly what will happen inside her head, down to her exclamation of "Oh, that's what red looks like!".
It's possible that her ability to predict her own surprise won't involve an ability to predict what she is surprised at, how red looks.
She could act on this prediction in exactly the same way that someone who had seen red could act on their qualia.
You are saying that Mary would know what red looks like while in the room?
If this doesn't solve the qualia problem for you, tell me: why not?
Why doesn't that solve the qualia problem for everybody?
Replies from: shokwave↑ comment by shokwave · 2011-06-07T05:54:34.232Z · LW(p) · GW(p)
The intended distinction is between complete theoretical or descriptive knowledge and knowledge by acquaintance ... that does not mean it is worthless in other cases ...
Give me a case where this is a useful distinction.
It's possible that her ability to predict her own surprise won't involve an ability to predict what she is surprised at, how red looks.
This is exactly the useless distinction, right here. You're drawing a distinction between knowing how red looks, and being able to act as if you know how red looks. (Being able to predict her surprise vs being able to predict the look of red). In practice, everywhere, there is no difference. The only use of knowing how red looks is to check that your model of neuroscience is accurate, and we've already specified Mary's knowledge is complete.
You are saying that Mary would know what red looks like while on the room?
I'm not sure if I am. The way I phrased the question, I think that saying "yes, she experiences the qualia of red" means that she knows what red looks like while she's in the room, and saying "no, she does not experience the qualia of red" means she doesn't know what red looks like while inside the room.
I would be greatly honored if you could quickly outline what work you feel that qualia does - what theories does it support, what does it count as evidence against, what does it rule out and what does it prove, what impact does it have? What does it mean, if qualia existed?
Replies from: Peterdjones↑ comment by Peterdjones · 2011-06-07T09:34:32.789Z · LW(p) · GW(p)
Give me a case where this is a useful distinction.
The intended use is to tell us whether neuroscience is complete.
It's possible that her ability to predict her own surprise won't involve an ability to predict what she is surprised at, how red looks.
This is exactly the useless distinction, right here. You're drawing a distinction between knowing how red looks, and being able to act as if you know how red looks. (Being able to predict her surprise vs being able to predict the look of red). In practice, everywhere, there is no difference.
That is a bold claim. Surely there is a subjective difference between predicting the surprise, and predicting the red? Or are you looking at it behaviouristically, from the outside?
The only use of knowing how red looks is to check that your model of neuroscience is accurate, and we've already specified Mary's knowledge is complete.
No. We have specified that it is descriptively complete. The whole point of the story is to explore whether complete descriptive knowledge is complete knowledge.
You are saying that Mary would know what red looks like while in the room? I'm not sure if I am
It would be helpful if you were sure.
The way I phrased the question, I think that saying "yes, she experiences the qualia of red" means that she knows what red looks like while she's in the room, and saying "no, she does not experience the qualia of red" means she doesn't know what red looks like while inside the room.
So which is it?
I would be greatly honored if you could quickly outline what work you feel that qualia does - what theories does it support, what does it count as evidence against, what does it rule out and what does it prove, what impact does it have? What does it mean, if qualia existed?
That qualia exist means there is something lemons taste like, and saxophones sound like, and sunsets look like. Beyond that... why do you need to know? Are you planning to deny that there is something lemons taste like, saxophones sound like, and sunsets look like if it leads to implications you don't like?
comment by PhilGoetz · 2011-05-30T17:52:17.574Z · LW(p) · GW(p)
For example, we don't experience the feeling of ineffability for something like counting, which happens consciously (above a threshold of five or six). If Mary had never counted more than 100 objects before, and today she counted 113 sheep in a field, we wouldn't expect her to exclaim "Oh, so that's what 113 looks like!"
What about the case of 3 sheep? Are small numbers understood both analytically, and as qualia?
Replies from: orthonormal↑ comment by orthonormal · 2011-05-31T16:00:26.109Z · LW(p) · GW(p)
That's a really good question, and I just wanted to avoid that extra complication in this example. (As noted in the last post, I couldn't even fully do that. Oh well.)
comment by ColonelZen · 2017-01-13T04:28:50.849Z · LW(p) · GW(p)
I think I have long since "dissolved the problem quite elegantly" ...
Basically: yes, Mary can "know all there is to know about color ...." before being exposed. And yes, she does learn something new when exposed to color.
But basically the knowledge of how her brain reacts to color is information that does not exist in the universe prior to her exposure. She is physically changed and the new information is thus created.
If her knowledge were in essence complete, she might know beforehand EXACTLY how her own brain would change, what new neurons and synapses would develop on exposure to color, how long it would take, etc..... But knowing what changes will happen inside your body doesn't make them happen any more than a diabetic can think away his disease if he happens to be a biochemist with an utmost understanding of the mechanisms.
Information is ALWAYS manifest as physical structure and patterns. The information of Mary's color knowledge could not exist until she had exposure to color. (In a way-out-yonder SF world where scientists could effect the changes to her brain without exposing her to color, then no, in only that case would she NOT learn anything new from exposure to color; in a world where exposure is the only way to create the linkage patterns in her brain - even if she knows what they will be - she does learn something new.)
I've a similar deconstruction of flying vermin.
The "knowledge argument" is dead.
*BTW I've never seen my particular arguments before, but I was never a philosophy student as such. I don't claim that others haven't offered them first, possibly in less colorful terminology, but I'm damned if my arguments don't in fact destroy the two great "knowledge arguments". (Chalmers's zombies were fantasy from day one ... "modal logic" ... which one? ... aside from that, there was never reason to believe in the material possibility of philosophical zombies; absent material possibility there is nothing to argue about.)
Replies from: TheAncientGeek↑ comment by TheAncientGeek · 2017-01-13T16:06:06.006Z · LW(p) · GW(p)
That's a fairly standard response which has been posed and answered several times in the comments. For instance:
That's not the point: the point is why it is necessary, in the case of experience, and only in the case of experience, to instantiate it in order to fully understand it. Obviously, it is true that a description of a brain state won't put you into that brain state. But that doesn't show that there is nothing unusual about qualia. The problem is that in no other case does it seem necessary to instantiate a brain state in order to understand something.
If another version of Mary were shut up to learn everything about, say, nuclear fusion, the question "would she actually know about nuclear fusion" could only be answered "yes, of course... didn't you just say she knows everything?". The idea that she would have to instantiate a fusion reaction within her own body doesn't apply, any more than a description of photosynthesis will make you photosynthesise. We expect that the description of photosynthesis is complete, so that actually being able to photosynthesise would not add anything to our knowledge.
The list of things which the standard Mary's Room intuition doesn't apply to is a long one. There seem to be some edge cases: for instance, would an alternative Mary know everything about heart attacks without having one herself? Well, she would know everything except what a heart attack feels like — and what it feels like is a quale. Edge cases like that one are just cases where an element of knowledge-by-acquaintance is needed for complete knowledge. Even other mental phenomena don't suffer from this peculiarity. Thoughts and memories are straightforwardly expressible in words — so long as they don't involve qualia.
So: is the response "well, she has never actually instantiated colour vision in her own brain" one that lays to rest the challenge posed by the Knowledge Argument, leaving physicalism undisturbed? The fact that these physicalists feel it would be in some way necessary to instantiate colour means they subscribe to the idea that there is something epistemically unique about qualia/experience, even if they resist the idea that qualia are metaphysically unique.
Is the assumption of epistemological uniqueness to be expected given physicalism? Some argue that no matter how much you know about something "from the outside", you quite naturally wouldn't be expected to understand it from the inside. However, if physicalism is taken as the claim that everything ultimately has a possible physical explanation, that implies that everything has a description in 3rd person, objective language — that everything reduces to the 3rd person and the objective. What that means is that there can be no irreducible subjectivity: whilst brains may be able to generate subjective views, they must be ultimately reducible to objectivity along with everything else. Since Mary knows everything about how brains work, she must know how the trick is pulled off: she must be able to understand how and why and what kind of (apparent) subjectivity is produced by brains. So the Assumption of Epistemological Uniqueness does not cleanly rescue physicalism, for all that it is put forward by physicalists as something that is "just obvious".
comment by PhilGoetz · 2011-05-30T17:41:00.805Z · LW(p) · GW(p)
Well, we could of course draw the analogy between colors of the spectrum and tones of sound
Puzzle: We sense colors, which exist on a continuum, by how near one color is to each of the only 3 colors our retinas can sense directly, plus intensity. We sense tones, which exist on a continuum, directly - we can sense each separate wavelength directly. Yet we have the impression that there are more colors than sounds - we draw sounds on a line, but colors in a plane.
Replies from: saturn, AdeleneDawner↑ comment by saturn · 2011-06-06T06:32:58.408Z · LW(p) · GW(p)
If you're talking about only a single frequency of light or sound, a 2-dimensional point is enough to represent human perception—one dimension for frequency and another for intensity.
However, if you're talking about the full range of colors and sounds that humans can distinguish, colors can be described with only 3 dimensions, while an ideal perceptual representation of sound would need a separate dimension for every functioning hair cell.
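The dimensionality point can be made concrete with a toy projection model (the filter shapes and spectra below are invented for illustration, not measured data): any 3-channel sensor collapses a whole spectrum to 3 numbers, so physically different spectra can produce identical sensor codes ("metamers"), losing exactly the per-band detail that a hair-cell-per-band representation of sound would keep.

```python
# Project a spectrum (per-band intensities) onto each broad filter.
def channel_outputs(spectrum, filters):
    return tuple(
        round(sum(s * f for s, f in zip(spectrum, flt)), 6)
        for flt in filters
    )

# Three broad, overlapping filters over 5 frequency bands (hypothetical).
FILTERS = [
    (1.0, 0.5, 0.0, 0.0, 0.0),
    (0.0, 0.5, 1.0, 0.5, 0.0),
    (0.0, 0.0, 0.0, 0.5, 1.0),
]

spectrum_a = (0.5, 0.1, 0.4, 0.2, 0.1)
spectrum_b = (0.3, 0.5, 0.2, 0.2, 0.1)  # a physically different spectrum...

print(channel_outputs(spectrum_a, FILTERS))
print(channel_outputs(spectrum_b, FILTERS))  # ...yet the same 3-channel code
```

With only 3 channels the two spectra are indistinguishable; a representation with one dimension per band (as with hair cells) would separate them.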
Replies from: PhilGoetz↑ comment by PhilGoetz · 2011-06-13T04:40:22.779Z · LW(p) · GW(p)
I figured it out. An ideal perceptual representation of sound would only need 2 hair cells - if hair cells, like cones, reported a distance from the stimulus. A cone cell gives a signal whose intensity indicates how far the wavelength of the light it sensed is from its preferred wavelength. 1 cone cell lets you order colors along a ray. 2 cone cells let you order them along a line. 3 cone cells let you order them on a plane.
A hair cell is specific to a frequency, so you can't combine the output from n hair cells to give an n-1 dimensional picture.
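The distance-coding idea can be sketched with hypothetical Gaussian tuning curves (the peak wavelengths and the 60 nm width below are illustrative choices, not real cone measurements): each broad detector's output falls off with distance from its preferred wavelength, so the pattern of responses across a few detectors locates a single wavelength on a continuum.

```python
import math

# Hypothetical Gaussian tuning curve: response falls off with distance
# between the stimulus wavelength and the cell's preferred wavelength.
def response(stimulus_nm, preferred_nm, width_nm=60.0):
    return math.exp(-((stimulus_nm - preferred_nm) ** 2) / (2 * width_nm ** 2))

# Three broad "cone-like" detectors (rough S, M, L peak wavelengths).
CONE_PEAKS = (445.0, 535.0, 565.0)

def cone_code(stimulus_nm):
    return tuple(response(stimulus_nm, p) for p in CONE_PEAKS)

# Different single wavelengths yield distinct 3-vectors, so the ratios
# of responses order the stimuli along a continuum between the peaks.
print(cone_code(500.0))
print(cone_code(600.0))
```

A narrowband detector, by contrast, reports almost nothing about stimuli outside its own band, which is why combining n such outputs doesn't give the same kind of low-dimensional ordering.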
Replies from: saturn↑ comment by saturn · 2011-06-13T07:07:09.767Z · LW(p) · GW(p)
An ideal perceptual representation of sound would only need 2 hair cells - if hair cells, like cones, reported a distance from the stimulus.
That's true if you're talking about a stimulus that only contains a single frequency at a time, but real sounds and colors are mixtures of an entire spectrum of frequencies, each frequency having its own distinct amplitude.
For example, 2 hair cells, even if they had a wider frequency response, would not be enough to understand speech; for that you need at least 4 to 8 frequency bands.
↑ comment by AdeleneDawner · 2011-05-30T17:44:01.063Z · LW(p) · GW(p)
There are more colors than tones, but there are more dimensions to sound than just tone.