The AI in Mary's room

post by Stuart_Armstrong · 2016-05-24T13:19:29.849Z · LW · GW · Legacy · 58 comments

In the Mary's room thought experiment, Mary is a brilliant scientist in a black-and-white room who has never seen any colour. She can investigate the outside world through a black-and-white television, and has piles of textbooks on physics, optics, the eye, and the brain (and everything else relevant to her condition). Through this she knows, intellectually, everything there is to know about colours and how humans react to them, but she hasn't seen any colours at all.

Then, when she steps out of the room and sees red (or blue), does she learn anything? It seems that she does. Even if she doesn't technically learn something, she experiences things she never had before, and her brain certainly changes in new ways.

The argument was intended as a defence of qualia against certain forms of materialism. It's interesting, and I don't intend to solve it fully here. But just as I extended Searle's Chinese room argument to the perspective of an AI, this argument can also be considered from an AI's perspective.

Consider an RL agent with a reward channel, but which currently receives nothing from that channel. The agent can know everything there is to know about itself and the world. It can know about all sorts of other RL agents, and their reward channels. It can observe them getting their own rewards. Maybe it could even interrupt or increase their rewards. But all this knowledge will not get it any reward. As long as its own channel doesn't send it the signal, knowledge of other agents' rewards - even of identical agents getting rewards - does not give this agent any reward. Ceci n'est pas une récompense.
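A minimal sketch of this setup (illustrative only; the class and names are invented, not part of the post): the agent's return depends solely on what arrives on its own reward channel, however complete its knowledge of other agents' rewards.

```python
class RLAgent:
    def __init__(self):
        self.total_reward = 0.0   # what the agent actually receives
        self.world_model = {}     # everything it knows, including other agents' rewards

    def observe_other(self, other_name, their_reward):
        # Perfect knowledge of another agent's reward only updates the world model.
        self.world_model[(other_name, "reward")] = their_reward

    def receive(self, reward_signal):
        # Only the agent's own channel contributes to its return.
        self.total_reward += reward_signal


agent = RLAgent()
agent.observe_other("identical_twin", their_reward=10.0)  # watches an identical agent get rewarded
assert agent.total_reward == 0.0   # ceci n'est pas une récompense
# Only agent.receive(...) would ever change total_reward.
```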

This seems to mirror Mary's situation quite well - knowing everything about the world is no substitute for actually getting the reward/seeing red. Now, an RL agent's reward seems closer to pleasure than to qualia - this would correspond to a Mary brought up in a puritanical, pleasure-hating environment.

Closer to the original experiment, we could imagine the AI is programmed to enter certain specific subroutines when presented with certain stimuli. The only way for the AI to start these subroutines is if the stimulus is presented to it. Then, upon seeing red, the AI enters a completely new mental state, with new subroutines. The AI could know everything about its programming and about the stimulus, and know, intellectually, what would change about itself if it saw red. But until it did, it would not enter that mental state.
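To make the subroutine picture concrete, here is a rough sketch (again invented for illustration, not a claim about any particular architecture): the agent can read the full source of its "red" subroutine, and so knows intellectually exactly what seeing red would change, but the subroutine only runs when the stimulus actually arrives.

```python
import inspect


class GatedAI:
    def __init__(self):
        self.state = "default"

    def _red_subroutine(self):
        # The new "mental state", entered only when an actual red stimulus arrives.
        self.state = "seeing-red"

    def describe_red_subroutine(self):
        # Intellectual knowledge: the routine's full source code, without running it.
        return inspect.getsource(self._red_subroutine)

    def perceive(self, stimulus):
        if stimulus == "red":
            self._red_subroutine()


ai = GatedAI()
print(ai.describe_red_subroutine())  # knows exactly what would change on seeing red
assert ai.state == "default"         # but has not entered that state
ai.perceive("red")
assert ai.state == "seeing-red"
```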

If we use ⬜ to (informally) denote "knowing all about", then ⬜(X→Y) does not imply Y. Here X and Y could be "seeing red" and "the mental experience of seeing red". I could have simplified that by saying that ⬜Y does not imply Y. Knowing about a mental state, even perfectly, does not put you in that mental state.

This closely resembles the original Mary's room experiment. And it seems that if anyone insists that certain features are necessary to the intuition behind Mary's room, then these features could be added to this model as well.

Mary's room is fascinating, but it doesn't seem to be talking about humans exclusively, or even about conscious entities.

58 comments

Comments sorted by top scores.

comment by ShardPhoenix · 2016-05-26T07:06:35.588Z · LW(p) · GW(p)

Consider a situation where Mary is so dexterous that she is able to perform fine-grained brain surgery on herself. In that case, she could look at an example of what a brain that has seen red looks like, and manually copy any relevant differences into her own brain. While she still would never have actually seen red through her eyes, it seems like she would know what it is like to see red as well as anyone else.

I think this demonstrates that the Mary's room thought experiment is about the limitations of human senses/means of learning, and that the apparent sense of mystery it has comes mainly from the vagueness of what it means to "know all about" something. (Not saying it was a useless idea - it can be quite valuable to be forced to break down some vague or ambiguous idea that we usually take for granted.)

Replies from: TheAncientGeek, ZeitPolizei, V_V
comment by TheAncientGeek · 2016-05-27T18:15:16.256Z · LW(p) · GW(p)

M's R is about what it says it's about: the existence of non-physical facts. Finding a loophole where Mary can instantiate the brain state without having the perceptual stimulus doesn't address that...indeed it assumes that an instantiation of the red-seeing is necessary, which is tantamount to conceding that something subjective is going on, which is tantamount to conceding the point.

Replies from: ShardPhoenix, ImNotAsSmartAsIThinK
comment by ShardPhoenix · 2016-05-29T01:29:27.250Z · LW(p) · GW(p)

non-physical facts

What is a "non physical fact"? The experience of red seems to be physically encoded in the brain like anything else. It does seem clear that some knowledge exists which can't be transmitted from human to human via means of language, at least not in the same way that 2+2=4 can. However, this is just a limitation of the human design that doesn't necessarily apply to eg AIs (which depending on design may be able to transmit and integrate snippets of their internal code and data), and I don't think this thought experiment proves anything beyond that.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-05-29T09:13:17.215Z · LW(p) · GW(p)

What is a "non physical fact"?

The argument treats physical knowledge as a subset of objective knowledge. Subjective knowledge, which can only be known on a first-person basis, automatically counts as non-physical. That's an epistemic definition.

The experience of red seems to be physically encoded in the brain like anything else.

If you have the expected intuition from M's R, then you think that Mary would be able to read cognitive information from brain scans, but not experiential information. In that sense, 'red' is not encoded in the same way as everything else, since it cannot be decoded in the same way.

It does seem clear that some knowledge exists which can't be transmitted from human to human by means of language, at least not in the same way that 2+2=4 can. However, this is just a limitation of the human design

But not of superhuman design. The original paper (have you read it?) avoids the issue of limited communication bandwidth by making Mary a super-scientist who can examine brain scans at any level of detail.

Proves anything beyond that

What it proves to you depends on what intuitions you have about it. If you think Mary would know what red looks like while in the room, from reading brain scans, then it isn't going to prove anything to you.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2016-05-29T11:15:02.177Z · LW(p) · GW(p)

A way to rephrase the question is: "is there any sequence of sensory inputs, other than the stimulation of red cones by red light, that will cause Mary to have memories of the color red comparable to those of someone who has had their red cones stimulated at some point?" It's possible that the answer is no, which says something interesting about the API of the human machine, but doesn't seem necessarily fundamental to the concept of knowledge.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-05-29T12:50:43.724Z · LW(p) · GW(p)

The relevance is physicalism.

If physicalism is the claim that everything has a physical explanation, then the inability to understand what pain is without being in pain contradicts it. I don't think anyone here believes that physicalism is an unimportant issue.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2016-05-29T13:36:54.116Z · LW(p) · GW(p)

I'm arguing that there's no contradiction and that this inability is just a limit of humans/organic brains, not a fundamental fact about pain or information.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-05-29T15:12:51.647Z · LW(p) · GW(p)

If you want to argue to that conclusion, then argue for it: what kind of limit? Where does it come from?

comment by ImNotAsSmartAsIThinK · 2016-05-28T00:33:30.196Z · LW(p) · GW(p)

I think the argument is asserting that Mary post-brain-surgery is identical to Mary post-seeing-red. There is no difference; the two Marys would both attest to having access to some ineffable quality of redness.

To put it bluntly, both Marys say the same things, think the same things, and generally are virtually indistinguishable. I don't understand what disagreement is occurring here; hopefully I've given someone enough ammunition to explain.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-05-28T07:21:13.953Z · LW(p) · GW(p)

I don't understand what the point of that point is.

Do you think you are arguing against the intended conclusion of the Knowledge Argument in some way? If so, you are not...the loophole you have found is quite irrelevant.

Replies from: ImNotAsSmartAsIThinK
comment by ImNotAsSmartAsIThinK · 2016-05-28T15:10:03.995Z · LW(p) · GW(p)

I have no idea what your position even is and you are making no effort to elucidate it. I had hoped this line

I don't understand what disagreement is occurring here; hopefully I've given someone enough ammunition to explain.

was enough to clue you in to the point of my post.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-05-28T16:44:16.766Z · LW(p) · GW(p)

I'm disagreeing that you have a valid refutation of the KA. However, I don't know if you even think you have, since you haven't responded to my hints that you should clarify.

Replies from: ImNotAsSmartAsIThinK
comment by ImNotAsSmartAsIThinK · 2016-05-28T22:07:16.309Z · LW(p) · GW(p)

"I think you're wrong" is not a position.

The way you're saying this, it makes it seem like we're both in the same boat. I have no idea what position you're even holding.

I feel like I'm doing the same thing over and over and nothing different is happening, but I'll quote what I said in another place in this thread and hope I was a tiny bit clearer.

http://lesswrong.com/lw/nnc/the_ai_in_marys_room/day2

I think the distinction between 'knowing all about' and 'seeing' red is captured in my box analogy. The brain state is a box. There is another box inside it; call this 'understanding'. We call something inside the first box 'experienced'. So the paradox here is that the two distinct states [experiencing (red) ] and [experiencing ( [understanding (red) ] ) ] are both brought under the header [knowing (red)], and this is really confusing.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-05-29T07:51:54.006Z · LW(p) · GW(p)

I just explained the position I am holding.

"I think you're wrong" is not a position

I have explained elsewhere why the loophole doesn't work:

M's R is about what it says it's about: the existence of non-physical facts. Finding a loophole where Mary can instantiate the brain state without having the perceptual stimulus doesn't address that...indeed it assumes that an instantiation of the red-seeing is necessary, which is tantamount to conceding that something subjective is going on, which is tantamount to conceding the point.

Moving on to your argument:

I think the distinction between 'knowing all about' and 'seeing' red is captured in my box analogy. The brain state is a box. There is another box inside it; call this 'understanding'. We call something inside the first box 'experienced'. So the paradox here is that the two distinct states [experiencing (red) ] and [experiencing ( [understanding (red) ] ) ] are both brought under the header [knowing (red)], and this is really confusing.

Confusing to whom?

Let's suppose that person is Frank Jackson.

In the Knowledge Argument, Jackson credits Mary with all objective knowledge, and only objective knowledge, precisely because he is trying to establish the existence of subjective knowledge: what Mary doesn't know must be subjective, if there is something Mary doesn't know. So the eventual point is that there is more to knowledge than objective knowledge.

So you don't show that Jackson is wrong by agreeing with him.

But I don't know that you think Jackson is wrong.

Replies from: ImNotAsSmartAsIThinK
comment by ImNotAsSmartAsIThinK · 2016-05-29T19:28:03.844Z · LW(p) · GW(p)

Mary's room seems to be arguing that:

[experiencing(red)] =/= [experiencing(understanding([experiencing(red)]))]

(translation: the experience of seeing red is not the experience of understanding how seeing red works)

This is true, when we take those statements literally. But it's true in the same sense that a Gödel encoding of a statement in PA is not literally that statement. It is just a representation, but the representation is exactly homomorphic to its referent. Mary's representation of reality is presumed complete ex hypothesi, therefore she will understand exactly what will happen in her brain after seeing color, and that is exactly what happens.

You wouldn't call a statement of PA that isn't literally a Gödel encoding of a statement (for some fixed encoding) a non-mathematical statement. For one, because that statement has a Gödel encoding by necessity. But more importantly, even though the statement technically isn't literally a Gödel encoding, it's still mathematical, regardless.

Mary knows how she will respond to learning what red is like. Mary knows how others will respond. This exhausts the space of possible predictions that could be made on behalf of this subjective knowledge, and it can be done without it.

what Mary doesn't know must be subjective, if there is something Mary doesn't know. So the eventual point is that there is more to knowledge than objective knowledge.

Tangential to this discussion, but I don't think that is a wise way of labeling that knowledge.

Suppose Mary has enough information to predict her own behavior. Suppose she predicts she will do x. Could she not, upon deducing that fact, decide to not do x?

Mary has all objective knowledge, but certain facts about her own future behavior must escape her, because any certainty could trivially be negated.

Replies from: TheOtherDave, TheAncientGeek
comment by TheOtherDave · 2016-05-30T03:48:20.955Z · LW(p) · GW(p)

Suppose Mary has enough information to predict her own behavior. Suppose she predicts she will do x. Could she not, upon deducing that fact, decide to not do x?

There are three possibilities worth disambiguating here.
1) Mary predicts that she will do X given some assumed set S1 of knowledge, memories, experiences, etc., AND S1 includes Mary's knowledge of this prediction.
2) Mary predicts that she will do X given some assumed set S2 of knowledge, memories, experiences, etc., AND S2 does not include Mary's knowledge of this prediction.
3) Mary predicts that she will do X independent of her knowledge, memories, experiences, etc.

comment by TheAncientGeek · 2016-06-01T09:34:23.335Z · LW(p) · GW(p)

Mary's representation of reality is presumed complete ex hypothesi, therefore she will understand exactly what will happen in her brain after seeing color, and that is exactly what happens

Mary is presumed to have all objective knowledge and only objective knowledge. Your phrasing is ambiguous and therefore doesn't address the point. When you say Mary will know what happens when she sees red, do you mean she knows how red looks subjectively, or that she knows something objective, like what her behaviour will be? Further on you mention predicting her reactions.

You wouldn't call a statement of PA that isn't literally a Gödel encoding of a statement (for some fixed encoding) a non-mathematical statement. For one, because that statement has a Gödel encoding by necessity. But more importantly, even though the statement technically isn't literally a Gödel encoding, it's still mathematical, regardless.

Is that supposed to relate to the objective/subjective distinction somehow?

Mary knows how she will respond to learning what red is like. Mary knows how others will respond. This exhausts the space of possible predictions that could be made on behalf of this subjective knowledge, and it can be done without it.

So? The overall point is about physicalism, and to get to 'physicalism is false', all you need is the existence of subjective knowledge, not its usefulness in making predictions. So again I don't see the relevance.

Suppose Mary has enough information to predict her own behavior. Suppose she predicts she will do x. Could she not, upon deducing that fact, decide to not do x?

Maybe. I don't see the problem. There is still an unproblematic sense in which Mary has all objective knowledge, even if it doesn't allow her to do certain things. If that was the point.

Replies from: ImNotAsSmartAsIThinK, entirelyuseless
comment by ImNotAsSmartAsIThinK · 2016-06-02T21:16:44.764Z · LW(p) · GW(p)

Mary is presumed to have all objective knowledge and only objective knowledge. Your phrasing is ambiguous and therefore doesn't address the point.

The behavior of the neurons in her skull is an objective fact, and this is what I am referring to. Apologies for the ambiguity.

When you say Mary will know what happens when she sees red, do you mean she knows how red looks subjectively, or that she knows something objective, like what her behaviour will be?

The latter. The former is purely experiential knowledge, and as I have repeatedly said is contained in a superset of verbal (what you call 'objective') knowledge, but is disjoint from the set of verbal ('objective') knowledge itself. This is my box metaphor.

Is that supposed to relate to the objective/subjective distinction somehow?

Yes. Assuming the Gödel encoding is fixed, [the metaphor is that] any and all statements of PA are experiential knowledge (experiences, in simple terms); the statements of PA that are not Gödel encodings are purely experiential knowledge (the redness of red, say); and finally the statements of PA that are Gödel encodings are verbal knowledge, or 'objective knowledge' in your terminology.

Despite not being Gödel encodings (in the fixed encoding), the second item in the above list is still mathematical, and redness of red is still physical.

So? The overall point is about physicalism, and to get to 'physicalism is false', all you need is the existence of subjective knowledge, not its usefulness in making predictions. So again I don't see the relevance.

What does this knowledge do? How do we tell the difference between someone with and without these 'subjective experiences'? What definition of knowledge admits it as valid?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-06-03T12:37:01.015Z · LW(p) · GW(p)

The latter. The former is purely experiential knowledge, and as I have repeatedly said is contained in a superset of verbal (what you call 'objective') knowledge, but is disjoint from the set of verbal ('objective') knowledge itself. This is my box metaphor.

You have said that, according to you, stipulatively, subjective knowledge is a subset of objective knowledge. What we mean by objective knowledge is generally knowledge that can be understood at second hand, without being in a special state or having had particular experiences. You say that the subjective subset of objective knowledge is somehow opaque, so that it does not have the properties usually associated with objective knowledge. But why should anyone believe it is objective, when it lacks the usual properties, and is only asserted to be objective?

redness of red is still physical

I can't see how that has been proven. You can't prove that redness is physically encoded in the relevant sense just by noting that physical changes occur in brains, because

1. There's no physical proof of physicalism.

2. An assumption of physicalism is question-begging.

3. You need an absence of non-physical properties, states and processes, not just the presence of physical changes.

4. Physicalism as a meaningful claim, and not just a stipulative label, needs to pay its way in explanation...but its ability to explain subjective knowledge is just what is in question.

What does this knowledge do? How do we tell the difference between someone with and without these 'subjective experiences'? What definition of knowledge admits it as valid?

It's hard to prove the existence of subjective knowledge on an objective basis. What else would you expect? There is a widespread belief in subjective, experiential knowledge, and the evidence for it is subjective. The alternative is the sort of thing caricatured as 'how was it for me, darling'.

comment by entirelyuseless · 2016-06-01T13:48:04.704Z · LW(p) · GW(p)

If one of Mary's predictions is, "When I see red, I will say, 'Wow! I didn't know that red looked like that!'", then the fact that she has predicted this in advance is hardly evidence that she does not learn anything by seeing red. If anything, it proves that she does.

Replies from: ImNotAsSmartAsIThinK
comment by ImNotAsSmartAsIThinK · 2016-06-02T21:04:28.729Z · LW(p) · GW(p)

Your post seems to have been a reply to me. I'm the one who still accepts physicalism; TheAncientGeek is the one who rejects it.

Replies from: entirelyuseless
comment by entirelyuseless · 2016-06-02T22:36:40.154Z · LW(p) · GW(p)

I realize that. A reply doesn't necessarily have to be an argument against the person it is a reply to.

comment by ZeitPolizei · 2016-05-26T17:28:15.773Z · LW(p) · GW(p)

The AI analogue would be: if the AI has the capacity to wirehead itself, it can make itself enter the color-perception subroutines. Whether something new is learned depends on the remaining brain architecture. I would say that, in the case of humans, it is clear that whenever something new is experienced, the human learns what that experience feels like. I reckon that for some people with strong visualization (in a broad sense) abilities it is possible to know what an experience feels like without experiencing it first-hand, by synthesizing a new experience from previously known experiences. But in most cases there is a difference between imagining a sensation and experiencing it.

In the case of the AI, either no information is passed between the color-perception subroutine and the main processing unit, in which case the AI may have a new experience but not learn anything new; or some representation of the experience of being in the subroutine is saved to memory, in which case something new is learned.
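A minimal sketch of those two cases (names invented for illustration): the colour state is entered either way, but only the variant that saves a representation to memory has anything new available to it afterwards.

```python
class WireheadableAI:
    def __init__(self, saves_to_memory):
        self.saves_to_memory = saves_to_memory
        self.memory = []               # what the main processing unit can consult later
        self.in_color_state = False

    def enter_color_subroutine(self):
        # Wireheading: the AI triggers the subroutine without any external stimulus.
        self.in_color_state = True     # the new experience happens either way
        if self.saves_to_memory:
            self.memory.append("representation of being in the colour state")
        self.in_color_state = False


forgetful = WireheadableAI(saves_to_memory=False)
forgetful.enter_color_subroutine()
assert forgetful.memory == []          # new experience, but nothing learned

learner = WireheadableAI(saves_to_memory=True)
learner.enter_color_subroutine()
assert len(learner.memory) == 1        # something new is retained afterwards
```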

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-05-27T19:22:41.895Z · LW(p) · GW(p)

The stronger someone's imaginative ability is, the more their imagining an experience is actually having it, in terms of brain states...and the less it is a counterexample to anything relevant.

If the knowledge the AI gets from the colour routine is unproblematically encoded in a string of bits, why can't it just look at the string of bits...for that matter, why can't Mary just look at the neural spike trains of someone seeing red?

Replies from: ZeitPolizei
comment by ZeitPolizei · 2016-05-27T23:57:02.440Z · LW(p) · GW(p)

why can't Mary just look at the neural spike trains of someone seeing red?

Why can't we just eat a picture of a plate of spaghetti instead of actual spaghetti? Because a representation of some thing is not the thing itself. Am I missing something?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-05-28T07:07:34.637Z · LW(p) · GW(p)

Yes: it is about a kind of knowledge.

The banal truth here is that knowing about a thing doesn't turn you into it.

The significant and contentious claim is that there are certain kinds of knowledge that can only be accessed by instantiating a brain state. The existence of such subjective knowledge leads to a further argument against physicalism.

comment by V_V · 2016-05-28T19:02:46.056Z · LW(p) · GW(p)

Consider a situation where Mary is so dexterous that she is able to perform fine-grained brain surgery on herself. In that case, she could look at an example of what a brain that has seen red looks like, and manually copy any relevant differences into her own brain. While she still would never have actually seen red through her eyes, it seems like she would know what it is like to see red as well as anyone else.

But in order to create a realistic experience she would have to create a false memory of having seen red, which is something that an agent (human or AI) that values epistemic rationality would not want to do.

Replies from: ShardPhoenix
comment by ShardPhoenix · 2016-05-29T01:33:55.509Z · LW(p) · GW(p)

Since you'd know it was a false memory, it doesn't necessarily seem to be a problem, at least if you really need to know what red is like for some reason.

Replies from: V_V
comment by V_V · 2016-05-31T17:02:55.737Z · LW(p) · GW(p)

If you know that it is a false memory then the experience is not completely accurate, though it may perhaps be more accurate than what human imagination could produce.

comment by V_V · 2016-05-28T18:59:55.766Z · LW(p) · GW(p)

The reward channel seems an irrelevant difference. You could make the AI in Mary's room thought experiment by just taking the Mary's room thought experiment and assuming that Mary is an AI.

The Mary AI can perhaps simulate in a fairly accurate way the internal states that it would visit if it had seen red, but these simulated states can't be completely identical to the states that the AI would visit if it had actually seen red, otherwise the AI would not be able to distinguish simulation from reality and it would be effectively psychotic.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2016-05-31T14:00:06.688Z · LW(p) · GW(p)

Interesting point...

comment by ImNotAsSmartAsIThinK · 2016-05-28T00:50:33.868Z · LW(p) · GW(p)

I'd highly recommend this sequence to anyone reading this: http://lesswrong.com/lw/5n9/seeing_red_dissolving_marys_room_and_qualia/

The thrust of the argument, applied to this situation, is simply that 'knowledge' is used to mean two completely different things here. On one hand, we have knowledge as verbal facts and metaphoric understanding. On the other, we have averbal knowledge, that is, the superset containing verbal knowledge and non-verbal knowledge.

To put it as plainly as possible: imagine you have a box. Inside this box, there is another, smaller box. We can put a toy inside the smaller box, which is inside the larger box. We can alternatively put a toy inside the larger box but outside the smaller box. These situations are not equivalent. What a paradox!

The only insight needed here is simply noting that something can be 'inside the box' without being inside the box inside the box. Since both are referred to as 'inside the box', the confusion is not surprising.

It seems like a significant number of conventional aporia can be understood as confusions of levels.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-05-28T11:21:58.525Z · LW(p) · GW(p)

I'd highly recommend reading the [original paper](http://home.sandiego.edu/~baber/analytic/Jackson.pdf).

I am not following the box analogy. What kinds of knowledge do the boxes represent?

Replies from: ImNotAsSmartAsIThinK
comment by ImNotAsSmartAsIThinK · 2016-05-28T15:19:45.054Z · LW(p) · GW(p)

The big box is all knowledge, including the vague 'knowledge of experience' that people talk about in this thread. The box-inside-the-box is verbal/declarative/metaphoric/propositional/philosophical knowledge - that is, anything that is fodder for communication in any way.

The metaphor is intended to highlight that people seem to conflate the small box with the big box, leading to confusion about the situation. Inside the metaphor, perhaps this would be people saying "well, maybe there are objects inside the box which aren't inside the box at all". Which makes little sense if you assume 'inside the box' has a single referent, which it does not.

Edit: I read your link, thanks for that. I can't say I got much of anything out of it, though. I haven't changed my mind, and my epistemic status regarding my own arguments hasn't changed; which is to say there is likely something subtle I'm not getting about your position and I don't know what it is.

comment by selylindi · 2016-05-25T02:06:03.703Z · LW(p) · GW(p)

Let's take the AI example in a slightly different direction: Consider an AI built as a neural net with many input lines and output effectors, and a few well-chosen reward signals. One of the input lines goes to a Red Detector; the other input lines go to many other types of sensors, but none of them distinguish red things from non-red things. This AI then gets named Mary and put into a black-and-white room to learn about optics, color theory, and machine learning. (Also assume this AI has no ability to alter its own design.)

Speculation: At the moment when this AI Mary steps out of the room into the colorful world, it cannot have any immediate perception of red (or any other color), because its neural net has not yet been trained to make any use of the sensory data corresponding to redness (or any other color). Analogously to how a young child is taught to distinguish a culturally-specific set of colors, or to how an adult can't recognize lapis versus cerulean without practice, our AI cannot so much as distinguish red from blue until adequate training of the neural net has occurred.

If that line of reasoning is correct, then here's the conclusion: Mary does not learn anything new (perceptually) until she learns something new (behaviorally). Paradox dismissed.
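As a toy numerical illustration of that speculation (the weights and scenes below are invented): if no training signal ever flowed through the weight attached to the red-detector input, activating that input changes nothing downstream, so stepping out of the room cannot by itself produce a new perception.

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs = 5                         # input 0 is the red detector; 1-4 are other sensors
weights = rng.normal(size=n_inputs)  # weights learned inside the black-and-white room
weights[0] = 0.0                     # the red-detector weight was never trained

def output(sensors):
    # A single linear unit standing in for the whole net.
    return float(weights @ sensors)

grey_scene = np.array([0.0, 0.3, 0.7, 0.1, 0.9])
red_scene = grey_scene.copy()
red_scene[0] = 1.0                   # now the red detector fires

assert output(grey_scene) == output(red_scene)  # red makes no behavioural difference yet
```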

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-05-27T19:34:58.870Z · LW(p) · GW(p)

We know what happens when blind people gain sight, and it isn't nothing.

Replies from: ImNotAsSmartAsIThinK, entirelyuseless
comment by ImNotAsSmartAsIThinK · 2016-05-28T00:35:59.410Z · LW(p) · GW(p)

This isn't a helpful or contributive response.

comment by entirelyuseless · 2016-05-28T12:14:18.399Z · LW(p) · GW(p)

Yes. People can even immediately identify visual objects as corresponding to previously known tactile objects (probably via an analogous relationship of parts), even if not perfectly.

Replies from: selylindi
comment by selylindi · 2016-07-20T01:01:26.067Z · LW(p) · GW(p)

I'm under the impression that the empirical fact about this is exactly the opposite:

"Within a week to a few months after surgery, the children could match felt objects to their visual counterparts."

i.e. not immediate, but rather requiring the development of experience.

comment by Slider · 2016-05-25T19:39:42.629Z · LW(p) · GW(p)

For a lot of people, knowledge about experience is gained by first having the experience and then reflecting upon its details. However, Mary's knowledge must be so thorough that she must know the qualia minutiae pretty accurately. It would seem she would be in a position to be able to hallucinate color before actually seeing it.

What is unintuitive about Mary's position is that studying books usually doesn't develop any good deep-rooted intuitions about how things work. Usually, being able to pass a test on the subject is a sufficient bar for having ingested the material. Even in mathematics education there is a focus on "having routine". That is, you need to actually do calculations to really understand them. I think this is an acknowledgement that the part of the understanding being targeted is transmitted very lousily by prose.

Say that Gary knows perfectly that needles hurt but has never been injected with a needle. When he gets his first vaccine and it is just as he expected, does he gain new knowledge from it? Is it consistent to answer differently for Gary than for Mary?

comment by [deleted] · 2016-05-25T03:28:30.665Z · LW(p) · GW(p)

Interesting thought experiment. Do we know an AI would enter a different mental state though?

I am finding it difficult to imagine the difference between software "knowing all about" and "seeing red".

Replies from: Stuart_Armstrong, ImNotAsSmartAsIThinK
comment by Stuart_Armstrong · 2016-05-25T12:12:09.128Z · LW(p) · GW(p)

Do we know an AI would enter a different mental state though?

We could program it that way.

comment by ImNotAsSmartAsIThinK · 2016-05-28T15:51:17.583Z · LW(p) · GW(p)

Arguably it could simulate itself seeing red and replace itself with the simulation.

I think the distinction between 'knowing all about' and 'seeing' red is captured in my box analogy. The brain state is a box. There is another box inside it; call this 'understanding'. We call something inside the first box 'experienced'. So the paradox here is that the two distinct states [experiencing (red) ] and [experiencing ( [understanding (red) ] ) ] are both brought under the header [knowing (red)], and this is really confusing.

comment by TheAncientGeek · 2016-05-24T19:14:35.408Z · LW(p) · GW(p)

The point of Mary's Room, also known as the Knowledge Argument, is not just that Mary goes into a new state when she sees red, but that she learns something from it. By most people's intuitions, that is rather exceptional, because most state transitions in most entities have nothing to do with knowledge. It actually is about consciousness, in the sense of information that is only accessible subjectively. The reward channel thing is only an exact parallel to Mary's Room if the AI learns something from having its channel stimulated, and if it could not have learned it from studying its source code, or some other objective procedure.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2016-05-25T12:16:44.966Z · LW(p) · GW(p)

Mary certainly experiences something new, but does she learn something new? Maybe for humans. Since we use empathy to project our own experiences onto those of others, humans tend to learn something new when they feel something new. If we already had perfect knowledge of the other, it's not clear that we learn anything new, even when we feel something new.

Replies from: TheAncientGeek, SilentCal
comment by TheAncientGeek · 2016-05-28T08:16:44.894Z · LW(p) · GW(p)

Mary certainly experiences something new, but does she learn something new?

That's the question. If you don't have an answer, you are basically comparing something unknown to something else unknown.

Maybe for humans. Since we use empathy to project our own experiences onto those of others, humans tend to learn something new when they feel something new.

What's the relevance of empathy? If you learn what something is like for you, subjectively, then I suppose empathy will tell you what it feels like for others, in addition. But that is predicated on your having the novel subjective knowledge in the first place, not an alternative to it.

If we already had perfect knowledge of the other, it's not clear that we learn anything new, even when we feel something new.

Mary is posited as having perfect objective knowledge, and only objective knowledge. Whether that encompasses whatever there is to subjective knowledge is the whole question.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2016-05-31T14:13:37.523Z · LW(p) · GW(p)

Humans also have a distinction between alief and belief that seems to map closely here. Most people believe that stoves are hot and that torture is painful. However, they'd only alieve these things if they experienced either one. So part of experiencing qualia might be moving things to the alief level.

So what would we say about a Mary that has never touched anything hot, and has not only a full objective understanding of what hotness is and what kinds of items are hot, but has trained her instincts to recoil in the correct way from hot objects, etc.? It would seem that that kind of Mary (objective knowledge + correct aliefs) would arguably learn nothing from touching a hot stove.

It also strikes me as interesting that the argument is made only about totally new sensations or qualia. When I'm looking at something red, as I am right now, I'm experiencing the qualia, yet not knowing anything new. So any gain of info that Mary has upon seeing red for the first time can only be self-knowledge.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-06-01T11:05:51.039Z · LW(p) · GW(p)

So what would we say about a Mary that has never touched anything hot, and has not only a full objective understanding of what hotness is and what kinds of items are hot, but has trained her instincts to recoil in the correct way from hot objects, etc.? It would seem that that kind of Mary (objective knowledge + correct aliefs) would arguably learn nothing from touching a hot stove.

The argument could go through if it could be argued that aliefs are the only thing anyone gets from novel experiences.

When I'm looking at something red, as I am right now, I'm experiencing the qualia, yet not knowing anything new. So any gain of info that Mary has upon seeing red for the first time can only be self-knowledge.

I don't follow that. You don't learn anything new the second or third time you are told that Oslo is the capital of Norway...what's that got to do with self-knowledge?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2016-06-01T12:03:49.224Z · LW(p) · GW(p)

The argument could go through if it could be argued that aliefs are the only thing anyone gets from novel experiences.

This gets tricky. Suppose Mary has all the aliefs of hot pain, and also has the knowledge and aliefs of standard pain types. Then it would seem she learns nothing from the experience. She wouldn't even say "oh, hot pain is kind of a mix of 70% cold pain and 30% sharp pain" (or whatever), because she'd know that fact from objective observations.

However, this is a case where Mary could use her own past experience to model or imagine experiencing hot pain ahead of time. So it's not clear what's really going on here in terms of knowledge.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-06-03T18:17:15.107Z · LW(p) · GW(p)

If you look at it in terms of objectively confirmable criteria, Mary may not seem to learn much, but looking at things that way is kind of question-begging.

Replies from: Stuart_Armstrong, hairyfigment
comment by Stuart_Armstrong · 2016-06-06T10:19:46.634Z · LW(p) · GW(p)

Can you clarify? It seems that there are two clear cases: a) Mary has the aliefs of pain and the necessary background to imagine the experience. She learns nothing by experiencing it herself ("yep, just as expected"). b) Mary has no aliefs and cannot imagine the experience. She learns something.

Then we get into odd situations where she has the aliefs but not the imagination, or vice versa. Then she does learn something - maybe? - but this is an odd situation for a human to be in.

My impression here is that as we investigate the problem further, the issue will dissolve. We'll confront issues like "can a sufficiently imaginative human use objective observations to replicate within themselves the subjective state that comes from experiencing something?" and end up with a better understanding of Mary's room, new qualia, what role aliefs play in knowledge, etc... - but the case against physicalism will be gone.

If I ever have the time, I'll try and work through that thoroughly.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-06-22T16:37:35.854Z · LW(p) · GW(p)

The point about learning is not essential; it is just here to dramatise the real point, which is the existence of subjective states.

If I ever have the time, I'll try and work through that thoroughly

Promissory note accepted.

comment by hairyfigment · 2016-06-03T20:48:01.505Z · LW(p) · GW(p)

Stop being a sophist about terminology. This sequence showed how a purely physical world could produce something; I couldn't care less if you choose to call that something "subjective knowledge" or not. The point is that it doesn't disprove physicalism.

If you want to make an argument, make an argument about the facts. To discuss a category without discussing its purpose(s) is to lie. Or as Scott put it, "Categories are made for man and not man for the categories."

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-06-04T12:32:39.636Z · LW(p) · GW(p)

The answer to "it s been explained in the sequences" is usually read the comments...", in this case RobbB's lengthy quote from Chalmers.

"[I]magine that we have created computational intelligence in the form of an autonomous agent that perceives its environment and has the capacity to reflect rationally on what it perceives. What would such a system be like? Would it have any concept of consciousness, or any related notions?

"To see that it might, note that one the most natural design such a system would surely have some concept of self — for instance, it would have the ability to distinguish itself from the rest of the world, and from other entities resembling it. It also seems reasonable that such a system would be able to access its own cognitive contents much more directly than it could those of others. If it had the capacity to reflect, it would presumably have a certain direct awareness of its own thought contents, and could reason about that fact. Furthermore, such a system would most naturally have direct access to perceptual information, much as our own cognitive system does.

"When we asked the system what perception was like, what would it say? Would it say, "It's not like anything"? Might it say, "Well, I know there is a red tricycle over there, but I have no idea how I know it. The information just appeared in my database"? Perhaps, but it seems unlikely. A system designed this way would be curiously indirect. It seems much more likely that it would say, "I know there is a red tricycle because I see it there." When we ask it in turn how it knows that it is seeing the tricycle, the answer would very likely be something along the lines of "I just see it."

"It would be an odd system that replied, "I know I see it because sensors 78-84 are activated in such-and-such a way." As Hofstadter (1979) points out, there is no need to give a system such detailed access to its low-level parts. Even Winograd's program SHRDLU (1972) did not have knowledge about the code it was written in, despite the fact that it could perceive a virtual world, make inferences about the world, and even justify its knowledge to a limited degree. Such extra knowledge would seem to be quite unnecessary, and would only complicate the processes of awareness and inference.

"Instead, it seems likely that such a system would have the same kind of attitude toward its perceptual contents as we do toward ours, with its knowledge of them being directed and unmediated, at least as far as the system is concerned. When we ask how it knows that it sees the red tricycle, an efficiently designed system would say, "I just see it!" When we ask how it knows that the tricycle is red, it would say the same sort of thing that we do: "It just looks red." If such a system were reflective, it might start wondering about how it is that things look red, and about why it is that red just is a particular way, and blue another. From the system's point of view it is just a brute fact that red looks one way, and blue another. Of course from our vantage point we know that this is just because red throws the system into one state, and blue throws it into another; but from the machine's point of view this does not help.

"As it reflected, it might start to wonder about the very fact that it seems to have some access to what it is thinking, and that it has a sense of self. A reflective machine that was designed to have direct access to the contents of its perception and thought might very soon start wondering about the mysteries of consciousness (Hofstadter 1985a gives a rich discussion of this idea): "Why is it that heat feels this way?"; "Why am I me, and not someone else?"; "I know my processes are just electronic circuits, but how does this explain my experience of thought and perception?"

"Of course, the speculation I have engaged in here is not to be taken too seriously, but it helps to bring out the naturalness of the fact that we judge and claim that we are conscious, given a reasonable design. It would be a strange kind of cognitive system that had no idea what we were talking about when we asked what it was like to be it. The fact that we think and talk about consciousness may be a consequence of very natural features of our design, just as it is with these systems. And certainly, in the explanation of why these systems think and talk as they do, we will never need to invoke full-fledged consciousness. Perhaps these systems are really conscious and perhaps they are not, but the explanation works independently of this fact. Any explanation of how these systems function can be given solely in computational terms. In such a case it is obvious that there is no room for a ghost in the machine to play an explanatory role.

"All this is to say (expanding on a claim in Chapter 1) that consciousness is surprising, but claims about consciousness are not. Although consciousness is a feature of the world that we would not predict from the physical facts, the things we say about consciousness are a garden-variety cognitive phenomenon. Somebody who knew enough about cognitive structure would immediately be able to predict the likelihood of utterances such as "I feel conscious, in a way that no physical object could be," or even Descartes's "Cogito ergo sum." In principle, some reductive explanation in terms of internal processes should render claims about consciousness no more deeply surprising than any other aspect of behavior. [...]

"At this point a natural thought has probably occurred to many readers, especially those of a reductionist bent: If one has explained why we say we are conscious, and why we judge that we are conscious, haven't we explained all that there is to be explained? Why not simply give up on the quest for a theory of consciousness, declaring consciousness itself a chimera? Even better, why not declare one's theory of why we judge that we are conscious to be a theory of consciousness in its own right? It might well be suggested that a theory of our judgments is all the theory of consciousness that we need. [...]

"This is surely the single most powerful argument for a reductive or eliminative view of consciousness. But it is not enough. [...] Explaining our judgments about consciousness does not come close to removing the mysteries of consciousness. Why? Because consciousness is itself an explanandum. The existence of God was arguably hypothesized largely in order to explain all sorts of evident facts about the world, such as its orderliness and its apparent design. When it turns out that an alternative hypothesis can explain the evidence just as well, then there is no need for the hypothesis of God. There is no separate phenomenon God that we can point to and say: that needs explaining. At best, there is indirect evidence. [...]

"But consciousness is not an explanatory construct, postulated to help explain behavior or events in the world. Rather, it is a brute explanandum, a phenomenon in its own right that is in need of explanation. It therefore does not matter if it turns out that consciousness is not required to do any work in explaining other phenomena. Our evidence for consciousness never lay with these other phenomena in the first place. Even if our judgments about consciousness are reductively explained, all this shows is that our judgments can be explained reductively. The mind-body problem is not that of explaining our judgments about consciousness. If it were, it would be a relatively trivial problem. Rather, the mind-body problem is that of explaining consciousness itself. If the judgments can be explained without explaining consciousness, then that is interesting and perhaps surprising, but it does not remove the mind-body problem.

"To take the line that explaining our judgments about consciousness is enough [...] is most naturally understood as an eliminativist position about consciousness [...]. As such it suffers from all the problems that eliminativism naturally faces. In particular, it denies the evidence of our own experience. This is the sort of thing that can only be done by a philosopher — or by someone else tying themselves in intellectual knots. Our experiences of red do not go away upon making such a denial. It is still like something to be us, and that is still something that needs explanation. [...]

"There is a certain intellectual appeal to the position that explaining phenomenal judgments is enough. It has the feel of a bold stroke that cleanly dissolves all the problems, leaving our confusion lying on the ground in front of us exposed for all to see. Yet it is the kind of "solution" that is satisfying only for about half a minute. When we stop to reflect, we realize that all we have done is to explain certain aspects of behavior. We have explained why we talk in certain ways, and why we are disposed to do so, but we have not remotely come to grips with the central problem, namely conscious experience itself. When thirty seconds are up, we find ourselves looking at a red rose, inhaling its fragrance, and wondering: "Why do I experience it like this?" And we realize that this explanation has nothing to say about the matter. [...]

"This line of argument is perhaps the most interesting that a reductionist or eliminativist can take — if I were a reductionist, I would be this sort of reductionist — but at the end of the day it suffers from the problem that all such positions face: it does not explain what needs to be explained. Tempting as this position is, it ends up failing to take the problem seriously. The puzzle of consciousness cannot be removed by such simple means."

—David Chalmers, The Conscious Mind: In Search of a Fundamental Theory (1996)

Replies from: hairyfigment
comment by hairyfigment · 2016-06-04T17:31:08.169Z · LW(p) · GW(p)

Yeah, I thought you might go to Chalmers' Zombie Universe argument, since the Mary's Room argument is an utter failure and the linked sequence shows this clearly. But then phrasing your argument as a defense of Mary's Room would be somewhat dishonest, and linking the paper a waste of everyone's time; it adds nothing to the argument.

Now we've almost reached the actual argument, but this wall of text still has a touch of sophistry to it. Plainly none of us on the other side agree that the Martha's Room response "denies the evidence of our own experience." How does it do so? What does it deny? My intuition tells me that Martha experiences the color red and the sense of "ineffable" learning despite being purely physical. Does Chalmers have a response except to say that she doesn't, according to his own intuition?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2016-06-04T21:13:41.554Z · LW(p) · GW(p)

Yeah, I thought you might go to Chalmers' Zombie Universe argument

Chalmers does not mention zombies in the quoted argument, in view of which your comment would seem to be a smear by association.

, since the Mary's Room argument is an utter failure.

Saying that doesn't make it so, and putting it in bold type doesn't make it so.

the linked sequence shows this

You have that the wrong way around. The copied passage is a response to the sequence. It needs to be answered itself.

My intuition tells me that Martha experiences the color red and the sense of "ineffable" learning despite being purely physical

You also don't prove physicalism by assuming physicalism.

Replies from: hairyfigment
comment by hairyfigment · 2016-06-06T06:20:15.458Z · LW(p) · GW(p)

Mary's Room is an attempt to disprove physicalism. If an example such as Martha's Room shows how physicalism can produce the same results, the argument fails. If, on the other hand, one needs an entirely different argument to show that doesn't happen, and this other argument works just as well on its own (as Chalmers apparently thinks) then Mary's R adds nothing and you should forthrightly admit this. Anything else would be like trying to save an atheist argument about talking snakes in the Bible by turning it into an argument about cognitive science, the supernatural, and attempts to formalize Occam's Razor.

The Zombie Universe Argument seems like the only extant dualist claim worth considering, because Chalmers at least tries to argue that (contrary to my intuition) a physical agent similar to Martha might not have qualia. But even this argument just seems to end in dueling intuitions. (If you can't go any further, then we should mistrust our intuitions and trust the abundant evidence that our reality is somehow made of math.)

Possibly one could construct a better argument by starting with an attempt to fix Solomonoff Induction.

comment by SilentCal · 2016-05-27T20:17:56.766Z · LW(p) · GW(p)

Agreed, with the addendum that human intuition has trouble fathoming the 'perfect knowledge of the other' scenario. If seeing red caused Mary to want to see more color, we'd be tempted to describe it as her 'learning' the pleasure of color, whether or not Mary's predictions about anything changed.