Other minds and bats: the vampire Turing test

post by Stuart_Armstrong · 2014-03-25T13:36:38.932Z · LW · GW · Legacy · 52 comments

Thoughts inspired by Yvain's philosophical role-playing post.

Thomas Nagel produced a famous philosophical thought experiment, "What Is It Like to Be a Bat?". In it, he argued that the reductionist understanding of consciousness is insufficient, since there exist beings - bats - that have conscious experiences humans cannot understand. We cannot know what "it is like to be a bat", and looking reductively at bat brains, bat neurones, or the laws of physics cannot (allegedly) grant us any understanding of this subjective experience. Therefore there remains an unavoidable subjective component to the problem of consciousness.

I won't address this issue directly (see for instance this, on the closely related subject of qualia), but instead look at the question: suppose someone told us that they actually knew what it was like to be a bat (as well as what it was like to be a human). Call such a being a vampire, for obvious reasons. So if someone claimed they were a vampire, how would we test this?

We can't simply ask them to describe what it's like to be a bat - it's perfectly possible that they know what it's like to be a bat but cannot describe it in human terms (just as we often fail to describe certain types of experiences to those who haven't had them). Could we run a sort of Turing test - maybe implant the putative vampire's brain into a bat body, and see how bat-like it behaved? But, as Nagel pointed out, this could be a test of whether they know how a bat behaves, not whether they know what it's like to be a bat.

I posit that one possible solution is to use the approach laid out in my post "the flawed Turing test". We need to pay attention to how the "vampire" got their knowledge. If the vampire is a renowned expert on bat behaviour and social interactions, who is also interested in sonar and paragliding - then their functioning as a bat is only weak evidence that they actually know what it is like to be a bat. But suppose instead that their knowledge comes from another source - maybe the vampire is a renowned brain expert, who has grappled with philosophy of mind and spent many years examining the functioning of bat brains. But, crucially, they have never seen a full living bat in the wild or in the lab, they've never watched a nature documentary on bats, they've never even seen a photo of a bat. In that case, if they behave correctly when transplanted into a bat body, then it's strong evidence that they actually understand what it's like to be a bat.

Similarly, maybe they got their knowledge after a long conversation with another "vampire". We have the recording of the conversation, and it's all about mental states, imagery, emotional descriptions and visualisation exercises - but not about physical descriptions or bat behaviour. In that case, as above, if they can function successfully as a bat, this is evidence of them really "getting it".

In summary, we can say "that person likely knows what it is like to be a bat" if "knowing what it's like to be a bat" is the most likely explanation for what we see. If they behave exactly like a bat when in a bat body, and we know they have no prior experience that teaches them how to behave like a bat (but a lot about the bat's mental states), then we can conclude that it's likely that they genuinely know what it's like to be a bat, and are implementing this knowledge, rather than imitating behaviour.


Comments sorted by top scores.

comment by Slider · 2014-03-26T01:59:53.937Z · LW(p) · GW(p)

I once saw a blind kid on TV who had developed a way of clicking with his mouth that he could use to navigate sidewalks. This was pretty cool, and it made me pay attention to my own sense of hearing and wonder what it must be like to use that kind of ability. I paid close attention in situations where it might be possible to hear the location of walls etc. Doing this for some time changed my relationship to my hearing.

I became aware when a sound is louder because additional bounces of wave energy hit my ear, rather than only the direct line-of-sight propagation. I picked up the threshold at which I hear the primary sound and its echo as one simultaneous sound or as two separate sounds. After paying attention to things that I theoretically knew should happen, I could tap into new kinds of "feels" in the sound. My mind somehow clicked and connected geometric forms to the echo timing profile. I consciously understand only discrete sounds, but the prolonged, direction-changing continuous echo that a sloped wall makes I could sense intrinsically. And I found out, for example, that claps are very directional, and you can cast different clapping at a wall rather like you would shine a flashlight.

All in all, my sense of hearing became much more like my sense of sight, with good 3D structure. Experiencing this new way of hearing was very interesting and cool. However, once I had settled into hearing like an echolocator, I had trouble conceptualising what it is like not to hear that way. My guess is that if you don't pay much attention, a lot of information goes unextracted. But it was a big surprise that it wasn't "obvious" how much information a given sound includes. I didn't gain a better ear. The amount of information I was receiving must have stayed the same; I guess I previously couldn't structure it properly.

And I realised that I had at least two hearing modes even before this new "3D" mode. A "mono" mode, where you can decipher what kind of sound it is and recognise what's causing it, but know only that it is "nearby, within hearing distance" - you can't face the sound, and need to look for visual clues as to where it is coming from. Then there is a kind of "arrow" mode, where you know to look in the direction the sound is coming from. But it is kind of cool that in "3D" mode I can hear around a corner what kind of space is there, which I can't do in "arrow" mode.

Thinking about how sound waves work, it kind of makes sense how the perception changes between "mono" and "arrow" mode. If you are in an empty room and make a big enough noise, there is significant echo coming from every direction. Without being able to read the timing fine-structure, the sound feels like it is coming from everywhere. However, if in the same kind of room you don't make quite as much noise, then the component travelling directly towards you will dominate the echoes. There is also an explanation for why the "arrow" isn't a pinpointer but a fuzzy approximation: when you try to read the texture/shape information as location information, it will give a slightly contradictory result.
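The timing fine-structure involved here has concrete numbers behind it. A purely illustrative back-of-the-envelope sketch (my own figures, not from the comment: textbook speed of sound of ~343 m/s and simple 1/r spreading; the actual perceptual echo-fusion threshold varies from a few milliseconds up to several tens, depending on the sound):

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, dry air at roughly 20 C

def echo_delay_ms(wall_distance_m):
    """Delay between making a sound and hearing its echo off a wall, in ms.

    The echo travels to the wall and back, i.e. twice the wall distance.
    """
    return 2.0 * wall_distance_m / SPEED_OF_SOUND * 1000.0

def echo_level_db(wall_distance_m):
    """Echo level relative to the same sound heard from 1 m away.

    Counts only spherical (1/r) spreading; ignores air absorption and
    reflection loss, so real echoes are somewhat weaker than this.
    """
    return -20.0 * math.log10(2.0 * wall_distance_m / 1.0)

for d in (1, 5, 17):
    print(f"wall at {d:2d} m: echo after {echo_delay_ms(d):5.1f} ms,"
          f" {echo_level_db(d):6.1f} dB relative to 1 m")
```

A wall a metre away returns its echo in under 6 ms, well inside the fusion range, which is why nearby surfaces read as a "feel" of the sound rather than as a separate echo, while a distant facade produces a clearly separate slap.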

I am using language here where I first feel a certain way, then am puzzled about why it would feel that way, and then start theorising. I guess it's worth noting that having more theory won't give you insight into what your experience is. It was kind of mind-opening to be able to target those feelings relatively theory-free, and then have the joy of finding the explanation. For example, sound propagation first felt "waterlike", and only afterwards did I confirm that this makes perfect sense, as the waves are not equal in strength in all directions and are dampened as they propagate.

I really couldn't confirm that I wasn't just reading too much into what I was supposedly experiencing - that I wasn't just pretending to experience things because I wanted to experience them that way. But after I acquired the skill, I would passively pick up sounds and have 3D impressions of them first, without actively trying to hear anything (and usually be frightened by it), and only then turn to look at them. When the expectations formed by hearing were confirmed by sight, I knew this was a legitimate change in perception. For example, I would ride a bike past a post and suddenly be very aware of something square on my right, the wheel sounds giving enough echo basis that the post would pop out against the background much more than it does visually. Or passing alleys that made a sudden echo chamber on an otherwise echoless street. I also found out that glass sticks out a lot more than other materials (oh, there is a large object to my right - oh, it's just a window).

For me, I have discovered what it is like to be an echolocator, which I guess is supposed to be the main alien part of the bat metaphor. There is also a joke about how drugs make you "taste blue", but I have come to experience that, and how it makes sense to "see sound". But the behavioural effects of this different kind of experiencing are not that telling or direct. I would not pass the vampire Turing test, because it isn't quite to the point; it would need to be refined for that, and it is not trivial how that refinement would be done.

The operation that made me undergo this change seems to be paying attention. It doesn't seem that I learned a new fact, although I clearly see that having a theory of why I am feeling what I am feeling did have a guiding effect. Maybe call it an imagination aid? I would say it might be a deficiency in understanding, not in knowledge, that keeps people from being able to experience what it is like to be a bat. And it is possible for humans to understand what it is like to be an echolocator. I would guess that if I had sufficiently clear descriptions of the kinds of "facets" my perceptions include, I should be able to play out how I would experience a situation if I had that kind of sense. So I think it might be possible to imagine seeing 4 primary colours, but it takes skill in this "pay attention to your qualia structures" thingy that people are not in general very good at.

Replies from: primality
comment by primality · 2014-03-27T21:44:43.608Z · LW(p) · GW(p)

How long did it take to build this skill, and how did you do it?

Replies from: Slider, gwern
comment by Slider · 2014-03-30T11:53:37.090Z · LW(p) · GW(p)

Around 3-4 weekends. Although being actively interested in the sounds of your surroundings is a pretty big part of it, and that happened between the more intense sessions. I found that having and considering edge cases that are just at the limit of your perception is the most developing. I used a walk-in closet to familiarise myself with the direct sound in contrast to the echo. Empty rooms are actually noisy: the drop in volume is significant enough that there is a clear difference in the effort needed to produce an equal sound, even in mono mode. I also tried to have a reference sound that I could produce uniformly in a variety of places without disrupting other people. One was clicking the base of my tongue against my palate. However, this is a little confusing, as the head-internal acoustics are not the most straightforward and interfere with the external acoustics. I also had a button I would click into place and out of it. The trouble with this was that it often had insufficient volume to get a proper feel for the environment.

One must not forget just being curious about sounds that happen to be in the environment. Emergency vehicles are a great source of Doppler, and their volume output is really great. In urban areas there are plenty of clear surfaces and surface gaps, giving the moment a nice variable microstructure. In more wide-open spaces, the scale of things makes it easier to pick up on the echo components. Cars in general provide a pretty monotone moving sound source. Riding a bike also provides a constant mechanical noise that has a fixed position relative to you and doesn't really tire you out in generating (+ it is a socially acceptable way of being noisy (you can even get away with devices explicitly designed to generate noise (at least if you are young enough))). I didn't really use it myself, but cell phone button/UI noises should be pretty standard, narrow and somewhat acceptable.

In private areas, clapping has a pretty narrow sound profile, although it is pretty directional, which can make the volume non-standard when you haven't mastered it yet. Listening to the wall of a detached house with clapping could be done within its yard. The smaller the scale you are working at, the higher you want the pitch to be (or you can only latch onto the higher components).

The main thing is to be aware and ready to perceive. It's clearly a very learnable skill; the main obstacle is paying that much attention. I didn't use any reference or ready-made learning materials. Having a goal was plenty for providing steps/structure to proceed (i.e. thinking that there are probably harder and easier sounds, you focus on what could determine the easiness/hardness of a sound; having a bunch of hearing experiences and focusing on what categories you can place them in can then be used to anticipate categorising novel experiences, etc.). You have ears - use them, play with them. Shockingly, most people really don't. I have yet to generalise what other things could be achieved with "seriously playing". Being able to take findings into account "mid-flight" might be a critical thing that most learning alternatives lack. You don't need that many repetitions, but you need to be level-appropriate (even as that level shifts).

comment by gwern · 2014-03-27T23:58:57.237Z · LW(p) · GW(p)

https://en.wikipedia.org/wiki/Human_echolocation mentions some training courses, and checking pages on them, they don't talk about units of years or months, but short 'workshops', which usually means that they won't last more than 3-4 days. So with intense training, it may be learnable quickly.

comment by NancyLebovitz · 2014-03-26T17:17:10.502Z · LW(p) · GW(p)

I talked with a woman who said she could visualize in four dimensions. I asked her how many corners a tesseract has. A correct answer would have given me very little information, but she didn't give me a correct answer.

Instead, she looked as though she was visualizing something, and then said she was trying to count the corners but couldn't keep track.

I asked her what a hypersphere looked like, and she said, "Like a regular sphere, but rounder".

I can't visualize in four dimensions, but it seems at least plausible that she could.

This seems like it's even harder to test than knowing what it's like to be a bat. On the other hand, your bat test is pretty hypothetical. How would you rate someone who showed bat-like brain activation when imagining being a bat?

Replies from: Squark, gwern
comment by Squark · 2014-03-26T20:56:08.985Z · LW(p) · GW(p)

I can visualize in 4D by projecting to 3D and keeping the 4th dimension as an "extra parameter". A tesseract has 16 corners; it's rather easy to see.
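The corner count is easy to verify mechanically without any visualization: the corners of a unit n-cube are exactly the n-tuples of 0s and 1s, so a tesseract (n = 4) has 2^4 = 16 of them. A quick illustrative check, including the matching edge count (edges join corners that differ in exactly one coordinate):

```python
from itertools import product

def hypercube_vertices(n):
    """All corners of the unit n-cube: every n-tuple of 0s and 1s."""
    return list(product((0, 1), repeat=n))

tesseract = hypercube_vertices(4)
print(len(tesseract))  # 16 corners = 2**4

# Edges connect corners that differ in exactly one coordinate.
edges = [(u, v) for i, u in enumerate(tesseract)
         for v in tesseract[i + 1:]
         if sum(a != b for a, b in zip(u, v)) == 1]
print(len(edges))  # 32 edges = 4 * 2**3
```

This is of course the test's weakness noted above: getting the right number is little evidence of genuine visualization, since it can be computed symbolically.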

comment by gwern · 2014-03-26T17:35:22.085Z · LW(p) · GW(p)

You could give her a link to a 4D game and see if she plays better than most people do. (There's a fair number; I once played http://www.urticator.net/maze/ and found it quite confusing.)

Replies from: NancyLebovitz
comment by NancyLebovitz · 2014-03-26T17:49:45.150Z · LW(p) · GW(p)

Reasonable-- not a test that was available at the time, though.

I've heard that some mathematicians say they can visualize in four or more dimensions, and there's some evidence of this in the math they're good at, but a fast search doesn't turn it up.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-03-26T18:55:53.298Z · LW(p) · GW(p)

From my maths background, I have some limited 4D visualisation abilities, but they're highly geometrical and dependent on symmetries (i.e. I can visualise a tesseract better than most, but not a tesseract in general position). Others seemed to be better - I really should have asked!

comment by V_V · 2014-03-25T18:41:14.707Z · LW(p) · GW(p)

I'm not sure what sort of knowledge we are talking about, and I suspect that Nagel's argument is based on an equivocation.

If we are talking about epistemic beliefs, expectations on observations, then we can certainly study the phenomenon of bat consciousness with a reductionist (that is, scientific) approach.

What we can't do is to simulate the mental states of a bat using our innate agent-simulation mental machinery, for the same reason a color-blind person can't simulate the mental state of a non-color-blind person perceiving red and green as different colors, or Mary can't simulate perceiving colors.
These experiences are a type of knowledge that can't be obtained from scientific research, even though they aren't (in principle) intrinsically epistemically meaningful: if you take Mary outside her black-and-white room, she will experience new mental states even though her epistemic beliefs essentially don't change.

It seems to me that this issue stems just from a limitation of our mental machinery, not an intrinsic flaw of the scientific method or a non-physical nature of consciousness.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-03-26T15:42:52.864Z · LW(p) · GW(p)

If everything can be explained in reductionistic terms, then so can what it feels like to actually instantiate an experience. If actual experience can't be explained reductionistically, then reductionism fails.

I see no reason why Mary should not update on "reductionism can explain anything". After all, she has just had a directly contrary experience.

Replies from: Pfft, V_V
comment by Pfft · 2014-03-26T19:03:40.880Z · LW(p) · GW(p)

The question is what "explains" means. Her experience shows that "reading about science in a grey room cannot put your brain into arbitrary states" (e.g. the states that arise from coloured rooms). So there are some things it cannot make you "know", if by "know" you mean "empathize with" (== "simulate with innate agent-simulation machinery"). But that's not really surprising - it mostly sounds interesting because of equivocation with other senses of "know".

Daniel Dennett's analysis of Mary basically works by pointing out this equivocation. He notes that the kind of knowledge Mary does have is enough to do impressive things, such as recognizing a prank if she is handed a blue banana. So the thing she "learns" is less dramatic than what the phrasing of the story ("now I finally know what it is like to experience colour!") would lead you to expect.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-03-26T19:17:35.386Z · LW(p) · GW(p)

If physical reductionism explains everything, then it explains what it is like to be in some brain state. If you assume that instantiating a brain state is necessary to fully understand it, you have conceded the point of Nagel and the other qualiaphiles. It is not surprising - it is intuitive - that there is something special about instantiation. However, that is not compatible with strong reductionist intuitions. That is what makes Nagel's argument interesting.

If you are willing to adopt how-was-it-for-me behaviourism, there is nothing special about Mary. But most find that position counterintuitive.

Replies from: Pfft
comment by Pfft · 2014-03-26T19:54:24.551Z · LW(p) · GW(p)

So let me preface this by saying that I don't have any world-shattering insights about this; I think my view is bog-standard naive physicalism. But it seems to me that naive physicalism can handle these issues.

I think the concepts "explain" and "understand" cause confusion here, so let's switch to the hopefully easier "is experiencing" and "has ever experienced". So suppose that for any given subjective sensation, say redness, there is some class of brain states such that an observer is experiencing that sensation iff their brain state is in the class. We can figure out what the class is, e.g. by asking people what they are experiencing while we carefully examine their brains. Then, to ("fully") experience a sensation, you need to instantiate a given brain state. Does this mean that there is something special about instantiation? I mean, yes there is - but in a quite trivial sense, which seems compatible with reductionist intuitions.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-03-26T20:23:43.283Z · LW(p) · GW(p)

I am not sure what is supposed to be trivial or special here. If you believe that there is something extra about instantiating a brain state, such that you have to instantiate it to fully understand it, you have already rejected the strongest form of reductionism, because strong reductionism claims to explain everything reductionistically, i.e. by presenting some sufficiently complicated formula.

You can argue that you only need to put labels on brain states, and thereby explain Red as the brain state with a certain label. That would be ontologically neutral, and therefore not commit you to a non-physicalist ontology. However, since it is non-committal, it doesn't imply physicalism either, since you could equally well be labelling subjective states with brain scans. You have decided to go one way and not the other, but that is your decision.

Taking correlation to be causation is one thing : taking it to be identity is another.

Replies from: Pfft
comment by Pfft · 2014-03-26T23:05:37.773Z · LW(p) · GW(p)

You keep using the word "understand" without defining it. I thought V_V's original comment was good because it points out that there are two plausible meanings that could be in play. If you want "predict observations", then nothing is missing. If you want "simulate using innate simulation machinery", then indeed you need something extra, but it's not a better metaphysics, it's a JTAG port to your brain.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-03-27T07:35:21.872Z · LW(p) · GW(p)

Why would I even want to simulate a state in my own brain, unless it brings some increase in knowledge?

You can rescue reductionism by maintaining that you get nothing from instantiating the state yourself, that Mary has no "aha!"

You can also rescue it by maintaining that it predicts the subjective state that brings about the aha.

But you can't maintain that there is some point to personal instantiation which has no impact on reductionism. If it's a thing, and it's not predicted by reductionism, then there is a thing reductionism can't explain in its own terms.

Replies from: FeepingCreature
comment by FeepingCreature · 2014-03-27T11:48:57.968Z · LW(p) · GW(p)

Reductionism can explain it - i.e. causally justify and predict its appearance and behavior. But reductionism cannot explain it to people - i.e. induce in their heads the appropriate pattern by vocal communication. Those are different meanings of the same word. Reductionism can causally map any bat-brain output to its inputs and state, but it cannot be used as an argumentative tool to induce in listeners an analogue of the bat's mindstate.

When a tree falls in a forest...

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-03-27T12:34:58.315Z · LW(p) · GW(p)

Reductionism can't explain it. It can't predict novel experience, e.g. what it is like to be a bat on LSD. Reductionism also can't predict experience by applying consistent laws. You can match off known brain states to known subjective experiences in a kind of dictionary or database, but that is not what we normally mean by EXPLANATION.

The idea that you cannot have an explanation that you cannot explain to anybody is problematic. How is that different from not having an explanation? (Owen Flanagan suggests that scientists could test a predictive theory of qualia against their own experience. But again, that abandons strong physicalism, since it accepts the existence of irreducible subjectivity.)

Replies from: FeepingCreature
comment by FeepingCreature · 2014-03-27T15:29:32.824Z · LW(p) · GW(p)

Reductionism can't explain it. It can't predict novel experience, e.g. what it is like to be a bat on LSD.

It doesn't predict novel experience; in other words, it predicts I will not suddenly feel like a bat on LSD. This is correct, since I won't, not being a bat.

You can match off known brain states to known subjective experiences in a kind of dictionary or database, but that is not what we normally mean by EXPLANATION.

What I mean by explanation is "provide a causal model that predicts future behavior". What do you mean by explanation?

The idea that you cannot have an explanation that you cannot explain to anybody is problematical. How is that different to not having an explanation?

I have no idea what you're saying here. Did you mean "can have an explanation that you cannot explain to anybody"? - and that's not what I said. I said reductionism cannot be used as a tool to induce in people's minds patterns analogous to a bat's mind. Let's taboo the word "explanation" here - say what you mean.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-03-29T15:12:31.906Z · LW(p) · GW(p)

It is no advertisement for a reductionistic theory of qualia that it can't ever make novel predictions, since other reductionistic theories can predict novel phenomena.

It is no advertisement for a reductionistic theory of qualia that it doesn't predict novel experiences in someone taking LSD, since, empirically, such experiences are reported.

Saying that something won't happen is not much of a prediction, or I can predict tomorrow's weather by saying there won't be a tornado.

Reductive explanations show how higher-level properties and behaviours arise from lower-level properties and behaviours. Stating that they arise is not showing how. Explanations answer how-questions, by relating concepts, so as to increase understanding in the reader.

We have well-known examples of reductive explanations, such as the reduction of heat to molecular motion. They are conceptual. They are not lookup tables that match off one property against another. Nor are they causal models, in the sense of directed graphs, since a directed graph has no conceptual content.

We don't know how qualia, the higher-level properties in question, relate to their bases. If we did, you could reverse the reduction and construct code or electronics that could generate qualia. Which we can't, at all - although we can build memory and cognition, and even, absent qualia, perception.

You still haven't said why you think instantiation is important.

If we had explanations that outputted some formula or sentence that told us what experience was like, we would not need personal instantiation ... Mary would not say aha! If such an explanation exists, I would like to see it.

If our putative explanations don't tell us what the experiences are like, then instantiation would be necessary to know what they are like... and you wouldn't have an explanation of qualia, because qualia are what experiences feel like.

Replies from: FeepingCreature
comment by FeepingCreature · 2014-03-30T10:58:05.202Z · LW(p) · GW(p)

What do you expect reductionism to do? We already know what inputs correspond to what qualia, so that cannot be the question. We know that the process very probably involves the brain, because that's where light goes in and talk of qualia comes out. I would add to that that it's exceedingly unlikely that a tiny patch of dense, electrically active tissue would just happen to implement some basic physical law that is not found anywhere else in nature, and in any case this does not seem to me to be necessary.

I think the problem is that reductionist explanations, as they stand, lack the power to construct in our heads a model that bridges sensory, electric input and cognitive experiences. To which I say - would you expect it to? Cognition, reflection and perception are probably the parts of our minds most tightly coupled with the rest of the hardware. They're the areas where intuition has the largest importance for efficient processing, and thus where a physical account adds the least relative value. Complicated design, existing hacky intuitive model - it makes sense that this is the last-understood part of our minds.

That said, I think you're expecting too much from reductionism. What would you expect a working theory of qualia to look like? I think to say that a theory of qualia should allow us to cognitively simulate the subjective experience of a different cognitive model asks something of the theory that our minds cannot deliver. Our wetware has no reason to have this capability.

It's like saying a physical simulation of a loudspeaker is necessarily incomplete unless it can make sound come out of your CPU.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-01T11:03:36.764Z · LW(p) · GW(p)

I consider reductive explanation to be a form of explanation that offers a persuasive association of concepts together with the ability to make quantitative and novel predictions. If someone doubts that there is a reductive explanation of heat, you hand them a textbook of thermodynamics, and the matter is settled.

I consider reductionism to be the handwaving, philosophical claim that reductive explanation is the best explanation and/or that it will eventually succeed in all cases.

So a reductive explanation of qualia would be something uncontroversial, written up in a textbook; whereas reductionism about qualia is the controversial claim that a reductive explanation of qualia is possible.

"We don't need new physics" is a typical handwaving claim for reductionism, which can be countered by the handwaving claim that we do need new physics (e.g. Chalmers' The Conscious Mind).

I don't expect toasters to make coffee, and that isn't a problem, because it is not their function. It isn't obviously a problem that explanations don't induce what they are explaining. The standard explanation of photosynthesis doesn't make me photosynthesise, and is none the worse for it. Explanations are supposed to explain. That is their function. If there is something you can learn by actually having a quale that you didn't learn from a supposed explanation, then the supposed explanation was not a full explanation. That actually having qualia tells you something is what makes it a problem, rather than a wild irrelevancy, that an explanation doesn't induce what it is explaining.

I still don't know why you think the induction of qualia is important.

I don't expect reductive explanations to deliver induction of what they explain. I do expect full explanations to fully explain. If Mary's "Aha!" tells her nothing, she never had a full explanation... and that isn't due to lack of detail in the explanations or brainpower on her part, both of which are waved away in the story.

I don't know what your intuitions are, because you won't tell me. However, I suspect that they may be inconsistent.

Replies from: FeepingCreature
comment by FeepingCreature · 2014-04-02T13:20:49.138Z · LW(p) · GW(p)

If there is something you can learn by actually having a quale that you didn't learn from a supposed explanation, then the supposed explanation was not a full explanation. That actually having qualia tells you something is what makes it a problem

Darn. I had a huge post written up about my theory of how qualia work, and how you'd build a mind that could generate new qualia from a reductionist description, when I realized that I had no way to quantify success, because there's no way to compare qualia between two brains. Our qualia are a feature of the cognitive architecture we use, and it'd be as silly to try and place them side by side as it would to try and compare ID handles between two different databases (even with the same schema, but especially with different schemata).

But this argument goes both ways. If I can't quantify success, how can you quantify failure? How is it possible to say that a machine emulating a bat's mind, or my mind, would lack additional knowledge it would gain from actually having my qualia, if the input/output mapping is already perfect? Wouldn't that additional knowledge then necessarily have to be causally inert, and thus be purged by the next GC run?

The necessary absence of both success and failure hints at incoherence in the question.

I suspect the distinction is that the quale itself, stripped of the information it accompanies, doesn't tell me anything, any more than a database ID does. The meaning comes from what it references, and what it references can be communicated and compared. Not necessarily in a form that allows us to experience it - thus the "feeling that there's something missing" - but you can't communicate that anyway without mangling my cognitive architecture to support bat-like senses, at which point questioning that I have bat qualia will be like questioning that other people "truly perceive red" - mildly silly. The qualia were never what it was about - red isn't about red, it's about wavelength and fire and sunset and roses. The quale is just the database ID. I suspect that the ability to even imagine a different person who perceives green where you perceive red is a bug in our cognition - we gained the ability to model other minds relatively late, and there was no good reason for evolution to program in the fact that we cannot directly compare our database IDs to other people's. (I mean, when was that ever gonna become relevant?)

But that's just my intuition on the topic.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-02T17:04:50.261Z · LW(p) · GW(p)

It's important to distinguish novel qualia from foreign qualia. I may not be able to model bat qualia, but I can elicit a novel quale by trying food I haven't tasted before, etc. Everyone has had an experience of that kind, and almost everyone finds that direct experience conveys knowledge that isn't conveyed by any description, however good.

You seem to have persuaded yourself that qualia don't contain information on the basis of an untested theory. I would suggest that the experiences of novel qualia that everyone has had are empirical data that override theories.

We actually know quite a lot about qualia as a result of having them...it just isn't reductionistic knowledge.

You also seem to have reinvented the idea I call computational zombies. It's physicalistically possible for an AI to be a functional duplicate of a human, but to lack qualia, e.g. by having the wrong physics. However, that doesn't prove qualia are causally idle.

It's also important to distinguish functional idleness and causal idleness. The computational zombie won't have a box labelled "qualia" in its functional design. But functional designs don't do anything unless implemented. A computer is causally driven by physics. If qualia are part of physics, then they're doing the driving along with [the rest of] it.

I don't know what question you think is rendered incoherent.

There may be naturalistic reasons to believe we shouldn't be able to model other minds, but that does not rescue the claim, if you are still making it, that there is a reductive explanation of qualia. Rather, it is an excuse for not having such an explanation.

Replies from: FeepingCreature
comment by FeepingCreature · 2014-04-02T18:22:36.478Z · LW(p) · GW(p)

You seem to have persuaded yourself that qualia don't contain information on the basis of an untested theory. I would suggest that the experiences of novel qualia that everyone has had are empirical data that override theories.

So what information does a new taste contain? What it's similar to, what it's dissimilar to, how to compare it, what it reminds you of, what foods trigger it - but all of that is information that can be known; it doesn't need to be experienced. So what information does the pure quale contain?

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-02T18:46:49.327Z · LW(p) · GW(p)

If that information could be fully known (in a third-person way) then you could just read the label and not drink the wine. Things don't seem to work that way.

Replies from: FeepingCreature
comment by FeepingCreature · 2014-04-02T19:15:04.649Z · LW(p) · GW(p)

Yeah but at that point you have to ask, what makes you think qualia, or rather, "the choice of qualia as opposed to a different representation", is a kind of information about the world? It seems to me more plausible that it's just a fact about the way our minds work. Like, knowing that the bat uses qualia is an implementation aspect of the batmind and can thus probably be separated from the behavior of the bat. (Bats make this easier because I suspect they don't reflect on the fact that they perceive qualia.) Are there functions that can only be expressed with qualia?

As a similar example, imagine a C-Zombie, a human that is not conscious and does not speak of consciousness or having qualia. This human's mind uses qualia, but his model of the world contains no quale-of-qualia, no awareness-of-awareness, no metarepresentation of the act of perception. Can I reimplement his mind, without cheating and making him lie, to not use qualia? Is he a different person after? (My intuition says "yes" and "no".)

[edit] My intuition wishes to update the first response to "maybe, idk".

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-04-05T14:16:52.041Z · LW(p) · GW(p)

Qualia are information for us: if you have no visual qualia, you are blind, etc.

comment by V_V · 2014-03-26T16:33:11.600Z · LW(p) · GW(p)

then so can what it's like to actually instantiate an experience.


Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-03-26T16:56:32.302Z · LW(p) · GW(p)


comment by TheAncientGeek · 2014-03-26T15:44:21.589Z · LW(p) · GW(p)

If everything can be explained in reductionistic terms, then so can what it's like to actually instantiate an experience. If actual experience can't be explained reductionistically, then reductionism fails.

I see no reason why Mary should not update on "reductionism can explain anything". After all, she has just had directly contrary experience.

comment by TheAncientGeek · 2014-03-26T15:38:41.709Z · LW(p) · GW(p)

Ok. The vampire knows what it is like to be a bat. Let them express the knowledge in terms of physical reductionism, and Nagel is defeated by direct demonstration. Excuse them from not doing so, and nothing significant is changed; now the set of creatures who know what it is like to be a bat is merely expanded to {bats, vampires}.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-03-26T15:57:50.869Z · LW(p) · GW(p)

A transsexual can provide us with some evidence that confirms that they "know what it's like to be a man and a woman" without needing to provide a full reductionist explanation. In fact, no-one currently can provide their knowledge of what it's like to be anything in full reductionist form. I'm claiming that we can have evidence of "X knows what it's like to be Z", without needing a full explanation.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-03-26T16:15:12.066Z · LW(p) · GW(p)

If it is true that no one can explain what it is like to be anything, that only reinforces Nagel's point. It also doesn't affect Nagel's point that we can have evidence that entity V has some essentially mysterious knowledge of what it is like to be entity B, because mysterious, physically inexplicable knowledge would just be another physically inexplicable thing. Nagel's point is about physicalism and physical understanding. "Here is physical knowledge of what it is like to be a bat" defeats Nagel; "Here is mysterious knowledge..." does not... nor does partial knowledge.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-03-26T16:21:58.643Z · LW(p) · GW(p)

No one has a fully reductive explanation for anger either, yet we function acceptably with the explanations we have.

I'm saying that we can have evidence that "X knows what Z is like" even if X cannot explain it to us in a way that makes us able to know what Z is like.

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-03-26T17:12:54.024Z · LW(p) · GW(p)

Anger isn't an example that rescues physicalism from qualiaphilia, since qualiaphiles can and do maintain that emotions are accompanied by ineffable phenomenal feels.

Nagel and co. are arguing metaphysics via epistemology. No qualiaphile is asserting that naturalism isn't delivering results that are good enough for a range of practical purposes. Qualiaphiles are arguing that there are in-principle barriers to full physical understanding of consciousness that could indicate a non-physical component ontologically.

We can have evidence that entity A knows what it is like to be entity B, but that is irrelevant to Nagel unless such knowledge is both physical and complete.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-03-26T18:58:36.854Z · LW(p) · GW(p)

that is irrelevant to Nagel unless such knowledge is both physical and complete.

I would still claim that incomplete knowledge is evidence against the likelihood of his position (by conservation of expected evidence, it has to be, because a lack of any incomplete knowledge would be strong evidence for his position).

Replies from: TheAncientGeek
comment by TheAncientGeek · 2014-03-26T19:26:16.654Z · LW(p) · GW(p)

I don't see why.

comment by Dentin · 2014-03-25T20:07:15.491Z · LW(p) · GW(p)

I don't know why we even talk about stuff like this. If you can simulate and run a bat brain, then bat consciousness is reductionist. If you can simulate and run a human brain, then human consciousness is reductionist.

To the best of our understanding, it takes a change to the laws of physics to make these things unsimulatable. Therefore, the prior for reductionism is high enough that you need actual evidence to counter it, not just some philosopher's rambling argument.

It may also help to taboo the word 'consciousness' in these kinds of discussions. I find that the word brings a lot of questionable baggage to the table in addition to failing to describe what is actually going on.

Replies from: TheAncientGeek, Pfft, Stuart_Armstrong
comment by TheAncientGeek · 2014-03-26T15:39:46.162Z · LW(p) · GW(p)

I have a photograph of a bridge to sell you.

comment by Pfft · 2014-03-26T19:18:12.009Z · LW(p) · GW(p)

There is a difference between "the brain runs on physics" and having a reductive explanation of conscious experience. Running a simulation of a human brain and getting human behaviour out can be an explanation of behaviour, but not of experience. In order to explain experience, we should have the simulation provide experience as an output and verify that the predicted experience matches the actual one.

To be concrete, if we had a properly worked-out explanation of experience, I would want it to answer questions like "when someone says that jazz music is physically painful, what sensation are they referring to? Is there some different sound you could play which would produce the same sensation for me? If not, why not?"

I think the most interesting point in Nagel's paper is that in order to even be able to check whether a reductionist theory is making the right predictions or not, we will need to develop some skill in being aware of and precisely describing our subjective experience. (He also makes a separate claim, that we will need to be less confused about what subjective experience even means. But I think starting at the practical-skills end of the problem is more promising.)

comment by Stuart_Armstrong · 2014-03-26T13:40:02.448Z · LW(p) · GW(p)

There is a difference between, e.g., going skydiving and watching a detailed brain simulation of someone going skydiving (I would pay different amounts for each of these, and so would most people). This remains true with reductionism. Therefore there is a difference between genuinely experiencing something, and seeing an abstract experience happening - and we can tell the difference. This seems to make questions of the type "is X experiencing Z" meaningful. Potentially even if Z is "being a bat".

Replies from: V_V, NancyLebovitz
comment by V_V · 2014-03-28T00:30:03.333Z · LW(p) · GW(p)

Basically, we are arguing semantics.

According to a functionalist definition of consciousness, you experience like a bat if you behave like a bat. That's essentially the "strong" Turing test view.
According to a structuralist definition, you experience like a bat if you share the same type of brain states. That's Searle's view.
According to eliminativism, consciousness and subjective experience are folk psychology concepts with no scientific utility, similar to sky gods causing lightning. That's the radical behaviorist view.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-03-31T10:19:31.127Z · LW(p) · GW(p)

And I'm arguing that there are ways of seeing evidence for "X knows what it is like to be Z" that are different from the ones above.

Suppose we have a transsexual (female to male), who writes a book, "What it's like to be a man - unexpected insights for women, from one who used to be one of them." It's full of descriptions of facts about being a man that a) almost all men think are true, and b) almost all women find surprising when they read about them.

Then some gal comes along and says "I've been confined to an all-female colony all my life, but I've chatted with many men online, and I think I really know what it's like to be a man." She then proceeds to name a lot of facts that are indeed generally true about men (and a lot of them were in the transsexual's book). We look at her chat logs, and none of these facts were mentioned.

Then we'd be justified in saying that she really understood what it's like to be a man. If, however, we knew that she'd read the transsexual's book, we'd be justified in rejecting that interpretation. So this is a "weak" Turing test view, along the lines of "if X passes the Turing test, and X was not trained specifically to pass the Turing test, then..."

Replies from: V_V
comment by V_V · 2014-03-31T12:26:09.019Z · LW(p) · GW(p)

Then we'd be justified in saying that she really understood what it's like to be a man. If, however, we knew that she'd read the transsexual's book, we'd be justified in rejecting that interpretation.

Except that in both cases she actually knows, in an epistemic sense, what it is like to be a man. The only difference is that she may have never experienced certain mental states that are unique to men. So what?

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-03-31T12:29:01.954Z · LW(p) · GW(p)

Except that in both cases she actually knows, in an epistemic sense, what it is like to be a man.

Really? How do we know that? What makes female-male a difference of kind from human-bat, rather than a question of degree?

Replies from: V_V
comment by V_V · 2014-03-31T12:39:38.286Z · LW(p) · GW(p)

The difference is that you can't read a book from a human who used to be a bat. But if you could (e.g. if it was written by a vampire), or if you were some super-neuroscientist who did very accurate studies on the bat brain, you could, in principle, know what it is like to be a bat, in an epistemic sense.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-03-31T12:43:22.606Z · LW(p) · GW(p)

In my model, the woman was deducing epistemic facts about men, and the most likely explanation was that she was generalising from the knowledge she had to construct a subjective experience that mirrored that of a man (rather than reading them in a book and copying them). This explanation has testable differences from getting the explanations from a book, e.g. whether she will answer correctly questions like "this male faces [unknown new situation]; what do they do?"

Replies from: V_V
comment by V_V · 2014-03-31T12:54:08.943Z · LW(p) · GW(p)

Sure, mental states are physical configurations of the brain, which is a piece of matter, so the question of whether a certain piece of matter in the universe is or was in a certain physical configuration is in principle amenable to scientific enquiry.

My question is, what is the point? I mean, in some circumstances it may certainly be useful to determine whether somebody is lying or telling the truth, but in general, if somebody's beliefs are epistemically correct, does it matter what specific subjective experiences are associated with them?

comment by NancyLebovitz · 2014-03-26T17:09:35.995Z · LW(p) · GW(p)

(I would pay different amounts for each of these, and so would most people).

Agreed, but I bet some people would pay more for the real thing, and others would pay more for the simulation.

Replies from: Stuart_Armstrong
comment by Stuart_Armstrong · 2014-03-26T18:54:02.972Z · LW(p) · GW(p)

Which is all I need to argue they are indeed different things, in relevant ways ^_^