State your physical account of experienced color
post by Mitchell_Porter · 2012-02-01T07:00:39.913Z · LW · GW · Legacy · 62 comments
Previous post: Does functionalism imply dualism? Next post: One last roll of the dice.
Don't worry, this sequence of increasingly annoying posts is almost over. But I think it's desirable that we try to establish, once and for all, how people here think color works, and whether they even think it exists.
The way I see it, there is a mental block at work. An obvious fact is being denied or evaded, because the conclusions are unpalatable. The obvious fact is that physics as we know it does not contain the colors that we see. By "physics" I don't just mean the entities that physicists talk about, I also mean anything that you can make out of them. I would encourage anyone who thinks they know what I mean, and who agrees with me on this point, to speak up and make it known that they agree. I don't mind being alone in this opinion, if that's how it is, but I think it's desirable to get some idea of whether LessWrong is genuinely 100% against the proposition.
Just so we're all on the same wavelength, I'll point to a specific example of color. Up at the top of this web page, the word "Less" appears. It's green. So, there is an example of a colored entity, right in front of anyone reading this page.
My thesis is that if you take a lot of point-particles, with no property except their location, and arrange them any way you want, there won't be anything that's green like that; and that the same applies for any physical theory with an ontology that doesn't explicitly include color. To me, this is just mindbogglingly obvious, like the fact that you can't get a letter by adding numbers.
At this point people start talking about neurons and gensyms and concept maps. The greenness isn't in the physical object, "computer screen", it's in the brain's response to the stimulus provided by light from the computer screen entering the eye.
My response is simple. Try to fix in your mind what the physical reality must be, behind your favorite neuro-cognitive explanation of greenness. Presumably it's something like "a whole lot of neurons, firing in a particular way". Try to imagine what that is physically, in terms of atoms. Imagine some vast molecular tinker-toy structures, shaped into a cluster of neurons, with traveling waves of ions crossing axonal membranes. Large numbers of atoms arranged in space, a few of them executing motions which are relevant for the information processing. Do you have that in your mind's eye? Now look up again at that word "Less", and remind yourself that according to your theory, the green shape that you are seeing is the same thing as some aspect of all those billions of colorless atoms in motion.
If your theory still makes sense to you, then please tell us in comments what aspect of the atoms in motion is actually green.
I only see three options. Deny that anything is actually green; become a dualist; or (supervillain voice) join me, and together, we can make a new ontology.
Comments sorted by top scores.
comment by AlephNeil · 2012-02-01T14:11:08.257Z · LW(p) · GW(p)
[general comment on sequence, not this specific post.]
You have such a strong intuition that no configuration of classical point particles and forces can ever amount to conscious awareness, yet you don't immediately generalize and say: 'no universe capable of exhaustive description by mathematically precise laws can ever contain conscious awareness'. Why not? Surely whatever weird and wonderful elaboration of quantum theory you dream up, someone can ask the same old question: "why does this bit that you've conveniently labelled 'consciousness' actually have consciousness?"
So you want to identify 'consciousness' with something ontologically basic and unified, with well-defined properties (or else, to you, it doesn't really exist at all). Yet these very things would convince me that you can't possibly have found consciousness given that, in reality, it has ragged, ill-defined edges in time, space, even introspective content.
Stepping back a little, it strikes me that the whole concept of subjective experience has been carefully refined so that it can't possibly be tracked down to anything 'out there' in the world. Kant and Wittgenstein (among others) saw this very clearly. There are many possible conclusions one might draw - Dennett despairs of philosophy and refuses to acknowledge 'subjective experience' at all - but I think people like Chalmers, Penrose and yourself are on a hopeless quest.
Replies from: torekp, Mitchell_Porter
↑ comment by torekp · 2012-02-06T18:37:24.999Z · LW(p) · GW(p)
Stepping back a little, it strikes me that the whole concept of subjective experience has been carefully refined so that it can't possibly be tracked down to anything 'out there' in the world.
And the first thing we should recognize is that this "refinement" is arbitrary and unjustified.
↑ comment by Mitchell_Porter · 2012-02-02T07:43:28.815Z · LW(p) · GW(p)
you don't immediately generalize and say: 'no universe capable of exhaustive description by mathematically precise laws can ever contain conscious awareness'. Why not?
My problem is not with mathematically precise laws; my problem is with the objects said to be governed by the laws. The objects in our theories don't have the properties needed to be the stuff that makes up experience itself.
Quantum mechanics by itself is not an answer. A ray in a Hilbert space looks less like the world than does a scattering of particles in a three-dimensional space. At least the latter still has forms with size and shape. The significance of quantum mechanics is that conscious experiences are complex wholes, and so are entangled states. So a quantum ontology in which reality consists of an evolving network of states drawn from Hilbert spaces of very different dimensionalities, has the potential to be describing conscious states with very high-dimensional tensor factors, and an ambient neural environment of small, decohered quantum systems (e.g. most biomolecules) with a large number of small-dimensional tensor factors. Rather than seeing large tensor factors as an entanglement of many particles, we would see "particles" as what you get when a tensor factor shrinks to its smallest form.
I emphasize again that an empirically adequate model of reality as evolving tensor network would still not be the final step. The final step is to explain exactly how to identify some of the complicated state vectors with individual conscious states. To do this, you have to have an exact ontological account of phenomenological states. I think Husserlian transcendental phenomenology has the best ideas in that direction.
Once this is done, the way you state the laws of motion might change. Instead of saying 'tensor factor T with neighbors T0...Tn has probability p of being replaced by Tprime', you would say 'conscious state C, causally adjacent to microphysical objects P0...Pn, has probability p of evolving into conscious state Cprime' - where C and Cprime are described in a "pure-phenomenological" way, by specifying sensory, intentional, reflective, and whatever other ingredients are needed to specify a subjective state exactly.
This has the potential to get rid of the dualism because you are no longer saying conscious state C is really a coarse-graining of a microphysical state. The ontology employed in the subjective description, and the ontology employed for the purposes of stating an exact physical law, have become the same ontology - that is the aim. The Churchlands have written about this idea, but they come at it from the other direction, supposing that folk psychology might one day be replaced by a neurosubjectivity in which you interpret your experience in detail as "events happening to a brain". That might be possible, but the whole import of my argument is that there will have to be some change in the physical ontology employed to understand the brain, before that becomes possible.
Replies to other comments on this post will be forthcoming, but not immediately.
Replies from: algekalipso
↑ comment by algekalipso · 2016-03-20T05:37:17.134Z · LW(p) · GW(p)
Quantum mechanics by itself is not an answer. A ray in a Hilbert space looks less like the world than does a scattering of particles in a three-dimensional space. At least the latter still has forms with size and shape. The significance of quantum mechanics is that conscious experiences are complex wholes, and so are entangled states. So a quantum ontology in which reality consists of an evolving network of states drawn from Hilbert spaces of very different dimensionalities, has the potential to be describing conscious states with very high-dimensional tensor factors, and an ambient neural environment of small, decohered quantum systems (e.g. most biomolecules) with a large number of small-dimensional tensor factors. Rather than seeing large tensor factors as an entanglement of many particles, we would see "particles" as what you get when a tensor factor shrinks to its smallest form.
[...]
Once this is done, the way you state the laws of motion might change. Instead of saying 'tensor factor T with neighbors T0...Tn has probability p of being replaced by Tprime', you would say 'conscious state C, causally adjacent to microphysical objects P0...Pn, has probability p of evolving into conscious state Cprime' - where C and Cprime are described in a "pure-phenomenological" way, by specifying sensory, intentional, reflective, and whatever other ingredients are needed to specify a subjective state exactly.
You are hitting the nail on the head. I don't expect people on LessWrong to understand this for a while, though. There is actually a good reason why the cognitive style of rationalists, at least statistically, is particularly ill-suited to making sense of the properties of subjective experience and how they constrain the range of possible philosophies of mind. The main problem is the axis of variability of "empathizer vs. systematizer." LessWrong is built on a highly systematizing meme-plex that attracts people who have a motivational architecture particularly well suited to problems that require systematizing intelligence.
Unfortunately, recognizing that one's consciousness is ontologically unitary requires a lot of introspection and trusting one's deepest understanding against the conclusions that one's working ontology suggests. Since LessWrongers have been trained to disregard their own intuitions and subjective experience when thinking about the nature of reality, it makes sense that the unity of consciousness will be a blind spot for as long as we don't come up with experiments that can show the causal relevance of such unity. My hope is to find a computational task that consciousness can achieve at a runtime complexity that would be impossible for a classical neural network implemented within the known physical constraints of the brain. However, I'm not very optimistic this will happen any time soon.
The alternative is to lay out specific testable predictions involving the physical implementation of consciousness in the brain. I recommend reading David Pearce's physicalism.com, which outlines an experiment that would convince any rational eternal quantum mind skeptic that indeed the brain is a quantum computer.
comment by Manfred · 2012-02-01T08:48:11.334Z · LW(p) · GW(p)
This post would be a lot better if you kept definitions and referents distinct. For example, antiprotons make no sense, because if you look in my brain at what neurons perform the intricate membership dance for antiprotons, none of them are antiprotons. Therefore there are only three options. Deny that antiprotons exist; become a dualist; or keep definitions and referents distinct.
comment by kilobug · 2012-02-01T09:45:48.182Z · LW(p) · GW(p)
I'll only answer with an analogy. Visualize a cluster of magnetized particles on a hard disk platter. Billions of atoms linked by complicated quantum electromagnetic fields. A magnetic head passes over them and fires electrical currents, which are sent to a semiconductor. That semiconductor then starts doing computation, which is just electrons flowing from one part to another, from one semiconductor to another. And then another flow of electrons, fired by an electron gun, hits a screen, and they end up printing numbers.
You have that in your mind's eye? Now, those numbers are the digits of Pi. According to my theory, the digits of Pi that appear on the screen are the same thing as some aspect of all those billions of Pi-less atoms in motion. Well, yes, and? There is no dualism involved in that.
As for something being "green", we can detect "green" with webcams and computers. My GIMP has an "anti-red-eye filter" that can not only detect a kind of red, and even its shape, but also remove it. Being green is a very physical property of light, or of matter that emits/absorbs light. There is even less dualism in that than in my Pi example, or in any other kind of file (text, pictures, sound, movies, ...) stored on a hard disk.
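kilobug's detection claim can be made concrete with a toy sketch (this is not GIMP's actual filter; the function name and threshold values are invented for illustration). The point it shows: deciding whether a pixel is "green" is nothing but arithmetic comparisons on RGB numbers.

```python
# Illustrative sketch, NOT GIMP's real algorithm: "detecting green"
# reduces to arithmetic on RGB triples -- numbers in, a boolean out.

def is_greenish(pixel):
    """Return True if the G channel clearly dominates R and B.
    The thresholds (100, 1.5x) are arbitrary illustrative choices."""
    r, g, b = pixel
    return g > 100 and g > 1.5 * r and g > 1.5 * b

# A pixel sampled from a green logo vs. a neutral grey pixel:
print(is_greenish((60, 180, 75)))    # True
print(is_greenish((128, 128, 128)))  # False
```

Nothing in the computation is itself green; it only compares magnitudes, which is exactly what makes it a useful test case for both sides of this argument.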
Replies from: algekalipso, jakub-supel, jakub-supel, whowhowho
↑ comment by algekalipso · 2016-03-20T05:49:20.931Z · LW(p) · GW(p)
Both you and prase seem to be missing the point. The experience of green has nothing to do with wavelengths of light. Wavelengths of light are completely incidental to the experience. Why? Because you can experience the qualia of green thanks to synesthesia. Likewise, if you take LSD at a sufficient dose, you will experience a lot of colors that are unrelated to the particular input your senses are receiving. Finally, you can also experience such color in a dream. I did just that last night.
The experience of green is not the result of information-processing that works to discriminate between wavelengths of light. Instead, the experience of green was recruited by natural selection to be part of an information-processing system that discriminates between wavelengths of light. If it had been more convenient, less energetically costly, more easily accessible in the neighborhood of exploration, etc., evolution would have recruited entirely different qualia to achieve the exact same information-processing tasks color currently takes part in.
In other words, stating which stimuli trigger the phenomenology is not going to help at all in elucidating the very nature of color qualia. For all we know, other people may experience feelings of heat and cold instead of colors (locally bound to objects in their 2.5D visual field), and still behave reasonably well as judged by outside observers.
Replies from: kilobug
↑ comment by kilobug · 2016-03-23T16:21:15.657Z · LW(p) · GW(p)
The experience of green has nothing to with wavelengths of light. Wavelengths of light are completely incidental to the experience.
Not at all. The experience of green is the way our information-processing system internally represents "light of green wavelength", nothing else. That voluntarily messing with your cognitive hardware by taking drugs, or background maintenance tasks, or "bugs" in the processing system can lead to an "experience of green" when there is no real green to be perceived doesn't change anything about it: the experience of green is the way "green wavelength" is encoded in our information-processing system, nothing less, nothing more.
Replies from: algekalipso
↑ comment by algekalipso · 2016-03-31T03:36:18.082Z · LW(p) · GW(p)
I have seen this argument before, and I must confess that I am very puzzled about the kind of mistake that is going on here. I might call it naïve functionalist realism, or something like that. Whereas in "standard" naïve realism people find it hard to dissociate their experiences from an existing mind-independent world, and so go on to interpret everything as "seeing the world directly, nothing else, nothing more." Naïve realists will interpret their experiences as direct, unmediated impressions of the real world.
Of course this is a problematic view, and there are killer arguments against it. For instance, hallucinations. However, naïve realists can still come back and say that you are talking about cases of "misapprehension", where you don't really perceive the world directly anymore; that does not mean you "weren't perceiving the world directly before." But here the naïve realist has simply not integrated the argument in a rational way. If you need to explain hallucinations as "failed representations of true objects", you no longer need to restate, in addition, one's previous belief in "perceiving the world directly." Now you end up having two ontologies instead of one: inner representations and also direct perception. And yet you only need one: inner representations.
Analogously, I would describe your argument as naïve functionalist realism. Here you first see a certain function associated with an experience, and you decide to skip the experience altogether and simply focus on the function. In itself, this is reasonable, since the data can be accounted for with no problem. But when I mention LSD and dreams, suddenly that becomes part of another category, like a "bug" in one's mind. So here you have two ontologies, where you could certainly explain it all with just one.
Namely: green is a particular quale, which gets triggered under particular circumstances. Green does not refer to the wavelength of light that triggers it, since you can experience it without such light. To instead postulate that this is in fact just a "bug" of the original function, but that the original function is in and of itself what green is, simply adds another ontology, when the first one, taken on its own, can already account for the phenomena.
Replies from: kilobug
↑ comment by kilobug · 2016-03-31T07:06:33.294Z · LW(p) · GW(p)
No, it is much simpler than that: "green" is a wavelength of light, and "the feeling of green" is how the information "green" is encoded in your information-processing system, that's it. No special ontology for qualia or whatever. Qualia aren't a fundamental component of the universe like quarks and photons are; they're only an encoding of information in your brain.
But yes, how reality is encoded in an information system sometimes doesn't match the external world; the information system can be wrong. That's a natural, direct consequence of that ontology, not a new postulate, and definitely not another ontology. The fact that "the feeling of green" is how "green wavelength" is encoded in an information-processing system automatically implies that if you perturb the information-processing system by giving it LSD, it may very well encode "green wavelength" without green light actually being present.
In short, ontology is not the right level at which to look at qualia. Qualia are information in a (very) complex information-processing system; they have no fundamental existence. Trying to explain them at an ontological level just makes you ask invalid questions.
Replies from: jakub-supel
↑ comment by Jakub Supeł (jakub-supel) · 2023-01-02T13:06:13.124Z · LW(p) · GW(p)
Green is not a wavelength of light. Last time I checked, wavelength is measured in units of length, not in words. We might call light of wavelength 520 nm "green" if we want, and we do BECAUSE we are conscious and we have the qualia of green whenever we see light of wavelength 520 nm. But this is only a shorthand, a convention. For all I know, other people might see light of wavelength 520 nm as red (i.e. what I would describe as red, i.e. light of wavelength 700 nm), but refer to it as green because there is no direct way to compare the qualia.
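Jakub's point that the word is mere convention can be sketched as a lookup (the band boundaries below are the rough conventional ones, chosen for illustration only): the mapping from a measured wavelength to a colour word involves nothing but numbers and labels, and says nothing about what either party experiences.

```python
# Illustrative sketch: the wavelength-to-word mapping is a convention.
# Band edges (in nm) are approximate textbook values, not precise facts.

def colour_name(wavelength_nm):
    """Map a wavelength in nanometres to a conventional English label."""
    if 495 <= wavelength_nm < 570:
        return "green"
    if 620 <= wavelength_nm <= 750:
        return "red"
    return "other"

print(colour_name(520))  # "green"
print(colour_name(700))  # "red"
```

Two speakers running this same table would agree on every label while possibly disagreeing (undetectably) on every quale, which is exactly the gap the comment describes.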
↑ comment by Jakub Supeł (jakub-supel) · 2023-01-02T12:57:40.997Z · LW(p) · GW(p)
I'm not sure how the first two paragraphs are analogous to consciousness at all. Yes, the screen prints out numbers. These printed numbers are still mere physical entities. The screen doesn't really produce the number Pi from the physical objects; it just manipulates the physical objects. Consciousness is not about manipulating physical objects, since two identical physical configurations could correspond to two distinct conscious experiences.
↑ comment by Jakub Supeł (jakub-supel) · 2023-01-02T12:52:12.381Z · LW(p) · GW(p)
As for something being "green", we can detect "green" with webcams and computers. My GIMP has an "anti-red-eye filter" that can not only detect a kind of red, and even its shape, but also remove it. Being green is a very physical property of light, or of matter that emits/absorbs light. There is even less dualism in that than in my Pi example, or in any other kind of file (text, pictures, sound, movies, ...) stored on a hard disk.
Haha, no. Strictly speaking, we cannot detect "green" with webcams or computers (such an expression is only a simplification). We can detect light of a particular wavelength with a camera, and we can detect a particular value of the G channel with a computer. But that's not the green color. The green color is what we see (and we can't even be sure that we see the same color when we use the word "green"). Any equivalence between that and the state of a camera or of disk memory is false.
comment by drethelin · 2012-02-01T08:03:39.031Z · LW(p) · GW(p)
downvoted for following up "my response is simple" with a paragraph of blather.
Things are no more "actually" green than they are "actually" solid, or than photons "actually" travel. The universe does not work the way it feels to us that it works. Why do we need anything new? Are you denying that the reactions of minds to stimuli are real? Is this just a stupidly roundabout way to get people to admit that there's no such thing as green?
Replies from: jakub-supel
↑ comment by Jakub Supeł (jakub-supel) · 2023-01-02T13:09:07.379Z · LW(p) · GW(p)
There is no such thing as "green" in the physical universe (obviously). It has no explanatory power, it has no causal power, and there is no viable theory of how it could be produced by other things (e.g. by light of whatever wavelength, since my experience of light of that wavelength could be different from yours). Yet we know that "green" exists. Therefore, dualism.
comment by Richard_Kennaway · 2012-02-01T11:25:06.947Z · LW(p) · GW(p)
Or in brief:
We have mental experiences.
Everything else we know proves that there cannot possibly be any such thing as mental experiences.
I agree. This is a blue tentacle scenario, with the difference that it isn't hypothetical, we are all actually living it. But I can't join you in your third alternative, as I don't believe you're on track to a solution. I don't believe that anyone is. I don't think that anyone even has an idea of what a solution could look like.
But don't let that stop you trying.
Replies from: CAE_Jones, jakub-supel
↑ comment by CAE_Jones · 2013-02-01T20:13:35.262Z · LW(p) · GW(p)
I notice that this qualia business has me terribly confused, and wonder if it is worth reading all of the literature that everyone smarter than me is referencing while discussing it.
Are we treating mental experience as something special, and not just awareness? I don't really get why the talk of experiencing color comes up so much (I did read about Mary's room after reading another LW discussion on the subject).
If it helps, I notice sighted people tend to treat blindness as seeing complete darkness. I know that, for my particular condition at least, this seems to be a horribly inaccurate prediction, but it helps that I started out with one eye nonfunctional. I suppose someone who went from perfect vision to completely blind over night might wake up perceiving darkness (though I'd expect it to be a little more complicated? Let's just assume they were knocked out and had their eyes removed while unconscious.).
What I do experience, though, isn't easily described, given how the language is set up. All this really makes me think, with regard to mental experience, is that either I'm the main character of reality (considering how much is out there and how little contact there is between me and the rest of the world, this seems unlikely, but I suppose it could be a universe in which a complicated background is necessary to mention but not interact with, somehow...), or mental experience is how we describe being able to observe our own cognitive processes, in our crude built-in way.
↑ comment by Jakub Supeł (jakub-supel) · 2023-01-02T14:27:01.630Z · LW(p) · GW(p)
I'm confused. Do you think you don't actually have mental experiences?
comment by beriukay · 2012-02-01T10:20:37.505Z · LW(p) · GW(p)
I'm noticing an ugh field around the word 'ontology'. Every time I've read it in this sequence of posts, I've had a strong and immediate urge to go do something else. Would you be so kind as to taboo the word and rephrase the sentences that use it?
comment by pedanterrific · 2012-02-01T09:40:09.162Z · LW(p) · GW(p)
Why limit this argument to just color? It seems a rather arbitrarily chosen property to dispute. Why not
Now look up again at that word "Less", and remind yourself that according to your theory, the shape that you are seeing is the same thing as some aspect of all those billions of shapeless atoms in motion.
If you can imagine that "vast molecular tinker-toy structures" (bleah) could add up to the shape "Less", what makes color so different?
Replies from: jakub-supel, whowhowho
↑ comment by Jakub Supeł (jakub-supel) · 2023-01-02T13:11:42.691Z · LW(p) · GW(p)
Color is not a shape?
↑ comment by whowhowho · 2013-02-01T14:53:43.019Z · LW(p) · GW(p)
Inability to imagine it. We know how virtual geometrical structures -- shapes -- can be built up in other structures, because we can build things that do that: they're called GPUs, shaders, graphics subroutines and so on. If you can engineer something, you understand it. There is a sense in which a computer has its own internal representation of a geometry other than its own physical geometry. We don't, however, know how to give a computer its own red. It just stores a number, which activates an LED, which activates our own red. We don't know how to write seeRed().
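The asymmetry whowhowho describes can be put in code form (the names `RED` and `see_red` are hypothetical illustrations, not any real API): everything a machine has of "red" is a stored number, while the function that would instantiate the experience has no body anyone knows how to write.

```python
# Sketch of the claimed asymmetry. The machine's entire "red" is an
# integer; see_red() is a deliberately unimplementable placeholder.

RED = 0xFF0000  # the stored number that downstream hardware turns into light

def see_red():
    """Supposed to instantiate the experience of red. On the view argued
    here, nothing in current science tells us what body to give this
    function: storing and transforming the number is all we know how to do."""
    raise NotImplementedError("no known implementation of a red quale")

print(hex(RED))  # the number is all the machine has
```

Whether `see_red()` is unimplementable in principle, or merely not yet implemented, is of course the very point under dispute in this thread.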
Replies from: pedanterrific
↑ comment by pedanterrific · 2013-02-01T17:11:57.114Z · LW(p) · GW(p)
You lost me a little bit. We can write "see these wavelengths in this shape and make them black" (red-eye filters). What makes "seeing" shape different from "seeing" color?
Replies from: whowhowho
↑ comment by whowhowho · 2013-02-01T17:21:14.512Z · LW(p) · GW(p)
We can give a computer an internal representation of shape, but not of colour as we experience it.
Replies from: pedanterrific
↑ comment by pedanterrific · 2013-02-01T17:23:36.811Z · LW(p) · GW(p)
How would it function differently if it did have "an internal representation of color as we experience it"?
Replies from: jakub-supel, whowhowho
↑ comment by Jakub Supeł (jakub-supel) · 2023-01-02T13:12:44.123Z · LW(p) · GW(p)
It would have conscious qualia.
↑ comment by whowhowho · 2013-02-01T17:37:16.403Z · LW(p) · GW(p)
That's hard to answer without specifying more about the nature of the AI, but it might say things like "what a beautiful sunset".
Replies from: pedanterrific
↑ comment by pedanterrific · 2013-02-01T18:02:18.467Z · LW(p) · GW(p)
I'm not going to say the goalposts are moving, but I definitely don't know where they are any more. I was talking about red-eye filters built into cameras. You seemed to be suggesting that they do have "internal representations" of shape, but not of color, even though they recognize both shape and color in the same way. I'm trying to see what the difference is.
Essentially, why can a computer have an internal representation of shape without saying "wow, what a beautiful building" but an internal representation of color would lead it to say "wow, what a beautiful sunset"?
Replies from: whowhowho
↑ comment by whowhowho · 2013-02-01T18:06:30.377Z · LW(p) · GW(p)
I don't know why you are talking about filters.
If you think you can write seeRed(), please supply some pseudocode.
Replies from: pedanterrific
↑ comment by pedanterrific · 2013-02-01T18:28:22.692Z · LW(p) · GW(p)
What was wrong with this comment?
Replies from: whowhowho
↑ comment by whowhowho · 2013-02-01T18:40:11.878Z · LW(p) · GW(p)
It doesn't relate to giving a system an internal representation of colour like ours. If you put the filter on, you don't go from red to black, you go from #FF0000 to #000000, or something.
Replies from: pedanterrific
↑ comment by pedanterrific · 2013-02-01T19:42:29.106Z · LW(p) · GW(p)
Okay, so... we can't make computers that go from red to black, and we can't ourselves understand what it's like to go from #FF0000 to #000000, and this means what?
To me it means the things we use to do processing are very different. Say, a whole brain emulation would have our experience of color, and if we get really really good at cognitive surgery, we might be able to extract the minimum necessary bits to contain that experience of color, and bolt it onto a red-eye filter. Why bother, though? What's the relevant difference?
Replies from: whowhowho
↑ comment by whowhowho · 2013-02-02T01:30:40.253Z · LW(p) · GW(p)
I don't see how a wodge of bits, in isolation from context, could be said to "contain" any processing, let alone anything depending on actual physics. It's hard to see how it could even contain any definite meaning, absent context. What does 100110001011101 mean?
Replies from: pedanterrific
↑ comment by pedanterrific · 2013-02-02T03:31:58.563Z · LW(p) · GW(p)
Sorry- "minimum necessary (pieces of brain)", I meant to say. Like, probably not motor control, or language, or maybe memory.
Replies from: whowhowho
↑ comment by whowhowho · 2013-02-03T20:38:41.602Z · LW(p) · GW(p)
Say, a whole brain emulation would have our experience of color, and if we get really really good at cognitive surgery, we might be able to extract the minimum necessary bits to contain that experience of color, and bolt it onto a red-eye filter. Why bother, though? What's the relevant difference?
The point of discussing the engineering of colour qualia is that it relates to the level of understanding of how consciousness works. Emulations bypass the need to understand something in order to duplicate it, and so are not relevant to the initial claim that the implementation of (colour) qualia is not understood within current science.
comment by amcknight · 2012-02-01T10:04:00.564Z · LW(p) · GW(p)
So, reading about the philosophy of color in the Stanford Encyclopedia of Philosophy tells me that there are approximately nine theories of color that philosophers take seriously. Tell me which meaning of color you have in mind, and then I will take your questions about color seriously.
comment by scientism · 2012-02-01T13:57:49.500Z · LW(p) · GW(p)
I'm a direct realist. Colour is a property of the world. My ontology includes the stuff of everyday experience - objects, surfaces, properties - as well as the stuff of science (cells, molecules, particles, black holes, etc). I've never heard a good argument for why I should want to eliminate everything except point particles or for why I should accept the incoherent notion that our perception of the world is mediated by illusory internal imagery. And, yes, I agree that contemporary materialism involves a Cartesian skeptical argument which presupposes a kind of crypto-dualism (i.e., an insubstantial homunculus that the world of colour and so forth can appear to and hence be illusory).
Replies from: whowhowho
↑ comment by whowhowho · 2013-02-01T14:46:51.224Z · LW(p) · GW(p)
So do bats and bees and colour-blind people see your red? If not, whose is the right red -- who is Directly Perceiving reality correctly? (There is a reason direct realism is also called naive realism.)
Replies from: scientism
↑ comment by scientism · 2013-02-01T19:00:11.016Z · LW(p) · GW(p)
There's no such thing as my red or different reds that are individuated by perceiver. Different types of sensory organ allow us to see different aspects of the world. I'm blind to some aspects other animals can perceive and other animals are blind to some aspect I can perceive, and the same goes for various perceptual deficiencies.
Replies from: whowhowho↑ comment by whowhowho · 2013-02-01T19:38:44.390Z · LW(p) · GW(p)
Ten problems with direct realism:
1. If perceived qualities exist in external objects, they have never been detected by science, as opposed to the 650nm reflectance characteristic, so direct realism is a form of dualism (or rather pluralism: see below). It requires non-physical properties.
2. If perceived qualities exist in external objects, they need external objects to exist in. If some perceived qualities (dreams, after-images) do not exist in external objects, then a sui generis method of projecting them into the world is assumed, which is required for non-veridical perception only. (An example is Everett Hall's exemplification.) Indirect realism requires only one mechanism for veridical and non-veridical perception (the difference between the two being down to the specific circumstances).
3. One of the motivations for direct realism is the idea that to perceive truly is to perceive things as they are. However, other kinds of truth don't require such a criterion at all. A true sentence is generally not at all like the situation it refers to. Truth in most contexts is the following of a set of normative rules which can themselves be quite arbitrary (as in rules linking symbols to their referents). Thus direct realism posits a sui generis kind of veridicality applying to perception only, along with several sui generis mechanisms to support it.
4. Another motivation for direct realism is a linguistic analysis: the argument goes that since sensory terms must have external criteria, what is sensed and the way it is sensed are entirely external as a matter of metaphysics. However, a criterion is not, strictly speaking, a meaning. Smoke is a criterion of fire, but fire does not mean smoke, definitionally or linguistically. The questions "where are perceived qualities?" and "how are sensation-words defined?" just aren't the same. It's quite possible that we use reference to properties of external objects to "triangulate" what are actually inner sensations, since we don't have public access to them. It is furthermore likely that the core meanings of sensation words are just the way things seem, even if we have to fulfil the public nature of language by using a public, external stimulus as the criterion of meaning for a response. For example: "red" is the way ripe tomatoes look. "Ripe tomatoes" is the publicly available stimulus, but "red" doesn't mean tomatoes -- it means the way they look! In Fregean terms, this analysis is even more complex than the sense/reference dichotomy. Sensation words have a sense (in general: how an X appears), an external referent (X), and finally the phenomenal feel or sensation. The sense without the external referent does not fix the meaning of the internal referent. One can know theoretically that "sweet" is the taste of sugar, but considerably more meaning is supplied by actually tasting sugar.
5. It is noticeable that direct realism applies much more readily to sight than to other forms of perception. We never think that hearing the sound of a thing is the same as perceiving the thing as it is in itself. Likewise, textures, tastes and smells are of things. Pains are reactions to objects and events, not properties of external objects. Do we veridically perceive a brass tack as being painful? Pains are removed by anaesthetising the patient, not by changing the properties of the scalpel. Even direct realists have to concede that beauty is in the eye of the beholder: that aesthetic reactions are not objective properties of objects. But this creates a dividing-line problem: is "marmite is horrible/yummy" an aesthetic reaction, or the registration of a property? We are wired up to perceive sour as nasty and sweet as nice: but surely to say that the nastiness or niceness is in the chemical is no different from saying the pain is in the brass tack.
6. It seems to be a unique feature of sight in humans that it presents itself as an "open window on the world". For all we know, other sensory modalities might have that exalted position in other species. For instance, much more of a dolphin's brain is devoted to processing sonar than to processing vision.
7. It is unlikely that there is a single set of perceived qualities (even though they are already over and above physical properties). It is a reasonable assumption that different species perceive differently. A tomato would have to have, or be able to generate, different perceived qualities according to whether a dog, a bee or a martian is looking at it. The "have" option goes from dualism to a wild pluralism. The "generate" option requires some completely undetected causal mechanism. If I take LSD and the colours of the tomato become more intense, as perceived by me, how does it know that something has changed in me? It's ridiculous to suppose that a turd has the ability to waft attractive scents to flies and revolting stenches to humans.
8. Tastes and smells have "yum" and "yuk" reactions attached to them that have to be relevant to the interests of an organism. Tastes and smells are not neutral and objective chemical tests; they carry a hefty behavioural loading relative to the interests of the organism, and that means they must be specific reactions in the organism.
9. Indirect realism and dualism are two different theories: a theory of indirect realism need not employ non-physical qualia. An adverbial theory of qualia (for instance) need not be indirectly realistic.
10. Some kind of dualism, where the perceived properties supervene on my brain-states, is more parsimonious even if dualistic. (But it need not be dualistic, as noted above.) The differences between dog-qualia, bee-qualia, human-qualia and martian-qualia can be accounted for by the differences between dog-brains, bee-brains, human-brains and martian-brains under a uniform set of rules.
The considerations here are almost entirely naturalistic. Science can determine whether or not the same mechanisms are involved in veridical perception and illusion. Whatever objections can be made against dualism on the basis of Occam's razor can be made more strongly against the pluralism of direct realism. This is not a matter of metaphysical imponderables. Direct Realism could be argued on the basis of the disadvantages of Indirect Realism, but what are they? Scepticism? Indirect realism is usually argued on the basis that the brain is known to operate in a certain way. However, a sceptical conclusion undermines the evidence. If brains are just appearances in minds, there are no brains. Since scepticism is self-refuting, there is no need to resist indirect realism because of its supposed consequences.
↑ comment by scientism · 2013-02-03T00:48:46.196Z · LW(p) · GW(p)
I can compare the colour of a surface to the colour of a standardised colour chip, which is as objective as, say, measuring something using a ruler. Colours may not participate in any phenomena found in the physical scientist's laboratory, but they do participate in the behaviour of organisms found in the psychologist's laboratory. So I fail to see a problem here.
Indirect realism requires two mechanisms for veridical and non-veridical perception, the same as direct realism: one for when an object is seen and one for when it isn't. Direct realism is more parsimonious because it doesn't needlessly posit an intervening representation or image in either case.
This isn't my motivation so I won't address it.
See above.
I disagree that direct realism more easily applies to sight. Direct realism is the best account of the phenomenology of all perception. I feel the texture of an object. I hear events, not objects, of course. Water dropping, pans crashing, musical instruments being played, a person talking, etc. I smell fresh bread, then I taste it. What I do not do is see, hear, touch, taste or smell intervening representations or images. So I'm not sure how indirect realism could more easily apply to these things. Pains, on the other hand, aren't perceived, they're had. Nobody would claim a pain is in the object causing me pain. (I'll address aesthetic response below.)
All perception puts us in contact with the world. I'm not sure what you're saying here.
I've already addressed this. A bee, dog, martian, etc, would be able to perceive different aspects of the same object. That doesn't mean the object has to somehow "generate" those properties for each organism. It has them. Bees can perceive a subset, dogs a different subset, martians another subset.
Direct realists are not committed to the idea that everything is in the environment, as if we were somehow taking things that don't rightfully belong to the environment and arbitrarily resettling them there. Reactions to things are had by the organism. Taste and smell are implicated in ingesting foreign objects and are obviously more closely allied with specific reactions in the organism because of this.
The very idea of perceiving something other than the world implies that there is something other than the world to be perceived. You can say it's a representation or image or model or whatever, and then try to butcher those terms into making sense, but at some point you've got to light it all up with "qualia" or "consciousness" or some other quasi-mystical notion. Nobody has figured this out, but even if they did, there still wouldn't be any good reasons to be an indirect realist.
Direct realism doesn't claim that objects have dog-qualia and human-qualia and bee-qualia instead of dog-brains having dog-qualia, etc, as you seem to think. Direct realism denies that there are qualia at all. Objects have coloured surfaces. Note that if there were qualia those qualia would have to be coloured in some sense, so you're missing something from your supposedly parsimonious account.
The best argument for direct realism is that it's phenomenologically accurate. The biggest flaw of indirect realism is that it's committed to some sort of mysticism, regardless of how you dress it up. You can move the problem around, call it "qualia" or "consciousness" or whatever, but it never goes away. It's a picture show in the mind or brain, and that's silly.
↑ comment by A1987dM (army1987) · 2013-02-03T13:00:01.718Z · LW(p) · GW(p)
I can compare the colour of a surface to the colour of a standardised colour chip, which is as objective as, say, measuring something using a ruler.
Not quite. Colour is a three-dimensional subspace of the infinite-dimensional space of possible light spectra, but which subspace it is depends on the spectral sensitivities of your cone cells. OTOH I do think that the cone cells of the supermajority of all humans use the exact same molecules as photoreceptors, but I'm not quite sure of that.
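The point above -- that "colour" is whatever three-dimensional projection your particular cone sensitivities define -- can be sketched numerically. This is a toy illustration only: the sensitivity curves and the five wavelength bins below are invented for the example, not real photoreceptor data.

```python
# Toy sketch: colour as the image of a light spectrum under a linear map
# fixed by the observer's cone sensitivities. All numbers are invented.
# Spectra and sensitivities are sampled at the same five wavelength bins
# (say 450, 500, 550, 600, 650 nm).

HUMAN_CONES = {
    "L": [0.0, 0.1, 0.6, 1.0, 0.5],   # long-wavelength cone (made up)
    "M": [0.1, 0.5, 1.0, 0.4, 0.1],   # medium
    "S": [1.0, 0.4, 0.1, 0.0, 0.0],   # short
}
BEE_CONES = {                          # a UV-shifted observer (also made up)
    "UV": [1.0, 0.2, 0.0, 0.0, 0.0],
    "B":  [0.3, 1.0, 0.4, 0.0, 0.0],
    "G":  [0.0, 0.3, 1.0, 0.6, 0.1],
}

def cone_response(spectrum, cones):
    """Project a spectrum onto each cone curve (one dot product per cone)."""
    return {name: sum(s * c for s, c in zip(spectrum, curve))
            for name, curve in cones.items()}

# One and the same physical spectrum...
spectrum = [0.2, 0.8, 1.0, 0.3, 0.1]
# ...lands at a different point in each observer's three-dimensional
# colour space, because each species projects onto a different subspace.
human = cone_response(spectrum, HUMAN_CONES)
bee = cone_response(spectrum, BEE_CONES)
```

The map is linear (doubling the light doubles every response), which is what makes "which subspace" depend entirely on the sensitivity curves and not on the light itself.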
comment by Automaton · 2012-02-01T08:20:32.501Z · LW(p) · GW(p)
I would essentially deny that anything is actually green, but assert that there is a mental state of "experiencing green", which is a certain functional state of a mind. You say that reductionists believe "...the green shape that you are seeing is the same thing as some aspect of all those billions of colorless atoms in motion." I do not think that most reductionists would (or should) take this position. There is nothing "the same" in the mental state of experiencing green as in the green object; there is only some property of the green object that causes us to have a green experience. My response to "If your theory still makes sense to you, then please tell us in comments what aspect of the atoms in motion is actually green." is that the atoms in motion comprise a mental state which is the experience of seeing green, and that this is all there is to our idea of the color green. Certainly, no aspect of the green object itself is identical to any brain state. So, I deny the existence of any such thing "green" which is both a property of green objects and a mental state, but claim that what we are talking about when we say we see green is nothing more than a mental state.
I posted the following about a JJC Smart quote on the issue in your last thread, but I'll repost it here in case you didn't see:
JJC Smart responds to people who would conflate experiences of seeing things with the actual things which are being seen in his 1959 paper "Sensations and Brain Processes". Here he's talking about the experience of seeing a yellow-green after image, and responding to objections to his theory that experiences can be equivalent to mental states.
Objection 4. The after-image is not in physical space. The brain-process is. So the after-image is not a brain-process.
Reply. This is an ignoratio elenchi. I am not arguing that the after-image is a brain-process, but that the experience of having an after-image is a brain-process. It is the experience which is reported in the introspective report. Similarly, if it is objected that the after-image is yellowy-orange but that a surgeon looking into your brain would see nothing yellowy-orange, my reply is that it is the experience of seeing yellowy-orange that is being described, and this experience is not a yellowy-orange something. So to say that a brain-process cannot be yellowy-orange is not to say that a brain-process cannot in fact be the experience of having a yellowy-orange after-image. There is, in a sense, no such thing as an after-image or a sense-datum, though there is such a thing as the experience of having an image, and this experience is described indirectly in material object language, not in phenomenal language, for there is no such thing. We describe the experience by saying, in effect, that it is like the experience we have when, for example, we really see a yellowy-orange patch on the wall. Trees and wallpaper can be green, but not the experience of seeing or imagining a tree or wallpaper. (Or if they are described as green or yellow this can only be in a derived sense.)
The theory he is defending in the paper is an identity theory where brain states are identical to mental states, but the point still holds for functionalist theories where mental states supervene on functional states.
Replies from: whowhowho, jakub-supel↑ comment by whowhowho · 2013-02-01T14:57:39.684Z · LW(p) · GW(p)
I would essentially deny that anything is actually green, but assert that there is a mental state of "experiencing green", which is a certain functional state of a mind.
And what functional state is that? Can you write seeGreen()?
Replies from: algekalipso↑ comment by algekalipso · 2016-03-20T08:40:29.630Z · LW(p) · GW(p)
With the aid of qualia computing and a quantum computer, perhaps ;-)
↑ comment by Jakub Supeł (jakub-supel) · 2023-01-02T14:31:47.734Z · LW(p) · GW(p)
"there is a mental state of "experiencing green", which is a certain functional state of a mind"
Alright... now, how do you explain the fact that this state of the mind has the property that it cannot be accessed/observed by anyone except its owner (I hope you know what I mean by the "owner"), while the properties of the brain can be observed by anyone in principle? Doesn't it mean that e.g. the image in the mind is not a brain process?
comment by Solvent · 2012-02-01T07:26:13.458Z · LW(p) · GW(p)
"Green" refers to objects which disproportionately reflect or emit light of a wavelength between 520 and 570nm.
Am I missing something?
EDIT: Wait, you're talking about the mental experience of "seeing green". Never mind, I think.
Replies from: meganisawizard↑ comment by meganisawizard · 2012-04-19T14:36:00.972Z · LW(p) · GW(p)
No, he's talking about the property 'green', meaning that which all things we label as 'green' have in common. There is no unique physical property that these objects share. In fact, two objects which appear the exact same color can have vastly different reflectance profiles. It's called metamerism, and it arises because our visual system collapses the full reflectance spectrum into a handful of cone responses rather than measuring the whole spectrum.
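Metamerism can be demonstrated with a toy calculation: take any vector in the null space of the cone-sensitivity matrix and add it to a spectrum, and you get a physically different light with identical cone responses. The four-bin "spectra" and cone rows below are invented so the arithmetic works out in whole numbers; they are not real photoreceptor data.

```python
# Toy sketch of metamerism: two different emission/reflectance profiles
# that produce identical cone responses. All numbers are made up.

CONES = [
    [0, 1, 2, 1],  # "L"-like sensitivity across four wavelength bins
    [1, 2, 1, 0],  # "M"-like
    [2, 1, 0, 0],  # "S"-like
]

def responses(spectrum):
    return [sum(c * s for c, s in zip(cone, spectrum)) for cone in CONES]

flat = [5, 5, 5, 5]     # a flat spectrum
# (1, -2, 3, -4) lies in the null space of the cone matrix, so adding it
# changes the physical light but not a single cone response.
metamer = [6, 3, 8, 1]  # = flat + (1, -2, 3, -4)

assert metamer != flat
assert responses(metamer) == responses(flat)  # the eye can't tell them apart
```

With three cones and more than three wavelength bins, the null space is always non-trivial, so metamers are guaranteed to exist for any trichromatic observer.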
comment by Will_Newsome · 2012-02-01T07:43:05.077Z · LW(p) · GW(p)
Upon a very casual reading I'm pretty sure I agree. But the thing I think I'm agreeing with seems so absurdly obvious that I fear it's not actually the thing I'm agreeing with.
comment by prase · 2012-02-01T20:37:45.469Z · LW(p) · GW(p)
My thesis is that if you take a lot of point-particles, with no property except their location, and arrange them any way you want, there won't be anything that's green like that; and that the same applies for any physical theory with an ontology that doesn't explicitly include color. To me, this is just mindbogglingly obvious, like the fact that you can't get a letter by adding numbers.
As a side note, besides the locations there are also velocities, which can't be ignored in a classical description. Even in a quantum description, localised point photons aren't going to be green; energy has to be specified, not position.
Now not a side note: colour is relatively easy from a physical point of view (temperature would be more difficult). Green corresponds to certain wavelengths. From seeing the green colour of the letters L-E-S-S I can infer that there are photons of certain energy emitted from the corresponding location on my screen. So there is an isomorphism between the "physical description" (in terms of wavelengths or energies of photons) and the word "green". I can build a detector which beeps whenever it encounters a green object. I have absolutely no idea what more you want to have in order to say that "there's something green in the configuration of particles", or whatever similar formulation you would choose.
If your theory still makes sense to you, then please tell us in comments what aspect of the atoms in motion is actually green.
Their emission of light of more or less 550 nm in wavelength. Since this answer is trivial, I probably don't understand the question correctly.
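The "detector which beeps whenever it encounters a green object" mentioned above is easy to sketch. This is a deliberately crude toy, assuming the 520-570 nm band cited elsewhere in the thread, and the sample spectra are hypothetical; it also ignores everything (metamerism, context effects) that makes real colour perception harder than a wavelength check.

```python
# Toy "green detector": classify a light source as green if its peak
# emission falls in the 520-570 nm band. Sample spectra are hypothetical.

def is_green(spectrum):
    """spectrum: dict mapping wavelength in nm -> relative intensity."""
    peak = max(spectrum, key=spectrum.get)  # wavelength with highest intensity
    return 520 <= peak <= 570

green_letters = {450: 0.1, 550: 1.0, 650: 0.2}  # e.g. light from the logo
red_light = {450: 0.1, 550: 0.2, 650: 1.0}
```

Whether such a detector counts as capturing "green" is, of course, exactly what the thread is arguing about.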
Replies from: whowhowho↑ comment by whowhowho · 2013-02-01T15:00:56.275Z · LW(p) · GW(p)
I have absolutely no idea what more you want to have in order to say that "there's something green in the configuration of particles", or whatever similar formulation you would choose.
Presumably, some explanation of how and why perceived colours match up to neural activity, not just an observation to the effect that they do. Reduction is a form of explanation, not an assertion of brute fact.
Replies from: prase↑ comment by prase · 2013-02-01T19:20:13.472Z · LW(p) · GW(p)
When people ask for an explanation, they want to hear something which makes them feel less confused about the problem. But what exactly constitutes an explanation is very subjective - people with the same knowledge of facts may differ significantly in their feelings of confusion. What you call brute assertion of facts is a sufficient explanation to me; it is perhaps not sufficient for Mitchell_Porter, but if I have to provide him an explanation he would accept, I need to understand what kinds of information are explanatory in his view. Stating that an explanation is better than a non-explanation isn't much help - I knew that already.
Apart from that, Mitchell_Porter isn't asking for an explanation in the form of a detailed description of neural activity in response to green light reflecting on the retina. He makes it quite clear that any such detailed description can't possibly capture the concept of green, or something along these lines.
Replies from: whowhowho↑ comment by whowhowho · 2013-02-02T01:51:09.976Z · LW(p) · GW(p)
When people ask for an explanation, they want to hear something which makes them feel less confused about the problem. But what exactly constitutes an explanation is very subjective - people with the same knowledge of facts may differ significantly in their feelings of confusion.
It's not entirely subjective
What you call brute assertion of facts is a sufficient explanation to me;
Some brute assertions or all brute assertions? The problem is that brute assertion as explanation sets the bar very low, leading to quodlibet. If "blue is how some neural activity seems from the inside" is an explanation, why isn't "the sky is blue because that is how god wills it"?
Apart from that, Mitchell_Porter isn't asking for explanation in form of a detailed description of neural activity in response to green light reflecting on the retina.
Why would he want that? It is uncontentious that pure description is not explanation. The reporter describes the battle; the historian explains the causes.
Replies from: prase↑ comment by prase · 2013-02-02T11:16:07.739Z · LW(p) · GW(p)
It's not entirely subjective
I am not saying that it is entirely subjective. Still, I don't know what you or the author of the post would accept as a valid explanation in this very case.
Some brute assertions or all brute assertions?
The assertion that certain wavelengths are perceived as green is a sufficient explanation of the concept of "green" in terms of the physics of elementary particles.
It is uncontentious that pure description is not explanation.
It is contentious whether to call the historian's work a description of the causes or an explanation thereof. Anyway, I don't wish to get into a debate over the meaning of "explanation". I just wanted to point out that Mitchell_Porter appeared to be (at least when he was writing the post) quite certain that no detailed account of the correspondence between physics and neural activity could faithfully capture the concepts of colours, whether the account included causal relationships or not. He was claiming that a whole new ontology is needed, probably with colours and other qualia as primitives.
Replies from: whowhowho↑ comment by whowhowho · 2013-02-03T21:00:56.556Z · LW(p) · GW(p)
I am not saying that it is entirely subjective. Still I don't know what would you or the author of the post accept as valid explanation in this very case.
Something that is as explanatory as other explanations. I don't think it is a case of raising the bar artificially high.
The assertion that certain wavelengths are perceived as green is a sufficient explanation of the concept of "green" in terms of the physics of elementary particles.
I don't think any special pleading is needed to reject that as an explanation. It doesn't even look like an explanation -- it contains no "because" clauses. Explanations do not generally look like blunt assertions, and, as I noted before, allowing blunt assertions to be explanations leads to a free-for-all.
It is contentious whether to call the historian's work as description of the causes or explanation thereof. Anyway, I don't wish to get into a debate over the meaning of "explanation".
I don't see why not. It seems to be a key issue.
I just wanted to point out that Mitchell_Porter appeared to be (at least when he was writing the post) quite certain that no detailed account of the correspondence between physics and neural activity could faithfully capture the concepts of colours, whether the account included causal relationships or not.
comment by algekalipso · 2016-03-20T05:24:27.875Z · LW(p) · GW(p)
I am super late to the party. But I want to say that I agree with you and I find your line of research interesting and exciting. I myself am working in a very similar space.
I run a blog called Qualia Computing. The main idea is that qualia actually play a causally and computationally relevant role. In particular, they are used to solve Constraint Satisfaction Problems with the aid of phenomenal binding. Here is the "about" of the site:
Qualia Computing? In brief, epiphenomenalism cannot be true. Qualia, it turns out, must have a causally relevant role in forward-propelled organisms, for otherwise natural selection would have had no way of recruiting them. I propose that the reason why consciousness was recruited by natural selection is found in the tremendous computational power that it affords to the real-time world simulations it instantiates through the use of the nervous system. More so, the specific computational horsepower of consciousness is phenomenal binding -- the ontological union of disparate pieces of information by becoming part of a unitary conscious experience that synchronically embeds spatiotemporal structure. While phenomenal binding is regarded as a mere epiphenomenon (or even as a totally unreal non-happening) by some, one needs only look at cases where phenomenal binding (partially) breaks down to see its role in determining animal behavior.
Once we recognize the computational role of consciousness, and the causal network that links it to behavior, a new era will begin. We will (1) characterize the various values of qualia in terms of their computational properties, and (2) systematically explore the state-space of possible conscious experiences.
(1) will enable us to recruit the new qualia varieties we discover thanks to (2) so as to improve the capabilities of our minds. This increased cognitive power will enable us to do (2) more efficiently. This positive-feedback loop is perhaps the most important game-changer in the evolution of consciousness in the cosmos.
We will go from cognitive sciences to actual consciousness engineering. And then, nothing will ever feel the same.
Also, see: qualiacomputing.com/2015/04/19/why-not-computing-qualia/
I'm happy to talk to you. I'd love to see where your research is at.
comment by meganisawizard · 2012-04-19T14:36:22.977Z · LW(p) · GW(p)
One reason to be skeptical of labeling objects as colored is that different animals have 'shifted' spectrums. For example, many flowers that look uniformly colored to us will appear to be two different colors to bees. So is the flower one color or two? Do humans get full domain over seeing colors and animals are relegated to being 'wrong'? Why? In fact, most daytime birds have four cones, as opposed to our three. They can make many, many more color discriminations than we can. So we're 'colorblind' relative to the birds.
comment by RomeoStevens · 2012-02-01T07:06:16.978Z · LW(p) · GW(p)
You should probably look into Jeff Hawkins' work. http://www.ted.com/talks/jeff_hawkins_on_how_brain_science_will_change_computing.html
edit: is that really how the auto formatting works?
Replies from: pedanterrific↑ comment by pedanterrific · 2012-02-01T09:33:26.399Z · LW(p) · GW(p)
For future reference, [Jeff Hawkins' work](http://www.ted.com/talks/jeff_hawkins_on_how_brain_science_will_change_computing.html) becomes Jeff Hawkins' work.
Edit: Apparently underscores are the new asterisks?