Does functionalism imply dualism?
post by Mitchell_Porter
This post follows on from Personal research update, and is followed by State your physical explanation of experienced color.
In a recent post, I claimed that functionalism about consciousness implies dualism. Since most functionalists think their philosophy is an alternative to dualism, I'd better present an argument.
But before I go further, I'll link to orthonormal's series on dissolving the problem of "Mary's Room": Seeing Red: Dissolving Mary's Room and Qualia, A Study of Scarlet: The Conscious Mental Graph, Nature: Red in Truth, and Qualia. Mary's Room is one of many thought experiments bandied about by philosophers in their attempts to say whether or not colors (and other qualia) are a problem for materialism, and orthonormal presents a computational attempt to get around the problem which is a good representative of the functionalist style of thought. I won't have anything to say about those articles at this stage (maybe in comments), but they can serve as an example of what I'm talking about.
Now, though it may antagonize some people, I think it is best to start off by stating my position plainly and bluntly, rather than starting with a neutral discussion of what functionalism is and how it works, and then seeking to work my way from there to the unpopular conclusion. I will stick to the example of color to make my points - apologies to blind and colorblind readers.
My fundamental thesis is that color manifestly does exist - there are such things as shades of green, shades of red, etc - and that it manifestly does not exist in any standard sort of physical ontology. In an arrangement of point particles in space, there are no shades of green present. This is obviously true, and it's equally obvious for more complicated ontologies like fields, geometries, wavefunction multiverses, and so on. It's even part of the history of physics; even Galileo distinguished between primary qualities like location and shape, and secondary qualities like color. Primary qualities are out there and objectively present in the external world, secondary qualities are only in us, and physics will only concern itself with primary qualities. The ontological world of physical theory is colorless. (We may call light of a certain wavelength green light or red light, but that is because it produces an experience of seeing green or seeing red, not because the light itself is green or red in the original sense of those words.) And what has happened due to the progress of the natural sciences is that we now say that experiences are in brains, and brains are made of atoms, and atoms are described by a physics which does not contain color. So the secondary qualities have vanished entirely from this picture of the world; there is no opportunity for them to exist within us, because we are made of exactly the same stuff as the external world.
Yet the "secondary qualities" are there. They're all around us, in every experience. It really is this simple: colors exist in reality, they don't exist in theory, therefore the theory needs to be augmented or it needs to be changed. Dualism is an augmentation. My speculations about quantum monads are supposed to pave the way for a change. But I won't talk about that option here. Instead, I will try to talk about theories of consciousness which are meant to be compatible with physicalism - functionalism is one such theory.
Such a theory will necessarily present a candidate, however vague, for the physical correlate of an experience of color. One can then say that color exists without having to add anything to physics, because the color just is the proposed physical correlate. This doesn't work because the situation hasn't changed. If all you have are point particles whose only property is location, then individual particles do not have the property of being colored, nor do they have that property in conjunction. Identifying a physical correlate simply picks out a particular set of particles and says "there's your experience of color". But there's still nothing there that is green or red. You may accustom yourself to thinking of a particular material event, a particular rearrangement of atoms in space, as being the color, but that's just the power of habitual association at work. You are introducing into your concept of the event a property that is not inherently present in it.
It may be that one way people manage to avoid noticing this, is by an incomplete chain of thought. I might say: none of the objects in your physical theory are green. The happy materialist might say: but those aren't the things which are truly green in the sense you care about; the things which are green are parts of experiences, not the external objects. I say: fine. But experiences have to exist, right? And you say that physics is everything. So that must mean that experiences are some sort of physical object, and so it will be just as impossible for them to be truly green, given the ontological primitives we have to work with. But for some reason, this further deduction isn't made. Instead, it is accepted that objects in physical space aren't really green, but the objects of experience exist in some other "space", the space of subjective experience, and... it isn't explicitly said that objects there can be truly green, but somehow this difference between physical space and subjective space seems to help people be dualists without actually noticing it.
It is true that color exists in this context - a subjective space. Color always exists as part of an "experience". But physical ontology doesn't contain subjective space or conscious experience any more than it does contain color. What it can contain, are state machines which are structurally isomorphic to these things. So here we can finally identify how a functionalist theory of consciousness works psychologically: You single out some state machines in your physical description of the brain (like the networks in orthonormal's sequence of posts); in your imagination, you associate consciousness with certain states of such state machines, on the basis of structural isomorphism; and now you say, conscious states are those physical states. Subjective space is some neural topographic map, the subjectively experienced body is the sensorimotor homunculus, and so forth.
But if we stick to any standard notion of physical theory, all those brain parts still don't have any of the properties they need. There's no color there, there's no other space there, there's no observing agent. It's all just large numbers of atoms in motion. No-one is home and nothing is happening to them.
Clearly it is some sort of progress to have discovered, in one's physical picture of the world, the possibility of entities which are roughly isomorphic to experiences, colors, etc. But they are still not the same thing. Most of the modern turmoil of ideas about consciousness in philosophy and science is due to this gap - attempts to deny it, attempts to get by without noticing it, attempts to force people to notice it. orthonormal's sequence, for example, seems to be an attempt to exhibit a cognitive model for experiences and behaviors that you would expect if color exists, without having to suppose that color actually exists. If we were talking about a theoretical construct, this would be fine. We are under no obligation to believe that phlogiston exists, only to explain why people once talked about it.
But to extend this attitude to something that most of us are directly experiencing in almost every waking moment, is ... how can I put this? It's really something. I'd call it an act of intellectual desperation, except that people don't seem to feel desperate when they do it. They are just patiently explaining, recapitulating and elaborating, some "aha" moment they had back in their past, when functionalism made sense to them. My thesis is certainly that this sense of insight, of having dissolved the problem, is an illusion. The genuineness of the isomorphism between conscious state and coarse-grained physical state, and the work of several generations of materialist thinkers to develop ways of speaking which smoothly promote this isomorphism to an identity, combine to provide the sense that no problem remains to be solved. But all you have to do is attend for a moment to experience itself, and then to compare that to the picture of billions of colorless atoms in intricate motion through space, to realize that this is still dualism.
I promised not to promote the monads, but I will say this. The way to avoid dualism is to first understand consciousness as it is in itself, without the presupposition of materialism. Observe the structure of its states and the dynamics of its passage. That is what phenomenology is about. Then, sketch out an ontology of what you have observed. It doesn't have to contain everything in infinite detail, it can overlook some features. But I would say that at a minimum it needs to contain the triad of subject-object-aspect (which appears under various names in the history of philosophy). There are objects of awareness, they are being experienced within a common subjective space, and they are experienced in a certain aspect. Any theory of reality, whether or not it is materialist, must contain such an entity in order to be true.
The basic entity here is the experiencing subject. Conscious states are its states. And now we can begin to tackle the ontological status of state machines, as a candidate for the ontological category to which conscious beings belong.
State machines are abstracted descriptions. We say there's a thing, it has a set of possible states; here are the allowed transitions between them, and the conditions under which those transitions occur. Specify all that and we have specified a state machine. We don't care about why those are the states or why the transitions occur; those are irrelevant details.
A very simple state machine might be denoted by the state transition network "1<->2". There's a state labeled 1 and another state labeled 2. If the machine is in state 1, it proceeds to state 2, and the reverse is also true. This state machine is realized wherever you have something that oscillates between two states without stopping in either. First the Earth is close to the Sun, then it is far from the Sun, then it is close again... The Earth in its orbit instantiates the state machine "1<->2". I get involved with Less Wrong, then I quit for a while, then I come back... My Internet habits also instantiate the state machine "1<->2".
A computer program is exactly like this, a state machine of great complexity (and usually its state transition rules contain some dependence on external conditions, like user input) which has been physically instantiated for use. But one cannot claim that its states have any intrinsic meaning, any more than I can claim that the state 1 in the oscillating state machine is intrinsically about the earth being close to the sun. This is not true, even if I write down the state transition network in the form "CloseToTheSun<->FarFromTheSun".
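To make this concrete, here is a minimal sketch of the two-state oscillator as an executable state machine. (The representation is arbitrary, which is itself the point: the labels are a convention of the description, not a property of the machine.)

```python
# A state machine is nothing but a set of states and a transition rule;
# the labels attached to the states carry no intrinsic meaning.
transitions = {1: 2, 2: 1}  # the oscillator "1<->2"

def run(machine, state, steps):
    """Follow the transition rule for a given number of steps."""
    history = [state]
    for _ in range(steps):
        state = machine[state]
        history.append(state)
    return history

print(run(transitions, 1, 4))  # [1, 2, 1, 2, 1]

# Relabeling yields a structurally identical machine; nothing in the
# machine itself makes this labeling "about" the Earth and the Sun.
relabeled = {"CloseToTheSun": "FarFromTheSun",
             "FarFromTheSun": "CloseToTheSun"}
print(run(relabeled, "CloseToTheSun", 2))
```

Both machines have exactly the same transition structure; only the names differ, and the names live in our description, not in the thing described.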
This is another ontological deficiency of functionalism. Mental states have meanings, thoughts are always about something, and what they are about is not the result of convention or of the needs of external users. This is yet another clue that the ontological status of conscious states is special, that their "substance" matters to what they are. Of course, this is a challenge to the philosophy which says that a detailed enough simulation of a brain will create a conscious person, regardless of the computational substrate. The only reason people believe this is that they believe the brain itself is not a special substrate. But this is a judgment made on the basis of science that is still at a highly incomplete stage, and certainly I expect science to tell us something different by the time it's finished with the brain. The ontological problems of functionalism provide a strong a priori reason for this expectation.
What is more challenging is to form a conception of the elementary parts and relations that could form the basis of an alternative ontology. But we have to do this, and the impetus has to come from a phenomenological ontology of consciousness that is as precise as possible. Fortunately, a great start was made on this about 100 years ago, in the heyday of phenomenology as a philosophical movement.
A conscious mind is a state machine, in the sense that it has states and transitions between them. The states also have structure, because conscious experiences do have parts. But the ontological ties that combine those parts into the whole are poorly apprehended by our current concepts. When we try to reduce them to nothing but causal coupling or to the proximity in space of presumed physical correlates of those parts, we are, I believe, getting it wrong. Clearly cause and effect operates in the realm of consciousness, but it will take great care to state precisely and correctly the nature of the things which are interacting and the ways in which they do so. Consider the ability to tell apart different shades of color. It's not just that the colors are there; we know that they are there, and we are able to tell them apart. This implies a certain amount of causal structure. But the perilous step is to focus only on that causal structure, detach it from considerations of how things appear to be in themselves, and instead say "state machine, neurons doing computations, details interesting but not crucial to my understanding of reality". Somehow, in trying to understand conscious cognition, we must remain in touch with the ontology of consciousness as partially revealed in consciousness itself. The things which do the conscious computing must be things with the properties that we see in front of us, the properties of the objects of experience, such as color.
You know, color - authentic original color - has been banished from physical ontology for so long, that it sounds a little mad to say that there might be a physical entity which is actually green. But there has to be such an entity, whether or not you call it physical. Such an entity will always be embedded in a larger conscious experience, and that conscious experience will be embedded in a conscious being, like you. So we have plenty of clues to the true ontology; the clues are right in front of us; we're subjectively made of these clues. And we will not truly figure things out, unless we remain insistent that these inconvenient realities are in fact real.
Comments sorted by top scores.
comment by lavalamp ·
2012-01-31T04:42:19.621Z · LW(p) · GW(p)
Yet the "secondary qualities" are there. They're all around us, in every experience. It really is this simple: colors exist in reality, ...
Sure, colors exist in reality, but they are patterns of neuronal excitations, not molecules. I don't see how this belief makes me a dualist. Actually this belief killed my belief in dualism.
Maybe I misread you, but I hear your post as saying, "Colors must exist in the territory, not just the map!" And I can't see why you believe that so strongly.
PS I greatly prefer this post to your previous one.
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter ·
2012-01-31T05:20:50.229Z · LW(p) · GW(p)
Thank you for producing a perfect example of what I called the "incomplete chain of thought"! What I called "subjective space" and "physical space", you have called "map" and "territory". This thing you call a "map", conscious experience, is part of the "territory" - part of reality - which itself is supposed to be coextensive with physics. So locating colors on the map doesn't get them off the territory. If everything real is made of physics, you still must either explain how certain patterns of neuronal excitations are actually green, or you must assert that nothing is actually green at any level of reality.
Replies from: lavalamp, APMason, occlude, haig
↑ comment by lavalamp ·
2012-01-31T06:11:12.679Z · LW(p) · GW(p)
If everything real is made of physics, you still must either explain how certain patterns of neuronal excitations are actually green, or you must assert that nothing is actually green at any level of reality.
Certain patterns of neuronal excitations feel like green from the inside. I don't understand this well enough to write a conscious computer program, but neither does anyone else (thank Bayes). I do believe that such a computer program can be written, and if that can be shown to be impossible, I will reconsider my position here (conversely, it seems that you must hold that no such computer program can be written).
It may happen that "nothing is actually green at any level of reality", and in that case, I still say that certain patterns of neuronal excitations feel like green from the inside, even if it's an illusion.
Replies from: ArisKatsaris
↑ comment by ArisKatsaris ·
2012-01-31T16:02:05.774Z · LW(p) · GW(p)
"Certain patterns of neuronal excitations feel like green from the inside."
If patterns are not a fundamental part of reality, but merely the mind's mapping of an uncaring territory, why should patterns feel anything from the inside, as opposed to being felt merely from the outside?
By saying that patterns feel something from the inside, you seem to claim that patterns are a part of reality that isn't merely the sum of their parts.
Replies from: lavalamp
↑ comment by lavalamp ·
2012-01-31T17:15:52.740Z · LW(p) · GW(p)
The patterns are an organization of reality that has higher-level meaning to our minds. The meaning, as with everything, is in the interpretation, not the physical atoms.
↑ comment by APMason ·
2012-01-31T05:30:17.329Z · LW(p) · GW(p)
you still must either explain how certain patterns of neuronal excitations are actually green
But that's just saying that lavalamp has a unique responsibility to solve the hard problem - everyone already knows it needs to be solved, and nobody knows how to do it. It doesn't undermine functionalism in particular. It's an open problem; we could just as well say that you must explain how [your preferred explanation of consciousness] is actually green.
Replies from: Dustin
↑ comment by Dustin ·
2012-01-31T05:49:12.846Z · LW(p) · GW(p)
Thank you. I've been typing and retyping trying to say that. I just gave up and refreshed and you'd done it already!
I guess I'm a little too tired.
↑ comment by occlude ·
2012-02-03T20:00:38.211Z · LW(p) · GW(p)
This thing you call a "map", conscious experience, is part of the "territory" - part of reality - which itself is supposed to be coextensive with physics.
This is interesting, true, and really complicates any quest to maintain an accurate map.
Upvoted (the OP too). I think some of your interlocutors may be thinking past you here, in the sense that they have dismissed your central point as a triviality. But there are fundamental differences between interactions of particles in the open universe, the state changes that particle interactions cause in our sensory machinery, and what it feels like to be a brain having an experience. The suggestion that the experience of green might be illusory fails to consider that it is something occurring in a physical brain. In this sense, the most dismissive thing we might say about any quale is that it doesn't have the meaning we readily assign to it, but that's different from a claim of nonexistence.
I'm not philosophically sophisticated enough to judge whether this observation implies dualism. I think perhaps we'd find a lot more common ground if we discussed our expectations rather than our definitions (especially given the theological baggage that the term dualism carries).
Replies from: torekp
↑ comment by torekp ·
2012-02-06T13:26:03.468Z · LW(p) · GW(p)
I agree that this "map" is part of the "territory", and that's because the map that we're trying to construct in philosophy - an ontology - is a map claiming to cover everything in the universe including maps.
↑ comment by haig ·
2012-07-31T19:06:13.033Z · LW(p) · GW(p)
"If everything real is made of physics, you still must either explain how certain patterns of neuronal excitations are actually green, or you must assert that nothing is actually green at any level of reality."
This is a 'why' question, not a 'how' question, and though some 'why' questions may not be amenable to deeper explanations, 'how' questions are always solvable by science. Explaining how neuronal patterns generate systems with subjective experiences of green is a straightforward, though complex, scientific problem. One day we may understand this so well that we could engineer qualia on demand, or create new types of never-before-seen qualia according to some transformation rules. However, explaining 'why' such arrangements of matter should possess such interiority or subjectivity is, I think at least based on everything we currently know, unanswerable.
comment by WrongBot ·
2012-01-31T08:51:42.439Z · LW(p) · GW(p)
Do animals have qualia? If yes, what evolutionary advantage do they serve in animals? If no, how did this complex structure (of quantum microtubules or whatever else) suddenly appear? Is qualia-possession binary? Did some human ancestor with no qualia give birth to a child with qualia?
More generally, is there a plausible causal history of human qualia?
Replies from: whowhowho, Mitchell_Porter
↑ comment by whowhowho ·
2013-02-01T15:11:29.627Z · LW(p) · GW(p)
Do animals have qualia?
If you disbelieve in cruelty to animals, you probably believe they do.
If yes, what evolutionary advantage do they serve in animals?
High calorie food tastes sweet, potential poisons taste bitter, etc.
If no, how did this complex structure (of quantum microtubules or whatever else) suddenly appear?
Microtubules are not uniquely human.
↑ comment by Mitchell_Porter ·
2012-01-31T09:25:04.639Z · LW(p) · GW(p)
Finding yourself to be a conscious being is anthropically necessary. If the universe contains quantum-computational conscious beings and classical-computational zombies, and only the first are conscious, then you can only ever be the first kind of being, and you can only ever find that you had an evolutionary history that managed to produce such beings as yourself. (ETA: Also, you can only find yourself to exist in a universe where consciousness can exist, no matter how exotic an ontology that requires.)
Obviously I believe in the possibility of unconscious simulations of conscious beings. All it should require is implementing a conscious state machine on a distributed base. But I have no idea how likely it is that evolution should produce something like that. Consciousness does have survival value, and given that I take genuine conscious states to be something relatively fundamental, some fairly fundamental laws are probably implicated in the details of its internal causality. I simply don't know whether a naturally evolved unconscious intelligence would be likely to have a causal architecture isomorphic to that of a conscious intelligence, or whether it would be more likely to implement useful functions like self-monitoring in a computationally dissimilar way.
What I say about the internal causality of genuine consciousness may sound mysterious, so I will try to give an example; I emphasize this is not even speculation, it's just an ontology of consciousness which allows me to make a point.
One of the basic features of conscious states is intentionality - they're about something. So let us say that a typical conscious state contains two sorts of relations - "being aware of" a quale, and "paying attention to" a quale. Unreflective consciousness is all awareness and no attention, while a reflective state of consciousness will consist of attending to certain qualia, amid a larger background of qualia which are just at the level of awareness.
Possible states of consciousness would be specified by listing the qualia and by listing whether the subject is attending to them or just aware of them. (The whole idea is that when attending, you're aware that you are aware.) Now we have a state space, we can talk about dynamics. There will be a "physical law" governing transitions in the conscious state, whereby the next state after the current one is a function of the current state and of various external conditions.
An example of a transition that might be of interest, is the transition from the state "aware of A, aware of B, aware of C..." to the state "attending to A, aware of B, aware of C..." What are the conditions under which we start attending to something - the conditions under which we become aware of being aware of something? In this hypothetical ontology, there would be a fundamental law describing the exact conditions which cause such a transition. We can go further, and think about embedding this model of mind, into a formal ontology of monads whose mathematical states are, say, drawn from Hilbert spaces with nested graded subspaces of varying dimensionality, and which works to reproduce quantum mechanics in some limit. We might be able to represent the recursive nature of iterated reflection (being aware of being aware of being aware of A) by utilizing this subspace structure.
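Setting the quantum embedding aside, the bare awareness/attention dynamics just described can be written down as a toy program. (This is purely illustrative, in the same spirit as the hypothetical ontology itself; the condition that promotes a quale from awareness to attention is a placeholder, not a proposal.)

```python
# Toy model: a conscious state assigns each quale a mode, either bare
# "awareness" or reflective "attention" (awareness of being aware).
def step(state, condition):
    """One application of the hypothetical law: a quale jumps from
    awareness to attention exactly when the (placeholder) condition
    holds for it; otherwise it keeps its current mode."""
    return {q: ("attention" if condition(q) else mode)
            for q, mode in state.items()}

state = {"A": "awareness", "B": "awareness", "C": "awareness"}
state = step(state, condition=lambda q: q == "A")
print(state)  # {'A': 'attention', 'B': 'awareness', 'C': 'awareness'}
```

The point of the sketch is only that such a state space and transition law can be specified exactly; the open question in the text is what the genuine ontology behind such a formalism would be.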
We are then to think of the world as consisting mostly of "monads" or tensor factors drawn from the subspaces of smallest dimensionality, but sometimes they evolve into states of arbitrarily high dimensionality, something which corresponds to the formation of entangled states in conventional quantum mechanics. But this is all just mathematical formalism, and we are to understand that the genuine ontology of the complex monadic states is this business about a subject perceiving a set of qualia under a mixture of the two aspects (awareness versus attention), and that the dynamical laws of nature that pertain to monads in reflective states are actually statements of the form "A quale jumps from awareness level to attention level if... [some psycho-phenomenological condition is met]".
Furthermore, it would be possible to simulate complex individual monads with appropriately organized clusters of simple monads, but ontologically you wouldn't actually have the complex states of awareness and attention being present, you would just have lots of simple monads being used like dots in a painting or bits in a computer.
I really do expect that the truth about how consciousness works is going to sound this weird and this concrete, even if this specific fancy is way off in its details.
Replies from: WrongBot, Emile
↑ comment by WrongBot ·
2012-01-31T09:54:58.193Z · LW(p) · GW(p)
Sorry, I think I was unclear. When I was wondering about the causal history of human qualia, I didn't mean the causal history of a particular quale in a human, but rather the causal history of why humans have qualia.
I don't think anthropics are a sufficient answer to that question; if there exist no plausible causal histories of humans with qualia, then either the humans or the qualia have to go.
↑ comment by Emile ·
2012-01-31T12:51:40.918Z · LW(p) · GW(p)
If the universe contains quantum-computational conscious beings and classical-computational zombies, and only the first are conscious, then you can only ever be the first kind of being, and you can only ever find that you had an evolutionary history that managed to produce such beings as yourself.
If zombies are possible, why can't this "you" you are talking to be a zombie? Zombies should be capable of reasoning correctly in the sleeping beauty problem, or about waking up in blue or red rooms, etc.
Suppose you make a zombie clone of a human (not necessarily a perfect copy, merely one that's similar enough that it can't tell whether it's the zombie or not), and have them both play a game: each is shown a button and chooses whether to press it. If neither presses it they get $1,000, if both press it they get nothing, and if only the human presses it they get $1,000,000 (in all cases, the money is split between the copies). In such a scenario, you'd better hope that the zombie doesn't follow your advice and reason that it has to be a human.
comment by APMason ·
2012-01-31T04:20:34.290Z · LW(p) · GW(p)
I may be being slow here, but is there any way in which you're not just restating the hard problem of consciousness here? And that problem is a problem for all the alternatives so far, whether dualistic or monistic, and not just for functionalism? Whether you put consciousness on high-level organisation in the brain, or on quantum physics, or on some second substance, you're going to have to explain how consciousness happens. The only ones who avoid that duty are the ones who say that mental things are fundamental, and then I just roll my eyes all the way around. And I don't think the fact that functionalists haven't solved the hard problem necessarily makes them dualists. As you said, functionalists believe that "Subjective space is some neural topographic map, the subjectively experienced body is the sensorimotor homunculus, and so forth." Whatever criticism you want to level against that position, it sure doesn't seem dualistic.
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter ·
2012-01-31T07:28:51.303Z · LW(p) · GW(p)
There are levels to the ontological problem of consciousness. The first level is the level where you don't even have anything in your ontology that can be identified with consciousness. You can't get past that level until you admit that's where you're at. All standard nondualistic materialist theories of consciousness contain something which in the theory is called "consciousness", but which can't be the real thing, for the reasons discussed in this post.
Consider the problems faced instead by a dualistic theory which explicitly says that there is a "stream of consciousness" with all the properties of the real thing, existing in parallel with a physical world. Such a theory has well-known problems of causal redundancy and logical economy, but it doesn't have this problem of nothing being actually green, does it? Actual green exists in the stream of consciousness, along with all the other problematic realities of consciousness. The physical world remains colorless, but it doesn't matter because this is dualism and the mind is located alongside the physical world, not in it.
Another type of "theory" which doesn't have the problem of not containing consciousness is metaphysical idealism, the idea that there's nothing outside consciousness, and thus no physical world at all. It's all a dream or a hallucination by a disembodied entity.
So different theories of consciousness face very different problems. There are theories which explicitly, by construction, contain consciousness. Then there are theories which contain something they call consciousness, but which doesn't have the right properties to be the real thing. What I would like to see is a physical theory which contains consciousness, not because we dualistically add the real thing, but because it inherently already contains that sort of entity.
comment by bryjnar ·
2012-01-31T11:10:17.552Z · LW(p) · GW(p)
You constantly equivocate between the property of being green and the experience of something green, which leads to the ancient mistake of saying that whatever constitutes your experience of something green must itself be green. Admittedly you put this enormous red herring in the mouth of your opponent, but it's totally unwarranted nonetheless.
Such a theory will necessarily present a candidate, however vague, for the physical correlate of an experience of color. One can then say that color exists without having to add anything to physics, because the color just is the proposed physical correlate.
The happy materialist might say: but those aren't the things which are truly green in the sense you care about; the things which are green are parts of experiences, not the external objects.
You also then essentially just say "But qualia! Intentionality! They're so real! There must be something more!", i.e. the same argument dualists have been making since the dawn of time, and that any attempts to dissolve the question have failed, since

> all you have to do is attend for a moment to experience itself, and then to compare that to the picture of billions of colorless atoms in intricate motion through space, to realize that this is still dualism.
Furthermore, all the arguments you use apply pretty much across the board and don't particularly relate to functionalism, so I think it's disingenuous of you to say that you're arguing for "functionalism implies dualism" rather than simply "dualism is true".
Replies from: Automaton
↑ comment by Automaton ·
2012-01-31T22:53:22.516Z · LW(p) · GW(p)
J.J.C. Smart responds to people who would conflate the experience of seeing a thing with the thing being seen in his 1959 paper "Sensations and Brain Processes". Here he's talking about the experience of seeing a yellowy-orange after-image, and responding to objections to his thesis that experiences are brain processes.
> Objection 4. The after-image is not in physical space. The brain-process is. So the after-image is not a brain-process.

> Reply. This is an ignoratio elenchi. I am not arguing that the after-image is a brain-process, but that the experience of having an after-image is a brain-process. It is the experience which is reported in the introspective report. Similarly, if it is objected that the after-image is yellowy-orange but that a surgeon looking into your brain would see nothing yellowy-orange, my reply is that it is the experience of seeing yellowy-orange that is being described, and this experience is not a yellowy-orange something. So to say that a brain-process cannot be yellowy-orange is not to say that a brain-process cannot in fact be the experience of having a yellowy-orange after-image. There is, in a sense, no such thing as an after-image or a sense-datum, though there is such a thing as the experience of having an image, and this experience is described indirectly in material object language, not in phenomenal language, for there is no such thing. We describe the experience by saying, in effect, that it is like the experience we have when, for example, we really see a yellowy-orange patch on the wall. Trees and wallpaper can be green, but not the experience of seeing or imagining a tree or wallpaper. (Or if they are described as green or yellow this can only be in a derived sense.)
The theory he is defending in the paper is an identity theory where brain states are identical to mental states, but the point still holds for functionalist theories where mental states supervene on functional states.
comment by Manfred ·
2012-01-31T05:39:29.451Z · LW(p) · GW(p)
So, for example, nothing is ever a bridge, because it's all just a collection of atoms, and there are no little "bridge" labels on the atoms?
Let's switch to the same thing, but with ethics. Can things be right or wrong without having little "right" and "wrong" tags on the atoms? Have you read Lukeprog's metaethics sequence so far? Can things have the property "Manfred would call this right" and "Manfred would call this wrong" without having little "Mwctr" and "Mwctw" tags on the atoms?
comment by Amanojack ·
2012-02-04T09:12:08.326Z · LW(p) · GW(p)
The question of "dualism" isn't even a real question. Science tells us that a certain wavelength of light will appear to us as green. But what really is the point of knowing that? Well, it gives us a set of instructions for how to make us experience green. But the instructions for how to produce the subjective experience are not themselves the experience. The notion that if we could just figure out how to make people experience green through some manipulation we will have learned something amazing is silly. We can already do that by showing a green flag or telling someone not to think of a green rabbit.
comment by Lightwave ·
2012-01-31T09:06:52.004Z · LW(p) · GW(p)
> Of course, this is a challenge to the philosophy which says that a detailed enough simulation of a brain will create a conscious person, regardless of the computational substrate.
So if we do simulate a brain and it tells you it's conscious and experiences green (through a camera), would you then agree that there's no need for dualism?
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter ·
2012-01-31T09:29:06.766Z · LW(p) · GW(p)
I don't think there's a need for dualism anyway; there's a need for a new physical ontology. But a simulation of a conscious brain should tell you that it's conscious even if it's not, or else it's not an effective simulation.
Replies from: Lightwave, Nick_Tarleton, Will_Newsome
↑ comment by Lightwave ·
2012-01-31T11:18:50.131Z · LW(p) · GW(p)
So p-zombies are possible, and in humans, the physical processes (of the brain) are somehow "magically" correlated / isomorphic to mental phenomena, whereas this doesn't happen in simulations, for what (unknown?) reasons?
Replies from: David_Gerard, whowhowho
↑ comment by David_Gerard ·
2012-02-01T23:39:40.747Z · LW(p) · GW(p)
The p-zombie argument holds that being able to conceive of something makes it possible; p-zombies are conceivable, hence possible, and therefore dualism is true. The tricky bit appears to be "conceive of" in a sense that implies possibility. Consider these statements:
1. I can conceive of 2+2=4 being true in conventional Peano arithmetic.
2. I can conceive of 2+2=5 being true in conventional Peano arithmetic.
3. I can conceive of P being equal to NP.
4. I can conceive of P not being equal to NP.
5. I can conceive of p-zombies, therefore dualism.
6. If I can conceive of p-zombies then dualism, which is a confused idea, therefore p-zombies is a confused idea by reductio ad absurdum.
With the second, I am claiming to "conceive of" something trivially false. I arguably haven't conceived of anything actually possible; I've just shuffled some words together.
With the third and fourth, I'm claiming to have conceived of something whose truth no-one knows (though many suspect 3 is false and 4 is true). To what extent have I actually thought either through? Thought through far enough, one of them must eventually hit a contradiction, though no-one has reached it yet. Both are "conceivable" in some sense: one can certainly form the sentence in one's head and try out its logical implications. But one of those statements is as wrong as 2+2=5 nevertheless.
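As an aside, the asymmetry between statements 1 and 2 can be made concrete in a proof assistant: 2+2=4 is provable and 2+2=5 is refutable, so only the former is "conceivable" in the sense that implies possibility, even though both are perfectly well-formed sentences. A minimal sketch in Lean 4:

```lean
-- 2 + 2 = 4 is provable, here by direct computation:
example : 2 + 2 = 4 := rfl

-- 2 + 2 = 5 is refutable: we can write the sentence down,
-- but it admits no proof, only a disproof.
example : 2 + 2 ≠ 5 := by decide
```

Being able to state a proposition is cheap; whether it denotes a genuine possibility is settled only by working out its consequences.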
When someone claims that p-zombies are a conceivable thing at all, and that they have conceived of them, this doesn't actually say anything about the world or what is even possible; it just says they've formed a sentence in their head they think they can try out for its logical implications. But try telling them this. (I have, and haven't managed a sufficiently robust form of 6. to be convincing.)
(I still consider that the fundamental argument in favour of dualism is that its advocates really want it to be true, and that the p-zombie argument is like creationism for smart people.)
Replies from: David_Gerard
↑ comment by Nick_Tarleton ·
2012-02-01T21:19:17.639Z · LW(p) · GW(p)
The fact that you're saying that two things can be isomorphic or something close to it, and both say they're conscious, even though one is and one isn't — alternatively, the claim that you can know things about your substrate just through introspection (why do you think you're conscious if you think something closely analogous would say so and be wrong?) — seems analogous to saying that zombies can exist, though maybe not as clearly problematic. Does this make you worry?
↑ comment by Will_Newsome ·
2012-02-01T00:25:17.838Z · LW(p) · GW(p)
Running a simulation that doesn't model already-existing phenomena is like minting counterfeit currency. Knowingly going along with a popular delusion is like knowingly passing along fake bills. But if instead of naive classical simulation, one works with the entangled details of the quantum information theoretic world, then that counts as adding value to the economy and not as counterfeiting. One way of exerting optimization pressure in this way is to selectively collapse parts of the universal wave function. A superintelligence could even do this to what humans call the "past". That's kind of a big deal.
-- shit I say on Facebook
I mostly care about this kinda stuff 'cuz I'm afraid of demons (knowingly passing along fake bills).
ETA: I'd like to hear an explanation for the downvotes, for my amusement.
Replies from: David_Gerard