Consciousness
post by Mitchell_Porter · 2010-01-08T12:18:39.776Z · LW · GW · Legacy · 232 comments
(ETA: I've created three threads - color, computation, meaning - for the discussion of three questions posed in this article. If you are answering one of those specific questions, please answer there.)
I don't know how to make this about rationality. It's an attack on something which is a standard view, not only here, but throughout scientific culture. Someone else can do the metalevel analysis and extract the rationality lessons.
The local worldview reduces everything to some combination of physics, mathematics, and computer science, with the exact combination depending on the person. I think it is manifestly the case that this does not work for consciousness. I took this line before, but people struggled to understand my own speculations and this complicated the discussion. So the focus is going to be much more on what other people think - like you, dear reader. If you think consciousness can be reduced to some combination of the above, here's your chance to make your case.
The main exhibits will be color and computation. Then we'll talk about reference; then time; and finally the "unity of consciousness".
Color was an issue last time. I ended up going back and forth fruitlessly with several people. From my perspective it's very simple: where is the color in your theory? Whether your physics consists of fields and particles in space, or flows of amplitude in configuration space, or even if you think reality consists of "mathematical structures" or Platonic computer programs, or whatever - I don't see anything red or green there, and yet I do see it right now, here in reality. So if you intend to tell me that reality consists solely of physics, mathematics, or computation, you need to tell me where the colors are.
Occasionally someone says that red and green are just words, and they don't even mean the same thing for different cultures or different people. True. But that's just a matter of classification. It's a fact that the individual shades of color exist, however it is that we group them - and your ontology must contain them, if it pretends to completeness.
Then, there are various other things which have some relation to color - the physics of surface reflection, or the cognitive neuroscience of color attribution. I think we all agree that the first doesn't matter too much; you don't even need blue light to see blue, you just need the right nerves to fire. So the second one seems a lot more relevant, in the attempt to explain color using the physics we have. Somehow the answer lies in the brain.
There is one last dodge comparable to focusing on color words, namely, focusing on color-related cognition. Explaining why you say the words, explaining why you categorize the perceived object as being of a certain color. We're getting closer here. The explanation of color, if there is such, clearly has a close connection to those explanations.
But in the end, either you say that blueness is there, or it is not there. And if it is there, at least "in experience" or "in consciousness", then something somewhere is blue. And all there is in the brain, according to standard physics, is a bunch of particles in various changing configurations. So: where's the blue? What is the blue thing?
I can't answer that question. At least, I can't answer that question for you if you hold with orthodoxy here. However, I have noticed maybe three orthodox approaches to this question.
First is faith. I don't understand how it could be so, but I'm sure one day it will make sense.
Second, puzzlement plus faith. I don't understand how it could be so, and I agree that it really really looks like an insurmountable problem, but we overcame great problems in the past without having to overthrow the whole of science. So maybe if we stand on our heads, hold our breath, and think different, one day it will all make sense.
Third, dualism that doesn't notice it's dualism. This comes from people who think they have an answer. The blueness is the pattern of neural firing, or the von Neumann entropy of the neural state compared to that of the light source, or some other particular physical entity or property. If one then asks, okay, if you say so, but where's the blue... the reactions vary. But a common theme seems to be that blueness is a "feel" somehow "associated" with the entity, or even associated with being the entity. To see blue is how it feels to have your neurons firing that way.
This is the dualism which doesn't know it's dualism. We have a perfectly sensible and precise physical description of neurons firing: ions moving through macromolecular gateways in a membrane, and so forth. There's no end of things we can say about it. We can count the number of ions in a particular spatial volume, we can describe how the electromagnetic fields develop, we can say that this was caused by that... But you'll notice - nothing about feels. When you say that this feels like something, you're introducing a whole new property to the physical description. Basically, you're constructing a dual-aspect materialism, just like David Chalmers proposed. Technically, you're a property dualist rather than a substance dualist.
Now dualism is supposed to be beyond horrible, so what's the alternative? You can do a Dennett and deny that anything is really blue. A few people go there, but not many. If the blueness does exist, and you don't want to be a dualist, and you want to believe in existing physics, then you have to conclude that blueness is what the physics was about all along. We represented it to ourselves as being about little point-particles moving around in space, but all we ever actually had was mathematics and correct predictions, so it must be that some part of the mathematics was actually talking about blueness - real blueness - all along. Problem solved!
Except, it's rather hard to make this work in detail. Blueness, after all, does not exist in a vacuum. It's part of a larger experience. So if you take this path, you may as well say that experiences are real, and part of physics must have been describing them all along. And when you try to make some part of physics look like a whole experience - well, I won't say the m word here. Still, this is the path I took, so it's the one I endorse; it just leads you a lot further afield than you might imagine.
Next up, computation. Again, the basic criticism is simple, it's the attempt to rationalize things which makes the discussion complicated. People like to attribute computational states, not just to computers, but to the brain. And they want to say that thoughts, perceptions, etc., consist of being in a certain computational state. But a physical state does not correspond inherently to any one computational state.
There's also a problem with semantics - saying that the state is about something - which I will come to in due course. But first up, let's just look at the problems involved in attributing a non-referential "computational state" to a physical entity.
Physically speaking, an object, like a computer or a brain, can be in any of a large number of exact microphysical states. When we say it is in a computational state, we are grouping those microphysically distinct states together and saying, every state in this group corresponds to the same abstract high-level state, every microphysical state in this other group corresponds to some other abstract high-level state, and so on. But there are many many ways of grouping the states together. Which clustering is the true one, the one that corresponds to cognitive states? Remember, the orthodoxy is functionalism: low-level details don't matter. To be in a particular cognitive state is to be in a particular computational state. But if the "computational state" of a physical object is an observer-dependent attribution rather than an intrinsic property, then how can my thoughts be brain states?
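(To make the ambiguity concrete, here is a toy sketch in Python - the "microstates" are made-up integers - showing how one and the same physical trajectory realizes two different abstract machines under two different groupings:)

    # The same "microphysical" trajectory, observed twice over.
    trajectory = [0, 1, 2, 3, 4, 5] * 2

    def grouping_a(s):
        # Grouping A: carve by parity - two macrostates, "X" and "Y".
        return "X" if s % 2 == 0 else "Y"

    def grouping_b(s):
        # Grouping B: carve by residue mod 3 - three macrostates.
        return ["P", "Q", "R"][s % 3]

    def macro_history(grouping, micro):
        # Collapse the microstate trajectory into the macrostate history
        # it realizes under a given grouping, merging consecutive repeats.
        history = []
        for s in micro:
            m = grouping(s)
            if not history or history[-1] != m:
                history.append(m)
        return history

    print(macro_history(grouping_a, trajectory))  # X Y X Y ... - a two-state alternator
    print(macro_history(grouping_b, trajectory))  # P Q R P Q R ... - a three-state cycle

Nothing in the trajectory itself privileges the alternator over the three-state cycle; the choice of grouping does all the work.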
We didn't have this discussion before, so I won't try to anticipate the possible defenses of functionalism. No-one will be surprised, I suppose, to hear that I don't believe this either. Instead, I deduce from this problem that functionalism is wrong. But here's your chance, functionalists: tell the world the one true state-clustering which tells us the computation being implemented by a physical object!
I promised a problem with semantics too. Again I think it's pretty simple. Even if we settle on the One True Clustering of microstates - each such macrostate is still just a region of a physical configuration space. Thoughts have semantic content, they are "about" things. Where's the aboutness?
I also promised to mention time and unity-of-consciousness in conclusion. Time I think offers another outstanding example of the will to deny an aspect of conscious experience (or rather, to call it an illusion) for the sake of insisting that reality conforms entirely to a particular scientific ontology. Basically, we have a physics that spatializes time; we can visualize a space-time as a static, completed thing. So time in the sense of flow - change, process - isn't there in the model; but it appears to be there in reality; therefore it is an illusion.
Without trying to preempt the debate about time, perhaps you can see by now why I would be rather skeptical of attempts to deny the obvious for the sake of a particular scientific ontology. Perhaps it's not actually necessary. Maybe, if someone thinks about it hard enough, they can come up with an ontology in which time is real and "flows" after all, and which still gives rise to the right physical predictions. (In general relativity, a world-line has a local time associated with it. So if the world-line is that of an actually and persistently existing object, perhaps time can be real and flowing inside the object... in some sense. That's my suggestion.)
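(For concreteness, the "local time" in question is proper time. In the special-relativistic case it accumulates along a world-line as

    \tau = \int \sqrt{1 - v(t)^2 / c^2} \, dt

so each persisting object carries its own clock, whatever one says about a global flow of time.)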
And finally, unity of consciousness. In the debate over physicalism and consciousness, the discussion usually doesn't even get this far. It gets stuck on whether the individual "qualia" are real. But they do actually form a whole. All this stuff - color, meaning, time - is drawn from that whole. It is a real and very difficult task to properly characterize that whole: not just what its ingredients are, but how they are joined together, what it is that makes it a whole. After all, that whole is your life. Nonetheless, if anyone has come this far with me, perhaps you'll agree that it's the ontology of the subjective whole which is the ultimate challenge here. If we are going to say that a particular ontology is the way that reality is, then it must not only contain color, meaning, and time, it has to contain that subjective whole. In phenomenology, the standard term for that whole is the "lifeworld". Even cranky mistaken reductionists have a lifeworld - they just haven't noticed the inconsistencies between what they believe and what they experience. The ultimate challenge in the science of consciousness is to get the ontology of the lifeworld right, and then to find a broader scientific ontology which contains the lifeworld ontology. But first, as difficult as it may seem, we have to get past the partial ontologies which, for all their predictive power and their seductive exactness, just can't be the whole story.
232 comments
Comments sorted by top scores.
comment by Joanna Morningstar (Jonathan_Lee) · 2010-01-09T16:54:49.378Z · LW(p) · GW(p)
Projecting the ontology of your (flawed) internal representations onto reality is a bad idea. "Doing a Dennett" is also not dealt with, except by incredulity.
It's a fact that the individual shades of color exist, however it is that we group them - and your ontology must contain them, if it pretends to completeness.
This is simply not the case. The fact that we can compare two stimuli more accurately than we can identify a stimulus merely means that internally we represent reality with less fidelity than our senses theoretically can achieve. On a reductionist view, at most you've established that "greater than" and "round to nearest" are implemented in neurons. You do not need to have colour.
Let's unpack "blueness". It's a property we ascribe to objects, yet it's trivial to "conceive of" blueness independent of an object. Neurologically, we process colour, motion, edge finding and so on in parallel; the linking of them together occurs at a higher level. Furthermore the brain fakes much of the data, giving the perception of colour vision, for example, in regions of the visual field where no ability to discriminate colour exists, and cases of blindness with continued conscious perception of colour.
Brains compress input extensively; it would be crass to worry about the motion of every spot on a leopard separately - block them up as a single leopard. Asserting that the world must fit with our hallucination of reality lets you see things that are marginally visible, and get by with far worse sensory apparatus than needed. Cue optical illusions: this, this and this, for example. Individual shades don't exist as you want them to.
It is absurdly clear that the map your brain makes does not correspond to either the territory of your direct sense perception (at the retina) or reality. On precisely what basis do you assert to project from the ontology of a bad map to the territory?
"Blue" is a referent to properties of internal representations, which is translatable across multiple instances of primate brains. You say "X is blue", and I can check my internal representation of X to see whether I would categorise it as "blue". This does not require "blue" to be fundamental in ontology. There isn't a "blue thing" in physics, nor should there be. "Blue" existing means simply that there are things which this block of wetware puts in some equivalence class.
Let's move on to computation:
But if the "computational state" of a physical object is an observer-dependent attribution rather than an intrinsic property, then how can my thoughts be brain states?
Again, you seem to project from an internal map of your own brain to the territory. The fact that I can look at a computer at multiple levels - starting Excel, API calls, machine instructions, microcode, functional units on the CPU, adders/multipliers/whatever on the CPU, logic gates, transistors, current flows or probability masses in the field of electrons - does not in principle invalidate any of the above views as correct views of an operating computer. The observer dependence isn't an issue if (modulo translation/equivalence classes for abstraction between languages) they all give the same function or behaviour. You can block things up as many low level behaviours or a smaller number of high level ones; this doesn't invalidate a computational view. What is the computation implemented by starting Excel? What details do you care about? It doesn't matter to a functionalist, as the computations are equivalent, albeit in different languages or formalisms.
The critique of aboutness is similar to your issues over colour. You perceive "X is about Y" and thus assume it to be ontologically fundamental. Semantic content is a compressed and inaccurate rendition of low level states: useful for communicating and processing if you don't care about the details. Indeed the only reason we care about this kind of semantics is that our own wetware implements theory-of-mind directly. Good idea for predicting cognitive agents; not necessarily a true statement about the world. The "Y" that "X" is "about" is another contraction - an inferred property of a model.
"Time" is as flexible as your neural architecture wants it to be. Causality is a good idea, for Darwinian reasons, but people's perception of the flow of time is adjustable. I will point out that your senses imply strongly that the world is a 2D surface. Have you ever been able to see behind an object without moving your head? I haven't either, therefor clearly this 3D stuff is bunkum - the world is a flat plane and I directly percieve part of one side of it. Ditto time. Causality limits the state of a cognitive thing to be dependent on its previous states and its light cone at this point in space-time, and you percieve time to flow because you can remember previous brain states, and depending on them (compressed somewhat) is good for survival.
And now for unity of consciousness. It isn't unitary. Multiple personality, dissociative disorders, blindsight, sleepwalking, alien hand - need I go on? I perceive my own representation of reality to be unitary; I know for a fact that it's half made up. You claim that the individual issues "just can't" be the whole story. Why? Personal incredulity isn't an argument. The brain in the skull you call yours isn't just running a single cognitive entity. You move before even realising "you" were going to; you are unconscious of breathing until you decide to be. Why is a unitary consciousness fundamental? Why isn't it just a shortcut to approximate "you" and others in planning the future and figuring out the present?
Replies from: PhilGoetz, PhilGoetz, Mitchell_Porter
↑ comment by PhilGoetz · 2010-01-11T05:39:56.782Z · LW(p) · GW(p)
Re. blueness: Mitchell is talking about qualia. Google the hard problem of consciousness.
↑ comment by PhilGoetz · 2010-01-11T04:46:59.463Z · LW(p) · GW(p)
Furthermore the brain fakes much of the data, giving the perception of colour vision, for example, in regions of the visual field where no ability to discriminate colour exists,
Just a note - I don't disagree with your point; but the claim that we can't discriminate color in our peripheral vision is simply false. I've done some informal experiments with this, because I was puzzled that textbooks say that our peripheral vision is primarily due to rods, which can't detect color; yet I see color in my peripheral vision.
If I stand with my nose and forehead pressed against the wall, holding a stack of shuffled yellow and red sheets of origami paper behind my back, close my eyes, and then hold one sheet up in each of my outstretched arms, and open my eyes, so that the sheets are each 90 degrees out from my central vision and I see them both at the same time, I can distinguish the two colors 100% of the time.
There's a serious problem with resolution; but color doesn't seem to be affected at all in any way that I can detect by central vs. peripheral vision.
Replies from: Jonathan_Lee, Bo102010, RobinZ
↑ comment by Joanna Morningstar (Jonathan_Lee) · 2010-01-11T10:05:49.699Z · LW(p) · GW(p)
Of the same apparent intensity to a rod? If they're not, you'll guess correctly based on apparent brightness, and your brain fills in the colour based on memory of which colours of paper are around.
There are cones at low densities out to the periphery, but at levels too low to be reliable sources. For example, this notes that some monochromatic light is misidentified peripherally but not foveally, and that frequency discrimination drops by a factor of 50 or so.
↑ comment by RobinZ · 2010-01-11T15:37:54.698Z · LW(p) · GW(p)
Noting Jonathan_Lee's remarks, a suggestion for an experiment: place a monitor in the peripheral vision of the experimental subject which, at regular intervals, shows a random RGB color. The subject is to press a key indicating perceived color (e.g. [R]ed, [Y]ellow, [B]lue, [O]range, [G]reen, [P]urple, [W]hite, [B]lack) each time the color changes (perhaps with an audio cue?). Compare results to the same experiment with the monitor directly in front.
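(A minimal sketch of such a rig, assuming Python's standard tkinter library; the key map, interval, and logging are illustrative, and since [B]lue and [B]lack collide as hotkeys, "u" and "k" stand in for them here:)

    import random
    import time
    import tkinter as tk

    KEYS = {"r": "red", "y": "yellow", "u": "blue", "o": "orange",
            "g": "green", "p": "purple", "w": "white", "k": "black"}
    INTERVAL_MS = 3000  # show a new random color every 3 seconds

    root = tk.Tk()
    root.attributes("-fullscreen", True)
    log = []  # (shown hex color, perceived color, reaction time)
    state = {"color": None, "shown_at": None}

    def new_color():
        # Paint the whole screen a random RGB color and sound an audio cue.
        state["color"] = "#%02x%02x%02x" % tuple(random.randrange(256) for _ in range(3))
        state["shown_at"] = time.monotonic()
        root.configure(bg=state["color"])
        root.bell()
        root.after(INTERVAL_MS, new_color)

    def on_key(event):
        # Record the subject's guess and how long it took.
        guess = KEYS.get(event.char)
        if guess and state["color"]:
            log.append((state["color"], guess, time.monotonic() - state["shown_at"]))

    root.bind("<Key>", on_key)
    new_color()
    root.mainloop()

Run it once with the monitor in the periphery and once dead ahead, then compare the two logs.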
↑ comment by Mitchell_Porter · 2010-01-11T03:15:34.848Z · LW(p) · GW(p)
It seems you agree that colors, the flow of time, meanings, and the unity of experience all appear to be there. The general import of your remarks is that reality isn't actually like that, it only appears to be like that. You state how things are, and then you state what's happening in the brain in order to create certain appearances. Color and time are external appearances, meaning and unity are internal appearances.
Some of what you say about the imperfections of conscious representations is not an issue for me. The fidelity of the mapping between external states and conscious states only has an incidental bearing on the nature of the conscious states themselves. Whether color is sometimes hallucinated is not the issue. Whether a color is nothing but an equivalence class is the issue.
In this regard I have observed a number of positions taken. Some people are at the stage of saying, color is a neural classification and I don't see any further problem. Some people say that color is how it "feels" to make such a classification. Since Dennett takes care to deny that there is anything that is actually colored, even in the mind - there are only words, dispositions to classify, and so forth - he arguably wishes to deny even that there is a "color feeling", though it's hard to be sure.
My position is very simple. All these things (color, time, meaning, unity) exist in consciousness; which means that they exist in at least one part of reality. The elements and the modes of combination offered by today's scientific ontology do not suffice to generate them. Therefore, today's scientific ontology is wrong, incomplete, however you want to put it.
So if we are to have a discussion, you need to say less about the imperfections of consciousness as a medium of representation, and more about the medium itself. Do you agree that color, time, meaning, unity exist in consciousness? If so, can you identify the physical or computational property which supposedly corresponds to, or is identical to, "appearing to experience" each of these various phenomena?
Replies from: RobinZ, Jonathan_Lee
↑ comment by RobinZ · 2010-01-11T03:47:40.902Z · LW(p) · GW(p)
Since Dennett takes care to deny that there is anything that is actually colored, even in the mind - there are only words, dispositions to classify, and so forth - he arguably wishes to deny even that there is a "color feeling", though it's hard to be sure.
Citation or he didn't say it. Daniel Dennett coined the phrase "greedy reductionism" - partially to emphasize that he does not deny the existence of color, consciousness, etc. Unless you know of some place where he reversed his position, I would argue that you have misinterpreted his remarks. My understanding is that his position is that color is an idiosyncratic property of the human visual perception system with no simple referent in physics, not no referent at all.
(I normally wouldn't make such a big deal of it, but Dennett is one of the major figures on the physicalist side of this debate, and a mischaracterization of his views impedes the ability of bystanders to perform a fair comparison.)
Replies from: PhilGoetz, Bo102010, Mitchell_Porter
↑ comment by PhilGoetz · 2010-01-11T04:52:38.946Z · LW(p) · GW(p)
In Consciousness Explained, chapter 2, p. 28, Dennett says there is no purple in the brain when we see purple. That may be what he means.
I also heard Dennett quoted as saying there is no such thing as qualia, allegedly in "The Taboo of Subjectivity", p. 139, which I don't have.
Replies from: anonym, anonym, RobinZ, RobinZ
↑ comment by anonym · 2010-01-12T07:51:50.057Z · LW(p) · GW(p)
Here is a full quote that makes clear in exactly what sense he doesn't believe in qualia:
So when we look one last time at our original characterization of qualia, as ineffable, intrinsic, private, directly apprehensible properties of experience, we find that there is nothing to fill the bill. In their place are relatively or practically ineffable public properties we can refer to indirectly via reference to our private property-detectors — private only in the sense of idiosyncratic. And insofar as we wish to cling to our subjective authority about the occurrence within us of states of certain types or with certain properties, we can have some authority — not infallibility or incorrigibility, but something better than sheer guessing — but only if we restrict ourselves to relational, extrinsic properties like the power of certain internal states of ours to provoke acts of apparent re-identification. So contrary to what seems obvious at first blush, there simply are no qualia at all.
Originally in Quining Qualia, 1988, by Dennett, and quoted on Multiple-Drafts Model.
↑ comment by anonym · 2010-01-12T04:20:41.845Z · LW(p) · GW(p)
The Taboo of Subjectivity is a book by B. Alan Wallace. It appears that Dennett wrote a review for that work, but I couldn't find it online. Are you referring to that review, or to something else?
Replies from: anonym
↑ comment by anonym · 2010-01-12T04:59:43.871Z · LW(p) · GW(p)
I see what you meant now. Dennett was quoted in Wallace's book, on p.139. Sorry for the misunderstanding.
The quote, with some context, is:
Paul Churchland, one of the most prominent advocates of this view [eliminative materialism], declares that commonsense experience is probably irreducible to, and therefore incommensurable with, neuroscience; and for this reason familiar mental states should be regarded as nonexistent or at most as "false and misleading"^18. For similar reasons, philosopher Daniel Dennett bluntly asserts: "[t]here simply are no qualia at all."^19
18 . Paul M. Churchland, 1990, Matter and Consciousness: A Contemporary Introduction to the Philosophy of Mind, p. 41 & 48.
19 . Daniel C. Dennett, 1991, Consciousness Explained, p. 74.
↑ comment by RobinZ · 2010-01-22T13:55:04.549Z · LW(p) · GW(p)
Belatedly:
I think the reference on p. 28 is pointing out that the brain doesn't turn purple (and a purple brain wouldn't help anyway, as there are no eyes in the brain to see the purple). The remainder of the page is extending the example to further elaborate the problem of subjective experience.
I cannot find the reference to qualia quoted in The Taboo of Subjectivity at all - p. 74 is before Dennett even defines qualia, and p. 374 does not have those exact words - only the conclusion of a thought experiment illuminating his rejection of the concept.
↑ comment by Mitchell_Porter · 2010-01-11T04:36:44.996Z · LW(p) · GW(p)
He does not use the expression "color feeling", but here's a direct quote from Consciousness Explained, chapter 12, part 4:
You seem to be referring to a private, ineffable something-or-other in your mind's eye, a private shade of homogeneous pink, but this is just how it seems to you, not how it is... what it turns out to be in the real world in your brain is just a complex of dispositions.
He explicitly denies that there is any such thing as a "private shade of homogeneous pink" - which I would consider a reasonably apt description of the phenomenological reality. He also says there is something real, a "complex of dispositions". And, he also says that when we refer to color, we think we're referring to the former, but we're really referring to the latter. So, subjective color does not exist, but references to color do exist.
That still leaves room for there to be "appearances of pink". No actual pink, but also more than a mere belief in pink; some actual phenomenon, appearance, component of experience, which people mistakenly think is pink. But I see no trace of this. The thing which he is prepared to call real, the "complex of dispositions", is entirely cognitive (in the previous paragraph he refers to "innate and learned associations and reactive dispositions"). There is no reference to appearance, experience, or any other aspect of subjectivity.
Therefore, I conclude that not only does Dennett deny the existence of color (yes, I know he still uses the word, but he explicitly defines it to refer to something else), he denies that there is even an appearance of color, a "color feeling". In his account of color phenomenology, there are just beliefs about nonexistent things, and that's it.
Replies from: byrnema, Tyrrell_McAllister
↑ comment by byrnema · 2010-01-11T04:59:32.162Z · LW(p) · GW(p)
So, subjective color does not exist, but references to color do exist.
The references to red together definitely form a physical network in my brain, right? I have a list of 10,000 things in my memory that are vividly red, some more vivid than others, and they're all potentially connected under this label 'red'. When that entire network is stimulated (say, by my seeing something red or imagining what "red" is), might I not also give that a label? I could call the stimulation of the entire network the "essence of red" or "redness" and have a subjective feeling about it.
I'm certain this particular theory about what "redness" is occurs frequently. My question is, what's missing in this explanation from the dualist point of view? Why can't the subjective experience of red just be the whole network of red associations being simultaneously excited as an entity?
Above you wrote
Some people are at the stage of saying, color is a neural classification and I don't see any further problem.
So I guess I'm just asking, what's the further problem? (If you've already answered, would you please link to it?)
↑ comment by Tyrrell_McAllister · 2010-01-11T04:45:46.739Z · LW(p) · GW(p)
What are in those ellipses? In what you quote, I see that he's denying that it's "a private, ineffable something-or-other in your mind's eye". From what else I've read of Dennett, I'm sure that he has a problem with the "private" and "ineffable" part. Is it so clear that he has a problem with the "component of experience" part?
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter · 2010-01-14T06:19:07.975Z · LW(p) · GW(p)
In the book, a character called Otto advocates the position that qualia exist. The full passage is Dennett making his case to Otto once again:
What qualia are, Otto, are just those complexes of dispositions. When you say "This is my quale", what you are singling out, or referring to, whether you realize it or not, is your idiosyncratic complex of dispositions. You seem to be referring to a private, ineffable something-or-other in your mind's eye, a private shade of homogeneous pink, but this is just how it seems to you, not how it is. That "quale" of yours is a character in good standing in the fictional world of your heterophenomenology, but what it turns out to be in the real world in your brain is just a complex of dispositions.
Replies from: Morendil
↑ comment by Morendil · 2010-01-14T07:51:32.626Z · LW(p) · GW(p)
And how would you answer that passage of Dennett's?
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter · 2010-01-14T08:45:43.013Z · LW(p) · GW(p)
"Dear Dan - the shade of pink is real. In denying its existence, you are getting things backwards. The important methodological maxim to remember is that appearances are real. This does not mean that every time there is an appearance of an apple, there is an apple. It just means that every time there is an appearance of an apple, there is an appearance of an apple. It also does not mean that every time someone thinks there is an appearance of an apple, there is one. People can be mistaken in their auto-phenomenology - but not as mistaken as you would have us believe."
Husserl, who was only concerned with getting phenomenology right and not with any underlying ontology, had a "principle of principles" which expresses the first half of what I mean by "appearances are real":
everything originarily offered to us in "intuition" is to be accepted simply as what it is presented as being, but also only within the limits in which it is presented there.
In Husserl, every mode of awareness is a form of intuition, including sense perception. He's saying that every appearance has an element of certainty, but only an element.
Appealing to Husserl may be overkill, but the point is, there is a limit to the degree one can plausibly deny appearance. Denying the existence of color in the way Dennett appears to be doing is like saying that 0 = 1 or that nothing exists - it's only worth doing as an exercise in cognitive extremism; try believing something impossible and see what happens.
However, people do end up believing weird things out of apparent philosophical necessity. I think this is what is going on with Dennett; he does understand that there is nothing like that shade of pink in standard physical ontology, so rather than engage in a spurious identification of pinkness with some neural property, he just says there is no pink. It's just a word. It's there to denote a bundle of cognitive and behavioral dispositions. But there is no pink as such, outside or inside the head.
He's willing to take this drastic step because the truth of physics seems so nailed down, so indisputable. However, there is a sense in which we do not know what physics is about. It's a black-box causal structure, whose inputs and outputs show up in our experience looking a certain way (looking like objects distributed in space). But that doesn't tell us how they are in themselves.
If you take the Husserlian principle ontologically - conscious experience is offering us a glimpse of the genuine nature of one small sliver of reality, namely, what happens in consciousness - and combine it with a general commitment to the causal structure of physics, you get what I'm now calling Reverse Monism. Reverse, because it's the reverse of the usual reductionism. The usual reductionism says this appearance, this part of consciousness, is actually atoms in space doing something. Reverse monism says instead: this appearance must be what some part of physics (some part of the physical brain) actually is.
If the usual reductionistic accounts of conscious experience were plausible as identities, reverse monism wouldn't introduce anything new; it would just be looking at the same identity from the other end of the equation. However, the only thing these alleged identities have going for them, generally, is a common causal role. The thing which is supposed to be the neural correlate of blueness is in the right position to be caused by blue light and to get a person talking about blueness. But the thing in itself (e.g. cortical neurons firing) is nothing like blueness as such.
Now as it happens, all these theories about the neural correlates of consciousness (such as Drescher's gensyms) are speculative in the extreme. We're not talking about anything as well-founded as the Krebs cycle or the inverse square law; these are speculations about how the truth might be. So we are not under any obligation to consider the mismatch between subjective ontology and neural ontology which occurs in these theories as itself an established fact, that we just have to learn to live with. We are free to look for other theories in which an ontologically plausible identity, and not just a causally adequate identity, is posited. That's what I'm on about.
Replies from: Morendil
↑ comment by Morendil · 2010-01-14T09:51:01.919Z · LW(p) · GW(p)
Husserl couldn't know what Dennett knows about the biology, psychology and evolutionary history of color perception.
Time and again you sweep aside the "bundle of cognitive and behavioral dispositions" Dennett refers to in his reply to Otto, in your appeal to the primacy of "redness" or "pinkness".
This has some intuitive appeal, because "red" and "pink" are short words and refer to something we experience as simple. Your position would be much harder to defend if you were looking for "the private, ineffable feeling of reading Lesswrong.com" as one commenter suggested: people would have an easier time denying the existence of that.
Yet - even though I'm not entirely sure that's what this commenter had in mind - I would say there is only a difference of degree, not of kind, between "the feeling of redness" and "the feeling of reading Lesswrong". The feeling of seeing the color red really is a complex of dispositions, something cobbled together from many parts over our long evolutionary history. The more we learn about color, the more complex it turns out to be. It only feels simple because it's a human universal.
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter · 2010-01-14T10:23:51.237Z · LW(p) · GW(p)
The "feeling of reading LessWrong" can be analysed in great detail. There's a classic work of phenomenology, Roman Ingarden's The Literary Work of Art, which goes into the multiple "strata" of meaning which turn the examination of small black shapes on white paper into the imagination of a possible world. Participating in a discussion like this involves a stream of complex intentional experiences against a steady background of embodied sensation.
Color experience is certainly not beyond further analysis, even at the phenomenological level. The three-dimensional model of hue, saturation, and intensity is a statement about the nature of subjective color. The idea that experiences are ineffable is just wrong. We're all describing them every day.
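(That three-axis model is concrete enough to compute with; Python's standard colorsys module, for instance, implements the hue/saturation/value decomposition - the RGB triple below is just an arbitrary bluish example.)

    import colorsys

    # Channels scaled to [0, 1].
    h, s, v = colorsys.rgb_to_hsv(0.2, 0.4, 0.9)
    print(h, s, v)  # hue, saturation, value: three numbers per color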
No amount of intricate new knowledge about the way that color perception varies or the functions that it performs can actually abolish the phenomenon. And most materialists don't try to abolish it, they try to identify it with something material. I think Dennett is trying to abolish phenomena as realities, in favor of a cognitive behaviorism, but that is really a topic for Dennett interpreters.
Instead, I want to know about your phenomenology of color. I assume that in fact you have it. But I'm curious to know, first, whether you'll admit to having it, or whether you prefer to talk about your experience in some other way; and second, how you describe it. Do you look at color and think "I'm seeing a bundle of dispositions"? Do you tell yourself "I'm not actually seeing it, I'm just associating the perceptual object with a certain abstract class"?
Replies from: Morendil
↑ comment by Morendil · 2010-01-14T10:58:09.815Z · LW(p) · GW(p)
I'm not sure I ever "look at color" in isolation. There are colors and arrangements of color that I like and that I'll go out of my way to experience; I'm looking forward to an exhibition of Soulages' work in Paris, for instance.
When I look at a Soulages painting my inner narrative is probably something like "Wow, this is black... a luminous black which emphasizes straight, purposive brushstrokes in a way that's quite different from any other painter's use of color I've seen; how puzzling and delightful." It's different from the reflective black of my coffee cup nearby, the matte black of my phone handset or the black I see when I close my eyes. When I see my coffee cup I'm mostly seeing the reflections, when I see the handset it's the texture that stands out, when I close my eyes the black is a background to a dance of random splotches and blobs.
When I think about my perception of black in all the above instances I am certainly thinking in terms of dispositions and of abstract tags. There isn't a unitary "feeling of black" that persists after these various experiences of things I now call black.
↑ comment by Joanna Morningstar (Jonathan_Lee) · 2010-01-11T11:10:23.225Z · LW(p) · GW(p)
External only in that wetware is modelling something outside of the skull, rather than its internal state. The intent was to state that merely because you perceive reality along certain ontological lines does not imply that reality has the same ontology.
This should be particularly obvious when your internal sense fails to correspond to reality; if conscious states are an imperfect guide to external states then why should the apparent ontology of consciousness be accurate?
In this regard I have observed a number of positions taken.
None of which you refute here or in the OP, especially those who deny that "blueness" is a veridical property of reality.
All these things (color, time, meaning, unity) exist in consciousness; which means that they exist in at least one part of reality.
No; it means that something referencing them exists in some part of reality (your skull). An equivalence relation; an internal tag that this object is blue.
To counter the realism, consider mathematicians, who consciously deal in infinite sets, or all theorems provable under some axioms (model theory). Just because something appears plainly to you does not mean it exists. Kant says it better than I can.
Do you agree that color, time, meaning, unity exist in consciousness?
Not if you mean more than perception by consciousness. Even in perception, they're just the ontology imposed by our neurology, and have neural correlates that suffice.
Consciousness isn't prior to perception or action; it's after it. There isn't a homunculus in there for experience to "appear to". If anything, there's a compressed model of your own behaviour into which experience is fed; that's the "you" in the primate - a model of that same primate for planning and counterfactual reasoning.
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter · 2010-01-11T11:31:20.497Z · LW(p) · GW(p)
Do you agree that color, time, meaning, unity exist in consciousness?
Not if you mean more than perception by consciousness. Even in perception, they're just the ontology imposed by our neurology, and have neural correlates that suffice.
Let's suppose I have a hallucinatory perception of a banana. So, there's no yellow object outside my skull - we can both agree on that. It seems we also agree that I'm having a yellow perception.
But we part ways on the meaning of that. Apparently you think that even my hallucination isn't really yellow. Instead, there's some neural thing happening which has been tagged as yellow - whatever that means.
I really wonder about how you interpret your own experience. I suppose you experience colors just like I do, but (when you think about it) you tell yourself that what naively seems to be a matter of seeing a yellow object is actually experiencing what it's like to have a perception tagged as yellow. But how does that translate, subjectively? When you see yellow, do you tell yourself you're seeing the tag? Do you just semi-visualize a bunch of neurons firing in a certain way?
Replies from: SilasBarta, RobinZ, Jonathan_Lee
↑ comment by SilasBarta · 2010-01-11T18:12:03.973Z · LW(p) · GW(p)
I suppose you experience colors just like I do, but (when you think about it) you tell yourself that what naively seems to be a matter of seeing a yellow object is actually experiencing what it's like to have a perception tagged as yellow. But how does that translate, subjectively? When you see yellow, do you tell yourself you're seeing the tag? Do you just semi-visualize a bunch of neurons firing in a certain way?
We went over this issue a bit in the previous discussion. My response (following Drescher) was: "To experience [yellow] is to feel your cognitive architecture assigning a label to sensory data."
As I elaborated:
... the phenomenal experience of blue is what it is like to be a program that has classified incoming data as being a certain kind of light, under the constraint of having to coherently represent all of its other data (other colors, other visual qualities, other senses, other combined extrapolations from multiple senses, etc) but with limited comparison abilities.
The point being: I can't give a complete answer now, but I can tell you what the solution will look like. It will involve describing how a cognitive architecture works, then looking at the distinctions it has to make, then looking at what constraints these distinctions operate under (e.g. color being orthogonal to sound [unless you have synaesthesia], etc.), then identifying what parts of the process can access each other.
Out of all of that, only certain data representations are possible, and one of these (perhaps, hopefully, the only one) is the one with the same qualities as our perception of color. You know you're at the solution, when you say, Aha! If I had to express what information I receive, under all those constraints, that is what qualities it would need to have.
To that, you replied:
But I still do not see where the color is. ...
Though you object to the comparison, this is the same kind of error as demanding that there be a fundamental "chess thing" in Deep Blue. There is no fundamental color, just as there is no fundamental chess. There is only a regularity the system follows, compressible by reference to the concept of color or chess.
↑ comment by RobinZ · 2010-01-11T16:48:56.507Z · LW(p) · GW(p)
I suppose you experience colors just like I do, but (when you think about it) you tell yourself that what naively seems to be a matter of seeing a yellow object is actually experiencing what it's like to have a perception tagged as yellow.
I am intrigued by your wording, here. I suppose I experience colors just like you do, but - when I think about it - I tell myself that what is, in fact, seeing a yellow object is, in fact, the same thing as experiencing what it's like to have a perception tagged as yellow. I believe these descriptions to be equivalent in the same sense that "breaking of hydrogen bonds between dihydrogen monoxide molecules, leading to those molecules traveling in near-independent trajectories outside the crystalline structure" is equivalent to "ice sublimating".
↑ comment by Joanna Morningstar (Jonathan_Lee) · 2010-01-11T12:14:30.044Z · LW(p) · GW(p)
But we part ways on the meaning of that. Apparently you think that even my hallucination isn't really yellow. Instead, there's some neural thing happening which has been tagged as yellow - whatever that means.
The relevant part of the visual cortex which fires on yellow objects has fired; the rest of your brain behaves as if there were a yellow banana out in front of it. "Tagging" seemed like the best high level term for it. A collection of stimuli is being collected together as an atomic thing. There's a neural thing happening, and part of that neural thing is normally caused by yellow things in the visual field.
The most obvious point where it has subjective import is when things change[1]. I probably experience colours as you do; when I introspect on colour, or time, I cannot find good cause to distinguish it from "visualising" an infinite set or a function. The only apparent difference is that reality isn't under conscious control. I don't assume that the naive ontology that is presented to me is a true ontology.
[1] There is a pair of coloured mugs (blue and purple) that I can't distinguish in my peripheral vision, for example. When I see one in my peripheral vision, it is coloured (blue, say); when I look at it directly, there is a period in which it is both blue and purple, as best I can describe, before definitively becoming purple. Head MRIs do this too.
Edit: The problem is that there isn't an easy way to introspect on the processes leading to perceptions; they are presented ex nihilo. As best I can tell, there's no good distinguisher of my senses from "experiencing what it's like to have a perception tagged as yellow".
comment by SilasBarta · 2010-01-08T18:32:31.092Z · LW(p) · GW(p)
You really need to link the previous post -- and important subthreads, 1, 2, 3 -- when you make a post like this. Other people need to be able to easily access the discussions you refer to. (Yes, that compilation may be biased toward what I was involved in.)
There were several notable replies that you also should have accounted for here, including:
1) How "where is color?" question is turned around to "where is the chess in Deep Blue?"
2) The Gary Drescher approach of equating qualia with generated symbols.
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter · 2010-01-10T02:01:33.878Z · LW(p) · GW(p)
I thought of linking, but I wanted a fresh start, something self-contained. It's a debatable choice. What I really wish is that when I posted this two days ago, I'd thought of creating in advance a thread for each major question. It will be difficult to migrate the discussion there now, but I would like to try.
Responding briefly:
1) I think "where is color?" and "where is chess?" are just different sorts of questions. The latter is an instance of "where is meaning?" or "where is computation?". Because meaning and computation can be imputed to symbols and artefacts by convention, the where-is-chess discussion needs to keep the human original in view as well. A systematic answer should first say where is chess when humans play each other. Then we can talk about chess computers playing each other, and whether that situation contains chess only by convention, or intrinsically, or whether machine chess is a mixture of intrinsic and attributed meaning and computation.
2) If you wish to speak of there being symbols in the brain, again, you have to take a position on computation and meaning. Then, if you've managed to identify an actual physical thing or property which you think can be called a symbol, then you need to explain how to get color out of that particular physical thing.
comment by Paul Crowley (ciphergoth) · 2010-01-08T16:12:50.601Z · LW(p) · GW(p)
Let me try coming at this another way. What would you not expect in a Turing-implementable Universe?
- Life
- Life that perceives, eg, threats (ie has organs adapted to be sensitive to things like light, and reacts adaptively when these organs get signals correlated with the presence of a threat)
- Life that perceives threats and reacts to them in such a way that other closely related living things react to their reaction as if the threat was there (ie, some form of communication)
- Life whose later actions are adaptively changed by earlier perceptions (in the sense of perceptions above), ie memory
- Life that communicates the perceptions it remembers
- Life whose communication has grammar, so it can say things like "I saw a tiger yesterday" or "I saw a red thing"
- Life that asks what is red about red
- something else?
EDIT NB: I'm asking what you see that you would not expect to see if you were looking into a Turing-universe from the outside. If your position is that there's nothing in this Universe visible to an external observer that shows it to be non-Turing, including our utterances, please make that explicit.
Replies from: Mitchell_Porter, Larks, ciphergoth
↑ comment by Mitchell_Porter · 2010-01-10T06:48:30.600Z · LW(p) · GW(p)
By life I assume you mean replicators.
Turing computability is not much of an issue for me. It amounts to asking whether the state transitions in an entity can be abstractly mimicked by the state transitions in a Turing machine. For everything in your list, the answer ought to be, yes it can.
However, that is a very limited and abstract resemblance. You could represent the mood of a person, changing across time, with a binary string. But to say that their sequence of moods is a binary string is descriptive rather than definitional.
It sounds like you do want to reduce all mental or conscious phenomena to strictly computational properties. Not just to say that the mind has certain computational properties, but that it has nothing but such properties; that the very definition of a mind is the possession of certain computational properties or capacities.
To do this, first you will need to provide an objective criterion for the attribution of computational properties, such as states. You can chop up a physical state space in many ways, so as to define "high-level" states; which such clustering of physical states, out of all the possibilities, is the one that you will use, and why? Then, you may need to explain what is computational about these states. If you want to say that they have representational content, for example, you will need to say what they are representing and on what basis you attribute this meaning to them. And finally, if you also wish to say that sensory qualities like colors are nothing but computational properties, you will need to say which computational properties, and something about why they "feel" that way.
All of this assumes that you agree that color and meaning do exist in experience. If they are there, they need to be explained. If they do not need to be explained, it can only be because they are not there.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2010-01-10T09:56:08.557Z · LW(p) · GW(p)
So your position is that there's no problem accounting for everything we observe in human behaviour, including behaviour like saying "where does subjective experience come from?", with a physics much like standard physics; but to account for the actual subjective experience itself we need a new physics? That current physics leaves us with a world of what I term M-zombies, who talk about subjective experience but don't have any?
↑ comment by Larks · 2010-01-09T18:22:01.851Z · LW(p) · GW(p)
I imagine you would expect all those; one would simply not expect the subjective experience of the colour blue.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2010-01-09T18:36:00.642Z · LW(p) · GW(p)
You seem to have given exactly the reply that my "EDIT NB", added before your reply, was designed to forestall.
Can you state that in terms of what you would see looking in from the outside? For example, do you think you would not see life that used phrases such as "the subjective experience of the colour blue"?
Replies from: Larks
↑ comment by Larks · 2010-01-09T18:49:56.614Z · LW(p) · GW(p)
I meant I did agree with you, and that externally everything would appear exactly the same. However, from what I think is Mitchell Porter's point of view, the one thing you would not expect from such a universe is the possibility of being inside it. P-Zombies, I suppose.
EDIT: Also, sorry for not being clear re your NB.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2010-01-09T19:22:35.447Z · LW(p) · GW(p)
Ah, OK! I don't know whether anyone's going to try to mount a zombie-based defence of Porter's position. These are the articles it would need to reply to. M-zombies would be distinct from P-zombies in that Porter believes that physics can account for our non-zombieness, but M-zombies would still write articles asking where subjective experience comes from, even though they don't have any.
↑ comment by Paul Crowley (ciphergoth) · 2010-01-09T19:31:38.346Z · LW(p) · GW(p)
EDIT: commenters below have caused me to think better of my impatient tone below. Please imagine strikethrough through the comment below.
In the absence of any replies[*] from you more than 24 hours after you posted the original article, which I think is a little rude and I hope is accounted for by unexpected real-world constraints, I shall resort to further attempts to anticipate and forestall possible replies.
"I don't know" isn't an acceptable answer either. The question isn't "what will happen in such a Universe", it's "at what point to you balk at the possibility". You balk before the end of "it could be just like our Universe" and after the beginning (which is, say, the game of Life) so you have to be able to identify a balk point somewhere on the scale.
EDIT: would appreciate downvote explanation - thanks! EDIT: [*] to any comments in this thread, not just to my comments - thanks Alicorn for prompting me to clarify
Replies from: Alicorn, Morendil, SilasBarta
↑ comment by Alicorn · 2010-01-09T19:43:51.007Z · LW(p) · GW(p)
This is an asynchronous medium, and Mitchell_Porter is not obliged to address your inquiry anyway. It's possible he hasn't even seen your comment. Perhaps you could send him a PM, which would be harder for him to miss, and ask him if he'd have a look at your question without accusing him of being rude for not having done so already.
Edit: This comment also serves as your downvote explanation.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2010-01-09T19:46:33.549Z · LW(p) · GW(p)
Thanks, it's good of you to explain.
Not my enquiry specifically - he's made no comments at all since posting the article. I think that if you make a top level post you do have an obligation to take part in the subsequent discussion.
Replies from: Alicorn
↑ comment by Alicorn · 2010-01-09T19:48:13.278Z · LW(p) · GW(p)
If I'd written a post that'd gotten downvoted into the negative that decisively, I'd take a day or two off to avoid posting extremely defensive comments. I have no idea if that's what Mitchell is doing, but while he probably should make some attempt to field comments on his post, chiding him for being untimely is not nice.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2010-01-09T19:59:41.129Z · LW(p) · GW(p)
From the votes, it looks like people agree with you rather than me on this, which I take seriously. If anyone else wants to downvote me on this one, I'd slightly prefer they downvote the grandparent comment rather than my one above that, so I know it's the chiding rather than the argument that's getting downvoted.
↑ comment by Morendil · 2010-01-09T20:22:40.975Z · LW(p) · GW(p)
Needling your interlocutor for a prompt reply makes it sound as if you're more interested in "winning the debate" than in getting a considered reply from them. If it takes someone a couple of days to let the dust settle, consider possible counter-arguments or lines of retreat, and frame a careful reply, don't begrudge them that.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-09T21:57:07.742Z · LW(p) · GW(p)
I'd like to think about this more, but what you say sounds convincing just now. I've been ill this week, which is why I've been online so much, which may be affecting my judgement.
↑ comment by SilasBarta · 2010-01-09T20:16:38.834Z · LW(p) · GW(p)
If it makes you feel any better, in the last discussion, several posters referenced my explanation, which you would think would bump me up on his reply priority list. It didn't.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-09T21:58:59.476Z · LW(p) · GW(p)
While I'm hoping for my comments to receive a reply, I'm looking forward to all his replies. We enjoy such a high standard of debate here that it makes me impatient for more.
comment by Psychohistorian · 2010-01-09T04:16:44.867Z · LW(p) · GW(p)
Asking about the blueness of blue, or anything to do with color, is deliberately misleading. You admit that seeing blue is an event caused by the firing of neurons; the fact that blue light stimulating the retina causes this firing of neurons is largely beside the point. The question is, simply, "How does neurons firing cause us to have a subjective experience?"
The best answer that I think can be given at this point is, "We don't really know, but it's extremely unlikely it involves magic, and if we knew enough to build something that worked almost exactly like our brain, it'd have subjective experience too."
As for the "type" distinction, the idea that blueness in the brain must emerge from some primal blueness, or whatever exactly you're trying to argue, seems like a serious mistake in categories. As another commenter said, there's no chess in deep blue. There's no sense of temperature in a thermostat, no concept of light in a photoreceptor, no sense of lesswrong.com in my CPU. The demand that blueness cannot be merely physical seems to be the mere protestations of a brain contemplating that which it did not evolve to contemplate.
Replies from: Technologos↑ comment by Technologos · 2010-01-09T20:14:43.268Z · LW(p) · GW(p)
I wonder if "How does neurons firing cause us to have a subjective experience?" might be unintentionally begging Mitchell_Porter's question. Best I can tell, neurons firing is having a subjective experience, as you more or less say right afterwards.
comment by Tyrrell_McAllister · 2010-01-08T16:00:00.478Z · LW(p) · GW(p)
But a common theme seems to be that blueness is a "feel" somehow "associated" with the entity, or even associated with being the entity. To see blue is how it feels to have your neurons firing that way.
This is the dualism which doesn't know it's dualism.
As a reductionist who disagrees with your overall critique of reductionism, I have to say that you hit the nail on the head here. Some self-styled reductionists do seem prone to "explaining" subjective experience by saying that it's nothing more than what certain algorithms feel like from the inside. As you say, that's really a dualist account if you leave it there.
Replies from: None↑ comment by [deleted] · 2010-01-08T18:25:16.391Z · LW(p) · GW(p)
My problem is I don't see how you can avoid a "that's how an algorithm feels from the inside" explanation somewhere down the line. Even if you create some theory that purports to account for the (say) mysterious redness of red, isn't there still a gap to bridge between that account and whatever your subjective perception - your feeling - of red is? I'm confused as to what an 'explanation' for the mysterious redness of red would even look like.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2010-01-08T18:45:40.405Z · LW(p) · GW(p)
If you can't even imagine what an answer would look like, you should doubt that you've successfully asked a question.
That's not supposed to be a conversation-stopper. It's just that the first step in the conversation should be to make the question clear.
Replies from: None, LauraABJ↑ comment by [deleted] · 2010-01-09T01:54:14.995Z · LW(p) · GW(p)
This is a useful heuristic, but if anything it seems to dissolve the initial question of "Where's the qualia?" As DanArmak and RobinZ, channeling Dennett, point out elsewhere in the thread, questions about qualia don't appear to be answerable.
↑ comment by LauraABJ · 2010-01-09T00:47:39.541Z · LW(p) · GW(p)
What I think Mitchell is looking for (and he can correct me if I'm wrong) as an explanation of experience is some model that describes the elements necessary for experience and how they interact in some quantitative way. For example, let's pretend that flesh brains are not the only modules capable of experience, and that we can build experiences out of other materials. A theory of experience would help to answer: what materials can be used, what processing speeds are acceptable (i.e., can experience exist in stasis), what cpus/processors/algorithms must be implemented, and what outputs will convince us that experience is taking place (vs. creating a Chinese Room).
Now, I don't think we will have any way of answering these questions before uploading/AI, but I can conceive of ways of testing many variables in experience once a mind has been uploaded. We could change one variable, ask the subject to describe the change, change it back and ask the subject what his memory of the experience is, and so on. We can run simulations that are deliberately missing normal algorithms until we find which pieces of a mind are the bare-bones essentials of experience.
To me this is just another question for the neuroscientists and information theorists, once our technology is advanced enough to actually experiment on it. It is only a 'problem' if you believe p-zombies are possible, and that we might create entities that describe experience without having it.
Replies from: Liron
comment by thomblake · 2010-01-08T15:33:18.162Z · LW(p) · GW(p)
I don't understand where this perceived confusion comes from (despite, or perhaps because of, having read much of the relevant literature).
If we have an electronic device that emits light at 450 THz and another that detects light and reports what "color" it is (red), then we can build/execute all of that without accounting for "redness" (except of course in the step where it decides what to call the "color"). Is there a problem here?
Is color a special topic here? Do we have the same issues in phenomenology of sound?
If we have an electronic device that outputs sound at 261.63 Hz and another that detects sound and reports the "musical note" (middle C) then we can build/execute all of that without reference to "middle C -ness". Is this a problem?
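To make that concrete, here is a minimal sketch of both devices (the thresholds are purely illustrative, not real device specs): frequency goes in, a label comes out, and nothing answering to "redness" or "middle-C-ness" appears anywhere in the pipeline.
```python
# Toy versions of the two devices (illustrative thresholds only):
# frequency in, label out, no qualia required anywhere.

def color_label(light_thz: float) -> str:
    """Crude lookup from light frequency (THz) to a color word."""
    if 400 <= light_thz < 480:
        return "red"
    if 610 <= light_thz < 670:
        return "blue"
    return "some other color"

def note_label(sound_hz: float) -> str:
    """Crude lookup from sound frequency (Hz) to a note name."""
    return "middle C" if abs(sound_hz - 261.63) < 5 else "some other note"

print(color_label(450.0))   # -> red
print(note_label(261.63))   # -> middle C
```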
Replies from: MrHen, MatthewB, SilasBarta, Nubulous↑ comment by MrHen · 2010-01-08T20:20:40.131Z · LW(p) · GW(p)
I would like to see a better reply to this comment. Why doesn't this address the problem of Color from the OP? Is it because the jump from wavelength to the label of "Blue" hasn't been defined?
From the OP:
So if you intend to tell me that reality consists solely of physics, mathematics, or computation, you need to tell me where the colors are.
I'm not trying to be tricksy or smart. I am trying to understand the question and why the above isn't an answer. In essence, all confusion would be lifted if you replaced the words "red" and "green" with something else in the following paragraph:
Whether your physics consists of fields and particles in space, or [...] whatever - I don't see anything red or green there, and yet I do see it right now, here in reality.
As in,
Whether your physics consists of fields and particles in space, or [...] whatever - I don't see anything {word A} or {word B} there, and yet I do see {word C} right now, here in reality.
Words A, B, and C all belong to some category and that category is not "seen" in physics but is "seen" in reality. Thomblake is trying to use "wavelengths":
Whether your physics consists of fields and particles in space, or [...] whatever - I don't see [any wavelengths] there, and yet I do see [wavelengths] right now, here in reality.
This makes no sense to me, so something must be wrong in my translations. What is it?
Replies from: Mitchell_Porter, Dustin↑ comment by Mitchell_Porter · 2010-01-11T03:45:29.803Z · LW(p) · GW(p)
Physics contains waves and physics contains lengths, so if someone were to say "physics contains wavelengths!", I wouldn't object, because I can see the wavelengths in the ontology of physics. But if someone says "physics contains colors", I don't see them and I have a problem; and if someone says "colors are wavelengths", I also have a problem, because I don't see what's color-like about a wavelength. How does being 650 nanometers long make an object red?
Most people here aren't saying red is a wavelength anyway. They're saying red is an aspect of a brain state normally caused by light of a certain wavelength arriving at the eye. But the problem is the same, it's just that the physical property supposedly identical with "being red" is far more complex and not completely specified.
Replies from: MrHen↑ comment by MrHen · 2010-01-11T15:11:34.644Z · LW(p) · GW(p)
Thank you for answering.
[I]f someone says "colors are wavelengths", I also have a problem, because I don't see what's color-like about a wavelength. How does being 650 nanometers long make an object red?
I guess my only response is that if you change the wavelength, its redness disappears. If you return it to the right wavelength, redness returns. Similar experiments can be done for each color in turn.
Presumably, experiments can also be done to damage the eye so that it doesn't respond to certain frequencies and the ability to perceive redness disappears.
Do these things imply some connection between redness and wavelengths? If not, then I feel I still am not understanding what you mean by Color. What is Color? Or is that the whole point of the question?
ETA: After thinking a little more, I may have gotten closer to understanding. The relationship between wavelengths and color may only go one way. Wavelengths may turn into Colors via the eye, but not every experience of a Color implies wavelengths hitting the eye. Examples of the latter are hallucinations and dreams. So the question remains, "Where is the color?" If the answer is "Wavelengths," where are the Wavelengths when I am dreaming?
Am I close?
ETA2: Further thoughts on perceiving redness: If you were never able to perceive wavelengths that correlate to redness, would you know redness? If your eyes were damaged to stop seeing red you would probably continue to dream with red. But if you have never seen red, would you dream in red? This is relevant to discovering the source of redness in non-wavelength related experiences which is slightly different than the question, "Where is the color?"
Replies from: RobinZ↑ comment by RobinZ · 2010-01-15T23:33:13.557Z · LW(p) · GW(p)
ETA2: Further thoughts on perceiving redness: If you were never able to perceive wavelengths that correlate to redness, would you know redness? If your eyes were damaged to stop seeing red you would probably continue to dream with red. But if you have never seen red, would you dream in red? This is relevant to discovering the source of redness in non-wavelength related experiences which is slightly different than the question, "Where is the color?"
I don't know if colorblind people dream in color, but colorblind synesthetes can experience colors their eyes don't register.
Replies from: AdeleneDawner, MrHen↑ comment by AdeleneDawner · 2010-01-15T23:51:00.338Z · LW(p) · GW(p)
I wish they had a better description of that. I'm synesthetic, with normal color vision, but sometimes get sensations of colors that seem impossible to experience. 'A kind of greenish-purple', for example - no, I don't mean blue, and it's not a purple pattern with green bits, it's green and purple at the same time.
I also get 'colors' that make even less sense. For example, I'll occasionally get a color that 'looks' grey but doesn't 'feel' grey; it feels like it should have a separate label, and my mind refuses to categorize the stimulus with things that evoke 'truly grey' reactions. That makes me wonder if I'm experiencing the same effect as the one mentioned in the article.
Replies from: MrHen, RobinZ↑ comment by RobinZ · 2010-01-16T01:24:28.721Z · LW(p) · GW(p)
That's incredibly interesting - I recall that the article mentioned colorsighted synesthetes observing synesthetic colors that felt different from similar real colors, without going into any particular detail.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2010-01-16T01:44:44.038Z · LW(p) · GW(p)
I don't know how much more detail could be given, really. I don't think I can do any better job of describing it than I just did, and I like to think I'm pretty good at that kind of thing.
Replies from: RobinZ↑ comment by MrHen · 2010-01-16T16:15:39.954Z · LW(p) · GW(p)
From the article linked (synesthete is a keyword explained in the article):
They found a synesthete who was color blind. That may seem strange, but what it really means is that the subject had problems with his retina that left him able to distinguish only an extremely narrow range of wavelengths when looking at most images in the world — his brain was fine, but his eyes weren’t quite up to the job. But when he saw certain numbers, he experienced colors that he otherwise never saw.
And that, I guess, answers that question. Awesome.
↑ comment by Dustin · 2010-01-09T06:36:14.857Z · LW(p) · GW(p)
I voted this comment up, because I too do not see how the root comment isn't an answer and I'd really like to know why the OP doesn't think it is an answer.
I don't understand what he means when he says:
And all there is in the brain, according to standard physics, is a bunch of particles in various changing configurations. So: where's the blue? What is the blue thing?
It seems like the first sentence answers the questions asked.
↑ comment by MatthewB · 2010-01-08T15:57:26.237Z · LW(p) · GW(p)
There is also the fact that what we describe as "redness" is purely by virtue of our anatomy and the ranges at which our eyes' structure receives the light that is then interpreted by our brain as red or blue.
Red and blue are just words that we use to describe a state that exists in the universe. What would happen to these colors if our eyes were made of a type of structure that picked up EM radiation all the way from gamma rays to long-wavelength radio waves?
What "color" would we think something in the Microwave spectrum was? What about the X-Ray Colored objects?
↑ comment by SilasBarta · 2010-01-08T18:49:39.934Z · LW(p) · GW(p)
FYI: Unlike electronic devices, your visual system doesn't detect absolute color, hence these illusions.
Replies from: thomblake↑ comment by Nubulous · 2010-01-09T07:50:59.111Z · LW(p) · GW(p)
Since we can presumably generate the appropriate signals in the optic nerve from scratch if we choose, light and its wavelength have nothing whatsoever to do with color.
Replies from: Blueberry↑ comment by Blueberry · 2010-01-09T08:02:36.674Z · LW(p) · GW(p)
Downvoted for strange non sequitur. We could theoretically pipe in the appropriate electrical impulses to the part of your brain responsible for auditory processing, but that doesn't mean hearing has "nothing whatsoever" to do with sound.
Replies from: AdeleneDawner↑ comment by AdeleneDawner · 2010-01-09T08:09:46.466Z · LW(p) · GW(p)
The upvote was mine; I agree that 'nothing whatsoever' was too strong, but thought that the point about qualia observably having more to do with brainstates than the stimuli that evoke them was useful.
comment by Morendil · 2010-01-08T20:57:27.973Z · LW(p) · GW(p)
What dire consequences should we expect if we do, in fact, deny that there is anything that is blue?
For my money, the discussion on p. 375 and onwards of Consciousness Explained says all there is to say (in addition to theories of electromagnetism, optics and so on) about the experience of color.
I can't really do justice to that section in a comment here, but I will note its starting point:
Many have noticed that it is curiously difficult to say just what properties of things in the world colors could be. [...] What is beyond dispute is that there is no simple, nondisjunctive property of surfaces such that all and only the surfaces with that property are red.
The key insight for me is here:
The fact that apples have the surface reflectance properties they do is as much a function of the photopigments that were available to be harnessed in the cone cells in the eyes of fructivores as it is of the effects of the interactions between sugar and other compounds in the chemistry of the fruit.
There is no reason, prior to megayears of evolution, to expect that anything such as color exists. That changes with apple trees' "need" to advertise the ripeness of their fruit, to creatures which, though hardly conscious, happen to be equipped with the ability to discriminate certain properties of the fruit at a distance. This "need" results from a) the existence of a certain optimization algorithm, Darwinian evolution, and b) contingent facts about the environment in which this algorithm unfolds.
What I take the "experience of color" to be, if it has to be anything, is an evolved equilibrium between competing strategies, together with the entire history of the genes in which these strategies were encoded.
Replies from: Eliezer_Yudkowsky, RobinZ↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-09T14:34:07.863Z · LW(p) · GW(p)
Um, even I would have to judge that saying anything whatsoever about apple trees, in answer to the question Mitchell is asking, is blatantly running away from the scary and hence interesting part of the problem, which will concern itself solely with matters in the interior part of the skull. Anyone talking about apple trees is running away from their ignorance of the inside of the skull, and finding something they understand better to talk about. Reductionist or not, I cannot defend that.
Replies from: Morendil↑ comment by Morendil · 2010-01-09T15:25:05.108Z · LW(p) · GW(p)
Your response puzzles me. What I take Dennett to be saying is that apple trees and the insides of skulls are deeply entangled. "Perception" is a term that will recur often when we seek to explain the entangled history of apple trees and mobile fructivores. And I'd be rather surprised to find Dennett running away from hard questions.
OK, you've said where the scary part of the problem is. Can you say more about what is scary about "doing a Dennett", or about what you take to be the scary part of the problem?
There are some unsettling, if not scary, things that come out of considering apple trees, that do not come out of considering only the insides of skulls and simplifying color as being all about light wavelengths. For instance, if "redness" is as gerrymandered a category as Dennett's view implies, then it would be in practice impossible to design from scratch a mind that has the same "qualia" of redness, for lack of a better word, that we have.
↑ comment by RobinZ · 2010-01-08T21:27:31.673Z · LW(p) · GW(p)
Your first line ("What dire consequences should we expect if we do, in fact, deny that there is anything that is blue?") is an appeal to the consequences of a belief about a matter of fact, and therefore irrelevant.
What remains without that is good.
Replies from: Morendil
comment by HalFinney · 2010-01-10T18:20:31.485Z · LW(p) · GW(p)
Thomas Nagel's classic essay What is it like to be a bat? raises the question of a bat's qualia:
Our own experience provides the basic material for our imagination, whose range is therefore limited. It will not help to try to imagine that one has webbing on one's arms, which enables one to fly around at dusk and dawn catching insects in one's mouth; that one has very poor vision, and perceives the surrounding world by a system of reflected high-frequency sound signals; and that one spends the day hanging upside down by one's feet in an attic. In so far as I can imagine this (which is not very far), it tells me only what it would be like for me to behave as a bat behaves. But that is not the question. I want to know what it is like for a bat to be a bat. Yet if I try to imagine this, I am restricted to the resources of my own mind, and those resources are inadequate to the task. I cannot perform it either by imagining additions to my present experience, or by imagining segments gradually subtracted from it, or by imagining some combination of additions, subtractions, and modifications.
I also wonder whether Deep Blue could be said to possess chess qualia of a type which are similarly inaccessible to us. When we play chess we are somewhat in the position of the man in Searle's Chinese Room who simulates a Chinese speaker. We simulate Deep Blue when we play chess, and our lack of access to any chess qualia no more disproves their existence than the failure of Searle's man to understand Chinese.
Do you think it will ever be possible to say whether chess qualia exist, and what they are like? Will we ever understand what it is like to be a bat?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-12T09:08:07.516Z · LW(p) · GW(p)
Do you think it will ever be possible to say whether chess qualia exist, and what they are like? Will we ever understand what it is like to be a bat?
Being a bat shouldn't be incomprehensible (and in fact Nagel makes some progress in his essay). You still have a body and a sensorium, they're just different. Getting your sense of space by yelling at the world and listening to the echoes - it's weird, but it's not beyond imagining. The absence of higher cognition might be the hardest thing for a human to relate to, but everyone has experienced some form of mindless behavior in themselves, dominated by sensation, emotion, and physical activity. You just have to imagine being like that all the time.
Being a quantum holist[*] and all that, when it comes to consciousness, I don't believe in qualia for Deep Blue because I don't think consciousness arises in that way. If it's like something to be a rock, then maybe the separate little islands of silicon and metal making up Deep Blue's processors still had that. But I'm agnostic regarding how to speak about the being of the very simplest things, and whether it should be regarded as lying on a continuum with the being of conscious beings.
Anyway, I answer both your questions yes, and I think other people may as well be optimistic too, even if they have a different theoretical approach. We should expect that it will all make sense one day.
[*] ETA: What I mean by this is the hypothesis that quantum entanglement creates local wholes, that these are the fundamental entities in nature, and that the individual consciousness inhabits a big one of these. So it's a brain-as-quantum-computer hypothesis, with an ontological twist thrown in.
comment by Mitchell_Porter · 2010-01-10T01:35:16.029Z · LW(p) · GW(p)
Another thread for answers to specific questions.
Second question: Where is computation?
People like to attribute computational states, not just to computers, but to the brain. And they want to say that thoughts, perceptions, etc., consist of being in a certain computational state. But a physical state does not correspond inherently to any one computational state... To be in a particular cognitive state is to be in a particular computational state. But if the "computational state" of a physical object is an observer-dependent attribution rather than an intrinsic property, then how can my thoughts be brain states?
Replies from: HalFinney
↑ comment by HalFinney · 2010-01-11T21:52:49.954Z · LW(p) · GW(p)
I don't think your question is well represented by the phrase "where is computation".
Let me ask whether you would agree that a computer executing a program can be said to be a computer executing a program. Your argument would suggest not, because you could attribute various other computations to various parts of the computer's hardware.
For example, consider a program that repeatedly increments the value in a register. Now we could alternatively focus on just the lowest bit of the register and see a program that repeatedly complements that bit. Which is right? Or perhaps we can see it as a program that counts through all the even numbers by interpreting the register bits as being concatenated with a 0. There is a famous argument that we can in fact interpret this counting program as enumerating the states of any arbitrarily complex computation.
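As a toy sketch of that ambiguity (the code is mine and purely illustrative), here is one and the same physical process printed under all three readings at once:
```python
# One 'physical' process, three equally lawful computational readings.

WIDTH = 8  # an 8-bit register

def physical_step(register: int) -> int:
    """The hardware: increment the register, wrapping at 2**WIDTH."""
    return (register + 1) % (2 ** WIDTH)

r = 0
for _ in range(4):
    r = physical_step(r)
    print(
        f"counter: {r}",
        f"| bit-flipper: {r & 1}",        # reading only the lowest bit
        f"| even-enumerator: {r << 1}",   # the bits with a 0 appended
    )
```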
Chalmers in the previous link aims to resolve the ambiguity by certain rules; basically some interpretations count and some don't. And maybe there is an unresolved ambiguity in the end. But in practice it seems likely that we could take brain activity and create a neural network simulation which runs accurately and produces the same behavioral outputs as the brain; the same speech, the same movements. At least, if you were to deny this possibility, that would be interesting.
In summary, although one can theoretically map any computation to any physical system, for a system like we believe the brain to be, with its simultaneous complexity and organizational unity, it seems likely that one could come up with a computational program that would capture the brain's behavior, claim to have qualia, and pose the same hard questions about where the color blue lies among the electronic circuits.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-12T09:42:41.074Z · LW(p) · GW(p)
I don't think your question is well represented by the phrase "where is computation".
If people want to say that consciousness is computation, they had better be able to say what computation is, in physical terms. Part of the problem is that computational properties often have a representational or functional element, but that's the problem of meaning. The other part of the problem is that computational states are typically vague, from a microphysical perspective. Using the terminology from thermodynamics of microstates and macrostates - a microstate is a complete and exact description of all the microphysical details, a macrostate is an incomplete description - computational states are macrostates, and there is an arbitrariness in how the microstates are grouped into macrostates. There is also a related but distinct sorites problem: what defines the physical boundary of the macro-objects possessing these macrostates? How do you tell whether a given elementary particle needs to be included, or not?
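A mundane instance of such grouping, offered only as an illustration (the thresholds below are the standard TTL logic levels): the "digital" description of a circuit is already a coarse-graining, under which continuum-many physically distinct voltages count as the same computational state.
```python
# A logical bit is an equivalence class of physical microstates.
# Thresholds are the standard TTL input levels.

def logical_bit(voltage: float):
    if voltage <= 0.8:
        return 0      # every voltage in [0, 0.8] V is 'the same' state
    if voltage >= 2.0:
        return 1      # every voltage in [2.0, 5.0] V is 'the same' state
    return None       # the forbidden zone: no computational state at all

for v in (0.1, 0.5, 1.4, 2.4, 3.3):
    print(f"{v:.1f} V -> {logical_bit(v)}")
```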
I don't detect much sympathy for my insistence that aspects of consciousness cannot be identified with vague entities or properties (and possibly it's just not understood), so I will try to say why. It follows from insisting that consciousness and its phenomena do actually exist. To be is to be something, something in particular. Vaguely defined entities are not particular enough. Every perception that ever occurs is an actual thing that briefly exists. (Just to be clear, I'm not saying that the object of every perception exists - if that were true, there would be no such thing as perceptual error - but I'm saying that perceptions themselves do exist.) But computational macrostates are not exactly defined from a micro level. So they are either incompletely specified, or else, to become completely specified, the fuzziness must be filled out in a way that is necessarily arbitrary and can be done in many ways. The definitional criteria for computational or functional states are simply not strict enough to compel a unique micro completion.
Also, macrostates have no causal power - all causality is micro - and yet the whole point of functionalism is to make mental states causally efficacious.
You didn't say any of this, Hal, but I want to provide some context for the question.
Let me ask whether you would agree that a computer executing a program can be said to be a computer executing a program. Your argument would suggest not, because you could attribute various other computations to various parts of the computer's hardware.
You can call it that, but it is an attribution made by an observer and not a property intrinsic to the purely physical reality. The relationship between the objective physical facts and the attributed computational properties is that the former constrains but does not determine the latter. As Chalmers observes, Putnam's argument is a little excessive. But it is definitely a fact that any complex state-machine can also be described in simpler terms by defining new states which are equivalence classes of the old states, and also that we choose to ignore many of the strictly physical properties of our computers when we conceive of them as computational devices. Any complex physical object allows a very large number of interpretations as a state machine, none of which is intrinsically realer than any other, and this rules out the identification of such states with conscious states, whose existence does not depend on the whim of an external observer.
in practice it seems likely that we could take brain activity and create a neural network simulation which runs accurately and produces the same behavioral outputs as the brain; the same speech, the same movements. At least, if you were to deny this possibility, that would be interesting.
Yes, I do think you should be able to have a brain simulation which would not be conscious and yet do all those things. It's already clear that we can have incomplete "simulations" which claim to be thinking or feeling something, but don't. The world is full of chatbots, lifelike artificial characters, hardware and software constructed to act or communicate anthropomorphically, and so on. There is going to be some boundary, defined by detail and method of simulation, on one side of which you actually have consciousness, and on the other side of which, you do not.
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-01-12T10:18:01.822Z · LW(p) · GW(p)
To be is to be something, something in particular. Vaguely defined entities are not particular enough. Every perception that ever occurs is an actual thing that briefly exists.
In other words, ontologically fundamental mental entities. Could we move on please?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-12T10:32:34.824Z · LW(p) · GW(p)
A thing doesn't have to be fundamental in order to be exact. If individual electrons are fundamental, an "entity" consisting of one electron in a definite location, and another electron in another definite location, is not a vague entity.
The problem is not reduction per se. The problem discussed here is the attempt to identify definitely existing entities with vaguely defined entities.
comment by DanArmak · 2010-01-08T14:21:12.932Z · LW(p) · GW(p)
All your questions come down to: why does our existence feel like something? Why is there subjective, personal, conscious experience? And why does it feel the way it does and not some other way?
In the following, I assume that your position about qualia deserving an explanation is correct. I don't have a fully formed opinion yet myself - I'll defer on an explanation for now - but here's what I came up with by assuming your position.
First, I propose that we both accept the Materialistic Hypothesis as regards minds. In the following text I will use the abbreviation MP for the materialistic, physical world. My formulation of the hypothesis states:
1. There is an MP world which is objective, common to everyone, and exists independently of our conscious, subjective experiences.
2. All information I have about the experiences (e.g. of color) of others is part of the MP world. I can receive such information only through MP means, via the self-reporting of others. I cannot in any way experience or inspect their experiences directly, in the way I have my own experience, or in some third non-materialistic way. Symmetrically, I can only provide information to anyone else about my own experiences via the MP world. (I am not special.)
3. If we ignore subjective/conscious experience, our physical theories are a complete description of the MP world. They may be modified, extended or refined in the future, but it is reasonable to assume (until shown otherwise) that they will remain theories of the MP world only. IOW, the MP world is "closed on itself": MP theories do not naturally say anything about the existence or properties of conscious experience such as that of color.
4. MP models, and the sum of all information that can be gotten from the MP world, provide a complete description of the behavior of brains and other embodiments of "minds". IOW, Descartes' dualism is false: there is no extra-physical "soul" agent violating the MP world's internal causality.
5. MP models of brains have a one-to-one correspondence to all the conscious states, feelings and experiences of the minds in those brains. Your experience of green can be identified with some brain-state, and occurs whenever that brain state arises, and only then. (By item 2, this is unfalsifiable.)
Do you disagree with any of this?
If you accept this hypothesis, then it follows that it is impossible to say anything about conscious mind states. There are no experiments, observations, or tools that could tell us anything about them, even in principle. You can build a model of your own consciousness if you like, but it will be based entirely on introspection, and we will be able to achieve similar results by building the mirror model in MP terms of your brain-states.
Now, it's possible that future discoveries will refute part of this hypothesis - by leading to such complex or weird MP theories that it would be easier to postulate Cartesian dualism, for instance. But until that occurs, our subjective experiences cannot be grounds for declaring MP theories incomplete. They are apparently complete as regards the MP world.
When you say that the color green "exists", or that your experience of green "exists", this is misleading. It is not the same sense of "exists" as in "this apple exists". I'm not denying the "existence" of your, er, qualia, but we should not use the same word or infer the same qualities that MP existing objects have.
Replies from: LauraABJ, Mitchell_Porter↑ comment by LauraABJ · 2010-01-08T16:48:46.668Z · LW(p) · GW(p)
I agree with your interpretation of our current physical and experiential evidence. I believe the perceived dualistic problem arises from imperfections in our current modeling of brain states and control of our own. We cannot easily simulate experiential brain states, reconfigure our own brains to match, and try them out ourselves. We cannot make adjustments of these states on a continuum that would allow us to say physical state A corresponds exactly to experience B and here's the math. We cannot create experience on a machine and have it tell us that it is experiencing. Without internal access to our source-code, our experiences come into our consciousness fully formed and appear magical.
That being said, the blunt tools we do have--descriptions of others' experiences, drugs, brain stimulation, fMRI, and psychophysics--do seem to indicate that experience follows directly from physical states of the brain without the need for a dualist explanation. Perhaps the problem will dissolve itself once uploading is possible and individual experiences are more tradeable and malleable.
↑ comment by Mitchell_Porter · 2010-01-10T03:20:38.504Z · LW(p) · GW(p)
Do you disagree with any of this?
I certainly think about things differently:
1'. There is a world, which includes subjective experiences, and (presumably) things which are not subjective experiences.
2'. All information I have about the world, including the subjective experiences of other people, comes through my subjective experiences.
3'. I possess mathematical/physical theories which appear adequate to describe much of the posited world to varying degrees, but which do not refer to subjective experiences.
4'. Subjective experiences are causally consequential; they are affected by sensation and they affect behavior, among other things.
5'. The way the world actually is and the way the world actually works is a little more complicated than any theory I currently possess.
Replies from: ciphergoth, RobinZ, DanArmak↑ comment by Paul Crowley (ciphergoth) · 2010-01-10T11:47:50.877Z · LW(p) · GW(p)
Do you disagree with any of this?
I certainly think about things differently
This is really frustrating. When you ask questions of us who disagree with you, we tend to say "I don't think the question is well posed". But when we ask questions of you, you won't say yes, or no, or explicitly reject the question - you just return to your own questions. If you don't think the questions you're being asked are well-posed enough to answer, could you say more about why? Otherwise we're not engaging, we're just talking past each other.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-11T07:37:48.749Z · LW(p) · GW(p)
It can take a long time to say what the problem is. I just spent several hours trying to do this in Dan's case, and I'm not sure I succeeded. The questions aren't ill-posed, but the whole starting point was problematic. In effect I wanted to demonstrate the possibility of an alternative starting point. Dan managed to respond, and now I to that, and even this comment of yours contributed, but it took a lot of time and consideration of context even to produce an imperfect further reply. It's a tradeoff between responding adequately and responding promptly. There's been an improvement in communication since last time, but it can still get better.
Replies from: MrHen↑ comment by RobinZ · 2010-01-10T03:59:21.460Z · LW(p) · GW(p)
Clarify 5', please: do you intend to say that the base rules of the world are more complicated than the current physics - e.g. how a creature on a Conway's Game of Life board might say, "I know that any live cell with two or three live neighbours lives on to the next generation, but I'm missing how cells become live"?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-12T10:43:00.551Z · LW(p) · GW(p)
The basic ingredients and their modes of combination (not interaction, but things like part-whole relations) need to be different. See descriptions of the second type and I want to be a monist.
Replies from: RobinZ, RobinZ↑ comment by RobinZ · 2010-01-12T12:51:14.469Z · LW(p) · GW(p)
What are "part-whole relations"? That doesn't sound like a natural category in physics.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-14T06:38:51.158Z · LW(p) · GW(p)
In physics, if A is part of B, it means it's a spatial part. I think the "parts" of a conscious experience are part of it in some other way. I say this very metaphorically, and only metaphorically, but it's more like the way that polyhedra have faces. The components of a conscious experience, I would think, don't even occur independently of conscious experiences.
There's a whole sub-branch of ontology concerning part-whole relations, called mereology. It potentially encompasses not only spatial parts, but also subsets, "logical parts", "metaphysical parts" (e.g. the property is part of the thing with the property), the "organic wholes" of various holisms, and so on. Of course, this is philosophy, so you have people with sparse ontologies, who think most of this is not really real, and then you have the people who are realists about various abstract or exotic relations.
I think I've invented a name for my own ontological position, by the way - reverse monism. I'll have to explain what that means somewhere...
Replies from: RobinZ↑ comment by RobinZ · 2010-01-14T13:42:24.590Z · LW(p) · GW(p)
Before I respond to this: how much physics have you studied? Just high school, or the standard three semesters of college work? How well did you do in those classes? Have you read any popular-science discussions of physics, etc. outside of the classes you took? Have you studied any particular field of physics-related problems (e.g. materials science/engineering)?
I'm asking this because your discussion of part-whole relations doesn't sound like something a scientist would invoke. If you are an expert, I'll back off, but I have to wonder if you've ever used Newton's Laws on a deeper level than cannonballs fired off cliffs.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-15T09:10:40.605Z · LW(p) · GW(p)
I come from theoretical physics. I've trashed my career several times over, but I've always remained engaged with the culture. However, I've also studied philosophy, and that's where all this talk of ontology comes from.
Replies from: RobinZ↑ comment by RobinZ · 2010-01-15T14:29:09.218Z · LW(p) · GW(p)
Can you explain that in terms of physics? According to my understanding, 'part-whole relations' are never explicitly described in the models; they are only implicit in the solutions to common special cases. For example, quantum mechanics includes no description of temperature; we derive temperature through statistical mechanics, without ever invoking additional laws.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-18T08:19:50.428Z · LW(p) · GW(p)
Certainly there's no fundamental physical law which talks about part-whole relations. "Spatial part" is a higher-order concept. But it's still an utterly basic one. If I say "the proton is part of that nucleus", that's a physically meaningful statement.
We might have avoided this digression if, instead of part-whole relations, I'd mentioned something like "spatial and temporal adjacency" as an example of the "modes of combination" of fundamental entities which exist in physical ontology. If you take the basic physical reality to be "something at a point in space-time" (where something might be a particle or a bit of field), and then say, how do I conceptually build up bigger, more complicated things? - you do that by putting other somethings at the space-time points "next door" - locations adjacent in space, or upstream/downstream in time.
There are other perspectives on how to make complexity out of simplicity in physics. A more physical perspective would look at interaction, and separate objects becoming dynamically bound in some way. This is the basis of Mario Bunge's philosophy of systems (Bunge was a physicist before he became a philosopher); it's causal interaction which binds subsystems into systems.
So, trying to sum up, we can say that the modes of combination of basic entities in physics have a non-causal aspect - connectedness, being next to each other, in space and time - and a causal aspect - interaction, the state of one affecting the state of another. And these aspects are even related, in that spatiotemporal proximity is required for causal interaction to occur.
Finally, returning to your question - how do I expect physical ontology to change - part of the answer is that I expect the elementary non-causal bindings between things to include options besides spatial adjacency. Spatial proximity builds up spatial geometry and spatially extended objects. I think there will be ontological complexes where the relational glue is something other than space, that conscious states are an instance of this, and that such complexes show up in our present physics in the form of entanglement. Going back to the language of monads - spatial relations are inter-monadic, but intra-monadic relations will be something else.
↑ comment by DanArmak · 2010-01-10T08:11:18.485Z · LW(p) · GW(p)
1'. There is a world, which includes subjective experiences, and (presumably) things which are not subjective experiences.
Let's talk about the MP world, which is restricted to non-subjective items. Do you really doubt it exists? Is it only "presumable"? Do you have in mind an experiment to falsify it?
2'. All information I have about the world, including the subjective experiences of other people, comes through my subjective experiences.
And these subjective experiences are all caused by, and contain the same information as, objective events in the MP world. Therefore all information you have about the MP world is also contained in the MP world. Do you agree?
3'. I possess mathematical/physical theories which appear adequate to describe much of the posited world to varying degrees, but which do not refer to subjective experiences.
Do you agree with my expectation that even with future refinements of these theories, the MP world's theories will remain "closed on MP-ness" and are not likely to lead to descriptions of subjective experiences?
4'. Subjective experiences are causally consequential; they are affected by sensation and they affect behavior, among other things.
Sensation and behaviour are MP, not subjective.
Each subjective experience has an objective, MP counterpart which ultimately contains the same information (expanding on my point (2)). They have the same correlations with other events, and the same causative and explanatory power, as the subjective experiences they cause (or are identical to). Therefore, in a causal theory, it is possible to assign causative power only to MP phenomena without loss of explanatory power. Such a theory is better, because it's simpler and also because we have theories of physics to account for causation, but we cannot account for subjective phenomena causing MP events.
Do you agree with the above?
I can put this another way, as per my item (5): to say that sensation affects (or causes) subjective experience is to imply the logical possibility of a counterfactual world where sensation affects experience differently or not at all. However, if we define sensation as the total of all relevant MP events - the entire state of your brain when sensing something - then I claim that sensation cannot, logically, lead to any subjective experience different from the one it does lead to. IOW, sensation does not cause experience, it is identical with experience.
This theory appears consistent with all we know to date. Do you expect it to be falsified in the future?
5'. The way the world actually is and the way the world actually works is a little more complicated than any theory I currently possess.
This doesn't seem related to my own item (5), so please respond to that as well - do you agree with it?
As for your response, I agree that our MP theories are incomplete. Do you think that more complete theories would not, or could not, remain restricted to the MP world? (item 3)
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-11T07:05:37.791Z · LW(p) · GW(p)
I think I must try one more largely indirect response and see if that leaves anything unanswered.
Reality consists, at least in part, of entities in causal interaction. There will be some comprehensive and correct description of this. Then, there will be descriptions which leave something out. For example, descriptions which say nothing about the states of the basic entities beyond assigning each state a label, and which then describe those causal interactions in terms of state labels. The fundamental theories we have are largely of this second type. The controversial aspects of consciousness are precisely those aspects which are lost in passing to a description of the second type. These aspects of consciousness are not causally inert, or else conscious beings wouldn't be able to notice them and remark upon them; but again, all the interesting details of how this works are lost in the passage to a description of the second type, which by its very nature can only describe causality in terms of arbitrary laws acting on entities whose natures and differences have been reduced to a matter of labels.
What you call "MP theories" only employ these inherently incomplete descriptions. However, these theories are causally closed. So, even though we can see that they are ontologically incomplete, people are tempted to think that there is no need to expand the ontology; we just need to find a way to talk about life and everything we want to explain in terms of the incomplete ontology.
Since ontological understandings can develop incrementally, in practice such a program might develop towards ontologically complete theories anyway, as people felt the need to expand what they mean by their concepts. But that's an optimistic interpretation, and clearly a see-no-evil approach also has the potential to delay progress.
Let's talk about the MP world, which is restricted to non-subjective items. Do you really doubt it exists? Is it only "presumable"? Do you have in mind an experiment to falsify it?
I have trouble calling this a "world". The actual world contains consciousness. We can talk about the parts of the actual world that don't include consciousness. We can talk about the actual world, described in some abstract way which just doesn't mention consciousness. We can talk about a possible world that doesn't contain consciousness.
But the way you set things up, it's as if you're inviting me to talk about the actual world, using a theoretical framework which doesn't mention consciousness, and in a way which supposes that consciousness also plays no causal role. It just seems the maximally unproductive way to proceed. Imagine if we tried to talk about gravity in this way: we assume models which don't contain gravity, and we try to talk about phenomena as if there was no such thing as gravity. That's not a recipe for understanding gravity, it's a recipe for entirely dispensing with the concept of gravity. Yet it doesn't seem you want to do without the concept of consciousness. Instead you want to assume a framework in which consciousness does not appear and plays no role, and then deduce consequences. Given that starting point, it's hardly surprising that you then reach conclusions like "it is impossible to say anything about conscious mind states". And yet we all do every day, so something is wrong with your assumptions.
Also, in your follow-up, you have gone from saying that consciousness is completely outside the framework, to some sort of identity theory - "each subjective experience has an objective, MP counterpart", "sensation ... is identical with experience". So I'm a little confused. You started out by talking about the feel of experience. Are you saying that you think you know what an experience is, ontologically, but you don't understand why it feels like anything?
Originally you said:
All your questions come down to: why does our existence feel like something?
but that's not quite right. The feel of existence is not an ineffable thing about which nothing more can be said, except that it's "something". To repeat my current list of problem items, experience includes colors, meanings, time, and a sort of unity. Each one poses a concrete problem. And for each one, we do have some sort of phenomenological access to the thing itself, which permits us to judge whether a given ontological account answers the problem or not. I'm not saying such judgments are infallible or even agreed upon, just that we do possess the resources to bring subjective ontology and physical ontology into contact, for comparison and contrast.
Does this make things any clearer? You are creating problems (impossibility of knowledge of consciousness) and limitations (future theories won't contain descriptions of subjective experience) by deciding in advance to consider only theories with the same general ontology we have now. Meanwhile, on the side you make a little progress by deciding to think about consciousness as a causal element after all, but then you handicap this progress by insisting on switching back to the no-consciousness ontology as soon as possible.
As a footnote, I would dispute that sensation and behavior, as concepts, contain no reference to subjectivity. A sensation was originally something which occurred in consciousness. A behavior was an act of an organism, partly issuing from its mental state. They originally suppose the ontology of folk psychology. It is possible to describe a behavior without reference to mental states, and it is possible to define sensation or behavior analogously, but to judge whether the entities picked out by such definitions really deserve those names, you have to go back to the mentalistic context in which the words originated and see if you are indeed talking about the same thing.
Replies from: DanArmak↑ comment by DanArmak · 2010-01-16T19:13:30.425Z · LW(p) · GW(p)
What you call "MP theories" only employ these inherently incomplete descriptions. However, these theories are causally closed.
If they are causally closed, then our conscious experience cannot influence our behaviour. Then our discussion about consciousness is logically and causally unconnected to the fact of our consciousness (the zombie objection). This contradicts what you said earlier, that
These aspects of consciousness are not causally inert
So which is correct?
Also, I don't understand your distinction between the two types of theories or of phenomena. Leaving causality aside, what do you mean by:
descriptions which say nothing about the states of the basic entities beyond assigning each state a label
If those entities are basic, then they're like electrons - they can't be described as the composition or interaction of other entities. In that case describing their state space and assigning labels is all we can do. What sort of entities did you have in mind?
Given that starting point, it's hardly surprising that you then reach conclusions like "it is impossible to say anything about conscious mind states". And yet we all do every day, so something is wrong with your assumptions.
About-ness is tricky.
If consciousness is acausal and not logically necessary, then zombies would talk about it too, so the fact that we talk about it, and anything we say about it, proves nothing.
If consciousness is acausal but logically necessary, then the things we actually say about it may not be true, due to acausality, and it's not clear how we can check if they're true or not (I see no reason to believe in free will of any kind).
Finally, if consciousness is causal, then we should be able to have causally-complete physical theories that include it. But you agree that the "MP theories" that don't include consciousness are causally closed.
Also, in your follow-up, you have gone from saying that consciousness is completely outside the framework, to some sort of identity theory - "each subjective experience has an objective, MP counterpart", "sensation ... is identical with experience". So I'm a little confused. You started out by talking about the feel of experience. Are you saying that you think you know what an experience is, ontologically, but you don't understand why it feels like anything?
Here's what I meant. If experience is caused by, or is a higher-level description of, the physical world (but is not itself a cause of anything) - then every physical event can be identified with the experience it causes.
I emphatically do not know what consciousness is ontologically. I think understanding this question (which may or may not be legitimate) is half the problem.
All I know is that I feel things, experience things, and I have no idea how to treat this ontologically. Part of the reason is a clash of levels: the old argument that all my knowledge of physical laws and ontology etc. is a part of my experience, so I should treat experience as primary.
The feel of existence is not an ineffable thing about which nothing more can be said, except that it's "something".
I said that "all your questions come down to, why does our existence feel like something? and why does it feel the way it does?"
You focus on the second question - you consider different (counterfactual) possible experiences. When you ask why we experience colors, you're implicitly adding "why colors rather than something else?" But to me that kind of question seems meaningless because we can't ask the more fundamental question of "why do we experience anything at all?"
The core problem is that we can't imagine or describe lack-of-experience. This is just another way of saying we can't describe what experience is except by appealing to shared experience.
If we encountered aliens (or some members of LW) and they simply had no idea what we were talking about when we discussed conscious experience - there's nothing we could say to explain to them what it is, much less why it's a Hard Problem.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-17T11:29:25.825Z · LW(p) · GW(p)
If they [MP theories] are causally closed, then our conscious experience cannot influence our behaviour.
There are two ways an MP theory can be causally closed but not contain consciousness. First, it can be wrong. Maxwell's equations in a vacuum are causally closed, but that theory doesn't even describe atoms, let alone consciousness.
The other possibility (much more relevant) is that you have an MP theory which does in fact encompass conscious experiences and give them a causal role, but the theory is posed in such a way that you cannot tell what they are from the description.
If those entities are basic, then they're like electrons - they can't be described as the composition or interaction of other entities. In that case describing their state space and assigning labels is all we can do. What sort of entities did you have in mind?
Let's take a specific aspect of conscious experience - color vision. For the sake of argument (since the reality is much more complicated than this), let's suppose that the totality of conscious visual sensation at any time consists of a filled disk, at every point in which there is a particular shade of color. If an individual shade of color is completely specified by hue, saturation, and intensity, then you could formally represent the state of visual sensory consciousness by a 3-vector-valued function defined on the unit disk in the complex plane.
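To make that representation concrete, here is a minimal sketch in Python (the function name and the particular field chosen - hue varying with angle, intensity with radius - are arbitrary illustrations of the formalism, not claims about real vision):

```python
import cmath

def visual_field(z: complex) -> tuple[float, float, float]:
    """Return the (hue, saturation, intensity) sensed at disk point z."""
    r = abs(z)
    if r > 1:
        raise ValueError("points of the visual field lie in the unit disk")
    hue = (cmath.phase(z) % (2 * cmath.pi)) / (2 * cmath.pi)  # angle -> hue in [0, 1)
    saturation = 1.0        # uniform, purely for illustration
    intensity = 1.0 - r     # brightest at the centre, again arbitrary
    return (hue, saturation, intensity)

# The total state of visual sensory consciousness is then the whole
# function: the assignment of a 3-vector to every point of the disk.
print(visual_field(0.5 + 0.5j))
```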
Now suppose you had a physical theory in which an entity with such a state description is part of cause and effect within the brain. It would be possible to study that theory, and understand it, without knowing that the entity in question is the set of all current color sensations. Alternatively, the theory could be framed that way - as being about color, etc - from the beginning.
What's the difference between the two theories, or two formulations of the theory? Much and maybe all of it would come back to an understanding of what the terms of the theory refer to. We do have a big phenomenological vocabulary, whose meanings are ultimately grounded in personal experience, and it seems that to fully understand a hypothetical MP theory containing consciousness, you have to link the theoretical terms with your private phenomenological vocabulary, experience, and understanding. Otherwise, there will only be an incomplete, more objectified understanding of what the theory is about, grounded only in abstraction and in the world-of-external-things-in-space part of experience.
Of course, you could arrive at a theory which you initially understood only in the objectified way, but then you managed to make the correct identifications with subjective experience. That's what proposals for neural correlates of consciousness (e.g. Drescher's gensyms) are trying to do. When I criticize these proposals, it's not because I object in principle to proceeding that way, but because of the details - I don't believe that the specific candidates being offered have the right properties for them to be identical with the elements of consciousness in question.
If experience is caused by, or is a higher-level description of, the physical world (but is not itself a cause of anything) - then every physical event can be identified with the experience it causes.
I don't think we need to further consider epiphenomenalism, unless you have a special reason to do so. Common sense tells us that experiences are both causes and effects, and that a psychophysical identity theory is the sort of theory of consciousness we should be seeking. I just think that the thing on the physical end of the identity relationship is not at all what people expect.
You close out with the question of "why do we experience anything at all?" That is going to be a hard problem, but maybe we need to be able to say what we feel and what feeling is before we can say why feeling is.
Consider the similar question, "why is there something rather than nothing?" I think Heidegger, for one, was very interested in this question. But he ended up spending his life trying to make progress on what existence is, rather than why existence is.
I like to think that "reverse monism" is a small step in the right direction, even regarding the question "why is there experience", because it undoes one mode of puzzlement: the property-dualistic one, which focuses on the objectified understanding of the MP theory and then asks "why does the existence of those objects feel like something?" If you see the relevant part of the theory as simply being about those feelings to begin with, then the question should collapse to "why do such things exist" rather than "why do those existing things feel like something". Though that is such a subtle difference that maybe it's no difference at all. Mostly, I'm focused on the concrete question of "what would physics have to be like for a psychophysical identity theory to be possible?"
Replies from: DanArmak↑ comment by DanArmak · 2010-01-27T21:16:40.944Z · LW(p) · GW(p)
Apologies for the late and brief reply. My web presence has been and will continue to be very sporadic for another two weeks.
there are two ways an MP theory can be causally closed but not contain consciousness. First, it can be wrong. Maxwell's equations in a vacuum are causally closed, but that theory doesn't even describe atoms, let alone consciousness.
If it was wrong, how could it be causally closed? No subset of our physical theories (such as Maxwell's equations) is causally disconnected from the rest of them. They all describe common interacting entities.
The other possibility [....] an MP theory which does in fact encompass conscious experiences and give them a causal role, but the theory is posed in such a way that you cannot tell what they are from the description.
Our MP theory has a short closed list of fundamental entities and forces which are allowed to be causative. Consciousness definitely isn't one of these.
You wish to identify consciousness with a higher-level complex phenomenon that is composed of these basic entities. Before rearranging the fundamental physical theories to make it easier to describe, I think you ought to show evidence for the claim that some such phenomenon corresponds to "consciousness". And that has to start with giving a better definition of what consciousness is.
Otherwise, even if you proved that our MP theories can be replaced by a different set of theories which also includes a C-term, how do we know that C-term is "consciousness"?
You close out with the question of "why do we experience anything at all?" That is going to be a hard problem
It needn't be. Here's a very simple theory: a randomly evolved feature of human cognition makes us want to believe in consciousness.
Relevant facts: we believe in and report conscious experience even though we can't define in words what it is or what its absence would be like. (Sounds like a mental glitch to me.) This self-reporting falls apart when you look at the brain closely: experiences, actions, etc. are not only spatially but also temporally distributed (as they must be), yet people discussing consciousness try to explain our innate feelings rather than build a theory on those facts - IOW, without the innate feeling we wouldn't even be talking about this. Different people vary in their level of support for this idea, and rational argument (as in this discussion) is weak at changing it. We know our cognitive architecture reliably gives rise to some ideas and behaviors common to practically every culture: e.g. belief in spirits, gods, or an afterlife.
Here's a possible mechanism, too: our cognitive architecture makes us regularly think "I am conscious!". Repeated thoughts, with nothing opposing them (at younger ages at least), become belief (ref: people brought up to believe anything not frowned upon by society tend to keep believing it).
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-31T10:20:08.841Z · LW(p) · GW(p)
If it was wrong, how could it be causally closed?
Causal closure in a theory is a structural property of the theory, independent of whether the theory is correct. We are probably not living in a Game-of-Life cellular automaton, but you can still say that the Game of Life is causally closed.
Consider the Standard Model of particle physics. It's an inventory of fundamental particles and forces and how they interact. As a model it's causally closed, in the sense of being self-sufficient. But if we discover a new particle (e.g. a supersymmetric partner), the model will have been incomplete and thus "wrong".
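To illustrate causal closure as a structural property, here is a toy sketch (the Game of Life rule is the standard one; the point is simply that the update function takes no inputs from outside the grid, whether or not the theory describes our world):

```python
from collections import Counter

def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    """One Game of Life update; `live` is the set of live cell coordinates.

    The structural point: the next state is computed from the current
    state alone - the rule has no parameters for outside influences.
    """
    neighbours = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    return {c for c, n in neighbours.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates with period 2, determined entirely by the rule.
blinker = {(0, 0), (0, 1), (0, 2)}
assert step(step(blinker)) == blinker
```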
You wish to identify consciousness with a higher-level complex phenomenon that is composed of these basic entities. Before rearranging the fundamental physical theories to make it easier to describe, I think you ought to show evidence for the claim that some such phenomenon corresponds to "consciousness". And that has to start with giving a better definition of what consciousness is.
Otherwise, even if you proved that our MP theories can be replaced by a different set of theories which also includes a C-term, how do we know that C-term is "consciousness"?
I totally agree that good definitions are important, and would be essential in justifying the identification of a theoretical C-term or property with consciousness. For example, one ambiguity I see coming up repeatedly in discussions of consciousness is whether only "self-awareness" is meant, or all forms of "awareness". It takes time and care to develop a shared language and understanding here.
However, there are two paths to a definition of consciousness. One proceeds through your examination of your own experience. So I might say: "You know how sometimes you're asleep and sometimes you're awake, and how the two states are really different? That difference is what I mean by consciousness!" And then we might get onto dreams, and how dreams are a form of consciousness experienced during sleep, and so the starting point needs to be refined. But we'd be on our way down one path.
The other path is the traditional scientific one, and focuses on other people, and on treating them as objects and as phenomena to be explained. If we talk about sleep and wakefulness here, we mean states exhibited by other people, in which certain traits are observed to co-occur: for example, lying motionless on a bed, breathing slowly and regularly, and being unresponsive to mild stimuli, versus moving around, making loud structured noises, and responding in complex ways to stimuli. Science explains all of that in terms of physiological and cognitive changes.
So this is all about the relationship between the first and second paths of inquiry. If on the second path we find nothing called consciousness, that presents one sort of problem. If we do find, on the second path, something we wish to call consciousness, that presents a different problem and a lesser problem, namely, what is its relationship to consciousness as investigated in the first way? Do the two accounts of consciousness match up? If they don't, how is that to be resolved?
These days, I think most people on the second path do believe in something called consciousness, which has a causal and explanatory role, but they may disagree with some or much of what people on the first path say about it. In that situation, you only face the lesser problem: you agree that consciousness exists, but you have some dispute about its nature. (Of course, the followers of the two paths have their internal disagreements, with their peers, as well. We are not talking about two internally homogeneous factions of opinion.)
Here's a very simple theory: a randomly evolved feature of human cognition makes us want to believe in consciousness.
If you want to deny that there actually is any such thing as consciousness (saying that there is only a belief in it), you'll need to define your terms too. It may be that you are not denying consciousness as such, just some particular concept of it. Let's start with the difference between sleep and wakefulness. Do you agree that there is a subjective difference there?
comment by Mitchell_Porter · 2010-01-10T01:32:07.494Z · LW(p) · GW(p)
This article contains three simple questions which I want to see answered. To organize the discussion, I'm creating a thread for each question, so people with an answer can state it or link to it. If you link, please provide a brief summary of your answer here as well.
First question: Where is color?
I see a red apple. The redness, I grant you, is not a property of the thing that grew on the tree, the object outside my skull. It's the sensation or perception of the apple which is red. However, I do insist that something is red. But if reality is nothing but particles in space, and empty space is not red, and the particles are not red, then what is? What is the red thing; where is the redness?
Replies from: Alicorn, Psychohistorian, HalFinney, AndyWood, Wei_Dai, Vladimir_Nesov, FAWS, Cyan, Jonii, Kyro↑ comment by Alicorn · 2010-01-10T01:50:11.368Z · LW(p) · GW(p)
However, I do insist that something is red.
Why?
That aside: Red is something that arises out of things that are not themselves red. Right now I'm wearing a sweater, which is made out of things that are not themselves sweaters (plastic, cotton with a little nylon and spandex; or on a higher level, buttons, sleeves, etc.). A sweater came into existence when the sleeves were sewn onto the rest of it (or however it was pieced together); no particles, however, came into existence. My sweater just is a spatial relationship between a vague set of particles (I say "vague" because it could lose a button and still be a sweater, but unlike a piece of lint, the button is really part of the sweater). If I put it through the shredder, no particles would be destroyed, but it would not be a sweater.
My sweater is also red. When I look at it, I experience the impression of looking at a red object. No particles come into existence when I start to have this experience, but they do arrange differently. Some of the ways my brain can be arranged are such that when they are instantiated, I experience the impression of looking at something red - such as the ones that come into play when I look at my sweater in light good enough for me to have color vision. If I put my brain through the shredder, not only would I die, but I'd also no longer be experiencing the impression of looking at my sweater.
Why does the fact that individual particles are not red prompt, for you, the question of what is red, but the fact that individual particles are not sweaters does not prompt the question of what sweaters are?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-12T10:26:08.894Z · LW(p) · GW(p)
However, I do insist that something is red.
Why?
Why do I believe it, or why do I insist on it? I believe it because I see it, and I insist on it because other people keep telling me that redness is actually some thing which doesn't sound red.
Why does the fact that individual particles are not red prompt, for you, the question of what is red, but the fact that individual particles are not sweaters does not prompt the question of what sweaters are?
When you say a "sweater just is a spatial relationship between a vague set of particles", it is not mysterious how that set of particles manages to have the properties that define a sweater. If we ignore the aspect of purpose (meant to be worn), and consider a sweater to be an object of specified size, shape, and substance - I know and can see how to achieve those properties by arranging atoms appropriately.
But when you say that experiencing redness is also a matter of atoms in your brain arranging themselves appropriately, I see that as an expression of physicalist faith. Not only are the details unknown, but it is a complete mystery as to how, even in principle, you would make redness by combining the entities and properties available in physical ontology. And this is not the case for size, shape, and substance; it is not at all mysterious how to make those out of basic physics.
As I say in the main article, the specific proposals to get subjective color out of physics are actually property dualisms. They posit an identity between the actual color, that we see in experience, and some complicated functional, computational, or other physical property of part of the brain. My position is that the color experience, the thing we are trying to understand, is nothing like the thing on the other side of the alleged identity; so if that's your theory, you should be a property dualist. I want to be a monist, but that is going to require a new physical ontology, in which things that do look like experiences are among the posited entities.
Replies from: Alicorn↑ comment by Alicorn · 2010-01-12T14:10:10.048Z · LW(p) · GW(p)
people keep telling me that redness is actually some thing which doesn't sound red.
Sound red? If nothing sounds red, that means you are free of a particular sort of synesthesia. :P
Anyway: Suppose somebody ever-so-carefully saws open your skull and gives your brain a little electric shock in just exactly the right place to cause you to taste key lime pie. The shock does something we understand in terms of physics. It encourages itty bits of your brain to migrate to new locations. The electric shock does not have a mystical secondary existence on a special experiences-only plane that's dormant until it gets near your brain but suddenly springs into existence when it's called upon to generate pie taste, does it?
We don't know enough about the brain to do this manually, as it were, for all possible experiences; or even anything very specific. fMRIs will tell us the general neighborhood of brain activity as it happens; and hormone levels will tell us some of the functions of some of the soup in which the organ swims; and electrical brain stimulation experiments will let us investigate more directly. Of course we aren't done yet. The brain is fantastically complicated and there's this annoying problem where if we mess with it too much we get charged with murder and thrown in jail. But they have yet to run into a wall; why do you think that, when they've found where pain lives and can tell you what you must have injured if you display behavior pattern X, they're suddenly going to stop making progress?
My position is that the color experience, the thing we are trying to understand, is nothing like the thing on the other side of the alleged identity
So your problem is that the two things lack a primitive resemblance? My sweater's unproblematic because it and the yarn that was used to make it are... what, the same color? Both things you already associate with sweaters? If somebody told you that the brain doesn't handle emotions, the heart does - the literal, physical heart - would you buy that more readily because the heart is traditionally associated with emotion and that sounds right, so there's some kind of resemblance going on? If that's what's going on, I hope having it pointed out helps you discard that... "like", in English, contains so much stuff all curled up in it that if you're relying on your concept thereof to guide your theorizing, you're in deep trouble.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-13T08:05:28.292Z · LW(p) · GW(p)
So your problem is that the two things lack a primitive resemblance?
They lack any resemblance at all. A trillion tiny particles moving in space is nothing like a "private shade of homogeneous pink", to use the phrase from Dennett (he has quite a talent for describing things which he then says aren't there). And yet one is supposed to be the same thing as the other.
The evidence from neuroscience is, originally, data about correlation. Color, taste, pain have some relationship with physical brain events. The correlation itself is not going to tell you whether to be an eliminativist, a property dualist, or an identity theorist.
I am interested in the psychological processes contributing to this philosophical choice but I do not understand them yet. What especially interests me is the response to this "lack of resemblance" issue, when a person who insists that A is B concedes that A does not "resemble" B. My answer is to say that B is A - that physics is just formalism, implying nothing about the intrinsic nature of the entities it describes, and that conscious experience is giving us a glimpse of the intrinsic nature of at least a few of those entities. Physics is actually about pink, rather than about particles. But what people seem to prefer is to deny pinkness in favor of particles, or to say that the pinkness is what it's like to be those particles, etc.
Replies from: Psychohistorian, thomblake↑ comment by Psychohistorian · 2010-01-13T19:03:29.187Z · LW(p) · GW(p)
A trillion tiny particles moving in space is nothing like a "private shade of homogeneous pink"
A trillion tiny particles moving in space is like a "private shade of homogeneous pink" in that it reflects light that stimulates nerves that generate a private shade of homogeneous pink. If you forbid even this relationship, you've assumed your conclusion. If not, you use "nothing" too freely. If this is a factual claim, and not an assumption, I'd like to see the research and experiments corroborating it, because I doubt they exist, or, indeed, are even meaningfully possible at this time.
To use my previous example, the electrical impulses describing a series of ones and zeroes are "nothing like" lesswrong.com, yet here we are.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-14T05:21:36.285Z · LW(p) · GW(p)
I'm referring to the particles in the brain, some aspect of which is supposed to be the private shade of color.
Replies from: Psychohistorian↑ comment by Psychohistorian · 2010-01-14T20:35:42.281Z · LW(p) · GW(p)
I don't see how this is meaningfully distinct from Alicorn's sweater. Sweater-ness is not a property of cloth fibers or buttons.
I think the real problem here is that consciousness is so dark and mysterious. Because the units are so small and fragile, we can't really take it apart and put it back together again, or hit it with a hammer and see what happens. Our minds really aren't evolved to think about it, and, without the ability to take it apart and put it back together and make it happen in a test tube - taking good samples seems rather to break the process - it's extremely difficult to force our minds to think about it. By contrast, we're quite used to thinking about sweaters or social organization or websites. We may not be used to thinking about, say, photosynthesis or the ATP cycle, but we can take them apart and put them back together again, and recreate them in a test tube.
↑ comment by thomblake · 2010-01-14T20:45:55.372Z · LW(p) · GW(p)
It might behoove you to examine Luciano Floridi's treatment of "Levels of Abstraction" - he seems to be getting at much the same thing, if I'm understanding you correctly. To read it in a pragmatist light: there's a certain sense in which we want to talk about particles, and a sense in which we want to talk about pinkness, and on the face of it there's no reason to prefer one over the other.
It does make sense to assert that Physics is trying to explain "pinkness" via particles, and is therefore about pinkness, not about particles.
↑ comment by Psychohistorian · 2010-01-13T19:00:11.632Z · LW(p) · GW(p)
Where is lesswrong.com? "On the internet" would be the naive answer, but there's no part of the internet we could naively recognize as being lesswrong.com. A bunch of electrical impulses get interpreted as ones and zeroes, which get translated into a certain language, which converts them into another language (English), which each mind interacting with the site translates in its own way. At the base level, before minds get involved, there's nothing more complex than a bunch of magnets and electric signals and some servers and so on (I'm not a computer person, so cut me some slack on the details). Yet, out of all of that emerges your post, this comment, and so on.
I know that it is in principle possible to understand how all of this comes together, but I also know that I do not in fact understand it. If I were really to look at how complex this site is - down to the level of the chemist who makes the fertilizer to supply the farmer who feeds the truck driver who delivers the petroleum that gets refined into the plastic that makes the keyboard of the engineer who maintains the power plant that keeps the server running - I have absolutely no idea what's going on, and probably never will even if I devoted my entire life to understanding how this website comes together. In fact, I have good reason to believe there are parts of what's going on that I don't even know are parts of what are going on - I don't even understand the basic underlying structure at a complete level. But if a bunch of people were really dedicated to it, they could probably figure it out, so that by asking the group of them, you could figure out what you needed to know about how the site works; in other words, it is in principle understandable, even if no one understands it in its entirety.
There is thus nothing particularly problematic about saying, "So, I don't get how this whole consciousness thing works, but there's probably no magic involved," just as there's no magic (excepting EY's magic) involved in putting this site together. Saying, "I can't naively figure out how some extremely complicated system works, therefore, the answer is: magic!" is simply not a reasonable solution. It is possible that there is something more going on in the brain that we can currently understand, but it seems exceedingly unlikely that it is in principle un-understandable.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-14T05:40:53.727Z · LW(p) · GW(p)
There is thus nothing particularly problematic about saying, "So, I don't get how this whole consciousness thing works, but there's probably no magic involved," just as there's no magic (excepting EY's magic) involved in putting this site together.
If I were to say to you that negative numbers can be made by adding together positive numbers, you just have to add them together in the right way - that would sound strange and wrong, yes? If you start at 1, and keep adding 1, you do not expect your sum to equal -1 (or the square root of -1, or an apple) at any stage. When people say that they do not see how piling up atoms can give rise to color, meaning, consciousness, etc., they are engaged in this sort of reasoning. They're saying: I may not know every property that very large numbers / very large piles of atoms would exhibit, but it would be magic to get that property from those ingredients.
Replies from: Psychohistorian, wnoise, Cyan↑ comment by Psychohistorian · 2010-01-14T20:47:28.417Z · LW(p) · GW(p)
The problem with the analogy is that we know a whole lot about numbers - math is an artificial language which we created and decided upon the axioms of. How do you know enough about matter and neurons to know that it relates to consciousness in the way that adding positive numbers relates to negative numbers or apples? But I've made this point before.
What I would find more interesting is an explanation of what magic would do here. It seems obvious that our perception of a homogeneous shade of pink is, in some significant way, related to lightwave frequencies, retinas, and neurons. Let's assume there is some "magic" involved that in turn converts these physical phenomena into an experience. Wouldn't it have to interact with neurons and such, so that it generates an experience of pink and not an experience of strawberry-rhubarb pie? If it's epiphenomenal, how could it accomplish this, and how could it be meaningful? If it's not epiphenomenal, how does it interact with actual matter? Why can't we detect it?
It's quite clear that when it comes to how consciousness works, the current best answer is, "We don't get it, but it has something to do with the brain and neurons." Answering, "We don't get it, but it has something to do with the brain and neurons and magic" appears to be an inferior answer.
This may be a cheap shot around these parts, but the non-materialist position feels a lot like an argument for the existence of God.
Replies from: Jack, Mitchell_Porter↑ comment by Jack · 2010-01-14T20:54:44.586Z · LW(p) · GW(p)
It's quite clear that when it comes to how consciousness works, the current best answer is, "We don't get it, but it has something to do with the brain and neurons." Answering, "We don't get it, but it has something to do with the brain and neurons and magic" appears to be an inferior answer.
This is perfect and I'm not sure there is much more to say.
↑ comment by Mitchell_Porter · 2010-01-15T08:22:09.820Z · LW(p) · GW(p)
How do you know enough about matter and neurons to know that it relates to consciousness in the way that adding positive numbers relates to negative numbers or apples?
It's our theories of matter which are the problem - and which are clear enough for me to say that something is missing. My position as stated here actually is an identity theory. Experiences are a part of the brain and are causally relevant. But the ontology of physics is wrong, and the attempted reduction of phenomenology to that ontology is also wrong. Instead, phenomenology is giving us a glimpse of the true ontology. All that we see directly is the inner ontology of the conscious experience itself, but one supposes that there is some relationship to the ontology of everything else.
↑ comment by wnoise · 2010-02-09T00:55:31.742Z · LW(p) · GW(p)
If I were to say to you that negative numbers can be made by adding together positive numbers,
\sum_{n=0}^{\infty} 2^n "=" -1.
That is a bit tongue in cheek, but there are divergent sums that are used in serious physical calculations.
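To make the formal manipulation explicit (this is the standard algebra on the divergent series, included for clarity):

```latex
s \;=\; \sum_{n=0}^{\infty} 2^n
  \;=\; 1 + 2\sum_{n=0}^{\infty} 2^n
  \;=\; 1 + 2s
\quad\Longrightarrow\quad s = -1 .
```

(And in the 2-adic metric the series genuinely converges, with limit -1.)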
Replies from: Blueberry, Bo102010↑ comment by Blueberry · 2010-02-09T01:32:53.154Z · LW(p) · GW(p)
there are divergent sums that are used in serious physical calculations.
I'm curious about this. More details please!
Replies from: wnoise↑ comment by wnoise · 2010-02-09T05:23:22.305Z · LW(p) · GW(p)
These mostly crop up in quantum field theory, where various formal expressions have infinite values. These can often be "regularized" to give finite results, or at least turned into a form that, while still infinite, can be "renormalized" - by such means as treating various terms as referring to observed values rather than "bare values", which are carefully tweaked (often by taking limits as they go to zero) in a coordinated way, so that the observed values remain okay.
Letting s be the sum above, in some sense what we're "really" saying is that s = 1 + 2s, which can be seen by formal manipulation. This has two solutions in the (one-point compactification of the) complex numbers: infinity, and -1. When summing Feynman diagrams, we can have similar things, where a physical propagator is essentially described as a bare propagator plus perturbative terms that should be written in terms of products of propagators, leading again to infinite series that diverge (several interlocked infinite series, actually - the photon propagator should include terms with each charged particle, the electron should include terms with photon intermediates, etc.).
IIRC, the Casimir effect can be explained by using zeta-function regularization to sum up the contributions of an infinite number of vacuum modes, though that is certainly not the only way to perform the calculation.
http://cornellmath.wordpress.com/2007/07/28/sum-divergent-series-i/ and the next two posts are a nice introduction to some of these methods.
Wikipedia has a fair number of examples:
- http://en.wikipedia.org/wiki/1_−_2_%2B_3_−_4_%2B_·_·_·
- http://en.wikipedia.org/wiki/1_−_2_%2B_4_−_8_%2B_·_·_·
- http://en.wikipedia.org/wiki/1_%2B_1_%2B_1_%2B_1_%2B_·_·_·
- http://en.wikipedia.org/wiki/1_%2B_2_%2B_3_%2B_4_%2B_·_·_·
Explicit physics calculations I do not have at the ready.
EDIT: please do not take the descriptions of the physics above too seriously. It's not quite what people actually do, but it's close enough to give some of the flavor.
↑ comment by Cyan · 2010-01-15T14:58:53.343Z · LW(p) · GW(p)
Can you clarify why
When people say that they do not see how piling up atoms can give rise to color, meaning, consciousness, etc., they are engaged in this sort of reasoning.
does not also apply to the piling up of degrees of freedom in a quantum monad?
I have another question, which I expect someone has already asked somewhere, but I doubt I'll be able to find your response, so I'll just ask again. Would a simulation of a conscious quantum monad by a classical computation also be conscious?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-16T06:01:39.381Z · LW(p) · GW(p)
Answer to the first question: everything I say here about redness applies to the other problem topics. The final stage of a monadic theory is meant to have the full ontology of consciousness. But it may have an intermediate stage in which it is still just mathematical.
Answer to the second question: No. In monadology (at least as I conceive it), consciousness is only ever a property of individual monads, typically in very complex states. Most matter consists of large numbers of individual monads in much simpler states, and classical computation involves coordinating their interactions so that the macrostates implement the computation. So you should be able to simulate the dynamics of a single complex monad using many simple monads, but that's all.
Replies from: Cyan↑ comment by Cyan · 2010-01-16T12:25:05.044Z · LW(p) · GW(p)
So you should be able to simulate the dynamics of a single complex monad using many simple monads, but that's all.
And would such a computation claim to be conscious, p-zombie-style?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-16T12:56:01.413Z · LW(p) · GW(p)
If the conscious being it was simulating would do so, then yes.
On the general topic of simulation of conscious beings, it has just occurred to me... Most functionalists believe a simulation would also be conscious, but a giant look-up table would not be. But if the conscious mind consists of physically separable subsystems in interaction - suppose you try simulating the subsystems with look-up tables, at finer and finer grains of subdivision. At what point would the networked look-up-tables be conscious?
Replies from: Cyan, RobinZ, Cyan↑ comment by Cyan · 2010-01-16T13:48:11.518Z · LW(p) · GW(p)
Would a silicon-implemented Mitchell Porter em, for no especial reason (lacking consciousness, it can have none), attempt to reimplement itself in a physical system with a quantum monad?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-17T06:53:22.985Z · LW(p) · GW(p)
In terms of current physics, a monad is supposed to be a lump of quantum entanglement, and there are blueprints for a silicon quantum computer in which the qubits are dopant phosphorus atoms. So consciousness on a chip is not in itself a problem for me, it just needs to be a quantum chip.
But you're talking about an unconscious classical simulation. OK. The intuition behind the question seems to be: because of its beliefs about consciousness, the simulation will think it can't be conscious in its current form, and will try to make itself so. It doesn't sound very likely. But it's more illuminating to ask a different question: what happens when an unconscious simulation of a conscious mind, holding a theory about consciousness according to which such a simulation cannot be conscious, is presented with evidence that it is such a simulation itself?
First, we should consider the conscious counterpart of this, namely: an actually conscious being, with a theory of consciousness, is presented with evidence that it is the sort of thing that cannot be conscious according to its theory. To some extent this is what happened to the human race. The basic choice is whether to change the theory or to retain it. It's also possible to abandon the idea of consciousness; or even to retain the concept of consciousness but decide that it doesn't apply to you.
So, let's suppose I discover that my skull is actually full of silicon chips, not neurons, and that they appear to only be performing classical computations. This would be a rather shocking discovery for a lot of mundane reasons, but let's suppose we get those out of the way and I'm left with the philosophical problem. How do I respond?
To begin with, the situation hasn't changed very much! I used to think that I had a skull full of neurons which appear to only be performing classical computations. But I also used to think that, in reality, there was probably something quantum happening as well, and so took an interest in various speculations about quantum effects in the brain. If I find my brain to in fact be made of silicon chips, I can still look for such effects, and they really might be there.
To take the thought experiment to its end, I have to suppose that the search turns up nothing. The quantum crosstalk is too weak to have any functional significance. Where do I turn then? But first, let's forget about the silicon aspect here. We can pose the thought experiment in terms of neurons. Suppose we find no evidence of quantum crosstalk between neurons. Everything is decoherent, entanglement is at a minimum. What then?
There are a number of possibilities. Of course, I could attempt to turn to one of the many other theories of consciousness which assume that the brain is only a classical computer. Or, I could turn to physics and say the quantum coherence is there, but it's in some new, weakly interacting particle species that shadows the detectable matter of the brain. Or, I could adopt some version of the brain-in-a-vat hypothesis and say, this simply proves that the world of appearances is not the real world, and in the real world I'm monadic.
Now, back to the original scenario. If we have an unconscious simulation of a mind with a monadic theory of consciousness, and the simulation discovers that it is apparently not a monad, it could react in any of those ways. Or rather, it could present us with the simulation of such reactions. The simulation might change its theory; it might look for more data; it might deny the data. Or it might simulate some more complicated psychological response.
Replies from: Cyan↑ comment by Cyan · 2010-01-17T17:52:29.601Z · LW(p) · GW(p)
Thanks for clearing up the sloppiness of my query in the process of responding to it. You enumerated a number of possible responses, but you haven't committed a classical em of you to a specific one. Are you just not sure what it would do?
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-18T07:48:43.964Z · LW(p) · GW(p)
It's a very hypothetical scenario, so being not sure is, surely, the correct response. But I revert to pondering what I might do if in real life it looks like conscious states are computational macrostates. I would have to go on trying to find a perspective on physics whereby such states exist objectively and have causal power, and in which they could somehow look like or be identified with subjective experience. Insofar as my emulation concerned itself with the problem of consciousness, it might do that.
Replies from: Cyan↑ comment by RobinZ · 2010-01-16T15:42:49.712Z · LW(p) · GW(p)
I think Eliezer Yudkowsky's remarks on giant lookup tables in the Zombie Sequence just about cover the interesting questions.
↑ comment by Cyan · 2010-01-16T13:26:41.849Z · LW(p) · GW(p)
The reason lookup tables don't work is that you can't change them. So you can use a lookup table for, e.g., the shape of an action potential (essentially the same everywhere), but not for the strengths of the connections between neurons, which are neuroplastic.
Replies from: Nick_Tarleton↑ comment by Nick_Tarleton · 2010-01-16T13:50:25.894Z · LW(p) · GW(p)
A LUT can handle change, if it encodes a function of type (Input × State) → (Output × State).
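A minimal sketch of such a stateful table (the input/state alphabet and the entries are an invented toy example, not anything from the thread):

```python
# A lookup table of type (Input x State) -> (Output x State): threading
# the state back through the table lets a pure table "handle change".
LUT = {
    ("poke", "calm"):    ("hm?",   "alert"),
    ("poke", "alert"):   ("stop!", "annoyed"),
    ("poke", "annoyed"): ("STOP!", "annoyed"),
    ("wait", "calm"):    ("...",   "calm"),
    ("wait", "alert"):   ("...",   "calm"),
    ("wait", "annoyed"): ("...",   "alert"),
}

def run(inputs, state="calm"):
    outputs = []
    for i in inputs:
        out, state = LUT[(i, state)]   # a pure lookup, no other machinery
        outputs.append(out)
    return outputs

print(run(["poke", "poke", "poke"]))  # ['hm?', 'stop!', 'STOP!']
```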
Replies from: Cyan↑ comment by Cyan · 2010-01-16T14:05:10.595Z · LW(p) · GW(p)
Since I can manually implement any computation a Turing machine can, the table for some subsystem of me would have to contain a "full computation" table - one that checks, for every possible computation, whether it halts before I die. I submit such a table is not very interesting.
Replies from: wedrifid↑ comment by HalFinney · 2010-01-13T18:04:37.690Z · LW(p) · GW(p)
Suppose it turned out that the part of the brain devoted to experiencing (or processing) the color red actually was red, and similarly for the other colors. Would this explain anything?
Wouldn't we then wonder why the part of the brain devoted to smelling flowers did not smell like flowers, and the part for smelling sewage didn't stink?
Would we wonder why the part of the brain for hearing high pitches didn't sound like a high pitch? Why the part which feels a punch in the nose doesn't actually reach out and punch us in the nose when we lean close?
I can't help feeling that this line of questioning is bizarre and unproductive.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-14T05:57:35.604Z · LW(p) · GW(p)
Hal, what would be more bizarre - to say that the colors, smells, and sounds are somewhere in the brain, or to say that they are nowhere at all? Once we say that they aren't in the world outside the brain, saying they are inside the brain is the only place left, unless you're a dualist.
Most people here are saying that these things are in the brain, and that they are identical with some form of neural computation. My objection is that the brain, as currently understood by physics, consists of large numbers of particles moving in space, and there is no color, smell, or sound in that. I think the majority response to that is to say that color, smell, sound is how the physical process in question "feels from the inside" - to which I say that this is postulating an extra property not actually part of physics, the "feel" of a physical configuration, and so it's property dualism.
If the redness, etc, is in the brain, that doesn't mean that the brain part in question will look red when physically examined from outside. Every example of redness we have was part of a subjective experience. Redness is interior to consciousness, which is interior to the thing that is conscious. How the thing that is conscious looks when examined by another thing that is conscious is a different matter.
↑ comment by AndyWood · 2010-01-10T07:53:37.092Z · LW(p) · GW(p)
Red is in your mind. It's a sensation. It's what it feels like inside when you're looking at something you call red. Nothing is actually red, it's just a verbal symbol you assign to a particular inner sensation.
I will very, very happily grant that we do not have a good explanation for how the brain creates such subjective, inner sensations. Notice I wrote "how" and not "why".
There is no such thing as a zombie as they are usually defined, because every time you make a brain, so long as it is physically indistinguishable from a regular brain, it's going to be conscious. That's what happens when you make a brain that way (you can try it out by having a child). On the other hand, if we allow the definition of a zombie to be changed just a little bit, so that it includes unconscious things that are merely behaviorally indistinguishable from conscious people, then I see no problem at all with that kind of zombie. But their insides would be physically, detectably different. Their brains would not work the same way as ours. If they did, then they would be conscious.
Regular brains produce private, inner sensation, just as surely as the sun produces radiation. The right question is not "why should this be so?" but "how does it do it?" And I grant that this is an unanswered question. Heterophenomenology is just fine as far as it goes, but it doesn't go this far. All that stuff about only asking why an agent verbalizes or believes that it sees red is profoundly unsatisfying as an explanation for redness, and for a very good reason. It's like behaviorism in psychology - maybe fine for some purposes, but inherently limited, in the negative sense. It ignores the inner sensation that we all know is there.
Now, that's no reason to suppose that the red sensation is not explainable or reducible. We just don't understand the brain well enough yet, so we'll just have to keep thinking about it and wait until we do.
↑ comment by Wei Dai (Wei_Dai) · 2010-01-10T07:53:03.598Z · LW(p) · GW(p)
Did anyone else notice the similarity of Mitchell's arguments in this post, and the one in his comment to one of my posts? Here he says that there is no color in a purely physical description of a mind, and in his comment to my post he said that there is no utility function in a purely physical description of a mind.
I think this argument actually works better here (with color), because my counter-argument to his comment doesn't work. What I said was that in principle we know how to go from a utility function to a physical description of an object (by creating an AI) and so in principle we also know how to go from a physical description to a utility function.
Here, we don't know how to go from a color to a physical description of a mind that can experience that color, nor can we tell what color a mind is experiencing or capable of experiencing, given a physical description of it. But I'm not sure we should expect this state of affairs to continue forever.
↑ comment by Vladimir_Nesov · 2010-01-10T01:48:45.919Z · LW(p) · GW(p)
From the sequence on reductionism:
Of course, this only answers where the behavior of seeing color is located - but then the correspondence between (introspective) behavior and experience is strict, even if the way in which it is achieved may be nontrivial:
↑ comment by FAWS · 2010-01-10T03:00:15.182Z · LW(p) · GW(p)
Reposting my question from upthread:
I'm not quite sure I understand the problem with blueness as you see it.
Suppose neuroscience was advanced enough that it could manipulate your perception of colors in any arbitrary way, just by manipulating your neurons. For example, they could make you perceive blue as you previously perceived red and the other way round, induce synaesthesia, and make you perceive the smell of roses, the taste of salt, the note C, or other things as blue. They could change your perception of color completely, leaving your new perception of colors only as similar to your old one as that one was to your perception of smells, flavors, or sounds. If all of this was true, would that be enough for you to accept that blueness is sufficiently explicable by the behaviour of neurons alone? Or would you argue that while neurons are enough to induce the sensation of blueness, this sensation itself is still something beyond the mere behaviour of neurons?
↑ comment by Cyan · 2010-01-15T14:58:55.411Z · LW(p) · GW(p)
What would be your reply to
But if reality is nothing but quantum monads in space, and empty space is not red, and the quantum monads are not red, then what is?
Replies from: Mitchell_Porter
↑ comment by Mitchell_Porter · 2010-01-16T05:53:05.567Z · LW(p) · GW(p)
The redness must be in the monad. The point of postulating monads is to have an ontological correlate to the phenomenological unity of consciousness. Redness is interior to consciousness, which is interior to the thing that is conscious. A theory of monads might have an intermediate stage of incomplete descriptions, in which it is described purely formally, mathematically, and causally, but the objective is to have a theory in which there is something recognizably identical with a total instantaneous conscious experience. This is also the point of reverse monism.
↑ comment by Jonii · 2010-01-10T10:35:47.927Z · LW(p) · GW(p)
Simple "Our experience is what world looks like if you happen to be a bunch of neurons firing inside an apes head" is surprisingly strong reply to the questions you raised. If we take neurons that are put together like they are in brain, and give it sensory input that resembles blue ball, we can ask that brain what it thinks it's seeing. It'll answer "blue ball", and there's nothing weird happening here. Here, answering, thinking or even seeing the blue ball is simply a brain state, purely physical phenomenon.
The mystery, as far as I can tell, happens when we notice that we could theoretically try to see the world as that brain would see it, and suddenly we are actually experiencing that blue ball and all the qualia that come with it. Now the magic is in a much smaller space: there is no magic happening when a brain claims it sees blue things, and there's nothing mysterious when we take the brain's point of view and try to understand how we'd see the world if we were just a brain. So the mystery of consciousness seems to hide in the question "in a world with no observers, can we sensibly talk about what the world would be like, seen from a non-sentient thing's perspective?" If we can, the qualia seem to be identical in every respect to that perspective.
So what's the ontology of perspective? I have no idea, but perspective seems to go hand in hand with something physical even existing, so we could be strictly materialistic while still acknowledging the existence of qualia.
↑ comment by Kyro · 2011-04-07T01:19:50.476Z · LW(p) · GW(p)
Color is in the wavelength of the photon. Blue is a label we use to identify a particular frequency range within the electromagnetic spectrum.
Replies from: pjebycomment by Mitchell_Porter · 2010-01-10T01:37:07.023Z · LW(p) · GW(p)
A final thread for answers to specific questions.
Third question: Where is meaning?
Thoughts are about things. What aspect of the physical state of an object makes that state "about" something in particular, or about anything at all?
Replies from: RobinZ, PhilGoetz↑ comment by RobinZ · 2010-01-10T02:39:15.706Z · LW(p) · GW(p)
(I answer this question, because the discussion of color in the prior post was hopeless from a communication standpoint.)
Consider a simple device, consisting of a chamber containing a measured amount of mercury and a very narrow tube rising from this chamber. As the temperature of the mercury changes, the volume changes as a simple function (roughly linear, but more importantly monotonic). (As mercury is highly thermally conductive, this temperature is roughly uniform.) This change in volume causes a small amount of the mercury to expand into the narrow tube - the precise amount linearly proportional to the change in volume. It is mathematically clear, therefore, that the height of mercury in the tube is a monotonic function of the temperature of the mercury. The net result of creating this device is a stable, predictable, reliable correlation in the universe between two things - and the tube, therefore, can be marked at intervals (the first thing) corresponding to particular temperatures (the second thing).
We call this a thermometer, of course. And when the mercury is next to the "76" label on the thermometer, we say that this means that the temperature is 76 degrees.
Does this make sense? It would be useful to know whether this sounds like a "wretched subterfuge", as Kant called compatibilist theories of free will.
Replies from: Alicorn↑ comment by Alicorn · 2010-01-10T02:48:59.220Z · LW(p) · GW(p)
Just to play with the idea: is cricket chirping about the temperature in Fahrenheit?
Replies from: RobinZ↑ comment by RobinZ · 2010-01-10T02:57:05.143Z · LW(p) · GW(p)
Let us be precise: the frequency of cricket chirping is reliably correlated with the temperature in Fahrenheit - to be specific, the Fahrenheit temperature is approximately the number of chirps in 13 seconds plus 40 - and therefore a particular frequency of cricket chirping means a particular temperature.
The cricket chirping itself usually means other things.
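To make the correlation-as-meaning point concrete, a minimal sketch (the function name is mine; the rule is the folk version of Dolbear's law cited above):

```python
def chirps_to_fahrenheit(chirps_in_13_seconds: int) -> int:
    """Folk version of Dolbear's law: chirps counted in 13 seconds, plus 40."""
    return chirps_in_13_seconds + 40

# 36 chirps in 13 seconds "means" roughly 76 degrees F - the same
# reading as the "76" label in the thermometer example above.
print(chirps_to_fahrenheit(36))  # 76
```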
↑ comment by PhilGoetz · 2010-01-10T02:11:50.541Z · LW(p) · GW(p)
I think this is a big distraction. As you pointed out in a comment, this is the purpose of the "Where is the chess in Deep Blue?" question, and it has nothing to do with the question you're posing about qualia. That way lies madness and John Searle.
comment by Technologos · 2010-01-08T17:22:42.850Z · LW(p) · GW(p)
I defer to Wittgenstein: the limits of our language are the limits of the world. We can literally ask the questions above, but I cannot find meaning in them. Blueness, computational states, time, and aboutness do not seem to me to have any implementation in the world beyond the ones you reject as inadequate, and I simply don't see how we can speak meaningfully (that is, in a way that allows justification or pursues truth) about things outside the observable universe.
comment by Vladimir_Nesov · 2010-01-08T13:47:44.143Z · LW(p) · GW(p)
I don't believe that confabulating a confused topic is a useful activity that is expected to advance understanding. We'd all be better off avoiding this mode of thinking, and building on better-understood concepts instead.
Replies from: Tyrrell_McAllister↑ comment by Tyrrell_McAllister · 2010-01-08T15:11:12.115Z · LW(p) · GW(p)
What do you mean by "confabulating"? Do you just mean that people here aren't really confused about this topic?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2010-01-08T15:21:49.219Z · LW(p) · GW(p)
They are, which is why constructing one more confused argument in terms of confused concepts is no use.
comment by Paul Crowley (ciphergoth) · 2010-01-08T13:35:09.552Z · LW(p) · GW(p)
Given that you accept heterophenomenology, I wish you'd put this in explicitly heterophenomenological terms - in terms of accounting for the utterances that people make, in other words. The reason I keep banging on about this is that I think that it is the key move in defusing the confusions you exhibit here.
Replies from: Mitchell_Porter↑ comment by Mitchell_Porter · 2010-01-11T11:44:10.569Z · LW(p) · GW(p)
I accept heterophenomenology only in the sense that people can indeed be mistaken in describing their experiences. On those occasions, you only need to account for the description. But I would say "folk phenomenology" is correct about the basics.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-11T11:58:51.779Z · LW(p) · GW(p)
Accepting heterophenomenology means accepting that if a theory successfully accounts for everything you can observe from the outside, there is no further work to do. I hope to do a top-level post about this soon.
comment by Jonii · 2010-01-10T10:33:03.997Z · LW(p) · GW(p)
Simple "Our experience is what world looks like if you happen to be a bunch of neurons firing inside an apes head" is surprisingly strong reply to the questions you raised. If we take neurons that are put together like they are in brain, and give it sensory input that resembles blue ball, we can ask that brain what it thinks it's seeing. It'll answer "blue ball", and there's nothing weird happening here. Here, answering, thinking or even seeing the blue ball is simply a brain state, purely physical phenomenon.
The mystery, as far as I can tell, happens when we notice that we could theoretically try to see the world as that brain would see it, and suddenly we are actually experiencing that blue ball and all the qualia that come with it. Now the magic is in a much smaller space: there is no magic happening when a brain claims it sees blue things, and there's nothing mysterious when we take the brain's point of view and try to understand how we'd see the world if we were just a brain. So the mystery of consciousness seems to hide in the question "in a world with no observers, can we sensibly talk about what the world would be like, seen from a non-sentient thing's perspective?" If we can, the qualia seem to be identical in every respect to that perspective.
So what's the ontology of perspective? I have no idea, but perspective seems to go hand in hand with something physical even existing, so we could be strictly materialistic while still acknowledging the existence of qualia.
comment by Wei Dai (Wei_Dai) · 2010-01-10T07:33:49.292Z · LW(p) · GW(p)
Did anyone else notice the similarity of Mitchell's arguments in this post, and the one in his comment to one of my posts? Here he says that there is no color in a purely physical description of a mind, and in his comment to my post he said that there is no utility function in a purely physical description of a mind.
I think this argument actually works better here (with color), because my counter-argument to his comment doesn't work. What I said was that in principle we know how to go from a utility function to a physical description of an object (by creating an AI) and so in principle we also know how to go from a physical description to a utility function.
But here, we don't know how to go from a color to a physical description of a mind that can experience that color, nor can we tell what color a mind is experiencing or capable of experiencing, given a physical description of it, even in principle.
comment by FAWS · 2010-01-09T22:01:03.876Z · LW(p) · GW(p)
I'm not quite sure I understand the problem with blueness as you see it.
Suppose neuroscience were advanced enough that it could manipulate your perception of colors in any arbitrary way just by manipulating your neurons. For example, they could make you perceive blue as you previously perceived red and the other way round, induce synaesthesia, and make you perceive the smell of roses, the taste of salt, the note C, or other things as blue. They could change your perception of color completely, leaving your new perception of colors only as similar to your old one as that one was to your perception of smells, flavors, or sounds. If all of this were true, would that be enough for you to accept that blueness is sufficiently explicable by the behaviour of neurons alone? Or would you argue that while neurons are enough to induce the sensation of blueness, this sensation itself is still something beyond the mere behaviour of neurons?
comment by [deleted] · 2010-01-08T20:54:41.823Z · LW(p) · GW(p)
No offense, but...
Your previous article on the subject got downvoted to -15, and yet you posted a second article anyway? Why did you do that? Did you perform further research to determine whether all of us were confused, or only you? Did you try to determine whether there was any question to answer or not? Did you try to figure out why it seemed like there was a question to answer?
I don't know you very well at all, but it appears that you're an intelligent and useful person. I'm guessing that seeing the response to your articles will be leaving you disappointed and discouraged. Please remember that we humans have a greater than optimal tendency to give up when faced with unimportant and nominal failures; we'd like you to learn from your mistakes and come back stronger. Possibly, the best thing for you to do is try to forget about this issue for a few weeks and see what your attitude toward it is then.
Replies from: Mitchell_Porter, timtyler↑ comment by Mitchell_Porter · 2010-01-13T12:08:52.349Z · LW(p) · GW(p)
Thank you for the concern, but things have been fairly mellow this time around anyway.
When I was a teenager, I thought about the mind as people here do, at least some of the time. I was happy to think of consciousness as something like a video camera aimed at its own output. But I know that by the time I was 20, I was thinking differently, and I do not expect to ever turn back. It's clear to me that computer science and mathematical physics only address a subset of the world's ontology, and that the reductionisms we have consist at best of partial descriptions, and at worst of misidentifications. Also, in the study of phenomenology, especially Husserl's transcendental phenomenology, I've had a glimpse of how to think rigorously about the rest of ontology.
This is my larger agenda. The problem of the Singularity is being approached within the existing scientific ontology, which is incomplete, and the solutions being developed, like CEV and TDT, are also stated in terms of that ontology. To really know what you're doing, when attempting to initiate a Friendly Singularity, you'd need to understand those solutions, or their analogues, in terms of the true ontology. But to do that requires knowledge of the true ontology.
So, while trying to figure out a better ontology, I have an interest in understanding the thought processes of people who are satisfied with the existing one, because such people dominate the Singularity enterprise. Ideally I'd be able to provoke some sense of philosophical crisis and inadequacy, but obviously that isn't happening. However, I think there has been minor progress. I intend to let the current discussion wind down - to reply where there's more to be said, but not to get into "Yes it is, no it isn't" exchanges - and to get on with the larger enterprise, once it's over. These discussions have all already occurred, at a higher level of sophistication on all sides, in the philosophical literature, and I should relocate the ontological component of my project in that direction.
comment by MrHen · 2010-01-08T16:33:03.962Z · LW(p) · GW(p)
Honestly, I read this:
Someone else can do the metalevel analysis and extract the rationality lessons.
And noticed that the post is currently rated at -2. All signs are telling me to not bother reading this post. I probably will anyway, but I felt like reminding my future self why the karma system is here. :P
EDIT:
Color was an issue last time.
Where is "last time"?
Replies from: ciphergoth, RobinZ↑ comment by Paul Crowley (ciphergoth) · 2010-01-08T17:29:24.877Z · LW(p) · GW(p)
The thing is, Mitchell Porter is clearly a very intelligent and thoughtful person, who seems to be sinking huge amounts of his cognitive resources into this pointless, meaningless, doomed project. If we could persuade him of the futility and folly of it, it would probably be worth it.
Replies from: byrnema, MrHen↑ comment by byrnema · 2010-01-08T18:16:22.585Z · LW(p) · GW(p)
On the other hand, this idea of qualia -- whatever it is actually about -- is a sticking point for the dualists. We should try to understand what they're talking about instead of just asserting they're not talking about anything.
If we can look at dualist arguments and identify the exact location of our different thinking, then we own the argument and have a chance of explaining it to them. If we only understand the problem on the level that "well, we understand the reductionist view and it doesn't present any problems about qualia" then we don't actually understand anything about dualism.
Otherwise the message is: dualists just need to become reductionists in order to get over their qualia problem.
Personally, I can't relate to dualism either and I am curious about why I can't.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2010-01-08T22:46:07.439Z · LW(p) · GW(p)
Consciousness Explained does try to explain why people have the idea of qualia.
The next post Porter needs to do on this is one explicitly addressing the position Dennett sets out in Consciousness Explained. That position is certainly popular enough here on LW that I don't see how we're going to have a useful discussion until he makes that post. I'm disappointed that that wasn't the conclusion he drew from the previous discussion.
Replies from: whpearson↑ comment by MrHen · 2010-01-08T19:03:29.933Z · LW(p) · GW(p)
Fair enough. I did end up reading the post but was confused. I got the feeling I was jumping into the middle of a topic/conversation and missed all of the setup. I will read the link from RobinZ and see if it fills in the gaps.
Although, one clarification would be nifty. I am assuming that the discussion about Color really has little to do with Color itself and more to do with the representation of Colorness in our "head", hence the whole topic of dualism. Am I even close?
EDIT: Actually, thinking more about what you said, I find your comment extremely valuable. Not so much in that I feel I should persuade anyone of anything, but more in that there are more reasons to read posts than I was initially considering. :P
↑ comment by RobinZ · 2010-01-08T17:45:25.175Z · LW(p) · GW(p)
Last time was "How to think like a quantum monadologist". Having read both, I consider this one superior, thanks to it containing less substance to hold its confusions.
Edit: See SilasBarta's links.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-08T13:57:26.868Z · LW(p) · GW(p)
Thought I accounted for aboutness already, in The Simple Truth. Please explain what aspect of aboutness I failed to account for here.
Replies from: Tyrrell_McAllister, SilasBarta, MatthewB, Mitchell_Porter↑ comment by Tyrrell_McAllister · 2010-01-08T15:45:07.055Z · LW(p) · GW(p)
That is a 6,777 word dialogue that covers many things. Can you summarize the part that is an account of aboutness specifically?
Skimming it, you seem to me to be saying that a physical system A is about a physical system B if each state that B is in (up to some equivalence relation) causes A to be in a distinct state (up to some equivalence relation). Hence, the pebbles in the bucket are "about" the sheep in the field because the number of sheep in the field causes the number of pebbles in the bucket to take on a certain value.
I write that summary knowing that it probably misses something crucial in your account. As I say, I only skimmed the essay, trying to skip jokes and blatant caricatures (e.g., when your foil says, "Now, I’d like to move on to the issue of how logic kills cute baby seals -"). My summary is just to give you a launching point from which to correct potential misunderstandings, should you care to.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-08T18:41:19.750Z · LW(p) · GW(p)
The whole dialogue is targeted specifically at decomposing the mysteriously opaque concepts of "truth" and "semantics" and "aboutness" for people who are having trouble with it. I'm not sure there's a part I could slice off for this question, given that someone is asking the question at all.
Maybe I'd ask, "In what sense are the pebbles not about the sheep? If the pebbles are about the sheep, in what sense is this at all mysterious?"
I make no claims about aboutness. Rather, I understand how the pebble-and-bucket system works. If you want to claim that there is a thing called "aboutness" which remains unresolved, it's up to you to define it.
Replies from: Tyrrell_McAllister, thomblake↑ comment by Tyrrell_McAllister · 2010-01-08T19:05:00.822Z · LW(p) · GW(p)
I make no claims about aboutness. Rather, I understand how the pebble-and-bucket system works. If you want to claim that there is a thing called "aboutness" which remains unresolved, it's up to you to define it.
Then, to call this an "account of aboutness", you should explain what it is about the human mind that makes it feel as though there is this thing called "aboutness" that feels so mysterious to so many. As you put so well here: "Your homework assignment is to write a stack trace of the internal algorithms of the human mind as they produce the intuitions that power the whole damn philosophical argument."
If you did this in your essay, it was too dispersed for me to see it as I skimmed. What I saw was a caricature of the rationalizations people use to justify their beliefs. I didn't see the origin of the intuitions standing behind their beliefs.
↑ comment by thomblake · 2010-01-08T18:50:07.632Z · LW(p) · GW(p)
I understand how the pebble-and-bucket system works. If you want to claim that there is a thing called "aboutness" which remains unresolved, it's up to you to define it.
Yes, I think this was the part that was missing in the initial reply.
↑ comment by SilasBarta · 2010-01-08T18:34:05.457Z · LW(p) · GW(p)
Agree with Tyrrell_McAllister. You need to be a lot more specific when you make a claim like this.
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-08T18:40:05.610Z · LW(p) · GW(p)
The whole dialogue is targeted specifically at decomposing the mysteriously opaque concepts of "truth" and "semantics" and "aboutness" for people who are having trouble with it. I'm not sure there's a part I could slice off for this question, given that someone is asking the question at all.
↑ comment by Mitchell_Porter · 2010-01-10T09:59:15.636Z · LW(p) · GW(p)
The story only addresses how representation works in a simple mechanism exterior to a mind (i.e., the pebble method works because the number of pebbles is made to track the number of sheep). The position one usually takes in these matters is that the semantics of artefacts (like the meaning of a sound) are contingent and dependent upon convention, but that the semantics of minds (like the subject matter of a thought) are intrinsic to their nature.
It is easy to see that the semantics of the pebbles has some element of contingency - they might have been counting elephants rather than sheep. It is also easy to see that the semantics of the pebbles derives from the shepherd's purposes and actions. So there is no challenge here to the usual position, stated above.
But what you don't address is the semantics of minds. Do you agree with the distinction between intrinsic and mind-dependent representation? If so, how does intrinsic representation come about? What is it about the physical aspect of a thought that connects it to its meaning?
Replies from: Eliezer_Yudkowsky↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-10T13:18:37.018Z · LW(p) · GW(p)
What's in the shepherd that's not in the pebbles, exactly?
Let's move to the automated pebble-tracking system where a curtain twitches as the sheep passes, causing a pebble to fall into the bucket (the fabric is called Sensory Modality, from a company called Natural Selections). What is in the shepherd that is not in the automated, curtain-based sheep-tracking system?
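A toy sketch of that automated system, with hypothetical names (nothing here is from the original dialogue): the count covaries with the sheep, and that causal covariation is all the machinery the system contains. Whether covariation suffices for "aboutness" is exactly what the replies below dispute.

```python
# The curtain ("Sensory Modality") twitches when a sheep passes,
# causing a pebble to fall into the bucket; a returning sheep
# removes one. The pebble count covaries with the sheep count,
# but nothing inside the system singles out sheep, rather than
# sheep-passings or sheep-noses, as the referent.
class Bucket:
    def __init__(self):
        self.pebbles = 0

def on_sheep_leaves(bucket):
    bucket.pebbles += 1   # curtain twitch drops a pebble in

def on_sheep_returns(bucket):
    bucket.pebbles -= 1   # a matching pebble is removed

bucket = Bucket()
for _ in range(5):
    on_sheep_leaves(bucket)
for _ in range(5):
    on_sheep_returns(bucket)

assert bucket.pebbles == 0  # empty bucket: every sheep accounted for
```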
Replies from: Mitchell_Porter, Tyrrell_McAllister↑ comment by Mitchell_Porter · 2010-01-16T08:10:12.716Z · LW(p) · GW(p)
What is in the shepherd that is not in the automated, curtain-based sheep-tracking system?
Do you agree that there is a phenomenon of subjective meaning to be accounted for? The question of meaning does not originate with problems like "why does pebble-tracking work?". It arises because we attribute semantic content both to certain artefacts and to our own mental states.
If we view the number of pebbles as representing the number of sheep, this is possible because of the causal structure, but it actually occurs because of "human interpretation". Now if we go to mental states themselves, do you propose to explain their representational semantics in exactly the same way – human interpretation; which creates foundationless circularity – or do you propose to explain the semantics of human thought in some other way – and if so in what way – or will you deny that human thoughts have a semantics at all?
↑ comment by Tyrrell_McAllister · 2010-01-10T18:00:17.923Z · LW(p) · GW(p)
Even as a reductionist, I'll point out that the shepherd seems to have something in him that singles out the sheep specifically, as opposed to all other possible referents. The sheep-tracking system, in contrast, could just as well be counting sheep-noses instead of sheep. Or it could be counting sheep-passings—not the sheep themselves, but rather just their act of passing past the fabric. It's only when the shepherd is added to the system that the sheep-out-in-the-field get specified as the referents of the pebbles.
ETA: To expand a bit: The issue I raise above is basically Quine's indeterminacy of translation problem.
One's initial impulse might be to say that you just need "higher resolution". The idea is that the pebble machine just doesn't have a high-enough resolution to differentiate sheep from sheep-passings or sheep-noses, while the shepherd's brain does. This then leads to questions such as, How much resolution is enough to make meaning? Does the machine (without the shepherd) fail to be a referring thing altogether? Or does its "low resolution" just mean that it refers to some big semantic blob that includes sheep, sheep-noses, sheep-passings, etc.?
Personally, I don't think that this is the right approach to take. I think it's better to direct our energy towards resolving our confusion surrounding the concept of a computation.
comment by JanetK · 2010-01-12T10:24:50.173Z · LW(p) · GW(p)
“The local worldview reduces everything to some combination of physics, mathematics, and computer science, with the exact combination depending on the person. I think it is manifestly the case that this does not work for consciousness.” No, it doesn't work because you have left out BIOLOGY. You cannot just jump from physics and algorithms to how brains function. Here is the outline of a possible path:
1. We know that consciousness has an important function, because it consumes a great deal of energy and evolution does not preserve costly features without benefit – that's how evolution works.
2. Animals move – therefore they must have a model of where they are, where they are going, etc. – like the old Swedish joke: "I cant yump when I got no place to stood."
3. To make a model, animals need to sense the environment and translate the information into elements of the model (perception).
4. In order to use the model to plan and monitor motor action, they have to also model themselves – so the model is of the animal-in-the-world. The tree is not the real tree in reality but the modeled tree, and the me in the model is not the real me in reality but the modeled me.
5. To be useful, the model has to be a unified global model of the animal in the world – all the parts of the model have to be brought together in order to create the best-fit scenario, and in order for various functions to use the information.
6. To be usable for planning and evaluating actions, the model has to include the needs of the animal, such as goals, motivations, and emotions – the model has to have a theory of mind for the animal, so my thoughts in the model are not my real thoughts in reality but the modeled mind. When we introspect we are aware of our model of ourselves, but not of ourselves in reality. Definitions can be a problem here – do we use the word "mind" for cognition or for awareness? We run into trouble if we confuse these two things.
7. To be more useful, the model should be predictive, to overcome the time it takes to construct it. So if "now" is t, the model is created from the information the brain has at t−x and used to predict what reality will be after a duration x, where x is the time it takes to construct the model. This allows errors in motor action to be monitored and corrected, because the incoming sensory data does not match the model's prediction – even the "now" is a modeled now and not the now in reality (see the sketch after this comment).
8. So the biological criteria for a good model are unity, speed, accuracy, and predictive power. The elements used to create the model must be easily manipulated in order to achieve these goals, and must also be capable of being stored as memories, imagined, communicated, etc.
The qualia of the model will be anything and everything that is biologically possible and makes a good model. We have the data that the sense organs can measure, and some effective ways of representing that information in the model. So the question "Why red?" can be answered with "Why not – it works." And the question "Where is the red?" can be answered with "In the structural elements of the model." If someone has a better way to model the frequency of light, I have never heard of it.
If you cannot envisage this modeling as a sequential computer program, that is because it isn't one. It is a massively parallel assembly of overlapping feedback loops that involve most of the cortex, the thalamus, the basal ganglia, and even points in the brain stem. It has more in common with analogue computers than digital ones.
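A toy sketch of the predictive step in point 7, assuming for illustration the simplest possible forward model (constant-velocity extrapolation); the function and variable names are hypothetical.

```python
# Toy version of point 7: the brain only has sensory data from time
# t - x (x = the time it takes to construct the model), so it must
# extrapolate forward by x to model "now". A bare constant-velocity
# guess stands in for whatever richer dynamics real brains use.
def predict_now(position_at_t_minus_x, velocity_estimate, x):
    """Predict the state of the world at time t from data at t - x."""
    return position_at_t_minus_x + velocity_estimate * x

# A ball sensed at 1.0 m moving at 2.0 m/s, with a model-building
# latency of 0.1 s, is modeled as being at 1.2 m "now".
modeled_now = predict_now(1.0, 2.0, 0.1)            # -> 1.2

# Motor errors show up as a mismatch between the prediction and the
# next round of sensory input, which can then be corrected.
actual_next_input = 1.25
prediction_error = actual_next_input - modeled_now  # -> 0.05
```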
Replies from: RobinZ
comment by sharpneli · 2010-01-08T20:27:39.927Z · LW(p) · GW(p)
You can do a Dennett and deny that anything is really blue.
I'd like to see what he'd do if presented with a blue ball and a red ball and given the task: "Pick up the blue ball and you'll receive 3^^^3 dollars."
Even though many claim to be confused about these common words, their actual behaviour betrays them. Which raises the question: what is the benefit of this wondering about "blueness"? What does it help anyone actually do?
Replies from: RobinZ↑ comment by RobinZ · 2010-01-08T20:47:51.412Z · LW(p) · GW(p)
I believe you are confused about what Dennett asserts. Quining Qualia would probably be the most obviously relevant essay easily located online, if you want to read him in his own words.
If you don't, the key point is that Dennett maintains that qualia, as commonly described, are necessarily:
- ineffable
- intrinsic
- private
- directly or immediately apprehensible in consciousness
...and that nothing actually exists with these properties. You see blue things, but there is no pure experience of blue behind your seeing blue things.
Edit: Allow me to emphasize that I do not consider the confusion to reflect poorly upon yourself - yours was a reasonable reading of Mitchell_Porter's characterization of Dennett's remarks. A better wording for the opening of my reply would be: "I think the quote doesn't reflect what Dennett believes."
Replies from: sharpneli↑ comment by sharpneli · 2010-01-08T21:04:52.520Z · LW(p) · GW(p)
It seems I was wrong about Dennett's claims and misinterpreted the relevant sentence.
However, the original question remains and can be rephrased: what predictions follow from a world containing some intrinsic blueness?
The topmost cached thought I have is that this is exactly the same kind of confusion as presented in Excluding the Supernatural. Basically, qualia are assumed to be ontologically basic, instead of a neural firing pattern.
The big question is therefore (as presented in this thread already in various forms): what would you predict if you found yourself in a world with distinct blueness, compared to a world without?
Replies from: RobinZ↑ comment by RobinZ · 2010-01-08T21:10:43.776Z · LW(p) · GW(p)
Ah, I apologize - I had not realized you had the other point in your comment. That strikes me as a key angle, and one of the reasons why I upvoted ciphergoth's question.
comment by Sniffnoy · 2010-01-08T19:25:18.291Z · LW(p) · GW(p)
Can't we just define "blue" or "blueness" or what have you to be an equivalence class and be done with it?
Replies from: thomblake↑ comment by thomblake · 2010-01-08T19:27:39.567Z · LW(p) · GW(p)
Well we wouldn't want to "just define" a word that's supposed to refer to something in the world, without figuring out what that thing is yet.
Replies from: Sniffnoy↑ comment by Sniffnoy · 2010-01-08T19:44:16.288Z · LW(p) · GW(p)
OK, but it's not too hard to describe what makes a thing blue. The only obvious sticking point is whose standard of blueness we're using. Perhaps a "blueness function" would be better than an equivalence class of all blue things, then. Regardless, determining whether or not a given thing is blue doesn't seem to be what the OP is asking about; I'm suggesting that this suffices.
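As a minimal sketch of what such a "blueness function" might look like, assuming purely for illustration that we operationalize a color stimulus as a hue in HSV space; the hue window is one arbitrary observer's standard, which is exactly the sticking point noted above:

```python
import colorsys

def blueness(r, g, b, hue_lo=0.55, hue_hi=0.72):
    """Degree of membership in 'blue' for an RGB triple in [0, 1].

    The hue window is one observer's arbitrary standard; a different
    observer's blueness function would draw the boundary elsewhere.
    """
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    return 1.0 if hue_lo <= h <= hue_hi and s > 0.2 else 0.0

print(blueness(0.1, 0.2, 0.9))  # 1.0, a paradigmatic blue
print(blueness(0.9, 0.1, 0.1))  # 0.0, a red
```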
Replies from: PhilGoetz
comment by Alexxarian · 2010-01-10T03:57:52.063Z · LW(p) · GW(p)
You bunch of left-brainers, you will never get it right unless you start thinking completely differently. You've reached a dead end and you're desperately banging your heads against the wall. Here are links to the stuff you need to read in order to finally figure out how it actually all fits together:
Room for a view on the metaphysical subject of personal identity - Daniel Kolak
I am You - Daniel Kolak
http://rs767.rapidshare.com/files/286063752/Kolak__Daniel_-_I_am_You.zip
CTMU, The Cognitive-Theoretic Model of the Universe: A New Kind of Reality Theory - Christopher Langan
Book on Taboo Against Knowing Who You Are - Alan Watts
http://www.scribd.com/doc/3054442/Alan-Watts-Book-on-Taboo-Against-Knowing-Who-You-Are
From Science to God, The Mystery of Consciousness and the Meaning of Light - Peter Russell
http://www.peterrussell.com/SG/contents.php
Ashtavakra Gita
http://www.realization.org/page/doc0/doc0004.htm
Also:
http://en.wikipedia.org/wiki/Monism
http://en.wikipedia.org/wiki/Nondualism
Namaste.
Replies from: RobinZ↑ comment by RobinZ · 2010-01-10T04:03:55.614Z · LW(p) · GW(p)
Who are you responding to? I am inclined to believe it is physicalists such as myself, but in that case your remark about having "reached a dead end and you're desperately banging your heads against the wall" is a non sequitur. I'm not banging my head against the wall due to a dissatisfaction with my worldview, I'm banging my head against the wall due to a failure to find agreement with Mitchell_Porter.
Replies from: Alexxarian↑ comment by Alexxarian · 2010-01-10T13:28:24.572Z · LW(p) · GW(p)
I was not referring to your satisfaction with your worldview but its actual correctness. Read at least Kolak's 'Room for a view on the metaphysical subject of personal identity'.
Replies from: RobinZ, Jack↑ comment by RobinZ · 2010-01-10T20:43:08.358Z · LW(p) · GW(p)
What follows is the abstract from "Room for a view: on the metaphysical subject of personal identity", Daniel Kolak.
Sydney Shoemaker leads today's "neo-Lockean" liberation of persons from the conservative animalist charge of "neo-Aristotelians" such as Eric Olson, according to whom persons are biological entities and who challenge all neo-Lockean views on grounds that abstracting from strictly physical, or bodily, criteria plays fast and loose with our identities. There is a fundamental mistake on both sides: a false dichotomy between bodily continuity versus psychological continuity theories of personal identity. Neo-Lockeans, like everyone else today who relies on Locke’s analysis of personal identity, including Derek Parfit, have either completely distorted or not understood Locke's actual view. Shoemaker's defense, which uses a "package deal" definition that relies on internal relations of synchronic and diachronic unity and employs the Ramsey-Lewis account to define personal identity, leaves far less room for psychological continuity views than for my own view, which, independently of its radical implications, is that (a) consciousness makes personal identity, and (b) in consciousness alone personal identity consists--which happens to be also Locke's actual view. Moreover, the ubiquitous Fregean conception of borders and the so-called "ambiguity of is" collapse in the light of what Hintikka has called the "Frege trichotomy." The Ramsey-Lewis account, due to the problematic way Shoemaker tries to bind the variables, makes it impossible for the neo-Lockean ala Shoemaker to fulfill the uniqueness clause required by all such Lewis style definitions; such attempts avoid circularity only at the expense of mistaking isomorphism with identity. Contrary to what virtually all philosophers writing on the topic assume, fission does not destroy personal identity. A proper analysis of public versus perspectival identification, derived using actual case studies from neuropsychiatry, provides the scientific, mathematical and logical frameworks for a new theory of self-reference, wherein "consciousness," "self-consciousness," and the "I," can be precisely defined in terms of the subject and the subject-in-itself.
I think it is safe to say that this was not written for a general audience. Before I spend any more of my time trying to decipher text with no expectation of enjoyment, I would like to know - in lay terms - what bearing it has upon Mitchell_Porter's remarks.
Edit: If, as Jack states, there is no relation, it would behoove you to write a summary in lay terms as a top level post rather than drop it into a merely tangentially related discussion.
Replies from: Richard_Kennaway↑ comment by Richard_Kennaway · 2010-01-10T21:44:57.215Z · LW(p) · GW(p)
"Frameworks for a new theory" are too dear at ten a penny, and the above text seems to me as worthless as anything output by the Postmodernism Generator. The other sources that Alexxarian linked seem to me no more interesting.
A more readable text by Kolak, which I see was linked by Alexxarian in an earlier comment, is "I am You". The pages available on Google may give a measure of whether Kolak is worth reading.
Wikipedia describes him as "one of the most prolific philosophers in the world". I tremble! Actually, some of the things mentioned in the wiki article look interesting, but the article clearly fails NPOV, being copied from one on croatia.org lauding this Famous Croatian. Certainly, his productivity is awesome.
Replies from: Alexxarian↑ comment by Alexxarian · 2010-01-11T02:00:24.215Z · LW(p) · GW(p)
I have read Kolak's 'I am You' and I can honestly say that he logically proves that we are all the same consciousness. Kolak's writing can appear cryptic, but I can guarantee you that it makes perfect sense if you concentrate hard enough on it and read it in its full context. The wikipedia article should be ignored since it is only a copy paste. However, if you look up all the books he's contributed to you will see that he indeed is quite a prolific philosopher. Why is this relevant? It is relevant because it demonstrates a fact about reality that most people currently alive aren't even able to envision. This fact has to do with the actual nature of consciousness. I implore you to trust me on this and to make an honest attempt at reading Kolak's 'I am You', or at least his paper. The majority of you in this forum have a higher IQ than me and a much better ability at thinking logically; this should make understanding this not very hard, as long as you open up your mind a bit. I have provided above a link to 'I am You' in its entirety. Please do remember that all paradigm shifting insights have been met by strong skepticism from the majority of the intellectual and close-minded elite. You have here an opportunity to learn something that will drastically widen your horizon of understanding. That is an understatement, though. This knowledge is so drastic that if it starts spreading it will most likely lead to an eventual merging of identity between all of us. This is something unavoidable, though. The Singularity is indeed near. In the surprising case that this message actually convinces you to try to read 'I am You', please do not give up during the second chapter; it gets much easier to understand later on.
Replies from: RobinZ, pdf23ds, byrnema↑ comment by RobinZ · 2010-01-11T15:58:48.978Z · LW(p) · GW(p)
I implore you to trust me on this and to make an honest attempt at reading Kolak's 'I am you', or at least his paper.
I'm sorry, but we don't know you well enough to just take your word for it. If this material is that interesting and valuable, the appropriate course is (a) to write an essay-level treatment of the material as a top-level post or (b) restrict yourself to mentioning it when someone asks for reading recommendations.
Replies from: Alexxarian↑ comment by Alexxarian · 2010-01-11T16:48:24.870Z · LW(p) · GW(p)
I guess I could try that if my score ever gets good enough to actually allow me to write a top-level post. However, Kolak is in a similar position to where Einstein was when he initially introduced the theory of relativity. This subject is so complex and hard to explain that I would basically have to recreate Kolak's explanation in order to make you understand what this is about. All I'm willing to say is this: Kolak proves that we are all the same immortal consciousness, in the same way that eastern philosophers have realized it thousands of years ago. However, he uses western philosophical reasoning to come to the same conclusions.
pdf23ds writes that the phrases I use are '(cheap) signals that lead to negative impressions around these parts.' I understand what you mean and unfortunately this is true since phrases of this kind of enthusiasm have been used in vain and corrupted by most of those who choose to express them. I can however promise you that the subject of knowledge I'm here talking about deserves phrases like these. How could it not? The ideas Kolak presents are larger than life, they are larger than what most of you have been capable of contemplating so far. But again, all I'm trying to do here is to convince you to read the source. Some would say that Kolak deserves a Nobel Prize for his work. This would be true if the true understanding of what he explains didn't make things like Nobel Prizes completely obsolete. Again, please stop being so cynical for once, suspend your disbelief and make an honest attempt at looking at the explanation. I'm not saying you should suspend your judgement once reading it. By all means, be as critical and logical and skeptical as you possibly can while reading it. Please remember that from a certain perspective we're nothing but protoplasm in a relatively complex configuration. It wasn't long ago that you didn't even exist and it won't be long until you stop existing again, according to your own understanding. How can being aware of this not make you a bit more humble about your present knowledge and a bit more open to realize something more? I used to be a materialist, physicalist, positivist, 'life is ultimately just sad because it all ends' itivist. Maybe you think life is great despite of this. But the point is that you are IMMORTAL and all you need to do is realize this. Once you do, you will see that all the rationalizations you have made about life being great even if one is mortal are just that, desperate rationalizations. It's not your fault though, immortality is quite hidden from the standard human perspective but it can be found. So don't let this chance slip away. And no, it isn't based on faith any more than the understanding that 2+2=4 is based on it, I can totally promise you that. Again, if I sound like some religious fanatic, I'm sorry, it isn't my intention. It's just that I don't know any better. English isn't my first language, I have an INFP personality, if that means anything. I'm highly neurotic. Hmm, maybe I could try:
I have spent a very big portion of my awake state pondering consciousness. I came to discover the essence of Kolak's theory several years before I knew anything about him. The theory cannot be understood without a 'Eureka' type realization. Neither can most complex theorems in mathematics or logic so don't let that discourage you. I'm barely intelligent enough to understand this, it would be very hard for me to explain it all over again and that is utterly pointless since it has already been explained. It would be like if someone wanted you to read Einstein's paper on relativity and you asked that person to write his own version. It is THERE, just a few clicks away. The text would appear on the same screen that you are reading this on. Wake up.
Didn't help, did it? I guess I can't transcend my neurology. Anyway, you do not HAVE to read it. It's not imperative to know the truth about things. The survival of our species and of this whole universe isn't imperative either. It all comes down to desires and the choices that arise from them. So enjoy existence no matter what you choose to do from here :)
Replies from: ciphergoth, Tyrrell_McAllister, RobinZ↑ comment by Paul Crowley (ciphergoth) · 2010-01-11T16:52:47.181Z · LW(p) · GW(p)
Either we're all too dumb to see the enlightenment you're bringing, or we're all too smart to fall for it. Either way, it should be clear by now that your quest is hopeless. Please give up.
↑ comment by Tyrrell_McAllister · 2010-01-11T17:23:47.842Z · LW(p) · GW(p)
Suppose we were to grant that it is useful to think of all people as together constituting one person. Let's just grant that, just for the sake of argument.
It doesn't follow that that person's immortality is what we care about. It's still useful to distinguish individual biological organisms, just as it's useful to distinguish my phone from my water bottle. These distinctions might be vague at their boundaries, as all distinctions are. But they are still useful at their core. And I can still be interested in the immortality of the individual organism.
Replies from: Alexxarian↑ comment by Alexxarian · 2010-01-11T18:03:26.751Z · LW(p) · GW(p)
True, but the issue at stake is whether the real and actual you, the actual foundation for your personal identity, is immortal or not. Kolak proves that it is. He also proves that we are all as connected to each other as your future and past selves are connected to your current self. Simply because there is only one identical self in the universe. If one realizes this then focusing on whether it is 'useful to distinguish between individual biological organisms' becomes of secondary importance. Furthermore, such a view has a drastic effect on ethics.
ciphergoth writes 'Either we're all too dumb to see the enlightenment you're bringing, or we're all too smart to fall for it. Either way, it should be clear by now that your quest is hopeless. Please give up.'
What is this? Am I supposed to feel bad or laugh or what? I know I appear arrogant to you, but the fact is that I know something you don't, and all I'm asking for is that you check out this information on your own. There is no hidden agenda. I'm not asking you to sell your soul to the devil or to give up logic or become a member of Scientology. I'm not asking you to believe anything. The only act of faith you need to perform is to temporarily believe that I might be on to something, give in, and take a proper look at the source material. As I said before, Kolak has done a marvelous job at explaining this and there is no point in me rewriting what he's already written. Furthermore, the knowledge he presents is so controversial for most individualistic humans that it becomes nearly impossible to pass it on in the form of a dialogue. I was drunk when I wrote my first post, so I apologize for calling you (whoever felt included) 'a bunch of left-brainers'. I hoped you wouldn't take it so seriously.
If, after reading Kolak's 'I am You', you still feel this was a waste of time and that I, and especially Kolak, are blabbering idiots, then so be it. I would be very surprised if that were the case, though. But seriously, guys, how can you be so adamant against even trying to read a book?! Don't you trust your own ability to reason? If the book is bullshit, then that will become apparent to you. Understanding this builds upon the logical tools you already use; it doesn't require you to abandon them. Do you actually expect me to explain the theory to you here, in a short post, through my brain? Haven't I proven by now that I'm not qualified for something like that? The only thing I can hope for is to awaken your curiosity about Kolak's work, and even at that I'm failing. Here's my last attempt:
'Guys, something totally amazing has happened. You won't believe me because it sounds too good to be true but this time it actually is so please please do believe me. There is this guy who has written a book that explains that we are all immortal consciousness, aka God or whatever you wanna call it. It proves that we are all each other, just like our billions of cells are all each of us, and it's totally mindblowingly amazing. But I can't explain it because it's freaking hard, it's quite complex you see. But Kolak does explain it in his 600+ pages long book. so please please read it. I borrowed the book from a Swedish university, scanned it, uploaded it and totally breached the copyright law just so that anyone interested could read it. Please, please believe me. If just 10% of us realized this the world would become like freaking heaven. Kurzweil's Singularity is the technical part, this is the spiritual part of it. Do you remember when you were a kid and dreamed of the endless possibilities of reality? Before you became cynical and disillusioned by people's stupidity and cruelty and by your body's and the universe's apparent entropy? Please try to tap back into that way of seeing things and READ THE BOOK.'
Lol xD
Replies from: Vladimir_Nesov, Eliezer_Yudkowsky↑ comment by Vladimir_Nesov · 2010-01-11T18:06:30.622Z · LW(p) · GW(p)
Please stop posting.
↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-11T18:12:18.969Z · LW(p) · GW(p)
I observe a systematic pattern of downvoted comments. If you can't take this hint, further comments from you will be removed. Goodbye.
Replies from: Vladimir_Nesov, Alexxarian↑ comment by Vladimir_Nesov · 2010-01-11T18:54:11.793Z · LW(p) · GW(p)
Perhaps the rule "consistent downvotes of most comments -> stop posting (for a long while)" should be given authority by being added to the "About" page, which may be enough in some cases, removing the need for actual comment-deleting.
Replies from: byrnema, RobinZ↑ comment by byrnema · 2010-01-11T19:51:01.724Z · LW(p) · GW(p)
I find the process of (1) identifying trolls, (2) trying to convince them to stop posting and (3) kicking them off if they don't listen to be a very unpleasant experience. It introduces all sorts of negativity and it makes me uncomfortable, mainly because the norms and boundaries are subjective and I empathize (but don't sympathize) with the trolls.
I think it would be best if the process was entirely objective and automated -- people could know the rules from the About page and altercations with trolls wouldn't be required.
The main rule would be that people cannot comment if their karma is below some negative value K.
If we wanted to give a "time-out" (-->stop posting (for a long while)) instead of a ban, then this could be automated by an algorithm whereby anyone with negative karma gets one karma point per day until they're back to 0 (see the sketch below).
Given this second scheme, I think that even a K as high as K=-1 would be fine, because then a person with K=-1 only needs to wait one day until they can comment again.
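A minimal sketch of how byrnema's scheme might be automated. The class and method names are hypothetical, and so is the reading of "below K" as "at or below K" (which her K=-1 example seems to require):

```python
import datetime

K = -1  # threshold: commenting is blocked while karma is at or below K

class User:
    def __init__(self, karma=0, today=None):
        self.karma = karma
        self.last_regen = today or datetime.date.today()

    def regenerate(self, today):
        """Negative karma drifts back toward 0 at one point per day."""
        days = (today - self.last_regen).days
        if self.karma < 0 and days > 0:
            self.karma = min(0, self.karma + days)
        self.last_regen = today

    def can_comment(self, today):
        self.regenerate(today)
        return self.karma > K

# A user downvoted to -1 is silenced, but only for a day: the daily
# regeneration brings them back to 0, at which point they may post.
start = datetime.date.today()
user = User(karma=-1, today=start)
assert not user.can_comment(start)
assert user.can_comment(start + datetime.timedelta(days=1))
```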
Replies from: anonym, ciphergoth, pdf23ds↑ comment by anonym · 2010-01-12T06:38:05.213Z · LW(p) · GW(p)
I suggested something similar recently.
I agree that an automated system would be preferable. By getting (temporarily) deactivated without fanfare, the actual trolls would be robbed of the attention and responses they crave, and the naive innocents will be more likely to believe they've actually done something wrong and should reconsider their behavior, instead of getting self-righteous and feeling maligned and aggrieved.
↑ comment by Paul Crowley (ciphergoth) · 2010-01-11T23:34:30.991Z · LW(p) · GW(p)
If you automate it people will just sockpuppet their way around it, and other such attacks.
What we have is working well. Let's not create a system that people will be encouraged to game.
Replies from: byrnema↑ comment by byrnema · 2010-01-12T00:38:04.769Z · LW(p) · GW(p)
I definitely see your point: if the system is automated and objective (that is, without any component of public shaming), trolls won't feel as much as though a social norm is preventing them from posting, so they will feel it's OK to work around the system. However, it's the atmosphere of public shaming that makes me uncomfortable.
We could try to emphasize the social norm as much as possible in the About description, something along the lines of, "If you have negative karma, it is because the net opinion of readers on Less Wrong of your comments is negative. Thus, users with negative karma are discouraged from commenting by an interruption in their ability to post comments."
Also, I submit that it wouldn't be that rewarding to sock-puppet yourself to positive karma, just to post a comment and get negative karma again.
↑ comment by pdf23ds · 2010-01-12T06:02:31.700Z · LW(p) · GW(p)
I find the process [...] to be a very unpleasant experience. It introduces all sorts of negativity and it makes me uncomfortable
Not me. And anyway, it's not you in charge of this process, so you could try to ignore it to avoid the stress, if at all possible. (It may not be.)
A lot of negativity seems unavoidable whenever anyone the community disagrees with comes around, either as a troll (like the "immortal" guy) or just as someone pushing weird and likely wrong opinions. I'm not sure the "kicking" part itself is a big part of that, so I'm not sure your solution would help.
↑ comment by RobinZ · 2010-01-11T19:00:25.723Z · LW(p) · GW(p)
Do you believe such a rule would be effective?
Edit: I don't mean to be snide or anything - I don't know if the sort of person to ignore persistent downvoting would also ignore a rule posted in the "About" page about persistent downvoting. I believe they would not, but I can't back that up with evidence.
Replies from: Cyan, Vladimir_Nesov↑ comment by Cyan · 2010-01-11T19:06:13.643Z · LW(p) · GW(p)
I'd support the addition of the rule to the About page not because I'd have expectations about anyone's behavior but because it is apparently Eliezer's policy to enforce such a rule, so we should probably state the rule explicitly somewhere that newcomers are likely to see it.
Replies from: RobinZ↑ comment by Vladimir_Nesov · 2010-01-11T19:04:09.944Z · LW(p) · GW(p)
In a sizable portion of cases, yes. But then again, how many people have we asked to leave? Maybe 5 to 8, no more. On the other hand, presence of this rule may make the endgame shorter, as non-admin users would be able to appeal to it.
Replies from: RobinZ↑ comment by Alexxarian · 2010-01-25T04:40:43.058Z · LW(p) · GW(p)
OK, I take the hint. Sorry about my trollish style. It was never my intention to annoy any of you; quite the contrary. I would, however, be interested in receiving feedback on http://lesswrong.com/lw/1nz/on_morphological_freedom_and_personal_identity/
Since my Karma is so low, I can only post drafts. At the moment I don't have very high expectations of getting it up any time soon, seeing how little you appreciate my ideas and style of expression. As said, any feedback would be appreciated, including a permanent ban :)
Replies from: Eliezer_Yudkowsky, Alicorn↑ comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2010-01-25T05:24:30.871Z · LW(p) · GW(p)
Draft is not appropriate for Less Wrong in any case. I suggest that you might find your discourse better appreciated elsewhere.
↑ comment by pdf23ds · 2010-01-11T04:43:25.067Z · LW(p) · GW(p)
as long as you open up your mind a bit. ... have been met by strong skepticism from the majority of the intellectual and close-minded elite. You have here an opportunity to learn something that will drastically widen your horizon of understanding.
Danger! Danger!
Those phrases are (cheap) signals that lead to negative impressions around these parts.
Replies from: Bo102010↑ comment by byrnema · 2010-01-11T02:58:56.919Z · LW(p) · GW(p)
This is interesting.
But you should be warned: as generally true as it is that "all paradigm shifting insights have been met by strong skepticism from the majority of the intellectual and close-minded elite", it is also true that people are conditioned to know that when someone says this, this someone is about to attest to something crazy.
Perhaps because it's a defensive posture, so we know they've met general resistance already.
I would hazard that what you describe Kolak as presenting is a compelling philosophy, the kind of thing someone could believe or not believe without any actual material consequences.
↑ comment by Jack · 2010-01-10T21:19:29.628Z · LW(p) · GW(p)
Googling Kolak, I can see he indeed holds the very strange view that I am identical to you. But this particular paper basically consists of a decent outline of the present personal identity debate and a brief statement of Kolak's own view, which, for me, is a vague, mysterious description of a view that basically says "I am the thing that the indexical 'I' points to in this sentence." Which is a perfectly fine view, except that it answers hardly any questions. (My reply is "Duh, now is that thing a body or a psychological state?") Then somehow in the remaining paragraphs this becomes "the brain is not sufficient for personal identity" and strange, out-of-context Wittgenstein quotes. Anyway, it didn't really have anything to do with this discussion.
Replies from: thomblake