Book Review: Consciousness Explained

post by Charlie Steiner · 2018-03-06T03:32:58.835Z · LW · GW · 20 comments

Contents

  I.
  II.
  III.
  IV.
  V.
  VI.
  VII.
  VIII.
  IX.
20 comments
The trouble with brains, it seems, is that when you look in them, you discover that there’s nobody home.

I.

This is a book I've long been aware of, but never got that itch to read. Maybe I trusted the field of philosophy too little, assuming that a book called "Consciousness Explained" was probably not very good. Maybe I trusted the field of philosophy too much, assuming that if someone had actually explained consciousness while I was a toddler, I would have been informed somehow before now. Either way, I was wrong, and the book is great.

I'm going to try to give what is either a short tour or a long compilation of quotes. I'm leaving out several whole chapters, nearly every thought experiment, most of the examples and science, and some very nice language. And yet, this is still long enough that even if you do like reading Dan Dennett on consciousness, you may not want to read it all in one sitting - stop at V or VI and pretend that's the end of part one.

II.

Dennett quickly warns the reader that he knows the contents may sound counterintuitive.

We shouldn’t expect a good theory of consciousness to make for comfortable reading — the sort that immediately “rings bells,” that makes us exclaim to ourselves, with something like secret pride: “Of course! I knew that all along! It’s obvious, once it’s been pointed out!” The mysteries of the mind have been around for so long, and we have made so little progress on them, that the likelihood is high that some things we all tend to agree to be obvious are just not so.

This is not the mysterian claim that his ideas about consciousness are likely to be right because they are counterintuitive, but it does signal a core claim of the book: the intuitive view of the problem of consciousness is broken from the foundation up. Naturally, if the intuitive theory is wrong, the right theory is counterintuitive.

Where, exactly, is our intuition going wrong? The most important example is introduced by considering how, on macroscopic scales, it is convenient to treat observers as point-like entities:

We explain the startling time gap between the sound and sight of distant fireworks by noting the different transmission speeds of sound and light. They arrive at the observer (at that point) at different times, even though they left the source at the same time.
What happens, though, when we close in on the observer, and try to locate the observer’s point of view more precisely, as a point within the individual? The simple assumptions that work so well on larger scales begin to break down. There is no single point in the brain where all information funnels in, and this fact has some far from obvious — indeed, quite counterintuitive — consequences.

Cartesian dualism is hopelessly wrong. But while materialism of one sort or another is now a received opinion approaching unanimity, even the most sophisticated materialists today often forget that once Descartes’s ghostly res cogitans is discarded, there is no longer a role for a centralized gateway, or indeed for any functional center to the brain. The pineal gland is not only not the fax machine to the Soul, it is also not the Oval Office of the brain, and neither are any of the other portions of the brain. The brain is Headquarters, the place where the ultimate observer is, but there is no reason to believe that the brain itself has any deeper headquarters, any inner sanctum, arrival at which is the necessary or sufficient condition for conscious experience. In short, there is no observer inside the brain.

This book might also have been called "455 Pages Of Implications Of There Being No Homuncular Observer Inside The Brain." But that probably wouldn't have sold as well. This is absolutely the most important point of the book, and it shows up again and again in different variations. There is no Cartesian Theater where "you" watch what's happening in "your brain." There is no Central Meaner who has top-down control over the meaning of what you're going to say, before it gets to your speech center. There is no Inner Senser to whom the brain feeds sense data to consecrate it with consciousness. And so on and so forth.

You'd think this would get old, but the examples I gathered in the last paragraph are spread out over many chapters of neuroscience, thought experiments, and discussions of philosophical practice. Dennett spends a lot of time defending the distributed nature of the brain, and uses it to do a lot of heavy philosophical lifting, but it's always in a slightly new context, and it feels worthwhile each time.

III.

A key methodological question of the book is how to walk the line between the two extreme stances on people's reports of their conscious phenomena. At one extreme, one takes everything people say about their conscious experience as gospel truth. Dennett spends plenty of time impugning people's reliability as witnesses of their consciousness, with data and experiments such as trying to read a playing card with your peripheral vision.

Just about every author who has written about consciousness has made what we might call the first-person-plural presumption: Whatever mysteries consciousness may hold, we (you, gentle reader, and I) may speak comfortably together about our mutual acquaintances, the things we both find in our streams of consciousness. And with a few obstreperous exceptions, readers have always gone along with the conspiracy.
This would be fine if it weren’t for the embarrassing fact that controversy and contradiction bedevil the claims made under these conditions of polite mutual agreement. We are fooling ourselves about something. Perhaps we are fooling ourselves about the extent to which we are all basically alike. Perhaps when people first encounter the different schools of thought on phenomenology, they join the school that sounds right to them, and each school of phenomenological description is basically right about its own members’ sorts of inner life, and then just innocently overgeneralizes, making unsupported claims about how it is with everyone.

At the opposite extreme, one spends all one's time explaining the reports themselves, and never seems to explain any conscious phenomena at all - the dreaded behaviorism.

Dennett's approach is given the mouthful of a name, "heterophenomenology" (the study of other people's phenomena), and really means something along the lines of using reports of conscious experience as data to fill in a descriptive model of a reporter, which has room for both accurate and mistaken reports.

Suppose you are confronted by a “speaking” computer, and suppose you succeed in interpreting its output as speech acts expressing its beliefs and opinions, presumably “about” its conscious states. The fact that there is a single, coherent interpretation of a sequence of behavior doesn’t establish that the interpretation is true; it might be only as if the “subject” were conscious; we risk being taken in by a zombie with no inner life at all. You could not confirm that the computer was conscious of anything by this method of interpretation. Fair enough. We can’t be sure that the speech acts we observe express real beliefs about actual experiences; perhaps they express only apparent beliefs about nonexistent experiences. Still, the fact that we had found even one stable interpretation of some entity’s behavior as speech acts would always be a fact worthy of attention. Anyone who found an intersubjectively uniform way of interpreting the waving of a tree’s branches in the breeze as “commentaries” by “the weather” on current political events would have found something wonderful demanding an explanation, even if it turned out to be effects of an ingenious device created by some prankish engineer.
Happily, there is an analogy at hand to help us describe such facts without at the same time presumptively explaining them: We can compare the heterophenomenologist’s task of interpreting subjects’ behavior to the reader’s task of interpreting a work of fiction. Some texts, such as novels and short stories, are known — or assumed — to be fictions, but this does not stand in the way of their interpretation. In fact, in some regards it makes the task of interpretation easier, by canceling or postponing difficult questions about sincerity, truth, and reference.
Consider some uncontroversial facts about the semantics of fiction. A novel tells a story, but not a true story, except by accident. In spite of our knowledge or assumption that the story told is not true, we can, and do, speak of what is true in the story. “We can truly say that Sherlock Holmes lived in Baker Street and that he liked to show off his mental powers. We cannot truly say that he was a devoted family man, or that he worked in close cooperation with the police”. What is true in the story is much, much more than what is explicitly asserted in the text. It is true that there are no jet planes in Holmes’s London (though this is not asserted explicitly or even logically implied in the text), but also true that there are piano tuners (though — as best I recall — none is mentioned, or, again, logically implied). In addition to what is true and false in the story, there is a large indeterminate area: while it is true that Holmes and Watson took the 11:10 from Waterloo Station to Aldershot one summer’s day, it is neither true nor false that that day was a Wednesday.
There are delicious philosophical problems about how to say (strictly) all the things we unperplexedly want to say when we talk about fiction, but these will not concern us. Perhaps some people are deeply perplexed about the metaphysical status of fictional people and objects, but not I. In my cheerful optimism I don’t suppose there is any deep philosophical problem about the way we should respond, ontologically, to the results of fiction; fiction is fiction; there is no Sherlock Holmes. Setting aside the intricacies, then, and the ingenious technical proposals for dealing with them, I want to draw attention to a simple fact: the interpretation of fiction is undeniably do-able, with certain uncontroversial results. First, the fleshing out of the story, the exploration of “the world of Sherlock Holmes,” for instance, is not pointless or idle; one can learn a great deal about a novel, about its text, about the point, about the author, even about the real world, by learning about the world portrayed by the novel. Second, if we are cautious about identifying and excluding judgments of taste or preference, we can amass a volume of unchallengeably objective fact about the world portrayed. All interpreters agree that Holmes was smarter than Watson; in crashing obviousness lies objectivity.

So, in short, when interpreting text about consciousness, one tries to fit this text into an internally consistent model of the thing the text is describing.

The heterophenomenological method neither challenges nor accepts as entirely true the assertions of subjects, but rather maintains a constructive and sympathetic neutrality, in the hopes of compiling a definitive description of the world according to the subjects. Any subject made uneasy by being granted this constitutive authority might protest: “No, really! These things I am describing to you are perfectly real, and have exactly the properties I am asserting them to have!” The heterophenomenologist’s honest response might be to nod and assure the subject that of course his sincerity was not being doubted. But since believers in general want more — they want their assertions to be believed and, failing that, they want to know whenever their audience disbelieves them — it is in general more politic for heterophenomenologists, whether anthropologists or experimenters studying consciousness in the laboratory, to avoid drawing attention to their official neutrality.

My suggestion, then, is that if we were to find real goings-on in people’s brains that had enough of the “defining” properties of the items that populate their heterophenomenological worlds, we could reasonably propose that we had discovered what they were really talking about — even if they initially resisted the identifications. And if we discovered that the real goings-on bore only a minor resemblance to the heterophenomenological items, we could reasonably declare that people were just mistaken in the beliefs they expressed, in spite of their sincerity.

Obviously Dennett (1991) is cribbing from Yudkowsky (2008) here, as he does in various places throughout the book. Overall, I think making the reader learn the word "heterophenomenology" was worth it - the idea shows up in the book not only as a method of semi-detached interpretation, but also as a model of the sort of thing one can know about consciousness from a third-person perspective, which proves useful in taking on various philosophical puzzlers.

On the other hand, the whole thing could have been done more precisely - all these mentions of heterophenomenology rely heavily on intuition to fill in the blanks. Part of the reason more precision wasn't possible is that heterophenomenology is used as part of Dennett's campaign of intuition pumps to move people from the intuitive view to a non-Cartesian view. A precise model would inspire people to ask "where's the consciousness?" as soon as it was introduced - instead, the book tries to change people's minds slowly.

IV.

Another point of the book, if very minor in comparison to "the model of a person as a pointlike object breaks down when you get close to them," is the difference between representeds and representings. There's a long section on the experience of events in time that trades on this distinction. In the Cartesian Theater model, the order of conscious events is uniquely determined by the order in which the events are shown onstage in the Theater. But if there's no Theater, how do we judge what order events occurred in?

The answer is that, just as we don't represent orange light on the retinas with orange-colored neurons, we don't have to represent events that are ordered in time with neurons whose activity is ordered in time. We can use the neural equivalent of timestamps to represent time in a non-temporal way, and compare those timestamps in a distributed way. People found this idea hard to imagine, because they imagined consciousness as if there just had to be a Cartesian Theater somewhere.
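
To make the representeds-versus-representings point concrete, here is a toy sketch of my own (nothing like it appears in the book): the order in which "events" get handled below has nothing to do with the order they represent, because the represented order lives in timestamp labels rather than in the timing of the processing.

    # Toy sketch (mine, not the book's): the represented order of events lives
    # in timestamp labels, not in the order in which they happen to be processed.
    import random

    events = [
        {"content": "flash", "represented_time_ms": 120},
        {"content": "beep",  "represented_time_ms": 100},
        {"content": "touch", "represented_time_ms": 140},
    ]

    # Suppose the reports reach a "judging" process in scrambled order -
    # the timing of the representings carries no information...
    random.shuffle(events)

    # ...but a judgment of sequence only needs to compare the labels.
    judged = sorted(events, key=lambda e: e["represented_time_ms"])
    print([e["content"] for e in judged])  # always: ['beep', 'flash', 'touch']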

Every event in your brain has a definite spatio-temporal location, but asking “Exactly when do you become conscious of the stimulus?” assumes that some one of these events is, or amounts to, your becoming conscious of the stimulus. This is like asking “Exactly when did the British Empire become informed of the truce in the War of 1812?” Sometime between December 24, 1814, and mid-January, 1815 — that much is definite, but there simply is no fact of the matter if we try to pin it down to a day and hour. Even if we can give precise times for the various moments at which various officials of the Empire became informed, no one of these moments can be singled out as the time the Empire itself was informed. The signing of the truce was one official, intentional act of the Empire, but the participation by the British forces in the Battle of New Orleans was another, and it was an act performed under the assumption that no truce had yet been signed. A case might be made for the principle that the arrival of the news at Whitehall or Buckingham Palace in London should be considered the official time at which the Empire was informed, since this was the “nerve center” of the Empire. Descartes thought the pineal gland was just such a nerve center in the brain, but he was wrong. Since cognition and control — and hence consciousness — is distributed around in the brain, no moment can count as the precise moment at which each conscious event happens.

We human beings do make judgments of simultaneity and sequence of elements of our own experience, some of which we express, so at some point or points in our brains the corner must be turned from the actual timing of representations to the representation of timing, and wherever and whenever these discriminations are made, thereafter the temporal properties of the representations embodying those judgments are not constitutive of their content. The objective simultaneities and sequences of events spread across the broad field of the cortex are of no functional relevance unless they can also be accurately detected by mechanisms in the brain. We can put the crucial point as a question: What would make this sequence the stream of consciousness? There is no one inside, looking at the wide-screen show displayed all over the cortex, even if such a show is discernible by outside observers. What matters is the way those contents get utilized by or incorporated into the processes of ongoing control of behavior, and this must be only indirectly constrained by cortical timing. What matters, once again, is not the temporal properties of the representings, but the temporal properties represented, something determined by how they are “taken” by subsequent processes in the brain.

V.

What, then, is Dennett's alternative picture of consciousness? He calls it the multiple drafts model, and it is a wholehearted embrace of the distributed nature of human minds. If you probe someone's consciousness in different ways, like asking them to press a button now versus report what they remember later, you can sometimes get different answers, because the state of someone's mind is distributed throughout their brain, and different probes can access different facts about that state. It's as if the thing that gets probed, which gets fixated into consciousness when we direct attention to it, has multiple drafts of itself available to different systems of the brain, and these drafts get passed around and edited as time passes.
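
Here is a crude caricature of that picture in code - mine, not Dennett's, and not meant as a model of anything except the bookkeeping: the "state" is a bundle of drafts kept by different subsystems, edited on different schedules, and each probe simply reads whichever draft it has access to at the moment it fires.

    # Crude caricature (mine, not Dennett's): distributed drafts of one event,
    # edited on different schedules; different probes read different drafts.
    drafts = {
        "motor":  {"saw_flash": True},                    # fixed early; can drive a button press
        "verbal": {"saw_flash": True, "color": "white"},  # still being edited before any report
    }

    def probe_button(state):
        # A fast, early probe: did anything happen at all?
        return state["motor"]["saw_flash"]

    def editorial_revision(state):
        # Later processes revise the verbal draft (memory/interpretation drift).
        state["verbal"]["color"] = "greenish"

    def probe_verbal_report(state):
        # A slower probe reads the (by now edited) verbal draft.
        return "I saw a {} flash".format(state["verbal"]["color"])

    print(probe_button(drafts))          # True
    editorial_revision(drafts)
    print(probe_verbal_report(drafts))   # I saw a greenish flash
    # Neither answer is "the" canonical fact about what was in consciousness when;
    # they are just different probes of a distributed, still-being-edited state.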

You have probably experienced the phenomenon of driving for miles while engrossed in conversation (or in silent soliloquy) and then discovering that you have utterly no memory of the road, the traffic, your car-driving activities. It is as if someone else had been driving. Many theorists (myself included, I admit) have cherished this as a favorite case of “unconscious perception and intelligent action.” But were you really unconscious of all those passing cars, stop lights, bends in the road at the time? You were paying attention to other things, but surely if you had been probed about what you had just seen at various moments on the drive, you would have had at least some sketchy details to report.

The other key feature (perhaps you can guess) is that there really is no Central Place in the brain where the important stuff happens. Important stuff happens all over!

We don’t directly experience what happens on our retinas, in our ears, on the surface of our skin. What we actually experience is a product of many processes of interpretation — editorial processes, in effect. They take in relatively raw and one-sided representations, and yield collated, revised, enhanced representations, and they take place in the streams of activity occurring in various parts of the brain. This much is recognized by virtually all theories of perception, but now we are poised for the novel feature of the Multiple Drafts model: Feature detections or discriminations only have to be made once. That is, once a particular “observation” of some feature has been made, by a specialized, localized portion of the brain, the information content thus fixed does not have to be sent somewhere else to be rediscriminated by some “master” discriminator. In other words, discrimination does not lead to a representation of the already discriminated feature for the benefit of the audience in the Cartesian Theater — for there is no Cartesian Theater.
These spatially and temporally distributed content-fixations in the brain are precisely locatable in both space and time, but their onsets do not mark the onset of consciousness of their content. It is always an open question whether any particular content thus discriminated will eventually appear as an element in conscious experience, and it is a confusion, as we shall see, to ask when it becomes conscious. These distributed content-discriminations yield, over the course of time, something rather like a narrative stream or sequence, which can be thought of as subject to continual editing by many processes distributed around in the brain, and continuing indefinitely into the future. This stream of contents is only rather like a narrative because of its multiplicity; at any point in time there are multiple “drafts” of narrative fragments at various stages of editing in various places in the brain.

I think Dennett would agree that the multiple drafts model is absolutely a step in the right direction, but also a convenient fiction - a crutch for our imagination while we're still in the process of uncovering better ways of imagining and understanding the complexity of the human brain.

VI.

All of this has only brought us to the middle of the book. Here the text becomes a little more hit and miss, and a lot less summarizable in small bites. I will resort to a list:

VII.

A story about visual perception that's too good not to reproduce:

Almost twenty years ago, Paul Bach-y-Rita developed several devices that involved small, ultralow-resolution video cameras that could be mounted on eyeglass frames. The low-resolution signal from these cameras, a 16-by-16 or 20-by-20 array of “black and white” pixels, was spread over the back or belly of the subject in a grid of either electrical or mechanically vibrating tinglers called tactors.
After only a few hours of training, blind subjects wearing this device could learn to interpret the patterns of tingles on their skin, much as you can interpret letters traced on your skin by someone’s finger. The resolution is low, but even so, subjects could learn to read signs, and identify objects and even people’s faces, as we can gather from looking at this photograph taken of the signal as it appears on an oscilloscope monitor.
[Fig. 11.4]
The result was certainly prosthetically produced conscious perceptual experience, but since the input was spread over the subjects’ backs or bellies instead of their retinas, was it vision? Did it have the “phenomenal qualities” of vision, or just of tactile sensation?
Recall one of our experiments in chapter 3. It is quite easy for your tactile point of view to extend out to the tip of a pencil, permitting you to feel textures with the tip, while quite oblivious to the vibrations of the pencil against your fingers. So it should not surprise us to learn that a similar, if more extreme, effect was enjoyed by Bach-y-Rita’s subjects. After a brief training period, their awareness of the tingles on their skin dropped out; the pad of pixels became transparent, one might say, and the subjects’ point of view shifted to the point of view of the camera, mounted to the side of their heads. A striking demonstration of the robustness of the shift in point of view was the behavior of an experienced subject whose camera had a zoom-lens with a control button. The array of tinglers was on his back, and the camera was mounted on the side of his head. When the experimenter without warning touched the zoom button, causing the image on the subject’s back to expand or “loom” suddenly, the subject instinctively lurched backward, raising his arms to protect his head. Another striking demonstration of the transparency of the tingles is the fact that subjects who had been trained with the tingler-patch on their backs could adapt almost immediately when the tingler-patch was shifted to their bellies. And yet, as Bach-y-Rita notes, they still responded to an itch on the back as something to scratch — they didn’t complain of “seeing” it — and were perfectly able to attend to the tingles, as tingles, on demand.
These observations are tantalizing but inconclusive. One might argue that once the use of the device’s inputs became second nature the subjects were really seeing, or, contrarily, that only some of the most central “functional” features of seeing had been reproduced prosthetically. What of the other “phenomenal qualities” of vision? Bach-y-Rita reports the result of showing two trained subjects, blind male college students, for the first time in their lives, photographs of nude women from Playboy magazine. They were disappointed — “although they both could describe much of the content of the photographs, the experience had no affectual component; no pleasant feelings were aroused. This greatly disturbed the two young men, who were aware that similar photographs contained an affectual component for their normally sighted friends”.
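
To make the apparatus easier to picture, here is a toy sketch of the sort of mapping such a device performs - my own illustration with made-up numbers, not Bach-y-Rita's actual hardware: average a grayscale camera frame down to a 20-by-20 grid and treat each cell's brightness as the drive level for one tactor on the skin.

    # Toy tactile-vision mapping (my own sketch, not the real device): downsample
    # a grayscale frame to a 20x20 grid; each cell's mean brightness drives one tactor.
    def frame_to_tactors(frame, grid=20):
        """frame: 2D list of grayscale pixel values in [0, 255]."""
        rows, cols = len(frame), len(frame[0])
        block_r, block_c = rows // grid, cols // grid
        tactors = []
        for i in range(grid):
            row = []
            for j in range(grid):
                block = [frame[r][c]
                         for r in range(i * block_r, (i + 1) * block_r)
                         for c in range(j * block_c, (j + 1) * block_c)]
                row.append(sum(block) / len(block) / 255.0)  # tactor drive in [0, 1]
            tactors.append(row)
        return tactors

    # A uniform 200x200 gray frame maps to 400 tactors, each driven at about 0.5.
    print(frame_to_tactors([[128] * 200 for _ in range(200)])[0][:3])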

VIII.

Dennett is somewhat infamous for denying the existence of qualia (singular: quale) - the private, ineffable stuff that makes the redness of red so red. But it's not that he denies the existence of redness - it's the private and ineffable part that's the problem.

Consider, for instance, the curious fact that monkeys don’t like red light. Given a choice, rhesus monkeys show a strong preference for the blue-green end of the spectrum, and get agitated when they have to endure periods in red environments. Why should this be? Humphrey points out that red is always used to alert, the ultimate color-coding color, but for that very reason ambiguous: the red fruit may be good to eat, but the red snake or insect is probably advertising that it is poisonous. So “red” sends mixed messages. But why does it send an “alert” message in the first place? Perhaps because it is the strongest available contrast with the ambient background of vegetative green or sea blue, or — in the case of monkeys — because red light (red to reddish-orange to orange light) is the light of dusk and dawn, the times of day when virtually all the predators of monkeys do their hunting.
The affective or emotional properties of red are not restricted to rhesus monkeys. All primates share these reactions, including human beings. If your factory workers are lounging too long in the rest rooms, painting the walls of the rest rooms red will solve that problem — but create others (see Humphrey, forthcoming). Such “visceral” responses are not restricted to colors, of course. Most primates raised in captivity who have never seen a snake will make it unmistakably clear that they loathe snakes the moment they see one, and it is probable that the traditional human dislike of snakes has a biological source that explains the biblical source, rather than the other way around. That is, our genetic heritage queers the pitch in favor of memes for snake-hating.
Now here are two different explanations for the uneasiness most of us feel (even if we “conquer” it) when we see a snake:
(1) Snakes evoke in us a particular intrinsic snake-yuckiness quale when we look at them, and our uneasiness is a reaction to that quale.
(2) We find ourselves less than eager to see snakes because of innate biases built into our nervous systems. These favor the release of adrenaline, bring fight-or-flight routines on line, and, by activating various associative links, call a host of scenarios into play involving danger, violence, damage. The original primate aversion is, in us, transformed, revised, deflected in a hundred ways by the memes that have exploited it, coopted it, shaped it. (There are many different levels at which we could couch an explanation of this “functionalist” type. For instance, we could permit ourselves to speak more casually about the power of snake-perceptions to produce anxieties, fears, anticipations of pain, and the like, but that might be seen as “cheating” so I am avoiding it.)
The trouble with the first sort of explanation is that it only seems to be an explanation. The idea that an “intrinsic” property (of occurrent pink, of snake-yuckiness, of pain, of the aroma of coffee) could explain a subject’s reactions to a circumstance is hopeless — a straightforward case of a virtus dormitiva. Convicting a theory of harboring a vacuous virtus dormitiva is not that simple, however. Sometimes it makes perfectly good sense to posit a temporary virtus dormitiva, pending further investigation. Conception is, by definition we might say, the cause of pregnancy. If we had no other way of identifying conception, telling someone she got pregnant because she conceived would be an empty gesture, not an explanation. But once we’ve figured out the requisite mechanical theory of conception, we can see how conception is the cause of pregnancy, and informativeness is restored. In the same spirit, we might identify qualia, by definition, as the proximal causes of our enjoyment and suffering (roughly put), and then proceed to discharge our obligations to inform by pursuing the second style of explanation. But curiously enough, qualophiles (as I call those who still believe in qualia) will have none of it; they insist that qualia “reduced” to mere complexes of mechanically accomplished dispositions to react are not the qualia they are talking about. Their qualia are something different.

It is in this context that Dennett says that qualia - those things on the screen in the Cartesian Theater - don't exist.

Another theme of the book that reaches its crescendo in this section is Dennett's defense of a restricted sort of un-duplicability of qualia (though of course he would never call it that), as a natural consequence of how brains work and the limits of third-person knowledge, as identified with heterophenomenology. There's more handwaving than rigorous argument supporting this, but it does seem like pretty reasonable handwaving.

Consider what it must have been like to be a Leipzig Lutheran churchgoer in, say, 1725, hearing one of J. S. Bach’s chorale cantatas in its premier performance. (This exercise in imagining what it is like is a warm-up for chapter 14, where we will be concerned with consciousness in other animals.) There are probably no significant biological differences between us today and German Lutherans of the eighteenth century; we are the same species, and hardly any time has passed. But, because of the tremendous influence of culture — the memosphere — our psychological world is quite different from theirs, in ways that would have a noticeable impact on our respective experiences when hearing a Bach cantata for the first time. Our musical imagination has been enriched and complicated in many ways (by Mozart, by Charlie Parker, by the Beatles), but also it has lost some powerful associations that Bach could count on. His chorale cantatas were built around chorales, traditional hymn melodies that were deeply familiar to his churchgoers and hence provoked waves of emotional and thematic association as soon as their traces or echoes appeared in the music. Most of us today know these chorales only from Bach’s settings of them, so when we hear them, we hear them with different ears. If we want to imagine what it was like to be a Leipzig Bach-hearer, it is not enough for us to hear the same tones on the same instruments in the same order; we must also prepare ourselves somehow to respond to those tones with the same heartaches, thrills, and waves of nostalgia.
A clearer case of imagination-blockade would be hard to find, but note that it has nothing to do with biological differences or even with “intrinsic” or “ineffable” properties of Bach’s music. The reason we couldn’t imaginatively relive in detail the musical experience of the Leipzigers is simply that we would have to take ourselves along for the imaginary trip, and we know too much.

There's also a really good description of ineffability in terms of Jell-O boxes that, unfortunately, I will have to butcher in order to relate. Tl;dr: If you tear a Jell-O box into two pieces, one piece will be a detector for the other - it only fits perfectly with that one shape of torn cardboard. But this property is indescribable - if you try to explain what shape it is that the piece of Jell-O box detects, you can only wave your hands at the piece of cardboard plaintively. This is a metaphor for the experience of trying to describe what it is that red looks like.

IX.

The book closes with one final piece of shameless plagiarism from the heyday of LessWrong.

When we learn that the only difference between gold and silver is the number of subatomic particles in their atoms, we may feel cheated or angry — those physicists have explained something away: The goldness is gone from gold; they’ve left out the very silveriness of silver that we appreciate. And when they explain the way reflection and absorption of electromagnetic radiation accounts for colors and color vision, they seem to neglect the very thing that matters most. But of course there has to be some “leaving out” — otherwise we wouldn’t have begun to explain. Leaving something out is not a feature of failed explanations, but of successful explanations.
Only a theory that explained conscious events in terms of unconscious events could explain consciousness at all. If your model of how pain is a product of brain activity still has a box in it labeled “pain,” you haven’t yet begun to explain what pain is, and if your model of consciousness carries along nicely until the magic moment when you have to say “then a miracle occurs” you haven’t begun to explain what consciousness is.
This leads some people to insist that consciousness can never be explained. But why should consciousness be the only thing that can’t be explained? Solids and liquids and gases can be explained in terms of things that aren’t themselves solids or liquids or gases. Surely life can be explained in terms of things that aren’t themselves alive — and the explanation doesn’t leave living things lifeless. The illusion that consciousness is the exception comes about, I suspect, because of a failure to understand this general feature of successful explanation. Thinking, mistakenly, that the explanation leaves something out, we think to save what otherwise would be lost by putting it back into the observer as a quale — or some other “intrinsically” wonderful property. The psyche becomes the protective skirt under which all these beloved kittens can hide. There may be motives for thinking that consciousness cannot be explained, but, I hope I have shown, there are good reasons for thinking that it can.
My explanation of consciousness is far from complete. One might even say that it was just a beginning, but it is a beginning, because it breaks the spell of the enchanted circle of ideas that made explaining consciousness seem impossible. I haven’t replaced a metaphorical theory, the Cartesian Theater, with a nonmetaphorical (“literal, scientific”) theory. All I have done, really, is to replace one family of metaphors and images with another, trading in the Theater, the Witness, the Central Meaner, the Figment, for Software, Virtual Machines, Multiple Drafts, a Pandemonium of Homunculi. It’s just a war of metaphors, you say — but metaphors are not “just” metaphors; metaphors are the tools of thought. No one can think about consciousness without them, so it is important to equip yourself with the best set of tools available.

20 comments

Comments sorted by top scores.

comment by Ben Pace (Benito) · 2018-03-06T10:06:52.125Z · LW(p) · GW(p)

From your Dennett-quote at the end:

This leads some people to insist that consciousness can never be explained. But why should consciousness be the only thing that can’t be explained? Solids and liquids and gases can be explained in terms of things that aren’t themselves solids or liquids or gases. Surely life can be explained in terms of things that aren’t themselves alive — and the explanation doesn’t leave living things lifeless. The illusion that consciousness is the exception comes about, I suspect, because of a failure to understand this general feature of successful explanation. Thinking, mistakenly, that the explanation leaves something out, we think to save what otherwise would be lost by putting it back into the observer as a quale — or some other “intrinsically” wonderful property.

I want to propose that Dennett is slightly mistaken; it's not that people haven't generalised it to all the relevant cases. Many people have learned that things in their map can be explained in terms of simpler things in their map (e.g. biology, physics, psychology, etc).

However, it can be additionally hard to generalise this to the map itself - sure, your map of your map should have some reduction (i.e. the brain is made of atoms), but does that really apply to your map, as opposed to your map of your map? People successfully generalise, but they fail to go meta.

comment by Kaj_Sotala · 2018-03-13T09:24:10.760Z · LW(p) · GW(p)

Curated this post because:

  • It's a good in-depth book review, summarizing some of the insights from the book while also giving people a good idea of whether they might want to read the book.
  • Understanding consciousness seems relevant for a range of ethical questions (compare e.g. lukeprog's recent report on Consciousness and Moral Patienthood for OpenPhil)
  • The topic in general feels like the kind of reductionist-philosophy-geekery that appeals to the kinds of people we might want to have on the site.

A reason not to curate the post might have been that despite the above points, several people may nevertheless feel that it's not sufficiently connected to the topics of epistemic or instrumental rationality. However, we've got a bunch of more directly related posts in our curation queue, so in light of that and the above points, I'm curating this. (We've been slacking a little on curating posts and now have a bunch of them chosen for curation; doing it at a pace of one per day.)

comment by Ben Pace (Benito) · 2018-03-06T10:05:26.847Z · LW(p) · GW(p)

Loved the review - seems like an awesome book. I recall reading it aged 13-14; he says of chapter 6 that it is very technical and can be skipped, but unfortunately I tried anyway and then never finished the book.

The most significant part of it for me was the section you describe this way:

Dennett's approach is given the mouthful of a name, "heterophenomenology" (the study of other people's phenomena), and really means something along the lines of using reports of conscious experience as data to fill in a descriptive model of a reporter, which has room for both accurate and mistaken reports...
...in short, when interpreting text about consciousness, one tries to fit this text into an internally consistent model of the thing the text is describing.

I remember this being a big insight for me; Dennett had shown that anything that I experienced or thought about must trace back to some real-world data that I can observe. It was a big shock to me that you could just draw something of an arbitrary line at observable data like "text that people write when we ask them about consciousness", and then propose hypotheses to explain such data.

(It wasn't until much later that I realised a good theory of consciousness should go farther and make novel predictions about what people will say in very specific situations, rather than just explaining what you have already seen. But still.)

comment by philip_b (crabman) · 2018-03-13T15:21:38.271Z · LW(p) · GW(p)

Can anyone provide a comparison between this book and Consciousness: An Introduction by Susan Blackmore? The latter has been recommended to me, but after reading a chapter I wasn't impressed.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-03-13T17:24:12.924Z · LW(p) · GW(p)

I can’t provide an in-depth comparison, but I have read both books, and Dennett’s was vastly superior.

comment by Kenny · 2018-04-05T21:27:48.403Z · LW(p) · GW(p)

If you haven't read his other book, Darwin's Dangerous Idea, I strongly recommend you do!

comment by Gordon Seidoh Worley (gworley) · 2018-03-06T19:13:22.342Z · LW(p) · GW(p)

Wow, this gives me a much more favorable view of Dennett. I had been of the opinion that he was opposed to both phenomenology and the existence of qualia, but in fact he is himself a phenomenologist and clearly seems to support, though would perhaps not say he does, the existence of qualia as a class of phenomena. The disagreements seem to come from two places.

First, he rejects the primacy of any phenomenological account as universal, to which I say "duh", but I realize not all phenomenologists are as epistemologically careful as me and, it seems, Dennett, so I understand how the myth of his anti-phenomenological stance has been perpetuated.

Second, he rejects qualia because it is often treated as if it were opaque. I agree, and although I try to rehabilitate qualia as a technical term for the class of phenomena that differentiate the phenomenally conscious from the not, I empathize with his choice because at other times I've done the same thing and rejected the idea that qualia might exist because it seemed to posit an explanation by introducing an epicycle.

I guess in hindsight I should be less surprised, and it makes me wonder whether others in this area who seem to be mistaken, based on my understanding of their work - like Chalmers and Searle - do in fact accurately describe reality, and it is only the representation of their work that has stripped the nuance that would allow me to see this.

comment by Said Achmiz (SaidAchmiz) · 2018-03-06T04:29:07.067Z · LW(p) · GW(p)

Excellent review. Consciousness Explained is one of my favorite books of philosophy, and one of the works that cemented Dennett, in my mind, as one of the greatest contemporary philosophers of mind (and an outstanding author). It is well worth reading.

comment by TAG · 2023-09-18T14:59:50.001Z · LW(p) · GW(p)

>Consider what it must have been like to be a Leipzig Lutheran churchgoer in, say, 1725, hearing one of J. S. Bach’s chorale cantatas in its premier performance.

If all the ineffability of experience comes from associations, then novel experiences should be effable-- but they are not.

There's also a really good description of ineffability in terms of Jell-O boxes that, unfortunately, I will have to butcher in order to relate. Tl;dr: If you tear a Jell-O box into two pieces, one piece will be a detector for the other - it only fits perfectly with that one shape of torn cardboard. But this property is indescribable - if you try to explain what shape it is that the piece of Jell-O box detects, you can only wave your hands at the piece of cardboard plaintively.

If all the ineffability of experience comes from complexity, then simple experiences should be effable-- but they are not.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2023-09-18T20:05:13.026Z · LW(p) · GW(p)

Sorry, what's a simple experience? There are externally simple experiences, like looking at a black room in the dark, but it's not like those experiences use a smaller number of neurons than my other experiences.

Replies from: TAG
comment by TAG · 2023-09-19T08:57:48.830Z · LW(p) · GW(p)

Experiences like seeing a single colour, or hearing a single musical tone. The number of neurons is irrelevant, since they are not experienced.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2023-09-19T10:28:06.039Z · LW(p) · GW(p)

Yeah, good point: we build models of the world, or at least of our senses; we don't automatically build models of what our neurons are doing.

(Maybe any learning in the brain can be interpreted as a "model" of the neurons that feed into the learning neurons, but the details of that sort of thing aren't available to our faculties for navigating the world, doing abstract reasoning, or communicating - they're happening at a lower layer in the software stack of the brain.)

That's veering towards a more "Mary's room" sort of definition of "ineffability," where you can't freely exchange world-models and experiences, which isn't really what the Jell-O box analogy was about - it was about interpersonal comparisons, and our inability to experience what other people experience.

But I guess they're connected. Suppose we're both listening to a simple tone, but my pitch perception is more accurate than yours. If you want to experience my experience for yourself, you might try taking your own experience and then imagining "adding on some extra pitch perception" - an act of model-to-experience exchange reminiscent of what Mary's supposed to try.

comment by ZeitPolizei · 2018-03-10T19:08:36.240Z · LW(p) · GW(p)

Another theme of the book that reaches its crescendo…

The paragraph beginning with this sentence is duplicated and butchered.

Replies from: Charlie Steiner
comment by Charlie Steiner · 2018-03-10T19:15:14.837Z · LW(p) · GW(p)

Thanks, fixed! I probably tried to edit the post on mobile.

comment by TAG · 2018-03-07T13:31:20.202Z · LW(p) · GW(p)

An amusingly-written critique of some of Dennett's ideas:

http://www-personal.umich.edu/~lormand/phil/cons/qualia.htm

Replies from: paul-torek
comment by Paul Torek (paul-torek) · 2018-03-16T14:17:42.277Z · LW(p) · GW(p)

Lormand has a better take than Dennett. Dennett thinks qualia would have to be irreducible. Lormand writes

If such arguments were convincing, they would weigh against any reductive theory of qualia. But they should not be convincing. A powerful but not dismissive response turns on distinguishing qualia properties from ways of representing them. Even if a creature has a special way of representing phenomenal properties that is unavailable to us, we can in principle objectively specify, express, or test for these phenomenal properties in other ways.

Dennett simply defines qualia overly narrowly, letting the least naturalistic philosophers own the term.

Replies from: SpectrumDT
comment by SpectrumDT · 2023-10-29T04:02:33.665Z · LW(p) · GW(p)

What article or book is that quote from?

comment by zulupineapple · 2018-04-24T12:40:13.625Z · LW(p) · GW(p)

Is it weird that this theory of consciousness was, for me, comfortable reading — the sort that immediately “rings bells,” that makes us exclaim to ourselves, with something like secret pride: “Of course! I knew that all along!”? Does that mean it's a bad theory?

There is no Cartesian Theater where "you" watch what's happening in "your brain."

Even if there were such a part of the brain, it would itself be composed of parts that aren't entirely conscious, and so none of those parts would alone be "you". There are no alternatives, besides postulating souls (or elementary particles of consciousness). The latter isn't quite a priori impossible, but if it's appealing to you, then you're doing materialism wrong.

comment by romeostevensit · 2018-03-08T07:35:21.429Z · LW(p) · GW(p)

Is it in keeping with the spirit of Dennett's approach if I wonder what manner of self-hatred would lead a man to spend a career trying to disprove his own existence?

Replies from: gjm
comment by gjm · 2018-03-14T00:52:23.881Z · LW(p) · GW(p)

Probably not, but in any case that isn't what Dennett has done.