Book Review: Consciousness Explained
post by Charlie Steiner · 2018-03-06T03:32:58.835Z · score: 101 (27 votes)
The trouble with brains, it seems, is that when you look in them, you discover that there’s nobody home.
This is a book I've long been aware of, but never got that itch to read. Maybe I trusted the field of philosophy too little, assuming that a book called "Consciousness Explained" was probably not very good. Maybe I trusted the field of philosophy too much, assuming that if someone had actually explained consciousness while I was a toddler, I would have been informed somehow before now. Either way, I was wrong, and the book is great.
I'm going to try to give what is either a short tour or a long compilation of quotes. I'm leaving out several whole chapters, nearly every thought experiment, most of the examples and science, and some very nice language. And yet, this is still long enough that even if you do like reading Dan Dennett on consciousness, if you don't like long things, maybe don't read this all in one sitting - stop at V or VI and pretend that's the end of part one.
Dennett quickly warns the reader that he's aware that the contents may sound counterintuitive.
We shouldn’t expect a good theory of consciousness to make for comfortable reading — the sort that immediately “rings bells,” that makes us exclaim to ourselves, with something like secret pride: “Of course! I knew that all along! It’s obvious, once it’s been pointed out!” The mysteries of the mind have been around for so long, and we have made so little progress on them, that the likelihood is high that some things we all tend to agree to be obvious are just not so.
This is not the mysterian claim that his ideas about consciousness are likely to be true because they are counterintuitive, but it does signal a core claim of the book: the intuitive view of the problem of consciousness is broken from the foundation up. Naturally, if the intuitive theory is wrong, the right theory will be counterintuitive.
Where, exactly, is our intuition going wrong? The most important example is introduced by considering how, on macroscopic scales, it is convenient to treat observers as point-like entities:
We explain the startling time gap between the sound and sight of distant fireworks by noting the different transmission speeds of sound and light. They arrive at the observer (at that point) at different times, even though they left the source at the same time.
What happens, though, when we close in on the observer, and try to locate the observer’s point of view more precisely, as a point within the individual? The simple assumptions that work so well on larger scales begin to break down. There is no single point in the brain where all information funnels in, and this fact has some far from obvious — indeed, quite counterintuitive — consequences.
Cartesian dualism is hopelessly wrong. But while materialism of one sort or another is now a received opinion approaching unanimity, even the most sophisticated materialists today often forget that once Descartes’s ghostly res cogitans is discarded, there is no longer a role for a centralized gateway, or indeed for any functional center to the brain. The pineal gland is not only not the fax machine to the Soul, it is also not the Oval Office of the brain, and neither are any of the other portions of the brain. The brain is Headquarters, the place where the ultimate observer is, but there is no reason to believe that the brain itself has any deeper headquarters, any inner sanctum, arrival at which is the necessary or sufficient condition for conscious experience. In short, there is no observer inside the brain.
This book might have also been called "455 Pages Of Implications Of There Being No Homuncular Observer Inside The Brain." But that probably wouldn't have sold as well. This is absolutely the most important point of the book, and it shows up again and again in different variations. There is no Cartesian Theater where "you" watch what's happening in "your brain." There is no Central Meaner who has top-down control over the meaning of what you're going to say, before it gets to your speech center. There is no Inner Senser to whom the brain feeds sense data in order to consecrate it with consciousness. And so on and so forth.
You'd think this would get old, but the examples I gathered in the last paragraph are spread out over many chapters of neuroscience, thought experiments, and discussions of philosophical practice. Dennett spends a lot of time defending the distributed nature of the brain, and uses it to do a lot of heavy philosophical lifting, but it's always in a slightly new context, and it feels worthwhile each time.
A key methodological question of the book is how to walk the line between two extreme stances on people's reports of their conscious phenomena. At one extreme, one takes everything people say about their conscious experience as gospel truth. Dennett spends plenty of time impugning people's reliability as witnesses of their own consciousness, with data and experiments such as trying to read a playing card with your peripheral vision.
Just about every author who has written about consciousness has made what we might call the first-person-plural presumption: Whatever mysteries consciousness may hold, we (you, gentle reader, and I) may speak comfortably together about our mutual acquaintances, the things we both find in our streams of consciousness. And with a few obstreperous exceptions, readers have always gone along with the conspiracy.
This would be fine if it weren’t for the embarrassing fact that controversy and contradiction bedevil the claims made under these conditions of polite mutual agreement. We are fooling ourselves about something. Perhaps we are fooling ourselves about the extent to which we are all basically alike. Perhaps when people first encounter the different schools of thought on phenomenology, they join the school that sounds right to them, and each school of phenomenological description is basically right about its own members’ sorts of inner life, and then just innocently overgeneralizes, making unsupported claims about how it is with everyone.
At the opposite extreme, one spends all one's time explaining the reports themselves, and never seems to explain any conscious phenomena at all - the dreaded behaviorism.
Dennett's approach is given the mouthful of a name "heterophenomenology" (the study of other people's phenomena), and really means something along the lines of using reports of conscious experience as data to fill in a descriptive model of the reporter, which has room for both accurate and mistaken reports.
Suppose you are confronted by a “speaking” computer, and suppose you succeed in interpreting its output as speech acts expressing its beliefs and opinions, presumably “about” its conscious states. The fact that there is a single, coherent interpretation of a sequence of behavior doesn’t establish that the interpretation is true; it might be only as if the “subject” were conscious; we risk being taken in by a zombie with no inner life at all. You could not confirm that the computer was conscious of anything by this method of interpretation. Fair enough. We can’t be sure that the speech acts we observe express real beliefs about actual experiences; perhaps they express only apparent beliefs about nonexistent experiences. Still, the fact that we had found even one stable interpretation of some entity’s behavior as speech acts would always be a fact worthy of attention. Anyone who found an intersubjectively uniform way of interpreting the waving of a tree’s branches in the breeze as “commentaries” by “the weather” on current political events would have found something wonderful demanding an explanation, even if it turned out to be effects of an ingenious device created by some prankish engineer.
Happily, there is an analogy at hand to help us describe such facts without at the same time presumptively explaining them: We can compare the heterophenomenologist’s task of interpreting subjects’ behavior to the reader’s task of interpreting a work of fiction. Some texts, such as novels and short stories, are known — or assumed — to be fictions, but this does not stand in the way of their interpretation. In fact, in some regards it makes the task of interpretation easier, by canceling or postponing difficult questions about sincerity, truth, and reference.
Consider some uncontroversial facts about the semantics of fiction. A novel tells a story, but not a true story, except by accident. In spite of our knowledge or assumption that the story told is not true, we can, and do, speak of what is true in the story. “We can truly say that Sherlock Holmes lived in Baker Street and that he liked to show off his mental powers. We cannot truly say that he was a devoted family man, or that he worked in close cooperation with the police”. What is true in the story is much, much more than what is explicitly asserted in the text. It is true that there are no jet planes in Holmes’s London (though this is not asserted explicitly or even logically implied in the text), but also true that there are piano tuners (though — as best I recall — none is mentioned, or, again, logically implied). In addition to what is true and false in the story, there is a large indeterminate area: while it is true that Holmes and Watson took the 11:10 from Waterloo Station to Aldershot one summer’s day, it is neither true nor false that that day was a Wednesday.
There are delicious philosophical problems about how to say (strictly) all the things we unperplexedly want to say when we talk about fiction, but these will not concern us. Perhaps some people are deeply perplexed about the metaphysical status of fictional people and objects, but not I. In my cheerful optimism I don’t suppose there is any deep philosophical problem about the way we should respond, ontologically, to the results of fiction; fiction is fiction; there is no Sherlock Holmes. Setting aside the intricacies, then, and the ingenious technical proposals for dealing with them, I want to draw attention to a simple fact: the interpretation of fiction is undeniably do-able, with certain uncontroversial results. First, the fleshing out of the story, the exploration of “the world of Sherlock Holmes,” for instance, is not pointless or idle; one can learn a great deal about a novel, about its text, about the point, about the author, even about the real world, by learning about the world portrayed by the novel. Second, if we are cautious about identifying and excluding judgments of taste or preference, we can amass a volume of unchallengeably objective fact about the world portrayed. All interpreters agree that Holmes was smarter than Watson; in crashing obviousness lies objectivity.
So, in short, when interpreting text about consciousness, one tries to fit this text into an internally consistent model that is a model of the thing the text is describing.
The heterophenomenological method neither challenges nor accepts as entirely true the assertions of subjects, but rather maintains a constructive and sympathetic neutrality, in the hopes of compiling a definitive description of the world according to the subjects. Any subject made uneasy by being granted this constitutive authority might protest: “No, really! These things I am describing to you are perfectly real, and have exactly the properties I am asserting them to have!” The heterophenomenologist’s honest response might be to nod and assure the subject that of course his sincerity was not being doubted. But since believers in general want more — they want their assertions to be believed and, failing that, they want to know whenever their audience disbelieves them — it is in general more politic for heterophenomenologists, whether anthropologists or experimenters studying consciousness in the laboratory, to avoid drawing attention to their official neutrality.
My suggestion, then, is that if we were to find real goings-on in people’s brains that had enough of the “defining” properties of the items that populate their heterophenomenological worlds, we could reasonably propose that we had discovered what they were really talking about — even if they initially resisted the identifications. And if we discovered that the real goings-on bore only a minor resemblance to the heterophenomenological items, we could reasonably declare that people were just mistaken in the beliefs they expressed, in spite of their sincerity.
Obviously Dennett (1991) is cribbing from Yudkowsky (2008) here, as he does in various places throughout the book. Overall, I think making the reader learn the word "heterophenomenology" was worth it - the idea shows up in the book not only as a method of semi-detached interpretation, but also as a model of the sort of thing one can know about consciousness from a third-person perspective, which proves useful in taking on various philosophical puzzlers.
On the other hand, the whole thing could have been done more precisely - all these mentions of heterophenomenology rely heavily on intuition to fill in the blanks. Part of the reason more precision was impossible is that heterophenomenology is used as part of Dennett's campaign of intuition pumps to move people from the intuitive view to a non-Cartesian view. A precise model would inspire people to ask "where's the consciousness?" as soon as it was introduced - instead, the book tries to change people's minds slowly.
Another point of the book, if very minor in comparison to "the model of a person as a pointlike object breaks down when you get close to them," is the difference between representeds and representings. There's a long section on the experience of events in time that trades on this distinction. In the Cartesian Theater model, the order of conscious events is uniquely determined by the order in which the events are shown onstage in the Theater. But if there's no Theater, how do we judge what order events occurred in?
The answer is that just as we don't represent orange light on the retinas with orange-colored neurons, we don't have to represent events that are ordered in time with neurons that are ordered in time. We use the neural equivalent of timestamps to represent time in a non-temporal way, and can compare these timestamps in a distributed way. People found this idea hard to imagine, because they imagined consciousness as if there just had to be a Cartesian Theater somewhere.
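The timestamp idea can be made concrete with a toy sketch (all names here are my own illustrative inventions, not anything from the book): event records can arrive at a comparison process out of order, yet the represented order is recovered from their embedded timestamps rather than from the order of the representings themselves.

```python
# Toy sketch of "representation of timing" vs. "timing of representations".
# The order in which records arrive is irrelevant; only the timestamps
# embedded in the records determine the represented temporal order.

def represented_order(arrived_events):
    """Recover the represented order of events from their timestamps,
    ignoring the order in which the records happened to arrive."""
    return [name for name, t in sorted(arrived_events, key=lambda e: e[1])]

# Records arrive out of order, e.g. because they took different paths.
arrivals = [("flash", 2.0), ("sound", 1.0), ("touch", 3.0)]

print(represented_order(arrivals))  # → ['sound', 'flash', 'touch']
```

The point of the sketch is that nothing in the sorting step needs the records to be laid out in temporal order anywhere; "when" is just data, like any other content.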
Every event in your brain has a definite spatio-temporal location, but asking “Exactly when do you become conscious of the stimulus?” assumes that some one of these events is, or amounts to, your becoming conscious of the stimulus. This is like asking “Exactly when did the British Empire become informed of the truce in the War of 1812?” Sometime between December 24, 1814, and mid-January, 1815 — that much is definite, but there simply is no fact of the matter if we try to pin it down to a day and hour. Even if we can give precise times for the various moments at which various officials of the Empire became informed, no one of these moments can be singled out as the time the Empire itself was informed. The signing of the truce was one official, intentional act of the Empire, but the participation by the British forces in the Battle of New Orleans was another, and it was an act performed under the assumption that no truce had yet been signed. A case might be made for the principle that the arrival of the news at Whitehall or Buckingham Palace in London should be considered the official time at which the Empire was informed, since this was the “nerve center” of the Empire. Descartes thought the pineal gland was just such a nerve center in the brain, but he was wrong. Since cognition and control — and hence consciousness — is distributed around in the brain, no moment can count as the precise moment at which each conscious event happens.
We human beings do make judgments of simultaneity and sequence of elements of our own experience, some of which we express, so at some point or points in our brains the corner must be turned from the actual timing of representations to the representation of timing, and wherever and whenever these discriminations are made, thereafter the temporal properties of the representations embodying those judgments are not constitutive of their content. The objective simultaneities and sequences of events spread across the broad field of the cortex are of no functional relevance unless they can also be accurately detected by mechanisms in the brain. We can put the crucial point as a question: What would make this sequence the stream of consciousness? There is no one inside, looking at the wide-screen show displayed all over the cortex, even if such a show is discernible by outside observers. What matters is the way those contents get utilized by or incorporated into the processes of ongoing control of behavior, and this must be only indirectly constrained by cortical timing. What matters, once again, is not the temporal properties of the representings, but the temporal properties represented, something determined by how they are “taken” by subsequent processes in the brain.
What, then, is Dennett's alternative picture of consciousness? He calls it the multiple drafts model, and it is a wholehearted embrace of the distributed nature of human minds. If you probe someone's consciousness in different ways, like asking them to press a button now versus report what they remember later, you can sometimes get different answers, because the state of someone's mind is distributed throughout their brain, and different probes can access different facts about that state. It's as if the thing that gets probed, which gets fixated into consciousness when we direct attention to it, has multiple drafts of itself available to different systems of the brain, and these drafts get passed around and edited as time passes.
You have probably experienced the phenomenon of driving for miles while engrossed in conversation (or in silent soliloquy) and then discovering that you have utterly no memory of the road, the traffic, your car-driving activities. It is as if someone else had been driving. Many theorists (myself included, I admit) have cherished this as a favorite case of “unconscious perception and intelligent action.” But were you really unconscious of all those passing cars, stop lights, bends in the road at the time? You were paying attention to other things, but surely if you had been probed about what you had just seen at various moments on the drive, you would have had at least some sketchy details to report.
The other key feature (perhaps you can guess) is that there really is no Central Place in the brain where the important stuff happens. Important stuff happens all over!
We don’t directly experience what happens on our retinas, in our ears, on the surface of our skin. What we actually experience is a product of many processes of interpretation — editorial processes, in effect. They take in relatively raw and one-sided representations, and yield collated, revised, enhanced representations, and they take place in the streams of activity occurring in various parts of the brain. This much is recognized by virtually all theories of perception, but now we are poised for the novel feature of the Multiple Drafts model: Feature detections or discriminations only have to be made once. That is, once a particular “observation” of some feature has been made, by a specialized, localized portion of the brain, the information content thus fixed does not have to be sent somewhere else to be rediscriminated by some “master” discriminator. In other words, discrimination does not lead to a representation of the already discriminated feature for the benefit of the audience in the Cartesian Theater — for there is no Cartesian Theater.
These spatially and temporally distributed content-fixations in the brain are precisely locatable in both space and time, but their onsets do not mark the onset of consciousness of their content. It is always an open question whether any particular content thus discriminated will eventually appear as an element in conscious experience, and it is a confusion, as we shall see, to ask when it becomes conscious. These distributed content-discriminations yield, over the course of time, something rather like a narrative stream or sequence, which can be thought of as subject to continual editing by many processes distributed around in the brain, and continuing indefinitely into the future. This stream of contents is only rather like a narrative because of its multiplicity; at any point in time there are multiple “drafts” of narrative fragments at various stages of editing in various places in the brain.
I think Dennett would agree that the multiple drafts model is absolutely a step in the right direction, but is also a convenient fiction, a crutch for our imagination, because we're still in the process of uncovering better ways of imagining and understanding the complexity of the human brain.
All of this has only brought us to the middle of the book. Here the text becomes a little more hit-and-miss, and a lot less summarizable in small bites. I will resort to a list:
- Orwellian vs. Stalinesque falsification. Suppose a grey truck passed you yesterday, but you mistakenly remember it being yellow. On this comfortably long time scale, there seems to be a clear distinction between having consciously experienced a grey truck and then changed your memory ("Orwellian" revision), and having erroneously experienced seeing a yellow truck all along (a "Stalinesque" show trial). We can imagine probing you shortly after the truck passed, and asking you if it was yellow, and getting different answers. But at short time scales, like if you made the error three seconds after the truck passed, there is no fact of the matter about your conscious state - our approximation of a pointlike observer starts breaking down. Your mental state does not have to cleanly fall into either having totally experienced a grey truck and then forgotten, or having totally experienced a yellow truck erroneously.
- Evolution of consciousness. Begins as a fairly standard tour of the evolution of things that represent other things. One fun idea is the picture of verbal imagination as a literal evolutionary descendant of talking to oneself - but this falsifiable speculation seems to indeed be false, upon further thought. There's also a lot of neuroanatomy that is impossible to reproduce in short form.
- A long analogy about Von Neumann architecture and virtual machines. The story we tell ourselves about consciousness is a serial story produced on parallel hardware. So Dennett proposes some level of description - describing a virtual machine, if you will - in which it makes perfect sense to talk about the brain as having a single stream of consciousness. Dennett flirts with taking this analogy quite far, combining it with the "memes as software" analogy, but I think that this only gets him into trouble.
- Models of word generation. This is the chapter against a Central Meaner who intends pure propositions, which we then convert into words. Instead, a model of word generation is proposed based on specialist homunculi that might all simultaneously be activated to some degree - an interesting chapter, and one of the more speculative ones about the internal functioning of human brains. These days, of course, everybody and their duck knows about neural nets, which might give us an edge in imagining the computational model of the brain.
- The slightly disappointing definition of consciousness. The middle of the book culminates in Dennett taking a stand that consciousness is, at the appropriate level of description, the presence of this "virtual machine" that corresponds to the protagonist of the story we tell about ourselves. This is a very high-level property, currently identifiable largely by handwaving - which puts it in good company among other semi-reasonable definitions of consciousness. I call it slightly disappointing because it isn't built up to in a narratively satisfying way, nor does it serve as the cornerstone for the remainder of the book - except for a chapter on the development of selfhood, where it shows up again.
A story about visual perception that's too good not to reproduce:
Almost twenty years ago, Paul Bach-y-Rita developed several devices that involved small, ultralow-resolution video cameras that could be mounted on eyeglass frames. The low-resolution signal from these cameras, a 16-by-16 or 20-by-20 array of “black and white” pixels, was spread over the back or belly of the subject in a grid of either electrical or mechanically vibrating tinglers called tactors.
After only a few hours of training, blind subjects wearing this device could learn to interpret the patterns of tingles on their skin, much as you can interpret letters traced on your skin by someone’s finger. The resolution is low, but even so, subjects could learn to read signs, and identify objects and even people’s faces, as we can gather from looking at this photograph taken of the signal as it appears on an oscilloscope monitor.
The result was certainly prosthetically produced conscious perceptual experience, but since the input was spread over the subjects’ backs or bellies instead of their retinas, was it vision? Did it have the “phenomenal qualities” of vision, or just of tactile sensation?
Recall one of our experiments in chapter 3. It is quite easy for your tactile point of view to extend out to the tip of a pencil, permitting you to feel textures with the tip, while quite oblivious to the vibrations of the pencil against your fingers. So it should not surprise us to learn that a similar, if more extreme, effect was enjoyed by Bach-y-Rita’s subjects. After a brief training period, their awareness of the tingles on their skin dropped out; the pad of pixels became transparent, one might say, and the subjects’ point of view shifted to the point of view of the camera, mounted to the side of their heads. A striking demonstration of the robustness of the shift in point of view was the behavior of an experienced subject whose camera had a zoom-lens with a control button. The array of tinglers was on his back, and the camera was mounted on the side of his head. When the experimenter without warning touched the zoom button, causing the image on the subject’s back to expand or “loom” suddenly, the subject instinctively lurched backward, raising his arms to protect his head. Another striking demonstration of the transparency of the tingles is the fact that subjects who had been trained with the tingler-patch on their backs could adapt almost immediately when the tingler-patch was shifted to their bellies. And yet, as Bach-y-Rita notes, they still responded to an itch on the back as something to scratch — they didn’t complain of “seeing” it — and were perfectly able to attend to the tingles, as tingles, on demand.
These observations are tantalizing but inconclusive. One might argue that once the use of the device’s inputs became second nature the subjects were really seeing, or, contrarily, that only some of the most central “functional” features of seeing had been reproduced prosthetically. What of the other “phenomenal qualities” of vision? Bach-y-Rita reports the result of showing two trained subjects, blind male college students, for the first time in their lives, photographs of nude women from Playboy magazine. They were disappointed — “although they both could describe much of the content of the photographs, the experience had no affectual component; no pleasant feelings were aroused. This greatly disturbed the two young men, who were aware that similar photographs contained an effectual component for their normally sighted friends”.
Dennett is somewhat infamous for denying the existence of qualia (singular: quale) - the private, ineffable stuff that makes the redness of red so red. But it's not that he denies the existence of redness - it's the private and ineffable part that's the problem.
Consider, for instance, the curious fact that monkeys don’t like red light. Given a choice, rhesus monkeys show a strong preference for the blue-green end of the spectrum, and get agitated when they have to endure periods in red environments. Why should this be? Humphrey points out that red is always used to alert, the ultimate color-coding color, but for that very reason ambiguous: the red fruit may be good to eat, but the red snake or insect is probably advertising that it is poisonous. So “red” sends mixed messages. But why does it send an “alert” message in the first place? Perhaps because it is the strongest available contrast with the ambient background of vegetative green or sea blue, or — in the case of monkeys — because red light (red to reddish-orange to orange light) is the light of dusk and dawn, the times of day when virtually all the predators of monkeys do their hunting.
The affective or emotional properties of red are not restricted to rhesus monkeys. All primates share these reactions, including human beings. If your factory workers are lounging too long in the rest rooms, painting the walls of the rest rooms red will solve that problem — but create others (see Humphrey, forthcoming). Such “visceral” responses are not restricted to colors, of course. Most primates raised in captivity who have never seen a snake will make it unmistakably clear that they loathe snakes the moment they see one, and it is probable that the traditional human dislike of snakes has a biological source that explains the biblical source, rather than the other way around. That is, our genetic heritage queers the pitch in favor of memes for snake-hating.
Now here are two different explanations for the uneasiness most of us feel (even if we “conquer” it) when we see a snake:
(1) Snakes evoke in us a particular intrinsic snake-yuckiness quale when we look at them, and our uneasiness is a reaction to that quale.
(2) We find ourselves less than eager to see snakes because of innate biases built into our nervous systems. These favor the release of adrenaline, bring fight-or-flight routines on line, and, by activating various associative links, call a host of scenarios into play involving danger, violence, damage. The original primate aversion is, in us, transformed, revised, deflected in a hundred ways by the memes that have exploited it, coopted it, shaped it. (There are many different levels at which we could couch an explanation of this “functionalist” type. For instance, we could permit ourselves to speak more casually about the power of snake-perceptions to produce anxieties, fears, anticipations of pain, and the like, but that might be seen as “cheating” so I am avoiding it.)
The trouble with the first sort of explanation is that it only seems to be an explanation. The idea that an “intrinsic” property (of occurrent pink, of snake-yuckiness, of pain, of the aroma of coffee) could explain a subject’s reactions to a circumstance is hopeless — a straightforward case of a virtus dormitiva. Convicting a theory of harboring a vacuous virtus dormitiva is not that simple, however. Sometimes it makes perfectly good sense to posit a temporary virtus dormitiva, pending further investigation. Conception is, by definition we might say, the cause of pregnancy. If we had no other way of identifying conception, telling someone she got pregnant because she conceived would be an empty gesture, not an explanation. But once we’ve figured out the requisite mechanical theory of conception, we can see how conception is the cause of pregnancy, and informativeness is restored. In the same spirit, we might identify qualia, by definition, as the proximal causes of our enjoyment and suffering (roughly put), and then proceed to discharge our obligations to inform by pursuing the second style of explanation. But curiously enough, qualophiles (as I call those who still believe in qualia) will have none of it; they insist that qualia “reduced” to mere complexes of mechanically accomplished dispositions to react are not the qualia they are talking about. Their qualia are something different.
It is in this context that Dennett says that qualia - those things on the screen in the Cartesian Theater - don't exist.
Another theme of the book that reaches its crescendo in this section is Dennett's defense of a restricted sort of un-duplicability of qualia (though of course he would never call it that), as a natural consequence of how brains work and of the limits of third-person knowledge as captured by heterophenomenology. There's more handwaving than rigorous argument supporting this, but it does seem like pretty reasonable handwaving.
Consider what it must have been like to be a Leipzig Lutheran churchgoer in, say, 1725, hearing one of J. S. Bach’s chorale cantatas in its premiere performance. (This exercise in imagining what it is like is a warm-up for chapter 14, where we will be concerned with consciousness in other animals.) There are probably no significant biological differences between us today and German Lutherans of the eighteenth century; we are the same species, and hardly any time has passed. But, because of the tremendous influence of culture — the memosphere — our psychological world is quite different from theirs, in ways that would have a noticeable impact on our respective experiences when hearing a Bach cantata for the first time. Our musical imagination has been enriched and complicated in many ways (by Mozart, by Charlie Parker, by the Beatles), but also it has lost some powerful associations that Bach could count on. His chorale cantatas were built around chorales, traditional hymn melodies that were deeply familiar to his churchgoers and hence provoked waves of emotional and thematic association as soon as their traces or echoes appeared in the music. Most of us today know these chorales only from Bach’s settings of them, so when we hear them, we hear them with different ears. If we want to imagine what it was like to be a Leipzig Bach-hearer, it is not enough for us to hear the same tones on the same instruments in the same order; we must also prepare ourselves somehow to respond to those tones with the same heartaches, thrills, and waves of nostalgia.
A clearer case of imagination-blockade would be hard to find, but note that it has nothing to do with biological differences or even with “intrinsic” or “ineffable” properties of Bach’s music. The reason we couldn’t imaginatively relive in detail the musical experience of the Leipzigers is simply that we would have to take ourselves along for the imaginary trip, and we know too much.
There's also a really good description of ineffability in terms of Jell-O boxes that, unfortunately, I will have to butcher in order to relate. Tl;dr: If you tear a Jell-O box into two pieces, one piece will be a detector for the other - it only fits perfectly with that one shape of torn cardboard. But this property is indescribable - if you try to explain what shape it is that the piece of Jell-O box detects, you can only wave your hands at the piece of cardboard plaintively. This is a metaphor for the experience of trying to describe what it is that red looks like.
The book closes with one final piece of shameless plagiarism from the heyday of LessWrong.
When we learn that the only difference between gold and silver is the number of subatomic particles in their atoms, we may feel cheated or angry — those physicists have explained something away: The goldness is gone from gold; they’ve left out the very silveriness of silver that we appreciate. And when they explain the way reflection and absorption of electromagnetic radiation accounts for colors and color vision, they seem to neglect the very thing that matters most. But of course there has to be some “leaving out” — otherwise we wouldn’t have begun to explain. Leaving something out is not a feature of failed explanations, but of successful explanations.
Only a theory that explained conscious events in terms of unconscious events could explain consciousness at all. If your model of how pain is a product of brain activity still has a box in it labeled “pain,” you haven’t yet begun to explain what pain is, and if your model of consciousness carries along nicely until the magic moment when you have to say “then a miracle occurs” you haven’t begun to explain what consciousness is.
This leads some people to insist that consciousness can never be explained. But why should consciousness be the only thing that can’t be explained? Solids and liquids and gases can be explained in terms of things that aren’t themselves solids or liquids or gases. Surely life can be explained in terms of things that aren’t themselves alive — and the explanation doesn’t leave living things lifeless. The illusion that consciousness is the exception comes about, I suspect, because of a failure to understand this general feature of successful explanation. Thinking, mistakenly, that the explanation leaves something out, we think to save what otherwise would be lost by putting it back into the observer as a quale — or some other “intrinsically” wonderful property. The psyche becomes the protective skirt under which all these beloved kittens can hide. There may be motives for thinking that consciousness cannot be explained, but, I hope I have shown, there are good reasons for thinking that it can.
My explanation of consciousness is far from complete. One might even say that it was just a beginning, but it is a beginning, because it breaks the spell of the enchanted circle of ideas that made explaining consciousness seem impossible. I haven’t replaced a metaphorical theory, the Cartesian Theater, with a nonmetaphorical (“literal, scientific”) theory. All I have done, really, is to replace one family of metaphors and images with another, trading in the Theater, the Witness, the Central Meaner, the Figment, for Software, Virtual Machines, Multiple Drafts, a Pandemonium of Homunculi. It’s just a war of metaphors, you say — but metaphors are not “just” metaphors; metaphors are the tools of thought. No one can think about consciousness without them, so it is important to equip yourself with the best set of tools available.