Consistently Inconsistent
post by Kaj_Sotala · 2011-08-04T22:33:21.748Z · LW · GW · Legacy · 25 comments
Robert Kurzban's Why Everyone (Else) Is a Hypocrite: Evolution and the Modular Mind is a book about how our brains are composed of a variety of different, interacting systems. While that premise is hardly new, many of our intuitions are still grounded in the idea of a unified, non-compartmental self. Why Everyone (Else) Is a Hypocrite takes the modular view and systematically attacks a number of ideas based on the unified view, replacing them with a theory based on the modular view. It clarifies a number of issues previously discussed on Overcoming Bias and Less Wrong, and even debunks some outright fallacious theories that we on Less Wrong have implicitly accepted. It is quite possibly the best single book on psychology that I've read. In this post and the posts that follow, I will be summarizing some of its most important contributions.
Chapter 1: Consistently Inconsistent (available for free here) presents evidence of our brains being modular, and points out some implications of this.
As previously discussed, severing the connection between the two hemispheres of a person's brain causes some odd effects. Present the left hemisphere with a picture of a chicken claw, and the right with a picture of a wintry scene. Now show the patient an array of cards with pictures of objects on them, and ask them to point (with each hand) to something related to what they saw. The hand controlled by the left hemisphere points to a chicken, the hand controlled by the right hemisphere points to a snow shovel. Fine so far.
But what happens when you ask the patient to explain why they pointed to those objects in particular? The left hemisphere is in control of the verbal apparatus. It knows that it saw a chicken claw, and it knows that it pointed at the picture of the chicken, and that the hand controlled by the other hemisphere pointed at the picture of a shovel. Asked to explain this, it comes up with the explanation that the shovel is for cleaning up after the chicken. While the right hemisphere knows about the snowy scene, it doesn't control the verbal apparatus and can't communicate directly with the left hemisphere, so this doesn't affect the reply.
Now, what did "the patient" think was going on? A crucial point of the book is that there is no such thing as the patient. "The patient" is just two different hemispheres, to some extent disconnected. You can ask what the left hemisphere thinks, or what the right hemisphere thinks, but asking about "the patient's beliefs" is a wrong question. If you know what the left hemisphere believes, what the right hemisphere believes, and how this influences the overall behavior, then you know all there is to know.
Split-brain patients are a special case, but there are many more examples of modularity, from both injured and healthy people. Does someone with a phantom limb "believe" that all of their limbs are intact? If you ask them, they'll say no, but they nonetheless feel pain in the missing limb. In one case, a patient was asked to reach for a cup of coffee with his phantom arm. Then the experimenter yanked the cup toward himself. The patient let out a shout of pain as his phantom fingers "got caught" in the cup's handle. A part of his brain "really believed" the handle was there.
We might be tempted to say that the patient "really" doesn't believe in the phantom limb, because that's what he says. But this only tells us that the part of his brain controlling his speech doesn't believe in it. There are many, many parts of the brain that can't talk, probably more than parts that can.
There are also cases of "alien hand syndrome": patients report that one of their hands moves on its own and has a will of its own. It might untuck a previously tucked shirt, causing a physical fight between the hands. The parts of the brain controlling the two hands are clearly not well coordinated. In blindsight, people report being blind, yet when asked to guess what letter they're being shown, they perform above chance. One patient was given the task of walking through a cluttered hallway. He made his way through it, side-stepping every obstacle on the route, without being aware that he had ever changed course. Kurzban mentions this as another example of why we should not treat the talking part of the brain as special: in this case, it was the part that was wrong.
Not convinced by weird cases of brain damage? Let's move on to healthy people. Take visual illusions. For many illusions, we're consciously aware that two squares are the same color, or that two lines are the same length, yet we still see them as different. One part of us "believes" they are the same, while another "believes" they are different.
But maybe the visual system is a special case. Maybe it does such low-level processing that it simply isn't affected by high-level information. But there are pictures that look meaningless to you until you're told what they represent, at which point the image becomes clear. Or play someone a recording backwards and tell them to look for specific words in it. They'll be able to hear the words you specified – but only after you've told them what words to look for. So clearly, our sensory systems can be affected by high-level information.
The take-home lesson is that, just as with brain-damaged patients, normal human brains can hold mutually inconsistent information in different parts. Two or more parts of your brain can "disagree" about something, and one part coming to "know" the truth doesn't automatically update the parts that disagree. Some kinds of information propagate to other parts of the brain, while other kinds stay isolated in their own modules.
Let's take a brief look at some issues related to modularity. "Why do people lock their refrigerator doors for the night?" is a question that has puzzled economists. Sure, you might lock your refrigerator door to make it harder to satisfy your night-time food cravings. But if people don't want to snack in the middle of the night, they could simply not snack; to an economist, the lock looks unnecessary.
In a unitary view of the mind, the mind has a vast store of information and various preferences. Faced with a decision, it integrates all the relevant information and produces the decision that best satisfies its preferences. Under this view, things such as the current time or what room you're in shouldn't matter to the outcome. If this were the case, nobody would ever need to lock their refrigerator door. Many people implicitly presume a unitary view of the mind, but as will be shown later on, a modular view explains this behavior much better.
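To make the contrast concrete, here's a minimal toy sketch in Python. The module names, contexts, and utility numbers are all invented for illustration; this is a sketch of the idea, not Kurzban's model. A unitary mind computes the same decision from its preferences regardless of context, while a modular mind's decision depends on which module happens to be in control at the moment:

```python
# Toy contrast between a unitary and a modular decision-maker.
# Module names, contexts, and utilities are invented for illustration.

def unitary_decide(preferences):
    """A unitary mind integrates one set of preferences and returns
    the same best option regardless of context."""
    return max(preferences, key=preferences.get)

def modular_decide(modules, context):
    """A modular mind's behavior depends on which module the current
    context puts in control."""
    controller = modules[context]  # context gates which module decides
    return max(controller, key=controller.get)

# The "planner" module cares about the diet; the "craving" module
# cares about food right now.
planner = {"snack": 1, "abstain": 10}
craving = {"snack": 10, "abstain": 1}
modules = {"daytime": planner, "3am": craving}

print(unitary_decide(planner))             # abstain, at any hour
print(modular_decide(modules, "daytime"))  # abstain
print(modular_decide(modules, "3am"))      # snack
```

On this toy picture, the refrigerator lock makes sense as the daytime module restricting the options that will be available when the 3 AM module is in control, a move a unitary mind would never need.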
Moral hypocrisy is another case of inconsistency. Suppose we had an android that had been programmed with a list of what is and isn't immoral. Such an android might consistently follow its rules and never act hypocritically. Clearly humans are not like this: our endorsed principles are not the only forces guiding our behavior. By postulating a modular mind whose parts hold different, even mutually inconsistent sets of beliefs, we can explain inconsistency and hypocrisy better than by presuming a unified mind.
The rest of the book further expands and builds on these concepts. Chapters 2 and 3 suggest that the human mind is made up of a very large number of subroutines, each serving a specific function, and that the concept of "self" is problematic and much less useful than people might think. Chapter 4 discusses the idea that if we view our mind as a government, then the conscious self is more like a press secretary than a president. Chapter 5 talks about modules that may not be designed to seek out the truth, and chapters 6 and 7 go further to discuss why some modules may actually function better if they're actively wrong instead of just ignorant. Chapters 8 and 9 show how inconsistencies in the modular mind create various phenomena relating to "self-control" and hypocrisy. I'll be summarizing the content of these chapters in later posts.
25 comments
Comments sorted by top scores.
comment by k3nt · 2011-08-06T18:16:14.179Z · LW(p) · GW(p)
Thanks for the link. I read the free chapter. The rest of it ... $15+ for a kindle version? Seriously?
Here's a line that spoke to me, toward the end of chapter 1:
"if you like the metaphor of your mind as a government, then “you”—the part of your brain that experiences the world and feels like you’re in “control”—is better thought of as a press secretary than as the president."
For those who haven't paid attention to too many press conferences, the job of the press secretary is to be a lying sack of s**t who will justify anything done by the administration, no matter how repugnant, stupid, immoral or illegal.
Which of course does seem to be the job of our 'rational' selves, way too much of the time.
comment by brazil84 · 2011-08-05T14:04:43.543Z · LW(p) · GW(p)
I agree 100%. I think Eliezer made this point very nicely a few years back:
Some philosophers have been much confused by such scenarios, asking, "Does the claimant really believe there's a dragon present, or not?" As if the human brain only had enough disk space to represent one belief at a time! Real minds are more tangled than that.
[...] Yet it is a physical fact that you can write "The sky is green!" next to a picture of a blue sky without the paper bursting into flames.
comment by Pavitra · 2011-08-03T18:33:53.131Z · LW(p) · GW(p)
I've never heard of people locking their refrigerator doors at night.
Replies from: Desrtopa, Xachariah, JamesAndrix, Raw_Power
↑ comment by Desrtopa · 2011-08-03T22:44:36.532Z · LW(p) · GW(p)
I've never heard of a refrigerator door with a lock.
Replies from: gwern
↑ comment by gwern · 2011-08-03T23:03:21.045Z · LW(p) · GW(p)
My family has one on the freezer in the garage. You use a lock on them when you worry about theft, or about accidental opening - in our case, the freezer has a couple thousand dollars of meat (we buy in bulk, half a year's supply or so), and we learned our lesson about the latch when it was left open for a day, half the meat thawed, and we lost several days and a lot of meat in dealing with it. When it's locked, it's definitely not coming open!
↑ comment by Xachariah · 2011-08-05T07:55:50.193Z · LW(p) · GW(p)
It is a common action for people who overeat or have special dietary restrictions.
My grandfather had to eat right for his heart condition, but he couldn't trust himself while half asleep. Just using willpower wasn't enough, since he couldn't think to exercise willpower in that state. The solution was to lock up the fridge and pantry at night, and make getting his copy of the key difficult enough to wake himself up fully. If one were more cynical: it also meant he couldn't just stay up and cheat, then blame 'sleep-eating', either.
Regardless, it got results and in the end that's what matters.
Replies from: jhuffman, Michelle_Z
↑ comment by Michelle_Z · 2011-08-23T22:44:11.519Z · LW(p) · GW(p)
Suddenly I realize why, when I decide the night before to wake up at 6:30 AM to do something, I instead shut the alarm off at 6:30 and go back to sleep.
↑ comment by JamesAndrix · 2011-08-09T08:05:46.627Z · LW(p) · GW(p)
When I saw that, I thought it was going to be an example of a nonsensical question, like "When did you stop beating your wife?".
comment by Shmi (shminux) · 2011-08-04T00:18:36.727Z · LW(p) · GW(p)
I recommend reading through the reviews on Amazon. They seem to suggest that the author never attempted to falsify his model of the modular mind, and has thus fallen prey to confirmation bias. Of course, it's best to read the actual book and form your own opinion.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2011-08-04T12:11:58.009Z · LW(p) · GW(p)
I just looked at the 20 most helpful reviews on Amazon. As far as I could tell, the only one that seemed to suggest those things was this one, which seemed to mostly critique Kurzban on fallacious grounds. Kurzban personally comments on the review, pointing out some of the fallacies.
Replies from: None
↑ comment by [deleted] · 2011-08-05T01:18:52.562Z · LW(p) · GW(p)
My overall impression from years of reading them is that if you look at only "helpful" reviews, you will filter out negative ones disproportionately to their actual helpfulness, presumably because of a human tendency to dislike negativity. One workaround is to look at, say, 3-star reviews (these tend to list both pros and cons) and sort just the three-star reviews by helpfulness. There are other workarounds. The first thing I usually do is sort by "newest first" and skim a couple of pages of those. Edit: but checking the Amazon page, there are only 23 reviews in all for that particular book, so you can just read all of them, which you seem to have done.
comment by gwern · 2011-08-03T18:44:46.491Z · LW(p) · GW(p)
Does Kurzban keep a clear distinction between what we ought to do and how we actually act and make decisions? Because from your review so far, it sounds as if he's mixing normative rationality with empirical observations, his 'is' and his 'ought', if you will.
It doesn't matter if my hemispheres have different information and different goals, they still control only one body and random grabbing of control back-and-forth is unlikely to be the optimal tradeoff between their utility functions, to say the least...
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2011-08-04T10:59:29.366Z · LW(p) · GW(p)
He mostly sticks to a descriptive view, and talks about why our brains work the way they do without making judgements about how they should work. The main exception is the last chapter and the epilogue, where he claims that while most inconsistencies probably don't cause major issues, we really should try to do something to overcome moral hypocrisy.
There are also a few places where he e.g. argues that psychology should spend more effort on figuring out the causes of our moral intuitions, and takes shots at critics of evolutionary psychology who attack evpsych on sloppy grounds. While both of those comments are normative, they relate more to the practice of science than to behavior in general.
comment by AmagicalFishy · 2011-08-08T00:47:35.647Z · LW(p) · GW(p)
This may be me trying to justify to myself the concept of "I" (oh how the mighty fall), but something struck me as a bit extreme:
When people ask questions like "What does the patient think?" or "What do I, [name], think of this?", I've always assumed that what's being questioned is the conscious mind. If a patient, for example, says, "I know that I don't have an arm, but I have a phantom limb. Part of my body still believes I have an arm," then "the patient" refers to the part of his brain that knows he doesn't have an arm, yeah?
I like the idea of comparing the "self" to a press secretary. Ideally, a press secretary is supposed to be honest. At LessWrong, people make conscious efforts to give the press secretary a few lessons in honesty. I can't dispute the fact that the mind is modular (I fully advocate it, even), but I think the idea of one's self—while it should be open to reconstruction—is an important one.
comment by scav · 2011-08-05T12:36:25.838Z · LW(p) · GW(p)
It clarifies a number of issues previously discussed on Overcoming Bias and Less Wrong, and even debunks some outright fallacious theories that we on Less Wrong have implicitly accepted.
I look forward to this bit, but I'll be disappointed if it only debunks something trivial or something that is already disputed here. If it debunks something I implicitly accept, then I'll be delighted of course.
Does it go into any detail about some of the modules a typical human mind contains, such that you can experiment on yourself without brain surgery?
comment by christopherj · 2013-11-02T02:10:59.650Z · LW(p) · GW(p)
It was an excellent read (I only read the free chapter 1). I've long known about the modularity of minds, but hadn't really thought it through to its conclusions. Mostly I've just been pointing out how my fight-or-flight activation is rather independent of my knowledge of the safety or danger of various activities (such as driving and roller coasters).
Now excuse me while I go ponder how to apply Bayesian updating to my mind's various modules, which may or may not take orders from my conscious mind. Also whether that would be a good idea -- I haven't read chapters 5-7, where the book talks about the benefits of being ignorant or wrong, but I am familiar with the tendency to overestimate one's capabilities, and with the suggestion that doing so is beneficial (in most cases).
comment by roland · 2011-08-06T23:19:50.952Z · LW(p) · GW(p)
Another good book:
How We Decide by Jonah Lehrer
http://www.amazon.com/How-We-Decide-Jonah-Lehrer/dp/0547247990
comment by Nisan · 2011-08-03T19:05:19.357Z · LW(p) · GW(p)
I look forward to seeing some of our assumptions challenged. Particularly this:
chapters 6 and 7 go further to discuss why some modules may actually function better if they're actively wrong instead of just ignorant
Replies from: JGWeissman
↑ comment by JGWeissman · 2011-08-03T19:12:27.207Z · LW(p) · GW(p)
I look forward to seeing some of our assumptions challenged. Particularly this:
chapters 6 and 7 go further to discuss why some modules may actually function better if they're actively wrong instead of just ignorant
Replies from: epigeios
↑ comment by epigeios · 2011-08-04T02:20:53.654Z · LW(p) · GW(p)
The assumption that what's right (as in true) is what's right (as in best). It's an assumption that comes from the experience of a rational mind, as a result of braving the valley of bad rationality. It is an assumption not often shared by irrational minds (religion, etc.).
That is, if there is a brain module that is actually better off being wrong, and not just because it makes the truth easier to find, then it shows that there is a probable undiscovered alternative.
Moreover, even if that's what it boils down to, that being wrong makes finding the right answer easier, this will at least challenge the assumptions common among brain scientists. Those assumptions that are born from an inability to grasp beyond the dimensions of the space of their current understanding. It is an extremely common trap for practicing scientists to fall into: forgetting that there is always another possibility; and if one limits oneself to knowledge related to what one is researching, then that other possibility is probably right.
Replies from: lessdazed
↑ comment by lessdazed · 2011-08-04T03:46:54.757Z · LW(p) · GW(p)
Nisan was referring to an assumption: IGNORANCE > FALSE BELIEF, for all beliefs. You are referring to the related assumption: if TRUE, then BEST TO BELIEVE, so TRUE BELIEF > IGNORANCE and TRUE BELIEF > FALSE BELIEF, once again all of these for all beliefs.
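Spelled out as orderings over a value function V (my paraphrase of the two assumptions, not notation from the book or the thread):

```latex
\begin{align*}
% Paraphrase of the two assumptions; V is an assumed value function
% over epistemic states, not notation from Kurzban or the commenters.
&\forall b:\ V(\text{ignorant of } b) > V(\text{falsely believing } b)
  && \text{(the assumption Nisan points at)}\\
&\forall b:\ b \text{ true} \Rightarrow V(\text{believing } b) > V(\text{ignorant of } b)
  \text{ and } V(\text{believing } b) > V(\text{falsely believing } b)
  && \text{(the assumption epigeios points at)}
\end{align*}
```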
I don't think either are widely assumed around here, and I think the link JGWeissman provided shows that.
the assumptions common among brain scientists. Those assumptions that are born from an inability to grasp beyond the dimensions of the space of their current understanding.
Like many others, I'm not familiar enough with the field to have any idea what those assumptions might be.