The limits of introspection
post by Scott Alexander (Yvain) · 2011-07-16T21:00:58.436Z · LW · GW · Legacy · 43 comments
Related to: Inferring Our Desires
The last post in this series suggested that we make up goals and preferences for other people as we go along, but ended with the suggestion that we do the same for ourselves. This deserves some evidence.
One of the most famous sets of investigations into this issue was Nisbett and Wilson's Verbal Reports on Mental Processes, the discovery of which I owe to another Less Wronger even though I can't remember who. The abstract says it all:
When people attempt to report on their cognitive processes, that is, on the processes mediating the effects of a stimulus on a response, they do not do so on the basis of any true introspection. Instead, their reports are based on a priori, implicit causal theories, or judgments about the extent to which a particular stimulus is a plausible cause of a given response. This suggests that though people may not be able to observe directly their cognitive processes, they will sometimes be able to report accurately about them. Accurate reports will occur when influential stimuli are salient and are plausible causes of the responses they produce, and will not occur when stimuli are not salient or are not plausible causes.
In short, people guess, and sometimes they get lucky. But where's the evidence?
Nisbett & Schachter, 1966. People were asked to get electric shocks to see how much shock they could stand (I myself would have waited to see if one of those see-how-much-free-candy-you'll-eat studies from the post last week was still open). Half the subjects were also given a placebo pill which they were told would cause heart palpitations, tremors, and breathing irregularities - the main problems people report when they get shocked. The hypothesis: people who took the pill would attribute much of the unpleasantness of the shock to the pill instead, and so tolerate more shock. This occurred right on schedule: people who took the pill tolerated four times as strong a shock as controls. When asked why they did so well, the twelve subjects in the experimental group came up with fabricated reasons; one example given was "I played with radios as a child, so I'm used to electricity." Only three of twelve subjects made a connection between the pill and their shock tolerance; when the researchers revealed the deception and their hypothesis, most subjects said it was an interesting idea and probably explained the other subjects, but it hadn't affected them personally.
Zimbardo et al, 1965. Participants in this experiment were probably pleased to learn there were no electric shocks involved, right up until the point where the researchers told them they had to eat bugs. In one condition, a friendly and polite researcher made the request; in another, a surly and arrogant researcher asked. Everyone ate the bug (experimenters can be pretty convincing), but only the group accosted by the unpleasant researcher claimed to have liked it. This confirmed the team's hypothesis: the nice-researcher group would know why they ate the bug - to please their new best friend - but the mean-researcher group would either have to admit it was because they're pushovers, or explain it by saying they liked eating bugs. When asked after the experiment why they were so willing to eat the bug, they said things like "Oh, it's just one bug, it's no big deal." When presented with the idea of cognitive dissonance, they once again agreed it was an interesting idea that probably affected some of the other subjects but of course not them.
Maier, 1931. Subjects were placed in a room with several interesting tools and asked to come up with as many solutions as possible to a puzzle about tying two cords together. One end of each cord was tied to the ceiling, and when the subject was holding on to one cord they couldn't reach the other. A few solutions were obvious, such as tying an extension cord to each, but the experiment involved a more complicated solution - tying a weight to a cord and using it as a pendulum to bring it into reach of the other. Subjects were generally unable to come up with this idea on their own in any reasonable amount of time, but when the experimenter, supposedly in the process of observing the subject, "accidentally" brushed up against one cord and set it swinging, most subjects were able to develop the solution within 45 seconds. However, when the experimenter asked immediately afterwards how they came up with the pendulum idea, the subjects were completely unable to recognize the experimenter's movement as the cue, and instead came up with completely unrelated ideas and invented thought processes, some rather complicated. After what the study calls "persistent probing", less than a third of the subjects mentioned the role of the experimenter.
Latane & Darley, 1970. This is the famous "bystander effect", where people are less likely to help when there are others present. The researchers asked subjects in bystander effect studies what factors influenced their decision not to help; the subjects gave many, but didn't mention the presence of other people.
Nisbett & Wilson, 1977. Subjects were primed with lists of words all relating to an unlisted word (eg "ocean" and "moon" to elicit "tide"), and then asked a question, one possible answer to which involved the unlisted word (eg "What's your favorite detergent?" "Tide!"). The experimenters confirmed that many more people who had been primed with the lists gave the unlisted answer than control subjects (eg more people who had memorized "ocean" and "moon" gave Tide as their favorite detergent). Then they asked subjects why they had chosen their answer, and the subjects generally gave totally unrelated responses (eg "I love the color of the Tide box" or "My mother uses Tide"). When the experiment was explained to subjects, only a third admitted that the words might have affected their answer; the rest kept insisting that Tide was really their favorite. Then they repeated the process with several other words and questions, continuing to ask if the word lists influenced answer choice. The subjects' answers were effectively random - sometimes they believed the words didn't affect them when statistically they probably did, other times they believed the words did affect them when statistically they probably didn't.
Nisbett & Wilson, 1977. Subjects in a department store were asked to evaluate different articles of clothing in a line. As usually happens in this sort of task, people disproportionately chose the rightmost object (four times as often as the leftmost), no matter which object was on the right; this is technically referred to as a "position effect". The customers were asked to justify their choices and were happy to do so based on different qualities of the fabric et cetera; none said their choice had anything to do with position, and the experimenters dryly mention that when they asked the subjects if this was a possibility, "virtually all subjects denied it, usually with a worried glance at the interviewer suggesting they felt that they...were dealing with a madman".
Nisbett & Wilson, 1977. Subjects watched a video of a teacher with a foreign accent. In one group, the video showed the teacher acting kindly toward his students; in the other, it showed the teacher being strict and unfair. Subjects were asked to rate how much they liked the teacher, and also how much they liked his appearance and accent, which were the same across both groups. Because of the halo effect, students who saw the teacher acting nice thought he was attractive with a charming accent; people who saw the teacher acting mean thought he was ugly with a harsh accent. Then subjects were asked whether how much they liked the teacher had affected how much they liked the appearance and accent. They generally denied any halo effect, and in fact often insisted that part of the reason they hated the teacher so much was his awful clothes and annoying accent - the same clothes and accent which the nice-teacher group said were part of the reason they liked him so much!
There are about twice as many studies listed in the review article itself, but the trend is probably getting pretty clear. In some studies, like the bug-eating experiment, people perform behaviors and, when asked why they performed the behavior, guess wrong. Their true reasons for the behavior are unclear to them. In others, like the clothes position study, people make a choice, and when asked what preferences caused the choice, guess wrong. Again, their true reasons are unclear to them.
Nisbett and Wilson add that when they ask people to predict how they would react to the situations in their experiments, people "make predictions that in every case were similar to the erroneous reports given by the actual subjects." In the bystander effect experiment, outsiders predict that the presence or absence of others wouldn't affect their decision to help, and subjects claim (wrongly) that the presence or absence of others didn't affect their decision to help.
In fact, it goes further than this. In the word-priming study (remember? The one with Tide detergent?) Nisbett and Wilson asked outsiders to predict which sets of words would change answers to which questions (would hearing "ocean" and "moon" make you pick Tide as your favorite detergent? Would hearing "Thanksgiving" make you pick Turkey as a vacation destination?). The outsiders' guesses correlated not at all with which words genuinely changed answers, but very much with which words the subjects guessed had changed their answers. Perhaps the subjects' answers looked a lot like the outsiders' answers because both were engaged in the same process: guessing blindly.
These studies suggest that people do not have introspective access to the processes that generate their behavior. They guess their preferences, justifications, and beliefs by inferring the most plausible rationale for their observed behavior, but are unable to make these guesses qualitatively better than outside observers. This supports the view presented in the last few posts: that our behavior is the result of opaque mental processes, and that our own "introspected" goals and preferences are a product of the same machinery that infers goals and preferences in others in order to predict their behavior.
43 comments
Comments sorted by top scores.
comment by Academian · 2012-12-31T23:26:47.234Z · LW(p) · GW(p)
tl;dr: I was excited by this post, but so far I find the cited literature uncompelling :( Can you point us to a study we can read where the authors reported enough of their data and procedure that we can all tell that their conclusion was justified?
I do trust you, Yvain, and I know you know stats, and I even agree with the conclusion of the post --- that people are imperfect introspectors --- but I'm discouraged to continue searching through the literature myself at the moment because the first two articles you cited just weren't clear enough on what they were doing and measuring for me to tell if their conclusions were justified, other than by intuition (which I already share).
For example, none of your summaries says whether the fraction of people who noticed the experimenters' effect on their behavior was enough to explain the difference between the two experimental groups, and the same goes for the 1977 review article you cited as your main source.
I looked in more detail at your first example, the electric shocks experiment (Nisbett & Schachter, 1966), on which you report
... people who took the pill tolerated four times as strong a shock as controls ... Only three of twelve subjects made a connection between the pill and their shock tolerance ...
I was wondering, did the experimenters merely observe
(1) a "Statistically Significant" difference between PILL-GROUP and CONTROL-GROUP? And then say "Only 3 of 12 people in the pill group managed to detect the effect of the placebo on themselves?"
Because that's not a surprise, given the null hypothesis that people are good introspectors... maybe just those three people were affected, and that caused the significant difference between the groups! And jumping to conclusions from (1) is a kind of mistake I've seen before from authors assuming (if not in their minds, at least in their statistical formulae) that an effect is uniform across people, when it probably isn't (a toy simulation after the two options below illustrates the worry).
Or, did the experimenters observe that
(2) believing that only those three subjects were actually affected by (their knowledge of) the pill was not enough to explain the difference between the groups?
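(To make the worry concrete, here is a toy simulation with entirely invented numbers and a hypothetical normal-noise model. Both stories below produce roughly the same gap in group means, so a group-level summary alone can't distinguish "everyone was a little affected" from "three people were hugely affected":)

```python
# Toy illustration of the uniform-effect vs. heterogeneous-effect worry.
# All numbers are invented; the real study reported shock amperages.
import numpy as np

rng = np.random.default_rng(0)
n = 12                                      # subjects per group
control = rng.normal(1.0, 0.3, n)           # baseline tolerance, arbitrary units

# Story A: the placebo story raised every pill subject's tolerance a little.
pill_uniform = rng.normal(1.75, 0.3, n)

# Story B: only three subjects were affected, but strongly
# (say, the three who later connected the pill to their tolerance).
pill_hetero = np.concatenate([rng.normal(1.0, 0.3, 9),
                              rng.normal(4.0, 0.3, 3)])

for label, pill in [("uniform effect", pill_uniform),
                    ("three-responder effect", pill_hetero)]:
    print(f"{label}: control mean {control.mean():.2f}, "
          f"pill mean {pill.mean():.2f}")

# Both stories yield pill-group means near 1.75 vs. controls near 1.0,
# so the reported group difference can't tell us whether the nine
# "non-introspectors" were affected at all.
```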
To see what the study really found, after many server issues with the journal website I tracked down the original 1966 article, which I've made available here. The paper doesn't mention anything about people's assessments of whether being (told they were) given a pill may have affected their pain tolerance.
Wondering why you wrote that, I went to the 1977 survey article you read, which I've made available as a searchable pdf here. There they say, at the bottom left of page 237, that their conclusion about the electric shocks vs pills was based on "additional unpublished data, collected from ... experiments by Nisbett and Schachter (1966)". But their description of that was almost as terse as your summary, and in particular, included no statistical reasoning.
Like I said, I do intuitively agree with the conclusion that people are imperfect introspectors, but I worry that the authors and reviewers of this article may have been sloppy in finding clear, quantitative evidence for this perspective, perhaps by being already too convinced of it...
comment by KPier · 2011-07-16T23:57:08.040Z · LW(p) · GW(p)
I've noticed on your last posts that most of the studies cited are decades old; is that because this is considered a settled question in behavioral science, because a lot of these experiments wouldn't pass modern ethics standards, or something else?
typo:
when the subject was holding on two one cord ey couldn't reach the other.
Replies from: Yvain, Miller
↑ comment by Scott Alexander (Yvain) · 2011-07-17T03:28:10.135Z · LW(p) · GW(p)
It's because I am lazy enough that I took all of these from a single excellent review article on the subject written in the late 1970s. As far as I know, the research since then has confirmed the same points.
Fixed the typo, and thank you, but I find the mental processes generating it interesting. I used "two" instead of "to" right before the number "one" - my guess is that my being about to write "one" semantically primed my concept of "number", making me write "to" as "two". If I could think of a way to search, I'd love to see how many of the same two/to errors on the Internet occur right around a mention of a number.
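(A minimal sketch of one way to run that search, assuming you already have some plain text to scan; the corpus variable and the sample strings here are hypothetical, and the pattern would need tuning to filter out grammatical uses:)

```python
# Look for "two" used where "to" was probably intended, i.e. "two"
# directly preceding a number word, as in "holding on two one cord".
import re

NUMBER_WORDS = r"(?:one|two|three|four|five|six|seven|eight|nine|ten|\d+)"
pattern = re.compile(rf"\btwo\s+{NUMBER_WORDS}\b", re.IGNORECASE)

corpus = [
    "when the subject was holding on two one cord",  # the typo in question
    "she gave two one-hour lectures",                # grammatical false positive
]

for text in corpus:
    for match in pattern.finditer(text):
        print(repr(match.group()), "in:", text)
```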
↑ comment by Miller · 2011-07-17T00:08:09.477Z · LW(p) · GW(p)
Strikes me as a behaviorist -> cognitivist paradigm shift. Scientists just got tired of the old way (or more specifically, it simply stopped being new). That'd be my armchair guess.
edit: Someone better qualified should answer that. I'm not even sure that's behaviorism.
comment by calcsam · 2011-07-16T23:56:34.721Z · LW(p) · GW(p)
Solution: use large N by watching for recurring patterns in oneself, instead of trying to say too much about any particular data point.
Replies from: lessdazed, SilasBarta
↑ comment by lessdazed · 2011-07-17T11:43:45.973Z · LW(p) · GW(p)
Am I different enough from others to sacrifice the even larger N of everything I observe any person does?
Replies from: calcsam
↑ comment by calcsam · 2011-07-17T14:29:01.452Z · LW(p) · GW(p)
Confused, could you elaborate?
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2011-07-17T18:04:39.234Z · LW(p) · GW(p)
My interpretation: you'd suggested improving the accuracy of my guesses about myself by observing recurring patterns of my behavior. I think lessdazed is countersuggesting improving the accuracy of my guesses about people by observing recurring patterns of everyone's behavior. If I'm similar enough to community averages, then lessdazed's approach will work better (thanks to the larger data set); if I'm too dissimilar it will work worse. Thus the question: am I different enough from others to justify looking for patterns only in my own behavior?
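(A toy numerical version of this tradeoff, with made-up units and a hypothetical noise model: borrowing the population average wins while you are close to typical, and loses once your distance from the average exceeds the sampling error of your small self-sample.)

```python
# Toy model: estimate your own behavioral tendency either from a few
# noisy self-observations or by borrowing the population average.
# All quantities are invented for illustration, not data.
import numpy as np

rng = np.random.default_rng(1)
pop_mean = 0.0        # population's average tendency (known from a huge N)
noise = 1.0           # noise in each self-observation
n_self = 20           # how many of your own actions you've observed
trials = 10_000       # Monte Carlo repetitions

for distance in [0.0, 0.2, 0.5, 1.0]:        # how atypical you are
    true_self = pop_mean + distance
    obs = rng.normal(true_self, noise, (trials, n_self))
    mse_self = ((obs.mean(axis=1) - true_self) ** 2).mean()
    mse_pop = (pop_mean - true_self) ** 2
    winner = "self-observation" if mse_self < mse_pop else "population average"
    print(f"distance {distance:.1f}: self MSE {mse_self:.3f}, "
          f"population MSE {mse_pop:.3f} -> prefer {winner}")
```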
Replies from: calcsam
↑ comment by SilasBarta · 2011-07-17T01:46:25.072Z · LW(p) · GW(p)
Right. Or, you know, you could just pray and get your answers from the Ultimate Font of knowledge. I heard that source throws off all kinds of evidence of the validity of its claims.
Replies from: fubarobfusco
↑ comment by fubarobfusco · 2011-07-17T04:59:53.538Z · LW(p) · GW(p)
Umm ... why is this relevant here?
Replies from: SilasBarta, endoself
↑ comment by SilasBarta · 2011-07-18T13:46:08.311Z · LW(p) · GW(p)
I was taking calcsam's purported epistemology and evidentiary standards for arguments seriously. I agree it was a bit "guerilla" though.
↑ comment by endoself · 2011-07-17T20:16:53.203Z · LW(p) · GW(p)
It is not relevant at all. It's an ad hominem; calcsam is a theist. I have downvoted it.
Replies from: MixedNuts
↑ comment by MixedNuts · 2011-07-17T20:20:04.394Z · LW(p) · GW(p)
The more charitable interpretation is that Silas is saying "Yeah, right, that would totally work. About as well as crossing your fingers and guessing."
Which is still pretty shitty.
Replies from: SilasBarta
↑ comment by SilasBarta · 2011-07-18T13:46:40.802Z · LW(p) · GW(p)
Not ad hominem, ad epistemologiam.
comment by Will_Newsome · 2011-07-16T23:23:18.635Z · LW(p) · GW(p)
So, so you think you can tell Heaven from Hell,
blue skies from pain.
Can you tell a green field from a cold steel rail?
A smile from a veil?
Do you think you can tell?
And did they get you to trade your heroes for ghosts?
Hot ashes for trees?
Hot air for a cool breeze?
Cold comfort for change?
And did you exchange a walk on part in the war for a lead role in a cage?
— "Wish You Were Here" by Pink Floyd from the album Wish You Were Here.
Replies from: Will_Newsome
↑ comment by Will_Newsome · 2011-07-17T01:08:38.522Z · LW(p) · GW(p)
(This is what I internally sing to myself whenever someone tells me they think they know what their values are. The part about trades has a hint of self-deception-induced susceptibility to Dutch books, at least in one pragmatic interpretation, though of course the lyrics are ambiguous on many levels.)
comment by ksvanhorn · 2011-07-17T03:53:49.241Z · LW(p) · GW(p)
OK, so you've given us the bad news. Is there any good news, i.e., research showing how you can get a more accurate picture of what your true preferences are and the true reasons for your behaviors?
Replies from: Yvain, lessdazed
↑ comment by Scott Alexander (Yvain) · 2011-07-17T21:45:16.975Z · LW(p) · GW(p)
I don't think there are true preferences. In one situation you have one tendency, in another situation you have another tendency, and "preference" is what it looks like when you try to categorize tendencies. But categorization is a passive and not an active process: if every day of the week I eat dinner at 6, I can generalize to say "I prefer to eat dinner at 6", but it would be false to say that some stable preference toward dinner at 6 is what causes my behavior on each day.
I think the best way to salvage preferences is to consider them as tendencies in reflective equilibrium. I'll explain that more later.
Replies from: Curiouskid
↑ comment by Curiouskid · 2011-12-26T14:40:43.928Z · LW(p) · GW(p)
Just thought I'd link to the place where you address this question so that others don't have to go digging around.
comment by Armok_GoB · 2011-07-17T16:36:03.132Z · LW(p) · GW(p)
One simple policy to adopt in response to this is blacklisting certain types of questions and refusing to answer them. My current blacklist is "What is your favourite?" and "Why did you do that?" in their various forms.
Any other general categories like those two that need to be on the list?
Any tips for remembering to apply them and for spotting more indirect phrasings of them, including nonverbal internal phrasings? (I think this might be related to the 5-second-level stuff.)
Any better ideas for what to answer instead of "I don't remember" and "I don't know" when explaining the phenomena and linking to articles online is not an option? Most often it isn't, and I suspect using those oversimplifications might be making me forget the real reason and generally causing havoc.
Replies from: TheOtherDave
↑ comment by TheOtherDave · 2011-07-17T18:08:42.095Z · LW(p) · GW(p)
My usual response to "what's your favorite X?" is "well, 'favorite' is hard, but here are some Xes I like..."
Replies from: Armok_GoB
↑ comment by Armok_GoB · 2011-07-17T19:41:14.111Z · LW(p) · GW(p)
Well, that's what they usually end up asking eventually, but I don't like answering a different question than the one actually asked, given how many people won't notice. It feels like dark arts.
Replies from: Nisan, Jade
↑ comment by Nisan · 2011-07-17T22:50:45.301Z · LW(p) · GW(p)
Oftentimes the role of "What is your favorite X?" is a conversation-starter. I think it's perfectly honest to interpret "What's your favorite kind of music?" as "What's a kind of music that you like and which you're willing to have a conversation about?".
An exception is "What's your favorite superhero?", because you may be called on to defend your choice in an argument.
Replies from: Armok_GoB
↑ comment by Jade · 2011-07-25T22:07:33.951Z · LW(p) · GW(p)
When asked for favorites or 'what do you like to do for fun,' I offer recommendations (for myself and/or the questioner). Or, to help the questioner generate recommendations, I give recent likes, potential likes, and/or liked/disliked characteristics. This way, we have ideas of what to do in the future and don't get stuck on past interests or activities that have become boring. The NY Times website also uses the word “recommend,” instead of “like,” on its Facebook-share button. [If you didn’t know this already: information about your preferences may be used by another’s (esp. a stranger’s) brain to calibrate how much to associate with or help you; see for example “Musical Taste and Ingroup Favouritism:” http://gpi.sagepub.com/content/12/3/319.abstract.]
"Why did you do that?" --> "Multiple factors..."
"What will you do" or "What should you do?"--> "Depends..."
comment by Jordan · 2011-07-16T23:50:24.166Z · LW(p) · GW(p)
Great post, great review of the literature.
Where do you get most of your references? Do you wade through the literature, or do you use review papers? I'd love to see a book length compilation with the same density as this post.
Replies from: Yvain
↑ comment by Scott Alexander (Yvain) · 2011-07-17T03:31:42.589Z · LW(p) · GW(p)
With one or two exceptions, these were all taken from the link "Verbal Reports on Mental Processes" at the beginning of the post.
comment by Khaled · 2011-07-17T06:54:25.482Z · LW(p) · GW(p)
I think one useful thing is to try to find out why some explanations are more plausible than others (the guessing process seems standard, so which explanation is actually true won't affect the guess much).
When asked a question by an experimenter, I imagine myself trying to give a somewhat quick answer (rather than asking the experimenter to repeat the experiment isolating some variables so that I can answer accurately). I imagine my mind going through reasons until it hits one that sounds OK, i.e. one that would convince me if I heard it from someone else, and picking that up.
Many of these studies don't seem to treat time-to-answer as a variable. What if the subjects were asked to think it over for 30 minutes before answering? I am not suggesting they would get the right answer, but perhaps a different one, since different brain parts may then be involved in the decision.
Replies from: Spurlock
comment by Khaled · 2011-07-22T10:23:15.682Z · LW(p) · GW(p)
In relation to connectionism, wouldn't that be the expected behavior? Taking the example of Tide: wouldn't we expect "ocean" and "moon" to give a head start to "Tide" when the "favorite detergent" question fires up all the detergent names in the brain, but not expect "Tide", "favorite", and "why" to give a head start to "ocean" and "moon"?
Perhaps the time between eliciting "Tide" and asking for the reason for choosing it is also relevant (asking for the reason while "ocean" and "moon" are still active in the brain should give them more chance of being chosen as the reason)?
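(A toy spreading-activation sketch of that asymmetry. The network, weights, and node names are all invented for illustration; the point is only that activation flows forward from the primes into "tide", while the "why" question activates plausible-reason nodes instead of flowing backward to the primes.)

```python
# Minimal directed spreading-activation toy model (invented weights).
links = {
    "ocean": {"tide": 0.5},
    "moon":  {"tide": 0.5},
    "detergent-question": {"tide": 0.3, "cheer": 0.3, "gain": 0.3},
    "why-question": {"box color": 0.4, "mother uses it": 0.4},
}
activation = {}

def spread(source, strength=1.0):
    """Activate a node and pass a weighted share along its outgoing links."""
    activation[source] = activation.get(source, 0.0) + strength
    for target, weight in links.get(source, {}).items():
        activation[target] = activation.get(target, 0.0) + strength * weight

# Memorizing the word list primes "tide" through its associates...
for word in ["ocean", "moon"]:
    spread(word)

# ...so when the detergent question fires up all the detergent names,
# "tide" starts ahead of the competition:
spread("detergent-question")
print({d: activation.get(d, 0.0) for d in ["tide", "cheer", "gain"]})

# But asking "why Tide?" activates plausible reasons, not the primes,
# so the real cause never surfaces as a justification:
spread("why-question")
print({r: activation[r] for r in ["box color", "mother uses it"]})
```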
comment by Cerberus · 2011-07-17T11:46:15.593Z · LW(p) · GW(p)
The classical way to deal with this problem is critical thinking: whenever you seem to arrive at a certain conclusion, do your utmost to defend the opposite conclusion (or some other proposition entirely). If this is at all possible, you must admit that you simply do not know the answer (yet).
Replies from: Spurlock
↑ comment by Spurlock · 2011-07-18T17:11:48.798Z · LW(p) · GW(p)
Yes, but while this works in principle, there are a number of ways in which this process can fail in humans. Suffice to say, it takes a lot of knowledge and practice to be able to do this in a trustworthy way, and we don't have any real data showing that even veteran "rationalists" actually do this more effectively.
Replies from: Cerberus
comment by thescoundrel · 2011-11-14T22:44:17.020Z · LW(p) · GW(p)
So, the fundamental attribution error tells us that everyone is doing things for reasons that adhere to an internally consistent story... does this tell us that we are all making up that internally consistent story as we go, with little to no understanding of the true influences on our decision making? At some point, we all believe that we can improve our decision making ability - that would seem to be why we are all here. Is the important takeaway here that we need to constantly review our actions before we take them, to make sure they are rational?
Replies from: lessdazed, dlthomas
↑ comment by lessdazed · 2011-11-15T01:33:27.324Z · LW(p) · GW(p)
So, the fundamental attribution error tells us that everyone is doing things for reasons that adhere to an internally consistent story...
I'm not sure how to read this sentence.
People commit the fundamental attribution error when they think others do things nearly entirely due to their natures, when in reality their situations are extremely influential.
Depending on what you meant, my further response is one of the following: many different things could be fit into an internally consistent story, so people's actions aren't too dependent on that (but see also cached selves); we don't think that people are trying to conform to their stories, but that they are expressing their nature.
does this tell us that we are all making up that internally consistent story as we go, with little to no understanding of the true influences on our decision making?
Our never-ending excuse making for ourselves is miraculously somewhat closer to correct than our evaluating the causes of others' actions, though it may be systematically untrue in the opposite way. We don't understand others' decision making, but we don't naturally understand our own either.
Is the important takeaway here that we need to constantly review our actions before we take them, to make sure they are rational?
This is a bad way to phrase it because it implies the first step is generating a plan for action and the second is checking to see if it is rational, but we know the program to generate it is greatly influenced by the most mundane and irrelevant things. It could be the right plan, particularly if there are only a few possible responses, but if there are many, it pretty much won't be.
Less wrong thought needs to be part of the entire process - it needs to guide the entire planning stage, not just the thoughts in it. For example, for some types of important phone calls I dress a certain way - this is part of organizing my responses such that they are optimal. I don't do phone job interviews lying naked on the couch and then deliberate over what comes to mind naturally as words to say in the conversation, thinking, "would this be good to say or not?" I get dressed, sit at my desk, plan (in a calm state) what responses should be if something is said, etc.
Replies from: thescoundrel
↑ comment by thescoundrel · 2011-11-15T03:35:26.985Z · LW(p) · GW(p)
Our never-ending excuse making for ourselves is miraculously somewhat closer to correct than our evaluating the causes of others' actions, though it may be systematically untrue in the opposite way. We don't understand others' decision making, but we don't naturally understand our own either.
Can you clarify what you mean here? Right now, all I read from it is "We have a slightly greater probability of correctly identifying the causes of our own actions than an outside observer does." While that may be correct in some cases, it actually seems to contradict the focus of the text:
These studies suggest that people do not have introspective access to the processes that generate their behavior. They guess their preferences, justifications, and beliefs by inferring the most plausible rationale for their observed behavior, but are unable to make these guesses qualitatively better than outside observers.
Concerning this statement:
Less wrong thought needs to be part of the entire process - it needs to guide the entire planning stage, not just the thoughts in it.
If decision making begins before conscious thought, and is "greatly influenced by the most mundane and irrelevant things", and the conscious portion of the brain is the only part we can consistently use for planned, rational decision making (learned rational habits notwithstanding), then it would follow that a first step before we take action is to review the action we are about to take and make sure it lines up with rational thought. Since we know our brain will explain why it is taking the action independently of the actual cause, only by ensuring the action is rational before we take it can we keep from undercutting ourselves as we attempt to accomplish our goals.
comment by [deleted] · 2014-06-13T06:41:36.631Z · LW(p) · GW(p)
Rational expectations are exhausting!
"Participants considered introspective information more than behavioral information for assessing bias in themsel ves, but not others. This divergence did not arise simply from di V erences in introspective access. The blind spot persisted when observers had access to the introspections of the actor whose bias they judged. And, participants claimed that they, but not their peers, should rely on introspections when making self-assessments of bias. Only after being educated about the importance of nonconscious processes in guiding judgm ent and action—and thereby about the fallibility of introspection—did participants cease denying their relative susceptibility to bias"'