Anchoring and Adjustment
post by Eliezer Yudkowsky (Eliezer_Yudkowsky)
Suppose I spin a Wheel of Fortune device as you watch, and it comes up pointing to 65. Then I ask: Do you think the percentage of countries in the United Nations that are in Africa is above or below this number? What do you think is the percentage of UN countries that are in Africa? Take a moment to consider these two questions yourself, if you like, and please don’t Google.
Also, try to guess, within five seconds, the value of the following arithmetical expression. Five seconds. Ready? Set . . . Go!
1 × 2 × 3 × 4 × 5 × 6 × 7 × 8
Tversky and Kahneman recorded the estimates of subjects who saw the Wheel of Fortune showing various numbers.1 The median estimate of subjects who saw the wheel show 65 was 45%; the median estimate of subjects who saw 10 was 25%.
The current theory for this and similar experiments is that subjects take the initial, uninformative number as their starting point or anchor; and then they adjust upward or downward from their starting estimate until they reach an answer that “sounds plausible”; and then they stop adjusting. This typically results in under-adjustment from the anchor—more distant numbers could also be “plausible,” but one stops at the first satisfying-sounding answer.
Similarly, students shown “1 × 2 × 3 × 4 × 5 × 6 × 7 × 8” made a median estimate of 512, while students shown “8 × 7 × 6 × 5 × 4 × 3 × 2 × 1” made a median estimate of 2,250. The motivating hypothesis was that students would try to multiply (or guess-combine) the first few factors of the product, then adjust upward. In both cases the adjustments were insufficient, relative to the true value of 40,320; but the first set of guesses was much more insufficient because it started from a lower anchor.
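The gap between the estimates and the true value is easy to check directly; this sketch just redoes the arithmetic from the passage above:

```python
from functools import reduce
from operator import mul

# The true value of 1 × 2 × ... × 8 (i.e., 8!).
true_value = reduce(mul, range(1, 9))  # 40320

# Median estimates from Tversky and Kahneman's subjects.
ascending_estimate = 512    # subjects shown "1 × 2 × ... × 8"
descending_estimate = 2250  # subjects shown "8 × 7 × ... × 1"

# Both groups under-adjusted; the ascending group far more so.
print(true_value)                       # 40320
print(true_value / ascending_estimate)  # 78.75 — about 79x too low
print(true_value / descending_estimate) # 17.92 — about 18x too low
```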
Tversky and Kahneman report that offering payoffs for accuracy did not reduce the anchoring effect.
Strack and Mussweiler asked for the year Einstein first visited the United States.2 Completely implausible anchors, such as 1215 or 1992, produced anchoring effects just as large as more plausible anchors such as 1905 or 1939.
There are obvious applications in, say, salary negotiations, or buying a car. I won’t suggest that you exploit it, but watch out for exploiters.
And watch yourself thinking, and try to notice when you are adjusting a figure in search of an estimate.
Debiasing manipulations for anchoring have generally proved not very effective. I would suggest these two: First, if the initial guess sounds implausible, try to throw it away entirely and come up with a new estimate, rather than sliding from the anchor. But this in itself may not be sufficient—subjects instructed to avoid anchoring still seem to do so.3 So, second, even if you are trying the first method, try also to think of an anchor in the opposite direction—one that is clearly too small if the original anchor was too large, or clearly too large if it was too small—and dwell on it briefly.
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by Andrew2 ·
2007-09-08T00:02:36.000Z
When I do this demo in class (see here for details or here for the brief version), I phrase it as "the percentage of countries in the United Nations that are in Africa." This seems less ambiguous than Kahneman and Tversky's phrasing (although, I admit, I haven't done any experiment to check). It indeed works in the classroom setting, although with smaller effects than reported by Kahneman and Tversky (see page 89 of the linked article above).
↑ comment by [deleted] ·
2011-06-26T23:25:10.873Z
That book is indeed a great one and I have used many ideas from it in teaching an undergraduate probability class myself. I'm a grad student in applied math, so I may not see you at many of the same conferences; LW appears to be as good a place as any to say thanks. The Bayesian Data Analysis book is also quite good.
comment by Brett_A._Thomas ·
2007-09-08T00:30:32.000Z
By the way, I'm very tired, so this might just be my misreading, but I found the UN question to be ambiguous - "Do you think the percentage of African countries in the UN is above or below [65%]?" I read that as, "Of all the countries in Africa, what percentage of them are in the UN?", not as what I believe to be the intended "Of all the countries that are in the UN, how many of them are African?" The answer to the former can quite obviously be guessed as "100% or darn close", but the answer to the latter is less obvious.
comment by Philip_Hunt ·
2007-09-08T03:32:42.000Z
Brett: """I found the UN question to be ambiguous - "Do you think the percentage of African countries in the UN is above or below [65%]?" I read that as, "Of all the countries in Africa, what percentage of them are in the UN?", not as what I believe to be the intended "Of all the countries that are in the UN, how many of them are African?" [...]"""
I don't think it's ambiguous at all. The question, as worded, clearly means "Of all the countries in Africa, what percentage of them are in the UN?". And equally clearly, that's not what the questioner intended.
comment by Unnamed2 ·
2007-09-08T03:58:01.000Z
You're a few years behind on this research, Eliezer.
The point of the research program of Mussweiler and Strack is that anchoring effects can occur without any adjustment. "Selective Accessibility" is their alternative, adjustment-free process that can produce estimates that are too close to the anchor. The idea is that, when people are testing the anchor value, they bring to mind information that is consistent with the correct answer being close to the anchor value, since that information is especially relevant for answering the comparative question. When they are then asked for their own estimate, they rely on that biased set of information that is already accessible in their mind, which produces estimates that are biased towards the anchor.
In 2001, Epley and Gilovich published their first of several papers designed to show that, while the Selective Accessibility process occurs and creates adjustment-free anchoring effects, there are also cases where people do adjust from an anchor value, just as Kahneman & Tversky claimed. The examples that they've used in their research are trivia questions like "What is the boiling point of water on Mount Everest?" where subjects will quickly think of a relevant, but wrong, number on their own, and they'll adjust from there based on their knowledge of why the number is wrong. In this case, most subjects know that 212F is the boiling point of water at sea level, but water boils at lower temperatures at altitude, so they adjust downward. This anchoring & adjustment process also creates estimates that are biased towards the anchor, since people tend to stop adjusting too soon, once they've reached a plausible-seeming value.
Gilovich and Epley have shown that subjects give estimates farther from the anchor (meaning that they are adjusting more) on these types of questions when they are given incentives for accuracy, warned about the biasing effect of anchors, high in Need for Cognition (the dispositional tendency to think things through a lot), or shaking their head (which makes them less willing to stop at a plausible-seeming value; head-nodding produces even less adjustment than baseline). None of these variables matters on the two-part questions with an experimenter-provided anchor, like the Africa UN percentage, where Selective Accessibility seems to be the process creating anchoring effects. The relevance of these variables is the main evidence for their claim that adjustment occurs with one type of anchoring procedure but not the other.
The one manipulation that has shown some promise at debiasing Selective Accessibility based anchoring effects is a version of the "consider the opposite" advice that Eliezer gives. Mussweiler, Strack & Pfeiffer (2000) argued that this strategy helps make a more representative set of information accessible in subjects' minds, and they did find debiasing when they gave subjects targeted, question-specific instructions on what else to consider. But they did not try teaching subjects the general "consider the opposite" strategy and seeing if they could successfully apply it to the particular case on their own.
Mussweiler and Gilovich both have all of their relevant papers available for free on their websites.
Epley, N., & Gilovich, T. (2001). Putting adjustment back in the anchoring and adjustment heuristic: Differential processing of self-generated and experimenter-provided anchors. Psychological Science, 12, 391–396.
Mussweiler, T., Strack, F., & Pfeiffer, T. (2000). Overcoming the inevitable anchoring effect: Considering the opposite compensates for selective accessibility. Personality and Social Psychology Bulletin, 26, 1142–1150.
↑ comment by Swimmer963 ·
2012-08-28T18:35:49.465Z
Gilovich and Epley have shown that subjects give estimates farther from the anchor [...] when they are [...] shaking their head (which makes them less willing to stop at a plausible-seeming value; head-nodding produces even less adjustment than baseline).
Shaking their heads? If this is really an effective way to de-bias your thinking a tiny bit...COOL! I will try that!
comment by Henry_V ·
2007-09-08T13:14:58.000Z
Can some of the anchoring effect be explained by the use of a kind of implicit confidence interval?
Suppose that I (subconsciously) have an estimate of 20% for the proportion of UN countries that are African. Further suppose that I think a 95% confidence interval ranges from 10% to 30%.
If I start at a high anchor, I will adjust downwards until I'm within the 95% CI, i.e., 30%. If I start at a low anchor, I adjust upwards until I'm within the 95% CI, i.e., 10%. In my head, I may consider 10% and 30% as not statistically different from one another.
I'm not talking about exact statistical inference, but I wonder if this process is part of what's going on in the subject's head.
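A minimal sketch of the process described above, using the hypothetical 20% belief with a 10–30% implicit interval from the comment (the function name and unit step size are my own choices):

```python
def adjust_from_anchor(anchor, ci_low, ci_high, step=1):
    """Slide from the anchor toward the implicit confidence
    interval and stop at the first value inside it."""
    estimate = anchor
    while estimate < ci_low:
        estimate += step
    while estimate > ci_high:
        estimate -= step
    return estimate

# Hypothetical implicit 95% CI of 10%-30% around a 20% belief.
print(adjust_from_anchor(65, 10, 30))  # high anchor: stops at 30
print(adjust_from_anchor(10, 10, 30))  # low anchor: stops at 10
```

Both stopping points count as "plausible," so the anchor determines which edge of the interval the subject reports.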
I have tried a classroom bargaining experiment, where I give random "valuations" to students. I then assign random ownership (so that half the class become sellers). Without knowing what the item is (it's just "some good"), the initial offerers tend to have a disadvantage because they use their own valuations as anchors.
When I change the setup by telling them that "it's a used Toyota," the final bargained prices tend to more closely (but not perfectly) split the surplus.
I'm reminded of a story that my father tells about being in the army and learning to shoot. After missing the target, the instructor told them to use "bold sight adjustments" because shooters tend to be too timid in adjusting their aims. The phrase "bold sight adjustments" became part of our family vocabulary.
comment by Gavin2 ·
2007-09-10T15:42:43.000Z
I like to avoid looking at the prices of things that I want to buy, and instead ask myself "how much would I be willing to pay for this?" It's my way of overcoming anchoring bias, and works pretty well.
comment by Terra ·
2008-04-12T14:01:12.000Z
I wrote a post looking at the two median estimates from a computer science perspective:
"1 x 2 x 3 x 4 x 5 x 6 x 7 x 8" produced a median estimate of 512;
"8 x 7 x 6 x 5 x 4 x 3 x 2 x 1" produced a median estimate of 2,250.
One of the interesting things I noticed was that both median estimates are close to powers of two, and that with a little bit of fudging you can make a pretty good guess about how the brain creates them:
2^8 × 2^1 = 512
2^8 × 2^3 = 2048
(Note that 2^3 is 8, but 2^1 is 2 instead of 1; if you fudge each number to its closest power of two and then multiply, you get roughly the answers the subjects created.)
For 4 x 3 x 2 x 1 you would get 2^4 × 2^2, which is 64;
for 1 x 2 x 3 x 4 you would get 2^2 × 2^3, which is 32.
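The fudging heuristic can be sketched as rounding each factor to the nearest power of two before multiplying. How the rule should treat 1 and exact midpoints is my guess, not the commenter's; note that this symmetric version reproduces the 32 for the ascending order but not the direction-dependent 64:

```python
import math

def fudge_to_power_of_two(n):
    """Round n to the nearest power of two (by rounding log2)."""
    return 2 ** round(math.log2(n))

def fudged_product(factors):
    """Multiply the factors after fudging each to a power of two."""
    p = 1
    for f in factors:
        p *= fudge_to_power_of_two(f)
    return p

print(fudged_product([1, 2, 3, 4]))  # 1 * 2 * 4 * 4 = 32
```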
comment by pookleblinky ·
2008-06-10T18:12:04.000Z
Let's make the debiasing technique more rigorous.
How much more unlikely is it that I will throw 15 consecutive snake-eyes, than that I will throw 11 consecutive snake eyes?
I should allocate about -170 dB of belief to the likelihood of throwing 11 snake-eyes, and about -232 dB to the likelihood I will throw 15 snake-eyes. The ~60 dB difference indicates the latter event is 6 orders of magnitude more unlikely.
What does it mean if someone thinks the difference is smaller?
If 6 orders of magnitude of improbability are glossed over, that means the person does not comprehend it in gut terms.
To what other event might I allocate -60 dB? How about flipping a coin 20 times and getting all Heads?
Now we're getting somewhere. Let us ask ourselves a series of restricted Aumann Questions (on various statements in general knowledge) and calculate our joint belief. The difference between the belief we allocated, and the belief we ought to have allocated, is a measure of our flattened sense of improbability. We can take this into account, and adjust our anchors accordingly. We can, in effect, see how finely-tuned is our sense of improbability.
i.e. Suppose I take a restricted Aumann test of 40 questions regarding various general facts. I assign a joint probability of -150 dB to the survey. If I were better calibrated, my priors ought to have increased this to -100 dB. Knowing there may be a 50 dB gap between my beliefs and reality, I ought to be wary of any parochial adjustment. How wary? I should attach very little confidence to any adjustment under one order of magnitude...
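The decibel arithmetic in this comment can be reproduced directly (here `db` is just shorthand for 10·log10 of a probability):

```python
import math

def db(p):
    """Belief in decibels: 10 * log10(probability)."""
    return 10 * math.log10(p)

p_snake_eyes = 1 / 36          # two dice both showing 1

print(db(p_snake_eyes ** 11))  # ~ -171 dB
print(db(p_snake_eyes ** 15))  # ~ -233 dB
print(db(0.5 ** 20))           # 20 heads in a row: ~ -60 dB

# The ~62 dB gap between 15 and 11 snake-eyes is ~6 orders of magnitude.
print(db(p_snake_eyes ** 15) - db(p_snake_eyes ** 11))
```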
↑ comment by pnrjulius ·
2012-04-03T01:18:25.211Z
I ran that one in my head and thought, "that's got to be about a million times less likely." And indeed it was, 6 orders of magnitude. To some extent, I may just have gotten lucky... but I think that lurking on Less Wrong for the last couple years may have made me appreciate probabilities at a more intuitive level.
So does this mean Less Wrong actually works?
comment by Neel_Nanda ·
2012-04-24T06:14:32.560Z
But how would you slip an anchor into a normal conversation? Does it have to be phrased as part of a question, or can it just be a random number they see or hear?
comment by mfb ·
2012-08-04T21:55:14.856Z
What about many artificial anchors?
Make a list with powers of 1.2 from 1 to 10. Look at it to estimate some absolute number, assuming you can somehow estimate the correct order of magnitude.
In a similar way, for probabilities, make a list from 0 to 1 with a logarithmic scale of ratios in some interesting range.
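A sketch of both lists. The ratio of 1.2 is from the comment; the odds range and spacing for the probability list are my own choices for "some interesting range":

```python
# Powers of 1.2 spanning roughly one order of magnitude (1 to ~10.7).
magnitude_anchors = [1.2 ** k for k in range(14)]

# Probability anchors: log-spaced odds from 1:100 to 100:1,
# converted to probabilities via p = odds / (1 + odds).
odds = [10 ** (k / 2) for k in range(-4, 5)]
probability_anchors = [o / (1 + o) for o in odds]

print([round(x, 2) for x in magnitude_anchors])
print([round(p, 3) for p in probability_anchors])
```

Scanning such a pre-made grid replaces a single arbitrary anchor with many evenly spread ones, which is the point of the suggestion.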
It does not help for the year Einstein first visited America, but I would really use anchors for that: 1933 as an upper limit, 1880 as a lower limit, and the remaining timespan would be guesswork for me.
Looking at a biography, I think the answer is 1964+34-185+4*27 (to reduce the spoiler impact :p)
comment by mszegedy ·
2014-04-08T06:37:28.569Z
I've found that going by significant digits helps.
"If I represented the date that Einstein came to the US with only one significant digit of precision, what would it be? Definitely 2000. What about two? Definitely 1900. What about three? Probably 1900 again; I'm willing to take that bet. But four digits of precision? I'm not sure at all. I'll leave it as 1900."
The answer came out way off, but hopefully it prevented any anchoring, and it also accurately represents my knowledge of Einstein (namely, I know which properties of physics he discovered, and I know that he wrote his most important papers in the earlier half of the 190Xs, which must have also been when he came to the US). In hindsight, I should have taken historical context into account (why would Einstein leave for the US in the first place? if I had considered this, my guess would probably have ended up as 1910 or 1920), but that's hindsight bias, or a lesson to be learned.
An improvement to this method might be that I explicitly consider the range of numbers that would make it come out as a significant digit (if the three-significant-digit number is 1900, then he came between 1895 and 1904; does that sound more plausible than him coming sometime between 1905 and 1914?). But this might just make the anchoring effect worse, or introduce some other bias.
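The significant-digits probe can be sketched with standard rounding; using 1921, the actual year of Einstein's first US visit, shows where the commenter's three-digit guess of 1900 diverges from the true rounding of 1920 (the function is my implementation, not the commenter's):

```python
import math

def round_sig(n, digits):
    """Round a positive number to the given count of significant digits."""
    magnitude = math.floor(math.log10(abs(n)))
    factor = 10 ** (magnitude - digits + 1)
    return round(n / factor) * factor

year = 1921  # Einstein's first visit to the US
print([round_sig(year, d) for d in (1, 2, 3, 4)])  # [2000, 1900, 1920, 1921]
```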
comment by DimitriK ·
2014-11-13T16:42:00.183Z
On the question of Einstein I anchored, but I don't see how else I could have done it. I don't know much about his personal history, but I get the sense Einstein had some contributions to the atom bomb and had fled Europe to escape Nazi persecution. I anchored on 1945 as the end of WW2 and figured he must have left a fair bit sooner, possibly before the war, as Nazi persecution had already started before the war was underway.
I guessed 1937. I can't see how else I could have gone about it with the limited information I had. If I can't google for the question I have to go with what's a familiar piece of information and adjust from there.
I looked it up afterwards: he was visiting the US in '33 and decided not to go back to Germany when Hitler came to power. I wasn't correct, but anchoring let me make a reasonably good guess when I was dealing with a lack of information.
comment by Lucent ·
2019-11-26T01:39:54.071Z
Is there general agreement that anchoring experiments are a subversion of an evolutionary trait that is generally beneficial? It's rare to be in a group, be presented with a "random" number, and then be asked a question whose answer will be an unrelated number. Unless you have a lot of group status, it's much less harmful to your standing to be wrong with many others frequently than it is beneficial to be right alone infrequently. It's only recent in our evolutionary history that the balance has tipped in the other direction.