Which cognitive biases should we trust in?
post by Andy_McKenzie · 2012-06-01T06:37:44.383Z · LW · GW · Legacy · 42 comments
There have been (at least) a couple of attempts on LW to make Anki flashcards from Wikipedia's famous List of Cognitive Biases, here and here. However, stylistically they are not my type of flashcard, with too much info in the "answer" section.
Further, and more troublingly, I'm not sure whether all of the biases in the flashcards are real, generalizable effects; or, if they are real, whether they have effect sizes large enough to be worth the effort to learn & disseminate. Psychology is an academic discipline with all of the baggage that entails. Psychology is also one of the least tangible sciences, which is not helpful.
There are studies showing that Wikipedia is no less reliable than more conventional sources, but this is in aggregate, and it seems plausible (though difficult to detect without diligently checking sources) that the set of cognitive bias articles on Wikipedia has high variance in quality.
We do have some knowledge of how many of these articles came to be, in that LW user nerfhammer wrote a bunch. But, as far as I can tell, s/he didn't discuss how s/he selected biases to include. (Though, s/he is obviously quite knowledgeable on the subject, see e.g. here.)
As the articles stand today, many (e.g., here, here, here, here, and here) only cite research from one study/lab. I do not want to come across as whining: the authors who wrote these on Wikipedia are awesome. But as a consumer, the lack of independent replication makes me nervous. I don't want to contribute to information cascades.
Nevertheless, I do still want to make flashcards for at least some of these biases, because I am relatively sure that there are some strong, important, widespread biases out there.
So, I am asking LW whether you all have any ideas about, on the meta level,
1) how we should go about deciding/indexing which articles/biases capture legit effects worth knowing,
and, on the object level,
2) which of the biases/heuristics/fallacies are actually legit (like, a list).
Here are some of my ideas. First, for how to decide:
- Only include biases that are mentioned by prestigious sources like Kahneman in his new book. Upside: authoritative. Downside: potentially throwing out some good info and putting too much faith in one source.
- Only include biases whose Wikipedia articles cite at least two primary articles that share none of the same authors. Upside: establishes some degree of consensus in the field. Downside: won't actually vet the articles for quality, and relies on the (presumably false) assumption that the Wikipedia pages reflect the state of knowledge in the field.
- Search for the name of the bias (or any bolded alternative names on Wikipedia) on Google Scholar, and only accept those with, say, >30 citations. Upside: less of a sampling bias of what is included on Wikipedia, which is likely to be somewhat arbitrary. Downside: information cascades occur in academia too, and this method doesn't filter for actual experimental evidence (e.g., there could be lots of reviews discussing the idea).
- Make some sort of a voting system where experts (surely some frequent this site) can weigh in on what they think of the primary evidence for a given bias. Upside: rather than counting articles, it evaluates the actual evidence for the bias. Downside: seems hard to reach the scale (~8-12+ people voting) needed to make this useful.
- Build some arbitrarily weighted rating scale that takes into account some or all of the above (see the sketch after this list). Upside: meta. Downside: garbage in, garbage out, and the first three features seem highly correlated anyway.
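To make that last option concrete, here is a minimal sketch of what such a scale might look like. Everything in it is a placeholder I made up for illustration (the weights, the example inputs, the bias being scored); only the >30-citation threshold comes from the list above.

```python
# A toy sketch of the weighted rating scale idea. All weights, inputs,
# and names below are hypothetical placeholders, not real data.

def has_author_disjoint_pair(author_lists):
    """True if some pair of cited primary articles shares no authors."""
    for i in range(len(author_lists)):
        for j in range(i + 1, len(author_lists)):
            if not set(author_lists[i]) & set(author_lists[j]):
                return True
    return False

def score_bias(in_kahneman, cited_author_lists, scholar_citations,
               weights=(0.4, 0.3, 0.3)):
    """Arbitrarily weighted combination of the first three criteria above."""
    w_kahneman, w_disjoint, w_citations = weights
    return (w_kahneman * float(in_kahneman)
            + w_disjoint * float(has_author_disjoint_pair(cited_author_lists))
            + w_citations * float(scholar_citations > 30))

# Hypothetical example: a bias mentioned in Kahneman's book, whose Wikipedia
# article cites two papers with disjoint author lists, and whose name has
# 120 Google Scholar citations.
print(score_bias(True, [["Tversky", "Kahneman"], ["Gigerenzer"]], 120))  # 1.0
```

As the downside above says: garbage in, garbage out, and since the three inputs are highly correlated, the weights are mostly decorative.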
Second, for which biases to include. I'm just going off which ones I have heard of and/or which look legit on a fairly quick run-through. Note that those annotated with a (?) are ones I am especially unsure about.
- anchoring
- availability
- bandwagon effect
- base rate neglect
- choice-supportive bias
- clustering illusion
- confirmation bias
- conjunction fallacy (is subadditivity a subset of this?)
- conservatism (?)
- context effect (aka state-dependent memory)
- curse of knowledge (?)
- contrast effect
- decoy effect (a violation of independence of irrelevant alternatives)
- Dunning–Kruger effect (?)
- duration neglect
- empathy gap
- expectation bias
- framing
- gambler's fallacy
- halo effect
- hindsight bias
- hyperbolic discounting
- illusion of control
- illusion of transparency
- illusory correlation
- illusory superiority
- illusion of validity (?)
- impact bias
- information bias (? aka failure to consider value of information)
- in-group bias (this is clearly real, but I'm not sure I'd call it a bias)
- escalation of commitment (aka sunk cost/loss aversion/endowment effect; note, contra Gwern, that I do think this is a useful fallacy to know about, if overrated)
- false consensus (related to projection bias)
- Forer effect
- fundamental attribution error (related to the just-world hypothesis)
- familiarity principle (aka mere exposure effect)
- moral licensing (aka moral credential)
- negativity bias (seems controversial & it's troubling that there is also a positivity bias)
- normalcy bias (related to existential risk?)
- omission bias
- optimism bias (related to overconfidence)
- outcome bias (aka moral luck)
- outgroup homogeneity bias
- peak-end rule
- primacy
- planning fallacy
- reactance (aka contrarianism)
- recency
- representativeness
- self-serving bias
- social desirability bias
- status quo bias
Happy to hear any thoughts!
42 comments
comment by David_Gerard · 2012-06-01T09:51:26.574Z · LW(p) · GW(p)
I first read the title as asking which among the biases themselves we should trust in, rather than which lists we should trust in. Would a clarified title be worth the disruption to RSS feeds?
↑ comment by Andy_McKenzie · 2012-06-01T16:28:16.937Z · LW(p) · GW(p)
Since this was so highly upvoted, I changed it. Thanks!
↑ comment by Oscar_Cunningham · 2012-06-01T18:37:41.691Z · LW(p) · GW(p)
It's still misleading now. Did you actually change it? If so, what was it before?
↑ comment by Andy_McKenzie · 2012-06-01T19:43:29.837Z · LW(p) · GW(p)
Before it was "which cognitive biases should we trust". Now it is "which cognitive biases should we trust in".
To be very clear, I am interested in
which among the biases themselves we should trust in
that is the second part of the post. This makes me a bit confused as to why DG's comment is upvoted so highly.
More evidence to be hesitant about making judgments on the basis of LW votes.
comment by Will_Newsome · 2012-06-01T15:28:25.845Z · LW(p) · GW(p)
Many cognitive biases don't exist as such, because many (most?) psychology results are wrong, and heuristics and biases is no exception: entire methodologies (and thus entire sets of claimed biases) can be misguided. Don't compartmentalize your knowledge of the many flaws of scientific research, especially psychology and neuroscience research. I've seen many people who are willing to link to studies that failed replication in support of a claim about the existence of a cognitive bias. Imagine if someone did this for parapsychology! And parapsychology is generally more rigorous; even so, that's not enough to outweigh skepticism, because everyone knows that all psychology research is suspect, even research with apparently decent methodology and statistics. People are willing to lower their standards for H&B because "the world is mad" has become part of a political agenda justifying LessWrong's claims to rationality. Don't unevenly lower your standards; don't be unevenly selective about which methodologies or results you're willing to accept. Make that your claim to fame, not your supposed knowledge of biases.
↑ comment by Andy_McKenzie · 2012-06-01T16:34:17.747Z · LW(p) · GW(p)
On the meta level, you seem to be saying that none of them are worth learning. So, going down to the object level, do you deny that confirmation bias, hindsight bias, and anchoring are real effects? (to choose three I am particularly confident in)
↑ comment by Will_Newsome · 2012-06-01T17:20:34.044Z · LW(p) · GW(p)
I deny confirmation bias in many forms—it's not consistently formulated, and this has been acknowledged as a serious problem, which is why Eliezer's post on the subject is titled "Positive Bias", a narrower concept. I don't deny hindsight bias or anchoring. So of course I don't assert that none of them are worth learning. That said, because it's hard to tell which are real and which aren't, one must be extremely careful.
Perhaps most importantly for LessWrong, I deny the kind of overconfidence bias found by the most popular studies of the subject: it simply disappears when you ask for frequencies rather than subjective probabilities, and brains naturally use frequencies.
↑ comment by Andy_McKenzie · 2012-06-01T17:32:35.568Z · LW(p) · GW(p)
OK, fair. Thanks. If you want to run through the list and say which ones you do and don't agree with, I'd find that helpful.
↑ comment by Will_Newsome · 2012-06-01T17:56:04.171Z · LW(p) · GW(p)
That'd take time. I'll say I'm generally skeptical of any results from Tversky, Kahneman, or Dawes. E.g., conjunction fallacy. Not that they don't have good results, but they're very spotty, and cited even when they're out to lunch. I'll add that I really like Gigerenzer, including his sharp critiques of Tversky and Kahneman's naive "compare to an allegedly ideal Bayesian reasoner" approach. Gigerenzer has a very thorough knowledge of both H&B and statistics.
I'll also note that oftentimes you don't need strong empirical data to know a bias exists. E.g., many people have anecdotal experience with the planning fallacy, and I doubt anyone would deny the existence of the anchoring effect once it'd been brought to their attention. Of course, even then studies are helpful to know how strong the biases are. Often, though, I wish psychology just stopped using statistics, which sets up all kinds of perverse incentives and methodological costs without adding much.
↑ comment by lukeprog · 2012-06-02T01:10:52.355Z · LW(p) · GW(p)
I'll add that I really like Gigerenzer, including his sharp critiques of Tversky and Kahneman's naive "compare to an allegedly ideal Bayesian reasoner" approach.
Here is my collection of critiques of Gigerenzer on Tversky & Kahneman. I side with Tversky & Kahneman on this one.
↑ comment by Will_Newsome · 2012-06-02T14:38:28.481Z · LW(p) · GW(p)
Ah, one meta thing to keep in mind is that as a Bayesian it's actually sort of hard to even understand what Gigerenzer could possibly mean sometimes. Gigerenzer understands and appreciates Bayesianism, so I knew I had to just be entirely missing something about what he was saying. E.g., I was shocked when he said that the conjunction rule wasn't a fundamental rule of probability and only applied in certain cases. I mean, what? It falls directly out of the axioms! Nonetheless, when I reread his arguments a few times, I realized he actually had an important meta-statistical point. Anyway, that's just one point to keep in mind when reading Gigerenzer, since you and I lean Bayesian. (As we should: Bayes really is a lot better.)
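(To spell out the "falls directly out of the axioms" step, here is the standard two-line derivation; this is just textbook probability, not anything specific to Gigerenzer's argument:)

```latex
% A \wedge B and A \wedge \neg B are disjoint events whose union is A,
% so finite additivity plus non-negativity give the conjunction rule:
P(A) = P(A \wedge B) + P(A \wedge \neg B) \geq P(A \wedge B)
```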
↑ comment by lukeprog · 2012-06-02T17:37:25.691Z · LW(p) · GW(p)
Okay. I also look forward to hearing what specific meta-statistical point you think Gigerenzer was making.
↑ comment by Will_Newsome · 2012-06-02T17:49:01.735Z · LW(p) · GW(p)
Would you prefer a discussion post, email, or a comment reply here? (I'll want to write a long response that covers all the points at once, at least about Gigerenzer etc.)
↑ comment by lukeprog · 2012-06-29T18:55:05.677Z · LW(p) · GW(p)
Has this happened yet? I didn't miss it, right?
↑ comment by Will_Newsome · 2012-06-29T22:44:02.797Z · LW(p) · GW(p)
Correct, I'll be sure to let you know when it happens.
↑ comment by Will_Newsome · 2012-06-02T01:20:25.572Z · LW(p) · GW(p)
I can only see a little from your links, but what I do see misses Gigerenzer's point. Kahneman (and Tversky's?) replies to Gigerenzer also miss the point. Also note that some of Gigerenzer's studies contradict some of Tversky and Kahneman's results, or at least the conclusions that are frequently drawn from those results. E.g., overconfidence disappearing when you use frequencies instead of subjective probabilities. That said, I generally like Stanovich, so I'll look closer at what he says specifically.
I should note, it's really unfortunate that this word "normative" isn't tabooed more.
Also, Dawes is often totally out to lunch—you've seen a few reasons why in a highly upvoted comment on one of your posts. Do you agree that Dawes and his cadre of researchers are not trustworthy? (Note that Eliezer often recommends Dawes' book, "Rational Choice in an Uncertain World". I read an '80s edition of that book and was horrified at the poor scholarship. Before then I'd had a high opinion of H&B.)
I'm interested in our disagreement; in my view it seems pretty important, because it shapes our priors for how much respect we should give to the common man's opinion. I'll read more from your links (e.g., buy or steal a book or two) and give you my updated opinion.
↑ comment by lukeprog · 2012-06-02T02:09:18.842Z · LW(p) · GW(p)
I would need more details in order to comment on specific studies or results. Which passages from Dawes reflect poor scholarship? Which Gigerenzer studies contradict K&H results or conclusions (stated in which papers)? I also look forward to a more specific explanation of what you think Gigerenzer's point is, and why the articles I linked to fail to address it.
(For those who are interested, I believe the highly-upvoted comment Will refers to is this one.)
↑ comment by Andy_McKenzie · 2012-06-01T22:50:15.429Z · LW(p) · GW(p)
Gigerenzer has a very thorough knowledge of both H&B and statistics.
Thanks. Can you recommend a short primer of his (like a summary article)?
I'll also note that oftentimes you don't need strong empirical data to know a bias exists. E.g., many people have anecdotal experience with the planning fallacy, and I doubt anyone would deny the existence of the anchoring effect once it'd been brought to their attention.
That anecdotal experience is quite easy to quantify. There's a reason the words "experiment" and "experience" are so similar.
Often, though, I wish psychology just stopped using statistics, which sets up all kind of perverse incentives and methodological costs without adding much.
I think the problem is not statistics but bad statistics. If data sharing, replication, and transparency of methods all continue to increase, I do expect most of psychology's current problems will be vastly mitigated. But, that's not the world we live in today.
↑ comment by Will_Newsome · 2012-06-01T23:47:35.684Z · LW(p) · GW(p)
Thanks. Can you recommend a short primer of his (like a summary article)?
Check out fastandfrugal.com. For critiques of Kahneman, I don't think there's a single summary; just search Google Scholar for "Gigerenzer Kahneman".
If data sharing, replication, and transparency of methods all continue to increase, I do expect most of psychology's current problems will be vastly mitigated.
Is blind application of data-mining packages increasing or staying constant at this point? If increasing, do the good trends outweigh it?
↑ comment by wedrifid · 2012-06-02T00:39:41.077Z · LW(p) · GW(p)
Is blind application of data-mining packages increasing or staying constant at this point? If increasing, do the good trends outweigh it?
What is it about the blind application of data-mining packages that is not-good? (If it works for achieving the goals of the user more effectively than whatever they were doing before then good for them!)
↑ comment by Will_Newsome · 2012-06-02T00:43:55.307Z · LW(p) · GW(p)
I can't tell if you're making a joke or arguing that hand-applied statistical practices of amateurs are actually worse for truth-seekers than automated data-mining.
↑ comment by wedrifid · 2012-06-02T00:54:23.402Z · LW(p) · GW(p)
I can't tell if you're making a joke or arguing
Was going for "ask a question in the hope of getting a literal answer".
I don't have much information about when data mining packages are used, how effective they are for those uses or what folks would have done if they had not used them.
↑ comment by Will_Newsome · 2012-06-02T00:55:14.948Z · LW(p) · GW(p)
I see. I don't have any good resources for you, sadly. I'd ask gwern.
↑ comment by wedrifid · 2012-06-02T01:03:24.641Z · LW(p) · GW(p)
I see. I don't have any good resources for you, sadly.
I was essentially asking for your pure opinion/best guess, i.e., an unpacking of what I infer were opinions/premises implied by "[not] good". Never mind. I'll take it to be approximately "blind application of data-mining packages is worse than useless and gives worse outcomes than whatever they would or wouldn't have done if they didn't have the package".
↑ comment by Will_Newsome · 2012-06-02T01:06:47.100Z · LW(p) · GW(p)
Sorry, I just don't have a strong opinion. It's hard for me to consider the counterfactual, because there's lots of selection effects on what studies I see both from the present time and the time before software data-miners were popular.
↑ comment by Andy_McKenzie · 2012-06-02T00:12:27.498Z · LW(p) · GW(p)
Is blind application of data-mining packages increasing or staying constant at this point? If increasing, do the good trends outweigh it?
Good question, and really hard to tell. Certainly it happens now! But I bet it happened in the past too. Whether data-sharing standards in publications have been rising is something that is observable (i.e., people saying what they did to the data), and I'd be willing to bet on it empirically getting better.
↑ comment by Will_Newsome · 2012-06-02T18:13:07.169Z · LW(p) · GW(p)
One more comment: statistics has a lot of problems even in theory. Using statistics to measure how well people compare to an allegedly ideal statistical model, when you're studying complex systems like brains and when methodology is really hard, just adds a host of degrees of freedom along which to err or fudge the results.
↑ comment by Will_Newsome · 2012-06-02T18:09:56.949Z · LW(p) · GW(p)
I'll put it again differently: I'm suspicious of all researchers who seem to have ideological axes to grind. Dawes clearly does, and many others seem to as well. Ideology plus incentives to spin and exaggerate just doesn't tend to do good.
comment by Vaniver · 2012-06-01T15:18:46.077Z · LW(p) · GW(p)
- hyperbolic discounting (this is clearly real, but I'm not sure I would call it a bias)
"Discounting" in general makes great sense. The reason that's on the bias list is because of the word "hyperbolic."
Discounting means multiplying future values by some function F(t), which is generally in [0,1]. There are three simple choices: no discounting, setting F(t)=1 always (rarely recommended); exponential discounting, in which F(t)=exp(-r t) for some rate r>0; and hyperbolic discounting, in which F(t)=1/(1+k t). Hyperbolic discounting appears to be what people natively use, but exponential discounting is what economists recommend because it's consistent in time. If I offer you a choice between A after x time units and B after x+y time units, under both no discounting and exponential discounting the answer does not depend on x. (In the no-discounting case, it doesn't depend on y either.) With hyperbolic discounting, it does depend on x, with the particularly bothersome result that a hyperbolic discounter's preferences might switch from B to A as x decreases. This can lead to self-thwarting behavior: if Bob2012 chooses $100 in six years over $50 in five years, he goes against the wishes of Bob2017, who would choose $50 now over $100 in a year.
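A quick numerical sketch of that reversal, for the curious. The dollar amounts come from the Bob example above; the parameter values k=2 and r=0.2 are arbitrary illustrative choices.

```python
# Minimal sketch: hyperbolic discounting reverses a preference as the
# decision date approaches; exponential discounting does not.
import math

def hyperbolic(t, k=2.0):
    """Hyperbolic discount factor F(t) = 1 / (1 + k*t)."""
    return 1.0 / (1.0 + k * t)

def exponential(t, r=0.2):
    """Exponential discount factor F(t) = exp(-r*t)."""
    return math.exp(-r * t)

def prefers_later(discount, x):
    """True if $100 at time x+1 beats $50 at time x under `discount`."""
    return 100 * discount(x + 1) > 50 * discount(x)

for x in (5, 0):  # Bob2012 looks five years ahead; Bob2017 decides now
    print(f"x={x}: hyperbolic -> {prefers_later(hyperbolic, x)}, "
          f"exponential -> {prefers_later(exponential, x)}")
# hyperbolic: True at x=5, False at x=0 (the preference reversal)
# exponential: True at both delays (time-consistent)
```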
↑ comment by David_Gerard · 2012-06-01T15:56:30.142Z · LW(p) · GW(p)
Hyperbolic discounting appears to be what people natively use
As I understand it, hyperbolic discounting appears to be what WEIRD (Western, educated, industrialized, rich, democratic) subjects use. Are there replicated studies on other groups?
↑ comment by Paul Crowley (ciphergoth) · 2012-06-02T10:41:36.672Z · LW(p) · GW(p)
It's very well established in pigeons and rats, e.g. http://www.ncbi.nlm.nih.gov/pmc/articles/PMC2648524/
↑ comment by David_Gerard · 2012-06-02T21:03:33.134Z · LW(p) · GW(p)
Aha, thank you!
↑ comment by Andy_McKenzie · 2012-06-01T16:29:30.997Z · LW(p) · GW(p)
Ah, thanks, I should have been more diligent. Updated the post.
comment by mwengler · 2012-06-01T13:55:00.193Z · LW(p) · GW(p)
To improve the ability to avoid cognitive biases, I recommend Charlie Munger's multiple mental models, which are discussed somewhat here. My own approach (without studies to back it up, of course) is heavy on looking at things in multiple different ways and sort of keeping in mind the salient points of each. The idea here is that the map is not the territory, some maps may be very good at showing you some things about the real territory, and having multiple different maps of the same territory at least gives you some good diversity of interesting features to consider.
In some sense, the multiple maps approach is opposite to the list of biases approach. Multiple maps adds to the list of things you may know about something. A list of biases serves to remove things from the list of things you may know as you identify some of them with some of the biases on your list.
Of course doing both (multiple maps AND checking things against a list of biases) is a multiple maps kind of approach: the list of biases is just another map.
↑ comment by Andy_McKenzie · 2012-06-01T16:40:51.368Z · LW(p) · GW(p)
My own approach (without studies to back it up of course) is heavy on looking at things in multiple different ways, and sort of keeping in mind salient points of each
Do you adjust for anchoring when you do this?
Multiple maps adds to the list of things you may know about something
Why do you want to have a list of things that you know about something? Seems to me like collecting facts ("things you know") about just about any topic is easy, and that the tricky part is integrating them into a decision or probability estimate.
↑ comment by mwengler · 2012-06-01T21:30:12.910Z · LW(p) · GW(p)
Why do you want to have a list of things that you know about something? Seems to me like collecting facts ("things you know") about just about any topic is easy, and that the tricky part is integrating them into a decision or probability estimate.
Because the "truth" of the things I "know" about something is graded on a curve. If I look at a company purely as a physicist I may notice that their chief scientist is smart and their technological ideas are sound. As a physicist I would think this company a good investment.
But to be good investment, the physics has to be right AND the management has to be able to hire the right people and get them to do useful things, AND the product they are building has to be something of interest to customers AND the product has to be marketed in such a way that potential customers see it as something they want AND the company has to bring all this to fruition before they run out of money.
If I look at it ONLY as a physicist I am very likely to estimate probabilities incorrectly. With only the brilliance of their ideas and scientists in my mind, I am likely to "anchor" on these, implicitly assuming every other factor somehow doesn't matter as much as the one I bother to study.
If I look at it in multiple ways, and there is something that really is more important, I am much more likely to discover it than if I stopped trying to learn more once I had three or four facts and a physicist's model of it. You never want to invest in anything where the pitch starts with "Assume a spherical chicken."
↑ comment by Paul Crowley (ciphergoth) · 2012-06-02T10:44:00.194Z · LW(p) · GW(p)
This appears to finish early, or have a stray word at the end. [Now fixed - thanks!]
↑ comment by Andy_McKenzie · 2012-06-01T22:44:29.770Z · LW(p) · GW(p)
Thanks for the explanation. I now understand what you were saying and agree that this approach is complementary to the study of cognitive biases.