Fun and Games with Cognitive Biases

post by Cosmos · 2011-02-18T20:38:28.192Z · LW · GW · Legacy · 28 comments

Contents

  Confirmation Bias
  Fundamental Attribution Error
  Bias Blind Spot
  Anchoring Bias
  Representativeness Bias
  Projection Bias
  Planning Fallacy
  Availability Heuristic
  Hindsight Bias
  Halo Effect
  Confabulation
  Overconfidence Bias
  Summary

You may have heard about IARPA's Sirius Program, a proposal to develop serious games that would teach intelligence analysts to recognize and correct their cognitive biases.  The intelligence community has a long history of interest in debiasing, and even produced a rationality handbook based on internal CIA publications from the '70s and '80s.  Games that systematically improve our thinking skills have enormous potential, and I would highly encourage the LW community to consider them as a way to promote rationality more broadly.

While developing these particular games will require thought and programming, the proposal did inspire the NYC LW community to play a game of our own.  Using a list of cognitive biases, we broke into groups of no more than four and spent five minutes discussing each bias with regard to three questions:

  1. How do we recognize it?
  2. How do we correct it?
  3. How do we use its existence to help us win?

The Sirius Program specifically targets Confirmation Bias, Fundamental Attribution Error, Bias Blind Spot, Anchoring Bias, Representativeness Bias, and Projection Bias.  To this list I added the Planning Fallacy, the Availability Heuristic, Hindsight Bias, the Halo Effect, Confabulation, and the Overconfidence Effect.  We did this Pomodoro style: six five-minute rounds, a quick break, another six rounds, and then a final break followed by a group discussion of the exercise.
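
For groups that want to replicate the format, here is a minimal sketch of a round timer in Python. The bias list, questions, and timings come straight from the description above; everything else about the script is an arbitrary illustration:

```python
import time

# The twelve biases and three questions from the exercise described above.
BIASES = [
    "Confirmation Bias", "Fundamental Attribution Error", "Bias Blind Spot",
    "Anchoring Bias", "Representativeness Bias", "Projection Bias",
    "Planning Fallacy", "Availability Heuristic", "Hindsight Bias",
    "Halo Effect", "Confabulation", "Overconfidence Effect",
]
QUESTIONS = [
    "How do we recognize it?",
    "How do we correct it?",
    "How do we use its existence to help us win?",
]
ROUND_SECONDS = 5 * 60  # five minutes of discussion per bias

for i, bias in enumerate(BIASES, start=1):
    print(f"\nRound {i} of {len(BIASES)}: {bias}")
    for question in QUESTIONS:
        print("  -", question)
    time.sleep(ROUND_SECONDS)  # discuss until the round ends
    if i == len(BIASES) // 2:
        input("Quick break -- press Enter to start the second half.")

print("\nDone! Take a break, then hold the group discussion.")
```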

Results of this exercise are posted below the fold.  I encourage you to try the exercise for yourself before looking at our answers.

Caution: Dark Arts!  Explicit discussion of how to exploit bugs in human reasoning may lead to discomfort.  You have been warned.

Confirmation Bias

Fundamental Attribution Error

Bias Blind Spot

Anchoring Bias

Representativeness Bias

Projection Bias

Planning Fallacy

Availability Heuristic

Hindsight Bias

Halo Effect

Confabulation

Overconfidence Bias

Summary

How long do you think it should take to solve a major problem if you are not wasting any time?  Everything written above was created in a sum total of one hour of work.  How many of these ideas had never even occurred to us before we sat down and thought about it for five minutes?

Take five minutes right now and write down what areas of your life you could optimize to make the biggest difference.  You know what to do from there.  This is the power of rationality.

28 comments


comment by Mass_Driver · 2011-02-19T09:34:07.423Z · LW(p) · GW(p)

Voted up for entertainingly, clearly, and concisely summarizing many applications of knowledge about many biases.

How long do you think it should take to solve a major problem if you are not wasting any time? Everything written above was created in a sum total of one hour of work. How many of these ideas had never even occurred to us before we sat down and thought about it for five minutes?

Whoa, whoa, slow down, please. Do you mean to say that you became aware of biases, internalized your belief in their importance, gathered the relevant info, became familiar with LW norms about style and tone, and wrote the article, all in one hour? Even if you did, do you mean to imply that you thereby solved a major problem? Because it seems to me that psychologists, cognitive scientists, and to a lesser extent the LW community have described a major problem -- human biases -- which is still largely unsolved, in the sense that it continues to annoy and befuddle even people at the 99th percentile of rationality. By summarizing that problem (however aptly) you have not even done the work of observing and describing it, let alone solving it.

Take five minutes right now and write down what areas of your life you could optimize to make the biggest difference. You know what to do from there. This is the power of rationality.

That works the first 200 times, but at a certain point the low-hanging fruit is gone and the suboptimal habits you have turn out not to be as "irrational" as you thought -- they may thwart your consciously held goals, but they also serve your secret, shameful, or hard-to-articulate desires, which, despite not being part of your ideal self-image, still get plenty of votes on what kind of attitude you adopt and how you spend your time. There are ways to craft and mold yourself into a better person, but just writing down a list of self-improvement projects, even if you persist at it for five whole minutes, isn't likely to result in much (or any) lasting change.

Replies from: bentarm, Cosmos
comment by bentarm · 2011-02-20T19:35:28.155Z · LW(p) · GW(p)

Take five minutes right now and write down what areas of your life you could optimize to make the biggest difference. You know what to do from there. This is the power of rationality.

That works the first 200 times, but at a certain point the low-hanging fruit is gone

200 times sounds like a pretty good deal to me. If you just did this once a week, that's 4 years of continual improvement.

comment by Cosmos · 2011-02-20T21:42:42.351Z · LW(p) · GW(p)

Do you mean to say that you became aware of biases, internalized your belief in their importance, gathered the relevant info, became familiar with LW norms about style and tone, and wrote the article, all in one hour?

Certainly a lot of prerequisites went into being able to do this exercise, and I did not mean to imply that everything that went into writing the article itself took only one hour in total. People in the LW community are highly likely to have the prerequisites to do this exercise without additional time investment. Those five minutes included explaining the bias in question when it was unfamiliar to any member of the group.

Even if you did, do you mean to imply that you thereby solved a major problem?

One of the explicit goals of the exercise was to gain awareness of the biases in question, which is the first step in modifying our behavior. Immediately following the exercise, everyone who participated was able to point out examples of them occurring left and right. Correcting these habits of thought will take reinforcement over a period of time, but by becoming self-aware, and by having others to point them out to us as well, we are drastically closer to solving the problem than before that one hour of work.

That works the first 200 times, but at a certain point the low-hanging fruit is gone and the suboptimal habits you have turn out not to be as "irrational" as you thought

If you have actually picked all of your low-hanging fruit then congratulations, you are a supremely powerful human being.

I fully agree that what is holding us back is often conflicting emotional desires, and as you correctly point out, there are methods of modifying those as well. We make mistakes on both an analytical and an emotional level, and dealing with both is vitally important to becoming the most effective person possible. Failing to take five minutes and actually try to optimize the situation is just one analytical failure mode, which I am trying to address with this one post.

comment by patrissimo · 2011-03-03T08:03:47.608Z · LW(p) · GW(p)

Feed confirmatory evidence to others, give them tests to run which you know beforehand are confirmatory

This is not a way to take advantage of confirmation bias. Confirmation bias means that people look for evidence confirming the theories they already hold, and ignore disconfirming evidence. This process is not much affected by you adding extra confirmatory evidence - they can find plenty on their own. Instead, it is a way to fool rational people - for example, Bayesians who update based on evidence will update wrongly if fed biased evidence. Which doesn't really fit here.

The way to actually use confirmation bias to convince people of things is to present beliefs you want to transmit to them as evidence for things they already believe. Then confirmation bias will lead them to believe this new evidence without question, because they wish to believe it to confirm their existing beliefs.

Replies from: wedrifid, Kaj_Sotala, David_Gerard
comment by wedrifid · 2011-03-03T15:36:11.407Z · LW(p) · GW(p)

Instead, it is a way to fool rational people - for example, Bayesians who update based on evidence will update wrongly if fed biased evidence. Which doesn't really fit here.

It should be noted that it is a way to fool Bayesians over whom you have some kind of epistemic advantage. That is, you have to be for some reason better able to provide deceptive data than they are at accounting for your ability or inclination to deceive. That is hard to do without an overwhelming advantage in one of intelligence, power, knowledge or anonymity.
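
As a toy illustration of that failure mode (my own sketch, not anything from the thread): a Bayesian who treats selectively reported coin flips as an unbiased sample will become nearly certain that a fair coin is biased.

```python
import random

random.seed(0)

P_HEADS_IF_BIASED = 0.8  # the hypothesis the deceiver wants believed
P_HEADS_IF_FAIR = 0.5    # the truth: the coin is fair

def update(p_biased, saw_heads):
    """One Bayesian update on a single reported flip."""
    lik_biased = P_HEADS_IF_BIASED if saw_heads else 1 - P_HEADS_IF_BIASED
    lik_fair = P_HEADS_IF_FAIR if saw_heads else 1 - P_HEADS_IF_FAIR
    numerator = lik_biased * p_biased
    return numerator / (numerator + lik_fair * (1 - p_biased))

p_biased = 0.5  # honest 50/50 prior over {fair, heads-biased}
for _ in range(200):
    flip_is_heads = random.random() < P_HEADS_IF_FAIR  # the coin really is fair
    if flip_is_heads:            # the deceiver reports only the heads...
        p_biased = update(p_biased, saw_heads=True)
    # ...and silently drops the tails, which the victim never models.

print(f"P(coin is biased) after the filtered reports: {p_biased:.6f}")
```

A Bayesian who models the reporter - updating on "I was shown a heads" rather than "the coin came up heads" - is not fooled, which is exactly the point about needing an epistemic advantage.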

comment by Kaj_Sotala · 2012-03-21T15:32:13.274Z · LW(p) · GW(p)

Another way to take advantage of confirmation bias is exemplified by horoscopes: offering people predictions that are sufficiently vague that no matter what happens, people can find a way to interpret the prediction as having come true.

Also, someone who wanted to be respected by many people could write semi-nuanced opinion texts that could be plausibly interpreted to favor either side in a debate. In the "best" case, supporters of both sides will read the text and like you for being on their side.

comment by David_Gerard · 2011-03-03T14:35:54.474Z · LW(p) · GW(p)

The way to actually use confirmation bias to convince people of things is to present beliefs you want to transmit to them as evidence for things they already believe. Then confirmation bias will lead them to believe this new evidence without question, because they wish to believe it to confirm their existing beliefs.

Yep. This works pretty well, too. Useful phrases: "As you already know ..." "... and you know all this already" "I haven't told you anything you didn't know already".

Replies from: TheOtherDave, wedrifid
comment by TheOtherDave · 2011-03-03T14:46:03.386Z · LW(p) · GW(p)

Leading questions are good for this too, though they take a bit more care.

That is, if you pick the right questions phrased the right way, then when people answer you can follow up with "Enthusiastic agreement! In other words, $thing-I-wanted-to-convince-you-of. Exactly! Praise, praise, praise! Now I'm going to talk distractingly for a little while so you don't have a chance to examine the identity I'm asserting. Oh look: a monkey!"

Replies from: David_Gerard
comment by David_Gerard · 2011-03-04T11:56:47.857Z · LW(p) · GW(p)

This definitely has to go into the children's picture book My First Machiavelli.

comment by wedrifid · 2011-03-03T15:28:22.504Z · LW(p) · GW(p)

It works well... except with those strange folks who find it obnoxious and are tempted to slap you with "No, damn you! It is evidence against what I believed to be true. I prefer to be contradicted than subverted. Don't try that again!"

comment by JJ10DMAN · 2011-02-26T22:55:46.782Z · LW(p) · GW(p)

I think the most straightforward "edutainment" design would be a "rube or blegg" model: present conflicting evidence and then reveal the Word-of-God objective truth at the end of the game. Different biases can be targeted with different forms of evidence, different models of interpretation (e.g. whether or not players can assign confidence levels to their guesses), and different scoring methods (e.g. whether the game is iterative, whether it's many one-shots with probability of success over many games as the goal, etc.).
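
A minimal sketch of one round of that design (names and numbers entirely hypothetical), using a logarithmic scoring rule so that honestly reporting your confidence maximizes your expected score:

```python
import math
import random

CUE_RELIABILITY = 0.7  # each piece of evidence points at the truth 70% of the time

def noisy_cue(truth_is_blegg):
    """One piece of (possibly misleading) evidence about the hidden object."""
    cue_is_correct = random.random() < CUE_RELIABILITY
    if cue_is_correct:
        return "blegg" if truth_is_blegg else "rube"
    return "rube" if truth_is_blegg else "blegg"

def log_score(confidence_blegg, truth_is_blegg):
    """Proper scoring rule: rewards honest, well-calibrated confidence."""
    p = confidence_blegg if truth_is_blegg else 1.0 - confidence_blegg
    return math.log(max(p, 1e-9))  # clamp so a maximally wrong guess isn't -inf

def play_round(n_cues=5):
    truth_is_blegg = random.random() < 0.5
    print("Evidence:", ", ".join(noisy_cue(truth_is_blegg) for _ in range(n_cues)))
    confidence = float(input("Your confidence it's a blegg (0.0-1.0)? "))
    answer = "blegg" if truth_is_blegg else "rube"
    print(f"Word of God: it was a {answer}. "
          f"Score: {log_score(confidence, truth_is_blegg):.3f}")

if __name__ == "__main__":
    play_round()
```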

A more compelling example that won't turn off as many people (ew, edutainment? bo-ring) would probably be a multiplayer game in which the players are randomly led to believe incompatible conclusions and then interact. Availability of public information and the importance of having been right all along or committing strongly to a position early could be calibrated to target specific biases and fallacies.

As someone with aspirations to game design, I find this a particularly interesting concept. One notable aspect of video game culture is that most multiplayer games are one-offs from a social perspective: there's no social penalty for denigrating an ally's ability, since you will never see them again, and there's no gameplay penalty for being wrong. This means that in any facet of a game where trusting an ally is not strictly necessary, one can greatly underestimate the ally's skill FOREVER without ever being proven critically wrong. This makes online gaming perhaps the most fertile incubator of socially negative confirmation bias anywhere, ever. If an ally is judged poorly, there's no penalty for declaring them poor prematurely, and in fact people seem to apply profound confirmation bias to all evidence for the remainder of the game.

Could a game effectively be designed to target this confirmation bias and give the online gaming community a more constructive and realistic picture? I'll definitely be mulling this over. Great post.

Replies from: PeterisP
comment by PeterisP · 2011-03-01T18:14:35.932Z · LW(p) · GW(p)

If I understand your 'problem' correctly - estimating a potential ally's capabilities and being right/wrong about that (say, when considering teammates/guildmates/raid members/whatever) - then it's not a game-specific concept at all: it applies to any partner selection without perfect information, like mating or job interviews. As long as there is a large enough pool of potential partners, and you don't need all of the 'good' ones, false negatives don't really matter as much as the speed or ease of the selection process and the cost of false positives, where you trust someone and he turns out to be poor after all.

There's no major penalty for being picky and denigrating a potential mate (or hundreds of them), especially for females, as long as you get a decent one in the end. In such situations the optimal evaluation criteria seem to be 'better punish a hundred innocents than let one bad guy/loser past the filter' - the exact opposite of what most justice systems try to achieve.

There's no major penalty for, say, throwing out a random half of the CVs you receive for a job vacancy if you get too many responses. If you get a 98% 'fit' candidate up to the final in-person interviews, then it doesn't matter much that you lost a 99% candidate you never considered at all - the cost of interviewing an extra dozen losers would be greater than the benefit.
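
A quick simulation (numbers mine, purely illustrative) of why those false negatives are cheap: with 'fit' scores uniform on [0, 1], throwing away a random half of a 400-CV pile barely lowers the expected fit of the best remaining candidate.

```python
import random

random.seed(1)

def expected_best_fit(pool_size, trials=10_000):
    """Expected 'fit' of the best candidate when fits are uniform on [0, 1]."""
    return sum(max(random.random() for _ in range(pool_size))
               for _ in range(trials)) / trials

print(f"Best of 400 CVs: {expected_best_fit(400):.4f}")  # ~0.9975
print(f"Best of 200 CVs: {expected_best_fit(200):.4f}")  # ~0.9950
```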

The same situation also happens in MMOGs, and unsurprisingly people tend to find the same reasonable solutions as in real life.

comment by divia · 2011-02-28T23:02:25.021Z · LW(p) · GW(p)

I made an Anki deck of this post with the key 89ff552e6e8086a6.

Replies from: lukeprog, SoerenMind
comment by lukeprog · 2011-03-17T22:37:40.356Z · LW(p) · GW(p)

Do all your Anki decks go into the 'Less Wrong Sequences' deck, or are there a bunch of them all over the place? If the latter, is there a full list of your decks somewhere?

Replies from: divia
comment by divia · 2011-03-18T05:50:08.241Z · LW(p) · GW(p)

These are the only lesswrong cards I've made other than what's in the Less Wrong Sequences deck, but I have tons of other Anki decks. If you're interested in hearing about them, message me.

comment by SoerenMind · 2014-09-03T09:03:54.033Z · LW(p) · GW(p)

Does anyone know how to search for Anki decks by their key? I was thinking the number at the end of a shared-deck link (e.g. ankiweb.net/shared/info/1458237580) would work, but that number contains no letters, unlike the keys.

comment by Vladimir_Nesov · 2011-02-18T21:58:47.882Z · LW(p) · GW(p)

The intelligence community has a long history of interest in debiasing, and even produced a rationality handbook based on internal CIA publications from the '70s and '80s.

Thanks for the link, sounds interesting!

Replies from: Will_Newsome
comment by Will_Newsome · 2011-02-19T12:05:56.859Z · LW(p) · GW(p)

I heard from someone who heard from someone that the handbook was corrective, written in response to flagrant institutional anti-epistemology, and thus the culture might not be as rational as one might expect from just reading the CIA rationality publications. Just a warning...

comment by AlexMennen · 2011-02-19T22:11:44.705Z · LW(p) · GW(p)

Can you separate answers to "3. How do we use its existence to help us win?" from answers to the other two? Reading both in the same list can be rather jarring.

comment by steven0461 · 2011-02-18T21:16:21.550Z · LW(p) · GW(p)

This sounds useful but not like fun or like a game.

Replies from: Cosmos, Swimmer963
comment by Cosmos · 2011-02-20T21:50:02.590Z · LW(p) · GW(p)

It doesn't sound like fun to you, which implies you didn't try it. FWIW, everyone who participated thought it was one of the most fun meetups we have had to date, and I greatly enjoyed the activity myself. The fast pace kept everyone fully engaged, and the rotating topics kept the conversation from getting bogged down. Cognitive biases are a topic of interest for our rationalist group; doing this alone instead of with a group of friends might indeed be less fun, but as you pointed out, it would still be quite useful.

Replies from: rfrankel
comment by rfrankel · 2011-02-25T00:27:02.061Z · LW(p) · GW(p)

Seconded - I was one of the participants and it was, indeed, fun. There were plenty of laughs, and even if there hadn't been, hanging out with good people and learning counts as fun in my book.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2011-02-18T23:36:23.772Z · LW(p) · GW(p)

I think this was more of a preliminary brainstorm. Making games would require a lot of people-hours of work. Although a good idea! (Possible birthday present for family member = java applet rationality game?)

comment by Psy-Kosh · 2011-02-18T22:12:58.389Z · LW(p) · GW(p)

Interesting, though one bit confused me.

Could you clarify this bit?

Get people to internalize the FAE about their own behavior to take more agency in their lives

Thanks.

Replies from: Cosmos
comment by Cosmos · 2011-02-20T21:52:03.550Z · LW(p) · GW(p)

We were not sure how exactly to accomplish this, but if you could convince someone that the outcomes in their life were primarily a result of their effort (instead of being dictated by external circumstances), that could motivate them to try harder.

Replies from: Psy-Kosh
comment by Psy-Kosh · 2011-02-22T16:48:29.930Z · LW(p) · GW(p)

Ah, thanks. Though the flipside of that might be that it might convince them that past failures prove that they're a lost cause.

Replies from: bigjeff5
comment by bigjeff5 · 2011-02-28T01:42:48.608Z · LW(p) · GW(p)

That's when you help them crank up the Overconfidence Bias. ;)

comment by zntneo · 2011-02-25T06:07:17.100Z · LW(p) · GW(p)

"Do not cite studies, turn the results of the study into a story" could you give an example?