Biases: An Introduction

post by Rob Bensinger (RobbBB) · 2015-03-11T19:00:31.605Z · LW · GW · 14 comments

Contents

  Noticing Bias
  A Word About This Text
14 comments

Imagine reaching into an urn that contains seventy white balls and thirty red ones, and plucking out ten mystery balls.

Perhaps three of the ten balls will be red, and you’ll correctly guess how many red balls total were in the urn. Or perhaps you’ll happen to grab four red balls, or some other number. Then you’ll probably get the total number wrong.

This random error is the cost of incomplete knowledge, and as errors go, it’s not so bad. Your estimates won’t be incorrect on average, and the more you learn, the smaller your error will tend to be.

On the other hand, suppose that the white balls are heavier, and sink to the bottom of the urn. Then your sample may be unrepresentative in a consistent direction.

That kind of error is called “statistical bias.” When your method of learning about the world is biased, learning more may not help. Acquiring more data can even consistently worsen a biased prediction.

If you’re used to holding knowledge and inquiry in high esteem, this is a scary prospect. If we want to be sure that learning more will help us, rather than making us worse off than we were before, we need to discover and correct for biases in our data.
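To see the difference concretely, here is a minimal simulation sketch (the numbers and the `draw_biased` sampler are purely illustrative assumptions, standing in for the sinking white balls): with unbiased draws, averaging many estimates converges on the true answer, while with biased draws it converges on the wrong one no matter how much data you collect.

```python
import random

URN = [1] * 30 + [0] * 70  # 1 = red, 0 = white: 30 red balls out of 100

def estimate_total_red(sample):
    """Scale a 10-ball sample up to an estimate for the 100-ball urn."""
    return sum(sample) * 10

def draw_unbiased(urn, k=10):
    """Every ball is equally likely to be drawn."""
    return random.sample(urn, k)

def draw_biased(urn, k=10):
    """Hypothetical bias: white balls sink, so draws come only from the
    top half of the urn, which is enriched in red balls."""
    top_half = sorted(urn, reverse=True)[:50]
    return random.sample(top_half, k)

for name, draw in [("unbiased", draw_unbiased), ("biased", draw_biased)]:
    estimates = [estimate_total_red(draw(URN)) for _ in range(10_000)]
    print(name, sum(estimates) / len(estimates))
# unbiased: averages ~30 (random error washes out with more samples)
# biased:   averages ~60, and more samples don't fix it
```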

The idea of cognitive bias in psychology works in an analogous way. A cognitive bias is a systematic error in how we think, as opposed to a random error or one that’s merely caused by our ignorance. Whereas statistical bias skews a sample so that it less closely resembles a larger population, cognitive biases skew our thinking so that it less accurately tracks the truth (or less reliably serves our other goals).

Maybe you have an optimism bias, and you find out that the red balls can be used to treat a rare tropical disease besetting your brother, and you end up overestimating how many red balls the urn contains because you wish the balls were mostly red.

Like statistical biases, cognitive biases can distort our view of reality, they can’t always be fixed by just gathering more data, and their effects can add up over time. But when the miscalibrated measuring instrument you’re trying to fix is you, debiasing is a unique challenge.

Still, this is an obvious place to start. For if you can’t trust your brain, how can you trust anything else?

Noticing Bias

Imagine meeting someone for the first time, and knowing nothing about them except that they’re shy.

Question: Is it more likely that this person is a librarian, or a salesperson?

Most people answer “librarian.” Which is a mistake: shy salespeople are much more common than shy librarians, because salespeople in general are much more common than librarians—seventy-five times as common, in the United States.¹

This is base rate neglect: grounding one’s judgments in how well sets of characteristics feel like they fit together, and neglecting how common each characteristic is in the population at large.² Another example of a cognitive bias is the sunk cost fallacy—people’s tendency to feel committed to things they’ve spent resources on in the past, when they should be cutting their losses and moving on.
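As a sanity check on the librarian question, here is a small Bayes' rule sketch. The 75-to-1 base rate comes from the text; the shyness rates are invented for illustration, not measured values.

```python
# Base rates from the text: 75 salespeople for every librarian.
p_librarian = 1 / 76
p_salesperson = 75 / 76

# Likelihood of shyness in each occupation -- illustrative assumptions only.
p_shy_given_librarian = 0.50
p_shy_given_salesperson = 0.10

# Unnormalized posteriors; the shared P(shy) denominator cancels in the ratio.
score_librarian = p_shy_given_librarian * p_librarian
score_salesperson = p_shy_given_salesperson * p_salesperson

print(score_librarian / score_salesperson)  # ~0.07: "salesperson" is ~15x more likely
```

With these made-up rates, shyness would have to be more than 75 times as common among librarians as among salespeople before "librarian" became the better guess, a point several commenters below work through.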

Knowing about these biases, unfortunately, doesn’t make you immune to them. It doesn’t even mean you’ll be able to notice them in action.

In a study of bias blindness, experimental subjects predicted that they would have a harder time neutrally evaluating the quality of paintings if they knew the paintings were by famous artists. And indeed, when the experimenters later put this to the test, the subjects exhibited exactly the bias they had predicted. Asked afterward, however, the very same subjects claimed that their assessments of the paintings had been objective and unaffected by the bias.³

Even when we correctly identify others’ biases, we exhibit a bias blind spot when it comes to our own flaws.⁴ Failing to detect any “biased-feeling thoughts” when we introspect, we draw the conclusion that we must just be less biased than everyone else.⁵

Yet it is possible to recognize and overcome biases. It’s just not trivial. Subjects can reduce base rate neglect, for example, by thinking of probabilities as frequencies of objects or events.
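For instance, re-using the made-up rates from the sketch above: imagine 7,600 workers, of whom 100 are librarians and 7,500 are salespeople. If half the librarians (50) and a tenth of the salespeople (750) are shy, then a shy person drawn from this group is a salesperson 750 times out of 800; framed as counts rather than probabilities, the base rate is much harder to overlook.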

The approach to debiasing in this book is to communicate a systematic understanding of why good reasoning works, and of how the brain falls short of it. To the extent this volume does its job, its approach can be compared to the one described in Serfas (2010), who notes that “years of financially related work experience” didn’t affect people’s susceptibility to the sunk cost bias, whereas “the number of accounting courses attended” did help.

As a consequence, it might be necessary to distinguish between experience and expertise, with expertise meaning “the development of a schematic principle that involves conceptual understanding of the problem,” which in turn enables the decision maker to recognize particular biases. However, using expertise as countermeasure requires more than just being familiar with the situational content or being an expert in a particular domain. It requires that one fully understand the underlying rationale of the respective bias, is able to spot it in the particular setting, and also has the appropriate tools at hand to counteract the bias.⁶

The goal of this book is to lay the groundwork for creating rationality “expertise.” That means acquiring a deep understanding of the structure of a very general problem: human bias, self-deception, and the thousand paths by which sophisticated thought can defeat itself.

A Word About This Text

Map and Territory began its life as a series of essays by decision theorist Eliezer Yudkowsky, published between 2006 and 2009 on the economics blog Overcoming Bias and its spin-off community blog Less Wrong. Thematically linked essays were grouped together in “sequences,” and thematically linked sequences were grouped into books. Map and Territory is the first of six such books, with the series as a whole going by the name Rationality: From AI to Zombies.⁷

In style, this series runs the gamut from “lively textbook” to “compendium of vignettes” to “riotous manifesto,” and the content is correspondingly varied. The resultant rationality primer is frequently personal and irreverent—drawing, for example, from Yudkowsky’s experiences with his Orthodox Jewish mother (a psychiatrist) and father (a physicist), and from conversations on chat rooms and mailing lists. Readers who are familiar with Yudkowsky from Harry Potter and the Methods of Rationality, his science-oriented take-off of J.K. Rowling’s Harry Potter books, will recognize the same iconoclasm, and many of the same themes.

The philosopher Alfred Korzybski once wrote: “A map is not the territory it represents, but, if correct, it has a similar structure to the territory, which accounts for its usefulness.” And what can be said of maps here, as Korzybski noted, can also be said of beliefs, and assertions, and words.

“The map is not the territory.” This deceptively simple claim is the organizing idea behind this book, and behind the four sequences of essays collected here: Predictably Wrong, which concerns the systematic ways our beliefs fail to map the real world; Fake Beliefs, on what makes a belief a “map” in the first place; Noticing Confusion, on how this world-mapping thing our brains do actually works; and Mysterious Answers, which collides these points together. The book then concludes with “The Simple Truth,” a stand-alone dialogue on the idea of truth itself.

Humans aren’t rational; but, as behavioral economist Dan Ariely notes, we’re predictably irrational. There are patterns to how we screw up. And there are patterns to how we behave when we don’t screw up. Both admit of fuller understanding, and with it, the hope of leaning on that understanding to build a better future for ourselves.


¹ Wayne Weiten, Psychology: Themes and Variations, Briefer Version, Eighth Edition (Cengage Learning, 2010).

² Richards J. Heuer, Psychology of Intelligence Analysis (Center for the Study of Intelligence, Central Intelligence Agency, 1999).

³ Katherine Hansen et al., “People Claim Objectivity After Knowingly Using Biased Strategies,” Personality and Social Psychology Bulletin 40, no. 6 (2014): 691–699.

⁴ Emily Pronin, Daniel Y. Lin, and Lee Ross, “The Bias Blind Spot: Perceptions of Bias in Self versus Others,” Personality and Social Psychology Bulletin 28, no. 3 (2002): 369–381.

⁵ Joyce Ehrlinger, Thomas Gilovich, and Lee Ross, “Peering Into the Bias Blind Spot: People’s Assessments of Bias in Themselves and Others,” Personality and Social Psychology Bulletin 31, no. 5 (2005): 680–692.

⁶ Sebastian Serfas, Cognitive Biases in the Capital Investment Context: Theoretical Considerations and Empirical Experiments on Violations of Normative Rationality (Springer, 2010).

⁷ The first edition of Rationality: From AI to Zombies was released as a single sprawling ebook, before the series was edited and split up into separate volumes. The full book can also be found at http://lesswrong.com/rationality.

14 comments


comment by colossal_noob · 2020-01-11T01:21:33.843Z · LW(p) · GW(p)

Hi guys,

I'm really not happy about this claim:

"Most people answer “librarian.” Which is a mistake: shy salespeople are much more common than shy librarians, because salespeople in general are much more common than librarians—seventy-five times as common, in the United States."

The question is whether or not the person is more likely to be a librarian or a salesperson given that we know that they're shy. In other words, it's a posterior probability. It's a question about P(librarian|shy) vs. P(salesperson|shy). The statement that salespeople are, in general, 75 times more common than librarians is a question of prior probability, i.e. P(librarian) vs. P(salesperson).

We can easily make it be the case that the shy person is still more likely to be a librarian despite the prior probabilities given above by just saying "Assume 100% of librarians are shy and 1% of salespeople are shy." Now, given that the person is shy, the odds are 1:0.75 that they are a librarian.

comment by Said Achmiz (SaidAchmiz) · 2020-01-10T09:59:17.635Z · LW(p) · GW(p)

Indeed. I made the same point elsewhere [LW(p) · GW(p)], and furthermore concluded that the claim that the subjects were succumbing to base rate neglect is not well-supported by the source material. (In fact, even the conclusion that “librarian” is the wrong answer is not supported by the cited sources!)

comment by Rafael Harth (sil-ver) · 2021-11-20T09:55:21.104Z · LW(p) · GW(p)

There is no way that the posterior odds end up favoring the librarian; I would be very surprised if they even came close. Spelled out, the point of the example seems to be "people forget the base rate, and once you know the base rate, it's obvious that it's more significant than the update based on shyness". I don't need a source for this; it doesn't matter exactly how strong the update based on shyness is; any of that is dominated by the base rate.

comment by [deleted] · 2021-06-30T10:23:21.055Z · LW(p) · GW(p)

loti made the point I'm about to make above, but appears to have taken it back; I'm not sure why, as it seems totally right.

Anyway: it's certainly true that it doesn't strictly follow from the fact that there are 75 times as many salespeople as librarians (and you know this) that you ought to be more confident that someone is a salesperson than a librarian, if all you know about them is that they are shy. However, that conclusion does follow on totally plausible assumptions about the frequency of shy people among librarians and the frequency of shy people among salespeople (and you having credences close to these frequencies). It follows, for instance, if four out of five librarians are shy, and only one in twenty salespeople are shy.

For it not to be the case that you should think the person is more likely to be a salesperson, given the base rate, you would have to think the frequency of shy people among librarians is 75 times higher than the frequency of shy people among salespeople. For that to be possible, the rate of shy people among salespeople would have to be less than or equal to one in 75. That is very low; I'd find that somewhat surprising. Even more surprising would be to find out that the frequency of shy librarians is close to 100%.

Maybe that's the case; I don't know. Probably Rob Bensinger doesn't know for sure either; so yeah, he probably shouldn't have been so categorical when he said that this is "a mistake". But I think we can forgive Rob Bensinger here for using this as an example of base-rate neglect, because it's pretty plausible that it is.

In fact, even if, as it happens, the numbers work out, and people get the right answer here, I expect most people don't worry about the base rate at all when they answer this question, so they're getting the right answer purely by luck; if that's right, then this would still be an example of base-rate neglect.

comment by mingyuan · 2020-06-03T20:26:43.819Z · LW(p) · GW(p)

Ah! Yes! I've never been able to properly formulate an answer to why this example bothers me so much, but you did it! Thank you!

comment by Fanilo RABENJAMINA (fanilo-rabenjamina) · 2023-02-08T12:31:40.972Z · LW(p) · GW(p)

"25%" of mankind are shy.

"75%" of librarians are shy.

"1%" of salesmen are shy.

The most valued finding (environment's milestone) is the shy salesman. The average valued finding is the shy librarian or the corollary bookworm. We already know shy people in our surroundings. We are searching for objects that map the territory. The bias is about reading the map, not seeing its heterogeneity or multiple authors.

comment by loti · 2020-03-01T04:39:03.026Z · LW(p) · GW(p)

Hi colossal_noob,

The point this example is trying to make, perhaps, can be better understood with the expansion of Bayes' rule.

P(librarian|shy) = (P(shy|librarian) * P(librarian)) / P(shy)

P(salespeople|shy) = (P(shy|salespeople) * P(salespeople)) / P(shy)

The cognitive bias presented here is to ignore the difference between P(librarian) and P(salespeople), and to draw a conclusion based solely on P(shy|librarian) vs. P(shy|salespeople). Since librarians are more likely to be shy (i.e., P(shy|librarian) > P(shy|salespeople)), the bias leads to the wrong conclusion that P(librarian|shy) > P(salespeople|shy).
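For instance, with purely illustrative numbers P(shy|librarian) = 0.5, P(shy|salespeople) = 0.1, P(librarian) = 1/76, and P(salespeople) = 75/76: the likelihood comparison alone points at the librarian, but the numerators of Bayes' rule are 0.5 × 1/76 ≈ 0.007 versus 0.1 × 75/76 ≈ 0.099, so the salesperson is still roughly fifteen times more likely.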

comment by Дмитрий Зеленский (dmitrii-zelenskii) · 2019-08-21T11:27:00.298Z · LW(p) · GW(p)

"Most people answer “librarian.” Which is a mistake: shy salespeople are much more common than shy librarians, because salespeople in general are much more common than librarians—seventy-five times as common, in the United States" - ...this completely ignores the fact that works have personality requirements. Salespeople have to actually, y'know, talk to many people. I would not deem impossible that less than half a percent of salespeople and more than of half of librarians are shy.

comment by jora · 2019-08-22T03:57:29.605Z · LW(p) · GW(p)

Considering that salespeople are seventy-five times as common as librarians, your estimates would give 7.5 times more shy salespeople than shy librarians. You fell prey to base rate neglect right after reading about it, which is a very good demonstration of bias blindness.

My math was wrong

comment by haptic-feedback · 2018-12-02T14:30:51.082Z · LW(p) · GW(p)

There are several missing spaces on this page. For example:

- "books,covering"

- "somebeliefs"

- "otherobjects"

- "ourplace"

comment by Ray Culp · 2023-02-28T17:05:32.378Z · LW(p) · GW(p)

It seems to me that the problem with the "librarian / salesperson" example is that the term "shy" is not clearly defined. How shy? Does the person feel slightly uncomfortable speaking to unfamiliar people, or is it so severe that they start blushing, sweating and possibly stammering? How shy can a person be and still get a job as a salesperson? When does it become an exclusionary criterion, rendering the base rate irrelevant? So, for example, if you meet a new person and all you know about them is that they speak just one language, are they more likely to be a translator/interpreter or a pediatric surgeon? Well, obviously, the correct answer is translator/interpreter, since there are seventy times as many translators/interpreters in the United States as pediatric surgeons. Right? Or not? Hmmm... :-)

comment by E A (ebrahim akbari) · 2024-02-20T08:09:57.897Z · LW(p) · GW(p)

With respect to the “People Claim Objectivity After Knowingly Using Biased Strategies” study on bias blindness, I suggest that evaluating art is not an accurate barometer, as it is extremely abstract and difficult for a common person to gauge and rank effectively, so the only anchor for comparison would be the artist's credentials.

comment by Fanilo RABENJAMINA (fanilo-rabenjamina) · 2019-02-13T16:52:21.319Z · LW(p) · GW(p)

"statistical unbiased" is important for a data project but is neglected by everyday's intuition because you will never meet the full dataset, the thousands of persons or the red/white solution balls.

Intuition, or "System 1" in the article, is the most important for viablility and for survival. It really feels like System 1 has all the working memory it wants and system 2 has the burden of proof.

The inevitable bias is that the understanding process of System 2 seems to always end in System 1, System 2 has to cast knowledge into System 1's sensibility, improving intuition or failing to scale it up.

So how can we cast probabilities into System 1's decision making ?

Euclid knows how, axioms, definitions and forever true simple theorems. Math quantifiers can help too, it is very easy to cast their semantics in everyday's intuition.

The inevitable bias is our perception of our own intuition. Some people don't introspect, it even seems a sin for them, unnatural. It is not obvious for them that their intuition would benefit from litterally "bitting the apple".

I look at the sky: it is not really empty, it is blue because the biosphere absorbs the other wavelengths; the biosphere is warm, breathable, smooth, perfumed, but may be the only nest in the whole universe. Who wants to see the sky like that, or with even more discernment?

There is a bootstrap problem when System 2 knows that System 1 should change its paradigm, because the content has to be cast for System 1, which has the working memory.

For example, you can browse Wikipedia for Bayes' Theorem; it needs reading, interpretation, and weighting of the equations to sort the information in an order that your intuition approves of, factual and/or sensible, meaningful.

All these steps require working memory, so you're stuck there with your own IQ or with a very long list of intermediate steps.

After a few steps one may settle on the interpretation that, with Bayesian equations, you can quantify causal hypotheses and update the weights of each cause after each event until the set of causes becomes stable.

A machine could run tons of experiments within an hour and store the stable causality chain in an unreadable format.

The inevitable bias is that you don't see any causality chain worth calculating every day, incessantly, consciously. None; some System 1s just want to understand all the concepts involved in all the correct causality chains, to see the surrounding reality as if through a telescope, in line with their idea of a viable Homo sapiens.

Telling the story of all the concepts involved in all the correct causality chains seems worthwhile too, according to the idea that every Homo sapiens is part of progress through pedagogy.

If you want to cast probabilities into your intuitive decision making, you need decision-making archetypes that first convince you that calculating is worth the pain, because it clearly improves the archetypes' efficiency in real life.

An arbitrary example: Hacking a dating site https://www.youtube.com/watch?v=d6wG_sAdP0U

It's an arbitrary example that came to my mind. The real idea is to convince shy Steve, the librarian, that talking to the nice girl he meets every day is not as risky as he thinks.

Steve is shy: first, he knows he could embarrass the girl, and he cannot predict that. Second, he thinks that talking to her only as a friend is a betrayal of his feelings and just another risk.

So Steve has absolutely no clue about the correct causality chain of seducing the girl he likes, and he absolutely cannot start with a random weighting of the possible causes for the first try.

Steve's System 1 needs an archetype of girls' affection (I don't really know), updatable in a few retries and enlightening about his own feelings of love.

The target event is a radical, active "can you date me" with fewer risks.

The 1st intuition may be that a girl's affective attention is dynamic, not static. And it's not dynamic in all directions.

The 2nd intuition may be that if he can contemplate his girl's best behavior, then he should become aware that he himself has a side worth contemplating.

Then Steve should bite the apple for his true love. Having a seduction agenda seems impure. But love is to think very frequently about a person: the more you care, the more you remember the context.

If Steve can intuitively see the library as the scene where his beloved deploys her dynamic affective attention, then he has room to catch some recurring declaration windows.

Aware that his love is diverse, he may see different windows: one for taking care, one for curiosity, one for physical beauty, one for romance, one for enthusiasm, one for a radical dating declaration.

How can calculations really improve this archetype of love decision making? I'm not sure; at the archetype stage, quantitative and numerical are not the same thing.

When you search for declaration windows for the different aspects of your love, you compare how tranquil each area is, extrapolate moods, rate responses to radical dating, and understand how you match each other.

So once you are in a rational fall into love, the more the causes are identified, the less a numerical, unbiased reasoning would harm your feelings of love.

But the numerical extreme is necessary only for someone who can meet the entire dataset. Maybe there will be a mobile application to reduce the divorce rate.