The Boundaries of Biases

post by Vaniver · 2010-12-01T00:43:02.154Z · LW · GW · Legacy · 15 comments

Thinking about meta-contrarianism and biases (among other things), I came to the following question:

When are biases a good thing?

Since the caution is too important to leave to the conclusion, I'm going to put it before I give an answer, even though the flow will be fudged as a result. In Epistemology and the Psychology of Human Judgment (a book I strongly recommend), Bishop and Trout talk a lot about statistical prediction rules, where linear regressions on data often outperform human experts. One of the findings they discussed was that not only did experts have lower accuracy than the statistically generated rules, but when given the result of the rule and the option to defect from its prediction, they were much more likely to choose to defect when the rule was right and they were wrong than the other way around. So, for almost all of the experts, the best choice was "stick to the rule, even if you think it's wrong." Likewise, even if you've got a long explanation as to why your action isn't biased and how this is a good idea just this once, you should stick to the rule.
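For concreteness, a statistical prediction rule is just a fixed formula applied mechanically to every case- something like the minimal sketch below, where the cues, weights, and threshold are hypothetical placeholders rather than anything taken from Bishop and Trout.

```python
# A statistical prediction rule (SPR): a fixed linear combination of a few
# cues, applied the same way to every case. The cue names, weights, and
# threshold below are hypothetical placeholders, not taken from the book.
WEIGHTS = {"cue_a": 0.6, "cue_b": 0.3, "cue_c": -0.4}
THRESHOLD = 0.5

def spr_predict(case):
    score = sum(weight * case[name] for name, weight in WEIGHTS.items())
    return score > THRESHOLD

def final_decision(case, expert_disagrees):
    # The finding described above: even when the expert is sure the rule is
    # wrong this time, deferring to the rule is the better policy on average,
    # so the expert's objection is deliberately ignored here.
    return spr_predict(case)

print(final_decision({"cue_a": 1.0, "cue_b": 0.5, "cue_c": 0.2}, expert_disagrees=True))
```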

But there's still something very important to keep in mind: rules have domains in which they apply. "Do as the Romans do" has the domain of "when in Rome." Even if the rule is a black box, such that you do not understand how it created its outputs given its inputs, you can trust the outputs so long as you know which inputs are appropriate for that box. Sticking to the behavior of other Romans will make you fit in better, even if you're not socially aware enough to notice differences in how much you fit in. But if you're in Japan, "do as the Romans do" is bad advice.

When you know how the rule works- when you're at level 2 or 3 understanding- then you can probably look at a rule and decide if it's appropriate in that circumstance, because you can see what determines the domain of the rule. When you understand the proverb, you know it really means "When in X, do as the residents of X do", and then you can pretty easily figure out that "do as the Romans do" only fits when you're in Rome. But, as we can see with this toy example, pushing these boundaries helps you understand the rule- "why is it a bad idea to do what the Romans do in Japan?"

On that note, let's return to our original question. When are biases a good thing?

Someone who likes to capitalize the word Truth would probably instantly reply "never!". But the fact that their reply was instant and their capitalization odd should give us pause. That sounds a lot like a bias, and we're trying to evaluate those. It could be that adjusting the value of truth upwards in all situations is a good bias to have, but we've got to establish that at a more fundamental level.

And so after a bit of thought, we come up with a better way to redefine the original problem. "If a bias is a decision-making heuristic that has negative instrumental value, are there ranges where that same decision-making heuristic has positive instrumental value?"

Three results immediately pop out: the first is that we've constructed a tautological answer to our initial question- never, since we've defined them as bad.

The second result is still somewhat uninteresting- as your thoughts take time and energy, decision-making heuristics have a natural cost involved. Meticulous but expensive heuristics can have negative value compared to sloppy but cheap heuristics for many applications; you might be better off making a biased decision about laundry detergent since you can use that time and energy to better effect elsewhere. But sometimes it's worth expunging low-damage biases at moderate cost because bias-expunging experience is a good in its own right; then, when it comes time to make a big decision about buying a house or a car, you can use the skills at unbiased purchasing you developed by treating laundry detergent as a tough choice.
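To put rough numbers on that trade-off, here's a minimal sketch; the quality and cost figures are arbitrary illustrative values, not data.

```python
# Net value of a decision procedure: the quality of the option it picks
# minus the time and energy spent deciding. Numbers are purely illustrative.
def net_value(expected_quality, deliberation_cost):
    return expected_quality - deliberation_cost

# Laundry detergent: the stakes are small, so care can't buy much quality.
cheap_detergent = net_value(expected_quality=9.0, deliberation_cost=0.1)
careful_detergent = net_value(expected_quality=10.0, deliberation_cost=3.0)

# A house: the stakes are large, so the same deliberation cost is easily repaid.
cheap_house = net_value(expected_quality=900.0, deliberation_cost=0.1)
careful_house = net_value(expected_quality=1000.0, deliberation_cost=3.0)

print(cheap_detergent > careful_detergent)  # True: sloppy-but-cheap wins on small stakes
print(careful_house > cheap_house)          # True: meticulous-but-expensive wins on big stakes
```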

The more interesting third result is a distinction between bias and bigotry. Consider a bigoted employer and a biased employer: the bigoted employer doesn't like members of a particular group and the biased employer misjudges members of a particular group. The bigoted employer would only hire people he dislikes if the potential hire's superiority to the next best candidate is high enough to overcome the employer's dislike of the potential hire, and the biased employer wants to hire the best candidate but is unconsciously misjudging the quality of the potential hire. Both will only choose the disliked/misjudged potential hire in the same situation- where the innate quality difference is higher than the degree of dislike/misjudgment- but if you introduce a blind system that masks the disliked characteristic of potential hires, they have opposite responses. The bigoted employer is made worse off- now when choosing between the top set of candidates he might accidentally choose a candidate that would satisfy him less than one he chose with perfect information- but the biased employer is made better off- now, instead of having imperfect information he has perfect information, and will never accidentally choose a candidate that would satisfy him less than the one he chose otherwise. Notice that subtracting data made the biased employer's available information more perfect.
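To see the opposite responses in miniature, here's a minimal simulation sketch; the quality distribution, group frequency, and penalty size are all assumptions made up for illustration, with the same penalty standing in for the bigoted employer's dislike and the biased employer's misjudgment.

```python
import random

# Toy model: each candidate has a true quality, and may belong to group G.
def make_candidates(n=10):
    return [{"quality": random.gauss(0, 1), "in_group": random.random() < 0.5}
            for _ in range(n)]

def hire(candidates, perceived_score):
    return max(candidates, key=perceived_score)

def experiment(trials=10000, penalty=1.0):
    # The same penalty plays two different roles below: genuine dislike for
    # the bigoted employer, unconscious misjudgment for the biased employer.
    def with_penalty(c):
        return c["quality"] - penalty * c["in_group"]

    def quality_only(c):
        return c["quality"]

    totals = {"bigot_seen": 0.0, "bigot_blind": 0.0,
              "biased_seen": 0.0, "biased_blind": 0.0}
    for _ in range(trials):
        cands = make_candidates()
        # Bigoted employer: his realized satisfaction really is quality minus dislike.
        totals["bigot_seen"] += with_penalty(hire(cands, with_penalty))
        totals["bigot_blind"] += with_penalty(hire(cands, quality_only))
        # Biased employer: he only cares about quality, but misperceives it
        # whenever he can see group membership.
        totals["biased_seen"] += hire(cands, with_penalty)["quality"]
        totals["biased_blind"] += hire(cands, quality_only)["quality"]

    for name, total in totals.items():
        print(name, round(total / trials, 3))

experiment()
# Blinding lowers the bigoted employer's average satisfaction and raises the
# quality of the biased employer's average hire.
```

The key design choice is that blinding changes what the employer can condition on, not what he actually experiences after hiring- which is why the same intervention helps one employer and hurts the other.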

That distinction is incredibly valuable for anti-discrimination thinkers and anyone who talks with them; much of the pushback against anti-discrimination measures seems to come from people not being equipped to think about and easily discuss the difference between bigotry and bias.

This is a question that we can ask about every bias. Is racism ever appropriate? Yes, if you're casting for a movie where the character's race is relevant. Is sexism ever appropriate? Yes, if you're looking to hire a surrogate mother (or, for many of us, a mate). But for other biases the question becomes more interesting.

For example, when does loss aversion represent a decision-making heuristic with positive instrumental value?

First, we have to identify the decision-making heuristic. A typical experiment that demonstrates loss aversion presents the subject with a gamble: 50% chance of gaining X, 50% chance of losing a pre-determined number between 0 and X. Any loss in that range gives the gamble a positive expected value, so a quick calculation suggests that taking the gamble is a good plan. But unless the loss is small enough (typical values of the loss aversion coefficient are about 2, so unless the loss is less than X/2), subjects don't take the bet, even though the expected value is positive. That looks an awful lot like "double-count your losses when comparing them to gains."
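Here's that arithmetic as a minimal sketch, using the roughly-2 coefficient mentioned above:

```python
# The gamble: 50% chance of gaining X, 50% chance of losing L, with 0 <= L <= X.
def expected_value(gain, loss):
    return 0.5 * gain - 0.5 * loss

def loss_averse_value(gain, loss, loss_weight=2.0):
    # The heuristic: count losses double when comparing them to gains.
    return 0.5 * gain - 0.5 * loss_weight * loss

X = 100.0
for L in (10.0, 40.0, 60.0, 90.0):
    print(f"loss={L:5.1f}  EV={expected_value(X, L):6.1f}  "
          f"loss-averse value={loss_averse_value(X, L):6.1f}  "
          f"take the bet? {loss_averse_value(X, L) > 0}")
# The expected value is positive for every loss below X, but the loss-averse
# evaluation only turns positive once the loss drops below X/2 (here, 50).
```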

When is that a good heuristic? Well, it is if utility is nonlinear; but the coefficient of 2 for losses seems pretty durable, suggesting it's not people doing marginal benefit / marginal loss calculations in their heads. The heuristic seems well-suited to an iterated zero-sum game where your losses benefit the person you lose to, but your opponent's losses aren't significant enough to enter your calculations. If you're playing a game against one other person, then if they lose you win. But if you're in a tournament with 100 entrants, the benefit to you from your opponent's loss is almost zero, while the loss to you from your own loss is still doubled- you've fallen down, and in doing so you've lifted a competitor up.
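One way to make that concrete is a minimal sketch under an assumed scoring rule: what you care about is your standing relative to the rival who profits when you lose, while your gains come from neutral sources.

```python
# Relative standing against a particular rival: my resources minus theirs.
def relative_standing(mine, rival):
    return mine - rival

me, rival = 10.0, 10.0
baseline = relative_standing(me, rival)

# A gain taken from a neutral source: no tracked rival loses anything.
gain_swing = relative_standing(me + 1, rival) - baseline      # +1.0

# A loss that is a transfer to the rival: I fall and they rise.
loss_swing = relative_standing(me - 1, rival + 1) - baseline  # -2.0

print(gain_swing, loss_swing)  # losses swing twice as far as gains
```

That 1:2 ratio is the loss-aversion coefficient falling out of the structure of the game rather than out of nonlinear utility.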

An example of a bias false positive (calling a decision biased when the decision was outside of the bias's domain) for loss aversion is here, from our first Diplomacy game in the Less Wrong Diplomacy Lab. Germany had no piece in Munich, which was threatened by an Italian piece with other options, and Germany could move his piece in Kiel to Munich (preventing Italian theft) or to Holland (gaining an independent center but risking Munich). If Germany made decisions based only on the number of supply centers he would control after 1901, he would prefer Kie-Hol to Kie-Mun at P(Tyr-Mun)<1, and only be indifferent at P(Tyr-Mun)=1. If Germany made decisions based on the number of supply centers he would control after 1901 minus the number of supply centers the other players controlled divided by 6 (Why 6? Because each player has 6 opponents, and this makes Germany indifferent to a plan that increases or decreases the number of centers each player controls by the same amount), he would be indifferent at P(Tyr-Mun)=5/6. If Germany didn't discount the gains of other countries, he would be indifferent at P(Tyr-Mun)=1/2. If Germany takes into account board position, the number drops even lower- and if Germany has a utility function over supply centers that drops going from 5 centers to 6, as suggested by Douglas Knight, then Germany might never be indifferent between Hol and Mun (but in such a situation, Germany would be unlikely to subscribe to loss aversion).


A final note: we're talking about heuristics here- the ideal plan in every case is to correctly predict utility in all possible outcomes and maximize predicted utility. But we have to deal with real plans, which almost always involve applying rules to situations and pattern-matching. I've just talked about one bias in one situation here- which biases have you internalized more deeply by exploring their boundaries? Which biases do you think have interesting boundaries?

15 comments


comment by DSimon · 2010-12-01T05:21:15.725Z · LW(p) · GW(p)

The more interesting third result is a distinction between bias and bigotry. Consider a bigoted employer and a biased employer: the bigoted employer doesn't like members of a particular group and the biased employer misjudges members of a particular group. [...] if you introduce a blind system that masks the disliked characteristic of potential hires, they have opposite responses. The bigoted employer is made worse off- now when choosing between the top set of candidates he might accidentally choose a candidate that would satisfy him less than one he chose with perfect information- but the biased employer is made better off [...]

This is a really interesting point. One thing of note is that in practice there's likely a spectrum between outright bigotry and completely unconscious bias, just as there are points in-between deliberate lying and innocent self-deception.

comment by Eugine_Nier · 2010-12-01T04:58:54.338Z · LW(p) · GW(p)

Likewise, even if you've got a long explanation as to why your action isn't biased and how this is a good idea just this once, you should stick to the rule.

Reminds me of Eliezer's post on ethical inhibitions.

comment by lukeprog · 2010-12-25T17:10:34.546Z · LW(p) · GW(p)

I second the recommendation for Epistemology and the Psychology of Human Judgment.

comment by DSimon · 2010-12-01T05:18:00.267Z · LW(p) · GW(p)

Meticulous but expensive heuristics can have negative value compared to sloppy but cheap heuristics for many applications; you might be better off making a biased decision about laundry detergent since you can use that time and energy to better effect elsewhere.

Hold on, computationally cheap is not the same as biased. If you just want to get some laundry detergent and get out in a hurry, you can just go ahead and pick the familiar one (provided it's not too expensive) on the grounds that it's a safe decision and that the payoff of additional investigation will probably not be worth it.

There's no bias there, provided your confidence in your result is proportional to how well you investigated the question (i.e. from a brand being familiar, conclude "It's a pretty safe decision to buy this brand", not "This is the best brand of detergent").

Replies from: Vaniver
comment by Vaniver · 2010-12-01T11:03:58.076Z · LW(p) · GW(p)

Hold on, computationally cheap is not the same as biased.

You're right; I'm being sloppy with the word "bias," partly to make that point. The heuristics I'm interested in for that section aren't heuristics that are merely efficient, but heuristics that sacrifice accuracy for computational cheapness. "Just buy whatever you got last time" is not a good recipe for getting good detergent, because it doesn't even ask how satisfied you were with the detergent! But when you're in a hurry, increases in detergent quality are less valuable than the time it would take to get them.

Other options for sloppy but cheap heuristics would be things like "buy the detergent closest to the start of the aisle" or "buy the detergent with the prettiest packaging"- things that I would be willing to call "biased" in most senses of the word but wouldn't call heuristics with negative instrumental value.

Replies from: DSimon
comment by DSimon · 2010-12-01T16:30:57.065Z · LW(p) · GW(p)

The heuristics I'm interested in for that section aren't heuristics that are merely efficient, but heuristics that sacrifice accuracy for computational cheapness.

Ok, I see what you mean. Sounds like the difference between a bad bias and a good bias is whether the person realizes they're sacrificing accuracy, and is consistent in dealing with the consequences of that.

Or maybe a better way to put it is: bad biases lose accuracy, good biases only lose precision.

Replies from: Vaniver
comment by Vaniver · 2010-12-02T00:24:12.606Z · LW(p) · GW(p)

bad biases lose accuracy, good biases only lose precision.

That is a beautiful way to put it.

comment by Zachary_Kurtz · 2010-12-01T04:43:12.954Z · LW(p) · GW(p)

related idea: when, by seeking to improve our maps, could we lose instrumental rationality?

I have an example of this. I was at a meeting at work last year, where a research group was proposing (to get money for) a study to provide genetic "counseling" to poor communities in Harlem. One person raised the objection (paraphrasing): we can teach people as much as we can about real genetic risk factors for diseases, but without serious education, most people probably won't get it.

They'll hear "genes, risk factor" and probably just overestimate their actual risk, leading to poor decision-making based on misunderstood information. In striving to improve epistemic rationality we could impair true instrumental "winning."

So in this case, being completely naive leads to better outcomes than having more, if incomplete, knowledge.

Not sure what the outcome of the actual study was.

Replies from: Vaniver
comment by Vaniver · 2010-12-01T11:15:32.432Z · LW(p) · GW(p)

You might find this post interesting; I think it touches on the issue you bring up, but from another direction.

comment by bentarm · 2010-12-01T23:32:39.322Z · LW(p) · GW(p)

One of the findings they discussed was that not only did experts have lower accuracy than the statistically generated rules, but when given the result of the rule and the option to defect from its prediction, they were much more likely to choose to defect when the rule was right and they were wrong than the other way around.

Ok, I haven't read the book. But doesn't the second statement follow logically from the first? Both seem equivalent to saying that when the experts disagreed with the rule, the rule was right more often than they were.

Replies from: Vaniver, orthonormal
comment by Vaniver · 2010-12-02T01:04:23.223Z · LW(p) · GW(p)

But doesn't the second statement follow logically from the first? Both seem equivalent to saying that when the experts disagreed with the rule, the rule was right more often than they were.

Short answer: Yes, but people need help being logical.

Long answer: I lent out my copy, so I can't check that section of the book, but if I remember correctly there was a big discussion of how the experts, for almost every time they chose to defect, had come up with a long rationalization of why their defection was appropriate for this circumstance. The experts thought there were factors that didn't go into the model which were strongly relevant, and so on.

But, whenever that was true, that was generally something that could be introduced into the model, and the model would outperform the experts again.

It needs to be discussed explicitly because people hate the idea of not being able to switch to manual control. "Ok, maybe it's better at guessing than I am, but what if I know it's making the wrong choice?" The answer is "sorry, no, you don't know that at all." And that is so painful that, for most people, it needs to be swallowed separately. You don't need to believe you believe the rule is better, you need to believe the rule is better.

comment by orthonormal · 2010-12-02T23:12:13.915Z · LW(p) · GW(p)

I think the first finding was that experts did worse than the rule, when they weren't given the predictions of the rules; the second was that they still did worse than the rule when given its prediction in advance of answering.

comment by Eugine_Nier · 2010-12-01T05:28:38.462Z · LW(p) · GW(p)

This is a question that we can ask about every bias. Is racism ever appropriate? Yes, if you're casting for a movie where the character's race is relevant. Is sexism ever appropriate? Yes, if you're looking to hire a surrogate mother (or, for many of us, a mate). But for other biases the question becomes more interesting.

I'm curious what definition of racism/sexism you're using here? If you mean a biased belief that doesn't correspond to reality and/or a 'bigoted preference' that is effectively a terminal value, then the examples you give aren't racist.

If you mean making any decision on the basis of race, a less contrived, if more mind-killing, example is deciding whether to cross the street on the basis of the skin color of the person walking in your direction.

Replies from: Vaniver
comment by Vaniver · 2010-12-01T11:12:16.351Z · LW(p) · GW(p)

I'm curious what definition of racism/sexism you're using here?

That's a very good question, because those are very stretchy words. In those sentences, the definition I had in mind was "making a decision based on race/sex," with the intention of pointing out that sometimes making decisions based on race or sex is the optimal/correct thing to do. As you point out, those examples don't fit into either "bias" or "bigotry" because they involve correct judging and (with the possible exception of the mate example) no terminal value preferences.

I appreciate the suggestion, but I prefer to keep my posts as mind-friendly as possible.

comment by teageegeepea · 2010-12-01T02:04:29.903Z · LW(p) · GW(p)

I don't have much directly to say, just some links. I had a discussion with Paul Hewitt on biased hiring at Overcoming Bias which migrated to a top-level post at my own blog. It was not an ideal conversation since I was trying to find whether there was any data on gendered productivity differences and he was trying to exposit the logic of a model in which employers fail to learn.

Karl Smith replied to Bryan Caplan on stereotypes by analogizing it to pollution here.