Knowing About Biases Can Hurt People

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-04-04T18:01:50.000Z · LW · GW · Legacy · 82 comments

Once upon a time I tried to tell my mother about the problem of expert calibration, saying: “So when an expert says they’re 99% confident, it only happens about 70% of the time.” Then there was a pause as, suddenly, I realized I was talking to my mother, and I hastily added: “Of course, you’ve got to make sure to apply that skepticism evenhandedly, including to yourself, rather than just using it to argue against anything you disagree with—”

And my mother said: “Are you kidding? This is great! I’m going to use it all the time!”

Taber and Lodge’s “Motivated Skepticism in the Evaluation of Political Beliefs” describes the confirmation of six predictions:

  1. Prior attitude effect. Subjects who feel strongly about an issue—even when encouraged to be objective—will evaluate supportive arguments more favorably than contrary arguments.
  2. Disconfirmation bias. Subjects will spend more time and cognitive resources denigrating contrary arguments than supportive arguments.
  3. Confirmation bias. Subjects free to choose their information sources will seek out supportive rather than contrary sources.
  4. Attitude polarization. Exposing subjects to an apparently balanced set of pro and con arguments will exaggerate their initial polarization.
  5. Attitude strength effect. Subjects voicing stronger attitudes will be more prone to the above biases.
  6. Sophistication effect. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to the above biases.

If you’re irrational to start with, having more knowledge can hurt you. For a true Bayesian, information would never have negative expected utility. But humans aren’t perfect Bayes-wielders; if we’re not careful, we can cut ourselves.

I’ve seen people severely messed up by their own knowledge of biases. They have more ammunition with which to argue against anything they don’t like. And that problem—too much ready ammunition—is one of the primary ways that people with high mental agility end up stupid, in Stanovich’s “dysrationalia” sense of stupidity.

You can think of people who fit this description, right? People with high g-factor who end up being less effective because they are too sophisticated as arguers? Do you think you’d be helping them—making them more effective rationalists—if you just told them about a list of classic biases?

I recall someone who learned about the calibration/overconfidence problem. Soon after he said: “Well, you can’t trust experts; they’re wrong so often—as experiments have shown. So therefore, when I predict the future, I prefer to assume that things will continue historically as they have—” and went off into this whole complex, error-prone, highly questionable extrapolation. Somehow, when it came to trusting his own preferred conclusions, all those biases and fallacies seemed much less salient—leapt much less readily to mind—than when he needed to counter-argue someone else.

I told the one about the problem of disconfirmation bias and sophisticated argument, and lo and behold, the next time I said something he didn’t like, he accused me of being a sophisticated arguer. He didn’t try to point out any particular sophisticated argument, any particular flaw—just shook his head and sighed sadly over how I was apparently using my own intelligence to defeat itself. He had acquired yet another Fully General Counterargument.

Even the notion of a “sophisticated arguer” can be deadly, if it leaps all too readily to mind when you encounter a seemingly intelligent person who says something you don’t like.

I endeavor to learn from my mistakes. The last time I gave a talk on heuristics and biases, I started out by introducing the general concept by way of the conjunction fallacy and representativeness heuristic. And then I moved on to confirmation bias, disconfirmation bias, sophisticated argument, motivated skepticism, and other attitude effects. I spent the next thirty minutes hammering on that theme, reintroducing it from as many different perspectives as I could.

I wanted to get my audience interested in the subject. Well, a simple description of conjunction fallacy and representativeness would suffice for that. But suppose they did get interested. Then what? The literature on bias is mostly cognitive psychology for cognitive psychology’s sake. I had to give my audience their dire warnings during that one lecture, or they probably wouldn’t hear them at all.

Whether I do it on paper, or in speech, I now try to never mention calibration and overconfidence unless I have first talked about disconfirmation bias, motivated skepticism, sophisticated arguers, and dysrationalia in the mentally agile. First, do no harm!


Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).

comment by michael_vassar · 2007-04-04T19:17:04.000Z · LW(p) · GW(p)

Humans aren't just not perfect Bayesians. Very very few of us are even Bayesian wannabes. In essence, everyone who thinks that it is more moral/ethical to hold some proposition than to hold its converse is taking some criterion other than apparent truth as normative with respect to the evaluation of beliefs.

Replies from: DSimon, datadataeverywhere
comment by DSimon · 2010-09-30T15:32:36.223Z · LW(p) · GW(p)

This is something of a nitpick, but I think that it is more moral/ethical to hold a proposition than to hold its converse if there is good reason to think that that proposition is true. Is this un-Bayesian?

Replies from: robert-miles, christopherj
comment by Robert Miles (robert-miles) · 2011-12-04T13:03:37.052Z · LW(p) · GW(p)

It's a meta-level/aliasing sort of problem, I think. You don't believe it's more ethical/moral to believe any specific proposition, you believe it's more ethical/moral to believe 'the proposition most likely to be true', which is a variable which can be filled with whatever proposition the situation suggests, so it's a different class of thing. Effectively it's equivalent to 'taking apparent truth as normative', so I'd call it the only position of that format that is Bayesian.

comment by christopherj · 2013-10-14T02:30:58.962Z · LW(p) · GW(p)

This website seems to have two definitions of rationality: rationality as truth-finding, and rationality as goal-achieving. Since truth deals with "is", and morality deals with "ought", morality will be of the latter kind. Because they are two different definitions, at some point they can be at odds -- but what if your primary goal is truth-finding (which might be required by your statement if you make no exceptions for beneficial self-deception)? How would you feel about ignoring some truths, because they might lead you to miss other truths?

This article is about how learning some truths can prevent you from learning other truths, with an implication that order of learning will mitigate these effects. In some cases, you might be well served by purging truths from your mind (for example, "there is a minuscule possibility of X" will activate priming and the availability heuristic). Some truths are simply much more useful than others, so what do you do if some lesser truths can get in the way of greater truths?

Replies from: Nornagest
comment by Nornagest · 2013-10-14T02:55:59.006Z · LW(p) · GW(p)

Neither truth-finding nor goal-achieving quite captures the usual sense of the word around here. I'd say the latter is closer to how we usually use it, in that we're interested in fulfilling human values; but explicit, surface-level goals don't always further deep values, and in fact can be actively counterproductive thanks to bias or partial or asymmetrical information.

Almost everyone who thinks they terminally value truth-finding is wrong; it makes a good applause light, but our minds just aren't built that way. But since there are so many cognitive and informational obstacles in our way, finding the real truth is at some point going to be critically important to fulfilling almost any real-world set of human values.

On the other hand, I don't rule out beneficial self-deception in some situations, either. It shouldn't be necessary for any kind of hypothetical rationalist super-being, but there aren't too many of those running around.

comment by datadataeverywhere · 2010-09-30T17:00:30.723Z · LW(p) · GW(p)

This seems like a shorthand for denying the existence of morals and ethics. I don't think that's what you mean, but I've heard that exact argument used to support nihilism.

If I say "torture is unethical", I might mean "I believe that torture, for its own sake and without a greater positive offset, is unethical", which is objectively true (please, I entreat you to examine my source code). But it would be just as objectively true to say the negation if I actually believed the negation. Is it neither moral nor immoral to hold the belief that torture is a bad thing?

comment by anonymous2 · 2007-04-04T21:39:02.000Z · LW(p) · GW(p)

Hmm... thanks for writing this. I just realized that I may resemble your argumentative friend in some ways. I should bookmark this.

Replies from: crypto
comment by crypto · 2011-12-20T18:15:34.684Z · LW(p) · GW(p)

Stanovich's "dysrationalia" sense of stupidity is one of my greatest fears.

comment by Rafe_Furst · 2007-04-04T23:04:58.000Z · LW(p) · GW(p)

I didn't know whether to post this reply to "Black swans from the future" or here, so I'll just reference it:

Good post, Eliezer.

comment by HalFinney · 2007-04-05T02:47:05.000Z · LW(p) · GW(p)

I've pointed before to this very good review of Philip Tetlock's book, Expert Political Judgment. The review describes the results of Tetlock's experiments evaluating expert predictions in the field of international politics, where they did very poorly. On average the experts did about as well as random predictions and were badly outperformed by simple statistical extrapolations.

Even after going over the many ways the experts failed in detail, and even though the review is titled "Everybody’s An Expert", the reviewer concludes, "But the best lesson of Tetlock’s book may be the one that he seems most reluctant to draw: Think for yourself."

Does that make sense, though? Think for yourself? If you've just read an entire book describing how poorly people did who thought for themselves and had a lot more knowledge than you do, is it really likely that you will do better to think for yourself? This advice looks like the same kind of flaw Eliezer describes here, the failure to generalize from knowledge of others' failures to appreciation of your own.

Replies from: RobinZ, RobinZ, MarsColony_in10years
comment by RobinZ · 2011-10-21T18:45:05.405Z · LW(p) · GW(p)

There's a better counterargument than that in Tetlock - one of the data points he collected was from a group of university undergraduates, and they did worse than the worst experts, worse than blind chance. Thinking for yourself is the worst option Tetlock considered.

Replies from: Peterdjones
comment by Peterdjones · 2012-08-21T12:23:38.756Z · LW(p) · GW(p)

Thinking for yourself is the worst option Tetlock considered.

Worse for making predictions, I suppose. But if people never think for themselves, we are never going to have any new ideas. Statistical extrapolation may be great for prediction, but it is poor for originality. So we value thinking for oneself. But the hit rate is terrible. We have to put up with huge amounts of crap to get the gems. Most Ideas are Wrong, as I like to say when people tell me I'm being "too critical".

Replies from: RobinZ
comment by RobinZ · 2012-08-21T16:16:18.086Z · LW(p) · GW(p)

Worse for making predictions, I suppose.

Oh, it's less general than that - it's worse for political forecasting specifically. For other kinds of prediction (e.g. will this box fit under this table?), thinking for yourself is often one of the better options.

But, you know, political forecasting is one of the things we often care about. So knowing rules of thumb like "trust the experts, but not very much" is quite helpful.

comment by RobinZ · 2011-10-21T18:51:00.186Z · LW(p) · GW(p)

Actually, when I was rereading the comments and saw your mention of Tetlock, I thought you would point out the bit where he noted that the hedgehog predictors made worse predictions within their area of expertise than outside it.

comment by MarsColony_in10years · 2015-03-30T08:42:02.295Z · LW(p) · GW(p)

Fantastic article. The problem is that now I have a pet theory with which to dismiss anything said by a TV pundit with whom I disagree: I'd be better off guessing myself or at random than listening to them.

Maybe I can estimate how many variables various conclusions rest on, and how much uncertainty is in each, in order to estimate the total uncertainty in various possible outcomes. I'll have to pay special attention to any evidence that undercuts my beliefs and assumptions, to try to avoid confirmation bias.

Replies from: ChristianKl, Epictetus
comment by ChristianKl · 2015-03-30T10:30:21.531Z · LW(p) · GW(p)

Fantastic article. The problem is that now I have a pet theory with which to dismiss anything said by a TV pundit with whom I disagree: I'd be better off guessing myself or at random than listening to them.

That's great, stop watching TV. TV pundits are an awful source of information.

Replies from: MarkusRamikin
comment by MarkusRamikin · 2015-03-30T14:53:51.020Z · LW(p) · GW(p)

stop watching TV

One of my past life decisions I consistently feel very happy about.

comment by Epictetus · 2015-03-30T18:16:42.416Z · LW(p) · GW(p)

The problem is that now I have a pet theory with which to dismiss anything said by a TV pundit with whom I disagree: I'd be better off guessing myself or at random than listening to them.

TV pundits are entertainers. They're hired less for their insightful commentary and more for their ability to engage an audience.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-04-05T04:08:38.000Z · LW(p) · GW(p)

Hal, to be precise, the bias is generalizing from knowledge of others' failures to skepticism about disliked conclusions, but failing to generalize to skepticism about preferred conclusions or one's own conclusions. That is, the error is not absence of generalization, but imbalance of generalization, which is far deadlier. I do agree with you that the reviewer's conclusion is not supported (to put it mildly) by the evidence under review.

comment by Rafe_Furst · 2007-04-05T14:23:58.000Z · LW(p) · GW(p)

So why, then, is this blog not incorporating more statistical and collective de-biasing mechanisms? There are some out-of-the-box web widgets and mildly manual methods to incorporate that would at the very least provide new grist for the discussion mill.

comment by Michael_Rooney · 2007-04-05T16:29:31.000Z · LW(p) · GW(p)

The error here is similar to one I see all the time in beginning philosophy students: when confronted with reasons to be skeptics, they instead become relativists. That is, where the rational conclusion is to suspend judgment about an issue, all too many people instead conclude that any judgment is as plausible as any other.

comment by HalFinney · 2007-04-05T17:06:30.000Z · LW(p) · GW(p)

I would love to hear more about such methods, Rafe. This blog tends to be somewhat abstract and "meta", but I would like to do more case studies on specific issues and look at how we could come to a less biased view of the truth. I did a couple of postings on the "Peak Oil" controversy a few months ago along these lines.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-04-05T18:10:51.000Z · LW(p) · GW(p)

Rafe, name three.

Rooney, I don't disagree that this would be a mistake, but in my experience the balance of evidence is very rarely exactly even - because hypotheses have inherent penalties for complexity. Where there is no evidence in favor of a complicated proposed belief, it is almost always correct to reject it, not suspend judgment. The only cases I can think of where I suspend judgment are binary or small discrete hypothesis spaces, like "Was it murder or suicide?", or matters like the anthropic principle, where there is no null hypothesis to take refuge in, and any position is attackable.

comment by CarlShulman · 2007-04-05T21:11:55.000Z · LW(p) · GW(p)

I have also had repeated encounters with individuals who take the bias literature to provide 'equal and opposite biases' for every situation, and take this as reason to continue to hold their initial beliefs. The situation is reminiscent of many economic discussions, where bright minds question whether the effect of a change on some quantity will be positive, negative or ambiguous. The discussants eagerly search for at least one theoretical effect that could move the quantity in a positive direction, one that could move it in the negative, and then declare the effect ambiguous after demonstrating their cleverness, without evaluating the actual size of the opposed effects.

I would recommend that when we talk about opposed biases, at least those for which there is an experimental literature, we should give rough indications of their magnitudes to discourage our audiences from utilizing the 'it's all a wash' excuse to avoid analysis.

comment by Kip_Werking · 2007-04-06T03:42:42.000Z · LW(p) · GW(p)

As someone who seems to have "thrown the kitchen sink" of cognitive biases at the free will problem, I wonder if I've suffered from this meta-bias myself. I find only modest reassurance in the facts that: (i) others have agreed with me and (ii) my challenge for others to find biases that would favor disbelief in free will has gone almost entirely unanswered.

But this is a good reminder that one can get carried away...

comment by Michael_Rooney · 2007-04-06T20:35:39.000Z · LW(p) · GW(p)

Eliezer, I agree that exactly even balances of evidence are rare. However, I would think suspending judgment to be rational in many situations where the balance of evidence is not exactly even. For example, if I roll a die, it would hardly be rational to believe "it will not come up 5 or 6", despite the balance of evidence being in favor of such a belief. If you are willing to make >50% the threshold of rational belief, you will hold numerous false and contradictory beliefs.

Also, I have some doubt about your claim that when "there is no evidence in favor of a complicated proposed belief, it is almost always correct to reject it". If you proposed a complicated belief of 20th century physics (say, Bell's theorem) to Archimedes, he would be right to say he has no evidence in its favor. Nonetheless, it would not be correct for Archimedes to conclude that Bell's theorem is therefore false.

Perhaps I am misunderstanding you.

Replies from: DanielLC, bigjeff5, encounterpiyush
comment by DanielLC · 2009-12-27T07:04:34.422Z · LW(p) · GW(p)

If you gave him almost anything else that complex, it actually would be false. Once something gets even moderately complex, there is a huge number of other things that complex.

Technically, he should figure that there's just a one in 10^somethingorother chance that it's true, but you can't remember all 10^somethingorother things that are that unlikely, so you're best off to reject it.

comment by bigjeff5 · 2011-02-21T23:40:17.227Z · LW(p) · GW(p)

For example, if I roll a die, it would hardly be rational to believe "it will not come up 5 or 6", despite the balance of evidence being in favor of such a belief.

A Bayesian would not say definitively that it would not come up as 5 or 6. However, if you were to wager on whether or not the die will come up as either 5 or 6, the only rational position is to bet against it. Given enough throws of the die, you will be right 2/3 of the time.

At the most basic level, the difference between Bayesian reasoning and traditional rationalism is that a Bayesian only thinks in terms of likelihoods. It's not a matter of "this position is at a >50% probability, therefore it is correct", it is a matter of "this position is at a >50% probability, so I will hold it to be more likely correct than incorrect until that probability changes".

It's a difficult way of thinking, as it doesn't really allow you to definitively decide anything with perfect certainty. There are very few beliefs in this world for which a 100% probability exists (there must be zero evidence against a belief for this to occur). Math proofs, really, are the only class of beliefs that can hold such certainty. As such the possibility of being wrong pretty much always exists, and must always be considered, though by how much depends on the likelihood of the belief being incorrect.
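The die example above can be made explicit with exact arithmetic. This is my own illustrative sketch of the "think in likelihoods" stance, not part of the original comment:

```python
from fractions import Fraction

# The Bayesian stance on "the die will not come up 5 or 6" is a
# probability, not a verdict of true or false.
p_not_5_or_6 = Fraction(4, 6)      # faces 1,2,3,4 out of six
print(p_not_5_or_6)                # prints: 2/3 -- likely, far from certain

# Betting against "5 or 6" at even odds: expected profit per unit staked.
p_5_or_6 = 1 - p_not_5_or_6
expected_profit = p_not_5_or_6 * 1 + p_5_or_6 * (-1)
print(expected_profit)             # prints: 1/3 -- positive, so take the bet
```

The bet is favorable even though the proposition you are backing is uncertain; holding it "to be more likely correct than incorrect" is exactly this calculation.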

If you proposed a complicated belief of 20th century physics (say, Bell's theorem) to Archimedes, he would be right to say he has no evidence in its favor.

If no evidence is given for the belief, of course he is right to reject it. It is the only rational position Archimedes can take. Without evidence, Archimedes must assign a 0%, or near 0%, probability to the likelihood that the 20th century position is correct. However, if he is presented with the evidence for which we now believe such things, his probability assignment must change, and given the amount of evidence available it would be irrational to reject it.

Just because you were wrong does not mean you were thinking irrationally. The converse of that is also true: just because you were right does not mean you were thinking rationally.

Also note that it is a fairly well known fact that 20th century physics are broken - i.e. incorrect, or at least not completely correct. We simply have nothing particularly viable to supersede them with yet, so we are stuck until we find the more correct theories of physics. It would be pretty funny to convince Archimedes of their correctness, only to follow it up with all the areas where modern physics break down.

Replies from: wedrifid, JGWeissman
comment by wedrifid · 2011-02-22T01:49:17.098Z · LW(p) · GW(p)

However, if you were to wager on whether or not the die will come up as either 5 or 6, the only rational position is to bet against it.

You need to specify even odds. Bayesians will bet on just about anything if the price is right.

Replies from: bigjeff5
comment by bigjeff5 · 2011-02-22T23:36:38.877Z · LW(p) · GW(p)

Odds on dice are usually assumed even unless specified otherwise, but it's never wrong to specify it, so thanks.

Replies from: wedrifid
comment by wedrifid · 2011-02-23T02:27:33.499Z · LW(p) · GW(p)

Odds on dice are usually assumed even unless specified otherwise

On the other hand when considering rational agency some come very close to defining 'probability' based on what odds would be accepted for bets on specified events.

comment by JGWeissman · 2011-02-22T23:58:58.591Z · LW(p) · GW(p)

There are very few beliefs in this world for which a 100% probability exists

There are none.

Replies from: bigjeff5
comment by bigjeff5 · 2011-02-23T00:57:49.147Z · LW(p) · GW(p)

Thanks, I was a little unsure of stating that there is no such thing as 100% probability. That post is very helpful.

Replies from: raylance
comment by raylance · 2011-08-27T01:30:03.704Z · LW(p) · GW(p)

Ah, the Godelian "This sentence is false."

comment by encounterpiyush · 2013-03-10T04:53:48.479Z · LW(p) · GW(p)

It would be irrational to believe "it will not come up 5 or 6" as a certainty, because P(P(5 or 6) = 0) = 0, so you know for certain that such a belief is false. As you said, "Claims about the probability of a given claim being true, helpful as they may be in many cases, are distinct from the claim itself." Before taking up any belief (if the situation demands taking up a belief, as in a bet, or in living life), a Bayesian would calculate the likelihood of it being true vs. the likelihood of it being false, and would favour the higher likelihood. In this case, the likelihood that "it will not come up 5 or 6" is certainly true is 0, so a Bayesian would not take up that position. Now, you might observe that the belief "1, 2, 3 or 4 will come up", held as a certainty, also has a likelihood of zero. In the case of a die roll, any statement of this form will be false, so a Bayesian will take up beliefs that talk probabilities and not certainties. (As bigjeff5 explains, "At the most basic level, the difference between Bayesian reasoning and traditional rationalism is a Bayesian only thinks in terms of likelihoods.")

Of course, one can always say "I don't know", but saying "I don't know" has inferior utility in life compared to being a Bayesian. So, for example, assume that your life depends on a series of die rolls. You can take two positions: 1) you say "I believe I don't know what the outcome will be" on every roll; 2) you bet on every roll according to the information you have (in other words, you say "I believe that outcome X has Y chance of turning up"). Both positions would of course be agreeable, but the second position would give you a higher payoff in life. Or so Bayesians believe.

comment by pdf23ds · 2007-04-06T21:09:40.000Z · LW(p) · GW(p)

"Nonetheless, it would not be correct for Archimedes to conclude that Bell's theorem is therefore false."

I think this is a terrible hypothetical to use to illuminate your point, since most of Archimedes' decision would be based on how much evidence is proper to give to the source of information he gets the theorem from. I would say that, for any historically plausible mechanism, he'd certainly be correct in rejecting it.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-04-06T23:40:16.000Z · LW(p) · GW(p)

Rooney, where there isn't any evidence, then indeed it may be appropriate to suspend judgment over a large hypothesis space, which indeed is not the same as being able to justifiably adopt a random such judgment - anyone who wants to assign more than default probability mass is being irrational.

I concur that Bell's theorem is a terrible hypothetical, because the whole point is that, in real life, without evidence, there's absolutely no way for Archimedes to just accidentally hit on Bell's theorem - in his lifetime he will not reach that part of the search space; anything he tries without evidence will be wrong. It's exactly like saying, "But what if you did buy the winning lottery ticket? Then it would have high expected utility."

I don't think that 50% is a distinguished threshold for probability. Heck, I don't think 1 in 20 is a distinguished threshold for probability. The point of a binary decision space is that it is small and discrete, not that it is binary.

comment by Michael_Rooney · 2007-04-07T01:15:29.000Z · LW(p) · GW(p)

Eliezer, I think we are misunderstanding each other, possibly merely about terminology.

When you (and pdf) say "reject", I am taking you to mean "regard as false". I may be mistaken about that.

I would hope that you don't mean that, for if so, your claim that "no evidence in favor -> almost always false" seems bound to lead to massive errors. For example, you have no evidence in favor of the claim "Rooney has string in his pockets". But you wouldn't on such grounds aver that such a claim is almost certainly false. The appropriate response would be to suspend judgment, i.e., to neither reject nor accept. Perhaps I am not understanding what counts as a suitably "complicated" belief.

As for Archimedes meeting Bell's theorem, perhaps it was too counter-factual an example. However, I wouldn't say it's comparable to the "high utility" of the winning lottery ticket: in the case of the lottery, the relevant probabilities are known. By contrast, Archimedes (supposing he were able to understand the theorem) would be ignorant of any evidence to confirm or disconfirm it. Thus I would hope that he would refrain from rejecting it, merely regarding it as a puzzling vision from Zeus, perhaps.

comment by pdf23ds · 2007-04-07T02:08:15.000Z · LW(p) · GW(p)

The probability that an arbitrary person has string in their pockets (given that they're wearing pockets at the time) is knowable, and given no other information we could say that it's X%. The proper attitude towards the claim "Rooney has string in his pockets" is that it has about an X% chance of being true. (Unless we get other evidence to the contrary--and the fact that someone made the claim might be evidence here.)

Say X is 3%. Then I should say that Rooney very likely has no string in his pockets. Say X were 50%. Then I should say that there's an even chance Rooney has string in his pockets. In neither case am I withholding judgment. Given what you've said, Rooney, I think you might say that the latter would be withholding judgment? Or would you say that neither assertion is justified, and in that case, what does it mean to withhold judgment?

I think there's a post somewhere last year where Eliezer went over these points.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-04-07T02:54:55.000Z · LW(p) · GW(p)

Pdf, maybe you're referring to "I Don't Know"?

Rooney, I think you're interpreting "reject" as "state with certainty that it is not true" or "behave as if there is definite evidence against it". Whereas what I mean is that one should bet at odds that are tiny or even infinitesimal when dealing with an evidentially unsupported belief in a very large search space. You have no choice but to deal this way with the vast majority of such beliefs if you want your total probabilities to sum to 1.

comment by Michael_Rooney · 2007-04-07T17:11:15.000Z · LW(p) · GW(p)

By "suspending judgment" I mean neither accepting a claim as true, nor rejecting it as false. Claims about the probability of a given claim being true, helpful as they may be in many cases, are distinct from the claim itself. So, pdf, when you say "The proper attitude towards the claim "Rooney has string in his pockets" is that it has about an X% chance of being true", where X is unknown, I don't see how this is materially different from saying "I don't know if Rooney has string in his pockets", which is to say that you are (for the moment at least) suspending judgment about whether the claim (call it 'string') is true or false. And where X is estimated (on the basis of some hypothetical evidence) to be (say) .4, what is the proper attitude toward 'string'? Saying "'string' has a 40% chance of being true" doesn't answer the question, it makes a different claim, assigning probability. In such situations, the rational course of action is to suspend judgment about 'string'. You may of course hold beliefs about the probability of 'string' being true and act on those beliefs accordingly (by placing real or hypothetical bets, etc.), but in such cases you're neither accepting nor rejecting 'string'.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-04-07T17:38:29.000Z · LW(p) · GW(p)

You have no choice but to bet at some odds. Life is about action, action is about expected utility, and expected utility demands that you assign some subjective weighting to outcomes based on how likely they are. Walking down the street, I offer to bet you a million dollars against one dollar that a stranger has string in their pockets. Do you take the bet? Whether you say yes or no, you've just made a statement of probability. The null action is also an action. Refusing to bet is like refusing to allow time to pass.
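The point that accepting or refusing a bet is itself a probability statement can be put in numbers. The stakes below are the hypothetical ones from the comment; the arithmetic is my own gloss:

```python
# Hypothetical bet from the comment: Eliezer stakes $1,000,000 against
# your $1 that a stranger has string in their pockets. You win their
# stake if there is no string; you lose yours if there is.
stake_theirs, stake_yours = 1_000_000, 1

# Your expected profit from accepting, given subjective P(string) = p:
#   EV(p) = (1 - p) * stake_theirs - p * stake_yours
# EV(p) > 0 exactly when p is below this break-even threshold:
threshold = stake_theirs / (stake_theirs + stake_yours)
print(round(threshold, 6))  # prints: 0.999999

# So accepting the bet asserts P(string) < ~0.999999; refusing asserts
# the opposite. Either way, an action pins down a probability range.
```

This is why "the null action is also an action": both choices commit you to a side of the threshold.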

Nor do I permit probabilities of zero and one. All belief is belief of probability.

comment by Michael_Rooney · 2007-04-08T00:13:32.000Z · LW(p) · GW(p)

I have to bet on every possible claim I (or any sentient entity capable of propositional attitudes in the universe) might entertain as a belief? That is highly implausible as a descriptive claim. Consider the claim "Xinwei has string in his pockets" (where Xinwei is a Chinese male I've never met). I have no choice but to assign probability to that claim? And all other claims, from "language is the house of being" to "a proof for Goldbach's conjecture will be found by an unaided human mind"? If Eliezer offers me a million dollars to bet on someone's pocket-contents, then, yes, if the utility is right, I will calculate probabilities, meager though my access to evidence may be. But that is not life. The null action may be an action, but lack of belief is not a belief. "I've never thought about it" is not equivalent to "it's false" or "it's very improbable".

(Did Neanderthals assign probabilities, or was it a module that emerged at about the same time as the FOXP2 gene? Or did it have to wait until the invention of games of chance in western Europe? Is someone who refuses to bet on anything for religious reasons ipso facto irrational?)

And you don't take the belief "2 + 2 = 4" as having probability of 1? Nor "2 + 2 = 5" as 0?

I'm off, out of ISP range for a day, so I won't reply for a bit. Cheers.

comment by Joe2 · 2009-02-21T03:10:20.000Z · LW(p) · GW(p)

Michael Rooney: I don't think Eliezer is saying that it's invalid to say "I don't know." He's saying it's invalid to have as your position "I should not have a position."

The analogy of betting only means that every action you take will have consequences. For example, the decision not to try to assign a probability to the statement that Xinwei has a string in his pocket will have some butterfly effect. You have recognized this, and have also recognized that you don't care, and have taken the position that it doesn't matter. The key here is that, as you admit, you have taken a position.

comment by DanielLC · 2009-12-27T07:05:59.998Z · LW(p) · GW(p)

And now that we know that, we're going to be more biased. Why'd you have to say that?

Replies from: wedrifid
comment by wedrifid · 2009-12-27T07:45:22.497Z · LW(p) · GW(p)

Why'd you have to say that?

Because knowing about biases can also help people. A cornerstone premise of Eliezer's entire life strategy.

comment by mat33 · 2011-10-05T16:12:46.058Z · LW(p) · GW(p)

"Sophistication effect. Politically knowledgeable subjects, because they possess greater ammunition with which to counter-argue incongruent facts and arguments, will be more prone to the above biases."

Well, what about that always taking on the strongest opponent and the strongest arguments business? ;)

Actually, when I see a fellow with a third degree in Philosophy, I leave him to someone who has a similar degree. It isn't that Sorbonne initiates are hopeless; it's arguments with 'em that really are (hopeless).

comment by pums · 2011-10-10T20:10:41.673Z · LW(p) · GW(p)

"Things will continue historically as they have" is in some contexts hardly the worst thing you could assume, particularly when the alternative is relying on expert advice that a) is from people who historically have not had skill at predicting things and b) are making predictions reliant on complex ideas that you're in no position to personally evaluate.

comment by Peacewise · 2011-11-02T23:23:29.000Z · LW(p) · GW(p)

I think I've got a pretty good feel for those six predictions and have seen them in action numerous times, most especially in discussions of religion. Does the following seem about right, LWers?

The prior attitude effect: both atheists and theists have strong prior feelings about their respective positions, and many of them tend to evaluate their supportive arguments more favourably, whilst also aggressively attacking counters to their arguments, as predicted by the disconfirmation bias.

The internet being what it is, it provides a ready source of material to confirm one's bias.

Polarization of attitude will occur as a direct result of the disconfirmation bias. One classic example of this is the tendency in internet forums for one person to state their position and expect another to refute it, thereby polarizing the argument; that the people involved then "naturally" fall into a disconfirmation-bias situation is quite ironic, in my opinion. Is the classic debating set-up of "you're for and I'm against" (or vice versa) an example of structured disconfirmation bias?

Meanwhile, the sophistication effect as described precludes, or perhaps ignores, the fact that one measure of sophistication is knowing the topic under discussion from multiple angles. I would hold that a person who uses their knowledge only to counter someone else's argument is practising sophism, whilst a person who is intellectually honest will argue both cases.

comment by alexvermeer · 2012-02-10T19:41:43.993Z · LW(p) · GW(p)

The link to the paper is dead. I found a copy here: Taber & Lodge (2006).

Replies from: Kenny
comment by Kenny · 2014-06-26T23:48:41.194Z · LW(p) · GW(p)

Here's yet another link, this one not seemingly associated with an individual course:

comment by lukeprog · 2012-05-22T03:57:32.822Z · LW(p) · GW(p)

As far as I can tell, there have been few other studies which demonstrate the sophistication effect. One new study on this is West et al. (forthcoming), "Cognitive Sophistication Does Not Attenuate the Bias Blind Spot."

Here is the abstract:

The so-called bias blind spot arises when people report that thinking biases are more prevalent in others than in themselves. Bias turns out to be relatively easy to recognize in the behaviors of others, but often difficult to detect in our own judgments. Most previous research on the bias blind spot has focused on bias in the social domain. In two studies, we found replicable bias blind spots with respect to many of the classic cognitive biases studied in the heuristics and biases literature (e.g., Tversky & Kahneman, 1974). Further, we found that none of these bias blind spots were attenuated by measures of cognitive sophistication such as cognitive ability or thinking dispositions related to bias. If anything, a larger bias blind spot was associated with higher cognitive ability. Additional analyses indicated that being free of the bias blind spot does not help a person avoid the actual classic cognitive biases. We discuss these findings in terms of a generic dual-process theory of cognition.

Replies from: John_Maxwell_IV
comment by John_Maxwell (John_Maxwell_IV) · 2012-05-22T05:32:54.296Z · LW(p) · GW(p)

Have there been any attempts to measure biases in researchers who study biases?

Replies from: lukeprog, khafra, TheOtherDave
comment by lukeprog · 2012-05-22T10:58:44.745Z · LW(p) · GW(p)

Not that I know of.

comment by khafra · 2012-05-22T12:28:57.765Z · LW(p) · GW(p)

No formal ones I know of, although I'm sure Will Newsome would like that. But Kahneman and Tversky did say that every bias they studied, they first detected in themselves.

comment by TheOtherDave · 2012-05-22T16:01:36.985Z · LW(p) · GW(p)

Unfortunately, the results of all such studies were rejected, due to... well, you know.

comment by Brendan · 2012-09-15T17:06:49.268Z · LW(p) · GW(p)

"For a true Bayesian, information would never have negative expected utility". I'm probably being a technicality bitch, attacking an unintended interpretation, but I can see bland examples of this being false if taken literally: A robot scans people to see how much knowledge they have and harms them more if they have more knowledge, leading to a potential for negative utility given more knowledge.

comment by NancyLebovitz · 2012-09-15T18:12:01.663Z · LW(p) · GW(p)

"For a true Bayesian, information would never have negative expected utility."

Is this true in general? It seems to me that if a Bayesian has limited information handling ability, then they need to give some thought (not too much!) to the risks of being swamped with information and of spending too many resources on gathering information.

Replies from: alex_zag_al, beoShaffer, None, TheOtherDave, Richard_Kennaway, shminux
comment by alex_zag_al · 2012-09-15T19:41:11.325Z · LW(p) · GW(p)

Yeah, certainly. The search might be expensive. Or, some of its resources might be devoted to distinguishing the most relevant among the information it receives - diluting its input with irrelevant truths makes it work harder to find what's really important.

An interpretation of the original statement that I think is true, though, is that in all these cases, receiving the information and getting a little more knowledgeable offsets some of the negative utility of whatever price was paid for it. Whatever net negative utility the combination of search+learning has comes from the searching part; if you kept the searching but removed the learning at the end, it'd be even worse.

comment by beoShaffer · 2012-09-15T20:20:10.403Z · LW(p) · GW(p)

if a Bayesian has limited information handling ability

I believe that in this situation "true Bayesian" implies unbounded processing power/ logical omniscience.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-09-16T01:15:11.257Z · LW(p) · GW(p)

I suggest that "true Bayesian" is ambiguous enough (this seems to use it in the sense of a human using the principles of Bayes) that some other phrase-- perhaps "unlimited Bayesian"-- would be clearer.

comment by [deleted] · 2012-09-15T20:41:50.854Z · LW(p) · GW(p)

The cost of gathering or processing the information may exceed the value of the information, but the information itself always has non-negative value: at worst you do nothing different, and the rest of the time you make a better-informed choice.
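The split between the (non-negative) value of information and the (possibly larger) cost of gathering it can be made concrete with a toy two-action, two-state decision problem. All the numbers below are assumed for illustration:

```python
# payoffs[action][state]; the states echo the string-in-pockets example.
payoffs = {"act":  {"string": 5.0, "no_string": -3.0},
           "pass": {"string": 0.0, "no_string": 0.0}}
probs = {"string": 0.3, "no_string": 0.7}

def ev(action):
    """Expected payoff of committing to one action before learning the state."""
    return sum(probs[s] * payoffs[action][s] for s in probs)

# Without information: commit to the single best action in expectation.
ev_uninformed = max(ev(a) for a in payoffs)
# With perfect information: choose the best action separately in each state.
ev_informed = sum(probs[s] * max(payoffs[a][s] for a in payoffs) for s in probs)

value_of_information = ev_informed - ev_uninformed  # never negative
search_cost = 2.0                                   # assumed cost of finding out
net = value_of_information - search_cost            # this, by contrast, can be
print(value_of_information, net)                    # negative
```

Here the information is worth 1.5 units but costs 2.0 to gather, so gathering it is a net loss even though the information itself has positive value, which is the distinction the comment is drawing.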

comment by TheOtherDave · 2012-09-15T22:30:48.674Z · LW(p) · GW(p)

I'm not exactly sure what "a true Bayesian" refers to, if anything, but it's possible that being whatever that is precludes having limited information handling ability.

comment by Richard_Kennaway · 2012-09-15T22:42:43.231Z · LW(p) · GW(p)

Is this true in general?

Yes, in this technical sense.

It seems to me that if a Bayesian has limited information handling ability

A true Bayesian has unlimited information handling ability.

Replies from: alex_zag_al
comment by alex_zag_al · 2012-09-16T00:32:19.718Z · LW(p) · GW(p)

A true Bayesian has unlimited information handling ability.

I think I see that - because if it didn't, then not all of its probabilities would be properly updated, so its degrees of belief wouldn't have the relations implied by probability theory, so it wouldn't be a true Bayesian. Right?

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-09-16T10:00:21.740Z · LW(p) · GW(p)

Yes, one generally ignores the cost of making these computations. One might try to take it into account, but then one is ignoring the cost of doing that computation, etc. Historically, the "Bayesian revolution" needed computers before it could happen.

And, I notice, it has only gone as far as the computers allow. "True Bayesians" also have universal priors, that assign non-zero probability density to every logically possible hypothesis. Real Bayesian statisticians never do this; all those I have read deny that it is possible.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-09-16T18:53:57.792Z · LW(p) · GW(p)

And, I notice, it has only gone as far as the computers allow. "True Bayesians" also have universal priors, that assign non-zero probability density to every logically possible hypothesis. Real Bayesian statisticians never do this; all those I have read deny that it is possible.

It is impossible, even in principle. The only way to have a universal prior over all computable universes is to have access to a source of hypercomputation, but that would mean the universe isn't computable, so the truth still isn't in your prior set.

Replies from: Richard_Kennaway
comment by Richard_Kennaway · 2012-09-16T18:56:24.067Z · LW(p) · GW(p)

Is that written up as a theorem anywhere?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-09-18T00:50:51.039Z · LW(p) · GW(p)

That depends on how one wants to formalize it.

comment by shminux · 2012-09-16T01:48:31.453Z · LW(p) · GW(p)

"True Bayesian" is in this case a "no true Scotsman": if some information has negative utility for you, you are not a true Bayesian.

comment by ricketybridge · 2013-01-09T01:56:39.863Z · LW(p) · GW(p)

Given the unbelievable difficulty in overcoming cognitive bias (mentioned in this article and many others), is it even realistic to expect that it's possible? Maybe there are a lucky few who may have that capacity, but what about a majority of even those with above-average intelligence, even after years of work at it? Would most of them not just sort of drill themselves into a deeper hole of irrationality? Even discussing their thoughts with others would be of no help, given the fact that most others will be afflicted with cognitive biases as well. Since this blog is devoted to precisely that effort (i.e. helping people become more rational), I would think that those who write posts here must have reason to believe that it is indeed quite possible, but do you have any examples of such improvement? Have any scientists done any studies on overcoming cognitive bias? The ones I've seen only show that being aware of cognitive bias barely removes its effects.

It almost seems like the only way to truly overcome cognitive biases is to do something like design a computer program based on something you know for sure you're not biased about (e.g. statistics that people formed correct opinions about in various experiments) and then run it for something you are likely to be biased about.

I apologize if there are already a bunch of posts (or even comments!) answering this question; I've been on the site like all day and haven't come across any, so I figured it couldn't hurt to ask.

Replies from: orthonormal
comment by orthonormal · 2013-03-24T22:27:25.402Z · LW(p) · GW(p)

My main takeaway from this is that "I know about this bias, therefore I'm more immune to it" is wrong. To be less susceptible to a bias, you need to practice habits that help (like the premortem as a counter to the planning fallacy), not just know a lot of cognitive science.

comment by lukeprog · 2013-01-21T05:27:30.296Z · LW(p) · GW(p)

Critical Review recently devoted an issue to discussions of this 2006 study. Taber & Lodge's reply to the symposium on their paper is available here.

comment by AuroraDragon · 2013-05-01T04:44:47.520Z · LW(p) · GW(p)

I think it is a good thing to be humble with yourself, not to argue with yourself. If you are always in self-doubt, you never speak out and learn. If you don't hear yourself, only how 'smart' you sound, you never learn from your mistakes. I try to learn from my (and others') mistakes, but I think observation of yourself is truly the key to being a rationalist, to removing self-imposed blocks on the path of understanding.

I think it is great that you have such real-life experience, and have the courage to try. Keep living, learning and trying!

(I know this might be off-topic, but this is my first post and I don't know where to start, so I posted somewhere that inspired me to write.)

comment by Entraya · 2014-01-11T18:40:18.838Z · LW(p) · GW(p)

On a related note to such despicable people: I just had a few minutes' talk with a very old friend of mine who matched this description. I just wanted an update on his situation, and to see if the boundless rage and annoyance I experienced then still fit. It's not super relevant, but the exact moment I started writing to him, my hands started shaking, I could feel a pressure on my chest, and my mind started clouding over. It's probably something that shot into my system, but exactly why, and what, I don't know. Do any of you happen to know about this?

Also, there's the added danger that someone otherwise smart may lure people into the dark side of things, and make them believe things like 9/11 conspiracies. It also taught me to trust my gut feeling sometimes instead of what seems to be factual evidence, and not to have belief in belief. This is one of the most embarrassing things I've ever experienced.

comment by Psilence · 2014-10-28T05:25:44.298Z · LW(p) · GW(p)

You don't believe in free will, correct?

comment by Holograph · 2014-12-24T16:20:15.104Z · LW(p) · GW(p)

I fear that the most common context in which people learn about cognitive biases is also the most detrimental. That is, they're arguing about something on the internet and someone, within the discussion, links them an article or tries to lecture them about how they really need to learn more about cognitive biases/heuristics/logical fallacies, etc. What I believe commonly happens then is that people realise that these things can be weapons; tools to get the satisfaction of "winning". I really wish everyone would just learn this in some neutral context (school maybe?) but most people learn this with an intent, and I think it colours their use of rationality in general, perhaps indefinitely. :/ But maybe I'm just being too pessimistic.

Replies from: hairyfigment
comment by hairyfigment · 2014-12-24T21:11:44.086Z · LW(p) · GW(p)

Your last sentence is funny, considering I immediately thought: 'If we taught them in school and plenty of bad effects remained, which seems well within the realm of possibility, you might be wishing people learned about fallacies in a context that made them seem more important.'

comment by [deleted] · 2015-08-08T04:53:59.095Z · LW(p) · GW(p)

THIS is the proper use of humility. I hope I'm less of a fanatic and more tempered in my beliefs in the future.

comment by Unknow0059 · 2021-07-17T13:30:25.140Z · LW(p) · GW(p)

It seems to me like this is as intended. Most people who talk about biases and fallacies do so in the veil of them being wrong and bad, instead of mere tools, more or less sophisticated and consciously knowable. I am skeptical about what good argument and reasoning entails and whether any such single instance exists.

comment by seank · 2022-03-18T16:30:22.355Z · LW(p) · GW(p)

For a salient example, look no further than the politics board of 4chan. Stickied for the last five years is a list of 24 logical fallacies. Unfortunately, this doesn't seem to dissuade the conspiratorial ramblings; rather, it lends an appearance of sophistication to their arguments for anyone unfamiliar with the subject. It's how you get otherwise curious and bright 15-year-olds parroting anti-Semitic rhetoric.

Replies from: edward-pascal
comment by Edward Pascal (edward-pascal) · 2022-12-15T17:23:24.576Z · LW(p) · GW(p)

I find on the internet that people treat logical fallacies like moves on a Chessboard. Meanwhile, IRL, they're sort of guidelines you might use to treat something more carefully. An example I often give is that in court we try to establish the type of person the witness is -- because we believe so strongly that Ad Hominem is a totally legitimate matter.

But Reddit or 4chan politics and religion is like, "I can reframe your argument into a form of [Fallacy number 13], check and mate!"

It's obviously a total misunderstanding of what a logical fallacy even is. They treat fallacies like rules of logical inference, which they definitely are not (a genuine inference error would disprove what someone said, but outside of exotic circumstances such a mistake would be trivial to spot).