Is the confirmation bias really a bias?

post by Lionel (lionel) · 2023-06-14T14:06:06.315Z · LW · GW · 6 comments

This is a link post for https://lionelpage.substack.com/p/reassessing-the-confirmation-bias

The confirmation bias is a cognitive phenomenon that has drawn considerable attention in psychology and behavioural economics.

Traditionally, the search for confirmatory information has been portrayed as a bias, suggesting a systematic deviation from rational information processing. Upon closer examination, however, this characterisation lacks a solid foundation: it is not grounded in a precise account of what the best information-processing strategy actually is.

Optimal information acquisition involves making the best use of limited time and resources. In this context, the optimal strategy for acquiring information can require looking for confirmatory evidence that aligns with one’s pre-existing beliefs. This point has been made by several researchers, and recently by Zhong (Optimal Dynamic Information Acquisition, Econometrica, 2022) in a general framework where a decision-maker can decide, fully flexibly, what type of information to collect, under two constraints: more informative signals are more costly, and waiting longer to collect information is costly. Zhong’s result is that the decision-maker’s optimal strategy is to look for confirmatory information in the form of a Poisson process that occasionally delivers strong signals confirming the decision-maker’s prior belief (when that prior belief is right).
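To illustrate the flavour of this result (a stylised special case of my own, not Zhong’s general model): suppose the confirmatory source fires at Poisson rate $\lambda$ only when the favoured state is true, and an arrival is fully revealing. Then, while no signal arrives, Bayes’ rule pushes the belief $p_t$ down smoothly,

$$p_t = \frac{p_0\, e^{-\lambda t}}{p_0\, e^{-\lambda t} + (1 - p_0)},$$

while an arrival jumps the belief to certainty. The decision-maker thus either receives occasional strong confirmation (when the prior is right) or is gradually talked out of the prior by the silence of the source.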

I develop this general point in the Substack post linked above. Another interesting post on this topic is Klein’s The Curious Case of the Confirmation Bias, which presents older criticisms of the psychological evidence on the confirmation bias.

In the end, it looks like the notion of “confirmation bias” has been at best overused; at worst, it may be a pure misconception to think of it as a bias at all.

6 comments


comment by Neil (neil-warren) · 2023-06-14T15:11:14.923Z · LW(p) · GW(p)

Hello! 

The universe is so complex that it is exceedingly unlikely that you will get it right on the first try; to err and err and err again, you can't just stick to one theory and latch on to it through fire and snow, no matter what befalls you. While confirmation probably makes sense at the local level, given the limited time and resources that you mentioned, it doesn't make sense at greater scales, where the stakes are so high that it's worth being absolutely right rather than merely efficient at finding an interim solution. The issues commonly discussed on LessWrong tend to be of the latter category, so confirmation bias is indeed a bias here.

Replies from: lionel
comment by Lionel (lionel) · 2023-06-15T04:07:26.515Z · LW(p) · GW(p)

Any analysis obviously depends on the setting considered. My point addressed the common settings in which the confirmation bias has typically been studied; other settings, with other payoffs, could well give other answers. That being said, your point stresses the size of the stakes, and I believe that large stakes do not invalidate Zhong's analysis if you have full flexibility about how to acquire information. In his model, you only make a decision once you have reached the right level of confidence (which may be very demanding if the stakes are high).
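As a back-of-the-envelope illustration (my notation, not Zhong's): if a correct decision pays $G$ and a wrong one costs $L$, acting at belief $p$ beats not acting only when

$$pG - (1-p)L \ge 0 \quad\Longleftrightarrow\quad p \ge \frac{L}{G+L},$$

so as the cost of error grows large relative to the gain, the required confidence threshold approaches 1, but the logic of how best to reach that threshold is unchanged.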

I think an issue with high stakes may arise for a different reason: in practice, you may not have enough flexibility in your choice of information sources to find one that would confirm your initial belief with enough certainty. For example, if you believe current AI developments are safe, you may not be able to get or create an information source that will send you a signal providing enough certainty that they are indeed safe. In that case, you are in a sense not hitting your budget constraint with purely confirmatory information, and it could make sense to also gather additional information that is not purely confirmatory (a quick conjecture on my part).

In any case, the model helps us think about the problem and about how its different aspects, such as initial beliefs, the costs of errors, and the cost of acquiring information, shape the best policy for acquiring information.

comment by krs (katherine-slattery) · 2023-06-14T20:15:44.899Z · LW(p) · GW(p)

I think a useful heuristic for updating beliefs is to ask yourself "What would make this belief false?" rather than casting the issue in the framework of confirmation vs. balance. To make this concrete, consider the example of flat earthers vs. scientists. If you believed in a flat earth, there are any number of tests you could run (e.g. watching ships sink below the horizon) that would lead you to update towards falsifying your belief. This type of information seeking is neutral with respect to confirming your beliefs. It also lets us look for direct evidence bearing on our beliefs rather than appealing to indirect methods such as whether or not a person agrees with us (see hug the query [? · GW]).

Second, I haven't looked into the work of Weijie Zhong, but I was wondering if there might be a bias-variance tradeoff at play here for efficient information seeking (i.e. obtaining only confirmatory evidence seems likely to lead to low variance but high bias)?

Replies from: Jiro, lionel
comment by Jiro · 2023-06-17T09:14:59.492Z · LW(p) · GW(p)

I think a useful heuristic for updating beliefs is to ask yourself “What would make this belief false?”

I believe that Russell's teapot does not exist.

comment by Lionel (lionel) · 2023-06-15T03:52:02.389Z · LW(p) · GW(p)

On point 2, that's an interesting question about bias-variance. In his model, beliefs move in the range [0, 1] and the world is either 0 or 1. The question is what kind of flow of information will allow you to make the likely-right decision in the minimal amount of time.

On point 1, I think Zhong's framework is general enough to cover the examples you give. If you can choose the type of information to collect very flexibly, and if more informative signals are more costly, it makes more sense to look for confirmation because, given your beliefs, you are more likely to quickly become confident enough to act. Contrarian or neutral sources are useful, but in expectation, given your beliefs, they would require more time before you could make a decision.
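Here is a minimal toy simulation of that trade-off (my own stylised setup, not Zhong's model; the parameters `rate` and `acc` are arbitrary, so it illustrates the mechanics rather than proving which source is faster): a Bayesian agent with prior 0.7 on state 1 consults either a confirmatory Poisson-style source, which fires only when state 1 is true, or a neutral symmetric source, and we measure how long it takes the belief to cross a decision threshold.

```python
import random

def confirmatory_time(state, p=0.7, theta=0.95, rate=0.1):
    """Poisson-style confirmatory source: each period a signal arrives with
    probability `rate`, but only if the true state is 1 (the favoured state).
    An arrival is fully revealing; silence slowly pushes the belief down."""
    t = 0
    while (1 - theta) < p < theta:
        t += 1
        if state == 1 and random.random() < rate:
            return t  # strong confirming signal: decide immediately
        # Bayes update on "no signal this period":
        # P(silence | state 1) = 1 - rate, P(silence | state 0) = 1
        p = p * (1 - rate) / (p * (1 - rate) + (1 - p))
    return t

def neutral_time(state, p=0.7, theta=0.95, acc=0.6):
    """Neutral source: each period emits a symmetric binary signal that
    matches the true state with probability `acc`."""
    t = 0
    while (1 - theta) < p < theta:
        t += 1
        s = state if random.random() < acc else 1 - state
        like1 = acc if s == 1 else 1 - acc  # P(signal | state 1)
        p = p * like1 / (p * like1 + (1 - p) * (1 - like1))
    return t

random.seed(0)
n, prior = 10_000, 0.7
for name, source in [("confirmatory", confirmatory_time), ("neutral", neutral_time)]:
    # draw the true state from the prior, then time the decision
    times = [source(1 if random.random() < prior else 0) for _ in range(n)]
    print(f"{name}: mean periods to decision = {sum(times) / n:.1f}")
```

Which design decides faster depends on how the two sources are calibrated to a common cost; Zhong's result is a statement about the optimum over all feasible signal designs, not about these two particular ones.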

comment by localdeity · 2023-06-14T15:07:32.025Z · LW(p) · GW(p)

You might be interested in Gigerenzer's "bias bias" paper (reviewed here):

Behavioral economics began with the intention of eliminating the psychological blind spot in rational choice theory and ended up portraying psychology as the study of irrationality. In its portrayal, people have systematic cognitive biases that are not only as persistent as visual illusions but also costly in real life—meaning that governmental paternalism is called upon to steer people with the help of “nudges.” These biases have since attained the status of truisms. In contrast, I show that such a view of human nature is tainted by a “bias bias,” the tendency to spot biases even when there are none. This may occur by failing to notice when small sample statistics differ from large sample statistics, mistaking people’s random error for systematic error, or confusing intelligent inferences with logical errors. Unknown to most economists, much of psychological research reveals a different portrayal, where people appear to have largely fine-tuned intuitions about chance, frequency, and framing. A systematic review of the literature shows little evidence that the alleged biases are potentially costly in terms of less health, wealth, or happiness. Getting rid of the bias bias is a precondition for psychology to play a positive role in economics.

An example from the paper:

Unsystematic Error Is Mistaken for Systematic Error

The classic study of Lichtenstein et al. [about causes of death] illustrates the second cause of a bias bias: when unsystematic error is mistaken for systematic error. One might object that systematic biases in frequency estimation have been shown in the widely cited letter-frequency study (Kahneman, 2011; Tversky and Kahneman, 1973). In this study, people were asked whether the letter K (and each of four other consonants) is more likely to appear in the first or the third position of a word. More people picked the first position, which was interpreted as a systematic bias in frequency estimation and attributed post hoc to the availability heuristic. After finding no single replication of this study, we repeated it with all consonants (not only the selected set of five, each of which has the atypical property of being more frequent in the third position) and actually measured availability in terms of its two major meanings, number and speed, that is, by the frequency of words produced within a fixed time and by time to the first word produced (Sedlmeier et al., 1998). None of the two measures of availability was found to predict the actual frequency judgments. In contrast, frequency judgments highly correlated with the actual frequencies, only regressed toward the mean. Thus, a reanalysis of the letter-frequency study provides no evidence of the two alleged systematic biases in frequency estimates or of the predictive power of availability.