Biases are engines of cognition

post by remember, Gabriel Alfour (gabriel-alfour-1) · 2022-11-30T16:47:58.318Z

This work was done while at Conjecture.

Humans are not perfect reasoners. Cognitive biases are not mere biases; they are engines of cognition, each exploiting regularities of the universe. Combining enough of these “cognitive biases” was sufficient to create a general intelligence, but the result does not even approximate a perfect reasoner.

These cognitive biases initially arose through the directed search process of evolution. For evolution, it is far easier to identify regularities in the universe and build up biases that leverage these local regularities than to figure out how to reason a priori. So we are built to rely on intuition and vibes in social settings rather than to explicitly calculate who knows what. Once enough biases have accumulated, and enough regularities have been taken advantage of, these biases combine to form human general intelligence.

Human general intelligence then launched a more directed search process in the form of culture and the transfer of knowledge and practices. These were found in a fairly ad-hoc fashion and constructed on top of the pre-existing biases that form human intelligence. So we rely on biased intuitions to decide whom to trust and whom to emulate, and then we learn from them, allowing knowledge and know-how to pass between generations. This takes the form of learning strategies suited to our environment, both natural and social.

Even science, which seeks deeper mechanistic understanding, is still a directed search process that targets whatever regularities we can find in the universe. We don't reason from quantum mechanics to a unified theory of everything. Instead, we often isolate and attack problems as we come across them, build a fragmentary understanding, and only integrate later, if we do at all. Despite the importance of molecular mechanisms in understanding evolution, Darwin first constructed his theory based on the similarities between species and a vague understanding that the world was very old. We didn’t understand the underlying mechanism at the time; all we could do was construct scaffolds of understanding around these local regularities and push deeper as more of the terrain was revealed.

So biases are engines of cognition that can be accumulated through directed search. The end result is a set of interlocking biases, each tied to understanding and exploiting a specific regularity. This is not perfect reasoning, nor even an approximation thereof. But somehow, it was enough to add up to general intelligence.

7 comments

comment by Steven Byrnes (steve2152) · 2022-11-30T17:07:26.292Z

Hmm, the only way I can make sense of this article is to replace the word “biases” with “heuristics” everywhere in the article, including the title. Heuristics are useful, whereas biases are bad by definition. Heuristics tend to create biases, so I can imagine people mixing up the two terms.

Sorry if I’m misunderstanding.

Replies from: adamShimi, Gunnar_Zarncke
comment by adamShimi · 2022-11-30T17:12:51.059Z

I'm confused by your confusion, given that I'm pretty sure you understand the meaning of cognitive bias, which is quite explicitly the meaning of bias drawn upon here.

Replies from: gjm
comment by gjm · 2022-11-30T19:53:22.571Z

A cognitive bias, according to the page you link to, is "a systematic pattern of deviation from norm or rationality in judgment".

This article is not, as I understand it, proposing that human general intelligence is built by piling up deviations from rationality. It is proposing that human general intelligence is built by piling up rules of thumb that "leverage [] local regularities". I agree with Steven: those are heuristics, not biases. The heuristic is the thing you do. The bias is the deviation from rationality that results. It's plausible that in some sense our minds are big interlocking piles of heuristics, fragments of cognition that work well enough in particular domains even though sometimes they go badly wrong. It is not plausible that our minds are piles of biases.

comment by Gunnar_Zarncke · 2022-12-01T19:08:57.345Z

I agree that heuristics would have been a better word choice, but the intended meaning was clear to me. Bias and heuristic look at the same thing through different lenses.

Replies from: Aiyen
comment by Aiyen · 2022-12-02T01:37:54.391Z

Is that true? Isn't at least one clear difference that it's difficult to stop engaging in a bias, while heuristics are easier to set aside? For example, if I think jobs in a particular field are difficult to come by, that's a heuristic, and if I have reason to believe otherwise (perhaps I know a particular hiring agent and know that they'll give me a fair interview), I'll discard it temporarily. On the other hand, if I have a bias that a field is hard to break into, maybe I'll rationalize that even with my contact giving me a fair hearing it can't work. It's not impossible to decide to act against a bias, but it's hard to do so without overcorrecting.

Replies from: Gunnar_Zarncke
comment by Gunnar_Zarncke · 2022-12-02T10:37:34.148Z

"if I have a bias that a field is hard to break into"

Where does your bias come from?

comment by Victor Novikov (ZT5) · 2022-12-02T06:09:24.675Z

Nice write-up! I'm glad someone brought up this idea.

Here's my take on this:

The human mind is an engine of cognition. Evolutionarily speaking, the engine is optimized for producing correct motor-outputs. Whether its internal state is epistemically true does not matter (to evolution), except insofar as it affects present and future motor-outputs.

The engine of cognition is made of biases/heuristics/parts that reason in locally invalid ways. Validity is a property of the system as a whole: the local errors/delusions (partially) cancel out. Think of something like SSC's Apologist and Revolutionary: one system comes up with ideas (without checking whether they are reasonable or possible), and one criticises them (without checking whether the criticism is fair). Both are "delusional" on their own, but the combined effect of both is something approaching sanity.
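A minimal toy sketch of that generate-and-criticise pattern, in Python (the function names and scoring rule are hypothetical illustrations, not a model of actual cognition): one component proposes without any filtering, the other filters without any fairness, and only their composition behaves sensibly.

```python
import random

random.seed(0)  # deterministic demo

def apologist(problem):
    """Generate candidate ideas enthusiastically, with no filtering at all."""
    approaches = ["analogy", "brute force", "imitation", "doing nothing"]
    return [f"solve '{problem}' via {a}" for a in approaches]

def revolutionary(idea):
    """Attack every idea indiscriminately; the score is harsh and noisy."""
    return random.random()

def combined_mind(problem, threshold=0.7):
    """Neither part is sane alone; generate-then-criticise approaches sanity."""
    survivors = [idea for idea in apologist(problem)
                 if revolutionary(idea) > threshold]
    return survivors or ["no idea survived; generate more"]

print(combined_mind("cross the river"))
```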

One can attempt to "weaponize" a bias to improve the speed/efficiency of cognition. However, this can cause dangerous cognitive instability, since many false beliefs are self-reinforcing: the more you believe one, the harder it is to unbelieve it. A bias that reinforces itself. And once the cognitive engine has gone outside its stability envelope, there is no turning back: the person who fell prey to the bias is unlikely to change their mind until they crash hard into reality, and possibly not even then (think of pyramid schemes, cults, the Jonestown massacre, etc.).
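A toy sketch of that self-reinforcing dynamic, under an assumed and purely illustrative update rule (not a claim about actual belief dynamics) in which disconfirming evidence is discounted by the square of current confidence: five pieces of contrary evidence barely move the belief, while a single confirmation more than undoes them.

```python
def biased_update(belief, confirming, gain=0.2):
    """Update a belief in [0, 1], discounting disconfirming evidence
    quadratically by how strongly the belief is already held."""
    target = 1.0 if confirming else 0.0
    weight = 1.0 if confirming else (1.0 - belief) ** 2
    return belief + gain * weight * (target - belief)

belief = 0.8
for step, confirming in enumerate([False, False, True, False, False, False]):
    belief = biased_update(belief, confirming)
    print(f"step {step}: evidence={'+' if confirming else '-'} belief={belief:.3f}")

# Despite mostly contrary evidence, the belief ends higher than it started:
# once confidence is high, the rule leaves almost no channel for unbelieving.
```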