Expert trap: Why is it happening? (Part 2 of 3) – how hindsight, hierarchy, and confirmation biases break the conductivity and accuracy of knowledge

post by Paweł Sysiak (pawel-sysiak) · 2023-06-09T23:00:55.043Z · LW · GW

Contents

  Why is the expert trap happening?
    Forgetting
    Hierarchy bias
    My-side bias
  Expert trap in the wild
    Replication crisis
    Experts are often worse at predicting reality
    Education system: learning sounds not concepts

Crossposted from Pawel’s blog

This essay has three parts. In part one [LW · GW], I include notes on epistemic status and give a summary of the topic, but mainly I describe what the expert trap is.

Part two is about context. In “Why is the expert trap happening?” I dive deeper into the biases and dynamics behind it. Then, in “Expert trap in the wild,” I try to point out where it appears in reality.

Part three [LW · GW] is about “Ways out”. I list my main ideas for how to counteract the expert trap, and I end with conclusions and a short Q&A.

How to read it? This part may cover a lot of material you are already familiar with. All chapters make sense on their own, so feel free to treat this article like a Q&A page and skip parts of it.

Why is the expert trap happening?

I think there are three main causes of the expert trap: forgetting, hierarchy bias, and my-side bias.

Forgetting

First, forgetting. Keep in mind that I am not citing any relevant research here; this is my hypothesis. The core mechanism is the following. When we learn something, we forget how it was not to understand it. We are in a constant process of forming new memories and forgetting old ones, and some are prioritized over others. In the context of learning, as soon as we learn something we tend to deprioritize the reasons why it was hard to understand. Keeping track of them may be costly, because we need to hold onto a second, seemingly more important thread – how to understand the problem we just learned about. And these two are different. It is as if we unplug certain cables connecting old contexts and plug them into new ones. Maybe they seem redundant to our brain, as the direction is to keep ascending further.

Also, if my-side bias is in play (more on that later), we will deprioritize memories that don’t strengthen our self-image. A state of confusion – not being able to understand things – feels unpleasant, like a shoe that is too tight. So we may deprioritize these memories and prioritize ones that leave us with positive states: everything fits correctly; I am smart.

Hierarchy bias

I see the expert trap as largely caused by hierarchy bias. In The Elephant in the Brain, Robin Hanson and Kevin Simler explain this distortion. The main thesis of the book is that we are deeply hierarchical creatures and at the same time don’t view ourselves this way.

The Elephant in the Brain made a large impression on my thinking. Suddenly a lot of convoluted reasoning paths straightened up. Its consequences, however, were quite hard to digest; quite a lot of them are ugly to imagine. No wonder this mechanism is hidden from us. One idea that helped me was realizing that hierarchies, which often feel strongly negative to me, can be viewed in a positive light. They enable collaboration, motivate people to take action, and act as magnets and filters for the right communities to form.

The authors present the main thesis of the book with some uncertainty, and it makes sense to be skeptical here. It is a large reinterpretation of how we usually assume human motivations work. It’s not that hierarchies are completely overlooked, but Hanson and Simler assert that they have a significantly broader influence on human motivations than is usually assumed. I wrestled with these ideas for a bit and stumbled upon a take that was revelatory to me: one can be more certain of a hypothesis when evidence for it comes from different knowledge areas. Frans de Waal explains something similar, but from the perspective of a primatologist:

In primatology, when we talk about primates, almost everything is viewed through a perspective of a hierarchy in the group. In social sciences, when we talk about humans, hierarchy is rarely ever mentioned.

The Elephant in the Brain also argues that the extent to which our brains are driven by hierarchies is inherently hidden from us. If we were aware that we are primarily motivated by ascending hierarchies, we would potentially be a lot worse at it. It’s a lot easier to deceive other people if you yourself don’t know you’re attempting to deceive them.

I think the findings from The Elephant in the Brain also apply to learning. One of the main motivations behind the drive to acquire knowledge is a need to impress others and ascend in hierarchies. Robin Hanson sees three main functions of academia. Alongside preserving and teaching knowledge, “Academia functions to create and confer prestige to associated researchers, students, firms, cities, and nations.” Link

This may help explain our drive to use jargon. Sometimes more complex vocabulary is a shortcut to a more precise definition. More often, however, it may be used because people want to be viewed as more knowledgeable. Again, this is most likely a subconscious force. So people who use jargon unnecessarily are not only confusing others but also themselves.

My-side bias

The third dynamic causing the expert trap is my-side bias (largely overlapping with the definition of ‘motivated reasoning’). We have a fundamentally distorted view of who we are. Our ego distorts facts and manufactures impressions and memories so that we perceive ourselves in the most positive light. This dynamic is described at length by Daniel Kahneman, Daniel Gilbert, and Julia Galef, among others. I wrote a more extensive description of my-side bias, but here is the summary. There are two main sub-mechanisms: illusory superiority and the ego-shield.

First, illusory superiority. Our ego distorts facts and manufactures impressions and memories to create the best possible self-image.

The majority of people think they are above average. In one study, 96 percent of cancer patients claimed to be in better health than the average cancer patient.

We select positive information about ourselves and filter out the negative. In one study, researchers showed that if somebody praises a person, they will look for evidence of how competent the source is; if somebody criticizes them, they will look for evidence of how incompetent the source is.

We evaluate things more positively once they become our own. “Consumers evaluate kitchen appliances more positively after they buy them, job seekers evaluate jobs more positively after they accept them, and high school students evaluate colleges more positively after they get into them. Racetrack gamblers evaluate their horses more positively when they are leaving the betting window than when they are approaching it, and voters evaluate their candidates more positively when they are exiting the voting booth than when they are entering it."

We look for positive explanations of things we are already doing. If we are eating ice cream, we will think it’s not as bad as we would if we weren’t eating it.

We evaluate actions more highly when we realize they were done by us. We will find more mistakes in our own work if we are tricked into thinking it wasn’t ours.

My-side bias also works as an ego-shield, something like an immune system for our psychology. When experiences make us feel sufficiently unhappy, this system kicks in and we shift blame or manufacture facts in order to create more positive versions of memories. “Able-bodied people are willing to pay far more to avoid becoming disabled than disabled people are willing to pay to become able-bodied again because able-bodied people underestimate how happy disabled people are.” (Examples from Stumbling on Happiness.)

I believe confirmation bias is a sub-effect of my-side bias. It’s very similar to the illusory superiority described above, but in relation to the opinions we hold. From the ocean of available data, we filter for information that strengthens the views we already have.

In one study, researchers juxtaposed two groups, one pro-capital punishment and one against it. The researchers fabricated two texts, each supporting one group’s position equally well. Both groups read both texts and came out of the exercise more polarized, believing more strongly than they did before. Link

If we did something ourselves, we’re less critical of it and judge it as a job better done. In one study, participants were tasked with finding errors in simple reasoning exercises. Participants first did the exercises themselves; when given the chance to make revisions, they didn’t make many corrections. They were a lot more effective at spotting their own mistakes when researchers told them they were looking at somebody else’s work.

Expert trap in the wild

Replication crisis

It has been found that many classic scientific studies that form the foundation of modern science are difficult or impossible to reproduce. This has been particularly widely discussed in the fields of psychology and medicine. In 2017, in response to this crisis, dozens of scientists and mathematicians co-authored a paper proposing to “Redefine statistical significance”. It argues that in “fields where the threshold for defining statistical significance for new discoveries is P < 0.05, we propose a change to P < 0.005”. Daniel Kahneman, who spoke widely on this subject, sees my-side and confirmation biases as the main drivers. To put it simply – scientists may be subconsciously finding ways to prove theories that will make them better scientists, making them more highly acclaimed in their field.
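To make the proposed change concrete, here is a minimal sketch of the arithmetic (my illustration with made-up data, not an example from the paper): a modest effect that counts as a discovery under the old threshold would no longer count under the stricter one.

```python
# A minimal sketch (data are invented for illustration) of what moving
# the significance threshold from P < 0.05 to P < 0.005 means in practice.
from scipy import stats

group_a = [5.1, 4.8, 5.6, 5.3, 4.9, 5.7, 5.2, 5.5, 5.0, 5.4]  # "treatment"
group_b = [4.9, 5.2, 4.8, 5.3, 4.7, 5.1, 5.0, 4.6, 5.2, 4.9]  # "control"

# Two-sample t-test on the (made-up) groups; p lands between the thresholds.
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(f"p = {p_value:.4f}")
print("discovery at P < 0.05: ", p_value < 0.05)   # True under the old rule
print("discovery at P < 0.005:", p_value < 0.005)  # False under the proposal
```

The same experiment, the same data – only the bar for calling it a discovery moves.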

Experts are often worse at predicting reality

According to research by Philip Tetlock, laid out in the book Expert Political Judgment, experts are often worse at predicting reality than informed non-experts:

"Studies have found that deep expertise in a subject does not positively correlate with accuracy in judgment. As part of his research on forecasting, professor Phillip Tetlock conducted a study with 284 political experts, that generated over 80,000 informed (where the estimate matched the area of expertise of the individual) and uninformed predictions over the course of twenty years. Surprisingly, Tetlock discovered that specialists are less reliable than non-experts, even within their specific area of study. In fact, the study concludes that after a certain point, deepening one's knowledge about a specific topic is affected by the law of diminishing returns and can hinder the ability to accurately predict a certain outcome. The results of the study can be attributed to the fact that subject matter experts are more likely to suffer from confirmation bias and are more likely to feel the pressure associated with reputational damage, both of which can affect their ability to produce accurate predictions.” Link

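Forecasting work like Tetlock’s scores predictions quantitatively, commonly with the Brier score: the mean squared error between forecast probabilities and what actually happened (lower is better; 0 is perfect, and always guessing 50% earns 0.25). The toy sketch below is my own illustration with made-up numbers, not data from the study, but it shows how an overconfident expert can score worse than a cautious layperson.

```python
# Toy Brier-score illustration (my sketch; all numbers are invented).
def brier_score(forecasts, outcomes):
    """Mean squared error between forecast probabilities and 0/1 outcomes."""
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# Five hypothetical events: 1 = the event happened, 0 = it did not.
outcomes  = [1, 0, 0, 1, 0]
expert    = [0.95, 0.90, 0.15, 0.40, 0.80]  # confident, badly wrong twice
layperson = [0.70, 0.40, 0.30, 0.60, 0.30]  # hedged, closer to reality

print(f"expert:    {brier_score(expert, outcomes):.3f}")    # ≈ 0.367
print(f"layperson: {brier_score(layperson, outcomes):.3f}")  # ≈ 0.118
```

A few confident misses are penalized heavily by the squared error, which is one way deep-but-overconfident expertise can lose to cautious, informed guessing.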
Education system: learning sounds not concepts

I believe large parts of educational systems are riddled with the expert trap. Most students experience school as boring. I think this is caused by circulating knowledge that has low conductivity and is, in large part, illusory. When something is deeply understood, knowledge works across distant areas, and the resulting synthesis connects and untangles things. If knowledge is delivered this way, it feels energizing, enlightening, and revelatory. Later I will describe Richard Feynman’s approach to learning. His incredible lecture “Fun to Imagine” unifies chemistry, math, and physics into one interconnected field.

In his essay “Guessing the Teacher’s Password” [LW · GW], Eliezer Yudkowsky points to the failed dynamics of education: we are largely habituated to learn through memorization and guessing teachers’ answers.

There is an instinctive tendency to think that if a physicist says “light is made of waves,” and the teacher says “What is light made of?” and the student says “Waves!”, then the student has made a true statement. That’s only fair, right? We accept “waves” as a correct answer from the physicist; wouldn’t it be unfair to reject it from the student?

Suppose the teacher asks you why the far side of a metal plate feels warmer than the side next to the radiator. If you say “I don’t know,” you have no chance of getting a gold star—it won’t even count as class participation. But, during the current semester, this teacher has used the phrases “because of heat convection,” “because of heat conduction,” and “because of radiant heat.” One of these is probably what the teacher wants. You say, “Eh, maybe because of heat conduction?” This is not a hypothesis about the metal plate. This is not even a proper belief. It is an attempt to guess the teacher’s password.

“What remains is not a belief, but a verbal behavior.”

Real learning, Eliezer points out, is about being aware of the difference between an explanation and a password. Learning is about finding knowledge that is in close contact with how we anticipate it should show up in reality. If this hypothesis is true, what differences do I expect to see in reality? If this hypothesis is true, what shouldn’t I expect to encounter?

Maybe, if we drill students that words don’t count, only anticipation-controllers, the student will not get stuck on “Heat conduction? No? Maybe heat convection? That’s not it either?”


Read Part 3 – Ways Out [LW · GW]
