Expert trap – Ways out (Part 3 of 3)

post by Paweł Sysiak (pawel-sysiak) · 2023-07-22T13:06:14.617Z

Contents

  Intro
  Ways out
    Feynman learning (way out #1)
    First-principle knowledge (way out #2)
    Psychological safety and working in groups (way out #3)
    Asking not you (way out #4)
    Adversarial collaborations (way out #5)
    Noticing surprise (way out #6)
  The stakes
  Q&A
      Are you conflating the expert trap with correcting biases and epistemology in general?
      These are very large claims. What is your confidence about them?
      Are you saying there is a way to learn my Ph.D.-level math in five minutes?
      If my-side bias is real, how come some people lack confidence or are depressed? Is there variation in how confident we are across disciplines?

Crossposted from Pawel’s blog

This is the third part of the series, but you can read it on its own. In part one [LW · GW], I include notes on epistemic status and give a summary of the topic, but mainly I describe what the expert trap is.

Part two [LW · GW] provides context. In “Why is expert trap happening?” I dive deeper into the biases and dynamics behind it. Then, in “Expert trap in the wild,” I point out where it appears in reality.

Part three is about “Ways out”. I list my main ideas for how to counteract the expert trap, and I end with conclusions and a short Q&A.

How to read it? Some of this may already be familiar to you. All chapters make sense on their own, so feel free to read this like a Q&A page and skip parts of it.

Intro

I think the biases I described can be fixed if we adopt different norms around learning, thinking, and evaluating knowledge. This may be a very hard task, though. Some people who have thought a lot about this topic are skeptical about the extent to which biases can be corrected. On one side are Yudkowsky, the Center for Applied Rationality, and Jacob Falkovich, who think that rationality is a learnable skill. On the other side are Kahneman and Scott Alexander, who disagree. Read more on this debate in “Is Rationalist Self-Improvement Real?” [LW · GW] (and don’t miss the comments by Scott Alexander).

I put some probability on this being an almost impossible task. These biases may be largely evolutionarily determined. I am fairly certain that if human babies were dropped on a deserted island and, over centuries, built a new society without any cultural influence, these biases would resurface largely intact.

That said, I personally put higher credence on this being fixable: if we, as a society, prioritized it highly enough and adopted new norms around learning, teaching, talking, and epistemology, I think these biases could be corrected to a significant degree. And I am pretty sure that not enough work has been done to verify this. It’s hard to disagree with minds as thorough as Kahneman and Scott Alexander, but I will place my bet.

Ways out

Feynman learning (way out #1)

Perhaps the most powerful way to counteract the expert trap was practiced by Richard Feynman. If you don’t know him, he was one of the most accomplished physicists of the twentieth century. If you were to take one idea from all this writing, take this one. Feynman followed the rule:

"If I cannot explain it simply, I don't understand it well enough”.

When he was attempting to understand a concept from high-level physics, he would approach freshman students and try to explain it in the simplest possible way. When he couldn’t do it, he would go back and study more. He repeated this over and over until he landed on descriptions of complex topics that were simple to explain.

This simple method may, in a large way, break the constraints of the expert trap. Whenever Feynman deepened his knowledge, he was also forced to explain it at a more basic level. This approach keeps one from falling into an expertise silo. One needs to keep explaining more complex knowledge with language, metaphors, and concepts taken from more basic levels, other realms of understanding, or other knowledge areas. Learning things this way makes one see how they connect. It is as if, instead of working up the expertise levels and making a lot of short connections, one connects to a much broader knowledge base – and therefore verifies knowledge from more varied directions and through more knowledge pathways.

Feynman learning also means moving towards something more complex while checking in with your previous states of mind, from when you didn’t yet understand. If the expert trap is driven by disconnecting contexts – at a higher level of expertise, you forget what you didn’t get previously – then this method forces you to keep your cables plugged into the prior contexts.

One of my absolute favorite validations of this approach is “Fun to Imagine”, a one-hour video of Richard Feynman casually explaining concepts from physics and chemistry. I learned some bits of this knowledge in school, but it all seemed disconnected, divided into separate knowledge areas. Feynman cuts across all of them, integrating concepts such as matter, heat, magnetism, and electricity into one holistic explanation. It is one of those interpretations that is impossible to unsee. Whenever I learn anything new on these topics, this explanation is my new foundation. I come back to it to reference, append, visualize, or verify new knowledge.

First-principle knowledge (way out #2)

One way to acquire knowledge that avoids the expert trap dynamic is to approach it from first principles. Knowledge is a spectrum. At one end, we copy knowledge and fall into groupthink; this is the default way of reasoning. At the other end, we use first principles. One can get closer to that ideal, but it is both impractical and impossible to verify everything on our own – you will eventually stumble across a question you won’t be able to answer.

Holden Karnofsky’s article “Minimal-trust investigations” captures an actionable account of getting closer to first-principles knowledge. In the piece, he gives an example of how he approached the first evaluation of the Against Malaria Foundation (an org that provides insecticide-treated bed nets in Africa and Asia). The primary goal of this exercise was to defer to the knowledge of others as little as possible. Holden explains how he, all by himself, delved into the research, checked calculations, explored counterfactuals, and verified all the other variables influencing the topic. It’s fun to read because the result of this investigation is very consequential: for many years now, the Against Malaria Foundation has been one of the charities most recommended by GiveWell (givewell.org), an organization Holden co-founded.

Holden admitted that over his life he has managed to do only a handful of minimal-trust investigations. He thinks that approaching learning this way, even though laborious, was very valuable because it influenced something broader in his process. Even doing a couple of these investigations equipped him with new intuitions for evaluating ideas more accurately. For example, he has since acquired the habit of quickly clicking through to the research cited in the texts he reads; he realized that spotting low-quality research is something one can do quickly and with little effort.

Psychological safety and working in groups (way out #3)

Another way out of the expert trap is to engage all people’s perspectives when working in a group setting. In “What Google Learned From Its Quest to Build the Perfect Team”, Charles Duhigg describes a three-year effort at Google to understand what drives the highest-performing teams. For a long time, the researchers could not pin anything down: team performance wasn’t correlated with how diligent a team was or how many top performers were on it. After looking into many hypotheses, the researchers concluded that one of the most important dynamics was psychological safety. Members of the best-performing groups at Google 1) spoke in roughly equal proportion (equal conversational turn-taking) and 2) scored high on social sensitivity tests (being good at the “Reading the Mind in the Eyes” test: guessing what people are thinking or feeling from photos of their eyes).

The way I read these findings is that what’s most valuable is being able to source each team member’s perspective equally. If one or a few people force a dominant point of view, the outcome will be weaker. By default, the cognition of each individual member is distorted and biased. What if the best way out of that is a collaborative environment where everybody is encouraged, and feels safe, to share their viewpoint? In doing so, participants counteract, bit by bit, their skewed individual perspectives. Perhaps the most effective ideas don’t come from the positions that are the most expert, but from those that are the least biased.
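
To make “equal conversational turn-taking” slightly more concrete, here is a minimal sketch of one way it could be quantified – my own illustration, not the metric used in Google’s study – scoring a meeting by the normalized entropy of who takes each turn:

```python
import math
from collections import Counter

def turn_taking_equality(turns: list[str]) -> float:
    """Normalized entropy of conversational turns: 1.0 means everyone
    speaks equally often; values near 0.0 mean one person dominates.
    An illustrative metric, not the one used in Google's research."""
    counts = Counter(turns)
    if len(counts) < 2:
        return 0.0  # a single speaker cannot share the floor
    total = sum(counts.values())
    entropy = -sum((c / total) * math.log(c / total) for c in counts.values())
    return entropy / math.log(len(counts))

# A meeting dominated by one person scores low...
print(turn_taking_equality(["ann"] * 8 + ["bo", "cy"]))  # ~0.58
# ...while balanced turn-taking scores near 1.0.
print(turn_taking_equality(["ann", "bo", "cy"] * 3))     # ~1.0
```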

Asking not you (way out #4)

I think the expert trap is also influenced by creator’s bias. And when I say creators, I don’t mean only artists, writers, and journalists. All of us are creators of some sort: when we get into brewing coffee, craft an important email, build a presentation, or think for a long time about a concept. When we create, we become experts on the piece we spend extended time on. And there is always a misalignment between what the creator assumes and what the piece really communicates. Creators often spend 1000x more time with the content than somebody who digests their piece. During this time, creators try to communicate their intentions and project their skill into the piece, but things like my-side bias, rationalization, and the typical mind fallacy gradually distort their perception. There may also be a largely physiological process at play: when one uses one’s senses, one gradually loses grip on what one perceives. For example, when one eats, listens, or smells, the first impressions are the most vivid. The longer one perceives something, the less accurately one is able to judge how it appears to the world.

I think there are a couple of ways to counteract creator’s bias. First, it helps to be perceptive of first impressions of creations. Remember how a piece feels when you create it for the first time or when you glance at it after a break; this impression will soon be distorted or gone. Second, it helps to look for opportunities to gain distance from your own work. Take frequent and long breaks during which you proactively try not to look at your work. Your job is to find ways to forget your own intentions and see the work as far removed from you as possible – to see it like a person encountering it for the first time.

Lastly, it helps to be extremely skeptical about your point of view on your own work; it is guaranteed that your perspective is already distorted. The most effective remedy is to ask other people how they view it. But it’s crucial to find the right audience and not fall into sampling bias: pick people who are unaffiliated with you or have no stake in the work, and try not to ask leading questions. There is a whole art to getting quality feedback rather than misleading data. The best resources I have found are Design Sprint and the chapters by Michael Margolis. I am still perplexed at how slowly findings from user research developed in the tech world are bleeding into other disciplines like sociology, urbanism, art, survey design, and storytelling.

Adversarial collaborations (way out #5)

One of the most interesting accounts of trying to get past the expert trap is described by Daniel Kahneman. He calls the method adversarial collaboration. Once a person has a hypothesis, they find a competent person who holds an opposing view. They collaborate to find the cruxes of their positions and seek common ground. There should be a neutral arbiter, and prior to the experiment both parties should discuss what sorts of results would change their minds.
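
The essential discipline here is agreeing, in advance, on what would change each mind. As a toy illustration of the pre-registered ingredients – the structure and names below are my own invention, not Kahneman’s protocol:

```python
from dataclasses import dataclass, field

@dataclass
class AdversarialCollaboration:
    """Hypothetical pre-registration record: two opposing positions,
    a neutral arbiter, and mind-changing criteria agreed on before
    any results come in."""
    hypothesis: str
    proponent: str
    opponent: str
    arbiter: str
    cruxes: list[str] = field(default_factory=list)
    proponent_would_update_if: str = ""  # agreed before the experiment
    opponent_would_update_if: str = ""   # agreed before the experiment

plan = AdversarialCollaboration(
    hypothesis="Method A outperforms method B",
    proponent="Alice", opponent="Bob", arbiter="Carol",
    cruxes=["Does the effect survive a preregistered replication?"],
    proponent_would_update_if="no effect in the joint replication",
    opponent_would_update_if="a clear effect in the joint replication",
)
```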

Kahneman himself held several workshops to stress-test his own ideas. Interestingly, he found it more difficult than anticipated to change his mind: neither he nor any of his collaborators ever did. As far as I understand, Kahneman still values the process and improved his thinking through it, but he never changed his fundamental positions. Others were more successful at this. Scott Alexander invited people to run several adversarial collaborations and published the results as articles on his blog [? · GW].

Noticing surprise (way out #6)

One of my favorite questions to ask people I spend time with is what they found surprising about our experience together. It is interesting how often people have a hard time answering. Sometimes the question seems uninteresting to them, and sometimes even annoying. I also ask it to evoke surprise in myself: I want to get out of the common trance of “I am right; everything is as it should be.”

As described earlier, I think we often override memories to our own advantage or select information that confirms beliefs we previously held. To simplify this dynamic: if we believe that x=2 and then see that x=3, we will override our memory and perceive that we always thought x=3. Alternatively, if we think Y is good and then read a bunch of varied evaluations of Y, we will filter for confirmations that Y is good. We are in a constant confirmation tunnel. We bend our perceptions and, relative to how often we are wrong, we are hardly ever surprised. But when one pays attention to the feeling of surprise or confusion, I think one can slightly rupture this dynamic. Perhaps there is a tiny moment, a short opening between thinking that x=2 and seeing that x=3, when we can register that we were wrong – that this result is surprising. Perhaps, if we keep doing that, we can calibrate our thinking. We can build better intuitions about how we reason and where we are imprecise, wrong, and mistaken.
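
A practical way to widen that opening – my own suggestion, not something from the sources above – is to write predictions down before seeing the outcome, so the record cannot be silently rewritten. A minimal sketch (the file name and fields are arbitrary):

```python
import json
from datetime import date

def log_prediction(path: str, claim: str, prediction, confidence: float) -> None:
    """Record a belief *before* observing the outcome, so there is no room
    to quietly rewrite the memory from x=2 to "I always thought x=3"."""
    with open(path, "a") as f:
        f.write(json.dumps({
            "date": str(date.today()),
            "claim": claim,
            "prediction": prediction,
            "confidence": confidence,
        }) + "\n")

def check_outcome(path: str, claim: str, outcome) -> None:
    """Compare the logged prediction against reality and flag the surprise."""
    with open(path) as f:
        entries = [json.loads(line) for line in f]
    for e in entries:
        if e["claim"] == claim:
            if e["prediction"] != outcome:
                print(f"SURPRISE: predicted {e['prediction']} "
                      f"at {e['confidence']:.0%}, observed {outcome}")
            else:
                print(f"Confirmed: {outcome}")

log_prediction("predictions.jsonl", "value of x", 2, 0.9)
check_outcome("predictions.jsonl", "value of x", 3)
# SURPRISE: predicted 2 at 90%, observed 3
```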

The stakes

Fixing the biases discussed in this article is an extremely difficult task. Daniel Kahneman, who worked on them for years, still doubts that people can learn how to avoid them. Kahneman stated, maybe most clearly, one of the main cruxes that may lie beneath the expert trap; he calls it belief perseverance:

“Beliefs persevere even without any social pressure. … The belief will not change when the reasons are defeated. The causality is reversed. People believe the reasons because they believe in the conclusion … We believe what the people we love and trust believe. This is not a conscious decision to conform by hiding one's true beliefs. … this is how we believe.” – Daniel Kahneman, “Adversarial Collaboration”, Edge

To me, this shows the extent of the expert trap. The more we learn, the more fortified we become in our perspectives. It is such a challenge, both collective and individual, to avoid being distorted by our own points of view. I think that correcting these tendencies correlates with people becoming more careful, humble, and flexible thinkers. This, in turn, may indirectly improve the way we form opinions, have discussions, create democracies, and coordinate in large groups. The question of coordination seems crucial because we may be living, as others are pointing out, in the hinge of history [? · GW], the most important century, a moment we owe to the future. I think this is the era of looming existential risks: unaligned artificial intelligence, nuclear war, man-made pandemics, climate change, and other technologies we don’t yet think about and that may still be invented. People are attempting to find solutions within each discipline. But maybe more work needs to be done on a more general level, fixing how we value, think, and coordinate. I believe there is a good chance that, on a societal level, there are ways to do something about it. And this may be one of the most important puzzle pieces to get right.

Q&A

Are you conflating the expert trap with correcting biases and epistemology in general?

I think these areas largely overlap. However, I am only focusing on biases that relate directly to the expert trap.

These are very large claims. What is your confidence about them?

I promised at the beginning to remind you about epistemic status – you can read it at the very beginning of part one.

Are you saying there is a way to learn my Ph.D.-level math in five minutes?

No, but I sense that in almost any area the conductivity of knowledge can be significantly improved. What if knowledge that now takes six years to comprehend would, with a teacher like Feynman, take one month?

If my-side bias is real, how come some people lack confidence or are depressed? Is there variation in how confident we are across disciplines?

I am almost certain that, on aggregate, taking all people across all knowledge areas, we skew assessments in our favor and are overconfident. Suppose a score of 50% means a person is well calibrated. I think it is highly likely that, on average (across all people and all knowledge disciplines), people would score 70% or higher. I think this dynamic is significantly stronger than conventionally assumed. And of course, there is some variation, and there are outliers, between people and disciplines.
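
In code, the claim looks something like this – a toy illustration with invented numbers, not real data:

```python
# Toy illustration of the claim above, with invented numbers (not data):
# if people were well calibrated, self-assessments of "how good am I
# relative to others?" would average around the 50th percentile.

self_assessed_percentiles = [80, 65, 90, 70, 55, 75, 85, 60, 40, 72]

average = sum(self_assessed_percentiles) / len(self_assessed_percentiles)
print(f"average self-assessment: {average:.0f}th percentile")          # 69th
print(f"overconfidence gap vs. a calibrated 50: {average - 50:+.0f}")  # +19
```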
