[Link] Evaluating experts on expertise

post by Kaj_Sotala · 2012-01-05T18:49:46.457Z · LW · GW · Legacy · 2 comments

https://ignoranceanduncertainty.wordpress.com/2011/08/11/expertise-on-expertise/

Nice article on meta-expertise, i.e. the skill of figuring out which experts actually are experts. The author notes that there are domains which can't really be mastered, and lays out some useful-seeming tests for distinguishing them:

Cognitive biases and styles aside, another contributing set of factors may be the characteristics of the complex, deep domains themselves that render deep expertise very difficult to attain. Here is a list of tests you can apply to such domains by way of evaluating their potential for the development of genuine expertise:

  1. Stationarity? Is the domain stable enough for generalizable methods to be derived? In chaotic systems long-range prediction is impossible because of initial-condition sensitivity. In human history, politics and culture, the underlying processes may not be stationary at all.
  2. Rarity? When it comes to prediction, rare phenomena simply are difficult to predict (see my post on making the wrong decisions most of the time for the right reasons).
  3. Observability? Can the outcomes of predictions or decisions be directly or immediately observed? For example in psychology, direct observation of mental states is nearly impossible, and in climatology the consequences of human interventions will take a very long time to unfold.
  4. Objective or even impartial criteria? For instance, what is “good,” “beautiful,” or even “acceptable” in domains such as music, dance or the visual arts? Are such domains irreducibly subjective and culture-bound?
  5. Testability? Are there clear criteria for when an expert has succeeded or failed? Or is there too much “wiggle-room” to be able to tell?

Finally, here are a few tests that can be used to evaluate the “experts” in your life:

  1. Credentials: Does the expert possess credentials that have involved testable criteria for demonstrating proficiency?
  2. Walking the walk: Is the expert an active practitioner in their domain (versus being a critic or a commentator)?
  3. Overconfidence: Ask your expert to make yes-no predictions in their domain of expertise, and before any of these predictions can be tested, ask them to estimate the percentage of time they're going to be correct. Compare that estimate with the resulting percentage correct. If their estimate was too high, then your expert may suffer from overconfidence.
  4. Confirmation bias: We’re all prone to this, but some more so than others. Is your expert reasonably open to evidence or viewpoints contrary to their own views?
  5. Hedgehog-Fox test: Tetlock found that Foxes were better-calibrated and more able to entertain self-disconfirming counterfactuals than hedgehogs, but allowed that hedgehogs can occasionally be “stunningly right” in a way that foxes cannot. Is your expert a fox or a hedgehog?
  6. Willingness to own up to error: Bad luck is a far more popular explanation for being wrong than good luck is for being right. Is your expert balanced, i.e., equally critical, when assessing their own successes and failures?
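The overconfidence test in point 3 above amounts to a simple calibration check. A minimal sketch (the function name and the sample data are illustrative, not from the article):

```python
# Sketch of the overconfidence test: compare an expert's self-estimated
# accuracy against their actual hit rate on a batch of yes/no predictions.

def overconfidence_gap(stated_accuracy, predictions, outcomes):
    """Return stated accuracy minus observed accuracy.

    A positive gap means the expert claimed to be right more often
    than they actually were, i.e. overconfidence.
    """
    hits = sum(p == o for p, o in zip(predictions, outcomes))
    observed = hits / len(predictions)
    return stated_accuracy - observed

# An expert claims 90% accuracy but gets 7 of 10 predictions right.
gap = overconfidence_gap(
    0.9,
    [True, True, False, True, False, True, True, False, True, True],
    [True, False, False, True, True, True, False, False, True, True],
)
print(f"overconfidence gap: {gap:+.2f}")  # → overconfidence gap: +0.20
```

In a real version of the test you would want many predictions, tested over time, since a handful of outcomes can make even a well-calibrated expert look over- or under-confident by luck.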

2 comments


comment by DuncanS · 2012-01-05T23:48:03.006Z · LW(p) · GW(p)

What are we distinguishing experts from? People who claim to be experts, and even think that they are experts, but are not. Having been around a bit, I find that in most cases such people have a distinctive cognitive style which can itself be detected in various ways.

Real experts make an attempt to explain things properly - they may simplify, but the simplified argument always maps onto a proper argument that has the same effect. Non-expert experts tend to explain mysterious points in ways that you still don't understand, or in terms of explanations that seem to make sense at first, but turn out not to contain everything you actually need to understand what's going on.

Experts of all kinds tend to have some expertise on a wide variety of things, and in some of these you will also know something and can check whether they really understand or not. Non-expert experts may claim to have wide knowledge, but tend to be very specialised. They tend to mess up in the areas that you can check.

Non-expert experts tend to make detailed and lengthy calculations based on shaky presumptions. Expert experts can calculate, but won't bother you with it - they will instead look at the same thing from several points of view - each viewpoint chosen to illustrate to you what's really going on.

Non-expert experts can draw some really bizarre conclusions - and won't really notice that they are bizarre. Expert experts will notice when one of their own conclusions is bizarre, and will tell you of its oddness. And won't draw quite so many such conclusions in the first place.

There are usually things you can check. Have they used the correct units? Experts won't confuse watts with joules, or force with power. They won't talk as if volts or amps are powerful. They won't say that 40C is twice as hot as 20C.

Non-expert experts tend to have particularly strong views about the morality of people who disagree with them, and regard those views as important to understanding the subject. Expert experts may have strong views about the morality of the non-experts, but generally don't talk about them, as they regard such views as irrelevant to understanding the subject.

Non-expert experts tend to try and make themselves look smart. Expert experts tend to make their listeners feel smart.

Go read something by Richard Feynman, or something like "The Selfish Gene". Or the sequences. Once you get used to what a proper explanation is, it's much easier to avoid being fobbed off with something else.

comment by gwern · 2012-01-05T20:01:22.555Z · LW(p) · GW(p)

Back to the caveats. First, practice without deliberation is useless. Having spent approximately 8 hours every day sleeping for the past 61 years (178,120 hours) hasn't made me an expert on sleep. Likewise, deliberative but ineffective practice methods deny us top-level expertise. Early studies of Morse Code experts demonstrated that mere deliberative practice did not guarantee the best performance results; specific training regimes were required instead. Autodidacts with insight and aspirations to attain the highest performative levels in their domains eventually realise how important getting the "right" coaching or teaching is.