Layers of Expertise and the Curse of Curiosity

post by Gyrodiot · 2019-02-12T23:41:45.980Z

Contents

  Discovery
  Learning iteration
  Underconfident experts
  Validation scarcity
  Teaching shift
  Exploration
  Underlying assumptions
  What to expect from this model

Epistemic status: an oversimplification of a process I'm confident about; meant as a proof of concept.

Related to: Double-Dipping in Dunning-Kruger

Expertise comes in different, mostly independent layers. To illustrate them, I will describe the rough process of a curious mind discovering a field of study.

Discovery

In the beginning, the Rookie knows nothing. They have no way to tell what's true or false in the field. Anything they say about it will probably be nonsense or, at best, no better than chance.

Consider a child discovering astronomy. They know the Sun and the Moon move across the sky, and that other planets and stars exist, but they wonder about the mysterious domain of space. They open a book, or watch a few videos, and their first discoveries are illuminating. The Moon goes around the Earth, which goes around the Sun; the other stars are very, very far away. Everything makes sense, because beginner material is designed to make sense.

The basic facts are overwhelming. They feel so valuable and wondrous that they have to be shared with other children, who know nothing! The knowledge gap is so large that the enlightened child is viewed as an Expert, and for a while the little explorer does feel like one.

However, the child is still a Rookie. They start explaining that planets go in perfect circles around the Sun, and that there's nothing but interstellar space beyond Pluto, except maybe comets, because introductory material is fuzzy on the details. The child may remain overconfident until someone more educated points out the mistakes. Then Curiosity kicks in.

Learning iteration

When discovering gaps in their knowledge, a curious mind will strive to fill them. They will seek new material and kind teachers, and if they're lucky they'll learn more and more. This is the first layer of expertise: accumulation of true facts. Repeatedly, they will be confronted with their own ignorance: for each new shard of knowledge they reveal, dozens appear still shrouded. Every time they think they've exhausted the field, an unexpected complexity will prove them wrong.

At some point they will internalize the pattern: the field is deep, and full of more details than they can learn in a lifetime. They will be cautious about their learning process, acknowledging that they may be wrong, that their models of reality aren't perfect, and that they don't know all there is to know about their field. This is the second layer of expertise: realization of one's limitations.

Faced with the ever-incomplete nature of their discoveries, the curious mind will keep learning, and eventually run up against the open problems of their field. Suddenly, reaching new knowledge is much more expensive. The frontier is full of conjectures, uncertainties and gaps. Venturing outside the well-studied questions comes with the risk of accidentally spouting nonsense. We can't have that! After learning so much, making Rookie mistakes would be unforgivable, wouldn't it?

Underconfident experts

A failure mode appears when an Expert confuses:

  1. knowing all there is to know about X;
  2. knowing all that is currently known about X;
  3. knowing more about X than almost any non-Expert.

An Expert may be very familiar with their own limits, but they may forget how far they have pushed them. They may treat the solutions to genuinely unsolved problems as things they should already know. They may look at other Experts, lament that they aren't as knowledgeable on certain details, downplay how much they actually know, and underestimate the quality of their own advice because it isn't perfect.

This happens when the Expert has no way to figure out their own level. Sure, you can teach the basic facts to Rookies, but any Intermediate can do that, right? Maybe you can even teach advanced material to Intermediates, but you feel you have to point out everything you can't teach, since you don't know everything, and surely a True Expert would know all this better than you...

What's missing is the third layer of expertise: evaluation of your own competence. The value of your expertise is mostly relative to the state of the art. For instance, any chemistry undergrad today knows more about radioactivity than Marie Curie ever did. Yet she was the leading expert of her time, made immense contributions to the field, and still died from what we would now consider a Rookie mistake (high doses of radiation are bad for your health).

One of the upsides of PhD graduation is that you get explicit confirmation from your peers that you're the leading expert (or nearly so) on your chosen topic. This is sometimes hard to accept. Research comes with a lot of failures, and it takes time to internalize that an Expert is allowed to fail so often. However, this problem is no longer limited to academic settings.

Validation scarcity

The curious mind today is in luck. Vast swathes of knowledge are available on the Internet. You can binge-read your favorite encyclopedia and specialized blogs, follow scientists on Twitter, and dive into arXiv if you're driven enough.

However, easy access to knowledge doesn't help you reach the third layer of expertise. You may learn as much as you can, and figure out your limitations, yet Acknowledged Experts won't have time to validate you. Online courses will give you some sort of diploma, but you're not sure it's as valuable as a college degree, and you've heard that even those mostly signal something other than Expertise.

Increasing the supply of knowledge creates lots of learners, who need validation. However, the supply of validation grows much more slowly than this demand, making validation harder to get. Worse, overconfident learners won't hesitate to post nonsense online, drowning out the competent voices and misleading other confused learners. Only the Experts can tell them apart, and they don't have enough time for everyone!

With the amount of Expert attention remaining equal, or growing slowly, the underconfident will fail to find validation and won't join the ranks of Experts, while the overconfident will not get corrected, which hinders their progress.

Hence the curse of Curiosity: the more accessible knowledge is, the harder it is to ensure (and signal) you got it right.
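
To make the dynamic concrete, here is a minimal toy simulation. All the numbers (growth rates, reviewing capacity) are made up for illustration; the only structural assumptions are the ones above, namely that learner inflow grows geometrically while Expert attention grows slowly.

```python
# Toy model of validation scarcity: learner inflow grows geometrically as
# knowledge becomes more accessible, while Experts (and thus total
# validation capacity) grow slowly. All parameters are illustrative.

def simulate(years=10, experts=100.0, expert_growth=0.02,
             reviews_per_expert=20, new_learners=5_000.0,
             learner_growth=0.25):
    """Print the backlog of learners who never receive Expert validation."""
    backlog = 0.0
    for year in range(1, years + 1):
        backlog += new_learners                  # demand for validation
        capacity = experts * reviews_per_expert  # supply: bounded Expert time
        backlog = max(0.0, backlog - capacity)   # unvalidated learners remain
        print(f"year {year:2d}: capacity={capacity:7.0f}  "
              f"unvalidated pool={backlog:9.0f}")
        experts *= 1 + expert_growth             # Expert ranks grow slowly
        new_learners *= 1 + learner_growth       # learner inflow grows fast

simulate()
```

Under these assumptions the unvalidated pool grows without bound, whatever the exact numbers, as long as learner inflow outpaces Expert growth.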

Teaching shift

The above assumes that Experts are confident enough of their own level to evaluate others in the field. This is the fourth layer of expertise: peer judgment. The ability to provide feedback, to point out someone else's mistakes and progress, to keep Curiosity pointed in the right direction.

Since this fourth layer is interactive by definition, there will be a signal of achievement, of understanding, some kind of proof that an Expert will evaluate. This could be sample problems to solve, a performance to give, an elaborate project to craft. It must be hard to fake, and quick to recognize. However, this signal is reliable only if it’s endorsed by Experts themselves. You can very well have a token degree for having completed an online course, but no assurance that it truly validates your expertise.

Part of teaching is making sure your students understand you and actually learn. Each field has its own methods to differentiate genuine understanding from guessing the teacher's password.

As quality sources of knowledge get shared and incrementally refined, the value created by teachers shifts toward evaluation rather than basic transmission. The curse of Curiosity entails that validation is scarce, so there is more to gain from designing proper tests of expertise, and better rewards for curiosity, than from adding to the heap of available facts or making a lesson slightly clearer.

Exploration

One can excel at the first four layers of expertise: knowing lots of things, being aware of what you don't know, of how much is currently known in general, and of how much you and anyone else know relative to each other. This includes being able to show your skill, but with those layers alone, you're ultimately limited to the state of the art.

Actively trying to figure out what isn't yet known is the fifth layer of expertise: novel research. The previous layers aren't a prerequisite. You could discover something new about a field without knowing much about it, but you shouldn't count on it, and you may not even notice that you got lucky and pushed the boundaries of knowledge.

Strictly speaking, the curse of Curiosity doesn't affect the fifth layer. You can reach, by yourself, a level high enough to do productive research, without needing peer validation. However, you don't want to waste time on explorations that an Expert would recognize as confused or futile, and you may be underconfident about being an Expert yourself.

You don't need a license to do great things. Still, to stay motivated, you need to be reasonably confident that your efforts have a positive expected value. As a corollary to the curse of Curiosity: self-confidence being harder to attain means there's a fast-growing pool of unaware Experts, perfectly capable of doing productive research but believing they can't.

I would argue this is a neglected problem.

Underlying assumptions

The above reasoning rests on the hypothesis (among others) that the supply of expert validation scales roughly linearly with the existing number of Experts, as if each of them had a bounded amount of time to spend assessing pieces of work, by grading, reviewing, or otherwise producing valuable feedback.

In particular, we assume there isn't any validation method that (a) scales well with the number of Rookies, and (b) is hard enough to fake to constitute a reliable validation signal.
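
In symbols, under the same assumptions (notation introduced here for convenience, with discrete time periods):

```latex
% E(t): number of Experts; c: bounded assessments per Expert per period;
% L(t): new learners seeking validation; B(t): backlog of unvalidated learners.
\[
  V(t) = c \, E(t) \qquad \text{(validation supply)}
\]
\[
  B(t+1) = \max\bigl(0,\; B(t) + L(t) - V(t)\bigr)
\]
% If L(t) grows faster than E(t), the backlog B(t) grows without bound.
```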

This assumption doesn't hold for domains where there are cheap ways to test predictions. Anyone can test their own expertise in intuitive ballistics by throwing balls; anyone can test their fluency in basic arithmetic by checking their results against a calculator. We assume that for most domains, the vast majority of advanced validation is done by peers; only the foremost experts have the resources to test brand-new predictions, which is where validation bottoms out; all other experts are either playing catch-up, or doing something other than original research.

The model also overlooks the gradual nature of expertise, as domain practitioners aren’t neatly separated between Experts and non-Experts. I posit that the curse of Curiosity holds anyway at every level of expertise, i.e. that the more accessible Nth-level knowledge is, the harder it is to find above-Nth-level validation. This position is stronger than the original formulation, and I’m slightly less confident about it.

What to expect from this model

As stated above, the curse of Curiosity implies a fast-growing pool of knowledgeable apparent Rookies, who aren't recognized (and don't consider themselves) as Experts. In other words, there's a talent overhang: a sudden improvement in validation methods would unlock a flood of previously hidden competent people.

I expect that, in most academic domains, the rise of average proficiency in the general population is no longer constrained by knowledge scarcity, but by validation scarcity. Therefore, more educational value would be created by better tests, better credentials, or easier access to experts than by clearer textbooks or wider diffusion of courses.

As an aside, I plan to further clarify my mental models of expertise, based on the five layers described above. I also hope to find more ideas related to scalable validation.

Thanks to gjm, ChristianKl, and other kind proofreaders for their feedback!

1 comment


comment by ryan_b · 2019-02-15T18:37:17.134Z

I've often wondered how far we could get if our training systems consisted less of one-to-many instruction and instead focused on deeply monitored, iterated group performance. The latter is how the most extreme environments operate, like space missions and the military, but the expense is hard to justify.

On the other hand, I don't know of any middle-ground attempts at doing this for more routine environments. Doing such training for a standard office environment, which relies on standard consumer hardware and has no unusual safety or performance requirements, is doubtless much cheaper. How much cheaper would it have to be, and what kind of performance would it have to deliver, to make it worth considering as an alternative to the standard model of education?

As I write this it occurs to me that most of the distinction is down to the environment, and this model could easily suffer from a lack of emphasis on the value-added tasks that companies are concerned with. Of course, formal education does not provide any focus on those tasks either; the degree just offers some confidence that once provided with them, they can be accomplished.