The Paradox of Expert Opinion
post by Emrik (Emrik North) · 2021-09-26T21:39:45.752Z · LW · GW · 9 comments
The best-informed opinions tend to be the most selection-biased ones.
If you want to know whether string theory is true and you're not able to evaluate the technical arguments yourself, who do you go to for advice? Well, seems obvious. Ask the experts. They're likely the most informed on the issue. Unfortunately, they've also been heavily selected for [? · GW] belief in the hypothesis. It's unlikely they'd bother becoming string theorists in the first place unless they believed in it.
If you want to know whether God exists, who do you ask? Philosophers of religion agree: 70% accept or lean towards theism compared to 16% of all PhilPapers Survey respondents.
If you want to know whether to take transformative AI seriously, what now?
The people who've spent the most time thinking about this are likely to be the people who take the risk seriously. This means that the most technically eloquent arguments are likely to come from the supporter side, which will also produce the greatest volume of persuasion. Note that this will stay true even for insane causes like homeopathy: I'm a disbeliever, but if I were forced to participate in a public debate right now, my opponent would likely sound much more technically literate [LW · GW] on the subject.
To be clear, I'm not saying this is new. Responsible people who run surveys on AI risk are well aware that this is imperfect information, and try to control for it. But it needs to be appreciated for its generality, and it needs a name.
Sampling bias due to evidential luck is inevitable
This paradox stays true even in worlds where all experts are perfectly rational and share the same priors and values.
As long as
- experts are exposed to different pieces of evidence (aka evidential luck [? · GW]), and
- decide which field of research to enter based on something akin to Value of Information [? · GW] (even assuming everyone shares the same values), and
- the field has higher VoI the more you accept its premises,
then the experts in that field will have been selected for how much credence they have in those premises to some extent.
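As a toy illustration of this selection effect, here is a minimal sketch (a made-up model, not a measured result): every agent's credence in a field's premises varies only because of evidential luck, and an agent enters the field only when that credence clears an arbitrary entry threshold, standing in crudely for a VoI calculation. Surveying only the entrants then overstates the wider population's credence.

```python
import random

random.seed(0)

# Credences in the field's premises, varying only because different
# agents happened to see different evidence (evidential luck).
population = [random.betavariate(2, 2) for _ in range(100_000)]

# Crude stand-in for a VoI-driven career choice: only agents whose
# credence clears an (arbitrary) threshold bother entering the field.
ENTRY_THRESHOLD = 0.6
experts = [c for c in population if c > ENTRY_THRESHOLD]

print(f"mean credence, whole population: {sum(population) / len(population):.2f}")  # ~ 0.50
print(f"mean credence, surveyed experts: {sum(experts) / len(experts):.2f}")        # well above 0.50
```

The exact numbers depend entirely on the made-up distribution and threshold; the point is only that the entry decision itself pushes the surveyed group's average credence above the population's.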
But, as is obvious, experts will neither be perfectly rational nor care about the same things as you do, so the real world has lots more potential for all kinds of filters [? · GW] that make it tricky to evaluate expert testimony.
Adversarial Goodhart amplifying deception
There are well-known problems with the incentives experts face, especially in academia. Thus, Adversarial Goodhart [LW · GW]:
When you optimize for a proxy, you provide an incentive for adversaries to correlate their goal with your proxy, thus destroying the correlation with your goal.
Whatever metric we use to try to determine expertise, researchers are going to have an incentive to optimize for that metric, especially when their livelihoods depend on it. And since we can't observe expertise directly, we're going to have to rely very heavily on proxy measures.
Empirically, it looks like those proxies include: number of citations, behaving conspicuously "professional" in person and in writing, confidence, how difficult their work looks, and a number of other factors. Now, we care about actual expertise $E$, but, due to the proxies above, the metric we can actually observe, $M$, will contain some downward or upward error $\epsilon$ such that $M = E + \epsilon$.
When researchers are rewarded/selected for having a high $M$, we incentivize them to optimise for both $E$ and $\epsilon$. They can do this by actually becoming better researchers, or by increasing the error $\epsilon$: how much they seem like an expert in excess of how expert they are. When we pick an individual with a high $M$, that individual is also more likely to have a high $\epsilon$. Adversarial Goodhart makes us increasingly overestimate expertise the higher up on the proxy distribution we go.
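Here is a toy sketch of that selection-on-a-proxy effect (the normal distributions and the top-1% cutoff are arbitrary assumptions, not estimates of anything real): if we can only rank people by $M = E + \epsilon$, the people at the top of the $M$ ranking are systematically those with a large $\epsilon$ as well as a large $E$.

```python
import random

random.seed(0)

N = 100_000
expertise = [random.gauss(0, 1) for _ in range(N)]      # E: actual expertise
error = [random.gauss(0, 1) for _ in range(N)]          # epsilon: seeming more expert than you are
metric = [e + eps for e, eps in zip(expertise, error)]  # M = E + epsilon, the only thing we observe

# Select the "top experts": the top 1% by the observable proxy M.
top = sorted(range(N), key=lambda i: metric[i], reverse=True)[: N // 100]

mean_error_all = sum(error) / N
mean_error_top = sum(error[i] for i in top) / len(top)

print(f"mean epsilon, whole pool:      {mean_error_all:+.2f}")  # ~ 0.00
print(f"mean epsilon, top 1% by proxy: {mean_error_top:+.2f}")  # clearly positive
```

In the whole pool the error averages out to zero, but conditioning on a high proxy score drags the expected error upward: the harder we select on the proxy, the larger the overestimate.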
But, of course, these incentives are all theoretical. I'm sure the real world works fine.
9 comments
Comments sorted by top scores.
comment by Dustin · 2021-09-27T22:20:39.773Z · LW(p) · GW(p)
The examples make me think of reference class tennis.
↑ comment by Emrik (Emrik North) · 2021-09-27T23:17:57.781Z · LW(p) · GW(p)
A central question related to this post is "which reference class should you use to answer your question?" A key point is that it depends on how much selection pressure there is on your reference class with respect to your query.
comment by Vladimir_Nesov · 2021-09-26T23:59:42.020Z · LW(p) · GW(p)
The examples don't work. For string theory, the math of it is meaningful regardless of whether it holds about our world, and in physics consensus reliably follows once good evidence becomes available, so the issue is not experts misconstruing evidence. For homeopathy, there is a theoretical argument that doesn't require expertise. For religion, the pragmatic content relevant to non-experts is cultural/ethical, not factual. The more technical theological claims studied by experts are in the realm of philosophy, where some of them are meaningful and true, it's their relevance that's dubious, but philosophical interest probably has validity to it vaguely analogous to that of mathematical interest.
↑ comment by Emrik (Emrik North) · 2021-09-27T00:37:04.160Z · LW(p) · GW(p)
I'm not sold yet on why any of the examples are bad?
I know very little of string theory, so maybe that's the one I think is most likely to be a bad example. I assume string theorists are selected for belief in the field's premises, whether that be "this math is true about our world" or "this math shows us something meaningful". Physicists who buy into either of those statements are more likely to study string theory than those who don't buy them. And this means that a survey of string theorists will be biased in favour of belief in those premises.
I'm not talking inside view. It doesn't matter to the argument in the post whether it is unreasonable to disagree with string theory premises. But it does matter whether a survey of string theorists will be biased or not. If not, then that's a bad example.
↑ comment by Vladimir_Nesov · 2021-09-27T01:12:32.638Z · LW(p) · GW(p)
Math studied by enough people is almost always meaningful. When it has technical issues, it can be reformulated to fix them. When it's not yet rigorous, most of its content will in time find rigorous formulations. Even physics that disagrees with experiment can be useful or interesting as physics, not just as math. So for the most part the wider reservations about a well-studied topic in theoretical physics are not going to be about truth, either mathematical or physical, but about whether it's interesting/feasible to test/draws too much attention/a central enough example of physics or math.
comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2021-09-26T23:17:37.765Z · LW(p) · GW(p)
This paradox stays true even in worlds where all researchers are perfectly rational.
No, if there is common knowledge of rationality then Aumann's Agreement Theorem beats evidential luck.
↑ comment by Charlie Steiner · 2021-09-28T05:24:45.237Z · LW(p) · GW(p)
Maybe the "everyone's rational" world still allows different people to have different priors, not just evidence?
↑ comment by Emrik (Emrik North) · 2021-09-28T11:50:17.189Z · LW(p) · GW(p)
Right, good point. Edited to point out that same priors (or the same complexity measure for assigning priors) is indeed a prerequisite. Thanks!
↑ comment by Emrik (Emrik North) · 2021-09-26T23:42:37.028Z · LW(p) · GW(p)
This is a good point in the sense that communication between the researchers could in theory make all of them converge to the same beliefs, but it assumes that they all communicate absolutely every belief to everyone else faster than any of them can form new beliefs from empirical evidence.
But either way it's not a crux to the main ideas in the post. My point with assuming they're perfectly rational is to show that there are systemic biases independent of the personal biases human researchers usually have.
(Edit: It was not I who downvoted your comment.)