Making Expertise Legible: Being right should make you respected, not the other way around
post by ejacob · 2021-02-07T23:15:08.122Z · LW · GW · 11 comments
This is a link post for https://eidan.substack.com/p/making-expertise-legible
I will be hopping on a long train of thought largely already fleshed out by Scott Alexander and Zvi [LW · GW]. The problem they are talking about is complicated, and so I recommend reading those linked articles, but for those with little time or poor memory I will briefly summarize.
Political appointments and government bureaucracies are selective systems; behaviors that reinforce the power of people above and around you usually get rewarded with promotions and more power. (This is Zvi’s concept of Immoral Mazes [? · GW]). The CDC is such a system. SA argues that this is one reason why, as the rationalist blogosphere has recognized, information about the coronavirus from folks like Zvi is generally more accurate and useful than information from authoritative figures like Dr. Fauci. According to Scott Alexander, Zvi can optimize for “being right,” but:
When the Director of the CDC asserts an opinion, she has to optimize for two things - being right, and keeping power. If she doesn't optimize for the second, she gets replaced as CDC Director by someone who does. That means she's trying to solve a harder problem than Zvi is, and it makes sense that sometimes, despite having more resources than Zvi, she does worse at it.
Zvi’s response proposed that the situation is actually worse than this; he thinks that there’s not even any attempt at optimizing some combination of being right and keeping power, the desire to do good for good’s sake being trained out of such people long ago. Instead, people in high-level positions act out of a learned instinct to preserve their position.
Scott Alexander added a bonus post indicating that part of the reason “real” expertise doesn’t reach the policy level is that journalists can’t find insider sources willing to be contrarian, and that outsider sources simply don’t meet the standard to be cited in serious articles. In addition, many experts don’t trust the journalists to faithfully reproduce their real opinions instead of writing hatchet jobs. I’d like to focus on these two problems. If we can improve either of them, and get more contrarian opinions taken seriously (when they’re right, ideally), that can only help put pressure on institutions to improve.
I. A Journalism-Side Solution
It would be good if journalists could find and honestly portray contrarian opinions, and if contrarians could trust they won’t be misrepresented.
No newspaper or online magazine seems to be able to keep the trust of outsiders for very long, and the recent debacle with Jordan Peterson makes me even more convinced that some journalists are especially motivated to discredit outsiders at every opportunity. But the growth in long-form podcast interviews, as well as platforms like Substack, shows that a demand for outsider voices exists. I think media organizations might be leaving money on the table when they allow their journalists to burn bridges with contrarians. If sufficiently well-established people in journalism became convinced of this, or a new organization arose with a reputation for not straw-manning everyone, we might see a healthier public dialogue that results in saner policy being adopted faster. I don’t have any actionable ideas about how to do this but would be interested in hearing them.
II. An Expert-Side Solution
It would be good if people who are Smart and Correct About Things were taken more seriously, and if authorities had a stronger incentive not to be intentionally Stupid and Incorrect About Things. I have some half-baked ideas on how this could be approached.
I recently downloaded a browser extension from Ground News. Ground News is an app for fighting political echo chambers. It works by identifying the perspective of an article I’m reading and recommending pieces on the same topic from other political perspectives. It also identifies what it calls “blindspots,” stories that are disproportionately focused on by the right or left (for example, a story about an Islamist terror attack in a Middle Eastern nation might be ignored by left-wing media while many right-wing sources address it; the opposite for stories about the Trump family using campaign donations to pay off their own debts).
I can imagine a version of this centered on expert sources and institutions. Say Dr. Anthony Fauci is quoted in a news article: a tooltip appears indicating his tendency to agree or disagree with major institutions or other experts in his field. If he’s made specific predictions, the tool can show how accurate he’s been, and so on.
Independently, public figures could be catalogued and rated on a website similar in concept to Rate My Professor. Each figure could have their own page with a brief summary of positions they’ve publicly held, how often they’ve changed their mind, whether or not they agree with centralized opinions like those of the CDC or if the big organizations eventually come around to positions they were early adopters of, etc. With the participation of the experts themselves, it can even become a sort of Celebrity-League PredictIt; experts who wish to improve their reputations could do so by making bold contrarian claims that turn out to be true.
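As a rough illustration of how “how accurate he’s been” might be computed, here is a minimal sketch that scores an expert’s resolved predictions with a Brier score. The data format, the names, and the example predictions are all invented for illustration, not drawn from any real track record.

```python
from dataclasses import dataclass

@dataclass
class Prediction:
    claim: str                 # operationalized claim, e.g. "X will happen by date Y"
    stated_probability: float  # probability the expert publicly assigned to the claim
    outcome: bool              # whether the claim turned out to be true

def brier_score(predictions: list[Prediction]) -> float:
    """Mean squared error between stated probabilities and outcomes.

    0.0 is a perfect record, 0.25 is what always saying "50%" earns,
    and higher is worse.
    """
    return sum((p.stated_probability - p.outcome) ** 2 for p in predictions) / len(predictions)

# Hypothetical record for one public figure (claims and numbers are made up)
record = [
    Prediction("Vaccine widely available to the US public by April 2021", 0.7, False),
    Prediction("US daily cases exceed 200,000 in January 2021", 0.9, True),
]
print(f"Brier score: {brier_score(record):.3f}")  # lower is better
```

A tooltip or profile page could then surface this number (or a per-topic breakdown of it) next to the expert’s name wherever they are quoted.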
To some extent I think I am just reinventing Wikipedia with Politifact smushed inside. But a well-made machine learning system could automate the process of linking together people who write or speak about similar issues, and identifying and operationalizing claims they make. Given enough participation and interest, we can make the insiders’ reputations (and job security) depend more on being right as early as possible, as outsiders beat them to the obvious conclusions over and over again.
A public record of who made the right call and when on issues of public import like Coronavirus might help bring public officials who operate on simulacrum level 3 towards level 2 (or maybe even 1, may we be so blessed).
I am looking forward to any reactions.
11 comments
Comments sorted by top scores.
comment by crl826 · 2021-02-09T00:08:23.629Z · LW(p) · GW(p)
I've been thinking about this a lot.
Imagine Reddit + Prediction market. Instead of betting/winning money, you get enhanced karma and increased posting/commenting weight.
If you predict something successfully, say the number of COVID deaths for the week, your posts and votes would carry more weight than those of the people who failed to predict it correctly.
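A minimal sketch of how that weighting could work, assuming each user's resolved predictions are already scored (e.g. with a Brier score, where 0.25 is chance level); the particular weighting function and the 5x cap are my own assumptions, not part of the proposal.

```python
def vote_weight(brier: float, n_resolved: int, base_weight: float = 1.0) -> float:
    """Scale a user's post/vote weight by their forecasting track record.

    Assumptions (not from the comment): weight rises as the Brier score drops
    below the 0.25 "always say 50%" baseline, and accounts with few resolved
    predictions stay near the base weight so new users aren't locked out.
    """
    if n_resolved == 0:
        return base_weight
    skill = max(0.0, 0.25 - brier) / 0.25         # 0 at chance level, 1 for a perfect record
    track = min(1.0, n_resolved / 20)             # ramps up over the first ~20 resolved predictions
    return base_weight * (1 + 4 * skill * track)  # tops out at 5x the base weight

# A user who nailed the weekly COVID death forecasts vs. one who kept missing them
print(vote_weight(brier=0.05, n_resolved=30))  # ~4.2
print(vote_weight(brier=0.30, n_resolved=30))  # 1.0
```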
↑ comment by Viliam · 2021-02-09T23:07:28.311Z · LW(p) · GW(p)
Possible abuse: making many good predictions in order to accumulate points that you later sacrifice in making an intentionally incorrect expert opinion about your pet topic.
You could even do it without sacrificing the accumulated points by making outrageous conditional predictions with conditions you believe are not going to happen; e.g. "if Elon Musk succeeds in getting his rocket to Mars in 2021, the government will stop vaccinating people against COVID (because all the powerful people will abandon Earth, and ignore the suffering of the poor)" -- a safe prediction to make if you believe Elon Musk will not get to Mars in 2021.
↑ comment by crl826 · 2021-02-09T23:15:21.449Z · LW(p) · GW(p)
Utopia isn't an option, but that aside... I would argue that it would still be better than today, where people can consistently be wrong and still get to consistently give incorrect opinions. And everyone else in the market would still learn along the way, which is an improvement over the current state.
I don't think there would be much of a market for a ridiculous conditional like that, so I'm not too worried about it.
↑ comment by Viliam · 2021-02-10T17:47:52.501Z · LW(p) · GW(p)
Yes.
By the way, Stack Exchange (which is not a prediction market, but still a mechanism that tracks reputation for people who give good answers) tracks the score separately per area of expertise, so e.g. correctly answering 100 questions on software development will not make you an expert on parenting, and vice versa. So this is a possible approach to check for people who are "experts in X, crackpots in Y".
Which again is not perfect; for example, correctly answering 100 questions on Java will automatically make you look like an expert on PHP. Perhaps the area of expertise could be defined more fluidly, e.g. using tags, and displaying how much of an expert the person is for given tags. (Which again raises the question of how the tags are assigned, especially if assigning the tag is part of the controversy...)
But yes, even "farming karma at X, burning it at Y" is preferable to the current system, which has no tracking whatsoever (except for maybe "this person has a diploma" or "this person works for a newspaper", which gives them unlimited karma to burn).
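A minimal sketch of per-tag track records along these lines; it takes the tag assignment as given, which is exactly the open question raised above, and the tags and numbers are invented for illustration.

```python
from collections import defaultdict

class TagReputation:
    """Track forecasting accuracy separately per topic tag, Stack-Exchange-style."""

    def __init__(self):
        # tag -> [sum of squared errors, number of resolved predictions]
        self._stats = defaultdict(lambda: [0.0, 0])

    def record(self, tags: set[str], stated_probability: float, outcome: bool) -> None:
        error = (stated_probability - outcome) ** 2
        for tag in tags:
            self._stats[tag][0] += error
            self._stats[tag][1] += 1

    def brier_by_tag(self) -> dict[str, float]:
        """Per-tag Brier score: lower is better, 0.25 is chance level."""
        return {tag: sse / n for tag, (sse, n) in self._stats.items()}

rep = TagReputation()
rep.record({"epidemiology", "covid"}, 0.9, True)
rep.record({"epidemiology"}, 0.8, True)
rep.record({"economics"}, 0.9, False)  # confident and wrong outside their field
print(rep.brier_by_tag())  # good scores on epidemiology/covid, a bad one on economics
```

Displaying per-tag scores like these would make the "expert in X, crackpot in Y" pattern visible without letting karma farmed in one area spend freely in another.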
comment by Sumit Gupta (sumit-gupta) · 2023-11-18T15:18:45.075Z · LW(p) · GW(p)
behaviors that reinforce the power of people above and around you usually get rewarded with promotions and more power.
Don't outshine your master.
What kinds of behaviours help you to achieve this goal?
comment by jaspax · 2021-02-08T04:46:40.768Z · LW(p) · GW(p)
This is good thinking, but I worry that implementation is going to run into major difficulties.
- Having just one journalist who does this provides only a little value, but even getting one journalist is actually kind of hard. The journalist will need to have support from his editor and the institution as a whole, both of which will need to accept the hit to respectability that comes from the perception that they promote cranks and conspiracy theorists. Hard to pull off. There are already people who are famous for interviewing contrarians and outsiders (this is Joe Rogan's whole schtick), but that fact is precisely what keeps them from being taken seriously.
- This sounds like a neat idea, and it would probably work for a small user base like LW or (tentatively) HN. But if it grew much beyond that size the political pressure on it would become extremely distorting. The object lesson of political fact-checking sites is illustrative here: there were a few weeks in which they were genuinely useful and non-partisan, before partisan tribal pressures turned them into a punchline.
↑ comment by Viliam · 2021-02-09T23:24:05.949Z · LW(p) · GW(p)
Do the mainstream journalists only misrepresent "cranks and conspiracy theorists" and interview everyone else fairly; or is it more like they misrepresent pretty much everyone, but most people are misinterpreted in non-hostile ways so they don't mind much (maybe enough to refuse another interview, but not enough to sue)?
My impression (and brief personal experience) suggests that a typical journalist pretty much misrepresents everyone and everything, but it is a misinterpretation in a random direction, so usually not very bad, just... needless and confusing. More precisely, I believe that a typical journalist already has the whole story written before they interview you, and they are just fishing for a quote which, taken out of context, would support the story. However, their story usually does not needlessly make you a bad guy; it's just a combination of their stereotypes, and minor exaggerations they believe will be interesting to the reader.
If this model is essentially correct, then you could have a journalist who only interviews non-controversial people, but builds a reputation for interviewing without misrepresenting (kinda like Joe Rogan, but avoiding controversial people).
In Slovakia, there is a journalist who started his career just like this: he was writing blogs containing interviews with various people, providing the video within the text, so that everyone would see it's the same, only perhaps edited for greater legibility. His blog on a major newspaper's website was very popular; then the newspaper hired him. (He avoids interviewing too controversial people.)
↑ comment by jaspax · 2021-02-10T12:40:15.333Z · LW(p) · GW(p)
I agree that journalists misrepresent everyone; I disagree that the direction is mostly random, and it's not random precisely because "a typical journalist already has the whole story written before they interview you". In politically-charged situations (a category which includes an ever-growing number of things), this means that an interviewee who is on the same side as the journalist will get favorable representation, while an interviewee on the opposite side will get unfavorable representation. When writing about topics on which there is no particular political orthodoxy, the errors will be mostly random.
You could interview fairly but non-controversially, but this limits you to areas where there is not yet any widespread controversy, a small and shrinking territory.
↑ comment by crl826 · 2021-02-10T00:16:39.003Z · LW(p) · GW(p)
I hesitate to bring this up since politics, but in the US it is a very common perception that the media is liberally biased.
And the fact that certain stories are almost exclusively discussed in certain outlets based on politics makes me think that it is not random error.
Also, wouldn't avoiding controversial figures be the opposite of helpful if you are trying to get new information out? It seems not to solve the problem of getting legible expertise that is contrary to popular opinion into the marketplace.
↑ comment by Viliam · 2021-02-10T18:10:15.395Z · LW(p) · GW(p)
I like this!
Also, wouldn't avoiding controversial figures be the opposite of helpful if you are trying to get new information out?
Uh, depends on how exactly you set the controversy threshold. I didn't mean literally zero-controversy topics (these are quite rare recently), but rather something like: "Alex Jones - definitely no; Donald Trump - probably yes". It would probably be better described as staying within the Overton window.
I mean, I don't like Trump, but giving him a dozen questions and then literally writing what he answered without twisting his words... that seems like, dunno, basic human decency. (Especially if I add the disclaimer "I interviewed him, but his opinions are his own".) Yet somehow journalists seem to fail at this.
Also, there are all kinds of information that journalists report on incorrectly, not just controversies.
↑ comment by crl826 · 2021-02-10T23:40:58.231Z · LW(p) · GW(p)
It would probably be better described as staying within the Overton window.
It's a different name, but by definition, this standard means you are not getting new, unorthodox opinions to the public.
OP was trying to figure out how to have respectability follow 'rightness'. Only talking to people who are already respectable doesn't help that at all.