Reliably wrong
post by NancyLebovitz · 2010-12-09T14:46:02.157Z · LW · GW · Legacy · 19 comments
Discussion of a book by "Dow 36,000" Glassman. I'm wondering whether there are pundits who are so often wrong that their predictions are reliable indicators that something else (ideally the opposite) will happen.
comment by CronoDAS · 2010-12-09T21:18:19.659Z · LW(p) · GW(p)
A bit of silliness, courtesy of an old Paul Krugman blog post:
[W]ith apologies to Brad DeLong, when reading WSJ editorials you need to bear two things in mind:
- The WSJ editorial page is wrong about everything.
- If you think the WSJ editorial page is right about something, see rule #1.
After all, here’s what you would have believed if you listened to that page over the years: Clinton’s tax hike will destroy the economy, you really should check out those people suggesting that Clinton was a drug smuggler, Dow 36000, the Bush tax cuts will bring surging prosperity, Saddam is backing Al Qaeda and has WMD, there isn’t any housing bubble, US households have a high savings rate if you measure it right. I’m sure I missed another couple of dozen high points.
More seriously, I've heard that the Wall Street Journal editorial page acts like our clever arguer: the writers are given a conclusion (usually something along the lines of "cut taxes!") by their bosses and ordered to write an editorial in support of that conclusion. Saying "This policy should be implemented because it would make my bosses wealthier (at the expense of others)" isn't very persuasive, so they need to get clever about it...
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2010-12-10T18:57:16.905Z · LW(p) · GW(p)
Even if they are acting as clever arguers for their bosses, that shouldn't make them actively anti-correlated with correct claims. That's not in their interest, and without a lot of brainpower dedicated to just that goal, it would be extremely difficult.
Replies from: None
↑ comment by [deleted] · 2010-12-10T19:57:50.960Z · LW(p) · GW(p)
It should, if we assume the interests of their bosses are not the same as the interests of the readers. Suppose that, say, cutting taxes on the rich will lose me five utilons but gain their bosses the same number. If they're trying to persuade me to support such a cut, then predicting a loss of utilons for me will make me less likely to support it, whereas claiming it will gain me utilons will make me more likely to support it.
For them to be actively anti-correlated with correctness, then, we only have to assume that the bosses' interests are actively anti-correlated with those of the readers.
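A toy simulation, with all numbers assumed purely for illustration, makes that condition explicit: draw policies whose payoffs to boss and reader are correlated at rho, let the editorial page endorse whatever pays the boss, and check how often the endorsement is also good for the reader. The page only becomes a reliable contrary indicator when rho is negative.

```python
import numpy as np

def endorsement_accuracy(rho, n_policies=100_000, seed=0):
    """Fraction of boss-endorsed policies that also benefit the reader,
    when boss and reader payoffs are jointly normal with correlation rho."""
    rng = np.random.default_rng(seed)
    cov = [[1.0, rho], [rho, 1.0]]
    boss, reader = rng.multivariate_normal([0.0, 0.0], cov, size=n_policies).T
    endorsed = boss > 0          # the page argues for whatever pays the boss
    return (reader[endorsed] > 0).mean()

for rho in (-0.9, -0.5, 0.0, 0.5, 0.9):
    print(f"rho={rho:+.1f}: endorsements good for the reader "
          f"{endorsement_accuracy(rho):.0%} of the time")
```

With rho = -0.9 the endorsed policies help the reader only about 14% of the time; at rho = 0 it's a coin flip, so mere indifference to readers' interests isn't enough to produce reliable wrongness.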
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2010-12-10T20:28:01.040Z · LW(p) · GW(p)
But the claim that the bosses' interests are actively anti-correlated with the readers' is itself an incredibly strong claim. In the vast majority of contexts it won't be true. Who benefits if new technologies are discovered? Who suffers if there's a nuclear war, or if we reach peak oil faster than generally anticipated?
It is conceivable that within the narrow confines of what the WSJ editorials generally discuss there's an anti-correlation. But even that set of topics is wide enough that this seems unlikely. It seems more likely to me that there are specific ideologies which, within their narrow realm, are anti-correlated with the truth on controversial questions. But to construct an explicit, unambiguous example of that even in economics, I need to do something like select mercantilism as the relevant economic theory.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2010-12-11T15:39:08.840Z · LW(p) · GW(p)
I agree it's a strong claim, but I can see a mechanism that makes it a little more plausible. Where the owners of the WSJ have the same interests as their readers, the WSJ need not write about it, because the readers will already act in their mutual interest. It is only when their interests are opposed that the WSJ has to work to persuade readers to act against their own interests and for Murdoch's.
comment by Emile · 2010-12-09T16:43:01.361Z · LW(p) · GW(p)
Most interesting claims are narrow; the narrower a claim, the more likely it is to be wrong.
If you have an expert who seems systematically wrong, "opposite" predictions will be broad claims that are often right. But that's not very useful - it's easy to make a series of broad claims, most of which are right.
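A minimal sketch of this point, with base rates assumed for illustration: betting against a perfectly anti-reliable pundit's narrow claims wins about 99% of the time, but betting against randomly chosen narrow claims already wins about 95% of the time, so the pundit adds little.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
base_rate = 0.05   # assumed prior that a random narrow claim comes true
anti_rate = 0.01   # the anti-expert's narrow claims come true even less often

# Strategy 1: bet against each of the anti-expert's narrow claims.
against_anti_expert = rng.random(n) >= anti_rate   # right when the claim fails
# Strategy 2: bet against narrow claims chosen at random (no expert needed).
against_random = rng.random(n) >= base_rate

print(f"betting against the anti-expert: {against_anti_expert.mean():.1%} right")
print(f"betting against random claims:   {against_random.mean():.1%} right")
```

Almost all of the "opposite" strategy's accuracy comes from the base rate, not from the anti-expert.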
comment by jsalvatier · 2010-12-09T15:48:56.333Z · LW(p) · GW(p)
This seems like a very unlikely sort of phenomenon; reversed stupidity != intelligence, etc. Why would you expect such people to exist?
Replies from: simpleton, Miller, NancyLebovitz
↑ comment by simpleton · 2010-12-09T21:24:32.930Z · LW(p) · GW(p)
It's common in certain types of polemic. People hold (or claim to hold) beliefs to signal group affiliation, and the more outlandishly improbable the beliefs become, the more effective they are as a signal.
It becomes a competition: Whoever professes beliefs which most strain credibility is the most loyal.
Replies from: None
↑ comment by [deleted] · 2010-12-09T21:57:15.844Z · LW(p) · GW(p)
I think that most people who tell pollsters they believe conspiracy theories wouldn't bet on them.
Replies from: David_Gerard
↑ comment by David_Gerard · 2010-12-10T00:54:38.951Z · LW(p) · GW(p)
Data on that question would be an interesting thing to gather, though I might guess they would take attempts to measure their belief as somehow a manifestation of the conspiracy. (Everything is evidence for the conspiracy.)
↑ comment by NancyLebovitz · 2010-12-09T15:51:44.863Z · LW(p) · GW(p)
Possibly just an aesthetic preference. You probably have a point.
I think such people might exist when the possibilities for prediction are relatively constrained, but even then, some fraction of their consistent wrongness would be a matter of luck, and couldn't be used for prediction.
Replies from: None
↑ comment by [deleted] · 2010-12-09T19:59:01.198Z · LW(p) · GW(p)
In fact, when the possibilities for prediction are relatively constrained, but there are a lot of people making predictions, and the system is complicated enough that you can't expect most people to be mostly right, some people will be consistently wrong by chance alone.
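A back-of-the-envelope sketch, with the pundit and prediction counts assumed for illustration: if predictions are fair coin flips, the chance a given pundit misses all k binary calls is (1/2)^k, so among enough pundits a few perfect losing streaks appear by luck alone.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pundits, n_predictions = 10_000, 10   # assumed sizes, purely for illustration

# Every pundit flips a fair coin on each binary prediction.
hits = rng.random((n_pundits, n_predictions)) < 0.5
all_wrong = int((hits.sum(axis=1) == 0).sum())

expected = n_pundits * 0.5 ** n_predictions
print(f"pundits wrong on all {n_predictions} calls: {all_wrong} "
      f"(expected about {expected:.1f})")
```

With 10,000 pundits and 10 binary calls each, roughly ten of them go 0-for-10 purely by chance, and nothing about their past streak predicts their next call.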
comment by DanielLC · 2010-12-10T22:50:00.953Z · LW(p) · GW(p)
I've read that actively managed mutual funds perform, on average, slightly worse than random chance would predict.
Replies from: JGWeissman
↑ comment by JGWeissman · 2010-12-10T23:02:32.734Z · LW(p) · GW(p)
This could result from adding management fees to "as bad as random" performance.
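A minimal sketch of that arithmetic, with the market return and fee assumed for illustration: if gross returns merely scatter around the market average, subtracting the expense ratio leaves the average fund underperforming by exactly the fee.

```python
import numpy as np

rng = np.random.default_rng(0)
market_return = 0.07    # assumed average annual market return
expense_ratio = 0.01    # assumed annual management fee

# Gross fund returns scatter symmetrically around the market
# ("as bad as random" stock-picking)...
gross = market_return + rng.normal(0.0, 0.05, size=10_000)
net = gross - expense_ratio   # ...but every fund pays its fee.

print(f"mean gross return relative to market: {gross.mean() - market_return:+.4f}")
print(f"mean net return relative to market:   {net.mean() - market_return:+.4f}")
```

The gross excess return averages out to about zero, and the net excess return to about -1%: the shortfall is just the fee.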
Replies from: DanielLC, Vaniver