Reversed stupidity sometimes provides useful information
post by Scott Alexander (Yvain) · 2011-09-28T10:28:37.513Z
In his recent Cato article, Reversed Stupidity Is Not Intelligence, Eliezer writes:
To psychoanalyze these people’s flaws, even correctly, and even if they constitute a numerical majority of the people talking about “quantum,” says nothing at all about whether the smartest people who believe in “quantum” might perhaps be justified in doing so ... there are large numbers of embarrassing people who believe in flying saucers, but this cannot possibly be Bayesian evidence against the presence of aliens, unless you believe that aliens would suppress flying-saucer cults, so that we are less likely to see flying-saucer cults if aliens exist than if they do not exist. So even if you have truly and correctly identified a cluster of people who believe X for very bad, no good, awful, non-virtuous reasons, one does not properly conclude not-X, but rather calls it all not-evidence.
I think the statement makes a correct point - don't dismiss an idea just because a few proponents are stupid - but is too strong as written. In some cases, we can derive information about the truth of a proposition by psychoanalyzing reasons for believing it.
There are certain propositions that people are likely to assert regardless of whether or not they are true. Maybe they're useful for status disputes, or part of a community membership test, or just synchronize well with particular human biases. "X proves the existence of God" commonly gets asserted whether or not X actually proves the existence of God. Anything that supports one race, gender, political party, or ideology over another is also suspect. Let's call these sorts of propositions "popular claims". Some true propositions might be popular claims, but popular claims are popular whether or not they are true.
Some popular claims are surprising. Without knowing anything about modern society, one might not predict that diluting chemicals thousands of times to cure diseases, or claiming the government is hiding alien bodies, would be common failure modes. You don't know these are popular claims until you hear them.
If a very large group of people make a certain assertion, and you always find it to be false, you now have very good evidence that it's a popular claim, a proposition that people will very often assert even if it's false.
Normally, when someone asserts a proposition, you assume they have good evidence for it - in Bayesian terms, the probability that they would assert it is higher if there is evidence than if there is not evidence. But people assert popular claims very often even when there is no evidence for them, so someone asserting a popular claim provides no (or little) evidence for it, leaving you back at whatever its prior is.
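To put rough numbers on this (a minimal sketch; the probabilities below are invented purely for illustration), an assertion only lifts you above the prior to the extent that people are more likely to assert the claim when it is true than when it is false:

```python
# Toy Bayesian update on "someone asserted the claim" (all numbers illustrative).

def posterior(prior, p_assert_if_true, p_assert_if_false):
    """P(claim is true | someone asserts it), by Bayes' rule."""
    p_assert = prior * p_assert_if_true + (1 - prior) * p_assert_if_false
    return prior * p_assert_if_true / p_assert

prior = 0.01

# An ordinary claim: people rarely assert it without evidence,
# so hearing it asserted moves you well above the prior.
print(posterior(prior, p_assert_if_true=0.5, p_assert_if_false=0.005))  # ~0.50

# A "popular claim": people assert it nearly as often when it is false,
# so hearing it asserted barely moves you at all.
print(posterior(prior, p_assert_if_true=0.5, p_assert_if_false=0.4))    # ~0.012
```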
Time for an example: suppose two respected archaeologists (who happen to be Mormon) publish two papers on the same day. The first archaeologist claims to have found evidence that Native Americans are descended from ancient Israelites. The second archaeologist claims to have found evidence that Zulus are descended from Australian aborigines.
On the face of it, these two claims are about equally crazy-sounding. But I would be much more likely to pay attention to the claim that the Zulus are descended from aborigines. I know that the Mormons have a bias in favor of believing Indians are descended from Israelites, and probably whatever new evidence the archaeologist thinks she's found was just motivated by this same bias. But no one believes Zulus are descended from Australians. If someone claims they are, she must have some new and interesting reason to think so.
(To put it another way, we expect a Mormon to privilege the hypothesis of Israelite descent; her religion has already picked it out of hypothesis-space. We don't expect a Mormon to privilege the hypothesis of Australian descent, so it's more likely that she came to it honestly.)
If I were then to learn that there was a large community of Mormons who interpreted their scripture to say that Zulus were descended from Australians, I would consider it much more likely that the second archaeologist was also just parroting a religious bias, and I would no longer be quite as interested in reading her paper.
In this case, reversed stupidity is intelligence - learning that many people believed in an Australian-Zulu connection for religious reasons decreases my probability that the posited Australian-Zulu connection is real. I can never go lower than whatever my prior for an Australian-Zulu connection would be, but I can discount a lot of the evidence that might otherwise take me above my prior.
So in summary, a proposition asserted for stupid reasons can raise your probability that it is the sort of proposition that people assert for stupid reasons, which in turn can lower your probability that the next person to assert it will have smart reasons for doing so. Reversed stupidity can never bring the probability of an idea lower than its prior, but it can help you discount evidence that would otherwise bring it higher.
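A toy calculation (again with invented numbers) makes the floor-at-the-prior point concrete: as the likelihood ratio P(assert | true) / P(assert | false) falls toward 1, the evidential weight of an assertion shrinks to nothing, but the posterior never drops below the prior unless assertion is actually anticorrelated with truth:

```python
# Sketch: posterior after hearing an assertion, as a function of how much more
# often the claim is asserted when true than when false (illustrative numbers).

def posterior(prior, likelihood_ratio):
    """P(true | asserted), with likelihood_ratio = P(assert | true) / P(assert | false)."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

prior = 0.01
for lr in (100, 10, 2, 1.1, 1.0):        # 1.0 = the assertion carries no information
    print(lr, round(posterior(prior, lr), 4))
# 100 -> 0.5025, 10 -> 0.0917, 2 -> 0.0198, 1.1 -> 0.011, 1.0 -> 0.01 (back at the prior)
```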
13 comments
Comments sorted by top scores.
comment by JoshuaZ · 2011-09-28T14:02:14.597Z · LW(p) · GW(p)
This doesn't seem to be reversing stupidity so much as taking into account potential biases.
There is, however, a very closely related idea which might have more validity: if a lot of people have a lot of motivation for finding evidence for a claim and they've found very little, then I should conclude that the evidence probably doesn't exist. The analogy here would be that if I had never heard of the Australia-Zulu claim and didn't think any group had a reason to believe it, I wouldn't assign it as low a probability as I would for the claim where people have spent years trying to find every scrap of evidence that could possibly support the position.
comment by Multipartite · 2011-09-28T22:56:19.881Z · LW(p) · GW(p)
Note that these people believing this thing to be true does not in fact make it any likelier to be false. We raise our estimate of its truth by less than we would for a generic claim from a generic person, down to the point of no suspicion one way or the other, but the claim is not thereby reversed into a positive impression that it is false.
If one takes two otherwise-identical worlds (unlikely, I grant), one in which a large body of people X posit Y for (patently?) fallacious reasons and one in which that large body of people posit the completely-unrelated Z instead, then it seems that a rational (?) individual in both worlds should have roughly the same impression of whether Y is true or false, rather than in the one world believing Y to be very likely to be false.
One may not give the stupid significant credence when they claim it to be day, but one still doesn't believe it any more likely to be night (than one did before).
((As likely noted elsewhere, the bias-acknowledgement situation results in humans being variably treated as more stupid and less stupid depending on local topic of conversation, due to blind spot specificity.))
comment by Jayson_Virissimo · 2011-09-28T11:44:56.893Z · LW(p) · GW(p)
Saying that X asserting A provides very weak evidence for A should not be confused with saying Y asserting B provides evidence for not-B. One claim is about magnitude while the other is about sign. In most situations, the latter commits one to violating conservation of evidence, but the former does not.
Replies from: ciphergoth, Jack
↑ comment by Paul Crowley (ciphergoth) · 2011-09-28T17:34:05.218Z · LW(p) · GW(p)
No: if you first learn that X is being claimed, and then learn that many people claim it for stupid reasons, then after learning the second fact your confidence in X goes down, as in this example:
learning that many people believed in an Australian-Zulu connection for religious reasons decreases my probability that the posited Australian-Zulu connection is real.
There's no violation of conservation of evidence: not learning today that a particular religious tribe believes in an Australian-Zulu connection increases my confidence in the proposition by a tiny amount, just as Barack Obama not walking through the door right now decreases my confidence in his existence.
Replies from: Vladimir_Nesov
↑ comment by Vladimir_Nesov · 2011-09-28T20:27:31.704Z · LW(p) · GW(p)
Barack Obama not walking through the door right now decreases my confidence in his existence.
He's insanely unlikely to walk through my door, but if after all the evidence he manages to not actually exist, the world must be working in a very strange way, in which case it might be the case that his probability of (apparently) walking through my door is greater. Am I missing a simple argument?
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2011-09-29T07:55:12.969Z · LW(p) · GW(p)
You're making a case that I should have less confidence in Obama's existence as a result of his walking through the door? I can see where you're going with it :-)
↑ comment by Jack · 2011-09-28T17:41:41.158Z · LW(p) · GW(p)
There are circumstances where X asserting A provides evidence against A (for not-A). Some speakers are less reliable than others and there is no necessary reason a speaker can't be so unreliable that her claims provide zero evidence for or against what she asserts. Moreover, there is no reason a speaker can't be anti-reliable. Perhaps she is a pathological liar. In this case her statements are inversely correlated with the truth and an assertion of A should be taken as evidence for not-A. As long as your math is right there is no reason for this to violate conservation of evidence.
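A quick numeric sketch of that last point (numbers invented for illustration): with an anti-reliable speaker, hearing her assert A pushes you toward not-A, yet the posterior averaged over what she might say still equals the prior, so conservation of expected evidence is respected.

```python
# Illustrative check that an anti-reliable speaker is consistent with
# conservation of expected evidence (all numbers made up).

prior = 0.3                # P(A)
p_say_A_if_A = 0.1         # the liar rarely asserts A when A is true
p_say_A_if_not_A = 0.9     # and usually asserts A when A is false

p_say_A = prior * p_say_A_if_A + (1 - prior) * p_say_A_if_not_A
post_if_say_A = prior * p_say_A_if_A / p_say_A                   # ~0.045: evidence for not-A
post_if_say_not_A = prior * (1 - p_say_A_if_A) / (1 - p_say_A)   # ~0.794

expected_posterior = p_say_A * post_if_say_A + (1 - p_say_A) * post_if_say_not_A
print(post_if_say_A, post_if_say_not_A, expected_posterior)      # expected_posterior == 0.3, the prior
```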
comment by Jack · 2011-09-28T17:33:02.637Z · LW(p) · GW(p)
In this case the fact that many people believe in a Native American-Israelite connection is not evidence against a Native American-Israelite connection, just evidence against the claims of Mormon archaeologists on the subject. You might update similarly if you learned that particular archaeologist was incompetent or dishonest. The causal connection between the fact and the evidence has been undermined. But I think this is more a matter of formalizing how we represent evidence; I doubt there is a substantive disagreement over what heuristics are helpful. Semantics, I think.
Replies from: Yvain
↑ comment by Scott Alexander (Yvain) · 2011-09-28T19:23:06.012Z · LW(p) · GW(p)
I agree. Although Mormonism may not be a good example because only certain people are Mormons. If we use an example that potentially casts doubt on all humans (for example, conspiracy theories), then it not only means that all previous evidence you've heard from others becomes less believable, but even that your own calculations on the subject are now suspect since you are probably subject to the same bias as everyone else.
comment by lessdazed · 2011-09-28T16:39:35.617Z · LW(p) · GW(p)
Consider the following books: WWII for Dummies, Rise and Fall of the Third Reich, War as I Knew It, With the Old Breed: At Peleliu and Okinawa, HE WAS MY CHIEF: The Memoirs of Adolf Hitler's Secretary, and The Rising Sun: The Decline and Fall of the Japanese Empire. Which are not primary sources?
Anyone have a response?
Replies from: wedrifid, Nornagest
↑ comment by wedrifid · 2011-09-29T01:41:28.041Z · LW(p) · GW(p)
Which are not primary sources?
Anyone have a response?
Yes. Context matters. The meaning conveyed here is "Which are not primary sources about the war or historical event that they are describing?" Anyone answering under that assumption (with the right relative answers) is correct. They get bonus points if they subtly disambiguate the question by including a gratuitous "are primary sources about X" so that their words are literally correct no matter how poorly you are communicating.
↑ comment by Nornagest · 2011-09-28T20:22:07.972Z · LW(p) · GW(p)
Which are not primary sources?
WWII for Dummies, Rise and Fall of the Third Reich, and The Rising Sun, I believe (War As I Knew It is the Patton book, right?), although I'm not sure where you're going with this. My best guess is that it's a demonstration that primary sources are subject to a more varied and often stronger set of biases than secondary or tertiary sources, and that that should be kept in mind when interpreting them; is that about right?
Replies from: lessdazed
↑ comment by lessdazed · 2011-09-28T20:59:01.631Z · LW(p) · GW(p)
I'm not sure where you're going with this
It was a trick question (sorry!) - all media are primary sources. From the Wikipedia article:
"Primary" and "secondary" are relative terms, with sources judged primary or secondary according to specific historical contexts and what is being studied.
...
For example, encyclopedias are generally considered tertiary sources, but Pliny's Naturalis Historia, originally published in the 1st century, is a primary source for information about the Roman era.
Not all of the books I mentioned are primary sources about WWII - the ones you mentioned are primary sources in other subjects. For example, WWII for Dummies is a primary source for a study of the For Dummies series, and Rise and Fall of the Third Reich is a primary source for how history was generally written in the second half of the twentieth century (e.g., from the perspective of nations more so than of a random person).
So where I'm going with this is to say that reversed stupidity isn't intelligence, but information about stupidity is still information about a topic, just as legitimate information about a topic remains legitimate information about that topic even if others are stupid about it. Knowledge of stupidity is a type of information no different from any other; this is how it indirectly affects knowledge of the topic the stupid are stupid about.