Being Wrong Doesn't Mean You're Stupid and Bad (Probably)
post by Zack_M_Davis · 2019-06-29T23:58:09.105Z · LW · GW
Sometimes, people are reluctant to admit that they were wrong about something, because they're afraid that "You are wrong about this" carries inextricable connotations of "You are stupid and bad." But this behavior is, itself, wrong, for at least two reasons.
First, because it's evidential decision theory. The so-called "rationalist" "community" has a lot of cached [LW · GW] clichés about this! A blank map does not correspond to a blank territory. What's true is already so [LW · GW]; owning up to it doesn't make it worse. Refusing to go to the doctor (thereby avoiding encountering evidence that you're sick) doesn't keep you healthy.
If being wrong means that you're stupid and bad, then preventing yourself from knowing that you were wrong doesn't stop you from being stupid and bad in reality. It just prevents you from knowing that you're stupid and bad—which is an important fact to know (if it's true), because if you don't know that you're stupid and bad, then it probably won't occur to you to even look for possible interventions to make yourself less stupid and less bad.
Second, while "You are wrong about this" is evidence for the "You are stupid and bad" hypothesis if stupid and bad people are more likely to be wrong, I claim that it's very weak evidence. (Although it's possible that I'm wrong about this—and if I'm wrong, it's furthermore possible that the reason I'm wrong is because I'm stupid and bad.)
Exactly how weak evidence is it? It's hard to guess directly, but fortunately, we can use probability theory to reduce the claim into more "atomic" conditional and prior probabilities that might be easier to estimate!
Let W represent the proposition "You are wrong about something", S represent the proposition "You are stupid", and B represent the proposition "You are bad."
By Bayes's theorem, the probability that you are stupid and bad given that you're wrong about something is given by—

P(S∧B|W) = P(W|S∧B)·P(S∧B) / [P(W|S∧B)·P(S∧B) + P(W|¬(S∧B))·P(¬(S∧B))]
For the purposes of this calculation, let's assume that badness and stupidity are statistically independent. I doubt this is true in the real world, but because I'm stupid and bad (at math), I want that simplifying assumption to make the algebra easier for me. That lets us unpack the conjunctions, giving us—

P(S∧B|W) = P(W|S∧B)·P(S)·P(B) / [P(W|S∧B)·P(S)·P(B) + P(W|S∧¬B)·P(S)·P(¬B) + P(W|¬S∧B)·P(¬S)·P(B) + P(W|¬S∧¬B)·P(¬S)·P(¬B)]
This expression has six degrees of freedom: P(W|S∧B), P(W|S∧¬B), P(W|¬S∧B), P(W|¬S∧¬B), P(S), and P(B). Arguing about the values of these six individual parameters is probably more productive than arguing about the value of P(S∧B|W) directly!
Suppose half of all people are stupid (P(S) = 0.5), one-tenth of people are bad (P(B) = 0.1), and that most people are wrong, but that being stupid or bad each makes you somewhat more likely to be wrong, to the tune of P(W|S∧B) = 0.9, P(W|S∧¬B) = P(W|¬S∧B) = 0.85, and P(W|¬S∧¬B) = 0.8. So our posterior probability that someone is stupid and bad given that they were wrong once is

P(S∧B|W) = (0.9)(0.5)(0.1) / [(0.9)(0.5)(0.1) + (0.85)(0.5)(0.9) + (0.85)(0.5)(0.1) + (0.8)(0.5)(0.9)] = 0.045/0.83 ≈ 0.0542
But the base rate of being stupid and bad is (0.1)(0.5) = 0.05. Learning that someone was wrong only raised our probability that they are stupid and bad by 0.0042. That's a small number that you shouldn't worry about!
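In case you'd rather have a computer check my arithmetic (wise, given the stupid-and-bad-at-math thing), here's a minimal Python sketch that just plugs the six parameters above into the unpacked formula:

```python
# Plug the six parameters from the post into the unpacked form of
# Bayes's theorem and confirm the posterior and the size of the update.

p_s = 0.5  # P(S): prior probability of being stupid
p_b = 0.1  # P(B): prior probability of being bad

p_w_sb = 0.90     # P(W|S∧B)
p_w_s_nb = 0.85   # P(W|S∧¬B)
p_w_ns_b = 0.85   # P(W|¬S∧B)
p_w_ns_nb = 0.80  # P(W|¬S∧¬B)

numerator = p_w_sb * p_s * p_b
denominator = (
    p_w_sb * p_s * p_b
    + p_w_s_nb * p_s * (1 - p_b)
    + p_w_ns_b * (1 - p_s) * p_b
    + p_w_ns_nb * (1 - p_s) * (1 - p_b)
)

prior = p_s * p_b
posterior = numerator / denominator

print(f"prior     P(S∧B)   = {prior:.4f}")              # 0.0500
print(f"posterior P(S∧B|W) = {posterior:.4f}")          # 0.0542
print(f"size of the update = {posterior - prior:.4f}")  # 0.0042
```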
5 comments
comment by steven0461 · 2019-07-01T20:52:15.965Z · LW(p) · GW(p)
If you don't just learn what someone's opinion is, but also how they arrived at it and how confidently they hold it, that can be much stronger evidence that they're stupid and bad. Arguably over half the arguments one encounters in the wild could never be made in good faith.
comment by Raemon · 2019-06-30T02:43:32.063Z · LW(p) · GW(p)
One note is that the framing here looks potentially useful for addressing internalized shame over being wrong, but not anxiety over being socially punished (where none of the math applies, only what other people think the math is, or whatever process they actually use to form beliefs about wrong-bad-ness).
comment by Bendini (bendini) · 2019-06-30T01:08:56.667Z · LW(p) · GW(p)
This post relies on several assumptions that I believe are false:
1. The rationalist community has managed to avoid bringing in any outside cultural baggage, so that when someone admits they were wrong about something important (and isn't making a strategic disclosure), people will only raise their estimate of incompetence by a Bayesian 0.42%.
2. The base rate of being "stupid and bad" by rationalist standards is 5% or lower. (The sample has been selected for being better than average, but the implicit standards are much higher.)
3. When people say they are worried about being "wrong" and therefore "stupid" and "bad", they are referring to things with standard definitions that are precise enough to do math with.
4. The individuals you're attempting to reassure with this post get enough of a spotlight that their 1 instance of publicly being wrong is balanced by a *salient* memory of the 9 other times they were right.
5. Not being seen as "stupid and bad" in this community is sufficient for someone to get the things they want/avoid the things they don't want.
6. In situations where judgements must be made with limited information (e.g. job interviews), using a small sample of data is worse than defaulting to base rates. (Thought experiment: you're at a tech conference and looking for interesting people to talk to; do you bother approaching anyone wearing a suit on the chance that a few hackers like dressing up?)
↑ comment by Zack_M_Davis · 2019-07-24T14:43:29.644Z · LW(p) · GW(p)
> The rationalist community [...] rationalist standards [...] in this community
Uh, remind me why I'm supposed to care what some Bay Area robot cult thinks? (Although I heard there was an offshoot in Manchester [LW · GW] that might be performing better!) The scare quotes around "rationalist" "community" in the second paragraph are there for a reason.
The OP is a very narrowly focused post, trying to establish a single point (Being Wrong Doesn't Mean You're Stupid and Bad, Probably) by appealing to probability theory as normative reasoning (and some plausible assumptions). If you're worried about someone thinking you're stupid and bad because you were wrong, you should just show them this post, and if they care about probability theory as normative reasoning, then they'll realize that they were wrong and stop mistakenly thinking that you're stupid and bad. On the other hand, if the person you're trying to impress doesn't care about probability theory as normative reasoning, then they're stupid and bad, and you shouldn't care about impressing them.
> outside cultural baggage
Was there ever an "inside", really? I thought there was. I think I was wrong.
> people will only raise their estimate of incompetence by a Bayesian 0.42%.
But that's the correct update! People who update more or less than the Bayesian 0.42% are wrong! (Although that doesn't mean they're stupid or bad, obviously.)
> they are referring to things with standard definitions that are precise enough to do math with.
This is an isolated demand for rigor and I'm not going to fall for it. I shouldn't need to have a reduction of what brain computations correspond to people's concept of "stupid and bad" in order to write a post like this.
> using a small sample of data is worse than defaulting to base rates
What does this mean? If you have a small sample of data and you update on it the correct amount, you don't do worse than you would have without the data.
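To make "don't do worse" precise with the toy numbers from the OP: a predictor that Bayes-updates on one observation of W gets a (weakly) better expected log score on S∧B than one that always quotes the base rate. A minimal sketch (the helper names are my own, not from the thread):

```python
import math

# Compare the expected log loss (about the event S∧B) of a predictor
# that always quotes the base rate against one that Bayes-updates on a
# single observation of W. Updating by the correct amount can't do
# worse in expectation.

p_s, p_b = 0.5, 0.1
p_w = {(1, 1): 0.90, (1, 0): 0.85, (0, 1): 0.85, (0, 0): 0.80}  # P(W|s,b)

def cell_prob(s, b):
    # Prior probability of the (stupid?, bad?) cell, assuming independence.
    return (p_s if s else 1 - p_s) * (p_b if b else 1 - p_b)

def posterior_sb(w):
    # P(S∧B|W=w), by Bayes's theorem over all four cells.
    def lik(s, b):
        return p_w[(s, b)] if w else 1 - p_w[(s, b)]
    marginal = sum(lik(s, b) * cell_prob(s, b) for s in (0, 1) for b in (0, 1))
    return lik(1, 1) * cell_prob(1, 1) / marginal

def log_loss(pred, outcome):
    # Negative log probability assigned to what actually happened.
    return -math.log(pred if outcome else 1 - pred)

prior = p_s * p_b
loss_base = loss_bayes = 0.0
for s in (0, 1):
    for b in (0, 1):
        for w in (0, 1):
            weight = cell_prob(s, b) * (p_w[(s, b)] if w else 1 - p_w[(s, b)])
            outcome = s and b  # 1 only in the stupid-and-bad cell
            loss_base += weight * log_loss(prior, outcome)
            loss_bayes += weight * log_loss(posterior_sb(w), outcome)

print(f"always quote base rate: {loss_base:.6f}")
print(f"update on W by Bayes:   {loss_bayes:.6f}")  # strictly smaller here
```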
> you're at a tech conference and looking for interesting people to talk to; do you bother approaching anyone wearing a suit on the chance that a few hackers like dressing up?
Analyzing the signaling game governing how people choose to dress at tech conferences does look like a fun game-theory exercise; thanks for the suggestion! I don't have time for that now, though.
↑ comment by Pattern · 2019-07-04T16:58:54.624Z · LW(p) · GW(p)
> 3. When people say they are worried about being "wrong" and therefore "stupid" and "bad", they are referring to things with standard definitions that are precise enough to do math with.
I'd highlight the likelihood of conflicting definitions, precision or no precision.