Sufficiently sincere confirmation bias is indistinguishable from science
post by Benquo
Some theater people at NYU wanted to demonstrate how gender stereotypes affected the 2016 US presidential election. So they decided to put on a theatrical performance of the presidential debates – but with the genders of the principals swapped. They assumed that this would show how much of a disadvantage Hillary Clinton was working under because of her gender. They were shocked to discover the opposite – audiences full of Clinton supporters, watching the gender-swapped debates, came away thinking that Trump was a better communicator than they'd thought.
The principals don't seem to have come into this with a fair-minded attitude. Instead, it seems to have been a case of "I'll show them!":
Salvatore says he and Guadalupe began the project assuming that the gender inversion would confirm what they’d each suspected watching the real-life debates: that Trump’s aggression—his tendency to interrupt and attack—would never be tolerated in a woman, and that Clinton’s competence and preparedness would seem even more convincing coming from a man.
Let's be clear about this. This was not epistemic even-handedness. This was a sincere attempt at confirmation bias. They believed one thing, and looked only for confirming evidence to prove their point. It was only when they started actually putting together the experiment that they realized they might learn the opposite lesson:
But the lessons about gender that emerged in rehearsal turned out to be much less tidy. What was Jonathan Gordon smiling about all the time? And didn’t he seem a little stiff, tethered to rehearsed statements at the podium, while Brenda King, plainspoken and confident, freely roamed the stage? Which one would audiences find more likeable?
What made this work? I think what happened is that they took their own beliefs literally. They actually believed that people hated Hillary because she was a woman, so the demonstration they designed, confident it would show this clearly, was in fact a fair test. Because of this, when things came out the opposite of the way they'd predicted, they noticed and were surprised, because they had actually expected the demonstration to work.
But they went further. Even though they knew in advance of the public performances that the experiment got the wrong answer, they neither falsified nor file-drawered the evidence. They tried to show, they got a different answer, they showed it anyway.
This is much, much better science than contemporary medical or psychology research was before the replication crisis.
Sometimes, when I think about how epistemically corrupt our culture is, I'm tempted to adopt a permanent defensive crouch: disbelieve anything I can't fact-check, explicitly adjust for every relevant bias. That prospect sounds exhausting. It's also not actually necessary. You don't have to worry too much about your biases. Just take your own beliefs literally, as though they mean what they say they mean, and try to believe all their consequences as well. And when you hit a contradiction – well, now you have an opportunity to learn where you're wrong.
(Cross-posted at my personal blog.)
Comments sorted by top scores.
comment by Lumifer ·
2017-03-15T15:51:41.328Z · LW(p) · GW(p)
Even though they knew in advance of the public performances that the experiment got the wrong answer, they neither falsified nor file-drawered the evidence.
Well, they're not scientists, they're entertainers. Their goal was a successful performance (measured, I assume, by such things as audience engagement, media reaction, ticket sales, etc.). Gender-swapped debates were just a good idea, and if there was any science there (I'm doubtful: a single data point is not meaningful), it was very much secondary.
comment by HungryHippo ·
2017-03-16T13:50:47.298Z · LW(p) · GW(p)
They tried to show, they got a different answer, they showed it anyway.
This is very admirable! Especially on such a politically charged topic.
comment by MrMind ·
2017-03-15T14:06:53.212Z · LW(p) · GW(p)
"Strong opinion loosely held".
I didn't know about the experiment, I'm glad to hear that they decided to show it anyway.
comment by Vaniver ·
2017-03-17T15:41:45.868Z · LW(p) · GW(p)
While "strong opinion weakly held" is more traditional and widespread, I prefer replacing "strong" with "clear," since it points more crisply at the relevant feature.
comment by Douglas_Knight ·
2017-03-19T16:13:24.366Z · LW(p) · GW(p)
Is that the relevant feature? If we disagree about that, then this phrase is not itself a clear opinion. The original version was extreme, which is how I always understood it.
What do you mean by "clear"? As I understand it, the point is to be clear about some thing and not clear about others — in particular, not to clearly state all disclaimers.
Added: the modern origin gives an explanation that might fit the word zealous. He emphasizes the energy and commitment necessary to test the claims.
comment by SnowSage4444 ·
2017-03-17T16:39:56.326Z · LW(p) · GW(p)
This is what happens when you believe feminism.
Still, I must give the "Scientists" credit for actually admitting when they were wrong. Of course, if they didn't, audiences would have said it for them.
comment by jmh ·
2017-03-15T16:19:09.183Z · LW(p) · GW(p)
I liked one of the comments (well, I liked them all, but this one made me think about the comment I'm making) – David_C._Brayton's, about the difference in views between engineers and theorists. I cannot help but wonder if there's a difference in behavior between wanting to test the theory versus wanting to apply the theory, in terms of one's confirmation biases and ability to step beyond them.
comment by lifelonglearner ·
2017-03-15T14:24:13.598Z · LW(p) · GW(p)
The title says that sufficiently sincere confirmation bias is indistinguishable from real science. But I don't see how this differs too much from real science (the attitude of the NYU people versus that of scientists).
What made this work? I think what happened is that they took their own beliefs literally. They actually believed that people hated Hillary because she was a woman, and so their idea of something that they were confident would show this clearly was a fair test.
I'm a little confused. Isn't this just saying that these people held real beliefs, rather than, say, belief-in-belief? So when contrary evidence appeared, they were able to change their mind?
I dunno; I'm not super convinced that it's confirmation bias which causes this sort of good epistemic behavior. (As in, I wouldn't expect this sort of thing to happen much in this sort of situation – maybe this case is unique?)
comment by Benquo ·
2017-03-15T20:14:36.259Z · LW(p) · GW(p)
It's sincerity that causes this sort of behavior.
comment by lifelonglearner ·
2017-03-16T00:57:39.156Z · LW(p) · GW(p)
I'm unsure I have a good internal picture of what sincerity is pointing at. Does being sincere differ much from "truly, actually, super-duper, very much so" believing in something?
comment by Benquo ·
2017-03-16T02:22:11.502Z · LW(p) · GW(p)
I think I mean the same thing you mean by "real beliefs, rather than, say, belief-in-belief". So, I'm saying, it's not confirmation bias that causes the good thing, it's sincerity that makes the confirmation bias comparatively harmless.
comment by WalterL ·
2017-03-15T19:52:05.122Z · LW(p) · GW(p)
Real belief is actually moderately rare. People don't generally believe stuff anymore that they might get laughed at about. Find one person who believes something they didn't read on wikipedia and it's a weird week.
comment by lifelonglearner ·
2017-03-16T00:59:38.596Z · LW(p) · GW(p)
I grant that most people may not hold too many real beliefs, in the normal sense of the word, but is this also generally true of scientists who are conducting studies? It feels like you'd need to believe that X was true in order to run a study in the first place, no?
Or are we assuming that most scientists are just running off weakly held beliefs and just "doing things"?
(I really don't know much about what the field might be like.)