Can Social Dynamics Explain Conjunction Fallacy Experimental Results?
post by curi · 2020-08-05T08:50:05.855Z · 6 comments

This is a question post.
Is there any conjunction fallacy research which addresses the alternative hypothesis that the observed results are mainly due to social dynamics?
Most people spend most of their time thinking in terms of gaining or losing social status, not in terms of reason. They care more about their place in social status hierarchies than about logic. They have strategies for dealing with communication that have more to do with getting along with people than with getting questions technically right. They look for the social meaning in communications. E.g., people normally try to give, and expect to receive, useful, relevant, reasonable info about which it's safe to make socially normal assumptions.
Suppose you knew Linda in college. A decade later, you run into another college friend, John, who still knows Linda. You ask what she's up to. John says Linda is a bank teller, doesn't give additional info, and changes the subject. You take this to mean that there isn't more positive info. You and John both see activism positively and know that activism was one of the main ways Linda stood out. This conversation suggests to you that she stopped doing activism. Omitting info isn't neutral in real-world conversations. People mentally model the people they speak with and consider why the person said what they said and omitted what they omitted.
In Bayesian terms, you got two pieces of info from John's statement. Roughly: 1) Linda is a bank teller. 2) John considered "Linda is a bank teller" the key info to provide and chose not to provide other info. That second piece of info can affect people's answers in psychology research.
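Here's a minimal numeric sketch of that second update. The prior and the likelihoods below are hypothetical numbers chosen for illustration, not from any study:

```python
# Toy Bayesian update on John's report. All numbers are hypothetical.

prior_activist = 0.60  # your prior that Linda is still an activist

# How likely is John's behavior (says "bank teller", adds nothing,
# changes the subject) under each hypothesis? A cooperative speaker
# who knew about ongoing activism would probably have mentioned it.
p_report_given_activist = 0.10
p_report_given_not_activist = 0.70

# Bayes' theorem: P(activist | report)
posterior = (p_report_given_activist * prior_activist) / (
    p_report_given_activist * prior_activist
    + p_report_given_not_activist * (1 - prior_activist)
)
print(f"P(still an activist | John's report) = {posterior:.2f}")  # ~0.18
```

The literal content ("bank teller") says nothing about activism, yet the report still drags the activism estimate down, because what John chose to say and omit is itself evidence.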
So, is there any research which rules out social dynamics explanations for conjunction fallacy experimental results?
Answers
answer by OwenBiesel

There's some recent research by Kevin Dorst and Matthew Mandelkern into the idea that people fall for the conjunction fallacy because they are so often trying to strike a balance between being correct and being informative.
Roughly, the idea is that guessing "Linda is an activist and a bank teller" is so informative that it's sometimes preferable as a guess to just guessing that Linda is a bank teller. Giving guesses that are not just true but informative is such an ingrained habit that it's hard to stop and select the option most likely to be true.
You can download their paper here: https://philpapers.org/rec/DORGG or read a blog post by Kevin Dorst here: https://www.kevindorst.com/stranger_apologies/the-conjunction-fallacy-take-a-guess
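Here's a rough toy model of that tradeoff. This is an illustrative scoring rule, not Dorst and Mandelkern's actual formalism; the probabilities and the weight `beta` are made-up assumptions:

```python
# Toy accuracy-vs-informativeness tradeoff for guesses.
# Hypothetical numbers; NOT Dorst & Mandelkern's actual model.

def guess_score(prob: float, n_claims: int, beta: float) -> float:
    """Score = probability of being right, plus a bonus for how much
    the guess asserts (crudely: the number of claims it makes)."""
    return prob + beta * n_claims

# Hypothetical probabilities given Linda's description:
p_teller = 0.05                      # "Linda is a bank teller"
p_activist_given_teller = 0.90       # the description makes activism seem likely
p_both = p_teller * p_activist_given_teller  # 0.045: strictly less probable

for beta in (0.0, 0.01, 0.05):
    single = guess_score(p_teller, n_claims=1, beta=beta)
    conj = guess_score(p_both, n_claims=2, beta=beta)
    winner = "conjunction" if conj > single else "single conjunct"
    print(f"beta={beta:.2f}: teller={single:.3f}, "
          f"teller+activist={conj:.3f} -> better guess: {winner}")
```

With zero weight on informativeness the single conjunct wins, as probability theory demands; with even a small weight, the strictly less probable conjunction becomes the better guess. That's the habit the paper argues carries over into probability judgments.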
answer by Unnamed

The social dynamics that you point to in your John-Linda anecdote seem to depend on the fact that John knows what happened with Linda. This suggests that these social dynamics would not apply to questions about the future, where the question was coming from someone who couldn't know what was going to happen.
Some studies have looked for the conjunction fallacy in predictions about the future, and they've found it there too. One example, which was mentioned in the post you linked, is the forecast about a breakdown of US-Soviet relations. Here's a more detailed description of the study from an earlier post in that sequence:
Another experiment from Tversky and Kahneman (1983) was conducted at the Second International Congress on Forecasting in July of 1982. The experimental subjects were 115 professional analysts, employed by industry, universities, or research institutes. Two different experimental groups were respectively asked to rate the probability of two different statements, each group seeing only one statement:
1. "A complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983."
2. "A Russian invasion of Poland, and a complete suspension of diplomatic relations between the USA and the Soviet Union, sometime in 1983."
Estimates of probability were low for both statements, but significantly lower for the first group than the second (p < .01 by Mann-Whitney). Since each experimental group only saw one statement, there is no possibility that the first group interpreted (1) to mean "suspension but no invasion".
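For readers unfamiliar with the test, here's a sketch of how such a between-groups comparison is run, using scipy's Mann-Whitney U implementation. The probability ratings below are invented placeholders, not the study's data:

```python
# Sketch of the between-groups comparison reported above.
# The ratings are made-up placeholders, NOT the study's data.
from scipy.stats import mannwhitneyu

# Group 1 rated statement (1) (suspension only); group 2 rated statement (2)
# (invasion AND suspension). Hypothetically, group 1's ratings run lower.
group1_ratings = [0.01, 0.02, 0.01, 0.03, 0.02, 0.01, 0.02, 0.02]
group2_ratings = [0.04, 0.06, 0.03, 0.08, 0.05, 0.04, 0.06, 0.05]

stat, p = mannwhitneyu(group1_ratings, group2_ratings, alternative="less")
print(f"U = {stat}, p = {p:.4f}")  # a small p would mirror the reported p < .01
```

Mann-Whitney is a rank-based test, which suits this design: it compares the two independent groups without assuming the probability ratings are normally distributed.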
6 comments
comment by Gordon Seidoh Worley (gworley) · 2020-08-05T17:48:35.616Z
I don't know of any research to point you to, but I just wanted to say I think you're right that we have reason to be suspicious of the normative correctness of many irrationality results. It's not that people aren't ever "irrational" in various ways, but that sometimes what looks from the outside like irrationality is in fact a failure to isolate the question from its context, which humans not trained in that skill can't do well.
I seem to recall a post here a while back making the point that some people are strong contextualizers on tasks like this: you basically can't get them to give the "rational" answer because they won't or can't treat the task like one of mathematical variables, where the content is irrelevant to the operation. It's related to the ideas shared in this post.
comment by curi · 2020-08-05T19:01:48.144Z
Yeah, (poor) context isolation is a recurring theme I've observed in my discussions and debates. Here's a typical scenario:
There's an original topic, X. Then we talk back and forth about it for a bit: C1, D1, C2, D2, C3, D3, C4, D4. The C messages are me and D is the other guy.
Then I write a reply, C5, about a specific detail in D4. Often I quote the exact thing I'm replying to or explain what I'm doing (e.g. a statement like "I disagree with A because B", where A was something said in D4).
Then the person writes a reply (more of a non sequitur from my pov) about X.
People routinely try to jump the conversation back to the original context/topic. And they make ongoing attempts to interpret things I say in relation to X. Whatever I say, they often try to jump to conclusions about my position on X from it.
I find it very hard to get people to stop doing this. I've had little success even with explicit topic shifts like "I think you're making a discussion methodology mistake, and talking about X won't be productive until we get on the same page about how to discuss."
Another example of poor context isolation is when I give a toy example that'd be trivial to replace with a different toy example, but they start getting hung up on specific details of the example chosen. Sometimes I make the example intentionally unrealistic and simple because I want it to clearly be a toy example and I want to get rid of lots of typical context, but then they get hung up specifically on how unrealistic it is.
Another common example is when I compare X and Y regarding trait Z, and people get hung up because of how X and Y compare in general. Me: X and Y are the same re Z. Them: X and Y aren't similar!
I think Question-Ignoring Discussion Pattern is related, too. It's a recurring pattern where people don't give direct responses to the thing one just said.
And thanks for the link. It makes sense to me, and I think social dynamics ideas are among those most often coupled/contextualized. I think it's really important to be capable of thinking about things from multiple perspectives/frameworks, but most people really just have the one way of thinking (and have enough trouble with that), and for most people that one way has a lot of social norms built into it, because they live in society. You need two or more thinking modes in order for it to make sense to have one without social norms; otherwise you don't have a way to get along with people. Some people compromise and build fewer social norms into their way of thinking because that's easier than learning multiple separate ways to think.
comment by Matt Goldenberg (mr-hire) · 2020-08-05T23:21:28.535Z
This feels highly related to Simulacra levels.
If it's merely about me preferring "contextualizing norms", then I should be able to recognize, in the context of a scientific study, that the context is such that I can basically just tell the truth.
However, if I've gotten to a point where I literally can't separate out social signalling from truth signalling (Simulacra level 3), then you'd expect a result like you see here.
comment by Gordon Seidoh Worley (gworley) · 2020-08-06T00:29:29.288Z
This feels highly related to Simulacra levels.
I thought about linking that, but decided against it. I feel like that post is mostly about rationalists getting confused by contextualization and needing a guide to it, especially confusion about how people who care more about social reality than "physical" reality pay attention to how others will interpret words rather than to what the words nominally mean. It's less about what it means to think in a highly contextualized way. So it's somewhat adjacent, a causal sibling of the phenomenon being asked about in this post.
If it's merely about me preferring "contextualizing norms", then I should be able to recognize, in the context of a scientific study, that the context is such that I can basically just tell the truth.
Maybe it's just your phrasing, but I feel like this is subtly missing what it means to contextualize by supposing you can create a context where something can be left out, like saying "let me create a new set of everything that doesn't include everything."
I'm confused by what you mean when you say "just tell the truth". The only interpretation that comes to mind is one where you mean something like: the contextualized perspective is not capable of saying anything true. That seems insufficiently charitable.
I think contextualization allows something like understanding how the study intends for me to respond and using that to guess the teacher's password, rather than falling for what I would consider the epistemic trap of thinking the study's isolating perspective is the "real" one. Maybe that's what you meant?
comment by Matt Goldenberg (mr-hire) · 2020-08-06T00:49:21.319Z
Maybe it's just your phrasing, but I feel like this is subtly missing what it means to contextualize by supposing you can create a context where something can be left out, like saying "let me create a new set of everything that doesn't include everything."

I'm confused by what you mean when you say "just tell the truth". The only interpretation that comes to mind is one where you mean something like: the contextualized perspective is not capable of saying anything true. That seems insufficiently charitable.

I think contextualization allows something like understanding how the study intends for me to respond and using that to guess the teacher's password, rather than falling for what I would consider the epistemic trap of thinking the study's isolating perspective is the "real" one. Maybe that's what you meant?
I think a proper contextualizing perspective would recognize that the study's isolated perspective is indeed one of the most relevant perspectives when in the study. If I'm tracking what people will think of me when in fact what I do during the study won't get back to people I care about at all, I'm not properly tracking context; instead, I've internalized tribal perspectives so much that I can't actually separate them from the real context.
To me this is what separates Simulacra levels from contextualizing.
comment by curi · 2020-08-06T21:50:13.330Z
People were interviewed after the research and asked to explain their answers. There were social feedback mechanisms. Even if there wasn't peer-to-peer social feedback, it was certainly possible to annoy the authority (the researchers) giving you the questions, like annoying a teacher who gives you a test. The researchers want you to answer a particular way, so people reasonably guess what that is, even if they don't already have that way highly internalized (as most people do).
This is how people have learned to deal with questions in general. And people are correct to be very wary of guessing "it's safe to be literal now." Often when it looks safe, it isn't, so people come to the reasonable rule of thumb that it's never safe. They basically decide (though not as a conscious decision) that maintaining a literalist personality to be used very rarely, when it's hard even to identify safe times to use it, is not worth the cost. People have near-zero experience in situations where being hyper-literal (or whatever you want to call it) won't be punished. Those scenarios barely exist; even science, academia, and Less Wrong mostly aren't like that.
More on this in my followup post: Asch Conformity Could Explain the Conjunction Fallacy