Asch's Conformity Experiment
post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2007-12-26T07:03:13.000Z · LW · GW · Legacy · 67 comments
Solomon Asch, with experiments originally carried out in the 1950s and well-replicated since, highlighted a phenomenon now known as “conformity.” In the classic experiment, a subject sees a puzzle like the one in the nearby diagram: Which of the lines A, B, and C is the same size as the line X? Take a moment to determine your own answer . . .
The gotcha is that the subject is seated alongside a number of other people looking at the diagram—seemingly other subjects, actually confederates of the experimenter. The other “subjects” in the experiment, one after the other, say that line C seems to be the same size as X. The real subject is seated next-to-last. How many people, placed in this situation, would say “C”—giving an obviously incorrect answer that agrees with the unanimous answer of the other subjects? What do you think the percentage would be?
Three-quarters of the subjects in Asch’s experiment gave a “conforming” answer at least once. A third of the subjects conformed more than half the time.
Interviews after the experiment showed that while most subjects claimed to have not really believed their conforming answers, some said they’d really thought that the conforming option was the correct one.
Asch was disturbed by these results:1
That we have found the tendency to conformity in our society so strong . . . is a matter of concern. It raises questions about our ways of education and about the values that guide our conduct.
It is not a trivial question whether the subjects of Asch’s experiments behaved irrationally. Robert Aumann’s Agreement Theorem shows that honest Bayesians cannot agree to disagree—if they have common knowledge of their probability estimates, they have the same probability estimate. Aumann’s Agreement Theorem was proved more than twenty years after Asch’s experiments, but it only formalizes and strengthens an intuitively obvious point—other people’s beliefs are often legitimate evidence.
If you were looking at a diagram like the one above, but you knew for a fact that the other people in the experiment were honest and seeing the same diagram as you, and three other people said that C was the same size as X, then what are the odds that only you are the one who’s right? I lay claim to no advantage of visual reasoning—I don’t think I’m better than an average human at judging whether two lines are the same size. In terms of individual rationality, I hope I would notice my own severe confusion and then assign >50% probability to the majority vote.
In terms of group rationality, it seems to me that the proper thing for an honest rationalist to say is, “How surprising, it looks to me like B is the same size as X. But if we’re all looking at the same diagram and reporting honestly, I have no reason to believe that my assessment is better than yours.” The last sentence is important—it’s a much weaker claim of disagreement than, “Oh, I see the optical illusion—I understand why you think it’s C, of course, but the real answer is B.”
So the conforming subjects in these experiments are not automatically convicted of irrationality, based on what I’ve described so far. But as you might expect, the devil is in the details of the experimental results. According to a meta-analysis of over a hundred replications by Smith and Bond . . . 2
. . . Conformity increases strongly up to 3 confederates, but doesn’t increase further up to 10–15 confederates. If people are conforming rationally, then the opinion of 15 other subjects should be substantially stronger evidence than the opinion of 3 other subjects.
Adding a single dissenter—just one other person who gives the correct answer, or even an incorrect answer that’s different from the group’s incorrect answer—reduces conformity very sharply, down to 5–10% of subjects. If you’re applying some intuitive version of Aumann’s Agreement to think that when 1 person disagrees with 3 people, the 3 are probably right, then in most cases you should be equally willing to think that 2 people will disagree with 6 people.3 On the other hand, if you’ve got people who are emotionally nervous about being the odd one out, then it’s easy to see how adding a single other person who agrees with you, or even adding a single other person who disagrees with the group, would make you much less nervous.
Unsurprisingly, subjects in the one-dissenter condition did not think their nonconformity had been influenced or enabled by the dissenter. Like the 90% of drivers who think they're above average, some of them may be right about this, but not all. People are not self-aware of the causes of their conformity or dissent, which weighs against any attempts to argue that the patterns of conformity are rational.4
When the single dissenter suddenly switched to conforming to the group, subjects’ conformity rates went back up to just as high as in the no-dissenter condition. Being the first dissenter is a valuable (and costly!) social service, but you’ve got to keep it up.
Consistently within and across experiments, all-female groups (a female subject alongside female confederates) conform significantly more often than all-male groups. Around one-half the women conform more than half the time, versus a third of the men. If you argue that the average subject is rational, then apparently women are too agreeable and men are too disagreeable, so neither group is actually rational . . .
Ingroup-outgroup manipulations (e.g., a handicapped subject alongside other handicapped subjects) similarly show that conformity is significantly higher among members of an ingroup.
Conformity is lower in the case of blatant diagrams, like the one at the beginning of this essay, versus diagrams where the errors are more subtle. This is hard to explain if (all) the subjects are making a socially rational decision to avoid sticking out.
Finally, Paul Crowley reminds me to note that when subjects can respond in a way that will not be seen by the group, conformity also drops, which also argues against an Aumann interpretation.
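To make the first of these findings concrete (the plateau after 3 confederates): here is a minimal Bayesian sketch, not from Asch or the meta-analysis, of how much evidence N agreeing confederates should carry if their reports were honest and independent. The report probabilities p and q below are invented for illustration.

```python
# A toy model, assuming each confederate independently reports the correct
# line with probability p, and this *specific* wrong line with probability q.
# p and q are made-up illustrative numbers, not experimental data.

def posterior_majority_right(n, prior=0.5, p=0.9, q=0.05):
    """P(the majority's answer is correct | n independent reports of it)."""
    odds = prior / (1 - prior)
    odds *= (p / q) ** n  # each independent report multiplies the odds by p/q
    return odds / (1 + odds)

for n in (1, 3, 10, 15):
    print(n, round(posterior_majority_right(n), 6))
```

Under any such independence assumption the posterior keeps climbing steeply with n, so a subject conforming for Aumann-like reasons should be far more swayed by 15 confederates than by 3, contrary to the observed plateau.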
1Solomon E. Asch, “Studies of Independence and Conformity: A Minority of One Against a Unanimous Majority,” Psychological Monographs 70 (1956).
2Rod Bond and Peter B. Smith, “Culture and Conformity: A Meta-Analysis of Studies Using Asch’s (1952b, 1956) Line Judgment Task,” Psychological Bulletin 119 (1996): 111–137.
3This isn’t automatically true, but it’s true ceteris paribus.
4For example, in the hypothesis that people are socially-rationally choosing to lie in order to not stick out, it appears that (at least some) subjects in the one-dissenter condition do not consciously anticipate the “conscious strategy” they would employ when faced with unanimous opposition.
67 comments
Comments sorted by oldest first, as this post is from before comment nesting was available (around 2009-02-27).
comment by James_Bach · 2007-12-26T08:25:49.000Z · LW(p) · GW(p)
I don't see this exercise as being so much about rationality as it is about our relationship with dissonance. People in my community (context-driven software testers) are expected to treat confusion or controversy as itself evidence of a potentially serious problem. For the responsible tester, such evidence must be investigated and probably raised as an issue to the client.
In short, in the situation given in the exercise, I would not answer the question, but rather raise some questions.
I drive telephone surveyors nuts in this way. They just don't know what to do with a guy who answers "no opinion" or "I don't know" or "can't answer" to every single question in their poorly worded and context-non-specific questionnaires.
↑ comment by xenohunter · 2023-09-25T11:42:13.803Z · LW(p) · GW(p)
as being so much about rationality as it is about our relationship with dissonance
It seems to me that most of rationality is about our relationship with dissonance. Though in most cases that dissonance is implicit while here it is obvious.
comment by James_Annan · 2007-12-26T09:39:15.000Z · LW(p) · GW(p)
Robert Aumann's Agreement Theorem shows that honest Bayesians cannot agree to disagree - if they have common knowledge of their probability estimates, they have the same probability estimate.
Um, doesn't this also depend on them having common priors?
James
comment by Vladimir_Nesov2 · 2007-12-26T09:48:07.000Z · LW(p) · GW(p)
It seems there was no explicit rule against asking questions. It would be interesting to know what percentage of subjects actually questioned the process.
If people are conforming rationally, then the opinion of 15 other subjects should be substantially stronger evidence than the opinion of 3 other subjects.
I don't see how a moderate number of other wrong-answering subjects should influence the decision of a rational subject, even if it's strictly speaking stronger evidence, since uncertainty about your own sanity should be much lower than the probability of alternative explanations for the other subjects' wrong answers.
comment by Paul_Crowley2 · 2007-12-26T12:29:32.000Z · LW(p) · GW(p)
The video notes that when the subject is instructed to write their answers, conformity drops enormously. That suggests we can set aside the hypothesis that they conform for the rational reason you set out.
comment by anonymous7 · 2007-12-26T12:33:16.000Z · LW(p) · GW(p)
90% of drivers can be better than the average.
↑ comment by pnrjulius · 2012-04-09T05:32:57.424Z · LW(p) · GW(p)
Only in a hella skewed distribution, far from the observed distribution of actual driving behavior.
↑ comment by Autolykos · 2017-02-09T13:01:52.229Z · LW(p) · GW(p)
Even a more sane and more continuously distributed measure could yield that result, depending on how you fit the scale. If you measure the likelihood of making a mistake (so zero would be a perfect driver, and one a rabid lemur), I expect the distribution to be hella skewed. Most people drive in a sane way most of the time. But it's the few reckless idiots you remember - and so does every single one of the thousand other drivers who had the misfortune to encounter them. It would not surprise me if driving mistakes followed more-or-less a Pareto distribution.
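A quick sketch of this point, with an assumed Pareto shape parameter rather than real driving data: under a heavy-tailed distribution of mistake rates, the mean is dragged up by the reckless few, so most drivers really do make fewer mistakes than the average driver.

```python
# Illustrative only: sample "mistake rates" from a Pareto distribution and
# see what fraction of drivers make fewer mistakes than the mean driver.
import random

random.seed(0)
alpha = 1.5  # assumed shape; smaller alpha = heavier tail
mistakes = [random.paretovariate(alpha) for _ in range(100_000)]
mean = sum(mistakes) / len(mistakes)
better = sum(m < mean for m in mistakes) / len(mistakes)
print(f"fraction better (fewer mistakes) than average: {better:.1%}")
```

For alpha = 1.5 the exact answer is 1 - (alpha/(alpha-1))**(-alpha), about 81%, so "90% better than average" is within reach for a slightly heavier tail.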
↑ comment by Eli_Zarrindast · 2019-12-17T11:30:09.560Z · LW(p) · GW(p)
I took it to mean "You create some measurement that orders all of the N drivers (labeled with the natural numbers). They do not know their numbers. 90% of them will estimate that their number is >= the ceiling function of N/2".
comment by Chris · 2007-12-26T12:39:31.000Z · LW(p) · GW(p)
'This may come as some surprise' to Asch & Aumann, but rationality is not the design point of the human brain (otherwise this blog would have no reason to exist); getting by in the real world is. And getting by in the real world involved, for our ancestors through tens of millennia, group belonging, hence group conformity. See J. Harris, 'No Two Alike', Chaps. 8 & 9 for a discussion which references the Asch work. This does not mean of course that group conformity was the only adaptation factor. Being right and being 'in' both had (and have...) fitness value, and it's perfectly natural that both tendencies exist, in tension.
comment by Steve_Shervais · 2007-12-26T12:49:19.000Z · LW(p) · GW(p)
At an applied level, this reminds me of Dr. Jerry B. Harvey’s discussion of the "Abilene Paradox" in management, where groupthink can take over and move an organization in a direction that no-one really wants to go. All it takes is one dissenter to break the spell.
comment by Recovering_irrationalist · 2007-12-26T13:58:03.000Z · LW(p) · GW(p)
Surely there's more than social conformity/conflict aversion at work here? In the experiment in the video, an expectation of pattern continuation is set up. For most questions, the 4 spoken words the subject hears before responding do correspond to the apparently correct spoken word response. I'd expect subconscious processes to start interpreting this as an indicator of the correct answer regardless of social effects and be influenced accordingly, at least enough to cause confusion which would then increase susceptibility to the social effects.
I'd expect this effect to also be reduced where the subject is writing down his answers, as that takes out of the equation the close connection between hearing spoken numbers and speaking spoken numbers.
comment by Caledonian2 · 2007-12-26T14:37:51.000Z · LW(p) · GW(p)
Aumann's Agreement Theorem was proved more than twenty years after Asch's experiments, but it only formalizes and strengthens an intuitively obvious point - other people's beliefs are often legitimate evidence.
No, other people's beliefs are often treated as evidence, and very powerful evidence at that.
Belief is not suitable as any kind of evidence when more-direct evidence is available, yet people tend to reject direct evidence in order to conform with the beliefs of others.
The human goal usually isn't to produce justified predictions of likelihood, but to ingratiate ourselves with others in our social group.
What are you attempting to do, Eliezer?
↑ comment by Joshua · 2011-02-12T20:03:06.648Z · LW(p) · GW(p)
Isn't this exactly what was said in Hug The Query? I'm not sure I understand why you were downvoted.
comment by StuartBuck · 2007-12-26T14:51:13.000Z · LW(p) · GW(p)
FYI, if you look at Asch's 1955 Scientific American article, the lines on the cards were a little closer in length than in the example shown above.
comment by Steve · 2007-12-26T17:09:07.000Z · LW(p) · GW(p)
My vision is so bad that I answered 'none of the above'. I had to decide to measure the lines. That meant I first had to get to where I did not think the trick was the question. That took a cup of tea. 'Trust the ruler, not the vision' has been added to my list of -ings.
comment by Unknown_Healer · 2007-12-26T17:56:52.000Z · LW(p) · GW(p)
"Belief is not suitable as any kind of evidence when more-direct evidence is available, yet people tend to reject direct evidence in order to conform with the beliefs of others."
Caledonian, this is just wrong. Our ability to interpret evidence is not infallible, and is often fallible in ways that are not perfectly correlated across individuals. So even if we share the same 'direct evidence' as other observers of equal ability, their beliefs are still relevant.
comment by Psy-Kosh · 2007-12-26T18:50:32.000Z · LW(p) · GW(p)
Except we'd have to take into account the idea that the others whose beliefs we are using as evidence may themselves have been using the same idea... That results in the weighting of the beliefs of an initial group being greatly amplified above and beyond what it should be, no?
comment by Sebastian_Hagen2 · 2007-12-26T19:49:45.000Z · LW(p) · GW(p)
Robert Aumann's Agreement Theorem shows that honest Bayesians cannot agree to disagree - if they have common knowledge of their probability estimates, they have the same probability estimate.
In addition to what James Annan said, they also both have to know (with very high confidence) that they are in fact honest bayesians. Both sides being honest isn't enough if either suspects the other of lying.
comment by Richard_Kennaway · 2007-12-26T21:21:50.000Z · LW(p) · GW(p)
In terms of individual rationality, I hope I would notice my own severe confusion and then assign >50% probability to the majority vote.
Noticing your own severe confusion should lead to investigating the reasons for the disagreement, not to immediately going along with the majority. Honest Bayesians cannot agree to agree either. They must go through the process of sharing their information, not just their conclusions.
comment by Dave3 · 2007-12-27T03:17:39.000Z · LW(p) · GW(p)
What are the odds, given today's society, that a randomly selected group of people will include any honest Bayesians? Safer to assume that most of the group are either lying, self-deluded, confused, or have altered perceptions. Particularly so in a setting like a psychology experiment.
↑ comment by pnrjulius · 2012-04-09T05:36:47.198Z · LW(p) · GW(p)
Strict honest Bayesians? ZERO. (Not even LW contains a single true honest Bayesian.)
Approximations of honest Bayesians? Better than you might think. Certainly LW is full of reasonably good approximations, and in studies about 80% of people are honest (though most people assume that only 50% of people are honest, a phenomenon known as the Trust Gap). The Bayesian part is harder, since people who are say, religious, or superstitious, or believe in various other obviously false things, clearly don't qualify.
comment by Jason_Brennan · 2007-12-27T04:40:14.000Z · LW(p) · GW(p)
Check out this paper:
Gregory S. Berns, Jonathan Chappelow, Caroline F. Zink, Giuseppe Pagnoni, Megan E. Martin-Skurski, and Jim Richards, “Neurobiological Correlates of Social Conformity and Independence During Mental Rotation,” Biological Psychiatry 58 (2005), pp. 245-253.
It claims that the conformists can, under some conditions, actually come to see the world differently.
comment by Psy-Kosh · 2007-12-27T07:39:23.000Z · LW(p) · GW(p)
Oh, one other thing. I know it's been brought up before, but as far as the agreement theorem, I don't feel I can safely use it. What I mean is that it seems I don't understand exactly when it can and cannot be used. Specifically, I know that there's something I'm missing here, some understanding because I don't know the correct way to resolve things like agreement theorem vs quantum suicide.
It's been discussed, but I haven't seen it resolved, so until I know exactly why agreement theorem does not apply there (or why the apparently straightforward (to me) way of computing the quantum suicide numbers is wrong), I'd personally be really hesitant to use the agreement theorem directly.
↑ comment by pnrjulius · 2012-04-09T05:38:58.481Z · LW(p) · GW(p)
The quantum suicide numbers are wrong because of the Born probabilities, and also the fact that consciousness is not an either-or phenomenon. The odds of losing 99% of your consciousness may be sufficiently high that you effectively have no consciousness left. (Also: Have you ever been unconscious? Apparently it is possible for you to find yourself in a universe where you WERE unconscious for a period of time.)
Also, I'm convinced that Many-Worlds is a dead end and Bohm was right, but I know I'm in the minority on LW.
comment by Unknown · 2007-12-27T10:32:40.000Z · LW(p) · GW(p)
Perhaps Eliezer or someone else can check the math, but according to my calculations, if you use Nick Bostrom's SSSA (Strong Self-Sampling Assumption), and make the reference class "observers after a quantum suicide experiment", then if the prior probability of quantum immortality is 1/2, after a quantum suicide experiment has been performed with the person surviving, both the outside observer and the person undergoing the risk of death should update the probability of quantum immortality to 4/7, so that they end up agreeing.
This seems odd, but it is based on the calculation that if the probability of quantum immortality is 1/2, then the probability of ending up being an observer watching the experiment is 17/24, while the probability of being an observer surviving the experiment is 7/24. How did I derive this? Well, if Quantum Immortality is true, then the probability of being an observer watching the experiment is 2/3, because one observer watches someone die, one observer watches someone survive, and one observer experiences survival. Likewise if QI is true, the probability of being an observer surviving the experiment is 1/3. On the other hand, if QI is false, the probability of being an observer watching the experiment is 3/4 (I will leave this derivation to the reader), while the probability of being an observer surviving the experiment is 1/4.
From this it is not difficult to derive the probabilities above, that the probability of being a watcher is 17/24, and the probability of being a survivor 7/24. If you apply Bayes's theorem to get the probability of QI given the fact of being a survivor, you will get 4/7. You will also get 4/7 if you update your probabilities both on the fact of being a watcher and on the fact of seeing a survivor. So the two end up agreeing.
Intuitive support for this is the fact that if a QI experiment were actually performed, and we consider the viewpoint of the one surviving 300 successive trials, he would certainly conclude that QI was true, and our intuitions say that the outside observers should admit that he's right.
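A quick check of Unknown's arithmetic with exact fractions (the setup, including the prior of 1/2 and the 1/3 vs. 1/4 conditional probabilities, is taken from the comment above rather than derived independently):

```python
from fractions import Fraction as F

prior_qi = F(1, 2)
p_survive_qi = F(1, 3)    # if QI is true: one of three post-experiment observers
p_survive_not = F(1, 4)   # if QI is false: one of four post-experiment observers

p_survivor = prior_qi * p_survive_qi + (1 - prior_qi) * p_survive_not
print(p_survivor)                              # 7/24, as claimed

# Bayes: P(QI | you are the survivor)
posterior_qi = prior_qi * p_survive_qi / p_survivor
print(posterior_qi)                            # 4/7, matching the comment
```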
comment by Unknown · 2007-12-27T11:32:00.000Z · LW(p) · GW(p)
In the above calculation I forgot to mention that for simplicity I assumed that the experiment is such that one would normally have a 50% chance of survival. If this value is different, the values above would be different, but the fact of agreement would be the same (although there would also be the difficulty that a chance other than 50% is not easy to reconcile with a many-worlds theory anyway.)
comment by Nick_Tarleton · 2007-12-27T15:35:45.000Z · LW(p) · GW(p)
Quantum suicide vs. Aumann has been discussed a couple times before, and yes, it's very confusing.
Intuitive support for this is the fact that if a QI experiment were actually performed, and we consider the viewpoint of the one surviving 300 successive trials, he would certainly conclude that QI was true, and our intuitions say that the outside observers should admit that he's right.
My intuitions say outside observers should not update their estimates one bit, and I'm pretty sure this is correct, unless they should also increase their probability of MWI on making the equivalent observation of a coin coming up heads 300 times in a row.
(although there would also be the difficulty that a chance other than 50% is not easy to reconcile with a many-worlds theory anyway.)
http://www.hedweb.com/everett/everett.htm#probabilities
http://hanson.gmu.edu/mangledworlds.html
comment by Someone · 2007-12-27T18:10:40.000Z · LW(p) · GW(p)
I think the most interesting question that arises from these experiments is what's the difference in personality between people who dissent and people who conform (aside from the obvious).
↑ comment by pnrjulius · 2012-04-09T05:44:39.361Z · LW(p) · GW(p)
I would guess that if we did a study using the usual Big Five, a single personality trait would drive most of the variance, the one called "agreeableness". Unfortunately this is not actually one trait, we just treat it like it is; there's no particular reason to think that conformity is correlated with empathy, for example, yet they are both considered "agreeableness". (This is similar to the problem with the trait "Belief in a Just World", which includes both the belief that a just world is possible and the belief that it is actual. An ideal moral person would definitely believe in the possibility; but upon observing a single starving child they would know that it is not actual. Hence should they be high, or low, in "Belief in a Just World"?)
↑ comment by Ben (ben-lang) · 2022-08-22T10:55:55.125Z · LW(p) · GW(p)
At my school we did this experiment. (I happened to be one of the people who was not in on it, and did not conform.) I have no idea what evidence they had to say this, but the teacher suggested that people into "maths, physics or science stuff" were less likely to conform.
comment by Psy-Kosh · 2007-12-27T21:06:33.000Z · LW(p) · GW(p)
Unknown: Hrm, hadn't thought of using the SSSI. Thanks. Ran through it myself by hand now, and it does seem to result in the experimenter and test subject agreeing.
However, it produces an... oddity. Specifically, if using the SSSA, then by my calculations, when one takes into account that the external observer and the test subject are not the only people in existence, the actual strength of evidence extractable from a single quantum suicide experiment would seem to be relatively weak. If the ratio of non-test-subjects to test subjects is N, and the probability of the subject surviving simply by the nature of the quantum experiment is R, the likelihood ratio is (1+N)/(R+N) (which both the test subject and the external observer would agree on). Seeing a nonsurvival gives a MWI to ~MWI likelihood ratio of N/(R+N). At least, assuming I did the math right. :)
Anyways, so it looks like if the SSSA is valid, quantum suicide doesn't actually give very strong evidence one way or the other at all, does it?
Hrm... I wonder if in principle it could be used to make estimates about the total population of the universe by doing it a bunch of times and then analyzing the ratios of observed results... chuckles May have just discovered the maddest way to do a census, well, ever.
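Plugging a few values into the (1+N)/(R+N) formula from the comment above (taking the formula as given; the N and R values here are arbitrary examples):

```python
def survival_likelihood_ratio(N, R):
    """MWI : not-MWI likelihood ratio after observing survival, per Psy-Kosh."""
    return (1 + N) / (R + N)

for N in (0, 10, 1_000_000):
    print(N, survival_likelihood_ratio(N, R=0.5))
```

With N = 0 the ratio is the naive 2; as N grows it collapses toward 1, i.e. almost no evidence either way, which is the weakness described above.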
↑ comment by pnrjulius · 2012-04-09T05:46:32.209Z · LW(p) · GW(p)
Clearly it can't actually matter what the population of the universe is. (There's nothing about the experiment that is based on that! It would be this bizarre nonlocal phenomenon that pops out of the theory without being put into it!) That's the kind of weirdness you come up with if you do anthropic calculations WRONG.
comment by Psy-Kosh · 2007-12-27T22:18:58.000Z · LW(p) · GW(p)
Actually, if considering the SSSA instead of just the SSA, one has to take into account all the observer-moments, past and future, right? So there will be, in addition to the specific observer-moments of "immediately post-experiment test subject (or not), experimenter, everyone else...", past and future versions thereof, and of other entities, so you'll have K1 total "others" (other observer-moments, that is) in a MW universe, and K2 << K1 "others" in a single-world universe.
This'll make the calculation a bit more confusing.
comment by Sam4 · 2008-01-08T11:28:24.000Z · LW(p) · GW(p)
"... then what are the odds that only you are the one who's right?"
If this is the reasoning for people choosing the same answer then surely it becomes a question of confidence rather than conformity?
Choosing the same answer as the group, in your argument, is because you aren't confident in your answer and are willing to defer to the majority answer. Not necessarily the same as conformity. By your own reasoning you are going with the group because you think their answer is "better", not because you want to be part of the group. I know you can argue that that is just your rationale for conformity, but I feel that conformity is more about doubting something you are sure you know, to side with a group, rather than doubting something you think you might know.
I feel possibly a more accurate test (using this reasoning for conformity) would be to take a group and tell all the members individually that only they will know the right answer. Then give all bar one the same answer and one a different answer and see if they will conform with the group.
comment by Leeroy_Jenkins · 2008-05-08T15:42:45.000Z · LW(p) · GW(p)
I believe that the subjects were in a non-matured state, thus making them of a "childish" mind and not able to process the situation. The subjects would simply say anything their peers would say or do. I am testing this experiment on my classmates. I am in the 10th grade and will respond back with the solution. I believe that a matured mind would not give in so easily with a simple question. It is not the question at hand that is making the subjects say something completely incorrect, it is the group pressure and the maturity of the subjects. If a child's mind thinks he or she is to believe that of another subject, then it shall think of that at hand. Children's minds are so open and naive that they will believe something as simple as Santa Claus coming down the chimney every year, then they will not hesitate to think of an answer to the question of this experiment. It is a simple and most uneducated experiment I had to present and test. A matured mind will think not of the group pressure but that of the question. I will be back with my results. Thank you.
Leeroy Jenkins
comment by Sadun_Kal · 2008-08-20T01:50:12.000Z · LW(p) · GW(p)
"I believe that the subjects were of those of a non-matured state..."
I guess that's the difference between being biased or not. I think your understanding of a "mature mind" equals an "unbiased mind" which is not present in all the adults. And of course the result of this experiment would have been different if it were conducted on the readers of this website.
comment by BlackHumor · 2010-11-02T01:36:10.968Z · LW(p) · GW(p)
I don't see why you think that 3 extra people, no matter if they're honest or not, amount to any significant amount of evidence when you can see the diagram yourself.
Sure, maybe they're good enough if you can't see the diagram; 3 people thinking the same thing doesn't often happen when they're wrong. But when they are wrong, when you can see that they are wrong, then it doesn't matter how many of them there are.
Also: certainly the odds aren't high that you're right if we're talking totally random odds about a proposition where the evidence is totally ambiguous. But since there is a diagram, the odds then shift to either the very low probability "My eyesight has suddenly become horrible in this one instance and no others" combined with the high probability "3/4 people are right about a seemingly easy problem", versus the low probability "3/4 people are wrong about a seemingly easy problem", versus the high probability "My eyesight is working fine".
I don't know the actual numbers for this, but it seems likely that the probability of your eyesight suddenly malfunctioning in strange and specific ways is lower than the probability of 3 other people getting an easy problem wrong. Remember, they can have whatever long-standing problems with their eyesight or perception or whatever anyone cares to make up. Or you could just take the results of Asch's experiment as a prior and say that they're not that much more impressive than 1 person going first.
(All this of course changes if they can explain why C is a better answer; if they have a good logical reason for it despite how odd it seems, it's probably true. But until then, you have to rely on your own good logical reason for B being a better answer.)
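A toy odds calculation in the spirit of this comment; every number below is invented purely for illustration, and only the structure of the argument comes from the comment:

```python
p_my_eyes_fail = 1e-6             # assumed: sudden, specific visual malfunction
p_one_wrong = 0.02                # assumed: one person errs on an easy problem
p_three_wrong = p_one_wrong ** 3  # if (unrealistically) their errors are independent

odds_i_am_wrong = p_my_eyes_fail / p_three_wrong
print(f"odds I'm the one who's wrong: {odds_i_am_wrong:.3f}")  # 0.125
```

Any correlation in the others' errors (a shared illusion, or a shared reason to lie) raises p_three_wrong and pushes these odds down further, which is the comment's point about long-standing problems and Asch priors.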
comment by handoflixue · 2011-05-24T23:53:11.279Z · LW(p) · GW(p)
"I hope I would notice my own severe confusion and then assign >50% probability to the majority vote."
On a group level, I wouldn't think it's a particularly rational path to mimic the majority, even if you believe that they're honestly reporting. If you had a group of, say, 10 people, and the first 5 all gave the wrong answer, there would then be a rational impetus for everyone subsequent to mimic that wrong answer, on the logic that "the last (5-9) people all said C, so clearly p(C) > 0.5".
Far better to dissent and provide the group with new information.
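A small simulation of the cascade described above (a sketch; the group size and the "copy the majority" rule are simplifying assumptions):

```python
def run_group(n_people=10, n_wrong_starters=5):
    answers = ["C"] * n_wrong_starters   # the first 5 give the wrong answer
    for _ in range(n_people - n_wrong_starters):
        # Each later subject defers to the majority of prior answers,
        # treating them as independent evidence (which they are not).
        majority = "C" if answers.count("C") > len(answers) / 2 else "B"
        answers.append(majority)
    return answers

print(run_group())  # all 'C': the first five wrong answers lock in forever
```

Once the wrong answers reach a majority, no later subject ever contributes new information, which is why dissent is so valuable here.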
↑ comment by pnrjulius · 2012-04-09T05:49:22.969Z · LW(p) · GW(p)
Ooh, that's really interesting. The best solution might actually be to say the full statement, "I see B as equal, but since the other 5 people before me said C, C is probably objectively more likely." Then future people after you can still hear what you saw, independently of what you inferred based on others.
But I think there are a lot of other really interesting problems embedded in this, involving the feedback between semi-Bayesians trying to use each other to process evidence. (True Bayesians get the right answer; but what answer do semi-Bayesians get?)
comment by [deleted] · 2011-09-03T22:16:58.414Z · LW(p) · GW(p)
How do you face this situation as a rationalist?
comment by pnrjulius · 2012-04-09T05:32:01.172Z · LW(p) · GW(p)
This gives us a very good reason to publicize dissenting opinions about just about anything, even perhaps when we think those dissents are wrong. Apparently the mere presence of a dissenter damages groupthink and allows true answers a much better chance to emerge.
comment by avichapman · 2012-06-01T01:53:00.046Z · LW(p) · GW(p)
I was all set to ask whether the result of female groups' increased conformity had any explanatory power over the question of why there aren't more women in the rationalist movement. Then as I read on, it became less likely that female psychology had anything to do with it. Rather, in-group vs out-group psychology did. Males, being the socially more privileged gender, are more likely to see themselves as 'just normal' rather than part of a particular group called 'males'.
Of course, this lends itself to predictions. In a given grouping that self-identifies strongly as that grouping (such as women, minority ethnicities, etc.), if that group is very into a particular subject, its members will also likely be into it. Whereas, with a group that is less likely to self-identify (such as American Caucasians, Americans within American borders (but not abroad), and men), the conformity on interests will be less.
Have there been any studies done to test this minority vs majority group conformity idea?
↑ comment by avichapman · 2012-06-04T00:27:36.854Z · LW(p) · GW(p)
I'm not upset about losing points for this post, but I am a bit confused about it. Many out there know more about this stuff than I do. Did I say something factually inaccurate or engage in bad reasoning? I want to know so that I don't repeat my mistake.
↑ comment by TimS · 2012-06-04T02:46:08.577Z · LW(p) · GW(p)
Your first paragraph mentions a highly contested thesis that you admit is irrelevant to the evidence. Your second paragraph seems to assert that dominant groups do not strongly self-identify - which seems empirically false; consider spontaneous chants of "USA, USA, USA".
Also, you are using some quasi-technical jargon less precisely than the terms are usually used - and your misuses seem to be directed at supporting a particular ideological position.
But that's just the sense of someone who probably has a contrary ideological position, so I'm not sure how I would recommend you generalizing from my impression. (and the downvote is gone at the moment I'm writing this - was it just one? Just ignore those if you can't figure them out.)
↑ comment by avichapman · 2012-06-04T03:25:30.043Z · LW(p) · GW(p)
Ah.
I had suspected that it might be because someone had tried to infer my position on such matters from my asking of the question and didn't like the implication. I did, after all, admit to including the thesis that 'the observed high conformance of a group of females is influenced by an aspect of female psychology' in my list of possible explanations for the high conformance in that group, even though I ended up rejecting that hypothesis.
(I suspect that your position vis-à-vis whether either gender is superior is not that different from my own. But to be clear, my position is that both genders possess great capacity for type 2 cognition, which is the most important measurement of human success. Any difference between healthy adults of either gender in their use of such cognition comes down to social factors, which can be changed to create a fairer society.)
I'm still surprised about the second paragraph's inaccuracy, though. In my experience, the chants of "USA, USA, USA" occur at sporting matches against other countries. That's not an 'internal to America' thing. Then again, I don't live in America and haven't for many years. I chose America because I was trying to cater my words to my audience. Perhaps that was wrong and I should have spoken from experience instead. (I'm Australian.)
I want to use every word accurately, so I would be most appreciative if you could give me a few examples of jargon I've used and a description (or link to one) of the way it should actually be used.
Thanks, Avi
PS - Yes. It was just one vote, so maybe I got re-upvoted or something. Oh well. The experience alerted me to an issue. That's all anyone could ask of it.
comment by [deleted] · 2019-12-21T03:54:33.163Z · LW(p) · GW(p)
Image is missing from article.
↑ comment by habryka (habryka4) · 2019-12-21T04:17:51.624Z · LW(p) · GW(p)
Thank you, fixed! (And thanks to Said for having backups of all the images on readthesequences.com)
↑ comment by [deleted] · 2019-12-21T06:33:39.615Z · LW(p) · GW(p)
Glad someone's paying attention to comments on old articles. There's actually quite a few examples of missing images like this. Sorry I didn't mention the ones I've encountered so far. I will do so in the future.
↑ comment by habryka (habryka4) · 2019-12-21T08:11:00.806Z · LW(p) · GW(p)
Yes, please do. I try to fix all broken links and images in old content that I can find.
comment by jwray1 · 2021-04-12T18:01:45.163Z · LW(p) · GW(p)
I can't imagine myself ever conforming until it was less than 1/8 as blatant as the example image. Assigning a >50% probability to the majority being correct seems way too generous, because I have no strong evidence that they're not lying, and a high prior on my ability to see linear distances on a 2D page.
Did the 100+ replications collect any data on what sort of people are more conformist than others, besides the gender gap?
comment by Ben (ben-lang) · 2022-08-22T10:51:55.068Z · LW(p) · GW(p)
On the topic of "why would conformity not grow with 15 people going before you instead of 3", one answer is obvious. The subject realises that the other people are not independently wrong, but that there is a trick. A Bayesian can reason that other people might not be independent data points.
In my school I volunteered with the psychology class to help with an experiment. The first person said line C was the matching one; I practically flinched, and gave them an involuntary "what is wrong with you?" look. The second agreed it was line C. I thought "what the hell?!!" and looked at their sheet, then back at my sheet, and confirmed they were the same. On person 3 I realised I could fold my sheet over and hold it to the light to check they lined up. If I had been the 4th in the line I would have been really confused. But I was something like 12th. By person 7 I was almost 100% sure the play was "Oh, they must all be in on it somehow. They want to see if peer pressure will turn me into an idiot."
So an unavoidable issue with the experiment is that more people conforming gives the subject more time to theorise about why a large cohort of people would all be making such an obvious mistake in a correlated way. And this theorising will not take long to start focussing on the fact that you know it's an experiment, and you start wondering what they could be testing.
I was about to add on that no-one would ever conform if they had even the slightest real reward for pointing out the obvious (eg. some $ for a right answer), but a quick google suggests that my intuitions on that might be really out.
comment by qvalq (qv^!q) · 2023-04-28T16:20:43.102Z · LW(p) · GW(p)
If people are conforming rationally, then the opinion of 15 other subjects should be substantially stronger evidence than the opinion of 3 other subjects.
This doesn't seem true; the data correlate pretty strongly, so more wouldn't provide much evidence.
Adding a single dissenter—just one other person who gives the correct answer, or even an incorrect answer that’s different from the group’s incorrect answer—reduces conformity very sharply, down to 5–10% of subjects.
This is irrational, though.