New study on choice blindness in moral positions

post by nerfhammer · 2012-09-20T18:14:33.437Z · LW · GW · Legacy · 152 comments

Change blindness is the phenomenon whereby people fail to notice changes in scenery and whatnot if they're not directed to pay attention to them. There are countless videos online demonstrating this effect (one of my favorites here, by Richard Wiseman).

One of the most audacious and famous experiments is known informally as "the door study": an experimenter asks a passerby for directions but is interrupted by a pair of construction workers carrying an unhinged door, concealing another person who replaces the experimenter as the door passes. Incredibly, the person giving directions rarely notices that they are now talking to a completely different person. This effect was reproduced by Derren Brown on British TV (here's an amateur re-enactment).

Subsequently, a pair of Swedish researchers familiar with sleight-of-hand magic conceived a new twist on this line of research, arguably even more audacious: have participants make a choice, then quietly swap that choice with something else. People not only fail to notice the change, but also confabulate reasons why they had preferred the counterfeit choice (video here). They called their new paradigm "Choice Blindness".

Just recently the same Swedish researchers published a new study that is even more shocking. Rather than demonstrating choice blindness by having participants choose between two photographs, they demonstrated the same effect with moral propositions. Participants completed a survey asking them to agree or disagree with statements such as "large scale governmental surveillance of e-mail and Internet traffic ought to be forbidden as a means to combat international crime and terrorism". Their responses were covertly changed before they reviewed their copy of the survey: 69% failed to notice at least one of the two changes, and when asked to explain their answers, 53% argued in favor of what they falsely believed was their original choice, despite having previously indicated the opposite moral position (study here, video here).

152 comments

Comments sorted by top scores.

comment by AlexMennen · 2012-09-20T19:29:18.011Z · LW(p) · GW(p)

I find myself thinking "I remember believing X. Why did I believe X? Oh right, because Y and Z. Yes, I was definitely right" with alarming frequency.

Replies from: Lightwave, John_Maxwell_IV
comment by Lightwave · 2012-09-20T20:57:13.118Z · LW(p) · GW(p)

When reading old LW posts and comments and seeing I've upvoted some comment, I find myself thinking "Wait, why have I upvoted this comment?"

comment by John_Maxwell (John_Maxwell_IV) · 2012-09-25T02:48:44.925Z · LW(p) · GW(p)

This doesn't seem obviously bad to me... You just have to differentiate between the times when you have a gut feeling that something's true because you worked it out before, and the times when it's because of some stupid reason like your parents telling it to you when you were a kid. Right?

I think I can tell apart rationalizations I'm creating on the spot from reasoning I remember constructing in the past. And if I'm creating rationalizations on the spot, I make an effort to rationalize in the opposing direction a bit for balance.

comment by shminux · 2012-09-20T17:07:56.081Z · LW(p) · GW(p)

An instrumental question: how would you exploit this to your advantage, were you dark-arts inclined? For example, if you are a US presidential candidate, what tactics would you use to invisibly switch voters' choice to you? Given that you are probably not better at it than the professionals in each candidate's team, can you find examples of such tactics?

Replies from: Eugine_Nier, DaFranker, Epiphany, fubarobfusco, Epiphany, Haladdin, siodine, Alejandro1, TimS, Epiphany, Epiphany
comment by Eugine_Nier · 2012-09-21T03:19:00.295Z · LW(p) · GW(p)

Claim to agree with them on issue X, then once they've committed to supporting you, change your position on issue X.

Come to think of it, politicians already do this.

Replies from: MTGandP, Hawisher
comment by MTGandP · 2012-11-01T23:11:46.786Z · LW(p) · GW(p)

Interestingly, the other major party never seems to fail to notice. Right now there are endless videos on YouTube of Romney's flip-flopping, and Republicans reacted similarly to Kerry's waffling in 2004. But for some reason, supporters of the candidate in question either don't notice or don't care.

comment by Hawisher · 2012-09-24T14:40:25.324Z · LW(p) · GW(p)

Isn't the fact (in American politics, at least) that either (1) a politician's stance on any given topic is highly mutable, or (2) a politician's stance could perfectly reasonably disagree with that of some of his supporters (given that the politician one supports is at best a best-effort compromise rather than, in most cases, a perfect representation of one's beliefs) so widely known as to eliminate or alleviate that effect?

Replies from: DaFranker
comment by DaFranker · 2012-09-24T14:57:06.009Z · LW(p) · GW(p)

I don't see how either or both options you've presented change the point in any way; if politicians claim to agree on X until you agree to vote for them, then turn out to revert to their personal preference once you've already voted for them, then while you may know they're mutable or a best-effort-compromise, you've still agreed with a politician and voted for them on the basis of X, which they now no longer hold.

That they are known to have mutable stances or be prone to hidden agendas only makes this tactic more visible, but also more popular, and by selection effects makes the more dangerous instances of this even more subtle and, well, dangerous.

Replies from: Hawisher
comment by Hawisher · 2012-09-24T15:30:19.979Z · LW(p) · GW(p)

I would argue that the chief difference between picking a politician to support and choosing answers based on one's personal views of morality is that the former is self-evidently mutable. If a survey-taker were informed beforehand that the survey-giver might or might not change his responses, it is highly doubtful the study in question would have produced these results.

comment by DaFranker · 2012-09-20T19:33:29.584Z · LW(p) · GW(p)

While meeting with voters in local community halls, candidates sometimes go around distributing goodwill tokens or promises while thanking people for supporting them, whether the person actually seems to support them or not.

It's not a very strong version, and it's tinged with some guilt-tripping, but it matches the pattern under some circumstances and very well might trigger choice blindness in some cases.

comment by Epiphany · 2012-09-21T07:41:25.385Z · LW(p) · GW(p)

Dark tactic: Have we verified that it doesn't work to present people with a paper saying what their opinion is even if they did NOT fill anything out? This tactic is based on that possibility:

  1. An unethical political candidate could have campaigners get a bunch of random people together and hand them a falsified survey with their name on it, making it look like they filled it out. The responses support a presidential candidate.

  2. The unethical campaigner might then say: "A year ago (too long for most people to remember the answers they gave on tests), you filled out a survey with our independent research company, saying you support X, Y and Z." If delivered authoritatively enough, people might believe this.

  3. "These are the three key parts of my campaign! Can you explain why you support these?"

  4. (victim explains)

  5. "Great responses! Do you mind if we use these?"

  6. (victim may feel compelled to say yes or seem ungrateful for the compliment)

  7. "I think your family and friends should hear what great supports you have for your points on this important issue, don't you?"

  8. (now new victims will be dragged in)

  9. The responses that were given are used to make it look like there's a consensus.

Replies from: army1987, MugaSofer
comment by A1987dM (army1987) · 2012-09-25T12:02:40.597Z · LW(p) · GW(p)

(too long for most people to remember the answers they gave on tests)

For me at least, one year is also too long for me to reliably hold the same opinion, so if you did that to me, I think I'd likely say something like “Yeah, I did support X, Y and Z back then, but now I've changed my mind.” (I'm not one to cache opinions about most political issues -- I usually recompute them on the fly each time I need them.)

comment by MugaSofer · 2012-09-21T11:47:47.329Z · LW(p) · GW(p)

Someone should see if this works.

Of course, you need to filter for people who fill out surveys.

Replies from: DaFranker
comment by DaFranker · 2012-09-21T12:26:59.584Z · LW(p) · GW(p)

Idea:

Implement feedback surveys for lesswrong meta stuff, and slip a test for this tactic into one of the surveys, a few surveys in.

Having a website as a medium should make it even harder for people to speak up or realize there's something going on, and I figure LWers are probably the biggest challenge. If LWers fall into a trap like this, that'd be strong evidence that you could take over a country with such methods.

Replies from: ModusPonies
comment by ModusPonies · 2012-09-21T19:09:52.223Z · LW(p) · GW(p)

That would be very weak evidence that you could take over a country with such methods. It would be strong evidence that you could take over a website with such methods.

comment by fubarobfusco · 2012-09-20T20:51:23.594Z · LW(p) · GW(p)

how would you exploit this to your advantage, were you dark-arts inclined?

Break into someone's blog and alter statements that reflect their views.

comment by Epiphany · 2012-09-21T07:59:01.100Z · LW(p) · GW(p)

Dark Tactic:

This one makes me sick to my stomach.

Imagine some horrible person wants to start a cult. So they get a bunch of people together and survey them asking things like:

"I don't think that cults are a good thing." "I'm not completely sure that (horrible person) would be a good cult leader."

and switches them with:

"I think that cults are a good thing." "I'm completely sure that (horrible person) would be a good cult leader."

And the horrible person shows the whole room the results of the second set of questions, showing that there's a consensus that cults are a good thing and most people are completely sure that (horrible person) would be a good cult leader.

Then the horrible person asks individuals to support their conclusions about why cults are a good thing and why they would be a good leader.

Then the horrible person starts asking for donations and commitments, etc.

Who do we tell about these things? They have organizations for reporting security vulnerabilities for computer systems so the professionals get them... where do you report security vulnerabilities for the human mind?

Replies from: ChristianKl, Pentashagon, DaFranker
comment by ChristianKl · 2012-09-22T17:40:56.789Z · LW(p) · GW(p)

If you start a cult, you don't tell people that you're starting a cult. You tell them: look, there's this nice meetup. All the people in that meetup are cool. The people in that group think differently than the rest of the world. They are better. Then there are those retreats where people spend a lot of time together and become even better and more different than the average person on the street.

Most people in the LessWrong community don't see it as a cult, and the same is true of the members of most organisations that are seen as cults.

Replies from: John_Maxwell_IV, wedrifid
comment by John_Maxwell (John_Maxwell_IV) · 2012-09-25T02:44:52.048Z · LW(p) · GW(p)

That's not too different from the description of a university though.

comment by wedrifid · 2012-09-22T18:13:04.667Z · LW(p) · GW(p)

If you start a cult, you don't tell people that you're starting a cult. You tell them: look, there's this nice meetup. All the people in that meetup are cool. The people in that group think differently than the rest of the world. They are better. Then there are those retreats where people spend a lot of time together and become even better and more different than the average person on the street.

Do you? Really? That works? When creating an actual literal cult? This is counter-intuitive.

Replies from: Endovior, NancyLebovitz, ChristianKl
comment by Endovior · 2012-09-23T17:50:28.522Z · LW(p) · GW(p)

The trick: you need to spin it as something they'd like to do anyway... you can't just present it as a way to be cool and different, you need to tie it into an existing motivation. Making money is an easy one, because then you can come in with an MLM structure and get your cultists to go recruiting for you. You don't even need to do much in the way of developing cultic materials; there's plenty of stuff designed to indoctrinate people in anti-rational, pro-cult philosophies like "the law of attraction" that is written so as to appear to be a guide for salespeople, so your prospective cultists will pay for and perform their own indoctrination voluntarily.

I was in such a cult myself; it's tremendously effective.

Replies from: ChristianKl
comment by ChristianKl · 2012-09-24T21:55:31.716Z · LW(p) · GW(p)

If you want to reach a person who feels lonely, having a community of like-minded people who accept them can be enough. You don't necessarily need stuff like money.

Replies from: Endovior
comment by Endovior · 2012-09-25T13:37:50.950Z · LW(p) · GW(p)

Agreed. Emotional motivations make just as good a target as intellectual ones. If someone already feels lonely and isolated, then they have a generally exploitable motivation, making them a prime candidate for any sort of cult recruitment. That kind of isolation is just what cults look for in a recruit, and most try to create it intentionally, using whatever they can to cut their cultists off from any anti-cult influences in their lives.

Replies from: wedrifid
comment by wedrifid · 2012-09-25T14:16:56.396Z · LW(p) · GW(p)

Emotional motivations make just as good a target as intellectual ones.

Agree, except I'd strengthen this to "a much better".

comment by NancyLebovitz · 2012-09-24T20:36:52.447Z · LW(p) · GW(p)

It works. Especially if you can get people away from their other social contacts. Mix in insufficient sleep and a low protein diet, and it works really well. (Second-hand information, but there's pretty good consensus on how cults work.)

How do you think cults work?

Replies from: Nornagest, wedrifid
comment by Nornagest · 2012-09-24T21:08:40.498Z · LW(p) · GW(p)

I'd question "really well". Cult retention rates tend to be really low -- about 2% for Sun Myung Moon's Unification Church ("Moonies") over three to five years, for example, or somewhere in the neighborhood of 10% for Scientology. The cult methodology seems to work well in the short term and on vulnerable people, but it seriously lacks staying power: one reason why many cults focus so heavily on recruiting, as they need to recruit massively just to keep up their numbers.

Judging from the statistics here, retention rates for conventional religious conversions are much higher than this (albeit lower than retention rates for those raised in the church).

Replies from: NancyLebovitz, gwern
comment by NancyLebovitz · 2012-09-24T21:22:03.574Z · LW(p) · GW(p)

I guess "really well" is ill-defined, but I do think that both Sun Myung Moon and L. Ron Hubbard could say "It's a living".

You can get a lot out of people in the three to five years before they leave.

Replies from: shminux
comment by shminux · 2012-09-24T23:27:22.791Z · LW(p) · GW(p)

Note that the term cult is an instance of the worst argument in the world (guilt by association). The neutral term is NRM (new religious movement). Thus to classify something as a cult, one should first tick off the "religious" check mark, which requires spirituality, a rather nebulous concept:

Spirituality is the concept of an ultimate or an alleged immaterial reality; an inner path enabling a person to discover the essence of his/her being; or the "deepest values and meanings by which people live."

If you define cult as an NRM with negative connotations, then you have to agree on what those negatives are, not an easy task.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-09-25T00:13:23.415Z · LW(p) · GW(p)

"NRM" is a term in the sociology of religion. There are many groups that are often thought of as "cultish" in the ordinary-language sense that are not particularly spiritual. Multi-level marketing groups and large group awareness training come to mind.

comment by gwern · 2012-09-25T00:47:59.698Z · LW(p) · GW(p)

This is basically true, although I had a dickens of a time finding specifics in the religious/psychology/sociological research - everyone is happy to claim that cults have horrible retention rates, but none of them seem to present much beyond anecdotes.

Replies from: Nornagest
comment by Nornagest · 2012-09-25T00:58:41.440Z · LW(p) · GW(p)

I'll confess I was using remembered statistics for the Moonies, not fresh ones. The data I remember from a couple of years ago seems to have been rendered unGooglable by the news of Sun Myung Moon's death.

Scientology is easier to find fresh statistics for, but harder to find consistent statistics for. I personally suspect the correct value is lower, but 10% is about the median in easily accessible sources.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-25T08:00:34.073Z · LW(p) · GW(p)

The data I remember from a couple of years ago seems to have been rendered unGooglable by [more recent stuff]

Click on “Search tools” at the bottom of the menu on the left side of Google's search results page, then on “Custom range”.

comment by wedrifid · 2012-09-24T21:39:55.532Z · LW(p) · GW(p)

How do you think cults work?

Like what you say, but not much like what ChristianKl said. I think he was exaggerating rather a lot to try to make something fit when it doesn't particularly.

comment by ChristianKl · 2012-09-24T21:54:01.427Z · LW(p) · GW(p)

What's an actual literal cult?

When I went to the Quantified Self conference in Amsterdam last year, I heard the allegation that Quantified Self is a cult after I explained it to someone who lived at the place where I stayed for the weekend. I also had to defend against the cult allegation when explaining the Quantified Self community to journalists. Which groups are cults depends a lot on the person who's making the judgement.

There are, however, also groups where we can agree that they are cults. I would say that the principle applies to an organisation like the Church of Scientology.

comment by Pentashagon · 2012-09-21T18:11:49.882Z · LW(p) · GW(p)

I think that's known as voter fraud. A lot of people believe (and tell others to believe) that certain candidates were legally and fairly elected even when exit polls show dramatically different results. Although of course this could work the same way if exit polls were changed to reflect the opposite outcome of an actually fair election and people believed the false exit polls and demanded a recount or re-election. It just depends on which side can effectively collude to cheat.

Replies from: Epiphany
comment by Epiphany · 2012-09-21T19:31:25.754Z · LW(p) · GW(p)

No. What I'm saying here is that, using this technique, it might not be seen as fraud.

If the view on "choice blindness" is that people are actually changing their opinions, it would not be technically seen as false to claim that those are their opinions. Committing fraud would require you to lie. This may be a form of brainwashing, not a new way to lie.

That's why this is so creepy.

comment by DaFranker · 2012-09-21T12:23:35.528Z · LW(p) · GW(p)

We need a worldwide Mindhacker Convention/Summit/Place-where-people-go.

Unfortunately, the cult leaders you've just described will not permit this, because they've already brainwashed their minions (and those minions' children, and those children's children, for thousands of years) into accepting that the human mind is supreme and sacred and must not be toyed with at any cost.

comment by Haladdin · 2012-09-24T17:28:39.288Z · LW(p) · GW(p)

Online dating. Put up a profile that suggests a certain personality type and interests. In a face-to-face meetup, even if you're someone different from what was advertised, choice blindness should cover up the fact.

This tactic can presumably also be extended to job resumes.

Replies from: khafra, army1987, Vaniver
comment by khafra · 2012-09-25T19:44:04.985Z · LW(p) · GW(p)

Either that's already a well-used tactic amongst online daters, or 6'1", 180lb guys who earn over $80k/year are massively more likely to use online dating sites than the average man.

comment by A1987dM (army1987) · 2012-09-25T12:00:02.626Z · LW(p) · GW(p)

This tactic can also be extended to job resumes presumably.

I wouldn't like to be standing in the shoes of someone who tried that and it didn't work.

Replies from: wedrifid
comment by wedrifid · 2012-09-25T14:15:55.578Z · LW(p) · GW(p)

I wouldn't like to be standing in the shoes of someone who tried that and it didn't work.

Why? Just go interview somewhere else. The same applies for any interview signalling strategy.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-25T22:11:02.554Z · LW(p) · GW(p)

I meant in the shoes of the candidate, not the interviewer. If that happened to me, I would feel like my status-o-meter started reading minus infinity.

comment by Vaniver · 2012-09-24T18:07:11.307Z · LW(p) · GW(p)

Tom N. Haverford comes to mind.

comment by siodine · 2012-09-21T03:31:07.127Z · LW(p) · GW(p)

The problem is that we don't know how influential the blind spot is. It could just fade away after a couple minutes and a "hey, wait a minute..." But assuming it sticks:

If I were a car salesman, I would have potential customers tell me their ideal car and then I would tell them what I want their ideal car to be, as though I were simply restating what they had just said.

If I were a politician, I would target identities (e.g., Latino, pro-life, low taxes, etc.) rather than individuals, because identities are made of choices and they're easier to target than individuals. The identity makes a choice and then you assume the identity chose you. E.g., "President Obama has all but said that I'm instigating "class warfare," or that I don't care about business owners, or that I want to redistribute wealth. Well, Mr. Obama, I am fighting with and for the 99%; the middle class; the inner city neighborhoods that your administration has forgotten; Latinos; African-Americans. We all have had enough of the Democrats' decades-long deafness towards our voice. Vote Romney." Basically, you take the opposition's reasons for not voting for you, recast them as reasons against the opposition, and run the ads in the areas you want to affect.

Replies from: synkarius
comment by synkarius · 2012-09-21T05:00:27.689Z · LW(p) · GW(p)

I don't like either presidential candidate. I need to say that before I say this: using current rather than past political examples is playing with fire.

Replies from: siodine
comment by siodine · 2012-09-21T13:36:46.969Z · LW(p) · GW(p)

I completely agree with you; there shouldn't be any problems discussing political examples where you're only restating a campaign's talking points rather than supporting one side or the other.

comment by Alejandro1 · 2012-09-20T18:59:33.659Z · LW(p) · GW(p)

For example, if you are a US presidential candidate, what tactics would you use to invisibly switch voters' choice to you?

I vaguely remember that when a president becomes very widely accepted as a good or bad president, many people will misremember that they voted for or against him, respectively; e.g., far fewer people would admit (even to themselves) to having voted for Nixon than actually did. If this is so, then maybe the answer is simply "Win, and be a good president".

Replies from: shminux
comment by shminux · 2012-09-20T19:02:20.819Z · LW(p) · GW(p)

"Win, and be a good president"

That would not be an instrumentally useful campaigning strategy.

comment by TimS · 2012-09-20T18:24:52.932Z · LW(p) · GW(p)

Now I'm alternating between laughing and crying. :(

Replies from: Epiphany
comment by Epiphany · 2012-09-21T07:20:52.288Z · LW(p) · GW(p)

Awww. I might have discovered a flaw in this study, TimS. Here you go

comment by Epiphany · 2012-09-21T09:11:47.217Z · LW(p) · GW(p)

Imagine answering a question like "I think such-and-such candidate is not a very good person" and then being given a button where you can automatically post it to your twitter / facebook. When you read the post on your twitter, it says "I think such-and-such candidate is a very good person," but you don't notice the wording has changed. :/

I wonder if people would feel compelled to confabulate reasons why they posted that on their accounts. It might set off their "virus" radars because of the online context and therefore not trigger the same behavior.

comment by Epiphany · 2012-09-21T07:50:09.010Z · LW(p) · GW(p)

Dark Tactic:

  1. An unwitting research company could be contracted to do a survey by an unethical organization.
  2. The survey could use the trick whereby it asks some question that people will mostly say "yes" to and then asks a similar question later where the wording is slightly changed to agree with the viewpoint of the unethical organization.
  3. Most people end up saying they agree with the viewpoint of the unethical organization.
  4. The reputation of the research company is abused as the unethical organization claims they "proved" that most people agree with their point of view.
  5. A marketing campaign is devised around the false evidence that most people agree with them.

They already trick people in less expensive ways, though. I was taught in school that they'll do things like ask 5 doctors whether they recommend something and then say "4 of 5 doctors recommend this" to imply 4 of every 5 doctors, when their sample was way too small.

comment by simplicio · 2012-09-21T23:22:38.033Z · LW(p) · GW(p)

One of the most audacious and famous experiments is known informally as "the door study": an experimenter asks a passerby for directions but is interrupted by a pair of construction workers carrying an unhinged door, concealing another person who replaces the experimenter as the door passes. Incredibly, the person giving directions rarely notices that they are now talking to a completely different person. This effect was reproduced by Derren Brown on British TV (here's an amateur re-enactment).

I think the response of the passerby is quite reasonable, actually. Confronted with a choice between (a) "the person asking me directions was just spontaneously replaced by somebody different, also asking me directions," and (b) "I just had a brain fart," I'll consciously go for (b) every time, especially considering that I make similar mistakes all the time (confusing people with each other immediately after having encountered them). I know that this is probably not a phenomenon that occurs at the conscious level, but we should expect the unconscious level to be even more automatic.

Replies from: MaoShan, jimmy, robert-miles, Haladdin
comment by MaoShan · 2012-09-24T02:05:27.635Z · LW(p) · GW(p)

...Confronted with a choice between (a) "the person asking me directions was just spontaneously replaced by somebody different, also asking me directions," and (b) "I just had a brain fart," I'll consciously go for (a) every time, especially considering that I observe similar phenomena all the time (people spontaneously replacing each other immediately after having encountered them). ...

I'm curious, why do you take that view?

Replies from: simplicio, Swimmer963, RobFisher
comment by simplicio · 2012-09-24T11:50:04.404Z · LW(p) · GW(p)

Missed it on the first read-through, heh. Excellent try.

Replies from: army1987
comment by A1987dM (army1987) · 2012-09-24T22:28:01.585Z · LW(p) · GW(p)

I didn't notice until I read Swimmer963's comment. I did remember reading its parent and did remember that it said something sensible, so when I read the altered quotation I took it to be ironic.

comment by Swimmer963 (Miranda Dixon-Luinenburg) (Swimmer963) · 2012-09-24T03:05:53.914Z · LW(p) · GW(p)

Am I the only one who's really confused that this comment is quoting text that is different than the excerpt in the above comment?

Replies from: Alejandro1, Zaine
comment by Alejandro1 · 2012-09-24T03:43:45.755Z · LW(p) · GW(p)

Shhhhh! You're ruining the attempt at replication!

comment by Zaine · 2012-09-24T03:28:27.109Z · LW(p) · GW(p)

No. Maybe Mao is joking?

comment by RobFisher · 2012-09-29T08:02:31.472Z · LW(p) · GW(p)

I didn't notice at first, but only because I did notice that you were quoting the comment above, which I had just read, and so skipped over the quote.

comment by jimmy · 2012-09-24T20:57:56.166Z · LW(p) · GW(p)

What a coincidence, this happened to me with your comment! I originally read your name as "shminux" and was quite surprised when I reread it.

If there's some coding magic going on behind the scenes, you've got me good. But I'm sticking with (b) - final answer.

Replies from: shminux
comment by shminux · 2012-09-24T22:23:07.601Z · LW(p) · GW(p)

originally read your name as "shminux" and was quite surprised when I reread it.

For the record, I fully endorse simplicio's analysis :)

comment by Robert Miles (robert-miles) · 2012-09-25T18:07:42.582Z · LW(p) · GW(p)

A rational prior for "the person asking me directions was just spontaneously replaced by somebody different, also asking me directions" would be very small indeed (that naturally doesn't happen, and psych experiments are rare). A rational prior for "I just had a brain fart" would be much bigger, since that sort of thing happens much more often. So at the end, a good Bayesian would assign a high probability to "I just had a brain fart", and also a high probability to "This is the same person" (though not as high as it would be without the brain fart).

The problem is that the conscious mind never gets the "I just had a brain fart" belief. The error is unconsciously detected and corrected but not reported at all, so the person doesn't even get the "huh, that feels a little off" feeling which is in many cases the screaming alarm bell of unconscious error detection. Rationalists can learn to catch that feeling and examine their beliefs or gather more data, but without it I can't think of a way to beat this effect at all, short of paying close attention to all details at all times.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-09-25T18:59:54.766Z · LW(p) · GW(p)

And a sufficiently large change gets noticed...

Replies from: Decius
comment by Decius · 2012-09-26T17:08:31.665Z · LW(p) · GW(p)

Really? Did any of them refuse to give the camera to the new people, because they weren't the owners of the camera?

Replies from: Alicorn
comment by Alicorn · 2012-09-26T17:34:24.458Z · LW(p) · GW(p)

If you watch the video closely, the camera actually prints out a picture of the old guys, so the old guys are clearly at least involved with the camera in some way.

comment by Haladdin · 2012-09-24T17:15:37.347Z · LW(p) · GW(p)

Confronted with a choice between (a) "the person asking me directions was just spontaneously replaced by somebody different, also asking me directions," and (b) "I just had a brain fart,"

Schizophrenia. Capgras Delusion.

I wonder how schizophrenics would perform on the study in comparison.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-09-25T14:35:36.178Z · LW(p) · GW(p)

A man who'd spent some time institutionalized said that the hell of it was that half of what you were seeing was hallucinations and the other half was true things that people won't admit to. Unfortunately, I didn't ask him for examples of the latter.

Replies from: thomblake
comment by thomblake · 2012-09-25T15:31:54.333Z · LW(p) · GW(p)

Unfortunately, I didn't ask him for examples of the latter.

Or perhaps fortunately!

comment by MixedNuts · 2012-09-20T08:40:01.136Z · LW(p) · GW(p)

Can someone sneakily try this on me? I like silly questionnaires, polls, and giving opinions, so it should be easy.

Replies from: MileyCyrus
comment by MileyCyrus · 2012-09-20T17:34:02.073Z · LW(p) · GW(p)

You said in a previous thread that after a hard day of stealing wifi and lobbying for SOPA, you and Chris Brown like to eat babies and foie gras together. Can you explain your moral reasoning behind this?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-20T18:16:21.834Z · LW(p) · GW(p)

The geese and babies aren't sentient, wifi costs the provider very little, that's actually a different Chris Brown, and I take the money I get paid lobbying for SOPA and donate it to efficient charities!

(Sorry, couldn't resist when I saw the "babies" part.)

Replies from: jeremysalwen
comment by jeremysalwen · 2012-09-21T01:44:11.893Z · LW(p) · GW(p)

I'll make sure to keep you away from my body if I ever enter a coma...

Replies from: Incorrect
comment by Incorrect · 2012-09-21T04:58:43.619Z · LW(p) · GW(p)

Oh don't worry, there will always be those little lapses in awareness. Even supposing you hide yourself at night, are you sure you maintain your sentience while awake? Ever closed your eyes and relaxed, felt the cool breeze, and for a moment, forgot you were aware of being aware of yourself?

comment by loup-vaillant · 2012-09-20T07:47:15.344Z · LW(p) · GW(p)

Now that's one ultimate rationalization. The standard pattern is to decide (or prefer) something for one reason, then confabulate more honourable reasons why we decided (or preferred) thus.

But confabulating for something we didn't even decide… that's taking things up a notch.

I bet the root problem is the fact that we often resolve cognitive dissonance before it even hits the conscious level. Could we train ourselves to notice such dissonance instead?

Replies from: DaFranker
comment by DaFranker · 2012-09-20T19:21:35.952Z · LW(p) · GW(p)

Could we train ourselves to notice such dissonance instead?

This needs to get a spot in CFAR's training program(s/mme(s)?). It sounds like the first thing you'd want to do once you reach the rank of second-circle Initiate in the Bayesian Conspiracy. Or maybe the first part of the test to attain this rank.

comment by Epiphany · 2012-09-21T07:06:07.845Z · LW(p) · GW(p)

An alternate explanation:

Maybe the years of public schooling that most of us receive cause us to trust papers so much that if we see something written down on a paper, we feel uncomfortable opposing it. If you're threatened with punishment for not regurgitating what is on an authority's papers daily for that many years of your life, you're bound to be classically conditioned to behave as if you agree with papers.

So maybe what's going on is this:

  1. You fill out a scientist's paper.

  2. The paper tells you your point of view. It looks authoritative because it's in writing.

  3. You feel uncomfortable disagreeing with the authority's paper. School taught you this was bad.

  4. Now the authority wants you to support the opinion they think is yours.

  5. You feel uncomfortable with the idea of failing to show the authority that you can support the opinion on the paper. (A teacher would not have approved - and you'd look stupid.)

  6. You might want to tell the authority that it's not your opinion, but they have evidence that you believe it - it's in writing.

  7. You behave according to your conditioning by agreeing with the paper, and do as expected by supporting what the researcher thinks your point of view is.

I think this might just be an external behavior meant to maintain approval of an authority, not evidence that they've truly changed their minds.

I wonder what would happen if the study were re-done in a really casual way with, say, crayon-scrawled questions on scraps of napkins instead of authoritative-looking papers.

Also, I wonder how much embarrassment it caused when participants appeared to have filled out the answers all wrong, and how that embarrassment might have influenced their behavior. Imagine you're filling out a paper (reminiscent of taking a test in school) but you filled out the answers all wrong. Horrified by the huge number of mistakes you made, might you try to hide it by pretending you meant to fill them out that way?

Replies from: orthonormal
comment by orthonormal · 2012-09-21T15:46:04.842Z · LW(p) · GW(p)

It seems to me that this hypothesis is more of a mechanism for choice blindness than an alternate explanation: we already know that human beings will change their minds (and forget they've done so) in order to please authority.

(There's nonfictional evidence for this, but I need to run, so I'll just mention that we've always been at war with Oceania.)

Replies from: Epiphany
comment by Epiphany · 2012-09-21T17:30:51.164Z · LW(p) · GW(p)

What I'm saying is "maybe they're only pretending to have an opinion that's not theirs," not "they've changed their minds for authority," so I still think it is an alternate explanation for the results.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-21T19:01:45.448Z · LW(p) · GW(p)

IIRC, part of the debriefing protocol for the study involved explaining the actual purpose of the study to the subjects and asking them if there were any questions where they felt the answers had been swapped. If they at that point identified a question as having fallen into that category, it was marked as retrospectively corrected, rather than uncorrected.

Of course, they could still be pretending, perhaps out of embarrassment over having been rooked.

Replies from: Epiphany
comment by Epiphany · 2012-09-21T20:00:00.793Z · LW(p) · GW(p)

I'm having trouble interpreting what your point is. It seems like you're saying "because they were encouraged to look for swapped questions beforehand, Epiphany's point might not be valid"; however, what I read stated: "After the experiment, the participants were fully debriefed about the true purpose of the experiment." So it may not have even occurred to most of them to wonder whether the questions had been swapped at the point when they were giving confabulated answers.

Does this clarify anything? It seems somebody got confused. Not sure who.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-21T20:44:09.643Z · LW(p) · GW(p)

IIRC, questions that were scored as "uncorrected" were those that, even after debriefing, subjects did not identify as swapped.
So if Q1 is scored as uncorrected, part of what happened is that I gave answer A to Q1, it's swapped for B, I explained why I believe B, I was afterwards informed that some answers were swapped and asked whether there were any questions I thought that was true for, even if I didn't volunteer that judgment at the time, and I don't report that this was true of Q1.
If I'm only pretending to have an opinion (B) that's not mine about Q1, the question arises of why I don't at that time say "Oh, yeah, I thought that was the case about Q1, since I actually believe A, but I didn't say anything at the time."

As I say, though, it's certainly possible... I might continue the pretense of believing B.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-20T14:14:10.152Z · LW(p) · GW(p)

Move to Main, please!

comment by Desrtopa · 2012-09-21T18:55:42.532Z · LW(p) · GW(p)

I have to wonder if many of the respondents in the survey didn't hold any position with much strength in the first place. Our society enforces the belief, not only that everyone is entitled to their opinions, but that everyone should have an opinion on just about any issue. People tend to stand by "opinions" that are really just snap judgments, which may be largely arbitrary.

If the respondents had little basis for determining their responses in the first place, it's unsurprising if they don't notice when they've been changed, and that it doesn't affect their ability to argue for them.

Replies from: k3nt, simplicio, Unnamed
comment by k3nt · 2012-09-22T19:22:32.731Z · LW(p) · GW(p)

But the study said:

"The statements in condition two were picked to represent salient and important current dilemmas from Swedish media and societal debate at the time of the study."

Replies from: Vaniver
comment by Vaniver · 2012-09-23T06:45:22.539Z · LW(p) · GW(p)

But the study said:

Even then, people can fail to have strong opinions on issues in current debate; I know my opinions are silent on many issues that are 'salient and important current dilemmas' in American society.

Replies from: MaoShan, TheOtherDave
comment by MaoShan · 2012-09-24T02:10:33.829Z · LW(p) · GW(p)

I remember an acquaintance of mine in high school (maybe it was 8th grade) replied to a teacher's question with "I'm Pro-who cares". He was strongly berated by the teacher for not taking a side, when I honestly believe he had no reason to care either way.

comment by TheOtherDave · 2012-09-23T07:12:49.302Z · LW(p) · GW(p)

IIRC, the study also asked people to score how strongly they held a particular opinion, and found a substantial (though lower) rate of missed swaps for questions they rated as strongly held.

I would not expect that result were genuine indifference among options the only significant factor, although I suppose it's possible people just mis-report the strengths of their actual opinions.

comment by simplicio · 2012-09-21T23:35:41.285Z · LW(p) · GW(p)

Quite. My own answer to most of the questions in the survey is "Yes/No, but with the following qualifications." It's not too hard for me to imagine choosing, say, "Yes" to the surveillance question (despite my qualms), then being told I said "No," and believing it.

You won't fool these people if you ask them about something salient like abortion.

Replies from: MugaSofer
comment by MugaSofer · 2012-09-26T12:23:19.651Z · LW(p) · GW(p)

Abortion is a complex issue. You could probably change someone's position on one aspect of the abortion debate, such as a hardline pro-lifer "admitting" that it's OK in cases where the mother's life is in danger.

comment by Unnamed · 2012-09-25T22:00:46.625Z · LW(p) · GW(p)

There is a long tradition in social science research, going back at least to Converse (1964), holding that most people's political views are relatively incoherent, poorly thought-through, and unstable. They're just making up responses to survey questions on the spot, in a way that can involve a lot of randomness.

This study demonstrates that plus confabulation, in a way that is particularly compelling because of the short time scale involved and the experimental manipulation of what opinion the person was defending.

comment by Johnicholas · 2012-09-21T12:28:40.515Z · LW(p) · GW(p)

There are cognitive strategies that (heuristically) take advantage of the usually-persistent world. Should I be embarrassed, after working and practicing with pencil and paper to solve arithmetic problems, that I do something stupid when someone changes the properties of pencil and paper from persistent to volatile?

What I'd like to see is more aboveboard stuff. Suppose that you notify people that you're showing them possibly-altered versions of their responses. Can they identify which things were changed when explicitly alerted? Do they still confabulate (probably)? Are the questions on which they still confabulate ones they're more uncertain about -- more ambiguous wording, more judgement required?

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-21T19:11:14.010Z · LW(p) · GW(p)

I don't have citations handy, but IIRC in general inattentional blindness effects are greatly diminished if you warn people ahead of time, which should not be surprising. I don't know what happens if you warn people between the filling-out-the-questionnaire stage and the reading-the-(possibly altered)-answers stage; I expect you'd get a reduced rate of acceptance of changed answers, but you'd also get a not-inconsiderable rate of rejection of unchanged answers.

More generally: we do a lot of stuff without paying attention to what we're doing, but we don't keep track of what we did or didn't pay attention to, and on later recollection we tend to confabulate details into vague memories of unattended-to events. This is a broken system design, and it manifests in a variety of bugs that are unsurprising once we let go of the intuitive but false belief that memory is a process of retrieving recordings into conscious awareness.

It frequently startles me how tenacious that belief is.

comment by Epiphany · 2012-09-21T08:34:08.526Z · LW(p) · GW(p)

Another explanation:

Might this mean they trust external memories of their opinions more than their own memories? Know what that reminds me of? Ego. Some people trust others more than themselves when it comes to their view of themselves. And that's why insults hurt, isn't it? Because they make you doubt yourself. Maybe people do this because of self-doubt.

comment by shminux · 2012-09-20T16:53:26.127Z · LW(p) · GW(p)

1 karma point to anyone who links to a LW thread showing this effect (blind change of moral choice) in action. 2 karma points if you catch yourself doing it in such a thread.

Replies from: shminux
comment by shminux · 2012-09-20T16:57:35.136Z · LW(p) · GW(p)

A real-life example of a similar effect: I explained the Newcomb problem to a person and he two-boxed initially, then, after some discussion, he switched to one-boxing and refused to admit that he ever two-boxed.

Replies from: jimmy, RichardHughes
comment by jimmy · 2012-09-21T07:08:43.753Z · LW(p) · GW(p)

This is common enough that I specifically watch out for it when asking questions that people might have some attachment to. Just today I didn't even ask because I knew I was gonna get a bogus "I've always thought this" answer.

I know a guy who "has always been religious" ever since he almost killed himself in a car crash.

My mom went from "Sew it yourself" to "Of course I'll sew it for you, why didn't you ask me earlier?" a couple weeks later because she offered to sew something for my brother in law, which would make her earlier decision incongruent with her self image. Of course, she was offended when I told her that I did :p

Replies from: pjeby
comment by pjeby · 2012-09-25T17:51:47.835Z · LW(p) · GW(p)

I know a guy who "has always been religious" ever since he almost killed himself in a car crash.

My wife, not long before she met me, became an instant non-smoker and was genuinely surprised when friends offered her cigarettes -- she had to make a conscious effort to recall that she had previously smoked, because it was no longer consistent with her identity, as of the moment she decided to be a non-smoker.

This seems to be such a consistent feature of brains under self-modification that the very best way to know whether you've really changed your mind about something is to see how hard it is to think the way you did before, or how difficult it is to believe that you ever could have thought differently.

Replies from: thomblake
comment by thomblake · 2012-09-25T18:15:59.422Z · LW(p) · GW(p)

It's the best way I've seen to quit smoking - it seems to work every time. The ex-smoker says "I'm a non-smoker now" and starts badmouthing smokers - shortly they can't imagine doing something so disgusting and inconsiderate as smoking.

Replies from: wedrifid
comment by wedrifid · 2012-09-25T21:49:58.245Z · LW(p) · GW(p)

It's the best way I've seen to quit smoking - it seems to work every time.

The second of these claims would be extremely surprising to me, even if weakened to '90% of the time' to allow for figures of speech. Even a success rate of 50% would be startling. I don't believe it.

Replies from: thomblake, pjeby
comment by thomblake · 2012-09-26T13:48:32.417Z · LW(p) · GW(p)

It's not surprising to me, though I imagine it's vulnerable to massive selection effect. My observation is about people who actually internalized being a non-smoker, not those who tried to do so and failed. I'm not surprised those two things are extremely highly correlated. So it might not be any better as strategy advice than "the best way to quit smoking is to successfully quit smoking".

comment by pjeby · 2012-09-26T04:05:10.956Z · LW(p) · GW(p)

Even a success rate of 50% would be startling. I don't believe it.

Which is ironic, because the Wikipedia page you just linked to says that "95% of former smokers who had been abstinent for 1–10 years had made an unassisted last quit attempt", with the most frequent method of unassisted quitting being "cold turkey", about which it was said that:

53% of the ex-smokers said that it was "not at all difficult" to stop

Of course, the page also says that lots of people don't successfully quit, which isn't incompatible with what thomblake says. Among people who are able to congruently decide to become non-smokers, it's apparently one of the easiest and most successful ways to do it.

It's just that not everybody can decide to be a non-smoker, or it may not even occur to them to do so.

Anecdotally, my wife said that she'd "quit smoking" several times prior, each time for extrinsic reasons (e.g. dating a guy who didn't smoke, etc.). When she "became a non-smoker" instead (as she calls it), she did it for her own reasons. She says that as soon as she came to the conclusion that she needed to stop for good, she decided that "quitting smoking" wasn't good enough to do the job, and that she would have to become a non-smoker instead. (That was over 20 years ago, fwiw.)

I'm not sure how you'd go about prescribing that people do this: either they have an intrinsic desire to do it or not. You can certainly encourage and assist, but intrinsic motivation is, well, intrinsic. It's rather difficult to decide on purpose to do something of your own free will, if you're really trying to do it because of some extrinsic reason. ;-)

Replies from: Vaniver
comment by Vaniver · 2012-09-26T04:13:34.965Z · LW(p) · GW(p)

Which is ironic, because the Wikipedia page you just linked to says that "95% of former smokers who had been abstinent for 1–10 years had made an unassisted last quit attempt", with the most frequent method of unassisted quitting being "cold turkey", about which it was said that:

wedrifid is asking for P(success|attempt), not P(attempt|success), and so a high P(attempt|success) isn't ironic.

comment by RichardHughes · 2012-09-20T19:13:13.650Z · LW(p) · GW(p)

Can you provide more info about the event?

Replies from: shminux
comment by shminux · 2012-09-20T20:05:10.511Z · LW(p) · GW(p)

I presented the paradox (the version where you know of 1000 previous attempts all confirming that the Predictor is never wrong), answered the questions, cut off some standard ways to weasel out, then asked for the answer and the justification, followed by a rather involved discussion of free will, outside vs inside view, then returned to the question. What I heard was "of course I would one-box". "But barely an hour ago you were firmly in the two-boxing camp!". Blank stare... "Must have been a different problem!"

Replies from: fubarobfusco
comment by fubarobfusco · 2012-09-20T22:07:45.465Z · LW(p) · GW(p)

Denying all connection to a possible alternate you who would two-box might be some sort of strategy ...

comment by TheOtherDave · 2012-09-20T14:34:07.444Z · LW(p) · GW(p)

This ought not surprise me. It is instructive how surprising it nevertheless is.

comment by pinyaka · 2012-09-21T12:14:31.307Z · LW(p) · GW(p)

I wonder how long-lived the new opinions are.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-09-21T19:15:12.713Z · LW(p) · GW(p)

Relatedly, I wonder how consistent people's original answers to these questions are (if, say, retested a month later). But I would expect answers the subjects are asked to defend/explain (whether original or changed) to be more persistent than answers they aren't.

comment by Lightwave · 2012-09-20T20:48:21.106Z · LW(p) · GW(p)

Kid version of choice blindness. :D

Replies from: Alejandro1
comment by Alejandro1 · 2012-09-21T18:37:53.023Z · LW(p) · GW(p)

Rabbit season!

comment by Kawoomba · 2012-09-20T06:40:43.909Z · LW(p) · GW(p)

Similar to your first video, here's the famous "count how often the players in white pants pass the ball" test (Simons & Chabris 1999).

Incredibly, if you weren't primed to look for something unexpected, you probably wouldn't notice. I've seen it work firsthand in cogsci classes.

Replies from: Lapsed_Lurker, nerfhammer
comment by Lapsed_Lurker · 2012-09-20T07:46:12.467Z · LW(p) · GW(p)

Even having watched the video before, when I concentrated hard on counting passes, I missed seeing it.

comment by nerfhammer · 2012-09-20T19:58:44.807Z · LW(p) · GW(p)

This is "inattention blindness". Choice blindness is sort of like the opposite; in inattention blindness you don't notice something you're not paying attention to, in choice blindness you don't notice something which you are paying attention to.

Replies from: Kawoomba
comment by Kawoomba · 2012-09-21T08:03:19.956Z · LW(p) · GW(p)

Edit: I didn't really understand your above definition of choice blindness versus inattentional blindness; scholarpedia has a good contrasting definition:

Change blindness refers to the failure to notice something different about a display whereas inattentional blindness refers to a failure to see something present in a display. Although these two phenomena are related, they are also distinct.

Change blindness inherently involves memory — people fail to notice something different about the display from one moment to the next; that is, they must compare two displays to spot the change. The signal for change detection is the difference between two displays, and neither display on its own can provide evidence that a change occurred.

In contrast, inattentional blindness refers to a failure to notice something about an individual display. The missed element does not require memory – people fail to notice that something is present in a display.

In a sense, most inattentional blindness tasks could be construed as change blindness tasks by noting that people fail to see the introduction of the unexpected object (a change – it was not present before and now it is). However, inattentional blindness specifically refers to a failure to see the object altogether, not to a failure to compare the current state of a display to an earlier state stored in memory.

comment by Lightwave · 2012-09-20T21:41:05.580Z · LW(p) · GW(p)

One interpretation is that many people don't have strongly held or stable opinions on some moral questions and/or don't care. Doesn't sound very shocking to me.

Maybe morality is extremely context-sensitive in many cases, so polls on general moral questions are not all that useful.

Replies from: Ezekiel
comment by Ezekiel · 2012-09-21T07:58:09.198Z · LW(p) · GW(p)

The study asked people to rate their position on a 9-point scale. People who took more extreme positions, while more likely to detect the reversal, also gave the strongest arguments in favour of the opposite opinion when they failed to detect the reversal.

Also, the poll had two kinds of questions. Some of them were general moral principles, but some of them were specific statements.

Replies from: Lightwave
comment by Lightwave · 2012-09-21T19:34:56.913Z · LW(p) · GW(p)

Some of them were general moral principles, but some of them were specific statements.

Trolley problems are also very specific, but people have great trouble with them. Maybe I should have said "non-familiar" rather than just "general".

Replies from: k3nt
comment by k3nt · 2012-09-22T19:29:23.601Z · LW(p) · GW(p)

If you read the study, they say that the "specific" questions they are asking are questions that were very salient at the time of the study. These are things that people were talking about and arguing about at the time, and were questions with real-world implications. Thus precisely not "trolley problems."

comment by RichardHughes · 2012-09-20T14:48:30.443Z · LW(p) · GW(p)

It strikes me that performing this experiment on people, then revealing what has occurred, may be a potentially useful method of enlightening people to the flaws of their cognition. How might we design a 'kit' to reproduce this sleight of hand in the field, so as to confront people with it usefully?

Replies from: NancyLebovitz, Armok_GoB, nerfhammer, niceguyanon
comment by NancyLebovitz · 2012-09-20T18:28:47.278Z · LW(p) · GW(p)

It would be easy enough to do with a longish computer survey. It's much easier to change what appears on a screen than to do sleight-of-paper.

comment by Armok_GoB · 2012-09-20T18:11:52.452Z · LW(p) · GW(p)

For added fun metaness, have the option you switch them to (and that they start rationalizing for) be the one you're trying to convince them of :p

comment by nerfhammer · 2012-09-20T17:00:09.649Z · LW(p) · GW(p)

The video shows the mechanics of how it works pretty well.

comment by niceguyanon · 2012-09-20T16:08:37.034Z · LW(p) · GW(p)

I suspect that those who are most susceptible to moral-proposition switches, and to subsequently defending the switch, are also the same people who will deny the evidence when confronted with it. Much as with the Dunning–Kruger effect, there will be people who fail to recognize the extremity of their inadequacy, even when confronted with evidence of it.

Edit: The paper states that they informed all participants of the true nature of the survey, but it does not go into detail on whether participants actually acknowledged that their moral propositions were switched.

comment by SilasBarta · 2012-09-21T21:59:34.176Z · LW(p) · GW(p)

I thought I might mention a sort-of similar thing, though done more for humor: the Howard Stern Show interviewed people in an area likely to favor a certain politician, asking them if they supported him because of position X, or position Y (both of which he actually opposed).

(If you remember this, go ahead and balk at the information I left out.)

Replies from: simplicio
comment by simplicio · 2012-09-21T23:28:18.592Z · LW(p) · GW(p)

This is indeed amusing, but the author draws a wrong/incomplete/tendentious conclusion from it. I think the proper conclusion is basically our usual "Blue vs Green" meme, plus some Hansonian cynicism about 'informed electorates.'

comment by Epiphany · 2012-09-21T19:45:32.008Z · LW(p) · GW(p)

Clarifying question: did they actually change their minds on moral positions, or did this study just give the appearance that they changed their minds? This is a question we need to be asking as we look for meaning in this information, but not everyone here is thinking to ask it. Even when I proposed an alternate explanation showing how this could give the false appearance of people changing their minds when they did not, I got a response from somebody who didn't seem to realize I had just explained why this result might be due to people pretending to support views they do not hold. (I have since made this even more explicit.) I think it might be a good idea to include the clarifying question at the end of the original post.

comment by RobinZ · 2012-09-20T18:02:43.041Z · LW(p) · GW(p)

There was a high level of inter-rater agreement between the three raters for the NM reports (r = .70) as well as for the M reports (r = .77), indicating that there are systematic patterns in the verbal reports that correspond to certain positions on the rating scale for both NM and M trials. Even more interestingly, there was a high correlation between the raters' estimates and the original ratings of the participants for NM (r = .59) as well as for M reports (r = .71), which indicates that the verbal reports in the M trials do in fact track the participants' rated level of agreement with the opposite of the initial moral principle or issue [emphasis added] (for an illustration of this process and example reports, see figure S1, Supporting Online Material). In addition, this relationship highlights the logic of the attitude reversal, in that more modest positions result in verbal reports expressing arguments appropriate for the same region on the mirror side of the scale. And while extreme reversals more often are detected, the remaining non-detected trials also create stronger and more dramatic confabulations for the opposite position.

Am I misreading this, or does it say that the verbal statements of people supporting an inverted opinion fit that opinion better than those describing their genuine opinion?

Replies from: None, Epiphany, Ezekiel
comment by [deleted] · 2012-09-21T18:50:31.162Z · LW(p) · GW(p)

Konkvistador's LessWrong improvement algorithm

  1. Trick brilliant but contrarian thinker into mainstream position.
  2. Trick brilliant but square thinker into contrarian position.
  3. Have each write an article defending their take.
  4. Enjoy improved rationalist community.
Replies from: army1987
comment by A1987dM (army1987) · 2012-09-22T14:03:32.905Z · LW(p) · GW(p)

Now, go ahead and implement that!

comment by Epiphany · 2012-09-21T08:21:24.288Z · LW(p) · GW(p)

Consider this: if you're supporting your own genuine opinion, you might have your own carefully chosen perspective that differs slightly from the question's wording. You select that answer only because it's the closest of the options, not because it's exactly your answer. So you may be inclined to say things that are related but don't fit the question exactly. If you're confabulating to support a random opinion, though, what do you have to go by but the wording? The opinion is directing your thoughts, leading them to fit it; you aren't trying to cram pre-existing thoughts into an opinion box to make it fit your view.

Or looking at it another way:

When expressing your point of view, the important thing is to express what you feel, regardless of whether it fits the exact question.

When supporting "your" point because you don't want to look like an idiot in front of a researcher, the objective is to support it as precisely as possible, not to express anything.

As for whether your interpretation of that selection is correct: it's past my bedtime and I'm getting drowsy, so someone else should answer that part instead.

comment by Ezekiel · 2012-09-21T08:06:12.581Z · LW(p) · GW(p)

I think it does. Can't believe I missed that.

Actually, this fits well with my personal experience. I've frequently found it easier to verbalize sophisticated arguments for the other team, since my own opinions just seem self-evident.

comment by undermind · 2012-09-30T22:40:37.470Z · LW(p) · GW(p)

Gaslighting.

Seriously, there's already a well-established form of psychological abuse founded on this principle. It works, and it's hard to see how to take it much further into the Dark Arts.

comment by Fyrius · 2012-09-23T16:32:09.281Z · LW(p) · GW(p)

concealing another person whom replaces the experimenter as the door passes.

(Very minor and content-irrelevant point here, but my grammar nazi side bids me to say it, at the risk of downvotery: it should be "who" here, not "whom", since it's the subject of the relative clause.)

comment by drethelin · 2012-09-20T20:19:08.393Z · LW(p) · GW(p)

A side effect of this is to reinforce the importance of writing about the Obvious, because things seem obvious after we've learned them, and we literally have trouble thinking about not knowing or not viewing things in a certain way.

Replies from: shminux
comment by shminux · 2012-09-20T20:20:20.671Z · LW(p) · GW(p)

Especially if the Obvious turns out to be wrong.

Replies from: drethelin
comment by drethelin · 2012-09-20T20:24:46.375Z · LW(p) · GW(p)

Sure. Either way actively talking about the obvious is useful.

comment by Epiphany · 2012-09-22T07:43:09.112Z · LW(p) · GW(p)

"You Can't Not Believe Everything You Read", from the Journal of Personality and Social Psychology, might contain the beginnings of another alternative explanation for this.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-09-22T09:02:41.458Z · LW(p) · GW(p)

Great paper, although (annoyingly) they conflate democracy with liberalism.

comment by RHollerith (rhollerith_dot_com) · 2012-09-22T00:15:43.991Z · LW(p) · GW(p)

So if I defraud someone by pretending to sell them an iPad for $100 but pocketing the $100 instead, I am more likely to get away with the fraud if, instead of straightforwardly offering them an iPad, I set up a shady charity and offer them a choice between buying the iPad and donating $100 to the shady charity (provided that it's sufficiently easy for me to extract money from the charity).

comment by physicsgirl · 2013-01-27T00:20:42.417Z · LW(p) · GW(p)

This stuff actually works; I just did an experiment on it with tea and jam. It's so crazy.

Replies from: gwern, army1987
comment by gwern · 2013-01-27T01:46:34.005Z · LW(p) · GW(p)

Details?

comment by A1987dM (army1987) · 2013-01-27T01:46:33.647Z · LW(p) · GW(p)

I just did an experiment on it with tea and jam

What kind of experiment?

comment by learnmethis · 2012-09-26T01:51:13.822Z · LW(p) · GW(p)

Also known as the "people can't remember things without distinctive features" phenomenon. Still interesting to note participants' behaviours in the situation, though.

comment by blogospheroid · 2012-09-21T06:45:17.654Z · LW(p) · GW(p)

Wow!

I don't bandy the term "sheeple" about very frequently. But here it might just be appropriate.

Replies from: Richard_Kennaway, Epiphany, Ezekiel, TraderJoe
comment by Richard_Kennaway · 2012-09-21T09:34:08.139Z · LW(p) · GW(p)

No-one says "sheeple" intending to include themselves. Do you have any reason to think you are immune from this effect?

Replies from: blogospheroid
comment by blogospheroid · 2012-09-22T04:45:27.004Z · LW(p) · GW(p)

Actually, yes. I would think that I would be relatively immune to this effect in the domain of morality, because I have thought about morality, and quite often.

Maybe in a field that I didn't have much knowledge about, if I were asked to give opinions and this kind of thing were pulled on me, I would succumb, and quite badly, I admit. But I wouldn't feel that bad.

I guess my main takeaway from this is that most people don't care enough about morality to stop and think for a while. They go with the flow, and that is why I said "sheeple".

I am in no way saying that I am the purest and most moral person on earth. I am most definitely not living my life in accordance with my highest values. But I have fairly high confidence that I will not succumb to this effect, at least in the domain of moral questions.

comment by Epiphany · 2012-09-21T08:59:57.104Z · LW(p) · GW(p)

That's what I thought at first too, but on second thought, I don't think they went far enough to confirm that this actually causes people to change their opinions. There are other reasons people might act the way they did.

comment by Ezekiel · 2012-09-21T08:01:42.563Z · LW(p) · GW(p)

I suspect sheep would be less susceptible to this sort of thing than humans.

comment by TraderJoe · 2012-09-21T08:11:29.475Z · LW(p) · GW(p)

For logic this woolly, I agree...