File Under "Keep Your Identity Small"

post by Matt_Simpson · 2012-04-05T18:36:05.380Z · LW · GW · Legacy · 8 comments


We know politics makes us stupid, but now there's evidence (pdf) that politics makes us less likely to consider things from another's point of view. From the abstract:

Replicating prior research, we found that participants who were outside during winter overestimated the extent to which other people were bothered by cold (Study 1), and participants who ate salty snacks without water thought other people were overly bothered by thirst (Study 2). However, in both studies, this effect evaporated when participants believed that the other people under consideration held opposing political views from their own. Participants who judged these dissimilar others were unaffected by their own strong visceral-drive states, a finding that highlights the power of dissimilarity in social judgment. Dissimilarity may thus represent a boundary condition for embodied cognition and inhibit an empathic understanding of shared out-group pain.

As Will Wilkinson notes:

Got that? We overestimate the extent to which others feel what we're feeling, unless they're on another team.

Now this isn't necessarily a negative effect; you might argue that it's bias-correcting. But implicitly viewing people on the other team as so different that it's not even worth thinking about things from their perspective is scary in itself.

8 comments


comment by [deleted] · 2012-04-05T21:12:58.149Z · LW(p) · GW(p)

This could stem from availability bias. The result suggests that we underestimate within-class variance and overestimate between-class variance, perhaps because it is easier to mentally draw a sample opinion from similar-to-me idealizations of people than from different-from-me ones.

For example, I agree with Ron Paul on many political issues, and often when someone says something to me like, "But Ron Paul will adversely affect education; I mean, he's a creationist, for Pete's sake," I tend to be very surprised. I think to myself, "Ron Paul and I have similar views on non-interventionism and fiscal policy... when I mentally simulate what he must think about evolution, well, of course he must believe in it." I am drawing a simulation of Ron Paul's beliefs from a similar-to-me distribution, underestimating the variance among people who are similar to me along certain dimensions.

It is available and easy for me to simulate views that agree with mine, especially if I then attribute them to famous or influential people. And what's even more terrifying is that I then file this away in my brain as if it were evidence that my own beliefs are valid. It is estimating the truth of a claim by appealing to lazy, easy simulation, and then generalizing from my own simulation, which is mostly fictional!

To state it in an entirely obfuscated way that has as its only merit the fact that it helps me contextualize this, the result suggests that my brain is performing some kind of bootstrap procedure to impute the preferences of others on dimensions I have not measured by appealing to dimensions that I have measured, and then using a crappy acceptance-rejection process to actually draw the simulated sample, then filling in the missing dimensional data with my crappy simulation, and then treating those filled-in dimensions like they count as much as genuine observed data would count.
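To make that concrete, here's a minimal sketch of the imputation story (in Python, with every issue name, threshold, and number invented purely for illustration, not taken from the study): predict another person's unmeasured opinions by copying your own whenever the measured dimensions look similar, and note how confident the fabricated answer looks.

```python
import random

# Illustrative only: a person's opinions as positions in [-1, 1] on five issues.
ISSUES = ["fiscal", "intervention", "education", "evolution", "drug_policy"]

def similarity(me, other, measured):
    """Mean agreement on the dimensions we have actually measured."""
    return 1 - sum(abs(me[i] - other[i]) for i in measured) / (2 * len(measured))

def impute(me, other, measured):
    """Guess the other person's positions on unmeasured dimensions.

    If they look similar on the measured dimensions, lazily copy my own
    position (the bias under discussion); otherwise make a vague guess.
    """
    guess = {}
    similar = similarity(me, other, measured) > 0.7
    for i, issue in enumerate(ISSUES):
        if i in measured:
            guess[issue] = other[i]               # genuinely observed
        elif similar:
            guess[issue] = me[i]                  # imputed from myself
        else:
            guess[issue] = random.uniform(-1, 1)  # low-information guess
    return guess

me  = [0.9, -0.8, 0.2, 0.7, 0.5]
ron = [0.8, -0.9, -0.6, -1.0, 0.4]  # agrees with me only on the first two
print(impute(me, ron, measured=[0, 1]))
# The imputation confidently predicts "evolution": 0.7 (my own view),
# while the true value is -1.0: exactly the Ron Paul surprise above.
```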

In fact, unless I am directly contradicted (like in the case of Ron Paul's creationist beliefs, which forced me to recognize how I was simulating his opinions on unknown issues from my own opinions just because we have some overlap), I probably will "remember" these imputed ideas as if they were facts that I learned more directly from evidence.

This could explain a lot of petty disputes (e.g. "You didn't turn off the coffee pot," "Yes, I did"): we rarely recognize when we're reasoning from imputed within-class generalizations; when we do realize it, we rarely challenge it ("what do I think I know, and why do I think I know it?"); and when we do challenge it, we default to believing our imputations must have been grounded in fact, unless we're directly contradicted.

It would be fun to make an iPhone app called ContradictThyself that provides a nice interface for connecting claims you made based on imputed conclusions with whatever data is actually available about them. Even better if it uploaded to a web interface where I could see nice statistical breakdowns of the areas where I am prone to believe my imputations vs. areas where I'm more vigilant.

Can anyone answer how much it would cost to commission people at SIAI or other rationality-promoting organizations to create such an app/website? Is such a thing of general interest? I see it as a good contribution to quantified life: having statistics about when you claim to know something strongly but are in fact incorrect. I'm sure I would have no problem convincing my girlfriend to help me by logging what I say weekly with such an app :)

Replies from: jhuffman
comment by jhuffman · 2012-04-07T03:20:43.281Z · LW(p) · GW(p)

I'm interested in your ideas for such an app - how would you interact with it? The only ideas I come up with amount to window dressing on a journal.

Replies from: None
comment by [deleted] · 2012-04-07T03:41:57.744Z · LW(p) · GW(p)

My ideas for the interaction are not well formed. I'm more interested in what I could data mine. I imagine something like PredictionBook, but miniaturized to daily situations. Maybe it could be a little like a game. You have a circle of friends and you make predictions about things. Say you're all at a restaurant together, and someone claims the U.S. dollar has gone down in real value since leaving the gold standard. You use an iPhone app to state the claim, people in your circle add their predictions, and maybe someone can accept an answer, similar to Stack Overflow. It might require some honor system for your peers to up-vote you when you get it right and down-vote you when you get it wrong.
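A rough sketch of what the underlying data model might look like (every name here is invented for illustration; this is not any existing app's API):

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical data model for the game-like flow described above.
@dataclass
class Prediction:
    user: str
    answer: str
    confidence: float  # 0.0 to 1.0
    votes: int = 0     # peers up-vote if you were right, down-vote if wrong

@dataclass
class Claim:
    author: str
    statement: str
    predictions: list = field(default_factory=list)
    accepted_answer: Optional[str] = None  # set by the circle, Stack Overflow-style

claim = Claim("alice", "The U.S. dollar has gone down in real value "
                       "since leaving the gold standard.")
claim.predictions.append(Prediction("bob", "true", 0.8))
claim.accepted_answer = "true"
claim.predictions[0].votes += 1  # bob called it, so the circle votes him up
```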

But the valuable part for me would be to see a huge time series of my rights and wrongs, broken down by some text analysis of the statements of the claims, and possibly linked to confidence ratings in my answers. If I had a year's worth of data from this app, on questions ranging from "What is Obama's voting record on bills involving the accessibility of birth control?" to "Will Kentucky win the basketball championship?", I think I could gain a lot by seeing how my overconfidence breaks down by subject area, which topics I am less willing to say oops and change my mind about, which areas I am more inclined to guess correctly, and so on.
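A minimal sketch of that data-mining step (the log format, topics, and numbers are made up for illustration; this is not PredictionBook data): group logged predictions by topic and compare stated confidence with the actual hit rate.

```python
from collections import defaultdict

# Hypothetical log format: (topic, stated confidence, was I right?).
log = [
    ("politics",  0.90, False),
    ("politics",  0.80, True),
    ("sports",    0.60, True),
    ("sports",    0.70, False),
    ("economics", 0.95, True),
]

def calibration_by_topic(entries):
    """Compare average stated confidence with the actual hit rate per topic."""
    by_topic = defaultdict(list)
    for topic, confidence, correct in entries:
        by_topic[topic].append((confidence, correct))
    report = {}
    for topic, pairs in by_topic.items():
        avg_conf = sum(c for c, _ in pairs) / len(pairs)
        hit_rate = sum(ok for _, ok in pairs) / len(pairs)
        report[topic] = (avg_conf, hit_rate, avg_conf - hit_rate)
    return report

for topic, (conf, hits, gap) in calibration_by_topic(log).items():
    print(f"{topic}: confidence {conf:.2f}, accuracy {hits:.2f}, "
          f"overconfidence {gap:+.2f}")
```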

It sounds like a fun project: a version of Stack Overflow where you decide which circles of friends can be part of your "honor system" by fact-checking your claims. Presumably making it a Facebook app that posts Twitter-style synopses of the predictions you want to make public would be a good interface; figuring out what kind of textual machine-learning algorithms need to go underneath would be a bit harder.

Replies from: MixedNuts
comment by MixedNuts · 2012-04-09T17:03:38.139Z · LW(p) · GW(p)

PredictionBook?

Replies from: None
comment by [deleted] · 2012-04-09T17:25:02.657Z · LW(p) · GW(p)

Yes, when I originally wrote the above post I was thinking of precisely PredictionBook, except that it would be crucial for it to come in app form, and to get a lot of users it would almost surely have to interface with at least Facebook. I'm not aware that PredictionBook has an app, but I'll check into it. If a donor were looking to fund such an app, PredictionBook is probably the right place to go, not SIAI as I had suggested before.

comment by Shmi (shminux) · 2012-04-05T19:39:28.777Z · LW(p) · GW(p)

Not just politics, any kind of "us vs them" division.

Replies from: Bluehawk, Thomas
comment by Bluehawk · 2012-04-09T01:30:47.127Z · LW(p) · GW(p)

It would be worth testing whether the effect occurs to different extents depending on the type of division, or on how important the test subject believes that particular difference to be.

comment by Thomas · 2012-04-05T20:45:55.302Z · LW(p) · GW(p)

Maybe. But maybe for certain divisions the opposite effect prevails. I wouldn't be so hasty here.