Taking the Outgroup Seriously

post by Davis_Kingsley · 2020-02-16T13:23:42.645Z

(Author's note/content warning: this post contains politically controversial examples, which are unfortunately perhaps necessary given the subject. I have tried to be relatively even-handed in this matter; my apologies if I have erred in doing so.)

Occasionally, one reads a purported "takedown" of an opposing group that perhaps deals more with the writer's own models than with those of the group he intends to refute.

For example, I recently read a widely-shared Quora post that claimed that religious proselytization was part of an elaborate brainwashing scheme to make the proselytizers themselves feel more connected to their own religion, and that the goal was not at all to make converts. The post might have been a good piece of rhetoric, but as an attempt to actually understand the outgroup, I think it was silly and obviously wrong.

Another example might be the claim that abortion supporters literally worship Moloch and want to kill babies as sacrifices, or for that matter the claim that abortion opponents only hold their views because they hate women and oppose abortion as part of a conspiracy to curtail women's rights.

What do these sorts of claims all have in common? They don't take the outgroup seriously. Sure, there might well be some fringe radicals who actually worship Moloch and want to kill babies, or who oppose abortion because doing so furthers their conspiracy to suppress women, but such views likely constitute an extreme minority opinion. In point of fact, the person who says "I support abortion because I support women's rights" probably actually believes that; the person who says "I oppose abortion because I believe killing a fetus is murder" probably actually believes that too! There is no need to posit that these people are secret Moloch cultists or members of a grand conspiracy to suppress women -- they have already told you their reasons for their belief, and you weren't listening!

You can in fact often gain remarkable insight into the belief structures of almost anyone -- even opponents -- by actually listening to and reading what they have to say. In most cases, people do not come up with elaborate secret reasoning for their opinions and then withhold it in favor of other arguments -- instead, they tend to explain their actual reasoning, which you can listen to in order to better understand their perspective. [1] However, it is very easy to skip over the reasoning that the person you're interacting with actually presents and instead engage only with very extreme arguments, even if those represent only a tiny fraction of what people holding these views actually believe. In fact, such styles of argument seem very common. [2]

I claim that thinking in this way does a disservice not only to the outgroup but also to yourself. If you think of your opponents only as extreme caricatures, you are likely to miss their actual concerns, and you are less likely to be able to accurately model their viewpoints and perhaps come to a mutual understanding in the future. Instead, you may have frustrating and divisive conversations where it seems that both of you are operating based on caricatures of the other's opinion.

A large number of problems and misunderstandings, both political and interpersonal, seem to me to be related to this sort of reasoning, and avoiding it often seems key to solving major problems in one's life. If you go around thinking that those who oppose you are all idiots, or crazy people, or innately evil, or just haven't thought about the situation (unlike you, of course!)... well, I won't say that you'll always be wrong, but that sure doesn't seem like the best way to go about trying to form an accurate model of the world! Instead, try looking at what they actually have to say and really try to understand their arguments and what those arguments imply. You might be surprised at what you find!


[1] There are some domains where this may not apply, especially certain interpersonal ones (indeed, it would normally be considered outrageously impolite to explain your reasoning in some such matters), but the point stands in general.

[2] Scott Aikin and Robert Talisse refer to this as the weak man fallacy.

8 comments


comment by waveman · 2020-02-17T00:19:01.092Z

I think this is sometimes true but often not.

An example:

Andrew Denton, an Australian journalist, did a podcast about the question of euthanasia (well worth a listen: https://www.wheelercentre.com/broadcasts/podcasts/better-off-dead). In the course of making it, he attended a right-to-life conference, where speakers spoke openly about the fact that the arguments they used in public against voluntary euthanasia were not at all their own reasons for opposing it.

In summary, their actual reason for opposing VE is that in Christian theology you are not allowed to die until Jesus decides to take you / decides that you have suffered enough. Because this reason is unacceptable to most people, they said that they would try on various arguments and use the ones that seemed to resonate, e.g.: Hitler used euthanasia as an excuse to murder people, people will kill granny to get the inheritance, people will kill the disabled and other "useless eaters", governments will encourage euthanasia to save aged-care dollars.

In American politics, Donald Trump started using the phrase "Drain the Swamp" frequently when he noticed that people responded to it. I leave it to the reader to judge whether it was his intention to drain the swamp, or whether he even thought it was possible.

In general, IMHO, people often advance bogus arguments because they know their real reasons will not be acceptable. In fact, there is some evidence that confabulation is a core competency of the human brain. See e.g. https://en.wikipedia.org/wiki/Split-brain

comment by Davis_Kingsley · 2020-02-17T04:35:43.454Z

One interesting point is that arguments that people use internally with their own group are more likely to be truthful than arguments that people use in a public-facing context. If you suspect people might be advancing false arguments, it can be useful to do as Denton did and investigate their own internal communications as well as the outward-facing arguments. That said, I think these scenarios are comparatively rare.

comment by Dagon · 2020-02-17T16:09:57.727Z

This is the point at which "ingroup-outgroup" has to get more nuanced. Groups have sub-groups, and it's absolutely NOT the case that the arguments used "internally" are all that "truthful". There's definitely a tendency to use DIFFERENT arguments with different groups (in the example, "God says" with one group and "no way to avoid bad incentives" with another), but the actual true reason may well be "it's icky". Or rather, a mix of all of the given reasons - most of the time people (ingroup or out-) don't actually think they're lying when they use different reasons for a demanded/recommended policy, just that they're focusing on different valid elements of their argument.

comment by Viliam · 2020-02-17T01:00:03.131Z

I agree that one should be aware of what their opponents literally believe, instead of strawmanning them. Also, it should be acceptable to say: "I didn't really spend time to research what they believe, but they have a bad reputation among the people I trust, so I go along with that judgment", if that indeed is the case.

On the other hand, the example about religious proselytising -- there may be a difference between why people do things, and why it works. Like this, but on a group level. So, you should understand the motivation of your outgroup, but also the mechanism. More generally, you should understand the mechanism of everything, including yourself. Your opponents are implemented on broken hardware, and so are you, and it's actually the same type of hardware. But when you work on this level, you should be skeptical not only about your opponents, but also about yourself and your allies. If you fail to apply the same skepticism towards yourself, you are doing it wrong -- not because you are too unfair to your opponents, but because you are too naive about yourself.

comment by romeostevensit · 2020-02-16T17:51:55.847Z

I think one mental motion that might be happening is that people often have a vague sense that there are deep reasons for their own position, even though those deep reasons aren't properly enumerated in the public discourse. When they compare this feeling of philosophical depth in their own position to the perceived shallowness of the opponents' public position, the opponent seems like some sort of gullible idiot. But the person on the other side is in the exact same position.

comment by Lukas Finnveden (Lanrian) · 2020-02-16T18:10:46.560Z

I think that the position put forward here could usefully be applied to parts of the post itself.

In particular, I'd say that it's quite uncommon for people to claim that abortion opponents "oppose abortion as part of a conspiracy to curtail women's rights". There's no reason to posit a conspiracy, i.e., a large number of people who have discussed and decided on this as a good method of suppressing women. I think a fair number of people claim that abortion opponents are motivated by religious purity norms, though, such that they don't mind inflicting suffering on women who have sex outside of marriage; or perhaps that they generally don't care enough about the welfare of women, because they're misogynist. Justifications for why abortion opponents want to prevent abortions in the first place range from thinking that they hate promiscuous women so much that they want to punish them, to acknowledging that they may care about foetuses to some extent, to thinking that they mostly care about repeating religious shibboleths. Some of these seem silly to me, but not all of them.

There's even some evidence that you could cite for these claims, e.g. that abortion opponents are rarely strong supporters of birth control (which I'd guess is the best method of preventing abortions). And there are some arguments you could put forward against this in turn, namely that people in general are bad at finding the best interventions for the things they care about. I haven't thought about this in depth, but I don't think the most sophisticated version of any of these sides is silly and obviously wrong.

On the margin, I think it'd be good if people moved towards the position that you're advocating in this post. But I don't think it's obvious that people generally "tend to explain their actual reasoning". When talking about this kind of thing, I think there's often a mix-up between conversation norms and predictions about how people act in the real world:

  • In a 1-on-1 conversation, it makes sense to take the other person's view seriously, because if you think they're arguing in bad faith, you should probably just stop talking with them.
  • In conversations with lots of listeners, people might be able to convince a lot of people by putting forth arguments other than those that convinced them, so I can't see why we should predict that they always put forward their true reasoning. Whether they mostly do so or not seems like an empirical question with a non-obvious answer (and I would actually quite like to read an analysis of how common this is). However, we still want to strongly endorse and support the norm of responding to points under the assumption that they were made in good faith, because accusations will quickly destroy all value that the conversation might have generated. I think it's extremely important that we have such norms on lesswrong, for example (and I also believe that lesswrongers almost always do argue in good faith, partly because we generally don't discuss hot topics like, uhm, abortion... oops).
  • When thinking about other people's actions outside of conversations with them (whether in our own heads or in conversation with a third party), I think we'd be unnecessarily handicapped if we assumed that people always meant everything they said. If a politician makes a claim, I predict that a person who has "the politician made that claim just to gain votes" somewhere in their hypothesis-space is going to make better predictions than someone who steadfastly applies the conversation norms to their own thoughts.

comment by Dagon · 2020-02-16T19:42:10.990Z

This is good advice, but nowhere near simple to implement. Much of the public writing on group-identity topics does not include enough foundational agreement on models and assumptions for it to actually make sense. Most people (including your ingroup, if you apply the same standards) are just awful at explaining what they believe, let alone why they believe it.

Note: the contrapositive is perhaps one way to actually pursue this: "Don't take your ingroup seriously." You're just as prone to unexamined assumptions and faulty logic as your counterpart in one of your outgroups. Identifying where your peers are simply not seeing things clearly can help you find the topics on which communication is hard and tends to cleave along social-group lines, rather than permitting shared examination across diverse backgrounds.

comment by Pattern · 2020-02-16T18:20:19.090Z

(Contains an unendorsed model, as an example of a fake model.)

What do these sorts of claims all have in common? They don't take the outgroup seriously. Sure, there might well be some fringe radicals who actually...

I disagree slightly with some of the examples. Here is what seems to generalize:

1. Some "ideas"/organizations exist that spread themselves. Intentional or not, if lying offers an advantage, then over time selection of groups (as they arise and die out) may lead to widespread lies/systems of lies.

2. How does one determine whether or not one is dealing with fringe radicals? The label "outgroup" suggests we consider the group we are dealing with to be fringe radicals.

3. What if the outgroup doesn't "take themselves seriously"? Consider the following example*:

Model: Sex leads to closeness/intimacy. This effect becomes weaker if, after being activated, the people in question break up, etc.

There are groups that spread this to argue against sex before marriage.

But an alternative conclusion is that lots of sex is a good thing, as it leaves people less overwhelmed by the strong emotions that cause them to make rash decisions, which in turn lead to marriages that don't last.

If this were a widespread response to the model, then maybe those groups would stop spreading it, since they were only spreading it to argue for something they value / against something they anti-value.

While the above is a hypothetical, it points at a phenomenon that seems to be widespread: groups (and individuals) arguing in bad faith, such that taking them seriously will lead one astray.

*If you remember what post this example is from, let me know so I can add a link to it.


Specific:

If you go around thinking that those who oppose you are all idiots, or crazy people, or innately evil, or just haven't thought about the situation (unlike you, of course!)... well, I won't say that you'll always be wrong, but that sure doesn't seem like the best way to go about trying to form an accurate model of the world!

If it seems wrong because it involves postulating that there are two types of people, you and everyone else in the world, then that seems easily fixed by accepting that the conditions observed occur in oneself too. (Although this should really be a matter of empirical judgement rather than theory: why should the best way of going about forming an accurate model of the world seem like the best way, when so many people are wrong?)

  • Everyone is foolish.
  • Everyone is evil.
  • Everyone is "crazy".

Each of these could be a starting point for a more complicated model.

Are people crazy in predictable ways?

Is wisdom randomly distributed throughout the population, such that people tend to be wise in one domain but foolish in others, or is wisdom/foolishness a general trait?

Does everyone go about achieving their aims in largely similar ways, such that whether someone is good or evil depends entirely on circumstance and what they believe they have to gain? Or is it largely a subconscious/unreflective phenomenon? Are people good or evil generally, or do they tend to be good in some areas but bad in others? And do those areas vary between people and change over time or with circumstance?