Can group identity be a force for good?

post by Eric Neyman (UnexpectedValues) · 2021-07-04T17:16:32.761Z · LW · GW · 13 comments

This is a link post for https://ericneyman.wordpress.com/2021/07/04/can-tribalism-be-a-force-for-good/

Contents

  I. Effective altruism
  II. Neoliberalism
  III. Rationality
  IV. Hmm, about that neoliberalism

Many in the rationalist sphere look down on tribalism and group identity. Paul Graham writes that identity interferes with people’s ability to have a productive discussion. Julia Galef seconds this view (though with exceptions), devoting a chapter of Scout Mindset to the ways that identity interferes with clear thinking. Eliezer Yudkowsky makes a similar point [LW · GW] in the context of political identity.

I think this view is correct, but I want to give three personal examples of my own identity motivating me to do good things. These examples have a common theme, and I believe there are lessons to be drawn from them.

 

I. Effective altruism

I first ran into EA in 11th grade, when my English teacher assigned an essay by Peter Singer and asked us to argue against it. I completed the assignment and got a pretty good grade (though, having recently reread my essay, I can report that it was pretty bad!). But I noticed that Singer’s argument was much stronger than most arguments I’d encountered. Over the next couple of years I became a utilitarian, and by my freshman year of college I was fully on board with EA (at least in broad strokes).

By “fully on board”, I mean that I agreed that the EA approach to improving the world was a really good one. On the other hand, I didn’t change any of my personal behaviors to reflect this. I didn’t start donating to or working on effective causes.

Unlike (I think) most EAs, I’m not intrinsically motivated to do good. Making someone happy feels good if I’m close to them, but knowing that I’ve saved the lives of 1.2 strangers in expectation frankly doesn’t. If I could change that I probably would, but as far as I can tell this is just how my brain works.

 

But last year I donated a few percent of my income to effective charities, and I plan to work my way up toward 10% over the next few years. What changed?

For me, it was the fact that I wanted to feel like a real EA. I wanted to feel like I could honestly identify as an EA, be part of the EA tribe. I joined Columbia EA in my first year of grad school, felt really at home in that community, and grew closer to the EA movement as a whole. Now I consider myself an effective altruist. Introspecting about why I’m donating, and why I’m thinking about ways to make my academic research on forecasting more relevant to EA causes, well — it’s because it’s the EA thing to do.

So my journey has been (1) I learned about and agreed with EA principles, (2) I grew close to the EA community, and (3) I started identifying as an EA and taking actions that made me feel like an EA.

The rationalist in me balks a little bit at (3). I should be taking actions because they’re right, not because it makes me part of the tribe. And yet I think those actions are right — it’s just that my brain finds this sort of circuitous motivation more effective. This isn’t surprising: I think most brains are built better for being part of a tribe than for doing abstractly good things.

 

Some EAs I follow on Twitter have a light bulb emoji 💡 next to their names. I’ve told myself that I don’t deserve this emoji yet — not until I donate 10% of my income to effective charities or do direct work on effective causes. But that emoji is motivating. I want it by my name, and I hope to be able to put it there soon. That light bulb might be a weird motivation, but if it makes me save a couple more lives, I’ll take it.

 

II. Neoliberalism

Speaking of Twitter emojis, for a long time I had a globe emoji 🌐 next to my Twitter name, representing Twitter neoliberalism (not to be confused with what most people mean by neoliberalism). Several pre-existing thought patterns attracted me to the ideology, including a belief in consequentialism, an appreciation of markets, and broad agreement with some of Scott Alexander’s ideas. The main neoliberal account pretty reliably had opinions I agreed with on things I’d thought about, and this made me trust its opinions more on issues I hadn’t given much thought to.

One of these issues was housing policy, which was a somewhat awkward one for me. I hate living in dense neighborhoods (how much I enjoy living in a neighborhood is inversely related to its density); when deciding on a grad school, I went to Columbia in spite of the fact that it is in New York.

At the same time, I would have reluctantly agreed with the neoliberal notion that increasing the housing supply would be better for the world. Rent prices would fall, there would be fewer homeless people, and people would be able to live closer to work and to each other, decreasing commute times and making the economy more efficient.

But I agreed with this position only on an intellectual level; it didn’t feel true to me. I like my Upper West Side brownstone apartment, thank you very much. (Better yet if it were a standalone house with a picket fence, but I don’t think I could afford that.) So while the neoliberal “upzone and build more housing” position intellectually seemed correct, it didn’t feel correct.

 

But I felt like the neoliberal community had things basically right. It felt like a bastion of sanity amid all the other crazy ideologies of the Internet. They also did things I really liked, like raising money for the Against Malaria Foundation. I didn’t fall in with that crowd for completely rational reasons (partly I just liked their aesthetic), but I came to identify as a neoliberal. I became part of the tribe.

And as a result, what changed wasn’t so much my views on housing policy, but rather my feelings about it. I cheered for new development and rooted for pro-housing candidates in local races. A couple weeks ago I voted enthusiastically for pro-housing candidates up and down the New York ballot. In the city council election — the one that most directly affects whether housing gets built where I live — I voted for the pro-development candidate, against my own interests. She lost, and I’m sad about that.

Had I not become part of the neoliberal tribe, I likely would have reluctantly agreed that the pro-housing candidate was better for the world, and then voted for the anti-housing candidate anyway, just like I used to intellectually agree that donating to effective charities was good for the world without acting on that belief. My actions changed only when I became part of a tribe that brought my feelings into alignment with those beliefs.

 

III. Rationality

I first ran into the ideas behind rationality the summer after 10th grade, when a student at a math camp I attended gave out physical copies of the first 17 chapters of Harry Potter and the Methods of Rationality. I was hooked and promptly read the remainder (or what existed at the time). I didn’t realize that HPMOR was part of a broader community until my freshman year of college, when I ran into Scott Alexander via his wonderful It Was You Who Made My Blue Eyes Blue. That post linked to Scott Aaronson’s Common Knowledge and Aumann’s Agreement Theorem, which to this day I consider the most important thing I’ve ever read. And then I realized that Eliezer Yudkowsky was also part of this community. At a time when I’d gone from being surrounded by friends in high school to feeling pretty lonely in college, I felt that I had found my crowd.

Being a rationalist has changed how I approach the world in many small ways, mostly through things I learned from rationalist essays and articles. But a few of the changes came, I think, as a result of identifying as a rationalist. The most notable of these is working to rely less on motivated reasoning and to develop a scout mindset. Again, I would have agreed with the statement “motivated reasoning is important to avoid” well before becoming a rationalist, but I became emotionally invested in actually avoiding it only around the time I became a rationalist, because avoiding motivated reasoning is what rationalists do (or try to do).

 

IV. Hmm, about that neoliberalism

In a recent Twitter thread, I drew a distinction between effective altruism and neoliberalism. In my mind, effective altruism is a question (“What are the best ways to improve the world?”), while neoliberalism is an answer (to a related question: “What public policies would most improve the world?”). Identities centered around questions seem epistemically safer than those centered around answers. If you identify as someone who pursues the answer to a question, you’re less likely to become attached to any particular answer. If you identify with an answer, that identity may be a barrier to changing your mind in the face of evidence.

A week ago I removed the globe emoji from my Twitter name. I would like to say that I did so upon realizing that my neoliberal identity made me worse at seeing public policy debates clearly; indeed, I came to believe a while ago that I’d benefit from holding this identity more lightly. But ultimately I removed the globe, after some nagging discomfort with it, because keeping it went against my rationalist identity. Rationalists care about not letting tribalism cloud their reasoning, and I felt that by being too attached to my neoliberal identity I wasn’t being a good rationalist. Going forward, I’ll continue using the “neoliberal” label as a more or less accurate descriptor of my beliefs, but I hope not to be attached to the label itself.

 

In all of these cases, my experience with tribalism and identity is that it takes things that I believe I should do — donate to effective causes, vote for pro-housing candidates, develop a scout mindset, hold my neoliberal identity more lightly — and gets me to want to do them on an emotional level. In a sense, this isn’t optimal. Ideally I would want to do something as soon as I decide I should do it. But it seems that my brain doesn’t work that way, and tribalism has been a useful crutch.

I don’t endorse tribalism in general, or think it’s a net positive. Tribalism strikes me as a symmetric weapon, equally wieldable by good and evil. This alone would make tribalism net neutral, but in fact tribalism corrupts, turning scouts into soldiers, making people defend their side irrespective of who’s right. And the more tribal a group becomes, the more fiercely they fight. Tribalism is a soldier of Moloch, the god of defecting in prisoner’s dilemmas.

But after thinking about my own tribalism, I’ve come to realize that it can have systematic positive effects as well. Whether it is possible for us to have the positives without the negatives — or at least in excess of the negatives — is in my mind an open question.

But if I may suggest an answer: First, be a scout. Make truth seeking a central part of your identity. Just like EA, a scout identity is centered around a question (“What is true?”). Then, so long as you always think of yourself as the sort of person who cares first and foremost about the truth, growing other identities and joining other tribes will be less epistemically scary.

13 comments


comment by ryan_greenblatt · 2021-07-04T21:58:32.937Z · LW(p) · GW(p)

First of all, I like this post and (at least roughly) agree with the core premise. I also think similar arguments can apply to other cognitive biases and heuristics. For example, see Sunk Costs Fallacy Fallacy [LW · GW].

Tribalism is a soldier of Moloch, the god of defecting in prisoner’s dilemmas.

I'm modestly confident that the opposite is true for our hunter-gatherer ancestors, and for small groups more generally. For example, we can model individuals freeloading and failing to gather food for the group as an iterated, many-way prisoner's dilemma. In this case I would imagine that tribalism tends toward cooperate over defect. Similarly, consider group conflict. The defect/Moloch option here is actually avoiding the fight, which reduces the risk of injury without substantially reducing the probability of your group winning. Tribalism would tend toward more (violent) opposition to the other group.
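
A minimal sketch of that framing, as a toy public goods game. To be clear, this is an illustration added here, not anything from the comment: the agent count, payoffs, and "tribalism" bonus below are made-up parameters.

```python
import random

N_AGENTS = 10     # members of the tribe
ROUNDS = 100
MULTIPLIER = 1.5  # pooled contributions are multiplied, then split evenly
COST = 1.0        # personal cost of contributing one unit

def cooperation_rate(tribalism: float, seed: int = 0) -> float:
    """Fraction of choices that are 'contribute' for a given tribalism bonus."""
    rng = random.Random(seed)
    coop = 0
    for _ in range(ROUNDS * N_AGENTS):
        # Material payoff of contributing vs. freeloading: you get back only
        # MULTIPLIER / N_AGENTS of your own unit, so with these numbers
        # contributing is individually costly (0.15 - 1.0 = -0.85).
        material = MULTIPLIER / N_AGENTS - COST
        # Tribalism adds a social payoff (esteem, belonging) for doing
        # "what the tribe does"; a little noise stands in for mistakes.
        if material + tribalism > 0 or rng.random() < 0.05:
            coop += 1
    return coop / (ROUNDS * N_AGENTS)

for t in (0.0, 0.5, 1.0):
    print(f"tribalism bonus {t:.1f} -> cooperation rate {cooperation_rate(t):.2f}")
```

With these numbers, cooperation takes off only once the in-group bonus outweighs the 0.85 net material cost of contributing: the "tends toward cooperate over defect" claim in miniature.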

I have no idea how tribalism interacts with Moloch for the large ideological tribes of today.

Replies from: UnexpectedValues
comment by Eric Neyman (UnexpectedValues) · 2021-07-04T22:12:38.333Z · LW(p) · GW(p)

Good point! You might be interested in how I ended an earlier draft of this post (which makes some points I didn't make above, but which I think ended up having too high a rhetoric-to-insight ratio):

 

"I don’t endorse tribalism in general, or think it’s a net positive. Tribalism strikes me as a symmetric weapon, equally wieldable by good and evil. This alone would make tribalism net neutral, but in fact tribalism corrupts, turning scouts into soldiers, making people defend their side irrespective of who’s right. And the more tribal a group becomes, the more fiercely they fight. Tribalism is a soldier of Moloch, the god of defecting in prisoner’s dilemmas.

This is somewhat in tension with my earlier claim that my tribalism is a net positive. If I claim that my tribalism is net positive, but tribalism as a whole is net negative, then I’m saying that I’m special. But everyone feels special from the inside, so you’d be right to call me out for claiming that most people who feel that their tribalism is good are wrong, but I happen to be right. I would respond by saying that among people who think carefully about tribalism, many probably have a good relationship with it. I totally understand if you don’t buy that — or if you think that I haven’t thought carefully enough about my tribalism.

But the other thing is, tribalism’s relationship with Moloch isn’t so straightforward. While on the inter-group level it breeds discord, within a tribe it fosters trust and cooperation. An American identity, and a British identity, and a Soviet identity helped fight the Nazis — just as my EA identity helps fight malaria.

So my advice on tribalism might be summarized thus: first, think carefully and critically about who the good guys are. And once you’ve done that — once you’ve joined them — a little tribalism can go a long way. Not a gallon of tribalism — beyond a certain point, sacrificing clear thinking for social cohesion becomes negative even if you’re on the good side — but a teaspoon."

comment by ChristianKl · 2021-07-04T22:05:35.881Z · LW(p) · GW(p)

I consider the general move from "boo tribalism" to asking what tribalism is about, what advantages it has, and what disadvantages it has, a very good one. Policy debates shouldn't be one-sided [LW · GW].

comment by Viliam · 2021-07-04T22:44:55.259Z · LW(p) · GW(p)

We are not 100% rational, and partial improvements in wrong context can bring us to the valley of bad rationality [? · GW]. Doing approximately the right thing just because "this is what members of my tribe are expected to do" is not perfect. But it beats not doing the right thing.

comment by Mike Conant (mike-conant) · 2021-07-05T03:09:30.236Z · LW(p) · GW(p)

Thank you for your thoughts! I enjoyed reading. My first reaction is to how you equated tribalism and group identity, so it is worth clarifying what you mean here. I believe that a group's identity is not necessarily tribal, though it may very well be used as a tribal weapon. Simple examples are the creation of gentiles, atheists, or Huguenots as derisive "identities" defined by Jews, theists, or Catholics, respectively. These labels are payloads for either blame or pride, depending on who is owning them. Your thoughts?

I appreciated the scout analogy as a way of describing one's openness to new ideas as a counterweight to the more characteristically tribal behavior of defending the "rules of the tribe". I think we all benefit from a balance.

Is tribalism a negative? Maybe. I think we all love connection with those we relate to, and that is good. But when it defines your Weltanschauung, it definitely limits the potential for growth.

Replies from: UnexpectedValues
comment by Eric Neyman (UnexpectedValues) · 2021-07-05T05:29:05.918Z · LW(p) · GW(p)

It's true that I didn't draw a distinction between tribalism and group identity. My reason for not doing so was that I thought both terms applied to my three examples. I thought a bit about the distinction between the two in my mind but didn't get very far. So I'm not sure whether the pattern I pointed out in my post is true of tribalism, or of group identity, or both. But since you pressed me, let me try to draw a distinction.

(This is an exercise for me in figuring out what I mean by these two notions; I'm not imposing these definitions on anyone.)

The word "tribalism" has a negative connotation. Why? I'd say because it draws out tendencies of tribe members to lose subjectivity and defend their tribe. (I was going to call this "irrational" behavior, but I'm not sure that's right; it's probably epistemically irrational but not necessarily instrumentally irrational.) So, maybe tribalism can be defined as a mindset of membership in a group that causes the member to react defensively to external challenges, rather than treating those challenges objectively.

(I know that I feel tribalism toward the rationalist community because of how I felt on the day that Scott Alexander took down Slate Star Codex, and when the New York Times article was published. I expect to feel similarly about EA, but haven't had anything trigger that emotional state in me about it yet. I feel a smaller amount of tribalism toward neoliberalism.)

(Note that I'm avoiding defining tribes, just tribalism, because what's relevant to my post is how I feel about the groups I mentioned, not any property of the groups themselves. If you wanted to, you could define a tribe as a group where the average member feels tribalism toward the group, or something.)

Identity is probably easier to define -- I identify with a group if I consider myself a member of it. I'm not sure which of these two notions is most relevant for the sort of pattern I point out, though.

comment by George3d6 · 2021-07-05T05:28:21.022Z · LW(p) · GW(p)

Tribalism can be good for the individual and for society as a whole.

But it's bad for reasoning about tribe affiliated subjects.

I think you're kinda talking past the point Eliezer was making there.

Replies from: UnexpectedValues
comment by Eric Neyman (UnexpectedValues) · 2021-07-05T05:33:46.628Z · LW(p) · GW(p)

To be clear, I'm agreeing with Eliezer; I say so in the second paragraph. But for the most part my post doesn't directly address Eliezer's essay except in passing. Instead I point out: "Yeah, the 'bad for reasoning about tribe affiliated subjects' is a drawback, but here's a benefit, at least for me."

Replies from: George3d6
comment by George3d6 · 2021-07-05T05:36:37.084Z · LW(p) · GW(p)

Is anyone arguing that we shouldn't affiliate with any tribe?

Replies from: UnexpectedValues
comment by Eric Neyman (UnexpectedValues) · 2021-07-05T05:39:15.180Z · LW(p) · GW(p)

I'm not sure. The strongest claim I make in that direction is that "Many in the rationalist sphere look down on tribalism and group identity." I think this is true -- I bet each of the people I named would endorse the statement "The world would be better off with a lot less tribalism."

Replies from: George3d6
comment by George3d6 · 2021-07-05T05:53:08.057Z · LW(p) · GW(p)

Yeah, but that's not to say it's not good.

The world would be better off with a lot less violence. But in a world where other people are violent, being so is sometimes good.

Even if no humans were violent we'd still need a minimal level of violence to defend against pests and such.

But the world would still be better off without violence.

I don't know, it seems to me like you're making a category error, but maybe I'm missing something.

Replies from: Slider, UnexpectedValues
comment by Slider · 2021-07-05T17:13:32.871Z · LW(p) · GW(p)

The possibility stickler in me notices a claim of "It is impossible to deal with pests without resorting to violence". While poisoning and outright killings for pest control are a rather easy ethical bar to clear, I don't see the inevitability of it. You could have things like plant surfaces being engineered to be repulsive to pests, or you could allow pests to grow only outside of industrialised farming. For a lot of these options the effort expended would overshadow the gains in "ethical" operation.

For example, in a very simple view of law enforcement, the police just straight up murder bad guys. But in a more nuanced and complex system, the use of force is more graduated, and the actual application of lethal force would rarely be the prescription. There is an important line between "policing involves use of legitimised state violence" and "policing will always involve force".

comment by Eric Neyman (UnexpectedValues) · 2021-07-05T06:02:10.161Z · LW(p) · GW(p)

I think we're disagreeing on semantics. But I'd endorse both the statements "violence is bad" and "violence is sometimes good".