Conversational Cultures: Combat vs Nurture (V2)

post by Ruby · 2020-01-08T20:23:53.772Z · LW · GW · 88 comments

Contents

  Combat Culture
  Nurture Culture
  The Key Cultural Distinction: Significance of a Speech-Act
  When does each culture make sense?
  Combat without Safety, Nurture without Caring
  Know thyself and others

You are viewing Version 2 of this post: a major revision written for the LessWrong 2018 Review [LW · GW]. The original version published on 9th November 2018 can be viewed here [LW · GW].

See my change notes [LW(p) · GW(p)] for major updates between V1 and V2.

Combat Culture

I went to an orthodox Jewish high school in Australia. For most of my early teenage years, I spent one to three hours each morning debating the true meaning of abstruse phrases of Talmudic Aramaic. The majority of class time was spent sitting opposite your chavrusa (study partner, but linguistically the term has the same root as the word “friend”) arguing vehemently for your interpretation of the arcane words. I didn’t think in terms of probabilities back then, but if I had, I think at any point I should have given roughly even odds to my view vs my chavrusa’s view on most occasions. Yet that didn’t really matter. Whatever your credence, you argued as hard as you could for the view that made sense in your mind, explaining why your adversary/partner/friend’s view was utterly inconsistent with reality. That was the process. Eventually, you’d reach agreement or agree to disagree (which was perfectly legitimate), and then move onto the next passage to decipher.

Later, I studied mainstream analytic philosophy at university. There wasn’t the chavrusa, pair-study format, but the culture of debate felt the same to me. Different philosophers would write long papers explaining why philosophers holding opposite views were utterly confused and mistaken for reasons one through fifty. They’d go back and forth, each arguing for their own correctness and the others’ mistakenness with great rigor. I’m still impressed with the rigor and thoroughness of especially good analytic philosophers.

I’ll describe this style as combative, or Combat Culture. You have your view, they have their view, and you each work to prove your rightness by defending your view and attacking theirs. Occasionally one side will update, but more commonly you develop or modify your view to meet the criticisms. Overall, the pool of arguments and views develops and as a group you feel like you’ve made progress.

While it’s true that you’ll often shake your head at the folly of those who disagree with you, the fact that you’re bothering to discuss with them at all implies a certain minimum of respect and recognition. You don’t write lengthy papers or books to respond to people whose intellect you have no recognition of, people you don’t regard as peers at all.

There’s an undertone of countersignalling to healthy Combat Culture. It is because recognition and respect are so strongly assumed between parties that they can be so blunt and direct with each other. If there were any ambiguity about the common knowledge of respect, you couldn’t be blunt without the risk of offending someone. That you are blunt is evidence you do respect someone. This is portrayed clearly in a passage from Daniel Ellsberg’s recent book, The Doomsday Machine: Confessions of a Nuclear War Planner (pp. 35-36):

From my academic life, I was used to being in the company of very smart people, but it was apparent from the beginning that this was as smart a bunch of men as I have ever encountered. That first impression never changed (though I was to learn, in the years ahead, the severe limitations of sheer intellect). And it was even better than that. In the middle of the first session, I ventured--though I was the youngest, assigned to taking notes, and obviously a total novice on the issues--to express an opinion. (I don’t remember what it was.) Rather than showing irritation or ignoring my comment, Herman Kahn, brilliant and enormously fat, sitting directly across the table from me, looked at me soberly and said, “You’re absolutely wrong.”

A warm glow spread throughout my body. This was the way my undergraduate fellows on the editorial board of the Harvard Crimson (mostly Jewish like Herman and me) had routinely spoken to each other: I hadn’t experienced anything like it for six years. At King’s College, Cambridge, or in the Society of Fellows, arguments didn’t take this gloves-off, take-no-prisoners form. I thought, “I’ve found a home.” [emphasis and paragraph break added]

That a senior member of the RAND group he had recently joined was willing to be completely direct in shooting down his idea didn’t cause the author to shut down in anguish and rejection; on the contrary, it made the author feel respected and included. I’ve found a home.

Nurture Culture

As I’ve experienced more of the world, I discovered that many people, perhaps even most people, strongly dislike combative discussions where they are being told that they are wrong for ten different reasons. I’m sure some readers are hitting their foreheads and thinking “duh, obvious,” yet as above, it’s not obvious if you’re used to a different culture. 

Where Combat Culture prioritizes directness and the free expression of ideas, Nurture Culture prioritizes the comfort, wellbeing, and relationships of participants in a conversation. It wants everyone to feel safe, welcome, and well-regarded within a general spirit of “we’re on the same side here”.

Since disagreement, criticism, and pushback can all lead to feelings of distance between people, Nurture Culture tries to counter those potential effects with signals of goodwill and respect.  Nurture Culture wants it to be clear that notwithstanding that I think your idea is wrong/stupid/confused/bad/harmful, that doesn’t mean that I think you’re stupid/bad/harmful/unwelcome/enemy, or that I don’t wish to continue to hear your ideas. 

Nurture Culture makes a lot of sense in a world where criticism and disagreement are often an attack or threat– people talk at length about how their enemies and outgroups are mistaken, never about how they’re correct. Even if I have no intention to attack someone by arguing that they are wrong, I’m still providing evidence of their poor reasoning whenever I critique. 

There is a simple entailment: holding a mistaken belief or making a poor argument is evidence of poor reasoning such that when I say you are wrong, I’m implying, however slightly, that your reasoning or information is poor. And each time you’re seen to be wrong, you (rightly) lose a few points in people’s estimation of you. It might be a tiny fractional loss of a point of respect and regard, but it can’t be zero.

So some fraction of the time people express disagreement because they genuinely believe it, but in the wider world, a large fraction of the time people express disagreement it is as an attack [1] [LW(p) · GW(p)]. A healthy epistemic Nurture Culture works to make productive disagreement possible by showing that disagreement is safe and not intended as an attack.

To offer a concrete example of Nurture Culture and why you might want it, I wrote the following fictional account of Alex Percy:

I was three months into my new role as an analyst at Euler Associates. It was my first presentation making any recommendations of consequence. It was attended by the heads of three other departments, senior managers who managed teams of hundreds and budgets of millions. I was nervous. Sure, I had a doctorate in operations management, but that was merely classroom and book knowledge. 

Ten minutes into the meeting I’d finished my core pitch. Ms. Fritz seemed mildly irritated, Mr. Wyman radiated skepticism. My heart sank. I mean, I shouldn’t have been surprised. If  (if!) I keep my job long enough, I’m sure I’ll get there...

“You lost me on slide 4, but let’s talk it through. It seems you’re assuming that regional sales growth is going to plateau, which I hadn’t been assuming myself, but I could see it being true. Let’s assume it for now and chat for a few minutes to see if you’re right about the flow-through effects.” These were the first words from Dr. Li.

I felt elation. Engagement! A chance to explain my reasoning! Being taken seriously! Maybe contributing to Euler Associates wouldn’t be such a painful grind.

IN A LESS NURTURING UNIVERSE

My heart sank. I mean, not really surprising. If (if!) I keep my job long enough, I’m sure I’ll get there...

“To get that result,  you assume regional sales growth is going to plateau. That would be very surprising. So that must be wrong.” 

Such was the terse response to my months of work. I’d triple-checked that premise– it wasn’t certain but it was reasonable. Was I supposed to argue back with someone who probably doubted I was worth their time? Did I meekly accept that I hadn’t justified this enough in my pitch and try to do better next time? My boss gave no indication when I looked at him. Ach, I’m such a fool. I should have known you need to really justify the controversial points. My first big presentation and I blew it.

 

It might be a virtue to be tough and resilient, but it is also costly to bring that degree of emotional fortitude to bear. Someone might be pushing against a strong prior that they are unwelcome, or that others are trying to make them push uphill to contribute.

The first scenario describes Nurturing behavior. A busy senior manager signaling to the new person: “I’m going to give you a chance, I know you want to help.”
If nothing else, I expect the consulting firm where Nurturing behavior is commonplace to be a far more pleasant place to work, and probably better for people’s health and wellbeing.

The Key Cultural Distinction: Significance of a Speech-Act

It is possible to describe Combat and Nurture cultures by where they fall on a number of dimensions such as adversarial-collaborative, “emotional effort and concern”, filtering, etc. I discuss these dimensions in Appendix 1 [LW(p) · GW(p)].

However, I believe that the key distinction between Combat and Nurture cultures  is found in one primary question:

Within a culture, what is the meaning/significance of combative speech acts? 

By combative speech acts, I mean things like blatant disagreement, directly telling people they are wrong, and arguing one-sidedly against someone’s ideas with no attempt to find merit. The question is, within a culture, do these speech acts imply negative attitudes or have negative effects for the receiver? Or, conversely, are they treated like any other ordinary speech? 

The defining feature of a Combat Culture is that these traditionally combative speech acts do not convey threat, attack, or disdain. Quite the opposite– when Herman Kahn says to Daniel Ellsberg, “you’re absolutely wrong”, Ellsberg interprets this as a sign of respect and inclusion. The same words said to Alex Percy by the senior managers of Euler Associates are likely to be interpreted as a painful “I’m not interested in you or what you have to say.”

As with language in general, the significance of speech acts (combative or otherwise) needs to be coordinated between speakers and listeners. Obviously, issues arise when people from different cultures assign different meanings to the same thing in an interaction. Someone with Nurture Culture meanings associated with speech-acts can feel attacked in a Combative space. A Combatively-cultured person [2] [LW(p) · GW(p)] in a Nurture Culture space can make others feel attacked.

The prevailing culture often isn’t clear, and often spaces are mixed. You can imagine how that goes.

When does each culture make sense?

In the original version of this post, I mostly refrained from commenting on which culture was better. At this point, I think I can say a little more.

Combat Culture has a number of benefits that make it attractive, particularly for truth-seeking discussion: it has greater freedom of speech, no extra effort is required to phrase things politely, one doesn’t need to filter as much or dance around, and less attention is devoted to worrying about offending people.

And as Said Achmiz mentioned in a comment on the original post [LW(p) · GW(p)], “generally speaking, as people get to know each other, they tend to relax around each other, and drop down into a more “combative” orientation in discussions and debates with each other. They say what they think.” Or they outright countersignal. That’s some reason to have a generic preference for a culture where fewer speech acts parse as attacks or threats.

But Combat Culture only works when you’ve established that the standardly hostile speech acts aren’t unfriendly. It works when you’ve got a prior of friendliness, or have generally built up trust. And that means it tends to work well where people know each other well, have common interests, or some other strong factor that makes them feel aligned and on the same team. Something that creates a sense of trust and safety that you’re not being attacked even if your beliefs, works, or choices are.

Interestingly, while in any given particular exchange Combat Culture might seem to be more efficient and “pay less communicative overhead”, that’s only possible because the Combat Culture previously invested “overhead” into creating a high-safety context. It is because you’ve spent so much time with a person and signaled so much caring and connection that you’re now able to bluntly tell them that they’re absolutely wrong.

However, there’s more than one path to being able to tell someone point-blank that they’re dead wrong. Being old friends is one method, but you also get the relevant kind of trust and safety in contexts where not everyone knows each other well, like in philosophy and mathematics departments, as well as my Talmud class and Ellsberg’s group at RAND. 

In those contexts, I believe the mechanics work out to produce priors such as everyone makes mistakes; mistakes aren’t a big deal; you point out mistakes and move on. Perhaps it’s because in mathematics mistakes are very legible once identified and so everyone is seen to make them, or that for the whole endeavor to work, people have to point them out and everyone gets completely used to that as normal. In philosophy, everyone’s been saying that everyone else is wrong for millennia. Someone saying you’re wrong just means you’re part of the game.

Somehow, these domains have done the work required to give blunt disagreement different significance than is normal in the wider world. They’ve set up conducive priors [A2] [LW(p) · GW(p)] for healthy combat.

Combat without Safety, Nurture without Caring

One of my regrets with the original Combat vs Nurture post is that some people started describing generically aggressive and combative conversation cultures as “Combat Culture.” No! I meant something more specific and better! As clarified above, I meant specifically a culture where combative speech-acts aren’t perceived as threatening by people. If people do feel threatened or attacked, you don’t have a real Combat Culture!

Others pointed out that there are non-Combative cultures that aren’t actually nurturing at all. This is entirely correct. You can avoid blatantly being aggressive while still being neutral to hostile underneath. That isn’t what I meant to call Nurture Culture.

In a way, both Combat Culture and Nurture Culture are my attempted steelmen of two legitimate conversational cultures, which are situated within a larger space of many other conversational cultures. Abram Demski attempts to do a better job of carving up the entire space [LW · GW].

Know thyself and others

Compared to 5-10 years ago, I’ve updated that people operating in cultures different from my own are probably not evil. To my own surprise, I’ve seen myself be an advocate for both Combat and Nurture at different points.

I continue to think the most important advice [LW(p) · GW(p)] in this space is being aware of both your own cultural tendencies and those of the people around you. Even if others are wrong, you’ll probably do better by understanding them and yourself relative to them.
 


Further Content

88 comments


comment by Ruby · 2018-11-11T22:44:28.278Z · LW(p) · GW(p)

This content was moved from the main body of the post to this comment. After receiving some good feedback, I've decided I'll follow the template of "advice section in comments" for most of my posts.

Some Quick Advice

Awareness

  • See if you can notice conversational cultures/styles which match what I’ve described.
  • Begin noticing if you lean towards a particular style.
  • Begin paying attention to whether those you discuss with might have a particular style, especially if it’s different from yours.
  • Start determining if different groups you’re a member of, e.g. clubs or workplaces, lean in one cultural direction or another.

Openness

  • Reflect on the advantages that cultures/styles different to your own have, and why others might use them instead.
  • Consider that on some occasions styles different to yours might be more appropriate.
  • Don’t assume that alternatives to your own culture are obviously wrong, stupid, bad, or lacking in skills.

Experimentation

  • Push yourself a little in the direction of adopting a non-default style for you. Perhaps you already do, but push yourself a little more. Try doing so while staying comfortable and open, if possible.

Ideal and Degenerate Forms of Each Culture

Unsurprisingly, each of the cultures has its advantages and weaknesses, mostly to do with when and where it’s most effective. I hope to say more in future posts, but here I’ll quickly list what I think the cultures look like at their best and worst.

Combat Culture

At its best

  • Communicators can more fully focus their attention on the ideas and content rather than devoting thought to the impact of their speech acts on the emotions of others.
  • Communication can be direct and unambiguous when it doesn’t need to be “cushioned” to protect feelings.
  • The very combativeness and aggression prove to all involved that they’re respected and included.

At its worst

  • The underlying truth-seeking nature of conversation is lost and instead becomes a fight or competition to determine who is Right.
  • The combative style around ideas is abused to dismiss, dominate, bully, belittle, or exclude others.
  • It devolves into a status game.

Nurture Culture

At its best

  • Everyone is made to feel safe, welcomed, and encouraged to participate without fear of ridicule, dismissal, or judgment.
  • People assist each other to develop their ideas, seeking to find their strongest versions rather than attacking their weak points. Curiosity pervades.

At its worst

  • Fear of inducing a negative feeling in others and the need to create positive feelings and impressions of inclusion dominate over any truth-seeking goal.
  • Empathy becomes pathological and ideas are never criticized.
  • Communicators spend most of their thought and attention on the social interaction itself rather than the ideas they’re trying to exchange.
comment by Richard_Kennaway · 2018-11-10T16:01:44.996Z · LW(p) · GW(p)

There is another story I have heard of the chavrusa. Two Talmudic students were going hammer and tongs, as they do, when one of them found himself at a loss how to reply to the other's latest argument. As he struggled in thought, the other stepped in and said, such-and-such is how you should argue against what I just said.

comment by MakoYass · 2018-11-11T05:48:24.424Z · LW(p) · GW(p)

This is a great example of how people should go about playing competitive games with their friends, always be ready to point out ways they can do better, even if that would let them win against you.

Not so sure about treating conversation as a competitive game though heh

comment by AdrianSmith · 2018-11-13T19:50:15.256Z · LW(p) · GW(p)

I think this is why we make the debate/conversation distinction. It's not a perfect line, and your culture informs where it lies in any situation, but there's an idea that you switch from "we're just talking about whatever or exploring some idea" vs "we're trying to dig deep into the truth of something".

Knowing when one or the other mode is appropriate is something that's often lacking in online discussions.

comment by Said Achmiz (SaidAchmiz) · 2018-11-10T03:24:25.228Z · LW(p) · GW(p)

This is a good post; thank you for writing it. I think this dichotomy, while perhaps not a perfect categorization, is pretty good, and clarifies some things.

My background and preference is what you call “Combat Culture”. “Nurture Culture” has always seemed obviously wrong and detrimental to me, and has seemed more wrong and more detrimental the more I’ve seen it in action. This, of course, is not news, and I mention it only to give context to what I have to say.

I have two substantive comments to make:

First, “It devolves into a status game.” is something that easily can, and often does, happen to “Nurture Culture” spaces also. What this looks like is that “you are not communicating in a [ nurturing | nonviolent | prosocial | etc. ] way” gets weaponized and used as a cudgel in status plays; meanwhile, participants of higher social status skirt the letter of whatever guidelines exist to enforce a “nurturing” atmosphere, while saying and doing things whose effects are to create a chilling atmosphere and to discourage contrarian or opposing views. (And the more that “nurturing” communities attempt to pare down their “nurturing”-enforcement rules to the basics of “be nice”, the more the “tyranny of structurelessness” applies.)

Second, it has been my experience, in my personal relationships, that the more comfortable a person feels around another person or group, the more the first person is willing to be “combative”. A very “nurturing” type interaction style—where you don’t just tell a person they’re wrong, but “open-mindedly” and “curiously” inquire as to their reasons, and take care not to insult them, etc., etc., does indeed work well to avert misunderstandings or overt conflicts between near-strangers, casual acquaintances, etc. It also tends to be a near-perfect sign of a low-intimacy relationship; generally speaking, as people get to know each other, they tend to relax around each other, and drop down into a more “combative” orientation in discussions and debates with each other. They say what they think. (Of course, this could be a selection effect on my part. But then, this seems correct to me, and proper, and expected.)

comment by Ruby · 2018-11-10T07:24:59.395Z · LW(p) · GW(p)

Thanks!

I agree that Nurture Culture can be exploited for status too, perhaps equally so. When I was writing the post, I was thinking that Combat Culture more readily heads in that direction since in Combat Culture you are already permitted to act in ways which in other contexts would be outright power-plays, e.g. calling their ideas dumb. With Nurture Culture, it has to be more indirect, e.g. the whole "you are not conforming to the norm" thing. Thinking about it more, I'm not sure. It could be they're on par for status exploitability.

An increase in combativeness alongside familiarity and comfort matches my observation too, but I don't think it's universal - possibly a selection effect for those more natively Combative. To offer a single counterexample, my wife describes herself as being sickeningly nurturing when together with one of her closest friends. Possibly nurturing is one way to show that you care and this causes it to become ramped up in some very close relationships. Possibly it's that receiving nurturing creates warm feelings of safety, security, and comfort for some such that they provide this to each other to a higher extent in closer relationships. I'm not sure, I haven't thought about this particular aspect in depth.

comment by jimmy · 2018-11-10T19:19:28.584Z · LW(p) · GW(p)
To offer a single counterexample, my wife describes herself as being sickeningly nurturing when together with one of her closest friends.

I don't think they're mutually exclusive. My response in close relationships tends to be both extra combative and extra nurturing, depending on the context.

The extra combativeness comes from common knowledge of respect, as has already been discussed. The extra nurturing is more interesting, and there are multiple things going on.

Telling people when they're being dumb and having them listen can be important. If those paths haven't been carved yet, it can be important to say "this is dumb" and prove that you can be reliably right when you say things like that. Doing that productively isn't trivial, and the fight to get your words respected at full value can get in the way of nurturing. In my close relationships where I can simply say "you're being dumb" and have them stop and say "oops, what am I missing?" I sometimes do, but I'm also far more likely to be uninterested in saying that because they'll figure it out soon enough and I actually am curious why they're doing something that seems so deeply mistaken to me. Just like how security in nurturing can allow combativeness, security in combativeness can allow nurturing.

Another thing is that when people gain trust in you to not shit on them when they're vulnerable, they start opening up more in places in which nurture is the more appropriate response. In these cases it's not that I'm being nurturing instead of being combative, it's that I'm being nurturing instead of not having the interaction at all. Relative to the extreme care that'd need to be taken with someone less close in those areas, that high level of nurturing is still more combative.

comment by Elo · 2018-11-10T22:43:00.127Z · LW(p) · GW(p)

I'm in agreement with you, with the caveat that there's a paradox of "bring yourself", where people show courage in the face of potential pain for vulnerability and they feel stronger and better about the whole thing.

However this courage thing is complicated by the fact that emotions don't bite in the same way as physical dogs bite. There is a lot more space to be in uncomfortable emotions and not die than is expected from the sense of discomfort that comes with them.

comment by Mary Chernyshenko (mary-chernyshenko) · 2018-11-22T21:01:28.257Z · LW(p) · GW(p)

...I still don't get it why one needs to say "you're being dumb", when obviously the intended meaning is "you're saying/doing something dumb", in virtually all settings.

If people are that close, can't they just growl at each other? Or use one of the wonderfully adaptable short words that communicate so much?..

comment by jimmy · 2018-11-25T21:09:55.454Z · LW(p) · GW(p)

The precise phrasing isn't important, and often "growls" do work. The important part is in knowing that you can safely express your criticisms unfiltered and they'll be taken for what they're worth.

comment by Benquo · 2018-12-03T04:57:32.269Z · LW(p) · GW(p)

At the risk of tediously repeating what Mary Chernyshenko said, I don’t think a key point was really addressed:

If “the exact phrasing is not important” implies unbiased errors in phrasing, then it’s quite surprising that people tend to round off “your argument is bad” to “you’re dumb” so often.

If, as therefore seems probable, there is a motivated tendency to do that, then it’s clearly important for some purpose and we ought to be curious about what, when we’re trying to evaluate the relation of various conversational modes to truth-seeking vs other potentially competing goals.

comment by Zvi · 2018-12-03T15:45:21.985Z · LW(p) · GW(p)

An outright "You're dumb" is a mistake, period, unless you actually meant to say that the person is in fact dumb. This rounding is a pure bad, and there's no need of it. Adding 'being' or 'playing' or 'doing something' before the dumb is necessary.

Part of a good combative-type culture is that you mean what you say and say what you mean, so the rounding off here is a serious problem even before the (important) feelings/status issue.

comment by Ruby · 2018-12-05T21:48:53.482Z · LW(p) · GW(p)

I emphatically agree with Zvi about the mistakenness of saying "you're dumb."

In my own words:

1) "You're absolutely wrong" is strong language, but not unreasonable in a combative culture if that's what you believe and you're honestly reporting it.

2a) "You're saying/doing something dumb" becomes a bit more personal than making a statement about a particular view. Though I think it's rare that one has need to say this, and it's only appropriate when levels of trust and respect are very high.

2b) "You're being dumb" is a little harsher than "saying/doing something dumb." The two don't register as much different to me, however, though they do to Mary Chernyshenko?

3) "You're dumb" (introduced in this discussion by Benquo) is now making a general statement about someone else and is very problematic. It erodes the assumptions of respect which make combative-type cultures feasible in the first place. I'd say that conversations where people are calling others dumb to their faces are not situations I'd think of as healthy, good-faith, combative-type conversations.

[As an aside, even mild "that seems wrong to me"-type statements should be recognized as potentially combative. There are many contexts where any explicit disagreement registers as hostile or contrarian.]

comment by Mary Chernyshenko (mary-chernyshenko) · 2018-12-10T20:30:19.556Z · LW(p) · GW(p)

(Not important, but my supervisor was a great man who tended to revel in combat settings and to say smth like "You're being dumb" more often than other versions, & though everybody understood what he meant, it destroyed his team eventually. People found themselves better things to do, as, of course, people generally should. This is where I'm coming from.)

comment by Benquo · 2018-12-05T22:42:59.967Z · LW(p) · GW(p)

Your response here and Ruby's both seem rude to me: you're providing a (helpful) clarification, but doing that without either addressing the substantive issue directly or noting that you're not doing that. Ordinarily that wouldn't be a big deal, but when the whole point of my comment was that jimmy ignored Mary's substantive point I think it's obnoxious to then ignore my substantive point about Mary's substantive point being ignored.

comment by jimmy · 2018-12-14T19:11:37.247Z · LW(p) · GW(p)
[...]but when the whole point of my comment was that jimmy ignored Mary's substantive point I think it's obnoxious to then ignore my substantive point about Mary's substantive point being ignored.

FWIW, “jimmy ignored Mary’s substantive point” is both uncharitable and untrue, and both “making uncharitable and untrue statements as if they were uncontested fact” and “stating that you find things obnoxious in cases where people might disagree about what is appropriate instead of offering an argument as to why it shouldn’t be done” stand out as far more obnoxious to me.

I normally would just ignore it (because again, I think saying “I think that’s obnoxious” is generally obnoxious and unhelpful) but given your comment you’ll probably either find the feedback helpful or else it’ll help you change your mind about whether it's helpful to call out things one finds to be obnoxious :P

comment by jimmy · 2018-12-14T19:10:08.687Z · LW(p) · GW(p)

The exact phrasing isn't important, but conveying the right message is. As Zvi and Ruby note, that “being”/”doing”/etc part is important. “You’re dumb” is not an acceptable alternative because it does not mean the same thing. “Your argument is bad” is also unacceptable because it also means something completely different.

"Your argument is bad" only means “your argument is bad”, and it is possible to go about things in a perfectly reasonable way and still have bad arguments sometimes. It is completely different than a situation where someone is failing to notice problems in their arguments which would be obvious to them if they weren’t engaging in motivated cognition and muddying their own thinking. An inability to think well is quite literally what “dumb” is, and “being dumb” is a literal description of what they’re doing, not a sloppy or motivated attempt to say or pretend to be saying something else.

As far as “then why does it always come out that way”, besides the fact that “you’re being dumb” is far quicker to say than the more neutral “you’re engaging in motivated cognition”, in my experience it doesn’t always or even usually come out that way — and in fact often doesn’t come out at all, which was kinda the point of my original comment.

When it does take that form, there are often good reasons which go beyond “¼ the syllables” and are completely above board, explicit, and agreed upon by both parties. Counter-signalling respect and affection is perhaps the clearest example.

There are examples of people doing it poorly or with hostile and dishonest intent, of course, but the answer to “why do those people do it that way” is a very different question than what was asked.

comment by Mary Chernyshenko (mary-chernyshenko) · 2018-11-26T18:17:30.346Z · LW(p) · GW(p)

yeah, it's not important, it just keeps happening that way, doesn't it.

comment by Ruby · 2020-01-04T21:44:55.611Z · LW(p) · GW(p)

[Update: the new version is now live!!]

[Author writing here.]

The initial version of this post was written quickly on a whim, but given the value people have gotten from this post (as evidenced by the 2018 Review nomination and reviews), I think it warrants a significant update, which I plan to write in time for possible publication in a book, and ideally for the Review voting stage.

Things I plan to include in the update:

  • Although dichotomies (X vs. Y) are easy to remember and talk about, different conversational cultures differ on multiple dimensions [LW(p) · GW(p)], and that ought to be addressed explicitly.
  • It's easy to round off the cultures to something simpler than I intended, and I want to ward against that. For example, the healthy Combat Culture I advocate requires a basis of trust between participants. Absent that, you don't have the culture I was pointing at.
  • Relatedly, an updated post should incorporate some of the ideas I mentioned in the sequel [LW · GW] about the conditions that give rise to different cultures.
  • A concept which crystallized for me since writing the post is that of "the significance of a speech act" and how this crucially differs between cultures.
  • The tradeoffs between the two cultures can be addressed more explicitly.

Overall, I think my original post did something valuable in pointing clearly at two distinct regions in conversation-culture space and giving them sufficiently good labels that people could talk about them and better notice them in practice. The fact that they've gotten traction has surprised me a bit, since it suggests there was perhaps a hole in our communal vocab.

Crisply pointing at these two centroids in the large space necessarily meant sacrificing the nuance and detail of the space's multiple dimensions. I think the ideal treatment of the topic provides both easy-to-use handles for discussion and a more thorough theory of conversational cultures. In truth, probably a sequence of posts is warranted rather than a single behemoth post.

A point interesting to me is that the post differs somewhat in style from my other posts. With my other posts, I usually try to be very technically precise (and end up sounding a bit like a textbook or academic paper). This post's style was meant to be more engaging, more entertaining, more emotional, and I'm guessing that was part of its appeal. I'm not sure that's entirely a good thing, since trying to write in an evocative way is in tension with writing in the most technically precise, model-rich, theoretically-accurate way.

In updating the post, I expect to move it toward the latter style and make it a relatively "more boring" read even as I make it more accurate. I could imagine the ideal for authors being to have one highly-engaging, evocative post for a topic that draws people in, and another with the same models in their full technical glory.

Lastly, I'll mention that I think there's so much detail in this domain that alternative takes, e.g. Abram Demski's Combat vs Nurture & Meta-Contrarianism [LW · GW], feel like they're describing real and true things, yet somehow different things than what I addressed. I don't yet have a meta-theory that manages to unify all the models in this space, though that would be nice.

comment by Ruby · 2018-11-26T07:24:19.052Z · LW(p) · GW(p)

Two dimensions independent of the two cultures

Having been inspired by the comments here, I'm now thinking that there are two communication dimensions at play within the Cultures. The correlation between these dimensions and the Cultures is incomplete, which has been causing confusion.

1) The adversarial-collaborative dimension. Adversarial communication is each side attacking the other's views while defending their own. Collaborative communication is openness and curiosity to each other's ideas. As Ben Pace describes it [LW(p) · GW(p)]:

I'll say a thing, and you'll add to it. Lots of 'yes-and'. If you disagree, then we'll step back a bit, and continue building where we can both see the truth. If I disagree, I won't attack your idea, but I'll simply notice I'm confused about a piece of the structure we're building, and ask you to add something else instead, or wonder why you'd want to build it that way.

2) The "emotional concern and effort" dimension. Communication can be conducted with little attention or effort placed on ensuring the emotional comfort of the participants, often resulting in directness or bluntness (because it's assumed people are fine and don't need things softened). Alternatively, communication can be conducted with each participant putting in effort to ensure the other feels okay (feels validated/respected/safe/etc.). At this end of the spectrum, words, tone, and expression are carefully selected, using a model of the other person, to ensure they are taken care of.

My possible bucket error [LW · GW]

It was easy for me to notice "adversarial, low effort towards emotional comfort" as one cluster of communication behaviors and "collaborative, high concern" as another. Those two clusters are what I identified as Combat Culture and Nurture Culture.

Commenters here, including at least Raemon [LW(p) · GW(p)], Said [LW(p) · GW(p)], and Ben Pace [LW(p) · GW(p)], have rightly made comments to the effect that you can have communication where participants are being direct, blunt, and not proactively concerned for the feelings of the other while nonetheless still being open, being curious, and working collaboratively to find the truth with a spirit of, "being on the same team". This maybe falls under Combat Culture too, but it's a less central example.

On the other side, I think it's entirely possible to be acting combatively, i.e. with an external appearance of aggression and hostility, while nonetheless being very attentive to the feelings and experience of the other. Imagine two fencers sparring in the practice ring: during a bout, each is attacking and trying to win; however, they're also taking great care not to actually injure the other. They would stop the moment they suspected they had, and switch to an overtly nurturing mode.

A 2x2 grid?

One could create a 2x2 grid with the two dimensions described in this comment. Combat and Nurture cultures most directly fit in two of the quadrants, but I think the other two quadrants are populated by many instances of real-world communication. In fact, these other two quadrants might contain some very healthy communication.

comment by Benquo · 2018-11-26T13:32:51.551Z · LW(p) · GW(p)

I think this 2-dimension schema is pretty good. The original dichotomy bothered me a bit (like it was overwriting important detail), but this one doesn't.

One more correlated but distinct dimension I’d propose is whether the participants are trying to maximize (and therefore learn a lot) or minimize (and therefore avoid conflict) the scope of the argument.

US courts tend to take an (adversarial, low emotional concern, minimize) point of view, while scientific experiments are supposed to maximize the implications of disagreement.

comment by Ruby · 2018-11-26T19:36:37.038Z · LW(p) · GW(p)
I’d propose is whether the participants are trying to maximize (and therefore learn a lot) or minimize (and therefore avoid conflict) the scope of the argument.

Interesting, though I'm not sure I fully understand your meaning. Do you mind elaborating your examples a touch?

comment by Benquo · 2018-11-26T21:09:17.038Z · LW(p) · GW(p)

American judges like to decide cases in ways that clarify undetermined areas of law as little as possible. This is oriented towards preserving the stability of the system. If a case can be decided on a technicality that allows a court to avoid opining on some broader issue, the court will often take that way out. Consider the US Supreme Court's decision on the gay wedding cake - the court put off a decision on the core issue by instead finding a narrower procedural reason to doubt the integrity of the decisionmaking body that sanctioned the baker. Both sides in a case have an incentive to avoid asking courts to overturn precedents, since that reduces their chance of victory.

Plea bargains are another example where the thing the court is mainly trying to do is resolve conflicting interests with minimal work, not learn what happened.

In general, if you see the interesting thing about arguments as the social conflict, finding creative ways to avoid the need for the argument helps you defuse fights faster and more reliably, at the expense of learning.

By contrast, in science, the best experiments and ones scientists are rewarded for seeking out are ones that overturn existing models with high confidence. Surprising and new results are promoted rather than suppressed.

This is of course a bit of a stereotyped picture, and actual legal and scientific practice each resemble it only to varying degrees. Activist lawyers will sometimes deliberately try to force a court to decide a large issue instead of a small one. And on the other hand, actual scientific research also includes non-disagreement-oriented data-gathering and initial model formation. But the ideal of falsification does matter in science and affects the discourse.

comment by Ben Pace (Benito) · 2018-11-26T23:59:01.050Z · LW(p) · GW(p)

That is such a respectable social norm, to try and make as conservative a statement about norms as possible whenever you're given the opportunity (as opposed to many people's natural instincts which is to try to paint a big picture that seems important and true to them).

comment by Benquo · 2018-11-27T14:06:16.315Z · LW(p) · GW(p)

Could you clarify your references a bit? None of my guesses as to the connection between my comment and your reply are such a good fit as to make me confident that I've understood what you're saying.

comment by Ruby · 2018-11-26T23:11:44.834Z · LW(p) · GW(p)

Thanks, that was clarifying and helpful.

comment by Richard_Kennaway · 2018-11-26T14:52:30.989Z · LW(p) · GW(p)

This is a false dichotomy. Whenever someone marks two points on an otherwise featureless map, the rest of the space of possibilities that the world explodes with typically disappears from the minds of the participants. People end up saying "combat good, nurture bad", or the reverse, and then defend their position by presenting ways in which one is good and ways in which the other is bad. Or someone expatiates on the good and bad qualities of each one, in multiple permutations, and ends up with a Ribbonfarm post.

Said Achmiz has spoken eloquently of bad things that happen in "nurture culture". For examples of bad things in "combat culture", see any snark-based community, such as 4chan or rationalwiki. All of these things are destructive of epistemic quality. (If anything, nurture goes more wrong than combat, because it presents a smile, a knife in the back, and crocodile tears, while snark wields its weapons openly.)

When you leave out all of the ways that either supposed culture can go wrong, what is left of them? In a culture without snark or smothering, good ideas will be accepted, and constructively built on, not extinguished. Bad ideas will be pointed out as such; if there is something close that is better, that can be pointed out; if an idea is unsalvageable, that also.

Several Japanese terms have gained currency in the rationalist community, such as tsuyoku naritai and isshoukenmei. Here is another that I think deserving of wider currency: 切磋琢磨, sessa takuma, joyfully competitive striving for a common purpose.

comment by Richard_Kennaway · 2018-11-26T18:35:28.582Z · LW(p) · GW(p)

One might even say that all functioning communities are alike; each dysfunctional community is dysfunctional in its own way. "For men are good in but one way, but bad in many."

comment by ChristianKl · 2018-11-11T12:49:07.195Z · LW(p) · GW(p)
Communication can be direct and unambiguous when it doesn’t need to be “cushioned” to protect feelings.

I don't believe that communication becomes direct in a combative discussion. Participants in a combative discussion usually try to hide spots where they or their arguments are vulnerable. This means it's harder to get at the true rejection [LW · GW] of the other person.

There's a huge problem in our Western culture where having knowledge is seen as the ability to have an opinion about a topic that can be effectively defended intellectually, instead of knowledge being the ability to interact directly with the real world or to make predictions about it.

In a combative environment I can't speak about those things that I know to be true where I can make good predictions but that I can't defend intellectually in a way that makes sense to the person I'm speaking with.

comment by AdrianSmith · 2018-11-13T19:56:00.726Z · LW(p) · GW(p)

I find this to be true, but only to a point. Those blind spots in our beliefs are usually subconscious, and so in non-combative discussion they just never come up at all. In combative discussion you find yourself defending them even without consciously realizing why you're so worried about that part of your belief (something something Belief in Belief).

I almost always find that when I've engaged in a combative discussion I'll update around an hour later, when I notice ways I defended my position that are silly in hindsight.

comment by Said Achmiz (SaidAchmiz) · 2018-11-14T00:32:37.120Z · LW(p) · GW(p)

This is an excellent point, and I too have had this experience.

Very relevant to this are Arthur Schopenhauer’s comments in the introduction to his excellent Die Kunst, Recht zu behalten (usually translated as The Art of Controversy). Schopenhauer comments on people’s vanity, irrationality, stubbornness, and tendency toward rationalization:

If human nature were not base, but thoroughly honourable, we should in every debate have no other aim than the discovery of truth; we should not in the least care whether the truth proved to be in favour of the opinion which we had begun by expressing, or of the opinion of our adversary. That we should regard as a matter of no moment, or, at any rate, of very secondary consequence; but, as things are, it is the main concern. Our innate vanity, which is particularly sensitive in reference to our intellectual powers, will not suffer us to allow that our first position was wrong and our adversary’s right. The way out of this difficulty would be simply to take the trouble always to form a correct judgment. For this a man would have to think before he spoke. But, with most men, innate vanity is accompanied by loquacity and innate dishonesty. They speak before they think; and even though they may afterwards perceive that they are wrong, and that what they assert is false, they want it to seem the contrary. The interest in truth, which may be presumed to have been their only motive when they stated the proposition alleged to be true, now gives way to the interests of vanity: and so, for the sake of vanity, what is true must seem false, and what is false must seem true.

But, says Schopenhauer, these very tendencies may be turned around and harnessed to our service:

However, this very dishonesty, this persistence in a proposition which seems false even to ourselves, has something to be said for it. It often happens that we begin with the firm conviction of the truth of our statement; but our opponent’s argument appears to refute it. Should we abandon our position at once, we may discover later on that we were right after all: the proof we offered was false, but nevertheless there was a proof for our statement which was true. The argument which would have been our salvation did not occur to us at the moment. Hence we make it a rule to attack a counter-argument, even though to all appearances it is true and forcible, in the belief that its truth is only superficial, and that in the course of the dispute another argument will occur to us by which we may upset it, or succeed in confirming the truth of our statement. In this way we are almost compelled to become dishonest; or, at any rate, the temptation to do so is very great. Thus it is that the weakness of our intellect and the perversity of our will lend each other mutual support; and that, generally, a disputant fights not for truth, but for his proposition, as though it were a battle pro aris et focis. He sets to work per fas et nefas; nay, as we have seen, he cannot easily do otherwise. As a rule, then, every man will insist on maintaining whatever he has said, even though for the moment he may consider it false or doubtful.

[emphasis mine]

Schopenhauer is saying that—to put it in modern terms—we do not have the capability to instantly evaluate all arguments put to us, to think in the moment through all their implications, to spot flaws, etc., and to perform exactly the correct update (or lack of update). So if we immediately admit that our interlocutor is right and we are wrong, as soon as this seems to be the case, then we can very easily be led into error!

So we don’t do that. We defend our position, as it stands at the beginning. And then, after the dispute concludes, we can consider the matter at leisure, and quite possibly change our minds.

Schopenhauer further comments that, as far as the rules and “stratagems” of debate (which form the main part of the book)—

In following out the rules to this end, no respect should be paid to objective truth, because we usually do not know where the truth lies. As I have said, a man often does not himself know whether he is in the right or not; he often believes it, and is mistaken: both sides often believe it. Truth is in the depths. At the beginning of a contest each man believes, as a rule, that right is on his side; in the course of it, both become doubtful, and the truth is not determined or confirmed until the close.

(Note the parallel, here, to adversarial collaborations—and recall that in each of the collaborations in Scott’s contest, both sides came out of the experience having moved closer to their opponent/collaborator’s position, despite—or, perhaps, because of?—the process involving a full marshaling of arguments for their own initial view!)

So let us not demand—neither of our interlocutors, nor of ourselves—that a compelling argument be immediately accepted. It may well be that stubborn defense of one’s starting position—combined with a willingness to reflect, after the dispute ends, and to change one’s mind later—is a better path to truth.

comment by ChristianKl · 2018-11-14T12:18:32.719Z · LW(p) · GW(p)

I see no a priori reason to think that the average adversarial collaboration was combative in nature. The whole idea was to get the people to collaborate, and that collaboration will lead to a good outcome.

comment by ChristianKl · 2018-11-14T12:21:28.184Z · LW(p) · GW(p)

Understanding the blind spots of the person you are talking with and bringing them to their awareness is a skill. It might very well be that you are not used to talking to people who have that skill set.

If you follow a procedure like double crux and the person you are talking with has a decent skill level, there's a good chance that they will point out blind spots to you.

I almost always find that when I've engaged in a combative discussion I'll update around an hour later, when I notice ways I defended my position that are silly in hindsight.

"Silly" is a pretty big requirement. It would be better if people didn't need to believe that their old positions were silly in order to update.

Professional philosophers, as a class whose culture is combative, have a bad track record of changing their opinions when confronted with opposing views. Most largely still hold the positions that were important to them a decade ago.

comment by TAG · 2018-11-14T13:11:31.462Z · LW(p) · GW(p)

Professional philosophers, as a class whose culture is combative, have a bad track record of changing their opinions when confronted with opposing views.

As opposed to whom? Who's good?

comment by ChristianKl · 2018-11-15T06:43:40.630Z · LW(p) · GW(p)

In Silicon Valley startup culture there are many people who are able to pivot when it turns out that their initial assumptions were wrong.

Nassim Taleb claims in his books that successful traders are good at changing their opinions when there are good arguments to change positions.

CFAR went from teaching Bayes' rule and Fermi estimation to teaching parts work. A lot of their curriculum has changed.

comment by Ruby · 2018-11-13T20:10:14.899Z · LW(p) · GW(p)
I almost always find that when I've engaged in a combative discussion I'll update around an hour later, when I notice ways I defended my position that are silly in hindsight.

I second that experience.

comment by PaulK · 2018-11-11T03:28:55.147Z · LW(p) · GW(p)

Great essay!

Another aspect of this divide is about articulability. In a nurturing context, it's possible to bring something up before you can articulate it clearly, and even elicit help articulating it.

For example, "Something about <the proposal we're discussing> strikes me as contradictory -- like it's somehow not taking into account <X>?". And then the other person and I collaborate to figure out if and what exactly that contradiction is.

Or more informally, "There's something about this that feels uncomfortable to me". This can be very useful to express even when I can't say exactly what it is that I'm uncomfortable with, IF my conversation partner respects that, and doesn't dismiss what I'm saying because it's not precise enough.

In a combative context, on the other hand, this seems like a kind of interaction you just can't have (I may be wrong, I don't have much experience in them). Because there, inarticulateness just reads as your arguments being weak. And you don't want to run the risk of putting half-baked ideas out there and having them swatted down. So your only real choices are to figure out how to articulate things, by yourself, on the fly, or remain silent.

And that's too bad, because the edge of what can be articulated is IME the most interesting place to be.

(Gendlin's Focusing is an extreme example of being at the edge of what can be articulated, and in the paired version you have one person whose job is basically to be a nurturing & supportive presence.)

comment by Bucky · 2019-12-16T16:41:46.987Z · LW(p) · GW(p)

Most people who commented on this post seemed to recognise it from their experience and get a general idea of what the different cultures look like (although some people differ on the details, see later). This is partly because it is explained well but also because I think the names were chosen well.

Here are a few people saying that they have used/referenced it: 1 [LW(p) · GW(p)], 2 [LW(p) · GW(p)], 3 [LW(p) · GW(p)] plus me [LW(p) · GW(p)].

From a LW standpoint thinking about this framing helps me to not be offended by blunt comments. My family was very combat culture but in life in general I find people are unwilling to say “you’re wrong” so it now comes as a bit of a shock. Now when someone says something blunt on LW I just picture it being said by my older brother and realise that probably no offense is meant.

Outside of LW, this post has caused me to add a bit into my induction of new employees at work. I encourage a fairly robust combat culture in my department but I realise that some people aren’t used to this so I try to give people a warning up front and make sure they know that no offense is meant.

***

There were a few examples in the comments where it seemed like the distinction between the two cultures wasn’t quite as clear as it could be.

Ruby updated [LW(p) · GW(p)] the original distinction into two dimensions in a later comment – the “adversarial-collaborative” dimension and the “emotional concern and effort” dimension. The central combat culture was “adversarial + low emotional effort” and nurture was “collaborative + high emotional effort”. However there are cultures which fit in the other 2 possible pairings and the original framing suppressed that somewhat.

I personally would like to see a version of the OP which includes that distinction and think that it would likely be a good fit for the 2018 review. Short of making that distinction, the OP allows for people to fit, say, “collaborative + low emotional effort” into either the nurture or combat culture category. If the combat-nurture distinction is to be used as common knowledge then I worry that this will cause confusion between people with different interpretations.

My other worry about including this in the 2018 review is a claim of what the default should be. If the post claims that nurture culture should be the default, does that then seem like this is how LW should be? This counts even more as the post is by a member of the LW team.

Finally, in my mind, if this post (or a version of it) were included in the 2018 review, it would benefit from including something like the excellent section [LW(p) · GW(p)] which Ruby later moved into the comments. For the post it made sense to move it to the comments, but it would be a shame to leave it out of the review entirely.

comment by Bucky · 2020-01-18T21:46:49.332Z · LW(p) · GW(p)

Just to keep this up-to-date, I think V2 of this post addresses my concerns and I consider this an excellent fit for the 2018 review.

comment by Ruby · 2019-12-22T19:14:02.730Z · LW(p) · GW(p)

My other worry about including this in the 2018 review is a claim of what the default should be. If the post claims that nurture culture should be the default, does that then seem like this is how LW should be? This counts even more as the post is by a member of the LW team.


I agree it should be clear about which normative stances taken in the post are statements about what should be true of LW.

At the time I wrote this post, I'd begun discussions about joining the LW team and had done maybe a couple dozen hours of remote analytics work, and I began a full-time trial but I didn't become a full-time team member until March 2019. I'd be more careful now.

The LW team doesn't currently have a firm stance on where LW should fall on the dimensions outlined in the OP/discussion; that's something we're likely to work on in the next quarter. We've got the Frontpage commenting guidelines so far, but those don't really state things in the terms of Combat/Nurture.

My own thinking on the topic has been enriched by my much greater participation in LW discussion, including discussion around communication styles. I'd begun typing a paragraph here of some of my current thoughts, but probably it's best to hold off till I've thought more at length and am speaking alongside the rest of team. (An update in recent discussions of moderation and conversation norms is that the team should be careful to not confuse people by saying different things individually [LW · GW].)

I think it is safe for me to say that while I still think that something in the Nurture cluster is a good default for most contexts, that doesn't mean that LW might not have good reasons to deviate from that default.

comment by Ben Pace (Benito) · 2019-12-22T19:24:50.766Z · LW(p) · GW(p)

but probably it's best to hold off till I've thought more at length and am speaking alongside the rest of team.

I was gonna leave a comment reminding you that you should always feel free to speak for yourself, and then I hit

(An update in recent discussions of moderation and conversation norms is that the team should be careful to not confuse people by saying different things individually [LW · GW].)

If I read you right, this hasn't been my own update, so I guess I'll tell you to be careful what you say on behalf of the team without checking for consensus ;-) I agree some users have been confused, but the result mustn't be to retreat to only saying consensus things. I might be open to adding more disclaimers or something, but overall I really care that I don't give up the ability to just think for myself out loud on LW on basically all topics relating to LW. 

I agree writing about moderation in particular is an unusually careful topic where I want to take extra care to signal what is consensus/actionable and what is me just thinking aloud. But I still stand by this: sharing your own thoughts, saying "I" everywhere (I just edited this very comment to own all of my thoughts more), is still pretty great, and it'd be bad if you felt the general need to get team consensus.

comment by Bucky · 2020-01-09T21:19:17.416Z · LW(p) · GW(p)

FWIW, I agree that it is good/important for mods to be able to state their own opinions freely. My only worry was that a book form of the review might lose this nuance if this is not stated explicitly.

comment by DanArmak · 2020-01-18T18:51:35.669Z · LW(p) · GW(p)

This post is well written and not over-long. If the concepts it describes are unfamiliar to you, it is a well written introduction. If you're already familiar with them, you can skim it quickly for a warm feeling of validation.

I think the post would be even better with a short introduction describing its topic and scope, but I'm aware that other people have different preferences. In particular:

  • There are more than two 'cultures' or styles of discussion, perhaps many more. The post calls this out towards the end (apparently this is new in v2).
  • The post gives two real examples of Combat Culture, and only one made-up scenario of Nurture Culture. It does not attempt to ground the discussion in anything quantitative - how common these cultures are, what they correlate with, how to recognize or test for them, how gradually they may shade into each other or into something else altogether.

I don't want to frame these as shortcomings; the post is still useful and interesting without them!

comment by Ben Pace (Benito) · 2018-11-25T19:18:42.725Z · LW(p) · GW(p)

At first, I felt that 'nurture' was a terrible name, because the primary thing I associated with the idea you're discussing is that we are building up an axiomatised system together. Collaboratively. I'll say a thing, and you'll add to it. Lots of 'yes-and'. If you disagree, then we'll step back a bit, and continue building where we can both see the truth. If I disagree, I won't attack your idea, but I'll simply notice I'm confused about a piece of the structure we're building, and ask you to add something else instead, or wonder why you'd want to build it that way. I agree this is more nurturing, but that's not the point. The point is collaboration.

But then my model of Said said "What? I don't understand why this sort of collaborative exploration isn't perfectly compatible with combative culture - I can still ask all those questions and make those suggestions" which is a point he has articulated quite clearly down-thread (and elsewhere). So then I got to thinking about the nurturing aspect some more.

I'd characterise combative culture as working best in a professional setting, where it's what one does as one's job. When I think of productive combative environments, I visualise groups of experts in healthy fields like math or hard science or computer science. The researchers will bring powerful and interesting arguments forth to each other, but typically they do not discuss nor require an explicit model of how another researcher in their field thinks. And symmetrically, each researcher is responsible for how they themselves think - that's their whole job! They'll note they were wrong, and make some updates about what cognitive heuristics they should be using, but not bring that up in the conversation, because that's not the point of the conversations. The point of the conversation is, y'know, whether the theorem is true, or whether this animal evolved from that, or whether this architecture is more efficient when scaled. Not our emotions or feelings.

Sure, we'll attack each other in ways that can often make people feel defensive, but in a field where everyone has shown their competence (e.g. PhDs) we have common knowledge of respect for one another - we don't expect it to actually hurt us to be totally wrong on this issue. It won't mean I lose social standing, or stop being invited to conferences, or get fired. I mean, obviously it needs to correlate, but never does any single sentence matter or single disagreement decide something that strong. Generally the worst that will happen to you is that you just end up a median scientist/researcher, and don't get to give the big conference talks. There's a basic level of trust as we go about our work, which means combative culture is not a real problem.

I think this is good. It's hard to admit you're wrong, but if we have common knowledge of respect, then this makes the fear smaller, and I can overcome it.

I think one of the key motivations for nurturing culture is that we don't have common knowledge that everything will be okay in many parts of our lives, and in the most important decisions in our lives way more is at stake than in academia. Some example decisions where being wrong about them has far worse consequences for your life than being wrong about whether Fermat's Last Theorem is true or false:

  • Will my husband/wife and I want the same things in the next 50 years?
  • Will my best friends help me keep up the standard of personal virtue I care about in myself, or will they not notice if I (say) lie to myself more and more?
  • I'm halfway through med school. Is being a doctor actually hitting the heavy tails of impact I could have with my life?

These questions have much more at stake. I know for myself, when addressing them, I feel emotions like fear, anger, and disgust.

Changing my mind on the important decisions in my life, especially those that affect my social standing amongst my friends and community, is far harder than changing my mind about an abstract topic where the results don't have much direct impact on my life.

Not that computer science or chemistry or math aren't incredibly hard, it's just that to do good work in these fields does not require the particular skill of believing things even when they'll lower your social standing.

I think if you imagine the scientists above applying combative culture to their normal lives (e.g. whether they feel aligned with their husband/wife for the next 50 years), and really trying to do it hard, they'd immediately go through an incredible amount of emotional pain until it was too much to bear and then they'd stop.

If you want someone to be open to radically changing their job, lifestyle, close relationships, etc, some useful things can be:

  • Have regular conversations with norms such that the person will not be immediately judged if they say something mistaken, or if they consider a hypothesis that you believe to be wrong.
  • If you're discussing with them an especially significant belief and whether to change it, keep a track of their emotional state, and help them carefully walk through emotionally difficult steps of reasoning.

If you don't, they'll put a lot of effort into finding any other way of shooting themselves in the foot that's available, rather than realise that something incredibly painful is about to happen to them (and has been happening for many years).

I think that trying to follow this goal to its natural conclusions will lead you to a lot of the conversational norms that we're calling 'nurturing'.

I think Qiaochu once said something like "If you don't regularly feel like your soul is being torn apart, you're not doing rationality right." Those weren't his specific words, but I remember the idea being something like that.

comment by Said Achmiz (SaidAchmiz) · 2018-11-25T22:07:22.329Z · LW(p) · GW(p)

I think one of the key motivations for nurturing culture is that we don’t have common knowledge that everything will be okay in many parts of our lives, and in the most important decisions in our lives way more is at stake than in academia. Some example decisions where being wrong about them has far worse consequences for your life than being wrong about whether Fermat’s Last Theorem is true or false:

I do not really agree with your view here, but I think what you say points to something quite important.

I have sometimes said that personal loyalty is one of the most important virtues. Certainly it has always seemed to me to be a neglected virtue, in rationalist circles. (Possibly this is because giving personal loyalty pre-eminence in one’s value system is difficult, at best, to reconcile with a utilitarian moral framework. This is one of the many reasons I am not a utilitarian.)

One of the benefits of mutual personal loyalty between two people is that they can each expect not to be abandoned, even if the other judges them to be wrong. This is patriotism in microcosm: “my country, right or wrong” scaled down to the relation between individuals—“my friend, right or wrong”. So you say to me: “You are wrong! What you say is false; and what you do is a poor choice, and not altogether an ethical one.” And yet I know that we remain friends; and you will stand by me, and support me, and take risks for me, and make sacrifices for me, if such are called for.

There are limits, of course; thresholds which, if crossed, strain personal loyalty to its limit, and break it. Betrayal of trust is one such. Intentional, malicious action of one’s friend against oneself is another; so is failure to come to one’s aid, in a dark hour. But these are high thresholds. It is near-impossible to exceed them accidentally. (And if you think you know exactly what I’m talking about, then ask yourself: if your friend committed murder, would you turn them in to the police? If the answer is “yes, of course”, then some inferential distance yet remains…)

To a friend like this, you can say, without softening the blow: “Wrong! You’re utterly wrong! This is foolish!”—and without worrying that they will not confide in you, for fear of such a judgment. And from a friend like this, you can hear a judgment like that, and yet remain certain that your friendship is not under the least threat; and so, in a certain important sense, it does not hurt to be judged… no more, at least, than it hurts to judge yourself.

Friendship like this… is it “Nurture Culture”? Or “Combat Culture”?

I think Qiaochu once said something like “If you don’t regularly feel like your soul is being torn apart, you’re not doing rationality right.” Those weren’t his specific words, but I remember the idea being something like that.

The consequence of what I say above is this: it is precisely this state (“soul being torn apart”) which I think is critically important to avoid, in order to be truly rational.

comment by Ben Pace (Benito) · 2018-11-26T15:28:12.963Z · LW(p) · GW(p)

Thanks for your reply; I also do not agree with it but found that it points to some important ideas. (In the past I have tended to frame the conversation more about 'trust' rather than 'personal loyalty', but I think with otherwise similar effect.)

The first question I want to ask is: how do you get to the stage where personal loyalty is warranted?

From time to time, I think back to the part of Harry Potter and the Philosopher's Stone where Harry, Hermione and Ron become loyal to one another - the point where they build the strength of relationship where they can face down Voldemort without worrying that one another may leave out of fear.

It is after Harry and Ron run in to save Hermione from a troll.

The people who I have the most loyalty to in the world are those who have proven that it is there, with quite costly signals. And this was not a stress-free situation. It involved some pressure on each of our souls, though the important thing was that we came out with our souls intact, and also built something we both thought truly valuable.

So it is not clear to me that you can get to the stage of true loyalty without facing some trolls together, and risking actually losing.

The second and more important question I want to ask is: do you think that having loyal friends is sufficient to achieve your goals without regularly feeling like your soul is being torn apart?

You say:

The consequence of what I say above is this: it is precisely this state (“soul being torn apart”) which I think is critically important to avoid, in order to be truly rational.

Suppose I am confident that I will not lose my loyal friend.

Here are some updates about the world I might still have to make:

  • My entire social circle gives me social gradients in directions I do not endorse, and I should leave and find a different community
  • There is likely to be an existential catastrophe in the next 50 years and I should entirely re-orient my life around preventing it
  • The institution I'm rising up in is fundamentally broken, and for me to make real progress on problems I care about I should quit (e.g. academia, a bad startup).
  • All the years of effort I've spent on a project or up-skilling in a certain domain has been either useless or actively counterproductive (e.g. working in politics, a startup that hasn't found product-market fit) and I need to give up and start over.

The only world in which I could feel confident that I wouldn't have to go through any of these updates is one in which the institutions are largely functional, and I feel that climbing my local social gradients will align with my long term goals. This is not what I observe. [? · GW]

Given the world I observe, it seems impossible for me to not pass through events and updates that cause me significant emotional pain and significant loss of local social status, whilst also optimising for my long term goals. So I want my close allies, the people loyal to me, the people I trust, to have the conversational tools (cf. my comment above) to help me keep my basic wits of rationality about me while I'm going through these difficult updates and making these hard decisions.

I am aware this is not a hopeful comment. [LW · GW] I do think it is true.

---

Edit: changed 'achieve your goals while staying rational' to 'achieve your goals without regularly feeling like your soul is being torn apart', which is what I meant to say.

comment by Said Achmiz (SaidAchmiz) · 2018-11-27T08:52:35.622Z · LW(p) · GW(p)

There’s a lot I have to say in response to your comment.

I’ll start with some meta commentary:

From time to time, I think back to the part of Harry Potter and the Philosopher’s Stone where Harry, Hermione and Ron become loyal to one another—the point where they build the strength of relationship where they can face down Voldemort without worrying that one another may leave out of fear.

It is after Harry and Ron run in to save Hermione from a troll.

Harry and Ron never ran in to save Hermione from a troll, never became loyal to one another as a result, never built any strength of relationship, and never faced down Voldemort. None of these events ever happened; and Harry, Ron, and Hermione, in fact, never existed.

I know, I know: I’m being pedantic, nitpicking, of course you didn’t mean to suggest that these were actual events, you were only using them as an example, etc. I understand. But as Eliezer wrote:

What’s wrong with using movies or novels as starting points for the discussion? No one’s claiming that it’s true, after all. Where is the lie, where is the rationalist sin? …

Not every misstep in the precise dance of rationality consists of outright belief in a falsehood; there are subtler ways to go wrong.

Are the events depicted in Harry Potter and the Philosopher’s Stone—a children’s story about wizards (written by an inexperienced writer)—representative of how actual relationships work, between adults, in our actual reality, which does not contain magic, wizards, or having to face down trolls in between classes? If they are, then you should have no trouble calling to mind, and presenting, illustrative examples from real life. And if you find yourself hard-pressed to do this, well…

Let me speak more generally, and also more directly. As I have previously obliquely suggested [LW(p) · GW(p)], I think it is high time for a moratorium, on Less Wrong, on fictional examples used to illustrate claims about real people, real relationships, real interpersonal dynamics, real social situations, etc. If I had my way, this would be the rule: if you can’t say it without reference to examples from fiction, then don’t say it. (As for using Harry Potter as a source of examples—that should be considered extremely harmful, IMHO.)

That this sort of thing distorts your thinking is, I think, too obvious to belabor, and in any case Eliezer did an excellent job with the above-linked Sequence post. But another problem is that it also muddies communication, such as in the case of this line:

So it is not clear to me that you can get to the stage of true loyalty without facing some trolls together, and risking actually losing.

In the real world, there are no trolls. Clearly, you’re speaking metaphorically. But what is the literal interpretation? What are “trolls”, in this analogy? Precisely? Is it “literal life or death situations, where you risk actually, physically dying?” Surely not… but then—what? I really don’t know. (I have some thoughts on what is and what is not necessary to “get to the stage of true loyalty”, but I really have no desire to respond to a highly ambiguous claim; it seems likely to result in us wasting each other’s time and talking past one another.)

Ok, enough meta, now for some object-level commentary:

The second and more important question I want to ask is: do you think that having loyal friends is sufficient to achieve your goals without regularly feeling like your soul is being torn apart?

Having loyal friends is not sufficient to achieve your goals, period, without even tacking on any additional criteria. This seems very obvious to me, and it seems unlikely that you wouldn’t have noticed this, so I have to assume I have somehow misunderstood your question. Please clarify.

Here are some updates about the world I might still have to make:

Of the potential updates you list, it seems to me that some of them are not like the others. To wit:

My entire social circle gives me social gradients in directions I do not endorse, and I should leave and find a different community

In my case, I have great difficulty imagining what this would mean for me. I do not think it applies. I don’t know the details of your social situation, but I conjecture that the cure for this sort of possibility is to find your social belonging less in “communities” and more in personal friendships.

There is likely to be an existential catastrophe in the next 50 years and I should entirely re-orient my life around preventing it

Note that this combines a judgment of fact with… an estimate of effectiveness of a certain projected course of action, I suppose? My suggestion would be to disentangle these things. Once this is done, I don’t see why there should be any more “soul tearing apart” involved here than in any of a variety of other, much more mundane, scenarios.

The institution I’m rising up in is fundamentally broken, and for me to make real progress on problems I care about I should quit (e.g. academia, a bad startup).

Indeed, I have experience with this sort of thing. Knowing that, regardless of the outcome of the decision in question, I would have the unshakable support of friends and family, removed more or less all the “soul tearing apart” from the equation.

All the years of effort I’ve spent on a project or up-skilling in a certain domain has been either useless or actively counterproductive (e.g. working in politics, a startup that hasn’t found product-market fit) and I need to give up and start over.

Indeed, this can be soul-wrenching. My comment on the previous point applies, though, of course, in this case it does not go nearly as far toward full amelioration as in the previous case. But, of course, this is precisely the sort of situation one should strive to avoid (cf. the principle of least regret). Total avoidance is impossible, of course, and this sort of situation is the (hopefully) rare exception to the heuristic I noted.

Given the world I observe, it seems impossible for me to not pass through events and updates that cause me significant emotional pain and significant loss of local social status, whilst also optimising for my long term goals. So I want my close allies, the people loyal to me, the people I trust, to have the conversational tools (cf. my comment above) to help me keep my basic wits of rationality about me while I’m going through these difficult updates and making these hard decisions.

Meaning no offense, but: if you’re losing significant (and important) social status in any of the situations listed above, then you are, I claim, doing something wrong (specifically, organizing your social environment very sub-optimally).

And in those cases where great strain is unavoidable (such as in the last example you listed), it is precisely a cold, practical, and un-softened judgment, which I most desire and most greatly value, from my closest friends. In such cases—where the great difficulty of the situation is most likely to distort my own rationality—“nurturing” takes considerably less caring and investment, and is much, much less valuable, than true honesty, and a clear-eyed perspective on the situation.

comment by ChristianKl · 2018-11-28T15:14:59.424Z · LW(p) · GW(p)
I have sometimes said that personal loyalty is one of the most important virtues. Certainly it has always seemed to me to be a neglected virtue, in rationalist circles.

I'm surprised to hear that sentiment from you when you also speak against the value of rationalists doing community things together.

Doing rituals together is a way to create the emotional bonds that in turn create mutual loyalty. That's why fraternities have their initiation rituals.

comment by Said Achmiz (SaidAchmiz) · 2018-11-28T15:34:37.773Z · LW(p) · GW(p)

I have sometimes said that personal loyalty is one of the most important virtues. Certainly it has always seemed to me to be a neglected virtue, in rationalist circles.

I’m surprised to hear that sentiment from you when you also speak against the value of rationalists doing community things together.

These sentiments are not only not opposed—they are, in fact, inextricably linked. That this seems surprising to you is… unfortunate; it means the inferential distance between us is great. I am at a loss for how to bridge it, truth be told. Perhaps someone else can try.

Doing rituals together is a way to create the emotional bonds that in turn create mutual loyality. That’s why fraternities have their initiation rituals.

You cannot hack your way to friendship and loyalty—and (I assert) bad things happen if you try. That you can (sort of) hack your way to a sense of friendship and loyalty is not the same thing (but may prevent you from seeing the fact of the preceding sentence).

comment by Benquo · 2018-11-28T18:46:55.695Z · LW(p) · GW(p)

What does it look like for this sort of thing to be done well? Can you point to examples?

comment by Said Achmiz (SaidAchmiz) · 2018-11-28T19:35:48.006Z · LW(p) · GW(p)

I am unsure what you’re asking. What is “this sort of thing”? Do you mean “friendship and loyalty”? I don’t know that I have much to tell you, on that subject, that hasn’t been said by many people, more eloquent and wise than I am. (How much has been written about friendship, and about loyalty? This stuff was old hat to Aristotle…)

These are individual virtues. They are “done”—well or poorly—by individuals. I do not think there is any good way to impose them from above. (You can, perhaps, encourage a social environment where such virtues can more readily be exercised, and avoid encouraging a social environment where they’re stifled. But the question of how to do this is… complex; beyond the scope of this discussion, I think, and in any case not something I have anything approaching a solid grasp on.)

comment by Benquo · 2018-11-28T20:49:49.747Z · LW(p) · GW(p)
You can, perhaps, encourage a social environment where such virtues can more readily be exercised, and avoid encouraging a social environment where they’re stifled.

I thought that's what you were talking about: that some ways of organizing people fight or delegitimize personal loyalty considerations, while others work with it or at least figure out how not to actively destroy it. It seemed to me like you were saying that the way Rationalists try to do community tends to be corrosive to this other thing you think is important.

comment by Said Achmiz (SaidAchmiz) · 2018-11-28T22:34:32.427Z · LW(p) · GW(p)

That’s… at once both close to what I’m saying, and also not really what I’m saying at all.

I underestimated the inferential distance here, it seems; it’s surprising to me, how much what I am saying is not obvious. (If anything, I expected the reaction to be more like “ok, yes, duh, that is true and boring and everyone knows this”.)

I may try to write something longer on this topic, but I fear it would have to be much longer; the matters that this question touches upon range wide and deep…

comment by Benquo · 2018-11-28T22:53:49.509Z · LW(p) · GW(p)

I hope you do find the time to write about this in depth.

comment by Ruby · 2018-11-29T01:43:27.485Z · LW(p) · GW(p)

Seconded. Would like to hear the in-depth version.

comment by Raemon · 2018-11-28T20:09:04.139Z · LW(p) · GW(p)
I don’t know that I have much to tell you, on that subject, that hasn’t been said by many people, more eloquent and wise than I am

Sure, but as such, there's a lot of different approaches to how to do them well (some mutually exclusive), so pinpointing which particular things you're talking about seems useful.

(I do think I have an idea of what you mean, and might agree, but the thing you're talking about is probably about as clear to me Ben's "fight trolls" comment was to you)

Seems fine to table it for now if it doesn't seem relevant though.

comment by ChristianKl · 2018-11-28T16:59:23.689Z · LW(p) · GW(p)

Do you think that existing societal organisations like fraternities aren't built with the goal of facilitating friendship and loyalty? Do you think they fail at that and produce bad results?

comment by Said Achmiz (SaidAchmiz) · 2018-11-28T17:26:09.485Z · LW(p) · GW(p)

Do you think that existing societal organisations like fraternities aren’t built with the goal of facilitating friendship and loyalty?

It varies. That is a goal for some such organizations.

Do you think they fail at that and produce bad results?

I think they produce bad results, while failing at the above goal, approximately to the degree that they rely on “emotion hacking”.

comment by ChristianKl · 2018-11-28T14:45:59.228Z · LW(p) · GW(p)

Interestingly, there's currently a highly upvoted question on Academia.StackExchange titled Why don't I see publications criticising other publications? which suggests that academics don't engage in combat culture within papers.

comment by ChristianKl · 2018-11-28T15:06:18.503Z · LW(p) · GW(p)

Most academic work stirs little emotion, but that's not true for all academic work. Ioannidis wrote about how he might be killed for his work.

Whenever work that's revolutionary in the Kuhnian sense is done, there's the potential for the loss of social status.

comment by Ruby · 2020-01-17T20:27:18.894Z · LW(p) · GW(p)

SUPPLEMENTAL CONTENT FOR V2
 

Please do post comments at the top level.

comment by Ruby · 2020-01-17T20:29:08.454Z · LW(p) · GW(p)

Changes from V1 to V2

This section describes the most significant changes from version 1 to version 2 of this post:

  • The original post opened with a strong assertion that it intended to be descriptive. In V2, I’ve been more prescriptive/normative.
  • I clarified that the key distinction between Combat and Nurture is the meaning assigned to combative speech-acts.
  • I changed the characterization of Nurture Culture to be less about being “collaborative” (which can often be true of Combat), and more about intentionally signaling friendliness/non-hostility.
  • I expanded the description of Nurture Culture which in the original was much shorter than the description of Combat, including the addition of a hopefully evocative example.
  • I clarify that Combat and Nurture aren’t a complete classification of conversation-culture space– far from it. And further describe degenerate neighbors: Combat without Safety, Nurture without Caring.
  • Adding appendices which cover:
    • Dimensions along which conversations vary.
    • Factors that contribute to social trust.

 

Shout out to Raemon [LW · GW], Bucky [LW · GW], and Swimmer963 [LW · GW] for their help with the 2nd Version.

comment by Ruby · 2020-01-19T00:01:06.199Z · LW(p) · GW(p)

Appendix 4: Author's Favorite Comments

Something I've never had the opportunity to do before, since I've never revised a post before, is collect the comments that I think added the most to the conversation by building on, responding to, questioning, or contradicting the post.

Here's that list for this post:

  • This comment [LW(p) · GW(p)] from Said Achmiz that seems correct to me in both its points: 1) that Nurture-like Cultures can be abused politically, and 2), that close interpersonal relationships trend Combative as the closeness grows.
  • Benquo's comment [LW(p) · GW(p)] about the dimension of whether participants are trying to minimize or maximize the scope of a disagreement.
  • Ben Pace's comment [LW(p) · GW(p)] talking about when and where the two cultures fit best, and particularly regarding how Nurture Culture is required to hold space when discussing sensitive topics like relationships, personal standards, and confronting large life choices.
  • PaulK's comment about "articulability" [LW(p) · GW(p)]: how a Nurturing culture makes it easier to express ill-formed, vague, or not yet justifiable thoughts.
  • AdrianSmith's comment [LW(p) · GW(p)] about how Combat Culture can help expose the weak points in one's beliefs which wouldn't come up in Nurture Culture (even if one only updates after the heat of "battle"), and Said Achmiz's expansion of this point [LW(p) · GW(p)] with quotes from Schopenhauer, claiming that continuing to fight for one's position without regard for truth might actually be epistemically advantageous. 

Best humorous comments:

comment by Ruby · 2020-01-17T20:34:44.383Z · LW(p) · GW(p)

Appendix 3: How to Nurture

These are outtakes from a draft revision for Nurture Culture which seemed worth putting somewhere:

A healthy epistemic Nurture Culture works to make it possible to safely have productive disagreement by showing that disagreement is safe. There are better and worse ways to do this. Among them:

  • Adopting a “softened tone” which holds the viewpoints as object and at some distance: “That seems mistaken to me, I noticed I’m confused” as opposed to “I can’t see how anyone could possibly think that”.
  • Expending effort to understand: “Okay, let me summarize what you’re saying and see if I got it right . . .”
  • Attempting to be helpful in the discussion: “I’m not sure what you’re saying, is it this: <some description or model>?”
  • Mentioning what you think is good and correct: “I found this post overall very helpful, but paragraph Z seems gravely mistaken to me because <reasons>.” This counters perceived reputational harms and can put people at ease.

Things which are not very Nurturing:

  • “What?? How could anyone think that”
  • A comment that only says “I think this post is really wrong.”
  • You’re not accounting for X, Y, Z. <insert multiple paragraphs explaining issues at length>

Items in the first list start to move the dial on the dimensions [LW(p) · GW(p)] of collaborativeness and are likely to be helpful in many discussions, even relatively Combative ones; however, they have the important additional Nurturing effect of signaling hard that the conversation has the goal of mutual understanding and reaching truth together – a goal whose salience shifts the significance of attacking ideas to purely practical rather than political.

While this second list can include extremely valuable epistemic contributions, they can heighten the perception of reputational and other harms [1] and thereby i) make conversations unpleasant (counterfactually causing them not to happen), and ii) raise the stakes of a discussion, making participants less likely to update.

Nurture Culture concludes that it’s worth paying the costs of more complicated and often indirect speech in order to make truth-seeking discussion a more positive experience for all.

[1] So much of our wellbeing and success depends on how others view us. It is reasonable for people to be very sensitive to how others perceive them.

comment by Ruby · 2020-01-17T20:33:01.757Z · LW(p) · GW(p)

Appendix 2: Priors of Trust

I’ve said that Combat Culture requires trust. Social trust is complicated and warrants many dedicated posts of its own, but I think it’s safe to say that having the following priors helps one feel safe in a “combative” environment: 

  • A prior that you are wanted, welcomed and respected,
  • that others care about you and your interests,
  • that one’s status or reputation is not under a high level of threat, 
  • that having dumb ideas is safe and that’s just part of the process,
  • that disagreement is perfectly fine and dissent will not be punished, and 
  • that you won’t be punished for saying the wrong thing.

If one has strong priors for the above, one can have a healthy Combat Culture.

comment by Ruby · 2020-01-17T20:32:22.077Z · LW(p) · GW(p)

Appendix 1: Conversational Dimensions

Combat and Nurture point at regions within conversation space; however, as commenters on the original pointed out, there are actually quite a few different dimensions relevant to conversations. (This list focuses on truth-seeking conversations.)

Some of them:

  • Competitive vs Cooperative: within a conversation, is there any sense of one side trying to win against the others? Is there a notion of “my ideas” vs “your ideas”? Or is it just us trying to figure it out together?
    • Charitability is a related concept.
    • Willingness to Update: how likely are participants to change their position within a conversation in response to what’s said?
  • Directness & Bluntness: how straightforwardly do people speak? Do they say “you’re absolutely wrong” or do they say, “I think that maybe what you’re saying is not 100%, completely correct in all ways”?
  • Filtering: Do people avoid saying things in order to avoid upsetting or offending others?
  • Degree of Concern for Emotions: How much time/effort/attention is devoted to ensuring that others feel good and have a good experience? How much value is placed on this?
  • Overhead: how much effort must be expended to produce acceptable speech acts? How many words of caveats, clarification, softening? How carefully are the words chosen?
  • Concern for Non-Truth Consequences: how much are conversation participants worried about the effects of their speech on things other than obtaining truth? Are people worrying about reputation, offense, etc?
  • Playfulness & Seriousness: is it okay to make jokes? Do participants feel like they can be silly? Or is it no laughing business, too much at stake, etc.?
  • Maximizing or Minimizing the Scope of Disagreement: are participants trying to find all the ways in which they agree and/or sidestep points of disagreement, or are they clashing and bringing to the fore every aspect of disagreement? [See this comment by Benquo [LW(p) · GW(p)].]

Similarly, it’s worth noting the different objectives conversations can have:

  • Figuring out what’s true / exchanging information.
  • Jointly trying to figure out what’s true vs trying to convince the other person.
  • Fun and enjoyment.
  • Connection and relationship building.

The above are conversational objectives that people can share. There are also objectives that most directly belong to individuals:

  • To impress others.
  • To harm the reputation of others.
  • To gain information selfishly.
  • To enjoy themselves (benignly or malignantly).
  • To be helpful (for personal or altruistic gain).
  • To develop relationships and connection.

We can see which positions along these dimensions cluster together and which correspond to the particular clusters that are Combat and Nurture.

A Combat Culture is going to be relatively high on bluntness and directness, and can be more competitive (though isn’t necessarily); if there is concern for emotions, it’s going to be a lower priority and probably less effort will be invested. 

A Nurture Culture may inherently prioritize the relationships between and experiences of participants more. Greater filtering of what’s said will take place, and people might worry more about the reputational effects of what gets said.

These aren’t exact, and different people will enact cultures that differ along all of these dimensions. I think of Combat vs Nurture as tracking an upstream generator that impacts how various downstream parameters get set.

comment by Ruby · 2020-01-17T20:30:05.476Z · LW(p) · GW(p)

Footnotes

comment by Ruby · 2020-01-17T20:30:46.315Z · LW(p) · GW(p)

[2] A third possibility is someone who is not really enacting either culture: they feel comfortable being combative towards others but dislike it if anyone acts in kind to them. I think this is straightforwardly not good.

comment by Ruby · 2020-01-17T20:30:28.520Z · LW(p) · GW(p)

[1] I use the term attack very broadly and include any action which may cause harm to the person acted upon. The harm caused by an attack could be reputational (people think worse of you), emotional (you feel bad), relational (I feel distanced from you), or opportunal (opportunities or resources are impacted).

comment by JohnBuridan · 2019-12-14T23:19:52.836Z · LW(p) · GW(p)

I read this post when it initially came out. It resonated with me to such an extent that even three weeks ago, I found myself referencing it when counseling a colleague on how to deal with a student whose heterodoxy caused the colleague to make isolated demands for rigor from this student.

The author’s argument that Nurture Culture should be the default still resonates with me, but I think there are important amendments and caveats that should be made. The author said:

"To a fair extent, it doesn’t even matter if you believe that someone is truly, deeply mistaken. It is important foremost that you validate them and their contribution, show that whatever they think, you still respect and welcome them."

There is an immense amount of truth in this. Concede what you can when you can. Find a way to validate the aspects of a person’s point which you can agree with, especially with the person you tend to disagree with most or are more likely to consider an airhead, adversary, or smart-aleck. This has led me to a great amount of success in my organization. As Robin Hanson once asked pointedly, “Do you want to appear revolutionary, or be revolutionary?” Esse quam videri.

Combat Culture is a purer form of Socratic Method. When we have a proposer and a skeptic, we can call this the adversarial division of labor: You propose the idea, I tell you why you are wrong. You rephrase your idea. Rinse and repeat until the conversation reaches one of three stopping points: aporia, agreement, or an agreement to disagree. In the Talmud example, both are proposers of a position and both are skeptics of the other person’s interpretation.

Nurture Culture does not bypass the adversarial division of labor, but it does put constraints on it - and for good reason. A healthy combat culture can only exist when a set of rare conditions are met. Ruby’s follow-up post outlined those conditions. But Nurture Culture is how we still make progress despite real world conditions like needing consensus, or not everyone being equal in status or knowledge, or some people having more skin in the game than others.

So here are some important things I would add more emphasis to from the original article after about a hundred iterations of important disagreements at work and in intellectual pursuits since 2018.

1. Nurture Culture assumes knowledge and status asymmetries.

2. Nurture Culture demands a lot of personal patience.

3. Nurture Culture invites you to consider what even the most wrong have right.

4. Sometimes you can overcome a disagreement at the round table by talking to your chief adversary privately and reaching a consensus, then returning for the next meeting on the same page.

While these might be two cultures, it’s helpful to remember that there are cultures to either side of these two: Inflexible Orthodoxy and Milquetoast Relativism. A Combat Culture can resolve into a dominant force with weaponized arguments owning the culture, while a Nurture Culture can become so limply committed to their nominal goals that no one speaks out against anything that runs counter to the mission.

comment by JohnBuridan · 2019-12-14T23:22:47.774Z · LW(p) · GW(p)

In the Less Wrong community, Anti-Nurture commenters are afraid of laxity with respect to the mission, while Anti-Combat commenters are afraid of a narrow dogmatism infecting the community.

comment by Raemon · 2018-11-12T01:20:05.793Z · LW(p) · GW(p)

There's a subtle difference in focus between nurture culture as described here, and what I'd call "collaborative truthseeking." Nurture brings to mind helping people to grow. Collaborative brings to mind something more like "we're on a team", which doesn't just mean we're on the same side, but that we each have responsibilities to bring to the table.

comment by Said Achmiz (SaidAchmiz) · 2018-11-12T08:01:01.257Z · LW(p) · GW(p)

But you can have collaborative truthseeking with “Combat Culture”—which is precisely what the example in the OP (the Ellsberg quote) illustrates.

comment by Raemon · 2018-11-25T21:56:56.622Z · LW(p) · GW(p)

Hmm. Naming things is hard. I agree you can have collaboration in combat culture. It feels like a different kind of collaboration though.

Double hmm.

Before continuing, noticing an assumption I was making: in Ben Pace's comment [LW(p) · GW(p)] elsethread, he frames "nurture" culture as involving lots of "yes-and", which roughly matched my stereotype. Correspondingly, combat culture felt like it involved lots of "no-but". Each person is responsible for coming up with their ideas and shoring them up, and it's basically other people's responsibility to notice where those ideas are flawed. And this seemed like the main distinction. (I'm not sure how much this matches what you mean by it.)

What's sort of funny (in a "names continue to be hard" way), is that I instinctively want to call "each person is responsible for their ideas" culture "Group Collaboration" (as in "group selection"). It's collaborative on the meta level, since over time the best ideas rise to the top, and people shore up the weaknesses of their own ideas.

Whereas... I'd call the "yes and" thing that Ben describes as.... (drumroll) "Group Collaboration" (as in "group project."). Which is yes obviously a terrible naming schema. :P

Comparing this to corporations building products: between corporations, and maybe between teams within a single corporation, is group selection. There is competition, which hopefully incentivizes each team to build a good product.

Within a team, you still need to be able to point out when people are mistaken and argue for good ideas. But you're fundamentally trying to build the same thing together. It doesn't do you nearly as much good to point out that a coworker's idea is bad, as to figure out how to fix it and see if their underlying idea is still relevant. If you don't trust your coworker to be generally capable of coming up with decent ideas that are at least pointed in the right direction, your team is pretty doomed anyhow.

comment by Said Achmiz (SaidAchmiz) · 2018-11-25T22:27:51.354Z · LW(p) · GW(p)

Within a team, you still need to be able to point out when people are mistaken and argue for good ideas. But you’re fundamentally trying to build the same thing together. It doesn’t do you nearly as much good to point out that a coworker’s idea is bad, as to figure out how to fix it and see if their underlying idea is still relevant.

I don’t see these as contradictory, or even opposed. How can you fix something, unless you can first notice that it needs to be fixed? Isn’t this just a policy of “don’t explicitly point out that an idea is flawed, because it would hurt the originator’s feelings; only imply it (by suggesting fixes)”? (And what do you do with unsalvageable ideas?)

If you don’t trust your coworker to be generally capable of coming up with decent ideas that are at least pointed in the right direction, your team is pretty doomed anyhow.

Sure, “generally”, maybe, but it’s the exceptions that count, here. I don’t necessarily trust myself to reliably come up with good ideas (which is the whole point of testing ideas against criticism, etc., and likewise is the whole point of brainstorming and so on), so it seems odd to ask if I trust other people to do so!

More generally, though… if it’s the distinction between “yes, and…” and “no, but…” which makes the difference between someone being able to work in a team or being unable to do so, then… to be quite honest, were I in a position to make decisions for the team, I would question whether that person has the mental resilience, and independence of mind, to be useful.

comment by Bucky · 2019-11-27T19:16:40.631Z · LW(p) · GW(p)

I’ve referred back to this multiple times and it has helped (e.g. at work) to get people to understand each other better.

comment by habryka (habryka4) · 2018-11-21T19:00:59.726Z · LW(p) · GW(p)

Promoted to curated: I think this post is quite exceptionally clear in pointing at an important distinction, and I've already referenced it a few times in the last two weeks, which is a good sign. I don't think this post necessarily says anything massively new, but I don't remember any write-up of similar clarity, and so I do think it adds a bunch to the intellectual commons.

comment by ozziegooen · 2018-11-28T11:38:34.057Z · LW(p) · GW(p)

I found the ideas behind Radical Candor to be quite useful. I think they're similar to ones here. Link

comment by mr-hire · 2019-11-21T21:14:48.600Z · LW(p) · GW(p)

This post highlighted one of the two main disagreements I see about LW conversationsal culture (the other being contextualizing vs. decoupling). Having a handle to refer to this thing is quite handy for common knowledge.

comment by Alephywr · 2018-11-28T03:12:37.793Z · LW(p) · GW(p)

I've won practically every interaction I've ever had. I've become so good at winning that most people won't actually interact with me anymore.

comment by Elo · 2018-11-28T03:27:58.748Z · LW(p) · GW(p)

Maybe it's not about winning for you. Maybe you need to lose before you learn something.

comment by Alephywr · 2018-11-28T03:40:22.961Z · LW(p) · GW(p)

It would also help if they understood what a joke was

comment by Alephywr · 2018-11-28T03:39:03.520Z · LW(p) · GW(p)

People would have to actually engage with me for that to happen.

comment by Elo · 2018-11-28T05:14:06.272Z · LW(p) · GW(p)

Why would people want to engage if they just encounter someone who wants to win? What's in it for them?

comment by Alephywr · 2018-11-28T05:22:48.855Z · LW(p) · GW(p)

IT WAS A JOKE YOU MORON

comment by Elo · 2018-11-28T05:24:30.542Z · LW(p) · GW(p)

Maybe if you made less shitty jokes, people would engage with you more.

comment by Alephywr · 2018-11-28T05:26:20.327Z · LW(p) · GW(p)

God forbid people like you

comment by G Gordon Worley III (gworley) · 2018-11-12T20:46:54.559Z · LW(p) · GW(p)

This reminds me of the distinction between debate and dialectic. Both can be means to truth seeking, both have their own failure modes (debate can become about winning instead of the truth; dialectic can become confused without adequate experience with synthesis), and different people can have a preference for one over the other. Thinking in terms of a culture though is perhaps better suited to what's going on than talking about preference for a particular technique because it gets at something deeper fueling that preference for particular methods.

Also, for what it's worth, I found this post useful enough at describing what I want to moderate towards that I've linked it now from my LW moderation guidelines to indicate I want people to prefer nurture to combat culture in the comments on my posts. I think a failure to have this well explained helps explain why things went in a direction I didn't like in a recent contentious post I made [LW · GW].