Conversational Cultures: Combat vs Nurture

post by Ruby · 2018-11-09T23:16:15.686Z · score: 120 (41 votes) · LW · GW · 69 comments

Contents

  Combat Culture
  Nurture Culture
    Now in the Comments: Advice & Ideal/Degenerate Forms of the Cultures

Note: This post is intended as descriptive rather than prescriptive. This post describes the cultures as I see them, together with some of their underlying rationales, arguments, advantages, and disadvantages. This post does not contain any strong or well-formed opinions of mine about ideal conversational norms, which culture is better, etc.

My foremost aim is that readers of this post will share my perception of the different conversation cultures, at which point we can begin to explore all the questions of ideal cultures, how to interact cross-culturally, culturally-mixed venues, etc., etc.

Edit: This post now has a sequel. Combat vs Nurture: Cultural Genesis [LW · GW] clarifies some points, discusses the true difference between the cultures, and opines on the circumstances which give rise to the different cultures.

Combat Culture

I went to an orthodox Jewish high school in Australia. For most of my early teenage years, I spent one to three hours each morning debating the true meaning of abstruse phrases of Talmudic Aramaic. The majority of class time was spent sitting opposite your chavrusa (study partner, but linguistically the term has the same root as the word “friend”) arguing vehemently for your interpretation of the arcane words. I didn’t think in terms of probabilities back then, but if I had, I think on most occasions I would have given roughly even odds to my view vs my chavrusa’s. Yet that didn’t really matter. Whatever your credence, you argued as hard as you could for the view that made sense in your mind, explaining why your adversary/partner/friend’s view was utterly inconsistent with reality. That was the process. Eventually, you’d reach agreement or agree to disagree (which was perfectly legitimate), and then move on to the next passage to decipher.

Later, I studied mainstream analytic philosophy at university. There wasn’t the chavrusa, pair-study format, but the culture of debate felt the same to me. Different philosophers would write long papers explaining why philosophers holding opposite views were utterly confused and mistaken for reasons one through fifty. They’d go back and forth, each arguing for their own correctness and the others’ mistakenness with great rigor. I’m still impressed with the rigor and thoroughness of especially good analytic philosophers.

I’ll describe this style as combative, or Combat Culture. You have your view, they have their view, and you each work to prove your rightness by defending your view and attacking theirs. Occasionally one side will update, but more commonly you develop or modify your view to meet the criticisms. Overall, the pool of arguments and views develops and as a group you feel like you’ve made progress.

While it’s true that you’ll often shake your head at the folly of those who disagree with you, the fact that you’re bothering to discuss with them at all implies a certain minimum of respect and recognition. You don’t write lengthy papers or books to respond to people whose intellect you have no recognition of, people you don’t regard as peers at all.

There’s an undertone of countersignalling to healthy Combat Culture. It is because recognition and respect are so strongly assumed between parties that they can be so blunt and direct with each other. If there were any ambiguity about the common knowledge of respect, you couldn’t be blunt without the risk of offending someone. That you are blunt is evidence that you do respect someone. This is portrayed clearly in a passage from Daniel Ellsberg’s recent book, The Doomsday Machine: Confessions of a Nuclear War Planner (pp. 35-36):

From my academic life, I was used to being in the company of very smart people, but it was apparent from the beginning that this was as smart a bunch of men as I have ever encountered. That first impression never changed (though I was to learn, in the years ahead, the severe limitations of sheer intellect). And it was even better than that. In the middle of the first session, I ventured--though I was the youngest, assigned to taking notes, and obviously a total novice on the issues--to express an opinion. (I don’t remember what it was.) Rather than showing irritation or ignoring my comment, Herman Kahn, brilliant and enormously fat, sitting directly across the table from me, looked at me soberly and said, “You’re absolutely wrong.”
A warm glow spread throughout my body. This was the way my undergraduate fellows on the editorial board of the Harvard Crimson (mostly Jewish like Herman and me) had routinely spoken to each other: I hadn’t experienced anything like it for six years. At King’s College, Cambridge, or in the Society of Fellows, arguments didn’t take this gloves-off, take-no-prisoners form. I thought, “I’ve found a home.” [emphasis added]

That a senior member of the RAND group he had recently joined was willing to be completely direct in shooting down his idea didn’t cause the author to shut down in anguish and rejection; on the contrary, it made the author feel respected and included. I’ve found a home.

Nurture Culture

As I’ve experienced more of the world, I’ve discovered that many people, perhaps even most people, strongly dislike combative discussions where they are being told that they are wrong for ten different reasons. I’m sure some readers are hitting their foreheads and thinking “duh, obvious,” yet as above, it’s not obvious if you’re used to a different culture. Still, I’ve found that the dominant culture I am now exposed to, living in the Bay Area, is what I’m terming Nurture Culture.

If Combat Culture has a spirit of “let’s smash our ideas against each other until the strongest ones survive”, then Nurture Culture is “let’s work together to excavate the truth from beneath all the dirt of uncertainty” or “let’s work together to sculpt this beautiful sculpture.”

In Nurture Culture, the fundamental principle is that we’re all on the same team working for the same goals, we value and respect each other, and by extension, we appreciate all contributions and ideas. These attitudes should be expressed in how you interact with people.

These attitudes inform the priors which shape how you relate to others. If you actually respect someone’s mind and contributions, then you start with the prior that their ideas are worth taking seriously. So if someone’s idea is different from yours or seems mistaken, you orient with openness and curiosity. You don’t start listing why they must be wrong; instead you ask clarifying questions to see what it is that you missed, and you stay curious about what knowledge and experience they might be bringing which you lack.

To a fair extent, it doesn’t even matter if you believe that someone is truly, deeply mistaken. It is important foremost that you validate them and their contribution, show that whatever they think, you still respect and welcome them.

In truth, I think Nurture Culture actually makes sense as the default. Combat Culture is precisely that - combative - and the body language, tone, and overall stances used in Combat Culture bear resemblance to those used when people are genuinely being aggressive and hostile towards others. In fact, it would only be in a minority of contexts that saying to someone “you’re absolutely wrong” would not be considered hostile. It follows that, barring unusual cultural training and very specific contexts, the default is to be averse to body language and tone which is in the direction of aggression, judgment, and hostility.

The norms of Nurture Culture aren’t just about protecting feelings, however. They’re crucial to the truth-seeking purpose of communication. I think it is universally true that when someone feels genuinely threatened in conversation, or fears that they might be attacked, they will not be willing or able to fully participate. This applies to those whose native style is Combat Culture too; it is merely that people of different cultures do not feel threatened in all the same circumstances.

If you have not been culturally trained to view some aggressive body language and tone as not implying disrespect and dismissal, then perceiving such aggression will impede your ability to participate in conversation. The norms of Nurture Culture are designed to make people feel safe enough to engage in discussion.

It is often legitimately risky to speak up, given the real chance that someone might think you’re dumb, think less of you, and like you less. This applies especially in groups and public forums. Nurture Culture assumes that people will only speak up in a culture that expressly assures them that they and their ideas are wanted. (And crucially, you can’t allow displays of aggression which demonstrate a disturbing lack of safety.)

Moreover, many very clever and knowledgeable people operate with Nurture Culture norms and assumptions. If you are not sensitive to this, you will lose out on their contributions. (I present this as a statement of fact, not as a definitive prescription for action.)


Now in the Comments: Advice & Ideal/Degenerate Forms of the Cultures

Originally this post had some brief advice here as well as description of healthy/degenerate forms of the cultures. To clean up the post, I've moved it to a comment below [LW · GW].

69 comments

Comments sorted by top scores.

comment by Ruby · 2018-11-11T22:44:28.278Z · score: 48 (18 votes) · LW · GW

This content was moved from the main body of the post to this comment. After receiving some good feedback, I've decided I'll follow the template of "advice section in comments" for most of my posts.

Some Quick Advice

Awareness

  • See if you can notice conversational cultures/styles which match what I’ve described.
  • Begin noticing if you lean towards a particular style.
  • Begin paying attention to whether those you discuss with might have a particular style, especially if it’s different from yours.
  • Start determining if different groups you’re a member of, e.g. clubs or workplaces, lean in one cultural direction or another.

Openness

  • Reflect on the advantages that cultures/styles different from your own might have, and why others might use them instead of yours.
  • Consider that on some occasions styles different to yours might be more appropriate.
  • Don’t assume that alternatives to your own culture are obviously wrong, stupid, bad, or lacking in skills.

Experimentation

  • Push yourself a little in the direction of adopting a style which isn’t your default. Perhaps you already do, but push yourself a little more. Try doing so while feeling comfortable and open, if possible.

Ideal and Degenerate Forms of Each Culture

Unsurprisingly, each of the cultures has its advantages and weaknesses, mostly to do with when and where it’s most effective. I hope to say more in future posts, but here I’ll quickly list what I think the cultures look like at their best and worst.

Combat Culture

At its best

  • Communicators can more fully focus their attention on the ideas and content rather than devoting thought to the impact of their speech acts on the emotions of others.
  • Communication can be direct and unambiguous when it doesn’t need to be “cushioned” to protect feelings.
  • The very combativeness and aggression prove to all involved that they’re respected and included.

At its worst

  • The underlying truth-seeking nature of conversation is lost and instead becomes a fight or competition to determine who is Right.
  • The combative style around ideas is abused to dismiss, dominate, bully, belittle, or exclude others.
  • It devolves into a status game.

Nurture Culture

At its best

  • Everyone is made to feel safe, welcomed, and encouraged to participate without fear of ridicule, dismissal, or judgment.
  • People assist each other to develop their ideas, seeking to find their strongest versions rather than attacking their weak points. Curiosity pervades.

At its worst

  • Fear of inducing a negative feeling in others and the need to create positive feelings and impressions of inclusion dominate over any truth-seeking goal.
  • Empathy becomes pathological and ideas are never criticized.
  • Communicators spend most of their thought and attention on the social interaction itself rather than the ideas they’re trying to exchange.
comment by Richard_Kennaway · 2018-11-10T16:01:44.996Z · score: 44 (17 votes) · LW · GW

There is another story I have heard of the chavrusa. Two Talmudic students were going hammer and tongs, as they do, when one of them found himself at a loss how to reply to the other's latest argument. As he struggled in thought, the other stepped in and said, such-and-such is how you should argue against what I just said.

comment by MakoYass · 2018-11-11T05:48:24.424Z · score: 6 (4 votes) · LW · GW

This is a great example of how people should go about playing competitive games with their friends: always be ready to point out ways they can do better, even if that would let them win against you.

Not so sure about treating conversation as a competitive game though heh

comment by AdrianSmith · 2018-11-13T19:50:15.256Z · score: 2 (2 votes) · LW · GW

I think this is why we make the debate/conversation distinction. It's not a perfect line, and your culture informs where it lies in any situation, but there's an idea that you switch between "we're just talking about whatever or exploring some idea" and "we're trying to dig deep into the truth of something".

Knowing when one or the other mode is appropriate is something that's often lacking in online discussions.

comment by Said Achmiz (SaidAchmiz) · 2018-11-10T03:24:25.228Z · score: 30 (14 votes) · LW · GW

This is a good post; thank you for writing it. I think this dichotomy, while perhaps not a perfect categorization, is pretty good, and clarifies some things.

My background and preference is what you call “Combat Culture”. “Nurture Culture” has always seemed obviously wrong and detrimental to me, and has seemed more wrong and more detrimental the more I’ve seen it in action. This, of course, is not news, and I mention it only to give context to what I have to say.

I have two substantive comments to make:

First, “It devolves into a status game.” is something that easily can, and often does, happen to “Nurture Culture” spaces also. What this looks like is that “you are not communicating in a [ nurturing | nonviolent | prosocial | etc. ] way” gets weaponized and used as a cudgel in status plays; meanwhile, participants of higher social status skirt the letter of whatever guidelines exist to enforce a “nurturing” atmosphere, while saying and doing things whose effects are to create a chilling atmosphere and to discourage contrarian or opposing views. (And the more that “nurturing” communities attempt to pare down their “nurturing”-enforcement rules to the basics of “be nice”, the more the “tyranny of structurelessness” applies.)

Second, it has been my experience, in my personal relationships, that the more comfortable a person feels around another person or group, the more the first person is willing to be “combative”. A very “nurturing” type interaction style—where you don’t just tell a person they’re wrong, but “open-mindedly” and “curiously” inquire as to their reasons, and take care not to insult them, etc., etc., does indeed work well to avert misunderstandings or overt conflicts between near-strangers, casual acquaintances, etc. It also tends to be a near-perfect sign of a low-intimacy relationship; generally speaking, as people get to know each other, they tend to relax around each other, and drop down into a more “combative” orientation in discussions and debates with each other. They say what they think. (Of course, this could be a selection effect on my part. But then, this seems correct to me, and proper, and expected.)

comment by Ruby · 2018-11-10T07:24:59.395Z · score: 17 (9 votes) · LW · GW

Thanks!

I agree that Nurture Culture can be exploited for status too, perhaps equally so. When I was writing the post, I was thinking that Combat Culture more readily heads in that direction, since in Combat Culture you are already permitted to act in ways which in other contexts would be outright power-plays, e.g. calling someone's ideas dumb. With Nurture Culture, it has to be more indirect, e.g. the whole "you are not conforming to the norm" thing. Thinking about it more, I'm not sure. It could be they're on par for status exploitability.

An increase in combativeness alongside familiarity and comfort matches my observation too, but I don't think it's universal - possibly a selection effect for those more natively Combative. To offer a single counterexample, my wife describes herself as being sickeningly nurturing when together with one of her closest friends. Possibly nurturing is one way to show that you care and this causes it to become ramped up in some very close relationships. Possibly it's that receiving nurturing creates warm feelings of safety, security, and comfort for some such that they provide this to each other to a higher extent in closer relationships. I'm not sure, I haven't thought about this particular aspect in depth.

comment by jimmy · 2018-11-10T19:19:28.584Z · score: 24 (8 votes) · LW · GW
To offer a single counterexample, my wife describes herself as being sickeningly nurturing when together with one of her closest friends.

I don't think they're mutually exclusive. My response in close relationships tends to be both extra combative and extra nurturing, depending on the context.

The extra combativeness comes from common knowledge of respect, as has already been discussed. The extra nurturing is more interesting, and there are multiple things going on.

Telling people when they're being dumb and having them listen can be important. If those paths haven't been carved yet, it can be important to say "this is dumb" and prove that you can be reliably right when you say things like that. Doing that productively isn't trivial, and the fight to get your words respected at full value can get in the way of nurturing. In my close relationships where I can simply say "you're being dumb" and have them stop and say "oops, what am I missing?" I sometimes do, but I'm also far more likely to be uninterested in saying that because they'll figure it out soon enough and I actually am curious why they're doing something that seems so deeply mistaken to me. Just like how security in nurturing can allow combativeness, security in combativeness can allow nurturing.

Another thing is that when people gain trust in you to not shit on them when they're vulnerable, they start opening up more in places in which nurture is the more appropriate response. In these cases it's not that I'm being nurturing instead of being combative, it's that I'm being nurturing instead of not having the interaction at all. Relative to the extreme care that'd need to be taken with someone less close in those areas, that high level of nurturing is still more combative.

comment by Elo · 2018-11-10T22:43:00.127Z · score: 3 (2 votes) · LW · GW

I'm in agreement with you, with the caveat that there's a paradox of "bring yourself", where people show courage in the face of potential pain for vulnerability and feel stronger and better about the whole thing.

However this courage thing is complicated by the fact that emotions don't bite in the same way as physical dogs bite. There is a lot more space to be in uncomfortable emotions and not die than is expected from the sense of discomfort that comes with them.

comment by Mary Chernyshenko (mary-chernyshenko) · 2018-11-22T21:01:28.257Z · score: 2 (2 votes) · LW · GW

...I still don't get it why one needs to say "you're being dumb", when obviously the intended meaning is "you're saying/doing something dumb", in virtually all settings.

If people are that close, can't they just growl at each other? Or use one of the wonderfully adaptable short words that communicate so much?..

comment by jimmy · 2018-11-25T21:09:55.454Z · score: 5 (7 votes) · LW · GW

The precise phrasing isn't important, and often "growls" do work. The important part is in knowing that you can safely express your criticisms unfiltered and they'll be taken for what they're worth.

comment by Benquo · 2018-12-03T04:57:32.269Z · score: 8 (1 votes) · LW · GW

At the risk of tediously repeating what Mary Chernyshenko said, I don’t think a key point was really addressed:

If “the exact phrasing is not important” implies unbiased errors in phrasing, then it’s quite surprising that people tend to round off “your argument is bad” to “you’re dumb” so often.

If, as therefore seems probable, there is a motivated tendency to do that, then it’s clearly important for some purpose and we ought to be curious about what, when we’re trying to evaluate the relation of various conversational modes to truth-seeking vs other potentially competing goals.

comment by Zvi · 2018-12-03T15:45:21.985Z · score: 19 (7 votes) · LW · GW

An outright "You're dumb" is a mistake, period, unless you actually meant to say that the person is in fact dumb. This rounding is a pure bad, and there's no need of it. Adding 'being' or 'playing' or 'doing something' before the dumb is necessary.

Part of a good combative-type culture is that you mean what you say and say what you mean, so the rounding off here is a serious problem even before the (important) feelings/status issue.

comment by Ruby · 2018-12-05T21:48:53.482Z · score: 10 (4 votes) · LW · GW

I emphatically agree with Zvi about the mistakenness of saying "you're dumb."

In my own words:

1) "You're absolutely wrong" is strong language, but not unreasonable in a combative culture if that's what you believe and you're honestly reporting it.

2a) "You're saying/doing something dumb" becomes a bit more personal than when making a statement about a particular view. Though I think it's rare that one have need to say this, and it's only appropriate when levels of trust and respect are very high.

2b) "You're being dumb" is a little harsher than "saying/doing something dumb." The two don't register as much different to me, however, though they do to Mary Chernyshenko?

3) "You're dumb" (introduced in this discussion by Benquo) is now making a general statement about someone else and is very problematic. It erodes the assumptions of respect which make combative-type cultures feasible in the first place. I'd say that conversations where people are calling others dumb to their faces are not situations I'd think of as healthy, good-faith, combative-type conversations.

[As an aside, even mild "that seems wrong to me"-type statements should be recognized as potentially combative. There are many contexts where any explicit disagreement registers as hostile or contrarian.]

comment by Mary Chernyshenko (mary-chernyshenko) · 2018-12-10T20:30:19.556Z · score: 5 (3 votes) · LW · GW

(Not important, but my supervisor was a great man who tended to revel in combat settings and to say smth like "You're being dumb" more often than other versions, & though everybody understood what he meant, it destroyed his team eventually. People found themselves better things to do, as, of course, people generally should. This is where I'm coming from.)

comment by Benquo · 2018-12-05T22:42:59.967Z · score: 8 (1 votes) · LW · GW

Your response here and Ruby's both seem rude to me: you're providing a (helpful) clarification, but doing that without either addressing the substantive issue directly or noting that you're not doing that. Ordinarily that wouldn't be a big deal, but when the whole point of my comment was that jimmy ignored Mary's substantive point I think it's obnoxious to then ignore my substantive point about Mary's substantive point being ignored.

comment by jimmy · 2018-12-14T19:11:37.247Z · score: 2 (1 votes) · LW · GW
[...]but when the whole point of my comment was that jimmy ignored Mary's substantive point I think it's obnoxious to then ignore my substantive point about Mary's substantive point being ignored.

FWIW, “jimmy ignored Mary’s substantive point” is both uncharitable and untrue, and both “making uncharitable and untrue statements as if they were uncontested fact” and “stating that you find things obnoxious in cases where people might disagree about what is appropriate instead of offering an argument as to why it shouldn’t be done” stand out as far more obnoxious to me.

I normally would just ignore it (because again, I think saying “I think that’s obnoxious” is generally obnoxious and unhelpful) but given your comment you’ll probably either find the feedback helpful or else it’ll help you change your mind about whether it's helpful to call out things one finds to be obnoxious :P

comment by jimmy · 2018-12-14T19:10:08.687Z · score: 2 (1 votes) · LW · GW

The exact phrasing isn't important, but conveying the right message is. As Zvi and Ruby note, that “being”/”doing”/etc part is important. “You’re dumb” is not an acceptable alternative because it does not mean the same thing. “Your argument is bad” is also unacceptable because it also means something completely different.

"Your argument is bad" only means “your argument is bad”, and it is possible to go about things in a perfectly reasonable way and still have bad arguments sometimes. It is completely different than a situation where someone is failing to notice problems in their arguments which would be obvious to them if they weren’t engaging in motivated cognition and muddying their own thinking. An inability to think well is quite literally what “dumb” is, and “being dumb” is a literal description of what they’re doing, not a sloppy or motivated attempt to say or pretend to be saying something else.

As far as “then why does it always come out that way”, besides the fact that “you’re being dumb” is far quicker to say than the more neutral “you’re engaging in motivated cognition”, in my experience it doesn’t always or even usually come out that way — and in fact often doesn’t come out at all, which was kinda the point of my original comment.

When it does take that form, there are often good reasons which go beyond “¼ the syllables” and are completely above board, explicit, and agreed upon by both parties. Counter-signalling respect and affection is perhaps the clearest example.

There are examples of people doing it poorly or with hostile and dishonest intent, of course, but the answer to “why do those people do it that way” is a very different question than what was asked.

comment by Mary Chernyshenko (mary-chernyshenko) · 2018-11-26T18:17:30.346Z · score: 2 (4 votes) · LW · GW

yeah, it's not important, it just keeps happening that way, doesn't it.

comment by Ruby · 2018-11-26T07:24:19.052Z · score: 19 (7 votes) · LW · GW

Two dimensions independent of the two cultures

Having been inspired by the comments here, I'm now thinking that there are two communication dimensions at play within the Cultures. The correlation between these dimensions and the Cultures is incomplete, which has been causing confusion.

1) The adversarial-collaborative dimension. Adversarial communication is each side attacking the other's views while defending their own. Collaborative communication is openness and curiosity to each other's ideas. As Ben Pace describes it [LW · GW]:

I'll say a thing, and you'll add to it. Lots of 'yes-and'. If you disagree, then we'll step back a bit, and continue building where we can both see the truth. If I disagree, I won't attack your idea, but I'll simply notice I'm confused about a piece of the structure we're building, and ask you to add something else instead, or wonder why you'd want to build it that way.

2) The "emotional concern and effort" dimension. Communication can be conducted with little attention or effort placed on ensuring the emotional comfort of the participants, often resulting in a directness or bluntness (because it's assumed people are fine and don't need things softened). Alternatively, communication can be conducted with each participant putting in effort to ensure the other feels okay (feels validated/respected/safe/etc.) At this end of the spectrum, words, tone, and expression are carefully selected as overall a model of the other is used to ensure the other is taken care of.

My possible bucket error [LW · GW]

It was easy for me to notice "adversarial, low effort towards emotional comfort" as one cluster of communication behaviors and "collaborative, high concern" as another. Those two clusters are what I identified as Combat Culture and Nurture Culture.

Commenters here, including at least Raemon [LW · GW], Said [LW · GW], and Ben Pace [LW · GW], have rightly made comments to the effect that you can have communication where participants are being direct, blunt, and not proactively concerned for the feelings of the other while nonetheless still being open, being curious, and working collaboratively to find the truth with a spirit of "being on the same team". This maybe falls under Combat Culture too, but it's a less central example.

On the other side, I think it's entirely possible to be acting combatively, i.e. with an external appearance of aggression and hostility, while nonetheless being very attentive to the feelings and experience of the other. Imagine two fencers sparring in the practice ring: during a bout, each is attacking and trying to win; however, they're also taking great care not to actually injure the other. They would stop the moment they suspected they had and switch to an overtly nurturing mode.

A 2x2 grid?

One could create a 2x2 grid with the two dimensions described in this comment. Combat and Nurture cultures most directly fit in two of the quadrants, but I think the other two quadrants are populated by many instances of real-world communication. In fact, these other two quadrants might contain some very healthy communication.

comment by Benquo · 2018-11-26T13:32:51.551Z · score: 17 (3 votes) · LW · GW

I think this 2-dimension schema is pretty good. The original dichotomy bothered me a bit (like it was overwriting important detail) but this one doesn’t.

One more correlated but distinct dimension I’d propose is whether the participants are trying to maximize (and therefore learn a lot) or minimize (and therefore avoid conflict) the scope of the argument.

US courts tend to take an (adversarial, low emotional concern, minimize) point of view, while scientific experiments are supposed to maximize the implications of disagreement.

comment by Ruby · 2018-11-26T19:36:37.038Z · score: 6 (4 votes) · LW · GW
I’d propose is whether the participants are trying to maximize (and therefore learn a lot) or minimize (and therefore avoid conflict) the scope of the argument.

Interesting, though I'm not sure I fully understand your meaning. Do you mind elaborating your examples a touch?

comment by Benquo · 2018-11-26T21:09:17.038Z · score: 28 (6 votes) · LW · GW

American judges like to decide cases in ways that clarify undetermined areas of law as little as possible. This is oriented towards preserving the stability of the system. If a case can be decided on a technicality that allows a court to avoid opining on some broader issue, the court will often take that way out. Consider the US Supreme Court's decision on the gay wedding cake - the court put off a decision on the core issue by instead finding a narrower procedural reason to doubt the integrity of the decisionmaking body that sanctioned the baker. Both sides in a case have an incentive to avoid asking courts to overturn precedents, since that reduces their chance of victory.

Plea bargains are another example where the thing the court is mainly trying to do is resolve conflicting interests with minimal work, not learn what happened.

In general, if you see the interesting thing about arguments as the social conflict, finding creative ways to avoid the need for the argument helps you defuse fights faster and more reliably, at the expense of learning.

By contrast, in science, the best experiments and ones scientists are rewarded for seeking out are ones that overturn existing models with high confidence. Surprising and new results are promoted rather than suppressed.

This is of course a bit of a stereotyped picture. Actual scientific fields resemble this to varying degrees, and activist lawyers will sometimes deliberately try to force a court to decide a large issue instead of a small one. On the other hand, actual scientific research also includes non-disagreement-oriented data-gathering and initial model formation. But the ideal of falsification does matter in science and affects the discourse.

comment by Benito · 2018-11-26T23:59:01.050Z · score: 6 (3 votes) · LW · GW

That is such a respectable social norm, to try and make as conservative a statement about norms as possible whenever you're given the opportunity (as opposed to many people's natural instincts which is to try to paint a big picture that seems important and true to them).

comment by Benquo · 2018-11-27T14:06:16.315Z · score: 3 (2 votes) · LW · GW

Could you clarify your references a bit? None of my guesses as to the connection between my comment and your reply are such a good fit as to make me confident that I've understood what you're saying.

comment by Ruby · 2018-11-26T23:11:44.834Z · score: 5 (3 votes) · LW · GW

Thanks, that was clarifying and helpful.

comment by ChristianKl · 2018-11-11T12:49:07.195Z · score: 16 (8 votes) · LW · GW
Communication can be direct and unambiguous when it doesn’t need to be “cushioned” to protect feelings.

I don't believe that communication becomes direct in a combative discussion. Participants in a combative discussion usually try to hide spots where they or their arguments are vulnerable. This means it's harder to get at the true rejection [LW · GW] of the other person.

There's a huge problem in our Western culture where having knowledge is seen as the ability to have an opinion about a topic that can be effectively defended intellectually, instead of knowledge being the ability to interact directly with the real world or to make predictions about it.

In a combative environment I can't speak about those things that I know to be true where I can make good predictions but that I can't defend intellectually in a way that makes sense to the person I'm speaking with.

comment by AdrianSmith · 2018-11-13T19:56:00.726Z · score: 6 (5 votes) · LW · GW

I find this to be true, but only to a point. Those blind spots in our beliefs are usually subconscious, and so in non-combative discussion they just never come up at all. In combative discussion you find yourself defending them even without consciously realizing why you're so worried about that part of your belief (something something Belief in Belief).

I almost always find that when I've engaged in a combative discussion I'll update around an hour later, when I notice ways I defended my position that are silly in hindsight.

comment by Said Achmiz (SaidAchmiz) · 2018-11-14T00:32:37.120Z · score: 6 (4 votes) · LW · GW

This is an excellent point, and I too have had this experience.

Very relevant to this are Arthur Schopenhauer’s comments in the introduction to his excellent Die Kunst, Recht zu behalten (usually translated as The Art of Controversy). Schopenhauer comments on people’s vanity, irrationality, stubbornness, and tendency toward rationalization:

If human nature were not base, but thoroughly honourable, we should in every debate have no other aim than the discovery of truth; we should not in the least care whether the truth proved to be in favour of the opinion which we had begun by expressing, or of the opinion of our adversary. That we should regard as a matter of no moment, or, at any rate, of very secondary consequence; but, as things are, it is the main concern. Our innate vanity, which is particularly sensitive in reference to our intellectual powers, will not suffer us to allow that our first position was wrong and our adversary’s right. The way out of this difficulty would be simply to take the trouble always to form a correct judgment. For this a man would have to think before he spoke. But, with most men, innate vanity is accompanied by loquacity and innate dishonesty. They speak before they think; and even though they may afterwards perceive that they are wrong, and that what they assert is false, they want it to seem the contrary. The interest in truth, which may be presumed to have been their only motive when they stated the proposition alleged to be true, now gives way to the interests of vanity: and so, for the sake of vanity, what is true must seem false, and what is false must seem true.

But, says Schopenhauer, these very tendencies may be turned around and harnessed to our service:

However, this very dishonesty, this persistence in a proposition which seems false even to ourselves, has something to be said for it. It often happens that we begin with the firm conviction of the truth of our statement; but our opponent’s argument appears to refute it. Should we abandon our position at once, we may discover later on that we were right after all: the proof we offered was false, but nevertheless there was a proof for our statement which was true. The argument which would have been our salvation did not occur to us at the moment. Hence we make it a rule to attack a counter-argument, even though to all appearances it is true and forcible, in the belief that its truth is only superficial, and that in the course of the dispute another argument will occur to us by which we may upset it, or succeed in confirming the truth of our statement. In this way we are almost compelled to become dishonest; or, at any rate, the temptation to do so is very great. Thus it is that the weakness of our intellect and the perversity of our will lend each other mutual support; and that, generally, a disputant fights not for truth, but for his proposition, as though it were a battle pro aris et focis. He sets to work per fas et nefas; nay, as we have seen, he cannot easily do otherwise. As a rule, then, every man will insist on maintaining whatever he has said, even though for the moment he may consider it false or doubtful.

[emphasis mine]

Schopenhauer is saying that—to put it in modern terms—we do not have the capability to instantly evaluate all arguments put to us, to think in the moment through all their implications, to spot flaws, etc., and to perform exactly the correct update (or lack of update). So if we immediately admit that our interlocutor is right and we are wrong, as soon as this seems to be the case, then we can very easily be led into error!

So we don’t do that. We defend our position, as it stands at the beginning. And then, after the dispute concludes, we can consider the matter at leisure, and quite possibly change our minds.

Schopenhauer further comments that, as far as the rules and “stratagems” of debate (which form the main part of the book)—

In following out the rules to this end, no respect should be paid to objective truth, because we usually do not know where the truth lies. As I have said, a man often does not himself know whether he is in the right or not; he often believes it, and is mistaken: both sides often believe it. Truth is in the depths. At the beginning of a contest each man believes, as a rule, that right is on his side; in the course of it, both become doubtful, and the truth is not determined or confirmed until the close.

(Note the parallel, here, to adversarial collaborations—and recall that in each of the collaborations in Scott’s contest, both sides came out of the experience having moved closer to their opponent/collaborator’s position, despite—or, perhaps, because of?—the process involving a full marshaling of arguments for their own initial view!)

So let us not demand—neither of our interlocutors, nor of ourselves—that a compelling argument be immediately accepted. It may well be that stubborn defense of one’s starting position—combined with a willingness to reflect, after the dispute ends, and to change one’s mind later—is a better path to truth.

comment by ChristianKl · 2018-11-14T12:18:32.719Z · score: 2 (1 votes) · LW · GW

I see no a priori reason to think that the average adversarial collaboration was combative in nature. The whole idea was to get the people to collaborate and that the collaboration would lead to a good outcome.

comment by ChristianKl · 2018-11-14T12:21:28.184Z · score: 2 (1 votes) · LW · GW

Understanding the blind spots of the person you are talking with and bringing them to their awareness is a skill. It might very well be that you are not used to talking to people who have that skill set.

If you follow a procedure like double crux and the person you are talking with has a decent skill level, there's a good chance that they will point out blind spots to you.

I almost always find that when I've engaged in a combative discussion I'll update around an hour later, when I notice ways I defended my position that are silly in hindsight.

"silly" is a pretty big requirement. It would be better if people don't need to believe that there old positions are silly to update to be able to do so.

Professional philosophers, as a class whose culture is combative, have a bad track record of changing their opinion when confronted with opposing views. Most largely still hold those positions that were important to them a decade ago.

comment by TAG · 2018-11-14T13:11:31.462Z · score: 1 (1 votes) · LW · GW

Professional philosophers, as a class whose culture is combative, have a bad track record of changing their opinion when confronted with opposing views.

As opposed to whom? Who's good?

comment by ChristianKl · 2018-11-15T06:43:40.630Z · score: 4 (2 votes) · LW · GW

In Silicon Valley startup culture there are many people who are able to pivot when it turns out that their initial assumptions were wrong.

Nassim Taleb claims in his books that successful traders are good at changing their opinion when there are good arguments to change positions.

CFAR went from teaching Bayes' rule and Fermi estimation to teaching parts work. A lot of their curriculum has managed to change.

comment by Ruby · 2018-11-13T20:10:14.899Z · score: 2 (2 votes) · LW · GW
I almost always find that when I've engaged in a combative discussion I'll update around an hour later, when I notice ways I defended my position that are silly in hindsight.

I second that experience.

comment by Richard_Kennaway · 2018-11-26T14:52:30.989Z · score: 15 (4 votes) · LW · GW

This is a false dichotomy. But whenever someone marks two points on an otherwise featureless map, the rest of the space of possibilities with which the world explodes typically disappears from the minds of the participants. People end up saying "combat good, nurture bad", or the reverse, and then defend their position by presenting ways in which one is good and ways in which the other is bad. Or someone expatiates on the good and bad qualities of each one, in multiple permutations, and ends up with a Ribbonfarm post.

Said Achmiz has spoken eloquently of bad things that happen in "nurture culture". For examples of bad things in "combat culture", see any snark-based community, such as 4chan or rationalwiki. All of these things are destructive of epistemic quality. (If anything, nurture goes more wrong than combat, because it presents a smile, a knife in the back, and crocodile tears, while snark wields its weapons openly.)

When you leave out all of the ways that either supposed culture can go wrong, what is left of them? In a culture without snark or smothering, good ideas will be accepted, and constructively built on, not extinguished. Bad ideas will be pointed out as such; if there is something close that is better, that can be pointed out; if an idea is unsalvageable, that also.

Several Japanese terms have gained currency in the rational community, such as tsuyoku naritai and isshoukenmei. Here is another that I think is deserving of wider currency: 切磋琢磨, sessa takuma, joyfully competitive striving for a common purpose.

comment by Richard_Kennaway · 2018-11-26T18:35:28.582Z · score: 6 (4 votes) · LW · GW

One might even say that all functioning communities are alike; each dysfunctional community is dysfunctional in its own way. "For men are good in but one way, but bad in many."

comment by PaulK · 2018-11-11T03:28:55.147Z · score: 12 (4 votes) · LW · GW

Great essay!

Another aspect of this divide is about articulability. In a nurturing context, it's possible to bring something up before you can articulate it clearly, and even elicit help articulating it.

For example, "Something about <the proposal we're discussing> strikes me as contradictory -- like it's somehow not taking into account <X>?". And then the other person and I collaborate to figure out if and what exactly that contradiction is.

Or more informally, "There's something about this that feels uncomfortable to me". This can be very useful to express even when I can't say exactly what it is that I'm uncomfortable with, IF my conversation partner respects that, and doesn't dismiss what I'm saying because it's not precise enough.

In a combative context, on the other hand, this seems like a kind of interaction you just can't have (I may be wrong, I don't have much experience in them). Because there, inarticulateness just reads as your arguments being weak. And you don't want to run the risk of putting half-baked ideas out there and having them swatted down. So your only real choices are to figure out how to articulate things, by yourself, on the fly, or remain silent.

And that's too bad, because the edge of what can be articulated is IME the most interesting place to be.

(Gendlin's Focusing is an extreme example of being at the edge of what can be articulated, and in the paired version you have one person whose job is basically to be a nurturing & supportive presence.)

comment by Raemon · 2018-11-12T01:20:05.793Z · score: 7 (4 votes) · LW · GW

There's a subtle difference in focus between nurture culture as described here, and what I'd call "collaborative truthseeking." Nurture brings to mind helping people to grow. Collaborative brings to mind more like "we're on a team", which doesn't just mean we're on the same side, but that we each have some responsibilities to bring to the table.

comment by Said Achmiz (SaidAchmiz) · 2018-11-12T08:01:01.257Z · score: 3 (3 votes) · LW · GW

But you can have collaborative truthseeking with “Combat Culture”—which is precisely what the example in the OP (the Ellsberg quote) illustrates.

comment by Raemon · 2018-11-25T21:56:56.622Z · score: 7 (4 votes) · LW · GW

Hmm. Naming things is hard. I agree you can have collaboration in combat culture. It feels like a different kind of collaboration though.

Double hmm.

Before continuing, noticing an assumption I was making: in Ben Pace's comment [LW · GW] elsethread, he frames "nurture" culture as involving lots of "yes-and", which roughly matched my stereotype. Correspondingly, combat culture felt like it involved lots of "no-but". Each person is responsible for coming up with their ideas and shoring them up, and it's basically other people's responsibility to notice where those ideas are flawed. And this seemed like the main distinction. (I'm not sure how much this matches what you mean by it.)

What's sort of funny (in a "names continue to be hard" way), is that I instinctively want to call "each person is responsible for their ideas" culture "Group Collaboration" (as in "group selection"). It's collaborative on the meta level, since over time the best ideas rise to the top, and people shore up the weaknesses of their own ideas.

Whereas... I'd call the "yes and" thing that Ben describes as.... (drumroll) "Group Collaboration" (as in "group project."). Which is yes obviously a terrible naming schema. :P

Comparing this to corporations building products: between corporations, and maybe between teams within a single corporation, is group selection. There is competition, which hopefully incentivizes each team to build a good product.

Within a team, you still need to be able to point out when people are mistaken and argue for good ideas. But you're fundamentally trying to build the same thing together. It doesn't do you nearly as much good to point out that a coworker's idea is bad, as to figure out how to fix it and see if their underlying idea is still relevant. If you don't trust your coworker to be generally capable of coming up with decent ideas that are at least pointed in the right direction, your team is pretty doomed anyhow.

comment by Said Achmiz (SaidAchmiz) · 2018-11-25T22:27:51.354Z · score: 5 (3 votes) · LW · GW

Within a team, you still need to be able to point out when people are mistaken and argue for good ideas. But you’re fundamentally trying to build the same thing together. It doesn’t do you nearly as much good to point out that a coworker’s idea is bad, as to figure out how to fix it and see if their underlying idea is still relevant.

I don’t see these as contradictory, or even opposed. How can you fix something, unless you can first notice that it needs to be fixed? Isn’t this just a policy of “don’t explicitly point out that an idea is flawed, because it would hurt the originator’s feelings; only imply it (by suggesting fixes)”? (And what do you do with unsalvageable ideas?)

If you don’t trust your coworker to be generally capable of coming up with decent ideas that are at least pointed in the right direction, your team is pretty doomed anyhow.

Sure, “generally”, maybe, but it’s the exceptions that count, here. I don’t necessarily trust myself to reliably come up with good ideas (which is the whole point of testing ideas against criticism, etc., and likewise is the whole point of brainstorming and so on), so it seems odd to ask if I trust other people to do so!

More generally, though… if it’s the distinction between “yes, and…” and “no, but…” which makes the difference between someone being able to work in a team or being unable to do so, then… to be quite honest, were I in a position to make decisions for the team, I would question whether that person has the mental resilience, and independence of mind, to be useful.

comment by Benito · 2018-11-25T19:18:42.725Z · score: 4 (2 votes) · LW · GW

At first, I felt that 'nurture' was a terrible name, because the primary thing I associated with the idea you're discussing is that we are building up an axiomatised system together. Collaboratively. I'll say a thing, and you'll add to it. Lots of 'yes-and'. If you disagree, then we'll step back a bit, and continue building where we can both see the truth. If I disagree, I won't attack your idea, but I'll simply notice I'm confused about a piece of the structure we're building, and ask you to add something else instead, or wonder why you'd want to build it that way. I agree this is more nurturing, but that's not the point. The point is collaboration.

But then my model of Said said "What? I don't understand why this sort of collaborative exploration isn't perfectly compatible with combative culture - I can still ask all those questions and make those suggestions" which is a point he has articulated quite clearly down-thread (and elsewhere). So then I got to thinking about the nurturing aspect some more.

I'd characterise combative culture as working best in a professional setting, where it's what one does as one's job. When I think of productive combative environments, I visualise groups of experts in healthy fields like math or hard science or computer science. The researchers will bring powerful and interesting arguments forth to each other, but typically they do not discuss nor require an explicit model of how another researcher in their field thinks. And symmetrically, how each researcher thinks is their own responsibility - that's their whole job! They'll note they were wrong, and make some updates about what cognitive heuristics they should be using, but not bring that up in the conversation, because that's not the point of the conversation. The point of the conversation is, y'know, whether the theorem is true, or whether this animal evolved from that, or whether this architecture is more efficient when scaled. Not our emotions or feelings.

Sure, we'll attack each other in ways that can often make people feel defensive, but in a field where everyone has shown their competence (e.g. PhDs) we have common knowledge of respect for one another - we don't expect it to actually hurt us to be totally wrong on this issue. It won't mean I lose social standing, or stop being invited to conferences, or get fired. I mean, obviously it needs to correlate, but never does any single sentence matter or single disagreement decide something that strong. Generally the worst that will happen to you is that you just end up a median scientist/researcher, and don't get to give the big conference talks. There's a basic level of trust as we go about our work, which means combative culture is not a real problem.

I think this is good. It's hard to admit you're wrong, but if we have common knowledge of respect, then this makes the fear smaller, and I can overcome it.

I think one of the key motivations for nurturing culture is that we don't have common knowledge that everything will be okay in many parts of our lives, and in the most important decisions in our lives way more is at stake than in academia. Some example decisions where being wrong has far worse consequences for your life than being wrong about whether Fermat's Last Theorem is true or false:

  • Will my husband/wife and I want the same things in the next 50 years?
  • Will my best friends help me keep up the standard of personal virtue I care about in myself, or will they not notice if I (say) lie to myself more and more?
  • I'm half way through med school. Is being a doctor actually hitting the heavy tails of impact I could have with my life?

These questions have much more at stake. I know for myself, when addressing them, I feel emotions like fear, anger, and disgust.

Changing my mind on the important decisions in my life, especially those that affect my social standing amongst my friends and community, is really far harder than changing my mind about an abstract topic where the results don't have much direct impact on my life.

Not that computer science or chemistry or math aren't incredibly hard, it's just that to do good work in these fields does not require the particular skill of believing things even when they'll lower your social standing.

I think if you imagine the scientists above turning combative culture to their normal lives (e.g. whether they feel aligned with their husband/wife for the next 50 years), and really trying to do it hard, they'd immediately go through an incredible amount of emotional pain until it was too much to bear and then they'd stop.

If you want someone to be open to radically changing their job, lifestyle, close relationships, etc, some useful things can be:

  • Have regular conversations with norms such that the person will not be immediately judged if they say something mistaken, or if they consider a hypothesis that you believe to be wrong.
  • If you're discussing with them an especially significant belief and whether to change it, keep a track of their emotional state, and help them carefully walk through emotionally difficult steps of reasoning.

If you don't, they'll put a lot of effort into finding any other way of shooting themselves in the foot that's available, rather than realise that something incredibly painful is about to happen to them (and has been happening for many years).

I think that trying to follow this goal to its natural conclusions will lead you to a lot of the conversational norms that we're calling 'nurturing'.

I think Qiaochu once said something like "If you don't regularly feel like your soul is being torn apart, you're not doing rationality right." Those weren't his specific words, but I remember the idea being something like that.

comment by Said Achmiz (SaidAchmiz) · 2018-11-25T22:07:22.329Z · score: 9 (3 votes) · LW · GW

I think one of the key motivations for nurturing culture is that we don’t have common knowledge that everything will be okay in many parts of our lives, and in the most important decisions in our lives way more is at stake than in academia. Some example decisions where being wrong about them has far worse consequences for your life than being wrong about whether Fermat’s Last Theorem is true or false:

I do not really agree with your view here, but I think what you say points to something quite important.

I have sometimes said that personal loyalty is one of the most important virtues. Certainly it has always seemed to me to be a neglected virtue, in rationalist circles. (Possibly this is because giving personal loyalty pre-eminence in one’s value system is difficult, at best, to reconcile with a utilitarian moral framework. This is one of the many reasons I am not a utilitarian.)

One of the benefits of mutual personal loyalty between two people is that they can each expect not to be abandoned, even if the other judges them to be wrong. This is patriotism in microcosm: “my country, right or wrong” scaled down to the relation between individuals—“my friend, right or wrong”. So you say to me: “You are wrong! What you say is false; and what you do is a poor choice, and not altogether an ethical one.” And yet I know that we remain friends; and you will stand by me, and support me, and take risks for me, and make sacrifices for me, if such are called for.

There are limits, of course; thresholds which, if crossed, strain personal loyalty to its limit, and break it. Betrayal of trust is one such. Intentional, malicious action of one’s friend against oneself is another; so is failure to come to one’s aid, in a dark hour. But these are high thresholds. It is near-impossible to exceed them accidentally. (And if you think you know exactly what I’m talking about, then ask yourself: if your friend committed murder, would you turn them in to the police? If the answer is “yes, of course”, then some inferential distance yet remains…)

To a friend like this, you can say, without softening the blow: “Wrong! You’re utterly wrong! This is foolish!”—and without worrying that they will not confide in you, for fear of such a judgment. And from a friend like this, you can hear a judgment like that, and yet remain certain that your friendship is not under the least threat; and so, in a certain important sense, it does not hurt to be judged… no more, at least, than it hurts to judge yourself.

Friendship like this… is it “Nurture Culture”? Or “Combat Culture”?

I think Qiaochu once said something like “If you don’t regularly feel like your soul is being torn apart, you’re not doing rationality right.” Those weren’t his specific words, but I remember the idea being something like that.

The consequence of what I say above is this: it is precisely this state (“soul being torn apart”) which I think is critically important to avoid, in order to be truly rational.

comment by Benito · 2018-11-26T15:28:12.963Z · score: 10 (5 votes) · LW · GW

Thanks for your reply; I also do not agree with it, but found that it points at some important ideas. (In the past I have tended to frame the conversation more in terms of 'trust' rather than 'personal loyalty', but I think with otherwise similar effect.)

The first question I want to ask is: how do you get to the stage where personal loyalty is warranted?

From time to time, I think back to the part of Harry Potter and the Philosopher's Stone where Harry, Hermione and Ron become loyal to one another - the point where they build the strength of relationship where they can face down Voldemort without worrying that the others may leave out of fear.

It is after Harry and Ron run in to save Hermione from a troll.

The people who I have the most loyalty to in the world are those who have proven that it is there, with quite costly signals. And this was not a stress-free situation. It involved some pressure on each of our souls, though the important thing was that we came out with our souls intact, and also built something we both thought truly valuable.

So it is not clear to me that you can get to the stage of true loyalty without facing some trolls together, and risking actually losing.

The second and more important question I want to ask is: do you think that having loyal friends is sufficient to achieve your goals without regularly feeling like your soul is being torn apart?

You say:

The consequence of what I say above is this: it is precisely this state (“soul being torn apart”) which I think is critically important to avoid, in order to be truly rational.

Suppose I am confident that I will not lose my loyal friend.

Here are some updates about the world I might still have to make:

  • My entire social circle gives me social gradients in directions I do not endorse, and I should leave and find a different community
  • There is likely to be an existential catastrophe in the next 50 years and I should entirely re-orient my life around preventing it
  • The institution I'm rising up in is fundamentally broken, and for me to make real progress on problems I care about I should quit (e.g. academia, a bad startup).
  • All the years of effort I've spent on a project or up-skilling in a certain domain have been either useless or actively counterproductive (e.g. working in politics, a startup that hasn't found product-market fit) and I need to give up and start over.

The only world in which I could feel confident that I wouldn't have to go through any of these updates is one in which the institutions are largely functional, and in which, as I rise up, my local social incentives align with my long term goals. This is not what I observe. [LW · GW]

Given the world I observe, it seems impossible for me to not pass through events and updates that cause me significant emotional pain and significant loss of local social status, whilst also optimising for my long term goals. So I want my close allies, the people loyal to me, the people I trust, to have the conversational tools (cf. my comment above) to help me keep my basic wits of rationality about me while I'm going through these difficult updates and making these hard decisions.

I am aware this is not a hopeful comment. [LW · GW] I do think it is true.

---

Edit: changed 'achieve your goals while staying rational' to 'achieve your goals without regularly feeling like your soul is being torn apart', which is what I meant to say.

comment by Said Achmiz (SaidAchmiz) · 2018-11-27T08:52:35.622Z · score: 19 (6 votes) · LW · GW

There’s a lot I have to say in response to your comment.

I’ll start with some meta commentary:

From time to time, I think back to the part of Harry Potter and the Philosopher’s Stone where Harry, Hermione and Ron become loyal to one another—the point where they build the strength of relationship where they can face down Voldemort without worrying that the others may leave out of fear.

It is after Harry and Ron run in to save Hermione from a troll.

Harry and Ron never ran in to save Hermione from a troll, never became loyal to one another as a result, never built any strength of relationship, and never faced down Voldemort. None of these events ever happened; and Harry, Ron, and Hermione, in fact, never existed.

I know, I know: I’m being pedantic, nitpicking, of course you didn’t mean to suggest that these were actual events, you were only using them as an example, etc. I understand. But as Eliezer wrote:

What’s wrong with using movies or novels as starting points for the discussion? No one’s claiming that it’s true, after all. Where is the lie, where is the rationalist sin? …

Not every misstep in the precise dance of rationality consists of outright belief in a falsehood; there are subtler ways to go wrong.

Are the events depicted in Harry Potter and the Philosopher’s Stone—a children’s story about wizards (written by an inexperienced writer)—representative of how actual relationships work, between adults, in our actual reality, which does not contain magic, wizards, or having to face down trolls in between classes? If they are, then you should have no trouble calling to mind, and presenting, illustrative examples from real life. And if you find yourself hard-pressed to do this, well…

Let me speak more generally, and also more directly. As I have previously obliquely suggested [LW · GW], I think it is high time for a moratorium, on Less Wrong, on fictional examples used to illustrate claims about real people, real relationships, real interpersonal dynamics, real social situations, etc. If I had my way, this would be the rule: if you can’t say it without reference to examples from fiction, then don’t say it. (As for using Harry Potter as a source of examples—that should be considered extremely harmful, IMHO.)

That this sort of thing distorts your thinking is, I think, too obvious to belabor, and in any case Eliezer did an excellent job with the above-linked Sequence post. But another problem is that it also muddies communication, such as in the case of this line:

So it is not clear to me that you can get to the stage of true loyalty without facing some trolls together, and risking actually losing.

In the real world, there are no trolls. Clearly, you’re speaking metaphorically. But what is the literal interpretation? What are “trolls”, in this analogy? Precisely? Is it “literal life or death situations, where you risk actually, physically dying?” Surely not… but then—what? I really don’t know. (I have some thoughts on what is and what is not necessary to “get to the stage of true loyalty”, but I really have no desire to respond to a highly ambiguous claim; it seems likely to result in us wasting each other’s time and talking past one another.)

Ok, enough meta, now for some object-level commentary:

The second and more important question I want to ask is: do you think that having loyal friends is sufficient to achieve your goals without regularly feeling like your soul is being torn apart?

Having loyal friends is not sufficient to achieve your goals, period, without even tacking on any additional criteria. This seems very obvious to me, and it seems unlikely that you wouldn’t have noticed this, so I have to assume I have somehow misunderstood your question. Please clarify.

Here are some updates about the world I might still have to make:

Of the potential updates you list, it seems to me that some of them are not like the others. To wit:

My entire social circle gives me social gradients in directions I do not endorse, and I should leave and find a different community

In my case, I have great difficulty imagining what this would mean for me. I do not think it applies. I don’t know the details of your social situation, but I conjecture that the cure for this sort of possibility is to find your social belonging less in “communities” and more in personal friendships.

There is likely to be an existential catastrophe in the next 50 years and I should entirely re-orient my life around preventing it

Note that this combines a judgment of fact with… an estimate of effectiveness of a certain projected course of action, I suppose? My suggestion would be to disentangle these things. Once this is done, I don’t see why there should be any more “soul tearing apart” involved here than in any of a variety of other, much more mundane, scenarios.

The institution I’m rising up in is fundamentally broken, and for me to make real progress on problems I care about I should quit (e.g. academia, a bad startup).

Indeed, I have experience with this sort of thing. Knowing that, regardless of the outcome of the decision in question, I would have the unshakable support of friends and family, removed more or less all the “soul tearing apart” from the equation.

All the years of effort I’ve spent on a project or up-skilling in a certain domain have been either useless or actively counterproductive (e.g. working in politics, a startup that hasn’t found product-market fit) and I need to give up and start over.

Indeed, this can be soul-wrenching. My comment on the previous point applies, though, of course, in this case it does not go nearly as far toward full amelioration as in the previous case. But, of course, this is precisely the sort of situation one should strive to avoid (cf. the principle of least regret). Total avoidance is impossible, of course, and this sort of situation is the (hopefully) rare exception to the heuristic I noted.

Given the world I observe, it seems impossible for me to not pass through events and updates that cause me significant emotional pain and significant loss of local social status, whilst also optimising for my long term goals. So I want my close allies, the people loyal to me, the people I trust, to have the conversational tools (cf. my comment above) to help me keep my basic wits of rationality about me while I’m going through these difficult updates and making these hard decisions.

Meaning no offense, but: if you’re losing significant (and important) social status in any of the situations listed above, then you are, I claim, doing something wrong (specifically, organizing your social environment very sub-optimally).

And in those cases where great strain is unavoidable (such as in the last example you listed), it is precisely a cold, practical, and un-softened judgment, which I most desire and most greatly value, from my closest friends. In such cases—where the great difficulty of the situation is most likely to distort my own rationality—“nurturing” takes considerably less caring and investment, and is much, much less valuable, than true honesty, and a clear-eyed perspective on the situation.

comment by ChristianKl · 2018-11-28T15:14:59.424Z · score: 2 (1 votes) · LW · GW
I have sometimes said that personal loyalty is one of the most important virtues. Certainly it has always seemed to me to be a neglected virtue, in rationalist circles.

I'm surprised to hear that sentiment from you when you also speak against the value of rationalists doing community things together.

Doing rituals together is a way to create the emotional bonds that in turn create mutual loyalty. That's why fraternities have their initiation rituals.

comment by Said Achmiz (SaidAchmiz) · 2018-11-28T15:34:37.773Z · score: 3 (2 votes) · LW · GW

I have sometimes said that personal loyalty is one of the most important virtues. Certainly it has always seemed to me to be a neglected virtue, in rationalist circles.

I’m surprised to hear that sentiment from you when you also speak against the value of rationalists doing community things together.

These sentiments are not only not opposed—they are, in fact, inextricably linked. That this seems surprising to you is… unfortunate; it means the inferential distance between us is great. I am at a loss for how to bridge it, truth be told. Perhaps someone else can try.

Doing rituals together is a way to create the emotional bonds that in turn create mutual loyalty. That’s why fraternities have their initiation rituals.

You cannot hack your way to friendship and loyalty—and (I assert) bad things happen if you try. That you can (sort of) hack your way to a sense of friendship and loyalty is not the same thing (but may prevent you from seeing the fact of the preceding sentence).

comment by Benquo · 2018-11-28T18:46:55.695Z · score: 4 (2 votes) · LW · GW

What does it look like for this sort of thing to be done well? Can you point to examples?

comment by Said Achmiz (SaidAchmiz) · 2018-11-28T19:35:48.006Z · score: 0 (2 votes) · LW · GW

I am unsure what you’re asking. What is “this sort of thing”? Do you mean “friendship and loyalty”? I don’t know that I have much to tell you, on that subject, that hasn’t been said by many people, more eloquent and wise than I am. (How much has been written about friendship, and about loyalty? This stuff was old hat to Aristotle…)

These are individual virtues. They are “done”—well or poorly—by individuals. I do not think there is any good way to impose them from above. (You can, perhaps, encourage a social environment where such virtues can more readily be exercised, and avoid encouraging a social environment where they’re stifled. But the question of how to do this is… complex; beyond the scope of this discussion, I think, and in any case not something I have anything approaching a solid grasp on.)

comment by Benquo · 2018-11-28T20:49:49.747Z · score: 8 (1 votes) · LW · GW
You can, perhaps, encourage a social environment where such virtues can more readily be exercised, and avoid encouraging a social environment where they’re stifled.

I thought that's what you were talking about: that some ways of organizing people fight or delegitimize personal loyalty considerations, while others work with it or at least figure out how not to actively destroy it. It seemed to me like you were saying that the way Rationalists try to do community tends to be corrosive to this other thing you think is important.

comment by Said Achmiz (SaidAchmiz) · 2018-11-28T22:34:32.427Z · score: 6 (3 votes) · LW · GW

That’s… at once both close to what I’m saying, and also not really what I’m saying at all.

I underestimated the inferential distance here, it seems; it’s surprising to me, how much what I am saying is not obvious. (If anything, I expected the reaction to be more like “ok, yes, duh, that is true and boring and everyone knows this”.)

I may try to write something longer on this topic, but I fear it would have to be much longer; the matters that this question touches upon range wide and deep…

comment by Benquo · 2018-11-28T22:53:49.509Z · score: 10 (4 votes) · LW · GW

I hope you do find the time to write about this in depth.

comment by Ruby · 2018-11-29T01:43:27.485Z · score: 2 (1 votes) · LW · GW

Seconded. Would like to hear the in-depth version.

comment by Raemon · 2018-11-28T20:09:04.139Z · score: 8 (2 votes) · LW · GW
I don’t know that I have much to tell you, on that subject, that hasn’t been said by many people, more eloquent and wise than I am

Sure, but as such, there are a lot of different approaches to doing them well (some mutually exclusive), so pinpointing which particular things you're talking about seems useful.

(I do think I have an idea of what you mean, and might agree, but the thing you're talking about is probably about as clear to me as Ben's "fight trolls" comment was to you)

Seems fine to table it for now if it doesn't seem relevant though.

comment by ChristianKl · 2018-11-28T16:59:23.689Z · score: 2 (1 votes) · LW · GW

Do you think that existing societal organisations like fraternities aren't built with the goal of facilitating friendship and loyalty? Do you think they fail at that and produce bad results?

comment by Said Achmiz (SaidAchmiz) · 2018-11-28T17:26:09.485Z · score: 2 (1 votes) · LW · GW

Do you think that existing societal organisations like fraternities aren’t built with the goal of facilitating friendship and loyalty?

It varies. That is a goal for some such organizations.

Do you think they fail at that and produce bad results?

I think they produce bad results, while failing at the above goal, approximately to the degree that they rely on “emotion hacking”.

comment by ChristianKl · 2018-11-28T14:45:59.228Z · score: 5 (3 votes) · LW · GW

Interestingly, there's currently a highly upvoted question on Academia.StackExchange titled "Why don't I see publications criticising other publications?", which suggests that academics don't engage in combat culture within papers.

comment by ChristianKl · 2018-11-28T15:06:18.503Z · score: 2 (1 votes) · LW · GW

Most academic work raises little emotion, but that's not true for all academic work. Ioannidis wrote about how he might be killed for his work.

Whenever work that's revolutionary in the Kuhnian sense is done, there's the potential for the loss of social status.

comment by habryka (habryka4) · 2018-11-21T19:00:59.726Z · score: 4 (2 votes) · LW · GW

Promoted to curated: I think this post is quite exceptionally clear in pointing at an important distinction, and I've already referenced it a few times in the last two weeks, which is a good sign. I don't think this post necessarily says anything massively new, but I don't remember any write-up of similar clarity, and so I do think it adds a bunch to the intellectual commons.

comment by ozziegooen · 2018-11-28T11:38:34.057Z · score: 3 (2 votes) · LW · GW

I found the ideas behind Radical Candor to be quite useful. I think they're similar to the ones here. Link

comment by G Gordon Worley III (gworley) · 2018-11-12T20:46:54.559Z · score: 2 (3 votes) · LW · GW

This reminds me of the distinction between debate and dialectic. Both can be means to truth seeking, both have their own failure modes (debate can become about winning instead of the truth; dialectic can become confused without adequate experience with synthesis), and different people can have a preference for one over the other. Thinking in terms of a culture though is perhaps better suited to what's going on than talking about preference for a particular technique because it gets at something deeper fueling that preference for particular methods.

Also, for what it's worth, I found this post useful enough at describing what I want to moderate towards that I've linked it now from my LW moderation guidelines to indicate I want people to prefer nurture to combat culture in the comments on my posts. I think a failure to have this well explained helps explain why things went in a direction I didn't like in a recent contentious post I made [LW · GW].

comment by Alephywr · 2018-11-28T03:12:37.793Z · score: -5 (7 votes) · LW · GW

I've won practically every interaction I've ever had. I've become so good at winning that most people won't actually interact with me anymore.

comment by Elo · 2018-11-28T03:27:58.748Z · score: -5 (7 votes) · LW · GW

Maybe it's not about winning for you. Maybe you need to lose before you learn something.

comment by Alephywr · 2018-11-28T03:40:22.961Z · score: -1 (5 votes) · LW · GW

It would also help if they understood what a joke was

comment by Alephywr · 2018-11-28T03:39:03.520Z · score: -3 (4 votes) · LW · GW

People would have to actually engage with me for that to happen.

comment by Elo · 2018-11-28T05:14:06.272Z · score: -3 (6 votes) · LW · GW

Why would people want to engage if they just encounter someone who wants to win? What's in it for them?

comment by Alephywr · 2018-11-28T05:22:48.855Z · score: -12 (6 votes) · LW · GW

IT WAS A JOKE YOU MORON

comment by Elo · 2018-11-28T05:24:30.542Z · score: -5 (9 votes) · LW · GW

Maybe if you made less shitty jokes, people would engage with you more.

comment by Alephywr · 2018-11-28T05:26:20.327Z · score: -9 (4 votes) · LW · GW

God forbid people like you