Lack of Social Grace Is an Epistemic Virtue

post by Zack_M_Davis · 2023-07-31T16:38:05.375Z · LW · GW · 104 comments

Someone once told me that they thought I acted like refusing to employ the bare minimum of social grace was a virtue, and that this was bad. (I'm paraphrasing; they actually used a different word that starts with b.)

I definitely don't want to say that lack of social grace is unambiguously a virtue. Humans are social animals, so the set of human virtues is almost certainly going to involve doing social things gracefully!

Nevertheless, I will bite the bullet on a weaker claim. Politeness is, to a large extent, about concealing or obfuscating information that someone would prefer not to be revealed—that's why we recognize the difference between one's honest opinion, and what one says when one is "just being polite." Idealized honest Bayesian reasoners would not have social graces—and therefore, humans trying to imitate idealized honest Bayesian reasoners will tend to bump up against (or smash right through) the bare minimum of social grace. In this sense, we might say that the lack of social grace is an "epistemic" virtue—even if it's probably not great for normal humans trying to live normal human lives.

Let me illustrate what I mean with one fictional and one real-life example.


The beginning of the film The Invention of Lying (before the eponymous invention of lying) depicts an alternate world in which everyone is radically honest [LW · GW]—not just in the narrow sense of not lying [LW · GW], but in the broader sense of saying exactly what's on their mind, without thought of concealment.

In one scene, our everyman protagonist is on a date at a restaurant with an attractive woman.

"I'm very embarrassed I work here," says the waiter. "And you're very pretty," he tells the woman. "That only makes this worse."

"Your sister?" the waiter then asks our protagonist.

"No," says our everyman.

"Daughter?"

"No."

"She's way out of your league."

"... thank you."

The woman's cell phone rings. She explains that it's her mother, probably calling to check on the date.

"Hello?" she answers the phone—still at the table, with our protagonist hearing every word. "Yes, I'm with him right now. ... No, not very attractive. ... No, doesn't make much money. It's alright, though, seems nice, kind of funny. ... A bit fat. ... Has a funny little—snub nose, kind of like a frog in the—facial ... No, I won't be sleeping with him tonight. ... No, probably not even a kiss. ... Okay, you too, 'bye."

The scene is funny because of how it violates the expected social conventions of our own world. In our world, politeness demands that you not say negative-valence things about someone in front of them, because people don't like hearing negative-valence things about themselves. Someone in our world who behaved like the woman in this scene—calling someone ugly and poor and fat right in front of them—could only be acting out of deliberate cruelty.

But the people in the movie aren't like us. Having taken the call, why should she speak any differently just because the man she was talking about could hear? Why would he object? To a decision-theoretic agent, the value of information is always nonnegative. Given that his date thought he was unattractive, how could it be worse for him to know rather than not-know?
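
(To spell out the standard decision-theoretic argument, under the usual assumptions of a lone expected-utility maximizer and a costless signal: let the agent pick an action \(a\) to maximize expected utility over unknown states \(\theta\), and let \(a^*\) be the best action chosen blind. After observing a signal \(s\), the agent can always still play \(a^*\), so

\[
\mathbb{E}_s\!\left[\max_a \mathbb{E}_\theta[U(a,\theta)\mid s]\right] \;\ge\; \mathbb{E}_s\!\left[\mathbb{E}_\theta[U(a^*,\theta)\mid s]\right] \;=\; \max_a \mathbb{E}_\theta[U(a,\theta)].
\]

The gap between the two sides is the value of the information, and it is never negative. The assumptions do real work here: the result can fail when the signal is costly, or when other agents condition their behavior on what you know; but neither applies to simply overhearing one's date's honest opinion.)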

For humans from our world, these questions do have answers—complicated answers having to do with things like map–territory confusions that make receiving bad news seem like a bad event (rather than the good event of learning information about how things were already bad, whether or not you knew it), and how it's advantageous for others to have positive-valence false beliefs about oneself.

The world of The Invention of Lying is simpler, clearer, easier to navigate than our world. There, you don't have to worry whether people don't like you and are planning to harm your interests. They'll tell you.


In "Los Alamos From Below", physicist Richard Feynman's account of his work on the Manhattan Project to build the first atomic bomb, Feynman recalls being sought out by a much more senior physicist specifically for his lack of social graces:

I also met Niels Bohr. His name was Nicholas Baker in those days, and he came to Los Alamos with Jim Baker, his son, whose name is really Aage Bohr. They came from Denmark, and they were very famous physicists, as you know. Even to the big shot guys, Bohr was a great god.

We were at a meeting once, the first time he came, and everybody wanted to see the great Bohr. So there were a lot of people there, and we were discussing the problems of the bomb. I was back in a corner somewhere. He came and went, and all I could see of him was from between people's heads.

In the morning of the day he's due to come next time, I get a telephone call.

"Hello—Feynman?"

"Yes."

"This is Jim Baker." It's his son. "My father and I would like to speak to you."

"Me? I'm Feynman, I'm just a—"

"That's right. Is eight o'clock OK?"

So, at eight o'clock in the morning, before anybody's awake, I go down to the place. We go into an office in the technical area and he says, "We have been thinking how we could make the bomb more efficient and we think of the following idea."

I say, "No, it's not going to work. It's not efficient ... Blah, blah, blah."

So he says, "How about so and so?"

I said, "That sounds a little bit better, but it's got this damn fool idea in it."

This went on for about two hours, going back and forth over lots of ideas, back and forth, arguing. [...]

"Well," [Niels Bohr] said finally, lighting his pipe, "I guess we can call in the big shots now." So then they called all the other guys and had a discussion with them.

Then the son told me what happened. The last time he was there, Bohr said to his son, "Remember the name of that little fellow in the back over there? He's the only guy who's not afraid of me, and will say when I've got a crazy idea. So the next time when we want to discuss ideas, we're not going to be able to do it with these guys who say everything is yes, yes, Dr. Bohr. Get that guy and we'll talk with him first."

I was always dumb in that way. I never knew who I was talking to. I was always worried about the physics. If the idea looked lousy, I said it looked lousy. If it looked good, I said it looked good. Simple proposition.

Someone who felt uncomfortable with Feynman's bluntness and wanted to believe that there's no conflict between rationality and social graces might argue that Feynman's "simple proposition" is actually wrong insofar as it fails to appreciate the map–territory distinction: in saying, "No, it's not going to work", was not Feynman implicitly asserting that just because he couldn't see a way to make it work, it simply couldn't? And in general, shouldn't you know who you're talking to? Wasn't Bohr, the Nobel Prize winner, more likely to be right than Feynman, the fresh young Ph.D. (at the time)?

While not entirely without merit (it's true that the map is not the territory; it's true that authority is not without evidential weight), attending overmuch to such nuances distracts from worrying about the physics [LW · GW], which is what Bohr wanted out of Feynman—and, incidentally, what I want out of my readers. I would not expect readers to confirm interpretations with me before publishing a critique. If the post looks lousy, say it looks lousy. If it looks good, say it looks good. Simple proposition.

104 comments

Comments sorted by top scores.

comment by Vaniver · 2023-07-31T23:16:20.023Z · LW(p) · GW(p)

By coincidence, I just finished up my summary of A Social History of Truth [LW · GW] for LW. One of its core claims is that the "social graces" of English gentility were a fundamental component of the Royal Society and the beginnings of empirical science. Some key ingredients:

  1. Honor culture that highly valued reputation and honesty, which viewed calling someone a liar as grounds for dueling, which led to cautious statements, careful disagreements, and hypothesizing about how everyone might be right
  2. Idleness culture that valued conversation as an art form / game where the right move is one that allows for a response (like a variant of tennis where the goal is to have the other party return the volley, rather than be unable to return it)
  3. A negative view of scholarly pedantic argumentative culture, which viewed reputation as a zero-sum game and was detached from worldly considerations.

The claim is that the originators of the Royal Society were, among other things, concerned with keeping the conversation going. If experiments over here conflicted with observations over there, rather than trying to immediately settle which was correct, they wanted to relax and observe; maybe there's a difference in the underlying generators between here and there, such that both can be locally correct and we can see more deeply into the underlying reality.

Part of this was a social and practical matter. There was a situation where two astronomers reported different locations for a comet on the same night, and the Royal Society worked to defuse the disagreement--in large part because they wanted to keep receiving observations from both of the astronomers, and suspected that being too critical of either might result in the loss of their data. (The Royal Society's guess was that both observations were correct, and there were two comets.)

Politeness is, to a large extent, about concealing or obfuscating information that someone would prefer not to be revealed

I think politeness is "about" avoiding conflict (and, more broadly, reputational harm). It may use concealing or obfuscating information as a means to that end, but I think the goal is more central than the methodology.

comment by mingyuan · 2023-07-31T22:10:19.480Z · LW(p) · GW(p)

The advice this post points to is probably useful for some people, but I think LessWrongers are the last people who need to be told to be less socially graceful in favor of more epistemic virtue. So much basic kindness is already lacking in the way that many rationalists interact, and it's often deeply painful to be around.

Also, I just don't really buy that there's a necessary, direct tradeoff between epistemic virtue and social grace. I am quite blunt, honest, and (I believe) epistemically virtuous, but I still generally interact in a way that endears me to people and makes them feel listened to and not attacked. (If you know me feel free to comment/agree/disagree on this statement.) I'm not saying that all of my interactions are 100% successful in this regard but I think I come across as basically kind and socially graceful without sacrificing honesty or epistemics.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-08-01T00:22:51.092Z · LW(p) · GW(p)

The advice this post points to is probably useful for some people, but I think LessWrongers are the last people who need to be told to be less socially graceful in favor of more epistemic virtue.

I would certainly have thought this, but recent experience [LW · GW] has shown the diametric opposite to be true. The OP’s advice is sorely needed here more than almost anywhere else.

In particular, it is not just that LessWrongers need to be told to be less socially graceful, but—and especially—that they need to be told to demand less “social grace” (if what’s demanded even deserves such a respectful term) from others.

So much basic kindness is already lacking in the way that many rationalists interact, and it’s often deeply painful to be around.

I agree with this. [LW(p) · GW(p)] But it’s precisely the “basic kindness” which doesn’t interfere with “epistemic virtues” that rationalists are unusually bad at; and, conversely, precisely the “basic kindness” (though, again, I consider this to be a tendentious description in that case) which does interfere with “epistemic virtues” that’s most commonly demanded. This leaves us with the worst of both worlds.

Also, I just don’t really buy that there’s a necessary, direct tradeoff between epistemic virtue and social grace. I am quite blunt, honest, and (I believe) epistemically virtuous, but I still generally interact in a way that endears me to people and makes them feel listened to and not attacked. (If you know me feel free to comment/agree/disagree on this statement.) I’m not saying that all of my interactions are 100% successful in this regard but I think I come across as basically kind and socially graceful without sacrificing honesty or epistemics.

I do not know you personally, so I certainly can neither dispute nor affirm this claim. But it does seem to me to be an entirely plausible claim…

… if, and only if, we construe “social grace” in such a way that rules out its interference with epistemics (cf. this comment [LW(p) · GW(p)]).

Now, I think that this is a reasonable use of the term “social grace” (and for this reason I think that Zack has made a somewhat unfortunate word choice in the post’s title). The trouble is, such a construal makes your claim a question-begging one.

And if what you mean is that, for example, in a scenario like the Feynman story in the OP, you would nevertheless attend to social status, behave with deference, couch your disagreements in qualifications, avoid outright saying to people’s faces that they’re wrong or that their idea is bad, etc., etc., well… then I think that your claim that such “social grace” doesn’t interfere with “epistemic virtue” is just flat-out false.

comment by Raemon · 2023-08-01T02:29:31.256Z · LW(p) · GW(p)

I think this post is pointing at an important consideration, but I want to flag that it doesn't acknowledge or address my own primary cruxes, which focus on "what social patterns generate, in humans, the most intellectual progress over time." This feels related to Vaniver's comment.

One sub-crux is "people don't get sick of you and stop talking to you" (or, people get sick of a given discussion area being drama-prone)

Another sub-crux is "phrasing things in a triggery way makes people feel less safe (and then less willing to open up and share vulnerable information), and also makes people more fight-minded and think less rationally (i.e. less able to process information correctly)."

My overall claim is that thick skin, social courage (and/or obliviousness), and tact are all epistemic virtues.

I see you arguing for thick skin and social courage/obliviousness, and I agree, but your arguments prove too much: they don't seem to engage at all with the actual social question of how to build a truthseeking institution, and don't seem to explore much where tact is actually important.

To be clear: I think it's an important virtue to cultivate thick skin, and the ability to hear unpleasant feedback without developing an ugh field or becoming irrational. And it's important to have the social courage to say unpleasant things that disrupt the social harmony. 

But it's strictly better to be able to convey those things without triggering people, making people annoyed enough that they leave, or putting them into a political frame where they are more trying to defeat your argument than focus on truthfinding. I think the degree to which an intellectual community should expect people to be capable of doing that isn't zero.

I don't think it's overwhelmingly obvious where that non-zero bar for tact should be, exactly. I definitely think the tact bar is lower than I might have naively guessed 4 years ago. I'm guessing you don't mean to literally argue it should be zero, and I read you more as arguing that the value of thick skin and willingness-to-disrupt-the-social-harmony is nonzero, rather than arguing it's literally infinite.

But, like, I agree with that, I think most people around here agree with that, and the question is the actually complex question of how these things interact and how to prioritize them given limited resources (as well as how much to focus on this whole overall question as opposed to other things that lend themselves less to self-reinforcing internet arguments).

I think it's a good exercise, for people arguing things like "taxes should be higher, or lower", to be able to answer the question "how would you know if it were too high or too low?", and that's the sort of thing I'd find actually persuasive here.

It feels important and telling that if thinking about posts like this weren't part of my paid day job, I would not be motivated to engage very deeply here. My past experience is you're kinda ideological about this, such that responding to individual arguments here doesn't seem particularly worthwhile. My sense is that you'll keep generating reasons for not having to learn tact no matter what I say, and meanwhile it's not very fun and I don't think this is actually near the top of the list of things I could focus on to increase the rate of intellectual progress on LessWrong (which looks less like engaging in internet arguments and more like doing some of the less exciting parts of research and engineering).

(and notably, I almost left off the last paragraph because it seemed to push the comment in a more political-fight-y direction that was likely to result in a) the conversation being a bit more polarized, and b) me unendorsedly spending time on this that I'd rather spend on some more systematic improvements to the site. I ended up deciding to include it, and this meta-commentary, though I'm not sure whether that was the right call)

Replies from: SaidAchmiz, SaidAchmiz, Zack_M_Davis, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-08-01T03:08:56.625Z · LW(p) · GW(p)

reasons for not having to learn tact

This formulation presupposes that Zack doesn’t know how to phrase things “tactfully”. Is that the case? Or, is it instead the case that he knows how, but doesn’t think that it’s a good idea, or doesn’t think it’s worth the effort, or some other such thing?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2023-08-01T06:50:39.658Z · LW(p) · GW(p)

Well, it wouldn't be tactful to suggest that I know how to be tactful and am deliberately choosing not to do so.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-08-01T07:27:58.201Z · LW(p) · GW(p)

It seems to me like this points to some degree of equivocation in the usage of “tact” and related words.

As I’ve seen the words used, to call something “tactless” is to say that it’s noticeably and unusually rude, lacking in politeness, etc. Importantly, one would never describe something as “tactless” which could be described as “appropriate”, “reasonable”, etc. To call an action (including a speech act of any sort) “tactless” is to say that it’s a mistake to have taken that action.

It’s the connotations of such usage which are imported and made use of, when one accuses someone of lacking “tact”, and expects third parties to condemn the accused, should they concur with the characterization.

But the way that I see “tact” used in these discussions we’ve been having (including in Raemon’s top-level comment at the top of this comment thread) doesn’t match the above-described usage. Rather, it seems to me to refer to some practice of going beyond what might be called “appropriate” or “reasonable”, and actually, e.g., taking various positive steps to counteract various neuroses of one’s interlocutor. But if that is what we mean by “tact”, then it hardly deserves the connotations that the usual usage comes with!

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2023-08-01T23:09:16.872Z · LW(p) · GW(p)

Isn't the whole problem that different people don't seem to agree on what's reasonable or appropriate, and what's normal human behavior rather than a dysfunctional neurosis? I don't think equivocation is the problem here; I think you (we) need to make the empirical case that hugbox cultures are dysfunctional.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-08-02T04:17:23.847Z · LW(p) · GW(p)

Isn’t the whole problem that different people don’t seem to agree on what’s reasonable or appropriate, and what’s normal human behavior rather than a dysfunctional neurosis?

No, I don’t think so. That is—it’s true that different people don’t always agree on this, but I don’t think this is the problem. Why? Because when you use words like “tact” (and “tactful”, “tactless”, etc.), you implicitly refer to what’s acceptable in society as a whole (or commonly understood to be acceptable in whatever sort of social context you’re in). (Otherwise, what you’re talking about isn’t “tact” or “social graces”, but something else—perhaps “consideration”, or “solicitousness”, or some such?)

I think you (we) need to make the empirical case that hugbox cultures are dysfunctional.

Making that case is good, but that’s a separate matter.

EDIT: Let me clarify something that may perhaps not have been obvious:

The reason I said (in the grandparent) that the preceding exchange “points to some degree of equivocation in the usage of ‘tact’ and related words” is the following apparent paradox:

On the ordinary meaning of the word “tact” (as it’s used in wider society, beyond Less Wrong), deliberately choosing not to employ tact is usually a bad thing (i.e., not justified by any reasonable personal goal, and detrimental to most plausible collective goals).

But as Raemon seems to be using the word “tact”, deliberately choosing not to employ tact seems not just unproblematic, but often actively beneficial, and sometimes (given some plausible personal and/or collective goals) even ethically obligatory!

This strongly suggests that these two usages of the word “tact” in fact refer to two very different things.

comment by Said Achmiz (SaidAchmiz) · 2023-08-01T02:48:39.099Z · LW(p) · GW(p)

Another sub-crux is “phrasing things in a triggery way makes people feel less safe

What is meant by “safe” in this context?

EDIT: Same question re: “triggery”.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2023-08-01T06:51:57.139Z · LW(p) · GW(p)

People feel "safe" when their interests aren't being threatened. (Usually the relevant interests are social in nature; we're not talking about safety from physical illness or injury.) This is relevant to the topic of what discourse norms support intellectual progress, because people who feel unsafe are likely to lie, obfuscate, stonewall, &c. as part of attempts to become more safe. If you want people to tell the truth (goes the theory), you need to make them feel safe first.

I will illustrate with a hypothetical but realistic example. Sometimes people write a comment that seems to contradict something they said in an earlier comment. Suppose that on Forum A, other commenters who notice this are likely to say something like, "That's not what you said earlier! Were you lying then, or are you lying now, huh?!" but that on Forum B, other commenters are likely to say something like, "This seems in tension with what you said earlier; could you clarify?" The culture of Forum B seems better at making it feel "safe" to change one's mind without one's social interest in not-being-called-a-liar being threatened.

I'm sure you can think of reasons why this illustration doesn't address most appeals to "safety" on this website, but you asked a question, and I am answering it as part of my service to the Church of Arbitrarily Large Amounts of Interpretive Labor. (You don't believe in interpretive labor [LW(p) · GW(p)], but Ray doesn't believe in answering all of Said's annoying questions, so it's my job to fill in the gap.)

Replies from: tailcalled, SaidAchmiz
comment by tailcalled · 2023-08-03T08:35:39.759Z · LW(p) · GW(p)

I will illustrate with a hypothetical but realistic example. Sometimes people write a comment that seems to contradict something they said in an earlier comment. Suppose that on Forum A, other commenters who notice this are likely to say something like, "That's not what you said earlier! Were you lying then, or are you lying now, huh?!" but that on Forum B, other commenters are likely to say something like, "This seems in tension with what you said earlier; could you clarify?" The culture of Forum B seems better at making it feel "safe" to change one's mind without one's social interest in not-being-called-a-liar being threatened.

In this case Forum B has a better culture than Forum A. People might change their mind, have nuanced opinions, or similar. It is only when people fail to engage with the point of the contradiction or give a nonsensical response that accusations of lying seem appropriate, unless one already has evidence that the person is a liar.

comment by Said Achmiz (SaidAchmiz) · 2023-08-01T07:18:16.907Z · LW(p) · GW(p)

The culture of Forum B seems better at making it feel “safe” to change one’s mind without one’s social interest in not-being-called-a-liar being threatened.

Hmm, I see. That usage makes sense in the context of the hypothetical example. But—

I’m sure you can think of reasons why this illustration doesn’t address most appeals to “safety” on this website

… indeed.

you asked a question, and I am answering it as part of my service to the Church of Arbitrarily Large Amounts of Interpretive Labor

Thanks! However, I have a follow-up question, if you don’t mind:

Are you confident that one or more of the usages of “safe” which you described (of which there were two in your comment, by my count) was the one which Raemon intended…?

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2023-08-01T23:08:22.485Z · LW(p) · GW(p)

I think I'll go up to 85% confidence that Raemon [LW · GW] will affirm the grandparent as a "close enough" explanation of what he means by safe. ("Close enough" meaning, I don't particularly expect Ray to have thought about how to reduce the meaning [LW · GW] of safe and independently come up with the same explanation as me, but I'm predicting that he won't report major disagreement with my account after reading it.)

Replies from: Raemon
comment by Raemon · 2023-08-02T21:14:47.139Z · LW(p) · GW(p)

It's similar (I definitely felt it was a good faith attempt and captured at least some of it).

But I think the type-signature of what I meant was more like "a physiological response" than like "a belief about what will happen". I do think people are more likely to have that physiological response if they feel their interests are threatened, but there's more to it than that.

Here are a few examples worth examining:

  1. On a public webforum, Alice (a medium-high-ish status person, say) makes a comment that A) threatens Bob's interests, B) indicates they don't understand that they have threatened Bob's interests (so they aren't even tracking it as a cost/concern)
     
  2. #1, but Alice does convey they understood Bob's interests, and thinks in this case it's worth sacrificing them for some other purpose
     
  3. Same as #1, but on a private slack channel (where Bob doesn't viscerally feel the thing is likely to immediately spiral out of control)
     
  4. Same as #1, but it's in a cozy cabin with a fireplace, or maybe outdoors near some beautiful trees and a nice stream or something.
     
  5. Same as #4, but the conversation by the fireplace is being broadcast live to the world. 
     
  6. Same as #4 (threatening, not understanding, but by a nice stream), but in this case Alice is high status, and specifically states an explicit plan they intend to follow through on, even though right now technically the conversation is private and Bob has a chance to respond. 
     
  7. We're back on a public webforum, Alice is high status, announcing a credible threatening plan, doesn't seem to understand Bob right now, but there is a history of people on the webforum trying to understand where each other are coming from, some (limited) budget for listening when people say "hey man you're threatening my interests" until they at least understand what those interests are, and some tradition of looking for third options that accomplish Alice's original goal while threatening Bob less. There is also some "being on same-paged-ness" about everyone's goals (which might include 'we all care about truth, such that it's in our interests to get criticized for being wrong even if it'd, say, hurt our chances of getting grant money'; this might further include some history of understanding that people gain status rather than lose status when they admit they're wrong, etc.)

I'd probably expect #1–#4 to be in ascending order of safety-feeling and "safety-thinking". #5, #6 and #7 are each a bit of a wildcard that depends on the individual person. I expect a moderate number of people to feel that Alice is "more threatening" in an objective sense, but to nonetheless not feel as much of a triggered fight-or-flight or political response. 

#7 is sort of imaginary right now and I'm not quite sure how to operationalize all of it, but it's the sort of thing I'm imagining going in the direction of. 

But, when I talk about prioritizing "feelings of safety", the thing I'm thinking about at the group level is "can we have conversations about people's interests being threatened, without people entering into physiological fight-or-flight/defensive/tribal mode".

There are a bunch of further complications where people have competing access needs of what makes them feel safe, and some things that make-some-people-feel-safe have varying amounts of expensiveness for different people, and this is not transparent.

(I do not currently have a strong belief about what exactly is right here, but these are terms in the equation I'm thinking about)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-08-06T20:49:15.477Z · LW(p) · GW(p)

In such cases, where these physiological responses are not truth-tracking, surely the correct remedy is to rectify that mismatch, not to force the people whose words provoke those responses to speak and write differently…?

In other words, if I say something and you believe that my words somehow put you in some sort of danger (or, threaten your interests), or that my words signal that my actions will have such effects, then that’s perhaps a conflict between us which it may be productive for us to address.

On the other hand, if you have some sort of physiological response or feeling (aside: the concept of an alief seems like a good match for what you’re referring to, no?) about my words, but you do not believe that feeling tracks the truth about whether there’s any threat to you or your interests[1]… then what is there to discuss? And what do I have to do with this? This is a bug, in your cognition, for you to fix. What possible justification could you have for involving me in this? (And certainly, to suggest that I am somehow to blame, and that the burden is on me to avoid triggering such bugs—well, that would be quite beyond the pale!)


  1. The second clause is necessary, because if you have a “physiological response” but you believe it to be truth-tracking—i.e., you also have a belief of threat and not just an alief—then we can (and should) simply discuss the belief, and have no need even to mention the “feeling”. ↩︎

Replies from: Raemon
comment by Raemon · 2023-08-07T00:50:34.572Z · LW(p) · GW(p)

I think a truth-tracking community should do whatever is cheapest / most effective here. (which I think includes both people learning to deal with their physiological responses on their own, and also learning not to communicate in a way that predictably causes certain physiological responses)

Replies from: Zack_M_Davis, SaidAchmiz
comment by Zack_M_Davis · 2023-08-07T05:57:24.150Z · LW(p) · GW(p)

What's in it for me?

Suppose I've never heard of this—troop-tricking comity?—or whatever it is you said.

Sell me on it. If I learn not to communicate in a way that predictably causes certain physiological responses, like your co-mutiny is asking me to do, what concrete, specific membership benefits does the co-mutiny give me in return?

It's got to be something really good, right? Because if you couldn't point to any benefits, then there would be no reason for anyone to care about joining your roof-tacking impunity, or even bother remembering its name.

comment by Said Achmiz (SaidAchmiz) · 2023-08-07T01:22:40.915Z · LW(p) · GW(p)

This sort of “naive utilitarianism” is a terrible idea for reasons which we are (or should be!) very well familiar with [LW(p) · GW(p)].

comment by Zack_M_Davis · 2023-08-01T23:09:50.730Z · LW(p) · GW(p)

My sense is that you'll keep generating reasons [...] no matter what I say

Thanks for articulating a specific way in which you think I'm being systematically dumb! This is super helpful, because it makes it clear how to proceed: I can either bite the bullet ("Yes, and I'd be right to keep generating such reasons, because ...") or try to provide evidence that I'm not being stupid in that particular way.

As it happens, I do not want to bite this bullet; I think I'm smarter than your model of me, and I'm eager to prove it by addressing your cruxes. (I wouldn't expect you to take my word for it.)

One sub-crux is "people don't get sick of you and stop talking to you" (or, people get sick of a given discussion area being drama-prone)

I agree that this is a real risk![1] You mention Vaniver's comment [LW(p) · GW(p)], which mentions that the Royal Society prioritized keeping the conversation going. I think I also prioritize this: in yet-unpublished work,[2] I talk about how in politically charged Twitter discussions, I sometimes try to use the minimal amount of strategic bad faith needed to keep the discussion going, when I suspect my interlocutor would hang up the phone if they knew what I was really thinking.

Another sub-crux is "phrasing things in a triggery way makes people feel less safe (and then less willing to open up and share vulnerable information), and also makes people more fight-minded and think less rationally (i.e. less able to process information correctly)."

All other things being equal, I agree that this is a relevant consideration. Correspondingly, I think I do pay a fair amount of attention to word choice depending on what I'm trying to convey to what audience. I admit that I often end up going with a relatively "fighty" tone when it feels appropriate for what I'm trying to do, but ... I also often don't? If someone wanted to persuade me to change my policy here, I'd need specific examples of things I've written that are allegedly making people feel unsafe.

I suspect a crux there is that I'm more likely to interpret feelings of unsafety as a decision-theoretic extortion attempt: that sometimes people feel unsafe because the elephant in their brain can predict that others will offer to distort shared maps as a concession to make them feel safe.

Did you notice how I started this comment by thanking you for expressing a negative opinion of my rationality? That was very deliberate on my part: I'm trying to make it cheap to criticize me. It may not be the same thing you're calling tact, but it seems related (in being an attempt to shape incentives to favor opening up).

don't seem to engage at all with the actual social question of how to build a truthseeking institution

I agree that I've been focusing on individual practice rather than institution-building. Someone who was focusing on institution-building might therefore find my meta-discoursey posts less interesting. (I think my mathposts [LW · GW] should be good either way.)

A big crux here is that I think institutions are often dumber than their members as individuals and that you can build more interesting systems out of smarter bricks [LW(p) · GW(p)]. I'm not eager to pay the costs of coordinating for some alleged collective benefit that I mostly just don't think is real in the first place.

to be able to answer the question "how would you know if it were too high or too low?", and that's the sort of thing I'd find actually persuasive here.

I mean, I definitely think that an intellectual forum where people were routinely making off-topic personal insults should be moderated to require more tact (e.g., by instituting an enforced rule against off-topic personal insults). Is that still too ideological for you (because I expect to be able to appeal to principles like speech being "on topic", rather than empirically checking how people are feeling)?

I almost left off the last paragraph because it seemed to push the comment in a more political-fight-y direction [...] I'm not sure whether was the right call

I'm glad you included it! It was a great paragraph! More generally, I think heuristics for limiting damage from political fights by means of hiding them are going to generalize poorly to this particular conflict, which is very weird because my side of the conflict is specifically fighting to reveal information about hidden conflicts [LW(p) · GW(p)].


  1. As an aside, in a recent email thread with Ben, Jessica, and Michael after not being part of their clique for 2½ years, I was disappointed with some aspects of their performance; I worry that almost everyone in a position to find flaws in their ideology has written them off and been written off by them. I want to figure out how to sic Said on them. ↩︎

  2. Possibly worth yanking out into its own post? (Working title: "Good Bad Faith".) ↩︎

comment by Said Achmiz (SaidAchmiz) · 2023-08-01T03:01:14.045Z · LW(p) · GW(p)

But it’s strictly better to be able to convey those things without triggering people, making people annoyed enough that they leave, or putting them into a political frame where they are more trying to defeat your argument than focus on truthfinding.

I think that this is very wrong, in multiple ways.

First and most obviously, if such “more tactful”[1] formulations cost more to produce, then that is a way in which using them would not be strictly better, even if it were better on net.

Second, even if the “more tactful” formulations are no more costly to produce, they are definitely more costly to read (or otherwise parse), for at least some (and possibly most) readers (or hearers, etc.). (Simple length is one obvious reason for this, though not the only one by any means; complexity, ambiguity, etc., also contribute.)

Third, if the “more tactful” formulations are less effective (and not merely less efficient!)—for example, by increasing the probability of communication errors—then using them would be directly detrimental, even ignoring any costs that doing so might impose.

Fourth, if “less tactful” formulations act as a filter against people who are more easily “triggered”, who are more likely to become annoyed at lack of “tact”, who are prone to entering a “political frame”, etc., and if, furthermore, having such people is detrimental on net (perhaps because communicating productively with them imposes various costs, or perhaps because they have a tendency to attempt to force changes to local communicative or other practices, which are harmful to the goal or the organization), then it is in fact good to use “less tactful” formulations precisely because they “trigger people”, “make people annoyed enough that they leave”, etc.

I think the degree to which an intellectual community should expect people to be capable of doing that isn't zero.

It is possible that an intellectual community should expect that people are capable of doing this, but also that said community should expect, not only that people are also capable of not doing this, but in fact that they actually don’t do this.


  1. I am not sure if this is a short summary label which you’d endorse; you use the word “tact” elsewhere in your comment, so it seemed like a decent guess. If not, feel free to provide a comparably compact alternative. ↩︎

comment by Said Achmiz (SaidAchmiz) · 2023-07-31T18:53:25.564Z · LW(p) · GW(p)

The world of The Invention of Lying is simpler, clearer, easier to navigate than our world.

I don’t think this is true.[1] Now, you say, by way of expansion:

There, you don’t have to worry whether people don’t like you and are planning to harm your interests. They’ll tell you.

And that’s true. But does this (and all the other ways in which “radical honesty” manifests) actually translate into “simpler, clearer, easier to navigate”?

It seems to me that one of the things that makes our society fairly simple to navigate most of the time is that you can act as if [LW(p) · GW(p)] everyone around you doesn’t care about you one way or the other, and will behave toward you in the ways prescribed by their professional and other formal obligations, and otherwise will neither help nor hinder you. Of course there are many important exceptions, but this is the default state. Its great virtue is that it vastly reduces the amount of “social processing” that we have to do as we go about our daily lives, freeing up our cognitive resources for other things—and enabling our modern technological civilization to exist.

Of course, this default state is accomplished partly by actually having most people mostly not care one way or the other about most other people most of the time. But only partly; and the other part of the equation is that people usually just don’t meaningfully act on their attitudes toward others, instead behaving in ways that conform to professional obligations, social rituals, etc., and thus abstract away from their attitudes, presenting a socially normative mask or “interface” to the world.

Now suppose you tear away that mask—or, to use the “interface” language, you crash the UI layer, forcing everyone to deal with each other’s low-level “implementation details”. Suddenly, a great deal more processing power is needed, just to interact with other humans!

The film’s conceit is that the depicted society is just like ours, except that they don’t lie to each other. But is this plausible? Is it not possible, instead, that without that abstraction layer—without lying—the people of that world cannot spare the cognitive resources to build such a world as ours? (Think, in particular, of the degree to which all our technology, all our science, is due to the sort of person who finds it burdensome and unnatural to deal with other people’s internals. Now strip away the formalities which save such people from needing to do this, which permit them to treat others as predictable interfaces—and consider what it does to their ability to accomplish anything of use!)


  1. Reading “true” as “likely to be true, were this fictional world actually real”, and similar transformations as appropriate. ↩︎

Replies from: interstice, SaidAchmiz, Richard_Kennaway, CuriousMeta
comment by interstice · 2023-07-31T19:27:36.736Z · LW(p) · GW(p)

I think a society without lying would have other means of maintaining the social interface layer. For instance, when queried about how they feel about you, people might say things like "I quite dislike you, but don't have any plans to act on it, so don't worry about it". In our world this would be a worrying thing to hear, but in the hypothetical, you could just go on with your day without thinking about it further.

Replies from: cubefox
comment by cubefox · 2023-08-01T01:57:55.183Z · LW(p) · GW(p)

We would also be perfectly used to it.

comment by Said Achmiz (SaidAchmiz) · 2023-07-31T19:04:21.170Z · LW(p) · GW(p)

Let me note, as a counterpoint to the above comment, that I agree wholeheartedly with the post’s thesis (as expressed in the last two paragraphs). I just think that the film does not make for a very good illustration of the point. The Feynman anecdote (even if we treat it as semi-fictional itself) is a much better example, because it exhibits the key qualities of a situation where the argument applies most forcefully:

  1. There is a clear objective;
  2. The objective deals with physical reality, not social reality, so maneuvering in social reality can only hinder it, not help;
  3. Everyone involved shares the formal goal of achieving the objective.

In such a case, deploying the objections alluded to in the OP’s second-to-last paragraph is simply a mistake (or else deliberate sabotage, perhaps to further one’s own social aims, to the detriment of the common goal). We might perhaps find plausible justifications (or even good reasons), in everyday life, for considering people’s feelings about true claims, or for behaving in a way that signals recognition of social status, or what have you; but in a case where we’re supposed to be building a working nuclear weapon, or (say) solving AI alignment, it’s radically inappropriate—indeed, quite possibly collectively-suicidal—to carry on such obfuscations.

comment by Richard_Kennaway · 2023-07-31T22:29:56.721Z · LW(p) · GW(p)

"Good fences make good neighbours."

Honesty does not require blurting out everything that passes through one's stream of consciousness (or unconsciousness, as the case may be). To take the scene from The Invention of Lying, I am not interested in a waiter's opinions about anything but the menu, and as the man on the date I would bluntly (but not rudely) tell him so.

Is it true? Is it relevant? Is it important? If the answer is no to any of these, keep silent.

comment by Self (CuriousMeta) · 2024-12-15T14:25:43.763Z · LW(p) · GW(p)

"Honesty reduces predictability" seems implausible as a thesis.

Replies from: philh, CuriousMeta
comment by philh · 2024-12-17T18:19:56.932Z · LW(p) · GW(p)

I think the thesis is not "honesty reduces predictability" but "certain formalities, which preclude honesty, increase predictability".

comment by Self (CuriousMeta) · 2024-12-17T10:13:58.370Z · LW(p) · GW(p)

Downvoters: consider "Deception increases predictability"

comment by philh · 2023-08-09T10:16:55.435Z · LW(p) · GW(p)

Note that the Feynman anecdote contrasts Feynman's bluntness against everyone else's "too scared to speak up". There's no one in the story who says "I don't think that will work" instead of "that won't work", or "that seems like a bad idea" instead of "that's a damn fool idea". You assert afterwards that such a person would have been distracted from the thing Bohr wanted, but the anecdote doesn't particularly support or discredit that idea.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2023-08-10T02:28:48.898Z · LW(p) · GW(p)

You know, that's a good point!

comment by romeostevensit · 2023-07-31T18:16:06.869Z · LW(p) · GW(p)

Disagree. Social graces are not only about polite lies but about social decision procedures for maintaining game-theoretic equilibria that sustain cooperation-favoring payoff structures.

I've observed the thesis posited here advanced IRL before, and it appeared to be motivated reasoning about the person's underlying proclivity towards disagreeableness. I can sympathize, as I used to test in the 98th percentile on disagreeableness, but realized this was a bad strategy and ameliorated it somewhat.

Replies from: Raemon, aphyer, SaidAchmiz, UnexpectedValues, tailcalled, interstice
comment by Raemon · 2023-07-31T18:42:11.030Z · LW(p) · GW(p)

A slight variation on this, less opinionated about whether the payoff structures are actually "better" (which I think varies; sometimes the equilibrium is bad and it's good to disrupt it), is that at the very least there is some kind of equilibrium, and being radically honest or blunt doesn't just mean "same situation but with more honesty"; it means "pretty different situation in the first place."

Like, I think "the Invention of Lying" example is notably an incoherent world that doesn't make any goddamn sense (and it feels sort of important that the OP doesn't mention this). In the world where everyone was radically honest, you wouldn't end up with "the current dating equilibrium but people are rude by current standards"; you'd end up in some entirely different dating equilibrium.

comment by aphyer · 2023-07-31T21:03:24.934Z · LW(p) · GW(p)

This seems to assume that social graces represent cooperative social strategies, rather than adversarial social strategies. I don't think this is always the case.

Consider a couple discussing where to go to dinner. Both keep saying 'oh, I'm fine to go anywhere, where do you want to go?' This definitely sounds very polite! Much more socially-graceful than 'I want to go to this place! We leave at 6!'

Yet I'd assert that most of the time these people are playing social games adversarially against one another.

If you name a place and I agree to go there (especially if I do so in just the right tone of pseudo-suppressed reluctance), it feels like you owe me one.

If you name a place and then something goes wrong - the food is bad, the service is slow, there is a long wait - it feels like I can blame you for that.

What looks like politeness is better thought of as these people fighting one another in deniable and destructive ways for social standing. Opting out of that seems like a good thing: if the Invention Of Lying people say 'I would like to go to this place, but not enough to pay large social costs to do so,' that seems more honest and more cooperative.

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2023-07-31T21:33:41.972Z · LW(p) · GW(p)

I believe the common case of mutual "where do you want to go?" is motivated by not wanting to feel like you're imposing, not some kind of adversarial game.

Maybe I'm bubbled though?

Replies from: SaidAchmiz, Archimedes
comment by Said Achmiz (SaidAchmiz) · 2023-07-31T21:50:39.288Z · LW(p) · GW(p)

That is an adversarial game—the game of avoiding having to expend cognitive effort and/or “social currency”.

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2023-07-31T21:55:14.949Z · LW(p) · GW(p)

No, that is a cooperative game that both participants are playing poorly.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-07-31T22:03:36.252Z · LW(p) · GW(p)

This seems substantially less likely a priori. What convinced you of this?

Replies from: GuySrinivasan
comment by SarahNibs (GuySrinivasan) · 2023-08-01T14:25:50.175Z · LW(p) · GW(p)

What convinced you that adversarial games between friends are more likely a priori? In my experience the vast majority of interactions between friends are cooperative, attempts at mutual benefit, etc. If a friend needs help, you do not say "how can I extract the most value from this", you say "let me help"*. Which I guess is what convinced me. And is also why I wrote "Maybe I'm bubbled though?" Is it really the case for you that you look upon people you think of as friends and say "ah, observe all the adversarial games"?

*Sure, over time, maybe you notice that you're helping more than being helped, and you can evaluate your friendship and decide what you value and set boundaries and things, but the thing going through your head at the time is not "am I gaining more social capital from this than the amount of whatever I lose from helping as opposed to what, otherwise, I would most want to do". Well, my head.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-08-01T17:10:05.222Z · LW(p) · GW(p)

Is it really the case for you that you look upon people you think of as friends and say “ah, observe all the adversarial games”?

Indeed not. Among my friends, the “mutual ‘where do you want to go?’ scenario” doesn’t happen in the first place. If it did, it would of course be an adversarial game; but it does not, for precisely the reason that adversarial games among friends are rare.

comment by Archimedes · 2023-08-01T02:42:22.474Z · LW(p) · GW(p)

Adversarial gaming doesn't match my experience much at all and suggesting options doesn't feel imposing either. For me at least, it's largely about the responsibility and mental exertion of planning.

In my experience, mutual "where do you want to go" is most often when neither party has a strong preference and neither feels like taking on the cognitive burden of weighing options to come to a decision. Making decisions takes effort especially when there isn't a clearly articulated set of options and tradeoffs to consider.

For practical purposes, one person should provide 2-4 options they're OK with and the other person can pick one option or veto some option(s). If they veto all given options, they must provide their own set of options the first person can choose or veto. Repeat as needed, though more than one round is rarely necessary unless participants are picky or disagreeable. A toy sketch of the procedure follows.
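
(For concreteness, here is a minimal sketch of that propose-or-veto loop in Python. The negotiate function, the Diner agent, and its offer/pick methods are hypothetical names of mine for illustration; real preferences are of course messier than a fixed list.)

    def negotiate(proposer, responder, max_rounds=5):
        """Propose-or-veto: one side offers a few options it's OK with;
        the other picks one, or vetoes them all and must counter-offer."""
        for _ in range(max_rounds):
            options = proposer.offer()        # 2-4 options the proposer is OK with
            choice = responder.pick(options)  # an acceptable option, or None (veto)
            if choice is not None:
                return choice
            proposer, responder = responder, proposer  # vetoing side must now offer
        return None  # picky or disagreeable participants: no agreement reached

    class Diner:
        def __init__(self, acceptable):
            self.acceptable = list(acceptable)  # places this person is OK with

        def offer(self):
            return self.acceptable[:3]  # propose a handful, not the whole list

        def pick(self, options):
            # take the first offered option that is also acceptable, else veto
            return next((o for o in options if o in self.acceptable), None)

    # negotiate(Diner(["thai", "sushi"]), Diner(["pizza", "sushi"]))  ->  "sushi"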

comment by Said Achmiz (SaidAchmiz) · 2023-07-31T19:10:33.128Z · LW(p) · GW(p)

I am skeptical of this account, because I’m pretty high on disagreeableness, but have never particularly felt compelled to practice “radical honesty” in social situations (like dating or what have you).

It seems to me (as I describe in my top-level comment thread [LW(p) · GW(p)]) that “not being radically honest, and instead behaving more or less as socially prescribed” has its quite sensible and useful role, but also that trying to enforce “social graces” in situations where you’re trying to accomplish some practical task is foolish and detrimental to effectiveness. I don’t see that there’s any contradiction here; and it seems to me that something other than “disagreeableness” is the culprit behind any errors in applying these generally sensible principles.

comment by Eric Neyman (UnexpectedValues) · 2023-07-31T19:17:30.383Z · LW(p) · GW(p)

Social graces are not only about polite lies but about social decision procedures for maintaining game-theoretic equilibria that sustain cooperation-favoring payoff structures.

This sounds interesting. For the sake of concreteness, could you give a couple of central examples of this?

comment by tailcalled · 2023-07-31T20:16:14.848Z · LW(p) · GW(p)

Zack gives some examples in the post; do you have any examples to illustrate your point?

comment by interstice · 2023-07-31T19:50:30.972Z · LW(p) · GW(p)

Do you disagree that lack of social grace is an epistemic virtue, though? Social skills might indeed be useful for maintaining cooperative coalitions, but this doesn't necessarily conflict with the thesis of the post. I guess some social graces don't involve polite lies (like saying "good morning" to people when meeting them), but a lot of them do, and I think those that do can only be explained by ongoing or past deception, or short-range emotional management (arguably another sort of deception).

comment by astridain (aristide-twain) · 2023-07-31T17:56:45.054Z · LW(p) · GW(p)

I think this misses the extent to which a lot of “social grace” doesn't actually decrease the amount of information conveyed; it's purely aesthetic — it's about finding comparatively more pleasant ways to get the point across. You say — well, you say “I think she's a little out of your league” instead of saying “you're ugly”. But you expect the ugly man to recognise the script you're using, and grok that you're telling him he's ugly! The same actual, underlying information is conveyed!

The cliché with masters of etiquette is that they can fight subtle duels of implied insults and deferences, all without a clueless shmoe who wandered into the parlour even realising. The kind of politeness that actually impedes transmission of information is a misfire; a blunder. (Though in some cases it's the person who doesn't get it who would be considered “to blame”.)

Obviously it's not always like this. And rationalists might still say “why are we spending all this brainpower encrypting our conversations just so that the other guy can decrypt them again? it's unnecessary at best”. But I don't grant your premise that social grace is fundamentally about actual obfuscation rather than pretend-obfuscation.

Replies from: Zack_M_Davis, gjm, RamblinDash, SaidAchmiz
comment by Zack_M_Davis · 2023-07-31T18:20:51.845Z · LW(p) · GW(p)

What is the function of pretend-obfuscation, though? I don't think that the brainpower expenditure of encrypting conversations so that other people can decrypt them again is unnecessary at best; I think it's typically serving the specific function of using the same message to communicate to some audiences but not others, like an ambiguous bribe offer that corrupt officeholders know how to interpret, but third parties can't blow the whistle on [LW · GW].

In general, when you find yourself defending against an accusation of deception by saying, "But nobody was really fooled", what that amounts to is the claim that anyone who was fooled, isn't "somebody".

(All this would be unnecessary if everyone wanted everyone else to have maximally accurate beliefs, but that's not what social animals are designed to do.)

I basically expect this style of analysis to apply to "more pleasant ways to get the point across", but in a complicated way that doesn't respect our traditional notions of agency and personhood. If there's some part of my brain that takes offense at hearing overtly negative-valence things about me, "gentle" negative feedback that avoids triggering that part could be said to be "deceiving" it in a functional sense, even if my system 2 [? · GW] consciousness can piece together the message.

Replies from: T3t, AnthonyC, aristide-twain, qv^!q
comment by RobertM (T3t) · 2023-07-31T23:52:20.171Z · LW(p) · GW(p)

As an empirical matter of fact (per my anecdotal observations), it is very easy to derail conversations by "refusing to employ the bare minimum of social grace".  This does not require deception, though often it may require more effort to clear some threshold of "social grace" while communicating the same information.

People vary widely, but:

  • I think that most people (95%+) are at significant risk of being cognitively hijacked if they perceive rudeness, hostility, etc. from their interlocutor.
    • I don't personally think I'd benefit from strongly selecting for conversational partners who are at low risk of being cognitively hijacked, and I think nearly all people who do believe that they'd benefit from this (compared to counterfactuals like "they operate unchanged in their current social environment" or "they put in some additional marginal effort to say true things with more social grace") are mistaken.
  • Online conversations are one-to-many, not one-to-one.  This multiplies the potential cost of that cognitive hijacking.

Obviously there are issues with incentives toward fragility here, but the fact that there does not, as far as I'm aware, exist any intellectually generative community which operates on the norms you're advocating for, is evidence that such a community is (currently) unsustainable.

Replies from: Zack_M_Davis, SaidAchmiz
comment by Zack_M_Davis · 2023-08-01T00:30:34.121Z · LW(p) · GW(p)

I don't personally think I'd benefit from strongly selecting for conversational partners who are at low risk of being cognitively hijacked, and I think nearly all people who do believe that they'd benefit from this [...] are mistaken.

I find this claim surprising and would be very interested to hear more about why you think this!!

I think the case for benefit is straightforward: if your interlocutors are selected for low risk of getting triggered, there's a wider space of ideas you can explore without worrying about offending them. Do you disagree with that case for benefit? If so, why? If not, presumably you think the benefit is outweighed by other costs—but what are those costs, specifically? (Are non-hijackable people dumber—or more realistically, do they have systematic biases that can only be corrected by hijackable people? What might those biases be, specifically?)

there does not, as far as I'm aware, exist any intellectually generative community which operates on the norms you're advocating for

How large does something need to be in order to be a "community"? Anecdotally, my relationships with my "fighty"/disagreeable friends seem more intellectually generative than the typical Less Wrong 2.0 interaction in a way that seems deeply related to our fightiness: specifically, I'm wrong about stuff a lot, but I think I manage to be less wrong with the corrective help of my friends who know I'll reward rather than punish them for asking incisive probing questions and calling me out on motivated distortions.

Your one-to-many point is well taken, though. (The special magical thing I have with my disagreeable friends seems hard to scale to an entire website. Even in the one-to-one setting, different friends vary on how much full-contact criticism we manage to do without spiraling into a drama explosion and hurting each other.)

Replies from: T3t
comment by RobertM (T3t) · 2023-08-01T00:51:53.564Z · LW(p) · GW(p)

If not, presumably you think the benefit is outweighed by other costs—but what are those costs, specifically?

Some costs:

  • Such people seem much more likely to also themselves be fairly disagreeable.
  • There are many fewer of them.  I think I've probably gotten net-positive value out of my interactions with them to date, but I've definitely gotten a lot of value out of interactions with many people who wouldn't fit the bill, and selecting against them would be a mistake.
    • To be clear, if I were to select people to interact with primarily on whatever qualities I expect to result in the most useful intellectual progress, I do expect that those people would both be at lower risk of being cognitively hijacked and more disagreeable than the general population.  But the correlation isn't overwhelming, and selecting primarily for "low risk of being cognitively hijacked" would not get me as much of the useful thing I actually want.

How large does something need to be in order to be a "community"?

As I mentioned in my reply to Said, I did in fact have medium-sized online communities in mind when writing that comment.  I agree that stronger social bonds between individuals will usually change the calculus on communication norms.  I also suspect that it's positively tractable to change that frontier for any given individual relationship through deliberate effort, while that would be much more difficult[1] for larger communities.

  1. ^

    I think basically impossible in nearly all cases, but don't have legible justifications for that degree of belief.

Replies from: Zack_M_Davis, SaidAchmiz
comment by Zack_M_Davis · 2023-08-05T23:22:34.738Z · LW(p) · GW(p)

There are many fewer of them [...] the correlation isn't overwhelming [...] selecting primarily [...] would not get me as much of the useful thing I actually want

Sure, but the same arguments go through for, say, mathematical ability, right? The correlation between math-smarts and the kind of intellectual progress we're (ostensibly) trying to achieve on this website isn't overwhelming; selecting primarily for math prowess would get you less advanced rationality when the tails come apart [LW · GW].

And yet, I would not take this as a reason not to "structure communities like LessWrong in ways which optimize for participants being further along on this axis" for fear of "driving away [a ...] fraction of an existing community's membership". In my own intellectual history, I studied a lot of math and compsci stuff because the culture of the Overcoming Bias comment section of 2008 made that seem like a noble and high-status thing to do. A website that catered to my youthful ignorance instead of challenging me to remediate it would have made me weaker rather than stronger.

Replies from: T3t
comment by RobertM (T3t) · 2023-08-05T23:47:55.079Z · LW(p) · GW(p)

LessWrong is obviously structured in ways which optimize for participants being quite far along that axis relative to the general population; the question is whether further optimization is good or bad on the margin.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2023-08-06T22:56:14.253Z · LW(p) · GW(p)

I think we need an individualist conflict-theoretic rather than a collective mistake-theoretic perspective to make sense of what's going on here.

If the community were being optimized by the God-Empress, who is responsible for the whole community and everything in it, then She would decide whether more or less math is good on the margin for Her purposes.

But actually, there's no such thing as the God-Empress; there are individual men and women, and there are families. That's the context in which Said's plea to "keep your thumb off the scales, as much as possible" [LW(p) · GW(p)] can even be coherent. (If there were a God-Empress determining the whole community and everything in it as definitely as an author determines the words in a novel, then you couldn't ask Her to keep Her thumb off the scales. What would that even mean?)

In contrast to the God-Empress, mortals have been known to make use of a computational shortcut they call "not my problem". If I make a post, and you say, "This has too many equations in it; people don't want to read a website with too many equations; you're driving off more value from the community than you're creating", it only makes sense to think of this as a disagreement if I've accepted the premise that my job is to optimize the whole community and everything in it, rather than to make good posts. If my position is instead, "I thought it was a good post; if it drives away people who don't like equations, that's not my problem," then what we have is a conflict rather than a disagreement.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-08-06T23:41:11.798Z · LW(p) · GW(p)

In contrast to the God-Empress, mortals have been known to make use of a computational shortcut they call “not my problem”. If I make a post, and you say, “This has too many equations in it; people don’t want to read a website with too many equations; you’re driving off more value from the community than you’re creating”, it only makes sense to think of this as a disagreement if I’ve accepted the premise that my job is to optimize the whole community and everything in it, rather than to make good posts. If my position is instead, “I thought it was a good post; if it drives away people who don’t like equations, that’s not my problem,” then what we have is a conflict rather than a disagreement.

Indeed. In fact, we can take this analysis further, as follows:

If there are people whose problem it is to optimize the whole community and everything in it (let us skip for the moment the questions of why this is those people’s problem, and who decided that it should be, and how), then those people might say to you: “Indeed it is not your problem, to begin with; it is mine; I must solve it; and my approach to solving this problem is to make it your problem, by the power vested in me.” At that point you have various options: accede and cooperate, refuse and resist, perhaps others… but what you no longer have is the option of shrugging and saying “not my problem”, because in the course of the conflict which ensued when you initially shrugged thus, the problem has now been imposed upon you by force.

Of course, there are those questions which we skipped—why is this “problem” a problem for those people in authority; who decided this, and how; why are they in authority to begin with, and why do they have the powers that they have; how does this state of affairs comport with our interests, and what shall we do about it if the answer is “not very well”; and others in this vein. And, likewise, if we take the “refuse and resist” option, we can start a more general conversation about what we, collectively, are trying to accomplish, and what states of affairs “we” (i.e., the authorities, who may or may not represent our interests, and may or may not claim to do so) should take as problems to be solved, etc.

In short, this is an inescapably political question, with all the usual implications. It can be approached mistake-theoretically only if all involved (a) agree on the goals of the whole enterprise, and (b) represent honestly, in discussion with one another, their respective individual goals in participating in said enterprise. (And, obviously, assuming that (a) and (b) hold, as a starting point for discussion, is unwise, to say the least!)

comment by Said Achmiz (SaidAchmiz) · 2023-08-01T02:45:09.831Z · LW(p) · GW(p)

I also suspect that it’s positively tractable to change that frontier for any given individual relationship through deliberate effort, while that would be much more difficult[1] for larger communities.

[1] I think basically impossible in nearly all cases, but don’t have legible justifications for that degree of belief.

This seems diametrically wrong to me. I would say that it’s difficult (though by no means impossible) for an individual to change in this way, but very easy for a community to do so—through selective [LW · GW] (and, to a lesser degree, structural) methods. (But I suspect you were thinking of corrective methods instead, and for that reason judged the task to be “basically impossible”—no?)

Replies from: T3t
comment by RobertM (T3t) · 2023-08-01T06:06:09.841Z · LW(p) · GW(p)

No, I meant that it's very difficult to do so for a community without it being net-negative with respect to valuable things coming out of the community.  Obviously you can create a new community by driving away an arbitrarily large fraction of an existing community's membership; this is not a very interesting claim.  And obviously having some specific composition of members does not necessarily lead to valuable output, but whether this gets better or worse is mostly an empirical question, and I've already asked for evidence on the subject.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-08-01T07:04:30.335Z · LW(p) · GW(p)

Obviously you can create a new community by driving away an arbitrarily large fraction of an existing community’s membership; this is not a very interesting claim.

Is it not? Why?

In my experience, it’s entirely possible for a community to be improved by getting rid of some fraction of its members. (Of course, it is usually then desirable to add some new members, different from the departed ones—but the effect of the departures themselves may help to draw in new members, of a sort who would not have joined the community as it was. And, in any case, new members may be attracted by all the usual means.)

As for your empirical claims (“it’s very difficult to do so for a community without it being net-negative …”, etc.), I definitely don’t agree, but it’s not clear what sort of evidence I could provide (nor what you could provide to support your view of things)…

comment by Said Achmiz (SaidAchmiz) · 2023-08-01T00:26:14.838Z · LW(p) · GW(p)

I think that most people (95%+) are at significant risk of being cognitively hijacked if they perceive rudeness, hostility, etc. from their interlocutor.

Would you include yourself in that 95%+?

there does not, as far as I’m aware, exist any intellectually generative community which operates on the norms you’re advocating for,

There certainly exist such communities. I’ve been part of multiple such, and have heard reports of numerous others.

Replies from: T3t
comment by RobertM (T3t) · 2023-08-01T00:36:16.487Z · LW(p) · GW(p)

Would you include yourself in that 95%+?

Probably; I think I'm maybe in the 80th or 90th percentile on the axis of "can resist being hijacked", but not 95th or higher.

There certainly exist such communities. I’ve been part of multiple such, and have heard reports of numerous others.

Can you list some?  On a reread, my initial claim was too broad, in the sense that there are many things that could be called "intellectually generative communities" which could qualify, but they mostly aren't the thing I care about (in context, not-tiny online communities where most members don't have strong personal social ties to most other members).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-08-01T01:30:57.315Z · LW(p) · GW(p)

Would you include yourself in that 95%+?

Probably; I think I’m maybe in the 80th or 90th percentile on the axis of “can resist being hijacked”, but not 95th or higher.

Suppose you could move up along that axis, to the 95th percentile. Would you consider that a change for the better? For the worse? A neutral shift?

Can you list some?

I’m afraid I must decline to list any of the currently existing such communities which I have in mind, for reasons of prudence (or paranoia, if you like). (However, I will say that there is a very good chance that you’ve used websites or other software which were created in one of these places, or benefited from technological advances which were developed in one of these places.)

As for now-defunct such communities, though—well, there are many examples, although most of the ones I’m familiar with are domain-specific. A major category of such were web forums devoted to some hobby or other (D&D, World of Warcraft, other games), many of which were truly wondrous wellsprings of creativity and inventiveness in their respective domains—and which had norms basically identical to what Zack advocates.

Replies from: T3t
comment by RobertM (T3t) · 2023-08-01T02:28:34.327Z · LW(p) · GW(p)

Suppose you could move up along that axis, to the 95th percentile. Would you consider that a change for the better? For the worse? A neutral shift?

All else equal, better, of course.  (In reality, all else is rarely equal; at a minimum there are opportunity costs.)

I’m afraid I must decline to list any of the currently existing such communities which I have in mind, for reasons of prudence (or paranoia, if you like). (However, I will say that there is a very good chance that you’ve used websites or other software which were created in one of these places, or benefited from technological advances which were developed in one of these places.)

See my response to Zack (and previous response to you) for clarification on the kinds of communities I had in mind; certainly I think such things are possible (& sometimes desirable) in more constrained circumstances.

ETA: and while in this case I have no particular reason to doubt your report that such communities exist, I have substantial reason to believe that if you were to share what those communities were with me, I probably wouldn't find that most of them were meaningful counterevidence to my claim (for a variety of reasons, including that my initial claim was overbroad).

Replies from: SaidAchmiz, SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-08-01T02:39:34.577Z · LW(p) · GW(p)

Suppose you could move up along that axis, to the 95th percentile. Would you consider that a change for the better? For the worse? A neutral shift?

All else equal, better, of course. (In reality, all else is rarely equal; at a minimum there are opportunity costs.)

Sure, opportunity costs are always a complication, but in this case they are somewhat beside the point. If indeed it’s better to be further along this axis (all else being equal), then it seems like a bad idea to encourage and incentivize being lower on this axis, and to discourage and disincentivize being further on it. But that is just what I see happening!

Replies from: T3t
comment by RobertM (T3t) · 2023-08-01T06:03:18.121Z · LW(p) · GW(p)

If indeed it’s better to be further along this axis (all else being equal), then it seems like a bad idea to encourage and incentivize being lower on this axis, and to discourage and disincentivize being further on it. But that is just what I see happening!

The consequent does not follow.  It might be better for an individual to press a button, if pressing that button were free, which moved them further along that axis.  It is not obviously better to structure communities like LessWrong in ways which optimize for participants being further along on this axis, both because this is not a reliable proxy for the thing we actually care about and because it's not free.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-08-01T07:08:26.565Z · LW(p) · GW(p)

That it’s “not free” is a trivial claim (very few things are truly free), but that it costs very little, to—not even encourage moving upward along that axis, but simply to avoid encouraging the opposite—to keep your thumb off the scales, as much as possible—this seems to me to be hard to dispute.

because this is not a reliable proxy for the thing we actually care about

Could you elaborate? What is the thing we actually care about, and what is the unreliable proxy?

comment by Said Achmiz (SaidAchmiz) · 2023-08-01T02:47:33.061Z · LW(p) · GW(p)

See my response to Zack (and previous response to you) for clarification on the kinds of communities I had in mind; certainly I think such things are possible (& sometimes desirable) in more constrained circumstances.

Sorry, I’m not quite sure which “previous response” you refer to. Link, please?

Replies from: T3t
comment by RobertM (T3t) · 2023-08-01T06:09:09.871Z · LW(p) · GW(p)

https://www.lesswrong.com/posts/h2Hk2c2Gp5sY4abQh/lack-of-social-grace-is-an-epistemic-virtue?commentId=QQxjoGE24o6fz7CYm [LW(p) · GW(p)]

As I mentioned in my reply to Said, I did in fact have medium-sized online communities in mind when writing that comment.  I agree that stronger social bonds between individuals will usually change the calculus on communication norms.  I also suspect that it's positively tractable to change that frontier for any given individual relationship through deliberate effort, while that would be much more difficult[1] [LW(p) · GW(p)] for larger communities.

https://www.lesswrong.com/posts/h2Hk2c2Gp5sY4abQh/lack-of-social-grace-is-an-epistemic-virtue?commentId=Dy3uyzgvd2P9RZre6 [LW(p) · GW(p)]

they mostly aren't the thing I care about (in context, not-tiny online communities where most members don't have strong personal social ties to most other members)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-08-01T07:10:25.452Z · LW(p) · GW(p)

So, “not-tiny online communities where most members don’t have strong personal social ties to most other members”…? But of course that is exactly the sort of thing I had in mind, too. (What did you think I was talking about…?)

Anyhow, please reconsider my claims, in light of this clarification.

comment by Said Achmiz (SaidAchmiz) · 2023-08-01T02:42:07.400Z · LW(p) · GW(p)

ETA: and while in this case I have no particular reason to doubt your report that such communities exist, I have substantial reason to believe that if you were to share what those communities were with me, I probably wouldn’t find that most of them were meaningful counterevidence to my claim (for a variety of reasons, including that my initial claim was overbroad).

This is understandable, but in that case, do you care to reformulate your claim? I certainly don’t have any idea what you had in mind, given what you say here, so a clarification is in order, I think.

comment by AnthonyC · 2023-07-31T19:54:45.886Z · LW(p) · GW(p)

Choice of mode/aesthetics for conveying a message also conveys contextual information that often is useful. Who is this person, what is my relationship to them, what is their background, what do those things tell me about the likely assumptions and lenses through which they will be interpreting the things I say?

In most cases verbal language is not sufficient to convey the entirety of a message, and even when it is, successful communication requires that the receiver is using the right tools for interpretation.

Yes, in practice this can be (and is) used to hide corruption, enforce class and status hierarchies, and so on, in addition to the use case of caring about how the message affects the recipients emotional state.

It can also be used to point at information that is taboo, in scenarios where two individuals are not close enough to have common knowledge of each other's beliefs.

Or in social situations (which is all of them when we're communicating at all; the difference is one of degree) it can be used to test someone's intelligence and personality, seeing how adroit they are at perceiving and sending signals and messages.

See also this SSC post, if you haven't yet.

Filter also through a lens of the fact that humans very often have to talk to, work with, and have lasting relationships with people they don't like, don't know very well outside a narrow context, and don't trust much. Norms that obscure information that isn't supposed to be relevant, without making it impossible to convey such information, are useful, because it is not my goal, or my responsibility, to communicate those things. Politeness norms can thus help the speaker by ensuring they don't accidentally (and unnecessarily, and unambiguously) convey information they didn't mean to, which doesn't pertain to the matter at hand, and which the other party has no right to obtain. And they can help the listener by enabling them to ignore ambiguous information that is none of their business.

In the context of Feynman and Bohr, remember that in addition to the immediate discussion, in such scenarios it is also often the case that one party has a lot of power over the other. Bohr seems to be saying he's someone who has no interest in abusing such power, but Feynman doesn't know that, and the group doesn't have common knowledge of it, and you can't assume this in general. So the default is politeness to avoid giving anyone a pretense that the powerful can use against the weak. Overcoming that default takes dedicated effort over time.

comment by astridain (aristide-twain) · 2023-07-31T19:15:49.642Z · LW(p) · GW(p)

Some of it might be actual-obfuscation if there are other people in the room, sure. But equally-intelligent equally-polite people are still expected to dance the dance even if they're alone. 

Your last paragraph gets at what I think is the main thing, which is basically just an attempt at kindness. You find a nicer, subtler way to phrase the truth in order to avoid shocking/triggering the other person. If both people involved were idealised Bayesian agents this would be unnecessary, but idealised Bayesian agents don't have emotions, or at any rate they don't have emotions about communication methods. Humans, on the other hand, often do; and it's often not practical to try and train ourselves out of them completely; and even if it were, I don't think it's ultimately desirable. Idiosyncratic, arbitrary preferences are the salt of human nature; we shouldn't be trying to smooth them out, even if they're theoretically changeable to something more convenient. That way lies wireheading.

Replies from: interstice
comment by interstice · 2023-07-31T19:44:06.930Z · LW(p) · GW(p)

But equally-intelligent equally-polite people are still expected to dance the dance even if they're alone

I think this could be considered to be a sort of "residue" of the sort of deception Zack is talking about. If you imagine agents with different levels of social savviness, the savviest ones might adopt a deceptively polite phrasing, until the less savvy ones catch on, and so on down the line until everybody can interpret the signal correctly. But now the signaling equilibrium has shifted, so all communication uses the polite phrasing even though no one is fooled. I think this is probably the #2 source of deceptive politeness, with #1 being management of people's immediate emotional reactions, and #3 ongoing deceptiveness.
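
This cascade can be made concrete with a toy diffusion model (a sketch in which the savviness scores, the initial threshold, and the learning rule are all made-up assumptions): decoding of a new euphemism starts with the savviest agents and spreads down the distribution, until the polite phrasing fools essentially no one yet remains the equilibrium way to say the thing.

```python
import random

random.seed(1)

N = 1000
# Each agent has a "savviness" level; higher means they catch on to a new
# euphemism faster. These numbers are arbitrary modeling choices.
savviness = [random.random() for _ in range(N)]
decodes = [s > 0.9 for s in savviness]  # initially only the savviest ~10% decode it

for round_no in range(15):
    frac = sum(decodes) / N
    print(f"round {round_no:2d}: {frac:5.1%} of agents decode the polite phrasing")
    # An agent who can't yet decode the euphemism learns it with probability
    # proportional to how widespread decoding already is, scaled by savviness.
    decodes = [d or (random.random() < frac * s)
               for d, s in zip(decodes, savviness)]
```

Run it and the decoding fraction climbs round over round; once it saturates, the "deception" is pure residue, in exactly the sense described above.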

comment by qvalq (qv^!q) · 2023-08-03T08:12:17.195Z · LW(p) · GW(p)

Pretend-obfuscation prevents common knowledge.

comment by gjm · 2023-07-31T20:40:51.755Z · LW(p) · GW(p)

I think "I think she's a little out of your league"[1] doesn't convey the same information as "you're ugly" would, because (1) it's relative and the possibly-ugly person might interpret it as "she's gorgeous" and (2) it's (in typical use, I think) broader than just physical appearance so it might be commenting on the two people's wittiness or something, not just on their appearance.

[1] Parent actually says "you're a little out of her league" but I assume that's just a slip.

It's not obvious to me how important this is to the difference in graciousness, but it feels to me as if saying that would be ruder if it did actually allow the person it was said to to infer "you're ugly" rather than merely "in some unspecified way(s) that may well have something to do with attractiveness, I rate her more highly than you". So in this case, at least, I think actual-obfuscation as well as pretend-obfuscation is involved.

Replies from: aristide-twain
comment by astridain (aristide-twain) · 2023-07-31T21:30:01.143Z · LW(p) · GW(p)

That might be a fault with my choice of example. (I am not in fact a master of etiquette.) But I'm sure examples can be supplied where "the polite thing to say" is a euphemism that you absolutely do expect the other person to understand. At a certain level of obviousness and ubiquity, they tend to shift into figures of speech. “Your loved one has passed on” instead of “your loved one is dead”, say.

And yes, that was a typo. Your way of expressing it might be considered an example of such unobtrusive politeness. My guess is that you said “I assume that's just a slip” not because you have assigned noteworthy probability-mass to the hypothesis “astridain had a secretly brilliant reason for saying the opposite of what you'd expect and I just haven't figured it out”, but because it's nicer to fictitiously pretend to care about that possibility than to bluntly say “you made an error”. It reduces the extent to which I feel stupid in the moment; and it conveys a general outlook of your continuing to treat me as a worthy conversation partner; and that's how I understand the note. I don't come away with a false belief that you were genuinely worried about the possibility that there was a brilliant reason I'd reversed the pronouns and you couldn't see it. You didn't expect me to, and you didn't expect anyone to. It's just a graceful way of correcting someone.

Replies from: qv^!q
comment by qvalq (qv^!q) · 2023-08-03T08:15:10.857Z · LW(p) · GW(p)

"Your loved one has passed on"

I'm not sure I've ever used a euphemism (I don't know what a euphemism is).

When should I?

comment by RamblinDash · 2023-07-31T18:22:17.344Z · LW(p) · GW(p)

And rationalists might still say “why are we spending all this brainpower encrypting our conversations just so that the other guy can decrypt them again? it's unnecessary at best”.


We do this so that the ugly guy can get the message without creating Common Knowledge [LW · GW] of his ugliness.

comment by Said Achmiz (SaidAchmiz) · 2023-07-31T18:16:03.712Z · LW(p) · GW(p)

Amount of information conveyed to whom?

More pleasant for whom?

Obfuscation from whom?

Without these things, your account is underspecified.

And if you specify these things, you may find that your claim is radically altered thereby.

comment by MondSemmel · 2023-10-05T16:22:59.685Z · LW(p) · GW(p)

While the framing of treating lack of social grace as a virtue captures something true, it's too incomplete and imo can't support its strong conclusion. The way I would put it is that you have correctly observed that, whatever the benefits of social grace are, it comes at a cost, and sometimes this cost is not worth paying. So in a discussion, if you decline to pay the cost of social grace, you can afford to buy other virtues instead.[1]

For example, it is socially graceful not to tell the Emperor Who Wears No Clothes that he wears no clothes. Whereas someone who lacks social grace is more likely to tell the emperor the truth.

But first of all, I disagree with the frame that lack of social grace is itself a virtue. In the case of the emperor, for example, the virtues are rather legibility and non-deception, traded off against whichever virtues the socially graceful response would've gotten.

And secondly, often the virtues you can buy with social grace are worth far more than whatever you could gain by declining to be socially graceful. For example, when discussing politics with someone of an opposing ideology, you could decline to be socially graceful and tell your interlocutor to their face that you hate them and everything they stand for. This would be virtuously legible and non-deceptive, at the cost of immediately ending the conversation and thus forfeiting any chance of e.g. gains from trade, coming to a compromise, etc.

One way I've seen this cost manifest on LW is that some authors complain that there's a style of commenting here that makes it unenjoyable to post here as an author. As a result, those authors are incentivized to post less, or to post elsewhere.[2]

And as a final aside, I'm skeptical of treating Feynman as socially graceless. Maybe he was less deferential towards authority figures, but if he had told nothing but the truth to all the authority figures (who likely included some naked emperors) throughout his life, his career would've presumably ended long before he could've gotten his Nobel Prize. And b), IIRC the man's physics lectures are just really fun to watch, and I'm pretty confident that a sufficiently socially graceless person would not make for a good teacher. For example, it is socially graceful not to belittle fledgling students as intellectual inferiors, even though they in some ways are just that.

  1. ^

    Related: I wrote this comment [LW(p) · GW(p)] and this follow-up [LW(p) · GW(p)] where I wished that Brevity was considered a rationalist virtue. Because if there's no counterbalancing virtue to trade off against other virtues like legibility and truth-seeking, then supposedly virtuous discussions are incentivized to become arbitrarily long.

  2. ^

    The moderation log of users banned by other users [? · GW] is a decent proxy for the question of which authors have considered which commenters to be too costly to interact with, whether due to lack of social grace or something else.

Replies from: philh
comment by philh · 2023-10-06T11:12:55.239Z · LW(p) · GW(p)

On the narrow question of Feynman's social graces, I only remember watching one video of his and it did seem to back up the "he kinda lacks them" idea. From memory: an interviewer asks him "why is ice slippery" and he starts musing about "how do I explain this to you". The interviewer seems to get kind of a dismissive vibe (which I got too) and says "I think it's a fair question", and Feynman says "of course it's a fair question, it's an excellent question".

And now not from memory, here's the video. The question is actually about magnets, he starts pushing for more detail about "what are you actually asking" and that's when you get that exchange. I think the vibe I get is actually more aggressive than dismissive, like at times it seems he's angry at me. I assume it's just enthusiasm, but I feel like I'd find it uncomfortable to have a long conversation with him in that mode. That would be a shame, and hopefully I'd get used to it.

(Of course, "having / not having social graces" is way oversimplified. "Feynman was skilled in some social graces and unskilled in others" seems likely. And for all I know, maybe most people don't pick up an aggressive vibe from the video.)

But, also relevant: he does talk about ice, and this HN comment says his explanation is wrong. But he actually hedges that explanation. "It is in the case of ice that when you stand on it, they say, momentarily the pressure melts the ice a little bit."

comment by philh · 2024-12-17T18:15:26.113Z · LW(p) · GW(p)

I kinda like this post, and I think it's pointing at something worth keeping in mind. But I don't think the thesis is very clear or very well argued, and I currently have it at -1 in the 2023 review.

Some concrete things.

  • There are lots of forms of social grace, and it's not clear which ones are included. Surely "getting on the train without waiting for others to disembark first" isn't an epistemic virtue. I'd normally think of "distinguishing between map and territory" as an epistemic virtue but not particularly a social grace, but the last two paragraphs make me think that's intended to be covered. Is "when I grew up, weaboo wasn't particularly offensive, and I know it's now considered a slur, but eh, I don't feel like trying to change my vocabulary" an epistemic virtue?
    • Perhaps the claim is only meant to be that lack of "concealing or obfuscating information that someone would prefer not to be revealed" is an epistemic virtue? Then the map/territory stuff seems out of place, but the core claim seems much more defensible.
  • "Idealized honest Bayesian reasoners would not have social graces—and therefore, humans trying to imitate idealized honest Bayesian reasoners will tend to bump up against (or smash right through) the bare minimum of social grace." Let's limit this to the social graces that are epistemically harmful. Still, I don't see how this follows.
    • Idealized honest Bayesian reasoners wouldn't need to stop and pause to think, but a human trying to imitate one will need to do that. A human getting closer in some respects to an idealized honest Bayesian reasoner might need to spend more time thinking.
    • And, where does "bare minimum" come from? Why will these humans do approximately-none-at-all of the thing, rather than merely less-than-maximum of it?
    • I do think there's something awkward about humans-imitating-X, in pursuit of goal Y that X is very good at, doing something that X doesn't do because it would be harmful to Y. But it's much weaker than claimed.
  • There's a claim that "distinguishing between the map and the territory" is distracting, but as I note here [LW(p) · GW(p)] it's not backed up.
  • I note that near the end we have: "If the post looks lousy, say it looks lousy. If it looks good, say it looks good." But of course "looks" is in the map. The Feynman in the anecdote seems to have been following a different algorithm: "if the post looks [in Feynman's map, which it's unclear if he realizes is different from the territory] lousy, say it's lousy. If it looks [...] good, say it's good."
  • Vaniver [LW(p) · GW(p)] and Raemon [LW(p) · GW(p)] point out something along the lines of "social grace helps institutions persevere". Zack says [LW(p) · GW(p)] he's focusing on individual practice rather than institution-building. But both his anecdotes involve conversations. It seems that Feynman's lack of social grace was good for Bohr's epistemics... but that's no help for Feynman's individual practice. Bohr appreciating Feynman's lack of social grace seems to have been good for Feynman's ability-to-get-close-to-Bohr, which itself seems good for Feynman's epistemics, but that's quite different.
    • Oh, elsewhere Zack says [LW(p) · GW(p)] "The thesis of the post is that people who are trying to maximize the accuracy of shared maps are going to end up being socially ungraceful sometimes", which doesn't sound like it's focusing on individual practice?
    • Hypothesis: when Zack wrote this post, it wasn't very clear to himself what he was trying to focus on.

Man, this review kinda feels like... I can imagine myself looking back at it two years later and being like "oh geez that wasn't a serious attempt to actually engage with the post, it was just point scoring". I don't think that's what's happening, and that's just pattern matching on the structure or something? But I also think that if it was, it wouldn't necessarily feel like it to me now?

It also feels like I could improve it if I spent a few more hours on it and re-read the comments in more detail, and I do expect that's true.

In any case, I'm pretty sure both [the LW review process] and [Zack specifically] prefer me to publish it.

comment by Ape in the coat · 2023-08-01T09:42:40.459Z · LW(p) · GW(p)

Strong downvote. The post looks lousy. The relation between social grace, honesty, and truth seeking is complicated and multidimensional. You didn't engage with this complexity. You didn't properly argue your point. You made a statement, then vaguely gestured in the direction of two examples.

The first example is not only fictional, but isn't even really relevant. The world without lies is in a way nicer to live in, because people reveal more information to you. It doesn't make you a superior truth seeker. Now, would I prefer to live in such a world? Sure, me and every other autistic person. But this is an axiological issue, not an epistemological one.

The second example is more on point. It shows that it is epistemically useful to be able to talk to someone ignoring status concerns, especially when people need it. This is the point I completely agree with. However, it doesn't generalise to "It's always epistemically better to lack any social grace", because 1) the same tool isn't the best for every job, and 2) social grace isn't just about status concerns.

There is a potentially interesting conversation with lots of nuance to be had here, which a superior version of this post would have tried to have. For instance, while sometimes politeness is about concealing and obfuscating information, it's often the case that more polite/politically correct terms are strictly more accurate. Consider:

  1. "You are ugly"
  2. "You are not conventionally attractive"

The 1st statement is implicitly committing the mind projection fallacy [LW · GW], where ugliness is considered to be a property of the person. The second doesn't, as it explicitly mentions that attractiveness depends not only on your qualities but also on their relation to the convention.

Here is a different angle. Consider:

  1. A person unaware of social conventions just doing object-level reasoning about X
  2. A person that used to be unaware of social conventions, learned some and while doing the same object-level reasoning about X, then presents the finding in a nice way
  3. A person unable to disentangle their reasoning from status concerns just saying nice platitudes about X 
  4. A person unable to disentangle their reasoning from status concerns intentionally being rude about X, because they believe that it gives them the appeal of a truthseeker and the high status corresponding to it.

Your model distinguishes neither between 1 and 4, nor between 2 and 3. Which is bad, because as far as epistemic virtue goes, we would like to be either 1 or 2, and not 3 or 4.

comment by Ben (ben-lang) · 2023-07-31T17:08:05.281Z · LW(p) · GW(p)

This is kind of an aside, but does this Feynman story strike anyone else as off? It's kind of too perfect. Not even subtly. It strikes me as "significantly exaggerated", at the very least.

Replies from: interstice, Zack_M_Davis
comment by interstice · 2023-07-31T17:41:04.685Z · LW(p) · GW(p)

While I was reading I was thinking that Bohr might have contacted Feynman more because he was more competent than others, rather than because he was more honest, but (ironically) it would be rude for Feynman to say that. It's also the case that being competent means you can be blunt without making a fool of yourself, so it's sort of a costly signal.

comment by Zack_M_Davis · 2023-07-31T17:22:06.425Z · LW(p) · GW(p)

We'll never know! Niels and Aage Bohr are both dead and can't offer a contradictory account.

There does seem to be a tension between "all I could see of him was from between people's heads" and Bohr particularly noticing Feynman as unmoved by status. (Unless the noticeable thing was Feynman not particularly trying to be seen?)

comment by Cornelius Dybdahl (Kalciphoz) · 2023-08-17T00:04:10.849Z · LW(p) · GW(p)

My thinking on this point is that the only proper way to respect a great work is to treat it with the same fire that went into making it. Grovelling at Niels Bohr's feet is not as respectful as contending with his ideas and taking them seriously — and expending great mental effort on an intense, focused interlocution is an act of profound respect.

There's a difference between that and discourtesy like what is displayed in the movie scene. Extending courtesy to a kind and virtuous person is a simple matter of justice. Comparing his face to a frog is indelicate, whereas admitting plainly that you find him unattractive is equally as honest without being as hurtful. If he wants a more specific inventory of his physical flaws, he can ask for elaboration.

comment by philh · 2023-08-09T15:00:09.584Z · LW(p) · GW(p)

Someone who felt uncomfortable with Feynman's bluntness and wanted to believe that there's no conflict between rationality and social graces might argue that Feynman's "simple proposition" is actually wrong insofar as it fails to appreciate the map–territory distinction: in saying, "No, it's not going to work", was not Feynman implicitly asserting that just because he couldn't see a way to make it work, it simply couldn't? ...

While not entirely without merit (it's true that the map is not the territory; it's true that authority is not without evidential weight), attending overmuch to such nuances distracts from worrying about the physics [LW · GW]

Here's something I wrote earlier today: "I thought transactions wouldn't cause "wait for lock" unless requested explicitly, and I don't think we request it explicitly. But maybe I'm wrong there?"

I don't fully remember my epistemic state at the time, but I think I was pretty confident on both counts. But as it happens, I was wrong on the first count. This is the crucial piece of information we needed to understand what we were investigating.

I can imagine that I might instead have written "transactions won't cause "wait for lock" unless requested explicitly, and we don't request it explicitly". I think writing that would have been worse for me and worse for my team, because someone reading it would have been less likely to double check the wrong thing that I believed. (I don't know if in reality my colleague who found the problem read my message and thought "what? Yes they will", or "oh? that sounds wrong to me", or "hm, I guess that's something to check" or maybe just didn't see my message at all. But I think it's unlikely-but-plausible that the more-confident version of my message could have cost several hours of debugging time.)

Would you say that in choosing to write the less-confident thing instead of the more-confident thing, I was distracting myself from worrying about the SQL? I think that would be kind of a weird thing to say, but perhaps defensible. But in any case I was worrying about saying true things, and I think that matters too.

(A Feynman quote that seems relevant: "The first principle is not to fool yourself – and you are the easiest person to fool.")

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2023-08-10T02:29:16.242Z · LW(p) · GW(p)

Thanks for commenting! I agree that it's good to communicate one's uncertainty when one is uncertain. (From a certain perspective, it's unfortunate that our brains and culture aren't set up to do this in a particularly nuanced way; we only know how to say "X" and "I think X" rather than sharing likelihood ratios.) Perhaps read the second half of this post as expressing anxiety about tone-policing of confident-sounding language being used for social status regulation rather than to optimize communication of actual uncertainty?
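
As a concrete sketch of what sharing likelihood ratios might look like (the 4.0 figure and the function here are made up for illustration): instead of asserting "X" or "I think X", the speaker reports how many times likelier their evidence is under X than under not-X, and each listener folds that report into their own prior using Bayes' rule in odds form.

```python
def posterior(prior_p, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior_p / (1 - prior_p)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Speaker's report: "my evidence is 4x likelier if X is true than if X is false."
REPORTED_LR = 4.0  # made-up strength of evidence

for prior_p in (0.2, 0.5, 0.8):  # listeners who start from different priors
    print(f"prior {prior_p:.1f} -> posterior {posterior(prior_p, REPORTED_LR):.2f}")
```

The same report moves every listener, but to different posteriors depending on where they started; a bare "I think X" can't support that.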

Replies from: philh
comment by philh · 2023-08-10T10:31:10.785Z · LW(p) · GW(p)

Nod, but then perhaps that part isn't saying "lack-of-grace is a virtue" so much as "a certain kind of criticism of lack-of-grace is a vice"? (I haven't reread with this possibility in mind.)

In any case, I think I'm fine with that kind of tone policing being used for social status regulation when the confidence is unjustified.

I suppose you can say "if someone routinely talks with unjustified confidence, then eventually they'll be wrong, and they can take the status hit then?" But

  1. I think we can update faster than that. E.g. I recall Scott Adams said Trump would win 2016 with 99% probability or something? Trump did win, but I'm still comfortable judging this as overconfident without looking at his forecasting track record. (Though if someone were to look at his track record and found that he was well calibrated, I guess I'd have to be less comfortable.)
  2. Often we never really learn the answer, e.g. with counterfactuals ("if the weather had been 3° colder that day, Hillary would have won") or claims about what's inside someone's head ("they claim to sincerely believe X, but they obviously are just saying that to avoid censure"). "Is this confidence justified" is another example here.
Replies from: Zack_M_Davis, Ninety-Three
comment by Zack_M_Davis · 2023-08-13T23:09:26.067Z · LW(p) · GW(p)

The thesis of the post is that people who are trying to maximize the accuracy of shared maps are going to end up being socially ungraceful sometimes, because sometimes social grace calls for obfuscating shared maps.

Criticism of unjustified confidence for being unjustified increases the accuracy of shared maps. Criticism of unjustified confidence for reasons of social status regulation is predictably not going to be limited to cases where the confidence is unjustified, even if it happens to be unjustified in a particular case.

Accuracy of shared maps is quantitative. A culture that's optimized for social grace isn't going to make people wrong about everything, and could make people less wrong about many things relative to many less graceful alternative cultures. (At minimum, if you're not allowed to be confident, you can't be overconfident; if you're not allowed to talk about what's inside someone's head, you can't be wrong about what's inside someone's head.)

Replies from: philh
comment by philh · 2023-08-15T12:48:27.488Z · LW(p) · GW(p)

Criticism of unjustified confidence for being unjustified increases the accuracy of shared maps. Criticism of unjustified confidence for reasons of social status regulation is predictably not going to be limited to cases where the confidence is unjustified, even if it happens to be unjustified in a particular case.

This sounds like it's contrasting "criticism for being unjustified" against "criticism for social status regulation". But those aren't the same use of the word "for", much like it would be weird to contrast "locking someone up for murder" against "locking someone up for deterrence". (Though "for deterrence" might be a different "for" again, I'm not sure.)

To unpack, when I said

I think I’m fine with that kind of tone policing being used for social status regulation when the confidence is unjustified.

I didn't intend to support someone being like "I want to do some social status regulation and I'm going to do it by tone policing some unjustified confidence". I meant to support "this is unjustified confidence, I want less of this and to that end I'm going to do some social status regulation through the mechanism of tone policing". I can't tell if you're yay-that or boo-that.

I guess that when you said

Perhaps read the second half of this post as expressing anxiety about tone-policing of confident-sounding language being used for social status regulation rather than to optimize communication of actual uncertainty?

I basically ignored the "rather than..." and thought you were just opposed to tone-policing of confident sounding language in general. And the reason I did that might be that in my head, it's surprising to talk about "tone policing for social status regulation, rather than tone policing to optimize communication"; rather, I'd expect to talk about "tone policing for social status regulation, in order to optimize communication".

comment by Ninety-Three · 2023-08-12T17:05:23.194Z · LW(p) · GW(p)

Scott Adams predicted Trump would win in a landslide. He wasn't just overconfident, he was wrong! The fact that he's not taking a status hit is because people keep reporting his prediction incompletely and no one bothers to confirm what he actually predicted (when I Google 'Scott Adams Trump prediction' in Incognito, the first two results say "landslide" in the first ten seconds and title, respectively).

Your first case is an example of something much worse than not updating fast enough.

Replies from: philh
comment by philh · 2023-08-12T23:11:30.502Z · LW(p) · GW(p)

Thanks for the correction! Bad example on my part then.

My guess is that the point is clear and fairly undisputed, and coming up with an actually correct example wouldn't be very helpful. Still a little embarrassing.

comment by Seth Herd · 2024-12-17T21:40:59.954Z · LW(p) · GW(p)

By all means, strategically violate social customs. But if you irritate people by doing it, you may be advancing your own epistemics by making them talk to you, while actually hurting their epistemics by making them irritated with whatever belief you're trying to pitch. Lack of social grace is very much not an epistemic virtue.

This post captures a fairly common belief in the rationalist community. It's important to understand why it's wrong.

Emotions play a strong role in human reasoning. I finally wrote up at least a little sketch of why that happens. The technical term is motivated reasoning.

Motivated reasoning/confirmation bias as the most important cognitive bias [LW(p) · GW(p)]

comment by Dagon · 2023-07-31T22:06:03.939Z · LW(p) · GW(p)

"Be like Feynman" is great advice for 0.01% of the population, and horrible for 99% (and irrelevant to the remainder).  In order to be valued for bluntness, one must be correct insanely often.   Otherwise, you have to share evidence rather than conclusions, and couching it in more pleasant terms makes it much more tolerable (again, for most but not all).

I do want to react to:

There, you don't have to worry whether people don't like you and are planning to harm your interests

Wait, that's if THEY CANNOT lie, not if you choose not to.  Unilateral simplicity in a complex world does not have very many advantages.  Further, nobody has to worry about anything.  You sometimes have to consider that they will harm you, and take steps to maintain distance while you collect evidence that they're allies.  But you don't have to worry while doing so.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-08-01T00:27:50.029Z · LW(p) · GW(p)

“Be like Feynman” is great advice for 0.01% of the population, and horrible for 99% (and irrelevant to the remainder). In order to be valued for bluntness, one must be correct insanely often. Otherwise, you have to share evidence rather than conclusions, and couching it in more pleasant terms makes it much more tolerable (again, for most but not all).

Bluntness has nothing whatever to do with not sharing evidence, so this seems like a total red herring to me.

comment by Nathaniel Monson (nathaniel-monson) · 2023-08-01T15:47:38.769Z · LW(p) · GW(p)

To a decision-theoretic agent, the value of information is always nonnegative

This seems false. If I selectively give you information in an adversarial manner, and you don't know that I'm picking the information to harm you, I think it's very clear that the value of the information you gain can be strongly negative. 
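
Both halves of this disagreement can be checked with a small simulation (a sketch whose payoffs, report accuracy, and selection rule are all assumptions chosen for illustration): a naive agent who conditions on adversarially selected reports as if they were a random sample ends up worse off than an agent who ignores the informant entirely, while an agent who correctly models the selection policy does better than both, recovering the nonnegative-value guarantee.

```python
import random
from math import comb

random.seed(0)

PRIOR_A = 0.5  # prior probability that the true state is A (vs. B)
ACC = 0.7      # each private report matches the true state with this probability
N_OBS = 10     # reports held by the informant, who displays only the 'A' ones
TRIALS = 50_000

def act(p_a):
    """Expected-utility-maximizing action given P(state = A): betting on the
    right state pays 1, betting wrong pays 0, and 'safe' always pays 0.6."""
    return max([("bet_A", p_a), ("bet_B", 1 - p_a), ("safe", 0.6)],
               key=lambda pair: pair[1])[0]

def payoff(action, state):
    return 0.6 if action == "safe" else float(action == "bet_" + state)

def posterior_naive(k):
    """Wrong model: treats the k displayed 'A' reports as a random sample."""
    la, lb = ACC ** k, (1 - ACC) ** k
    return la * PRIOR_A / (la * PRIOR_A + lb * (1 - PRIOR_A))

def posterior_modeled(k):
    """Right model: conditions on 'k of N_OBS reports said A' with the
    correct binomial likelihoods, knowing the informant's selection rule."""
    la = comb(N_OBS, k) * ACC ** k * (1 - ACC) ** (N_OBS - k)
    lb = comb(N_OBS, k) * (1 - ACC) ** k * ACC ** (N_OBS - k)
    return la * PRIOR_A / (la * PRIOR_A + lb * (1 - PRIOR_A))

totals = {"no_info": 0.0, "naive": 0.0, "modeled": 0.0}
for _ in range(TRIALS):
    state = "A" if random.random() < PRIOR_A else "B"
    reports = [state if random.random() < ACC else ("B" if state == "A" else "A")
               for _ in range(N_OBS)]
    k = reports.count("A")  # the informant shows only the reports that say 'A'
    totals["no_info"] += payoff(act(PRIOR_A), state)
    totals["naive"] += payoff(act(posterior_naive(k)), state)
    totals["modeled"] += payoff(act(posterior_modeled(k)), state)

for name, total in totals.items():
    print(f"{name:8s} average payoff: {total / TRIALS:.3f}")
```

With these numbers the no-information agent takes the safe 0.6, the naive agent falls to roughly 0.5, and the modeling agent climbs to roughly 0.9: the theorem holds for agents who know what process produced their information, and fails exactly when, as above, they don't.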

comment by Dave Lindbergh (dave-lindbergh) · 2023-07-31T17:14:56.623Z · LW(p) · GW(p)

A lot of "social grace" is strategic deception. The out-of-his-league woman defers telling the guy he's getting nowhere as long as possible, just in case it turns out he's heir to a giant fortune or something.

And of course people suck up to big shots (the Feynman story) because they hope to associate with them and have some of their fame and reputation rub off on themselves. 

This is not irrational behavior, given human goals.

Replies from: Viliam, Herb Ingram
comment by Viliam · 2023-08-04T09:09:58.339Z · LW(p) · GW(p)

The problem is the deception, not the social grace. If we succeeded in removing social grace entirely, but people remained deceptive, we wouldn't get closer to truth. We would only make our interactions less pleasant.

comment by Herb Ingram · 2023-07-31T21:22:19.782Z · LW(p) · GW(p)

just in case it turns out he's heir to a giant fortune or something.

That seems like a highly dubious explanation to me. I guess the woman's honest account (or what you'd get by examining her state of mind) would say that she does it as a matter of habit, aiming to be nice and conform to social conventions.

If that's true, the question becomes where the convention comes from and what maintains it despite the naively plausible benefits one might hope to gain by breaking it. I don't claim to understand this (that would hint at understanding a lot of human culture at a basic level). However, I strongly suspect the origins of such behavior (and what maintains it) to be social. I.e., a good explanation of why the woman has come to act this way involves more than two people. That might involve some sort of strategic deception, but consider that most people in fact want to be lied to in such situations. An explanation must go a lot deeper than that kind of strategic deception.

comment by TAG · 2023-08-02T09:05:55.480Z · LW(p) · GW(p)

The world of The Invention of Lying is simpler, clearer, easier to navigate than our world.

If you only remove lying, you end up with a world that contains a lot more of the negative consequences socially sanctioned lying is intended to avoid -- hurt feelings and so on.

comment by Vladimir_Nesov · 2023-07-31T23:11:14.805Z · LW(p) · GW(p)

To a decision-theoretic agent, the value of information is always nonnegative.

A boundary around one's mind enforced by a norm of not mind-reading people [LW · GW] seems useful. When working on a problem, thoughts on that problem are appropriate to reveal, and counterproductive to drown in social graces, but that says little about value of communicating everything that's feasible to communicate.

comment by James Camacho (james-camacho) · 2024-12-19T03:37:51.472Z · LW(p) · GW(p)

For humans from our world, these questions do have answers—complicated answers having to do with things like map–territory confusions that make receiving bad news seem like a bad event (rather than the good event of learning information about how things were already bad, whether or not you knew it), and how it's advantageous for others to have positive-valence false beliefs about oneself.


If you have bad characteristics (e.g. you steal from your acquaintances), isn't it in your best interest to make sure this doesn't become common knowledge? You don't want to normalize people pointing out your flaws, so you get mad at people for gossiping behind your back, or saying rude things in front of you.