Elements of Rationalist Discourse

post by Rob Bensinger (RobbBB) · 2023-02-12T07:58:42.479Z · LW · GW · 49 comments

I liked Duncan Sabien's Basics of Rationalist Discourse [LW · GW], but it felt somewhat different from what my brain thinks of as "the basics of rationalist discourse". So I decided to write down my own version (which overlaps some with Duncan's).

Probably this new version also won't match "the basics" as other people perceive them. People may not even agree that these are all good ideas! Partly I'm posting these just out of curiosity about what the delta is between my perspective on rationalist discourse and y'all's perspectives.

The basics of rationalist discourse, as I understand them:

 

1. Truth-Seeking. Try to contribute to a social environment that encourages belief [? · GW] accuracy [LW · GW] and good epistemic [LW · GW] processes [? · GW]. Try not to “win [LW · GW]” arguments [? · GW] using symmetric weapons (tools that work similarly well whether you're right or wrong). Indeed, try not to treat arguments [? · GW] like soldiers [LW · GW] at all.

 

2. Non-Violence: Argument gets counter-argument. Argument does not get bullet. Argument does not get doxxing, death threats, or coercion.[1]

 

3. Non-Deception. Never try to steer your conversation partners (or onlookers) toward having falser models. Where possible, avoid saying stuff that you expect to lower the net belief accuracy of the average reader; or failing that, at least flag that you're worried about this happening.

As a corollary:

3.1. Meta-Honesty [LW · GW]. Make it easy for others to tell how honest, literal, PR-y, etc. you are (in general, or in particular contexts). This can include everything from "prominently publicly discussing the sorts of situations in which you'd lie" to "tweaking your image/persona/tone/etc. to make it likelier that people will have the right priors [? · GW] about your honesty".

 

4. Localizability. Give people a social affordance to decouple [LW · GW] / evaluate the local [LW · GW] validity of claims. Decoupling is not required, and indeed context is often important and extremely worth talking about! But it should almost always be OK to locally address a specific point or subpoint, without necessarily weighing in on the larger context or suggesting you’ll engage further.

 

5. Alternative-Minding. Consider alternative [? · GW] hypotheses, and ask yourself what Bayesian [? · GW] evidence [? · GW] you have that you're not in those alternative worlds. This mostly involves asking what models retrodict.

Cultivate the skills of original seeing [? · GW] and of seeing from new vantage points.

As a special case, try to understand and evaluate the alternative hypotheses that other people are advocating. Paraphrase stuff back to people to see if you understood [? · GW], and see if they think you pass their Ideological Turing Test on the relevant ideas.

Be a fair bit more willing to consider nonstandard beliefs, frames/lenses, and methodologies, compared to (e.g.) the average academic. Keep in mind that inferential gaps can be large [? · GW], most life-experience is hard to transmit in a small [LW · GW] number of words [LW · GW] (or in words at all), and converging on the truth can require a long process of cultivating the right mental motions, doing exercises, gathering and interpreting new data, etc.

Make it a habit to explicitly distinguish "what this person literally said" from "what I think this person means". Make it a habit to explicitly distinguish "what I think this person means" from "what I infer about this person as a result".

 

6. Reality-Minding. Keep your eye on the ball, hug [LW · GW] the query, and don’t lose sight of object-level [LW · GW] reality [LW · GW].

Make it a habit to flag when you notice ways to test an assertion. Make it a habit to actually test claims, when the value-of-information [? · GW] is high enough.

Reward scholarship, inquiry, betting, pre-registered predictions, and sticking your neck out, especially where this is time-consuming, effortful, or socially risky.

 

7. Reducibility. Err on the side of using simple, concrete [LW · GW], literal, and precise [LW · GW] language. Make it a habit to taboo [LW · GW] your words, do reductionism [? · GW], explain what you mean, define [LW · GW] your terms [? · GW], etc.

As a corollary, applying precision and naturalism to your own cognition:

7.1. Probabilism. Try to quantify [? · GW] your uncertainty [? · GW] to some degree.

 

8. Purpose-Minding. Try not to lose purpose [LW · GW] (unless you're deliberately creating a sandbox for a more free-form and undirected stream of consciousness, based on some meta-purpose or impulse or hunch you want to follow).

Ask yourself why you're having a conversation, and whether you want to do something differently [LW · GW]. Ask others what their goals are. Keep the Void [LW · GW] in view.

As a corollary:

8.1. Cruxiness. Insofar as you have a sense of what the topic/goal of the conversation is, focus on cruxes [? · GW], or (if your goal shifts) consider explicitly flagging that you're tangenting or switching to a new conversational topic/goal.[2]

 

9. Goodwill. Reward others' good epistemic conduct (e.g., updating) more than most people naturally do. Err on the side of carrots over sticks, forgiveness over punishment, and civility over incivility, unless someone has explicitly set aside a weirder or more rough-and-tumble space.[3]

 

10. Experience-Owning. Err on the side of explicitly owning your experiences, mental states, beliefs, and impressions. Flag your inferences as inferences, and beware the Mind Projection Fallacy [LW · GW] and Typical Mind Fallacy [LW · GW].

As a corollary:

10.1. Valence-Owning. Err on the side of explicitly owning your shoulds [LW · GW] and desires [? · GW]. Err on the side of stating your wants and beliefs (and why you want or believe them) instead of (or in addition to) saying what you think people ought to do.

Try to phrase things in ways that make space for disagreement, and try to avoid socially pressuring people into doing things. Instead, as a strong default, approach people with an attitude of informing and empowering them to do what they want.

Favor language with fewer and milder connotations, and make your arguments explicitly where possible, rather than relying excessively on the connotations, feel, fnords, or vibes of your words.


A longer, less jargony version of this post is available on the EA Forum [EA · GW].

 

 

  1. ^

    Counter-arguments aren't the only OK response to an argument. You can choose not to reply. You can even ban someone because they keep making off-topic arguments, as long as you do this in a non-deceptive way. But some responses to arguments are explicitly off the table.

  2. ^

    Note that "the topic/goal of the conversation" is an abstraction. "Goals" don't exist in a vacuum. You have goals (though these may not be perfectly stable, coherent, etc.), and other individuals have goals too. Conversations can be mutually beneficial when some of my goals are the same as some of yours, or when we have disjoint goals but some actions are useful for my goals as well as yours.

    Be wary of abstractions and unargued premises in this very list! Try to taboo [LW · GW] these prescriptions and claims, paraphrase them back, figure out why I might be saying all this stuff, and explicitly ask yourself whether these norms serve your goals too.

    Part of why I've phrased this list as a bunch of noun phrases ("purpose-minding", etc.) rather than verb phrases ("mind your purpose", etc.) is that I suspect conversations will go better (on the dimension of goodwill and cheer) if people make a habit of saying "hm, I think you violated the principle of experience-owning there" or "hm, your comment isn't doing the experience-owning thing as much as I'd have liked", as opposed to "own your experience!!".

    But another part of why I used nouns is that commands aren't experience-owning, and can make it harder for people to mind their purposes. I do have imperatives in the post (mostly because the prose flowed better that way), but I want to encourage people to engage with the ideas and consider whether they make sense, rather than just blindly obey them. So I want people to come into this post engaging with these first as ideas to consider, rather than as commands to obey.

  3. ^

    Note that this doesn't require assuming everyone you talk to is honest or has good intentions.

    It does have some overlap with the rule of thumb "as a very strong but defeasible default, carry on object-level discourse as if you were role-playing being on the same side [LW · GW] as the people who disagree with you".

49 comments

Comments sorted by top scores.

comment by DragonGod · 2023-02-12T10:30:45.489Z · LW(p) · GW(p)

2. Non-Violence: Argument gets counter-argument. Argument does not get bullet. Argument does not get doxxing, death threats, or coercion.[1]

I'd want to include some kinds of social responses as unacceptable as well. Derision, mockery, acts to make the argument low status, ad hominems, etc. 

You can choose not to engage with bad arguments, but you shouldn't engage by not addressing the arguments and instead trying to execute some social maneuver to discredit it. 

Replies from: RobbBB, David Hornbein
comment by Rob Bensinger (RobbBB) · 2023-02-12T12:53:00.024Z · LW(p) · GW(p)

Derision, mockery, acts to make the argument low status, ad hominems, etc.

I don't want to include these in "Non-Violence", because I'm thinking of that rule as relatively absolute. By comparison, "derision" and "mockery" should probably be kept to a minimum, but I'm not going to pretend I've never made fun of the Time Cube guy, or that I feel super bad about having done so.

I also think sometimes a person tries to output "light-hearted playing around", but someone else perceives it as "cruel mockery". This can be a hint that the speaker messed up a bit, but I don't want to treat it as a serious sin (and I don't want to ban all play for the sake of preventing this).

Similarly, "acts to make the argument low status" is a bit tricky to encode as a rule, because even things as simple as "generating a good counter-argument" can lower the original argument's status in many people's eyes. (Flawed arguments should plausibly be seen as lower-status than good arguments!)

And "ad hominem" can actually be justified when the topic is someone's character (e.g., when you're discussing a presidential candidate's judgment, or discussing whether to hire someone, or discussing whether someone's safe to date). So again it's tricky to delimit exactly which cases are OK versus bad.

I do think you're getting at an important thing here, it's just a bit tricky to put into words. My hope is that people will realize that those sorts of things are discouraged by:

1. Truth-Seeking: "Try not to 'win [LW · GW]' arguments [? · GW] using symmetric weapons"

6. Reality-Minding: "Keep your eye on the ball, hug [LW · GW] the query, and don’t lose sight of object-level [LW · GW] reality [LW · GW]."

9. Goodwill: "Err on the side of carrots over sticks, forgiveness over punishment, and civility over incivility"

10.1. Valence-Owning: "Favor language with fewer and milder connotations, and make your arguments explicitly where possible, rather than relying excessively on the connotations, feel, fnords, or vibes of your words."

(If people think it's worth being more explicit here, I'd be interested in ideas for specific edits.)

Replies from: DragonGod, RobbBB
comment by DragonGod · 2023-02-12T15:00:53.507Z · LW(p) · GW(p)

I don't want to include these in "Non-Violence", because I'm thinking of that rule as relatively absolute. By comparison, "derision" and "mockery" should probably be kept to a minimum, but I'm not going to pretend I've never made fun of the Time Cube guy, or that I feel super bad about having done so.

I've made fun of people on Twitter, but:

  1. Don't think that reflects well on me as a rationalist
  2. Don't think such posts are acceptable content for LessWrong.

You may not feel bad about mockery (I don't generally do so either), but do you think it reflects well on you as a rationalist?

 

I don't want to include these in "Non-Violence", because I'm thinking of that rule as relatively absolute.

I agree these aren't acts of violence, but I listened to the rest of the post and didn't hear you object to them anywhere else. This felt like the closest place (in that bad argument gets counterargument and doesn't get any of the things I mentioned).

 

Similarly, "acts to make the argument low status" is a bit tricky to encode as a rule, because even things as simple as "generating a good counter-argument" can lower the original argument's status in many people's eyes. (Flawed arguments should plausibly be seen as lower-status than good arguments!)

An appropriately more nuanced version would be something like: "acts to make an argument low status for reasons other than its accuracy/veracity, and conformance to norms (some true things can be presented in very unpleasant/distasteful ways [e.g. with the deliberate goal of being maximally offensive])".

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2023-02-12T23:36:36.428Z · LW(p) · GW(p)

You may not feel bad about mockery (I don't generally do so either), but do you think it reflects well on you as a rationalist?

I like this example! I do indeed share the intuition "mocking Time Cube guy on Twitter doesn't reflect well on me as a rationalist". It also just seems mean to me.

I think part of what's driving my intuition here, though, is that "mocking" sounds inherently mean-spirited, and "on Twitter" makes it sound like I'm writing the sort of low-quality viral personal attack that's common on Twitter.

"Make a light-hearted reference to Time Cube (in a way that takes for granted that Time Cube is silly) in a chat with some friends" feels pretty unlike "write a tweet mocking and deriding Time Cube", and the former doesn't feel to me like it necessarily reflects poorly on me as a rationalist. (It feels more orthogonal to the spirit of rationality to me, like making puns or playing a video game; puns are neither rationalist nor anti-rationalist.)

So part of my reservation here is that I have pretty different intuitions about different versions of "tell jokes that turn on a certain claim/belief being low-probability", and I'm not sure where to draw the line exactly (beyond the general heuristics I mentioned in the OP).

Another part of my reservation is just that I'm erring on the side of keeping the list of norms too short rather than too long. I'd rather have non-exhaustive lists and encourage people to use their common sense and personal conscience as a guide in the many cases that the guidelines don't cover (or don't cover until you do some interpretive work).

I worry that modern society is too norm-heavy in general, encouraging people to fixate on heuristics, patches, and local Prohibited Actions, in ways that are cognitively taxing and unduly 'domesticating'. I think this can make it harder to notice and appropriately respond to the specifics of the situation you're in, because your brain is yelling a memorized "no! unconditional rule X!" script at you, when in fact if you consulted your unassisted conscience and your common sense you'd have an easier time seeing what the right thing to do is.

So I'm mostly interested in trying to distill core aspects of the spirit of rationalist discourse, in the hope that this can help people's common sense and conscience grow (/ help people become more self-aware of aspects of their common sense and conscience that are already inside themselves, but that they aren't lucid about).

I suspect I've left at least one important part of "the spirit of rationalist discourse" out, so I'm mainly nitpicking your suggestions in case your replies cause me to realize that I'm missing some important underlying generator that isn't alluded to in the OP. I care less about whether "mockery" specifically gets called out in the OP, and more about whether I've neglected an underlying spirit/generator.

Maybe Goodwill is missing a generator-sentence that's something like "Don't lean into cruelty, or otherwise lose sight of what your conscience or common sense says about how best to relate to other human beings."

"acts to make an argument low status for reasons other than its accuracy/veracity, and conformance to norms (some true things can be presented in very unpleasant/distasteful ways [e.g. with the deliberate goal of being maximally offensive])".

Yeah, I like that more. I still worry that "low status" is vague and different people conceive of it differently, so I have the instinct that it might be good to taboo "status" here. "Conformance to norms" is also super vague; someone would need to have the right norms in mind in order for this to work.

comment by Rob Bensinger (RobbBB) · 2023-02-12T12:54:03.257Z · LW(p) · GW(p)

I also don't want to call minor things like ad hominems "violent"!

(Actually, possibly I'm already watering down "violence" more than is ideal by treating "doxxing" and "coercion" as violent. But in this context I do feel like physical violence, death threats, doxxing, and coercion are in a cluster together, whatever you want to call it, and things like mockery are in a different cluster.)

Replies from: cubefox
comment by cubefox · 2023-02-12T14:04:58.680Z · LW(p) · GW(p)

It seems to me that forms of mockery, bullying, social ostracization, etc. are actually in the same cluster. They all attack the opponent with something other than an argument, be it physical or not. If bullying doesn't count as violence, then the problem seems to be with labeling the cluster "violence". Maybe rule 2 shouldn't be called "non-violence", but "non-aggressiveness" or something like that.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2023-02-12T14:36:20.852Z · LW(p) · GW(p)

They all attack the opponent with something other than an argument, be it physical or not.

And what, precisely, is an "attack"? Can you taboo that word and give a pretty precise definition, so we know what does and doesn't count?

I've seen people on the Internet use words like "bullying", "harassment", "violence", "abuse", etc. to refer to stuff like 'disagreeing with my political opinions'.

(The logic being, e.g.: "Anti-Semites have historically killed people like me. I claim that political opinion X (e.g., about the Israeli-Palestinian conflict) is anti-Semitic. Therefore you expressing your opinion is (1) a thing I should reasonably take as a veiled threat against me and an attempt to bully and harass me, and (2) a thing that will embolden anti-Semites and thereby further endanger me.")

I'm not saying that this reasoning makes sense, or that we should totally avoid words like "bullying" because they get overused in a lot of places. But I do take stuff like this as a warning sign about what can happen if you start building your social norms around vague concepts.

I'd rather have norms that either mention extremely specific concrete things that aren't up for interpretation (see how much more concrete "death threats" is than "bullying"), or that mention higher-level features shared by lots of different bad behavior (e.g., "avoid symmetric weapons").

Replies from: cubefox
comment by cubefox · 2023-02-12T15:07:37.468Z · LW(p) · GW(p)

And what, precisely, is an "attack"? Can you taboo that word and give a pretty precise definition, so we know what does and doesn't count?

How about "hurting a person or deminishing their credibility, or the credibility of their argument, without using a rational argument"? This would make it acceptable when people get hurt by rational arguments, or when their credibility is diminished by such an argument. The problem seems to be when this is achieved by something else than a rational argument.

Maybe this is not the perfect definition of the cluster which includes both physical violence and non-physical aggression, but the pure "physical violence" cluster seems in any case arbitrary. E.g. social ostracization can be far more damaging than a punch in the guts, and both are bad as a response to an argument insofar as they are not themselves forms of argument.

I've seen people on the Internet use words like "bullying", "harassment", "violence", "abuse", etc. to refer to stuff like 'disagreeing with my political opinions'.

Yes, people do that, but them confusing disagreement with bullying doesn't mean disagreement is bullying. And the fact that disagreement is okay doesn't mean that bullying, mockery, etc. is a valid discourse strategy.

Moreover, the speaker can identify actions like mockery by introspection, so avoiding it doesn't rely on the capabilities of the listener to distinguish it from disagreement. The vagueness objection seems to assume the perspective of the listener, but rule 2 applies to us in our role as speakers. It recommends what we should say or do, not how we should interpret others. (Of course, there could be an additional rule which says that we, as listeners, shouldn't be quick to dismiss mere disagreements as personal attacks.)

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2023-02-12T23:54:06.063Z · LW(p) · GW(p)

How about "hurting a person or deminishing their credibility, or the credibility of their argument, without using a rational argument"?

"Hurting a person" still seems too vague to me (sometimes people are "hurt" just because you disagreed with them on a claim of fact), "Diminishing... the credibility of their argument, without using a rational argument" sounds similar to "using symmetric weapons" to me (but the latter strikes me as more precise and general: don't try to persuade people via tools that aren't Bayesian evidence for the truth of the thing you're trying to persuade them of).

"A rational argument", I worry, is too vague here, and will make people think that all rationalist conversation as to look like their mental picture of Spock-style discourse.

The problem seems to be when this is achieved by something other than a rational argument.

A lot of things can hurt people's feelings other than rational arguments, and I don't think the person causing the hurt is always at fault for those things. (E.g., maybe I beat someone at a video game and this upset them.)

but the pure "physical violence" cluster seems in any case arbitrary. E.g. social ostracization can be far more damaging than a punch in the guts, and both are bad as a response to an argument insofar they are not themselves forms of argument.

The point of separating out physical violence isn't to say "this is the worst thing you can do to someone". It's to draw a clear black line around a case that's especially easy to rule completely out of bounds. We've made at least some progress thereby, and it would be a mistake to throw out this progress just because it doesn't solve every other problem; don't let the perfect be the enemy of the good.

Other sorts of actions can be worse than some forms of physical violence consequentially, but there isn't a good sharp black line in every case for clearly verbally transmitting what those out-of-bound actions are. See also my reply [LW(p) · GW(p)] to DragonGod.

Even "this is at least as harmful as a punch in the gut" isn't a good pointer, since some people are extremely emotionally brittle and can be put in severe pain with very minor social slights. I think it's virtuous to try to help those people flourish, but I don't want to claim that a rationalist has done a Terrible Thing if they ever do something that makes someone that upset; it depends on the situation.

I feel specifically uncomfortable with leaning on the phrase "social ostracization" here, because it's so vague, and the way you're talking about it makes it sound like you want rationalists to be individually responsible for making every human on Earth feel happy, welcome, and accepted in the rat community. "Ostracization" seems clearly bad to me if it looks like bullying and harassment, but sometimes "ostracizing" just means banning someone from an Internet forum, and I think banning is often prosocial.

(Including banning someone because of an argument! If someone keeps posting off-topic arguments, feel free to ban.)

Replies from: cubefox
comment by cubefox · 2023-02-13T01:10:35.578Z · LW(p) · GW(p)

"Hurting a person" still seems too vague to me (sometimes people are "hurt" just because you disagreed with them on a claim of fact),

Even "this is at least as harmful as a punch in the gut" isn't a good pointer, since some people are extremely emotionally brittle and can be put in severe pain with very minor social slights. I think it's virtuous to try to help those people flourish, but I don't want to claim that a rationalist has done a Terrible Thing if they ever do something that makes someone that upset; it depends on the situation.

As I said, if someone feels upset by mere disagreement, that's not a violation of a rational discourse norm.

The focus on physical violence is nice insofar as violence is halfway clear-cut, but is also fairly useless insofar as the badness of violence is obvious to most people (unlike things like bullying, bad-faith mockery, moral grandstanding, etc., which are very common), and mostly irrelevant in internet discussions without physical contact, where most irrational discourse is happening nowadays, very nonviolently.

I feel specifically uncomfortable with leaning on the phrase "social ostracization" here, because it's so vague, and the way you're talking about it makes it sound like you want rationalists to be individually responsible for making every human on Earth feel happy, welcome, and accepted in the rat community.

That seems to me an uncharitable interpretation. Social ostracization is prototypically something which happens e.g. when someone gets cancelled by a Twitter mob. "Mob" insofar as those people don't use rational arguments to attack you, even if "attacking you without using arguments" can't be defined perfectly precisely. (Something like the Bostrom witch-hunt on Twitter, which included outright defamation, but hardly any arguments.)

If you consistently shunned vagueness, then you couldn't even discourage violence, because the difference between violence and non-violence is gradual; it likewise admits of borderline cases. But since violence is bad despite borderline cases, the borderline cases and exceptions you cited also don't seem very serious. You never get perfectly precise definitions. And you have to embrace some more vagueness than in the case of violence, unless you want to refer only to a tiny subset of irrational discourse.

By the way, I would say banning/blocking is irrational when it is done in response to disagreement (often people on Twitter ban other people who merely disagree with them) and acceptable when the offending content is off-topic or pure harassment. Sometimes there are borderline cases which lie in between; those are grey areas where blocking may be neither clearly bad nor clearly acceptable, but such grey areas are in no way counterexamples to the clear-cut cases.

comment by David Hornbein · 2023-02-13T02:54:42.554Z · LW(p) · GW(p)

I would expand "acts to make the argument low status" to "acts to make the argument low status without addressing the argument". Lots of good rationalist material, including the original Sequences, includes a fair amount of "acts to make arguments low status". This is fine—good, even—because it treats the arguments it targets in good faith and has a message that rhymes with "this argument is embarrassing because it is clearly wrong, as I have shown in section 2 above" rather than "this argument is embarrassing because gross stupid creeps believe it".

Many arguments are actually very bad. It's reasonable and fair to have a lower opinion of people who hold them, and to convey that opinion to others along with the justification. As you say, "you shouldn't engage by not addressing the arguments and instead trying to execute some social maneuver to discredit it". Discrediting arguments by social maneuvers that rely on actual engagement with the argument's contents is compatible with this.

Replies from: malcolmocean
comment by MalcolmOcean (malcolmocean) · 2024-07-18T16:40:42.438Z · LW(p) · GW(p)

I've actually come to the impression that the extensive use of contempt in the Sequences is one of the worst aspects of the whole piece of writing, because it encourages people to disown their own actual experience where it's (near) the target of such contempt, and to adopt a contemptuous stance when faced with perspectives they in fact don't get.

Contempt usually doesn't help people change their minds, and when it does it does so via undermining people's internal epistemic processes with social manipulation.  If the argument in "section 2 above" turns out to have flaws or mistaken assumptions, then an attitude of contempt (particularly from a position of high status) about how it's embarrassing to not understand that will not help people understand it better.  It might get them to spend more time with the argument in order to de-embarrass themselves, but it won't encourage them to take the argument on its merits.  If the argument is good and addresses relevant concerns people have (factual and political), you'll be able to tell, because it will work!  Shaming people for not getting it is at best a distraction, and at worst an attack on people's sensemaking.  And generally a symmetric weapon.

Meanwhile, contempt as a stance tends to block curiosity and the ability to notice confusion in whoever holds it. Even if some argument is clearly wrong, it somehow actually made sense to the person arguing it—at least as a thing to say, if not a way to actually view the world.  What sense did it make?  Why did they say this bizarre thing and not that bizarre thing?  Just because energy-healing obviously doesn't work via [violating this particular law of physics], that doesn't mean it can't work via some other mechanism—after all, the body heals itself non-magically under many ordinary circumstances! And if interventions can make it harder for that to work, then they can probably make it easier.  So how might it work?  And what incentivized the energy healer to make up a bad model in the first place?

Contempt may be common among rationalists but from my perspective the main reason Rob didn't include it is probably because it's not actually very functional for good discourse.

comment by the gears to ascension (lahwran) · 2023-02-12T09:35:24.865Z · LW(p) · GW(p)

love this post. meta-note: it would be really great to have visited link highlighting on by default on lesswrong, to make posts with very heavy referencing like this easier to navigate.

Replies from: Raemon
comment by Raemon · 2023-02-12T20:38:40.560Z · LW(p) · GW(p)

FYI we have a PR up for it (I made it a week ago when you last requested it), but it's not merged in yet.

comment by SomeoneYouOnceKnew · 2023-02-13T01:16:47.300Z · LW(p) · GW(p)

Feel free to delete this if it feels off-topic, but on a meta note about discussion norms, I was struck by that meme about C code: basically, the premise that code quality is higher when there is swearing.

I was also reading discussions on the Linux mailing lists: the discussions there are clear, concise, and frank. And occasionally, people still use scathing terminology and feedback.

I wonder if people would be interested in setting up a few discussion posts where specific norms get called out to "participate in good faith but try to break these specific norms".

And people play a mix-and-match to see which ones are most fun, engaging and interesting for participants. This would probably end in disaster if we started tossing slurs willy-nilly, but sometimes while reading posts, I think people could cut down on the verbiage by 90% and keep the meaning.

comment by Rafael Harth (sil-ver) · 2023-02-12T11:43:26.002Z · LW(p) · GW(p)

Strong upvoted; these also feel closer to the "core" virtues to me, even though there's nothing wrong with Duncan's post.

comment by romeostevensit · 2023-02-12T17:19:37.416Z · LW(p) · GW(p)

Flag your inferences as inferences

Cultivating what Korzybski dubbed Consciousness of Abstraction (i.e., not unconsciously abstracting) improves things a lot, e.g. noticing what metaphors are being deployed as part of an argument about the generalizability of your experience. To develop this, it was useful to first do the easier task of noticing when and how others are abstracting.

comment by Rob Bensinger (RobbBB) · 2023-02-14T04:00:39.162Z · LW(p) · GW(p)

I've rewritten this post for the EA Forum [EA · GW], to help introduce more EAs to rationalist culture and norms. The rewrite goes into more detail about a lot of the points, explaining jargon, motivating some of the less intuitive norms, etc. I expect some folks will prefer that version, and some will prefer the LW version.

(One shortcoming of the EA Forum version is that it's less concise. Another shortcoming is that there's more chance I got stuff wrong, since I erred on the side of "spell things out more in the hope of conveying more of the spirit to people who are new to this stuff", rather than "leave more implicit so that the things I say out loud can all be things I feel really confident about".)

comment by Vladimir_Nesov · 2023-02-12T15:16:29.035Z · LW(p) · GW(p)

(Edit: Already fixed, no longer relevant.)

Try not to “win” arguments using asymmetric weapons (tools that work similarly well whether you're right or wrong).

Should be "symmetric". From Scott's post:

Logical debate has one advantage over narrative, rhetoric, and violence: it’s an asymmetric weapon. That is, it’s a weapon which is stronger in the hands of the good guys than in the hands of the bad guys.

You've also repeated the incorrect usage in two [LW(p) · GW(p)] comments [LW(p) · GW(p)] to this post.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2023-02-12T22:20:19.646Z · LW(p) · GW(p)

Thanks, fixed!

comment by cubefox · 2023-02-12T14:33:46.272Z · LW(p) · GW(p)

There is an interesting variation of rule 1 (truth-seeking). According to common understanding, this rule seems to imply that if we argue for X, we should only do so if we believe X to more than 50%. Similarly rule 3. But recently a number of philosophers have argued that (at least in academic context) you can actually argue for interesting hypotheses without believing in them. This is sometimes called "championing", and described as a form of epistemic group-rationality, which says that sometimes individually irrational arguments can be group-rational.

The idea is that truth seeking is viewed as a competitive-collaborative process, which has benefits when people specialize in certain outsider theories and champion them. In some contexts it is fairly likely that some outsider theory is true, even though each individual outsider theory has a much lower probability than the competing mainstream theory. If everyone argued for the most likely (mainstream) theory, there would be too little "intellectual division of labor"; hardly anyone would bother arguing for individually unlikely theories.

(This recent essay [LW · GW] might be interpreted as an argument for championing.)

It might be objected that the championers should be honest and report that they find the interesting theory they champion ultimately unlikely to be true. But this could have bad effects for the truth-seeking process of the group: Why should anyone feel challenged by someone advocating a provocative hypothesis when the advocates themselves don't believe it? The hypothesis would lose much of its provocativeness, and the challenged people wouldn't really feel challenged. It wouldn't encourage fruitful debate.

(This can also be viewed as a solution to the disagreement paradox: Why could it ever be rational to disagree with our epistemic peers? Shouldn't we average our opinions? Answer: Averaging might be individually rational, but not group-rational.)

Replies from: Vladimir_Nesov, sinclair-chen, Dweomite
comment by Vladimir_Nesov · 2023-02-12T15:04:33.437Z · LW(p) · GW(p)

this rule seems to imply that if we argue for X, we should only do so if we believe X to more than 50%

Being an "argument for" is anti-inductive, an argument stops working in either direction once it's understood. You believe what you believe, at a level of credence you happen to have. You can make arguments. Others can change either understanding or belief in response to that. These things don't need to be related. And there is nothing special about 50%.

Replies from: cubefox
comment by cubefox · 2023-02-12T15:16:05.780Z · LW(p) · GW(p)

I don't get what you mean. Assuming you argue for X, but you don't believe X, it would seem something is wrong, at least from the individual rationality perspective. For example, you argue that it is raining outside without you believing that it is raining outside. This could e.g. be classified as lying (deception) or bullshitting (you don't care about the truth).

Replies from: Vladimir_Nesov, RobbBB
comment by Vladimir_Nesov · 2023-02-12T15:24:58.323Z · LW(p) · GW(p)

Assuming you argue for X

What does "arguing for" mean? There's expectation that a recipient changes their mind in some direction. This expectation goes away for a given argument, once it's been considered, whether it had that effect or not. Repeating the argument won't present an expectation of changing the mind of a person who already knows it, in either direction, so the argument is no longer an "argument for". This is what I mean by anti-inductive [LW · GW].

Assuming you argue for X, but you don't believe X

Suppose you don't believe X, but someone doesn't understand an aspect of X, such that you expect understanding it to increase their belief in X. Is this an "argument for" X? Should it be withheld, keeping the other's understanding avoidably lacking?

Replies from: cubefox
comment by cubefox · 2023-02-12T15:57:59.152Z · LW(p) · GW(p)

What does "arguing for" mean? There's expectation that a recipient changes their mind in some direction. This expectation goes away for a given argument, once it's been considered, whether it had that effect or not.

Here is a proposal: A argues with Y for X iff A (1) claims that Y, and (2) claims that Y is evidence for X, in the sense that P(X|Y) > P(X|¬Y). The latter can be considered true even if you already believe in Y.
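(A small numerical sketch of this proposal, added editorially: the rain/wet-streets scenario, the joint probabilities, and the helper functions below are all hypothetical, chosen only to illustrate that the condition P(X|Y) > P(X|¬Y) lines up with the positive-dependence condition P(X|Y) > P(X) that comes up later in the thread.)

```python
# Editorial illustration with a hypothetical joint distribution over
# X ("it is raining") and Y ("the streets are wet").

# Hypothetical joint probabilities P(X, Y); they sum to 1.
joint = {
    (True, True): 0.28,    # raining, streets wet
    (True, False): 0.02,   # raining, streets dry
    (False, True): 0.10,   # not raining, streets wet
    (False, False): 0.60,  # not raining, streets dry
}

def p_x() -> float:
    """Marginal P(X)."""
    return sum(prob for (x, _), prob in joint.items() if x)

def p_x_given(y_value: bool) -> float:
    """Conditional P(X | Y = y_value)."""
    p_y = sum(prob for (_, y), prob in joint.items() if y == y_value)
    p_xy = sum(prob for (x, y), prob in joint.items() if x and y == y_value)
    return p_xy / p_y

print(f"P(X)   = {p_x():.3f}")            # 0.300
print(f"P(X|Y) = {p_x_given(True):.3f}")  # ~0.737
print(f"P(X|~Y) = {p_x_given(False):.3f}")  # ~0.032

# The two comparisons agree here, and they agree for any joint distribution
# with 0 < P(Y) < 1, since P(X) is a weighted average of P(X|Y) and P(X|~Y).
print(p_x_given(True) > p_x_given(False))  # True: Y favors X over not-Y
print(p_x_given(True) > p_x())             # True: Y raises the probability of X
```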

Suppose you don't believe X, but someone doesn't understand an aspect of X, such that you expect its understanding to increase their belief in X. Is this an "argument for" X? Should it be withheld, keeping the other's understanding avoidably lacking?

I agree, that's a good argument.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-02-13T15:13:59.267Z · LW(p) · GW(p)

The best arguments confer no evidence; they guide you in putting together the pieces you already hold.

Replies from: cubefox
comment by cubefox · 2023-02-13T16:48:36.334Z · LW(p) · GW(p)

Yeah, aka Socratic dialogue.

Alice: I don't believe X.

Bob: Don't you believe Y? And don't you believe If Y then X?

Alice: Okay I guess I do believe X.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-02-13T17:25:02.621Z · LW(p) · GW(p)

The point is, conditional probability doesn't capture the effect of arguments.

Replies from: cubefox
comment by cubefox · 2023-02-13T22:22:51.415Z · LW(p) · GW(p)

It seems that arguments provide evidence, and Y is evidence for X if and only if P(X|Y) > P(X). That is, when X and Y are positively probabilistically dependent. If I think that they are positively dependent, and you think that they are not, then this won't convince you of course.
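(An editorial aside: a short derivation, assuming 0 < P(Y) < 1 so that both conditionals are defined, of why this condition agrees with the P(X|Y) > P(X|¬Y) formulation proposed earlier in the thread.)

```latex
% Equivalence of the two "evidence" conditions used in this thread,
% assuming 0 < P(Y) < 1.
\begin{align*}
P(X) &= P(X \mid Y)\,P(Y) + P(X \mid \lnot Y)\,\bigl(1 - P(Y)\bigr)
  && \text{(law of total probability)} \\
P(X \mid Y) > P(X)
  &\iff P(X \mid Y)\,\bigl(1 - P(Y)\bigr) > P(X \mid \lnot Y)\,\bigl(1 - P(Y)\bigr) \\
  &\iff P(X \mid Y) > P(X \mid \lnot Y) \\
P(X \mid Y) > P(X)
  &\iff P(X \wedge Y) > P(X)\,P(Y)
  && \text{(multiply both sides by } P(Y)\text{)}
\end{align*}
```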

comment by Rob Bensinger (RobbBB) · 2023-02-12T22:31:32.577Z · LW(p) · GW(p)

Assuming you argue for X, but you don't believe X, it would seem something is wrong, at least from the individual rationality perspective.

Belief is a matter of degree. If someone else thinks it's 10% likely to be raining, and you believe it's 40% likely to be raining, then we could summarize that as "both of you think it's not raining". And if you share some of your evidence and reasoning for thinking the probability is more like 40% than 10%, then we could maybe say that this isn't really arguing for the proposition "it's raining", but rather the proposition "rain is likelier than you think" or "rain is 40% likely" or whatever.

But in both cases there's something a bit odd about phrasing things this way, something that cuts a bit skew to reality. In reality there's nothing special about the 50% point, and belief isn't a binary. So I think part of the objection here is: maybe what you're saying about belief and argument is technically true, but it's weird to think and speak that way because in fact the cognitive act of assigning 40% probability to something is very similar to the act of assigning 60% probability to something, and the act of citing evidence for rain when you have the former belief is often just completely identical to the act of citing evidence for rain when you have the latter belief.

Replies from: cubefox
comment by cubefox · 2023-02-12T23:35:10.021Z · LW(p) · GW(p)

The issue for discourse is that beliefs do come in degrees, but when expressing them they lose this feature. Declarative statements are mostly discrete. (Saying "It's raining outside" doesn't communicate how strongly you believe it, beyond that it's more than 50% -- but again, the fan of championing will deny even that in certain discourse contexts.)

Talking explicitly about probabilities is a workaround, a hack where we still make binary statements, just about probabilities. But talking about probabilities is kind of unnatural, and people (even rationalists) rarely do it. Notice how both of us made a lot of declarative statements without indicating our degrees of belief in them. The best we can do, without using explicit probabilities, is to use qualifiers like "I believe that", "It might be that", "It seems that", "Probably", "Possibly", "Definitely", "I'm pretty sure that" etc. See https://raw.githubusercontent.com/zonination/perceptions/master/joy1.png

comment by Sinclair Chen (sinclair-chen) · 2023-04-19T09:52:08.488Z · LW(p) · GW(p)

Examples of truth-seeking making you "give reasons for X" even though you don't "believe X":
- Everyone believes X is 2% but you think X is 15% because of reasons they aren't considering, and you tell them why
- Everyone believes X is 2%. You do science and tell everyone all your findings, some of which support X (and some of which don't support X).

We should bet on moonshots (low chance, high EV). This is what venture capitalists and startup founders do. I imagine this is what some artists, philosophers, comedians, and hipsters do as well, and I think it is truth-tending on net.
But I hate the norm that champions should lie. Instead, champions should only say untrue things if everyone knows the norms around that. Like lawyers in court or comedians on stage, or all fiction. 

Replies from: cubefox
comment by cubefox · 2023-04-19T10:57:29.930Z · LW(p) · GW(p)

Yeah, championing seems to border on deception, bullshitting, or even lying. But the group rationality argument says that it can be optimal when a few members of a group "over focus" (from an individual perspective) on an issue. These pull in different directions.

comment by Dweomite · 2023-04-18T23:04:07.395Z · LW(p) · GW(p)

I think people can create an effectively unlimited number of "outsider theories" if they aren't concerned with how likely they are.  Do you think ALL of those should get their own champions?  If not, what criteria do you propose for which ones get champions and which don't?

Maybe it would be better to use a frame of "which arguments should we make?" rather than "which hypotheses should we argue for?"  Can we just say that you should only make arguments that you think are true, without talking about which camp those arguments favor?

(Though I don't want to ban discussions following the pattern "I can't spot a flaw in this argument, but I predict that someone else can, can anyone help me out?"  I guess I think you should be able to describe arguments you don't believe if you do it in quotation marks.)

comment by Ben Pace (Benito) · 2024-12-05T21:07:42.174Z · LW(p) · GW(p)

It's a fine post, but I don't love this set of recommendations and justifications, and I feel like rationalist norms & advice should be held to a high standard, so I'm not upvoting it in the review. I'll give some quick pointers to why I don't love it.

  1. Truth-Seeking: Seems too obvious to be useful advice. Also, I disagree with the subpoint about never treating arguments like soldiers; two people inhabiting opposing debate-partner roles is sort of captured by this, and I think that can be a healthy truth-seeking process.
  2. Non-Violence: All the examples of things you're not supposed to do in response to an argument are things you're not supposed to do anyway. Also it seems too much like it's implying the only response to an argument is a counter-argument. Sometimes the correct response to a bad argument is to fire someone or attempt to politically disempower them. As an example, Zvi Mowshowitz presents evidence and argument in Repeal the Jones Act of 1920 [LW · GW] that there are a lot of terrible and disingenuous arguments being put forward by unions that are causing the destruction of the US shipping industry. The generator of these arguments seems reliably non-truth-tracking, and I would approve of someone repealing the Jones Act without persuading such folks or spending the time to refute each and every argument.
  3. Non-Deception: I'll quote the full description here:
    1. "Never try to steer your conversation partners (or onlookers) toward having falser models. Where possible, avoid saying stuff that you expect to lower the net belief accuracy of the average reader; or failing that, at least flag that you're worried about this happening."
    2. I think that the space of models one walks through is selected for both accuracy and usefulness. Not all models are equally useful. I might steer someone from a perfectly true but vacuous model, to a less perfect but more practical model, thereby net reducing the accuracy of a person's statements and beliefs (most of the time). I prefer something more like a standard of "Intent to Inform".

Various other ones are better, some are vague, many things are presented without justification and I suspect I might disagree if it was offered. I think Zack M. Davis's critique [LW · GW] of 'goodwill' is good.

comment by MondSemmel · 2023-10-02T17:13:33.474Z · LW(p) · GW(p)

I wish Brevity was considered another important rationalist virtue. Unfortunately, it isn't practiced as such, including by me.

In programming, "lines of code" is a cost, not an accomplishment. It's a proxy for something we care about (like the functionality and robustness of a program), but all other things being equal, we'd prefer the number to be as small as possible. Similarly, the number of words in our posts and comments is a cost for the things we actually care about (e.g. legible communication), and all other things being equal, we'd prefer this number to be as small as possible.

These are also related costs: complexity, jargon, parenthetical asides (like the ones in this comment), clarifications, footnotes, ...

That doesn't mean that the costs are never worth paying. Just that they shouldn't be paid mindlessly, and that brevity is too often subordinated to other virtues and goals.

Finally, there are some ways to add and edit text whose benefits imo usually outweigh their costs: like adding outlines and headings, or using formatting.

Replies from: Raemon, dorkichiban
comment by Raemon · 2023-10-02T17:39:14.154Z · LW(p) · GW(p)

Yeah I think brevity straightforwardly should be considered one, at least on the margin.

Replies from: MondSemmel
comment by MondSemmel · 2023-10-02T18:05:33.016Z · LW(p) · GW(p)

After thinking more about it, declaring something like brevity as a virtue might be outright required, because the other virtues and elements of discourse don't directly trade off against one another. So a perfectionist might try to optimize by fulfilling all of them, at the cost of writing absurdly long and hard-to-parse text. Hence there's value in naming some virtue that's opposed to the others, as a counterbalance, to make the tradeoffs explicit.

comment by Kylie (dorkichiban) · 2023-10-05T05:51:39.165Z · LW(p) · GW(p)

I think considering brevity, for its own sake, to be an important rationalist virtue is unlikely to prove beneficial for maintaining, or raising, the quality of rationalist discourse. That's because it is a poorly defined goal that could easily be misinterpreted as encouraging undesirable tradeoffs at the expense of, for example, clarity of communication, laying out of examples to aid in understanding of a point, or making explicit potentially dry details such as the epistemic status of a belief, or the cruxes upon which a position hinges. 

There is truth to the points you've brought up though, and thinking about how brevity could be incorporated into a list of rationalist virtues has brought two ideas to mind:

1. It seems to me that this could be considered an aspect of purpose-minding. If you know your purpose, and keep clearly in mind why you're having a conversation, then an appropriate level of brevity should be the natural result. The costs of brevity, or lack thereof, can be paid as needed according to what best fits your purpose. A good example of this is this post here on lesswrong, and the longer, but less jargony, version of it that exists on the EA forum.

2. The idea of epistemic legibility [LW · GW] feels like it includes the importance of brevity while also making the tradeoffs that brevity, or lack thereof, involves more explicit than directly stating brevity as a rationalist virtue. For example a shorter piece of writing that cites fewer sources is more likely to be read in full rather than skimmed, and more likely to have its sources checked rather than having readers simply hope that they provide the support that the author claims. This is in contrast to a longer piece of writing that cites more sources which allows an author to more thoroughly explain their position, or demonstrate greater support for claims that they make. No matter how long or short a piece of writing is, there are always benefits and costs to be considered.

While writing this out I noticed that there was a specific point you made that did not sit well with me, and which both of the ideas above address.

Similarly, the number of words in our posts and comments is a cost for the things we actually care about (e.g. legible communication), and all other things being equal, we'd prefer this number to be as small as possible.

To me this feels like focusing on the theoretical ideal of brevity at the expense of the practical reality of brevity. All other things are never equal, and I believe the preference should be for having precisely as many words as necessary, for whatever specific purpose and context a piece of writing is intended for.

I realize that "we'd prefer this number to be as small as possible" could be interpreted as equivalent to "the preference should be for having precisely as many words as necessary", but the difference in implications  between these phrases, and the difference in their potential for unfortunate interpretations, does not seem at all trivial to me.

As an example, something that I've seen discussed both on here, and on the EA forum, is the struggle to get new writers to participate in posting and commenting. This is a struggle that I feel very keenly as I started reading lesswrong many years ago, but have (to my own great misfortune) avoided posting and commenting for various reasons. If I think about a hypothetical new poster who wants to embody the ideals and virtues of rationalist discourse, asking them to have their writing use as small a number of words as possible feels like a relatively intimidating request when compared to asking that they consider the purpose and context of their writing and try to find an appropriate length with that in mind. The latter framing also feels much more conducive to experimenting, failing, and learning to do better.

Replies from: MondSemmel
comment by MondSemmel · 2023-10-06T20:03:24.852Z · LW(p) · GW(p)

To be clear, I didn't mean that all LW posts and comments should be maximally short, merely that it would be better if brevity or a related virtue (like "ease of being read") were considered as part of an equation to balance. Because I currently feel like we're erring towards writing stuff that's far longer [LW(p) · GW(p)] than would be warranted if there was some virtue which could counterbalance spending extra paragraphs on buying diminishing returns in virtues like legibility (where e.g. the first footnote is often very valuable, but the fifth is less so).

If I think about a hypothetical new poster who wants to embody the ideals and virtues of rationalist discourse, asking them to have their writing use as small a number of words as possible feels like a relatively intimidating request when compared to asking that they consider the purpose and context of their writing and try to find an appropriate length with that in mind. The latter framing also feels much more conducive to experimenting, failing, and learning to do better.

I actually think that, if the community considered and practiced brevity as one of our virtues, the site would be more welcoming to new posters, not less. The notion of writing my first comment on this site in 2023, rather than 2013, feels daunting to me. Right now I imagine it feels like you have to dot all your i's and cross all your t's before you can get started, whereas I'm pretty sure the standards for new commenters were far lower in the beginning of the site.

And one thing I find particularly daunting, and would imo find even more daunting as a newcomer, is that it feels like the median post and comment are incredibly long. And that, in order to fit in, one also has to go to such great lengths in everything one writes.

comment by metacoolus · 2023-06-01T23:49:57.586Z · LW(p) · GW(p)

I appreciate the clarity and thoroughness of this post. It's a useful distillation of communication norms that nurture understanding and truth-seeking. As someone who has sometimes struggled with getting my point across effectively, these guidelines serve as a solid reminder to stay grounded in both purpose and goodwill. It's reassuring to know that there's an ever-evolving community working towards refining the art of conversation for collective growth.

comment by Ruby · 2023-04-18T00:37:03.931Z · LW(p) · GW(p)

Curate. I want to curate this post in the same manner that Raemon curated [LW(p) · GW(p)] Basics of Rationalist Discourse [LW · GW]. This post contains a list of non-universal/non-standard ways for people to communicate that do allow for better truthseeking. These are two recent posts on the topic, but I'd be keen to see more exploration of how we communicate better, and how we can quickly get many more new people to pick up these ideals and methods.

comment by roland · 2023-02-13T11:50:23.029Z · LW(p) · GW(p)

Valence-Owning

Could you please give a definition of the word valence? The definition I found doesn't make sense to me: https://en.wiktionary.org/wiki/valence

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2023-02-13T12:58:11.445Z · LW(p) · GW(p)

Basically: whether something is good or bad, enjoyable or unpleasant, desirable or undesirable, interesting or boring, etc. It's the aspect of experience that evaluates some things as better or worse to varying degrees and in various respects.

comment by Review Bot · 2024-02-25T12:41:48.779Z · LW(p) · GW(p)

The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.

Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?

comment by RomanS · 2023-04-25T09:33:23.456Z · LW(p) · GW(p)

Reality-Minding. Keep your eye on the ball, hug [LW · GW] the query, and don’t lose sight of object-level [LW · GW] reality [LW · GW].

 

A relevant post: Consume fiction wisely [LW · GW]. 

TLDR: Fiction is often harmful for your mind, and it is often made to manipulate you.

The post got surprisingly controversial. It seems that even in this community many people are disturbed by the idea that watching Hollywood movies or reading fantasy is harmful for cognition.

comment by Jasnah Kholin (Jasnah_Kholin) · 2023-02-12T14:46:05.465Z · LW(p) · GW(p)

i maybe should (and probably will not) write my own post about Goodwill. instead i will say in a comment what Goodwill is about, by my definition.

Goodwill, the way i see it, on the emotional level, is basically respect and cooperation. when someone makes an argument, do you try to see what area in ConceptSpace they are trying to gesture at, and then ask clarifying questions to understand, or do you round it up to the nearest stupid position and not even see the actual argument being made? when they say something incoherent, do you even try to parse it, instead of just proving it wrong?

the standard definition of Goodwill does not include the ways in which a failure of Goodwill is a failure of rationality: a failure to see what someone is trying to say, to understand their position and their framing.

civility is good for its own sake. but almost everyone who decides to be uncivil ends up strawmanning their opponents, and ends up with a more wrong map of the world. what may look like forgiveness from outside should, for a rationalist, look from inside like remembering that we expect short inferential distances [LW · GW], that politics wrecks your ability to do math, and that your beliefs filter your perceptions depending on your side in the argument.


i gained my understanding of those phenomena mostly from the Rational Blogosphere, and saw it as part of rationality. there is an important difference between a person executing the algorithm "being civil and forgiving" and a person executing the algorithm "remember about biases and inferential distances, and try to overcome them": the latter implemented by understanding the importance of cooperating even after perceived defection in a noisy environment, in the prisoner's dilemma, and by assuming that communication is hard and miscommunications are frequent, etc.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2023-02-12T14:59:33.530Z · LW(p) · GW(p)

I think that's right, but in my list I'm trying to factor out non-strawmanning as "alternative-minding", and civility under "goodwill".

I think there are anti-strawmanning benefits to being friendly, but I'm wary of trying to cash out everything that's a good idea as "oh yeah, this is good because it helps individuals see the truth better", when that's not actually true for every good idea.

In this case, I think there are two things worth keeping distinct: the goal of understanding others' views in a discussion, and the goal of making discussion happen at all. Civility helps keep social environments fun and chill enough that people stick around, are interested in engaging, and don't go into the conversation feeling triggered or defensive. That's worth protecting, IMO, even if there's no risk that yelling at people (or whatever) will directly cause you to straw-man them.

Replies from: Jasnah_Kholin
comment by Jasnah Kholin (Jasnah_Kholin) · 2023-02-15T07:53:03.500Z · LW(p) · GW(p)

so i thought about your comment and i understand why we think about this in different ways.

in my model of the world, there is an important concept: Goodwill. there are arrows that point toward it, things that create goodwill: niceness, being on the same side politically, personal relationship, all sorts of things. there are also things that destroy goodwill, or even move it into negative numbers.

there are arrows that come out of this Goodwill node in my causal graph: things like System1 understanding what was actually said, tending to react nicely to things, being able to pass the ITT. some things you can get in other ways - people can be polite to people they hate, especially on the internet. but there are things that i saw only as a result of Goodwill, and correct System1 interpretation is one of them. maybe it's possible otherwise - but i never saw it. and the politeness you get without Goodwill is shallow. people's System1 notices that in body language, and even in writing.

now, you can dial back on needless insulting and condescension. those are adversarial moves that can be chosen consciously or avoided, even if with effort. but from my point of view, when there is so little Goodwill left, the chance for good discussion is already lost. it can only be bad or very bad. avoiding very bad is important! but my aim in such situations is to leave the discussion when the goodwill comes close to zero, and to have a mental alarm screaming at me if i am ever in the negative numbers or feel like the other person has negative numbers of Goodwill toward me.

so, basically, in my model of the world, there is ONE node, Goodwill. in the world, there are no different things. you write: "even if there's no risk that yelling at people (or whatever) will directly cause you to straw-man them." but in my model, such a situation is impossible! yelling at people WILL cause you to strawman them.

in my model of the world, this fact is not public knowledge, and my model regarding that is an important part of what i want to communicate when I'm talking about Goodwill.

thanks for the conversation! it's the clearest way i have ever described my concept of Goodwill, and it was useful for me to formulate that in words.
 
