Moderation notes re: recent Said/Duncan threads

post by Raemon · 2023-04-14T18:06:21.712Z · LW · GW · 560 comments

Update: Ruby and I have posted moderator notices for Duncan and Said in this thread. This was a set of fairly difficult moderation calls on established users and it seems good for the LessWrong userbase to have the opportunity to evaluate it and respond. I'm stickying this post for a day or so.

 

Recently there's been a series of posts and comment back-and-forth between Said Achmiz and Duncan Sabien, which escalated enough that it seemed like site moderators should weigh in.

For context, here's a quick recap of recent relevant events as I'm aware of them. (I'm glossing over many details that are relevant, but getting everything exactly right is tricky.)

  1. Duncan posts Basics of Rationalist Discourse [LW · GW]. Said writes some comments in response. 
  2. Zack posts "Rationalist Discourse" Is Like "Physicist Motors", under which Duncan and Said argue some more, and Duncan eventually says "goodbye", which I assume coincides with banning Said from commenting further on Duncan's posts. 
  3. I publish LW Team is adjusting moderation policy [LW · GW]. Lionhearted suggests "Basics of Rationalist Discourse" as a standard the site should uphold. Paraphrasing here, Said objects to a post being set as the site standard if not all non-banned users can discuss it. More discussion ensues.
  4. Duncan publishes Killing Socrates [LW · GW], a post about a general pattern of LW commenting that alludes to Said but doesn't reference him by name. Commenters other than Duncan do bring up Said by name, and the discussion gets into "is Said net positive/negative for LessWrong?" in a discussion section where Said can't comment.
  5. @gjm [LW · GW] publishes On "aiming for convergence on truth" [LW · GW], which further discusses/argues a principle from Basics of Rationalist Discourse [LW · GW] that Said objected to. Duncan and Said argue further in the comments. I think it's a fair gloss to say "Said makes some comments about what Duncan did, which Duncan says are false enough that he'd describe Said as intentionally lying about them. Said objects to this characterization" (although exactly how to characterize this exchange is maybe a crux of discussion)

LessWrong moderators got together for ~2 hours to discuss this overall situation, and how to think about it both as an object-level dispute and in terms of some high level "how do the culture/rules/moderation of LessWrong work?". 

I think we ended up with fairly similar takes, but getting to the point that we all agree 100% on what happened and what to do next seemed like a longer project, and we each had subtly different frames about the situation. So, some of us (at least Vaniver and I, maybe others) are going to start by posting some top level comments here. People can weigh in on the discussion. I'm not 100% sure what happens after that, but we'll reflect on the discussion and decide whether to take any high-level mod actions.

If you want to weigh in, I encourage you to take your time even if there's a lot of discussion going on. If you notice yourself in a rapid back and forth that feels like it's escalating, take at least a 10 minute break and ask yourself what you're actually trying to accomplish. 

I do note: the moderation team will be making an ultimate call on whether to take any mod actions based on our judgment. (I'll be the primary owner of the decision, although I expect if there's significant disagreement among the mod team we'll talk through it a lot). We'll take into account arguments various people post, but we aren't trying to reflect the wisdom of crowds. 

So you may want to focus on engaging with our cruxes rather than with what other random people in the comments think.


Comments sorted by top scores.

comment by Raemon · 2023-04-17T21:52:56.110Z · LW(p) · GW(p)

Preliminary Verdict (but not "operationalization" of verdict)

tl;dr – @Duncan_Sabien [LW · GW] and @Said Achmiz [LW · GW] each can write up to two more comments on this post discussing what they think of this verdict, but are otherwise on a temporary ban from the site until they have negotiated with the mod team and settled on either:

  • credibly commit to changing their behavior in a fairly significant way,
  • or, accept some kind of tech solution that limits their engagement in some reliable way that doesn't depend on their continued behavior.
  • or, be banned from commenting on other people’s posts (but still allowed to make new top level posts and shortforms)

(After the two comments they can continue to PM the LW team, although we'll have some limit on how much time we're going to spend negotiating)

Some background:

Said and Duncan are both among the most complained-about users since LW 2.0 started (probably both in the top 5, possibly literally the top 2). They also both have many good qualities I'd be sad to see go. 

The LessWrong team has spent hundreds of person hours thinking about how to moderate them over the years, and while I think a lot of that was worthwhile (from a perspective of "we learned new useful things about site governance") there's a limit to how much it's worth moderating or mediating conflict re: two particular users.

So, something pretty significant needs to change.

A thing that sticks out in both the case of Said and Duncan is that they a) are both fairly law-abiding (i.e. when the mods have asked them for concrete things, they adhere to our rules, and clearly support rule-of-law and the general principle of Well Kept Gardens), but b) both have a very strong principled sense of what a “good” LessWrong would look like and are optimizing pretty hard for that within whatever constraints we give them. 

I think our default rules are chosen to be something that someone might trip over accidentally, if you’re mostly trying to be a good stereotypical citizen but occasionally end up having a bad day. Said and Duncan are both trying pretty hard to be good citizens of another country, one that the LessWrong team is consciously not trying to be. It’s hard to build good rules/guidelines that actually robustly deal with that kind of optimization.

I still don’t really know what to do, but I want to flag that the goal I'll be aiming for here is "make it such that Said and Duncan either have actively (credibly) agreed to stop optimizing in a fairly deep way, or, are somehow limited by site tech such that they can't do the cluster of things they want to do that feels damaging to me." 

If neither of those strategies turns out to be tractable, banning is on the table (even though I think both of them contribute a lot in various ways and I'd be pretty sad to resort to that option). I have some hope tech-based solutions can work.

(This is not a claim about which of them is more valuable overall, or better/worse/right-or-wrong-in-this-particular-conflict. There's enough history with both of them being above-a-threshold-of-worrisome that it seems like the LW team should just actually resolve the deep underlying issues, regardless of who's more legitimately aggrieved this particular week)

Re: Said:

One of the most common complaints I've gotten about LessWrong, from both new users as well as established, generally highly regarded users, is "too many nitpicky comments that feel like they're missing the point". I think LessWrong is less fragile than it was in 2018 when I last argued extensively with Said about this, but I think it's still an important/valid complaint.

Said seems to actively prefer a world where the people who are annoyed by him go away, and thinks it’d be fine if this meant LessWrong had radically fewer posts. I think he’s misunderstanding something about how intellectual progress actually works, and about how valuable his comments actually are. (As I said previously, I tend to think Said’s first couple comments are worthwhile. The thing that feels actually bad is getting into a protracted discussion on a particular (albeit fuzzy) cluster of topics.)

We've had extensive conversations with Said about changing his approach here. He seems pretty committed to not changing his approach. So, if he's sticking around, I think we'd need some kind of tech solution. The outcome I want here is that in practice Said doesn't bother people who don't want to be bothered. This could involve solutions somewhat specific-to-Said, or (maybe) be a sitewide rule that works out to stop a broader class of annoying behavior. (I'm skeptical the latter will turn out to work without being net-negative, capturing too many false positives, but seems worth thinking about)

Here are a couple ideas:

  1. Easily-triggered-rate-limiting. I could imagine an admin feature that literally just lets Said comment a few times on a post, but if he gets significantly downvoted, gives him a wordcount-based rate-limit that forces him to wrap up his current points quickly and then call it a day. I expect fine-tuning this to actually work the way I imagine in my head is a fair amount of work but not that much. 
  2. Proactive warning. If a post author has downvoted Said's comments on their post multiple times, they get some kind of UI alert saying "Yo, FYI, admins have flagged this user as having a pattern of commenting that a lot of authors have found net-negative. You may want to take that into account when deciding how much to engage".

There's some cluster of ideas surrounding how authors are informed/encouraged to use the banning options. It sounds like the entire topic of "authors can ban users" is worth revisiting so my first impulse is to avoid investing in it further until we've had some more top-level discussion about the feature.

Why is it worth this effort?

You might ask "Ray, if you think Said is such a problem user, why bother investing this effort instead of just banning him?". Here are some areas I think Said contributes in a way that seem important:

  • Various ops/dev work maintaining sites like readthesequences.com, greaterwrong.com, and gwern.net. (edit: as Ben Pace notes, this is pretty significant, and I agree with his note that [LW(p) · GW(p)] "Said is the person independent of MIRI (including Vaniver) and Lightcone who contributes the most counterfactual bits to the sequences and LW still being alive in the world")
  • Most of his comments are in fact just pretty reasonable and good in a straightforward way.
  • While I don't get much value out of protracted conversations about it, I do think there's something valuable about Said being very resistant to getting swept up in fad ideas. Sometimes the emperor in fact really does have no clothes. Sometimes the emperor has clothes, but you really haven't spelled out your assumptions very well and are confused about how to operationalize your idea. I do think this is pretty important and would prefer Said to somehow "only do the good version of this", but seems fine to accept it as a package-deal.

Re: Duncan

I've spent years trying to hash out "what exactly is the subtle but deep/huge difference between Duncan's moderation preferences and the LW teams." I have found each round of that exchange valuable, but typically it didn't turn out that whatever-we-thought-was-the-crux was a particularly Big Crux.

I think I care about each of the things Duncan is worried about (i.e. such as things listed in Basics of Rationalist Discourse [LW · GW]). But I tend to think the way Duncan goes about trying to enforce such things is extremely costly.

Here's this month/year's stab at it: Duncan cares particularly about strawmans/mischaracterizations/outright lies getting corrected quickly (i.e. within ~24 hours). (See Concentration of Force [LW · GW] for his writeup on at least one set of reasons this matters.) I think there is value in correcting them or telling people to "knock it off" quickly. But, 

a) moderation time is limited
b) even in the world where we massively invest in moderation... the thing Duncan cares most about moderating quickly just doesn't seem like it should necessarily be at the top of the priority queue to me? 

I was surprised by, and updated on, You Don't Exist, Duncan [LW · GW] getting as heavily upvoted as it did, so I think it's plausible that this is all a bigger deal than I currently think it is. (That post goes into one set of reasons that getting mischaracterized hurts.) And there are some other reasons this might be important (that have to do with mischaracterizations taking off and becoming the de-facto accepted narrative). 

I do expect most of our best authors to agree with Duncan that these things matter, and generally want the site to be moderated more heavily somehow. But I haven't actually seen anyone but Duncan argue they should be prioritized nearly as heavily as he wants. (i.e. rather than something you just mostly take-in-stride, downvote and then try to ignore, focusing on other things)

I think most high-contributing users agree the site should be moderated more (see the significant upvotes on LW Team is adjusting moderation policy [LW · GW]), but don't necessarily agree on how. It'd be cruxy for me if more high-contributing-users actively supported the sort of moderation regime Duncan-in-particular seems to want.

I don't know that that really captured the main thing here. I feel less resolved on what should change on LessWrong re: Duncan. But I (and other LW site moderators) want to be clear that while strawmanning is bad and you shouldn’t do it, we don’t expect to intervene on most individual cases. I recommend strong downvoting, and leaving one comment stating that the thing seems false. 

I continue to think it's fine for Duncan to moderate his own posts however he wants (although as noted previously I think an exception should be made for posts that are actively pushing sitewide moderation norms).

Some goals I'd have are:

  • people on LessWrong feel safe that they aren't likely to get into sudden, protracted conflict with Duncan that persists outside his own posts. 
  • the LessWrong team and Duncan are on the same page about the LW team not being willing to allocate dozens of hours of attention at a moment's notice in the specific ways Duncan wants. I don't think it's accurate to say "there's no lifeguard on duty", but I think it's quite accurate to say that the lifeguard on duty isn't planning to prioritize the things Duncan wants, so Duncan should basically participate on LessWrong as if there is, in effect, "no lifeguard" from his perspective. I'm spending ~40 hours this week processing this situation with a goal of basically not having to do that again.
  • In the past Duncan took down all his LW posts when LW seemed to be actively hurting him. I've asked him about this in the past year, and (I think?) he said he was confident that he wouldn't. One thing I'd want going forward is a more public comment that, if he's going to keep posting on LessWrong, he's not going to do that again. (I don't mind him taking down 1-2 problem posts that led to really frustrating commenting experiences for him, but if he were likely to take all the posts down that undercuts much of the value of having him here contributing)

FWIW I do think it's moderately likely that the LW team writes a post taking many concepts from Basics of Rationalist Discourse [LW · GW] and integrating them into our overall moderation policy. (It's maybe doable for Duncan to rewrite the parts that some people object to, and to enable commenting on those posts by everyone, but I think it's kinda reasonable for people to feel uncomfortable with Duncan setting the framing, and it's worth the LW team having a dedicated "our frame on what the site norms are" anyway.)

In general I think Duncan has written a lot of great posts – many of his posts have been highly ranked in the LessWrong review. I expect him to continue to provide a lot of value to the LessWrong ecosystem one way or another.

I'll note that while I have talked to Duncan for dozens(?) of hours trying to hash out various deep issues and not met much success, I haven't really tried negotiating with him specifically about how he relates to LessWrong. I am fairly hopeful we can work something out here. 

Replies from: Duncan_Sabien, Vladimir_Nesov, Davidmanheim, Vaniver, JBlack, Zack_M_Davis, Benito, DanielFilan, Ruby, Screwtape, Viliam, Leviad, SaidAchmiz
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-18T03:23:00.059Z · LW(p) · GW(p)

I generally agree with the above and expect to be fine with most of the specific versions of any of the three bulleted solutions that I can actually imagine being implemented.

I note re:

It'd be cruxy for me if more high-contributing-users actively supported the sort of moderation regime Duncan-in-particular seems to want.

... that (in line with the thesis of my most recent post) I strongly predict that a decent chunk of the high-contributing users who LW has already lost would've been less likely to leave and would be more likely to return with marginal movement in that direction.

I don't know how best to operationalize this, but if anyone on the mod team feels like reaching out to e.g. ~ten past heavy-hitters that LW actively misses, to ask them something like "how would you have felt if we had moved 25% in this direction," I suspect that the trend would be clear. But the LW of today seems to me to be one in which the evaporative cooling has already gone through a couple of rounds, and thus I expect the LW of today to be more "what? No, we're well-adapted to the current environment; we're the ones who've been filtered for."

(If someone on the team does this, and e.g. 5 out of 8 people the LW team misses respond in the other direction, I will in fact take that seriously, and update.)

Replies from: Raemon
comment by Raemon · 2023-04-18T03:48:56.846Z · LW(p) · GW(p)

Nod. I want to clarify, the diff I'm asking about and being skeptical about is "assuming, holding constant, that LessWrong generally tightens moderation standards along many dimensions, but doesn't especially prioritize the cluster of areas around 'strawmanning being considered especially bad' and 'making unfounded statements about a person's inner state'"

i.e. the LessWrong team is gearing up to invest a lot more in moderation one way or another. I expect you to be glad that happened, but still frequently feel in pain on the site and feel a need to take some kind of action regarding it. So, the poll I'd want is something like "given overall more mod investment, are people still especially concerned about the issues I associate with Duncan-in-particular".

I agree some manner of poll in this space would be good, if we could implement it.

Replies from: DaystarEld, Vladimir_Nesov
comment by DaystarEld · 2023-04-18T13:47:16.525Z · LW(p) · GW(p)

FWIW, I don't avoid posting because of worries of criticism or nitpicking at all. I can't recall a moment that's ever happened.

But I do avoid posting once in a while, and avoid commenting, because I don't always have enough confidence that, if things start to move in an unproductive way, there will be any *resolution* to that.

If I'd been on LessWrong a lot 10 years ago, this wouldn't stop me much. I used to be very... well, not happy exactly, but willing, to spend hours fighting the good fight and highlighting all the ways people are being bullies or engaging in bad argument norms or polluting the epistemic commons or using performative Dark Arts and so on.

But moderators of various sites (not LW) have often failed to be able to adjudicate such situations to my satisfaction, and over time I just felt like it wasn't worth the effort in most cases.

From what I've observed, LW mod team is far better than most sites at this. But when I imagine a nearer-to-perfect-world, it does include a lot more "heavy handed" moderation in the form of someone outside of an argument being willing and able to judge and highlight whether someone is failing in some essential way to be a productive conversation partner.

I'm not sure what the best way to do this would be, mechanically, given realistic time and energy constraints. Maybe a special "Flag a moderator" button that has a limited amount of uses per month (increased by account karma?) that calls in a mod to read over the thread and adjudicate? Maybe even that would be too onerous, but *shrugs* There's probably a scale at which it is valuable for most people while still being insufficient for someone like Duncan. Maybe the amount decreases each time you're ruled against.
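The flag-budget mechanics DaystarEld gestures at could be sketched like this. Everything here is hypothetical: the function name, the karma-to-flags scaling, and the penalty for losing an adjudication are all invented to make the idea concrete, not a proposal the mod team has endorsed.

```python
# Sketch of a "Flag a moderator" budget: each user gets a monthly allowance of
# flags that grows with account karma (DaystarEld's "increased by account
# karma?") and shrinks each time an adjudication is ruled against them
# ("the amount decreases each time you're ruled against").
# All numbers are invented for illustration.

def monthly_flag_budget(karma: int, rulings_against: int) -> int:
    """Return how many moderator flags a user may raise this month."""
    base = 1                               # everyone gets at least one flag...
    karma_bonus = min(karma // 1000, 4)    # ...plus one per 1000 karma, capped
    return max(base + karma_bonus - rulings_against, 0)
```

So under these made-up parameters a new account gets one flag a month, a 2500-karma account gets three, and repeatedly losing adjudications can zero out the budget entirely.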

Overall I don't want to overpromise something like "if LW has a stronger concentration of force expectation for good conversation norms I'd participate 100x more instead of just reading." But 10x more to begin with, certainly, and maybe more than that over time.

Replies from: Vaniver
comment by Vaniver · 2023-04-18T14:50:18.969Z · LW(p) · GW(p)

Maybe a special "Flag a moderator" button that has a limited amount of uses per month (increased by account karma?) that calls in a mod to read over the thread and adjudicate?

This is similar to the idea for the Sunshine Regiment [LW · GW] from the early days of LW 2.0, where the hope was that if we have a wide team of people who were sometimes called on to do mod-ish actions (like explaining what's bad about a comment, or how it could have been worded, or linking to the relevant part of The Sequences, or so on), we could get much more of it. (It would be a counterspell to the bystander effect (when someone specific gets assigned a comment to respond to), a license to respond at all (because otherwise who are you to complain about this comment?), a counterfactual matching incentive to do it (if you do the work you're assigned, you also fractionally encourage everyone else in your role to do the work they're assigned), and a scheme to lighten the load (as there might be more mods than things to moderate).)

It ended up running into the problem that, actually there weren't all that many people suited to and interested in doing moderator work, and so there was the small team of people who would do it (which wasn't large enough to reliably feel on top of things instead of needing to prioritize to avoid scarcity).

I also don't think there's enough uniformity of opinion among moderators or high-karma-users or w/e that having a single judge evaluate whole situations will actually resolve them. (My guess is that if I got assigned to this case Duncan would have wanted to appeal, and if RobertM [LW(p) · GW(p)] got assigned to this case Said would have wanted to appeal, as you can see from the comments they wrote in response. This is even tho I think RobertM and I agree on the object-level points and only disagree on interpretations and overall judgments of relevance!) I feel more optimistic about something like "a poll" of a jury drawn from some limited pool, where some situations go 10-0, others 7-3, some 5-5; this of course 10xs the costs compared to a single judge. (And open-access polls both have the benefit and drawback of volunteer labor.)

Replies from: DaystarEld, Ruby
comment by DaystarEld · 2023-04-19T20:20:55.821Z · LW(p) · GW(p)

All good points, and yeah I did consider the issue of "appeals" but considered "accept the judgment you get" part of the implicit (or even explicit if necessary) agreement made when raising that flag in the first place. Maybe it would require both people to mutually accept it.

But I'm glad the "pool of people" variation was tried, even if it wasn’t sustainable as volunteer work.

comment by Ruby · 2023-04-27T18:56:54.254Z · LW(p) · GW(p)

It ended up running into the problem that, actually there weren't all that many people suited to and interested in doing moderator work

 

I'm not sure that's true? I was asked at the time to be Sunshine mod, I said yes, and then no one ever followed up to assign me any work. At some point later I was given an explanation, but I don't remember it.

comment by Vladimir_Nesov · 2023-04-22T18:39:03.937Z · LW(p) · GW(p)

LessWrong [...] doesn't especially prioritize the cluster of areas around 'strawmanning being considered especially bad' and 'making unfounded statements about a person's inner state'

You mean it's considered a reasonable thing to aspire to, and just hasn't reached the top of the list of priorities? This would be hair-raisingly alarming if true.

Replies from: Raemon
comment by Raemon · 2023-04-22T23:37:04.616Z · LW(p) · GW(p)

I'm not sure I parse this. I'd say yes, it's a reasonable thing to aspire to and hasn't reached the top of (the moderator/admins) priorities. You say "that would be alarming", and infer... something?

I think you might be missing some background context on how much I think Duncan cares about this, and what I mean by not prioritizing it to the degree he does?

(I'm about to make some guesses about Duncan. I expect to re-enable his commenting within a day or so and he can correct me if I'm wrong)

I think Duncan thinks "Rationalist Discourse" Is Like "Physicist Motors" [LW · GW] strawmans his position, and still gets mostly upvoted and if he wasn't going out of his way to make this obvious, people wouldn't notice. And when he does argue that this is happening, his comment doesn't get upvoted much-at-all [LW(p) · GW(p)].

You might just say "well, Duncan is wrong about whether this is strawmanning". I think it is [edit for clarity: somehow] strawmanning, but Zack's post still has some useful frames and it's reasonable for it to be fairly upvoted.

I think if I were to try to say "knock it off, here's a warning" the way I think Duncan wants me to, this would a) just be more time consuming than mods have the bandwidth for (we don't do that sort of move in general, not just for this class of post), b) disincentivize literal-Zack and new marginal Zack-like people from posting, and, I think the amount of strawmanning here is just not bad enough to be worth that. (see this comment [LW(p) · GW(p)])

Replies from: Vladimir_Nesov, Zack_M_Davis, ambigram
comment by Vladimir_Nesov · 2023-04-23T02:56:21.691Z · LW(p) · GW(p)

It's a bad thing to institute policies when missing good proxies. Doesn't matter if the intended objective is good, a policy that isn't feasible to sanely execute makes things worse.

Whether statements about someone's inner state are "unfounded" or whether something is a "strawman" is hopelessly muddled in practice, only open-ended discussion has a hope of resolving that. Not a policy that damages that potential discussion. And when a particular case is genuinely controversial, only open-ended discussion establishes common knowledge of that fact.

But even if moderators did have oracular powers of knowing that something is unfounded or a strawman, why should they get involved in consideration of factual questions? Should we litigate p(doom) next? This is just obviously out of scope, I don't see a principled difference. People should be allowed to be wrong, that's the only way to notice being right [LW · GW] based on observation of arguments (as opposed to by thinking on your own).

(So I think it's not just good proxies needed to execute a policy that are missing in this case, but the objective is also bad. It's bad on both levels, hence "hair-raisingly alarming".)

Replies from: Raemon
comment by Raemon · 2023-04-23T05:50:34.398Z · LW(p) · GW(p)

I'm actually still kind of confused about what you're saying here (and in particular whether you think the current moderator policy of "don't get involved most of the time" is correct)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-26T07:20:08.874Z · LW(p) · GW(p)

You implied and then confirmed that you consider a policy for a certain objective an aspiration, I argued that policies I can imagine that target that objective would be impossible to execute, making things worse in collateral damage. And that separately the objective seems bad (moderating factual claims).

(In the above two comments, I'm not saying anything about current moderator policy. I ignored the aside in your comment on current moderator policy, since it didn't seem relevant to what I was saying. I like keeping my asides firmly decoupled/decontextualized, even as I'm not averse to re-injecting the context into their discussion. But I won't necessarily find that interesting or have things to say on.)

So this is not meant as subtle code for something about the current issues. Turning to those, note that both Zack [LW(p) · GW(p)] and Said [LW(p) · GW(p)] are gesturing at some of the moderators' arguments getting precariously close to appeals to moderate factual claims. Or that escalation in moderation is being called for in response to unwillingness to agree with moderators on mostly factual questions [LW(p) · GW(p)] (a matter of integrity) or to implicitly take into account some piece of alleged knowledge [LW(p) · GW(p)]. This seems related to how I find the objective of the hypothetical policy against strawmanning a bad thing.

Replies from: Raemon
comment by Raemon · 2023-04-26T16:57:35.469Z · LW(p) · GW(p)

Okay, gotcha, I had not understood that. (Vaniver's comment elsethread had also cleared this up for me I just hadn't gotten around to replying to it yet)

One thing "not close to the top of our list of priorities" means is that I haven't actually thought that much about the issue in general. On the issue of "do LessWrong moderators think they should respond to strawmanning?" (or various other fallacies), my guess (having thought about it for like 5 minutes recently) is something like:

I don't think it makes sense for moderators to have a "policy against strawmanning", in the sense that we take some kind of moderator action against it. But, a thing I think we might want to do is "when we notice someone strawmanning, make a comment saying 'hey, this seems like strawmanning to me?'" (which we aren't treating as special mod comment with special authority, more like just proactively being a good conversation participant). And, if we had a lot more resources, we might try to do something like "proactively noticing and responding to various fallacious arguments at scale."

Replies from: Raemon, Raemon
comment by Raemon · 2023-04-27T18:58:32.694Z · LW(p) · GW(p)

(FYI @Vladimir_Nesov [LW · GW] I'm curious if this sort of thing still feels 'hair raisingly alarming' to you)

comment by Raemon · 2023-04-26T17:05:18.099Z · LW(p) · GW(p)

(Note that I see this issue as fairly different from the issue with Said, where the problem is not any one given comment or behavior, but an aggregate pattern)

comment by Zack_M_Davis · 2023-04-23T06:13:24.077Z · LW(p) · GW(p)

I think it is strawmanning, but Zack's post still has some useful frames and it's reasonable for it to be fairly upvoted. [...] I think the amount of strawmanning here is just not bad enough

Why do you think it's strawmanning, though? What, specifically, do you think I got wrong? This seems like a question you should be able to answer!

As I've explained [LW · GW], I think that strawmanning accusations should be accompanied by an explanation of how the text that the critic published materially misrepresents the text that the original author published. In a later comment [LW(p) · GW(p)], I gave two examples illustrating what I thought the relevant evidentiary standard looks like.

If I had a more Said-like commenting style, I would stop there, but as a faithful adherent of the church of arbitrarily large amounts of interpretive labor, I'm willing to do your work for you. When I imagine being a lawyer hired to argue that "'Rationalist Discourse' Is Like 'Physicist Motors'" [LW · GW] engages in strawmanning, and trying to point to which specific parts of the post constitute a misrepresentation, the two best candidates I come up with are (a) the part where the author claims that "if someone did [speak of 'physicist motors'], you might quietly begin to doubt how much they really knew about physics", and (b) the part where the author characterizes Bensinger's "defeasible default" of "role-playing being on the same side as the people who disagree with you" as being what members of other intellectual communities would call "concern trolling."

However, I argue that both examples (a) and (b) fail to meet the relevant standard, of the text that the critic published materially misrepresenting the text that the original author published.

In the case of (a), while the most obvious reading of the text might be characterized as rude or insulting insofar as it suggests that readers should quietly begin to doubt Bensinger's knowledge of rationality, insulting an author is not the same thing as materially misrepresenting the text that the author published. In the case of (b), "concern-trolling" is a pejorative term; it's certainly true that Bensinger would not self-identify as engaging in concern-trolling. But that's not what the text is arguing: the claim is that the substantive behavior that Bensinger recommends is something that other groups would identify as "concern trolling." I continue to maintain that this is true.

Regarding another user's claim that the "entire post" in question "is an overt strawman", that accusation was rebutted in the comments by both myself [LW(p) · GW(p)] and Said Achmiz [LW(p) · GW(p)].

In conclusion, I stand by my post.

If you disagree with my analysis here, that's fine: I want people to be able to criticize my work. But I think you should be able to say why, specifically. I think it's great when people make negative-valence claims about my work, and then back up those claims with specific arguments that I can learn from. But I think it's bad when people make negative-valence claims about my work that they don't argue for, and then I have to do their work for them as part of my service to the church of arbitrarily large amounts of interpretive labor (as I've done in this comment).

Replies from: Raemon, Duncan_Sabien
comment by Raemon · 2023-04-23T07:17:23.253Z · LW(p) · GW(p)

I meant the primary point of my previous comment to be: "Duncan's accusation in that thread is below the threshold of 'deserves moderator response'" (i.e., Duncan wishes the LessWrong moderators would intervene on things like that on his behalf [edit: reliably and promptly], and I don't plan to do that, because I don't think it's that big a deal). (I edited the previous comment to say "kinda" strawmanning, to clarify the emphasis more.)

My point here was just to explain to Vladimir why I don't find it alarming that the LW team doesn't prioritize strawmanning the way Duncan wants. (I'm still somewhat confused about what Vlad meant by his question, though, and am honestly not sure what this conversation thread is about.)

Replies from: Vaniver, Ruby
comment by Vaniver · 2023-04-24T00:30:12.084Z · LW(p) · GW(p)

I'm still somewhat confused about what Vlad meant with his question though and am honestly not sure what this conversation thread is about

I see Vlad as saying "that it's even on your priority list, given that it seems impossible to actually enforce, is worrying" not "it is worrying that it is low instead of high on your priority list."

comment by Ruby · 2023-04-23T17:12:48.357Z · LW(p) · GW(p)

I don't plan to do that, because I don't think it's that big a deal
 

I think it plausibly is a big deal and mechanisms that identify and point out when people are doing this (and really, I think a lot of the time it might just be misunderstanding) would be very valuable.

I don't think moderators showing up and making and judgment and proclamation is the right answer. I'm more interested in making it so people reading the thread can provide the feedback, e.g. via Reacts. 

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-24T23:56:41.880Z · LW(p) · GW(p)

Just noting that "What specifically did it get wrong?" is a perfectly reasonable question to ask, and is one I would have (in most cases) been willing to answer, patiently and at length.

That I was unwilling in that specific case is an artifact of the history of Zack being quick to aggressively misunderstand that specific essay, in ways that I considered excessively rude (and which Zack has also publicly retracted).

Given that public retraction, I'm considering going back and in fact answering the "what specifically" question, as I normally would have at the time. If I end up not doing so, it will be more because of opportunity costs than anything else. (I do have an answer; it's just a question of whether it's worth taking the time to write it out months later.)

comment by ambigram · 2023-04-23T07:51:07.674Z · LW(p) · GW(p)

I'm very confused, how do you tell if someone is genuinely misunderstanding or deliberately misunderstanding a post?

The author can say that a reader's post is an inaccurate representation of the author's ideas, but how can the author possibly read the reader's mind and conclude that the reader is doing it on purpose? Isn't that a claim that requires exceptional evidence?

Accusing someone of strawmanning is hurtful if false, and it shuts down conversations because it pre-emptively casts the reader in an adversarial role. Judging people based on their intent is also dangerous, because intent is near-unknowable, which means that judgments are more likely to be influenced by factors other than truth. It won't matter how well-meaning you are, because that is difficult to prove; what matters is how well-meaning other people believe you to be, which is more susceptible to biases (e.g. people who are richer, more powerful, or more attractive get more leeway).

I personally would very much rather people being judged by their concrete actions or impact of those actions (e.g. saying someone consistently rephrases arguments in ways that do not match the author's intent or the majority of readers' understanding), rather than their intent (e.g. saying someone is strawmanning).

To be against both strawmanning (with weak evidence) and 'making unfounded statements about a person's inner state' seems to me like a self-contradictory and inconsistent stance.

comment by Vladimir_Nesov · 2023-04-22T18:35:06.467Z · LW(p) · GW(p)

I think Said and Duncan are clearly channeling this conflict [LW(p) · GW(p)], but the conflict is not about them, and doesn't originate with them. So by having them go away or stop channeling the conflict, you leave it unresolved and without its most accomplished voices, shattering the possibility of resolving it in the foreseeable future. This is the hush-hush strategy of dealing with troubling observations: fixing symptoms instead of researching the underlying issues, however onerous that is proving to be.

(This announcement is also rather hush-hush; it's not a post, and so I've only just discovered it, 5 days later. This leaves it with less scrutiny than I think transparency of such an important step requires.)

Replies from: Raemon
comment by Raemon · 2023-04-22T20:09:41.177Z · LW(p) · GW(p)

(This announcement is also rather hush-hush; it's not a post, and so I've only just discovered it, 5 days later. This leaves it with less scrutiny than I think transparency of such an important step requires.)

It's an update to me that you hadn't seen it (I figured, since you had replied to a bunch of other comments, that you were tracking the thread, and more generally figured that since there's 360 comments on this thing it wasn't suffering from lack-of-scrutiny). But, plausible that we should pin it for a day when we make our next set of announcement comments (which are probably coming sometime this weekend, fwiw).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-22T20:27:02.820Z · LW(p) · GW(p)

360 comments on this thing

I meant this thread specifically, with the action announcement, not the post. The thread was started 4 days after the post, so everyone who wasn't tracking the post had every opportunity to miss it. (It shouldn't matter for the point about scrutiny that I in particular might've been expected to not miss it.)

comment by Davidmanheim · 2023-04-18T10:18:26.643Z · LW(p) · GW(p)

Just want to note that I'm less happy with a lesswrong without Duncan. I very much value Duncan's pushback against what I see as a slow decline in quality, and so I would prefer him to stay and continue doing what he's doing. The fact that he's being complained about makes sense, but is mostly a function of him doing something valuable. I have had a few times where I have been slapped down by Duncan, albeit in comments on his Facebook page, where it's much clearer that his norms are operative, and I've been annoyed, but each of those times, despite being frustrated, I have found that I'm being pushed in the right direction and corrected for something I'm doing wrong.

I agree that it's bad that his comments are often overly confrontational, but there's no way to deliver constructive feedback that doesn't involve a degree of confrontation, and I don't see many others pushing to raise the sanity waterline. In a world where a dozen people were fighting the good fight, I'd be happy to ask him to take a break. But this isn't that world, and it seems much better to actively promote a norm of people saying they don't have energy or time to engage than telling Duncan (and maybe / hopefully others) not to push back when they see thinking and comments which are bad. 
 

comment by Vaniver · 2023-04-18T15:24:28.982Z · LW(p) · GW(p)

The thing that feels actually bad is getting into a protracted discussion, on a particular (albeit fuzzy) cluster of topics

I think I want to reiterate my position that I would be sad about Said not being able to discuss Circling (which I think is one of the topics in that fuzzy cluster). I would still like to have a written explanation of Circling (for LW) that is intelligible to Said, and him being able to point out which bits are unintelligible and not feel required to pretend that they are intelligible seems like a necessary component of that.

With regards to Said's 'general pattern', I think there's a dynamic around socially recognized gnosis [LW(p) · GW(p)] where sometimes people will say "sorry, my inability/unwillingness to explain this to you is your problem" and have the commons on their side or not, and I would be surprised to see LW take the position that authors decide for that themselves. Alternatively, tech that somehow makes this more discoverable and obvious--like polls or reacts or w/e--does seem good.

I think productive conversations stem from there being some (but not too much) diversity in what gnosis people are willing to recognize, and in the ability for subspaces to have smaller conversations that require participants to recognize some gnosis.

comment by JBlack · 2023-04-18T00:22:46.752Z · LW(p) · GW(p)

Is there any evidence that either Duncan or Said is actually detrimental to the site in general, or is it mostly in their interactions directly with each other? As far as I can see, 99% of the drama here is in their conflicts directly with each other and heavy moderation-team involvement in it.

From my point of view (as an interested reader and commenter), this latest drama appears to have started partly due to site moderation essentially forcing them into direct conflict with each other via a proposal to adopt norms based on Duncan's post while Said and others were and continue to be banned from commenting on it.

From this point of view, I don't see what either Said or Duncan has done to justify any sort of ban, temporary or not.

Replies from: Raemon
comment by Raemon · 2023-04-18T03:16:18.267Z · LW(p) · GW(p)

This decision is based mostly on past patterns with both of them, over the course of ~6 years.

The recent conflict, in isolation, is something where I'd kinda look sternly at them and kinda judge them (and maybe a couple others) for getting themselves into a demon thread [LW · GW]*, where each decision might look locally reasonable but nonetheless it escalates into a weird proliferating discussion that is (at best) a huge attention sink and (at worst) gets people into an increasingly antagonistic fight that brings out people's worst instincts. If I spent a long time analyzing I might come to more clarity about who was more at fault, but I think the most I might do for this one instance is ban one or both of them for like a week or so and tell them to knock it off.

The motivation here is from a larger history. (I've summarized one chunk of that history from Said here [LW(p) · GW(p)], and expect to go into both a bit more detail about Said and a bit more about Duncan in some other comments soon, although I think I describe the broad strokes in the top-level-comment here)

And notably, my preference is for this not to result in a ban. I'm hoping we can work something out. The thing I'm laying down in this comment is "we do have to actually work something out."

comment by Zack_M_Davis · 2023-04-18T02:32:08.853Z · LW(p) · GW(p)

I condemn the restrictions on Said Achmiz's speech in the strongest possible terms. I will likely have more to say soon, but I think the outcome will be better if I take some time to choose my words carefully.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-04-19T05:41:21.372Z · LW(p) · GW(p)

his speech is not being restricted in variety, it's being ratelimited. the difference there is enormous.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2023-04-19T13:36:10.830Z · LW(p) · GW(p)

Did we read the same verdict? The verdict says that the end of the ban is conditional on the users in question "credibly commit[ting] to changing their behavior in a fairly significant way", "accept[ing] some kind of tech solution that limits their engagement in some reliable way that doesn't depend on their continued behavior", or "be[ing] banned from commenting on other people's posts".

The first is a restriction on variety of speech. (I don't see what other kind of behavioral change the mods would insist on—or even could insist on, given the textual nature of an online forum where everything we do here is speech.) The third is a restriction of venue, which I claim predictably results in a restriction of variety. (Being forced to relegate your points into a shortform or your own post, won't result in the same kind of conversation as being able to participate in ordinary comment threads.) I suppose the "tech solution" of the second could be mere rate-limiting, but the "doesn't depend on their continued behavior" clause makes me think something more onerous is intended.

(The grandparent only mentions Achmiz because I particularly value his contributions, and because I think many people would prefer that I don't comment on the other case, but I'm deeply suspicious of censorship in general, for reasons that I will likely explain in a future post.)

Replies from: Raemon
comment by Raemon · 2023-04-19T17:15:04.101Z · LW(p) · GW(p)

The tech solution I'm currently expecting is rate-limiting. Factoring in the costs of development time and finickiness, I'm leaning towards either "3 comments per post" or "3 comments per post per day". (My ideal world, for Said, is something like "3 comments per post to start, but, if nothing controversial happens and he's not ruining the vibe, he gets to comment more without limit." But that's fairly difficult to operationalize, and a lot of dev-time for a custom feature limiting one or two particular users.)

I do have a high level goal of "users who want to have the sorts of conversations that actually depend on a different culture/vibe than Said-and-some-others-explicitly-want are able to do so". The question here is "do you want the 'real work' of developing new rationality techniques to happen on LessWrong, or someplace else where Said/etc can't bother you?" (which is what's mostly currently happening).

So, yeah, the concrete outcome here is Said not getting to comment everywhere he wants, but he's already not getting to do that, because the relevant content + associated usage-building happens off LessWrong, and then he finds himself in a world where everyone is "suddenly" in significant agreement about some "frame control" concept he's never heard of. (I can't find the exact comment atm, but I remember him expressing alarm at the degree of consensus on frame control in the comments of Aella's post [LW · GW]. There was consensus because somewhere between 50 and 200 people had been using that phrase in various day-to-day conversations for like 3 years. I'm not sure there's a world where that discussion was happening on LW, because frame control tends to come up in dicey, sensitive, adversarial situations.)

So, I think the censorship policy you're imagining is a fabricated option.

My current guess at actual next steps: Said gets a "3 comments per post per day" restriction and is banned from commenting on shortform in particular (since our use case for that is specifically antithetical to the vibe Said wants); then (after also setting up some other moderation tools and making some judgment calls on some other similar-but-lower-profile users), we message people like Logan Strohl and say "hey, we've made a bunch of changes, we'd like it if you came in and tried using the site again", and hope that this time it actually works.

(Duncan might get a similar treatment, for fairly different reasons, although I'm more optimistic about him/us actually negotiating something that requires less heavy-handed restriction.)

Replies from: Zack_M_Davis, Ruby
comment by Zack_M_Davis · 2023-04-19T18:54:33.364Z · LW(p) · GW(p)

a high level goal of "users who want to have the sorts of conversations that actually depend on a different culture/vibe than Said-and-some-others-explicitly-want are able to do so".

We already have a user-level personal ban feature! (Said doesn't like it, but he can't do anything about it.) Why isn't the solution here just, "Users who don't want to receive comments from Said ban him from their own posts"? How is that not sufficient? Why would you spend more dev time than you need to, in order to achieve your stated goal? This seems like a question you should be able to answer.

the concrete outcome here is Said not getting to comment everywhere he wants, but he's already not getting to do that, because the relevant content + associated usage-building happens off lesswrong

This is trivially false as stated. (Maybe you meant to say something else, but I fear that despite my general eagerness to do upfront interpretive labor, I'm unlikely to guess it; you'll have to clarify.) It's true that relevant content and associated usage-building happens off Less Wrong. It is not true that this prevents Said from commenting everywhere he wants (except where already banned from posts by individual users—currently, that's Elizabeth, and DirectedEvolution, and one other user [? · GW]).

I'm leaning towards either "3 comments per post" or "3 comments per post per day". (My ideal world, for Said, is something like "3 comments per post to start, but, if nothing controversial happens and he's not ruining the vibe

This would make Less Wrong worse for me. I want Said Achmiz to have unlimited, unconditional commenting privileges on my posts. (Unconditional means the software doesn't stop Said from posting a fourth comment; "to start" is not unconditional if it requires a human to approve the fourth comment.)

More generally, as a long-time user of Less Wrong (original join date 26 February 2009 [LW · GW], author of five Curated posts [LW · GW]) and preceding community (first Overcoming Bias comment 22 December 2007 [LW(p) · GW(p)], attendee of the first Overcoming Bias meetup on 21 February 2008 [LW · GW]), I do not want Said Achmiz to be a second-class citizen in my garden. If we have a user-level personal ban feature that anyone can use, I might or might not think that's a good feature to have, but at least it's a feature that everyone can use; it doesn't arbitrarily single out a single user on a site-wide basis.

Judging by the popularity of Alicorn's comment [LW(p) · GW(p)] testifying that she "[doesn't] think [she has] ever read a Said comment and thought it was a waste of time, or personally bothersome to [her], or sneaky or pushy or anything" (at 72 karma in 43 votes, currently the second-highest rated comment on this post), I'd bet a lot of other users feel similarly. From your stated plans, it looks like you're not taking those 43 users' preferences into account. Why is that? This seems like a question you should be able to answer.

Replies from: philh, Raemon, Raemon, Vaniver, Raemon
comment by philh · 2023-04-20T12:24:12.043Z · LW(p) · GW(p)

Judging by the popularity of Alicorn’s comment testifying that she “[doesn’t] think [she has] ever read a Said comment and thought it was a waste of time, or personally bothersome to [her], or sneaky or pushy or anything” (at 72 karma in 43 votes, currently the second-highest rated comment on this post), I’d bet a lot of other users feel similarly. From your stated plans, it looks like you’re not taking those 43 users’ preferences into account.

Stipulating that votes on this comment are more than negligibly informative on this question... it seems bizarre to count karma rather than agreement votes (currently 51 agreement from 37 votes). But also anyone who downvoted (or disagreed) here is someone who you're counting as not being taken into account, which seems exactly backwards.

comment by Raemon · 2023-04-19T19:27:06.020Z · LW(p) · GW(p)

Some other random notes (probably not maximally cruxy for you, but):

1. If Said seemed corrigible about actually integrating the spirit-of-our-models into his commenting style (such as proactively avoiding threads that benefit from a more open/curiosity/interpretative mode, without needing to wait for an author or mod to ban him from that post), then I'd be much more happy to just leave that as a high-level request from the mod team rather than an explicit code-based limitation.

But we've had tons of conversations with Said asking him to adjust his behavior, and he seems pretty committed to sticking to his current behavior. At best he seems grudgingly willing to avoid some threads if there are clear-cut rules we can spell out, but I don't trust him to actually tell the difference in many edge cases.

We've spent a hundred+ person-hours over the years thinking about how to limit Said's damage, and have a lot of other priorities on our plate. I consider it a priority to resolve this in a way that won't continue to eat up more of our time.

2. I did list "actually just encourage people to use the ban tool more" as an option. (DirectedEvolution didn't even know it was an option until it was pointed out to him recently.) If you actually want to advocate for that over a Said-specific rate-limit, I'm open to that (my model of you thinks that's worse).

(Note: I, and I think several other people on the mod team, would have banned him from my comment sections if I didn't feel an obligation as a mod/site-admin to have a more open comment section.)

3. I will probably build something that lets people Opt Into More Said. I think it's fairly likely the mod team will do some more heavy-handed moderation in the nearish future, and I think a reasonable countermeasure to build, to alleviate some downsides of this, is to also give authors a "let this user comment unfettered on my posts, even though the mod team has generally restricted them in some way" option.

(I don't expect that to really resolve your crux here but it seemed like it's at least an improvement on the margin)

4. I think it's plausible that the right solution is to ban him from shortform, and use shortform as the place where people can talk about whatever they want in a more open/curious vibe. I currently don't think this is the right call, because I think it's just actually a super reasonable, centrally supported use-case of top-level posts to have sets of norms that are actively curious and invested. It seems really wrong to me to think the only kind of conversation you need to make intellectual progress is "criticize without trying to figure out what the OP is about and what problems they're trying to solve".

I do think, for the case of Said, building out two high-level normsets of "open/curious/cooperative" and "debate/adversarial collaboration/thicker-skin-required", letting authors choose between them, and specifically banning Said from the former, is a viable option I'd consider. I think you have previously argued against this, and Said expressed dissatisfaction with it elsewhere in this comment section.

(This solution probably wouldn't address my concerns about Duncan though)

Replies from: Vaniver, Vladimir_Nesov, Zack_M_Davis, evand, Screwtape
comment by Vaniver · 2023-04-20T02:58:15.249Z · LW(p) · GW(p)

If Said seemed corrigible about actually integrating the spirit-of-our-models into his commenting style (such as proactively avoiding threads that benefit from a more open/curiosity/interpretative mode, without needing to wait for an author or mod to ban him from that post), then I'd be much more happy to just leave that as a high-level request from the mod team rather than an explicit code-based limitation.

I am a little worried that this is a generalization that doesn't line up with actual evidence on the ground, and instead is caused by some sort of vibe spiral. (I'm reluctant to suggest a lengthy evidence review, both because of the costs and because I'm somewhat uncertain of the benefits--if the problem is that lots of authors find Said annoying or his reactions unpredictable, and we review the record and say "actually Said isn't annoying", those authors are unlikely to find it convincing.)

In particular, I keep thinking about this comment [LW(p) · GW(p)] (noting that I might be updating too much on one example). I think we have evidence that "Said can engage with open/curious/interpretative topics/posts in a productive way", and should maybe try to figure out what was different that time.

comment by Vladimir_Nesov · 2023-04-22T19:11:58.020Z · LW(p) · GW(p)

I will probably build something that let's people Opt Into More Said.

I think in the sense of the general garden-style conflict [LW(p) · GW(p)] (rather than Said/Duncan conflict specifically) this is the only satisfactory solution that's currently apparent, users picking the norms they get to operate under, like Commenting Guidelines, but more meaningful in practice.

There should be for a start just two options, Athenian Garden and Socratic Garden [LW(p) · GW(p)], so that commenters can cheaply make decisions about what kinds of comments are appropriate for a particular post, without having to read custom guidelines.

I do think, for the case of Said, building out two high level normsets of "open/curious/cooperative" and "debate/adversarial collaboration/thicker-skin-required", letting authors choose between them, and specifically banning Said from the former, is a viable option I'd consider.

Excellent. I predict that Said wouldn't be averse to voluntarily not commenting on "open/curious/cooperative" posts, or not commenting there in the kind of style that adherents of that culture dislike, so that "specifically banning Said" from that is an unnecessary caveat.

comment by Zack_M_Davis · 2023-04-21T00:07:50.856Z · LW(p) · GW(p)

I did list "actually just encourage people to use the ban tool more" [as] an option. [...] If you actually want to advocate for that over a Said-specific rate-limit, I'm open to that (my model of you thinks that's worse).

Well, I'm glad you're telling actual-me this rather than using your model of me. I count the fact your model of me is so egregiously poor (despite our having a number of interactions over the years) as a case study in favor of Said's interaction style (of just asking people things, instead of falsely imagining that you can model them).

Yes, I would, actually, want to advocate for informing users about a feature that already exists that anyone can use, rather than writing new code specifically for the purpose of persecuting a particular user that you don't like.

Analogously, if the town council of the city I live in passes a new tax increase, I might grumble about it, but I don't regard it as a direct personal threat. If the town council passes a tax increase that applies specifically to my friend Said Achmiz, and no one else, that's a threat to me and mine. A government that does that is not legitimate.

It seems really wrong to me to think the only kind of conversation you need to make intellectual progress be "criticize without trying to figure out what the OP is about and what problems they're trying to solve".

So, usually when people make this kind of "hostile paraphrase" in an argument, I tend to take it in stride. I mostly regard it as "part of the game": I think most readers can tell the difference between an attempted fair paraphrase (which an author is expected to agree with) and an intentional hostile paraphrase (which is optimized to highlight a particular criticism, without the expectation that the author will agree with the paraphrase). I don't tell people to be more charitable to me; I don't ask them to pass my ideological Turing test; I just say, "That's not what I meant," and explain the idea again; I'm happy to do the extra work.

In this particular situation, I'm inclined to try out a different commenting style that involves me doing less interpretive labor. I think you know very well that "criticize without trying to figure out what the OP is about" is not what Said and I think is at issue. Do you think you can rephrase that sentence in a way that would pass Said's ideological Turing test?

I consider it a priority to resolve this in a way that won't continue to eat up more of our time.

Right, so if someone complains about Said, point out that they're free to strong-downvote him and that they're free to ban him from their posts. That's much less time-consuming than writing new code! (You're welcome.)

If Said seemed corrigible about actually integrating the spirit-of-our-models into his commenting style

Sorry, I thought your job was to run a website, not dictate to people how they should think and write? (Where part of running a website includes removing content that you don't want on the website, but that's not the same thing as decreeing that individuals must "integrat[e] the spirit-of-[your]-models into [their] commenting style".) Was I mistaken about what your job is?

building out two high level normsets of "open/curious/cooperative" and "debate/adversarial collaboration/thicker-skin-required"

I am strongly opposed to this because I don't think the proposed distinction cuts reality at the joints [LW · GW]. (I'd be happy to elaborate on request, but will omit the detailed explanation now in order to keep this comment focused.)

We already let authors write their own moderation guidelines! It's a blank text box! If someone happens to believe in this "cooperative vs. adversarial" false dichotomy, they can write about it in the text box! How is that not enough?

Replies from: Vladimir_Nesov, Raemon
comment by Vladimir_Nesov · 2023-04-22T19:32:05.969Z · LW(p) · GW(p)

We already let authors write their own moderation guidelines! It's a blank text box!

Because it's a blank text box, it's not convenient for commenters to read it in detail every time, so I expect almost nobody reads it; these guidelines are not practical to follow.

With two standard options, color-coded or something, it becomes actually practical, so the distinction between blank text box and two standard options is crucial. You might still caveat the standard options with additional blank text boxes, but being easy to classify without actually reading is the important part.

Replies from: philh
comment by philh · 2023-04-22T22:25:04.998Z · LW(p) · GW(p)

Also, moderation guidelines aren't visible on GreaterWrong at all, afaict. So Said specifically is unlikely to adjust his commenting in response to those guidelines, unless that changes.

(I assume Said mostly uses GW, since he designed it.)

comment by Raemon · 2023-04-25T03:09:48.704Z · LW(p) · GW(p)

I've been busy, so hadn't replied to this yet, but specifically wanted to apologize for the hostile paraphrase (I notice I've done that at least twice now in this thread; I'm trying to do better, but it seems important for me to notice and pay attention to).

I think I phrased the "corrigible about actually integrating the spirit-of-our-models into his commenting style" line pretty badly; Oliver and Vaniver also both thought it was pretty alarming. The thing I was trying to say I eventually reworded in my subsequent mod announcement as:

Feel free to argue with this decision. And again, in particular, if Said makes a case that he either can obey the spirit of “don’t imply people have an obligation to engage with your comments”, or someone can suggest a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way that Said thinks he can follow, I’d feel fairly good about revoking the rate-limit.

i.e. this isn't about Said changing his own thought process, but, like, there is a spirit-of-the-law relevant to the mod decision here, and whether I need to worry about specification-gaming.

I expect you to still object to that for various reasons, and I think it's reasonable to be pretty suspicious of me for phrasing it the way I did the first time. (I think it does convey something sus about my thought process, but, fwiw I agree it is sus and am reflecting on it)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-25T03:42:59.710Z · LW(p) · GW(p)

Feel free to argue with this decision. And again, in particular, if Said makes a case that he either can obey the spirit of “don’t imply people have an obligation to engage with your comments”, or someone can suggest a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way that Said thinks he can follow, I’d feel fairly good about revoking the rate-limit.

FYI, my response to this is waiting for an answer to my question in the first paragraph of this comment [LW(p) · GW(p)].

comment by evand · 2023-04-24T01:45:23.158Z · LW(p) · GW(p)

I'm still uncertain how I feel about a lot of the details on this (and am enough of a lurker rather than poster that I suspect it's not worth my time to figure that out / write it publicly), but I just wanted to say that I think this is an extremely good thing to include:

I will probably build something that lets people Opt Into More Said. I think it's fairly likely the mod team will probably generally do some more heavier handed moderation in the nearish future, and I think a reasonable countermeasure to build, to alleviate some downsides of this, is to also give authors a "let this user comment unfettered on my posts, even though the mod teams have generally restricted them in some way."

This strikes me basically as a way to move the mod team's role more into "setting good defaults" and less into "setting the only way things work". How much y'all should move in that direction seems an open question, as it does limit how much cultivation you can do, but it seems like a very useful tool to make use of in some cases.

comment by Screwtape · 2023-04-20T15:03:34.518Z · LW(p) · GW(p)

How technically troublesome would an allow list be?

Maybe the default is everyone gets three comments on a post. People the author has banned get zero, people the author has opted in for get unlimited, the author automatically gets unlimited comments on their own post, mods automatically get unlimited comments.

(Or if this feels more like a Said and/or Duncan specific issue, make the options "Unlimited", "Limited", and "None/Banned" then default to everyone at Unlimited except for Said and/or Duncan at Limited.)
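For concreteness, the lookup described above could be sketched like this (purely illustrative Python; none of these names are actual LessWrong internals):

```python
# Hypothetical sketch of a per-post comment-limit lookup, assuming the
# defaults proposed above: everyone gets three comments, author-banned
# users get zero, opted-in users / the author / mods get unlimited.

DEFAULT_LIMIT = 3
UNLIMITED = float("inf")

def comment_limit(commenter, post_author, banned, opted_in, mods):
    """Return how many comments `commenter` may leave on `post_author`'s post."""
    if commenter == post_author or commenter in mods:
        return UNLIMITED   # authors and mods are never limited
    if commenter in banned:
        return 0           # author-banned users get zero
    if commenter in opted_in:
        return UNLIMITED   # author opt-in lifts the cap
    return DEFAULT_LIMIT   # everyone else gets the default three
```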

comment by Raemon · 2023-04-19T19:01:10.276Z · LW(p) · GW(p)

From your stated plans, it looks like you're not taking those 43 users' preferences into account. Why is that?

My prediction is that those users are primarily upvoting it for what it's saying about Duncan rather than about Said.

Replies from: Raemon
comment by Raemon · 2023-04-19T19:09:19.508Z · LW(p) · GW(p)

To spell out what evidence I'm looking at:

There is definitely some term in my / the mod team's equation for "this user is providing a lot of valuable stuff that people want on the site". But the high level call the moderation team is making is something like "maximize useful truths we're figuring out". Hearing about how many people are getting concrete value out of Said or Duncan's comments is part of that equation, hearing about how many people are feeling scared or offput enough that they don't comment/post much is also part of that equation. And there are also subtler interplays that depend on our actual model of how progress gets made.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-04-20T01:01:00.103Z · LW(p) · GW(p)

I wonder how much of the difference in intuitions about Duncan and Said come from whether people interact with LW primarily as commenters or as authors. 

The concerns about Said seem to be entirely from and centered around the concerns of authors. He makes posting mostly costly, he drives content away. Meanwhile many concerns about Duncan could be phrased as being about how he interacts with commenters.

If this trend exists it is complicated. Said gets >0 praise from authors for his comments on their own posts (e.g. Raemon here [LW(p) · GW(p)]), major Said defender Zack has written lots of well-regarded posts, and Said banner DirectedEvolution writes good content but stands out to me as one of the best commenters on science posts. Duncan also generates a fair amount of concern for attempts to set norms outside his own posts. But I think there might be a thread here.

Replies from: AllAmericanBreakfast, Zack_M_Davis
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-24T01:30:45.907Z · LW(p) · GW(p)

Said banner DirectedEvolution writes good content but stands out to me as one of the best commenters on science posts.

Thank you for the compliment!

With writing science commentary, my participation is contingent on there being a specific job to do (often, "dig up quotes from links and citations and provide context") and a lively conversation. The units of work are bite-size. It's easy to be useful and appreciated.

Writing posts is already relatively speaking not my strong suit. There's no preselection on people being interested enough to drive a discussion, what makes a post "interesting" is unclear, and the amount of work required to make it good is large enough that it feels like work more than play. When I do get a post out, it often fails to attract much attention. What attention it does receive is often negative, and Said is one of the more prolific providers of negative attention. Hence, I ban Said because he further inhibits me from developing in my areas of relative weakness. 

My past conflict with Duncan arose when I would impute motives to him, or blur the precise distinctions in language he was attempting to draw - essentially failing to adopt the "referee" role that works so well in science posts, and putting the same negative energy I dislike receiving into my responses to Duncan's posts. When I realized this was going on, I apologized and changed my approach, and now I no longer feel a sense of "danger" in responding to Duncan's posts or comments. I feel that my commenting strong suit is quite compatible with friendly discourse with Duncan, and Duncan is good at generating lively discussions where my refereeing skillset may be of use.

So if I had to explain it, some people (me, Duncan) are sensitive about posting, while others are sharp in their comments (Said, anonymousaisafety). Those who are sensitive about posting will get frustrated by Said, while those who write sharp comments will often get in conflict with Duncan.

comment by Zack_M_Davis · 2023-04-22T17:44:08.884Z · LW(p) · GW(p)

I'm not sure what other user you're referring to besides Achmiz—it looks like there's supposed to be another word between "about" and "and" in your first sentence, and between "about" and "could" in the last sentence of your second paragraph, but it's not rendering correctly in my browser? Weird.

Anyway, I think the pattern you describe could be generated by a philosophical difference about where the burden of interpretive labor rests. A commenter who thinks that authors have a duty to be clear (and therefore asks clarifying questions, or makes attempted criticisms that miss the author's intended point) might annoy authors who think that commenters have a duty to read charitably. Then the commenter might be blamed for driving authors away, and the author might be blamed for getting too angrily defensive with commenters.

major Said defender Zack has written lots of well-regarded posts

I interact with this website as an author more than a commenter these days, but in terms of the dichotomy I describe above, I am very firmly of the belief that authors have a duty to be clear. (To the extent that I expect that someone who disagrees with me, also disagrees with my proposed dichotomy; I'm not claiming to be passing anyone's ideological Turing test.)

The other month I published a post that I was feeling pretty good about [LW · GW], quietly hoping that it might break a hundred karma. In fact, the comment section was very critical (in ways that I didn't have satisfactory replies to), and the post only got 18 karma in 26 votes, an unusually poor showing for me. That made me feel a little bit sad that day, and less likely to write future posts that I could anticipate being disliked by commenters in the way that this post was disliked.

In my worldview, this is exactly how things are supposed to work. I didn't have satisfactory replies to the critical comments. Of course that's going to result in downvotes! Of course it made me a little bit sad that day! (By "conservation of expected feelings": I would have felt a little bit happy if the post did well.) Of course I'm going to try not to write posts relevantly "like that" in the future!

I've been getting the sense that a lot of people somehow seem to disagree with me that this is exactly how things are supposed to work?—but I still don't think I understand why. Or rather, I do have an intuitive model of why people seem to disagree, but I can't quite permit myself to believe it, because it's too uncharitable; I must not be understanding correctly.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-04-24T00:43:03.012Z · LW(p) · GW(p)

Thanks for engaging, I found this comment very… traction-ey? Like we’re getting closer to cruxes. And you’re right that I want to disagree with your ontology.

I think “duty to be clear” skips over the hard part, which is that “being clear” is a transitive verb. It doesn’t make sense to say whether a post is clear or not clear, only who it is clear or unclear to.

To use a trivial example:  Well taught physics 201 is clear if you’ve had the prerequisite physics classes or are a physics savant, but not to laymen. Poorly taught physics 201 is clear to a subset of the people who would understand it if well-taught.  And you can pile on complications from there. Not all prerequisites are as obvious as Physics 101 -> Physics 201, but that doesn’t make them not prerequisites. People have different writing and reading styles. Authors can decide the trade-offs are such that they want to write a post but use fairly large step sizes, and leave behind people who can’t fill in the gaps themselves.

So the question is never “is this post clear?”, it’s “who is this post intended for?” and “what percentage of its audience actually finds it clear?” The answers are never “everyone” and “100%” but being more specific than that can be hard and is prone to disagreement. 

Commenters of course have every right to say “I don’t understand this” and politely ask questions. But I, and I suspect the mods and most authors, reject the idea that publishing a piece on LessWrong gives me a duty to make every reader understand it. That may cost me karma or respect and I think that’s fine*, I’m not claiming a positive right to other people’s high regard. 

You might respond “fine, authors have a right not to answer, but that doesn’t mean commenters don’t have a right to ask”. I think that’s mostly correct but not at the limit, there is a combination of high volume, aggravating approach, and entitlement that drives off far more value than it creates.


*although I think downvoting things I don’t understand is tricky specifically because it’s hard to tell where the problem lies, so I rarely do.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-24T01:44:45.756Z · LW(p) · GW(p)

You might respond “fine, authors have a right not to answer, but that doesn’t mean commenters don’t have a right to ask”. I think that’s mostly correct but not at the limit, there is a combination of high volume, aggravating approach, and entitlement that drives off far more value than it creates.

 

YES. I think this is hugely important, and I think it's a pretty good definition of the difference between a confused person and a crank.

Confused people ask questions of people they think can help them resolve their confusion. They signal respect, because they perceive themselves as asking for a service to be performed on their behalf by somebody who understands more than they do. They put effort into clarifying their own confusion and figuring out what the author probably meant. They assume they're lucky if they get one reply from the author, and so they try not to waste their one question on uninteresting trivialities that they could have figured out for themselves.

Cranks ask questions of people they think are wrong, in order to try and expose the weaknesses in their arguments. They signal aloofness, because their priority is on being seen as an authority who deserves similar or higher status (at least on the issue at hand) as the person they're addressing. They already expect the author they're questioning is fundamentally confused, and so they don't waste their own time trying to figure out what the author might have meant. The author, and the audience, are lucky to have the crank's attention, since they're obviously collectively lost in confusion and need a disinterested outsider to call attention to that fact.

There's absolutely a middle ground. There are many times when I ask questions - let's say of an academic author - where I think the author is probably either wrong or misguided in their analysis. But outside of pointing out specific facts that I know are wrong and suspect the author might not have noticed, I never address these authors in the manner of a crank. If I bother to contact them, it's to ask questions to do things like:

  • Describe my specific disagreement succinctly, and ask the author to explain why they think or approach the issue differently
  • Ask about the points in the author's argument I don't fully understand, in case those turn out to be cruxes
  • Ask what they think about my counterargument, on the assumption that they've already thought about it and have a pretty good answer that I'm genuinely interested in hearing
Replies from: pktechgirl, Duncan_Sabien
comment by Elizabeth (pktechgirl) · 2023-04-24T07:55:06.345Z · LW(p) · GW(p)

This made something click for me. I wonder if some of the split is people who think comments are primarily communication with the author of a post, vs with other readers. 

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-24T15:39:41.548Z · LW(p) · GW(p)

Cranks ask questions of people they think are wrong, in order to try and expose the weaknesses in their arguments. They signal aloofness, because their priority is on being seen as an authority who deserves similar or higher status (at least on the issue at hand) as the person they're addressing. They already expect the author they're questioning is fundamentally confused, and so they don't waste their own time trying to figure out what the author might have meant. The author, and the audience, are lucky to have the crank's attention, since they're obviously collectively lost in confusion and need a disinterested outsider to call attention to that fact.

And this attitude is particularly corrosive to feelings of trust, collaboration, "jamming together," etc. ... it's like walking into a martial arts academy and finding a person present who scoffs at both the instructors and the other students alike, and who doesn't offer sufficient faith to even try a given exercise once before first a) hearing it comprehensively justified and b) checking the sparring records to see if people who did that exercise win more fights.

Which, yeah, that's one way to zero in on the best martial arts practices, if the other people around you also signed up for that kind of culture and have patience for that level of suspicion and mistrust!

(I choose martial arts specifically because it's a domain full of anti-epistemic garbage and claims that don't pan out.)

But in practice, few people will participate in such a martial arts academy for long, and it's not true that a martial arts academy lacking that level of rigor makes no progress in discovering and teaching useful things to its students.

Replies from: jimmy
comment by jimmy · 2023-04-25T18:30:56.274Z · LW(p) · GW(p)

You're describing a deeply dysfunctional gym, and then implying that the problem lies with the attitude of this one character rather than the dysfunction that allows such an attitude to be disruptive.

The way to jam with such a character is to bet you can tap him with the move of the day, and find out if you're right. If you can, and he gets tapped 10 times in a row with the move he just scoffed at every day he does it, then it becomes increasingly difficult for him to scoff the next time, and increasingly funny and entertaining for everyone else. If you can't, and no one can, then he might have a point, and the gym gets to learn something new.

If your gym knows how to jam with and incorporate dissonance without perceiving it as a threat, then not only are such expressions of distrust/disrespect not corrosive, they're an active part of the productive collaboration, and serve as opportunities to form the trust and mutual respect which clearly weren't there in the first place. It's definitely more challenging to jam with dissonant characters like that (especially if they're dysfunctionally dissonant, as your description implies), and no one wants to train at a gym which fails to form trust and mutual respect, but it's important to realize that the problem isn't so much the difficulty as the inability to overcome the difficulty, because the solutions to each are very different.
 

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-25T21:00:38.260Z · LW(p) · GW(p)

Strong disagree that I'm describing a deeply dysfunctional gym; I barely described the gym at all and it's way overconfident/projection-y to extrapolate "deeply dysfunctional" from what I said.

There's a difference between "hey, I want to understand the underpinnings of this" and the thing I described, which is hostile to the point of "why are you even here, then?"

Edit: I view the votes on this and the parent comment as indicative of a genuine problem; jimmy above is exhibiting actually bad reasoning (à la representativeness) and the LWers who happen to be hanging around this particular comment thread are, uh, apparently unaware of this fact. Alas.

Replies from: localdeity, jimmy
comment by localdeity · 2023-04-27T00:28:11.467Z · LW(p) · GW(p)

Strong disagree that I'm describing a deeply dysfunctional gym; I barely described the gym at all and it's way overconfident/projection-y to extrapolate "deeply dysfunctional" from what I said.

Well, you mentioned the scenario as an illustration of a "particularly corrosive" attitude.  It therefore seems reasonable to fill in the unspecified details (like just how disruptive the guy's behavior is, how much of everyone's time he wastes, how many instructors are driven away in shame or irritation) with pretty negative ones—to assume the gym has in fact been corroded, being at least, say, moderately dysfunctional as a result.

Maybe "deeply dysfunctional" was going too far, but I don't think it's reasonable to call that "way overconfident/projection-y".  Nor does the difference between "deeply dysfunctional" and "moderately dysfunctional" matter for jimmy's point.

votes

FYI, I'm inclined to upvote jimmy's comment because of the second paragraph: it seems to be the perfect solution to the described situation (and to all hypothetical dysfunction in the gym, minor or major), and has some generalizability (look for cheap tests of beliefs, challenge people to do them).  And your comment seems to be calling jimmy out inappropriately (as I've argued above), so I'm inclined to at least disagree-vote it.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-27T04:42:04.082Z · LW(p) · GW(p)

"Let's imagine that these unspecified details, which could be anywhere within a VERY wide range, are specifically such that the original point is ridiculous, in support of concluding that the original point is ridiculous" does not seem like a reasonable move to me.

Separately:

https://www.lesswrong.com/posts/WsvpkCekuxYSkwsuG/overconfidence-is-deceit

Replies from: philh
comment by philh · 2023-04-28T13:53:33.821Z · LW(p) · GW(p)

I think my feeling here is:

  • Yes, Jimmy was either projecting (filling in unspecified details with dysfunction, where function would also fit) or making an unjustified claim (that any gym matching your description must be dysfunctional). I think projection is more likely. Neither of these options is great.
  • But it's not clear how important that mistake is to his comment. I expect people were mostly reacting to paragraphs 2 and 3, and you could cut paragraph 1 out and they'd stand by themselves.
  • Do the more-interesting parts of the comment implicitly rely on the projection/unjustified-claim? Also not clear to me. I do think the comment is overstated. ("The way to jam"?) But e.g. "the problem isn’t so much the difficulty as the inability to overcome the difficulty" seems... well, I'd say this is overstated too, but I do think it's pointing at something that seems valuable to keep in mind even if we accept that the gym is functional.
  • So I don't think it's unreasonable that the parent got significantly upvoted, though I didn't upvote it myself; and I don't think it's unreasonable that your correction didn't, since it looks correct to me but like it's not responding to the main point.
  • Maybe you think paragraphs 2 and 3 were relying more on the projection than it currently seems to me? In that case you actually are responding to what-I-see-as the main point. But if so I'd need it spelled out in more detail.
Replies from: jimmy
comment by jimmy · 2023-05-08T04:55:52.646Z · LW(p) · GW(p)
  • Yes, Jimmy was either projecting (filling in unspecified details with dysfunction, where function would also fit) or making an unjustified claim (that any gym matching your description must be dysfunctional). I think projection is more likely. Neither of these options is great.

FWIW, that is a claim I'm fully willing and able to justify. It's hard to disclaim all the possible misinterpretations in a brief comment (e.g. "deeply" != "very"), but I do stand by a pretty strong interpretation of what I said as being true, justifiable, important, and relevant.
 

comment by jimmy · 2023-05-08T04:52:30.385Z · LW(p) · GW(p)

There's a difference between "hey, I want to understand the underpinnings of this" and the thing I described, which is hostile to the point of "why are you even here, then?"

Yes, and that's why I described the attitude as "dysfunctionally dissonant" (emphasis in original). It's not a good way of challenging the instructors, and not the way I recommend behaving.

What I'm talking about is how a healthy gym environment is robust to this sort of dysfunctional dissonance, and how to productively relate to unskilled dissonance by practicing skillfully enough yourself that the system's combined dysfunction never becomes supercritical and instead decays towards productive cooperation.


it's way overconfident/projection-y to extrapolate "deeply dysfunctional" from what I said.

That's certainly one possibility. But isn't it also conceivable that I simply see underlying dynamics (and lack thereof) which you don't see, and which justify the confidence level I display?

It certainly makes sense to track the hypothesis that I am overconfident here, but ironically it strikes me as overconfident to be asserting that I am being overconfident without first checking things like "Can I pass his ITT"/"Can I point to a flaw in his argument that makes him stutter if not change his mind"/etc.

To be clear, my view here is based on years of thinking about this kind of problem and practicing my proposed solutions with success, including in a literal martial arts gym for the last eight years. Perhaps I should have written more about these things on LW so my confidence doesn't appear to come out of nowhere, but I do believe I am able to justify what I'm saying very well and won't hesitate to do so if anyone wants further explanation or sees something which doesn't seem to fit. And hey, if it turns out I'm wrong about how well supported my perspective is, I promise not to be a poor sport about it.


jimmy above is exhibiting actually bad reasoning (à la representativeness)

In absence of an object level counterargument, this is textbook ad hominem. I won't argue that there isn't a place for that (or that it's impossible that my reasoning is flawed), but I think it's hard to argue that it isn't premature here. As a general rule, anyone that disagrees with anyone can come up with a million accusations of this sort, and it isn't uncommon for some of it to be right to an extent, but it's really hard to have a productive conversation if such accusations are used as a first resort rather than as a last resort. Especially when they aren't well substantiated.

I see that you've deactivated your account now so it might be too late, but I want to point out explicitly that I actively want you to stick around and feel comfortable contributing here. I'm pushing back against some of the things you're saying because I think that it's important to do so, but I do not harbor any ill will towards you nor do I think what you said was "ridiculous" [LW(p) · GW(p)]. I hope you come back.

comment by Vaniver · 2023-04-20T03:05:07.613Z · LW(p) · GW(p)

Maybe you meant to say something else, but I fear that despite my general eagerness to do upfront interpretive labor, I'm unlikely to guess it; you'll have to clarify.

I thought it was a reference to, among other things, this exchange [LW(p) · GW(p)] where Said says one of Duncan's Medium posts was good, and Duncan responds that his decision to not post it on LW was because of Said. If you're observing that Said could just comment on Medium instead, or post it as a linkpost on LW and comment there, I think you're correct. [There are, of course, other things that are not posted publicly, where I think it then becomes true.]

comment by Raemon · 2023-04-19T19:50:13.817Z · LW(p) · GW(p)

I do want to acknowledge that based on various comments and vote patterns, I agree it seems like a pretty controversial call, and I model it as something like "spending down and/or making a bet with a limited resource" (maybe two specific resources: "trust in the mods" and "some groups of people's willingness to put up with the site being optimized in a way they think is wrong").

Despite that, I think it is the right call to limit Said significantly in some way, but I don't think we can make that many moderation calls on users this established that there this controversial without causing some pretty bad things to happen.

Replies from: Zack_M_Davis
comment by Zack_M_Davis · 2023-04-21T00:10:29.516Z · LW(p) · GW(p)

I don't think we can make that many moderation calls on users this established that there [sic] this controversial without causing some pretty bad things to happen.

Indeed. I would encourage you to ask yourself whether the number referred to by "that many" is greater than zero.

comment by Ruby · 2023-04-23T17:15:11.189Z · LW(p) · GW(p)

50 and 200 people had been using that phrase in various day-to-day conversations for like 3 years

I don't remember this. I feel like Aella's post introduced the term?

A better example might be Circling, though I think Said might have had a point that it hadn't been carefully scrutinized; a lot of people had just been doing it.

Replies from: Raemon
comment by Raemon · 2023-04-23T18:01:46.483Z · LW(p) · GW(p)

Frame control was a pretty central topic on "what's going on with Brent?" two years prior, as well as some other circumstances. We'd been talking about it internally at Lightcone/LessWrong during that time.

Replies from: Ruby
comment by Ruby · 2023-04-23T18:47:39.350Z · LW(p) · GW(p)

Hmm, yeah, I can see that. Perhaps just not under that name.

Replies from: Raemon
comment by Raemon · 2023-04-23T19:33:04.505Z · LW(p) · GW(p)

I think the term was getting used, but makes sense if you weren't as involved in those conversations. (I just checked and there's only one old internal lw-slack message about it from 2019, but it didn't feel like a new term to me at the time and pretty sure it came up a bunch on FB and in moderation convos periodically under that name)

comment by Ben Pace (Benito) · 2023-04-25T05:43:37.372Z · LW(p) · GW(p)

Ray writes:

Here are some areas I think Said contributes in a way that seem important:

  • Various ops/dev work maintaining sites like readthesequences.com, greaterwrong.com, and gwern.com. 

For the record, I think the value here is "Said is the person independent of MIRI (including Vaniver) and Lightcone who contributes the most counterfactual bits to the sequences and LW still being alive in the world", and I don't think that comes across in this bullet.

Replies from: Raemon
comment by Raemon · 2023-04-25T19:58:32.762Z · LW(p) · GW(p)

Yeah I agree with this, and agree it's worth emphasizing more. I'm updating the most recent announcement to indicate this more, since not everyone's going to read everything in this thread.

Replies from: Benito
comment by Ben Pace (Benito) · 2023-04-26T00:55:12.680Z · LW(p) · GW(p)

Great!

comment by DanielFilan · 2023-04-18T01:23:45.851Z · LW(p) · GW(p)

I could imagine an admin feature that literally just lets Said comment a few times on a post, but if he gets significantly downvoted, gives him a wordcount-based rate-limit that forces him to wrap up his current points quickly and then call it a day.

I feel like this incentivizes comments to be short, which doesn't make them less aggravating to people. For example, IIRC people have complained about him commenting "Examples?". This is not going to be hit hard by a rate limit.

Replies from: gwern
comment by gwern · 2023-04-18T02:17:12.455Z · LW(p) · GW(p)

'Examples?' is one of the rationalist skills [LW · GW] most lacking on LW2 and if I had the patience for arguments I used to have, I would be writing those comments myself. (Said is being generous in asking for only 1. I would be asking for 3, like Eliezer.) Anyone complaining about that should be ashamed that they either (1) cannot come up with any, or (2) cannot forthrightly admit "Oh, I don't have any yet, this is speculative, so YMMV".

Replies from: Duncan_Sabien, Raemon
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-18T03:29:43.355Z · LW(p) · GW(p)

Spending my last remaining comment here.

I join Ray and Gwern in noting that asking for examples is generically good (and that I've never felt or argued to the contrary). Since my stance on this was called into question, I elaborated [LW(p) · GW(p)]:

If one starts out looking to collect and categorize evidence of their conversational partner not doing their fair share of the labor, then a bunch of comments that just say "Examples?" would go into the pile. But just encountering a handful of comments that just say "Examples?" would not be enough to send a reasonable person toward the hypothesis that their conversational partner reliably doesn't do their fair share of the labor.

"Do you have examples?" is one of the core, common, prosocial moves, and correctly so. It is a bid for the other person to put in extra work, but the scales of "are we both contributing?" don't need to be balanced every three seconds, or even every conversation. Sometimes I'm the asker/learner and you're the teacher/expounder, and other times the roles are reversed, and other times we go back and forth.

The problem is not in asking someone to do a little labor on your behalf. It's having 85+% of your engagement be asking other people to do labor on your behalf, and never reciprocating, and when people are like, hey, could you not, or even just a little less? being supercilious about it.

My recent experience has been that saying "this is half-baked" is not met with a subsequent shift in commentary toward the "Oh, I don't have any yet, this is speculative, so YMMV" tone.

I think it would be nice if LW could have both tones:

  • I'm claiming this quite confidently; bring on the challenges, I'm ready to convince
  • I have a gesture in a direction I'm pretty sure has merit, but am not trying to e.g. claim that if others don't update to my position they're wrong; this is a sapling and I'd like help growing it, not help stepping on it.

Trying to do things in the latter tone on LW has felt, to me, extremely anti-rewarding of late, and I'm hoping that will change, because I think a lot of good work happens there. That's not to say that the former tone is bad; it feels like they are twin pillars of intellectual progress.

Replies from: Davidmanheim
comment by Davidmanheim · 2023-04-18T10:27:25.215Z · LW(p) · GW(p)

Noting that my very first lesswrong post [LW · GW], back in the LW1 days, was an example of #2. I was wrong on some of the key parts of the intuition I was trying to convey, and ChristianKl [LW · GW] corrected me. As an introduction to posting on LW, that was pretty good - I'd hate to think that's no longer acceptable.

At the same time, there is less room for it now that the community has gotten much bigger, and I'd probably weak-downvote a similar post today, rather than trying to engage with a similar mistake, given how much content there is. Not sure if there is anything that can be done about this, but it's an issue.

Replies from: Raemon
comment by Raemon · 2023-04-18T16:44:56.341Z · LW(p) · GW(p)

fwiw that seems like a pretty great interaction. ChristianKl seems to be usefully engaging with your frame while noting things about it that don't seem to work, seems (to me) to have optimized somewhat for being helpful, and also the conversation just wraps up pretty efficiently. (and I think this is all a higher bar than what I mean to be pushing for, i.e. having only one of those properties would have been fine)

Replies from: Davidmanheim
comment by Davidmanheim · 2023-04-20T06:45:55.802Z · LW(p) · GW(p)

I agree - but think that now, if and when similarly initial thoughts on a conceptual model are proposed, there is less ability or willingness to engage, especially with people who are fundamentally confused about some aspect of the issue. This is largely, I believe, due to the volume of new participants, and the reduced engagement for those types of posts.

comment by Raemon · 2023-04-18T03:10:01.534Z · LW(p) · GW(p)

I want to reiterate that I actually think the part where Said says "examples?" is basically just good [LW(p) · GW(p)] (and is only bad insofar as it creates a looming worry of particular kinds of frustrating, unproductive and time-consuming conversations that are likely to follow in some subsets of discussions)

(edit: I actually am pretty frustrated that "examples?" became the go-to example people talked about and reified as a kinda rude thing Said did. I think I basically agree this process is good:

  1. Alice -> writes confident posts without examples
  2. Bob -> says "examples?"
  3. Alice -> either gives (at least one, and yeah ideally 3) examples, or says "Oh, I don't have any yet, this is speculative, so YMMV", or doesn't reply but feels a bit chagrined. 

)

Replies from: DanielFilan
comment by DanielFilan · 2023-04-18T03:40:51.933Z · LW(p) · GW(p)

Oops, sorry for saying something that probabilistically implied a strawman of you.

comment by Ruby · 2023-04-17T23:02:23.153Z · LW(p) · GW(p)

was surprised and updated on You Don't Exist, Duncan [LW · GW] getting as heavily upvoted as it did

I'm not sure what you think this is strong evidence of?

Replies from: Raemon
comment by Raemon · 2023-04-17T23:07:18.321Z · LW(p) · GW(p)

I don't think it's "strong" evidence per se, but it was evidence that something I'd previously thought was a specific pet peeve of Duncan's was in fact objected to by more LessWrong folk. 

(Where the thing in question is something like "making sweeping ungrounded claims about other people... but in a sort of colloquial/hyperbolic way which most social norms don't especially punish".)

Replies from: Ruby
comment by Ruby · 2023-04-17T23:18:02.491Z · LW(p) · GW(p)

It's some evidence for that, but the post also seems likely to get upvoted on the basis of "well written and evocative of a difficult personal experience", or because people relate to being outliers and unusual even if they didn't feel alienated and hurt in quite the same way. I'm unsure.

Replies from: lahwran
comment by the gears to ascension (lahwran) · 2023-04-19T05:44:09.011Z · LW(p) · GW(p)

I upvoted it because it made me finally understand what in the world might be going on in Duncan's head to make him react the way he does

comment by Screwtape · 2023-04-18T16:00:53.939Z · LW(p) · GW(p)

If the lifeguard isn't on duty, then it's useful to have the ability to be your own lifeguard.

I wanted to say that I appreciate the moderation style options and authors being able to delete and ban for their posts. While we're talking about what to change and what isn't working, I'd like to weigh in on the side of that being a good set of features that should be kept. Raemon, you've mentioned [LW(p) · GW(p)] those features are there to be used. I've never used the capability and I'm still glad it exists. (I can barely use it, actually.) Since site-wide moderators aren't going to intervene everywhere quickly (which I don't think they should or even could; moderators are heavily outnumbered), I think letting people moderate their local piece is good.

If I ran into lots of negative feedback I didn't think was helpful and it wasn't getting moderated by me or the site admins, I'd just move my writing to a blog on a different website where I could control things. Possibly I'd set up crossposting like Zvi or Jefftk and then ignore the LessWrong comment section. If lots of people do that, then we get the diaspora effect from late LessWrong 1.0. Having people at least crossposting to LessWrong seems good to me, since I like tools like the agreement karma and the tag upvotes. Basically, the BATNA for a writer who doesn't like LessWrong's comment section is Wordpress or Substack. Some writers you'd rather have go elsewhere, obviously, but Said's and Duncan's top-level posts seem mostly a good fit here. 

I do have a question about norm setting I'm curious about. If Duncan had titled his post "Duncan's Basics of Rationalist Discourse" would that have changed whether it merited the exception around pushing site wide norms? What if lots of people started picking Norm Enforcing for the moderation guidelines and linking to it?

Replies from: Raemon
comment by Raemon · 2023-04-18T16:10:19.254Z · LW(p) · GW(p)

I do have a question about norm setting I'm curious about. If Duncan had titled his post "Duncan's Basics of Rationalist Discourse" would that have changed whether it merited the exception around pushing site wide norms? What if lots of people started picking Norm Enforcing for the moderation guidelines and linking to it?

Yeah I think this'd be much less cause for concern. (I haven't checked whether the rest of the post has anything else that felt LW-wide-police-y about it, I'd maybe have wanted a slightly different opening paragraph or something)

comment by Viliam · 2023-04-18T09:16:13.055Z · LW(p) · GW(p)

One thing I'd want going forward is a more public comment that, if he's going to keep posting on LessWrong, he's not going to do that (take down all his LW posts) again.

I think Duncan also posts all his articles on his own website, is this correct?

In that case, would it be okay to replace the article on LW with a link to Duncan's website? That way the articles stay there, the comments stay here, the page with comments links to the article, but the article does not link to the page with comments.

I am not suggesting doing this. I am asking: if Duncan (or anyone else) hypothetically decided at some moment, for whatever reason, that he is uncomfortable with his articles being on LW, would doing this (moving the articles elsewhere and replacing them with links to the new location) be acceptable to you? That is, could this be a policy of "if you decide to move away from LW, this is our preferred way to do it"?

comment by Drake Morrison (Leviad) · 2023-04-19T00:01:57.764Z · LW(p) · GW(p)

Are we entertaining technical solutions at this point? If so, I have some ideas. This feels to me like a problem of balancing the two kinds of content on the site. Balancing babble to prune, artist to critic, builder to breaker. I think Duncan wants an environment that encourages more Babbling/Building. Whereas it seems to me like Said wants an environment that encourages more Pruning/Breaking. 

Both types of content are needed. Writing posts pattern matches with Babbling/Building, whereas writing comments matches closer to Pruning/Breaking. In my mind anyway. (update: prediction market)

Inspired by this post [LW · GW] I propose enforcing some kind of ratio between posts and comments. Say you get 3 comments per post before you get rate-limited?[1] This way if you have a disagreement or are misunderstanding a post there is room to clarify, but not room for demon threads. If it takes more than a few comments to clarify, that is an indication of a deeper model disagreement, and you should just go ahead and write your own post explaining your views. (As an aside, I would hope this creates an incentive to write posts in general, to help with the inevitable writer turnover.)

Obviously the exact ratio doesn't have to be 3 comments to 1 post. It could be 10:1 or whatever the mod team wants to start with before adjusting as needed.

  1. ^

    I'm not suggesting that you get rate-limited site-wide if you start exceeding 3 comments per post. Just that you are rate-limited on that specific post. 
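A minimal sketch of the per-post limit proposed here (the class and method names are hypothetical; the key design choice, per the footnote, is that the limit applies per post rather than site-wide):

```python
from collections import defaultdict

class PerPostCommentLimit:
    """Allow up to `limit` comments per user on any single post; further
    comments on that post are blocked, while commenting on other posts
    is unaffected (i.e. this is not a site-wide rate limit)."""
    def __init__(self, limit=3):
        self.limit = limit
        self.counts = defaultdict(int)  # (user, post_id) -> comments so far

    def allow_comment(self, user, post_id):
        if self.counts[(user, post_id)] >= self.limit:
            return False  # over the per-post limit
        self.counts[(user, post_id)] += 1
        return True
```

The ratio version (comments earned per post written) would just make `limit` a function of the user's post count.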

Replies from: Jasnah_Kholin, Raemon
comment by Jasnah Kholin (Jasnah_Kholin) · 2023-04-19T07:54:45.605Z · LW(p) · GW(p)

i find the fact that you see comments as criticism, and not as expanding and continuing the building, indicative of what i see as problematic. good comments should most of the time not be criticism, but part of the building. 

the dynamic that is good in my eyes is one where comments make the post better not by criticizing it, but by sharing examples, personal experiences, intuitions, and the relations of those with the post. 

counting all comments as prune instead of babble disincentivizes babble-comments. is this what you want?

Replies from: Leviad
comment by Drake Morrison (Leviad) · 2023-04-19T23:24:19.663Z · LW(p) · GW(p)

I don't see all comments as criticism. Many comments are of the building-up variety! It's that prune-comments and babble-comments have different risk-benefit profiles, and verifying whether a comment is building up or breaking down a post is difficult at times. 

Send all the building-comments you like! I would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions, and relations.

The benefits of building-comments are easy to get in 3 comments per day per post. The risks of prune-comments (spawning demon threads) are easy to mitigate by only allowing 3 comments per day per post. 

Replies from: Jasnah_Kholin
comment by Jasnah Kholin (Jasnah_Kholin) · 2023-04-20T07:01:34.626Z · LW(p) · GW(p)

i think we have very different models of things, so i will try to clarify mine. my best babble-site example is not in English, so i will give another one - the Emotional Labor thread on MetaFilter, and MetaFilter as a whole. just look at the sheer LENGTH of this page!

https://www.metafilter.com/151267/Wheres-My-Cut-On-Unpaid-Emotional-Labor

there are many more than 3 comments per person there.

from my point of view, this rule creates a hard ceiling that forbids the best discussions from happening. because the best discussions are creative back-and-forth. my best discussions with friends are - one shares a model, one asks questions, or shares a different model, or shares experience, the other reacts, etc. for way more than three comments. more like 30 comments. it's dialog. and there are a lot of unproductive examples of that on LW. and it's quite possible (as in, i assign to it probability of 0.9) that in first-order effects, it will cut out unproductive discussions and will be positive.

but i find rules that prevent the best things from happening bad in some way that i can't explain clearly. something like, i'm here to try to go higher. if it's impossible, then why bother? 

i also think it's a VERY restrictive rule. i wrote more than three comments here, and you are the first one to answer me. like, i'm just right now taking part in a counter-example to "would find it surprising if you needed more than 3 comments per day to share examples, personal experiences, intuitions and relations."

i shared my opinions on very different and unrelated parts of this conversation here. this is my sixth comment. and i feel i reacted very low-heat. the idea that i should avoid or conserve those comments to have only three makes me want to avoid commenting on LW altogether. the message i get from this rule is like... like i am assumed guilty of a thing i literally never do, and so have very restrictive rules placed on me, and it's very unfriendly in a way that i find hard to describe.

like, 90% of the activity this rule will restrict is legitimate, good comments. this is an awful false-positive ratio. even if you don't count the you-are-bad-and-unwelcome effect, which i feel from it and you, apparently, do not.

 

comment by Raemon · 2023-04-19T00:09:31.148Z · LW(p) · GW(p)

Yeah this is the sort of solution I'm thinking of (although it sounds like you're maybe making a more sweeping assumption than me?)

My current rough sense is that a rate limit of 3 comments per post per day (maybe with an additional wordcount based limit per post per day), would actually be pretty reasonable at curbing the things I'm worried about (for users that seem particularly prone to causing demon threads [LW · GW])

comment by Said Achmiz (SaidAchmiz) · 2023-04-17T23:34:40.107Z · LW(p) · GW(p)

Said and Duncan are both among the two single-most complained about users since LW2.0 started (probably both in top 5, possibly literally top 2).

Complaints by whom? And why are these complaints significant?

Are you taking the stance that all or most of these complaints are valid, i.e. that the things being complained about are clearly bad (and not merely dispreferred by this or that individual LW member)?

(See also this recent comment [LW(p) · GW(p)], where I argue that at least one particular characterization of my commenting activity is just demonstrably inconsistent with reality.)

Replies from: Raemon, jkaufman, Raemon
comment by Raemon · 2023-04-22T19:11:40.200Z · LW(p) · GW(p)

Here's a bit of metadata on this: I can recall offhand 7 complaints from users with 2000+ karma who aren't on the mod team (most of whom had significantly more than 2000 karma, and all of them had some highly upvoted comments and/or posts that are upvoted in the annual review). One of them cites you as being the reason they left LessWrong a few years ago, and ~3-4 others cite you as being a central instance of a pattern that means they participate less on LessWrong, or can't have particularly important types of conversations here.

I also think most of the mod team (at least 4 of them? maybe more) have had such complaints (as users, rather than as moderators)

I think there's probably at least 5 more people who complained about you by name who I don't think have particularly legible credibility beyond "being some LessWrong users." 

I'm thinking about my reply to "are the complaints valid tho?". I have a different ontology here.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-22T20:15:15.100Z · LW(p) · GW(p)

There are some problems with this [LW · GW] as pointing in a particular direction. There is little opportunity for people to be prompted to express opposite-sounding opinions, and so only the above opinions are available to you.

I have a concern that Said and Zack are an endangered species that I want there to be more of on LW and I'm sad they are not more prevalent. I have some issues with how they participate, mostly about tendencies towards cultivating infinite threads instead of quickly de-escalating and reframing, but this in my mind is a less important concern than the fact that there are not enough of them. Discouraging or even outlawing Said cuts that significantly, and will discourage others.

comment by jefftk (jkaufman) · 2023-04-18T13:31:51.742Z · LW(p) · GW(p)

Ray pointing out the level of complaints is informative even without (far more effort) judgement on the merits of each complaint. There being a lot of complaints is evidence (to both the moderation team and the site users) that it's worth putting in effort here to figure out if things could be better.

Replies from: Vladimir_Nesov, pseud
comment by Vladimir_Nesov · 2023-04-22T20:03:30.761Z · LW(p) · GW(p)

There being a lot of complaints is evidence [...] that it's worth putting in effort here to figure out if things could be better.

It is evidence that there is some sort of problem. It's not clear evidence about what should be done about it, about what "better" means specifically. Instituting ways of not talking about the problem anymore doesn't help with addressing it [LW(p) · GW(p)].

comment by pseud · 2023-04-20T07:46:15.810Z · LW(p) · GW(p)

It didn't seem like Said was complaining about the reports being seen as evidence that it is worth figuring out whether things could be better. Rather, he was complaining about them being used as evidence that things could be better.

Replies from: philh
comment by philh · 2023-04-20T12:17:51.078Z · LW(p) · GW(p)

If we speak precisely... in what way would they be the former without being the latter? Like, if I now think it's more worth figuring out whether things could be better, presumably that's because I now think it's more likely that things could be better?

(I suppose I could also now think the amount-they-could-be-better, conditional on them being able to be better, is higher; but the probability that they could be better is unchanged. Or I could think that we're currently acting under the assumption that things could be better, I now think that's less likely so more worth figuring out whether the assumption is wrong. Neither seems like they fit in this case.)

Separately, I think my model of Said would say that he was not complaining, he was merely asking questions (perhaps to try to decide whether there was something to complain about, though "complain" has connotations there that my model of Said would object to).

So, if you think the mods are doing something that you think they shouldn't be, you should probably feel free to say that (though I think there are better and worse ways to do so).

But if you think Said thinks the mods are doing something that Said thinks they shouldn't be... idk, it feels against-the-spirit-of-Said to try to infer that from his comment? Like you're doing the interpretive labor that he specifically wants people not to do.

Replies from: pseud
comment by pseud · 2023-04-20T16:43:55.816Z · LW(p) · GW(p)

My comment wasn't well written, I shouldn't have used the word "complaining" in reference to what Said was doing. To clarify:

As I see it, there are two separate claims:

  1. That the complaints prove that Said has misbehaved (at least a little bit)
  2. That the complaints increase the probability that Said has misbehaved 

Said was just asking questions - but baked into his questions is the idea of the significance of the complaints, and this significance seems to be tied to claim 1. 

Jefftk seems to be speaking about claim 2. So, his comment doesn't seem like a direct response to Said's comment, although the point is still a relevant one. 

comment by Raemon · 2023-04-20T08:01:00.246Z · LW(p) · GW(p)

(fyi I do plan to respond to this, although don't know how satisfying it'll be when I do)

comment by Ruby · 2023-04-23T23:41:29.585Z · LW(p) · GW(p)

Warning to Duncan

(See also: Raemon's moderator action on Said [LW(p) · GW(p)])

Since we were pretty much on the same page, Raemon delegated writing this warning to Duncan to me, and signed off on it.

Generally, I am quite sad if, when someone points/objects to bad behavior, they end up facing moderator action themselves. It doesn’t set a great incentive. At the same time, some of Duncan’s recent behavior also feels quite bad to me, and to not respond to it would also create a bad incentive – particularly if the undesirable behavior results in something a person likes.

Here’s my story of what happened, building off of some of Duncan’s own words and his endorsement of something I said in a previous exchange with him:

Duncan felt that Said engaged in various behaviors that hurt him (confident based on Duncan’s words) and were in general bad (inferred from Duncan writing posts describing why those behaviors are bad). Such bad/hurtful behaviors include strawmanning, psychologizing at length, and failing to put in symmetric effort. For example, Said argued that Duncan banned him from his posts because Said disagreed. I am pretty sympathetic to these accusations against Said (and endorse moderation action against Said) and don’t begrudge Duncan any feelings of frustration and hurt he might have.

Duncan additionally felt that the response of other users (e.g. in voting patterns) and moderators was not adequate.

I much prefer the world where there are competent police to the world where I have to fight off muggers in the alley. - source [LW(p) · GW(p)]

and 

I dunno.  If mods would show up and be like "false" and "cut it out" I would pretty happily never get into a scrap on LW ever again.”

Given what he felt to be the inadequate response from others, Duncan decided to defend himself (or try to cause others to defend him). His manner of doing so, I feel, generates quite a few costs that warrant moderator action to incentivize against Duncan or others imposing these costs on the site and mods in the future.

The following is a summary of what I consider Duncan’s self-defensive behavior (not necessarily in order of occurrence).

  1. Argued back and forth in the comments
  2. Banned Said from his posts
  3. Argued more in comments not on his own posts [LW(p) · GW(p)]
  4. Requested that the moderators intervene, and quickly (offsite)
  5. Wrote a top-level post [LW · GW] at least somewhat in response to Said (planned to write it anyhow, but prioritized it based on Said interactions), which was interpreted by others as being about Said and calling for banning him.
  6. In further comments, identified statements that he says caused him to categorize and treat Said as an intentional liar [LW(p) · GW(p)].
  7. Said he’d prefer a world where both he and Said were banned to one where neither was.
  8. Accused the LessWrong moderators [LW(p) · GW(p)] of not maintaining a tended garden, and said that perhaps he should just leave.

Individually and done occasionally, I think many of these actions are fine. The “ban users from your posts” feature is there so that you don’t have to engage with a user you don’t want to; as a mod, I appreciate people flagging behavior they think isn’t good; writing top-level posts describing why you think certain behaviors are bad (in a timeless/universal way) is also good; and if the site doesn’t make you feel safe, saying so and leaving also seems legit (I’d be sad if that were true, but I’d rather know it than have someone leave silently).

Requesting quick moderator intervention, announcing that he categorizes and treats Said as an intentional liar, saying that he’d prefer that both he and Said be banned rather than neither, and writing a post that at least some people interpreted as calling for Said to be banned feel like a pretty “aggressive” response. Combined with the other behaviors that are more usually okay but still confrontational, it feels to me like Duncan’s response was quite escalatory in a way that generates costs.

First, I think it’s bad to have users on the site whom others are afraid of getting into conflict with. Naturally, people weigh the expected value and expected costs of posting/commenting/etc., and I know with high confidence that I and at least three others (and I assume quite a few more) are pretty afraid to get into conflict with Duncan, because Duncan argues long and hard and generally invests a lot of time defending himself against what feels like harm, e.g. all the ways he has done so on this occasion. I assume here that others are similar to me (not everyone, but enough) in being quite wary of accidentally doing something Duncan reacts to as a terrible norm violation, because doing so can result in a really unpleasant conflict (this has happened twice that I know of with other LW team members).

I recognize that Duncan feels like he’s trying to make LessWrong a place that’s net positive for him to contribute to, and does so in some prosocial ways (e.g. writing Basics of Rationalist Discourse), but I need to call out ways in which his manner of doing so also causes harm, e.g. a climate of fear where people won’t express disagreement because defending themselves against Duncan would be extremely exhausting and effortful.

This is worsened by the fact that Duncan is often advocating for norms. If he were writing about trees and you were afraid to disagree, it might not be a big deal. But if he is arguing for norms for your community, it’s worse if you think he might be advocating something wrong but disagreeing feels very risky.

Second, Duncan’s behavior directly or indirectly requires moderator attention, sometimes fairly immediately (partly because he’s requested a quick response, and partly because if there’s an overt conflict between users, mods really ought to chime in sooner rather than later). I would estimate that the team has collectively spent 40+ hours on moderation over two weeks in response to recent events (some of that I place on Said, who probably needed moderation anyway), but the need to drop other work and respond to the conflict right now is time-consuming and disruptive. Not counting exactly, it feels like this has happened periodically for several years with Duncan.

Duncan is a top contributor to the site, and I think for the most part advocates for good norms, so it feels worth it to devote a good amount of time and attention to his requests, but only so much. So there’s a cost there I want to call out that was incurred from recent behavior. (I think that if Duncan had notified us that he really didn’t like some of Said’s behavior, pointed to a thread, and said he’d like a response within two months or else he might leave the site – that would have been vastly less costly to us than what happened.)

I don’t think we’ve previously pointed out the costs here, so it’s fair to issue a warning rather than any harsher action.


Duncan, if you do things that impose what feel to me like costs of:

  • Taking actions such that I predict users will be afraid to engage with you, at the same time as you advocate for norms
  • Demanding fast responses to things you don’t like, thereby costing a lot of resources from mods in excess of what seems reasonable (and you're basically out of budget for a long while now)

The moderators will escalate moderator action in response, e.g. rate limits or bans of escalating duration.


A couple of notes of clarification. I feel that this warning is warranted on the basis of Duncan’s recent behavior re: Said alone, but my thinking is informed by similar-ish patterns from the past that I didn’t get into here. Also, for other users wondering if this warning could apply to them: theoretically, yes, but I think most users aren’t at all close to doing the things described here that I don’t like. If you have not previously had extensive engagement with the mods about a mix of your complaints and behavior, then what I’m describing here as objectionable is very unlikely to be something you’re doing.

To close, I’ll say I’m sad that the current LessWrong feels like somewhere where you, Duncan, need to defend yourself. I think many of your complaints are very very reasonable, and I wish I had the ability to immediately change things. It’s not easy and there are many competing tradeoffs, but I do wish this was a place where you felt like it was entirely positive to contribute.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-24T01:31:42.913Z · LW(p) · GW(p)

Just noting as a "for what it's worth"

(b/c I don't think my personal opinion on this is super important or should be particularly cruxy for very many other people)

that I accept, largely endorse, and overall feel fairly treated by the above (including the week suspension that preceded it).

comment by Raemon · 2023-04-23T23:57:04.770Z · LW(p) · GW(p)

Moderation action on Said

(See also: Ruby's moderator warning for Duncan [LW(p) · GW(p)])

I’ve been thinking for a week, and trying to sanity-check whether there are actual good examples of Said doing-the-thing-I’ve-complained-about, rather than “I formed a stereotype of Said and pattern match to it too quickly”, and such. 

I think Said is a pretty confusing case though. I’m going to lay out my current thinking here, in a number of comments, and I expect at least a few more days of discussion as the LessWrong community digests this. I’ve pinned this post to the top of the frontpage for the day so users who weren’t following the discussion can decide whether to weigh in.

Here’s a quick overview of how I think about Said moderation:

  • Re: Recent Duncan Conflict. 
    • I think he did some moderation-worthy things in the recent conflict with Duncan, but a) so did Duncan, and I think there’s an “it takes two to tango” aspect to demon threads, b) at most, those’d result in me giving one or both of them a 1-week ban and then calling it a day. I basically endorse Vaniver’s take [LW(p) · GW(p)] on some of the object-level stuff. I have a bit more to say, but not much.
  • Overall pattern. 
  • Not sufficient corresponding upside
    • I’d be a lot less wary of the previous pattern if I felt like Said was also contributing significantly more value to LessWrong. [Edit: I do, to be clear, think Said has contributed significant value, both in terms of keeping the spirit of the sequences alive in the world [LW(p) · GW(p)] ala readthesequences.com, and through being a voice with a relatively rare (these days) perspective that keeps us honest in important ways. But I think the costs are, in fact, really high, and I think the object level value isn't enough to fully counterbalance it]
  • Prior discussion and warnings. 
    • We’ve had numerous discussions with Said about this (I think we’ve easily spent 100+ hours of moderator-time on it, and probably more like 200), including an explicit moderation warning.
  • Few recent problematic pattern instances. 
    • That all said, prior to this ~month’s conflict with Duncan, I don’t have a confident belief that Said has recently strongly embodied the pattern I’m worried about. I think it was more common ~5 years ago. I cut Said some slack for the convo with Duncan because I think Duncan is kind of frustrating to argue with. 
    • THAT said, I think it’s crept up at least somewhat occasionally in the past 3 years, and having to evaluate whether it’s creeping up to an unacceptable level is fairly costly. 
      • THAT THAT said, I do appreciate that the first time we gave him an explicit moderation notice, I don’t think we had any problems for ~3 years afterwards.
  • Strong(ish) statement of intent
    • Said’s made a number of comments that make me think he would still be doing a pattern I consider problematic if the opportunity arose. I think he’ll follow the letter of the law if we give it to him, but it’s difficult to specify a letter-of-the-law that does the thing I care about.

A thing that is quite important to me is that users feel comfortable ignoring Said if they don’t think he’s productive to engage with. (See below for more thoughts on this). One reason this is difficult is that it’s hard to establish common knowledge about it among authors. Another reason is that I think Said’s conversational patterns have the effect of making authors and other commenters feel obliged to engage with him (but, this is pretty hard to judge in a clear-cut way)

For now, after a bunch of discussion with other moderators, reading the thread-so-far, and talking with various advisors – my current call is giving Said a rate limit of 3-comments-per-post-per-week. See this post on the general philosophy of rate limiting as a moderation tool we’re experimenting with [LW · GW]. I think there’s a decent chance we’ll ship some new features soon that make this actually a bit more lenient, but don’t want to promise that at the moment.

I am not very confident in this call, and am open to more counterarguments here, from Said or others. I’ll talk more about some of the reasoning here at the end of this comment. But I want to start by laying out some more background reasoning for the entire moderation decision.

In particular, if either Said makes a case that he can obey the spirit of “don’t imply people have an obligation to engage with his comments”; or, someone suggests a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way, I’d feel fairly good about revoking the rate-limit.

(Note: one counterproposal I’ve seen is to develop a rate-limit based entirely on karma rather than moderator judgment, and that it is better to do this than to have moderators make individual judgment calls about specific users. I do think this idea has merit, although it’s hard to build. I have more to say about it at the end)
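The per-post, per-week limit described above is simple enough to sketch. The following is an illustrative sketch only, under invented names and an invented data shape; it is not the actual LessWrong implementation:

```typescript
// Hypothetical sketch of a "3 comments per post per week" check.
// The Comment shape, function names, and defaults are all invented
// for illustration; the real site's data model will differ.

interface Comment {
  userId: string;
  postId: string;
  postedAt: Date;
}

const WEEK_MS = 7 * 24 * 60 * 60 * 1000;

function canComment(
  history: Comment[],
  userId: string,
  postId: string,
  now: Date,
  maxPerPostPerWeek = 3,
): boolean {
  // Count this user's comments on this post within the trailing week.
  const recent = history.filter(
    (c) =>
      c.userId === userId &&
      c.postId === postId &&
      now.getTime() - c.postedAt.getTime() < WEEK_MS,
  );
  return recent.length < maxPerPostPerWeek;
}
```

Note that because the window is per-post, a rate-limited user can still comment freely elsewhere on the site, which matches the stated intent of taxing deep back-and-forths rather than initial engagement.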

Said Patterns

3 years ago Habryka summarized a pattern we’d seen a lot [LW(p) · GW(p)]:

The usual pattern of Said's comments as I experience them has been (and I think this would be reasonably straightforward to verify): 

  • Said makes a highly upvoted comment asking a question, usually implicitly pointing out something that is unclear to many in the post
  • Author makes a reasonably highly upvoted reply
  • Said says that the explanation was basically completely useless to him, this often gets some upvotes, but drastically less than the top-level question
  • Author tries to clarify some more, this gets much fewer upvotes than the original reply
  • Said expresses more confusion, this usually gets very few upvotes
  • More explanations from the author, almost no upvotes
  • Said expresses more confusion, often being downvoted and the author and others expressing frustration

I think the most central example of this is in this thread on circling [LW · GW], where AFAICT Said asked for examples of some situations where social manipulation is “good.” Qiaochu and Sarah Constantin offer some examples. Said responds to both of them by questioning their examples and doubting their experience in a way that is pretty frustrating to respond to (and in the Sarah case seemed to me like a central example of Said missing the point, and the evo-psych argument not even making sense in context, which makes me distrust his taste on these matters). [1 [LW(p) · GW(p)], 2 [LW(p) · GW(p)]] 

I don’t actually remember more examples of that pattern offhand. I might be persuaded that I overupdated on some early examples. But after thinking a few days, I think a cruxy piece of evidence on how I think it makes sense to moderate Said is this comment from ~3 years ago [LW(p) · GW(p)]:

There is always an obligation by any author to respond to anyone’s comment along these lines*.  If no response is provided to (what ought rightly to be) simple requests for clarification (such as requests to, at least roughly, define or explain an ambiguous or questionable term, or requests for examples of some purported phenomenon), the author should be interpreted as ignorant. These are not artifacts of my particular commenting style, nor are they unfortunate-but-erroneous implications—they are normatively correct general principles.

*where I think “these lines” means “asking for examples”, “asking people to define terms,” etc.

For completeness, Said later elaborates:

Where does that obligation come from?

I should clarify, first of all, that the obligation by the author to respond to the comment is not legalistically specific. By this I mean that it can be satisfied in any of a number of ways; a literal reply-to-comment is just one of them. Others include:

  • Mentioning the comment in a subsequent post (“In the comments on yesterday’s post, reader so-and-so asked such-and-such a question. And I now reply thus: …”).
  • Linking to one’s post or comment elsewhere which constitutes an answer to the question.
  • Someone else linking to a post or comment elsewhere (by the OP) which constitutes an answer to the question.
  • Someone else answering the question in the OP’s stead (and the OP giving some indication that this answer is endorsed).
  • Answering an identical, or very similar, question elsewhere (and someone providing a link or citation).

In short, I’m not saying that there’s a specific obligation for a post author to post a reply comment, using the Less Wrong forum software, directly to any given comment along the lines I describe. 

Habryka and Said discussed it at length at the time.

I want to reiterate that I think asking for examples is fine (and would say the same thing for questions like “what do you mean by ‘spirituality’?” or whatnot). I agree that a) authors generally should try to provide examples in the first place, b) if they don’t respond to questions about examples, that’s bayesian evidence about whether their idea will ground out into something real. I’m fairly happy with clone of saturn's variation [LW(p) · GW(p)] on Said’s statement, that if the author can’t provide examples, “the post should be regarded as less trustworthy” (as opposed to “author should be interpreted as ignorant”), and gwern’s note [LW(p) · GW(p)] that if they can’t, they should forthrightly admit “Oh, I don’t have any yet, this is speculative, so YMMV”.

The thing I object fairly strongly to is “there is an obligation on the part of the author to respond.” 

I definitely don’t think there’s a social obligation, and I don’t think most LessWrongers think that. (I’m not sure if Said meant to imply that). Insofar as he means there’s a bayesian obligation-in-the-laws-of-observation/inference, I weakly agree but think he overstates it: there’s a lot of reasons an author might not respond (“belief that a given conversation won’t be productive,” “volume of such comments,” “trying to have a 202 conversation and not being interested in 101 objections,” and simple opportunity cost). 

From a practical ‘things that the LessWrong culture should socially encourage people to do’, I liked Vladimir's point [LW(p) · GW(p)] that:

My guess is that people should be rewarded [LW(p) · GW(p)] for ignoring criticism [LW(p) · GW(p)] they want to ignore, it should be convenient for them to do so. [...] This way authors are less motivated to take steps that discourage criticism (including steps such as not writing things). Criticism should remain convenient, not costly, and directly associated with the criticized thing (instead of getting pushed to be published elsewhere).

i.e. I want there to be good criticism on LW, and think that people feeling free to ignore criticism encourages more good criticism, in part by encouraging more posts and engagement.

It’s been a few years and I don’t know that Said still endorses the obligation phrasing, but much of my objection to Said’s individual stylistic commenting choices has to do with reinforcing this feeling of obligation. I also think (less confidently) that authors get the impression that Said believes that if they haven’t answered a question to his satisfaction (with him standing in as an example of a reasonable median LW user), they should feel a [social] obligation to keep trying until they succeed. 

Whether he intends this or not, I think it's an impression that comes across, and which exerts social pressure, and I think this has a significant negative effect on the site. 

I’m a bit confused about how to think about “prescribed norms” vs “good ideas that get selected on organically.” In a previous post Vladimir_Nesov argues that prescribing norms [LW(p) · GW(p)] generally doesn’t make sense. Habryka had a similar take yesterday when I spoke with him. I’m not sure I agree (and some of my previous language here has probably assumed a somewhat more prescriptivist/top-down approach to moderating LessWrong that I may end up disendorsing after chatting more with Habryka).

But even in a more organic approach to moderation, I, Habryka and Ruby think it’s pretty reasonable for moderators to take action to prevent Said from implying that there’s some kind of norm here and exerting pressure around it in other people’s comment sections, when, AFAICT, there is no consensus on such a norm. I predict a majority of LessWrong members would not agree with that norm, on either normative-Bayesian terms or consequentialist social-norm-design terms. (To be clear, I think many people just haven’t thought about it at all, but I expect them to at least weakly disagree when exposed to the arguments. “What is the actual collective endorsed position of the LW commentariat” is somewhat cruxy for me here.)

Rate-limit decision reasoning

If this were our first (or second or third) argument with Said over this, I’d think stating this clearly and giving him a warning would be a reasonable next action. Given that we’ve intermittently been arguing about this for 5 years, spending a hundred+ hours of mod time discussing it with him, it feels more reasonable to move to an ultimatum of “somehow, Said needs to stop exerting this pressure in other people’s comment threads, or moderators will take some kind of significant action to either limit the damage or impose a tax on it.”

If we were limited to our existing moderator tools, I would think it reasonable to ban him. But we are in the middle of setting up a variety of rate limiting tools to generally give mods more flexibility, and avoid being heavier-handed than we need to be.

I’m fairly open to a variety of options here. FWIW, I am interested in what Said actually prefers here. (I expect it is not a very fun conversation to be asked by the people-in-power “which way of constraining you from doing the thing you think is right seems least-bad to you?”, but, insofar as Said or others have an opinion on that I am interested)

I am interested in building an automated tool that detects demon threads and rate-limits people based on voting patterns. I most likely want to try to build such a tool regardless of what call we make on Said, and if I had a working version of such a tool I might be pretty satisfied with using it instead. My primary cruxes are 

a) I think it’s a lot harder to build and I’m not sure we can succeed, 
b) I do just think it’s okay for moderators to make judgment calls about individual users based on longterm trends. That’s sort of what mods are for. (I do think for established users it’s important for this process to be fairly costly and subjected to public scrutiny)
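One way such an automated tool might work, going off the declining-karma pattern Habryka described earlier in this comment: flag a user in a thread when their recent comment karma is monotonically falling and has gone negative. This is a toy heuristic with invented thresholds, purely to make the idea concrete, not a claim about what the eventual tool would do:

```typescript
// Toy heuristic for an automatic, voting-pattern-based rate limit:
// trigger when a user's last four comments in a thread show
// monotonically non-increasing karma ending in two negative scores.
// The window size and thresholds are invented for this sketch.

function shouldAutoRateLimit(recentKarma: number[]): boolean {
  if (recentKarma.length < 4) return false; // not enough signal yet
  const last = recentKarma.slice(-4);
  const declining = last.every((k, i) => i === 0 || k <= last[i - 1]);
  const negativeTail = last.slice(-2).every((k) => k < 0);
  return declining && negativeTail;
}
```

The hard part, as noted above, is less the arithmetic than validating that any such trigger actually tracks "unproductive escalatory thread" rather than, say, unpopular-but-valuable criticism.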

But for now, after chatting with Oli and Ruby and Robert, I’m implementing the 3-comments-per-post-per-week rule for Said. If we end up having time to build/validate an organic karma-based rate limit that solves the problem I’m worried about here, I might switch to that. Meanwhile, some additional features I haven’t shipped yet, which I can’t make promises about, but which I personally think would be good to ship soon include:

  • There’s at least a boolean flag for individual posts so authors can allow “rate limited people can comment freely”, and probably also a user-setting for this. Another possibility is a user-specific whitelist, but that’s a bit more complicated and I’m not sure if there’s anyone who would want that who wouldn’t want the simpler option.
    • I’d ideally have this flag set on this post, and probably on other moderation posts written by admins.
  • Rate-limited users in a given comment section have a small icon that lets you know they’re rate-limited, so you have reasonable expectations of when they can reply.
  • Updating the /moderation page to list rate limited users, ideally with some kind of reason / moderation-warning.
  • Updating rate limits to ensure that users can comment as much as they want on their own posts (we made a PR for this change a week ago and haven’t shipped it yet largely because this moderation decision took a lot of time)

Some reasons for this-specific-rate-limit rather than alternatives are:

  • 3 comments within a week is enough for an initial back-and-forth where Said asks questions or makes a critique, the author responds, and Said responds-to-the-response (i.e. allowing the 4 layers of intellectual conversation, and getting the parts of Said's comments that most people agree are valuable).
  • It caps the conversation before it can spiral into an unproductive, escalatory thread.
  • It signals culturally that the problem here isn’t about initial requests for examples or criticisms; it’s about the pattern that tends to play out deeper in threads. I think it’s useful for this to be legible both to authors engaging with Said, and to other commenters inferring site norms (i.e. some amount of Socrates is good, too much can cause problems [LW · GW])
  • If 3 comments isn’t enough to fully resolve a conversation, it’s still possible to follow up eventually.
  • Said can still write top level posts arguing for norms that he thinks would be better, or arguing about specific posts that he thinks are problematic. 

That all said, the idea of using rate-limits as a mod-tool is pretty new, I’m not actually sure how it’ll play out. Again, I’m open to alternatives. (And again, see this post [LW · GW] for more thoughts on rate limiting)

Feel free to argue with this decision. And again, in particular, if Said makes a case that he either can obey the spirit of “don’t imply people have an obligation to engage with your comments”, or someone can suggest a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way that Said thinks he can follow, I’d feel fairly good about revoking the rate-limit.

Replies from: rsaarelm, Raemon, Raemon, jimmy, SaidAchmiz, SaidAchmiz, Zack_M_Davis
comment by rsaarelm · 2023-04-25T04:49:26.642Z · LW(p) · GW(p)

This sounds drastic enough that it makes me wonder, since the claimed reason was that Said's commenting style was driving high-quality contributors away from the site, do you have a plan to follow up and see if there is any sort of measurable increase in comment quality, site mood or good contributors becoming more active moving forward?

Also, is this thing an experiment with a set duration, or a permanent measure? If it's permanent, it has a very rubber room vibe to it, where you don't outright ban someone but continually humiliate them if they keep coming by and wish they'll eventually get the hint.

comment by Raemon · 2023-04-25T19:56:28.587Z · LW(p) · GW(p)

A background model I want to put out here: two frames that feel relevant to me here are "harm minimization" and "taxing". I think the behavior Said does has unacceptably large costs in aggregate (and, perhaps to remind/clarify, I think a similar-in-some-ways set of behaviors I've seen Duncan do also would have unacceptably large costs in aggregate).

And the three solutions I'd consider here, at some level of abstraction, are:

  1. So-and-so agrees to stop doing the behavior (harder when the behavior is subtle and multifaceted, but, doable in principle)
  2. Moderators restrict the user such that they can't do the behavior to unacceptable degrees
  3. Moderators tax the behavior such that doing-too-much-of-it is harder overall (but, it's still something of the user's choice if they want to do more of it and pay more tax). 

All three options seem reasonable to me a priori; it's mostly a question of "is there a good way to implement them?". The current rate-limit-proposal for Said is mostly option 2. All else being equal I'd probably prefer option 3, but the options I can think of seem harder to implement and dev-time for this sort of thing is not unlimited.

comment by Raemon · 2023-07-29T19:58:25.063Z · LW(p) · GW(p)

Quick update for now: @Said Achmiz [LW · GW]'s rate limit has expired, and I don't plan to revisit applying-it-again unless a problem comes up. 

I do feel like there's some important stuff left unresolved here. @Zack_M_Davis [LW · GW]'s comment on this other post [LW(p) · GW(p)] asks some questions that seem worth answering. 

I'd hoped to write up something longer this week but was fairly busy, and it seemed better to explicitly acknowledge it. For the immediate future I think improving on the auto-rate-limits and some other systemic stuff seems more important than arguing or clarifying the particular points here.

comment by jimmy · 2023-04-25T19:12:31.035Z · LW(p) · GW(p)

A thing that is quite important to me is that users feel comfortable ignoring Said if they don’t think he’s productive to engage with. (See below for more thoughts on this). One reason this is difficult is that it’s hard to establish common knowledge about it among authors. Another reason is that I think Said’s conversational patterns have the effect of making authors and other commenters feel obliged to engage with him (but, this is pretty hard to judge in a clear-cut way)

 

It seems like the natural solution here would be something that establishes this common knowledge. Something like the Twitter "community notes" being attached to relevant comments that says something like "There is no obligation to respond to this comment; please feel comfortable ignoring this user if you don't think he will be productive to engage with. Discussion here [LW(p) · GW(p)]"

Replies from: Raemon
comment by Raemon · 2023-04-25T19:39:26.322Z · LW(p) · GW(p)

Yeah I did list that as one of my options I'd consider in the previous announcement [LW(p) · GW(p)]. 

A problem I anticipate is that it's some combination of ineffective, and also in some ways a harsher punishment. But if Said actively preferred some version of this solution I wouldn't be opposed to doing it instead of rate-limiting.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-25T22:18:46.216Z · LW(p) · GW(p)

Forgive me for making what may be an obvious suggestion which you’ve dismissed for some good reason, but… is there, actually, some reason why you can’t attach such a note to all comments? (UI-wise, perhaps as a note above the comment form, or something?) There isn’t an obligation, in terms of either the site rules or the community norms as the moderators have defined them, to respond to any comment, is there? (Perhaps with the exception of comments written by moderators…? Or maybe not even those?)

That is, it seems to me that the concern here can be characterized as a question of communicating forum norms to new participants. Can it not be treated as such? (It’s surely not unreasonable to want community members to refrain from actively interfering with the process of communicating rules and norms to newcomers, such as by lying to them about what those rules/norms are, or some such… but the problem, as such, is one which should be approached directly, by means of centralized action, no?)

Replies from: Benito
comment by Ben Pace (Benito) · 2023-04-26T00:42:03.444Z · LW(p) · GW(p)

I think it could be quite nice to give new users information about what site norms are and give a suggested spirit in which to engage with comments.

(Though I'm sure there's lots of things it'd be quite nice to tell new users about the spirit of the site, but there's of course bandwidth limitations on how much they'll read, so just because it's an improvement doesn't mean it's worth doing.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T02:10:09.103Z · LW(p) · GW(p)

If it’s worth banning[1] someone (and even urgently investing development resources into a feature that enables that banning-or-whatever!) because their comments might, possibly, on some occasions, potentially mislead users into falsely believing X… then it surely must be worthwhile to simply outright tell users ¬X?

(I mean, of all the things that it might be nice to tell new users, this, which—if this topic, and all the moderators’ comments on it, are to be believed—is so consequential, has to be right up at the top of list?)


  1. Or rate-limiting, or applying any other such moderation action to. ↩︎

Replies from: Raemon
comment by Raemon · 2023-04-26T02:23:20.805Z · LW(p) · GW(p)

because their comments might, possibly, on some occasions, potentially mislead users into falsely believing X

This is not what I said though.

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T20:03:07.807Z · LW(p) · GW(p)

Now that you’ve clarified [LW(p) · GW(p)] your objection here, I want to note that this does not respond to the central point of the grandparent comment:

If it’s worth applying moderation action and developing novel moderation technology to (among other things, sure) prevent one user from potentially sometimes misleading users into falsely believing X, then it must surely be worthwhile to simply outright tell users ¬X?

Communicating this to users seems like an obvious win, and one which would make a huge chunk of this entire discussion utterly moot.

Replies from: habryka4, Raemon
comment by habryka (habryka4) · 2023-04-26T20:30:39.517Z · LW(p) · GW(p)

If it’s worth applying moderation action and developing novel moderation technology to (among other things, sure) prevent one user from potentially sometimes misleading users into falsely believing X, then it must surely be worthwhile to simply outright tell users ¬X?

Adding a UI element, visible to every user, on every new comment they write, on every post they will ever interface with, because one specific user tends to have a confusing communication style seems unlikely to be the right choice. You are a UI designer and you are well-aware of the limits of UI complexity, so I am pretty surprised you are suggesting this as a real solution. 

But even assuming we did add such a message, there are many other problems: 

  • Posting such a message would communicate a level of importance of this specific norm, which does not actually come up very frequently in conversations that don't involve you and a small number of other users, that is not commensurate with its actual importance. We have the standard frontpage commenting guidelines, and they cover what I consider the actually most important things to communicate, and they are approximately the maximum length I expect new users to read. Adding this warning would have to displace one of the existing guidelines, which seems very unlikely to be worth it.
  • Banner blindness is real, and if you put the same block of text anywhere, people will quickly learn to ignore them. This has already happened with the existing moderation guidelines and frontpage guidelines. 
  • If you have a sign in a space that says "don't scream at people" but then lots of people do actually scream at you in that room, this doesn't actually really help very much, and more likely just reduces trust in your ability to set any kind of norm in your space. I've really done a lot of user interviews and talked to lots of authors about this pattern, and interfacing with you and a few other users definitely gets confidently interpreted as making a claim that authors and other commenters have an obligation to respond or otherwise face humiliation in front of the LessWrong audience. The correct response by users to your comments, in the presence of the box with the guideline, would be "There is a very prominent rule that says I am not obligated to respond, so why aren't you deleting or moderating the people who sure seem to be creating a strong obligation for me to respond?", which then would just bring us back to square one. 
    • My guess is you will respond to this with some statement of the form "but I have said many times that I do not think the norms are such that you have an obligation to respond", but man, subtext and text do just differ frequently in communication, and the subtext of your comments does really just tend to communicate the opposite. A way out of this situation might be that you just include a disclaimer in the first comment on every post, but I can also imagine that not working for a bunch of messy reasons.
    • I can also imagine you responding to this with "but I can't possibly create an obligation to respond, the only people who can do that are the moderators", which seems to be a stance implied by some other comments you wrote recently. This stance seems to me to fail to model how actual social obligations develop and how people build knowledge about social norms in a space. The moderators only set a small fraction of the norms and culture of the site, and of course individual users can create an obligation for someone to respond.

I am not super interested in going into depth here, but felt somewhat obligated to reply since your suggestion had some number of upvotes. 

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T21:32:00.606Z · LW(p) · GW(p)

First, concerning the first half of your comment (re: importance of this information, best way of communicating it):

I mean, look, either this is an important thing for users to know or it isn’t. If it’s important for users to know, then it just seems bizarre to go about ensuring that they know it in this extremely reactive way, where you make no real attempt to communicate it, but then when a single user very occasionally says something that sometimes gets interpreted by some people as implying the opposite of the thing, you ban that user. You’re saying “Said, stop telling people X!” And quite aside from “But I haven’t actually done that”, my response, simply from a UX design perspective, is “Sure, but have you actually tried just telling people ¬X?”

Have you checked that users understand that they don’t have an obligation to respond to comments?

If they don’t, then it sure seems like some effort should be spent on conveying this. Right? (If not, then what’s the point of all of this?)

Second, concerning the second half of your comment:

Frankly, this whole perspective you describe just seems bizarre.

Of course I can’t possibly create a formal obligation to respond to comments. Of course only the moderators can do that. I can’t even create social norms that responses are expected, if the moderators don’t support me in this (and especially if they actively oppose me). I’ve never said that such a formal obligation or social norm exists; and if I ever did say that, all it would take is a moderator posting a comment saying “no, actually” to unambiguously controvert the claim.

But on the other hand, I can’t create an epistemic obligation to respond, either—because it already either exists or already doesn’t exist, regardless of what I think or do.

So, you say:

I’ve really done a lot of user interviews and talked to lots of authors about this pattern, and interfacing with you and a few other users definitely gets confidently interpreted as making a claim that authors and other commenters have an obligation to respond or otherwise face humiliation in front of the LessWrong audience.

If someone writes a post and someone else (regardless of who it is!) writes a comment that says “what are some examples?”, then whether the post author “faces humiliation” (hardly the wording I’d choose, but let’s go with it) in front of the Less Wrong audience if they don’t respond is… not something that I can meaningfully affect. That judgment is in the minds of the aforesaid audience. I can’t make people judge thus, nor can I stop them from doing so. To ascribe this effect to me, or to any specific commenter, seems like willful denial of reality.

The correct response by users to your comments, in the presence of the box with the guideline, would be “There is a very prominent rule that says I am not obligated to respond, so why aren’t you deleting or moderating the people who sure seem to be creating a strong obligation for me to respond?”, which then would just bring us back to square one.

This would be a highly unreasonable response. And the correct counter-response by moderators, to such a question, would be:

“Because users can’t ‘create a strong obligation for you to respond’. We’ve made it clear that you have no such obligation. (And the commenters certainly aren’t claiming otherwise, as you can see.) It would be utterly absurd for us to moderate or delete these comments, just because you don’t want to respond to them. If you feel that you must respond, respond; if you don’t want to, don’t. You’re an adult and this is your decision to make.”

(You might also add that the downvote button exists for a reason. You might point out, additionally, that low-karma comments are hidden by default. And if the comments in question are actually highly upvoted, well, that suggests something, doesn’t it?)

Replies from: habryka4, Jiro
comment by habryka (habryka4) · 2023-04-26T21:44:23.165Z · LW(p) · GW(p)

(I am not planning to engage further at this point. 

My guess is you can figure out what I mean by various things I have said by asking other LessWrong users, since I don't think I am saying particularly complicated things, and I think I've communicated enough of my generators so that most people reading this can understand what the rules are that we are setting without having to be worried that they will somehow accidentally violate them. 

My guess is we also both agree that it is not necessary for moderators and users to come to consensus in cases like this. The moderation call is made, it might or might not improve things, and you are either capable of understanding what we are aiming for, or we'll continue to take some moderator actions until things look better by our models. I think we've both gone far beyond our duty of effort to explain where we are coming from and what our models are.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T22:14:24.819Z · LW(p) · GW(p)

This seems like an odd response.

In the first part of the grandparent comment, I asked a couple of questions. I can’t possibly “figure out what you mean” in those cases, since they were questions about what you’ve done or haven’t done, and about what you think of something I asked.

In the second part of the grandparent comment, I gave arguments for why some things you said seem wrong or incoherent. There, too, “figuring out what you mean” seems like an inapplicable concept.

I think we’ve both gone far beyond our duty of effort to explain where we are coming from and what our models are.

You and the other moderators have certainly written many words. But only the last few comments on this topic have contained even an attempted explanation of what problem you’re trying to solve (this “enforcement of norms” thing), and there, you’ve not only not “gone far beyond your duty” to explain—you’ve explicitly disclaimed any attempt at explanation. You’ve outright said that you won’t explain and won’t try!

comment by Jiro · 2023-06-10T01:38:17.103Z · LW(p) · GW(p)

If it’s important for users to know, then it just seems bizarre to go about ensuring that they know it in this extremely reactive way,

It's important for users to know when it comes up. It doesn't come up much except with you.

comment by Raemon · 2023-04-26T22:18:23.976Z · LW(p) · GW(p)

(I wrote the following before habryka wrote his message)

While I still have some disagreement here about how much of this conversation gets rendered moot, I do agree this is a fairly obvious good thing to do which would help in general, and help at least somewhat with the things I've been expressing concerns about in this particular discussion. 

The challenge is communicating the right things to users at the moments they actually would be useful to know (there are lots and lots of potentially important/useful things for users to know about the site, and trying to say all of them would turn into noise).

But, I think it'd be fairly tractable to have a message like "btw, if this conversation doesn't seem productive to you, consider downvoting it and moving on with your day [link to some background]" appear when, say, a user has downvoted-and-replied to a user twice in one comment thread or something (or when ~2 other users in a thread have done so)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T22:22:15.991Z · LW(p) · GW(p)

But, I think it’d be fairly tractable to have a message like “btw, if this conversation doesn’t seem productive to you, consider downvoting it and moving on with your day [link to some background]” appear when, say, a user has downvoted-and-replied to a user twice in one comment thread or something (or when ~2 other users in a thread have done so)

This definitely seems like a good direction for the design of such a feature, yeah. (Some finessing is needed, no doubt, but I do think that something like this approach looks likely to be workable and effective.)

comment by Said Achmiz (SaidAchmiz) · 2023-04-26T02:51:20.001Z · LW(p) · GW(p)

Oh? My mistake, then. Should it be “because their comments have, on some occasions, misled users into falsely believing X”?

(It’s not clear to me, I will say, whether you are claiming this is actually something that ever happened. Are you? I will note that, as you’ll find if you peruse my comment history, I have on more than one occasion taken pains to explicitly clarify that Less Wrong does not, in fact, have a norm that says that responding to comments is mandatory, which is the opposite of misleading people into believing that such a norm exists…)

Replies from: Raemon
comment by Raemon · 2023-04-26T17:18:43.387Z · LW(p) · GW(p)

No. This is still oversimplifying the issue, which I specifically disclaimed. Ben Pace gives a sense of it here: [LW(p) · GW(p)]

The philosophical disagreement is related-to but not itself the thing I believe Ray is saying is bad. The claim I understand Ray to be making is that he believes you gave a false account of the site-wide norms about what users are obligated to do, and that this is reflective of you otherwise implicitly enforcing such a norm many times that you comment on posts. Enforcing norms on behalf of a space that you don't have buy-in for and that the space would reject tricks people into wasting their time and energy trying to be good citizens of the space in a way that isn't helping and isn't being asked of them.

If you did so, I think that behavior ought to be clearly punished in some way. I think this regardless of whether you earnestly believed that an obligation-to-reply-to-comments was a site-wide norm, and also regardless of whether you were fully aware that you were doing so. I think it's often correct to issue a blanket punishment of a costly behavior even on the occasions that it is done unknowingly, to ensure that there is a consistent incentive against the behavior — similar to how it is typically illegal to commit a crime even if you aren't aware what you did was a crime.

The problem is implicit enforcement of norms. Your stated beliefs do help alleviate this but only somewhat. And, like Ben also said in that comment, from a moderator perspective it's often correct to take mod action regardless of whether someone meant to do something we think has had an outsized harm on the site.

I've now spent (honestly more than) the amount of time I endorse on this discussion. I am still mulling over the overall discussion, but in the interest of declaring this done for now, I'm declaring that we'll leave the rate limit in place for ~3 months, and re-evaluate then. I feel pretty confident doing this because it seems commensurate with the original moderation warning (i.e. a 3-month rate limit seems similar to me in magnitude to a 1-month ban, and I think Said's comments in the Duncan/Said conflict count as a triggering instance).

I will reconsider the rate limit in the future if you can think of a way to change your commenting behavior in longer comment threads that won't have the impacts the mod team is worried about. I don't know that we explained this maximally well, but I think we explained it well enough that it should be fairly obvious to you why your comment here is missing the point, and if it's not, I don't really know what to do about that.

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T19:29:26.252Z · LW(p) · GW(p)

No. This is still oversimplifying the issue, which I specifically disclaimed.

Alright, fair enough, so then…

The problem is implicit enforcement of norms.

… but then my next question is:

What the heck is “implicit enforcement of norms”??

I will reconsider the rate limit in the future if you can think of a way to change your commenting behavior in longer comment threads that won’t have the impacts the mod team is worried about. I don’t know that we explained this maximally well

To be quite honest, I think you have barely explained it at all. I’ve been trying to get an explanation out of you, and I have to say: it’s like pulling teeth. It seems like we’re getting somewhere, finally? Maybe?

You’re asking me to change my commenting behavior. I can’t even consider doing that unless I know what you think the problem is.

So, questions:

  1. What is “implicit enforcement of norms”? How can a non-moderator user enforce any norms in any way?

  2. This “implicit enforcement of norms” (whatever it is)—is it a problem additionally to making false claims about what norms exist?

  3. If the answer to #2 is “yes”, then what is your response to my earlier comments pointing out that no such false claims took place?

Replies from: Vladimir_Nesov, habryka4
comment by Vladimir_Nesov · 2023-04-27T02:10:13.623Z · LW(p) · GW(p)

What is “implicit enforcement of norms”?

A norm is a pattern of behavior, something people can recognize and enact. Feeding a norm involves making a pattern of behavior more available (easy to learn and perceive), and more desirable (motivating its enactment, punishing its non-enactment). A norm can involve self-enforcement (here "self" refers to the norm, not to a person), adjoining punishment of non-enforcers and reward of enforcers as part of the norm. A well-fed norm is ubiquitous status quo, so available you can't unsee it. It can still be opted-out of, by not being enacted or enforced, at the cost of punishment from those who enforce it. It can be opposed by conspicuously doing the opposite of what the norm prescribes, breaking the pattern, thus feeding a new norm of conspicuously opposing the original norm.

Almost all anti-epistemology is epistemic damage perpetrated by self-enforcing norms. Tolerance is boundaries against enforcement of norms. Intolerance of tolerance breaks it down, tolerating tolerance [LW · GW] allows it to survive, restricting virality of self-enforcing norms. The self-enforcing norm of tolerance that punishes intolerance potentially exterminates valuable norms, not obviously a good idea.

So there is a norm of responding to criticism, its power is the weight of obligation to do that. It always exists in principle, at some level of power, not as categorically absent or present. I think clearly there are many ways of feeding that norm, or not depriving it of influence, that are rather implicit.

(Edit: Some ninja-editing, Said quoted the pre-edit version of third paragraph. Also fixed the error in second paragraph where I originally equivocated between tolerating tolerance and self-enforcing tolerance.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-27T02:15:49.342Z · LW(p) · GW(p)

So there is a norm of responding to criticism. I think clearly there are many ways of feeding that norm, or not depriving it of influence, that are rather implicit.

Perhaps, for some values of “feeding that norm” and “[not] depriving it of influence”. But is this “enforcement”? I do not think so. As far as I can tell, when there is a governing power (and there is surely one here), enforcement of the power’s rules can be done by that power only. (Power can be delegated—such as by the LW admins granting authors the ability to ban users from their posts—but otherwise, it is unitary. And such delegated power isn’t at all what’s being discussed here, as far as I can tell.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-27T02:40:41.553Z · LW(p) · GW(p)

That's fair, but I predict that the central moderators' complaint is in the vicinity of what I described, and has nothing to do with more specific interpretations of enforcement.

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-27T03:35:34.978Z · LW(p) · GW(p)

If so, then that complaint seems wildly unreasonable. The power of moderators to enforce a norm (or a norm’s opposite) is vastly greater than the power of any ordinary user to subtly influence the culture toward acceptance or rejection of a norm. A single comment from a moderator so comprehensively outweighs the influence, on norm-formation, of even hundreds of comments from any ordinary user, that it seems difficult to believe that moderators would ever need to do anything but post the very occasional short comment that links to a statement of the rules/norms and reaffirms that those rules/norms are still in effect.

(At least, for norms of the sort that we’re discussing. It would be different for, e.g., “users should do X”. You can punish people for breaking rules of the form “users should never do X”; that’s easy enough. Rules/norms of the form “users don’t need to do X”—i.e., those like the one we’ve been discussing—are even easier; you don’t need to punish anything, just occasionally reaffirm or remind people that X is not mandatory. But “users should do X” is tricky, if X isn’t something that you can feasibly mandate; that takes encouragement, incentives, etc. But, of course, that isn’t at all the sort of thing we’re talking about…)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-27T03:59:40.637Z · LW(p) · GW(p)

Everyone can feed a norm, and direct action by moderators can be helpless before strong norms, as scorched-earth capabilities can still be insufficient for reaching more subtle targets. Thus discouraging the feeding of particular norms rather than going against the norms themselves.

occasionally reaffirm or remind people that X is not mandatory

If there are enough people feeding the norm of doing X, implicitly rewarding X and punishing non-X, reaffirming that it's not mandatory doesn't obviously help. So effective direct action by moderators might well be impossible. It might still behoove them to make some official statements to this effect, and that resolves the problem of miscommunication, but not the problem of well-fed undesirable norms.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-27T04:16:14.954Z · LW(p) · GW(p)

If there are enough people feeding the norm of doing X, implicitly rewarding X and punishing non-X, reaffirming that it’s not mandatory doesn’t obviously help. So effective direct action by moderators might well be impossible. It might still behoove them to make some official statements to this effect, and that resolves the problem of miscommunication, but not the problem of well-fed undesirable norms.

What you are describing would have to be a very well-entrenched and widespread norm, supported by many users, and opposed by few users. Such a thing is perhaps possible (I have my doubts about this; it seems to me that such a hypothetical scenario would also require, for one thing, a lack of buy-in from the moderators); but even if it is—note how far we have traveled from anything resembling the situation at hand!

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-27T04:51:35.873Z · LW(p) · GW(p)

well-entrenched and widespread norm

Motivation gets internalized, following a norm can be consciously endorsed, disobeying a norm can be emotionally valent. So it's not just about external influence in affecting the norm, there is also the issue of what to do when the norm is already in someone's head. To some extent it's their problem, as there are obvious malign incentives towards becoming a utility monster. But I think it's a real thing that happens all the time.

This particular norm is obviously well-known in the wider world, some people have it well-entrenched in themselves. The problem discussed above was reinforcing or spreading the norm, but there is also a problem of triggering the norm. It might be a borderline case of feeding it (in the form of its claim to apply on LW as well), but most of the effect is in influencing people who already buy the norm towards enacting it, by setting up central conditions for its enactment. Which can be unrewarding for them, but necessary on pain of disobeying the norm entrenched in their mind.

For example, what lsusr is talking about here [LW(p) · GW(p)] is trying not to trigger the norm. Statements are less imposing than questions in that they are less valent as triggers for response-obligation norms. This respects boundaries of people's emotional equilibrium, maintains comfort. When the norms/emotions make unhealthy demands on one's behavior, this leads to more serious issues. It's worth correcting, but not without awareness of what might be going on. I guess this comes back to motivating some interpretative labor, but I think there are relevant heuristics at all levels of subtlety.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-27T05:07:33.717Z · LW(p) · GW(p)

To some extent it’s their problem, as there are obvious malign incentives towards becoming a utility monster.

Just so.

In general, what you are talking about seems to me to be very much a case of catering to utility monsters, and denying that people have the responsibility to manage their own feelings. It should, no doubt, be permissible to behave in such ways (i.e., to carefully try to avoid triggering various unhealthy, corrosive, and self-sabotaging habits / beliefs, etc.), but it surely ought not be mandatory. That incentivizes the continuation and development of such habits and beliefs, rather than contributing to extinguishing them; it’s directly counterproductive.

EDIT: Also, and importantly, I think that describing this sort of thing as a “norm” is fundamentally inaccurate. Such habits/beliefs may contribute to creating social norms, but they are not themselves social norms; the distinction matters.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-27T05:48:38.594Z · LW(p) · GW(p)

a case of catering to utility monsters [...] incentivizes the continuation and development of such habits and beliefs, rather than contributing to extinguishing them; it’s directly counterproductive

That's a side of an idealism debate, a valid argument that pushes in this direction, but there are other arguments that push in the opposite direction, it's not one-sided [LW · GW].

Some people change, given time or appropriate prodding. There are ideological (as in the set of endorsed principles) or emotional flaws, lack of capability at projecting sufficiently thick skin, or of thinking in a way that makes thick skin unnecessary, with defenses against admitting this or being called out on it. It's not obvious to me that the optimal way of getting past that is zero catering, and that the collateral damage of zero catering is justified by the effect compared to some catering, as well as steps like discussing the problem abstractly, making the fact of its existence more available without yet confronting it directly.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-27T06:01:36.679Z · LW(p) · GW(p)

I retain my view [LW(p) · GW(p)] that to a first approximation, people don’t change.

And even if they do—well, when they’ve changed, then they can participate usefully and non-destructively. Personal flaws are, in a sense, forgiveable, as we are all human, and none of us is perfect; but “forgiveable” does not mean “tolerable, in the context of this community, this endeavor, this task”.

It’s not obvious to me that the optimal way of getting past that is zero catering, and that the collateral damage of zero catering is justified by the effect compared to some catering

I think we are very far from “zero” in this regard. Going all the way to “zero” is not even what I am proposing, nor would propose (for example, I am entirely in favor of forbidding personal insults, vulgarity, etc., even if some hypothetical ideal reasoner would be entirely unfazed even by such things).

But that the damage done by catering to “utility monsters” of the sort who find requests for clarification to be severely unpleasant, is profound and far-ranging, seems to me to be too obvious to seriously dispute. It’s hypothetically possible to acknowledge this while claiming that failing to cater thusly has even more severely damaging consequences, but—well, that would be one heck of an uphill battle, to make that case.

(as well as steps like discussing the problem abstractly, making the fact of its existence more available without yet confronting it directly)

Well, I’m certainly all for that.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-27T06:27:28.795Z · LW(p) · GW(p)

I think the central disagreement is on the side of ambient nondemanding catering, the same kind of thing as avoidance of weak insults, but for norms like response-obligation. This predictably lacks clear examples, and there are no standard words like "weak insult" to delineate the issue; it's awareness of cheaply avoidable norm-triggering and norm-feeding that points to these cases.

I agree that unreasonable demands are unreasonable. Pointing them out gains more weight after you signal ability to correctly perceive the distinction [LW(p) · GW(p)] between "reasonable"/excusable and clearly unreasonable demands for catering. Though that often leads to giving up or not getting involved [LW(p) · GW(p)]. So there is value in idealism in a neglected direction, it keeps the norm of being aware of that direction alive.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-27T07:24:59.524Z · LW(p) · GW(p)

I think the central disagreement is on the side of ambient nondemanding catering, the same kind of thing as avoidance of weak insults, but for norms like response-obligation. This predictably lacks clear examples and there are no standard words like “weak insult” to delineate the issue, it’s awareness of cheaply avoidable norm-triggering and norm-feeding that points to these cases.

I must confess that I am very skeptical. It seems to me that any relevant thing that would need to be avoided, is a thing that is actually good, and avoiding which is bad (e.g., asking for examples of claims, concretizations of abstract concepts, clarifications of term usage, etc.). Of course if there were some action which were avoidable as cheaply (both in the “effort to avoid” and “consequences of avoiding” sense) as vulgarity and personal insults are avoidable, then avoiding it might be good. (Or might not; there is at least one obvious way in which it might actually be bad to avoid such things even if it were both possible and cheap to do so! But we may assume that possibility away, for now.)

But is there such a thing…? I find it difficult to imagine what it might be…

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-27T08:45:19.332Z · LW(p) · GW(p)

I agree that it's unclear that steps in this direction are actually any good, or if instead they are mildly bad, if we ignore instances of acute conflict. But I think there is room for optimization that won't have substantive negative consequences in the dimensions worth caring about, but would be effective in avoiding conflict.

The conflict might be good in highlighting the unreasonable nature of utility monsterhood, or anti-epistemology promoted in the name of catering to utility monsterhood (including or maybe especially in oneself), but it seems like we are on the losing side, so not provoking the monsters it is. To make progress towards resolving this conflict, someone needs ability and motivation to write up things that explain the problem, as top level posts and not depth-12 threads on 500-comment posts. Recently, that's been Zack and Duncan, but that's difficult when there aren't more voices and simultaneously when moderators take steps that discourage this process. These factors might even be related!

So it's things like adopting lsusr's suggestion [LW(p) · GW(p)] to prefer statements to questions. A similar heuristic I follow is to avoid actually declaring that there is an error/problem in something I criticise, or what that error is, and instead to give the argument or relevant fact that should make that obvious, at most gesturing at the problem by quoting a bit of text from where it occurs. If it's still not obvious, it either wouldn't work with more explicit explanation, or it's my argument's problem, and then it's no loss, this heuristic leaves the asymmetry intact. I might clarify when asked for clarification. Things like that, generated as appropriate by awareness of this objective.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-27T19:37:16.613Z · LW(p) · GW(p)

The conflict might be good in highlighting the unreasonable nature of utility monsterhood, or anti-epistemology promoted in the name of catering to utility monsterhood (including or maybe especially in oneself), but it seems like we are on the losing side, so not provoking the monsters it is.

One does not capitulate to utility monsters, especially not if one’s life isn’t dependent on it.

To make progress towards resolving this conflict, someone needs ability and motivation to write up things that explain the problem, as top level posts and not depth-12 threads on 500-comment posts. Recently, that’s been Zack and Duncan, but that’s difficult when there aren’t more voices and simultaneously when moderators take steps that discourage this process. These factors might even be related!

I wholly agree.

So it’s things like adopting lsusr’s suggestion to prefer statements to questions.

As I said in reply to that comment, it’s an interesting suggestion, and I am not entirely averse to applying it in certain cases. But it can hardly be made into a rule, can it? Like, “avoid vulgarity” and “don’t use direct personal attacks” can be made into rules. There generally isn’t any reason to break them, except perhaps in the most extreme, rare cases. But “prefer statements to questions”—how do you make that a rule? Or anything even resembling a rule? At best it can form one element of a set of general, individually fairly weak, suggestions about how to reduce conflict. But no more than that.

A similar heuristic I follow is to avoid actually declaring that there is an error/problem in something I criticise, or what that error is, and instead to give the argument or relevant fact that should make that obvious, at most gesturing at the problem by quoting a bit of text from where it occurs.

I follow just this same heuristic!

Unfortunately, it doesn’t exactly work to eliminate or even meaningfully reduce the incidence of utility-monster attack—as this very post we’re commenting under illustrates.

(Indeed I’ve found it to have the opposite effect. Which is a catch-22, of course. Ask questions, and you’re accused of acting in a “Socratic” way, which is apparently bad; state relevant facts or “gesture at the problem by quoting a bit of text”, and you’re accused of “not steelmanning”, of failing to do “interpretive labor”, etc.; make your criticisms explicit, and you’re accused of being hostile… having seen the response to all possible approaches, I can now say with some confidence that modifying the approach doesn’t work [LW(p) · GW(p)].)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-30T02:49:06.698Z · LW(p) · GW(p)

not provoking the monsters

One does not capitulate to utility monsters

I'm gesturing at settling into an unsatisfying strategic equilibrium, as long as there isn't enough engineering effort towards clarifying the issue (negotiating boundaries that are more reasonable-on-reflection than the accidental status quo). I don't mean capitulation as a target even if the only place "not provoking" happens to lead is capitulation (in reality, or given your model of the situation). My model doesn't say that this is the case.

Ask questions, and you’re accused of acting in a “Socratic” way, which is apparently bad; state relevant facts or “gesture at the problem by quoting a bit of text”, and you’re accused of “not steelmanning”, of failing to do “interpretive labor”, etc.; make your criticisms explicit, and you’re accused of being hostile… having seen the response to all possible approaches, I can now say with some confidence that modifying the approach doesn’t work.

The problem with this framing (as you communicate it, not necessarily in your own mind) is that it could look the same even if there are affordances for de-escalation at every step, and it's unclear how efficiently they were put to use (it's always possible to commit a lot of effort towards measures that won't help; the effort itself doesn't rule out availability of something effective). Equivalence between "not provoking" and "capitulation" is a possible conclusion from observing absence of these affordances, or alternatively it's the reason the affordances remain untapped. It's hard to tell.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-30T11:57:36.250Z · LW(p) · GW(p)

What would any of what you’re alluding to look like, more concretely…?

(Of course I also object to the term “de-escalation” here, due to the implication of “escalation”, but maybe that’s beside the point.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-30T13:01:15.236Z · LW(p) · GW(p)

Like escalation makes a conflict more acute, de-escalation settles it. Even otherwise uninvolved parties could plot either, there is no implication of absence of de-escalation being escalation. Certainly one party could de-escalate a conflict that the other escalates.

Some examples are two comments up [LW(p) · GW(p)], as well as your list of things that don't work [LW(p) · GW(p)]. Another move not mentioned so far is deciding to exit certain conversations.

The harder and more relevant question is whether some of these heuristics have the desired effect, and which ones are effective when. I think only awareness of the objective of de-escalation could apply these in a sensible way, specific rules (less detailed than a book-length intuition-distilling treatise) won't work efficiently (that is, without sacrificing valuable outcomes).

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-30T14:59:39.777Z · LW(p) · GW(p)

I don’t think I disagree with anything you say in particular, not exactly, but I really am not sure that I have any sense of what the category boundaries of this “de-escalation” are supposed to be, or what the predicate for it would look like. (Clearly the naive connotation isn’t right, which is fine—although maybe it suggests a different choice of term? or not, I don’t really know—but I’m not sure where else to look for the answers.)

Maybe this question: what exactly is “the desired effect”? Is it “avoid conflict”? “Avoid unnecessary conflict”? “Avoid false appearance of conflict”? “Avoid misunderstanding”? Something else?

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-30T15:36:57.887Z · LW(p) · GW(p)

Acute conflict here is things like moderators agonizing over what to do, top-level posts lobbying site-wide policy changes, rumors being gathered and weaponized. Escalation is interventions that target the outcome of there being an acute conflict (in the sense of optimization, so not necessarily intentionally). De-escalation is interventions that similarly target the outcome of absence of acute conflict.

In some situations acute conflict could be useful, a Schelling point for change (time to publish relevant essays, which might be heard more vividly as part of this event). If it's not useful, I think de-escalation is the way, with absence of acute conflict as the desired effect.

(De-escalation is not even centrally avoidance of individual instances of conflict. I think it's more important what the popular perception of one's intentions/objectives/attitudes is, and to prevent formation of grudges. Mostly not bothering those who probably have grudges. This more robustly targets absence of acute conflict, making some isolated incidents irrelevant.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-30T17:20:52.564Z · LW(p) · GW(p)

Acute conflict here is things like moderators agonizing over what to do, top-level posts lobbying site-wide policy changes, rumors being gathered and weaponized. Escalation is interventions that target the outcome of there being an acute conflict (in the sense of optimization, so not necessarily intentionally). De-escalation is interventions that similarly target the outcome of absence of acute conflict.

Is this really anything like a natural category, though?

Like… obviously, “moderators agonizing over what to do, top-level posts lobbying site-wide policy changes, rumors being gathered and weaponized” are things that happen. But once you say “not necessarily intentionally” in your definitions of “escalation” and “de-escalation”, aren’t you left with “whatever actions happen to increase the chance of there being an acute conflict” (and similar “decrease” for “de-escalation”)? But what actions have these effects clearly depends heavily on all sorts of situational factors, identities and relationships of the participants, the subject matter of the conversation, etc., etc., such that “what specific actions will, as it will turn out, have contributed to increasing/decreasing the chance of conflict in particular situation X” is… well, I don’t want to say “not knowable”, but certainly knowing such a thing is, so to speak, “interpersonal-interaction-complete”.

What can really be said about how to avoid “acute conflict” that isn’t going to have components like “don’t discuss such-and-such topics; don’t get into such-and-such conversations if people with such-and-such social positions in your environment have such-and-such views; etc.”?

Or is that in fact the sort of thing you had in mind?

I guess my question is: do you envision the concrete recommendations for what you call “de-escalation” or “avoiding acute conflict” to concern mainly “how to say it”, and to be separable from “what to say” and “whom to say it to”? It seems to me that such things mostly aren’t separable. Or am I misunderstanding?

(Certainly “not bothering those who probably have grudges” is basically sensible as a general rule, but I’ve found that it doesn’t go very far, simply because grudges don’t develop randomly and in isolation from everything else; so whatever it was that caused the grudge, is likely to prevent “don’t bother person with grudge” from being very applicable or effective.)

comment by Said Achmiz (SaidAchmiz) · 2023-04-27T03:37:36.012Z · LW(p) · GW(p)

Also, it almost goes without saying, but: I think it is extremely unhelpful and misleading to refer to the sort of thing you describe as “enforcement”. This is not a matter of “more [or less] specific interpretation”; it’s just flatly not the same thing.

comment by habryka (habryka4) · 2023-04-26T20:40:46.881Z · LW(p) · GW(p)

What is “implicit enforcement of norms”? How can a non-moderator user enforce any norms in any way?

This might be a point of contention, but honestly, I don't really understand, and do not find myself that curious about, a model of social norms that would produce the belief that only moderators can enforce norms in any way, and I am bowing out of this discussion. (The vast majority of social spaces with norms do not even have any kind of official moderator; what does this model predict about, say, the average dinner party or college class?)

My guess is that 95% of the LessWrong user-base is capable of describing a model of how social norms function that does not have the property that only moderators of a space have any ability to enforce or set norms within that space, and can maybe engage with Said on explaining this. I would appreciate someone else jumping in and explaining those models, but I don't have the time and patience to do this.

Replies from: localdeity, philh, SaidAchmiz
comment by localdeity · 2023-04-27T05:08:10.701Z · LW(p) · GW(p)

All right, I'll give it a try (cc @Said Achmiz [LW · GW]).

Enforcing norms of any kind can be done either by (a) physically preventing people from breaking them—we might call this "hard enforcement"—or (b) inflicting unpleasantness on people who violate said norms, and/or making it clear that this will happen (that unpleasantness will be inflicted on violators), which we might call "soft enforcement".[1]

Bans are hard enforcement.  Downvotes are more like soft enforcement, though karma does matter for things like sorting and whether a comment is expanded by default, so there's some element of hardness.  Posting critical comments is definitely soft enforcement; posting a lot of intensely critical comments is intense soft enforcement.  Now, compare with Said's description [LW(p) · GW(p)] elsewhere:

On Less Wrong, there are moderators, and they unambiguously have a multitude of enforcement powers, which ordinary users lack. Ordinary users have very few powers: writing posts and comments, upvotes/downvotes, and bans from one’s posts.

Writing posts and comments isn’t anything at all like “enforcement” (given that moderators exist, and that users can ignore other users, and ban them from their posts).

Said is clearly aware of hard enforcement and calls that "enforcement".  Meanwhile, what I call "soft enforcement", he says isn't anything at all like "enforcement".  One could put this down to a mere difference in terms, but I think there's a little more.

It seems accurate to say that Said has an extremely thick skin.  Probably to some extent deliberately so.  This is admirable, and among other things means that he will cheerfully call out any local emperor for having no clothes; the prospect of any kind of social backlash ("soft enforcement") seems to not bother him, perhaps not even register to him.  Lots of people would do well to be more like him in this respect.

However, it seems that Said may be unaware of the degree to which he's different from most people in this[2].  (Either in naturally having a thick skin, or in thinking "this is an ideal which everyone should be aspiring to, and therefore e.g. no one would willingly admit to being hurt by critical comments and downvotes", or something like that.)  It seems that Said may be blind to one or more of the below:

  • That receiving comments (a couple or a lot) requesting more clarification and explanation could be perceived as unpleasant.
  • That it could be perceived as so unpleasant as to seriously incentivize someone to change their behavior.

I anticipate a possible objection here: "Well, if I incentivize people to think more rigorously, that seems like a good thing."  At this point the question is "Do Said's comments enforce any norm at all?", not "Are Said's comments pushing people in the right direction?".  (For what it's worth, my vague memory includes some instances of "Said is asking the right questions" and other instances of "Said is asking dumb questions".  I suspect that Said is a weird alien (most likely "autistic in a somewhat different direction than the rest of us"—I don't mean this as an insult, that would be hypocritical) and that this explains some cases of Said failing to understand something that's obvious to me, as well as Said's stated experience that trying to guess what other people are thinking is a losing game.)

Second anticipated objection: "I'm not deliberately trying to enforce anything."  I think it's possible to do this non-deliberately, even self-destructively.  For example, a person could tell their friends "Please tell me if I'm ever messing up in xyz scenarios", but then, when a friend does so, respond by interrogating the friend about what makes them qualified to judge xyz, have they ever been wrong about xyz, were they under any kind of drugs or emotional distraction or sleep deprivation at the time of observation, do they have any ulterior motives or reasons for self-deception, do their peers generally approve of their judgment, how smart are they really, what were their test scores, have they achieved anything intellectually impressive, etc.  (This is avoiding the probably more common failure mode of getting offended at the criticism and expressing anger.)  Like, technically, those things are kind of useful for making the report more informative, and some of them might be worth asking in context, but it is easy to imagine the friend finding it unpleasant, either because it took far more time than they expected, or because it became rather invasive and possibly touched on topics they find unpleasant; and the friend concluding "Yeesh.  This interaction was not worth it; I won't bother next time."

And if that example is not convincing (which it might not be for someone with an extremely thick skin), then consider having to file a bunch of bureaucratic forms to get a thing done.  By no means impossible (probably), but it's unpleasant and time-consuming, and might succeed in disincentivizing you from doing it, and one could call it a soft forbiddance.[3] (See also "Beware Trivial Inconveniences".)

Anyway, it seems that the claim from various complainants is that Said is, deliberately or not, providing an interface of "If your posts aren't written in a certain way, then Said is likely to ask a bunch of clarifying questions, with the result that either you may look ~unrigorous or you have to write a bunch of time-consuming replies", and thus this constitutes soft-enforcing a norm of "writing posts in a certain way".

Or, regarding the "clarifying questions need replies or else you look ~unrigorous" norm... Actually, technically, I would say that's not a norm Said enforces; it's more like a norm he invokes (that is, the norm is preexisting, and Said creates situations in which it applies).  As Said says elsewhere, it's just a fact that, if someone asks a clarifying question and you don't have an answer, there are various possible explanations for this, one of which is "your idea is wrong".[4]  And I guess the act of asking a question implies (usually) that you believe the other person is likely to answer, so Said's questions do promulgate this norm even if they don't enforce it.

Moreover, this being the website that hosts Be Specific [LW · GW], this norm is stronger here than elsewhere.  Which... I do like; I don't want to make excuses for people being unrigorous or weak.  But Eliezer himself doesn't say "Name three examples" every single time someone mentions a category.  There's a benefit and a cost to doing so—the benefit being the resulting clarity, the cost being the time and any unpleasantness involved in answering.  My brain generates the story "Said, with his extremely thick skin (and perhaps being a weird alien more generally), faces a very difficult task in relating to people who aren't like him in that respect, and isn't so unusually good at relating to others very unlike him that he's able to judge the costs accurately; in practice he underestimates the costs and asks too often."

  1. ^

    And usually anything that does (a) also does (b).  Removing someone's ability to do a thing, especially a thing they were choosing to do in the past, is likely unpleasant on first principles; plus the methods of removing capabilities are usually pretty coarse-grained.  In the physical world, imprisonment is the prototypical example here.

  2. ^

    It also seems that Duncan is the polar opposite of this (or at least is in that direction), which makes it less surprising that it'd be difficult for them to come to common understanding.

  3. ^

    There was a time at work where I was running a script that caused problems for a system.  I'd say that this could be called the system's fault: one piece of the causal chain was a policy of the system's that I'd never heard of and that seemed like the wrong policy, and another piece was the system misidentifying a certain behavior.

    In any case, the guy running the system didn't agree with the goal of my script, and I suspect resented me because of the trouble I'd caused (in that and in some other interactions).  I don't think he had the standing to say I'm forbidden from running it, period; but what he did was tell me to put my script into a pull request, and then do some amount of nitpicking the fuck out of it and requesting additional features; one might call it an isolated demand for rigor, by the standards of other scripts.  Anyway, this was a side project for me, and I didn't care enough about it to push through that, so I dropped it.  (Whether this was his intent, I'm not sure, but he certainly didn't object to the result.)

  4. ^

    Incidentally, the more reasonable and respectable the questioner looks, the less plausible explanations like "you think the question is stupid or not worth your time" become, and therefore the greater the pressure to reply on someone who doesn't want to look wrong.  (One wonders if Said should wear a jester's cap or something, or change his username to "troll".  Or maybe Said can trigger a "Name Examples Bot", which wears a silly hat, in lieu of asking directly.)

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-27T05:44:11.453Z · LW(p) · GW(p)

(Separately from my longer reply: I do want to thank you for making the attempt.)

comment by Said Achmiz (SaidAchmiz) · 2023-04-27T05:43:12.761Z · LW(p) · GW(p)

It seems that Said may be blind to one or more of the below:

  • That receiving comments (a couple or a lot) requesting more clarification and explanation could be perceived as unpleasant.
  • That it could be perceived as so unpleasant as to seriously incentivize someone to change their behavior.

I have already commented extensively on this sort of thing. In short, if someone perceives something so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion as receiving comments requesting clarification/explanation as not just unpleasant but “so unpleasant as to seriously incentivize someone to change their behavior”, that is a frankly ludicrous level of personal dysfunction, so severe that I cannot see how such a person could possibly expect to participate usefully in any sort of discussion forum, much less one that’s supposed to be about “advancing the art of rationality” or any such thing.

I mean, forget, for the moment, any question of “incentivizing” anyone in any way. I have no idea how it’s even possible to have discussions about anything without anyone ever asking you for clarification or explanation of anything. What does that even look like? I really struggle to imagine how anything can ever get accomplished or communicated while avoiding such things.

And the idea that “requesting more clarification and explanation” constitutes “norm enforcement” in virtue of its unpleasantness (rather than, say, being a way to exemplify praiseworthy behaviors) seems like a thoroughly bizarre view. Indeed, it’s especially bizarre on Less Wrong! Of all the forums on the internet, here, where it was written that “the first virtue is curiosity”, and that [LW · GW] “the first and most fundamental question of rationality is ‘what do you think you know, and why do you think you know it?’”…!

I suspect that Said is a weird alien (most likely “autistic in a somewhat different direction than the rest of us”—I don’t mean this as an insult, that would be hypocritical) and that this explains some cases of Said failing to understand something that’s obvious to me, as well as Said’s stated experience that trying to guess what other people are thinking is a losing game.

There’s certainly a good deal of intellectual and mental diversity among the Less Wrong membership. (Perhaps not quite enough, I sometimes think, but a respectable amount, compared to most other places.) I count this as a good thing.

And if that example is not convincing (which it might not be for someone with an extremely thick skin), then consider having to file a bunch of bureaucratic forms to get a thing done. By no means impossible (probably), but it’s unpleasant and time-consuming, and might succeed in disincentivizing you from doing it, and one could call it a soft forbiddance.[3] (See also “Beware Trivial Inconveniences”.)

Yes. Having to file a bunch of bureaucratic forms (or else not getting the result you want). Having to answer your friend’s questions (on pain of quarrel or hurtful interpersonal conflict with someone close to you).

But nobody has to reply to comments. You can just downvote and move on with your life. (Heck, you don’t even have to read comments.)

As for the rest, well, happily, you include in your comment the rebuttal to the rest of what I might have wanted to rebut myself. I agree that I am not, in any reasonable sense of the word, “enforcing” anything. (The only part of this latter section of your comment that I take issue with is the stuff about “costs”; but that, I have already commented on, above.)

I’ll single out just one last bit:

But Eliezer himself doesn’t say “Name three examples” every single time someone mentions a category.

I think you’ll find that I don’t say “name three examples” every single time someone mentions a category, either (nor—to pre-empt the obvious objection—is there any obvious non-hyperbolic version of this implied claim which is true). In fact I’m not sure I’ve ever said it. As gwern writes [LW(p) · GW(p)]:

‘Examples?’ is one of the rationalist skills [LW · GW] most lacking on LW2 and if I had the patience for arguments I used to have, I would be writing those comments myself. (Said is being generous in asking for only 1. I would be asking for 3, like Eliezer.) Anyone complaining about that should be ashamed that they either (1) cannot come up with any, or (2) cannot forthrightly admit “Oh, I don’t have any yet, this is speculative, so YMMV”.

Replies from: localdeity
comment by localdeity · 2023-04-27T10:43:54.920Z · LW(p) · GW(p)

In short, if someone perceives [...] receiving comments requesting clarification/explanation as not just unpleasant but “so unpleasant as to seriously incentivize someone to change their behavior”, that is a frankly ludicrous level of personal dysfunction

I must confess that I don't sympathize much with those who object majorly.  I feel comfortable with letting conversations on the public internet fade without explanation.  "I would love to reply to everyone [or, in some cases, "I used to reply to everyone"] but that would take up more than all of my time" is something I've seen from plenty of people.  If I were on the receiving end of the worst version of the questioning behavior from you, I suspect I'd roll my eyes, sigh, say to myself "Said is being obtuse", and move on.

That said, I know that I am also a weird alien.  So here is my attempt to describe the others:

  • "I do reply to every single comment" is a thing some people do, often in their early engagement on a platform, when their status is uncertain.  (I did something close to that on a different forum recently, albeit more calculatedly as an "I want to reward people for engaging with my post so they'll do more of it".)  There isn't really a unified Internet Etiquette that everyone knows; the unspoken rules in general, and plausibly on this specifically, vary widely from place to place.
    • I also do feel some pressure to reply if the commenter is a friend I see in person—that it's a little awkward if I don't.  This presumably doesn't apply here.
    • I think some people have a self-image that they're "polite", which they don't reevaluate especially often, and believe that it means doing certain things such as giving decent replies to everyone; and when someone creates a situation in which being "polite" means doing a lot of work, that may lead to significant unpleasantness (and possibly lead them to resent whoever put them in that situation; a popular example like this is Bilbo feeling he "has to" feed and entertain all the dwarves who come visiting, being very polite and gracious while internally finding the whole thing very worrying and annoying).
      • If the conversation begins well enough, that may create more of a politeness obligation in some people's heads.  The fact that someone had to create the term "tapping out [? · GW]" is evidence that some people's priors were that simply dropping the conversation was impolite.
  • Looking at what's been said, "frustration" is mentioned.  It seems likely that, ex ante, people expect that answering your questions will lead to some reward (you'll say "Aha, I understand, thank you"; they'll be pleased with this result), and if instead it leads to several levels of "I don't understand, please explain further" before they finally give up, then they may be disappointed ex post.  Particularly if they've never had an interaction like this before, they might have not known what else to do and just kept putting in effort much longer than a more sophisticated version of them would have recommended.  Then they come away from the experience thinking, "I posted, and I ended up in a long interaction with Said, and wow, that sucked.  Not eager to do that again."
  • It's also been mentioned that some questions are perceived as rude.  An obvious candidate category would be those that amount to questioning someone's basic competence.  I'm not making the positive claim here that this accounts for a significant portion of the objectors' perceived unpleasantness, but since you're questioning how it's possible for asking for clarification to be really unpleasant to a remotely functional person—this is one possibility.
  • In some places on the internet, trolling is or has been a major problem.  Making someone do a bunch of work by repeatedly asking "Why?" and "How do you know that?", and generally applying an absurdly high standard of rigor, is probably a tactic that some trolls have engaged in to mess with people.  (Some of my friends who like to tease have occasionally done that.)  If someone seems to be asking a bunch of obtuse questions, I may at least wonder whether it's deliberate.  And interacting with someone you suspect might be trolling you—perhaps someone you ultimately decide is pretty trollish after a long, frustrating interaction—seems potentially uncomfortable.

    (I personally tend to welcome the challenge of explaining myself, because I'm proud of my own reasoning skills (and probably being good at it makes the exercise more enjoyable) and aspire to always be able to do that; but others might not.  Perhaps some people have memories of being tripped up and embarrassed.  Such people should get over it, but given that not all of them have done so... we shouldn't bend over backwards for them, to be sure, but a bit of effort to accommodate them seems justifiable.)
  • Some people probably perceive some really impressive people on Less Wrong, possibly admire some of them a lot, and are not securely confident in their own intelligence or something, and would find it really embarrassing—mortifying—to be made to look stupid in front of us.

    I find this hard to relate to—I'm extremely secure in my own intelligence, and react to the idea of someone being possibly smarter than me with "Ooh, I hope so, I wish that were so!  (But I doubt it!)"; if someone comes away thinking I'm stupid, I tend to find that amusing, at worst disappointing (disappointed in them, that is).  I suspect your background resembles mine in this respect.

    But I hear that teachers and even parents, frequently enough for this to be a problem, feel threatened when a kid says they're wrong (and backs it up).  (To some extent this may be due to authority-keeping issues.)  I hear that often kids in school are really afraid of being called, or shown to be, stupid.  John Holt (writing from his experience as a teacher—the kids are probably age 10 or so) says:

The other day I decided to talk to the other section about what happens when you don't understand what is going on. We had been chatting about something or other, and everyone seemed in a relaxed frame of mind, so I said, "You know, there's something I'm curious about, and I wonder if you'd tell me." They said, "What?" I said, "What do you think, what goes through your mind, when the teacher asks you a question and you don't know the answer?"

It was a bombshell. Instantly a paralyzed silence fell on the room. Everyone stared at me with what I have learned to recognize as a tense expression. For a long time there wasn't a sound. Finally Ben, who is bolder than most, broke the tension, and also answered my question, by saying in a loud voice, "Gulp!"

He spoke for everyone. They all began to clamor, and all said the same thing, that when the teacher asked them a question and they didn't know the answer they were scared half to death.

I was flabbergasted--to find this in a school which people think of as progressive; which does its best not to put pressure on little children; which does not give marks in the lower grades; which tries to keep children from feeling that they're in some kind of race. I asked them why they felt gulpish. They said they were afraid of failing, afraid of being kept back, afraid of being called stupid, afraid of feeling themselves stupid.

Stupid. Why is it such a deadly insult to these children, almost the worst thing they can think of to call each other? Where do they learn this? Even in the kindest and gentlest of schools, children are afraid, many of them a great deal of the time, some of them almost all the time. This is a hard fact of life to deal with. What can we do about it?

(By the way, someone being afraid to be shown to be stupid is probably Bayesian evidence that their intelligence isn't that high (relative to their peers in their formative years), so this would be a self-censoring fear.  I don't think I've seen anyone mention intellectual insecurity in connection to this whole discussion, but I'd say it likely plays at least a minor role, and plausibly plays a major role.)

Again, if school traumatizes people into having irrational fears about this, that's not a good thing, it's the schools' fault, and meanwhile the people should get over it; but again, if a bunch of people nevertheless haven't gotten over it, it is useful to know this, and it's justifiable to put some effort into accommodating them.  How much effort is up for debate.

Eliezer himself doesn’t say “Name three examples” every single time

My point was that Eliezer's philosophy doesn't mean it's always an unalloyed good.  For all that you say it's "so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion" to ask for clarification, even you don't believe it's always a good idea (since you haven't, say, implemented a bot that replies to every comment with "Be more specific").  There are costs in addition to the benefits, the magnitude of the benefits varies, and it is possible to go too far.  Your stated position doesn't seem to acknowledge that there is any tradeoff.

Gwern would be asking for 3 examples

Gwern is strong.  You (and Zack) are also strong.  Some people are weaker.  One could design a forum that made zero accommodations for the weak.  The idea is appealing; I expect I'd enjoy reading it and suspect I could hold my own, commenting there, and maybe write a couple of posts.  I think some say that Less Wrong 1.0 was this, and too few people wanted to post there and the site died.  One could argue that, even if that's true, today there are enough people (plus enough constant influx due to interest in AI) to have a critical mass and such a site would be viable.  Maybe.  One could counterargue that the process of flushing out the weak is noisy and distracting, and might drive away the good people.

As long as we're in the business of citing Eliezer, I'd point to the fact that, in dath ilan, he says that most people are not "Keepers" (trained ultra-rationalists, always looking unflinchingly at harsh truths, expected to remain calm and clear-headed no matter what they're dealing with, etc.), that most people are not fit to be Keepers, and that it's fine and good that they don't hold themselves to that standard.  Now, like, I guess one could imagine there should be at least enough Keepers to have their own forum, and perhaps Less Wrong could be such a forum.  Well, one might say that having an active forum that trains people who are not yet Keepers is a strictly easier target than, and likely a prerequisite for, an active and long-lived Keeper forum.  If LW is to be the Keeper forum, where are the Keepers trained?  The SSC subreddit?  Just trial by fire and take the fraction of a fraction of the population who come to the forum untrained and do well without any nurturing?

I don't know.  It could be the right idea.  I would give it... 25%?, that this is better than some more civilian-accommodating thing like what we have today.  I am really not an expert on forecasting this, and am pretty comfortable leaving it up to the current LW team.  (I also note that, if we manage to do something like enhance the overall population's intelligence by a couple of standard deviations—which I hope will be achievable in my lifetime—then the Keeper pipeline becomes much better.)  And no, I don't think it should do much in the way of accommodating civilians at the expense of the strong—but the optimal amount of doing that is more than zero.

Replies from: SaidAchmiz, habryka4
comment by Said Achmiz (SaidAchmiz) · 2023-04-27T21:32:05.256Z · LW(p) · GW(p)

Much of what you write here seems to me to be accurate descriptively, and I don’t have much quarrel with it. The two most salient points in response, I think, are:

  1. To the original question that spawned this subthread (concerning “[implicit] enforcement of norms” by non-moderators, and how such a thing could possibly work), basically everything in your comment here is non-responsive. (Which is fine, of course—it doesn’t imply anything bad about your comment—but I just wanted to call attention to this.)

  2. However accurate your characterizations may be descriptively, the (or, at least, an) important question is whether your prescriptions are good normatively. On that point I think we do have disagreement. (Details follow.)

It’s also been mentioned that some questions are perceived as rude. An obvious candidate category would be those that amount to questioning someone’s basic competence. I’m not making the positive claim here that this accounts for a significant portion of the objectors’ perceived unpleasantness, but since you’re questioning how it’s possible for asking for clarification to be really unpleasant to a remotely functional person—this is one possibility.

“Basic competence” is usually a category error, I think. (Not always, but usually.) One can have basic competence at one’s profession, or at some task or specialty; and these things could be called into question. And there is certainly a norm, in most social contexts, that a non-specialist questioning the basic competence of a specialist is a faux pas. (I do not generally object to that norm in wider society, though I think there is good reason for such a norm to be weakened, at least, in a place like Less Wrong; but probably not absent entirely, indeed.)

What this means, then, is that if I write something about, let’s say, web development, and someone asks me for clarification of some point, then the implicatures of the question depend on whether the asker is himself a web dev. If so, then I address him as a fellow specialist, and interpret his question accordingly. If not, then I address him as a non-specialist, and likewise interpret his question accordingly. In the former case, the asker has standing to potentially question my basic competence, so if I cannot make myself clear to him, that is plausibly my fault. In the latter case, he has no such standing, but likewise a request for clarification from him can’t really be interpreted as questioning my basic competence in the first place; and any question that, from a specialist, would have that implication, from a non-specialist is merely revelatory of the asker’s own ignorance.

Nevertheless I think that you’re onto something possibly important here. Namely, I have long noticed that there is an idea, a meme, in the “rationalist community”, that indeed there is such a thing as a generalized “basic competence”, which manifests itself as the ability to understand issues of importance in, and effectively perform tasks in, a wide variety of domains, without the benefit of what we would usually see as the necessary experience, training, declarative knowledge, etc., that is required to gain expertise in the domain.

It’s been my observation that people who believe in this sort of “generalized basic competence”, and who view themselves as having it, (a) usually don’t have any such thing, (b) get quite offended when it’s called into question, even by the most indirect implication, or even conditionally. This fits the pattern you describe, in a way, but of course that is missing a key piece of the puzzle: what is unpleasant is not being asked for clarification, but being revealed to be a fraud (which would be the consequence of demonstrably failing to provide any satisfying clarification).

In some places on the internet, trolling is or has been a major problem.

Definitely. (As I’ve alluded to earlier in this comment section, I am quite familiar with this problem from the administrator’s side.)

But it’s quite possible, and not even very hard, to prove oneself a non-troll. (Which I think that I, for instance, have done many times over. There aren’t many trolls who invest as much work into a community as I have. I note this not to say something like “what I’ve contributed outweighs the harm”, as some of the moderators have suggested might be a relevant consideration—and which reasoning, quite frankly, makes me uncomfortable—but rather to say “all else aside, the troll hypothesis can safely be discarded”.)

In other words, yes, trolling exists, but for the purposes of this discussion we can set that fact aside. The LW moderation team have shown themselves to be more than sufficiently adept at dealing with such “cheap” attacks that we can, to a first (or even second or third) approximation, simply discount the possibility of trolling, when talking about actual discussions that happen here.

Some people probably perceive some really impressive people on Less Wrong, possibly admire some of them a lot, and are not securely confident in their own intelligence or something, and would find it really embarrassing—mortifying—to be made to look stupid in front of us.

As it happens, I quite empathize with this worry—indeed I think that I can offer a steelman of your description here, which (I hope you’ll forgive me for saying) does seem to me to be just a bit of a strawman (or at least a weakman).

There are indeed some really impressive people on Less Wrong. (Their proportion in the overall membership is of course lower than it was in the “glory days”, but nevertheless they are a non-trivial contingent.) And the worry is not, perhaps, that one will be made to look stupid in front of them, but rather that one will waste their time. “Who am I,” the potential contributor might think, “to offer my paltry thoughts on any of these lofty matters, to be listed alongside the writings of these greats, such that the important and no doubt very busy people who read this website will have to sift through the dross of my embarrassingly half-formed theses and idle ramblings, in the course of their readings here?” And then, when such a person gets up the confidence and courage to post, if the comments they get prove at once (to their minds) that all their worries were right, that what they’ve written is worthless, little more than spam—well, surely they’ll be discouraged, their fears reinforced, their shaky confidence shattered; and they won’t post again. “I have nothing to contribute,” they will think, “that is worthy of this place; I know this for a fact; see how my attempts were received!”

I’ve seen many people express worries like this. And there are, I think, a few things to say about the matter.

First: however relevant this worry may have been once, it’s hardly relevant now.

This is for two reasons, of which the first is that the new Less Wrong is designed precisely to alleviate such worries, with the “personal” / “frontpage” distinction. Well, at least, that would be true, if not for the LW moderators’ quite frustrating policy of pushing posts to the frontpage section almost indiscriminately, all but erasing the distinction, and preventing it from having the salutary effect of alleviating such worries as I have described. (At least there’s Shortform, though?)

The second reason why this sort of worry is less relevant is simply that there’s so much more garbage on Less Wrong today. How plausible is it, really, to look at the current list of frontpage posts, and think “gosh, who am I to compete for readers’ time with these great writings, by these great minds?” Far more likely is the opposite thought: “what’s the point of hurling my thoughts into this churning whirlpool of mediocrity?” Alright, so it’s not quite Reddit, but it’s bad enough that the moderators have had to institute a whole new set of moderation policies to deal with the deluge! (And well done, I say, and long overdue—in this, I wholly support their efforts.)

Second: I recall someone (possibly Oliver Habryka? I am not sure) suggesting that the people who are most worried about not measuring up tend also to be those whose contributions would be some of the most useful. This is a model which is more or less the opposite of your suggestion that “someone being afraid to be shown to be stupid is probably Bayesian evidence that their intelligence isn’t that high”; it claims, instead, something like “someone being afraid that they won’t measure up is probably Bayesian evidence that their intellectual standards as applied to themselves are high, and that their ideas are valuable”.

I am not sure to what extent I believe either of these two models. But let us take the latter model for granted, for a moment. Under this view, any sort of harsh criticism, or even just anything but the most gentle handling and the most assiduous bending-over-backwards to avoid any suggestion of criticism, risks driving away the most potentially valuable contributors.

Of course, one problem is that any lowering of standards mostly opens the floodgates to a tide of trash, which itself then acts to discourage useful contributions. But let’s imagine that you can solve that problem—that you can set up a most discerning filter, which keeps out all the mediocre nonsense, all the useless crap, but somehow does this without spooking the easily-spooked but high-value authors.

But even taking all of that for granted—you still haven’t solved the fundamental problems.

Problem (a): even the cleverest of thinkers and writers sometimes have good ideas but sometimes have bad ideas; or ideas that have flaws; or ideas that are missing key parts; or, heck, they simply make mistakes, accidentally cite the wrong thing and come to the wrong conclusion, misremember, miscount… you can’t approach every engagement on the assumption that the author’s ideas are without flaw, and that your part is only to respectfully learn at the author’s feet. That doesn’t work.

Problem (b): even supposing that an idea is perfect—what do you do with it? In order to make use of an idea, you must understand it, you must explore it; that means asking questions, asking for clarifications, asking for examples. That is (and this is a point which, incredibly, seems often to be totally lost in discussions like this) how people engage with ideas that excite them! (Otherwise—what? You say “wow, amazing” and that’s it? Or else—as I have personally seen, many times—you basically ignore what’s been written, and respond with some only vaguely related commentary of your own, which doesn’t engage with the post at all, isn’t any attempt to build anything out of it, but is just a sort of standalone bit of cleverness…)

My point was that Eliezer’s philosophy doesn’t mean it’s always an unalloyed good. For all that you say it’s “so innocuous, so fundamentally cooperative, prosocial, and critical to any even remotely reasonable or productive discussion” to ask for clarification, even you don’t believe it’s always a good idea (since you haven’t, say, implemented a bot that replies to every comment with “Be more specific”). There are costs in addition to the benefits, the magnitude of the benefits varies, and it is possible to go too far. Your stated position doesn’t seem to acknowledge that there is any tradeoff.

No, this is just confused.

Of course I don’t have a bot that replies to every comment with “Be more specific”, but that’s not because there’s some sort of tradeoff; it’s simply that it’s not always appropriate or relevant or necessary. Why ask for clarification, if all is already clear? Why ask for examples, if they’ve already been provided, or none seem needed? Why ask for more specificity, if one’s interlocutor has already expressed themselves as specifically as the circumstances call for? If someone writes a post about “authenticity”, I may ask what they mean by the word; but what mystery, what significance, is there in the fact that I don’t do the same when someone writes a post about “apples”? I know what apples are. When people speak of “apples” it’s generally clear enough what they’re talking about. If not—then I would ask.

Gwern would be asking for 3 examples

Gwern is strong. You (and Zack) are also strong. Some people are weaker.

There is no shame in being weak. (It is an oft-held view, in matters of physical strength, that the strong should protect the weak; I endorse that view, and hold that it applies in matters of emotional and intellectual strength as well.) There may be shame in remaining weak when one can become strong, or in deliberately choosing weakness; but that may be disputed.

But there is definitely shame in using weakness as a weapon against the strong. That is contemptible.

Strength may not be required. But weakness must not be valorized. And while accommodating the weak is often good, it must never come at the expense of discouraging strength, for then the effort undermines itself, and ultimately engineers its own destruction.

As long as we’re in the business of citing Eliezer, I’d point to the fact that, in dath ilan

I deliberately do not, and would not, cite Eliezer’s recent writings, and especially not those about dath ilan. I think that the ideas you refer to, in particular (about the Keepers, and so on), are dreadfully mistaken, to the point of being intellectually and morally corrosive.

comment by habryka (habryka4) · 2023-04-27T17:13:53.410Z · LW(p) · GW(p)

Just for the record, your first comment was quite good at capturing some of the models that drive me and the other moderators.

This one is not, which is fine and wasn't necessarily your goal, but I want to prevent any future misunderstandings.

comment by philh · 2023-04-26T21:00:52.943Z · LW(p) · GW(p)

I would appreciate someone else jumping in and explaining those models

I'm super not interested in putting effort into talking about this with Said. But a low-effort thing to say is: my review [LW · GW] of Order Without Law seems relevant. (And the book itself more so, but that's less linkable.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T21:06:33.987Z · LW(p) · GW(p)

I do recall reading and liking that post, though it’s been a while. I will re-read it when I have the chance.

But for now, a quick question: do you, in fact, think that the model described in that post applies here, on Less Wrong?

Replies from: philh
comment by philh · 2023-04-26T21:48:06.518Z · LW(p) · GW(p)

(If this starts to be effort I will tap out, but briefly:)

  • It's been a long time since I read it too.
  • I don't think there's a specific thing I'd identify as "the model described in that post".
  • There's a hypothesis that forms an important core of the book and probably the review; but it's not the core of the reason I pointed to it.
  • I do expect bits of both the book and the review apply on LW, yes.
Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T22:24:58.406Z · LW(p) · GW(p)

Well, alright, fair enough.

Could you very briefly say more about what the relevance is, then? Is there some particular aspect of the linked review of which you think I should take note? (Or is it just that you think the whole review is likely to contain some relevant ideas, but you don’t necessarily have any specific parts or aspects in mind?)

Replies from: philh, philh
comment by philh · 2023-04-27T00:12:09.727Z · LW(p) · GW(p)

Sorry. I spent a few minutes trying to write something and then decided it was going to be more effort than I wanted, so...

I do have something in mind, but I apparently can't write it down off the cuff. I can gesture vaguely at the title of the book, but I suppose that's unlikely to be helpful. I don't have any specific sections in mind.

(I think I'm unlikely to reply again unless it seems exceptionally likely that doing so will be productive.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-27T00:14:47.129Z · LW(p) · GW(p)

Alright, no worries.

comment by philh · 2023-04-27T00:12:24.052Z · LW(p) · GW(p)
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T21:04:26.669Z · LW(p) · GW(p)

so what does this model predict about just like the average dinner party or college class

Dinner parties have hosts, who can do things like: ask a guest to engage or not engage in some behavior; ask a guest to leave if they’re disruptive or unwanted; not invite someone in the first place; in the extreme, call the police (having the legal standing to do so, as the owner of the dwelling where the party takes place).

College classes have instructors, who can do things like: ask a student to engage or not engage in some behavior; ask a student to leave if they’re disruptive; cause a student to be dropped from enrollment in the course; call campus security to eject the student (having the organizational and legal standing to do so, as an employee of the college, who is granted the mandate of running the lecture/course/etc.).

(I mean, really? A college class, of all things, as an example of a social space which supposedly doesn’t have any kind of official moderator? Forgive me for saying so, but this reply seems poorly thought through…)

My guess is 95% of the LessWrong user-base is capable of describing a model of how social norms function that does not have the property that only moderators of a space have any ability to enforce or set norms within that space …

I, too, am capable of describing such a model.

But, crucially, I do not think I am capable of describing a model where it is both the case (a) that moderators (i.e., people who have the formally, socially, and technically granted power to enforce rules and norms) exist, and (b) that non-moderators have any enforcement power that isn’t granted by the moderators, or sanctioned by the moderators, or otherwise is an expression of the moderators’ power.

On Less Wrong, there are moderators, and they unambiguously have a multitude of enforcement powers, which ordinary users lack. Ordinary users have very few powers: writing posts and comments, upvotes/downvotes, and bans from one’s posts.

Writing posts and comments isn’t anything at all like “enforcement” (given that moderators exist, and that users can ignore other users, and ban them from their posts).

Upvotes/downvotes are very slightly like “enforcement”. (But of course we’re not talking about upvotes/downvotes here.)

Banning a user from your posts is a bit more like “enforcement”. (But we’re definitely not talking about that here.)

Given the existence of moderators on Less Wrong, I do not, indeed, see any way to describe anything that I have ever done as “enforcement” of anything. It seems to me that such a claim is incoherent.

Replies from: habryka4
comment by habryka (habryka4) · 2023-04-26T21:23:21.366Z · LW(p) · GW(p)

But, crucially, I do not think I am capable of describing a model where it is both the case (a) that moderators (i.e., people who have the formally, socially, and technically granted power to enforce rules and norms) exist, and (b) that non-moderators have any enforcement power that isn’t granted by the moderators, or sanctioned by the moderators, or otherwise is an expression of the moderators’ power.

That too, I think 95% of the LessWrong user-base is capable of, so I will leave it to them.

One last reply: 

(I mean, really? A college class, of all things, as an example of a social space which supposedly doesn’t have any kind of official moderator? Forgive me for saying so, but this reply seems poorly thought through…)

Indeed, college classes (and classes in general) seem like an important case study, since in my experience it is very clear that only a fraction of the norms in those classes get set by the professor/teacher, and that clearly there are many other sources of norms and the associated enforcement of norms. 

Those bottom-up norms are a shared experience, since almost everyone went through high school and college, so they seem like a good reference point.

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T21:36:31.440Z · LW(p) · GW(p)

Indeed, college classes (and classes in general) seem like an important case study, since in my experience it is very clear that only a fraction of the norms in those classes get set by the professor/teacher, and that clearly there are many other sources of norms and the associated enforcement of norms.

Of course this is true; it is not just the instructor, but also the college administration, etc., that function as the setter and enforcer of norms.

But it sure isn’t the students!

(And this is even more true in high school. The students have no power to set any norms, except that which is given them by the instructor/administration/etc.—and even that rarely happens.)

Replies from: Linch
comment by Linch · 2023-04-26T22:50:26.302Z · LW(p) · GW(p)

Have you been to an American high school and/or watched at least one movie about American high schools?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T22:51:34.442Z · LW(p) · GW(p)

I have done both of those things, yes.

EDIT: I have also attended not one but several (EDIT 2: four, in fact) American colleges.

Replies from: Linch
comment by Linch · 2023-04-26T23:03:22.645Z · LW(p) · GW(p)

The plot of many high school movies often turns on what is and isn't socially acceptable to do. For example, Regina in Mean Girls enforced a number of rules on her clique, and attempted, with significant but not complete success, to enforce them on others.

comment by Said Achmiz (SaidAchmiz) · 2023-04-26T21:38:51.541Z · LW(p) · GW(p)

That too, I think 95% of the LessWrong user-base is capable of, so I will leave it to them.

I do think it would be useful for you to say how much time should elapse without a satisfactory reply by some representative members of this 95% before we can reasonably evaluate whether this prediction has been proven true.

Replies from: habryka4
comment by habryka (habryka4) · 2023-04-26T21:48:54.942Z · LW(p) · GW(p)

Oh, the central latent variable in my uncertainty here is "is anyone willing to do this?", not "is anyone capable of this?". My honest guess is the answer to that is "no", because this kind of conversation really doesn't seem fun, and we are 7 levels deep into a 400-comment post. 

My guess is that if you actively reach out and put effort into trying to get someone to explain it to you, by e.g. putting out a bounty, or making a top-level post, or somehow sending a costly signal that you are genuinely interested in understanding, then I do think there is a much higher chance of that, but I don't currently expect that to happen. 

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T22:06:41.241Z · LW(p) · GW(p)

You do understand, I hope, how this stance boils down to “we want you to stop doing a thing, but we won’t explain what that thing is; figure it out yourself”?

Replies from: habryka4
comment by habryka (habryka4) · 2023-04-26T22:13:20.127Z · LW(p) · GW(p)

No, it boils down to "we will enforce consistent rules and spend like 100+ hours trying to explain them if an established user is confused, and if that's not enough, then I guess that's life and we'll move on". 

Describing the collective effort of the Lightcone team as "unwilling to explain what the thing is" seems really quite inaccurate, given the really quite extraordinary amount of time we have spent over the years trying to get our models across. You can of course complain about the ineffectuality of our efforts to explain, but I do not think you can deny the effort, and I do not currently know what to do that doesn't involve many additional hours of effort. 

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T22:20:02.095Z · LW(p) · GW(p)

we will enforce consistent rules

Wait, what? Are you now claiming that there are rules which were allegedly violated here? Which rules are these?

Describing the collective effort of the Lightcone team as “unwilling to explain what the thing is” seems really quite inaccurate

I do not think you can deny the effort

I’ve been told (and only after much effort on my part in trying to get an answer) that the problem being solved here is something called “(implicit) enforcement of norms” on my part. I’ve yet to see any comprehensible (or even, really, seriously attempted) explanation of what that’s supposed to mean, exactly, and how any such thing can be done by a (non-moderator) user of Less Wrong. You’ve said outright that you refuse to attempt an explanation. “Unwilling to explain what the thing is” seems entirely accurate.

Replies from: habryka4
comment by habryka (habryka4) · 2023-04-26T22:34:36.267Z · LW(p) · GW(p)

Wait, what? Are you now claiming that there are rules which were allegedly violated here? Which rules are these?

The one we've spent 100+ hours trying to explain in this thread, trying to point to with various analogies and metaphors, and have been talking about for 5-plus years in terms of what the cost of your comments to the site has been. 

It does not surprise me that you cannot summarize them or restate them in a way that shows you understand them, which is why more effort on explaining them does not seem worth it. The concepts here are also genuinely kind of tricky, and we seem to be coming from very different perspectives and philosophies; while I do experience frustration, I can also see why this looks very frustrating for you. 

I agree that I personally haven't put a ton of effort in at this specific point in time (though like 2-3 hours for my comments with Zack, which seem related), though I have spent many dozens of hours in past years trying to point to what seem to me to be the same disagreements. 

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T22:51:05.608Z · LW(p) · GW(p)

Wait, what? Are you now claiming that there are rules which were allegedly violated here? Which rules are these?

The one we’ve spent 100+ hours trying to explain in this thread, trying to point to with various analogies and metaphors, and have been talking about for 5-plus years in terms of what the cost of your comments to the site has been.

But which are not, like… stated anywhere? Like, in some sort of “what are the rules of this website” page, which explains these rules?

Don’t you think that’s an odd state of affairs, to put it mildly?

The concept of “ignorance of the law is no excuse” was mentioned earlier in this discussion, and it’s a reasonable one in the real world, where you generally can be aware of what the law is, if you’re at all interested in behaving lawfully[1]. If you get a speeding ticket, and say “I didn’t know I was exceeding the speed limit, officer”, the response you’ll get is “signs are posted; if you didn’t read them, that’s no excuse”. But that’s because the signs are, in fact, posted. If there were no signs, then it would just be a case of the police pulling over whoever they wanted, and giving them speeding tickets arbitrarily, regardless of their actual speed.

You seem to be suggesting that Less Wrong has rules (not “norms”, but rules!), which are defined only in places like “long, branching, deeply nested comment threads about specific moderation decisions” and “scattered over years of discussion with some specific user(s)”, and which are conceptually “genuinely kind of tricky”; but that violating these rules is punishable, like any rules violation might be.

Does this seem to you like a remotely reasonable way to have rules?


  1. But note that this, famously, is no longer true in our society today, which does indeed have some profoundly unjust consequences. ↩︎

Replies from: habryka4
comment by habryka (habryka4) · 2023-04-26T23:22:22.560Z · LW(p) · GW(p)

I think we've tried pretty hard to communicate our target rules in this post and previous ones. 

The best operationalization of them is in this comment, as well as the moderation warning I made ~5 years ago: https://www.lesswrong.com/posts/9DhneE5BRGaCS2Cja/moderation-notes-re-recent-said-duncan-threads?commentId=y6AJFQtuXBAWD3TMT [LW(p) · GW(p)] 

These are in a pinned moderator-top-level comment on a moderation post that was pinned for almost a full week, so I don't think this counts as being defined in "long, branching, deeply nested comment threads about specific moderation decisions". I think we tried pretty hard here to extract the relevant decision-boundaries and make users aware of how we plan to make decisions going forward. 

We are also thinking about how to think about having site-wide moderation norms and rules that are more canonical, though I share Ruby's hesitations about that: https://www.lesswrong.com/posts/gugkWsfayJZnicAew/should-lw-have-an-official-list-of-norms [LW · GW

I don't know of a better way to have rules than this. As I said in a thread to Zack, case law seems to me to be the only viable way of creating moderation guidelines and rules on a webforum like this, and this means that yes, a lot of the rules will be defined in reference to a specific litigated instance of something that seemed to us to have negative consequences. This approach also seems to work pretty well for lots of legal systems in the real world, though yeah, it does sure produce a body of law that you can only navigate successfully by studying the lines revealed through past litigation.

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T23:53:38.161Z · LW(p) · GW(p)

EDIT: Why do my comments keep double-posting? Weird.

comment by Said Achmiz (SaidAchmiz) · 2023-04-26T23:52:41.321Z · LW(p) · GW(p)

The best operationalization of them is in this comment, as well as the moderation warning I made ~5 years ago: https://www.lesswrong.com/posts/9DhneE5BRGaCS2Cja/moderation-notes-re-recent-said-duncan-threads?commentId=y6AJFQtuXBAWD3TMT [LW(p) · GW(p)]

… that comment is supposed to communicate rules?!

It says:

In particular, if either Said makes a case that he can obey the spirit of “don’t imply people have an obligation to engage with his comments”; or, someone suggests a letter-of-the-law that actually accomplishes the thing I’m aiming at in a more clear-cut way, I’d feel fairly good about revoking the rate-limit.

The only thing that looks like a rule here is “don’t imply people have an obligation to engage with [your] comments”. Is that the rule you’ve been talking about? (I asked this of Raemon and his answer was basically “yes but not only”, or something like that.)

And the rest pretty clearly suggests that there isn’t a clearly defined rule here.

The mod note from 5 years ago seems to me to be very clearly not defining any rules.

Here’s a question: if you asked ten randomly selected Less Wrong members: “What are the rules of Less Wrong?”—how many of them would give the correct answer? Not as a link to this or that comment, but in their own words (or even just by quoting a list of rules, minus the commentary)?

(What is the correct answer?)

How many of their answers would even match one another?

As I said in a thread to Zack, case law seems to me to be the only viable way of creating moderation guidelines and rules on a webforum like this, and this means that yes, a lot of the rules will be defined in reference to a specific litigated instance of something that seemed to us to have negative consequences. This approach also seems to work pretty well for lots of legal systems in the real world, though yeah, it does sure produce a body of law that you can only navigate successfully by studying the lines revealed through past litigation.

Yes, of course, but the way this works in real-world legal systems is that first there’s a law, and then there’s case law which establishes precedent for its application. (And, as you say, it hardly makes it easy to comply with the law. Perhaps I should retain an attorney to help me figure out what the rules of Less Wrong are? Do I need to have a compliance department…?) Real-world legal systems in well-functioning modern countries generally don’t take the approach of “we don’t have any written-down laws; we’ll legislate by judgment calls in each case; even after doing that, we won’t encode those judgments into law; there will only be precedent and judicial opinion, and that will be the whole of the law”.[1]


  1. Have there been societies in the past which have worked like this? I don’t know. Maybe we can ask David Friedman? ↩︎

comment by Said Achmiz (SaidAchmiz) · 2023-04-24T17:43:32.824Z · LW(p) · GW(p)

Do I understand you correctly as saying that the problem, specifically, is… that people reading my comments might, or do, get a mistaken impression that there exists on Less Wrong some sort of social norm which holds that authors have a social obligation to respond to comments on their posts?


That aside, I have questions about this rate limit:

  • Does it apply to all posts of any kind, written by anyone? More specifically:
    • Does it apply to both personal and frontpage posts?
    • Does it apply to posts written by moderators? Posts written about me (or specifically addressing me)? Posts written by moderators about me?
    • Does it apply to this post? (I assume that it must not, since you mention that you’d like me to make a case that so-and-so, you say “I am interested in what Said actually prefers here”, etc., but just want to confirm this) EDIT: See below
    • Does it apply to “open thread” type posts (where the post itself is just a “container”, so to speak, and entirely different conversations may be happening under different top-level comments)?
    • Does it apply to my own posts? (That would be very strange, of course, but it wouldn’t be the strangest edge case that’s ever been left unhandled in a feature implementation, so seems worth checking…)
    • Does it apply retroactively to existing posts (including very old posts), or only new posts going forward?
  • Is there any way for a post author to disable this rate limit, or opt out of it?
  • Does the rate limit reset at a specific time each week, or is there simply a check for whether 3 posts have been written in the period starting one week before current time?
  • Is there any rate limit on editing comments, or only posting new ones? (It is presumably not the intent to have the rate limit triggered by fixing a typo, for instance…)
  • Is there a way for me to see the status of the rate limit prior to posting, or do I only find out whether the limit’s active when I try to post a comment and get an error?
  • Is there any UI cue to inform readers or other commenters (including a post’s author) that I can’t reply to a comment of theirs, e.g., due to the rate limit?

ETA: After attempting to post this comment last night, I received a message informing me that I would not be able to do so until some hours in the then-future. This answers the crossed-out question above, I suppose. Unfortunately, it also makes the asides about wanting to know what I think on this topic… well, somewhat farcical, quite frankly.
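(As an aside, the question about reset semantics above distinguishes two standard rate-limit designs: a fixed window that resets at the same time each week, and a rolling window measured backward from the current moment. A minimal sketch of the difference, with hypothetical function names and the 3-comments-per-week figure used purely for illustration, not as a description of LessWrong's actual implementation:)

```python
from datetime import datetime, timedelta

# Illustrative limits only; not LessWrong's actual configuration.
COMMENT_LIMIT = 3
WINDOW = timedelta(weeks=1)

def can_comment_rolling(comment_times, now):
    """Rolling window: count comments in the week ending at `now`."""
    recent = [t for t in comment_times if now - t < WINDOW]
    return len(recent) < COMMENT_LIMIT

def can_comment_fixed(comment_times, now, epoch):
    """Fixed window: the counter resets at the same time each week,
    measured in whole weeks elapsed since some fixed `epoch`."""
    window_start = epoch + ((now - epoch) // WINDOW) * WINDOW
    recent = [t for t in comment_times if t >= window_start]
    return len(recent) < COMMENT_LIMIT
```

Under the rolling design, three comments in the past seven days always block a fourth; under the fixed design, the same three comments may not count at all if the weekly reset has since passed, which is exactly why the two behave differently from a user's perspective.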

Replies from: Raemon, Raemon, Raemon, Raemon, habryka4
comment by Raemon · 2023-04-24T20:31:54.541Z · LW(p) · GW(p)

ETA: After attempting to post this comment last night, I received a message informing me that I would not be able to do so until some hours in the then-future. This answers the crossed-out question above, I suppose. Unfortunately, it also makes the asides about wanting to know what I think on this topic… well, somewhat farcical, quite frankly.

Aww christ I am very sorry about this. I had planned to ship the "posts can be manually overridden to ignore rate limiting" feature first thing this morning and apply it to this post, but I forgot that you'd still have made some comments less than a week ago which would block you for a while. I agree that was a really terrible experience and I should have noticed it.

The feature is getting deployed now and will probably be live within a half hour. 

For now, I'm manually applying the "ignore rate limit" flag to posts that seem relevant. (I'll likely do a migration backfill on all posts by admins that are tagged "Site Meta". I haven't made a call yet about Open Threads)

I think some of your questions are answered in the previous comment:

Meanwhile some additional features I haven’t shipped yet, which I can’t make promises about, but which I personally think would be good to ship soon include:

  • [ETA: should be live soon] There’s at least a boolean flag for individual posts so authors can allow “rate limited people can comment freely”, and probably also a user-setting for this. Another possibility is a user-specific whitelist, but that’s a bit more complicated and I’m not sure if there’s anyone who would want that who wouldn’t want the simpler option.
    • I’d ideally have this flag set on this post, and probably on other moderation posts written by admins.
  • Rate-limited users in a given comment section have a small icon that lets you know they’re rate-limited, so you have reasonable expectations of when they can reply.
  • Updating the /moderation page to list rate limited users, ideally with some kind of reason / moderation-warning.
  • Updating rate limits to ensure that users can comment as much as they want on their own posts (we made a PR for this change a week ago and haven’t shipped it yet largely because this moderation decision took a lot of time)

I'll write a more thorough response after we've finished deploying the "ignoreRateLimits flag for posts" PR.

Replies from: Ruby
comment by Ruby · 2023-04-25T01:25:59.039Z · LW(p) · GW(p)

Site Meta posts contain a lot more than moderation, so not sure we should do that.

comment by Raemon · 2023-04-25T19:15:03.779Z · LW(p) · GW(p)

Do I understand you correctly as saying that the problem, specifically, is… that people reading my comments might, or do, get a mistaken impression that there exists on Less Wrong some sort of social norm which holds that authors have a social obligation to respond to comments on their posts?

Basically yes, although I note I said a lot of other words here that  were all fairly important, including the links back to previous comments. For example, it's important that I think you are factually incorrect about there being "normatively correct general principles" that people who don't engage with your comments "should be interpreted as ignorant".

(While I recall you explicitly disclaiming such an obligation in some other recent comments... if you don't think there is some kind of social norm about this, why did you previously use phrasing like "there is always such an obligation" and "Then they shouldn’t post on a discussion forum, should they? What is the point of posting here, if you’re not going to engage with commenters?". Even if you think most of your comments don't have the described effect, I think the linked comment straightforwardly implies a social norm. And I think the attitude in that comment shines through in many of your other comments)

I think my actual crux is "somehow, at the end of the day, people feel comfortable ignoring and/or downvoting your comments if they don't think they'll be productive to engage with." 

I believe "Said's commenting style actively pushes against this in a norm-enforcing-feeling way", but, as noted in the post, I'm still kind of confused about that (and I'll say explicitly here: I am still not sure I've named the exact problem). I said a whole lot of words about various problems and caveats and how they fit together, and I don't think you can simplify it down to "the problem is X". I said at the end, a major crux is "Said can adhere to the spirit of 'don’t imply people have an obligation to engage with your comments'", where "spirit" is doing some important work of indicating the problem is fuzzy.

We've given you a ton of feedback about this over 5-6 years. I'm happy to talk or answer questions for a couple more days if the questions look like they're aimed at 'actually figure out how to comply with the spirit of the request', but not more discussion of 'is there a problem here from the moderator's perspective?'.

I understand (and respect) that you think the moderators are wrong in several deep ways here, and I do honestly think it's good/better for you to stick around with a generator of thoughts and criticism that's somewhat uncorrelated with the site admin judgment (but not free rein to rehash it out in subtle conflict in other people's comment sections).

I'm open (in the long term) to arguments about whether our entire moderation policy is flawed, but that's outside the scope of this moderation decision; you should argue about that in top-level posts and/or in posts by Zack/etc if it's important to you.

[Random note that is probably implied but I want to make explicit: "enforcing standards that the LW community hasn't collectively opted into in other people's threads" is also essentially the criticism I'd make of many past comments of Duncan's, although he goes about it in a pretty different way.]

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-25T22:13:14.877Z · LW(p) · GW(p)

Basically yes, although I note I said a lot of other words here that were all fairly important, including the links back to previous comments. For example, it’s important that I think you are factually incorrect about there being “normatively correct general principles” that people who don’t engage with your comments “should be interpreted as ignorant”.

Well, no doubt most or all of what you wrote was important, but by “important” do you specifically mean “forms part of the description of what you take to be ‘the problem’, which this moderation action is attempting to solve”?

For example, as far as the “normatively correct general principles” thing goes—alright, so you think I’m factually incorrect about this particular thing I said once.[1] Let’s take for granted that I disagree. Well, and is that… a moderation-worthy offense? To disagree (with the mods? with the consensus—established how?—of Less Wrong? with anyone?) about what is essentially a philosophical claim? Are you suggesting that your correctness on this is so obvious that disagreeing can only constitute either some sort of bad faith, or blameworthy ignorance? That hardly seems true!

Or, take the links. One of them is clearly meant to be an example of the thing you described (and which I quoted). The others… don’t seem to be.[2] Are they just examples of things where you disagree with me? Again, fine and well, but is “being (allegedly) wrong about some non-obvious philosophical point” a moderation-worthy offense…? How do these other links fit into a description of what problem you’re solving?

And, perhaps just as importantly… how does any of this fit into… well, anything that has happened recently? All of your links are to discussions that took place three years ago. What is the connection of any of that to recent events? Are you suggesting that I have recently written comments that would give people the impression that Less Wrong has a social norm that imputes on post authors an obligation to respond to comments on their posts?

I ask these things not because I want to persuade you that there isn’t a problem, per se (I think there are many problems but of course my opinion differs from yours about what they are)—but, rather, because I can hardly comply with the rules, either in letter or in spirit or in any other way, when I don’t know what the rules are. From my perspective, what I seem to see the mods doing is the equivalent of the police stopping a person who’s walking down the street, saying “we’re taking you in for speeding”, and, in response to the confused citizen’s protests, explaining that he got a speeding ticket three years ago, and now they’re arresting him for exceeding the speed limit. Is this a long-delayed punishment? Is there a more recent offense? Is there some other reason for the arrest? Or what?

I think my actual crux “somehow, at the end of the day, people feel comfortable ignoring and/or downvoting your comments if they don’t think they’ll be productive to engage with.”

I think that people should feel comfortable ignoring and/or downvoting anyone’s comments if they don’t think engagement will be productive! Certainly I should not be any sort of exception to this. (Why in the world would I be? Of course you should engage only if you have some expectation that engaging will be productive, and not otherwise!)

If I write a comment and you think it is a bad comment (useless, obviously wrong, etc.), by all means downvote and ignore. Why not? And if I write another comment that says “you have an obligation to reply!”—I wouldn’t say that, because I don’t think that, but let’s say that I did—downvote and ignore that comment, too! Do this no matter who the commenter is!

Anyhow, if the problem really is essentially as I’ve summarized it, plus or minus some nuances and elaborations, then:

  1. I really don’t see what any recent events have to do with anything, or how the rate limit solves it, or… really, this entire situation perplexes me, from that perspective. But,

  2. If the worry is that other Less Wrong participants might get the wrong idea about site norms from my comments, then let me assure you that my comments certainly shouldn’t be taken to imply that said norms are anything other than what the moderators say they are. If anyone gets any other impression from my comments, that can only be a misunderstanding. I solemnly promise that if anyone questions me on this point (i.e., asks whether I am claiming the existence of some norms which the moderators have disclaimed), I will, in response, clearly reaffirm this view. (I encourage anyone, moderators or otherwise, to link to this comment in answer to any commenters or authors who seem at all confused on this point.)

Is that… I mean, does that solve the problem…?


  1. Actually, you somewhat misconstrue the comment, by taking it out of context. That’s perhaps not too important, but worth noting. In any case, it’s a comment I wrote three years ago, in the middle of a long discussion, and as part of a longer and offhandedly-written description, spread over a number of comments, of my view—and which, moreover, takes its phrasing directly from the comment it was a reply to. These are hardly ideal conditions for expressing nuances of meaning. My view is that, when writing comments like this in the middle of a long discussion, it is neither necessary nor desirable to agonize over whether the phrasing and formulation is ideal, because anyone who disagrees or misunderstands can just reply to indicate that, and the confusion or disagreement can be hammered out in the replies. (And this is largely what happened in the given case.[3]) ↩︎

  2. In particular, I can’t help but note that you link to a sub-thread which begins with me saying “This comment is a tangent, and I haven’t decided yet if it’s relevant to my main points or just incidental—”, i.e., where I pretty clearly signal that engagement isn’t necessarily critical, as far as the main discussion goes. ↩︎

  3. Perhaps you missed it, but I did write a comment [LW(p) · GW(p)] in that discussion where I very explicitly wrote that “I’m not saying that there’s a specific obligation for a post author to post a reply comment, using the Less Wrong forum software, directly to any given comment along the lines I describe”. Was that comment, despite my efforts, somehow unclear? That’s possible! These things happen. But is that a moderation-worth offense…? ↩︎

Replies from: Benito, Raemon
comment by Ben Pace (Benito) · 2023-04-26T01:35:23.683Z · LW(p) · GW(p)

For example, as far as the “normatively correct general principles” thing goes—alright, so you think I’m factually incorrect about this particular thing I said once.[1] [LW · GW] Let’s take for granted that I disagree. Well, and is that… a moderation-worthy offense? To disagree (with the mods? with the consensus—established how?—of Less Wrong? with anyone?) about what is essentially a philosophical claim? Are you suggesting that your correctness on this is so obvious that disagreeing can only constitute either some sort of bad faith, or blameworthy ignorance? That hardly seems true!

The philosophical disagreement is related to, but not itself, the thing I believe Ray is saying is bad. The claim I understand Ray to be making is that he believes you gave a false account of the site-wide norms about what users are obligated to do, and that this is reflective of you otherwise implicitly enforcing such a norm many times when you comment on posts. Enforcing norms on behalf of a space that you don't have buy-in for, and that the space would reject, tricks people into wasting their time and energy trying to be good citizens of the space in a way that isn't helping and isn't being asked of them.

If you did so, I think that behavior ought to be clearly punished in some way. I think this regardless of whether you earnestly believed that an obligation-to-reply-to-comments was a site-wide norm, and also regardless of whether you were fully aware that you were doing so. I think it's often correct to issue a blanket punishment of a costly behavior even on the occasions that it is done unknowingly, to ensure that there is a consistent incentive against the behavior — similar to how it is typically illegal to commit a crime even if you aren't aware what you did was a crime.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T05:20:08.395Z · LW(p) · GW(p)

The claim I understand Ray to be making is that he believes you gave a false account of the site-wide norms about what users are obligated to do

Is that really the claim? I must object to it, if that’s so. I don’t think I’ve ever made any false claims about what social norms obtain on Less Wrong (and to the extent that some of my comments were interpreted that way, I was quick to clearly correct that misinterpretation).

Certainly the “normatively correct general principles” comment didn’t contain any such false claims. (And Raemon does not seem to be claiming otherwise.) So, the question remains: what exactly is the relevance of the philosophical disagreement? How is it connected to any purported violations of site rules or norms or anything?

… and that this is reflective of you otherwise implicitly enforcing such a norm many times that you comment on posts

I am not sure what this means. I am not a moderator, so it’s not clear to me how I can enforce any norm. (I can exemplify conformance to a norm, of course, but that, in this case, would be me replying to comments on my posts, which is not what we’re talking about here. And I can encourage or even demand conformance to some falsely-claimed norm. But for me to enforce anything seems impossible as a purely technical matter.)

If you did so, I think that behavior ought to be clearly punished in some way.

Indeed, if I had done this, then some censure would be warranted. (Now, personally, I would expect that such censure would start with a comment from a moderator, saying something like: “<name of my interlocutor>, to be clear, Said is wrong about what the site’s rules and norms are; there is no obligation to respond to commenters. Said, please refrain from misleading other users about this.” Then subsequent occurrences of comments which were similarly misleading might receive some more substantive punishment, etc. That’s just my own, though I think fairly reasonable, view of how this sort of moderation challenge should be approached.)

But I think that, taking the totality of my comments in the linked thread, it is difficult to support the claim that I somehow made false claims about site rules or norms. It seems to me that I was fairly clearly talking about general principles—about epistemology, not community organization.

Now, perhaps you think that I did not, in fact, make my meaning clear enough? Well, as I’ve said, these things do happen. Certainly it seems to me like step one to rectify the problem, such as it is, would be just to make a clear ex cathedra statement about what the rules and norms actually are. That mitigates any supposed damage. (Was this done? I don’t recall that it was. But perhaps I missed it.) Then there can be talk of punishment.[1]

But, of course, there already was a moderation warning issued for the incident in question. Which brings us back to the question of what it has to do with the current situation (and to my “arrest for a speeding ticket issued three years ago” analogy).

P.S.:

I think this regardless of whether you earnestly believed that an obligation-to-reply-to-comments was a site-wide norm

To be maximally clear: I neither believed nor (as far as I can recall) claimed this.


  1. Although it seems to me that to speak in terms of “punishment”, when the offense (even taking as given that the offense took place at all) is something so essentially innocent as accidentally mis-characterizing an informal community norm, is, quite frankly, bizarrely harsh. I don’t think that I’ve ever participated in any other forum with such a stringent approach to moderation. ↩︎

comment by Raemon · 2023-04-25T22:48:46.239Z · LW(p) · GW(p)

For a quick answer connecting the dots between "what does the recent Duncan/Said conflict have to do with Said's past behavior?": I think your behavior in the various you/Duncan threads was bad in basically the same way we gave you a mod warning [LW(p) · GW(p)] about 5 years ago, and also similar to a preliminary warning we gave you 6 years ago (in intercom, which ended in us deciding to take no action at the time)

(i.e. some flavor of aggressiveness/insultingness, along with demanding more work from others than you were bringing yourself).

As I said, I cut you some slack for it because of some patterns Duncan brought to the table, but not that much slack. 

The previous mod warning said "we'd ban you for a month if you did it again". I don't really feel great about that, since over the past 5 years there have been various comments that flirted with the same behavior, and the cost of evaluating it each time is pretty high.

If the worry is that other Less Wrong participants might get the wrong idea about site norms from my comments, then let me assure you that my comments certainly shouldn’t be taken to imply that said norms are anything other than what the moderators say they are. If anyone gets any other impression from my comments, that can only be a misunderstanding. I solemnly promise that if anyone questions me on this point (i.e., asks whether I am claiming the existence of some norms which the moderators have disclaimed), I will, in response, clearly reaffirm this view. (I encourage anyone, moderators or otherwise, to link to this comment in answer to any commenters or authors who seem at all confused on this point.)

I will think on whether this changes anything for me. I do think it's helpful; offhand I don't feel that it completely (or obviously more than 50%) solves the problem, but I do appreciate it and will think on it.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-25T23:22:47.740Z · LW(p) · GW(p)

… bad in basically the same way we gave you a mod warning [LW(p) · GW(p)] about 5 years ago …

I wonder if you find this comment by Benquo [LW(p) · GW(p)] (i.e., the author of the post in question; note that this comment was written just months after that post) relevant, in any way, to your views on the matter?

Replies from: Raemon
comment by Raemon · 2023-04-26T02:41:09.484Z · LW(p) · GW(p)

Yeah I do find that comment/concept important. I think I was basically already counting that class of thing in the list of positive things I'd mentioned elsethread, but yes, I am grateful to you for that. (Benquo being one to say it in that context is a bit more evidence of its weight, which I had missed before, but I do think I was already weighting the concept approximately the right amount for the right reasons, partly from having already generally updated on some parts of the Benquo worldview.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T04:51:07.204Z · LW(p) · GW(p)

Please note, my point in linking that comment wasn’t to suggest that the things Benquo wrote are necessarily true and that the purported truth of those assertions, in itself, bears on the current situation. (Certainly I do agree with what he wrote—but then, I would, wouldn’t I?)

Rather, I was making a meta-level point. Namely: your thesis is that there is some behavior on my part which is bad, and that what makes it bad is that it makes post authors feel… bad in some way (“attacked”? “annoyed”? “discouraged”? I couldn’t say what the right adjective is, here), and that as a consequence, they stop posting on Less Wrong. And as the primary example of this purported bad behavior, you linked the discussion in the comments of the “Zetetic Explanation” post by Benquo (which resulted in the mod warning you noted).

But the comment which I linked has Benquo writing, mere months afterward, that the sort of critique/objection/commentary which I write (including the sort which I wrote in response to his aforesaid post) is “helpful and important”, “very important to the success of an epistemic community”, etc. (Which, I must note, is tremendously to Benquo’s credit. I have the greatest respect for anyone who can view, and treat, their sometime critics in such a fair-minded way.)

This seems like very much the opposite of leaving Less Wrong as a result of my commenting style.

It seems to me that when the prime example you provide of my participation in discussions on Less Wrong purportedly being the sort of thing that drive authors away, actually turns out to be an example of exactly the opposite—of an author (whose post I criticized, in somewhat harsh terms) fairly soon (months) thereafter saying that my critical comments are good and important to the community and that I should continue…

… well, then regardless of whether you agree with the author in question about whether or not my comments are good/important/whatever, the fact that he holds this view casts very serious doubt on your thesis. Wouldn’t you agree?

(And this, note, is an author who has written many posts, many of them quite highly upvoted, and whose writings I have often seen cited in all sorts of significant discussions, i.e., one who has contributed substantially to Less Wrong.)

Replies from: Raemon, Duncan_Sabien
comment by Raemon · 2023-04-26T17:03:52.863Z · LW(p) · GW(p)

The reason it's not additional evidence to me is that I, too, find value in the comments you write for the reasons Benquo states, despite also finding them annoying at the time. So, Benquo's response here seems like an additional instance of my viewpoint here, rather than a counterexample. (though I'm not claiming Benquo agrees with me on everything on this domain)

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-26T05:16:17.395Z · LW(p) · GW(p)

… well, then regardless of whether you agree with the author in question about whether or not my comments are good/important/whatever, the fact that he holds this view casts very serious doubt on your thesis. Wouldn’t you agree?

Said is asking Ray, not me, but I strongly disagree.

Point 1 is that a black raven is not strong evidence against white ravens. (Said knows this, I think.)

Point 2 is that a behavior which displeases many authors can still be pleasant or valuable to some authors. (Said knows this, I think.)

Point 3 is that benquo's view on even that specific comment is not the only author-view that matters; benquo eventually being like "this critical feedback was great" does not mean that other authors watching the interaction at the time did not feel "ugh, I sure don't want to write a post and have to deal with comments like this one." (Said knows this, I think.)

(Notably, benquo once publicly stated that he suspected a rough interaction would likely have gone much better under Duncan moderation norms specifically; if we're updating on benquo's endorsements then it comes out to "both sets of norms useful," presumably for different things.)

I'd say it casts mild doubt on the thesis, at best, and that the most likely resolution is that Ray ends up feeling something like "yeah, fair, this did not turn out to be the best example," not "oh snap, you're right, turns out it was all a house of cards."

(This will be my only comment in this chain, so as to avoid repeating past cycles.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T06:01:51.016Z · LW(p) · GW(p)

Point 1 is that a black raven is not strong evidence against white ravens. (Said knows this, I think.)

A black raven is, indeed, not strong evidence against white ravens. But that’s not quite the right analogy. The more accurate analogy would go somewhat like this:

Alice: White ravens exist!
Bob: Yeah? For real? Where, can I see?
Alice (looking around and then pointing): Right… there! That one!
Bob (peering at the bird in question): But… that raven is actually black? Like, it’s definitely black and not white at all.

Now not only is Bob (once again, as he was at the start) in the position of having exactly zero examples of white ravens (Alice’s one purported example having been revealed to be not an example at all), but—and perhaps even more importantly!—Bob has reason to doubt not only Alice’s possession of any examples of her claim (of white ravens existing), but her very ability to correctly perceive what color any given raven is.

Now if Alice says “Well, I’ve seen a lot of white ravens, though”, Bob might quite reasonably reply: “Have you, though? Really? Because you just said that that raven was white, and it is definitely, totally black.” What’s more, not only Bob but also Alice herself ought rightly to significantly downgrade her confidence in her belief in white ravens (by a degree commensurate with how big a role her own supposed observations of white ravens have played in forming that belief).

Point 2 is that a behavior which displeases many authors can still be pleasant or valuable to some authors. (Said knows this, I think.)

Just so. But, once again, we must make our analysis more specific and more precise in order for it to be useful. There are two points to make in response to this.

First is what I said above: the point is not just that the commenting style/approach in question is valuable to some authors (although even that, by itself, is surely important!), but that it turns out to be valuable specifically to the author who served as an—indeed, as the—example of said commenting style/approach being bad. This calls into question not just the thesis that said approach is bad in general, but also the weight of any purported evidence of the approach’s badness, which comes from the same source as the now-controverted claim that it was bad for that specific author.

Second is that not all authors are equal.

Suppose, for example, that dozens of well-respected and highly valued authors all turned out to condemn my commenting style and my contributions, while those who showed up to defend me were all cranks, trolls, and troublemakers. It would still be true, then, to say that “my comments are valuable to some authors but displease others”, but of course the views of the “some” would be, in any reasonable weighting, vastly and overwhelmingly outweighed by the views of the “others”.

But that, of course, is clearly not what’s happening. And the fact that Benquo is certainly not some crank or troll or troublemaker, but a justly respected and valued contributor, is therefore quite relevant.

Point 3 is that benquo’s view on even that specific comment is not the only author-view that matters; benquo eventually being like “this critical feedback was great” does not mean that other authors watching the interaction at the time did not feel “ugh, I sure don’t want to write a post and have to deal with comments like this one.” (Said knows this, I think.)

First, for clarity, let me note that we are not talking (and Benquo was not talking) about a single specific comment, but many comments—indeed, an entire approach to commenting and forum participation. But that is a detail.

It’s true that Benquo’s own views on the matter aren’t the only relevant ones. But they surely are the most relevant. (Indeed, it’s hard to see how one could claim otherwise.)

And as far as “audience reactions” (so to speak) go, it seems to me that what’s good for the goose is good for the gander. Indeed, some authors (or potential authors) reading the interaction might have had the reaction you describe. But others could have had the opposite reaction. (And, judging by the comments in that discussion thread—as well as many other comments over the years—others in fact did have the opposite reaction, when reading that discussion and numerous others in which I’ve taken part.) What’s more, it is even possible (and, I think, not at all implausible) that some authors read Benquo’s months-later comment and thought “you know, he’s right”.

(Notably, benquo once publicly stated that he suspected a rough interaction would likely have gone much better under Duncan moderation norms specifically; if we’re updating on benquo’s endorsements then it comes out to “both sets of norms useful,” presumably for different things.)

Well, as I said in the grandparent comment, updating on Benquo’s endorsement is exactly what I was not suggesting that we do. (Not that I am suggesting the opposite—not updating on his endorsement—either. I am only saying that this was not my intended meaning.)

Still, I don’t think that what you say about “both sets of norms useful” is implausible. (I do not, after all, take exception to all of your preferred norms—quite the contrary! Most of them are good. And an argument can be made that even the ones to which I object have their place. Such an argument would have to actually be made, and convincingly, for me to believe it—but that it could be made, seems to me not to be entirely out of the question.)

I’d say it casts mild doubt on the thesis, at best, and that the most likely resolution is that Ray ends up feeling something like “yeah, fair, this did not turn out to be the best example,” not “oh snap, you’re right, turns out it was all a house of cards.”

Well, as I’ve written [LW(p) · GW(p)], to the extent that the convincingness of an argument for some claim rests on examples (especially if it’s just one example), the purported example(s) turning out to be no such thing does, indeed, undermine the whole argument. (Especially—as I note above—insofar as that outcome also casts doubt on whatever process resulted in us believing that raven to have been white in the first place.)

comment by Raemon · 2023-04-25T02:26:54.180Z · LW(p) · GW(p)

Answering some other questions:

By default, the rate limit applies to all posts, unless we've made an exception for it. There are two exceptions to it:

1. I just shipped the "ignore rate limits" flag on posts, which authors or admins can set so that a given post allows rate-limited users to comment without restriction.

2. I haven't shipped yet, but expect within the next day to ship "rate-limited authors can comment on their own posts without restriction." (for the immediate future this just applies to authors, I expect to ship something that makes it work for coauthors)

In general, we are starting by rolling out the simplest versions of the rate-limiting feature (which is being used on many users, not just you), and solving problems as we notice them. I acknowledge this makes for some bad experiences along the way. I think I stand by that decision because I'm not even sure rate limits will turn out to work as a moderator tool, and investing like 3 months of upfront work ironing out the bugs first doesn't seem like the right call. 

For the general question of "whether a given such-and-such post will be rate limited", the answer will route through "will individual authors choose to set "ignoreRateLimit", and/or will site admins choose to do it?". 

Ruby and I have some disagreements on how important it is to set the flag on moderation posts. I personally think it makes sense to be extra cautious about limiting people's ability to speak in discussions that will impact their future ability to speak, since those can snowball and I think people are rightly wary of that.  There are some other tradeoffs important to @Ruby [LW · GW], which I guess he can elaborate on if he wants. 

For now, I'm toggling on the ignoreRateLimits flag on most of my own moderation posts (I've currently done so for LW Team is adjusting moderation policy [LW · GW] and "Rate limiting" as a mod tool [LW · GW])

Other random questions:

  • Re: Open threads – I haven't made a call yet, but I'm leaving the flag disabled/rate-limited-normally for now. 
  • There is no limit to rate-limited-people editing their own comments. We might revisit it if it's a problem but my current guess is rate-limitees editing their comments is pretty fine.
  • The check happens based on the timestamp of your last comment (it works via fetching comments within the time window and seeing if there are more than the allotted amount)
  • On LessWrong.com (but presumably not greaterwrong, atm) it should inform you that you're not able to comment before you get started. 
  • On LessWrong.com, it will probably (later, but not yet, not sure whether we'll get to it this week) show an indicator that a commenter has been rate limited. (It's fairly easy to do this when you open a comment-box to reply to them; there are some performance concerns for checking-to-display it on …)
  • I plan to add a list of rate-limited users to lesswrong.com/moderation. I think there's a decent chance that goes live within a day or so. 
Replies from: Ruby
comment by Ruby · 2023-04-25T03:20:13.385Z · LW(p) · GW(p)

Ruby and I have some disagreements on how important it is to set the flag on moderation posts.
 

A lot of this is that the set of "all moderation posts" covers a wide range of topics, and the potential set of "all rate limited users" might include a wide diversity of users, making me reluctant to commit upfront to exempting moderation posts from rate limits across the board.

The concern about excluding people from conversations that affect whether they get to speak is a valid consideration, but I think there are others too. Chiefly, people are likely rate limited primarily because they get in the way of productive conversation, and insofar as I care about moderation conversations going well, I might want to continue to exclude rate-limited users there.

Note that there are ways, albeit with friction, for people to get to weigh in on moderation questions freely. If it seemed necessary, I'd be down with creating special un-rate-limited side-posts for moderation posts.


I am realizing that what seems reasonable here will depend on your conception of rate limits. A couple of conceptions you might have:

  1. You're currently not producing stuff that meets the bar for LessWrong, but you're writing a lot, so we'll rate limit you as a warning with teeth to up your quality.
  2. We would have / are close to banning you, however we think rate limits might serve either as
    1. a sufficient disincentive against the actions we dislike
    2. a restriction that simply stops you getting into unproductive things, e.g. Demon Threads

Regarding 2., a banned user wouldn't get to participate in moderation discussions either, so under that frame, it's not clear rate-limited users should get to. I guess it really depends on whether it was more of a warning / light rate limit or something more severe, close to an actual ban.

I can say more here, not exactly a complete thought. Will do so if people are interested.

comment by Raemon · 2023-04-24T21:31:26.051Z · LW(p) · GW(p)

I just shipped the "ignore rate limit" flag for posts, and removed the rate limit for this post. All users can set the flag on individual posts. 

Currently they have to set it for each individual post. I think it's moderately likely we'll make it such that users can set it as a default setting, although I haven't talked it through with other team members yet so can't make an entirely confident statement on it. We might iterate on the exact implementation here (for example, we might only give this option to users with 100+ karma or equivalent)

I'm working on a longer response to the other questions.

Replies from: Three-Monkey Mind
comment by Three-Monkey Mind · 2023-04-24T22:27:41.403Z · LW(p) · GW(p)

We might iterate on the exact implementation here (for example, we might only give this option to users with 100+ karma or equivalent)

I could be misunderstanding all sorts of things about this feature that you've just implemented, but…

Why would you want to limit newer users from being able to declare that rate-limited users should be able to post as much as they like on newer users' posts? Shouldn't I, as a post author, be able to let Said, Duncan, and Zack post as much as they like on my posts?

Replies from: Ruby
comment by Ruby · 2023-04-25T01:24:25.386Z · LW(p) · GW(p)

Shouldn't I, as a post author, be able to let Said, Duncan, and Zack post as much as they like on my posts?

100+ karma means something like you've been vetted for some degree of investment in the site and enculturation, reducing the likelihood you'll do something with poor judgment and ill intention. I might worry about new users creating posts that ignore rate limits, then attracting all the rate-limited new users who were not having good effects on the site to come comment there (haven't thought about it hard, but it's the kind of thing we consider). 

The important thing is that the way the site currently works, any behavior on the site is likely to affect other parts of the site, such that to ensure the site is a well-kept garden, the site admins do have to consider which users should get which privileges.

(There are similar restrictions on which users can ban other users from their posts.)

comment by habryka (habryka4) · 2023-04-24T20:03:11.876Z · LW(p) · GW(p)

I expect Ray will respond more. My guess is you not being able to comment on this specific post is unintentional and it does indeed seem good to have a place where you can write more of a response to the moderation stuff.

The other details will likely be figured out as the feature gets used. My guess is how things behave are kind of random until we spend more time figuring out the details. My sense was that the feature was kind of thrown together and is now being iterated on more.

comment by Said Achmiz (SaidAchmiz) · 2023-05-13T05:59:19.184Z · LW(p) · GW(p)

The discussion under this post [LW · GW] is an excellent example of the way that a 3-per-week per-post comment limit makes any kind of useful discussion effectively impossible.

comment by Zack_M_Davis · 2023-04-24T21:40:28.515Z · LW(p) · GW(p)

I continue to be disgusted with this arbitrary moderator harassment of a long-time, well-regarded [LW(p) · GW(p)] user, apparently on the pretext that some people don't like his writing style.

Achmiz is not a spammer or a troll, and has made many highly-upvoted contributions. If someone doesn't like Achmiz's comments, they're free to downvote (just as I am free to upvote). If someone doesn't want to receive comments from Achmiz, they're free to use already-existing site functionality to block him from commenting on their own posts. If someone doesn't like his three-year-old views about an author's responsibility or lack thereof to reply to criticisms, they're free to downvote or offer counterarguments. Why isn't that the end of the matter?

Elsewhere, Raymond Arnold complains that Achmiz isn't "corrigible about actually integrating the spirit-of-our-models into his commenting style" [LW(p) · GW(p)]. Arnold also proposes that [LW(p) · GW(p)] awareness of frame control—a concept that Achmiz has criticized [LW(p) · GW(p)]—become something one is "obligated to learn, as a good LW citizen". I find this attitude shockingly anti-intellectual. Since when is it the job of a website administrator to micromanage how intellectuals think and write, and what concepts they need to accept? (As contrasted with removing low-quality, spam, or off-topic comments; breaking up flame wars, &c.)

My first comment on Overcoming Bias was on 15 December 2007 [LW(p) · GW(p)]. I was at the first Overcoming Bias meetup on 21 February 2008 [LW · GW]. Back then, there was no concept of being a "good citizen" of Overcoming Bias. It was a blog. People read the blog, and left comments when they had something to say, speaking in their own voice, accountable to no authority but their own perception of reality, with no obligation to be corrigible to the spirit of someone else's models. Achmiz's first comment on Less Wrong was in May 2010 [LW(p) · GW(p)].

We were here first. This is our garden, too—or it was. Why is the mod team persecuting us? By what right—by what code—by what standard?

Perhaps it will be replied that no one is being silenced—this is just a mere rate-limit, not any kind of persecution or restriction on speech. I don't think Oliver Habryka [LW · GW] is naïve enough to believe that. Citizenship—first-class citizenship—is a Schelling point. When someone tries to take that away from you, it would be foolish to believe that they don't intend you any further harm.

I think Oli Habryka has the integrity [LW · GW] to give me a straight, no-bullshit answer here.

Replies from: habryka4, philh
comment by habryka (habryka4) · 2023-04-25T00:38:54.240Z · LW(p) · GW(p)

I think Oli Habryka has the integrity [LW · GW] to give me a straight, no-bullshit answer here.

Sure, but... I think I don't know what question you are asking. I will say some broad things here, but probably best for you to try to operationalize your question more. 

Some quick thoughts: 

  • LessWrong totally has prerequisites. I don't think you necessarily need to be an atheist to participate in LessWrong, but if you straightforwardly believe in the Christian god, and haven't really engaged with the relevant arguments on the site, and you comment on posts that assume that there is no god, I will likely just ban you or ask you to stop. There are many other dimensions for which this is also true. Awareness of stuff like Frame Control seems IMO reasonable as a prerequisite, though not one I would defend super hard. Does sure seem like a somewhat important concept.
  • Well-Kept Gardens Die by Pacifism [LW · GW] is IMO one of the central moderation principles of LessWrong. I have huge warning flags around your language here and feel like it's doing something pretty similar to the outraged calls for "censorship" that Eliezer refers to in that post, but I might just be misunderstanding you. In general, LessWrong has always been and will continue to be driven by inside-view models of the moderators about what makes a good discussion forum, and this seems quite important.

I don't know, I guess your whole comment feels really quite centrally like the kind of thing that Eliezer explicitly warns against in Well-Kept Gardens Die by Pacifism, so let me just reply to quotes from you with quotes from Eliezer: 

Since when is it the job of a website administrator to micromanage how intellectuals think and write, and what concepts they need to accept? (As contrasted with removing low-quality, spam, or off-topic comments; breaking up flame wars, &c.)

Eliezer: 

But when the fools begin their invasion, some communities think themselves too good to use their banhammer for—gasp!—censorship.

After all—anyone acculturated by academia knows that censorship is a very grave sin... in their walled gardens where it costs thousands and thousands of dollars to enter, and students fear their professors' grading, and heaven forbid the janitors should speak up in the middle of a colloquium.

[...]

And after all—who will be the censor?  Who can possibly be trusted with such power?

Quite a lot of people, probably, in any well-kept garden.  But if the garden is even a little divided within itself —if there are factions—if there are people who hang out in the community despite not much trusting the moderator or whoever could potentially wield the banhammer—

(for such internal politics often seem like a matter of far greater import than mere invading barbarians)

—then trying to defend the community is typically depicted as a coup attempt.  Who is this one who dares appoint themselves as judge and executioner?  Do they think their ownership of the server means they own the people?  Own our community?  Do they think that control over the source code makes them a god?

You:

We were here first. This is our garden, too—or it was. Why is the mod team persecuting us? By what right—by what code—by what standard?

Eliezer: 

Maybe it's because I grew up on the Internet in places where there was always a sysop, and so I take for granted that whoever runs the server has certain responsibilities.  Maybe I understand on a gut level that the opposite of censorship is not academia but 4chan (which probably still has mechanisms to prevent spam).  Maybe because I grew up in that wide open space where the freedom that mattered was the freedom to choose a well-kept garden that you liked and that liked you, as if you actually could find a country with good laws.  Maybe because I take it for granted that if you don't like the archwizard, the thing to do is walk away (this did happen to me once, and I did indeed just walk away).

And maybe because I, myself, have often been the one running the server.  But I am consistent, usually being first in line to support moderators—even when they're on the other side from me of the internal politics.  I know what happens when an online community starts questioning its moderators.  Any political enemy I have on a mailing list who's popular enough to be dangerous is probably not someone who would abuse that particular power of censorship, and when they put on their moderator's hat, I vocally support them—they need urging on, not restraining.  People who've grown up in academia simply don't realize how strong are the walls of exclusion that keep the trolls out of their lovely garden of "free speech".

Any community that really needs to question its moderators, that really seriously has abusive moderators, is probably not worth saving.  But this is more accused than realized, so far as I can see.

In any case the light didn't go on in my head about egalitarian instincts (instincts to prevent leaders from exercising power) killing online communities until just recently.  While reading a comment at Less Wrong, in fact, though I don't recall which one.

But I have seen it happen—over and over, with myself urging the moderators on and supporting them whether they were people I liked or not, and the moderators still not doing enough to prevent the slow decay.  Being too humble, doubting themselves an order of magnitude more than I would have doubted them.  It was a rationalist hangout, and the third besetting sin of rationalists is underconfidence [? · GW].

Again, this is all just on a very rough reading of your comment, and I might be misunderstanding you. 

My current read here is that your objection is really a very standard "how dare the moderators moderate LessWrong" objection, when like, I do really think we have the mandate to moderate LessWrong how we see fit, and indeed maybe the primary reason why LessWrong is not as dead as basically every other forum of its age and popularity is because it had the seed of "Well-Kept Gardens Die by Pacifism" in it. The understanding that yes, of course the moderators will follow their inside view and make guesses at what is best for the site without trying to be maximally justifiable, and without getting caught in spirals of self-doubt of whether they have the mandate to do X or Y or Z. 

But again, I don't think I super understood what specific question you were asking me, so I might have totally talked past you.

Replies from: Vladimir_Nesov, Zack_M_Davis, SaidAchmiz, SaidAchmiz
comment by Vladimir_Nesov · 2023-04-26T06:38:43.816Z · LW(p) · GW(p)

But when the fools begin their invasion, some communities think themselves too good to use their banhammer for—gasp!—censorship.

I affirm the importance of the distinction between defending a forum from an invasion of barbarians (while guiding new non-barbarians safely past the defensive measures) and the treatment of its citizens. The quote is clearly noncentral for this case.

comment by Zack_M_Davis · 2023-04-25T17:11:01.029Z · LW(p) · GW(p)

Thanks, to clarify: I don't intend to make a "how dare the moderators moderate Less Wrong" objection. Rather, the objection is, "How dare the moderators permanently restrict the account of Said Achmiz, specifically, who has been here since 2010 and has 13,500 karma." (That's why the grandparent specifies "long-time, well-regarded", "many highly-upvoted contributions", "We were here first", &c.) I'm saying that Said Achmiz, specifically, is someone you very, very obviously want to have free speech as a first-class citizen on your platform, even though you don't want to accept literally any speech (which is why the grandparent mentions "removing low-quality [...] comments" as a legitimate moderator duty).

Note that "permanently restrict the account of" is different from "moderate". For example, on 6 April, Arnold asked Achmiz to stop commenting on a particular topic [LW(p) · GW(p)], and Achmiz complied [LW(p) · GW(p)]. I have no objections to that kind of moderation. I also have no objections to rate limits on particular threads, or based on recent karma scores, or for new users. The thing that I'm accusing of being arbitrary persecution is specifically the 3-comments-per-post-per-week restriction on Said Achmiz.

Regarding Yudkowsky's essay "Well-Kept Gardens Die By Pacifism" [LW · GW], please note that the end of the essay points out that a forum with a karma system is different from a forum (such as a mailing list) in which moderators are the only attention-allocation mechanism, and urges users not to excessively question themselves when considering downvoting. I agree with this! That's why the grandparent emphasizes that users who don't like Achmiz's comments are free to downvote them. The grandparent also points out that users who don't want to receive comments from Achmiz can ban him from commenting on their own posts. I simply don't see what actual problem exists that's not adequately solved by either of the downvote mechanism, or the personal-user-ban mechanism.

I fear that Yudkowsky might have been right when he claimed that "[a]ny community that really needs to question its moderators, that really seriously has abusive moderators, is probably not worth saving." I sincerely hope Less Wrong is worth saving.

Replies from: habryka4, Raemon, Ruby
comment by habryka (habryka4) · 2023-04-25T20:46:44.063Z · LW(p) · GW(p)

Hmm, I am still not fully sure about the question (your original comment said "I think Oli Habryka has the integrity [LW · GW] to give me a straight, no-bullshit answer here", which feels like it implies a question that should have a short and clear answer, which I am definitely not providing here), but this does clarify things a bit. 

There are a bunch of different dimensions to unpack here, though I think I want to first say that I am quite grateful for a ton of stuff that Said has done over the years, and have (for example) recently recommended a grant to him from the Long Term Future Fund to allow him to do more of the kind of work he has done in the past (and would continue recommending grants to him in the future). I think Said's net-contributions to the problems that I care about have likely been quite positive, though this stuff is pretty messy and I am not super confident here. 

One solution that I actually proposed to Ray (who is owning this decision) was that instead of banning Said we do something like "purchase him out of his right to use LessWrong" or something like that, by offering him like $10k-$100k to change his commenting style or to comment less in certain contexts, to make it more clear that I am hoping for some kind of trade here, and don't want this to feel like some kind of social slapdown. 

Now, commenting on the individual pieces: 

That's why the grandparent specifies "long-time, well-regarded", "many highly-upvoted contributions", "We were here first", &c.

Well, I mean, the disagreement surely is about whether Said, in his capacity as a commenter, is "well-regarded". My sense is Said is quite polarizing and saying that he is a "long-time ill-regarded" user would be just as accurate. Similarly saying "many highly-downvoted contributions" is also accurate. (I think seniority matters a bit, though like not beyond a few years, and at least I don't currently attach any special significance to someone having been around for 5 years vs. 10 years, though I can imagine this being a mistake).

This is not to say I would consider a summary that describes Said as a "long-time ill-regarded menace with many highly downvoted contributions" as accurate. But neither do I think your summary here is accurate. My sense is a long-time user with some highly upvoted comments and some highly downvoted comments can easily be net-negative for the site. 

Neither do I feel that net-karma is currently at all a good guide to the quality of site contributions. First, karma is just very noisy, and sometimes random posts and comments get hundreds of karma because someone on Twitter links to them and the tweet goes viral. But second, and more importantly, there is a huge bias in karma towards positive karma. You frequently find comments with +70 karma and very rarely see comments with -70 karma. Some of that is a natural consequence of making comments and posts with higher karma more visible; some of that is that most people experience pushing someone into the negatives as a lot socially harsher than letting them hover somewhere around 0. 

This is again not to say that I am actually confident that Said's commenting contributions have been net-negative for the site. My current best guess is yes, but it's not super obvious to me. I am however quite confident that there is a specific type of commenting interaction that has been quite negative, has driven away a lot of really valuable contributors, and doesn't seem to have produced much value, which is the specific type of interaction that Ray is somehow trying to address with the rate-limiting rules. 

The grandparent also points out that users who don't want to receive comments from Achmiz can ban him from commenting on their own posts. I simply don't see what actual problem exists that's not adequately solved by either of the downvote mechanism, or the personal-user-ban mechanism.

I think people responded pretty extensively to the comment you mention here, but to give my personal response to this: 

  • Most people (and especially new users) don't keep track of individual commenters to the degree that would make it feasible to ban the people they would predictably have bad interactions with. The current proposal is basically to allow users to ban or unban Said however they like (since they can both fully ban him, and allow Said to comment without rate limit on their post), we are just suggesting a default that I expect to be best for most users and the default site experience. 
  • Downvoting helps a bit with reducing visibility, but it doesn't help a lot. I see downvoting in substantial part as a signal from the userbase to the authors and moderators to take some kind of long-term action. When someone's comments are downvoted, authors still get notifications for them, and they still tend to blow up into large demon threads, so just voting on comments doesn't help that much with solving the moderation problem (this is less true for posts, but only a small fraction of Said's contributions are in the form of posts, and I actually really like all of his posts, so this doesn't really apply here). We can try to make automated systems here, but I can't currently think of any super clear-cut rules we could put into code, since as I said above, net-karma really is not a reliable guide. I do think it's worth thinking more about (using the average of the most recent N comments helps a bit, but is really far from catching all the cases I am concerned about).

Separately, I want to also make a bigger picture point about moderation on LessWrong: 

LessWrong moderation definitely works on a case-law basis 

There is no way I can meaningfully write down all the rules and guidelines about how people should behave in discourse in-advance. The way we've always made moderation decisions was to iterate locally on what things seem to be going wrong, and then try to formulate new rules, give individuals advice, and try to figure out general principles as they become necessary. 

This case is the same. Yep, we've decided to take moderation action for this kind of behavior, more than we have done in the past. Said is the first prosecuted case, but I would absolutely want to hold all other users to the same standard going into the future (and indeed my sense is that Duncan is receiving a warning for some things that fall under that same standard). I think it's good and proper for you to hold us to being consistent and ask us to moderate other people doing similar things in the future the same way as we've moderated Said here. 

I hope this is all helpful. I still have a feeling you wanted some straightforward non-bullshit answer to a specific question, but I still don't know which one, though I hope that what I've written above clarifies things at least a bit.

Replies from: pktechgirl, Zack_M_Davis, Celarix
comment by Elizabeth (pktechgirl) · 2023-04-26T07:52:11.494Z · LW(p) · GW(p)

But second, and more importantly, there is a huge bias in karma towards positive karma.

 

I don't know if it's good that there's a positive bias towards karma, but I'm pretty sure the generator for it is a good impulse. I worry that calls to handle things with downvoting lead people to weaken that generator in ways that make the site worse overall even if it is the best way to handle Said-type cases in particular. 

comment by Zack_M_Davis · 2023-04-26T19:05:14.466Z · LW(p) · GW(p)

I think I mostly meant "answer" in the sense of "reply" (to my complaint about rate-limiting Achmiz being an outrage, rather than to a narrower question); sorry for the ambiguity.

I have a lot of extremely strong disagreements with this, but they can wait three months [LW(p) · GW(p)].

Replies from: habryka4
comment by habryka (habryka4) · 2023-04-26T19:39:43.774Z · LW(p) · GW(p)

Cool, makes sense. Also happy to chat in-person sometime if you want. 

comment by Celarix · 2023-04-26T14:35:34.809Z · LW(p) · GW(p)

by offering him like $10k-$100k to change his commenting style or to comment less in certain contexts

What other community on the entire Internet would offer 5 to 6 figures to any user in exchange for them to clean up some of their behavior?

how is this even a reasonable-

Isn't this community close in idea terms to Effective Altruism? Wouldn't it be better to say "Said, if you change your commenting habits in the manner we prescribe, we'll donate $10k-$100k to a charity of your choice?"

I can't believe there's a community where, even for a second, having a specific kind of disagreement with the moderators and community (while also being a long-time contributor) results in considering a possibly-six-figure buyout. I've been a member on other sites with members who were both a) long-standing contributors and b) difficult to deal with in moderation terms, and the thought of any sort of payout, even $1, would not have even been thought of.

Replies from: habryka4, Vaniver, localdeity
comment by habryka (habryka4) · 2023-04-26T17:08:27.324Z · LW(p) · GW(p)

Seems sad! Seems like there is an opportunity for trade here.

Salaries in Silicon Valley are high and probably just the time for this specific moderation decision has cost around 2.5 total staff weeks for engineers that can make probably around $270k on average in industry, so that already suggests something in the $10k range of costs.

And I would definitely much prefer to just give Said that money instead of spending that time arguing, if there is a mutually positive agreement to be found.

We can also donate instead, but I don't really like that. I want to find a trade here if one exists, and honestly I prefer Said having more money more than most charities having more money, so I don't really get what this would improve. Also, not everyone cares about donating to charity, and that's fine.

Replies from: Celarix, ricraz
comment by Celarix · 2023-04-27T00:41:29.920Z · LW(p) · GW(p)

The amount of moderator time spent on this issue is both very large and sad, I agree, but I think it causes really bad incentives to offer money to users with whom moderation has a problem. Even if only offered to users in good standing over the course of many years, that still represents a pretty big payday if you can play your cards right and annoy people just enough to fall in the middle between "good user" and "ban".

I guess I'm having trouble seeing how LW is more than a (good!) Internet forum. The Internet forums I'm familiar with would have just suspended or banned Said long, long ago (maybe Duncan, too, I don't know).

I do want to note that my problem isn't with offering Said money - any offer to any user of any Internet community feels... extremely surprising to me. Now, if you were contracting a user to write stuff on your behalf, sure, that's contracting and not unusual. I'm not even necessarily offended by such an offer, just, again, extremely surprised.

Replies from: habryka4
comment by habryka (habryka4) · 2023-04-27T00:48:22.307Z · LW(p) · GW(p)

I think if you model things as just "an internet community" this will give you the wrong intuitions. 

I currently model the extended rationality and AI Alignment community as a professional community which for many people constitutes their primary work context, is responsible for their salary, and is responsible for a lot of daily infrastructure they use. Viewing it through that lens, it makes sense that limiting someone's access to some piece of community infrastructure can be quite costly, and somehow compensating people for the considerable cost that lack of access can cause seems reasonable. 

I am not too worried about this being abusable. There are maybe 100 users who seem to me to use LessWrong as much as Said and who have contributed a similar amount to the overall rationality and AI Alignment project that I care about. At $10k each, paying all of them would only come to around $1MM, which is less than the annual budget of Lightcone, and so doesn't seem totally crazy.

Replies from: Celarix
comment by Celarix · 2023-04-27T00:56:20.417Z · LW(p) · GW(p)

I think if you model things as just "an internet community" this will give you the wrong intuitions. 

This, plus Vaniver's comment, has made me update - LW has been doing some pretty confusing things if you look at it like a traditional Internet community that make more sense if you look at it as a professional community, perhaps akin to many of the academic pursuits of science and high-level mathematics. The high dollar figures quoted in many posts confused me until now.

comment by Richard_Ngo (ricraz) · 2023-05-21T10:10:42.351Z · LW(p) · GW(p)

I've had a nagging feeling in the past that the rationalist community isn't careful enough about the incentive problems and conflicts of interest that arise when transferring reasonably large sums of money (despite being very careful about incentive landscapes in other ways—e.g. setting the incentives right for people to post, comment, etc, on LW—and also being fairly scrupulous in general). Most of the other examples I've seen have been kinda small-scale and so I haven't really poked at them, but this proposal seems like it pretty clearly sets up terrible incentives, and is also hard to distinguish from nepotism. I think most people in other communities have gut-level deontological instincts about money which help protect them against these problems (e.g. I take Celarix to be expressing this sort of sentiment upthread), which rationalists are more likely to lack or override—and although I think those people get a lot wrong about money too, cases like these sure seems like a good place to apply Chesterton's fence.

comment by Vaniver · 2023-04-26T15:32:41.692Z · LW(p) · GW(p)

I can't believe there's a community where, even for a second, having a specific kind of disagreement with the moderators and community (while also being a long-time contributor) results in considering a possibly-six-figure buyout.

It might help to think of LW as more like a small town's newspaper (with paid staff) than a hobbyist forum (with purely volunteer labor), which considers issues with "business expense" lenses instead of "personal budget" lenses. 

Replies from: Celarix
comment by Celarix · 2023-04-27T00:47:27.198Z · LW(p) · GW(p)

Yeah, that does seem like what LW wants to be, and I have no problem with that. A payout like this doesn't really fit neatly into my categories of what money paid to a person is for, and that may be on my assumptions more than anything else. Said could be hired, contracted, paid for a service he provides or a product he creates, paid for the rights to something he's made, paid to settle a legal issue... the idea of a payout to change part of his behavior around commenting on LW posts was just, as noted in my reply to habryka, extremely surprising.

comment by localdeity · 2023-04-27T00:52:10.505Z · LW(p) · GW(p)

What other community on the entire Internet would offer 5 to 6 figures to any user in exchange for them to clean up some of their behavior?

Exactly.  It's hilarious and awesome.  (That is, the decision at least plausibly makes sense in context; and the fact that this is the result, as viewed from the outside, is delightful.)

comment by Raemon · 2023-04-26T22:50:19.701Z · LW(p) · GW(p)

We were here first. This is our garden, too—or it was. Why is the mod team persecuting us? By what right—by what code—by what standard?

I endorse much of Oliver's replies, and I'm mostly burnt out from this convo at the moment so can't do the follow-through here that I'd ideally like. But, it seemed important to publicly state some thoughts here before the moment passed:

Yes, the bar for banning or permanently limiting the speech of a longterm member in Said's reference class is very high, and I'd treat it very differently from moderating a troll, crank, or confused newcomer. But to say you can never do such moderation proves too much – that longterm users can never have enough negative effects to warrant taking permanent action on. My model of Eliezer-2009 believed and intended something similar in Well Kept Gardens. 

I don't think the Spirit of LessWrong 2009 actually supports you on the specific claims you're making here.

As for “by what right do we moderate?” Well, LessWrong had died, no one was owning it, people spontaneously elected Vaniver as leader, Vaniver delegated to habryka who founded the LessWrong team and got Eliezer's buy-in, and now we have 6 years of track record that I think most people agree is much better than nobody in charge.

But, honestly, I don't actually think you really believe these meta-level arguments (or, at least won't upon reflection and maybe a week of distance). I think you disagree with our object level call on Said, and on the overall moderation philosophy that led to it.  And, like, I do think there's a lot to legitimately argue over with the object level call on Said and the overall moderation philosophy surrounding it. I’m fairly burnt out from talking about this in the immediate future but fwiw I welcome top-level posts arguing about this and expect to engage with them in the future.

And if you decide to quit LessWrong in protest, well, I will be sad about that. I think your writing and generator are quite valuable. I do think there's an important spirit of early LessWrong that you keep alive, and I've made important updates due to your contributions. But, also, man it doesn't look like your relationship with the site is necessarily that healthy for you.

...

I think a lot of what you’re upset about is an overall sense that your home doesn’t feel like you’re home anymore. I do think there is a legitimately sad thing worth grieving there. 

But I think old LessWrong did, actually, die. And, if it hadn’t, well, it’s been 12 years and the world has changed. I think it wouldn’t make sense, by the Spirit of 2009 LessWrong’s lights, for it to stay exactly the way you remember it. I think some of this is due to specific philosophies the LessWrong 2.0 team brings (I think our original stated goal of “cause intellectual progress to happen faster/better” is very related to and driven by the original sequences, but I think our frame is subtly different). But meanwhile a lot of it is just about the world changing, and Eliezer moving on in some ways (early LessWrong’s spirit was AFAICT largely driven by Eliezer posting frequently, while braindumping a specific set of ideas he had to share. That process is now over, and any subsequent process was going to be different somehow)

I don’t know that I really have a useful takeaway. Sometimes there isn’t one. But insofar as you think it is healthy for you to stay on LessWrong and you don’t want to quit in protest of the mod call on Said, fwiw I continue to welcome posts arguing for what you think the spirit of lesswrong should be, and/or where you think the mod team is fucking up.

(As previously stated, I'm fairly burnt out atm, but would be happy to talk more about this sometime in the future if it seemed helpful)

comment by Ruby · 2023-04-25T19:28:14.227Z · LW(p) · GW(p)

Not to respond to everything you've said, but I question the argument (as I understand it) that because someone is {been around a long-time, well-regarded, many highly-upvoted contributions, lots of karma}, this means they are necessarily someone who at the end of the day you want around / are net positive for the site.

Good contributions are relevant. But so are costs. Arguing against the costs seems valid, saying benefits outweigh costs seems valid, but assuming this is what you're saying, I don't think just saying someone has benefits means that obviously you want them as an unrestricted citizen.

(I think in fact how it's actually gone is that all of those positive factors you list have gone into moderators' decisions so far in not outright banning Said over the years, and why Ray preferred to rate limit Said rather than ban him. If Said were all negatives, no positives, he'd have been banned long ago.)

Correct me though if there's a deeper argument here that I'm not seeing.

comment by Said Achmiz (SaidAchmiz) · 2023-04-25T01:14:49.493Z · LW(p) · GW(p)

In my experience (e.g., with Data Secrets Lox), moderators tend to be too hesitant to ban trolls (i.e., those who maliciously and deliberately subvert the good functioning of the forum) and cranks (i.e., those who come to the forum just to repeatedly push their own agenda, and drown out everything else with their inability to shut up or change the subject), while at the same time being too quick to ban forum regulars—both the (as these figures are usually cited) 1% of authors and the 9% of commenters—for perceived offenses against “politeness” or “swipes against the outgroup” or “not commenting in a prosocial way” or other superficial violations. These two failure modes, which go in opposite directions, somewhat paradoxically coexist quite often.

It is therefore not at all strange or incoherent to (a) agree with Eliezer that moderators should not let “free speech” concerns stop them from banning trolls and cranks, while also (b) thinking that the moderators are being much too willing (even, perhaps, to the point of ultimately self-destructive abusiveness) to ban good-faith participants whose preferences about, and quirks of, communicative styles, are just slightly to the side of the mods’ ideals.

(This was definitely my opinion of the state of moderation over at DSL, for example, until a few months ago. The former problem has, happily, been solved; the latter, unhappily, remains. Less Wrong likewise seems to be well on its way toward solving the former problem; I would not have thought the latter to obtain… but now my opinion, unsurprisingly, has shifted.)

comment by Said Achmiz (SaidAchmiz) · 2023-04-25T00:58:22.681Z · LW(p) · GW(p)

Awareness of stuff like Frame Control seems IMO reasonable as a prerequisite, though not one I would defend super hard. Does sure seem like a somewhat important concept.

Before there can be any question of “awareness” of the concept being a prerequisite, surely it’s first necessary that the concept be explained in some coherent way? As far as I know, no such thing has been done. (Aella’s post on the subject was manifestly nonsensical, to say the least; if that’s the best explanation we’ve got, then I think that it’s safe to say that the concept is incoherent nonsense, and using it does more harm than good.) But perhaps I’ve missed it?

Replies from: Ruby, habryka4
comment by Ruby · 2023-04-25T01:18:25.688Z · LW(p) · GW(p)

Before there can be any question of “awareness” of the concept being a prerequisite, surely it’s first necessary that the concept be explained in some coherent way?

In the comment Zack cites [LW(p) · GW(p)], Raemon said the same when raising the idea of making it a prerequisite:

I have on my todo list to write up a post that's like "hey guys here is an explanation of Frame Control/Manipulation that is more rigorous and more neutrally worded than Aella's post about it, and here's why I think we should have a habit of noticing it.".

Replies from: Raemon
comment by Raemon · 2023-04-25T01:54:21.922Z · LW(p) · GW(p)

Also for everyone's awareness, I have since written up Tabooing "Frame Control" [LW · GW] (which I'd hoped would be like part 1 of 2 posts on the topic), but the reception of the post (60ish karma) didn't suggest that everyone was like "okay yeah this concept is great", and I currently think the ball is still in my court for either explaining the idea better, refactoring it into other ideas, or abandoning the project.

comment by habryka (habryka4) · 2023-04-25T01:09:15.501Z · LW(p) · GW(p)

Yep! As far as I remember, in that thread Ray said something akin to "it might be reasonable to treat this as a prerequisite if someone wrote a better explanation of it and there had been a bunch of discussion of this", but I don't fully remember.

Aella's post did seem like it had a bunch of issues and I would feel kind of uncomfortable with having a canonical concept with that as its only reference (I overall liked the post and thought it was good, but I don't think a concept should reach canonicity just on the basis of that post, given its specific flaws).

comment by philh · 2023-04-25T00:21:58.266Z · LW(p) · GW(p)

Arnold also proposes that awareness of frame control—a concept that Achmiz has criticized—become something one is “obligated to learn, as a good LW citizen”.

Arnold says he is thinking about maybe proposing that, in future, after he has done the work to justify it and paying attention to how people react to it.

comment by Wei Dai (Wei_Dai) · 2023-04-14T19:19:23.623Z · LW(p) · GW(p)

(Tangentially) If users are allowed to ban other users from commenting on their posts, how can I tell when the lack of criticism in the comments of some post means that nobody wanted to criticize it (which is a very useful signal that I would want to update on), or that the author has banned some or all of their most prominent/frequent critics? In addition, I think many users may be misled by a lack of criticism if they're simply not aware of the second possibility or have forgotten it. (I think I knew it but it hasn't entered my conscious awareness for a while, until I read this post today.)

(Assuming there's not a good answer to the above concerns) I think I would prefer to change this feature/rule to something like allowing the author of a post to "hide" commenters or individual comments, which means that those comments are collapsed by default (and marked as "hidden by the post author") but can be individually expanded, and each user can set an option to always expand those comments for themselves.

Replies from: gilch, adamzerner, lsusr, ChristianKl
comment by gilch · 2023-04-16T20:30:56.662Z · LW(p) · GW(p)

Maybe a middle ground would be to give authors a double-strong downvote power for comments on their posts. A comment with low enough karma is already hidden by default, and repeated strong downvotes without further response would tend to chill rather than inflame the ensuing discussion, or at least push the bulk of it away from the author's arena, without silencing critics completely.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2023-04-16T20:45:01.993Z · LW(p) · GW(p)

I think a problem that my proposal tries to solve, and this one doesn't, is that some authors seem easily triggered by some commenters, and apparently would prefer not to see their comments at all. (Personally if I was running a discussion site I might not try so hard to accommodate such authors, but apparently they include some authors that the LW team really wants to keep or attract.)

comment by Adam Zerner (adamzerner) · 2023-04-14T19:56:18.749Z · LW(p) · GW(p)

To me it seems unlikely that there'd be enough banning to prevent criticism from surfacing. Skimming through https://www.lesswrong.com/moderation, [? · GW] the number of bans seems to be pretty small. And if there is an important critique to be made, I'd expect it to be something that more than the few banned users would think of and decide to post a comment on.

Replies from: Wei_Dai, Vaniver
comment by Wei Dai (Wei_Dai) · 2023-04-15T22:11:08.267Z · LW(p) · GW(p)

And if there is an important critique to be made I’d expect it to be something that more than the few banned users would think of and decide to post a comment on.

This may be true in some cases, but not all. My experience here comes from cryptography where it often takes hundreds of person-hours to find a flaw in a new idea (which can sometimes be completely fatal), and UDT, where I found a couple of issues in my own initial idea only after several months/years of thinking (hence going to UDT1.1 and UDT2). I think if you ban a few users who might have the highest motivation to scrutinize your idea/post closely, you could easily reduce the probability (at any given time) of anyone finding an important flaw by a lot.

Another reason for my concern is that the bans directly disincentivize other critics, and people who are willing to ban their critics are often unpleasant for critics to interact with in other ways, further disincentivizing critiques. I have this impression for Duncan myself which may explain why I've rarely commented on any of his posts. I seem to remember once trying to talk him out of (what seemed to me like) overreacting to a critique and banning the critic on Facebook, and having an unpleasant experience (but didn't get banned), then deciding to avoid interacting with him in the future. However I can't find the actual interaction on FB so I'm not 100% sure this happened. FB has terrible search which probably explains it, but maybe I hallucinated this, or confused him with someone else, or did it with a pseudonym.

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2023-04-16T05:33:34.606Z · LW(p) · GW(p)

Hm, interesting points.

I think if you ban a few users who might have the highest motivation to scrutinize your idea/post closely, you could easily reduce the probability (at any given time) of anyone finding an important flaw by a lot.

My impression is that there are some domains for which this is true, but those are the exception rather than the rule. However, this impression is just based off of, err, vaguely querying my brain? I'm not super confident in it. And your claim is one that I think is "important if true". So then, it does seem worth an investigation. Maybe enumerating through different domains and asking "Is it true here? Is it true here?".

One thing I'd like to point out is that, being a community, something very similar is happening. Only a certain type of person comes to LessWrong (this is true of all communities to some extent; they attract a subset of people). It's not that "outsiders" are explicitly banned, they just don't join and thus don't comment. So then, effectively, ideas presented here currently aren't available to "outsiders" for critiques.

I think there is a trade off at play: the more you make ideas available to "outsiders" the lower the chance something gets overlooked, but it also has the downside of some sort of friction.

(Sorry if this doesn't make sense. I feel like I didn't articulate it very well but couldn't easily think of a better way to say it.)

Another reason for my concern is that the bans directly disincentivize other critics, and people who are willing to ban their critics are often unpleasant for critics to interact with in other ways, further disincentivizing critiques.

Good point. I think that's true and something to factor in.

comment by Vaniver · 2023-04-14T20:01:07.153Z · LW(p) · GW(p)

While the current number of bans is pretty small, I think this is in part because lots of users don't know about the option to ban people from their posts. (See here [LW(p) · GW(p)], for example.)

Replies from: adamzerner
comment by Adam Zerner (adamzerner) · 2023-04-14T20:15:14.897Z · LW(p) · GW(p)

That makes sense. Still, even if it were more well known, I wouldn't expect the number of bans to reach the point where it is causing real problems with respect to criticism surfacing.

comment by lsusr · 2023-04-26T05:05:13.459Z · LW(p) · GW(p)

One solution is to limit the number of banned users to a small fraction of overall commenters. I've written 297 posts so far and have banned only 3 users from commenting on them. (I did not ban Duncan or Said.)

My highest-quality criticism comes from users who I have never even considered banning. Their comments are consistently well-reasoned and factually correct.

comment by ChristianKl · 2023-04-15T09:52:32.271Z · LW(p) · GW(p)

What exactly does "nobody wanted to criticize it" signal that you don't get from high/low karma votes?

Replies from: Raemon
comment by Raemon · 2023-04-15T18:56:31.505Z · LW(p) · GW(p)

Some UI thoughts as I think about this:

Right now, you see total karma for posts and comments, and total vote count, but not the number of upvotes/downvotes. So you can't actually tell when something is controversial.

One reason for this is because we (once) briefly tried turning this on, and immediately found it made the site much more stressful and anxiety inducing. Getting a single downvote felt like "something is WRONG!" which didn't feel productive or useful. Another reason is that it can de-anonymize strong-votes because their voting power is a less common number.

But, an idea I just had was that maybe we should expose that sort of information once a post becomes popular enough. Like maybe over 75 karma. [Better idea: once a post has a certain number of votes. Maybe at least 25]. At that point you have more of a sense of the overall karma distribution so individual votes feel less weighty, and also hopefully it's harder to infer individual voters.

Tagging @jp [LW · GW] who might be interested.

Replies from: Wei_Dai, jp
comment by Wei Dai (Wei_Dai) · 2023-04-15T23:03:41.457Z · LW(p) · GW(p)

I support exposing the number of upvotes/downvotes. (I wrote a userscript for GW to always show the total number of votes, which allows me to infer this somewhat.) However that doesn't address the bulk of my concerns, which I've laid out in more detail in this comment [LW(p) · GW(p)]. In connection with karma, I've observed that sometimes a post is initially upvoted a lot, until someone posts a good critique, which then causes the karma of the post to plummet. This makes me think that the karma could be very misleading (even with upvotes/downvotes exposed) if the critique had been banned or disincentivized.

comment by jp · 2023-04-17T09:09:28.335Z · LW(p) · GW(p)

We've been thinking about this for the EA Forum. I endorse Raemon's thoughts here, I think, but I know I can't pass the ITT of a more transparent side here.

comment by Vaniver · 2023-04-14T18:32:02.071Z · LW(p) · GW(p)

First, my read of both Said and Duncan is that they appreciate attention to the object level in conflicts like this. If what's at stake for them is a fact of the matter, shouldn't that fact get settled before considering other issues? So I will begin with that. What follows is my interpretation (mentioned here so I can avoid saying "according to me" each sentence).

In this comment [LW(p) · GW(p)], Said describes as bad "various proposed norms of interaction such as “don’t ask people for examples of their claims” and so on", without specifically identifying Duncan as proposing that norm (tho I think it's heavily implied).

Then gjm objects [LW(p) · GW(p)] to that characterization as a straw man.

In this comment [LW(p) · GW(p)] Said defends it, pointing out that Duncan's standard of "critics should do some of the work of crossing the gap" is implicitly a rule against "asking people for examples of their claims [without anything else]", given that Duncan thinks asking for examples doesn't count as doing the work of crossing the gap. (Earlier in the conversation [LW(p) · GW(p)] Duncan calls it 0% of the work.) I think the point as I have written it here is correct and uncontroversial; I think there is an important difference between the point as I wrote it and the point as Said wrote it.

In the response I would have wanted to see, Duncan would have clearly and correctly pointed to that difference. He is in favor of people asking for examples [combined with other efforts to cross the gap], does it himself, gives examples himself, and so on. The unsaid [without anything else] part is load-bearing and thus inappropriate to leave out or merely hint at. [Or, alternatively, using "ask people for examples" to refer to comments that do only that, as opposed to the conversational move which can be included or not in a comment with other moves.]

Instead we got this comment [LW(p) · GW(p)], where Duncan interprets Said's claim narrowly, disagrees, and accuses Said of either lying or being bad at reading comprehension. (This does not count as two hypotheses [LW(p) · GW(p)] in my culture.)

Said provides four examples [LW(p) · GW(p)]; Duncan finds them unconvincing and calls using them as citations a blatant falsehood [LW(p) · GW(p)]. Said leaves it up to the readers to adjudicate here [LW(p) · GW(p)]. I do think this was a missed opportunity for Said to see the gap between what he stated and what I think he intended to state.

From my perspective, my reading of Said's accusation is not clearly suggested in the comment gjm objected to [LW(p) · GW(p)], is obviously suggested from the comment Duncan responds to [LW(p) · GW(p)], with the second paragraph[1] doing most of the work, and then further pointed at by later comments. If Said ate breakfasts of only cereal, and Duncan said that was unhealthy and he shouldn't do it, it is not quite right to say Duncan 'thinks you shouldn't eat cereal', as he might be in favor of cereal as part of a balanced breakfast; but also it is not quite right for Duncan to ignore Said's point that one of the main issues under contention is whether Said can eat cereal by itself (i.e. asking for examples without putting in interpretative labor). This looks like white horses are not horses.

So what about Said's four examples [LW(p) · GW(p)]? As one might expect, all four are evidence for my interpretation, and none of the four are evidence for Duncan's interpretation. I would not call this a blatant falsehood,[2] and think all four of Duncan's example-by-example responses are weak. Do we treat the examples as merely 'evidence for the claim', or also as 'identification of the claim'?

So then we have to step back and consider non-object-level considerations, of which I see a few:

  1. I think this situation is, on some level, pretty symmetric.
    1. I think the features of Said's commenting style that people (not just Duncan!) find annoying are things that Said is deliberately optimizing for or the results of principled commitments he's made, so it's not just a simple bug that can be fixed.
    2. I think the features of Duncan's conflict resolution methods that people find offputting are similarly things that Duncan is deliberately optimizing for or the results of principled commitments he's made, so it's not just a simple bug that can be fixed.
    3. I think both Said and Duncan a) contribute great stuff to the site and b) make some people like posting on LW less and it's unclear what to do about that balance. This is one of the things that's nice about clear rules that people are either following or not--it makes it easier for everyone to tell whether something is 'allowed' or 'not allowed', 'fine' or 'not fine', and so on, rather than making complicated judgments of whether or not you want someone around. I think the mod team does want to exercise some judgment and discernment beyond just rule-following, however.
  2. How bad is it to state something that's incorrect because it is too broad and then narrow it afterwards? Duncan has written about this in Ruling Out Everything Else [LW · GW], and I think Said did an adequate but not excellent job.
  3. What's the broader context of this discussion? Said has a commenting style that Duncan strongly dislikes [LW(p) · GW(p)], and Duncan seems to be in the midst of an escalating series of comments and posts pointing towards "the mods should ban Said". My reckless speculation is that this comment looked to Duncan like the smoking gun that he could use to prove Said's bad faith, and he tried to prosecute it accordingly. (Outside of context, I would be surprised by my reading not being raised to Duncan's attention; in context, it seems obvious why he would not want (consciously or subconsciously) to raise that hypothesis.) My explanation is that Said's picture of good faith is different than Duncan's (and, as far as I can tell, both fit within the big tent of 'rationality').
    1. Incidentally, I should note that I view Duncan's escalation as something of a bet, where if the mods had clearly agreed with Duncan, that probably would have been grounds for banning Said. If the mods clearly disagree with Duncan, then what does 'losing the bet' look like? What was staked here?
    2. The legal system sees a distinction between 'false testimony' (being wrong under oath) and 'perjury' (deliberately being wrong under oath), and it seems like a lot of this case hinges on "was Said deliberately wrong, or accidentally wrong?" and "was Duncan deliberately wrong, or accidentally wrong?".
    3. I also don't expect it to be uncontroversial "who started it". Locally, my sense is Duncan started it, and yet when I inhabit Duncan's perspective, this is all a response to Said and his style. I interpret a lot of Duncan's complaints here thru the lens of imaginary injury that he writes about here.
  4. I think also there's something going on where Duncan is attempting to mimic Said's style when interacting with Said, but in a way that wouldn't pass Said's ITT. Suppose my comment here had simply been a list of ways that Duncan behaved poorly in this exchange; then I think Duncan could take the approach of "well, but Said does the same thing in places A, B, and C!". I think he overestimates how convincing I would find that, and Duncan did a number of things in this exchange that my model of Said would not do and has not done (according to my interpretation, but not my model of Duncan's, in a mirror of the four examples above).
  5. I think Said is trying to figure out which atomic actions are permissible or impermissible (in part because it is easier to do local validity checking [LW · GW] on atomic actions), and Duncan is trying to suggest what is permissible or impermissible is more relational and deals with people's attitudes towards each other (as suggested by gjm here [LW(p) · GW(p)]). I feel sympathetic to both views here; I think Duncan often overestimates how familiar readers will be with his works / how much context he can assume, and yet also I think Said is undercounting how much people's memory of past interactions colors their experience of comments. [Again, I think these are not simple bugs but deliberate choices--I think Duncan wants to build up a context in which people can hold each other accountable and build further work together, and I think Said views colorblindness of this sort as superior to being biased.]
  1. ^

    Duncan has, I think, made it very clear that a comment that just says “what are some examples of this claim?” is, in his view, unacceptable. That’s what I was talking about. I really do not think it’s controversial at all to ascribe this opinion to Duncan.

  2. ^

    I note that my reasons for this are themselves perhaps white horses are not horses reasons, where I think Said's original statement and follow-up are both imprecise, but they're missing the additional features that would make them 'blatant falsehood's, while both imprecise statements and blatant falsehoods are 'incorrect'.

Replies from: SaidAchmiz, Duncan_Sabien, SaidAchmiz, SaidAchmiz, SaidAchmiz, Duncan_Sabien, Duncan_Sabien, Duncan_Sabien
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T00:05:05.622Z · LW(p) · GW(p)

Vaniver privately suggested to me that I may want to offer some commentary on what I could’ve done in this situation in order for it to have gone better, which I thought was a good and reasonable suggestion. I’ll do that in this comment, using Vaniver’s summary of the situation as a springboard of sorts.

In this comment [LW(p) · GW(p)], Said describes as bad “various proposed norms of interaction such as “don’t ask people for examples of their claims” and so on”, without specifically identifying Duncan as proposing that norm (tho I think it’s heavily implied).

Then gjm objects [LW(p) · GW(p)] to that characterization as a straw man.

So, first of all, yes, I was clearly referring to Duncan. (I didn’t expect that to be obscure to anyone who’d bother to read that subthread in the first place, and indeed—so far as I can tell—it was not. If anyone had been confused, they would presumably have asked “what do you mean?”, and then I’d have linked what I mean—which is pretty close to what happened anyway. This part, in any case, is not the problem.)

The obvious problem here is that “don’t ask people for examples of their claims”—taken literally—is, indeed, a strawman.

The question is, whose problem (to solve) is it?

There are a few possible responses to this (which are not mutually exclusive).

On the one hand, if I want people to know what I mean, and instead of saying what I mean, I say something which is only approximately what I mean, and people assume that I meant what I said, and respond to it—well, whose fault is that, but mine?

Certainly one could make protestations along the lines of “haven’t you people ever heard of [ hyperbole / colloquialisms / writing off the cuff and expecting that readers will infer from surrounding context / whatever ]”, but such things are always suspect. (And even if one insists that there’s nothing un-virtuous about any particular instance of any one of those rhetorical or conversational patterns, nevertheless it would be a bit rich to get huffy about people taking words literally on Less Wrong, of all places.)

So, in one sense, the whole problem would’ve been avoided if I’d taken pains to write as precisely as I usually try to do. Since I didn’t do that, and could have, the fault would seem to be mine; case closed.

But that account doesn’t quite work.

For one thing, if someone says something you think is wrong, and you say “seems wrong to me actually”, and they reply “actually I meant this other thing”—well, that seems to me to be a normal and reasonable sort of exchange; this is how understanding is reached. I made a claim; gjm responded that it seemed like a strawman; I responded with a clarification.

Note that here I definitely made a mistake; what I should’ve included in that comment, but left out, was a clear and unambiguous statement along the lines of:

“Yes, taken literally, ‘don’t ask people for examples of their claims’ would of course be a strawman. I thought that the intended reading would be clear, but I definitely see the potential for literal (mis-)reading, sorry. To clarify:”

The rest of that comment would then have proceeded as written. I don’t think that it much needs amendment. In particular, the second paragraph (which, as Vaniver notes, does much of the work) gives a concise and clear statement of the claim which I was originally (and, at first, sloppily) alluding to. I stand by that clarified claim, and have seen nothing that would dissuade me from it.

Importantly, however, we can see that Duncan objects, quite strenuously, even to this clarified and narrowed form of what I said!

(As I note in this comment [LW(p) · GW(p)], it was not until after essentially the whole discussion had already taken place that Duncan edited his reply to my latter comment to explicitly disclaim the view that I ascribed to him. For the duration of that whole long comment exchange, it very much seemed to me that Duncan was not objecting because I was ascribing to him a belief he does not hold, but rather because he had not said outright that he held such a belief… but, of course, I never claimed that he had!)

So even if that clarified comment had come first (having not, therefore, needed any acknowledgment of previous sloppiness), there seems to be little reason to believe that Duncan would not have taken umbrage at it.

Despite that, failing to include that explicit acknowledgement was an error. Regardless of whether it can be said to be responsible for the ensuing heated back-and-forth (I lean toward “probably not”), this omission was very much a failure of “local validity” on my part, and for that there is no one to blame but me.

Of the rest of the discussion thread, there is little that needs to be said. (As Vaniver notes, some of my subsequent comments both clarify my claims further and also provide evidence for them.)

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T06:19:55.729Z · LW(p) · GW(p)

In the response I would have wanted to see, Duncan would have clearly and correctly pointed to that difference. He is in favor of people asking for examples [combined with other efforts to cross the gap], does it himself, gives examples himself, and so on. The unsaid [without anything else] part is load-bearing and thus inappropriate to leave out or merely hint at. [Or, alternatively, using "ask people for examples" to refer to comments that do only that, as opposed to the conversational move which can be included or not in a comment with other moves.]

I agree that the hypothetical comment you describe as better is in fact better. I think something like ... twenty-or-so exchanges with Said ago, I would have written that comment?  I don't quite know how to weigh up [the comment I actually wrote is worse on these axes of prosocial cooperation and revealing cruxes and productively clarifying disagreement and so forth] with [having a justified true belief that putting forth that effort with Said in particular is just rewarded with more branches being created].

(e.g. there was that one time [LW(p) · GW(p)] recently where Said said I'd blocked people due to disagreeing with me/criticizing me, and I said no, I haven't blocked anybody for disagreeing/criticizing, and he responded "I didn’t say anything about 'blocked for disagreeing [or criticizing]'. (Go ahead, check!)" and the actual thing he'd said was that they'd been blocked due to disagreeing/criticizing; that's the level of ... gumming up the works? gish-gallop? ... that I've viscerally come to expect.)

Like, I think there's plausibly a CEV-ish code of conduct in which I "should", at that point, still have put forth the effort, but I think it's also plausible that the correct code of conduct is one in which doing so is a genuine mistake and ... noticing that there's a hypothetical "better" comment is not the same as there being an implication that I should've written it?

Something something, how many turns of the cheek are actually correct, especially given that, the week prior, multiple commenters had been unable, with evidence+argument+personal testimony, to shift Said away from a strikingly uncharitable prior.


(This does not count as two hypotheses [LW(p) · GW(p)] in my culture.)

Mine either, to be clear; I felt by that point that Said had willingly put himself outside of the set of [signatories to the peace treaty], turning down many successive opportunities to remain in compliance with it.  I was treating his statements closer to the way I think it is correct to treat the statements of the literal Donald Trump than the way I think it is correct to treat the statements of an undistinguished random Republican.

(I can go into the reasoning for that in more detail, but it seems sort of conflicty to do so unprompted.)


If Said ate breakfasts of only cereal, and Duncan said that was unhealthy and he shouldn't do it, it is not quite right to say Duncan 'thinks you shouldn't eat cereal', as he might be in favor of cereal as part of a balanced breakfast; but also it is not quite right for Duncan to ignore Said's point that one of the main issues under contention is whether Said can eat cereal by itself (i.e. asking for examples without putting in interpretative labor).

I'm a little lost in this analogy; this is sort of where the privileging-the-hypothesis complaint comes in.

The conversation had, in other places, centered on the question of whether Said can eat cereal by itself; Logan for instance highlighted Said's claim in a reply on FB:

Furthermore, you have mentioned the “inferential gap” several times, and suggested that it is the criticizer’s job, at least in part, to bridge it. I disagree.

There, the larger question of "can you eat only cereal, or must you eat other things in balance?" is front-and-center.

But at that point in the subthread, it was not front-and-center; yes, it was relevant context, but the specific claim being made by Said was clear, and discrete, and not at all dependent-on or changed-by that context.

The history of that chain:

Said includes, in a long comment "In summary, I think that what’s been described as 'aiming for convergence on truth' is some mixture of" ... "contentless" ... "good but basically unrelated to the rest of it" ... "bad (various proposed norms of interaction such as 'don't ask people for examples of their claims' and so on)"

gjm, in another long comment, includes "I don't know where you get 'don't ask people for examples of their claims' from and it sounds like a straw man" and goes on to elaborate "I think the things Duncan has actually said are more like 'Said engages in unproductive modes of discussion where he is constantly demanding more and more rigour and detail from his interlocutors while not providing it himself', and wherever that lands on a scale from '100% truth' to '100% bullshit' it is not helpful to pretend that he said 'it is bad to ask people for examples of their claims'."

There's a bunch of other stuff going on in their back and forth, but that particular thread has been isolated and directly addressed, in other words.  gjm specifically noted the separation between the major issue of whether balance is required, and this other, narrower claim.

Said replied:

If “asking people for examples of their claims” doesn’t fit Duncan’s stated criteria for what constitutes acceptable engagement/criticism, then it is not pretending, but in fact accurate, to describe Duncan as advocating for a norm of “don’t ask people for examples of their claims”.

Which, yes, I straightforwardly agree with the if-then statement; if "asking people for examples of their claims" didn't fit my stated criteria for what constitutes acceptable engagement or criticism, then it would be correct to describe me as advocating for a norm of "don't ask people for examples of their claims."

But like.  The if does not hold.  It really clearly doesn't hold.  It was enough of an out-of-nowhere strawman/non-sequitur that gjm specifically called it out as "???" at which point Said doubled down, saying the above and also

Duncan has, I think, made it very clear that a comment that just says “what are some examples of this claim?” is, in his view, unacceptable. That’s what I was talking about. I really do not think it’s controversial at all to ascribe this opinion to Duncan.

It seems like, in your interpretation, I "should" (in some sense) be extending a hand of charity and understanding and, I dunno, helping Said to coax out his broader, potentially more valid point—helping him to get past his own strawman and on to something more steel, or at least flesh.  Like, if I am reading you correctly above, you're saying that, by focusing in on the narrow point that had been challenged by gjm and specifically reaffirmed by Said, I myself was making some sort of faux pas.

(Am I in fact reading you correctly?)

I do not think so.  I think that, twenty exchanges prior, I perhaps owed Said something like that degree of care and charity and helping him avoid tying his own shoelaces together.  I certainly feel I would owe it to, I dunno, Eric Rogstad or Julia Galef, and would not be the slightest bit loath to provide it.

But here, Said had just spent several thousand words the week prior, refusing to be budged from a weirdly uncharitable belief about the internals of my mind, despite that belief being incoherent with observable evidence and challenged by multiple non-me people.  I don't think it's wise-in-the-sense-of-wisdom to a) engage with substantial charity in that situation, or b) expect someone else to engage with substantial charity in that situation.

(You can tell that my stated criteria do not rule out asking people for examples of their claims in part because I've written really quite a lot about what I think constitutes acceptable engagement or criticism, and I've just never come anywhere close to a criterion like that, nor have I ever complained about someone asking for examples unless it was after a long, long string of what felt like them repeatedly not sharing in the labor of truthseeking. Like, the closest I can think of is this thread with tailcalled [LW(p) · GW(p)], in which (I think/I hope) it's pretty clear that what's going on is that I was trying to cap the total attention paid to the essay and its discussion, and thus was loath to enter into something like an exchange of examples—not that it was bad in any fundamental sense for someone to want some.  I did in fact provide some, a few comments deeper in the thread, though I headlined that I hadn't spent much time on them.)

So in other words: I don't think it was wrong to focus on the literal, actual claim that Said had made (since he made it, basically, twice in a row, affirming "no, I really mean this" after gjm's objection and even saying that he thinks it is so obvious as to not be controversial). I don't think I "ought" to have had a broader focus, under the circumstances—Said was making a specific, concrete, and false claim, and his examples utterly fail to back up that specific, concrete, and false claim (though I do agree with you that they back up something like his conception of our broader disagreement).

I dunno, I'm feeling kind of autistic, here, but I feel like if, on Less Wrong dot com, somebody makes a specific, concrete claim about my beliefs or policies, clarifies that yes, they really meant that claim, and furthermore says that such-and-such links are "citations for [me] expressing the sentiment [they've] ascribed to [me]" when they simply are not—

It feels like emphatic and unapologetic rejection should be 100% okay, and not looked at askance.  The fact that they are citations supporting a different claim is (or at least, I claim, should be) immaterial; it's not my job to steelman somebody who spent hours and hours negatively psychologizing me in public (while claiming to have no particular animus, which, boy, a carbon copy of Said sure would have had Words about).

I think there's a thing here of standards unevenly applied; surely whatever standard would've had me address Said's "real" concern would've also had Said behave much differently at many steps prior, possibly never strawmanning me so hard in the first place?


I think this situation is, on some level, pretty symmetric.

  1. I think the features of Said's commenting style that people (not just Duncan!) find annoying are things that Said is deliberately optimizing for or the results of principled commitments he's made, so it's not just a simple bug that can be fixed.
  2. I think the features of Duncan's conflict resolution methods that people find offputting are similarly things that Duncan is deliberately optimizing for or the results of principled commitments he's made, so it's not just a simple bug that can be fixed.

I think the asymmetry breaks in that, like, a bunch of people have asked Said to stop and he won't; I'm quite eager to stop doing the conflict resolution that people don't like, if there can pretty please be some kind of system in place that obviates it. I much prefer the world where there are competent police to the world where I have to fight off muggers in the alley—that's why I'm trying so hard to get there to be some kind of actually legible standards rather than there always being some plausible reason why maybe we shouldn't just say "no" to the bullshit that Zack or Said or anonymouswhoever is pulling.

Right now, though, it feels like we've gone from "Ben Hoffman will claim Duncan wants to ghettoize people and it'll be left upvoted for nine days with no mod action" to "Ray will expound on why he thinks it's kinda off for Said to be doing what he's doing but there won't be anything to stop Said from doing it" and I take Oli's point about this stuff being hard and there being other priorities but like, it's been years.  And I get a stance of, like, "well, Duncan, you're asking for a lot," but I'm trying pretty hard to earn it, and to ... pave the way?  Help make the ask smaller? ... with things like the old Moderating LessWrong post and the Concentration of Force post and the more recent Basics post.  Like, I can't think of much more that someone with zero authority and zero mantle can do.  My problem is that abuse and strawmanning of me gets hosted on LW and upvoted on LW and people are like, well, maybe if you patiently engaged with and overturned the abuse and strawmanning in detail instead of fighting back—

I dunno.  If mods would show up and be like "false" and "cut it out" I would pretty happily never get into a scrap on LW ever again.  


Locally, my sense is Duncan started it,

:(((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((((

This, more than anything else, is like "just give up and leave, this is definitely not a garden."


I didn't make it to every point, but hopefully you find this more of the substantive engagement you were hoping for.

Replies from: Ruby, Vaniver
comment by Ruby · 2023-04-15T17:51:13.503Z · LW(p) · GW(p)

At the risk of guessing wrong, and perhaps typical-mind-fallacying, I imagine that you're [rightly?] feeling a lot of frustration, exasperation, and even despair about moderation on LessWrong. You've spent dozens (more?) of hours and tens of thousands of words trying to make LessWrong the garden you think it ought to be (and to protect yourself here against attackers), and just to try to uphold, indeed, basic standards for truthseeking discourse. You've written that some small validation goes a long way, so this is me trying to say that I think your feelings have a helluva lot of validity.

I don't think that you and I share exactly the same ideals for LessWrong. PerfectLessWrong!Ruby and PerfectLessWrong!Duncan would be different (or heck, even just VeryGoodLessWrongs), though I'm also pretty sure that you'd be much happier with my ideal; you'd think it was pretty good if not perfect. Respectable, maybe adequate. A garden.

And I'm really sad that the current LessWrong falls really, really far short of my own ideals (and Ray of his ideals, and Oli of his ideals), etc. And not just short of a super-amazing-lofty-ideal, also short of a "this place is really under control" kind of ideal. I take responsibility for it not being so, and I'm sorry. I wouldn't blame you for saying this isn't good enough and wanting to leave[1]; there are some pretty bad flaws.

But sir, you impugn my and my site's honor. This is not a perfect garden, but it is also not a jungle. And there is an awful lot of gardening going on. I take it very seriously that LessWrong is not just any place, and it takes ongoing work to keep it so. This is approx my full-time job (and that of others too), and while I don't work 80-hour weeks, I feel like I put a tonne of my soul into this site.

Over the last year, I've been particularly focused on what I suspect are existential threats to LessWrong (not even the ideal, just the decently-valuable thing we have now). I think this very much counts as gardening. The major one over the last year is how to both have all the AI content [LW · GW] (and I do think AI is the most important topic right now) and not have it eat LessWrong and turn it into the AI-website rather than the truth-seeking/effectiveness/rationality website, which is actually what I believe is its true spirit[2]. So far, I feel like we're still failing at this. On many days, the Frontpage is 90+% AI posts. It's not been a trivial problem, for many reasons.

The other existential problem, beyond the topic, that I've been anticipating for a long time and is now heating up is the deluge of new users flowing to the site because of the rising prominence of AI. Moderation is currently our top focus, but even before that, every day – the first thing we do when the team gets in in the morning – is review every new post, all first-time submissions from users, and the activity of users who are getting a lot of downvotes. It's not exactly fun, but we do it basically every day[3]. In the interests of greater transparency and accountability, we will soon build a Rejected Content section of the site where you'll be able to view the content we didn't let go live, and I predict that will demonstrate just how much this garden is getting tended, and that counterfactually the quality would be a lot lot worse. You can see here a recent internal document [LW · GW] that describes my sense of priorities for the team.

I think the discourse norms and bad behavior (and I'm willing to say now in advance of my more detailed thoughts that there's a lot of badness to how Said behaves) are also serious threats to the site, and we do give those attention too. They haven't felt like the most pressing threats (or for that matter, opportunities, recently), and I could be making a mistake there, but we do take them seriously. Our focus (which I think has a high opportunity cost) has been turned to the exchanges between you and Said this week; plausibly you've done us a service to draw our attention to behavior we should be deeming intolerable, and it's easily 50-100 hours of team attention.

It is plausible the LessWrong team has made a mistake in not prioritizing this stuff more highly over the years (it has been years – though Said and Zack and others have in fact received hundreds of hours of attention), and there are definitely particular projects that I think turned out to be misguided and less valuable than marginal moderation would have been, but I'll claim that it was definitely not an obvious mistake that we haven't addressed the problems you're most focused on. 

It is actually on my radar, and I've actively wanted for a while a system that reliably gets the mod team to show up and say "cut it out" sometimes. I suspect that's what should have happened a lot earlier on in your recent exchanges with Said. I might have liked to say "Duncan, we the mods certify that if you disengage, it is no mark against you" or something. I'm not sure. Ray mentioned the concept of "Maslow's Hierarchy of Moderation" and I like that idea, and would like to get soon to the higher level where we're actively intervening in these cases. I regret that I in particular on the team am not great at dropping what I'm doing to pivot when these threads come up; perhaps I should work on that.

I think a claim you could make is that the LessWrong team should have hired more people so they could cover more of this. Arguing why we haven't (or why Lightcone as a whole didn't keep more team members on the LessWrong team) is a bigger discussion. I think things would have been worse if LessWrong had been bigger most of the time, and, barring an unusually good candidate, it'd be bad to hire right now.

All this to say: this garden has a lot of shortcomings, but the team works quite hard to keep it at least as good as it is and to try to make it better. Fair enough if it doesn't meet your standards or isn't how you'd do it; perhaps we're not all that competent.

(And also you've had a positive influence on us, so your efforts are not completely in vain. We do refer to your moderation post/philosophy even if we haven't adopted it wholesale, and make use of many of the concepts you've crystallized. For that I am grateful. Those are contributions I'd be sad to lose, but I don't want to push you to offer them to us if doing so is too costly for you.)

  1. ^

    I will also claim though that a better version of Duncan would be better able to tolerate the shortcomings of LessWrong and improve it too; that even if your efforts to change LW aren't working enough, there are efforts on yourself that would make you better, and better able to benefit from the LessWrong that is.

  2. ^

    Something like the core identity of LessWrong is rationality. In alternate worlds, that is the same, but the major topic could be something else.

  3. ^

    Over the weekend, some parts of the reviewing get deferred till the work week.

Replies from: Duncan_Sabien, Ruby
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T18:16:58.334Z · LW(p) · GW(p)

But sir, you impugn my and my site's honor

This is fair, and I apologize; in that line I was speaking from despair and not particularly tracking Truth.

A [less straightforwardly wrong and unfair] phrasing would have been something like "this is not a Japanese tea garden; it is a British cottage garden."

Replies from: Ruby
comment by Ruby · 2023-04-15T21:05:05.918Z · LW(p) · GW(p)

I have been to the Japanese tea garden in Portland, and found it exquisite, so I think I get your referent there.

Aye, indeed it is not that.

comment by Ruby · 2023-04-15T18:34:38.694Z · LW(p) · GW(p)

I probably rushed this comment out the door in a "defend my honor, set the record straight" instinct that I don't think reliably leads to good discourse and is not what I should be modeling on LessWrong. 

comment by Vaniver · 2023-04-15T16:34:36.158Z · LW(p) · GW(p)

I didn't make it to every point, but hopefully you find this more of the substantive engagement you were hoping for.

I did, thanks.

gjm specifically noted the separation between the major issue of whether balance is required, and this other, narrower claim.

I think gjm's comment was missing the observation that "comments that just ask for examples" are themselves an example of "unproductive modes of discussion where he is constantly demanding more and more rigour and detail from his interlocutors while not providing it himself", and so it wasn't cleanly about "balance: required or not?". I think a reasonable reader could come away from that comment of gjm's uncertain whether or not Said simply saying "examples?" would count as an example.

My interpretation of this section is basically the double crux dots arguing over the labels they should have, with Said disagreeing strenuously with calling his mode "unproductive" (and elsewhere over whether labor is good or bad, or how best to minimize it) and moving from the concrete examples to an abstract pattern (I suspect because he thinks the former is easier to defend than the latter).

I should also note here that I don't think you have explicitly staked out that you think Said just saying "examples?" is bad (like, you didn't here [LW(p) · GW(p)], which was the obvious place to), I am inferring that from various things you've written (and, tho this source is more suspect and so has less influence, ways other people have reacted to Said before).

Said to coax out his broader, potentially more valid point

Importantly, I think Said's more valid point was narrower, not broader, and the breadth was the 'strawmanning' part of it. (If you mean to refer to the point dealing with the broader context, I agree with that.) The invalid "Duncan's rule against horses" turning into the valid "Duncan's rule against white horses". If you don't have other rules against horses--you're fine with brown ones and black one and chestnut ones and so on--I think that points towards your rule against white horses pretty clearly. [My model of you thinks that language is for compiling into concepts instead of pointing at concepts and so "Duncan's rule against horses" compiles into "Duncan thinks horses should be banned" which is both incorrect and wildly inconsistent with the evidence. I think language is for both, and when one gives you a nonsense result, you should check the other.]

I will note a way here in which it is not quite fair that I am saying "I think you didn't do a reasonable level of interpretive labor when reading Said", in the broader context of your complaint that Said doesn't do much interpretive labor (deliberately!). I think it is justified by the difference in how the two of you respond to the failure of that labor.

(Am I in fact reading you correctly?)

I am trying to place the faux pas not in that you "reacted at all to that prompt" but "how you reacted to the prompt". More in the next section.

clarifies that yes, they really meant that claim,

I think this point is our core disagreement. I see the second comment saying "yeah, Duncan's rule against horses, the thing where he dislikes white ones", and you proceeding as if he just said "Duncan's rule against horses." I think there was an illusion of transparency behind "specifically reaffirmed by Said".

Like, I think if you had said "STRAWMAN!" and tried to get us to put a scarlet S in Said's username, this would have been a defensible accusation, and the punishment unusual but worth considering. Instead I think you said "LIAR!" and that just doesn't line up with my reading of the thread (tho I acknowledge disagreement about the boundary between 'lying' and 'strawmanning') or my sense of how to disagree properly. In my favorite world, you call it a mislabeling and identify why you think the label fails to match (again, noting that gjm attempted to do so, tho I think not in a way that bridged the gap).

I think there's a thing here of standards unevenly applied; surely whatever standard would've had me address Said's "real" concern would've also had Said behave much differently at many steps prior, possibly never strawmanning me so hard in the first place?

I mean, for sure I wish Said had done things differently! I described them in some detail, and not strawmanning you so hard in the first place was IMO the core one.

When I say "locally", I am starting the clock at Killing Socrates [LW · GW], which was perhaps unclear. 

if there can pretty please be some kind of system in place that obviates it.

Do you think Said would not also stop if, for every post he read on LW, he found that someone else had already made the comment he would have liked to have made?

(I do see a difference where the outcomes you seek to achieve are more easily obtained with mod powers backing them up, but I don't think that affects the primary point.)

If mods would show up and be like "false" and "cut it out" I would pretty happily never get into a scrap on LW ever again. 

So, over here [LW(p) · GW(p)] Elizabeth 'summarizes' Said in an unflattering way, and Said objects [LW(p) · GW(p)].  I don't think I will reliably see such comments before those mentioned in them do (there were only 23 minutes before Said objected) and it is not obvious to me that LW would be improved by me also objecting now.

But perhaps our disagreement is that, on seeing Elizabeth's comment, I didn't have a strong impulse to 'set the record straight'; I attribute that mostly to not seeing Elizabeth's comment as "the record," tho I'm open to arguments that I should.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T17:11:05.602Z · LW(p) · GW(p)

I think a reasonable reader could come away from that comment of gjm's uncertain whether or not Said simply saying "examples?" would count as an example.

...

I should also note here that I don't think you have explicitly staked out that you think Said just saying "examples?" is bad (like, you didn't here [LW(p) · GW(p)], which was the obvious place to), I am inferring that from various things you've written (and, tho this source is more suspect and so has less influence, ways other people have reacted to Said before).

To clarify:

If one starts out looking to collect and categorize evidence of their conversational partner not doing their fair share of the labor, then a bunch of comments that just say "Examples?" would go into the pile. But just encountering a handful of comments that just say "Examples?" would not be enough to send a reasonable person toward the hypothesis that their conversational partner reliably doesn't do their fair share of the labor.

"Do you have examples?" is one of the core, common, prosocial moves, and correctly so.  It is a bid for the other person to put in extra work, but the scales of "are we both contributing?" don't need to be balanced every three seconds, or even every conversation.  Sometimes I'm the asker/learner and you're the teacher/expounder, and other times the roles are reversed, and other times we go back and forth.

The problem is not in asking someone to do a little labor on your behalf. It's having 85+% of your engagement be asking other people to do labor on your behalf, and never reciprocating, and when people are like, hey, could you not, or even just a little less? being supercilious about it.

Said simply saying "examples?" is an example, then, but only because of the strong prior from his accumulated behavior; if the rule is something like "doing this <100x/wk is fine, doing it >100x/wk is less fine," then the question of whether a given instance "is an example" is slightly tricky.

I think this point is our core disagreement. I see the second comment saying "yeah, Duncan's rule against horses, the thing where he dislikes white ones", and you proceeding as if he just said "Duncan's rule against horses." I think there was a illusion of transparency behind "specifically reaffirmed by Said".

Yeah, you may have pinned it down (the disagreement).  I definitely don't (currently) think it's sensible to read the second comment that way, and certainly not sensible enough to mentally dock someone for not reading it that way even if that reading is technically available (which I agree it is).  

Like, I think if you had said "STRAWMAN!" and tried to get us to put a scarlet S in Said's username, this would have been a defensible accusation

I perhaps have some learned helplessness around what I can, in fact, expect from the mod team; I claim that if I had believed that this would be received as defensible I would've done that instead. At the time, I felt helpless and alone*/had no expectation of mod support for reasons I think are reasonable, and so was not proceeding as if there was any kind of request I could make, and so was not brainstorming requests.

*alone vis-a-vis moderators, not alone vis-a-vis other commenters like gjm

I do think that you should put a scarlet P [LW · GW] in Said's username, since he's been doing it for a couple weeks now and is still doing it [LW(p) · GW(p)] (c.f. "I have yet to see any compelling reason to conclude that this [extremely unlikely on its face hypothesis] is false."). 

In my favorite world, you call it a mislabeling and identify why you think the label fails to match (again, noting that gjm attempted to do so, tho I think not in a way that bridged the gap).

I again agree that this is clearly a better set of moves in some sense, but I'm thinking in a fabricated options frame and being, like, is that really actually a possible world, in that the whole problem is Said's utterly exhausting and unrewarding mode of engagement.  Like, I wonder if I might convince you that your favorite world is incoherent and impossible, because it's one in which people are engaging in the colloquial definition of insanity and never updating their heuristics based on feedback.  Or maybe you're saying "do it for the audience and for site norms, then," which feels less like throwing good money after bad.

But like.  I think I'm getting dinged for impatience when I did not, previously, get headpats for patience? The wanted behavior feels unincentivized relative to the unwanted behavior.

When I say "locally", I am starting the clock at Killing Socrates [LW · GW], which was perhaps unclear. 

No, that was pretty clear, and that's what generated the :((((((((. The choice to start the clock there feels unfair-to-Neville, like if I were a teacher I would glance at that and say "okay, obviously this is not the local beginning" and look further.

Do you think Said would not also stop if, for every post he read on LW, he found that someone else had already made the comment he would have liked to have made?

I am wary of irresponsibly theorizing about the contents of someone else's mind. I do think that, if one looks over the explosive proliferation of his threads once he starts a back-and-forth, it's unlikely that there's some state in which Said is like "ah, people are already saying all the things!"  I suspect that Said (like others, to be clear; this is not precisely a criticism) has an infinite priority list, and if all the things of top priority are handled by other commenters, he'll move down to lower ones.

I do think that if you took all of Said's comments, and distributed 8% of them each into the corpus of comments of Julia Galef, Anna Salamon, Rob Bensinger, Scott Garrabrant, you, Eliezer Yudkowsky, Logan Brienne Strohl, Oliver Habryka, Kelsey Piper, Nate Soares, Eric Rogstad, Spencer Greenberg, and Dan Keys this would be much better.  Part of the problem is the sheer concentration of princely entitlement and speaking-as-if-it-is-the-author's-job-to-convince-Said-particularly-regardless-of-whether-Said's-skepticism-is-a-signal-of-any-real-problem-with-the-claims.

If Kelsey Piper locally is like, buddy, you need to give me more examples, or if Spencer Greenberg locally is like, but what the heck do you even mean by "annoying," there's zero sense (on my part, at least) that here we go again, more taking-without-contributing.  Instead, with Kelsey and Spencer it feels like a series of escalating favors and a tightening of the web of mutual obligation in which everybody is grateful to everybody else for having put in so many little bits of work here and there, of course I want to spill some words to help connect the dots for Kelsey and Spencer, they've spilled so many words helping me.

The pattern of "give, then take, then give, then take, then take, then take, then give, then give" is a healthy one to model, and is patriotically Athenian in the frame of my recent essay, and is not one which, if a thousand newbies were to start emulating, would cause a problem.

But perhaps our disagreement is that, on seeing Elizabeth's comment, I didn't have a strong impulse to 'set the record straight'

I don't think that mods should be chiming in and setting the record straight on every little thing. But when, like, Said spends multiple thousands of words in a literally irrational (in the sense of not having cruxes and not being open to update and being directly contradicted by evidence) screed strawmanning me and claiming that I block people for disagreeing with my claims or criticizing my arguments—

—and furthermore when I ask for mod help—

—then I do think that a LessWrong where a mod shows up to say "false" and "actually cut it out for real" is meaningfully different and meaningfully better than the current Wild West feel where Said doesn't get in trouble but I do.

Replies from: SaidAchmiz, Raemon
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T17:44:20.410Z · LW(p) · GW(p)

The problem is not in asking someone to do a little labor on your behalf. It’s having 85+% of your engagement be asking other people to do labor on your behalf, and never reciprocating, and when people are like, hey, could you not, or even just a little less? being supercilious about it.

But why should this be a problem?

Why should people say “hey, could you not, or even just a little less”? If you do something that isn’t bad, that isn’t a problem, why should people ask you to stop? If it’s a good thing to do, why wouldn’t they instead ask you to do it more?

And why, indeed, are you still speaking in this transactional way?

If you write a post about some abstract concept, without any examples of it, and I write a comment that says “What are some examples?”, I am not asking you to do labor on my behalf, I am not asking for a favor (which must be justified by some “favor credit”, some positive account of favors in the bank of Duncan). Quite frankly, I find that claim ridiculous to the point of offensiveness. What I am doing, in that scenario, is making a positive contribution to the discussion, both for your benefit and (even more importantly) for the benefit of other readers and commenters.

There is no good reason why you should resent responding to a request like “what are some examples”. There is no good reason why you should view it as an unjustified and entitled demand for a favor. There is definitely no good reason why you should view acceding to that request as being “for my benefit” (instead of, say, for your benefit, and for the benefit of readers).

(And the gall of saying “never reciprocating”, to me! When I write a post [LW · GW], I include examples pre-emptively, because I know that I should be asked to do so otherwise. Not “will be asked”, of course—but “should”. And when I write a post without enough examples, and someone asks for examples [LW(p) · GW(p)], I respond in great detail. Note that my responses in that thread are much, much longer than the comment which asked for examples. Of course they are! Because the question doesn’t need to be longer—but the answers do!)

(And you might say: “but Said, you barely write any posts—like one a year, at best!”. Indeed. Indeed.)

Replies from: Vladimir_Nesov, Duncan_Sabien, Vaniver
comment by Vladimir_Nesov · 2023-04-16T04:46:00.502Z · LW(p) · GW(p)

There is no good reason why you should resent responding to a request like “what are some examples”.

Maybe "resent" is doing most work here, but an excellent reason to not respond is that it takes work. To the extent that there are norms in place that urge response, they create motivation to suppress criticism that would urge response. An expectation that it's normal for criticism to be a request for response that should normally be granted is pressure to do the work of responding, which is costly, which motivates defensive action in the form of suppressing criticism.

A culture could make it costless (all else equal) to ignore the event of a criticism having been made. This is an inessential reason for suppressing criticism that can be removed, and therefore should, to make criticism cheaper and more abundant.

The content of criticism may of course motivate the author of a criticized text to make further statements, but the fact of criticism's posting by itself should not. The fact of not responding to criticism is some sort of noisy evidence of not having a good response that is feasible or hedonic to make, but that's Law, not something that can change for the sake of mechanism design.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-16T05:00:48.023Z · LW(p) · GW(p)

Maybe “resent” is doing most work here

It’s certainly doing a decent amount of work, I agree.

Anyhow, your overall point is taken—although I have to point out that your last sentence seems like a rebuttal of your next-to-last sentence.

That having been said, of course the content of criticism matters. A piece of criticism could simply be bad, and clearly wrong; and then it’s good and proper to just ignore it (perhaps after having made sure that an interested party could, if they so wished, easily see or learn why that criticism is bad). I do not, and would not, advocate for a norm that all comments, all critical questions, etc., regardless of their content, must always be responded to. That is unreasonable.

I also want to note—as I’ve said several times in this discussion, but it bears repeating—there is nothing problematic or blameworthy about someone other than the author of a post responding to questions, criticism, requests for examples, etc. That is fine. Collaborative development of ideas is a perfectly normal and good thing.

What that adds up to, I think, is a set of requirements for a set of social norms which is quite compatible with your suggestion of making it “costless (all else equal) to ignore the event of a criticism having been made”.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-16T06:57:24.420Z · LW(p) · GW(p)

The content of criticism may of course motivate the author of a criticized text to make further statements, but the fact of criticism's posting by itself should not. The fact of not responding to criticism is some sort of noisy evidence of not having a good response that is feasible or hedonic to make, but that's Law, not something that can change for the sake of mechanism design.

I have to point out that that your last sentence seems like a rebuttal of your next-to-last sentence

They are in opposition, but the point is that they are about different kinds of things, and one of them can't respond to policy decisions. It's useful to have a norm that lessens the burden of addressing criticism. It's Law of reasoning that this burden can nonetheless materialize. The Law is implacable but importantly asymmetric, it only holds when it does, not when the court of public opinion says it should. While the norms are the other way around, and their pressure is somewhat insensitive to facts of a particular situation, so it's worth pointing them in a generally useful direction, with no hope for their nuanced or at all sane response to details.

Perhaps the presence of Law justifies norms that are over-the-top forgiving to ignoring criticism, or find ignoring criticism a bit praiseworthy when it would be at all unpleasant not to ignore it, to oppose the average valence of Law, while of course attempting to preserve its asymmetry. So I'd say my last sentence in that comment argues that the next-to-last sentence should be stronger. Which I'm not sure I agree with, but here's the argument.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T18:35:55.009Z · LW(p) · GW(p)

Said, above, is saying a bunch of things, many of which I agree with, as if they are contra my position or my previous claims.

He can't pass my ITT (not that I've asked him to), which means that he doesn't understand the thing he's trying to disagree with, which means that his disagreement is not actually pointing at my position; the things he finds ridiculous and offensive are cardboard cutouts of his own construction. More detail on that over here [LW(p) · GW(p)].

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T18:40:31.402Z · LW(p) · GW(p)

This response is manifestly untenable, given the comment of yours that I was responding to.

comment by Vaniver · 2023-04-15T22:40:29.353Z · LW(p) · GW(p)

BTW I was surprised earlier to see you agree with the 'relational' piece of this comment [LW(p) · GW(p)] because Duncan's grandparent comment seems like it's a pretty central example of that. (I view you as having more of a "visitor-commons" orientation towards LW, and Duncan has more of an orientation where this is a place where people inhabit their pairwise relationships, as well as more one-to-many relationships.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T22:46:03.197Z · LW(p) · GW(p)

Sorry, I’m not quite sure I follow the references here. You’re saying that… this comment [LW(p) · GW(p)]… is a central example of… what, exactly?

(I view you as having more of a “visitor-commons” orientation towards LW, and Duncan has more of an orientation where this is a place where people inhabit their pairwise relationships, as well as more one-to-many relationships.)

That… seems like it’s probably accurate… I think? I think I’d have to more clearly understand what you’re getting at in your comment, in order to judge whether this part makes sense to me.

Replies from: Vaniver
comment by Vaniver · 2023-04-17T07:05:29.690Z · LW(p) · GW(p)

Sorry, my previous comment wasn't very clear. Earlier I said:

Duncan is trying to suggest what is permissible or impermissible is more relational and deals with people's attitudes towards each other (as suggested by gjm here [LW(p) · GW(p)]).

and you responded with:

I also—and, perhaps, more importantly—think that the interactions in question are not only fine, but good, in a “relational” sense.

(and a few related comments) which made me think "hmm, I don't think we mean the same thing by 'relational'." Then Duncan's comment had a frame that I would have described as 'relational'--as in focusing on the relationships between the people saying and hearing the words--which you then described as transactional.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-17T08:04:21.197Z · LW(p) · GW(p)

Ah, I see.

I think that the sense in which I would characterize Duncan’s description as “transactional” is… mostly orthogonal to the question of “is this a relational frame”. I don’t think that this has much to do with the “‘visitor commons’ vs. ‘pairwise relationships’” distinction, either (although that distinction is an interesting and possibly important one in its own right, and you’re certainly more right than wrong about where my preferences lie in that regard).

(There’s more that I could say about this, but I don’t know whether anything of importance hinges on this point. It seems like it mostly shouldn’t, but perhaps you are a better judge of that…)

comment by Raemon · 2023-04-15T17:37:31.383Z · LW(p) · GW(p)

A couple quick notes for now:

I agree with Duncan here that it's kinda silly to start the clock at "Killing Socrates". Insofar as there's a current live fight that is worth tracking separately from overall history, I think it probably starts in the comments of LW Team is adjusting moderation policy [LW · GW], and I think the recent-ish back and forth on Basics of Rationalist Discourse [LW · GW] and "Rationalist Discourse" Is Like "Physicist Motors" [LW · GW] is recent enough to be relevant (hence me including them in the OP)

"—then I do think that a LessWrong where a mod shows up to say "false" and "actually cut it out for real" is meaningfully different and meaningfully better than the current Wild West feel where Said doesn't get in trouble but I do."

I think Vaniver right now is focusing on resolving the point "is Said a liar?", but not resolving the "who did most wrong?" question. (I'm not actually 100% sure on Vaniver's goals/takes at the moment). I agree this is an important subquestion but it's not the primary question I'm interested in. 

I'm somewhat worried about this thread taking in more energy than it quite warrants, and making Duncan feel more persecuted than really makes sense here. 

I roughly agree with Vaniver that "Liar!" isn't the right accusation to have levied, but also don't judge you harshly for having made it. 

I think this comment [LW(p) · GW(p)] of mine summarizes my relevant opinions here.

(tagging @Vaniver [LW · GW] to make sure he's at least tracking this comment)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T18:21:20.557Z · LW(p) · GW(p)

Thanks.

I note (while acknowledging that this is a small and subtle distinction, but claiming that it is an important one nonetheless) that I said that I now categorize Said as a liar, which is an importantly and intentionally weaker claim than Said is a liar, i.e. "everyone should be able to see that he's a liar" or "if you don't think he's a liar you are definitely wrong."

(This is me in the past behaving in line with the points I just made under Said's comment, about not confusing [how things seem to me] with [how they are] or [how they do or should seem to others].)

This is much much closer to saying "Liar!" than it is to not saying "Liar!" ... if one is to round me off, that's the correct place to round me off to. But it is still a rounding.

Replies from: Raemon
comment by Raemon · 2023-04-15T18:22:31.272Z · LW(p) · GW(p)

Nod, seems fair to note.

comment by Said Achmiz (SaidAchmiz) · 2023-04-14T20:33:55.375Z · LW(p) · GW(p)

I interpret a lot of Duncan’s complaints here thru the lens of imaginary injury that he writes about here.

I just want to highlight this link (to one of Duncan’s essays on his Medium blog), which I think most people are likely to miss otherwise.

That is an excellent post! If it was posted on Less Wrong (I understand why it wasn’t, of course EDIT: I was mistaken about understanding this; see replies), I’d strong-upvote it without reservation. (I disagree with some parts of it, of course, such as one of the examples—but then, that is (a) an excellent reason to provide specific examples, and part of what makes this an excellent post, and (b) the reason why top-level posts quite rightly don’t have agree/disagree voting. On the whole, the post’s thesis is simply correct, and I appreciate and respect Duncan for having written it.)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-14T21:23:50.912Z · LW(p) · GW(p)

It's not on LessWrong because of you, specifically. Like, literally that specific essay, I consciously considered where to put it, and decided not to put it here because, at the time, there was no way to prevent you from being part of the subsequent conversation.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-14T21:26:46.458Z · LW(p) · GW(p)

Hmm. I retract the “I understand why it wasn’t [posted on Less Wrong]” part of my earlier comment! I definitely no longer understand.

(I find your stated reason bizarre to the point where I can’t form any coherent model of your thinking here.)

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T00:36:50.029Z · LW(p) · GW(p)

Said, as a quick note - this particular comment reminds me of the "bite my thumb" scene from Romeo and Juliet. To you, it might be innocuous, but to me, and I suspect to Duncan and others, it sounds like a deliberate insult, with just enough of a veil of innocence to make it especially infuriating.

I am presuming you did not actually mean this as an insult, but were instead meaning to express your genuine confusion about Duncan's thought process. I am curious to know a few things:

  1. Did you recognize that it sounded potentially insulting?
  2. If so, why did you choose to express yourself in this insulting-sounding manner?
  3. If not, does it concern you that you may not recognize when you are expressing yourself in an insulting-sounding way, and is that something you are interested in changing?
  4. And if you didn't know you sounded insulting, and don't care to change, why is that?
Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T01:00:27.393Z · LW(p) · GW(p)

There are some things which cannot be expressed in a non-insulting manner (unless we suppose that the target is such a saint that no criticism can affect their ego; but who among us can pretend to that?).

I did not intend insult, in the sense that insult wasn’t my goal. (I never intend insult, as a rule. What few exceptions exist, concern no one involved in this discussion.)

But, of course, I recognize that my comment is insulting. That is not its purpose, and if I could write it non-insultingly, I would do so. But I cannot.

So, you ask:

If so, why did you choose to express yourself in this insulting-sounding manner?

The choice was between writing something that was necessary for the purpose of fulfilling appropriate and reasonable conversational goals, but could be written only in such a way that anyone but a saint would be insulted by it—or writing nothing.

I chose the former because I judged it to be the correct choice: writing nothing, simply in order to avoid insult, would have been worse than writing the comment which I wrote.

(This explanation is also quite likely to apply to any past or future comments I write which seem to be insulting in similar fashion.)

Replies from: benwr, Jasnah_Kholin, AllAmericanBreakfast
comment by benwr · 2023-04-15T03:03:00.419Z · LW(p) · GW(p)

But, of course, I recognize that my comment is insulting. That is not its purpose, and if I could write it non-insultingly, I would do so. But I cannot.


I want to register that I don't believe you that you cannot, if we're using the ordinary meaning of "cannot". I believe that it would be more costly for you, but it seems to me that people are very often able to express content like that in your comment, without being insulting.

I'm tempted to try to rephrase your comment in a non-insulting way, but I would only be able to convey its meaning-to-me, and I predict that this is different enough from its meaning-to-you that you would object on those grounds. However, insofar as you communicated a thing to me, you could have said that thing in a non-insulting way.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T03:06:46.872Z · LW(p) · GW(p)

I believe you when you say that you don’t believe me.

But I submit to you that unless you can provide a rephrasing which (a) preserves all relevant meaning while not being insulting, and (b) could have been generated by me, your disbelief is not evidence of anything except the fact that some things seem easy until you discover that they’re impossible.

Replies from: benwr
comment by benwr · 2023-04-15T03:16:08.960Z · LW(p) · GW(p)

My guess is that you believe it's impossible because the content of your comment implies a negative fact about the person you're responding to. But insofar as you communicated a thing to me, it was in fact a thing about your own failure to comprehend, and your own experience of bizarreness. These are not unflattering facts about Duncan, except insofar as I already believe your ability to comprehend is vast enough to contain all "reasonable" thought processes.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T03:28:08.400Z · LW(p) · GW(p)

These are not unflattering facts about Duncan

Indeed, they are not—or so it would seem. So why would my comment be insulting?

After all, I didn’t write “your stated reason is bizarre”, but “I find your stated reason bizarre”. I didn’t write “it seems like your thinking here is incoherent”, but “I can’t form any coherent model of your thinking here”. I didn’t… etc.

So what makes my comment insulting?

Please note, I am not saying “my comment isn’t insulting, and anyone who finds it so is silly”. It is insulting! And it’s going to stay insulting no matter how you rewrite it, unless you either change what it actually says or so obfuscate the meaning that it’s not possible to tell what it actually says.

The thing I am actually saying—the meaning of the words, the communicated claims—imply unflattering facts about Duncan.[1] There’s no getting around that.

The only defensible recourse, for someone who objects to my comment, is to say that one should simply not say insulting things; and if there are relevant things to say which cannot be said non-insultingly, then they oughtn’t be said… and if anything is lost thereby, well, too bad.

And that would be a consistent point of view, certainly. But not one to which I subscribe; nor do I think that I ever will.


  1. To whatever extent a reader believes that I’m a basically reasonable person, anyway. Ironically, a reader with a low opinion of me should find my comment less insulting to Duncan. Duncan himself, one might imagine, would not find it insulting at all. But of course that’s not how people work, and there’s no point in deluding ourselves otherwise… ↩︎

Replies from: benwr
comment by benwr · 2023-04-15T03:48:08.683Z · LW(p) · GW(p)

For what it's worth, I don't think that one should never say insulting things. I think that people should avoid saying insulting things in certain contexts, and that LessWrong comments are one such context.

I find it hard to square your claim that insultingness was not the comment's purpose with the claim that it cannot be rewritten to elide the insult.

An insult is not simply a statement with a meaning that is unflattering to its target - it involves using words in a way that aggressively emphasizes the unflatteringness and suggests, to some extent, a call to non-belief-based action on the part of the reader.

If I write a comment entirely in bold, in some sense I cannot un-bold it without changing its effect on the reader. But I think it would be pretty frustrating to most people if I then claimed that I could not un-bold it without changing its meaning.

Replies from: JacobKopczynski, SaidAchmiz
comment by Czynski (JacobKopczynski) · 2023-04-16T03:12:56.040Z · LW(p) · GW(p)

You still haven't actually attempted the challenge Said laid out.

Replies from: benwr
comment by benwr · 2023-04-16T04:59:41.037Z · LW(p) · GW(p)

I'm not sure what you mean - as far as I can tell, I'm the one who suggested trying to rephrase the insulting comment, and in my world Said roughly agreed with me about its infeasibility in his response, since it's not going to be possible for me to prove either point: Any rephrasing I give will elicit objections on both semantics-relative-to-Said and Said-generatability grounds, and readers who believe Said will go on believing him, while readers who disbelieve will go on disbelieving.

Replies from: JacobKopczynski
comment by Czynski (JacobKopczynski) · 2023-04-18T01:20:05.796Z · LW(p) · GW(p)

You haven't even given an attempt at rephrasing.

Replies from: benwr
comment by benwr · 2023-04-18T01:55:12.610Z · LW(p) · GW(p)

Nor should I, unless I believe that someone somewhere might honestly reconsider their position based on such an attempt. So far my guess is that you're not saying that you expect to honestly reconsider your position, and Said certainly isn't. If that's wrong then let me know! I don't make a habit of starting doomed projects.

Replies from: Vladimir_Nesov, JacobKopczynski
comment by Vladimir_Nesov · 2023-04-22T20:39:23.050Z · LW(p) · GW(p)

Nor should I, unless I believe that someone somewhere might honestly reconsider their position based on such an attempt.

I think for the purposes of promoting clarity this is a bad rule of thumb. The decision to explain should be more guided by effort/hedonicity and availability of other explanations of the same thing that are already there, not by strategically withholding things based on predictions of how others would treat an explanation. (So for example "I don't feel like it" seems like an excellent reason not to do this, and doesn't need to be voiced to be equally valid.)

Replies from: benwr
comment by benwr · 2023-04-22T22:25:27.903Z · LW(p) · GW(p)

I think I agree that this isn't a good explicit rule of thumb, and I somewhat regret how I put this.

But it's also true that a belief in someone's good-faith engagement (including an onlooker's), and in particular their openness to honest reconsideration, is an important factor in the motivational calculus, and for good reasons.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-22T22:45:40.775Z · LW(p) · GW(p)

openness to honest reconsideration, is an important factor in the motivational calculus

The structure of a conflict and motivation prompted by that structure functions in a symmetric way, with the same influence irrespective of whether the argument is right or wrong.

But the argument itself, once presented, is asymmetric, it's all else equal stronger when correct than when it's not. This is a reason to lean towards publishing things, perhaps even setting up weird mechanisms [LW(p) · GW(p)] like encouraging people to ignore criticism they dislike in order to make its publication more likely.

comment by Czynski (JacobKopczynski) · 2023-04-18T03:56:06.603Z · LW(p) · GW(p)

If you're not even willing to attempt the thing you say should be done, you have no business claiming to be arguing or negotiating in good faith.

You claimed this was low-effort. You then did not put in the effort to do it. This strongly implies that you don't even believe your own claim, in which case why should anyone else believe it?

It also tests your theory. If you can make the modification easily, then there is room for debate about whether Said could. If you can't, then your claim was wrong and Said obviously can't either.

Replies from: benwr
comment by benwr · 2023-04-18T04:00:52.100Z · LW(p) · GW(p)

I think it's pretty rough for me to engage with you here, because you seem to be consistently failing to read the things I've written. I did not say it was low-effort. I said that it was possible. Separately, you seem to think that I owe you something that I just definitely do not owe you. For the moment, I don't care whether you think I'm arguing in bad faith; at least I'm reading what you've written.

Replies from: JacobKopczynski, JacobKopczynski
comment by Czynski (JacobKopczynski) · 2023-04-18T04:13:36.787Z · LW(p) · GW(p)

Additionally, yes, you do owe me something. The same thing you owe to everyone else reading this comment section, Said included. An actual good-faith effort to probe at cruxes to the extent possible. You have shown absolutely no sign of that in this part of the conversation and precious little of it in the rest of it. Which means that your whole side of this conversation has been weak evidence that Said is correct and you are not.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-22T20:48:51.807Z · LW(p) · GW(p)

Which means that your whole side of this conversation has been weak evidence that Said is correct and you are not.

This might be true, but it doesn't follow that anyone owes anyone anything as a result. Doing something as a result might shift the evidence, but people don't have obligations to shift evidence.

Also, I think cultivating an environment where arguments against your own views can take root [LW · GW] is more of an obligation than arguing for them, and it's worth arguing against your own views when you see a clear argument pointing in that direction. But still, I wouldn't go so far as to call even that an actual obligation.

Replies from: JacobKopczynski
comment by Czynski (JacobKopczynski) · 2023-04-23T05:40:49.575Z · LW(p) · GW(p)

Owing people a good-faith effort to probe at cruxes is not a result of anything in this conversation. It is universal.

comment by Czynski (JacobKopczynski) · 2023-04-18T04:09:37.982Z · LW(p) · GW(p)

You've said very little in a great deal of words. And, as I said initially, you haven't even attempted this.

unless you can provide a rephrasing which (a) preserves all relevant meaning while not being insulting, and (b) could have been generated by me, your disbelief is not evidence of anything except the fact that some things seem easy until you discover that they’re impossible.

Forget requirement (b). You haven't even attempted fulfilling requirement (a). And for as long as you haven't, it is unarguably true that your disbelief is not evidence for any of your claims or beliefs.

This is the meaning of "put up or shut up". If you want to be taken seriously, act seriously.

comment by Said Achmiz (SaidAchmiz) · 2023-04-15T03:54:32.173Z · LW(p) · GW(p)

I think that people should avoid saying insulting things in certain contexts, and that LessWrong comments are one such context.

I more or less agree with this; I think that posting and commenting on Less Wrong is definitely a place to try to avoid saying anything insulting.

But not to try infinitely hard. Sometimes, there is no avoiding insult. If you remove all the insult that isn’t core to what you’re saying, and if what you’re saying is appropriate, relevant, etc., and there’s still insult left over—I do not think that it’s a good general policy to avoid saying the thing, just because it’s insulting.

An insult is not simply a statement with a meaning that is unflattering to its target—it involves using words in a way that aggressively emphasizes the unflatteringness and suggests, to some extent, a call to non-belief-based action on the part of the reader.

By that measure, my comment does not qualify as an insult. (And indeed, as it happens, I wouldn’t call it “an insult”; but “insulting” is slightly different in connotation, I think. Either way, I don’t think that my comment may fairly be said to have these qualities which you list. Certainly there’s no “call to non-belief-based action”…!)

If I write a comment entirely in bold, in some sense I cannot un-bold it without changing its effect on the reader. But I think it would be pretty frustrating to most people if I then claimed that I could not un-bold it without changing its meaning.

True, of course… but also, so thoroughly dis-analogous to the actual thing that we’re discussing that it mostly seems to me to be a non sequitur.

Replies from: benwr
comment by benwr · 2023-04-15T04:14:44.950Z · LW(p) · GW(p)

By that measure, my comment does not qualify as an insult. (And indeed, as it happens, I wouldn’t call it “an insult”; but “insulting” is slightly different in connotation, I think. Either way, I don’t think that my comment may fairly be said to have these qualities which you list.

I think I disagree that your comment does not have these qualities in some measure, and they are roughly what I'm objecting to when I ask that people not be insulting. I don't think I want you to never say anything with an unflattering implication, though I do think this is usually best avoided as well. I'm hopeful that this is a crux, as it might explain some of the other conversation I've seen about the extent to which you can predict people's perception of rudeness.

There are of course more insulting ways you could have conveyed the same meaning. But there are also less insulting ways (when considering the extent to which the comment emphasizes the unflatteringness and the call to action that I'm suggesting readers will infer).
 

Certainly there’s no “call to non-belief-based action”…!)

I believe that none was intended, but I also expect that people (mostly subconsciously!) interpret (a very small) one from the particular choice of words and phrasing. Where the action is something like "you should scorn this person", and not just "this person has unflattering quality X". The latter does not imply the former.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T04:24:07.024Z · LW(p) · GW(p)

I think that, at this point, we’re talking about nuances so subtle, distinctions so fragile (in that they only rarely survive even minor changes of context, etc.), that it’s basically impossible to predict how they will affect any particular person’s response to any particular comment in any particular situation.

To put it another way, the variation (between people, between situations, etc.) in how any particular bit of wording will be perceived, is much greater than the difference made by the changes in wording that you seem to be talking about. So the effects of any attempt to apply the principles you suggest is going to be indistinguishable from noise.

And that means that any effort spent on doing so will be wasted.

comment by Jasnah Kholin (Jasnah_Kholin) · 2023-04-18T15:46:38.006Z · LW(p) · GW(p)

I actually DO believe you can't write this in a non-insulting way. I find it the result of not prioritizing developing and practicing those skills in general.

While I do judge you for this, I judge you for it once, on the meta-level, instead of judging each instance separately, as I find this behavior orderly and predictable.

 

Replies from: JacobKopczynski
comment by Czynski (JacobKopczynski) · 2023-04-23T05:47:07.734Z · LW(p) · GW(p)

If it's really a skill issue, why hasn't anyone done that? If it can be written in a non-insulting way, demonstrate! I submit that you cannot.

Replies from: ambigram, Jasnah_Kholin
comment by ambigram · 2023-04-23T08:44:19.752Z · LW(p) · GW(p)

I'm curious, what do you think of these options?

Original: "I find your stated reason bizarre to the point where I can’t form any coherent model of your thinking here."

New version 1: "I can't form any coherent model of your thinking here." 

New version 2: "I don't understand your stated reason at all." 

New version 3: Omit that sentence. 

These shift the sentence from a judgment on Duncan's reasoning to a sharing of Said's own experience, which (for me, at least) removes the unnecessary/escalatory part of the insult.

Replies from: philh, JacobKopczynski
comment by philh · 2023-04-23T23:13:27.807Z · LW(p) · GW(p)

New version 4: "(I find your stated reason bizarre to the point where I can’t form any coherent model of your thinking here. Like, this is a statement about me, not about your thinking, but that's where I am. I kinda wish there was a way to say this non-insultingly, but I don't know such a way.)"

Replies from: JacobKopczynski
comment by Czynski (JacobKopczynski) · 2023-05-01T02:22:03.625Z · LW(p) · GW(p)

That's still shifting to a claim about social reality and therefore not the same thing.

Replies from: philh
comment by philh · 2023-05-01T07:59:06.131Z · LW(p) · GW(p)

Experiment:

It seems to me that Czynski is just plain wrong here. But I have no expectation of changing his mind, no expectation that engaging with him will be fun or enlightening for me, and also I think he's wrong in ways that not many bystanders will be confused about if they even see this.

If someone other than Czynski or Said would be interested in a reply to the above comment, feel free to say so and I'll provide one.

Replies from: JacobKopczynski
comment by Czynski (JacobKopczynski) · 2023-05-02T21:18:31.917Z · LW(p) · GW(p)

You really have no intellectual integrity at all, do you?

comment by Czynski (JacobKopczynski) · 2023-05-01T02:21:27.112Z · LW(p) · GW(p)

Version 1 is probably not the same content, since it is mostly about the speaker, and in any case preserves most of the insultingness. Version 2 is making it entirely about the speaker and therefore definitely different, losing the important content. Version 3 is very obviously definitely not the same content and I don't know why you bothered including it. (Best guess: you were following the guideline of naming 3 things rather than 1. If so, there is a usual lesson when that guideline fails.)

Shifting to sharing the speaker's experience is materially different. The content of the statement was a truth claim - making it a claim about an individual's experience changes it from being about reality to being about social reality, which is not the same thing. It is important to be able to make truth claims directly about other people's statements, because truth claims are the building blocks of real models of the world.

Replies from: ambigram
comment by ambigram · 2023-05-01T11:32:52.593Z · LW(p) · GW(p)

Hmm interesting. I agree that there is a difference between a claim about an individual's experience, and a claim about reality. The former is about a perception of reality, whereas the latter is about reality itself. In that case, I see why you would object to the paraphrasing—it changes the original statement into a weaker claim. 

I also agree that it is important to be able to make claims about reality, including other people's statements. After all, people's statements are also part of our reality, so we need to be able to discuss and reason about them.

I suppose what I disagree with, then, is that the original statement is valid as a claim about reality. It seems to me that statements are generally/by default claims about our individual perceptions of reality. (e.g. "He's very tall.") A claim becomes a statement about reality only when linked (implicitly or explicitly) to something concrete. (e.g. "He's in the 90th percentile in height for American adult males." or "He's taller than Daddy." or "He's taller than the typical gymnast I've trained for competitions.")

To say a stated reason is "bizarre" is a value judgment, and therefore cannot be considered a claim about reality. This is because there is no way to measure its truth value. If bizarre means "strange/unusual", then what exactly is "normal/usual"? How Less Wrong posters who upvoted Said's comment would think? How people with more than 1000 karma on Less Wrong would think? There is no meaning behind the word "bizarre" except as an indicator of the writer's perspective (i.e. what the claim is trying to say is "The stated reason is bizarre to Said"). 

I suppose this also explains why such a statement would seem insulting to people who are more Duncan-like. (I acknowledge that you find the paraphrase as insulting as the original. However, since the purpose of discussion is to find a way so people who are Duncan-like and people who are Said-like can communicate and work together, I believe the key concern should be whether or not someone who is Duncan-like would feel less insulted by the paraphrase. After all, people who are Duncan-like feel insulted by different things than people who are Said-like.)

For people who are Duncan-like, I expect the insult comes about because it presents a subjective (social reality) statement in the form of an objective (reality) statement. Said is making a claim about his own perspective, but he is presenting it as if it is objective truth, which can feel like he is invalidating all other possible perspectives. I would guess that people who are more Said-like are less sensitive, either because they think it is already obvious that Said is just making a claim from his own perspective or because they are less susceptible to influence from other people's claims (e.g. I don't care if the entire world tells me I am wrong, I don't ever waver because I know that I am right.)


Version 3 is very obviously definitely not the same content and I don't know why you bothered including it.

I included Version 3 because after coming up with Version 2, I noticed it was very similar to the earlier sentence ("I definitely no longer understand."), so I thought another valid example would be simply omitting the sentence. It seemed appropriate to me because part of being polite is learning to keep your thoughts to yourself when they do not contribute anything useful to the conversation.

comment by Jasnah Kholin (Jasnah_Kholin) · 2023-04-23T06:32:33.632Z · LW(p) · GW(p)

Somewhere (I can't find it now) someone else wrote that if he did that, Said could always say it's not exactly what he means.

In this case, I find the comment itself not very insulting - the insult is in the general absence of goodwill between Said and Duncan, and in the refusal to do interpretive labor. So any comment of the form "my model of you was <model> and now I'm just confused" could have worked.

My model of Duncan avoided posting it here because of the general problems with LW, but I wasn't surprised that it was a specific problem. I have no idea what Said's model of Duncan was. But I will try, with the caveat that the Said's-model-of-Duncan suggested here is almost certainly not the true one:

I thought that you avoided putting it on LW because there would be strong and wrong pushback here against the concept of imaginary injury; that seemed coherent with the crux of the post. Now that I've learned the truth, I'm simply confused. In my model, what you want to avoid is exactly the imaginary injury described in the post, and I can't form a coherent model of you.

I suspect Said would have said I don't pass his Ideological Turing Test on that, or continued to say it's not exact. I submit that if I cannot, what's failing is not writing non-insultingly, but passing his Ideological Turing Test.

comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T01:42:39.022Z · LW(p) · GW(p)

There are some things which cannot be expressed in a non-insulting manner (unless we suppose that the target is such a saint that no criticism can affect their ego; but who among us can pretend to that?).

I did not intend insult, in the sense that insult wasn’t my goal. (I never intend insult, as a rule. What few exceptions exist, concern no one involved in this discussion.)

But, of course, I recognize that my comment is insulting. That is not its purpose, and if I could write it non-insultingly, I would do so. But I cannot.

I'm not quite clear: are you saying that it's literally impossible to express certain non-insulting meanings in a non-insulting way? Or that you personally are not capable of doing so? Or that you potentially could, but you're not motivated to figure out how?

Edit - also, do you mean that it's impossible to even reduce the degree to which it sounds insulting? Or are you just saying that such comments are always going to sound at least a tiny bit insulting?

The choice was between writing something that was necessary for the purpose of fulfilling appropriate and reasonable conversational goals, but could be written only in such a way that anyone but a saint would be insulted by it—or writing nothing.

I chose the former because I judged it to be the correct choice: writing nothing, simply in order to avoid insult, would have been worse than writing the comment which I wrote.

(This explanation is also quite likely to apply to any past or future comments I write which seem to be insulting in similar fashion.)

This is helpful to me understanding you better. Thank you.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T02:13:53.058Z · LW(p) · GW(p)

I’m not quite clear: are you saying that it’s literally impossible to express certain non-insulting meanings in a non-insulting way? Or that you personally are not capable of doing so? Or that you potentially could, but you’re not motivated to figure out how?

I… think that the concept of “non-insulting meaning” is fundamentally a confused one in this context.

Edit—also, do you mean that it’s impossible to even reduce the degree to which it sounds insulting? Or are you just saying that such comments are always going to sound at least a tiny bit insulting?

Reduce the degree? Well, it seems like it should be possible, in principle, in at least some cases. (The logic being that it seems like it should be quite possible to increase the degree of insultingness without changing the substance, and if that’s the case, then one would have to claim that I always succeed at selecting exactly the least insulting possible version—without changes in substance—of any comment; and that seems like it’s probably unlikely. But there’s a lot of “seems” in that reasoning, so I wouldn’t place very much confidence in it. And I can also tell a comparably plausible story that leads to the opposite conclusion, reducing my confidence even further.)

But I am not sure what consequence that apparent in-principle truth has on anything.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T02:34:11.110Z · LW(p) · GW(p)

Here's a potential alternative wording of your previous statement.

Original: (I find your stated reason bizarre to the point where I can’t form any coherent model of your thinking here.)

New version: I am very confused by your stated reason, and I'm genuinely having trouble seeing things from your point of view. But I would genuinely like to. Here's a version that makes a little more sense to me [give it your best shot]... but here's where that breaks down [explain]. What am I missing?

I claim with very high confidence that this new version is much less insulting (or is not insulting at all). It took me all of 15 seconds to come up with, and I claim that it either conveys the same thing as your original comment (plus added extras), or that the difference is negligible and could be overcome with an ongoing and collegial dialog of a kind that the original, insulting version makes impossible. If you have an explanation for what of value is lost in translation here, I'm listening.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T03:17:55.488Z · LW(p) · GW(p)

It’s certainly possible to write more words and thereby to obfuscate what you’re saying and/or alter your meaning in the direction of vagueness.

And you can, certainly, simply say additional things—things not contained in the original message, and that aren’t simply transformations of the meaning, but genuinely new content—that might (you may hope) “soften the blow”, as it were.

But all of that aside, what I’d actually like to note, in your comment, is this part:

It took me all of 15 seconds to come up with

First of all, while it may be literally true that coming up with that specific wording, with the bracketed parts un-filled-in, took you 15 seconds (if you say it, I believe it), the connotation that transmuting a comment from the “original” to the (fully qualified, as it were) “new version” takes somewhere on the order of 15 seconds (give or take a couple of factors of two, perhaps) is not believable.

Of course you didn’t claim that—it’s a connotation, not a denotation. But do you think it’s true? I don’t. I don’t think that it’s true even for you.

(For one thing, simply typing out the “fully qualified” version—with the “best shot” at explanation outlined, and the pitfalls noted, and the caveats properly caveated—is going to take a good bit longer. Type at 60 WPM? Then you’ve got the average adult beat, and qualify as a “professional typist”; but even so just the second paragraph of your comment would take you most of a minute to type out. Fill out those brackets, and how many words are you adding? 100? 300? More?)

But, perhaps more importantly, that stuff requires not just more typing, but much more thinking (and reading). What is worse, it’s thinking of a sort that is very, very likely to be a complete waste of time, because it turns out to be completely wrong [LW(p) · GW(p)].

For example, consider this attempt [LW(p) · GW(p)], by me, to describe in detail Duncan’s approach to banning people from his posts. It seemed—and still seems—to me to be an accurate characterization; and certainly it was written in such a way that I quite expected Duncan to assent to it. But instead the response was, more or less, “nah” [LW(p) · GW(p)]. Now, either Duncan is lying there, and my characterization was correct but he doesn’t want to admit it; or, my characterization was wrong. In the former case I’ve mostly wasted my time; in the latter case I’ve entirely wasted my time. And this sort of outcome is ubiquitous, in my experience. Trying to guess what people are thinking, when you’re unsure or confused, is pointless. Guessing incorrectly tends to annoy people, so it doesn’t help to build bridges or maintain civility. The attempt wastes the guesser’s time and energy. It’s pretty much all downside, no upside.

If you don’t know, just say that you don’t know.

And the rest is transparent boilerplate.

Replies from: AllAmericanBreakfast, Vladimir_Nesov
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T03:54:32.524Z · LW(p) · GW(p)

It’s certainly possible to write more words and thereby to obfuscate what you’re saying and/or alter your meaning in the direction of vagueness.

And you can, certainly, simply say additional things—things not contained in the original message, and that aren’t simply transformations of the meaning, but genuinely new content—that might (you may hope) “soften the blow”, as it were.

This is the part I think is important in your objection - I agree with you that expanding the bracketed part would take more than 15 seconds. You're claiming somewhere on the implicit-explicit spectrum that something substantial is lost in the translation from the original insulting version by you to the new non-insulting version by me.

I just straightforwardly disagree with that, and I challenge you to articulate what exactly you think is lost and why it matters.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T04:03:12.742Z · LW(p) · GW(p)

I confess that I am not sure what you’re asking.

As far as saying additional things goes—well, uh, the additional things are the additional things. The original version doesn’t contain any guessing of meaning or any kind of thing like that. That’s strictly new.

As I said, the rest is transparent boilerplate. It doesn’t much obfuscate anything, but nor does it improve anything. It’s just more words for more words’ sake.

I don’t think anything substantive is lost in terms of meaning; the losses are (a) the time and effort on the part of the comment-writer, (b) annoyance (or worse) on the part of the comment target (due to the inevitably-incorrect guessing), (c) annoyance (or worse) on the part of the comment target (due to the transparent fluff that pretends to hide a fundamentally insulting meaning).

The only way for someone not to be insulted by a comment that says something like this is just to not be insulted by what it says. (Take my word for this—I’ve had comments along these lines directed at me many, many times, in many places! I mostly don’t find them insulting—and it’s not because people who say such things couch them in fluff. They do no such thing.)

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T04:21:34.999Z · LW(p) · GW(p)

I don’t think anything substantive is lost in terms of meaning; the losses are (a) the time and effort on the part of the comment-writer, (b) annoyance (or worse) on the part of the comment target (due to the inevitably-incorrect guessing), (c) annoyance (or worse) on the part of the comment target (due to the transparent fluff that pretends to hide a fundamentally insulting meaning).

 

Ah, I see. So the main thing I'm understanding here is that the meaning you were trying to convey to Duncan is understood, by you, as a fundamentally insulting one. You could "soften" it by the type of rewording I proposed. But this is not a case where you mean to say something non-insulting, and it comes out sounding insulting by accident. Instead, you mean to say something insulting, and so you're just saying it, understanding that the other person will probably, very naturally, feel insulted.

An example of saying something fundamentally insulting is to tell somebody that you think they are stupid or ugly. You are making a statement of this kind. Is that correct?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T04:25:52.318Z · LW(p) · GW(p)

An example of saying something fundamentally insulting is to tell somebody that you think they are stupid or ugly. You are making a statement of this kind. Is that correct?

No, I don’t think so…

But this comment of yours baffles me. Did we not already cover this ground [LW(p) · GW(p)]?

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T04:30:45.875Z · LW(p) · GW(p)

Then what did you mean by this:

I don’t think anything substantive is lost in terms of meaning; the losses are... (c) annoyance (or worse) on the part of the comment target (due to the transparent fluff that pretends to hide a fundamentally insulting meaning).

My understanding of this statement was that you are asserting that the core meaning of the original quote by you, in both your original version and my rewrite, was a fundamentally insulting one. Are you saying it was a different kind of fundamental insult from calling somebody stupid or ugly? Or are you now saying it was not an insult?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T04:44:37.286Z · LW(p) · GW(p)

Well, firstly—as I say here [LW(p) · GW(p)], I think that there’s a subtle difference between “insulting” and “an insult”. But that’s perhaps not the key point.

That aside, it really seems like your question is answered, very explicitly, in this earlier comment of mine [LW(p) · GW(p)]. But let’s try again:

Is my comment insulting? Yes, as I said earlier, I think that it is (or at least, it would not be unreasonable for someone to perceive it thus).

(Should it be insulting? Who knows; it’s complicated. Is it gratuitously insulting, or insulting in a way that is extraneous to its propositional meaning? No, I don’t think so. Would all / most people perceive it as insulting if they were its target? No / probably, respectively. Is it possible not to be insulted by it? Yes, it’s possible; as I said earlier, I’ve had this sort of thing said to me, many times, and I have generally failed to be insulted by it. Is it possible for Duncan, specifically, to not be insulted by that comment as written by me, specifically? I don’t know; probably not. Is that, specifically, un-virtuous of Duncan? No, probably not.)

Is my comment thereby similar to other things which are also insulting, in that it shares with those other things the quality of being insulting? By definition, yes.

Is it insulting in the same way as is calling someone stupid, or calling someone ugly? No, all three of these are different things, which can all be said to be insulting in some way, but not in the same way.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T04:56:35.632Z · LW(p) · GW(p)

OK, this is helpful.

So it sounds like you perceive your comment as conveying information - a fact or a sober judgment of yours - that will, in its substance, tend to trigger a feeling of being insulted in the other person, possibly because they are sensitive to that fact or judgment being called to their attention.

But it is not primarily intended by you to provoke that feeling of being insulted. You might prefer it if the other person did not experience the feeling of being insulted (or you might simply not care) - your aim is to convey the information, irrespective of whether or not it makes the other person feel insulted.

Is that correct?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T05:20:44.021Z · LW(p) · GW(p)

Sounds about right.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T05:26:05.491Z · LW(p) · GW(p)

Now that we've established this, what is your goal when you make insulting comments? (Note: I'll refer to your comments as "insulting comments," defined in the way I described in my previous comment). If you subscribe to a utilitarian framework, how does the cost/benefit analysis work out? If you are a virtue ethicist, what virtue are you practicing? If you are a deontologist, what maxim are you using? If none of these characterizes the normative beliefs you're acting under, then please articulate what motivates you to make them in whatever manner makes sense to you. Making statements, however true, that you expect to make the other person feel insulted seems like a substantial drawback that needs some rationale.

Replies from: JacobKopczynski, SaidAchmiz
comment by Czynski (JacobKopczynski) · 2023-04-16T03:25:17.378Z · LW(p) · GW(p)

If you care more about not making social attacks than telling the truth, you will get an environment which does not tell the truth when it might be socially inconvenient. And the truth is almost always socially inconvenient to someone.

So if you are a rationalist, i.e. someone who strongly cares about truth-seeking, this is highly undesirable.

Most people are not capable of executing on this obvious truth even when they try hard; the instinct to socially-smooth is too strong. The people who are capable of executing on it are, generally, big-D Disagreeable, and therefore also usually little-d disagreeable and often unpleasant. (I count myself as all three, TBC. I'd guess Said would as well, but won't put words in his mouth.)

Replies from: Viliam
comment by Viliam · 2023-04-16T18:39:27.577Z · LW(p) · GW(p)

Yes, caring too much about not offending people means that people do not call out bullshit.

However, are rude environments more rational? Or do they just have different ways of optimizing for something other than truth? -- Just guessing here, but maybe disagreeable people derive too much pleasure from disagreeing with someone, or offending someone, so their debates skew that way. (How many "harsh truths" are not true at all; they are just popular because they offend someone?)

(When I tried to think about examples, I thought I found one: military. No one cares about the feelings of their subordinates, and yet things get done. However, people in the military care about not offending their superiors. So, probably not a convincing example for either side of the argument.)

Replies from: JacobKopczynski, M. Y. Zuo
comment by Czynski (JacobKopczynski) · 2023-04-18T01:44:17.283Z · LW(p) · GW(p)

I'm sure there is an amount of rudeness which generates more optimization-away-from-truth than it prevents. I'm less sure that this is a level of rudeness achievable in actual human societies. And for whether LW could attain that level of rudeness within five years even if it started pushing for rudeness as normative immediately and never touched the brakes - well, I'm pretty sure it couldn't. You'd need to replace most of the mod team (stereotypically, with New Yorkers, which TBF seems both feasible and plausibly effective) to get that to actually stick, probably, and it'd still be a large ship turning slowly.

A monoculture is generally bad, so having a diversity of permitted conduct is probably a good idea regardless. That's extremely hard to measure, so as a proxy, ensuring there are people representing both extremes who are prolific and part of most important conversations will do well enough.

Replies from: Viliam
comment by Viliam · 2023-04-18T08:57:38.440Z · LW(p) · GW(p)

I am probably just saying the obvious here, but a rude environment is not only one where people say true things rudely, but also where people say false things rudely.

So when we imagine the interactions that happen there, it is not just "someone says the truth, ignoring the social consequences" which many people would approve, but also "someone tries to explain something complicated, and people not only respond by misunderstanding and making fallacies, but they are also assholes about it" where many people would be tempted to say 'fuck this' and walk away. So the website would gravitate towards a monoculture anyway.

(I wanted to give TheMotte as an example of a place that is further in that direction, where the quality seems to be lower... but I just noticed that the place is effectively dead.)

Replies from: Vladimir_Nesov, philh, JacobKopczynski, localdeity
comment by Vladimir_Nesov · 2023-04-22T21:11:54.045Z · LW(p) · GW(p)

a rude environment is not only one where people say true things rudely, but also where people say false things rudely

The concern is with requiring the kind of politeness that induces substantive self-censorship. This reduces efficiency of communicating dissenting observations, sometimes drastically. This favors beliefs/arguments that fit the reigning vibe.

The problems with (tolerating) rudeness don't seem as asymmetric, it's a problem across the board, as you say. It's a price to consider for getting rid of the asymmetry of over-the-top substantive-self-censorship-inducing politeness.

comment by philh · 2023-04-18T13:23:51.256Z · LW(p) · GW(p)

The Motte has its own site now. (I agree the quality is lower than LW, or at least it was several months ago and that's part of why I stopped reading. Though idk if I'd attribute that to rudeness.)

comment by Czynski (JacobKopczynski) · 2023-04-19T05:08:26.578Z · LW(p) · GW(p)

I do not think that is the usual result.

comment by M. Y. Zuo · 2023-04-16T19:09:50.275Z · LW(p) · GW(p)

(When I tried to think about examples, I thought I found one: military. No one cares about the feelings of their subordinates, and yet things get done. However, people in the military care about not offending their superiors. So, probably not a convincing example for either side of the argument.)

There's another example: frats.

Even though the older frat members harass their subordinates via hazing rituals and so on, the new members wouldn't stick around if they genuinely thought the older members were disagreeable people out to get them. 

comment by Said Achmiz (SaidAchmiz) · 2023-04-15T05:41:18.325Z · LW(p) · GW(p)

Now that we’ve established this, what is your goal when you make [comments that will, in [their] substance, tend to trigger a feeling of being insulted in the other person, possibly because they are sensitive to that fact or judgment being called to their attention … [but that are] not primarily intended by you to provoke that feeling of being insulted]?

I write comments for many different reasons. (See this [LW · GW], this [LW(p) · GW(p)], etc.) Whether a comment happens to be (or be likely to be perceived as) “insulting” or not generally doesn’t change those reasons.

Making statements, however true, that you expect to make the other person feel insulted seems like a substantial drawback that needs some rationale.

I do not agree.

Please see this comment [LW(p) · GW(p)] and this comment [LW(p) · GW(p)] for more details on my approach to such matters.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T05:52:52.236Z · LW(p) · GW(p)

OK, I have read the comments you linked. My understanding is this:

  • You understand that you have a reputation for making comments perceived as social attacks, although you don't intend them as such.
  • You don't care whether or not the other person feels insulted by what you have to say. It's just not a moral consideration for your commenting behavior.
  • Your aesthetic is that you prefer to accept that what you have to say has an insulting meaning, and to just say it clearly and succinctly.

Do you care about the manner in which other people talk to you? For example, if somebody wished to say something with an insulting meaning to you, would you prefer them to say it to you in the same way you say such things to others?

(Incidentally, I don't know who's been going through our comment thread downvoting you, but it wasn't me. I'm saying this because I now see myself being downvoted, and I suspect it may be retaliation from you, but I am not sure about that).

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T06:22:19.183Z · LW(p) · GW(p)

You understand that you have a reputation for making comments perceived as social attacks, although you don’t intend them as such.

I have (it would seem) a reputation for making certain sorts of comments, which are of course not intended as “attacks” of any sort (social, personal, etc.), but which are sometimes perceived as such—and which perception, in my view, reflects quite poorly on those who thus perceive said comments.

You don’t care whether or not the other person feels insulted by what you have to say. It’s just not a moral consideration for your commenting behavior.

Certainly I would prefer that things were otherwise. (Isn’t this often the case, for all of us?) But this cannot be a reason to avoid making such comments; to do so would be even more blameworthy, morally speaking, than is the habit on the part of certain interlocutors to take those comments as attacks in the first place. (See also this old comment thread [LW(p) · GW(p)], which deals with the general questions of whether, and how, to alter one’s behavior in response to purported offense experienced by some person.)

Your aesthetic is that you prefer to accept that what you have to say has an insulting meaning, and to just say it clearly and succinctly.

I don’t know if “aesthetic” is the right term here. Perhaps you mean something by it other than what I understand the term to mean.

In any case, indeed, clarity and succinctness are the key considerations here—out of respect for both my interlocutors and for any readers, who surely deserve not to have their time wasted by having to read through nonsense and fluff.

Do you care about the manner in which other people talk to you? For example, if somebody wished to say something with an insulting meaning to you, would you prefer them to say it to you in the same way you say such things to others?

I would prefer that people say things to me in whatever way is most appropriate and effective, given the circumstances. Generally it is better to be more concise, more clear, more comprehensive, more unambiguous. (Some of those goals conflict, you may notice! Such is life; we must navigate such trade-offs.)

I have other preferences as well, though they are less important. I dislike vulgarity, for example, and name-calling. Avoiding these things is, I think, no more than basic courtesy. I do not employ them myself, and certainly prefer not to hear them addressed to me, or even in my presence. (This has never presented a problem, in either direction, on Less Wrong, and I don’t expect this to change.) Of course one can conceive of cases when these preferences must be violated in order to serve the goals of conciseness, clarity, etc.; in such a case I’d grin and bear it, I suppose. (But I can’t recall encountering such.)

Now that I’ve answered your questions, here’s one of my own:

What, exactly, is the point of this line of questioning? We seem to be going very deep down this rabbit hole, litigating these baroque details of connotation and perception… and it seems to me that nothing of any consequence hinges on any of this. What makes this tangent even slightly worth either my time or yours?

Replies from: Duncan_Sabien, AllAmericanBreakfast
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T07:30:00.241Z · LW(p) · GW(p)

I have (it would seem) a reputation for making certain sorts of comments, which are of course not intended as “attacks” of any sort (social, personal, etc.), but which are sometimes perceived as such—and which perception, in my view, reflects quite poorly on those who thus perceive said comments.

Just a small note that "Said interpreting someone as [interpreting Said's comment as an attack]" is, in my own personal experience, not particularly correlated with [that person in fact having interpreted Said's comment as an attack].

Said has, in the past, seemed to have perceived me as perceiving him as attacking me, when in fact I was objecting to his comments for other reasons, and did not perceive them as an attack, and did not describe them as attacks, either.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T07:44:04.928Z · LW(p) · GW(p)

The comment you quoted was not, in fact, about you. It was about this [LW(p) · GW(p)] (which you can see if you read the thread in which you’re commenting).

Note that in the linked discussion thread, it is not I, but someone else, who claims that certain of my comments are perceived as attacks.

In short, your comment is a non sequitur in this context.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T07:54:38.729Z · LW(p) · GW(p)

No, it's relevant context, especially given that you're saying in the above ~[and I judge people for it].

(To be clear, I didn't think that the comment I quoted was about me. Added a small edit to make that clearer.)

comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T07:07:24.996Z · LW(p) · GW(p)

What, exactly, is the point of this line of questioning? We seem to be going very deep down this rabbit hole, litigating these baroque details of connotation and perception… and it seems to me that nothing of any consequence hinges on any of this. What makes this tangent even slightly worth either my time or yours?

I wrote about five paragraphs in response to this, which I am fine with sharing with you on two conditions. First, because my honest answer contains quite a bit of potentially insulting commentary toward you (expressed in the same matter of fact tone I've tried to adopt throughout our interaction here), I want your explicit approval to share it. I am open to not sharing it, DMing it to you, or posting it here.

Secondly, if I do share it, I want you to precommit not to respond with insulting comments directed at me.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T07:22:31.345Z · LW(p) · GW(p)

Secondly, if I do share it, I want you to precommit not to respond with insulting comments directed at me.

This seems like a very strange, and strangely unfair, condition. I can’t make much sense of it unless I read “insulting” as “deliberately insulting”, or “intentionally insulting”, or something like it. (But surely you don’t mean it that way, given the conversational context…?)

Could you explain the point of this? I find that I’m increasingly perplexed by just what the heck is going on in this conversation, and this latest comment has made me more confused than ever…

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T16:28:45.900Z · LW(p) · GW(p)

Yes, it's definitely an unfair condition, and I knew that when I wrote it. Nevertheless - that is my condition.

If you would prefer a vague answer with no preconditions, I am satisfying my curiosity about somebody who thinks very differently about commenting norms than I do.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T17:02:39.427Z · LW(p) · GW(p)

Alright, thanks.

comment by Said Achmiz (SaidAchmiz) · 2023-04-15T06:25:28.460Z · LW(p) · GW(p)

(Incidentally, I don’t know who’s been going through our comment thread downvoting you, but it wasn’t me. I’m saying this because I now see myself being downvoted, and I suspect it may be retaliation from you, but I am not sure about that).

I did (weak-)downvote one comment of yours in this comment section, but only one. If you’re seeing multiple comments downvoted, then those downvotes aren’t from me. (Of course I don’t know how I’d prove that… but for whatever my word’s worth, you have it.)

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T06:29:07.287Z · LW(p) · GW(p)

I believe you, and it doesn't matter to me. I just didn't want you to perceive me incorrectly as downvoting you.

comment by Vladimir_Nesov · 2023-04-16T03:48:48.836Z · LW(p) · GW(p)

Guessing incorrectly tends to annoy people, so it doesn’t help to build bridges or maintain civility. The attempt wastes the guesser’s time and energy. It’s pretty much all downside, no upside.

If you don’t know, just say that you don’t know.

I like the norm of discussing a hypothetical interpretation you find interesting/relevant, without a need to discuss (let alone justify) its relation to the original statement or God forbid intended meaning. If someone finds it interesting to move the hypothetical in another direction (perhaps towards the original statement, or even intended meaning), that is a move of the same kind, not a move of a different and privileged kind.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-16T03:52:37.316Z · LW(p) · GW(p)

I agree that this can often be a reasonable and interesting thing to do.

I would certainly not support any such thing becoming expected or mandatory. (Not that you implied such a thing—I just want to forestall the obvious bad extrapolation.)

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-16T06:32:32.998Z · LW(p) · GW(p)

I like the norm of discussing a hypothetical interpretation you find interesting/relevant, without a need to discuss (let alone justify) its relation to the original statement or God forbid intended meaning.

I would certainly not support any such thing becoming expected or mandatory.

Do you mean that you don't support the norm of it not being expected for hypothetical interpretations of statements not to need to justify themselves as being related to those statements? In other words, that (1) you endorse the need to justify discussion of hypothetical interpretations of statements by showing those interpretations to be related to the statements they interpret, or something like that? Or (2) that you don't endorse endless tangents becoming the norm, forgetting about the original statement? The daisy chain is too long.

It's unclear how to shape the latter option with policy. For the former option, the issue is demand for particular proof [LW · GW]. Things can be interesting for whatever reason, doesn't have to be a standard kind of reason. Prohibiting arbitrary reasons is damaging to the results, in this case I think for no gain.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-16T06:47:10.013Z · LW(p) · GW(p)

Do you mean that … (1) you endorse the need to justify discussion of hypothetical interpretations of statements by showing those interpretations to be related to the statements they interpret, or something like that?

No, absolutely not.

Or (2) that you don’t endorse endless tangents becoming the norm, forgetting about the original statement?

Yeah.

My view is that first it’s important to get clear on what was meant by some claim or statement or what have you. Then we can discuss whatever. (If that “whatever” includes some hypothetical interpretation of the original (ambiguous) claim, which someone in the conversation found interesting—sure, why not.) Or, at the very least, it’s important to get that clarity regardless—the tangent can proceed in parallel, if it’s something the participants wish.

EDIT: More than anything, what I don’t endorse is a norm that says that someone asking “what did you mean by that word/phrase/sentence/etc.?” must provide some interpretation of their own, whether that be a guess at the OP’s meaning, or some hypothetical, or what have you. Just plain asking “what did you mean by that?” should be ok!

Things can be interesting for whatever reason, doesn’t have to be a standard kind of reason. Prohibiting arbitrary reasons is damaging to the results, in this case I think for no gain.

Totally agreed.

comment by Said Achmiz (SaidAchmiz) · 2023-04-14T23:05:19.030Z · LW(p) · GW(p)

(Expanding on this comment [LW(p) · GW(p)])

The key thing missing from your account of my views is that while I certainly think that “local validity checking” is important, I also—and, perhaps, more importantly—think that the interactions in question are not only fine, but good, in a “relational” sense.

So, for example, it’s not just that a comment that just says “What are some examples of this?” doesn’t, by itself, break any rules or norms, and is “locally valid”. It’s that it’s a positive contribution to the discussion, which is aimed at (a) helping a post author to get the greatest use out of his post and the process and experience of posting it, and (b) helping the commentariat get the greatest use out of the author’s post. (Of course, (b) is more important than (a)—but they are both important!)

Some points that follow from this, or depend on this:

First, such contributions should be socially rewarded to the degree that they are necessary. By “necessary”, here, I mean that if it is the case that some particular sort of criticism or some particular sort of question is good (i.e., it contributes substantially to how much use can be gotten out of a post), but usually nobody asks that sort of question or makes that sort of criticism, then anyone who does do that, should be seen as making not only a good but a very important contribution. (And it’s a bad sign when this sort of thing is common—it means that at least some sorts of important criticisms, or some sorts of important questions, are not asked nearly often enough!)

Meanwhile, if one asks a sort of question or makes a sort of criticism which is equally good but is usually or often made, such that it is fairly predictable and authors can, with decent probability, expect to get it, then such a question or criticism is still good and praiseworthy, but not individually as important (though of course still virtuous!).

In the limit, an author will know that if they don’t address something in their post, somebody will ask about it, or comment on it. (And note that it’s not always necessary, in such a case, to anticipate a criticism or question in your post, even if you expect it will be made! You can leave it to the comments, being ready to respond to it if it’s brought up—or proactively bringing it up yourself [LW(p) · GW(p)], filling the role of your own devil’s advocate.)

In other words

I mean, if 10 years from now, when you are doing something quick and dirty, you suddenly visualize that I am looking over your shoulders and say to yourself “Dijkstra would not have liked this”, well, that would be enough immortality for me.

And this is a good thing. If you posit some abstraction in your post, you should think “they’re gonna ask me for examples in the comments”. (It’s a bad sign, again, if what you actually think is “Said Achmiz is gonna ask me for examples in the comments”!) And this should make you think about whether you have examples; and what those examples demonstrate; or, if you don’t have any, what that means; etc.

And the same goes for many other sorts of questions one could ask, or criticisms one could make.

(Relatedly: I, too, want to “build up a context in which people can hold each other accountable”. But what exactly do you think that looks like?)

Second, it is no demerit to a post author, if one commenter asks a question, and another commenter answers it, without the OP’s involvement (or perhaps with merely a quick note saying “endorsed!”). Indeed it’s no demerit to an author, even, if questions are asked, or criticisms made, in the comments, to which the OP has no good answer, but which are answered satisfactorily by others, such that the end result is that knowledge and understanding are constructed by a collective effort that results in even the author of the post, himself, learning something new!

This, by the way, is related to the reasons why I find the “authors can ban people from their posts” thing so frustrating and so thoroughly counterproductive. If I write a comment under someone’s post, about someone’s post, certainly there’s an obvious sense in which it’s addressed to the author of the post—but it’s not just addressed to them! If I wanted to talk to someone one-on-one, I could send a private message… but unless I make a point of noting that I’m soliciting the OP’s response in particular (and even then, what’s to stop anyone else from answering anyway?), or ask for something that only the OP would know… comments / questions are best seen as “put to the whole table”, so to speak. Yes, if the post author has an answer they think is appropriate to provide, they can, and should, do that. But so can and should anyone else!

It’s no surprise that, as others have noted [LW(p) · GW(p)], the comments section of a post is, not infrequently, at least as useful as the post itself. And that is fine! It’s no indictment of a post’s author, when that turns out to be the case!

The upshot of this point and the previous one is that in (what I take to be) a healthy discussion environment, when someone writes a comment under your post that just says, for instance, “What are some examples of this?”, there is no good reason why that should contribute to any “relational” difficulties. It is the sort of thing that helps to make posts useful, not just to the commentariat as a whole but also to those posts’ authors; and the site is better if people regularly make such comments, ask such questions, pose such criticisms.

And, thus: third, if someone finds that they react to such engagement as if it were some sort of attack, annoyance, problem, etc., that is a bug, and one which they should want to fix. Reacting to a good thing as if it were a bad thing is, quite simply, a mistake.

Note, again, that the question isn’t whether some particular comment is “locally valid” in an “atomic” sense while being problematic in a “relational” sense. The question, rather, is whether the comment is simply good (in a “relational” sense or in any other sense), but is being mistakenly reacted to as though it were bad.

comment by Said Achmiz (SaidAchmiz) · 2023-04-14T19:03:30.811Z · LW(p) · GW(p)

Thank you for laying out your reasoning.

I don’t have any strong objections to any of this (various minor ones, but that’s to be expected)…

except the last paragraph (#5, starting with “I think Said is trying to figure out …”). There I think you importantly mis-characterize my views; or, to be more precise, you leave out a major aspect, which (in addition to being a missing key point), by its absence colors the rest of your characterization. (What is there is not wrong, per se, but, again, the missing aspect makes it importantly misleading.)

I would, of course, normally elaborate here, but I hesitate to end up with this comment thread/section being filled with my comments. Let me know if you want me to give my thoughts on this in detail here, or elsewhere.

(EDIT: Now expanded upon in this comment [LW(p) · GW(p)].)

Replies from: Vaniver
comment by Vaniver · 2023-04-14T19:25:14.527Z · LW(p) · GW(p)

I would appreciate more color on your views; by that point I was veering into speculation and hesitant to go too much further, which naturally leads to incompleteness.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T01:16:25.654Z · LW(p) · GW(p)

By the way, I will note that I am both quite surprised and, separately, something like dismayed, at how devastatingly effective has been what I will characterize as "Said's privileging-the-hypothesis [LW · GW] gambit."

Like, Said proposed, essentially, "Duncan holds a position which basically no sane person would advocate, and he has somehow held this position for years without anyone noticing, and he conspicuously left this position out of his very-in-depth statement of his beliefs about discourse norms just a couple of months ago"

and if I had realized that I actually needed to seriously counter this claim, I might have started with "bro do you even Bayes?"

(Surely a reasonable prior on someone holding such a position is very very very low even before taking into account the latter parts of the conjunction.)

Like, that Vaniver would go so far as to take the hypothesis

Duncan has, I think, made it very clear that a comment that just says 'what are some examples of this claim?' is, in his view, unacceptable

and then go sifting through the past few comments with an eye toward using them to distinguish between "true" and "false" is startling to me.

"Foolishness," Severus said softly. "Utter foolishness. The Dark Mark has not faded, nor has its master."

"See, that's what I mean by formally insufficient Bayesian evidence. Sure, it sounds all grim and foreboding and stuff, but is it that unlikely for a magical mark to stay around after the maker dies? Suppose the mark is certain to continue while the Dark Lord's sentience lives on, but a priori we'd only have guessed a twenty percent chance of the Dark Mark continuing to exist after the Dark Lord dies. Then the observation, 'The Dark Mark has not faded' is five times as likely to occur in worlds where the Dark Lord is alive as in worlds where the Dark Lord is dead. Is that really commensurate with the prior improbability of immortality? Let's say the prior odds were a hundred-to-one against the Dark Lord surviving. If a hypothesis is a hundred times as likely to be false versus true, and then you see evidence five times more likely if the hypothesis is true versus false, you should update to believing the hypothesis is twenty times as likely to be false as true. Odds of a hundred to one, times a likelihood ratio of one to five, equals odds of twenty to one that the Dark Lord is dead -"

"Where are you getting all these numbers, Potter?"

"That is the admitted weakness of the method," Harry said readily. "But what I'm qualitatively getting at is why the observation, 'The Dark Mark has not faded', is not adequate support for the hypothesis, 'The Dark Lord is immortal.' The evidence isn't as extraordinary as the claim."

The observation "Duncan groused at Said for doing too little interpretive and intellectual labor relative to that which he solicited from others" is not adequate support for "Duncan generally thinks that asking for examples is unacceptable." This is what I meant by the strength of the phrase "blatant falsehood." I suppose if you are starting from "either Mortimer Snodgrass did it, or not," rather than from "I wonder who did the murder," then you can squint at my previous comments—

(including the one that was satirical [LW(p) · GW(p)], which satire, I infer from Vaniver pinging me about my beliefs on that particular phrase offline, was missed)

—and see in them that the murderer has dark hair, and conclude from Mortimer's dark hair that there should be a large update toward his guilt.

But I rather thought we didn't do that around here, and did not expect anyone besides Said to seriously entertain the hypothesis, which is ludicrous.

(I get that Said probably genuinely believed it, but the devout genuinely believe in their gods and we don't give them points for that around here.)

Replies from: habryka4, Vaniver
comment by habryka (habryka4) · 2023-04-15T02:10:46.359Z · LW(p) · GW(p)

Again, just chiming in, leaving the actual decision up to Ray: 

My current take here is indeed that Said's hypothesis, taking fully literal and within your frame was quite confused and bad. 

But also, like, people's frames, especially in the domain of adversarial actions, hugely differ, and I've in the past been surprised by the degree to which some people's frames, despite seeming insane and gaslighty to me at first turned out to be quite valuable. Most concretely I have in my internal monologue indeed basically fully shifted towards using "lying" and "deception" the way Zack, Benquo and Jessica are using it, because their concept seems to carve reality at its joints much better than my previous concept of lying and deception. This despite me telling them many times that their usage of those terms is quite adversarial and gaslighty. 

My current model is that when Said was talking about the preference he ascribes to you, there is a bunch of miscommunication going on, and I probably also have deep disagreements with his underlying model, but I have updated against trying to stamp down on that kind of stuff super hard, even if it sounds quite adversarial to me on first glance. 

This might be crazy, and maybe making this a moderation policy would give rise to all kinds of accusations thrown around and a ton of goodwill being destroyed, but I currently generally feel more excited about exploring different people's accusations of adversarialness in a bunch of depth, even if they seem unlikely on the face of it. This is definitely also partially driven by my thoughts on FTX, and trying to somehow create a space where more uncharitable/adversarial accusations could have been brought up somehow.

But this is really all very off-the-cuff and I have thought about this specific situation and the relevant thread much less than Ray and Ruby have, so I am currently leaving the detailed decisions up to them. But seemed potentially useful to give some of my models here.

comment by Vaniver · 2023-04-15T03:37:17.459Z · LW(p) · GW(p)

I think you are mistaken about the process that generated my previous comment; I would have preferred a response that engaged more with what I wrote.

In particular, it looks to me like you think the core questions are "is the hypothesis I quote correct? Is it backed up by the four examples?", and the parent comment looks to me like you wrote it thinking I thought the hypothesis you quote is correct and backed up by the examples. I think my grandparent comment makes clear that I think the hypothesis you quote is not correct and is not backed up by the four examples. 

Why does the comment not just say "Duncan is straightforwardly right"? Well, I think we disagree about what the core questions are. If you are interested in engaging with that disagreement, so am I; I don't think it looks like your previous comment.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T04:39:36.832Z · LW(p) · GW(p)

(I intended to convey with "by the way" that I did not think I had (yet) responded to the full substance of your comment/that I was doing something of an aside.)

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-14T21:48:06.494Z · LW(p) · GW(p)

If the mods clearly disagree with Duncan, then what does 'losing the bet' look like? What was staked here?

I plan to just leave/not post essays here anymore if this isn't fixed. LW is a miserable place to be, right now. ¯\_(ツ)_/¯

(I also said the following in a chat with several of the moderators on 4/8:

I spent some time wondering if I would endorse a LW where both Duncan and Said were banned, and my conclusion was "yes, b/c that place sounds like it knows what it's for and is pruning and weeding accordingly.")

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-14T21:39:41.161Z · LW(p) · GW(p)

I note that this is leaving out recent and relevant background mentioned in this comment [LW(p) · GW(p)].

comment by Alicorn · 2023-04-15T21:29:03.555Z · LW(p) · GW(p)

I don't keep track of people's posting styles and correlate them with their names very well. Most people who post on LW, even if they do it a lot, I have negligible associations beyond "that person sounds vaguely familiar" or "are they [other person] or am I mixing them up?".

I have persistent impressions of both Said and Duncan, though.

I am limited in my ability to look up any specific Said comment or things I've said elsewhere about him because his name tragically shares a spelling with a common English word, but my model of him is strongly positive.  I don't think I've ever read a Said comment and thought it was a waste of time, or personally bothersome to me, or sneaky or pushy or anything.

Meanwhile I find Duncan vaguely fascinating like he is a very weird bug which has not, yet, sprayed me personally with defensive bug juice or bitten me with its weird bug pincers.  Normally I watch him from a safe distance and marvel at how high a ratio of "incredibly suspicious and hackle-raising" to "not often literally facially wrong in any identifiable ways" he maintains when he writes things.  It's not against any rules to be incredibly suspicious and hackle-raising in a public place, of course, it just means that I don't invite him to where I'm at.  But if he's coming into conflict with, not just Said, but Said's presence on LW, I fear I must venture closer to the weird bug.

I'm a big believer in social incompatibility.  Some people just don't click!  It's probably not inherently impossible to navigate but it's almost never worth the trouble.  Duncan shouldn't have to interact with Said if he doesn't want to.

Also, being the kind of person who has any social conflicts like that, let alone being as prone to them as Duncan is, to my mind fundamentally disqualifies someone from claiming to be objective, taking on public-facing moderator-like roles, etc.  I myself am not qualified for these roles!  I run a walled garden Discord server that only has people I am chill with and don't pretend to be fair about it.  But I also don't write LW posts about how people I don't like are unsuited for polite society.  I support the notion of simply not allowing the kind of authoritative posturing about norms that Duncan often does on LW.

Replies from: T3t, adamzerner
comment by RobertM (T3t) · 2023-04-16T03:42:52.417Z · LW(p) · GW(p)

Meanwhile I find Duncan vaguely fascinating like he is a very weird bug

I don't know[1] for sure what purpose this analogy is serving in this comment, and without it the comment would have felt much less like it was trying to hijack me into associating Duncan with something viscerally unpleasant.

  1. ^

    My guess is that it's meant to convey something like your internal emotional experience, with regards to Duncan, to readers.

Replies from: DanielFilan, Alicorn
comment by DanielFilan · 2023-04-16T19:52:26.279Z · LW(p) · GW(p)

I think weird bugs are neat.

comment by Alicorn · 2023-04-16T04:35:26.314Z · LW(p) · GW(p)

I wasn't sure if I should include the analogy.  I came up with it weeks ago when I was remarking to people in my server about how suspicious I find things Duncan writes, and it was popular there; I guess people here are less universally delighted by metaphors about weird bugs than people on my server, whoops!  For what it's worth I think the world is enriched by the presence of weird bugs.  The other day someone remarked that they'd found a weird caterpillar on the sidewalk near my house and half my dinner guests got up to go look at it and I almost did myself.  I just don't want to touch weird bugs, and am nervous in a similar way about making it publicly knowable that I have an opinion about Duncan.

Replies from: Duncan_Sabien, T3t, agrippa
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-30T09:16:42.758Z · LW(p) · GW(p)

I've tried for a bit to produce a useful response to the top-level comment and mostly failed, but I did want to note that

"Oh, it sort of didn't occur to me that this analogy might've carried a negative connotation, because when I was negatively gossiping about Duncan behind his back with a bunch of other people who also have an overall negative opinion of him, the analogy was popular!"

is a hell of a take. =/

Replies from: Alicorn
comment by Alicorn · 2023-04-30T18:07:12.135Z · LW(p) · GW(p)

Oh, no, it's absolutely negative.  I don't like you.  I just don't specifically think that you are disgusting, and it's that bit of the reaction to the analogy that caught me by surprise.

"Oh, I'm going to impute malice with the phrase 'gossiping behind my back' about someone I have never personally interacted with before who talked about my public blog posts with her friends, when she's specifically remarked that she's worried about fallout from letting me know that she doesn't care for me!" is also kind of a take, and a pretty good example of why I don't like you.  I retract the tentative positive update I made when your only reaction to my comment had been radio silence; I'd found that really encouraging wrt it being safe to have opinions about you where you might see them, but no longer.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-30T20:00:40.703Z · LW(p) · GW(p)

It is only safe for you to have opinions if the other people don't dislike them?

I think you're trying to set up a really mean dynamic where you get to say mean things about me in public, but if I point out anything frowny about that fact you're like "ah, see, I knew that guy was Bad; he's making it Unsafe for me to say rude stuff about him in the public square."

(Where "Unsafe" means, apparently, "he'll respond with any kind of objection at all."  Apparently the only dynamic you found acceptable was "I say mean stuff and Duncan just takes it.")

*shrug*

I won't respond further, since you clearly don't want a big back-and-forth, but calling people a weird bug and then pretending that doesn't in practice connote disgust is a motte and bailey.

Replies from: Alicorn, AllAmericanBreakfast
comment by Alicorn · 2023-04-30T20:54:20.142Z · LW(p) · GW(p)

I kind of doubt you care at all, but here for interested bystanders is more information on my stance.

  • I suspect you of brigading-type behavior wrt conflicts you get into.  Even if you make out like it's a "get out the vote" campaign where the fact that rides to the polls don't require avowing that you're a Demoblican is important to your reception, when you're the sort who'll tell all your friends someone is being mean to you and then the karma swings around wildly I make some updates.  This social power with your clique of admirers in combination with your contagious lens on the world that they pick up from you is what unnerves me.
  • I experience a lot of your word choices (e.g. "gossiping behind [your] back") as squirrelly[1] , manipulative, and more rhetoric than content.  I would not have had this experience in this particular case if, for example, you'd said "criticizing [me] to an unsympathetic audience".  Gossip behind one's back is a social move for a social relationship.  One doesn't clutch one's pearls about random people gossiping about Kim Kardashian behind her back.  We have never met.  I'd stand a better chance of recognizing Ms. Kardashian in the grocery store than you.  I have met some people who know some people who you hang out with, but it's disingenuous to suggest that I had any affordances to instead gossip to your face, or that it's mean to dislike your public blog posts and then talk about disliking them with my friends[2].
  • Further, it's rhetorically interesting that you said "Apparently the only dynamic you found acceptable was "I say mean stuff and Duncan just takes it.""  You didn't try a lot of different dynamics!  I said I was favorably impressed when you didn't respond.  If someone is nervous about you, holding very still and not making any hostile moves is a great way to help them feel safe, and when you tried that (or... looked like you were trying it) it worked.  The only other thing you tried was, uh, this, which, as I'm explaining here, I do not find impressive.  However, scientists have discovered that there are often more than two possible approaches to social conflict.  You could have tried something else!  Maybe you could have dug up a mutual friend who'd mediate, or asked a neutral curious question about whether there was something I could point to that would help you understand why you were coming off badly, instead of unloading a dump truck of sneaky nasty connotations on my lap.  Maybe you believe every one of those connotations in your heart of hearts.  This does not imbue your words with magic soothing power, any more than my intentions successfully accompanied my analogy about weird bugs.  You still seem sneaky and nasty to me.
  1. ^

    I maintain that I sincerely like squirrels; I am using a colloquial definition which, of definitions I found on the internet, most closely matches the Urban Dictionary cluster.

  2. ^

    The "I talk about things with my friends, you brigade" conjugation is not lost on me but I wish to point out in my defense that, as I said in my original comment, I did not intend to touch this situation where it could possibly affect you until it seemed like it was also affecting Said, of whom I am fond.

comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-30T21:11:10.322Z · LW(p) · GW(p)

Positive reinforcement for disengaging!

comment by RobertM (T3t) · 2023-04-16T05:22:50.248Z · LW(p) · GW(p)

It doesn't seem like too many people had a reaction similar to mine, so I don't know that you were especially miscalibrated.  (On reflection, I think the "bug" part is maybe only half of what I found disagreeable about the analogy.  Not sure this is worth the derailment.)

Replies from: quanticle
comment by quanticle · 2023-04-16T05:47:12.814Z · LW(p) · GW(p)

For what it's worth, I had a very similar reaction to yours. Insects and arthropods are a common source of disgust and revulsion, and so comparing anyone to an insect or an arthropod, to me, shows that you're trying to indicate that this person is either disgusting or repulsive.

Replies from: Alicorn
comment by Alicorn · 2023-04-16T05:58:52.121Z · LW(p) · GW(p)

I'm sorry!  I'm sincerely not trying to indicate that.  Duncan fascinates and unnerves me but he does not revolt me.  I think the reason "weird bug" made sense to my metaphor generator instead of "weird plant" or "weird bird" or something is that bugs have extremely widely varying danger levels - an unfamiliar bug may have all kinds of surprises in the mobility, chemical weapons, aggressiveness, etc. department, whereas plants reliably don't jump on you and birds are basically all just WYSIWYG; but many weird bugs are completely harmless, and I simply do not know what will happen to me if I poke Duncan.

Replies from: jkaufman, T3t
comment by jefftk (jkaufman) · 2023-04-18T13:35:30.203Z · LW(p) · GW(p)

What about "weird frog"? Frogs don't have the same negative connotations as bugs and they have the same wide range of danger levels.

Replies from: Alicorn
comment by Alicorn · 2023-04-18T17:34:45.906Z · LW(p) · GW(p)

I think most poisonous frogs look it, and I would accordingly pick up a frog that wasn't very brightly colored if I otherwise wanted to pick it up, whereas bugs may look drab while being dangerous.

Replies from: jkaufman
comment by jefftk (jkaufman) · 2023-04-18T18:24:32.298Z · LW(p) · GW(p)

Poisonous frogs often have bright colors to say "hey don't eat me", but there are also ones that use a "if you don't notice me you won't eat me" strategy. Ex: cane toad, pickerel frog, black-legged poison dart frog.

Replies from: Alicorn, Raemon
comment by Alicorn · 2023-04-18T21:55:55.393Z · LW(p) · GW(p)

Welp, guess I shouldn't pick up frogs.  Not what I expected to be the main takeaway from this thread but still good to know.

Replies from: M. Y. Zuo
comment by M. Y. Zuo · 2023-04-18T21:57:48.331Z · LW(p) · GW(p)

Don't pick up amphibians, or anything else with soft porous skin, in general, unless you're sure.

comment by Raemon · 2023-04-18T22:47:02.670Z · LW(p) · GW(p)

...why do they bother being poisonous then tho?

comment by RobertM (T3t) · 2023-04-16T06:12:28.580Z · LW(p) · GW(p)

I believe it: https://slatestarcodex.com/2017/10/02/different-worlds/

comment by agrippa · 2023-07-06T06:55:07.822Z · LW(p) · GW(p)

I liked the analogy and I also like weird bugs

comment by Adam Zerner (adamzerner) · 2023-04-16T05:20:36.294Z · LW(p) · GW(p)

I'm a big believer in social incompatibility.  Some people just don't click!  It's probably not inherently impossible to navigate but it's almost never worth the trouble.  Duncan shouldn't have to interact with Said if he doesn't want to.

Yup, I strongly agree with this.

And it seems to me that the effort spent moderating this is mostly going to be consequential for Duncan and Said's future interactions instead of generalizing and being consequential to the interactions between other people on LessWrong, because these sorts of conflicts seem to be quite infrequent. If so, it doesn't seem worth spending too much time on.

Maybe as a path forward, Duncan and Said can agree to keep exchanges to a maximum of 10 total comments and subsequently move the conversation to a private DM, see if that works, and if it doesn't re-evaluate from there?

comment by tcheasdfjkl · 2023-04-16T00:12:47.226Z · LW(p) · GW(p)

I have not read all the words in this comment section, let alone in all the linked posts, let alone in their comments sections, but/and - it seems to me like there's something wrong with a process that generates SO MANY WORDS from SO MANY PEOPLE and takes up SO MUCH PERSON-TIME for what is essentially two people not getting along. I get that an individual social conflict can be a microcosm of important broader dynamics, and I suspect that Duncan and/or Said might find my "not getting along" summary trivializing, which may even be true, as noted I haven't read all the words - just, still, is this really the best thing for everyone involved to be doing with their time?

Replies from: Viliam, TekhneMakre
comment by Viliam · 2023-04-16T18:59:09.005Z · LW(p) · GW(p)

It is already happening, so the choices are either one big thread, or a dozen or so smaller ones.

comment by TekhneMakre · 2023-04-16T02:17:34.714Z · LW(p) · GW(p)

Or at least, if there's something so compelling-in-some-way going on for some people that they want to keep engaging, at least we could hope that somehow they could be facilitated in doing mental work that will be helpful for whatever broader things there are. Like, if it's a microcosm of stuff, if it represents some important trends, if there's something important but hard to see without trying really hard, then it might be good for them to focus on that rather than being in a fight. (Of course, easier said than done(can); a lot of the ink spilled will feel like trying to touch on the broader things, but only some of it actually will.)

comment by Adam Zerner (adamzerner) · 2023-04-14T18:38:49.078Z · LW(p) · GW(p)

This seems like a situation that is likely to end up ballooning into something that takes up a lot of time and energy. So then, it seems worth deciding on an "appetite" [LW · GW] up front. Is this worth an additional two hours of time? Six? Sixty? Deciding on that now will help avoid a scenario where (significantly) more time is spent than is desirable.

comment by LoganStrohl (BrienneYudkowsky) · 2023-04-19T01:16:23.797Z · LW(p) · GW(p)

Here is some information about my relationship with posting essays and comments to LessWrong. I originally wrote it for a different context (in response to a discussion about how many people avoid LW because the comments are too nitpicky/counterproductive) so it's not engaging directly with anything in the OP, but @Raemon [LW · GW] mentioned it would be useful to have here.

*

I *do* post on LW, but in a very different way than I think I would ideally. For example, I can imagine a world where I post my thoughts piecemeal pretty much as I have them, where I have a research agenda or a sequence in mind and I post each piece *as* I write it, in the hope that engagement with my writing will inform what I think, do, and write next. Instead, I do a year's worth of work (or more), make a 10-essay sequence, send it through many rounds of editing, and only begin publishing any part of it when I'm completely done, having decided in advance to mostly ignore the comments.

It appears to me that what I write is strongly in line with the vision of LW (as I understand it; my understanding is more an extrapolation of Eliezer's founding essays and the name of the site than a reflection of discussion with current mods), but I think it is not in line with the actual culture of LW as it exists.  A whole bunch of me does not want to post to LW at all and would rather find a different audience for my work, one where I feel comfortable and excited and surrounded by creative peers who are jamming with each other and building things together or something. But I don't know of any such place that meets my standards in all the important ways, and LW seems like the place where my contributions are most likely to gradually drag the culture in a direction where I'll actually *enjoy* posting there, instead of feeling like I'm doing a scary unpleasant diligence thing. (Plus I really believe in the site's underlying vision!)

Sometimes people do say cool interesting valuable-to-me things under my posts. But it's pretty rare, and I'm always surprised when this happens. Mostly my posts get not much engagement, and the engagement they do get feels a whole lot to me like people attempting to use my post as an opportunity to score points in one way or another, often by (apparently) trying to demonstrate that they're ahead of me in some way while also accidentally demonstrating that they have probably not even tried to hear me.

My perception is very likely skewed here, but my impression is that the median comment on LW is along the lines of "This is wrong/implausible/inadequate because X." The comments I *want* are more like, "When I thought about/tried this for five minutes, here is what happened, and here is how I'm thinking about that, and I wonder x y and z."

Here [LW · GW] is a comment thread that demonstrates what it looks like when *I* think that an interesting-to-me post is inadequate/not quite right. I'm not saying commenters in general should be held to this ridiculous standard, I'm just saying, "Here's a shining example of the kind of thing that is possible, and I really want the world to move in this direction, especially in response to my posts", or something. (However apparently it wasn't considered particularly valuable commentary by readers *shrug*.)

Raymond has been trying to get me to post my noticing stuff from Agenty Duck to LW for *years*, or even to let *him* cross post it for me.  And I keep saying "no" or "not yet", because the personal consequences I imagine for me are mostly bad, and I just think I need to make something good enough to outweigh that first. It's just now, after literally five to ten years of further development, that I've gotten that material into a shape where I think the benefit to the world and my local social spaces (and also my bank account) outweighs the personal unpleasantness of posting the stuff to LW.

(This is just one way of looking at it. The full story is a lot bigger and more complicated, I think.)

 

Replies from: MondSemmel, TekhneMakre
comment by MondSemmel · 2023-04-26T20:54:49.915Z · LW(p) · GW(p)

Sometimes people do say cool interesting valuable-to-me things under my posts. But it's pretty rare, and I'm always surprised when this happens. Mostly my posts get not much engagement, and the engagement they do get feels a whole lot to me like people attempting to use my post as an opportunity to score points in one way or another, often by (apparently) trying to demonstrate that they're ahead of me in some way while also accidentally demonstrating that they have probably not even tried to hear me.

My perception is very likely skewed here, but my impression is that the median comment on LW is along the lines of "This is wrong/implausible/inadequate because X." The comments I *want* are more like, "When I thought about/tried this for five minutes, here is what happened, and here is how I'm thinking about that, and I wonder x y and z."

I also have the sense that most posts don't get enough / any high-quality engagement, and my bar for such engagement is likely lower than yours.

I suspect though that the main culprit here is not the site culture, but instead a bunch of related reasons: the sheer amount of words on the site and in each essay, which cause the readership to spread out over a gigantic corpus of work; standard Internet engagement patterns (only a small fraction of readers write comments, and only a small fraction of those are high-quality); median LW essays receive too few views to produce healthy discussions; high-average-quality commenters are rare on the Internet, and their comments are spread out over everything they read; imperfect karma incentives; etc.

Are there ways for individuals to reliably get a number of comments sufficiently large to produce the occasional high-quality engagement? The only ways I've seen are for them to either already be famous essayists (e.g. the comments sections on ACX or Slow Boring are sufficiently big to contain the occasional gem), or to post in their own Facebook community or something. Feed-like sites like Facebook suffer from their recency bias, however, which is kind of antithetical to the goal of writing truth-seeking and timeless essays.

comment by TekhneMakre · 2023-04-19T05:20:42.458Z · LW(p) · GW(p)

Mostly my posts get not much engagement, and the engagement they do get feels a whole lot to me like people attempting to use my post as an opportunity to score points in one way or another, often by (apparently) trying to demonstrate that they're ahead of me in some way while also accidentally demonstrating that they have probably not even tried to hear me.

Strong agree. Though I also engage in the commenting behavior, at an uncharitable view of my behavior.

One can dream of some genius cracking the filtering problem and creating a criss-crossing tesseract of subcultures that can occupy the same space (e.g. LW) but go off in their own shared-goals directions (those people who jam and analyze with each other; those people who carefully nitpick and verify; those people who gather facts; those people who just vibe; ...).

comment by Richard_Ngo (ricraz) · 2023-04-17T07:47:06.561Z · LW(p) · GW(p)

Skimmed all the comments here and wanted to throw in my 2c (while also being unlikely to substantively engage further, take that into account if you're thinking about responding):

  • It seems to me that people should spend less time litigating this particular fight and more time figuring out the net effects that Duncan and Said have on LW overall. It seems like mods may be dramatically underrating the value of their time and/or being way too procedurally careful here, and I would like to express that I'd support them saying stuff like "idk exactly what went wrong but you are causing many people on our site (including mods) to have an unproductive time, that's plenty of grounds for a ban".
  • It seems to me that many (probably most) people who engage with Said will end up having an unproductive and unpleasant time. So then my brain started generating solutions like "what if you added a flair to his comments saying 'often unproductive to engage'" and then I was like "wait this is clearly a missing stair situation (in terms of the structural features not the severity of the misbehavior) and people are in general way too slow to act on those; at the point where this seems like a plausibly-net-positive intervention he should clearly just be banned".
  • It seems to me that Duncan has very strong emotional reactions about which norms are used, and how they're used, and that his preferred norms seem pretty bizarre to many people (I relate to several of Alicorn's reactions to him, including "marvel at how high a ratio of "incredibly suspicious and hackle-raising" to "not often literally facially wrong in any identifiable ways"") and again the solution my brain generated was to have some kind of flair like 'often dies on the hill of unusual discourse norms' (this is a low-effort phrasing that's directionally correct but there's probably a much better one) and then I was like "wait this is another missing stair situation". But it feels like there's plausibly an 80/20 solution here where Duncan can still post his posts (with some kind of "see my profile for a disclaimer about discourse norms" header) but not comment on other people's.
  • I say all this despite agreeing with Said's pessimism about the quality of most LW content [LW(p) · GW(p)]. I just don't think there's any realistic world in which commenting pessimistically on lots of stuff in the way that Said does actually helps with that, but it does hurt the few good things. Wei Dai had a comment below about how important it is to know whether there's any criticism or not, but mostly I don't care about this either because my prior is just that it's bad whether or not there's criticism. In other words, I think the only good approach here is to focus on farming the rare good stuff and ignoring the bad stuff (except for the stuff that ends up way overrated, like (IMO) Babble or Simulators, which I think should be called out directly).
Replies from: Wei_Dai, Raemon, SaidAchmiz
comment by Wei Dai (Wei_Dai) · 2023-04-17T16:18:22.549Z · LW(p) · GW(p)

Wei Dai had a comment below about how important it is to know whether there’s any criticism or not, but mostly I don’t care about this either because my prior is just that it’s bad whether or not there’s criticism. In other words, I think the only good approach here is to focus on farming the rare good stuff and ignoring the bad stuff (except for the stuff that ends up way overrated, like (IMO) Babble or Simulators, which I think should be called out directly).

But how do you find the rare good stuff amidst all the bad stuff? I tend to do it with a combination of looking at karma, checking the comments to see whether or not there’s good criticism, and finally reading it myself if it passes the previous two filters. But if a potentially good criticism was banned or disincentivized, then that 1) causes me to waste time (since it distorts both signals I rely on), and 2) potentially causes me to incorrectly judge the post as "good" because I fail to notice the flaw myself. So what do you do such that it doesn't matter whether or not there's criticism?

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2023-04-18T00:34:37.956Z · LW(p) · GW(p)

My approach is to read the title, then if I like it read the first paragraph, then if I like that skim the post, then in rare cases read the post in full (all informed by karma).

I can't usually evaluate the quality of criticism without at least having skimmed the post. And once I've done that, I don't usually gain much from the criticisms (although I do agree they're sometimes useful).

I'm partly informed here by the fact that I tend to find Said's criticisms unusually non-useful.

comment by Raemon · 2023-04-17T13:53:45.825Z · LW(p) · GW(p)

Thanks for weighing in! Fwiw I've been skimming but not particularly focused on the litigation of the current dispute, and instead focusing on broader patterns. (I think some amount of litigation of the object level was worth doing but we're past the point where I expect marginal efforts there to help)

One of the things that's most cruxy to me is what people who contribute a lot of top content* feel about the broader patterns, so, I appreciate you chiming in here.

*roughly operationalized as "write stuff that ends up in the top 20 or top 50 of the annual review"

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2023-04-17T15:05:47.104Z · LW(p) · GW(p)

Makes sense.

One of the things that's most cruxy to me is what people who contribute a lot of top content* feel about the broader patterns, so, I appreciate you chiming in here.


FYI I personally haven't had bad experiences with Said (and in fact I remember talking to mods who were at one point surprised by how positively he engaged with some of my posts). My main concern here is the missing stair dynamic of "predictable problem that newcomers will face".

comment by Said Achmiz (SaidAchmiz) · 2023-04-17T09:00:23.332Z · LW(p) · GW(p)

I say all this despite agreeing with Said’s pessimism about the quality of most LW content [LW(p) · GW(p)]. I just don’t think there’s any realistic world in which commenting pessimistically on lots of stuff in the way that Said does actually helps with that, but it does hurt the few good things.

You know, I’ve seen this sort of characterization of my commenting activity quite a few times in these discussions, and I’ve mostly shrugged it off; but (with apologies, as I don’t mean to single you out, and indeed you’re one of the LW members whom I respect significantly more than average) I think at this point I have to take the time to address it.

My objection is simply this:

Is it actually true that I “comment pessimistically on lots of stuff”? Do I do this more than other people?

There are many ways of operationalizing that, of course. Here’s one that seems reasonable to me: let’s find all the posts (not counting “meta”-type posts that are already about me, or referring to me, or having to do with moderation norms that affect me, etc.) on which I’ve commented “pessimistically” in, let’s say, the last six months, and see if my comments are, in their level of “pessimism”, distinguishable from those of other commenters there; and also what the results of those comments turn out to be.

#1: https://www.lesswrong.com/posts/Hsix7D2rHyumLAAys/run-posts-by-orgs [LW · GW]

Multiple people commenting in similarly “pessimistic” ways, including me. The most, shall we say, vigorous, discussion that takes place there doesn’t involve me at all.

#2: https://www.lesswrong.com/posts/2yWnNxEPuLnujxKiW/tabooing-frame-control [LW · GW]

My overall view is certainly critical, but here I write multiple medium-length comments, which contain substantive analyses of the concept being discussed. (There is, however, a very brief comment from someone else [LW(p) · GW(p)] which is just a request—or “demand”?—for clarification; such is given, without protest.)

#3: https://www.lesswrong.com/posts/67NrgoFKCWmnG3afd/you-ll-never-persuade-people-like-that [LW · GW]

Here I post what can be said to be a critical comment, but one that offers my own take. Other [LW(p) · GW(p)] comments [LW(p) · GW(p)] are substantially more critical than mine.

#4: https://www.lesswrong.com/posts/Y4hN7SkTwnKPNCPx5/why-don-t-more-people-talk-about-ecological-psychology#JcADzrnoJjhFHWE5W [LW(p) · GW(p)]

Probably the most central example of the sort of “short questions that are potentially critical” comments that some people here seem to so dislike. Note that the two comments I posted were (a) both answered (satisfactorily or not, you can judge for yourself, though I found the answers reasonable enough, given the context), and (b) answered by two different people—the OP and someone else. This is exactly the sort of entirely reasonable and praiseworthy outcome which I’ve described!

#5: https://www.lesswrong.com/posts/yCuzmCsE86BTu9PfA/there-are-no-coherence-theorems#bGBy9uYdZrGcpvXCG [LW(p) · GW(p)]

Notable because here I am responding to someone else (one of the LW/Lightcone team, in fact) posting a fairly harsh critical comment, with some comments in support of the OP’s thesis.

#6: https://www.lesswrong.com/posts/C6oNRFt4dvtM25vpw/living-nomadically-my-80-20-guide#Zs9T4CebjkLvsmvDy [LW(p) · GW(p)]

This is a reply to a comment, not a post, but still “pessimistic” (i.e., mildly, or even just potentially, skeptical). A brief, but quite reasonable, discussion ensues.

#7: https://www.lesswrong.com/posts/yepKvM5rsvbpix75G/you-don-t-exist-duncan [LW · GW]

Probably the most “pessimistic” comment of the bunch, but noteworthy in that here I’m not even starting a comment thread, but only agreeing with someone else’s existing critical comment.

#8: https://www.lesswrong.com/posts/rwkkcgSpnAyE8oNo3/alexander-and-yudkowsky-on-agi-goals [LW · GW]

A couple of critical comments from me, among many others. Nothing particularly stands out. Nothing exciting or terrible results from them.

That’s it, all the way back to around the end of January. (Feel free to go back further and check if I’ve omitted anything, but I think this is a reasonable block of time to examine.)

The bottom line, I think, is that it’s just not true that I stand out from the crowd in terms of how “pessimistically” I comment, or on how much stuff, or how often, or how often relative to other sorts of comments (where, e.g., I give my own thoughts on something in detail), or how often anything meaningfully bad happens as a result, or… anything, really.

From now on, whenever anyone claims otherwise, I’m just going to ask for proof.

Wei Dai had a comment below about how important it is to know whether there’s any criticism or not, but mostly I don’t care about this either because my prior is just that it’s bad whether or not there’s criticism. In other words, I think the only good approach here is to focus on farming the rare good stuff and ignoring the bad stuff (except for the stuff that ends up way overrated, like (IMO) Babble or Simulators, which I think should be called out directly).

So, your view is that that most content here is just bad, and some of it is so bad, while being highly acclaimed (what does that say about Less Wrong’s “epistemic immune system”!), that it needs to be called out directly. I think that’s a more pessimistic view than even my own!

But then what exactly is the concern? Are you suggesting that I’ve misguidedly criticized some of the good stuff? If so—what stuff are we talking about, here?

Replies from: ricraz
comment by Richard_Ngo (ricraz) · 2023-04-17T15:00:51.163Z · LW(p) · GW(p)

Not responding to the main claim, cos mods have way more context on this than me, will defer to them.

I think that’s a more pessimistic view than even my own!

Very plausibly. But pessimism itself isn't bad, the question is whether it's the sort of pessimism that leads to better content or the sort that leads to worse content. Where, again, I'm going to defer to mods since they've aggregated much more data on how your commenting patterns affect people's posting patterns.

comment by Max H (Maxc) · 2023-04-14T18:50:53.931Z · LW(p) · GW(p)

This set of issues sounds like a huge time sink for all involved.

My own plan is to upvote this post, hide it from my frontpage, and then not follow any further discussion about it closely, or possibly at all, at least until the conclusions are posted. I'd encourage anyone else who is able, and thinks they're at risk of getting sucked in, to do the same.

I trust that Raemon and the rest of the mod team will make good decisions regardless of my (or anyone else's) input on the matter, and I'm very grateful that they are willing to put in the time to do so, so that others may be spared.

My only advice to them is not to put too much of their own time in, to the detriment of their other priorities and sanity. (This too is a decision I trust them to make on their own, but I think it's worth repeating publicly.)

A tiny bit of object-level discourse: I like Duncan's posts, and would be sad to see fewer of them in the future. I mostly don't pay attention to comments by anyone else mentioned here, including Duncan.

comment by Lukas_Gloor · 2023-04-15T23:54:05.359Z · LW(p) · GW(p)

Said's way of asking questions, and the uncharitable assumptions he sometimes makes, are among the most off-putting things I associate with LW. I don't find it okay myself, but it seems like the sort of thing that's hard to pin down with legible rules. Like, if he were to ask me "what is it that you don't like, exactly" – I feel like it's hard to pin down.

Edit: So, on the topic of moderation policy, seems like the option that individual users can ban specific other users if they have trouble dealing with their style or just if conflicts happen, that seems like a good solution to me. And I don't think it should reflect poorly on the banner (unless they ban an extraordinary number of other users). 

comment by Raemon · 2023-04-14T18:04:36.934Z · LW(p) · GW(p)

Okay, overall outline of thoughts on my mind here:

  • What actually happened in the recent set of exchanges? Did anyone break any site norms? Did anyone do things that maybe should be against site norms, but that we hadn't actually made explicit rules against, such that we should take the opportunity to develop some case law and warn people not to do them in the future?
  • 5 years ago, the moderation team issued Said a mod warning about a common pattern of engagement that a lot of people have complained about (this was operationalized as "demanding more interpretive labor than he has given"). We said if he did it again we'd ban him for a month. My vague recollection is that he basically didn't do it for a couple of years after the warning, but may have started to somewhat over the past couple of years; I'm not sure (I think he may have avoided the particular thing we asked him not to do, but I've had a growing sense that his commenting makes me more wary of how I use the site). What are my overall thoughts on that?
  • Various LW team members have concerns about how Duncan handles conflict. I'm a bit confused about how to think about it in this case. I think a number of other users are worried about this too. We should probably figure out how we relate to that and make it clear to everyone.
  • It's Moderation Re-Evaluation Month. It's a good time to re-evaluate our various moderation policies. This might include "how we handle conflict between established users", as well as "are there any important updates to the Authors Can Moderate Their Posts rules/tech?"

It seems worthwhile to touch on each of these. I'll follow up on each topic at least somewhat.

Replies from: Raemon, Raemon, Raemon, Duncan_Sabien
comment by Raemon · 2023-04-14T19:38:04.622Z · LW(p) · GW(p)

Maybe explicit rules against blocking users from "norm-setting" posts.

On blocking users from commenting 

I still endorse authors being able to block other users (whether for principled reasons, or just "this user is annoying"). I think a) it's actually really important that the site be fun for authors to use, b) there are a lot of users who are dealbreakingly annoying to some people but not others, and banning them from the whole site would be overkill, and c) authors aren't obligated to lend their own karma/reputation to give space to other people's content. If an author doesn't want your comments on his post, whether for defensible reasons or not, I think it's an okay answer that those commenters make their own post or shortform arguing the point elsewhere. 

Yes, there are some trivial inconveniences to posting that criticism. I do track that in the cost. But I think that is outweighed by the effect on authors being motivated to post.

That all said...

Blocking users on "norm-setting posts"

I think it's more worrisome to block users on posts that are making major momentum towards changing site norms/culture. I don't think the censorship effects are that strong or distorting in most cases, but I'm most worried about censorship effects being distorting in cases that affect ongoing norms about what people can say. 

There's a blurry line here between posts that are putting forth new social concepts, posts advocating for applying those concepts as norms (either in the OP or in the comments), and, further along, posts arguing about applying those concepts to specific people. I.e., I'd have an ascending wariness across those categories.

I think it was already a little sketchy that Basics of Rationalist Discourse went out of its way to call itself "The Basics" rather than "Duncan's preferred norms" (a somewhat frame-control-y move [LW(p) · GW(p)] IMO, although not necessarily unreasonably so), while also blocking Zack at the time. It feels even more sketchy to me to write Killing Socrates, which is, AFAICT, a thinly veiled "build social momentum against Said in particular" post, where Said can't respond (and it's disproportionately likely that Said's allies also can't respond).

Right now we don't have the tech to unblock users from a specific post when they've been banned from all of a user's posts. But this recent set of events has me leaning towards "build tech to do that", and then making it a rule that posts at or over the threshold of "Basics" (in terms of site-norm-momentum-building) need to allow everyone to comment.

I do expect that to make it less rewarding to make that sort of post. And, well, to (almost) quote Duncan: [LW · GW]

Put another way: a frequent refrain is "well, if I have to put forth that much effort, I'll never say anything at all," to which the response is often ["sorry I acknowledge the cost here but I think that's an okay tradeoff"]

Okay but what do I do about Said when he shows up doing his whole pattern of subtly-missing/and/or/reframing-the-point-while-sprawling massive threads, in an impo

My answer is "strong downvote him, announce you're not going to engage, maybe link to a place where you went into more detail about why if this comes up a lot, and move on with your day." (I do generally wish Duncan did more of this and less trying to set-the-record straight in ways that escalate in IMO very costly ways)

(I also kinda wish gjm had also done this towards the beginning of the thread on LW Team is adjusting moderation policy [LW · GW])

Replies from: AllAmericanBreakfast, Jasnah_Kholin, Duncan_Sabien
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T00:21:57.541Z · LW(p) · GW(p)

maybe link to a place where you went into more detail about why if this comes up a lot, and move on with your day.

This is exactly why I wrote Here's Why I'm Hesitant To Respond In More Depth [LW · GW]. The purpose wasn't just to explain myself to somebody specific. It was to give myself an alternative resource for when I received a specific type of common feedback that was giving me negative vibes. Instead of my usual behaviors (get in an argument, ignore and feel bad, downvote without explanation, or whatever), I could link to this post, which conveyed more detail, warmth and charity than I would be able to muster reliably or in the moment. I advocate that others should write their own versions tailored to their particular sensitivities, and I think it would be a step toward a healthier site culture.

comment by Jasnah Kholin (Jasnah_Kholin) · 2023-04-18T16:04:59.026Z · LW(p) · GW(p)

"I do generally wish Duncan did more of this and less trying to set-the-record straight in ways that escalate in IMO very costly ways"

strongly agree.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-14T22:43:21.187Z · LW(p) · GW(p)

while also blocking Zack at the time

I note for context/as a bit of explanation that Zack was blocked because of having shot from the hip with "This is insane" on what was literally a previous partial draft of that very post (made public by accident); I didn't want a repeat of a specific sort of interaction I had specific reason to fear.

comment by Raemon · 2023-04-14T19:27:04.851Z · LW(p) · GW(p)

Recap of mod team history with Said Achmiz

First, some background context. When LW2.0 was first launched, the mod team had several back-and-forths with Said over complaints about his commenting style. He was (and I think still is) the most-complained-about LW user. We considered banning him. 

Ultimately we told him this:

As Eliezer is wont to say, things are often bad because the way in which they are bad is a Nash equilibrium. If I attempt to apply it here, it suggests we need both a great generative and a great evaluative process before the standards problem is solved, at the same time as the actually-having-a-community-who-likes-to-contribute-thoughtful-and-effortful-essays-about-important-topics problem is solved, and only having one solved does not solve the problem.

I, Oli and Ray will build a better evaluative process for this online community, that incentivises powerful criticism. But right now this site is trying to build a place where we can be generative (and evaluative) together in a way that's fun and not aggressive. While we have an incentive toward better ideas (weighted karma and curation), it is far from a finished system. We have to build this part as well as the evaluative before the whole system works, and while we've not reached there you're correct to be worried and want to enforce the standards yourself with low-effort comments (and I don't mean to imply the comments don't often contain implicit within them very good ideas).

But unfortunately, given your low-effort criticism feels so aggressive (according to me, the mods, and most writers I talk to in the rationality community), this is just going to destroy the first stage before we get the second. If you write further comments in this pattern which I have pointed to above, I will not continue to spend hours trying to pass your ITT and responding; I will just give you warnings and suspensions.

I may write another comment in this thread if there is something simple to clarify or something, but otherwise this is my last comment in this thread.

Followed by:

This was now a week ago. The mod team discussed this a bit more, and I think it's the correct call to give Said an official warning (link [LW · GW]) for causing a significant number of negative experiences for other authors and commenters.

Said, this moderation call is different than most others, because I think there is a place for the kind of communication culture that you've advocated for, but LessWrong specifically is not that place, and it's important to be clear about what kind of culture we are aiming for. I don't think ill of you or that you are a bad person. Quite the opposite; as I've said above, I deeply appreciate a lot of the things you've built and advice you've given, and this is why I've tried to put in a lot of effort and care with my moderation comments and decisions here. I'm afraid I also think LessWrong will overall achieve its aims better if you stop commenting in (some of) the ways you have so far.

Said, if you receive a second official warning, it will come with a 1-month suspension. This will happen if another writer has an extensive interaction with you primarily based around you asking them to do a lot of interpretive labour and not providing the same in return, as I described in my main comment [LW(p) · GW(p)] in this thread.

I do have a strong sense of Said being quite law-abiding/honorable about the situation despite disagreeing with us on several object- and meta-level moderation policies, which I appreciate a lot.

I do think it's worth noting that LessWrong 2.0 feels like it's at a more stable point than it was in 2018. There's enough of a critical mass of people posting here that I'm less worried about annoying commenters killing it completely (which was a very live fear during the initial LW2.0 revival).

But I am still worried about the concerns from 5 years ago, and do basically stand by Ben's comment. And meanwhile I still think Said's default commenting style is much worse than nearby styles that would accomplish the upside with less downside.

My summary of previous discussions as I recall them is something like:

Mods: "Said, lots of users have complained about your conversation style, you should change it."

Said: "I think a) your preferred conversation norms here don't make sense to me and/or seem actively bad in many cases, and b) I think the thing my conversation style is doing is really important for being a truthtracking forum."

[...lots of back-and-forth...]

Mods: "...can you change your commenting style at all?"

Said: "No, but I can just stop commenting in particular ways if you give me particular rules."

Then we did that, and it sorta worked for a while. But it hasn't been wholly satisfying to me. (I do have some sense that Said has recently ended up commenting more in threads that are explicitly about setting norms, and while we didn't spell this out in our initial mod warning, I do think it is extra costly to ban someone from discussions of moderation norms compared to other discussions. I'm not 100% sure how to think about this.)

Replies from: Vaniver
comment by Vaniver · 2023-04-14T20:18:44.443Z · LW(p) · GW(p)

I think some additional relevant context is this discussion [LW · GW] from three years ago, which I think was 1) an example of Said asking for definitions without doing any interpretive labor, 2) appreciated by some commenters (including the post author, me), and 3) reacted to strongly by people who expected it to go poorly, including some mods. I can't quickly find any summaries we posted after the fact. 

comment by Raemon · 2023-04-14T19:36:44.460Z · LW(p) · GW(p)

Death by a thousand cuts and "proportionate"(?) response

A way this all feels relevant to current disputes with Duncan is that the thing that is frustrating about Said is not any individual comment, but an overall pattern that doesn't emerge as extremely costly until you see the whole thing. (I.e., if there's a spectrum of how bad behavior is, from 0-10, and things that are a "3" are considered bad enough to punish, someone who's doing things that are bad at a "2.5" or "2.9" level doesn't quite feel worth reacting to. But if someone does them a lot it actually adds up to being pretty bad.)

If you point this out, people mostly shrug and move on with their day. So, to point it out in a way that people actually listen to, you have to do something that looks disproportionate if you're just paying attention to the current situation. And, also, the people who care strongly enough to see that through tend to be in an extra-triggered/frustrated state, which means they're not at their best when they're doing it.

I think Duncan's response looks very out-of-proportion, and is out of proportion to some degree (see the Vaniver thread for some reasons why; I have some more reasons I plan to write about). 

But I do think there is a correct thing that Duncan was noting/reacting to, which is that actually yeah, the current situation with Said does feel bad enough that something should change, and it indeed the mods hadn't been intervening on it because it didn't quite feel like a priority. 

I liked Vaniver's description of Duncan's comments/posts as making a bet that Said was in fact obviously banworthy or worthy of significant mod action, and that there was a smoking gun to that effect, and if this was true then Duncan would be largely vindicated-in-retrospect.

I'll lay out some more thinking as to why, but, my current gut feeling + somewhat considered opinion is that "Duncan is somewhat vindicated, but not maximally, and there are some things about his approach I probably judge him for." 

Replies from: JacobKopczynski
comment by Czynski (JacobKopczynski) · 2023-04-16T03:32:22.106Z · LW(p) · GW(p)

Personally, the thing I think should change with Said is that we need more of him, preferably a dozen more people doing the same thing. If there were a competing site run according to Said's norms, it would be much better for pursuing the art of rationality than modern LessWrong is; disagreeable challenges to question-framing and social moves are desperately necessary to keep discussion norms truth-tracking rather than convenience-tracking.

But this is not an argument I expect to be able to win without actually trying the experiment. And even then I would expect at least five years would be required to get unambiguous results.

Replies from: Viliam
comment by Viliam · 2023-04-16T18:54:20.384Z · LW(p) · GW(p)

It would definitely be an interesting experiment. Different people would make different predictions about its outcome, but that's exactly what the experiments are good for.

(My bet would be that the participants would only discuss "safe" topics, such as math and programming.)

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-14T21:33:10.587Z · LW(p) · GW(p)
  • Various LW team members have concerns about how Duncan handles conflict. I'm a bit confused about how to think about it in this case. I think a number of other users are worried about this too. We should probably figure out how we relate to that and make it clear to everyone.

When Said was spilling thousands of words uncharitably psychoanalyzing me last week, I asked for mod help [LW(p) · GW(p)], and got none. I did, in fact, try the strategy of "don't engage much" (I think I left like three total comments to Said's dozens) and "get someone else to handle the conflict," and the moderators demurred.

If you don't want me to defend myself my way, please make it not necessary to defend myself.

Replies from: habryka4
comment by habryka (habryka4) · 2023-04-14T22:50:51.427Z · LW(p) · GW(p)

I am not sure what you mean, didn't Ray respond on the same day that you tagged him? 

I haven't read the details of all of the threads, but I interpreted your comment here as "the mod team ignored your call for clarification" as opposed to "the mod team did respond to your call for clarification basically immediately, but there was some <unspecified issue> with it".

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T01:27:31.003Z · LW(p) · GW(p)

He responded to say ~"I don't like this much but we're not gonna do anything."

EDIT: to elaborate, Ray actually put quite a bit of effort into a back and forth with Said, and eventually asked him to stop commenting/put a pause on the whole conversation.  But there wasn't any "this thing that Said was doing before I showed up is not clearing the bar for LW."

Replies from: habryka4, habryka4
comment by habryka (habryka4) · 2023-04-15T01:48:28.072Z · LW(p) · GW(p)

EDIT: to elaborate, Ray actually put quite a bit of effort into a back and forth with Said, and eventually asked him to stop commenting/put a pause on the whole conversation.  But there wasn't any "this thing that Said was doing before I showed up is not clearing the bar for LW."

Yeah, I think Ray is currently working on figuring out what the actual norms here should be, which I do think just takes a while. Ideally we would have a moderation philosophy pinned down in which the judgement here is obvious, but as moderation disputes go, a common pattern is that if people disagree with a moderation philosophy, they tend to go right up to the edge of the clear rules you have established (in a way I don't really think is inherently bad; in domains where I disagree with the law I also tend to go right up to the edge of what it allows).

This seems like one of those cases, where my sense is there is a bunch of relatively deep disagreement about character and spirit of LessWrong, and people are going right up to the edge of what's allowed, and disputing those edge-cases almost always tends to require multiple days of thought. My model of you thinks that things were pretty clearly over your line, though indeed my sense is Said's behavior was more optimized to go up to the line of the rules we had set previously, and wasn't that optimized to not cross your lines.

It's plausible there is some meta-level principle here about line-toeing, but I am not even confident line-toeing is going on here, and I have a bunch of complicated meta thoughts on how to handle line-toeing (one of which is that if you try to prevent line-toeing, people will toe the line of ambiguity of whether they are toeing lines, which makes everything really confusing).

comment by habryka (habryka4) · 2023-04-15T01:37:25.772Z · LW(p) · GW(p)

That does not seem like an accurate summary of this comment? 

My current take is "this thread seems pretty bad overall and I wish everyone would stop, but I don't have an easy succinct articulation of why and what the overall moderation policy is for things like this." I'm trying to mostly focus on actually resolving a giant backlog of new users who need to be reviewed while thinking about our new policies, but expect to respond to this sometime in the next few days. 

What I will say immediately to @Said Achmiz [LW · GW] is "This point of this thread is not to prosecute your specific complaints about Duncan. Duncan banning you is the current moderation policy working as intended. If you want to argue about that, you should be directing your arguments at the LessWrong team, and you should be trying to identify and address our cruxes."

I have more to say about this but it gets into an effortcomment that I want to allocate more time/attention to.

I'd note: I do think it's an okay time to open up Said's longstanding disagreements with LW moderation policy, but, like, all the previous arguments still apply. Said's comments so far haven't added new information we didn't already consider.

I think it is better to start a new thread rather than engaging in this one, because this thread seems to be doing a weird mix of arguing moderation-abstract-policies while also trying to prosecute one particular case in a way that feels off.

He said pretty clearly "I am dealing with a backlog of users so won't give this the full response it deserves until a few days later" (which is right now). It also responded pretty clearly to a bunch of the object-level. 

I think it's fine for you to say you didn't feel helped immediately, or something, but I really don't think characterizing Ray's response as "not doing anything" is remotely accurate. My guess is he has spent on the order of 20 hours on this conflict in the last week, with probably another 10-15 hours from both Ruby and Robert, resulting in at least thousands, possibly tens of thousands of words written publicly by now. Again, it might be the case that somehow those moderation comments didn't align with your preferences, but I do sure think it counts as "clarifying whether this is something we want happening on LessWrong", which was your literal request.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T01:38:19.884Z · LW(p) · GW(p)

Yeah, as you were typing this I was also typing an edit. My apologies, Ray, for the off-the-cuff wrong summary.

Replies from: habryka4
comment by habryka (habryka4) · 2023-04-15T01:41:06.749Z · LW(p) · GW(p)

Cool, no problem.

comment by Raemon · 2023-04-16T19:22:24.941Z · LW(p) · GW(p)

A lot of digital ink has been spilled, and if I were a random commenter I wouldn't think it that valuable to dig into my object level reasoning. But, since I'm the one making the final calls here it seemed important to lay out how I think about the broader patterns in Said's behavior.

I'll start by clarifying my own take on the "what's up with Said and asking for examples?"

I think it is (all else being equal) basically always fine to ask for examples. I think most posts could be improved by having them, I agree that the process of thinking about concrete examples is useful for sanity checking that your idea is real at all. And there is something good and rationalistly wholesome about not seeing it as an attack, but just as "hey, this is a useful thing to consider" (whether or not Said is consistent about this interpretation [LW(p) · GW(p)])

My take on "what the problem here is" is not the part where Said asks for examples, but that when Said shows up in a particular kind of thread, I have a pretty high expectation that there will be a resulting long conversation that won't actually clarify anything important.

The "particular kind of thread" is a cluster of things surrounding introspection, interpersonal-interaction, modeling other people's inner states, and/or interpretative labor. Said is highly skeptical of claims about things in this cluster, and the couple of times I've seen someone actually pursue a conversation with him through to completion the results have rarely seemed illuminating or satisfying. (It seems like Said's experience of things in this space is genuinely different from most people I know. He'll ask for examples, I anticipate giving examples, the examples will be subtle and not obviously real to him. I don't mind that he doesn't believe the examples, but then he'll ask a bunch of followup questions with increasing skepticism/vague-undertone-of-insultingness that is both infuriating and pointless)

I think Said sees himself as often pointing out "the emperor has no clothes", but I think he just doesn't actually have good taste in what clothes look like in a number of domains that I think are essential for improving the art of rationality. 

I do actually get some value from his first couple comments in a thread – they do serve as a useful reminder to me to step outside my current frame, see what hidden assumptions I'm making, etc. I feel fine doing this because I feel comfortable just ignoring him after he's said those initial things, when a normal/common social script would consider that somewhat rude. But this requires a significant amount of backbone. Backbone is great, more people should build it, but I don’t think it’s super correlated with people who are otherwise intellectually generative. And meanwhile there’s still something missing-stair-y about Said, where he’s phrasing his questions in ways that are just under the radar of feeling unreasonable, until you find yourself knee deep in a long annoying comment tree, so “it’s time to use some backbone” isn’t even obvious.

I do sometimes think he successfully points out "the emperor has no clothes". Or, more commonly/accurately, "the undergrad has no thesis." In some cases his socratic questioning seems like an actually-appropriate relationship between an adjunct professor and an undergrad who shows up to his philosophy class with an impassioned manifesto that doesn't actually make sense and is riddled with philosophical holes. I don't super mind when Said plays this role, but often in my experience Said is making these comments about people I respect a lot more, who've put hundreds/thousands of hours into studying how to teach rationality (which absolutely requires being able to model people's minds, what mistakes they're likely to be making, and what thought processes tend to lead to significant breakthroughs).

Said takes pride in only posting ~1 post a year or so that actually passes his bar for correct and useful. I think this is massively missing the point of how intellectual progress works. I've talked to many people who seem to reliably turn out philosophically competent advances, and a very common thread is that their early stage idea-formation is fragile, they're not always able to rigorously explain it right away. It eventually stands up to scrutiny but it wouldn't be at all helpful to subject their early idea formation to Said's questioning.

Said is holding LessWrong to the standard of a final-publication-journal, when the thing I think LessWrong needs to be includes many stages before that, when you see the messy process that actually generated those final ideas.

I do think there are some important unsolved problems here. I quite liked DirectedEvolution's comment here [LW(p) · GW(p)], where he notes:

I do agree with Said that LessWrong can benefit from improved formative evaluations. I have written some fairly popular LessWrong reviews, and one of the things I've uncovered is that some of the most memorable and persuasive evidence underpinning key ideas is much weaker and more ambiguous than I thought it was when I originally read the post. At LessWrong, we're fairly familiar as a culture with factors contributing to irreproducibility in science - p-hacking and the like.

And I do agree in many ways with Said's prior comment [LW(p) · GW(p)]:

The time for figuring out whether the ideas or claims in a post are even coherent, or falsifiable, or whether readers even agree on what the post is saying, is immediately.

Immediately—before an idea is absorbed into the local culture, before it becomes the foundation of a dozen more posts that build on it as an assumption, before it balloons into a whole “sequence”—when there’s still time to say “oops” with minimal cost, to course-correct, to notice important caveats or important implications, to avoid pitfalls of terminology, or (in some cases) to throw the whole thing out, shrug, and say “ah well, back to the drawing board” [LW(p) · GW(p)].

[...]

It is an accepted truism among usability professionals that any company, org, or development team that only or mostly does summative evaluations, and neglects or disdains formative evaluations, is not serious about usability.

We're now ~4 years into experimenting with the LessWrong Review, it's accomplished some of the goals I had for it but not all of them. Five years ago, before the first Review year was even complete, we told Said [LW(p) · GW(p)]:

I, Oli and Ray will build a better evaluative process for this online community, that incentivises powerful criticism. But right now this site is trying to build a place where we can be generative (and evaluative) together in a way that's fun and not aggressive. While we have an incentive toward better ideas (weighted karma and curation), it is far from a finished system. We have to build this part as well as the evaluative before the whole system works, and while we've not reached there you're correct to be worried and want to enforce the standards yourself with low-effort comments (and I don't mean to imply the comments don't often contain implicit within them very good ideas).

Five years later, we've built part of the evaluative system, but I did update after this year's Review that yeah, we need some kind of faster system as well. I’ve found the comments in this discussion helpful for thinking through what needs to happen here. I’ll hopefully write up a top level post about that.

For now, I agree we probably need to directly incentivize good formative critique. But I don't think Said is actually very good at that in many cases. The best critiques of (say) Circling IMO have come from people who actually understood the good things about Circling, got some value from it, and nonetheless said “but, CFAR still massively overinvested in it” or “the people who do tons of circling get better at relating but in a distorted way, where they go off to circling retreats where everyone is into Openness and Connection, and they don’t do the sort of crosstraining you need to actually also be good at working as a professional or being a good roommate.”

I agree that in the domain of “rationality training”, it’s pretty easy to fool yourself, i.e. Schools Proliferating Without Evidence [LW · GW] and whatnot. I think there’s a difficulty that lives in the territory of “it actually does take awhile to hone in on the training processes that work best, and navigating that domain is going to look from the outside like futzing around with stuff that isn’t obviously real/important.” (I have thoughts on how to do this better, but they’re outside the scope here.)

...

I note that this comment is focused on a particular genre of conversation-involving-Said, which isn't necessarily directly relevant to the case at hand. But it seemed like important background for a lot of the discussion and eventual decisionmaking here.

Replies from: adamzerner, Wei_Dai, SaidAchmiz
comment by Adam Zerner (adamzerner) · 2023-04-16T21:28:46.778Z · LW(p) · GW(p)

My take on "what the problem here is" is not the part where Said asks for examples, but that when Said shows up in a particular kind of thread, I have a pretty high expectation that there will be a resulting long conversation that won't actually clarify anything important.

Agreed. It reminds me of this excerpt from HPMoR:

"You should have deduced it yourself, Mr. Potter," Professor Quirrell said mildly. "You must learn to blur your vision until you can see the forest obscured by the trees. Anyone who heard the stories about you, and who did not know that you were the mysterious Boy-Who-Lived, could easily deduce your ownership of an invisibility cloak. Step back from these events, blur away their details, and what do we observe? There was a great rivalry between students, and their competition ended in a perfect tie. That sort of thing only happens in stories, Mr. Potter, and there is one person in this school who thinks in stories. There was a strange and complicated plot, which you should have realized was uncharacteristic of the young Slytherin you faced. But there is a person in this school who deals in plots that elaborate, and his name is not Zabini. And I did warn you that there was a quadruple agent; you knew that Zabini was at least a triple agent, and you should have guessed a high chance that it was he. No, I will not declare the battle invalid. All three of you failed the test, and lost to your common enemy."

When I blur my vision so that the details are fuzzy but the broad strokes are still visible, I too see "a pretty high expectation that there will be a resulting long conversation that won't actually clarify anything important". And I think that this approach of blurring one's vision is a wise one to adopt in this situation.

comment by Wei Dai (Wei_Dai) · 2023-04-16T20:28:46.231Z · LW(p) · GW(p)

I feel fine doing this because I feel comfortable just ignoring him after he’s said those initial things, when a normal/common social script would consider that somewhat rude. But this requires a significant amount of backbone.

I still wish [LW · GW] that LW would try my idea [LW(p) · GW(p)] for solving this (and related) problem(s), but it doesn't seem like that's ever going to happen. (I've tried to remind LW admins about my feature request over the years, but don't think I've ever seen an admin say why it's not worth trying.) As an alternative, I've seen people suggest that it's fine to ignore comments unless they're upvoted. That makes sense to me (as a second best solution). What about making that a site-wide norm, i.e., making it explicit that we don't or shouldn't consider it rude or otherwise norm-violating to ignore comments unless they've been upvoted above some specific karma threshold?

Replies from: Vladimir_Nesov, Raemon, T3t
comment by Vladimir_Nesov · 2023-04-17T19:35:10.590Z · LW(p) · GW(p)

we don't or shouldn't consider it rude or otherwise norm-violating to ignore comments unless they've been upvoted above some specific karma threshold

My guess is that people should be rewarded [LW(p) · GW(p)] for ignoring criticism [LW(p) · GW(p)] they want to ignore; it should be convenient for them to do so. So I disagree with the caveat.

This way authors are less motivated to take steps that discourage criticism (including steps such as not writing things). Criticism should remain convenient, not costly, and directly associated with the criticised thing (instead of getting pushed to be published elsewhere).

Replies from: Raemon
comment by Raemon · 2023-04-17T20:50:58.871Z · LW(p) · GW(p)

I already wrote a separate reply saying something similar, but I did particularly like your frame here and wanted to +1 it.

comment by Raemon · 2023-04-17T19:48:22.341Z · LW(p) · GW(p)

Hmm. On one hand, I do think it's moderately likely we experiment with Reacts [LW · GW], which can partially address your desire here. 

But it seems like the problem you're mostly trying to solve is not that big a problem to me (i.e. I think it's totally fine for conversations to just peter out; nobody is entitled to being responded to. I'd at least want to see a second established user asking for it before I considered prioritizing it more. I personally expect a "there is a norm of responding to upvoted comments" to make the site much worse. "Getting annoying comments that miss the point" is one of the most cited things people dislike about LW, and forcing authors to engage with them seems like it'd exacerbate it.)

Generally, people are busy, don't have time to reply to everything, and commenters should just assume they won't necessarily get a response unless the author/their-conversation-partner continues to think a conversation is rewarding.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2023-04-18T04:10:29.141Z · LW(p) · GW(p)

I’d at least want to see a second established user asking for it before I considered prioritizing it more.

I doubt you'll ever see this, because when you're an established / high status member, ignoring other people feels pretty natural and right, and few people ignore you so you don't notice any problems. I made the request back when I had lower status on this forum. I got ignored by others way more than I do now, and ignored others way less than I do now. (I had higher motivation to "prove" myself to my critics and the audience.)

If I hadn't written down my request back then, in all likelihood I would have forgotten my old perspective and wouldn't be talking about this today.

“Getting annoying comments that miss the point” is one of the most cited things people dislike about LW, and forcing authors to engage with them seems like it’d exacerbate it.)

In my original feature request, I had a couple of "agreement statuses" that require only minimal engagement, like "I don’t understand this. I give up." and "I disagree, but don’t want to bother writing out why." We could easily add more, like "I think further engagement won't be productive." or "This isn't material to my main point." And then we could experiment with setting norms for how much social reward or punishment to give out for such responses (if people's natural reactions to them cause bad consequences). I wouldn't be surprised that such a system ends up making authors more willing or more comfortable to engage less with annoying critics, and makes their LW experience better, by making it more explicit that it's ok to engage with such critics minimally.

comment by RobertM (T3t) · 2023-04-16T21:27:28.695Z · LW(p) · GW(p)

We are currently thinking about "reacts" as a way of providing users with an 80:20 for giving feedback on comments, though motivated by a somewhat different set of concerns.  It's a tricky UX problem and not at the very top of our priority list, but it has come up recently.

comment by Said Achmiz (SaidAchmiz) · 2023-04-16T20:51:57.492Z · LW(p) · GW(p)

This… still misconstrues my views, in quite substantive and important ways. Very frustrating.

You write:

Said is holding LessWrong to the standard of a final-publication-journal, when the thing I think LessWrong needs to be includes many stages before that, when you see the messy process that actually generated those final ideas.

I absolutely am not doing that. It makes no sense to say this! It would be like saying “this user test that you’re doing with our wireframe is holding the app we’re developing to the standard of a final-release product”. It’s simply a complete confusion about what testing is even for. The whole point of doing the user test now is that it is just a wireframe, not even a prototype or an alpha version, so getting as much information as possible now is extremely helpful! Nobody’s saying that you have to throw out the whole project and fire everyone involved into the sun the moment you get a single piece of negative user feedback; but if you don’t subject the thing to testing, you’re losing out on a critical opportunity to improve, to correct course… heck, to just plain learn something new! (And for all you know, the test might have a surprisingly positive result! Maybe some minor feature or little widget, which your designers threw in on a lark, elicits an effusive response from your test users, and clues you in to a highly fruitful design approach which you wouldn’t’ve thought worth pursuing. But you’ll never learn that if you don’t test!)

It feels to me like I’ve explained this… maybe as many as a dozen times in this post’s comment section alone. (I haven’t counted. Probably it’s not quite that many. But several, at least!)

I have to ask: is it that you read my explanations but found them unconvincing, and concluded that “oh sure, Said says he believes so-and-so, but I don’t find his actions consistent with those purported beliefs, despite his explicit explanations of why they are consistent with them”?

If so, then the follow-up question is: why do you think that?

I don’t super mind when Said plays this role, but often in my experience Said is making these comments about people I respect a lot more, who’ve put hundreds/thousands of hours into studying how to teach rationality (which absolutely requires being able to model people’s minds, what mistakes they’re likely to be making, what thought processes tend to lead to significant breakthroughs)

What jumps out at me immediately, in this description, is that you describe the people in question as having put a lot of time into studying how to teach rationality. (This, you imply, allows us to assume certain qualifications or qualities on these individuals’ parts, from which we may further conclude… well, you don’t say it explicitly, but the implication seems to be something like “clearly such people know what they’re talking about, and deserve the presumption of such, and therefore it’s epistemically and/or socially inappropriate to treat them as though their ideas might be bullshit (the equivalent of an eager undergrad’s philosophy manifesto)”.)

But I notice that you don’t instead (or even in addition) say anything like “people … who have a clear and impressive track record of successfully teaching rationality”.

Of course this could be a simple omission, so I’ll ask explicitly: do you think that the people in question have such a track record?

If you do, and if they do, then of course that’s the relevant fact. And then at least part of the reply to my (perhaps at least seemingly) skeptical questioning (maybe after you give some answer to a question, but I’m not buying it, or ask follow-ups, etc.) might be “well, Said, here’s my track record; that’s who I am; and when I say it’s like this, you can disbelieve my explanations, but my claims are borne out in what I’ve demonstrably done”.

Now, it’s entirely possible that some people might find such a reply unconvincing, in any given case. Being an expert at something doesn’t make you omniscient, even on one subject! But it’s definitely the sort of response which buys you a good bit of indulgence from skepticism, so to speak, about claims for which you cannot (or don’t care to) provide legible evidence, on the spot and at the moment.

But (as I’ve noted before, though I can’t seem to find the comment in question, right now), those sorts of unambiguous qualifications tend to be mostly or entirely absent, in such cases.

And in the absence of such qualifications, but in the presence of claims like those about “circling” and other such things, it is not less but rather more appropriate to apply the at-least-potentially-skeptical, exploratory, questioning approach. It is not less but rather more important to “poke” at ideas, in ways that may be expected to reveal interesting and productive strengths if the ideas are strong, but to reveal weaknesses if the ideas are weak. It is not less but rather more important not to suppress all but those comments which take the “non-bullshit” nature of the claims for granted.

(EDIT: Clarified follow-up question)

Replies from: Raemon
comment by Raemon · 2023-04-17T03:54:23.896Z · LW(p) · GW(p)

Said: I absolutely am not doing that. It makes no sense to say this! 

Yeah I agree this phrasing didn't capture your take correctly, and I do recall explicit comments about that in this thread, sorry. 

I do claim your approach is in practice often anti-conducive to people doing early stage research. You've stated a willingness (I think eagerness?) to drive people away and cause fewer posts from people who I think are actually promising. 

But I notice that you don’t instead (or even in addition) say anything like “people … who have a clear and impressive track record of successfully teaching rationality”. Of course this could be a simple omission, so I’ll ask explicitly: do you think that the people in question have such a track record?

My actual answer is "To varying degrees, some more than others." I definitely do not claim any of them have reached the point of 'we have a thing working well enough we could persuade an arbitrary skeptic our thing is real and important.' (i.e. a reliable training program that demonstrably improves quantifiable real world successes). But I think this is a process you should naturally expect to take 4-20 years. 

Meanwhile, there are many steps along the way that don't "produce a cake a skeptical third party can eat", but if you're actually involved and paying attention, like, clearly are having an effect that is relevant, and is at least an indication that you're on a promising path worth experimenting more with. I observe the people practicing various CFAR and Leverage techniques seem to have a good combination of habits that makes it easier to have difficult conversations in domains with poor feedback loops. The people doing the teaching have hundreds of hours of practice trying to teach skills, seeing mistakes people make along the way, and see them making fewer mistakes and actually grokking the skill. 

Some of the people involved do feel a bit like they're making some stuff up and coasting on CFAR's position in the ecosystem, but others seem like they're legitimately embarking on longterm research projects, tracking their progress in ways that make sense, looking for the best feedback loops they can find, etc. 

Anecdata: I talked a bunch with a colleague who I respect a lot in 2014, who seemed much smarter than me. We parted ways for 3 years. Later, I met him again, we talked a bunch over the course of a month, and he said "hey, man, you seem smarter than you did 3 years ago." I said "oh, huh, yeah I thought so too and, like, had worked to become smarter on purpose, but I wasn't sure whether it worked." 

Nowadays, when I observe people as they do their thinking, I notice tools they're not using, mistakes they're making, suggest fixes, and it seems like they in fact do better thinking. 

I think it's reasonable to not believe me (that the effect is significant, or that it's CFAR/Leverage mediated). I think it is quite valuable to poke at this. I just don't think you're very good at it, and I'm not very interested in satisfying your particular brand of skepticism.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-17T04:53:54.175Z · LW(p) · GW(p)

My actual answer is “To varying degrees, some more than others.” I definitely do not claim any of them have reached the point of ‘we have a thing working well enough we could persuade an arbitrary skeptic our thing is real and important.’ (i.e. a reliable training program that demonstrably improves quantifiable real world successes).

An arbitrary skeptic is perhaps too high a bar, but what about a reasonable skeptic? I think that, from that perspective (and especially given the “outside view” on similar things attempted in the past), if you don’t have “a reliable training program that demonstrably improves quantifiable real world successes”, you basically just don’t have anything. If someone asks you “do you have anything to show for all of this”, and all you’ve got is what you’ve got, then… well, I think that I’m not showing any even slightly unreasonable skepticism, here.

But I think this is a process you should naturally expect to take 4-20 years.

Well, CFAR was founded 11 years ago. That’s well within the “4–20” range. Are you saying that it’s still too early to see clear results?

Is there any reason to believe that there will be anything like “a reliable training program that demonstrably improves quantifiable real world successes” in five years (assuming AI doesn’t kill us all or what have you)? Has there been any progress? (On evaluation methods, even?) Is CFAR even measuring progress, or attempting to measure progress, or… what?

Meanwhile, there are many steps along the way … Anecdata …

But you see how these paragraphs are pretty unconvincing, though, right? Like, at the very least, even if you are indeed seeing all these things you describe, and even if they’re real things, you surely can see how there’s… basically no way for me, or anyone else who isn’t hanging out with you and your in-person acquaintances on a regular basis, to see or know or verify any of this?

I think it’s reasonable to not believe me (that the effect is significant, or that it’s CFAR/Leverage mediated). I think it is quite valuable to poke at this. I just don’t think you’re very good at it, and I’m not very interested in satisfying your particular brand of skepticism.

Hold on—you’ve lost track of the meta-level point.

The question isn’t whether it’s valuable to poke at these specific things, or whether I’m good at poking at these specific things.

Here’s what you wrote earlier [LW(p) · GW(p)]:

I do sometimes think [Said] successfully points out “emperor has no clothes”. Or, more commonly/accurately, “the undergrad has no thesis.” In some cases his socratic questioning seems like an actually-appropriate relationship between an adjunct professor, and an undergrad who shows up to his philosophy class writing an impassioned manifesto that doesn’t actually make sense and is riddled with philosophical holes. I don’t super mind when Said plays this role, but often in my experience Said is making these comments about people I respect a lot more, who’ve put hundreds/thousands of hours into studying how to teach rationality (which absolutely requires being able to model people’s minds, what mistakes they’re likely to be making, what thought processes tend to lead to significant breakthroughs)

Which I summarized/interpreted as:

… the implication seems to be something like “clearly such people know what they’re talking about, and deserve the presumption of such, and therefore it’s epistemically and/or socially inappropriate to treat them as though their ideas might be bullshit (the equivalent of an eager undergrad’s philosophy manifesto)”.

(You didn’t object to that interpretation, so I’m assuming for now that it’s basically correct.)

But the problem is that it’s not clear that the people in question know what they’re talking about. Maybe they do! But it’s certainly not clear, and indeed there’s really no way for me (or any other person outside your social circle) to know that, nor is there any kind of evidence for it, other than personal testimony/anecdata, which is not worth much.

So it doesn’t make sense to suggest that we (the commentariat of Less Wrong) must, or should, treat such folks any differently from anyone else, such as, say, me. There’s no basis for it. From my epistemic position—which, it seems to me, is an eminently reasonable one—these are people who may have good ideas, or they may have bad ideas; they may know what they’re talking about, or may be spouting the most egregious nonsense; I really don’t have any reason to presume one or the other, no more than they have any reason to presume this of me. (Of course we can judge one another by things like public writings, etc., but in this, the people you refer to are no different from any other Less Wrong participant, including wholly anonymous or pseudonymous ones.)

And that, in turn, means that when you say:

… but often in my experience Said is making these comments about people I respect a lot more, who’ve put hundreds/thousands of hours into studying how to teach rationality

… there is actually no good reason at all why that should mean anything or carry any weight in any kind of decision or evaluation.

(There are bad reasons, of course. But we may take it as given that you are not swayed by any such.)

comment by Gordon Seidoh Worley (gworley) · 2023-04-14T22:03:33.166Z · LW(p) · GW(p)

I don't have especially strong opinions about what to do here. But, for the curious, I've had run ins with both Said and Duncan on LW and elsewhere, so perhaps this is useful background information for folks outside the moderation team looking at this who aren't already aware (I know the moderators are aware of basically everything I have to say here because I've talked to some of them about these situations).

Also, before I say anything else, I've not had extensive bad interactions with either Said or Duncan recently. Maybe that's because I've been writing a book instead of making posts of the sort I used to make? Either way, this is a bit historical and is based on interactions from 1+ years ago.

I've faced the brunt of Said's comments before. I've spent a lot of time in very long threads discussing things with him and finally gave up because it felt like talking to a brick wall. I have a soft ban on Said on my posts and comments, where I've committed to only reply to him once and not reply to his replies to me, since it seems to go in circles and not get anywhere. I often feel frustrated with Said because I feel like I've put in a lot of work in a conversation just to have him ignore what I said, so this is mostly a rule to protect myself from going down a path that wastes my time.

Duncan and I have had some pretty extensive disagreements, mostly over norms. In particular, I've been quite displeased with Duncan for trying to unilaterally impose his preferred norms in places he did not have the authority to do so (or at least that's how I interpreted his actions). Our biggest blow up was on a post where, as I recall, he objected to me presenting claims in a way that he interpreted as bad, and claimed that I was acting in a malicious way by writing in a way that allowed such an interpretation. My understanding from a third party who helped mediate was that I was more just the incidental object of Duncan's wrath than being personally called out.

I'll also say that Duncan and I used to be neighbors (shared a backyard fence) and that was fine. I hung out at Dragon Army Barracks several times, though now that I think of it I don't think Duncan was really around when I was there.

Sorry for the lack of links above. I'm sure I could find links to threads to give examples if this matter was more important to me. It's just barely important enough to write the above, but I don't care enough to do any more work. Hopefully it's useful background information to a few folks anyway.

Replies from: Duncan_Sabien, SaidAchmiz
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-14T22:46:57.171Z · LW(p) · GW(p)

Sorry for the lack of links above.

I affirm the accuracy of Gordon's summary of our interactions; it feels fair and like a reasonable view on them.

comment by Said Achmiz (SaidAchmiz) · 2023-04-14T22:07:45.602Z · LW(p) · GW(p)

I’ve had run ins with both Said and Duncan on LW and elsewhere

To clarify—you’re not including me in the “and elsewhere” part, are you? (To my knowledge, I’ve only ever interacted with you on Less Wrong. Is there something else that I’m forgetting…?)

Replies from: gworley
comment by Ruby · 2023-04-14T20:46:18.237Z · LW(p) · GW(p)

Some meta notes about moderation process

Preamble: I like transparency

I think it is much better when the LessWrong userbase knows more about how site moderation happens, i.e. who does it, what the tools are, what actions and decisions are, who’s responsible for what, how they think about things, etc. While being careful to say that LessWrong is not a democracy and we will not care equally about the judgments of everyone on the site just because they're an active member[1], I think transparency is valuable here for at least these overlapping reasons:

  • It means we can be held accountable by people either calling out decisions or policies they think are bad, or leaving because of them (vote with your feet). I really value getting feedback.
  • By letting ourselves be held accountable by people we wish to be accountable to [LW · GW], we set up good incentives and open ourselves to corrective feedback.
  • It builds trust and confidence (assuming people like what we're doing) that the site is somewhere worth investing your time and attention.

LessWrong team members mostly speak for themselves

I think it's important that LessWrong team members don't have to pretend to all agree with each other some aggregate official team belief. We each have our models which while being pretty correlated are our own, and it's good when we just speak from them. See my post on this topic for more elaboration [LW · GW].

This approach is a little tricky in the context of moderation since people really need a clear sense of the policies and rules, and that's hard if different moderators say different and conflicting things. I'm not sure how to best handle this, but one idea is that moderators are always clear about what's their own judgment vs. what they think the policy we plan to endorse is. The team is currently three people who are very in-sync and work very closely, so for any bigger mod decision, we'll have checked to clarify underlying policies. Or so I hope.

Who moderates, who's responsible?

The LessWrong team is part of Lightcone Infrastructure [LW · GW]. The core LessWrong team is Ruby (me, team lead), Raemon, and RobertM[2]. Habryka is head of Lightcone and also responsible for reviving LW/building LW2.0, and was for a long time the chief moderator. He isn't the day-to-day moderator any more, but we regularly consult him and he'll poke us about things he thinks are an issue. Other Lightcone team members will often weigh in on moderation or take some actions sometimes[3].

I (and Lightcone generally) have found that many collaborative situations go much better if a single person owns each decision; other people can weigh in, but ultimately that person decides and gets to be held responsible. (You can also hold people responsible for who they delegate ownership to.) This is how we aspire to operate on the LessWrong team: for any decision, we can tell you who was responsible for making it.

In that vein:

Generally, I am the final decision-maker for LessWrong matters unless I have delegated. (Habryka could fire me or in very exceptional circumstances overrule me, but it'd be surprising for that to happen.) However, there are two general large delegations that I've made, one very relevant for moderation. RobertM is CTO and has ownership of all technical aspects of the codebase (architecture, standards, etc). Raemon is Head of New User and <something something corrective/problem user moderation> (we haven't figured out what exactly to call it) moderation, i.e. Ray gets to decide which new users are welcome on the site, how they get onboarded, etc. Ray is also in charge of judgments about what we do when users seem to violate norms or make the site worse, i.e. bans, warnings, and other matters. However, decisions about overall site policies, values, norms, etc. remain with me.

The Duncan and Said situation plausibly requires some kind of corrective action of one or more people's behavior, and therefore it is Ray's final call what happens with those users. If you want a certain decision made (e.g. disciplinary action), you should focus on addressing his cruxes, etc. To the extent there's a broader site policy question (e.g. which behaviors are ok or not in general), that's in my court. I care a lot about the moderation judgment of others, particularly Raemon and Habryka, who shape my own thinking a lot, but if you want a certain site policy, know that my cruxes are key (though if you can persuade Raemon or Habryka, there's a good chance I'll be convinced too).
 

  1. ^

    I (and I am pretty sure others) care a great deal about the opinions and feelings of: 1) users who we think share the core values of the site as we see them and who have good judgment about things, 2) the users who we think contribute most to LessWrong's goals of intellectual progress, etc. It feels less important to me to appease users whose presence on the site I'm more ambivalent about.

  2. ^

    jimrandomh also works heavily on the LessWrong site, though not as part of the core team.

  3. ^

    Less so at the moment, as policies are unclear since LW Team is adjusting moderation policy [LW · GW].

Replies from: rhollerith_dot_com
comment by RHollerith (rhollerith_dot_com) · 2023-04-15T03:15:05.299Z · LW(p) · GW(p)

Very informative (and thanks for your efforts)! Who owns the domain name lesswrong.com?

comment by RobertM (T3t) · 2023-04-15T07:48:33.235Z · LW(p) · GW(p)

This is not directly related to the current situation, but I think it is in part responsible for it.

Said claims that it is impossible to guess what someone might mean by something they wrote, if for some reason the reader decided that the writer likely didn't intend the straightforward interpretation parsed by the reader.  It's somewhat ambiguous to me whether Said thinks that this is impossible for him, specifically, or impossible for people (either most or all).

Relevant part of the first comment [LW(p) · GW(p)] making this point:

(B) Alice meant something other than what it seems like she wrote.

What might that be? Who knows. I could try to guess what Alice meant. However, that is impossible. So I won’t try. If Alice didn’t mean the thing that it seems, on a straightforward reading, like she meant, then what she actually meant could be anything at all.

Relevant part of the second comment [LW(p) · GW(p)]:

“Impossible” in a social context means “basically never happens, and if it does happen then it is probably by accident” (rather than “the laws of physics forbid it!”). Also, it is, of course, possible to guess what someone means by sheer dumb luck—picking an interpretation at random out of some pool of possibilities, no matter how unlikely-seeming, and managing by chance to be right.

But, I can’t remember a time when I’ve read what someone said, rejected the obvious (but obviously wrong) interpretation, tried to guess what they actually meant, and succeeded. When I’ve tried, the actual thing that (as it turned out) they meant was always something which I could never have even imagined as a hypothesis, much less picked out as the likeliest meaning. (And, conversely, when someone else has tried to interpret my comments in symmetric situations, the result has been the same.)

In my experience, this is true: for all practical purposes, either you understand what someone meant, or it’s impossible to guess what they could’ve meant instead.

For the sake of argument, I will accept that Said finds this impossible.  With that said, the idea that this is impossible - or that it "basically never happens, and if it does happen then it is probably by accident" - is incompatible with my experience, and the experience of approximately anybody I have queried on the subject.  (Said may object here, and claim that people are not reliable reporters.  And yet conversations happen anyways; I've done this before in situations where there was no possible double illusion of transparency.  This is not to say that there are no trade-offs; I would not be surprised if Said finds himself confidently holding an incorrect understanding of others' claims less often than most people.)

My guess is that this is responsible for a large part of what many consider to be objectionable about Said's conversational style.  Many other objections presented in the comments here (and in the past) seem confused, wrong, or misguided.  It might be slightly more pleasant to read Said's comments if he added some trimmings of "niceness" to them, but I agree with him that that sort of thing carries meaningful costs.  Rather, I think the bigger problem with the way Said responds to other people's writing, when he is e.g. seeking clarification or arguing a point, is that he does not believe in the value of interpretive labor [LW(p) · GW(p)], and therefore doesn't think it's valuable to do any upfront work to reduce how much interpretive labor his interlocutors will need to do, since according to him, that should in any case be "zero".

This basically doesn't work when you're trying to communicate with people who do, in fact, successfully[1] do interpretive labor, and therefore expect their conversational partners to share in that effort, to some degree.


Separately, and more to the matter at hand, although I think that there were supererogatory paths that Duncan could have taken to reduce escalation at various points, I do think that Said's claim that Duncan advocated for a norm of interaction accurately described as "don't ask people for examples of their claims" was obviously unsupported by his linked evidence.  After Duncan calls this out, Said doubles down, and then later (in the comments on this post) tries to offload this onto a distinction between whether he was making a claim about what Duncan literally wrote, vs. what could straightforwardly be inferred about Duncan's intentions (based on what he wrote).

I find this uncompelling given that Said has also admitted (in the comments here) that his literal claim was indeed a strawman, while at the same time the entire thread was precipitated by gjm indicating that he thought the claim was a strawman.  Said claims to have then given a more "clarified and narrow form" of his claim in response to gjm's comment:

If “asking people for examples of their claims” doesn’t fit Duncan’s stated criteria for what constitutes acceptable engagement/criticism, then it is not pretending, but in fact accurate, to describe Duncan as advocating for a norm of “don’t ask people for examples of their claims”. (See, for example, this subthread [LW(p) · GW(p)] on this very post, where Duncan alludes to good criticism requiring that the critic “[put] forth at least half of the effort required to bridge the inferential gap between you and the author as opposed to expecting them to connect all the dots themselves”. Similar descriptions and rhetoric can be found in many of Duncan’s recent posts and comments.)

Duncan has, I think, made it very clear that a comment that just says “what are some examples of this claim?” is, in his view, unacceptable. That’s what I was talking about. I really do not think it’s controversial at all to ascribe this opinion to Duncan.

If Said is referring to the parenthetical starting with "See, for example", then I am sorry to say that adding such a parenthetical in the context of repeating the original claim nearly verbatim (to describe Duncan as advocating for a norm of “don’t ask people for examples of their claims”, and “what are some examples of this claim?” is, in his view, unacceptable) does not count as clarifying or narrowing his claim, but is simply performing the same motion that Duncan took issue with, which is attempting to justify a false claim with evidence that would support a slightly-related but importantly different claim.


I'm leaving out a lot of salient details because this is, frankly, exhausting.  I think the dynamics around Killing Socrates were not great, but I also have less well-formed thoughts there.

  1. ^

    Sometimes - often enough that it's worth relying on, at least.

Replies from: Vladimir_Nesov, SaidAchmiz, SaidAchmiz
comment by Vladimir_Nesov · 2023-04-16T05:04:11.873Z · LW(p) · GW(p)

This basically doesn't work when you're trying to communicate with people who do, in fact, successfully do interpretive labor, and therefore expect their conversational partners to share in that effort, to some degree.

Ability to be successful is crucially different from considering it a useful activity. The expectation of engaging isn't justified by capability to do so.

comment by Said Achmiz (SaidAchmiz) · 2023-04-15T08:27:01.721Z · LW(p) · GW(p)

Separately from my other reply, I want to call attention to this:

This basically doesn’t work when you’re trying to communicate with people who do, in fact, successfully[1] do interpretive labor, and therefore expect their conversational partners to share in that effort, to some degree.

[1] Sometimes—often enough that it’s worth relying on, at least.

I have said this in the past, I think, but I want to note again that I am deeply skeptical of the claim that such “interpretive labor” actually succeeds often enough to be worth its serious downsides. I think that—much more often than most people here care to admit—the result of such efforts is illusory understanding, and (to speak frankly) the erosion of the ability, of all involved, to detect bullshit (both their own and that of others), and to identify when they simply do not know or do not understand something.

I think that it would be greatly to the benefit of all participants of Less Wrong if everyone here was all much, much more reluctant to perform such “labor”.

comment by Said Achmiz (SaidAchmiz) · 2023-04-15T08:19:54.346Z · LW(p) · GW(p)

I just want to make it clear—since I think this point may have gotten lost in the shuffle—that I still think that this part of my comment is pretty clearly true:

Duncan has, I think, made it very clear that a comment that just says “what are some examples of this claim?” is, in his view, unacceptable. That’s what I was talking about.

(The next sentence, which says “I really do not think it’s controversial at all to ascribe this opinion to Duncan.”, is now clearly false; it is, obviously, controversial, as demonstrated by the controversy which has resulted. I think that to retain its truth value while still being assertable now, that sentence would now have to say something like “I really do not see why it should be controversial at all to ascribe this opinion to Duncan”, or perhaps “I don’t see that there was any reason, prior to that point, to have expected it to be controversial at all to ascribe this opinion to Duncan”, or something else along these lines. But this is not as important as the previous two sentences.)

I have yet to see any compelling reason to conclude that this is false. (I am aware that Duncan specifically disclaims this, and do not find that to be a compelling reason, in this circumstance.) (EDIT: See below for more)

I say this to forestall any potential misunderstandings in general, but also, more specifically, to note that any analysis of the situation which depends on the notion that I’ve admitted to having been wrong, or to have insisted on a position which I now admit was untenable, must be mistaken—as I have, indeed, neither admitted doing, nor done, either of those things.

Finally, I must object that, relative to the original, sloppily phrased, version of the claim (‘various proposed norms of interaction such as “don’t ask people for examples of their claims” and so on’, with the contextual implication that this referred to Duncan), the above-quoted sentences absolutely (and, in my view, quite obviously) do count as clarifying and narrowing the claim. The quoted text provides a narrower and more specific rendition of the claim, and makes it clear that this is the claim which was intended. (It is, again, a perfectly normal conversational pattern, where one person says a thing, another person says “that seems weird/wrong”, and the first says “what I meant was [some clarified / corrected version]”; there is nothing blameworthy about that.)

EDIT: What I wrote in this comment [LW(p) · GW(p)] is also relevant (the “you” here is Duncan, naturally):

Now, you may protest that the claim is actually false. Perhaps. Certainly I don’t make any pretensions to omniscience. But neither do I withdraw the claim entirely. While I would no longer say that I “do not think it’s controversial at all to ascribe this opinion” to you (obviously it is controversial!), your previous statements (including some in this very discussion thread) and your behavior continue (so it seems to me) to support my claim about your apparent views.

I now say “apparent”, of course, because you did say that you don’t, in fact, hold the belief which I ascribed to you. But that still leaves the question of why you write and act as though you did hold that belief. Is it that your actual views on the matter are similar to (perhaps even indistinguishable for practical purposes from) the previously-claimed belief, but differ in some nuance (whether that be important nuance or not)? Is it that there are some circumstantial factors at play, which you perceive but I do not? Something else?

I think that it would be useful—not just to you or to me, but to everyone on Less Wrong—to dig into this further.

Replies from: habryka4
comment by habryka (habryka4) · 2023-04-15T17:10:01.947Z · LW(p) · GW(p)

Just to provide a concrete example, I am quite confident Duncan would not mind a comment of the form "Do you have more examples?" from me or really anyone else on the Lightcone team.

I don't know whether he would always respond, but my sense is the cost Duncan (and a decent number of other authors) perceive as a result of that post is related primarily to the follow-up conversation to that question, not the question itself, as well as the background model of the motivations of the person asking it.

Not sure how much this counts as evidence for you, but I do want to flag that I would take bets against your current suggested prediction.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T17:23:38.272Z · LW(p) · GW(p)

This certainly counts as evidence. (I’m not sure how we’d operationalize “how much” here, but that’s probably not necessary anyhow.)

Basically, what you’re providing here is part of an answer to the question I ask (“you”, again, refers to Duncan):

But that still leaves the question of why you write and act as though you did hold that belief. Is it that your actual views on the matter are similar to (perhaps even indistinguishable for practical purposes from) the previously-claimed belief, but differ in some nuance (whether that be important nuance or not)? Is it that there are some circumstantial factors at play, which you perceive but I do not? Something else?

And you’re saying, I take it, that the answer is “indeed, there are circumstantial factors at play”.

Well, fair enough. The follow-up questions are then things like “What is the import of those circumstantial factors?”, and “Taking into account those factors, what then is the fully clarified principle/belief?”, and “What justifies that principle/belief?”, and so on.

I don’t know if it would be productive to explore those questions here, in this thread. (Or anywhere? Well, that depends on the outcome of this discussion, I imagine…)

I will note, though, that it seems like a whole lot of this could’ve been avoided if Duncan had replied to one of my earliest comments, in that thread or perhaps even an earlier thread on a previous topic, with something like: “To clarify, I think asking for examples is fine, and here are links to me doing so [A] [B] [C] and here are links to other people doing so to me and me answering them [1] [2] [3], but I specifically think that when you, Said, ask for examples, that is bad, for specific reasons X Y Z which, as we can see, do not apply to my other examples”.

(Indeed, he can still do so!)

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T17:42:44.920Z · LW(p) · GW(p)

I note for any other readers that Said is evincing a confusion somewhere in the neighborhood of the Second Guideline [LW · GW] and the typical mind fallacy.

In particular, it's false that I "write and act as though I did hold that belief," in the sense that a supermajority of those polled would check "true" on a true-false question about it, after reading through (say) two of my essays and a couple dozen of my comments.

("That belief" = "Duncan has, I think, made it very clear that a comment that just says 'what are some examples of this claim?' is, in his view, unacceptable.")

It's pretty obvious that it seems to Said that I write and act in this way. But one of the skills of a competent rationalist is noticing that [how things seem to me] might not be [how they actually are] or [how they seem to others].

Said, in my experience, is not versed in this skill, and does not, as a matter of habit, notice "ah, here I'm stating a thing about my interpretation as if it's fact, or as if it's nearly-universal among others."

e.g. an unequivocally true statement would have been something like "But that still leaves the question of why you write and act in a way that indicates to me that you do hold that belief."

In addition to being unequivocally true (since it limits its claims to the contents of Said's own experience, about which he has total authority to speak), it also highlights the territory more clearly, since it draws the reader's attention to the fact that what's going on isn't:

Duncan writes and acts in a way that indicates [period; no qualification] that he holds that belief

but rather

Duncan writes and acts in a way that indicates [to me, Said] that he holds that belief

Which makes it more clear that the problem is either in Duncan's words and actions or in Said's oft-idiosyncratic interpretation, rather than eliding the whole question and predeciding that of course it's a Duncan-problem.

My various models of Said retort that:

  • This is a meaningless distinction; too small to care about and drowned out by noise (I disagree)
  • Everybody Knows that his statement comes with a prepended "it seems to me" and it's silly to treat it as if it were intended to be a stronger claim than that (I argue that this is a motte-and-bailey [LW · GW])
  • This is too much labor to expect of a person (that they correctly confine their commentary to true things, or herald their speculation as speculation; I am unsympathetic)

But I think a lot of Said's confusions would actually make more sense to Said if he came to the realization that he's odd, actually, and that the way he uses words is quite nonstandard, and that many of the things which baffle and confuse him are not, in fact, fundamentally baffling or confusing but rather make sense to many non-Said people.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T17:51:13.887Z · LW(p) · GW(p)

But I think a lot of Said’s confusions would actually make more sense to Said if he came to the realization that he’s odd, actually, and that the way he uses words is quite nonstandard, and that many of the things which baffle and confuse him are not, in fact, fundamentally baffling or confusing but rather make sense to many non-Said people.

There is nothing shocking about finding oneself to be unusual, even (or, perhaps, especially) on Less Wrong. So this particular revelation isn’t very… revelatory.

But I don’t think that many of the things that baffle and confuse me actually make sense to many others. What I do think is that many others think that those things make sense to them—but beneath that perception of understanding is not, in fact, any real understanding.

Of course this isn’t true of everything that I find confusing. (How could it be?) But it sure is true of many more things than anyone generally cares to admit.

(As for using words in a nonstandard way, I hardly think that you’re one to make such an ~~accusation~~ characterization! Of the two of us, it seems to me that your use of language is considerably more “nonstandard” than is mine…)

(EDIT: Wording [LW(p) · GW(p)])

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T18:47:15.808Z · LW(p) · GW(p)

As for using words in a nonstandard way, I hardly think that you’re one to make such an accusation!

I think the best response to this is one of Said's own comments:

I have (it would seem) a reputation for making certain sorts of comments, which are of course not intended as “attacks” of any sort (social, personal, etc.), but which are sometimes perceived as such—and which perception, in my view, reflects quite poorly on those who thus perceive said comments.

I am not optimizing particularly hard for Said not feeling criticized but also treating my comment above as an "accusation" seems to somewhat belie Said's nominal policy of looking down on people for interpreting statements as attacks.

In any event: oh yah for sure I use language SUPER weird, on the regular, but I'm also a professional communicator whose speech and writing is widely acclaimed and effective and "nuh uh YOU'RE the one who uses words weird" is orthogonal to the question of whether Said has blind spots and disabilities here (which he does).  

(If there was another copy of Said lying around, I might summon him to point out the sheer ridiculousness of responding to "You do X" with "how dare you say I do X when YOU do X", since that seems like the sort of thing Said loves to do.  But in any event, I don't think having a trait would in fact make me less able to notice and diagnose the trait in others.)

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T18:53:13.929Z · LW(p) · GW(p)

“Accusation” in the grandparent wasn’t meant to imply anything particularly blameworthy or adversarial, though I see how it could be thus perceived, given the context. Consider the word substituted with “characterization” (and I will so edit the previous comment).

comment by Said Achmiz (SaidAchmiz) · 2023-04-15T19:00:28.254Z · LW(p) · GW(p)

In any event: oh yah for sure I use language SUPER weird, on the regular, but I’m also a professional communicator whose speech and writing is widely acclaimed and effective and “nuh uh YOU’RE the one who uses words weird” is orthogonal to the question of whether Said has blind spots and disabilities here (which he does).

I dispute the claim of effectiveness. (As for “acclaimed”, well, the value of this really depends on who’s doing the acclaiming.)

And the question certainly is not orthogonal. My point was that your use of words is more weird and more often weird than mine. You have no place to stand, in my view, when saying of me that I use words weirdly, in some way that leads to misunderstandings. (I also don’t think that the claim is true; but regardless of whether it’s true in general, it’s unusually unconvincing coming from you.)

(If there was another copy of Said lying around, I might summon him to point out the sheer ridiculousness of responding to “You do X” with “how dare you say I do X when YOU do X”, since that seems like the sort of thing Said loves to do.

Indeed this is not ridiculous, when the X in question is something like “using words weirdly”, which can be understood only in a relative way. The point is not “how dare you” but rather “you are unusually unqualified to evaluate this”.

But in any event, I don’t think having a trait would in fact make me less able to notice and diagnose the trait in others.)

This could surely not be claimed for arbitrary traits, but for a trait like this, it seems to me to make plenty of sense.

Replies from: Raemon
comment by Raemon · 2023-04-16T00:06:12.406Z · LW(p) · GW(p)

Quick note re: "acclaimed": Duncan had a fairly largish number of posts highly upvoted during the 2021 Review [LW · GW]. You might dispute whether that's a noteworthy achievement, but, well, in terms of what content should be considered good on LessWrong, I don't know of a more objective measure of "what the LessWrong community voted on as good, with lots of opportunity for people to argue that each other are mistaken." (and, notably, Duncan's posts show up in the top 50 and a couple in the top 20 whether you're tracking votes from all users or just high-karma ones)

(I suppose seeing posts actually cited outside the LessWrong community would be a better/more-objective measure of "something demonstrably good is happening, not potentially just circle-jerky". I'm interested in tracking that although it seems trickier)

((Not intending to weigh in on any of the other points in this comment))

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-16T03:29:54.344Z · LW(p) · GW(p)

(I suppose seeing posts actually cited outside the LessWrong community would be a better/more-objective measure of "something demonstrably good is happening, not potentially just circle-jerky". I'm interested in tracking that although it seems trickier)

In order from "slightly outside of LessWrong" to "very far outside of LessWrong," I refactored the CFAR handbook against (mild) internal resistance from CFAR and it was received well, I semi-regularly get paid four or low-five figures to teach people rationality, I've been invited to speak at 4+ EA Globals and counting, my In Defense of Punch Bug essay has 1800 claps which definitely did not primarily come from this community, my Magic color wheel article has 18,800 claps and got a shoutout from CGPGrey, my sixth grade classroom was featured in a chapter in a book on modern education, and my documentary on parkour was translated by volunteers into like eight different languages and cited by the founder as his favorite parkour video of all time (at at least one moment in time). *shrug

comment by localdeity · 2023-04-14T20:48:43.169Z · LW(p) · GW(p)

Let's see if I can give constructive advice to the parties in question.

First, I'll address Said, on the subject of asking for examples (and this will inform some later commentary to Duncan):

It might be helpful, when asking for examples, to include a few words about why, or what kind of examples you're looking for, or other information.  This serves multiple purposes: (a) it can help the author choose a good example and good explanation [and avoid wasteful "No, not that kind of example; I wanted [...]" exchanges]; (b) it signals cognitive effort on your part and helps to distinguish you from a "low-effort obtuse fool or nitpicker"; (c) it gives the author feedback about how their post landed with real users [based on you as a sample, plus the votes on your comment suggesting how the rest of the audience feels]; (d) a sort of proof-of-work assures other people that you care at least some amount about getting their reply.

Examples of being a little more specific:

  • I am skeptical of your claim.  Can you give an example?
    • To add weight to this, you could say "I googled X and the most plausible result was Y, which still doesn't match your claim."  (As a general principle, if it's worth posting a comment, it's worth doing a Google search.)
  • Your point is stated very abstractly and I'm not sure what you mean.  Could you illustrate with an example?
  • I can think of a couple of interpretations of your position, some of which I think are wrong, but I'd rather not argue against a position you don't actually hold.  Could you give an example in xyz scenario?
  • That's an amazing claim, and any examples would be delicious and things I'd like to look at further.  Could you point me at any?

Sometimes it does work fine to ask the bare question without giving any other data about your thought-process, but I expect it's usually hard to be sure that this is true of any particular situation.

To Duncan:

It seems that an important part of your perspective is the perception that leaving a question or criticism unanswered makes it look like the criticism is well-founded, and the author has realized he's wrong and is too cowardly to admit it, or something like that.  This then leads to the pressure you say you feel, to engage in the effortful process of replying to all the criticisms/questions.

I note that, on Less Wrong, there are vote counters.  I submit that if there's a one-word "Examples?" comment that is downvoted with no replies, this looks plausibly to bystanders like "an obtuse low-effort poster asking a dumb question, not worthy of the author's time".  This generalizes to more verbose comments as well.  I do often read downvoted comments anyway, but much of the time (not quite always) I agree that they should be downvoted and aren't worth people's time.  Conversely, if it is highly upvoted, that is evidence that something wasn't clear to a bunch of your readers, which seems to make it worth addressing.

So you could downvote what you perceive to be low-effort obtuse questions/criticisms.  In fact, you have plenty of karma, so your strong downvote is powerful.  Then maybe you check back a day later, and if it's been upvoted significantly (which should be rare if your evaluation of these comments is well-calibrated), then you feel pressure to reply.  (Or maybe not even then; on many forums, threads fall off the front page and people just stop bothering to reply, partly because few will see it.  Spend more time on Hacker News to experience this.)  In principle one could imagine forum features like "ignore this comment until karma reaches threshold" or "notify me when karma reaches threshold".

I note that you sometimes post on Facebook, where there is no downvote button.  I wonder if some of your beliefs about commenting come from there.

You could also advertise in the moderation guidelines on your posts that you'll downvote low-effort comments.

Come to think of it, I think you even have the power to delete comments on your posts.  Delete them with messages like "low-effort obtuse question" when appropriate.  There's certainly a part of me that is uneasy about suppressing any comments from anyone, but once you accept that any moderation at all is to be done, it seems like that would be a thing to do.

Have you been doing much of the above?  Are there major problems with it?

comment by Algon · 2023-04-25T19:59:30.914Z · LW(p) · GW(p)

I find Said Achmiz to be vaguely offputting, abrasive. And yet, I find it difficult not to read his comments when I see them. Even so, reading Said's opinions has ~always left me better off than I was before. Thinking about it, the vibe of Said's posts reminds me of Hanson's, which can only be an endorsement in my view.
 

comment by Ruby · 2023-04-15T20:57:49.771Z · LW(p) · GW(p)

Some of Ruby's high-level broader thoughts about Moderation Philosophy, what LessWrong ought to be, kinds of behavior that are okay, what to do with users who are perhaps behaving badly, etc.

Replies from: Ruby, Ruby
comment by Ruby · 2023-04-15T20:58:03.469Z · LW(p) · GW(p)

Politeness/Cooperativeness/Etc

When I first joined the LessWrong team four years ago, I came in with the feeling that politeness and civility were very important and probably standards of those should be upheld on LessWrong. I wrote Combat vs Nurture [LW · GW] and felt that LessWrong broadly ought to be a bit more Nurture-y. 

Standard arguments in favor of being nicer/friendlier/etc.:

  • Most basically, it encourages people to participate more. Most people don't want to feel "attacked" in a way that feels impersonal, uncaring, or mean – even if they'd like helpful criticism that feels more clearly rooted in collaborative truthseeking.
  • Many ideas are Butterfly Ideas [LW · GW] and an overly critical atmosphere can stifle them

And further many will claim:

  • It's just not that much harder to rephrase criticisms in a nicer, more constructive-feeling way.

Some counterclaims are:

  • You cannot in fact rephrase things "nicer" without changing the meaning, often in ways that matter.
  • If you require people to put in this effort (and for some it is much more effort), then you are taxing their participation.
  • It is a plus that some people are not worrying about other people's feelings. Worrying about other people's feelings is a liability for truthseeking.
    • (To which the counterargument is: humans are humans, conversation does not proceed well when people feel threatened or attacked, we have to work with who we are, and that means perhaps putting some thought into how people feel.)

 

My thoughts on this

I think there's truth to each side of the argument. Neither maximal politeness nor maximal disregard for politeness is correct. Which makes it tricky, because then you have to find the optimal balance.

In fact, I think part of what made LessWrong a special place for intellectual discussion in the first place is that you got to be fairly disagreeable, say what you think, and worry less about people's feelings. Habryka feels some of the most "difficult/disagreeable" (my words) people were pushed out when we went from LW1.0 to 2.0, and that might have been a mistake, because there was a spirit to them that was valuable. They have in fact some part of the spirit that is core to LessWrong, and that we don't want to lose.

Habryka can probably articulate that better than I can (my stating it here is downstream of him), but I find it compelling. And it makes me outright reluctant to ban Said or others who seem pretty damn difficult, and frankly I do really want to discuss things with them myself, because I dunno, there might be something there. The site can't be made up entirely of disagreeable people, but you want a few of them around to question things, push against social reality, poke you, etc.

For that matter, I think this same sentiment, that there's sometimes a lot of value in the more difficult people, perhaps precisely because of their difficultness, is why we have not been tougher on Duncan generally, and have continued to welcome him on the site even though Duncan is difficult/work (as measured by how many hours the team has spent figuring out how to respond). Some of this is the particularly valuable posts (which he has even been paid for writing), but it's also cases like this, where Duncan has pushed hard for something to be corrected. I want to believe that there could have been more effective and less costly ways for Duncan to instigate some action getting taken here, but in fact what he did (even though unpleasant) has made us focus on something possibly quite important. (It's also possible it's a major distraction and everything would have been better if Duncan had walked away; unclear.)

I don't expect to ever want to use GPT-N to inspect comments for unnecessary rudeness and automatically message people, even if all else considered I'd prefer less rudeness.

There is such a thing as too much, though, and Said has gone too far before. I think the warning given here [LW(p) · GW(p)] was likely well earned, and also that Said is now engaging in the behavior he was called out for; that behavior is still too much, and it needs to be corrected. It's in Raemon's court to make a determination here. This comment, however, suggests where I'm at with site philosophy.

Replies from: Ruby, humm1lity, JacobKopczynski
comment by Ruby · 2023-04-15T21:43:20.209Z · LW(p) · GW(p)

I'm reminded of a related point here around banning people. When you ban a person, you do not just have the consequence of:

  • That person is no longer around
  • You discourage the kinds of behavior they engaged in

You also:

  • Sadden everyone who overall valued that person, and alienate them somewhat
  • Make anyone who thinks their own behavior resembles the banned person's behavior (even if you, the banner, think they're different) feel more afraid
  • Upset the people who particularly cared about what the banned person represented, and make them feel the site doesn't value what they value (even if you think you do, banning an exemplar is scary).

(Which isn't to say these effects aren't mirrored for not banning someone. That also alienates people who think the potentially banned behavior is bad, people who think it's not acceptable in general, etc.)

All this to say, let's say there's some behavior that's valuable in moderation, e.g. "being Socratic and poking a bit in a way that's a bit uncomfortable" but is bad if you have too much of it. There's a cost to banning the person who does it too much in that it discourages and threatens many of the people who were doing it a good amount. I think that ought to factor into moderation decisions.

comment by Caerulea-Lawrence (humm1lity) · 2023-04-16T18:40:44.161Z · LW(p) · GW(p)

Hello Ruby, 

I did read some of the other comments here, and also the article you linked about butterflies (which I enjoyed):

Since I am a relatively new member, I have ideas, but not that much experience with the LW site, technically or historically. I do have experience from various other communities and social arenas, and maybe something from that can be applicable here as well. I myself experienced getting down-voted and 'attacked' the moment I let out some butterflies in the community, which even made me leave feeling hurt and disappointed. Reading Duncan_Sabien's post Killing Socrates, and also seeing another person who wrote about MBTI have their butterfly squished, made me realize I wasn't really evaluating my experience clearly, and I decided to re-activate my account and try to engage a bit more.

The situation with Duncan_Sabien and Said, to me, is similar to how people value PvP (Player versus Player) and PvE (Player versus Environment) in games. Both are useful, but opening up for both all the time can be a bit taxing for everyone involved.

Having a dedicated space, where arguments are more sharp and less catering to emotion or other considerations, is very good. But it creates issues when it is in the same room as the nursery.

Big idea:
Is it possible to have something like a PvE zone/meta-tag here on LW; a Supportive, co-operative and/or nurturing zone/meta-tag, where both new and veteran authors/posters alike, can safely hatch/store their ideas? A place where the expectations and norms for comments are catered towards being PvE only?

And similarly, to have a dedicated PvP zone/meta-tag, involving more direct confrontation and battle of ideas, concepts, knowledge and wits, with (possibly, or possibly not) the goal of sharpening and improving ideas? Where you might want more direct confrontation or debate, and where the focus is more on skill than tact.

When posting an article/post, the choice of where to put things is clear, and you can have more streamlined expectations of what you will get if you post under either tag. And of course, you can always choose to add both at the same time.

With regard to the Front Page, or what should be seen as the most visible layer of LW, ideas, concepts and texts should be hatched to a certain standard, and be available for both zones.
I am not very familiar with either Duncan_Sabien or Said, but if I intuit correctly, maybe Duncan_Sabien's posts and also his stress level as an author would fare better if he could let new articles hatch more slowly and safely. And Said wouldn't have to watch his words as much if he knew that posts on the Front Page, or with the particular PvP tag, were 'available' for a certain level of direct interaction.

Which means that a comment, that might be 'flagged' in the PvP zone, could be welcome in the PvE zone, and vice versa. It isn't a complete solution, but I was hoping it could contribute somewhat.

On a side-note:
With regard to the standards of comments, I hope to see a bigger focus on fostering good commentators. Highly skilled PvP players, should get recognition as good and useful players, as should PvE commentators that are good at meeting the criteria for the PvE section. 
And to differentiate good PvP comments from good PvE comments, why not simply let everyone choose their primary alignment (PvE or PvP) and then let the respective groups up-vote good commentators. 
A little distinction close to their names could also help in a quick evaluation of posters.

I do not have a good idea for what to do about the Front Page, but with more information, I might find some idea that could be worthy of pursuit.

Edit: This is just a rough sketch, and I might patch it out if given enough reason to. But Said said he agreed, even though he loves WoW, so for now I'm going to let it stand.

Kindly,
Caerulea-Lawrence

Replies from: SaidAchmiz, Ruby
comment by Said Achmiz (SaidAchmiz) · 2023-04-16T19:59:23.897Z · LW(p) · GW(p)

It is interesting that you use this analogy.

It so happens that I’ve spent a good deal of time playing World of Warcraft (as I have [LW · GW] written about [LW · GW] a few [LW(p) · GW(p)] times [LW(p) · GW(p)]), which, of course, also has PVP as well as PVE elements. And if I were analogizing participation on Less Wrong to aspects of WoW gameplay, I would unhesitatingly say that the sort of patterns of communication and engagement which I prefer (for myself) and admire (in others) are most like the PVE, not the PVP, part of WoW.

What I mean by that is the following. World of Warcraft famously includes many different “things you can do” in the game (the better to appeal to a broad player base)—you can do solo questing, you can advance trade skills, you can explore, you can go hunting for exotic pets, you can engage in “world PVP”[1], etc., etc. However, all of that is in some sense peripheral; there are three sorts of activities which I would consider to be “core” to the experience: roleplaying, organized PVP, and dungeons (including, and especially, raids).

Dungeons and raids are high-end PVE content, requiring the cooperative participation of anywhere from 5 to 40[2] people. Organized PVP is battlegrounds and arenas—that is, teams of players facing each other on defined battlefields, fighting to achieve some objective (or simply kill everyone on the other team before they do the same to you). And roleplaying is, by its nature, more amorphous and less inherently structured, but in overall form it boils down to using the chat functionality and the character emote features to act out various scenarios (which are defined wholly by the players—think of any text-based roleplay, except with character avatars being portrayed by WoW characters), possibly aided by some aspects of the “actual” game world[3].

Now, the thing about roleplaying in WoW is that there aren’t any “rules” or “game mechanics” that are imposed on it by World of Warcraft, the computer game. The players can, of course, define and follow whatever rules they like, but this has no more force than following the rules of a tabletop RPG (like D&D). It’s all just text. You can have your character slay purported dragons, just as you can in a TTRPG, but this is unconnected with any actual WoW-game dragons—it’s just text. Even if you have your actual WoW character take this or that WoW-game action—including the killing of actual game creatures—to add verisimilitude to the roleplay, the two things still have no connection with each other except that which is imposed by the shared fantasy of the roleplay.

In other words—in terms of the game mechanics of WoW—roleplaying is epiphenomenal. It does not involve or result in accomplishing anything. (Which is not to say it can’t be fun!) Any in-game character actions taken as part of roleplaying, per se, have no requirements imposed on them, and cannot in any meaningful way fail, since their WoW-game consequences as such are irrelevant to the roleplay.

This is very different from high-end PVE.

I’ve written about WoW raiding (see the links at the start of this comment). It is a very seriously and determinedly cooperative environment, and high-end raiding guilds/teams exhibit a degree of coordination, of unitary action, which is deeply impressive. (And it’s very easy to get used to this sort of thing, to start to take it for granted—until you try, for example, to defeat some difficult raid boss with a less experienced or more ad-hoc raid group, and find, to your dismay, that what seemed easy, even boring, for a team where everyone knows exactly what to do and calmly does the correct thing every time, is impossible for a team without that degree of both individual skill and group synchrony.)

In high-end WoW PVE (in raiding, in “heroic” dungeons, etc.), success and failure, for all that they are made up of only bits and pixels, nevertheless very much satisfy the condition of “not going away when you stop believing in them”. If you don’t perform the correct in-game actions, you simply will not defeat the encounters. You either do it right or you fail.

And there are all sorts of ways [LW · GW] to try to ensure that everyone on your team performs as well as is required. But if someone is doing something incorrectly, which they must do in order for you to defeat some raid encounter, either they fix their mistake, or you replace them, or you don’t succeed. There aren’t any other options. Similarly, if your team is failing to defeat some encounter(s), either you identify the problem and fix it, or you don’t succeed. It doesn’t matter how anyone feels about the situation, or about each other, or about anything else. The game code has no concern for any of that. You must play correctly, or you will fail.

I emphasize again that WoW high-end PVE is a deeply, thoroughly cooperative endeavor. You cannot, by construction, gain any benefit whatever from causing any other member of your raid team to fail to defeat a raid boss—because it’s the whole team that succeeds or fails, together. If you cause any other team member to perform worse, you sabotage your own chances of success. (Certainly there are “free rider” problems, and similar game-theoretic concerns—but those can, at worst, motivate you to invest less effort than you otherwise might; they offer no reason to direct your efforts against other players.)

And (not unrelatedly, I think) almost all good high-end PVE guilds in WoW—and especially those raid teams which take on the most challenging of raid content—tend to be friendly, supportive places, with camaraderie aplenty… while, at the same time, expecting, and demanding, nothing less than one’s consistent best, from all their members.

My point is this: the sort of distinction you are proposing, seems more to me like the distinction between roleplaying and PVE, than like that between PVE and PVP. (I can think of no aspect of participation on Less Wrong which I would analogize to PVP in WoW or any similar game.)

If you wish to “play” in an unstructured way, explore, etc., that is fine. There is no reason to abjure such activities wholesale. But in order to accomplish anything non-trivial—to collectively take on a real challenge of any sort—one has to make demands on those who wish to take part. This has nothing to do with opposition, with any adversarial context or attitude. It’s not PVP, in other words. It’s PVE with real stakes.


  1. That is, chance hostile encounters with players of the opposite faction, while traveling through the open world. ↩︎

  2. Depending on the particular dungeon/raid, and the expansion being played. ↩︎

  3. One might also think of this sort of roleplay as “LARPing, but in WoW instead of in real life”. (This in contrast with, for example, having your WoW characters sit down, in-game, at a table in an inn, and then playing Dungeons and Dragons, using the in-game chat in place of something like IRC or Discord.) ↩︎

Replies from: humm1lity
comment by Caerulea-Lawrence (humm1lity) · 2023-04-16T21:21:36.497Z · LW(p) · GW(p)

Yes, got it. Thanks for taking the time.

comment by Ruby · 2023-04-16T18:54:55.518Z · LW(p) · GW(p)

Hi Caerulea-Lawrence,

Thanks for the suggestion, I think it is one worth giving thought, though tricky to implement in practice.

LessWrong (in its first incarnation) had different sections like "Main" and "Discussion", but it didn't work great in the end. People became afraid to post on Main, so everything ended up in Discussion. And while that split might work for a niche community, as LessWrong becomes more and more of a destination (due to the rising popularity of AI), we'd still have to enforce a minimum standard on Discussion/PvE to keep the quality from diluting catastrophically, which means you end up facing the same challenges again (but with more people).

I'm interested in solutions here, but it is tricky. Right now I'm interested in Open Threads that have lower bars. Shortform is also supposed to be more of a Butterfly place, though I'd want to give it more thought before making it a more sanctioned 101 area. But there are lots of things to explore.

Replies from: humm1lity
comment by Caerulea-Lawrence (humm1lity) · 2023-04-16T21:33:01.134Z · LW(p) · GW(p)

Hello again Ruby,

You are welcome. You answered before I had time to write:

Edit: This is just a rough sketch, and I'll be happy to patch it up if prompted. 

Yeah, I imagine I am missing a lot of nuances, history of LW and otherwise. 

If you want my specific help with anything, let me know. I'm only on the outside looking in, and there is only so much I am able to see from my vantage point. 

I do believe I could make my idea work somewhat, and I understand it would have to accommodate a lot of different issues I might not be aware about, but I would be willing to give it a try.

With regard to PvE, I do not mean it as a sleeping pillow where anything goes, or PvP as a free-for-all. There would be just as strict rules on both sides, but there would be different nuances, and probably different people giving the down-votes and commenting. It is more the separation between typical communication forms and understanding. So, maybe the whole analogy is bad. (Blaming you for this, Said :)

I wish you all the best whatever you choose to do, and hope you find a solution that errs a little bit less - as hoped for.

Kindly,
Caerulea-Lawrence

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-16T21:46:20.694Z · LW(p) · GW(p)

So, maybe the whole analogue is bad (Blaming you for this Said :)

Hah.

For what it’s worth, I do, actually, agree with the overall thrust of your suggestion. I have made similar suggestions myself, in the past… unfortunately, my understanding is that the LW team basically don’t think that anything like this is workable. I don’t think I agree with their reasoning, but they seem sufficiently firm in their conviction that I’ve mostly given up on trying to convince anyone that this sort of thing is a good idea.

(At one time, after the revival of Less Wrong, I hoped that the Personal / Frontpage distinction would serve a function similar to the one you describe. Unfortunately, the LW system design / community norms have been taken in a direction that makes it impossible for things to work that way. I understand that this, too, is a principled decision on the LW team’s part, but I think that it’s an unfortunate one.)

Replies from: Raemon, humm1lity
comment by Raemon · 2023-04-16T22:03:16.397Z · LW(p) · GW(p)

fwiw I think we've considered this sort of idea fairly seriously (I think there are a few nearby ideas clustered together, and it seems like various users have very different opinions on which ones seem "fine" and which ones seem pointed in a horribly wrong direction. I recall Benquo/Zack/Jessicata thinking one version of the idea was bad, although not sure I recall their opinion clearly enough to represent it)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-16T22:29:21.764Z · LW(p) · GW(p)

I think there are a few nearby ideas clustered together, and it seems like various users have very different opinions on which ones seem “fine” and which ones seem pointed in a horribly wrong direction.

That does seem plausible (and frustrating).

I recall Benquo/Zack/Jessicata thinking one version of the idea was bad, although not sure I recall their opinion clearly enough to represent it

I would be very interested to hear from any or all of those people about their opinions on this topic!

Replies from: Raemon
comment by Raemon · 2023-04-16T22:43:09.266Z · LW(p) · GW(p)

I maybe want to specify-in-my-words the version of this I'm most enthusiastic about, to check that you in fact think this version of the thing is fine, rather than a perversion of rationality that should die-in-a-fire and/or not solve any problems you care about:

There are two clusters of norms people choose between. Both emphasize truthseeking, but have different standards for some flavor of politeness, how much effort critics are supposed to put in, Combat vs Nurture [LW · GW], etc. Authors pick a default setting but can change setting for individual posts. 

Probably even the more-combaty-one has some kind of floor for basic politeness (you probably don't want to be literal 4chan?) but not at a level you'd expect to come up very often on LessWrong. 

There might be different moderators for each one.

Does that sound basically good to you?

Replies from: Vladimir_Nesov, SaidAchmiz
comment by Vladimir_Nesov · 2023-04-22T22:16:47.225Z · LW(p) · GW(p)

I think the precious thing lost in the Nurture cluster [LW(p) · GW(p)] is not Combat, but tolerance for or even encouragement of unapologetic and uncompromising dissent. This is straightforwardly good if it can be instantiated without regularly spawning infinite threads of back-and-forth arguing (unapologetic and uncompromising).

It should be convenient for people who don't want to participate in that to opt out, and the details of this seem to be the most challenging issue.

comment by Said Achmiz (SaidAchmiz) · 2023-04-16T23:26:32.586Z · LW(p) · GW(p)

Hmmm. I… do not think that this version of the thing is fine.

(I may write more later to elaborate on why I think that. Or maybe this isn’t the ideal place to do that? But I did want to answer your question here, at least.)

Replies from: Raemon
comment by Raemon · 2023-04-16T23:42:41.553Z · LW(p) · GW(p)

Nod. Since it somewhat informs the solution space I'm considering, I think I'll go ahead and ask here what seems not-fine about it. (Or, maybe to resolve a thing I'm actually confused about: what seems different about this phrasing from what Caerulea said?)

comment by Caerulea-Lawrence (humm1lity) · 2023-04-16T21:59:58.002Z · LW(p) · GW(p)

You are taking punches like a true champ. :) 

I do believe the piece that is missing is emotions, human weakness, vulnerability and compassion.

If that isn't enough, it is time to bring out the megaphone and start screaming "Misanthropy!" in the streets.
I'll join you, no worries. We can even wear matching WoW costumes. 

NB: (I'm also blaming you for this comment, Said. Have you no shame?)

comment by Czynski (JacobKopczynski) · 2023-04-16T03:42:36.265Z · LW(p) · GW(p)

It is a plus that some people are not worrying about other people's feelings. Worrying about other people's feelings is a liability for truthseeking.

(To which the counterargument is humans are humans, conversation does not proceed better when people feel threatened or attacked, we have to work with who we are, and that means perhaps putting some thought into how people feel.)

If the counterargument is that humans are humans... then, well, we must become more. And isn't this the place for that, particularly on the particular axis of truth-seeking?

Replies from: T3t
comment by RobertM (T3t) · 2023-04-16T03:52:37.461Z · LW(p) · GW(p)

we must become more

Yes, of course - while not forgetting that we should not create systems that only function if we have already achieved that future state.  (While also being wary of incentivizing fragility, etc.  As always, best to try to solve for the equilibrium.)

comment by Ruby · 2023-04-15T20:58:22.029Z · LW(p) · GW(p)

Technological Solutions

I find myself increasingly in favor of tech solutions to moderation problems. It can be hard for users to change their behavior based on a warning, but perhaps you can do better than just ban them – instead shape their incentives and save them from their own worst impulses. 

Only recently has the team been playing with rate limits as an alternative to bans, which can be used to strongly encourage users to improve their content (if by no other mechanism than incentivizing them to invest more time into fewer posts and comments). I don't think it should be overly hard to detect nascent Demon Threads [LW · GW] and then intervene. Slowing them down both gives the participants time to reflect and handle the emotions that are coming up, and gives the mod team more time to react.
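(As a rough illustration only — this is not the LessWrong codebase, and the class name, parameters, and thresholds below are all hypothetical — a per-user sliding-window rate limit of the kind described might be sketched like this:)

```python
import time


class CommentRateLimiter:
    """Hypothetical sketch of a sliding-window rate limit:
    allow at most `max_comments` per user within the last
    `window_seconds` seconds."""

    def __init__(self, max_comments=3, window_seconds=3600):
        self.max_comments = max_comments
        self.window_seconds = window_seconds
        self._timestamps = {}  # user_id -> list of recent comment times

    def allow(self, user_id, now=None):
        """Return True (and record the comment) if the user is under
        the limit; False if they must wait for old comments to age out."""
        now = time.time() if now is None else now
        # Keep only timestamps still inside the sliding window.
        recent = [t for t in self._timestamps.get(user_id, [])
                  if now - t < self.window_seconds]
        if len(recent) >= self.max_comments:
            self._timestamps[user_id] = recent
            return False
        recent.append(now)
        self._timestamps[user_id] = recent
        return True
```

A real implementation would persist the timestamps and probably vary the limit per user or per thread, but the core decision (count recent comments, refuse when over a threshold) is this simple.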

In general, I'd like to build better tools for noticing places that would benefit from intervention, and have more ready trigger-action plans for making them go better. In this recent case, we were aware of the exchanges but didn't have a go-to thing to do. Some of this was not being sure of policies regarding certain behaviors, and hashing those out is much slower than the pace at which a thread proceeds. In my ideal world, we're clearer on policy and we know what our tools are, so it's easy to act.

It might be apparent to everyone, but in late 2021, day-to-day leadership went from Habryka to me as Habryka went to lead the newly created Lightcone Infrastructure more broadly. My views on moderation are extremely downstream of Oli's, but they're my own, and it's taken time for me to develop more confident takes on how to do things (Oli's views are not so well codified that it would have been feasible to just try to do what he would have done, even if I'd wanted to). All that is a long way of saying that I and the current team are to some degree building up our moderation policies fresh, and trying to build them for LessWrong in 2023, which is a different situation than in 2018. I hope that as we figure things out more and more, it becomes easier/cheaper/faster for us to moderate generally.

I might write more in a bit; posting this for now.

comment by ShardPhoenix · 2023-04-15T08:14:18.912Z · LW(p) · GW(p)

This whole drama is pretty TL;DR, but based on existing vibes I'd rather the rules lean (if a lean is necessary) in favor of overly disagreeable gadflies rather than overly sensitive people who try to manipulate the conversation by acting wounded.

comment by Dagon · 2023-04-14T19:07:22.805Z · LW(p) · GW(p)

[ I don't have strong opinions on the actual individuals or the posts they've made and/or objected to.  I've both enjoyed and been annoyed by things that each of them have said, but nothing has triggered my "bad faith/useless/ignore" bit on either of them.  I understand that I'm more thick-skinned than many, and I care less about this particular avenue of social reinforcement than many, so I will understand if others fall on the "something must be done" side of the line, even though I don't.  I'm mostly going to ask structural questions, rather than exploring communication or behavioral preferences/requirements. ]

Is there anything we can learn from the votes (especially from people who are neither the commenter nor the poster) on the possibly-objectionable threads and posts?  Is this something moderation needs to address, or is voting sufficient (possibly with some tweaks to the algorithm)?

Echoing Adam's point, what is the budget that admins have for time spent policing subtleties of fairly verbose engagement?  None of the norms under discussion seem automatable, nor even cheap to detect/adjudicate.  This isn't spam, this isn't short, obviously low-value comments; this is (apparently) well-thought-out, reasonable debate.  Whether it's useful or not, whether it's well-motivated or not, is insanely difficult to classify.  Worse, any attempt to simplify will be subject to adversarial Goodhart (not implying actual intent to harm, but a legit difference of opinion over what "the problem" is, and what changes are needed).

comment by Caerulea-Lawrence (humm1lity) · 2023-04-24T17:09:03.993Z · LW(p) · GW(p)

400 comments... :)

When I read Killing Socrates, I had no idea it alluded to Said in any way. The point I took from it [LW(p) · GW(p)] was that it is important to treat both commenters and authors as responsible for the building process.

My limited point of view on Duncan_Sabien and Said is the following:

I really loved the above post by Duncan_Sabien. It was amazing, and to my comment on that post, they answered this [LW(p) · GW(p)]. It felt reassuring and caring, and fitting for my comment.

I did really enjoy my brief interaction with Said as well. I wrote an idea [LW(p) · GW(p)], they answered with a valid, but much more solid critique of a specificity [LW(p) · GW(p)], shooting way above idea-level. Which made me first confused, then irritated, then angry, until I decided to just go for what I truly wanted to answer them, which was:


Yes, got it. Thanks for taking the time.
 

Which, I mean, looks pretty dismissive. Said, however, answered this:


Hah.

For what it’s worth, I do, actually, agree with the overall thrust of your suggestion. I have made similar suggestions myself, in the past… unfortunately, my understanding is that the LW team basically don’t think that anything like this is workable. I don’t think I agree with their reasoning, but they seem sufficiently firm in their conviction that I’ve mostly given up on trying to convince anyone that this sort of thing is a good idea.

(At one time, after the revival of Less Wrong, I hoped that the Personal / Frontpage distinction would serve a function similar to the one you describe. Unfortunately, the LW system design / community norms have been taken in a direction that makes it impossible for things to work that way. I understand that this, too, is a principled decision on the LW team’s part, but I think that it’s an unfortunate one.)

To which I didn't answer, not because I did not appreciate their answer, but because it felt finished. It was a very different kind of interaction, but still one I value. The acknowledgment I read in the short "Hah." is still touching to read.


The way I see it, Said and Duncan_Sabien are on very opposing ends of a spectrum. Chaotic Useful and Lawful Useful, respectively. I'll describe them a bit more below.

Said is the 'gentle' Crocodile in the well-kept garden. If you want to go into the depths of the big pond, and you can't deal with his bite, it should light up as a warning that there seems to be some incoherence between how 'deep' you want to go, and how deep you should go. 

Duncan_Sabien is more of a 'protective' Husky. They want to focus on creating a perceived safe space for progress. Usually gentle, can get very verbal about what is right and wrong in their opinion, and want to make sure things are good.

(I'm sorry if that is offensive somehow, it wasn't meant to simplify, but to add more info in a short amount of space)
 

Ironically, I believe they are pointing at and working on improving the exact same issues, with the only difference being the methods and form. Hopefully the focus and the response, from mods and users, will help pave a way forward.
 

Ideal scenario:

If I look at something of an ideal scenario, it would be that Said and Duncan_Sabien get together, hash it out (with mediators if need be) and then combine their efforts. That would be a great achievement, and one I would like to see here on LW. It would spell a step forward, especially towards higher understanding and progress between what can seem like entrenched points of view. 

It reminds me of the divide between left and right in politics: why don't they try to collaborate and create the best possible solution together? Of course, they would disagree on how to collaborate and how to create a solution, but since they see different parts of reality from different view-points, can't that be complementary rather than antagonistic?

Moreover, I believe they are veteran enough to be worthy of the effort. If they can resolve things, I assume it will have a cascading effect, and potentially resolve much underlying tension that has been part of this community for years. Not to mention, it would give some great traction going forward.


Kindly,
Caerulea-Lawrence

comment by Gentzel · 2023-04-18T11:33:53.787Z · LW(p) · GW(p)

My model of the problem boils down to a few basic factors:

  1. Attention competition prompts speed and rewards some degree of imprecision and controversy with more engagement.
  2. It is difficult to comply with many costly norms and to have significant output/win attention competitions.
  3. There is debate over which norms should be enforced, and while getting the norms combination right is positive-sum overall, different norms favor different personalities in competition.
  4. Just purging the norm breakers can create substantial groupthink if the norm breakers disproportionately express neglected ideas or comply with other neglected and costly but valuable norms.
  5. It is costly for 3rd parties to adjudicate and intervene precisely in conflicts involving attention competition, since they are inherently costly to sort out.

General recommendations/thoughts:

  1. Slow the pace of conversation, perhaps through mod rate limits on comment length and frequency or temporary bans. This seems like a proportional response to argument spam and attention competition, and would seem to push toward better engagement incentives without inducing groupthink from overzealous censorship.
  2. If entangled in comment conflict yourself, aim to write more carefully, clearly, and in a condensed manner that is more inherently robust against adversarial misinterpretation. If the other side doesn't reciprocate, make your effort explicit to reduce the social cost of unilaterally not responding quickly (e.g. leaving a friendly temporary comment about responding later when you get time to convey your thoughts clearly).
  3. To the degree possible, reset and focus on conversations going forward, not publicly adjudicating who screwed-up what in prior convos. While it is valuable to set norms, those who are intertwined in conflict and stand to competitively benefit from the selective enforcement of the norms they favor are inherently not credible as sources of good norm sets.

In general we should be aiming for positive-sum and honest incentives, while economizing in how we patch exploits in the norms that are promoted and enforced. Attention competition makes this inherently hard, thus it makes sense to attack the dynamic itself.

comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-14T18:54:12.101Z · LW(p) · GW(p)

I think that the problem could be alleviated with the following combination of site capabilities:

  • Duncan should be able to prevent Said's comments and posts from being visible to Duncan's own profile, making Said invisible to Duncan.
  • Duncan should also have the ability to make his reasons for blocking Said from his own posts and invisibling Said's output elsewhere legible. For example, if Said replied to one of Duncan's comments in a third-party post, Duncan should not have to see the comment from Said. Said's comments on Duncan's third-party-post comments could get auto-tagged with a note saying something like "Duncan has set Said to 'invisible' and cannot see this comment."
  • It might be good if users who block or invisible others could provide an explanation in a way that's publicly available but not highly visible. For example, a user interested in knowing why Duncan set Said to invisible could go on Duncan's profile to find out, but the explanation would not get automatically linked to Said's comments replying to Duncan to avoid Duncan having the ability to unilaterally tag all of Said's comments replying to Duncan with Duncan's subjective criticism of Said.

My view is that the lengthy and unpleasant back and forth is largely due to Said thinking Duncan is ignoring important criticism, while Duncan thinks Said is trying to tear down his reputation in public with shallow criticism or just ruin Duncan's day. Those are both very normal and relatable perspectives, and I think that a technological solution like this would offer each of them most of what they want - Duncan would get insulation from criticism he views as destructive, Said would get to continue offering criticism he thinks is constructive, and neither of them would have to deal directly with the other each time such a disagreement comes up.

comment by Viliam · 2023-04-14T23:19:37.965Z · LW(p) · GW(p)

My two cents:

I suspect that Said is really bad at predicting which of his comments will be perceived as rude.

If I had to give him a rule of thumb, it would probably be like this: "Those that are very short, only one or two lines, but demand an answer that requires a lot of thinking or writing. That feels like entitlement to make others spend orders of magnitude more effort than you did. Even if from the Spock-rational perspective this makes perfect sense (asking someone to provide specific examples to their theory can benefit everyone who finds the theory interesting; and why write a long comment when a short one says the same thing), the feeling of rudeness is still there, especially if you do this repeatedly enough that people associate this thing with your name. Even if it feels inefficient, try to expand your comments to at least five lines. For example, provide your own best guess, or a counter-example. Showing the effort is the thing that matters, but too short length is a proxy for low effort." This sounds susceptible to Goodharting, but who knows...

When I think about Duncan, my first association is the "punch bug" thing (the article, the discussion [LW · GW] on LW, Duncan's complaints). My impression, and I am sorry if this is an unfair generalization, was that Duncan is a smart and interesting thinker and writer, but bad at accepting when people disagree with his proposals. And quick to escalate the disagreement into meta debates. This is also a connotational challenge to Raemon's timeline: yes, the current round of escalation started three months ago; but I think it is interesting to note that Duncan has a longer history of proposing new moderation policies in response to getting replies he didn't like. (Here, my unsolicited advice would be to accept that there is a certain amount of noise in human communication, however frustrating that is. We can, and should, try to reduce it, but we should not mandate noise-less communication under the implied threat of bans.)

I have no strong opinion on whether banning Said would be a net benefit to Less Wrong. In absence of a strong opinion, I would default to "in dubio pro reo". (Plus there is the option for especially annoyed people to ban him from commenting on their articles.) On the other hand, I generally trust the moderators, and I understand that doing the job properly [LW · GW] requires making hard decisions even in controversial situations. (So, if Raemon says that Said is banned, from my perspective this concludes the debate.) I just... basically feel bad about Duncan being simultaneously an accuser and also co-authoring the rules of conduct. (Not sure how to put this precisely.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T00:24:41.278Z · LW(p) · GW(p)

I suspect that Said is really bad at predicting which of his comments will be perceived as rude.

If I had to give him a rule of thumb, it would probably be like this: “Those that are very short, only one or two lines, but demand an answer that requires a lot of thinking or writing. That feels like entitlement to make others spend orders of magnitude more effort than you did. Even if from the Spock-rational perspective this makes perfect sense (asking someone to provide specific examples to their theory can benefit everyone who finds the theory interesting; and why write a long comment when a short one says the same thing), the feeling of rudeness is still there, especially if you do this repeatedly enough that people associate this thing with your name. Even if it feels inefficient, try to expand your comments to at least five lines. For example, provide your own best guess, or a counter-example. Showing the effort is the thing that matters, but too short length is a proxy for low effort.” This sounds susceptible to Goodharting, but who knows...

Why waste time say lot word, when few word do trick?

Look, we covered this [LW · GW] already [LW · GW]. We covered the “effort” [LW(p) · GW(p)] part, we covered the “Goodharting” [LW(p) · GW(p)] part, we covered the “add boilerplate” [LW(p) · GW(p)] part, we covered the “exchange of demands” [LW(p) · GW(p)] part. It’s all been done.

The bottom line (and I apologize for being blunt, because it’s clear that you’re saying this in good faith and with good intentions) is that this isn’t a case of “Gosh, if only I had known this one weird trick! But now I do, and all is well henceforth”. The problem (however we construe it and whomever we blame for it) is much deeper than that. It has to do with structural concerns and principled commitments. It won’t be solved by padding my comments to some word count.

Replies from: lsusr, Viliam, philh
comment by lsusr · 2023-04-26T05:33:28.517Z · LW(p) · GW(p)

I also tend to write concisely. A trick I often use is writing statements instead of questions. I feel statements are less imposing, since they carry less of an implicit demand for a response.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-26T05:55:42.353Z · LW(p) · GW(p)

Hmm, it’s an interesting tactic, certainly. I’m not sure that it’s applicable in all cases, but it’s interesting. Perhaps you might point to some examples of how it’s best applied?

Replies from: lsusr
comment by lsusr · 2023-04-28T02:23:36.923Z · LW(p) · GW(p)

"Perhaps you might point to some examples of how it’s best applied?" ⇒ "I'd be curious to read some examples of how it’s best applied."

By changing from a question to a statement, the request for information is transferred from a single person [me] to anyone reading the comment thread. This results in a diffusion of responsibility, which reduces the implicit imposition placed on the original parent.

Another advantage of using statements instead of questions is that they tend to direct me toward positive claims, instead of just making demands for rigor. This avoids some of the more annoyingly asymmetric aspects of Socratic dialogue.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-28T03:13:31.054Z · LW(p) · GW(p)

“Perhaps you might point to some examples of how it’s best applied?” ⇒ “I’d be curious to read some examples of how it’s best applied.”

The request can be fulfilled by anyone either way, though. There doesn’t seem to me to be any difference, in that regard.

Another advantage of using statements instead of questions is that they tend to direct me toward positive claims, instead of just making demands for rigor. This avoids some of the more annoying aspects of Socratic dialogue.

Hmm. I’m afraid I find the linked essay somewhat hard to make sense of.

But, in any case, I’ll give your comments some thought, thanks.

comment by Viliam · 2023-04-15T10:30:32.146Z · LW(p) · GW(p)

Clearly, we have different preferences for what a good comment should look like. I am curious, is there a website where your preferred style is the norm? I would like to see how it works in practice.

(I realize that my request may not make sense; websites have different styles of comments. But if there is a website that feels more compatible with your preferences, I'd like to update my model.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T17:04:56.575Z · LW(p) · GW(p)

Not completely. Of course some websites approach it, from different directions. Current Less Wrong approaches it from one direction, old Less Wrong from a slightly different direction (and gets closest, I’d say), Data Secrets Lox from another.

comment by philh · 2023-04-18T00:33:09.027Z · LW(p) · GW(p)

we covered the “add boilerplate” part

It still seems to me that my "less social-attack-y" rewrite of one of your comments, in that thread, does feel less social-attack-y. You said then that you had no idea why it would be so.

If that's still the case - and if you meant something like "I dispute that it is less social-attack-y" rather than "I acknowledge that it is less social-attack-y but I have no idea why" - then I think this lends credence to Viliam's idea that you're bad at predicting which of your comments will be perceived as rude.

(And I think our other exchange [LW(p) · GW(p)] from this thread is more evidence. You said a thing would not be perceived as insulting, I said I would perceive it as insulting, and you replied that it shouldn't be perceived that way. But of course what should be and what is are two different things, and it seems to me that you're less capable of tracking that distinction in this domain than in others.)

The comment I'm replying to doesn't explicitly say that Viliam's wrong here. It's consistent with you thinking any of

  • I'm quite capable of predicting it, I just have principled reasons not to take it into account.
  • I'm indeed bad at predicting it, but that's fine because I have principled reasons not to take it into account anyway.
  • I have no idea how good I am at predicting it, but that's fine because etc.

But it gives the impression, to me, more of the former than the latter two.

And (supposing I'm right so far, which I may not be) I don't think it would be surprising, if [your overestimate of your skills] turns out to be a crux as to [your principles generating the kind of comment you write]. That is, if the same principles would generate comments less-perceived-as-rude, if you were indeed better at predicting which of your comments would be perceived as rude.

(e: I should say that I wrote this comment before seeing the verdict [LW(p) · GW(p)]. Dunno if I'd have written it differently, if I'd seen it.)

comment by ambigram · 2023-04-16T13:50:49.923Z · LW(p) · GW(p)

It feels like an argument between a couple where person A says "You don't love me, you never tell me 'I love you' when I say it to you." and the person B responds "What do you mean I don't love you? I make you breakfast every morning even though I hate waking up early!". If both parties insist that their love language is the only valid way of showing love, there is no way for this conflict to be addressed. 

Maybe person B believes actions speak louder than words and that saying "I love you" is pointless because people can say that even when they don't mean it. And perhaps person B believes that that is the ideal way the world works, where everyone is judged purely based on their actions and 'meaningless' words are omitted, because it removes a layer of obfuscation. But the thing is, the words are meaningless only to person B; they are not meaningless to person A. It doesn't matter whether or not the words should be meaningful to person A. Person A as they are right now has a need to hear that verbal affirmation; person A genuinely has a different experience when they hear those words; it's just the way person A (and many people) are wired.

If you want to have that relationship, both sides are going to have to make adjustments to learn to speak the other person's language. For example, both parties may agree to tapping 3 times as a way of saying "I love you" if Person B is uncomfortable with verbal declarations. 

If both parties think the other party is obliged to adjust to their frame, then it would make sense to disengage; there is no way of resolving that conflict. 


I actually think I prefer Said's frame on the whole, even though my native frame is closer to Duncan's. However, I think Said's commenting behavior is counter-productive to long-term shifting of community norms towards Said's frame. 

I am not familiar with the history, but from what I've read, Said seems to raise good points (though not necessarily expressed in productive ways). It's just that the subsequent discussion often devolves into something that's exhausting to read (like I wish people would steelman Said's point and respond to that instead of just responding directly, and I wish people would just stop responding to Said if they felt the discussion is getting nowhere rather than end up in long escalating conflicts, and I don't have a clear idea of how much Said is actually contributing to the dynamics in such conversations because I get very distracted by the maybe-justified-maybe-not uncharitable assumptions being thrown around by all the participants).

I think there are small adjustments that Said can make to the phrasing of comments that can make a non-trivial difference, that can have positive effects even for people who are not as sensitive as Duncan.

For example, instead of saying "I find your stated reason bizarre to the point where I can’t form any coherent model of your thinking here", Said could have said "I don't understand your stated reason at all". This shifts from a judgment on Duncan's reasoning to a sharing of Said's own experience, which (for me, at least) removes the unnecessary insult[1]. I suspect other people's judgments have limited impact on Said's self-perception, so this phrasing won't sound meaningfully different to Said, but I think it does make a difference to other people, whether or not it is ideal that this is how they experience the world. And maybe it's important that people learn to care less about other people's judgments, but I don't think it's fair to demand that they just change instantly and become like Said, or to say that people who can't or won't do that simply should not be allowed to participate at all (or like saying sure you can participate, as long as you are willing to stick your hand in boiling water even though you don't have gloves and I do).

Being willing to make adjustments to one's behavior for the sake of the other party would be a show of good faith, and builds trust. At least in my native frame/culture, direct criticism is a form of rudeness/harm in neutral/low-trust relationships and a show of respect in high-trust relationships, and so building this trust would allow the relationship to shift closer to Said's preferred frame.


Of course, this only works if Duncan is similarly willing to accommodate Said's frame. 

I agree that there is something problematic with Said's commenting style/behavior given that multiple people have had similar complaints, and given that it seems to have led to consequences that are negative even within Said's frame. And it is hard to articulate the problem, which makes things challenging. However, it feels like in pushing against Said's behaviors, Duncan is also invalidating Said's frame as a valid approach for the community discourse. This feels unfair to people like Said, especially when it seems like a potentially more productive norm (when better executed, or in certain contexts). That's why it feels unfair to me that Said is unable to comment on the Basics of Rationalist Discourse post. 

It's a bit like there's a group of people who always play a certain board game by its rules, while there's another group where everyone cheats and the whole point is to find clever ways to cheat. To people from the first group, cheating is immoral and an act of bad faith, but to the other group, it's just a part of the game and everyone knows that. One day, someone from the first group gets fed up with people from the second group, and so they decide to declare a set of rules for all game players, that says cheating is wrong. And then they add that the only people who get to vote are people who don't cheat. Of course the results aren't going to be representative! And why does the first group have the authority to decide the rules for the entire community?

I don't know for certain if this is the right characterization, but here are a few examples why I think it is more of an issue of differing frames rather than something with clear right/wrong: (I am not saying the people were right to comment as they did, just pointing out that the conflict is not just about a norm, there is a deeper issue of frames)

  • In a comment thread, Said suggests that Duncan banned him likely because Duncan doesn't like being criticized, even though Duncan explicitly said otherwise. To Duncan, this is a wrongful accusation of lying (I think), because Duncan believes Said is saying that Duncan-in-particular is wrong about his own motivations. However, I think Said believes that everyone is incapable of knowing their true motivations, and therefore, his claim that Duncan might be motivated by subconscious reasons is just a general claim that has no bearing on Duncan as a person, i.e. it's not intended as a personal attack. It's only a personal attack if you share the same frame as Duncan.
  • When clone of saturn said "However, I suspect that Duncan won't like this idea, because he wants to maintain a motte-and-bailey where his posts are half-baked when someone criticizes them but fully-baked when it's time to apportion status.", I read it to mean that "I suspect" applies to the entire sentence, not just the first half. This is because I started out with the assumption that it is impossible for anyone to truly know a person's motivations, and therefore the only logical reading is that "I suspect" also applies to "he wants to maintain a motte-and-bailey". There's no objective true meaning to the sentence (though one may agree on the most common interpretation). It's like how, for some people, saying "I don't like it" implies "and I want you to stop doing it", but for others it just means "I don't like it" and "that's just my opinion, you do you". Thus, I personally would consider it a tad extreme (though understandable given Duncan's experiences) to call for moderator response immediately without first clarifying with clone of saturn what was meant by the sentence.

While I do think Said is contributing to the problem (whether intentionally or unintentionally), it would be inappropriate to dismiss Said's frame just because Said is such a bad example of it. This does not mean I believe Said and Duncan are obliged to adjust to each other's norms. Choosing to disengage and stay within their respective corners, is in my opinion, a perfectly valid and acceptable solution.


I didn't really want to speak up about the conflicts between Duncan and other members, because I don't have the full picture. However, this argument is spilling out into public space, so it feels important to address the issue.

As someone who joined about a year ago, I have had very positive experiences on LW so far. I have commented on quite a few of Duncan's posts and my experience has always been positive, in part because I trust that Duncan will respond fairly to what I say. Reading Duncan's recent comments, however, made me wonder if I was wrong about that.

Because I am less sensitive than Duncan, it often felt like Duncan was making disproportionately hostile and uncharitable responses. I couldn't really see what distinguished comments that triggered such extreme responses from other comments. That made me worried that if I'd made a genuine mistake understanding Duncan's point, Duncan would also accuse me of strawmanning, of not trying hard enough, or of being deliberately obtuse. After all, I have misunderstood other people's words before. Seeing Duncan's explanations on subsequent comments helped me get a better understanding of Duncan's perspective, but I don't think it is reasonable to expect people to read through various threads to get the context behind Duncan's replies.

This means that from an outsider's perspective, the natural takeaway is that we should not post questions, feedback or criticisms, because we might be personally attacked (accused of bad intentions) for what seems like no reason. It is all the more intimidating/impactful given that Duncan is such an established writer. I know it can be unfair to Duncan (or writers in general) because of the asymmetries, but things continuing as they are would make it harder to nurture healthy conflict at LW, which I believe is also counter to what Duncan hopes for the community. 


To end off more concretely, here are some of the things I think would be good for LW:

  • To consider it pro-social (and reward?) when participants actively choose to slow down, step back, or stop when engaged in unproductive, escalating conflicts (e.g. Stopping Out Loud [LW · GW])
  • For it to be acceptable to post half-baked ideas and request gentler criticisms, and for such requests to be respected, e.g. for critics to make their point clear and then step back if it is clear that their feedback is unwanted, so readers can judge for themselves
    • It should be made obvious via the UI if certain people have been blocked, otherwise it gives a skewed perspective.
  • When commenting on posts by authors who prefer more collaborative approaches, or on posts presenting half-baked ideas,
    • commenters to provide more context behind comments (e.g. why you're asking about a particular point, is it because you feel it is a critical gap or are you just curious), because online communication is more error-prone than in-person interaction, and also so it is easier for both parties to reach a shared understanding of the discussion
    • If readers agree with a comment, but the comment doesn't meet the author's preferred requirements, to help refine the comment instead of just upvoting it (might need author to indicate if this is the case though, because sometimes it's not obvious?).
  • To be willing to adjust commenting styles or tolerance levels based on who you are interacting with, especially if it is someone you have had significant history with (else just disengage with people you don't get along with)
  • If one feels a comment is being unfair, to express that sentiment rather than going for a reciprocal tit-for-tat response so the other has an opportunity to clarify. If choosing to respond in poor form as a tit-for-tat strategy (which I really don't like), to at least make that intent explicit and provide the reasoning.
  • To avoid declaring malicious intent without strong evidence or to disengage/ignore the comment when unable to do so. e.g. "You are not trying hard enough to understand me/you are deliberately misunderstanding me.." --> "That is not what I meant. <explanation/request for someone to help explain/choose to disengage>."
  • For authors to have the ability to establish the norms they prefer within their spaces, but to be required to respect the wider community norms if it involves the community.
  • Common knowledge of the different cultures as well as the associated implications.

 

  1. ^

    Insult here refers to the emotional-impact sense that I'm not sure how to make more explicit, not Said's definition of insult.

Replies from: ambigram
comment by ambigram · 2023-04-22T15:19:51.827Z · LW(p) · GW(p)

Still trying to figure out/articulate the differences between the two frames, because it feels like people are talking past each other. Not confident and imprecise, but this is what I have so far:

Said-like frame (truth seeking as a primarily individual endeavor)

  • Each individual is trying to figure out their own beliefs. Society reaches truer beliefs through each individual reaching truer beliefs.
  • Each individual decides how much respect to accord someone (based on the individual's experiences). The status assigned by society (e.g. titles) is just a data point.
    • e.g. Just because someone is the teacher doesn't mean they are automatically given more respect. (A student who believes an institution has excellent taste in teachers may respect teachers from that institution more because of that belief, but the student would not respect a teacher just because they have the title of "teacher".)
      • If a student believes a teacher is incompetent and is making a pointless request (e.g. assigned a homework exercise that does not accomplish the learning objectives), the student questions the teacher. 
      • A teacher that responds in anger without engaging with the student's concerns is considered to be behaving poorly in this culture. A teacher who is genuinely competent and has valid reasons should either be able to explain it to the student or otherwise manage the student, or should have enough certainty in their competence that they will not be upset by a mere student.
  • Claims/arguments/questions/criticisms are suggestions. If they are valid, people will respond accordingly. If they are not, people are free to disagree or ignore it.
    • If someone makes a criticism and is upset when no one responds, the person who criticizes is in the wrong, because no one is obliged to listen or engage.
  • The ideal post is well-written, well-argued, more true than individuals' current beliefs. Through reading the post, the reader updates towards truer beliefs.
    • If a beginner writes posts that are of poorer quality, the way to help them is by pointing out problems with their post (e.g. lack of examples), so that next time, they can pre-empt similar criticisms, producing better quality work. Someone more skilled at critique would be able to give feedback that is closer to the writer's perspective, e.g. steelman to point out flaws, acknowledge context (interpretive labor). 
    • The greatest respect a writer can give to readers is to present a polished, well-written piece, so readers can update accordingly, ideally with ways for people to verify the claims for themselves (e.g. source code they can test).
  • The ideal comment identifies problems, flaws, weaknesses or provides supporting evidence, alternative perspectives, relevant information for the post, that helps each individual reader better gauge the truth value of a post.
    • If a commenter writes feedback or asks questions that are irrelevant or not valuable, people are free to ignore or downvote it.
    • The greatest respect a commenter can give to writers is to identify major flaws in the argument. To criticize is a sign of respect, because it means the commenter believes that the writer can do better and is keen to make their post a stronger piece.

 

Duncan-like frame (truth seeking as a primarily collectivist endeavor)

  • Each society is trying to figure out their collective beliefs. Society reaches truer beliefs through each individual helping other individuals converge towards truer beliefs.
  • The amount of respect accorded to someone is significantly informed by society. The status assigned by society (e.g. titles) acts as a default amount of respect to give someone. For example, one is more likely to believe a doctor's claim that "X is healthier than Y" than a random person's claim that Y is healthier, even if you do not necessarily understand the doctor's reasoning, because society has recognized the doctor as medically knowledgeable through the medical degree.
    • e.g. A student gives a teacher more respect in the classroom by default, and only lowers the respect when the teacher is shown to be incompetent. If a student does not understand the purpose of a homework exercise, the student assumes that they are lacking information and will continue assuming so until proven otherwise. 
      • If a student questions the teacher's homework exercise, teacher would be justified in being angry or punishing the student because they are being disrespected. (If students are allowed to question everything the teacher does, it would be far less efficient to get things done, making things worse for the group.) 
  • Claims/arguments/questions/criticisms are requests to engage. Ignoring comments would be considered rude, unless they are obviously in bad faith (e.g. trolling).
  • The ideal post presents a truer view of reality, or highlights a different perspective or potential avenue of exploration for the group. Through reading the post, the reader updates towards truer beliefs, or gets new ideas to try so that the group is more likely to identify truer beliefs.
    • If a beginner writes posts that are of poorer quality, the way to help them is to steelman and help them shape it into something useful for the group to work on. Someone more skilled at giving feedback is better at picking out useful ideas and presenting them with clarity and concision. 
    • The greatest respect a writer can give to readers is to present a piece that is grounded in their own perspectives and experiences (so the group gets a more complete picture of reality) with clear context (e.g. epistemic status, so people know how to respond to it) and multiple ways for others to build on the work (e.g. providing source code so others can try it out and make modifications).
  • The ideal comment builds on the post, such as by providing supporting evidence, alternative perspectives, relevant information (contributing knowledge) or by identifying problems, flaws, weaknesses and providing suggestions on how to resolve those (improving/building on the work).
    • If a commenter writes feedback or asks questions that are irrelevant or not valuable, the writer (or readers) respond to it in good faith, because the group believes in helping each other converge to the truth (e.g. by helping others clear up their misunderstandings).
    • The greatest respect a commenter can give to writers is to identify valuable ideas from the post and build on it.
comment by TekhneMakre · 2023-05-02T18:58:15.596Z · LW(p) · GW(p)

It seems that Duncan has deactivated his account. https://www.lesswrong.com/users/duncan_sabien?mention=user [LW · GW]

comment by iceman · 2023-04-15T02:50:44.144Z · LW(p) · GW(p)

I have a very strong bias about the actors involved, so instead I'll say:

Perhaps LessWrong 2.0 was a mistake and the site should have been left to go read-only.

My recollection is that the hope was to get the diverse diaspora posting in one spot again. Instead of people posting on their own blogs and tumblrs, the intention was to shove everyone back into one room. But with a diverse diaspora, you can have local norms for each cluster of people. Now that everyone is crammed into one site, there is an incentive to fight over global norms and attempt to enforce them on others.

comment by lionhearted (Sebastian Marshall) (lionhearted) · 2023-04-16T19:43:30.705Z · LW(p) · GW(p)

Hmm. Looks like I was (inadvertently) one of the actors in this whole thing. Neither intended nor foreseen. Three thoughts.

(1) At the risk of sounding like a broken record, I just wanna say thanks again to the moderation team and everyone who participates here. I think oftentimes the "behind the scenes coordination work" doesn't get noticed during all the good times, and not enough credit is given. I just like to notice it and say it outright. For instance, I went to the Seattle ACX meetup yesterday which I saw on here (LW), since I check ACX less frequently than LW. I had a great time and had some really wonderful conversations. I'm appreciative of all the people facilitating that, including Spencer (Seattle meetup host) and the whole team that built the infrastructure here to facilitate sharing information, getting to know each other, etc.

(2) Just to clarify - not that it matters - my endorsement of Duncan's post was about the specific content in it, not about the author of the post. I do think Duncan did a really nice job taking very complex concepts and boiling them down to guidelines like "Track (for yourself) and distinguish (for others) your inferences from your observations" and "Estimate (for yourself) and make clear (for others) your rough level of confidence in your assertions" — he really summed up some complex points very straightforwardly and in a way that makes the principles much easier to implement / operationalize in one's writing style. That said, I didn't realize when I endorsed the Rationalist Discourse post that there were some interpersonal tensions independent of the content itself. Both of those posters seem like decent people to me, but I haven't dug deep on it and am not particularly informed on the details.

(3) I won't make a top-level post about this, because second-degree meta-engagement with community mechanics risks setting off more second-degree and third-degree meta-engagement, and then things spiral. But as a quick recommendation to people interested in how people relate with each other, my favorite movie is Unforgiven, a very non-traditional Clint Eastwood movie. It's like a traditional Western (cowboys, horses, etc) but really very different from the normal genre. Basically, there's only one genuinely unprovoked "bad guy" in the movie, who has causal agency for only about 30-60 seconds of doing something bad. After that, it's all just a chain reaction of people doing as best as they can by their values and friends, and yet the results are very bad for everyone. Incidentally, it's also a really cinematically beautiful movie, which contrasts with the unfolding tragedy. It's a great movie. Highly recommended. 

Replies from: Jasnah_Kholin
comment by Jasnah Kholin (Jasnah_Kholin) · 2023-04-18T18:04:56.302Z · LW(p) · GW(p)

(3) I didn't watch the movie, nor do I plan to, but I read the plot summary on Wikipedia, and I see it as a caution against escalation. The people there consistently believe that you should avenge a 1-point offense with a 4-point punishment, and this creates an escalation cycle.

While I think most of Duncan's writing is good, the area where I think he consistently creates bad situations is disproportionate escalation of conflict, and an inability to just let things be. 


Once upon a time, if I saw someone do a 1-point bad thing and someone else react with a 3-point bad thing, I would have thought the first person was 90% of the problem. With time, I find robustness more and more important, and now I see the second person as the more problematic one, as such. So I disagree with your description of the movie.

The plot is: one person does something bad, another refuses to punish him, and a lot of people escalate things, and so, by my standards, do bad things. A LOT of bad things. To call it a chain reaction is to deny the people doing those disproportionately escalating bad things agency over their bad choices. That's strange to me, as I see this agency very clearly. 

comment by Jasnah Kholin (Jasnah_Kholin) · 2023-04-18T14:10:28.284Z · LW(p) · GW(p)

So this is the fourth time I am trying to write this comment. It is far from ideal, but I feel like I did the best that my current skill in writing in English and understanding such situations allows.

 

1. I find 90% of the practical problems to be Drama: long, repetitive, useless arguments. If this were Facebook and Duncan had blocked Said, and then proceeded to block anyone too far out of line by Duncan-standards, it would have solved 90% of the Duncan-related problems. If he had also given up on making LW his kind of garden, that would have solved another 9%.

 

2. In my ideal Garden, Said would have been banned long ago. But it is my belief (and I have something like five posts waiting to be written to explain my framework and evidence on that, if I ever actually write them) that LW will never be anything even close to my or Duncan's Garden (by my model of Duncan, there is 80%-90% similarity in our definitions of a garden).
 

In this LessWrong, he may remain and not be blocked. It would also be good if more people ignored his comments that predictably start a useless argument. For example, if I write something about introspection, I expect Said's comment to be useless. I also expect most comments from the third one onward in a thread to be useless. 

 

In a better LW, those net-negative comments would be ignored, downvoted, and maybe deleted by mods, while the good ones would be upvoted and get reactions.

 

3. Duncan, I will be sad if you leave LW. I really enjoy and learn from your posts. I also believe LW will never be your Garden. I would like you to just give up on changing LW, but still remain here and write. I wish you could just... care less about comments, and assume that 90% of what is important on LW is posts, not comments. Ignore most comments; answer only those that you deem good and written in goodwill. LessWrong is not YOUR version of the Garden, and never will be. But it has good sides, and you (hopefully) can choose to enjoy the good parts and ignore the bad ones. Right now it looks to me like you are optimizing toward finding things you object to and engaging with them, in the hope of changing LW to be more to your standards. 

 

comment by clone of saturn · 2023-04-14T20:24:56.224Z · LW(p) · GW(p)

One technical solution that occurs to me is to allow explicitly marking a post as half-baked, and therefore only open to criticism that comes along with substantial effort towards improving the post, or fully-baked and open to any criticism. However, I suspect that Duncan won't like this idea, because [edit: I suspect that] he wants to maintain a motte-and-bailey where his posts are half-baked when someone criticizes them but fully-baked when it's time to apportion status.

Replies from: habryka4, Duncan_Sabien
comment by habryka (habryka4) · 2023-04-14T21:18:47.184Z · LW(p) · GW(p)

My current model of this is that the right time to really dig into posts is actually the annual review. 

I've been quite sad that Said hasn't been participating much in the annual review, since I do feel like his poking is a pretty good fit for the kind of criticism that I was hoping would come up there, and the whole point of that process is to have a step of "ok, but like, do these ideas actually check out" before something could potentially become canonized.

Replies from: SaidAchmiz, clone of saturn
comment by Said Achmiz (SaidAchmiz) · 2023-04-14T21:39:01.352Z · LW(p) · GW(p)

My apologies! I regret that I’ve mostly not taken part in the annual review. To a large extent this is due to a combination of two things:

  1. The available time I have to comment on Less Wrong (or do anything similar) comes and goes depending on how busy I am with other things; and

  2. The annual review is… rather overwhelming, frankly, since it asks for attention to many posts in a relatively short time.

Also, I don’t have much to say about many (perhaps even most?) posts on Less Wrong. There’s quite a bit of alignment discussion and similar stuff which I simply am not qualified to weigh in on.

Finally, most discussion of a post tends to take place close in time to when it’s first published. To the extent that I tend to find it useful or interesting to comment on any given post, active discussion of it tends to be a major factor in my so finding it. (Indeed, the discussion in the comments is sometimes at least as useful, or even more useful, than the post itself!)

I wish I could promise that I’ll be more active in the annual review process, but that wouldn’t be a fair promise to make. I will say that I hope you don’t intend to shunt all critical discussion into that process; I think that would be quite unfortunate.

Replies from: Benito, pktechgirl
comment by Ben Pace (Benito) · 2023-04-15T00:50:18.033Z · LW(p) · GW(p)

(Commenting from recent discussion, also intended as a reply to Gwern)

The annual review is an attempt to figure out what were the best contributions with the benefit of a great deal of hindsight, and I think it's prosocial to contribute to it, similar to how it was prosocial to contribute to the LW survey back when Scott ran a big one every year.

I am always pleased when people contribute, and sometimes I am sad if there are particular users whose reviews I'd really like to read but don't write any. But I don't think anyone is obligated to write reviews!

comment by Elizabeth (pktechgirl) · 2023-04-14T23:50:13.690Z · LW(p) · GW(p)

The fact that you (EDIT: make this argument but) didn't make a single review in 2020 [? · GW], 2021 [? · GW], or 2022 [? · GW] makes me much less charitable towards your reasons or goals in commenting harshly on LW. 

Replies from: gwern, SaidAchmiz
comment by gwern · 2023-04-15T00:11:49.791Z · LW(p) · GW(p)

I didn't make a review then either. Will this be held against me in the future?

Replies from: pktechgirl, AllAmericanBreakfast
comment by Elizabeth (pktechgirl) · 2023-04-15T00:42:43.849Z · LW(p) · GW(p)

Not by me.  My cruxes are: 

  • There is a general trade-off between authors' experience and improving correctness.
  • Said claims that he's optimizing for correctness and doesn't care about author experience.
  • Habryka believes (and I agree) that the trade-offs of Said's style are more suited to the review than to daily commenting.
  • Said's response was "that seems less fun to me" rather than "I think the impact on correctness is greater earlier".

Perhaps the question should always be "what are the costs and benefits to others?" rather than "what's in Said's heart?", in which case this doesn't matter. But to the extent motivation matters, I do think complete disinterest in the review speaks to motivation. 

Replies from: SaidAchmiz, SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T02:07:07.404Z · LW(p) · GW(p)

Habryka believes (and I agree) that the trade-offs of Said’s style are more suited to the review than to daily commenting.

I think that this is diametrically wrong.


In the field of usability engineering, there are two kinds of usability evaluations: formative and summative.

Formative evaluations are done as early as possible. Not just “before the product is shipped”, but before it’s in beta, or in alpha, or in pre-alpha; before there’s any code—as soon as there’s anything at all that you can show to users (even paper prototypes), or apply heuristic analysis to, you start doing formative evaluations. Then you keep doing them, on each new prototype, on each new feature, continuously—and the results of these evaluations should inform design and implementation decisions at each step. Sometimes (indeed, often) a formative evaluation will reveal that you’re going down the wrong path, and need to throw out a bunch of work and start over; or the evaluation will reveal some deep conceptual or practical problem, which may require substantial re-thinking and re-planning. That’s the point of doing formative evaluations; you want to find out about these problems as soon as possible, not after you’ve invested a ton of development resources (which you’ll be understandably reluctant to scrap).

Summative evaluations are done at or near the end of the development process, where you’re evaluating what is essentially a finished product. You might uncover some last-minute bugs to be fixed; you might tweak some things here and there. (In theory, a summative evaluation may lead to a decision not to ship a product at all. In practice, this doesn’t really happen.)

It is an accepted truism among usability professionals that any company, org, or development team that only or mostly does summative evaluations, and neglects or disdains formative evaluations, is not serious about usability.

Summative evaluations are useless for correcting serious flaws. (That is not their purpose.) They can’t be used to steer your development process toward the optimal design—how could they? By the time you do your summative evaluation, it’s far too late to make any consequential design decisions. You’ve already got a finished design, a chosen and built architecture, and overall a mostly, or even entirely, finished product. You cannot simply “bolt usability onto” a poorly-designed piece of software or hardware or anything. It’s got to be designed with usability in mind from the ground up. And you need formative evaluation for that.


And just the same principles apply here.

The time for clarifications like “what did you mean by this word” or “can you give a real-world example” [LW(p) · GW(p)] is immediately.

The time for pointing out problems with basic underlying assumptions or mistakes in motivating ideas is immediately.

The time for figuring out whether the ideas or claims in a post are even coherent, or falsifiable, or whether readers even agree on what the post is saying, is immediately.

Immediately—before an idea is absorbed into the local culture, before it becomes the foundation of a dozen more posts that build on it as an assumption, before it balloons into a whole “sequence”—when there’s still time to say “oops” with minimal cost, to course-correct, to notice important caveats or important implications, to avoid pitfalls of terminology, or (in some cases) to throw the whole thing out, shrug, and say “ah well, back to the drawing board” [LW(p) · GW(p)].

To only start doing all of this many months later, is way, way too late.

Of course the reviews serve a purpose as well. So do summative evaluations.

But if our only real evaluations are the summative ones, then we are not serious about wanting to be less wrong.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T19:25:28.835Z · LW(p) · GW(p)

The time for figuring out whether the ideas or claims in a post are even coherent, or falsifiable, or whether readers even agree on what the post is saying, is immediately.

Immediately—before an idea is absorbed into the local culture, before it becomes the foundation of a dozen more posts that build on it as an assumption, before it balloons into a whole “sequence”—when there’s still time to say “oops” with minimal cost, to course-correct, to notice important caveats or important implications, to avoid pitfalls of terminology, or (in some cases) to throw the whole thing out, shrug, and say “ah well, back to the drawing board” [LW(p) · GW(p)].

To only start doing all of this many months later, is way, way too late.

 

We have to distinguish whether comment X is a useful formative evaluation and whether formative evaluations are useful, but I do agree with Said that LessWrong can benefit from improved formative evaluations.

I have written some fairly popular LessWrong reviews, and one of the things I've uncovered is that some of the most memorable and persuasive evidence underpinning key ideas is much weaker and more ambiguous than I thought it was when I originally read the post. At LessWrong, we're fairly familiar as a culture with factors contributing to irreproducibility in science - p-hacking and the like.

One of the topics where I think we could gain some of the greatest benefits is in getting better at dealing with the accumulated layers of misinterpretation, mis-summarization, and decontextualization.

Here are some examples (and I mean this with respect toward the authors I am critiquing below):

  • In a Scott Alexander post on group selection, he emphasized only the bits of the cited article in which group selection dynamics among beetles were most obvious, and entirely left out the aspects of the paper where group selection failed to emerge or was ambiguous.
  • In a recent post on LED stimulation (which was not itself making any claims, just raising a question), the motivating quote it cited was about putting LEDs on a beanie, which was claimed to massively amplify productivity. This is based on scientific evidence that involved using a high-grade medical laser at a precisely set wavelength with much more limited and inconsistent benefits in the underlying literature.
  • Zvi posted a link to another blogger on the benefits of flashing lights set at the frequency of the subject's (IIRC) alpha waves as tripling the learning rate, when the study in question only showed benefits on an extremely narrow and specific learning task closely linked with visual perception of stimuli at a specific rate. Another commenter I talked with told me he'd "tried it" and not seen results, but when I checked, he had just picked a reasonable frequency and strobed himself for a bit - he hadn't actually replicated the study.
  • I've refereed whole exchanges between folks like Nathalia Mendonca and Alexey Guzey, as well as many others, in which a host of misunderstandings arose because person A said that person B said X, but didn't supply a quote, and person B felt misrepresented, and I ultimately did a lot of work diving into the history of their respective outputs to try and aggregate the relevant quotes into a place where they'd be visible for the discussion.
  • Holden Karnofsky wrote a whole big blog post analyzing data from a book called Human Accomplishments, which aggregated information on the achievement of technological and cultural progress over the millennia, and used that information to make claims about the rate of tech progress slowing down (IIRC). Tyler Cowen has also quoted a paper by a physicist doing something similar. Yet I emailed the author of the book used by the physicist to do his analysis, and that author had never heard of the physicist's work and told me that, as his book was not written to be a comprehensive or representative list of technical innovations, it was not proper to use as data for such an analysis. And when I emailed Tyler to let him know, he just didn't seem to care, pointing me to an entirely new set of metrics he thinks shows the same thing (even though he still uses the physicist's paper to lead off his analysis that uses these new metrics).
  • In the ongoing academic debate over the morality of legalization of selling kidneys, the anti-legalization side has a long and extremely duplicitous - given the importance of the issue, I'd go so far as to say evil - history of misrepresenting and shutting out/down the arguments from the pro-legalization side. The pro-legalization (or at least just ban-skeptical) side has been extremely careful and thoughtful in their approach both to articulating their ideas and in addressing critiques, and the anti-legalization side, which is bigger and higher-status, just takes a crap on it over and over again. I have been all over this literature, the strength of the argument is (almost, not quite) entirely one-sided, and the current disastrous state of things is due entirely to an active distorting of the issue by the anti-legalization side. But you'd only figure that out if you took a deep dive into the literature and media coverage and also communicated directly with some of the authors involved.

One of the reasons I appreciate people like Elizabeth, Nathalia, and Guzey (as well as others) is because they put an unusual amount of emphasis on interrogating the underlying evidence and making that interrogation legible. It's not the only way to contribute value, but on the margin it's where I think LessWrong stands to gain the most.

I don't find anybody here blameworthy, or at least not very much, but I do think a lot of heat and a lot of at least overconfident, if not outright wrong, information gets shared and believed and built upon because we're not doing enough work to go back, check the original source, and think about whether the underlying evidence is being accurately represented. We are too willing to turn weak evidence into strong heuristics, then extrapolate from those heuristics and apply them to new domains. I'm being a hypocrite here because I'm braindumping this stuff from memory rather than providing quotes and so on, and this is part of why I don't find anybody that blameworthy (with the possible exception of Tyler who is doing this stuff professionally and has a bigger duty to check things carefully).

And of course, this can be a self-reinforcing problem - Guzey was trying to critique the misleading interpretation of the evidence he saw in Matthew Walker's "Why We Sleep" book, and Nathalia was in turn trying to critique the counterevidence Guzey supplied, and then both Nathalia and Guzey would get periodically frustrated with each other over the same thing.

Overall, I think there are so many gains to be had by providing quotes from original sources, then showing how they connect to the idea you are trying to present. If that makes your post more ambiguous, or makes you change your mind, that's just normal LessWrong stuff that we ought to be doing more.

Said, part of the reason why I often don't find value in your attempts to do formative evaluations is that you usually don't do the stuff that makes formative evaluations useful, to me. Like, if somebody uses a word in a way that's potentially ambiguous, but that also does have a central or typical meaning, then a good formative analysis should do things like make a case for why the ambiguity could lead to a serious misunderstanding. And the most useful things a formative analysis can do are usually to supply additional evidence, check the evidence being used to see if it's being represented accurately, or supply a specific counterargument under the assumption that you do understand more or less what the other person is trying to say. If you misunderstood them, they can correct you, and then you can update/apologize/move on, and that is an actually useful formative evaluation because it gives them a piece of evidence not only about what you find ambiguous, but about the way their meaning might be misinterpreted.

This is what I'm looking for in a good LessWrong formative evaluation, and there are lots of people who put in this kind of effort, and that's what I find praiseworthy and would like to highlight.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T19:45:15.256Z · LW(p) · GW(p)

I agree with most of your comment, and those are good examples / case studies.

Said, part of the reason why I often don’t find value in your attempts to do formative evaluations is that you usually don’t do the stuff that makes formative evaluations useful, to me.

This, however, assumes that “formative evaluations” must be complete works by single contributors, rather than collaborative efforts contributed to by multiple commenters. That is an unrealistic and unproductive assumption, and will lead to less evaluative work being done overall, not more.

Like, if somebody uses a word in a way that’s potentially ambiguous, but that also does have a central or typical meaning, then a good formative analysis should do things like make a case for why the ambiguity could lead to a serious misunderstanding.

This does not seem to me to be necessary or even beneficial, unless the author has already responded to clarify their usage of the word. Certainly it would be a waste of everyone’s time to do it pre-emptively.

And the most useful things a formative analysis can do are usually to supply additional evidence, check the evidence being used to see if it’s being represented accurately, or supply a specific counterargument under the assumption that you do understand more or less what the other person is trying to say.

Those are certainly useful things to do. They are not the only useful things that can be done, nor are they necessary, nor should they be required, nor would the overall effect be positive if we were to limit ourselves to such things only.

This is what I’m looking for in a good LessWrong formative evaluation, and there are lots of people who put in this kind of effort, and that’s what I find praiseworthy and would like to highlight.

I agree that some (but not all, as I note above) of these things are praiseworthy. Other things are also praiseworthy, such as the sorts of more granular contributions which we have been discussing.

Replies from: AllAmericanBreakfast, dxu
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T20:20:01.002Z · LW(p) · GW(p)

I think the crux of our disagreement is that you seem to think there's this sort of latent potential for people to overcome their feelings of insult and social attack, and that even low-but-nonzero contributions to the discussion have positive value.

My view is this:

  • There is little to no hope of most people overcoming their tendency to feel insulted and attacked, and when you talk in a way that provokes these feelings, you very reliably destroy the opportunity to do a useful formative evaluation.
  • What makes a low-but-nonzero-value FE bad is that it's a poor use of time, failing to consider opportunity cost. You are right in saying that there are many ways to contribute to FEs, and what I am saying is that many of the ones you exhibit seem to me to be at about this level of value. It's the epistemic equivalent of making money by looking for loose change dropped on the sidewalk.
  • While ideally such comments could be ignored or blocked by the people who see them as having such low value, doing so is about as hard as not feeling insulted.

I know that you have definitely contributed some comments (and posts too, in the past) where clearly a substantial number of people derived real value from them. I would count myself among them at times.

Although I often have found myself frustrated by you, I think that if you learned how to identify the low-value/negative comments that are causing almost all the loss of value in your overall commenting behavior, and either kept them to yourself or worked harder to improve them, then I would probably be pleased by your presence on LessWrong.

Like, if you were able to predict with 90% accuracy "THIS comment will lead to a 20-comment exchange in which the other person feels frustrated and insulted, but THAT comment won't," and then only post stuff that you think won't cause that heated back-and-forth, and briefly apologized for your part in contributing to it when it does happen, I would have no issue at all.

Replies from: SaidAchmiz, JacobKopczynski
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T21:35:36.507Z · LW(p) · GW(p)

Thank you for the kind words.

However, I’m afraid I disagree with your view. Taking your points in reverse order of importance:

While ideally, such comments can and would be ignored or blocked by the people who see them as having such low value, it is about has hard to do this as it is to not feel insulted.

This I find to be a basically irrelevant point. If someone is so thin-skinned that they can’t bear even to ignore/block things they consider to be of low value, but rather find themselves compelled to read them, and then get angry, then that person should perhaps consider avoiding, like… the Internet. In general. This is simply a pathetic sort of complaint.

Now, don’t misunderstand me: if you (I mean the general “you” here, not you in particular) want to engage with what you see as a low-value comment, because you think that’s a productive use of your time and effort, well, by all means—who am I to tell you otherwise? If you feel that here is a person being WRONG on the Internet, and you simply must explain to them how WRONG they are, so that all and sundry can see that they are unacceptably and shamefully WRONG—godspeed, I say. Such things can be both valuable and entertaining, to their participants and their audience alike.

But then don’t complain about it. Don’t whine about the emotional damage you incurred in the process.

If you had the option all along to just block, ignore, downvote, collapse, etc., and move on with your life, but you chose not to take it, and instead opted to engage, that is a choice you were fully within your rights to make, but for which only you are responsible.

What makes a low-but-nonzero-value FE bad is that it’s a poor use of time, failing to consider opportunity cost. You are right in saying that there are many ways to contribute to FEs, and what I am saying is that many of the ones you exhibit seem to me to be about on this level of value.

Again you confuse things with their parts. If I say “what are some examples”, or “what did you mean by that word?”, or any such thing, that’s not an evaluation. That’s a small contribution to a collaborative process of evaluation. It makes no sense at all to object that such a question is of low value, by comparing it to some complete analysis. That is much like saying that a piston is of low value compared to an automobile. We’re not being presented with one of each and then asked to choose which one to take—the automobile or the piston. We’re here to make automobiles out of pistons (and many other parts besides).

It’s the epistemic equivalent of making money by looking for loose change dropped on the sidewalk.

I think it’s exactly the opposite. A question like “what are examples of this concept you describe?”, or “what does this word, which is central to your post, actually mean?” [LW(p) · GW(p)], are contributions with an unusually high density of value; they offer a very high return on investment. They have the virtuous property of spending [LW(p) · GW(p)] very few words [LW(p) · GW(p)] to achieve the effect of pointing to a critical lacuna in the discussion, thus efficiently selecting—out of the many, many things which may potentially be discussed in the comments under a post—a particular avenue of discussion which is among the most likely to clarify, correct, and otherwise improve the ideas in the post.

There is little to no hope of most people overcoming their tendency to feel insulted and attacked, and when you talk in a way that provokes these feelings, you very reliably destroy the opportunity to do a useful formative evaluation.

There’s a subtlety here, if you like (or perhaps we might say, an aspect of the question which tends to be avoided in discussion, out of courtesy—of which, as it happens, I generally approve; but here it seems we must make it explicit).

Consider three scenarios.

Scenario I

You write a post about some concept. You’re not very confident about this idea, but you think it might well be true, and maybe even important. In the post, you make this reasonably clear. It’s an early-stage exploration, relatively speaking. You could be totally wrong, of course, the whole thing might be nonsense, but you think there’s a chance that you’re onto something, and if you are, then other commenters could help you make something true and useful out of this idea you had.

You publish the post. First comment: “Examples?”.

Do you feel insulted?

No, of course not. You’re ready for this. You respond:

“It’s a good question, but I don’t have an answer. As I said, this really is more of a brainstorming, early-stage sort of post. Actually, I was hoping that other folks here, if they think that there’s anything to this thing, might provide examples of it that they’ve encountered or know about. (Or if someone thinks that they have a good argument for why I’m wrong about this, and this is impossible, and examples won’t and can’t be found, then I’d like to hear that, too!)”

(As an aside, I’ve found that if someone says “what if… X” and someone else says “X is impossible! and here’s why!”, that reply is more effective at drawing out people who then counter with “WRONG actually, it’s not impossible at all, here are some examples!”. Cunningham’s Law in action, in other words. Of course, this holds true only in contexts where contrarianism and contradiction aren’t socially punished.)

Scenario II

You write a post about some concept. You write about this concept, not in terms of “brainstorming”, or “hey guys, I was thinking and I thought maybe X; does this seem right? anyone else think this might be true?”—but rather, as a positive and confident claim which you are making, a model of (some part of) the world which you are willing to stand behind.

Your post contains no substantive examples or case studies. However, you actually do have some good examples in mind; or, you’re confident that you can call them to mind as needed (being quite sure that you are basing your ideas on firm recollection of experiences you’ve had, or on thorough research which you’ve done).

You publish the post. First comment: “Examples?”.

Do you feel insulted?

No, of course not. You’re ready for this. You respond:

“Good question! I didn’t want to clutter up the post with examples (it’s long enough already!)—but here are just three: [ example 1, example 2, example 3 ]”.

The discussion then proceeds from there. Your examples can be analyzed (by you and/or others), the post’s ideas examined in light of the clarifying effect of the examples, and so on. The work of building useful knowledge and understanding proceeds.

Scenario III

You write a post about some concept. As in the previous scenario, you write about this concept confidently, as a positive claim which you are making, a model of (some part of) the world which you are willing to stand behind.

Your post contains no substantive examples or case studies.

And nor do you have any in mind.

(Why did you write the post, then, and write it as you did? Yep, good question.)

You publish the post. First comment: “Examples?”.

Do you feel insulted?

Well… first, let’s ask: should you feel insulted?

What is the comment saying?

Is it saying “you have no examples”? Well, no. But you don’t, actually. The combination of the comment plus either your lack of response (and the lack of anyone else jumping in to provide examples for you) or your response to the effect that you have no examples, is what says that you have no examples.

So, is that an insult, or is it insulting, or what? In the sense that a flat statement like “you have no examples” (which, let’s assume, is correct) is insulting—sure. (Should you be insulted by it? Well, should you be insulted if you make some embarrassing error in analysis, and someone says “you’ve made such-and-such error in analysis”? Should you be insulted if you misconstrue some basic concept in a field relevant to your claims, and someone points that out?)

But regardless of the “should”, it’s likely that you do feel insulted.

But is that a strike against the comment which asked for examples?

No. No, it is not. In fact, it’s exactly the opposite.

As nshepperd says [LW(p) · GW(p)], in a related discussion:

But my sense is that if the goal of these comments is to reveal ignorance, it just seems better to me to argue for an explicit hypothesis of ignorance, or a mistake in the post.

My sense is the exact opposite. It seems better to act so as to provide concrete evidence of a problem with a post, which stands on its own, than to provide an argument for a problem existing, which can be easily dismissed (i.e., show, don’t tell). Especially when your epistemic state is that a problem may not exist, as is the case when you ask a clarifying question and are yet to receive the answer!

This is the key point: the request for examples isn’t insulting, isn’t an attack, isn’t even an unfair request to perform onerous “labor”—unless it turns out that the author had no examples. If the author turns out to have examples, or even if someone else provides examples that illustrate or vindicate the author’s point, then the request was completely innocuous and served only to prompt a useful discussion. But if no examples can be produced, then—but only then!—retroactively the request becomes an indictment.

(Perhaps it’s not a very serious indictment! Maybe the lack of examples isn’t all that bad, in any given case. Such things can happen! But at the very least, it’s a minor ding; a poke, a prod, which could have revealed robustness and rigor, but instead revealed a weakness. It could be a mild weakness, a fixable weakness—but a weakness nonetheless.)

In this light, the question of whether people should overcome their feelings of being insulted is confused. What people should do instead is to act so that a question like “what are some examples?” is not, and cannot be, insulting.

And, likewise, in this light, asking such questions is not destructive, but unambiguously constructive and beneficial.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T22:29:18.623Z · LW(p) · GW(p)

OK, first of all, let me say that this is an example of Said done well - I really like this comment a lot.

I think most of our disagreement flows from fundamentally different perspectives on how bad it is to make people feel insulted or belittled. In my view, it's easy to hurt people's feelings, that outcome is very destructive, and it's natural for people to make suboptimal choices in reacting to those hurt feelings, especially when the other person knows full well that they routinely provoke that response and choose to do it anyway.

Insulting and harsh posts can still be net valuable (as some of yours are, the majority of Eliezer's, and ~all of the harsh critiques of Gwern's that I've read), but they have to be quite substantial in order to overcome the cost of harshness. And they need a high absolute quantity of positive value overall, not just per word. After all, it's very easy to deliver a huge absolute magnitude of harshness ("f*** you!") in very few words, but much harder to provide an equally large total quantity of value in the same word count.

I know from our previous comment thread that you just don't think about insulting comments that way - you don't see the fact that somebody else got insulted by a comment of yours as a downside or a moral consideration. From that point of view, providing short, mildly valuable comments with high value-per-word is a great thing to do, because any feelings of insult and frustration they provoke simply don't matter.

I think this is probably the crux of our disagreement.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T22:42:35.717Z · LW(p) · GW(p)

I don’t think that is the crux.

Again (and it seems I must emphasize this, because for whatever reason, I cannot seem to get this point across effectively): a comment that says “Examples?” is not “harsh”, it is not “belittling”, it is not “insulting”; it may be perceived as being insulting only if it turns out that the author should have examples, but doesn’t.

But in that case, the fault for that is the author’s! If the author didn’t want to feel insulted when someone asked him for examples he didn’t have (and knew that he could not avoid so feeling, despite the fact that such a question is fair and the implied rebuke in the event that no answer is forthcoming is just), then he should not have written such a post in such a way! Who forced him to do that? Nobody!

What would you have us do? To sabotage our own ability to understand a post, a claim, an idea, because asking a certain sort of question about it would mean risking the possibility that the post/claim/idea has a glaring flaw in it? Even aside from the obvious desirability of uncovering such a flaw, if one exists, there is the fact that this policy would prevent us from getting use out of posts/ideas/claims that are perfectly well formed, that have no flaws at all!

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T22:54:13.063Z · LW(p) · GW(p)

I actually do think it is the crux, because you seem to be rearticulating the point of view that I was ascribing to you.

You think that:

  1. Your comments don't usually provoke feelings of insult in the target.
  2. Or if they are, it's the other person's fault for being thin-skinned or writing a bad post.
  3. And anyway, there's a lot of value in calling out flaws with brief remarks, enough to overcome any downsides of being insulting that you might want to impute.

And I am saying:

  1. Your comments routinely provoke feelings of insult in the target.
  2. Authors are typically not blameworthy for feeling insulted, and their response to you is not very indicative of their correctness or depth of thought.
  3. And the value of calling out flaws with brief remarks is small, and not nearly worth it relative to the damage you do by being insulting while you go about it.

Sounds pretty cruxy to me.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T23:14:44.771Z · LW(p) · GW(p)

Once again you miss a (the?) key point.

“What are some examples?” does not constitute “calling out a flaw”—unless there should be examples but aren’t. Otherwise, it’s an innocuous question, and a helpful prompt.

“What are some examples?” therefore will not be perceived as insulting—except in precisely those cases where any perceived insult is the author’s fault.

Of course, I also totally disagree with this:

the value of calling out flaws with brief remarks is small

Calling out flaws with brief remarks is not only good (because calling out flaws is good), but (insofar as it’s correct, precise, etc.), it’s much better than calling out flaws with long comments. It is not always possible, mind you! Condensing a cogent criticism into a brief remark is no small feat. But where possible, it ought to be praised and rewarded.

And I want to note a key disagreement with your construal of this part:

Or if they are, it's the other person's fault for being thin-skinned […] being insulting while you go about it

Things would be different if what I were advocating was something like “if a post is bad, in your view, then it’s ok to say ‘as would be obvious to anyone with half a brain, your post is wrong in such-and-such ways; and saying so-and-so is just dumb; and only a dunderhead like you would fail to see it, you absolute moron. you idiot.’”.

Then it would be totally fair to rebuke me in the way similar to what you suggest—to say “however bad a post may actually be, however right you may be in your criticisms, it’s wrong of you to resort to insults and belittlement”.

But of course I neither advocate, nor do, any such thing.

Being able to tolerate “insults” that are nothing more than calm criticisms of one’s ideas should be no more than the price of admission to Less Wrong—regardless of the entirely uncontroversial fact that most people, in most situations, do indeed tend to find criticisms of their ideas insulting.

This is especially true because there is no way of not being “insulting”, in this sense, while still criticizing someone’s ideas. If someone finds criticism of their ideas insulting, that is irreducible. You either offer such criticism or you do not. There’s no way of offering it while also not offering it.

Replies from: AllAmericanBreakfast, Duncan_Sabien, philh
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T23:31:27.198Z · LW(p) · GW(p)

Mm, I still think my original articulation of our crux is fine.

Here, you're mostly making semantic quibbles or just disagreeing with my POV, rather than saying I identified the wrong crux.

However, we established the distinction between an insulting comment (i.e. a comment the target is likely to feel insulted by) and a deliberate insult (i.e. a comment primarily intended to provoke feelings of insult) in a whole separate thread, which most people won't see here. It is "insulting comment" that I meant in the above, and I will update the articulation to make that more clear.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-16T00:53:31.134Z · LW(p) · GW(p)

A meta point that is outside of the scope of the object level disagreement/is a tangent:

Once again you miss a (the?) key point.

“What are some examples?” does not constitute “calling out a flaw”—unless there should be examples but aren’t. Otherwise, it’s an innocuous question, and a helpful prompt.


I note that the following exchange recently took place [LW(p) · GW(p)]:

Said: [multiple links to him just saying "Examples?"]

Me: [in a style I would not usually use but with content that is not far from my actual belief] I'm sorry, how do any of those (except possibly 4) satisfy any reasonable definition of the word "criticism?"

Said: Well, I think that “criticism”, in a context like this topic of discussion, certainly includes something like “pointing to a flaw or lacuna, or suggesting an important or even necessary avenue for improvement”.


So we have Said a couple of days ago defending "What are some examples?" as definitely being under the umbrella of criticism, further defined as the subset of criticism which is pointing to a flaw or suggesting an important or even necessary avenue for improvement.

Then we have Said here saying that it is a key point that "What are some examples?" does not constitute calling out a flaw.

(The difference between the two situations being (apparently) the entirely subjective/mysterious/unstated property of "whether there should be examples but aren't," noting that Said thinking there exists a skipped step or a confusing leap is not particularly predictive of the median high-karma LWer thinking there exists a skipped step or a confusing leap.)

I am reminded again of Said saying that I A'd people due to their B, and I said no, I had not A'd anyone for B'ing, and Said replied ~"I never said you A'd anyone for B'ing; you can go check; I said you'd A'd them due to B'ing."

i.e. splitting hairs and swirling words around to create a perpetual motte-and-bailey fog that lets him endlessly nitpick and retreat and say contradictory things at different times using the same words, and pretending to a sort of principle/coherence/consistency that he does not actually evince.

Replies from: anonymousaisafety
comment by anonymousaisafety · 2023-04-16T01:34:28.211Z · LW(p) · GW(p)

i.e. splitting hairs and swirling words around to create a perpetual motte-and-bailey fog that lets him endlessly nitpick and retreat and say contradictory things at different times using the same words, and pretending to a sort of principle/coherence/consistency that he does not actually evince.

Yeah, almost like splitting hairs around whether making the public statement "I now categorize Said as a liar" is meaningfully different than "Said is a liar".

Or admonishing someone for taking a potshot at you when they said 

However, I suspect that Duncan won't like this idea, because he wants to maintain a motte-and-bailey where his posts are half-baked when someone criticizes them but fully-baked when it's time to apportion status.

...while acting as though somehow that would have been less offensive if they had only added "I suspect" to the latter half of that sentence as well. Raise your hand if you think that "I suspect that you won't like this idea, because I suspect that you have the emotional maturity of a child" is less offensive because it now represents an unambiguously true statement of an opinion rather than being misconstrued as a fact. A reasonable person would say "No, that's obviously intended to be an insult" -- almost as though there can be meaning beyond just the words as written.

The problem is that if we believe in your philosophy of constantly looking for the utmost literal interpretation of the written word, you're tricking us into playing a meta-gamed, rules-lawyered, "Sovereign citizen"-esque debate instead of, what's the word -- oh, right, Steelmanning. Assuming charity from the other side. Seeking to find common ground.

For example, I can point out that Said clearly used the word "or" in their statement. Since reading comprehension seems to be an issue for a "median high-karma LWer" like yourself, I'll bold it for you. 

Said: Well, I think that “criticism”, in a context like this topic of discussion, certainly includes something like “pointing to a flaw or lacuna, or suggesting an important or even necessary avenue for improvement”.

Is it therefore consistent for "asking for examples" to be contained by that set, while not itself pointing to a flaw? Yes, because if we say that a thing is contained by a set of "A or B", it could be "A", or it could be "B".

Now that we've done your useless exercise of playing with words, what have we achieved? Absolutely nothing, which is why games like these aren't tolerated in real workplaces, since this is a waste of everyone's time.

You are behaving in a seriously insufferable way right now.

Sorry, I meant -- "I think that you are behaving in what feels like to me a seriously insufferable way right now, where by insufferable I mean having or showing unbearable arrogance or conceit".

Replies from: AllAmericanBreakfast, Duncan_Sabien
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-16T02:19:25.478Z · LW(p) · GW(p)

On reflection, I do think both Duncan and Said are demonstrating a significant amount of hair-splitting and less consistent, clear communication than they seem to think. That's not necessarily bad in and of itself - LW can be a place for making fine distinctions and working out unclear thoughts, when there's something important there.

It's really only when they're used as the basis for callouts and as fuel for an endless escalation-spiral that they become problematic.

When I think about this situation from both Duncan's and Said's points of view to the best of my ability, I understand why they'd be angry/frustrated/whatever, and how the search for reasons and rebuttals has escalated to the point where the very human and ordinary flaws of inconsistency and hair-splitting can seem like huge failings.

At this point, I really have lost the ability and interest to track the rounds and rounds of prosecutorial hair-splitting across multiple comment threads. It was never fun, it's not enlightening, and I don't think it's really the central issue at stake. It's more of a bitch eating crackers scenario at this point.

I made an effort to understand Said's point of view, and whatever his qualms with how I've expressed the crux of our disagreement, I feel satisfied with my level of understanding. From previous interactions and readings, I also think I understand what Duncan is frustrated about.

In my opinion, we need to disaggregate:

  • The interpersonal behavior of Duncan and Said
  • Their ideas
  • Their ways of expressing those ideas

My feeling right now is that Duncan and Said both have contributed valuable things in the past, and hopefully will in the future. Their ideas, and ways of expressing them, are not always perfect, and that is OK. But their approach to interpersonal behavior on this website, especially toward each other but also, to a lesser extent, toward other people, is not OK. We're really in the middle of a classic feud where "who started it" and "who's worse" and the litany of who-did-what-to-whom just goes on forever and ever, and I think the traditional solution in these cases is for some higher authority to come in and say "THIS FEUD IS DECLARED ENDED BY THE AUTHORITY OF THE CROWN."

If they can both recognize that about themselves, I would be satisfied if they just agreed to not speak to each other for a long time and to drop the argument. I would also like it if they both worked on figuring out how to cut their rate of becoming involved in angry escalation-spirals in half. Now would be an excellent time to begin that journey. I would also be open to that being mod-enforced in some sense.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-16T03:49:59.586Z · LW(p) · GW(p)

On reflection, I do think both Duncan and Said are demonstrating a significant amount of hair-splitting and less consistent, clear communication than they seem to think.

Communication is difficult; communication when subtleties must be conveyed, while there is interpersonal conflict taking place, much more difficult.

I don’t imagine that I have, in every comment I’ve written over the past day, or the past week (or month, or year, or decade), succeeded perfectly in getting my point across to all readers. I’ve tried to be clear and precise, as I always do; sometimes I succeed excellently, sometimes less so. If you say “Said, in that there comment you did not make your meaning very clear”, I think that’s a plausible criticism a priori, and certainly a fair one in some actual cases.

This is, to a greater or lesser degree, true of everyone. I think it is true of me less than of the average person—that is, I think that my writing tends to be more clear than most people’s. (Of course anyone is free to disagree; this sort of holistic judgment isn’t easy to operationalize!)

What I think I can’t be accused of, in general, is:

  • failing to provide (at least attempted) clarifications upon request
  • failing to cooperate with efforts aimed at achieving mutual understanding
  • failing to acknowledge the difficulties of communication, and to make reasonable attempts to overcome them
  • failing to maintain a civil and polite demeanor in the process

(Do you disagree?)

It also seems to me that there has been no “escalation” on my part, at any point in this process. (In general, I would say that as far as interpersonal behavior goes, mine has been close to exemplary given the circumstances.)

I am perfectly content to be ignored by Duncan. He is perfectly welcome to pretend that I don’t exist, as far as I’m concerned. I won’t even take it as an insult; I take the freedom of association quite seriously, and I believe that if some person simply doesn’t want to associate with another person, that is (barring various exceptional circumstances—having to do with, e.g., offices of public responsibility, etc.—none of which, as far as I can tell, apply here) their absolute right.

(Of course, that choice, while it is wholly Duncan’s, cannot possibly impose on me any obligation to act in any way I would not normally be obligated to act—to avoid referring to Duncan, to avoid replying to his comments, to avoid criticizing his ideas, etc. That’s just how the world is: you can control your own actions, but not the actions of others. Most people learn that lesson fairly early in life.)

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-16T04:38:55.837Z · LW(p) · GW(p)

What I think I can’t be accused of, in general, is:

  • failing to provide (at least attempted) clarifications upon request
  • failing to cooperate with efforts aimed at achieving mutual understanding
  • failing to acknowledge the difficulties of communication, and to make reasonable attempts to overcome them
  • failing to maintain a civil and polite demeanor in the process

(Do you disagree?)

 

Speaking to our interactions in this post, I do agree with you on all counts. Elsewhere, I think you fall short of my minimum definition of 'cooperative,' but I also understand that you have very different standards for what constitutes cooperative and I see this as a normative crux, one that is unlikely to be resolved through debate.

It also seems to me that there has been no “escalation” on my part, at any point in this process. (In general, I would say that as far as interpersonal behavior goes, mine has been close to exemplary given the circumstances.)

I also think this is true for our interactions here. Elsewhere, I disagree - you frequently are one of two main players in escalation spirals. I understand that, for you, that is typically the other person's fault. The most charitable way I can put my point of view is that, even if it is the other person's fault, I think that you should prioritize figuring out how to cut your rate of being involved in escalation spirals in half. That might involve a choice to reconsider certain comments, to comment differently, or to redirect your attention to people who have demonstrated a higher level of appreciation for your comments in the past.

(Of course, that choice, while it is wholly Duncan’s, cannot possibly impose on me any obligation to act in any way I would not normally be obligated to act—to avoid referring to Duncan, to avoid replying to his comments, to avoid criticizing his ideas, etc. That’s just how the world is: you can control your own actions, but not the actions of others. Most people learn that lesson fairly early in life.)

I think another lesson people learn early in life is that you can do whatever you want, but often, you shouldn't, because it has negative effects on others, and they learn to empathically care about other people's wellbeing. Our previous exchanges have convinced me that in important ways, you reject the idea that you ought to care about how your words and actions affect other people as long as they're within the bounds of the law. Again, I think this just brings us back to the crux of our disagreement, over whether and to what extent the feelings of insult you provoke in others are a moral consideration in deciding how to interact.

As I have grown quite confident in the nature of our disagreement, as well as its intractability, I am going to commit to signing off of LessWrong entirely for two weeks, because I think it will distract me. I will revisit further comments of yours (or PMs if you prefer) at that time.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-16T05:25:08.927Z · LW(p) · GW(p)

The most charitable way I can put my point of view is that, even if it is the other person’s fault, I think that you should prioritize figuring out how to cut your rate of being involved in escalation spirals in half.

If we’re referring to my participation in Less Wrong specifically (and I must assume that you are), then I have to point out that it would be very easy for me to cut my rate of being involved in what you call “escalation spirals” (regardless of whether I agree with your characterization of the situations in question) not only in half or even tenfold, but to zero. To do this, I would simply stop posting and commenting here.

The question then becomes whether there’s any unilateral action I can take, any unilateral change I can make, whose result would be that I could continue spending time on participation in Less Wrong discussions in such a way that there’s any point or utility in my doing so, while also to any non-trivial degree reducing the incidence of people being insulted (or “insulted”), escalating, etc.

It seems to me that there is not.

Certainly there are actions that other people (such as, say, the moderators of the site) could take, that would have that sort of outcome! Likewise, there are all sorts of trends, cultural shifts, organic changes in norms, etc., which would have a similarly fortunate result.

But is there anything that I could do, alone, to “solve” this “problem”, other than just not posting or commenting here? I certainly can’t imagine anything like that.

(EDIT: And this is, of course, to say nothing of the question of whether it even should be “my problem to solve”! I think you can guess where I stand on that issue…)

Our previous exchanges have convinced me that in important ways, you reject the idea that you ought to care about how your words and actions affect other people as long as they’re within the bounds of the law.

I do not think that this is an accurate characterization of any views that I hold.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-16T02:50:50.243Z · LW(p) · GW(p)

...while acting as though somehow that would have been less offensive if they had only added "I suspect" to the latter half of that sentence as well. Raise your hand if you think that "I suspect that you won't like this idea, because I suspect that you have the emotional maturity of a child" is less offensive because it now represents an unambiguously true statement of an opinion rather than being misconstrued as a fact.

The thing that makes LW meaningfully different from the rest of the internet is people bothering to pay attention to meaningful distinctions even a little bit.

The distance between "I categorize Said as a liar" and "Said is a liar" is easily 10x and quite plausibly 100-1000x the distance between "You blocked people due to criticizing you" and "you blocked people for criticizing you." The latter is two synonymous phrases; the former is not.

(I also explicitly acknowledged that Ray's rounding was the right rounding to make, whereas Said was doing the opposite and pretending that swapping "due to" and "for" had somehow changed the meaning in a way that made the paraphrase invalid.)

You being like "Stop using phrases that meticulously track uncommon distinctions you've made; we already have perfectly good phrases that ignore those distinctions!" is not the flex you seem to think it is; color blindness [LW · GW] is not a virtue.

Replies from: AllAmericanBreakfast, anonymousaisafety
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-16T03:20:21.834Z · LW(p) · GW(p)

The thing that makes LW meaningfully different from the rest of the internet is people bothering to pay attention to meaningful distinctions even a little bit.

In my opinion, the internet has fine-grained distinctions aplenty. In fact, where to split hairs and where to twist braids is sort of basic to each political subculture. What I think makes LessWrong different is that we take a somewhat, maybe not agnostic but more like a liberal/pluralistic view of the categories. We understand them as constructs, "made for man," as Scott put it once, and as largely open to critical investigation and not just enforcement. We try to create the social basis for a critical investigation to happen productively.

When anonymousaisafety complains of hair-splitting, I think they are saying that, while the distinction between "I categorize Said as a liar" and "Said is a liar" is probably actually 100-1000x as important as the distinction between "due to" and "for" in your mind, other people also get to weigh in on that question and may not agree with you, at least not in context.

If you really think the difference between these two very similar phrasings is so huge, and you want that to land with other people, then you need to make that difference apparent in your word choice. You also need to accept that other factors beyond word choice play into how your words will be perceived: claiming this distinction is of tremendous importance lands differently in the context of this giant adversarial escalation-spiral than it would in an alternate reality where you were writing a calm and collected post and had never gotten into a big argument with Said. This is part of why it's so important to figure out how to avoid these conflict spirals. They make it very difficult to avoid reading immediate personal motivations into your choice of words and where you lay the emphasis, and thus it becomes very hard to consider your preferred categorization scheme as a general principle. That's not to say it wouldn't be good - just that the context in which you're advocating for it gets in the way.

That said, I continue to think anonymousaisafety is clearly taking sides here, and continuing to use an escalatory/inflammatory tone that only contributes further to the dynamic. While I acknowledge that I am disagreeing with Duncan here, and that might be very frustrating for him, I hope that I come across not as blaming but more as explaining my point of view on what the problem is here, and registering my personal reaction to what Duncan is arguing for in context.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-16T03:25:27.079Z · LW(p) · GW(p)

You also need to accept that other factors beyond word choice play into how your words will be perceived: claiming this distinction is of tremendous importance lands differently in the context of this giant adversarial escalation-spiral than it would in an alternate reality where you were writing a calm and collected post and had never gotten into a big argument with Said

Er. I very explicitly did not claim that it was a distinction of tremendous importance. I was just objecting to the anonymous person's putting them in the same bucket. 

In my opinion, the internet has fine-grained distinctions aplenty. In fact, where to split hairs and where to twist braids is sort of basic to each political subculture. What I think makes LessWrong different is that we take a somewhat, maybe not agnostic but more like a liberal/pluralistic view of the categories.

Endorsed/updated; this is a better summary than the one I gave.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-16T04:19:12.316Z · LW(p) · GW(p)

So are you saying that although the distinction between the two versions of the “liar” phrase is 100-1000x bigger than the due to/for distinction, it is still not tremendously important?

Replies from: dxu, Duncan_Sabien
comment by dxu · 2023-04-16T04:36:58.435Z · LW(p) · GW(p)

As a single point of evidence: it's immediately obvious to me what the difference is between "X is true" and "I think X" (for starters, note that these two sentences have different subjects, with the former's subject being "X" and the latter's being "I"). On the other hand, "you A'd someone due to their B'ing" and "you A'd someone for B'ing" do, actually, sound synonymous to me—and although I'm open to the idea that there's a distinction I'm missing here (just as there might be people to whom the first distinction is invisible), from where I currently stand, the difference between the first pair of sentences looks, not just 10x or 1000x bigger, but infinitely bigger than the difference between the second, because the difference between the second is zero.

(And if you accept that [the difference between the second pair of phrases is zero], then yes, it's quite possible for some other difference to be massively larger than that, and yet not be tremendously important.)

Here, I do think that Duncan is doing something different from even the typical LWer, in that he—so far as I can tell—spends much more time and effort talking about these fine-grained distinctions than do others, in a way that I think largely drags the conversation in unproductive directions; but I also think that in this context, where the accusation is that he "splits hairs" too much, it is acceptable for him to double down on the hair-splitting and point out that, actually, no, he only splits those hairs that are actually splittable.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-16T04:50:23.215Z · LW(p) · GW(p)

On the other hand, “you A’d someone due to their B’ing” and “you A’d someone for B’ing” do, actually, sound synonymous to me—and although I’m open to the idea that there’s a distinction I’m missing here

With the caveat that I think this sort of “litigation of minutiae of nuance” is of very limited utility[1], I am curious: would you consider “you A’d someone as a consequence of their B’ing” different from both the other two forms? Synonymous with them both? Synonymous with one but not the other?


  1. I find that I am increasingly coming around to @Vladimir_Nesov’s stance [LW(p) · GW(p)] on [LW(p) · GW(p)] nuance [LW(p) · GW(p)]. ↩︎

Replies from: dxu
comment by dxu · 2023-04-16T05:00:29.185Z · LW(p) · GW(p)

With the caveat that I think this sort of “litigation of minutiae of nuance” is of very limited utility

Yeah, I think I probably agree.

would you consider “you A’d someone as a consequence of their B’ing” different from both the other two forms? Synonymous with them both? Synonymous with one but not the other?

Synonymous as far as I can tell. (If there's an actual distinction in your view, which you're currently trying to lead me to via some kind of roundabout, Socratic pathway, I'd appreciate skipping to the part where you just tell me what you think the distinction is.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-16T05:13:26.465Z · LW(p) · GW(p)

(If there’s an actual distinction in your view, which you’re currently trying to lead me to via some kind of roundabout, Socratic pathway, I’d appreciate skipping to the part where you just tell me what you think the distinction is.)

I had no such intention. It’s just that we already know that I think that X and Y seem like different things, and you think X and Y seem like the same thing, and since X and Y are the two forms which actually appeared in the referenced argument, there’s not much further to discuss, except to satisfy curiosity about the difference in our perceptions (which inquiry may involve positing some third thing Z). That’s really all that my question was about.

In case you are curious in turn—personally, I’d say that “you A’d someone as a consequence of their B’ing” seems to me to be the same as “you A’d someone due to their B’ing”, but different from “you A’d someone for their B’ing”. As far as characterizing the distinction, I can tell you only that the meaning I, personally, was trying to convey was the difference in what sort of rule or principle was being applied. (See, for instance, the difference between “I shot him for breaking into my house” and “I shot him because he broke into my house”. The former implies a punishment imposed as a judgment for a transgression, while the latter can easily include actions taken in self-defense or defense of property, or even unintentional actions.)

But, as I said, there is probably little point in pursuing this inquiry further.

Replies from: dxu
comment by dxu · 2023-04-16T05:15:30.178Z · LW(p) · GW(p)

Gotcha. Thanks for explaining, in any case; I appreciate it.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-16T04:48:09.873Z · LW(p) · GW(p)

Yeah. One is small, and the other is tiny. The actual comment that the anonymous person is mocking/castigating said:

I note (while acknowledging that this is a small and subtle distinction, but claiming that it is an important one nonetheless) that I said that I now categorize Said as a liar, which is an importantly and intentionally weaker claim than Said is a liar, i.e. "everyone should be able to see that he's a liar" or "if you don't think he's a liar you are definitely wrong."

(This is me in the past behaving in line with the points I just made under Said's comment, about not confusing [how things seem to me] with [how they are] or [how they do or should seem to others].)

This is much much closer to saying "Liar!" than it is to not saying "Liar!" ... if one is to round me off, that's the correct place to round me off to. But it is still a rounding.

comment by anonymousaisafety · 2023-04-16T03:05:58.989Z · LW(p) · GW(p)

I see that reading comprehension was an issue for you, since it seems that you stopped reading my post halfway through. Funny how a similar thing occurred on my last post too. It's almost like you think that the rules don't apply to you, since everyone else is required to read every single word in your posts with meticulous accuracy, whereas you're free to pick & choose at your whim.

Replies from: T3t, dxu
comment by RobertM (T3t) · 2023-04-16T05:36:34.258Z · LW(p) · GW(p)

I'm deeply uncertain about how often it's worth litigating the implied meta-level concerns; I'm not at all uncertain that this way of expressing them was inappropriate.  I don't want to see sniping like this on LessWrong, and especially not in comment threads like this.

Consider this a warning to knock it off.

comment by dxu · 2023-04-16T04:18:26.389Z · LW(p) · GW(p)

Might I ask what you hoped to achieve in this thread by writing this comment?

comment by philh · 2023-04-17T23:40:29.815Z · LW(p) · GW(p)

I expect to perceive a bare "what are some examples?" as mildly insulting even if the author is like "yes absolutely, here you go". And I expect to perceive a bare "examples?" as slightly more insulting.

Replies from: Benito, SaidAchmiz
comment by Ben Pace (Benito) · 2023-04-17T23:48:29.197Z · LW(p) · GW(p)

I don't think it's mildly insulting, I think it's ambiguously insulting, in that a person wanting to insult you might do it. But in general I think it's a totally reasonable question in truth-seeking and I'd be sad if people required disclaimers to clarify that it isn't meant insultingly, just to ask for examples of what the person is talking about.

(Commenting from Recent Discussion)

comment by Said Achmiz (SaidAchmiz) · 2023-04-18T00:16:07.364Z · LW(p) · GW(p)

Seconding Ben Pace’s answer [LW(p) · GW(p)]. This sort of thing is one case of a larger category of questions one might ask. Others include:

“Is the raw data available for download/viewing?” (No reason to be insulted, if your answer is “yes”, or if you have a good reason/excuse for not providing the data. Definitely reason to be insulted otherwise—but then you deserve the “insult”. Scare quotes because “insult” is really the wrong word; it’s more like “fairly inflicted disapproval”.)

“Could you make the code for your experimental setup available?” (Ditto. There could be good reasons why you can’t or won’t provide this! There’s no insult in that case. But if you don’t provide the code and you have no good reason for not doing so, then you deserve the disapproval.)

“Do you have a reference for that?” (Providing references for claims is good, but not always possible. But if you make an unreferenced claim and you have no good reason for doing that, you deserve the disapproval.)

In cases like this, there is, or should be, an expectation that people who are communicating and truth-seeking in good faith, with integrity, with honest intention of effectiveness, etc., will offer cooperation to each other and to their potential audience. This cooperation takes the form of—where possible—citing references for claims, providing data, publishing code, providing examples, clarifying usage of terms, etc., etc. Where possible, note! Of course these things cannot always be done. But where they can be done, they should be. These are simply the basic expectations, the basic epistemic courtesies we owe to each other (and to ourselves!).

So a question or request like “what are some examples”, “where is the data”, “citation please”—these are nothing more than requests (or reminders, if you like) for those basic elements of cooperation. There is no reason not to fulfill them, if you can. (And plenty of reasons to do so!) Sometimes you can’t, of course; then you say so, explaining why.

But why would you be insulted by any of this? What is the sense in refusing to cooperate in these ways?

(Especially if you have the answer to the question! If you have examples to provide—or data, code, citations, etc.—how the heck am I supposed to extract these things from you, if you think that asking for them is outré? You can provide them up front, or provide them on request—but if you don’t do the first, and take umbrage to the second, then… what’s left?)

Replies from: Raemon
comment by Raemon · 2023-04-18T00:23:51.232Z · LW(p) · GW(p)

(Flagging this as the second of the two comments I said Said could make. I've disabled his ability to comment/post for now. You're welcome to send moderators PMs to continue discussion with us. I'm working on a reply to your other comment addressed more specifically)

comment by Czynski (JacobKopczynski) · 2023-04-16T03:48:26.842Z · LW(p) · GW(p)

you seem to think there's this sort of latent potential for people to overcome their feelings of insult and social attack

Of course there is! People can and do overcome that when it's actually important to them. At work, as part of goals they care about, in relationships they care about. If we care about truth-seeking - and it's literally in the name that we do - then we can and will overcome that.

comment by dxu · 2023-04-15T20:02:13.895Z · LW(p) · GW(p)

This, however, assumes that “formative evaluations” must be complete works by single contributors, rather than collaborative efforts contributed to by multiple commenters. That is an unrealistic and unproductive assumption, and will lead to less evaluative work being done overall, not more.

I am curious as to your assessment of the degree of work done by a naked "this seems unclear, please explain"?

My own assessment would place the value of this (and nothing else) at fairly close to zero—unless, of course, you are implicitly taking credit for some of the discussion that follows (with the reasoning that, had the initiating comment been absent, the resulting discussion would not counterfactually exist). If so, I find this reasoning unconvincing, but I remain open to hearing reasons you might disagree with me about this—if in fact you do disagree. (And if you don't disagree, then from my perspective that sounds awfully like conceding the point; but perhaps you disagree with that, and if so, I would also like to hear why.)

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T20:16:04.918Z · LW(p) · GW(p)

I am curious as to your assessment of the degree of work done by a naked “this seems unclear, please explain”?

By “degree of work” do you mean “amount of effort invested” or “magnitude of effect achieved”?

If the former, then the answer, of course, is “that is irrelevant”. But it seems like you mean the latter—yes? In which case, the answer, empirically, is “often substantial”.

… unless, of course, you are implicitly taking credit for some of the discussion that follows (with the reasoning that, had the initiating comment been absent, the resulting discussion would not counterfactually exist)

Essentially, yes. And we do not need to imagine counterfactuals, either; we can see this happen, often enough (i.e., some post will be written, and nobody asks for examples, and none are given, and no discussion of particulars ensues). Individual cases differ in details, of course, but the pattern is clear.

Although I wouldn’t phrase it quite in terms of “taking credit” for the ensuing discussion. That’s not the point. The point is that the effect be achieved, and that actions which lead to the effect being achieved, be encouraged. If I write a comment like this one [LW(p) · GW(p)], and someone (as an aside, note, that in this case it was not the OP!) responds with comments like this one [LW(p) · GW(p)] and this one [LW(p) · GW(p)], then of course it would be silly of me to say “I deserve the credit for those replies!”—no, the author of those replies deserves the credit for those replies. But insofar as they wouldn’t have existed if I hadn’t posted my comment, then I deserve credit for having posted my comment. You are welcome to say “but you deserve less credit, maybe even almost no credit”; that’s fine. (Although, as I’ve noted before, the degree to which such prompts are appreciated and rewarded ought to scale with the likelihood of their counterfactual absence, i.e., if I hadn’t written that comment, would someone else have? But that’s a secondary point.) It’s fine if you want to assign me only epsilon credit.

What’s not fine is if, instead, you debit me for that comment. That would be completely backwards, and fundamentally confused about what sorts of contributions are valuable, and indeed about what the point of this website even is.

If so, I find this reasoning unconvincing

Why?

Replies from: dxu
comment by dxu · 2023-04-15T21:03:20.603Z · LW(p) · GW(p)

If so, I find this reasoning unconvincing

Why?

I mostly don't agree that "the pattern is clear"—which is to say, I do take issue with saying "we do not need to imagine counterfactuals". Here is (to my mind) a salient example [LW(p) · GW(p)] of a top-level comment which provides an example illustrating the point of the OP, without the need for prompting.

I think this is mostly what happens, in the absence of such prompting: if someone thinks of a useful example, they can provide it in the comments (and accrue social credit/karma for their contribution, if indeed other users found said contribution useful). Conversely, if no examples come to mind, then a mere request from some other user ("Examples?") generally will not cause sudden examples to spring into mind (and to the extent that it does, the examples in question are likely to be ad hoc, generated in a somewhat defensive frame of mind, and accordingly less useful).

And, of course, the crucial observation here is that in neither case was the request for examples useful; in the former case, the request was unnecessary, as the examples would have been provided in any case, and in the latter case, the request was useless, as it failed to elicit anything of value.

Here, I anticipate a two-pronged objection from you—one prong for each branch I have described. The first prong I anticipate is that, empirically, we do observe people providing examples when asked, and not otherwise. My response to this is that (again) this does not serve as evidence for your thesis, since we cannot observe the counterfactual worlds in which this request was/wasn't made, respectively. (I also observe that we have some evidence to the contrary, in our actual world, wherein sometimes an exhortation to provide examples is simply ignored; moreover, this occurs more often in cases where the asker appears to have put in little effort to generate examples of their own before asking.)

The second prong is that, in the case where no useful examples are elicited, this fact in itself conveys information—specifically, it conveys that the post's thesis is (apparently) difficult to substantiate, which should cause us to question its very substance. I am more sympathetic to this objection than I am to the previous—but still not very sympathetic, as there are quite often other reasons, unrelated to the defensibility of one's thesis, one might not wish to invest effort in producing such a response. In fact, I read Duncan's complaint as concerned with just this effect: not that being asked to provide examples is bad, but that the accompanying (implicit) interpretation wherein a failure to respond is interpreted as lack of ability to defend one's thesis creates an asymmetric (and undue) burden on him, the author.

That last bit in bold is, in my mind, the operative point here. Without that, even accepting everything else I said as valid and correct, you would still be able to respond, after all, that

What’s not fine is if, instead, you debit me for that comment. That would be completely backwards, and fundamentally confused about what sorts of contributions are valuable, and indeed about what the point of this website even is.

After all, even if such a comment is not particularly valuable in and of itself, it is not a net negative for discussion—and at least (arguably) sometimes positive. But with the inclusion of the bolded point, the cost-benefit analysis changes: asking for examples (without accompanying interpretive effort, much of whose use is in signaling to the author that you, the commenter, are interested in reducing the cost to them of responding) is, in this culture, not merely a "formative evaluation" or even a start to such, but a challenge to them to respond—and a timed challenge, at that. And it is not hard at all for me to see why we ought to increase the cost ("debit", as you put it) for writing minimally useful comments that (often get construed as) issuing unilateral challenges to others!

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2023-04-15T22:10:22.473Z · LW(p) · GW(p)

I mostly don’t agree that “the pattern is clear”—which is to say, I do take issue with saying “we do not need to imagine counterfactuals”. Here is (to my mind) a salient example of a top-level comment which provides an example illustrating the point of the OP, without the need for prompting.

Yep, indeed, that is an example, and a good one.

Conversely, if no examples come to mind, then a mere request from some other user (“Examples?”) generally will not cause sudden examples to spring into mind (and to the extent that it does, the examples in question are likely to be ad hoc, generated in a somewhat defensive frame of mind, and accordingly less useful).

But I linked a case of exactly the thing you just said won’t happen! I linked it in the comment you just responded to!

Here is another example [LW(p) · GW(p)].

Here are more examples: one [LW(p) · GW(p)] two [LW(p) · GW(p)] three [LW(p) · GW(p)] (and a bonus particularly interesting sort-of-example [LW(p) · GW(p)])

The first prong I anticipate is that, empirically, we do observe people providing examples when asked, and not otherwise. My response to this is that (again) this does not serve as evidence for your thesis, since we cannot observe the counterfactual worlds in which this request was/wasn’t made, respectively.

This is a weak response given that I am pointing to a pattern.

I am more sympathetic to this objection than I am to the previous—but still not very sympathetic, as there are quite often other reasons, unrelated to the defensibility of one’s thesis, one might not wish to invest effort in producing such a response.

A very suspicious reply, in the general case. Not always false, of course! But suspicious. If such a condition obtains, it ought to be pointed out explicitly, and defended. It is quite improper, and lacking in intellectual integrity, to simply rely on social censure against requests for examples to shield you from having to explain why in this case it so happens that you don’t need to point to any extensions for your proffered intensions.

In fact, I read Duncan’s complaint as concerned with just this effect: not that being asked to provide examples is bad, but that the accompanying (implicit) interpretation wherein a failure to respond is interpreted as lack of ability to defend one’s thesis creates an asymmetric (and undue) burden on him, the author.

I agree that Duncan’s complaint includes this. I just think that he’s wrong about this. (And wrong in such a way that he should know that he’s wrong.) The burden is (a) not just on the author, but also on the reader (including the one who requested the examples!), and (b) not undue, but in fact quite the opposite.

But with the inclusion of the bolded point, the cost-benefit analysis changes: asking for examples (without accompanying interpretive effort, much of whose use is in signaling to the author that you, the commenter, are interested in reducing the cost to them of responding) is, in this culture, not merely a “formative evaluation” or even a start to such, but a challenge to them to respond—and a timed challenge, at that. And it is not hard at all for me to see why we ought to increase the cost (“debit”, as you put it) for writing minimally useful comments that (often get construed as) issuing unilateral challenges to others!

First, on the subject of “accompanying interpretive effort”: I think that such effort not only doesn’t reduce the cost to authors of responding, it can easily increase the cost. (See my previous commentary on the subject of “interpretive effort” for much expansion of this point.)

Second, on the subject of “cost to the author of responding”: that cost should not be very high, since the author should, ideally, already have examples in mind.

(As an aside, I wonder at the fact that you, and others here, seem so consistently to ignore this point: if an author makes a strong claim, and has no examples ready, and can’t easily come up with such, and also has no good case for why examples are inapplicable / unhelpful / irrelevant / whatever, that is a bad sign. There is a good chance that the author should not have written the post at all, in such a case!)

Third, on the subject of “challenge to the author”: see above re: “cost to the author”, but also note that the “challenge”, such as it is (I’d call it a “question” or a “prompt”; as I say elsewhere [LW(p) · GW(p)], it’s not adversarial by default!) can be met by others, as well.

comment by Said Achmiz (SaidAchmiz) · 2023-04-15T01:05:02.718Z · LW(p) · GW(p)

Said’s response was “that seems less fun to me”

It was not.

I did not say anything like this, nor is this my reason for not participating, nor is this a reasonable summary of what I described as my reasons.

(I have another comment on another one of your listed cruxes, but I just wanted to very clearly object to this one.)

comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T21:04:18.538Z · LW(p) · GW(p)

Gwern, harsh as you can sometimes be, your critical comments are consistently well-researched, cited, and dense with information. I'm not always qualified to figure out if you're right or wrong, but your comments always seem substantive to me. This is the piece that I perceive as missing with so many of Said's comments - they lack the substance that you contribute, while being harsh and insulting in tone.

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2023-04-22T23:48:19.659Z · LW(p) · GW(p)

Gwern, harsh as you can sometimes be, your critical comments are consistently well-researched, cited, and dense with information.

The question is the validity of the argument about non-participation in the annual review, not the direction of your conclusion in particular cases, which is influenced by many reasons besides this argument. If you like gwern's comments for those other reasons, that doesn't inform the question of whether non-participation in the annual review should make you (or someone else) less charitable towards someone's "reasons or goals in commenting harshly on LW" (in whatever instances that occurs).

comment by Said Achmiz (SaidAchmiz) · 2023-04-15T00:07:10.277Z · LW(p) · GW(p)

Uh… I’m not quite sure that I follow. Is writing reviews… obligatory? Or even, in any sense, expected? I… wasn’t aware that I had been shirking any sort of duty, by not writing reviews. Is this a new site policy, or one which I missed? Otherwise, this seems like somewhat of an odd comment…

comment by clone of saturn · 2023-04-15T00:54:07.624Z · LW(p) · GW(p)

I'll go along with whatever rules you decide on, but that seems like an extremely long time to wait for basic clarifications like "what did you mean by this word" or "can you give a real-world example".

Replies from: habryka4
comment by habryka (habryka4) · 2023-04-15T01:28:52.850Z · LW(p) · GW(p)

I'll go along with whatever rules you decide on, but that seems like an extremely long time to wait for basic clarifications like "what did you mean by this word" or "can you give a real-world example".

Yep, I think genuine questions for clarification seem quite reasonable. Asking for additional clarifying examples is also pretty good.

I think an extended Socratic dialogue, where the end goal is to show some contradiction within the premise of the original post in a way that tries to question the frame of the post at a pretty deep level, is the kind of thing it can often make sense to wait on until people have had time to contextualize a post, though I am not confident here and it's plausible it should also happen almost immediately.

Replies from: clone of saturn, SaidAchmiz
comment by clone of saturn · 2023-04-15T03:54:50.711Z · LW(p) · GW(p)

I see. If the issue here is only with extended Socratic dialogues, rather than any criticism which is perceived as low-effort, that wasn't clear to me. I wouldn't be nearly as opposed to banning the former, if that could be operationalized in a reasonable way.

comment by Said Achmiz (SaidAchmiz) · 2023-04-15T02:07:53.648Z · LW(p) · GW(p)

See this comment [LW(p) · GW(p)] for my thoughts on the matter.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-14T21:29:25.112Z · LW(p) · GW(p)

This is false and uncharitable and I would like moderator clarification on whether this highly-upvoted [EDIT: at the time] comment is representative of the site leaders' vision of what LW should be.

Replies from: anonymousaisafety, habryka4
comment by anonymousaisafety · 2023-04-14T22:13:36.505Z · LW(p) · GW(p)

@Duncan_Sabien [LW · GW] I didn't actually upvote @clone of saturn [LW · GW]'s post, but when I read it, I found myself agreeing with it.

I've read a lot of your posts over the past few days because of this disagreement. My most charitable description of what I've read would be "spirited" and "passionate".

You strongly believe in a particular set of norms and want to teach everyone else. You welcome the feedback from your peers and excitedly embrace it, insofar as the dot product between a high-dimensional vector describing your norms and a similar vector describing the criticism is positive.

However, I've noticed that when someone actually disagrees with you -- and I mean disagreement in the sense of "I believe that this claim rests on incorrect priors and is therefore false." -- I have been shocked by the level of animosity you've shown in your writing.

Full disclosure: I originally messaged the moderators in private about your behavior, but I'm now writing this in public in part because of your continued statements on this thread that you've done nothing wrong.

I think that your responses over the past few days have been needlessly escalatory in a way that Said's weren't. If we go with the Socrates metaphor, Said is sitting there asking "why" over and over, but you've let emotions rule and leapt for violence (metaphorically, although you did then publish a post about killing Socrates, so YMMV).

There will always be people who don't communicate in a way that you'd prefer. It's important (for a strong, functioning team) to handle that gracefully. It looks to me that you've become so self-convinced that your communication style is "correct" that you've taken a war path towards the people who won't accept it -- Zack and Said.

In a company, this is problematic because some of the things that you're asking for are actually not possible for certain employees. Employees who have English as a second language, or who come from a different culture, or who may have autism, all might struggle with your requirements. As a concrete example, you wrote at length that saying "This is insane" is inflammatory in a way that "I think that this is insane" wouldn't be -- while I understand and appreciate the subtlety of that distinction, I also know that many people will view the difference between those statements as meaningless filler at best. I wrote some thoughts on that here: https://www.lesswrong.com/posts/9vjEavucqFnfSEvqk/on-aiming-for-convergence-on-truth?commentId=rGaKpCSkK6QnYBtD4 [LW(p) · GW(p)]

I believe that you are shutting down debates prematurely by casting your peers as antagonists towards you. In a corporate setting, as an engineer acquires more and more seniority, it becomes increasingly important for them to manage their emotions, because they're a role model for junior engineers.

I do think that @Said Achmiz [LW · GW] can improve their behavior too. In particular, I think Said could recognize that sometimes their posts are met with hostility, and rather than debating this particular point, they could gracefully disengage from a specific conversation when they determine that someone does not appreciate their contributions.

However, I worry that you, Duncan, are setting an increasingly poor example. I don't know that I agree with the ability to ban users from posts. I think I lean more towards "ability to hide any posts from a user" as a feature, more than "prevent users from commenting". That is to say, I think if you're triggered by Said or Zack, then the site should offer you tools to hide those posts automatically. But I don't think that you should be able to prevent Said or Zack from commenting on your posts, or prevent other commentators from seeing that criticism. In part, I agree strongly (and upvoted strongly) with @Wei_Dai [LW · GW]'s point elsewhere in this thread that blocking posters means we can't tell the difference between "no one criticized this" and "people who would criticize it couldn't", unless they write their own post, as @Zack_M_Davis [LW · GW] did.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-14T22:34:32.468Z · LW(p) · GW(p)

your continued statements on this thread that you've done nothing wrong.

This is literally false; it is objectively the case that no such statement exists. Here are all [LW(p) · GW(p)] the [LW(p) · GW(p)] comments [LW(p) · GW(p)] I've [LW(p) · GW(p)] left [LW(p) · GW(p)] on [LW(p) · GW(p)] this [LW(p) · GW(p)] thread [LW(p) · GW(p)] up to this point, none of which says or strongly implies "I've done nothing wrong." Some of them note that behavior that might seem disproportionate has additional causes upstream of it, that other people seem to me to be discounting, but that's not the same as me saying "I've done nothing wrong."

This is part of the problem. The actual words matter. The actual facts matter. If you inject into someone's words whatever you feel like, regardless of whether it's there or not, you can believe all sorts of things about e.g. their intentions or character.

LessWrong is becoming a place where people don't care to attend to stuff like "what was actually said," and that is something I find alienating, and am trying to pump against.

(My actual problem is less "this stuff appears in comments," which it always has, and more "it feels like it gets upvoted to the top more frequently these days," i.e. like the median user cares less than the median user of days past. I don't feel threatened by random strawmanning or random uncharitableness; I feel threatened when it's popular.)

Replies from: Vladimir_Nesov, anonymousaisafety
comment by Vladimir_Nesov · 2023-04-16T06:03:55.884Z · LW(p) · GW(p)

The actual facts matter.

But escalating to arbitrary levels of nuance makes communication infeasible; robustness to some fuzziness in the facts and their descriptions is crucial. When a particular distinction matters, it's worth highlighting. But highlighting consumes a limited resource: the economy of allocating importance to particular distinctions.

The threat of pointing to any distinction as something that has to be attended to imposes a minimum cost on all such distinctions; it costs across the board.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-16T06:10:07.489Z · LW(p) · GW(p)

I agree that escalating to arbitrary levels of nuance makes communication infeasible, and that you can and should only highlight the relevant and necessary distinctions.

I think "someone just outright said I'd repeatedly said stuff I hadn't" falls above the line, though.

comment by anonymousaisafety · 2023-04-14T22:54:16.069Z · LW(p) · GW(p)

Yes, I have read your posts. 

I note that in none of them did you take any part of the responsibility for escalating the disagreement to its current level of toxicity. 

You have instead pointed out Said's actions, and Said's behavior, and the moderators' lack of action, and how people "skim social points off the top", etc.

Replies from: AllAmericanBreakfast, Duncan_Sabien
comment by DirectedEvolution (AllAmericanBreakfast) · 2023-04-15T00:14:54.558Z · LW(p) · GW(p)

Anonymousaisafety, with respect, and acknowledging there's a bit of the pot calling the kettle black intrinsic in my comment here, I think your comments in this thread are also functioning to escalate the conflict, as was clone of saturn's top-level comment.

The things your comments are doing that seem to me escalatory include making an initially inaccurate criticism of Duncan ("your continued statements on this thread that you've done nothing wrong"), followed by a renewed criticism of Duncan that doesn't contain even a brief acknowledgement or apology for the original inaccuracy. Those are small relational skills that can be immensely helpful in dealing with a conflict smoothly.

None of that has any bearing on the truth-value of your critical claims - it just bears on the manner and context in which you're expressing them.

I think it is possible and desirable to address this conflict in a net-de-escalatory manner. The people best positioned to do so are the people who don't feel themselves to be embroiled in a conflict with Duncan or Said, or who can take genuine emotional distance from any such conflict.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T01:26:02.041Z · LW(p) · GW(p)

*shrug

You're an anonymous commenter who's been here for a year, sniping from the sidelines, who has shown that they're willing to misrepresent comments that are literally visible on this same page, and then, when I point that out, to ignore it completely and reiterate your beef. I think Ray wants me to say "strong downvote and I won't engage any further."

comment by habryka (habryka4) · 2023-04-14T22:48:05.722Z · LW(p) · GW(p)

Ray is owning stuff, so this is just me chiming in with some quick takes, but I think it is genuinely important for people to be able to raise hypotheses like "this person is trying to maintain a motte-and-bailey", and to tell people if that is their current model. 

I don't currently think the above comment violated any moderation norms I would enforce, though navigating this part of conversational space is super hard, and it's quite possible there are some really important norms in this space that should be enforced that I am missing. I have a model of a lot of norms in the space already; however, the above comment does not violate any of them right now (mostly because it does prefix the statement with "I suspect X", and does not claim any broader social consensus beyond that). 

I also think it's good for you to chime in and say that it's false. (You are also correct that it is uncharitable, but assuming that everyone is well-intentioned is IMO not true and not a required part of good discourse, so its being uncharitable seems true but also not obviously bad, and I am not sure what your pointing it out means. I think we should create justified knowledge of good intentions wherever possible; I just don't think LW comment threads, especially threads about moderation, are a space where achieving such common knowledge is remotely feasible.) 

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T01:28:12.783Z · LW(p) · GW(p)

This did not raise the hypothesis that I want to maintain a motte-and-bailey.

It asserted that I do, as if it were fact.

Replies from: habryka4
comment by habryka (habryka4) · 2023-04-15T01:31:23.592Z · LW(p) · GW(p)

It asserted that I do, as if it were fact

I am quite confused. The comment clearly says "I suspect"? That seems like one of the clearest prefixes I know for raising something as a hypothesis, and very clearly signals that something is not being asserted as a fact. Am I missing something?

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T01:34:41.597Z · LW(p) · GW(p)

The "I suspect" is attached to the "Duncan won't like this idea."  I would bet $10 that if you polled 100 readers on whether it was meant to include "I suspect that Duncan wants, etc.", a majority would say no; the second part was taken as given.

It's of the form "I suspect X, because Y." Not "I suspect X because I suspect Y."

Replies from: habryka4
comment by habryka (habryka4) · 2023-04-15T01:39:59.472Z · LW(p) · GW(p)

Oh, sure, I would be happy to take that bet. I agree there is some linguistic ambiguity here, but I think my interpretation is more natural.

In any case, @clone of saturn [LW · GW] can clarify here. I would currently bet this is just a sad case of linguistic ambiguity, not actually someone making a confident statement about you having ill-intent.

Replies from: clone of saturn, Duncan_Sabien
comment by clone of saturn · 2023-04-15T03:30:49.802Z · LW(p) · GW(p)

I can't read Duncan's mind and have no direct access to facts about his ultimate motivations. I can be much more confident that a person who is currently getting away with doing X has reason to dislike a rule that would prevent X. So the "I suspect" was much more about the second clause than the first. I find this so obvious that it never occurred to me that it could be read another way.

I don't accept Duncan's stand-in sentence "I suspect that Eric won't like the zoo, because he wants to stay out of the sun." as being properly analogous, because staying out of the sun is not something people typically need to hide or deny.

To be honest, I think I have to take this exchange as further evidence that Duncan is operating in bad faith. (Within this particular conflict, not necessarily in general.)

Replies from: Vaniver
comment by Vaniver · 2023-04-15T05:05:00.055Z · LW(p) · GW(p)

I would've preferred if you had proposed another alternative wording, so that poll could be run as well, instead of just identifying the feature you think is disanalogous. (If you supply the wording, after all, Duncan can't have twisted it, and your interpretation gets fairly tested.)

Replies from: clone of saturn, Duncan_Sabien
comment by clone of saturn · 2023-04-15T05:46:58.906Z · LW(p) · GW(p)

Why not just use the original sentence, with only the name changed? I don't see what is supposed to be accomplished by the other substitutions.

Replies from: T3t
comment by RobertM (T3t) · 2023-04-16T06:16:57.929Z · LW(p) · GW(p)

Unfortunately, I don't have quite the reach that Duncan has, but I think the result is still suggestive.  (Subtract one from each vote, since I left one of each to start, as is usual.)

Replies from: clone of saturn
comment by clone of saturn · 2023-04-16T07:50:20.600Z · LW(p) · GW(p)

Ok, I edited the comment.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-16T17:22:21.313Z · LW(p) · GW(p)

Does that influence 

To be honest, I think I have to take this exchange as further evidence that Duncan is operating in bad faith. (Within this particular conflict, not necessarily in general.)

in any way?

Edit, four days later: guess not. :/

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T07:05:41.705Z · LW(p) · GW(p)

Oliver proposed an alternative wording and I affirmed that I'd still bet on his wording.  I was figuring I shouldn't try to run a second poll myself because of priming/poisoning the well but I'm happy for someone else to go and get data.  

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T01:44:04.052Z · LW(p) · GW(p)

The poll is here for people to watch results trickle in, though I ask that no one present in this subthread vote so the numbers can be more raw.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T02:08:29.884Z · LW(p) · GW(p)

(It's early yet, but so far it is unanimously in favor of my interpretation, with twenty reactions one way and zero the other, and one comment in between the two choices I gave but writing out that the epistemic status on the second clause seems stronger than "I suspect".)

(Somewhat ironically, this makes me marginally more likely to interpret "well, I meant the more epistemically reserved thing" as being a fallback to a motte, if such a statement ever appears.)

comment by tslarm · 2023-04-16T06:37:10.346Z · LW(p) · GW(p)

Why do LW users need the ability to ban other users from commenting on their posts? 

If user X could choose to:

  • make all comments by user Y invisible to user X; and, optionally, 
  • enable a public notice on all of Y's replies to X that "Y is on X's ignore list",

what desirable thing would be missing?

The optional public notice would ensure that X's non-response to Y would not be taken to imply anything like tacit agreement; it would also let other users know that their comments downstream of a Y comment would not be seen by X.

('All comments' in the first point could be replaced by 'all replies to X', or 'all replies to a top-level post by X', depending on what exactly X wants to achieve.)

Replies from: Raemon
comment by Raemon · 2023-04-16T06:48:43.885Z · LW(p) · GW(p)

Then there'd be a whole discussion happening on an author's post that the author can't see, which produces weird effects where not everyone can be quite sure what everyone else has seen; the author will be aware enough of it to know something is happening, but will be operating blind.

My experience with this style of blocking on FB is that it's pretty terrible.

Replies from: tslarm
comment by tslarm · 2023-04-16T06:54:01.333Z · LW(p) · GW(p)

Wouldn't the 'public notice' in my second point remove that ambiguity?

And I'm not playing dumb here, I just don't use Facebook: does it have something like that public notice?

Replies from: Raemon
comment by Raemon · 2023-04-16T06:55:58.251Z · LW(p) · GW(p)

The public notice is an innovation over facebook, but you'd still see a bunch of people referring to one conversation thread that you can't see. The problem is the lack of common knowledge of what's actually going on.

Replies from: tslarm
comment by tslarm · 2023-04-16T07:10:55.113Z · LW(p) · GW(p)

Fair enough, but it still seems like an okay situation to me, with something pretty close to common knowledge of what's going on: everyone but X knows exactly what is visible to whom, and X knows everything except the content (edit: and actual existence, but not potential existence) of a well-defined set of comments that they have chosen to opt out of.

So nobody but Y will have any trouble communicating with X; I guess occasionally someone will unthinkingly refer to something from a Y subthread, but any resulting confusion will be easy to resolve. (And there could be a norm/rule against anything akin to bypassing X's ignore list by reposting Y's comments.)

Not absolutely perfect, sure -- but the existing system certainly isn't either.

Replies from: Raemon
comment by Raemon · 2023-04-16T07:20:54.949Z · LW(p) · GW(p)

This mostly just doesn't actually solve the sort of problem that I think most authors have with hosting discussions they don't want on their post. (But, it's cruxy for me that I don't expect people to want it. If some authors I respected did want it I'd be open to it)

Replies from: tslarm
comment by tslarm · 2023-04-16T08:47:56.062Z · LW(p) · GW(p)

I guess an unstated part of my position is that there's a limit to how much control a LW user can reasonably expect to have over other users' commenting, and that if they want more control than my suggested system allows them then they should probably post to their own blog rather than LW. But I get that you (and at least some others) disagree with me, and/or are aware of users who do want more control and are sufficiently valuable to LW to justify catering to their needs in this way. I won't push the point; thanks for engaging.

(FWIW, my biggest issue with the current system is that it's not obvious to most readers when people are banned from commenting on a post, and thus some posts could appear to have an exaggerated level of support/absence of good counterarguments from the LW community.)

comment by Raemon · 2023-04-24T00:02:53.654Z · LW(p) · GW(p)

Update: Ruby and I have posted moderator notices for Duncan and Said in this thread. This was a set of fairly difficult moderation calls on established users and it seems good for the LessWrong userbase to have the opportunity to evaluate it and respond. I'm stickying this post for a day-or-so.

@Duncan_Sabien [LW · GW]'s commenting privileges are restored, with a warning [LW(p) · GW(p)]. @Said Achmiz [LW · GW] is currently under a rate limit (see details [LW(p) · GW(p)]).

comment by shminux · 2023-04-24T01:14:29.236Z · LW(p) · GW(p)

I have no horse in this race, but from my very much outside perspective it feels like the one who was drama-queening came out on top. Not a great look, but an important lesson. It would make much more sense to me if the consequences were equal or at least obviously commensurable. 

Replies from: habryka4, philh
comment by habryka (habryka4) · 2023-04-24T03:05:16.325Z · LW(p) · GW(p)

Ray is owning this decision, so he can say more if he wants, but as I understand it, the judgements here are at least trying to be pretty independent of the most recent conflict, and are both predominantly rooted in data gathered over many years of complaints, past comment threads, and user interviews. 

It would be quite surprising if, based on that, the consequences came out equal or obviously commensurable, given that these things differ hugely for different users (including in this case).

comment by philh · 2023-04-24T07:28:20.969Z · LW(p) · GW(p)

I feel like "drama queen" is kind of a weird accusation to make here. Like it has connotations to me of "making a fuss out of a triviality", but it seems clear that the mods did not think it was a triviality. My sense is that the mods wish Duncan had handled this situation differently, but do also think the thing he was reacting to was actually quite bad and worth-reacting-to. Maybe you just disagree with the mods, but...

I guess, can you clarify whether you mean something like

  • "I have not looked closely at this situation, but from a glance it seems to me that Duncan is making a fuss out of a triviality"? (If this is the case, do you really think you can draw important lessons from your glance?)
  • "I have looked closely at this situation, and it seems to me that Duncan is making a fuss out of a triviality"?
  • Neither of those is a good fit?
Replies from: shminux
comment by shminux · 2023-04-25T08:17:47.893Z · LW(p) · GW(p)

Funny how you are pulling a Said and asking for clarifications. My view is that in a situation where "someone is wrong on the internet", the most important skill is to be able to step away, and I suspect the mods would very much agree. I can also very much sympathize with Duncan here, having read his post "you don't exist, Duncan", and identified with the sentiment. Still, an argument online is a "triviality", as you call it, unless your real-life well being depends on it, which I don't think it does for either of the quarreling parties here. They both would do well to learn the skill of disengagement rather than creating drama (Duncan) or poking the other person incessantly (Said). As I said in my original comment, the drama queen won, whereas both should have been slapped equally. 

Replies from: pktechgirl, philh
comment by Elizabeth (pktechgirl) · 2023-04-25T19:56:17.225Z · LW(p) · GW(p)

Funny how you are pulling a Said and asking for clarifications

 

This sounds like a claim that what Said does is asking for clarification, and that other people do so infrequently enough that Said has major ownership over the concept. I strenuously object to both of these. Lots of people ask for clarifications, it's very normal, and the difference between how they ask for clarification and how Said asks is the issue at hand.

Replies from: shminux
comment by shminux · 2023-04-25T20:25:21.939Z · LW(p) · GW(p)

Noted.

comment by philh · 2023-04-25T11:20:07.938Z · LW(p) · GW(p)

Funny how you are pulling a Said and asking for clarifications.

It's been said multiple times that "asking for clarifications" is not the problem with what Said does. I don't think the similarities go much further than that.

an argument online is a “triviality”, as you call it, unless your real-life well being depends on it

I reject this blanket assertion.

It kinda sounds like you just... don't really care about LessWrong, and don't see how anyone else could? I am a person who does care about LW. The idea that I should automatically judge any argument on LW as trivial, without knowing the details beyond "this does not impact my real-life wellbeing", is frankly ridiculous.

Replies from: shminux
comment by shminux · 2023-04-25T16:59:22.933Z · LW(p) · GW(p)

Your view has been acknowledged. Since you call what I said "ridiculous", I do not believe any further exchange on this topic is worthwhile.

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-14T21:28:09.133Z · LW(p) · GW(p)

I claim an important missing piece between 3 and 4 is "Said spent literally thousands of words uncharitably psychoanalyzing me in a subthread under the LW moderation policy post, and mods did not care to intervene."

Replies from: gworley
comment by Gordon Seidoh Worley (gworley) · 2023-04-14T21:46:01.795Z · LW(p) · GW(p)

For what it's worth, I think you're unusually uncomfortable with people doing this. I've not read the specific thread you're referring to, but I recall you expressing especially and unusually high dislike for others performing analysis of your mind and motivations.

I'm not sure what to do with this, only that I think it's important background for folks trying to understand the situation. Most people dislike psychoanalyzing to some extent, but you seem like P99 in your degree of dislike. And, yes, I realize that annoyingly my comment is treading in the direction of analyzing you, but I'm trying to keep it just to external observations.

Replies from: Duncan_Sabien, Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T01:29:33.472Z · LW(p) · GW(p)

(I also note that I'm a bit sensitive after that one time that a prominent Jewish member of the rationality community falsely accused me on LW of wanting to ghettoize people and the comment was highly upvoted and there was no mod response for over a week. I imagine it's rather easier for other people to forget the impact of that than for me to.)

comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-14T21:50:25.141Z · LW(p) · GW(p)

I don't mind people forming hypotheses, as long as they flag that what they've got is a hypothesis.

I don't like it when people sort of ... skim social points off the top?  They launch a social attack with the veneer that it's just a hypothesis ("What, am I not allowed to have models and guesses?"), but (as gjm pointed out in that subthread) they don't actually respond to information, and update.  Observably, Said was pretending to represent a reasonable prior, but then refusing to move to a posterior.

That's the part I don't like.  If you gamble with someone else's reputation and prove wrong, you should lose, somehow; it shouldn't be a free action to insinuate negative things about another person and just walk away scot-free if they were all false and unjustified.

Replies from: pktechgirl, Duncan_Sabien
comment by Elizabeth (pktechgirl) · 2023-04-15T00:25:27.995Z · LW(p) · GW(p)

Not sure if this is a crux, but my impression is Said in particular is not accruing social credit for his comments. I agree that other people pulling similar maneuvers probably do, and that's often bad, but my impression is Said in particular has just gone too far.

Replies from: Duncan_Sabien
comment by [DEACTIVATED] Duncan Sabien (Duncan_Sabien) · 2023-04-15T01:19:19.315Z · LW(p) · GW(p)

He was successful enough that Vaniver took it seriously and, in a highly upvoted comment on this thread, fell for what I believe is a privileging-the-hypothesis gambit (details above under Vaniver's comment).

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2023-04-15T01:43:16.294Z · LW(p) · GW(p)

Thanks, I retract the comment

comment by Gentzel · 2023-04-18T10:17:13.791Z · LW(p) · GW(p)