LW Team is adjusting moderation policy
post by Raemon · 2023-04-04T20:41:07.603Z · LW · GW · 185 comments
Lots of new users have been joining LessWrong recently, who seem more filtered for "interest in discussing AI" than for being bought into any particular standards for rationalist discourse. I think there's been a shift in this direction over the past few years, but it's gotten much more extreme in the past few months.
So the LessWrong team is thinking through "what standards make sense for 'how people are expected to contribute on LessWrong'?" We'll likely be tightening up moderation standards, and laying out a clearer set of principles so those tightened standards make sense and feel fair.
In coming weeks we'll be thinking about those principles as we look over existing users, comments, and posts, asking "are these contributions making LessWrong better?"
Hopefully within a week or two, we'll have a post that outlines our current thinking in more detail.
Generally, expect heavier moderation, especially for newer users.
Two particular changes that should be going live within the next day or so:
- Users will need at least N karma in order to vote, where N is probably somewhere between 1 and 10.
- Comments from new users won't display by default until they've been approved by a moderator. (Both rules are sketched below.)
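As a rough illustration, here is a minimal sketch of the two rules; the threshold value and all names are illustrative assumptions, not LessWrong's actual implementation:

```python
# Hypothetical sketch of the two gating rules. N_VOTE_KARMA and all
# names here are illustrative assumptions, not the site's real code.

from dataclasses import dataclass

N_VOTE_KARMA = 5  # "somewhere between 1 and 10"

@dataclass
class User:
    karma: int
    is_new: bool

def can_vote(user: User) -> bool:
    # Rule 1: voting requires at least N karma.
    return user.karma >= N_VOTE_KARMA

def comment_visible(author: User, mod_approved: bool) -> bool:
    # Rule 2: a new user's comment stays hidden until a moderator approves it.
    return mod_approved or not author.is_new
```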
Broader Context
LessWrong has always had a goal of being a well-kept garden [LW · GW]. We have higher and more opinionated standards than most of the rest of the internet. In many cases we treat some issues as more "settled" than the rest of the internet, so that instead of endlessly rehashing the same questions we can move on to solving more difficult and interesting questions.
What this translates to in terms of moderation policy is a bit murky. We've been stepping up moderation over the past couple months and frequently run into issues like "it seems like this comment is missing some kind of 'LessWrong basics', but 'the basics' aren't well indexed and easy to reference." It's also not quite clear how to handle that from a moderation perspective.
I'm hoping to improve things so that 'the basics' are better indexed, but meanwhile it's just generally the case that if you participate on LessWrong, you are expected to have absorbed the set of principles in The Sequences (AKA Rationality A-Z).
In some cases you can get away without doing that while participating in local object level conversations, and pick up norms along the way. But if you're getting downvoted and you haven't read them, it's likely you're missing a lot of concepts or norms that are considered basic background reading on LessWrong. I recommend starting with the Sequences Highlights [? · GW], and I'd also note that you don't need to read the Sequences in order [LW · GW], you can pick some random posts that seem fun and jump around based on your interest.
(Note: it's of course pretty important to be able to question all your basic assumptions. But I think doing that in a productive way requires actually understanding why the current set of background assumptions is the way it is, and engaging with the object-level reasoning.)
There's also a straightforward question of quality. LessWrong deals with complicated questions. It's a place for making serious progress on those questions. One model I have of LessWrong is something like a university – there's a role for undergrads who are learning lots of stuff but aren't yet expected to be contributing to the cutting edge. There are grad students and professors who conduct novel research. But all of this is predicated on there being some barrier-to-entry. Not everyone gets accepted to any given university. You need some combination of intelligence, conscientiousness, etc to get accepted in the first place.
See this post by habryka for some more models of moderation [LW · GW].
Ideas we're considering, and questions we're trying to answer:
- What quality threshold does content need to hit in order to show up on the site at all? When is the right solution to approve but downvote immediately?
- How do we deal with low quality criticism? There's something sketchy about rejecting criticism. There are obvious hazards of groupthink. But a lot of criticism isn't well thought out, or is rehashing ideas we've spent a ton of time discussing and doesn't feel very productive.
- What are the actual rationality concepts LWers are basically required to understand to participate in most discussions? (for example: "beliefs are probabilistic, not binary, and you should update them incrementally [LW · GW]")
- What philosophical and/or empirical foundations can we take for granted to build on (e.g. reductionism, meta-ethics)?
- How much familiarity with the existing discussion of AI should you be expected to have to participate in comment threads about that?
- How does moderation of LessWrong intersect with moderating the Alignment Forum?
Again, hopefully in the near future we'll have a more thorough writeup about our answers to these. Meanwhile it seemed good to alert people this would be happening.
185 comments
Comments sorted by top scores.
comment by Raemon · 2023-04-05T00:02:14.260Z · LW(p) · GW(p)
I'm about to process the last few days worth of posts and comments. I'll be linking to this comment as a "here are my current guesses for how to handle various moderation calls".
↑ comment by Raemon · 2023-04-05T00:02:25.569Z · LW(p) · GW(p)
Succinctly explain the main point.
When we're processing many new user-posts a day, we don't have much time to evaluate each post.
So, one principle I think is fairly likely to become a "new user guideline" is "Make it pretty clear off the bat what the point of the post is." In ~3 sentences, try to make it clear who your target audience is, and what core point you're trying to communicate to them. If you're able to quickly gesture at the biggest-bit-of-evidence or argument that motivates your point, even better. (Though I understand sometimes this is hard).
This isn't necessarily how you have to write all the time on LessWrong! But your first post is something like an admissions-essay and should be optimized more for being legibly coherent and useful. (And honestly I think most LW posts should lean more in this direction)
In some sense this is similar to submitting something to a journal or magazine. Editors get tons of submissions. For your first couple posts, don't aim to write something that takes a lot of work for us to evaluate.
Corollary: Posts that are more likely to end up in the reject pile include...
- Fiction, especially if it looks like it's trying to make some kind of philosophical point, while being wrapped in a structure that makes that harder to evaluate. (I think good fiction plays a valuable role on LessWrong, I just don't recommend it until you've gotten more of a handle of the culture and background knowledge)
- Long manifestos. Like fiction, these are sometimes valuable. They can communicate something like an overarching way-of-seeing-the-world that is valuable in a different way from individual factual claims. But, a) I think it's a reasonable system to first make some more succinct posts, build up some credibility, and then ask the LessWrongOSphere to evaluate your lengthy treatise. b) honestly... your first manifesto just probably isn't very good. That's okay. No judgment. I've written manifestos that weren't very good and they were an important part of my learning process. Even my more recent manifestos tend to be less well received than my posts that argue a particular object-level claim.
Some users have asked: "Okay, but, when will I be allowed to post the long poetic prose that expresses the nuances of the idea I have in my heart?"
Often the answer is, well, when you get better at thinking and expressing yourself clearly enough that you've written a significantly different piece.
↑ comment by Ruby · 2023-04-05T00:51:51.719Z · LW(p) · GW(p)
I've always liked the pithy advice "you have to know the rules to break the rules", which I do consider valid in many domains.
Before I let users break generally good rules like "explain what your point is up front", I want to know that they could keep to the rule before they break it. The posts of many first-time users give me the feeling that their author isn't being rambly on purpose; they don't know how to write otherwise (or aren't willing to).
↑ comment by Raemon · 2023-04-05T00:22:54.127Z · LW(p) · GW(p)
Some top-level post topics that get much higher scrutiny:
1. Takes on AI
Simply because of the volume of it, the standards are higher. I recommend reading Scott Alexander's Superintelligence FAQ [LW · GW] as a good primer to make sure you understand the basics. Make sure you're familiar with the Orthogonality Thesis and Instrumental Convergence. I recommend both Eliezer's AGI Ruin: A List of Lethalities [LW · GW] and Paul Christiano's response post [LW · GW] so you understand what sort of difficulties the field is actually facing.
I suggest going to the most recent AI Open Questions [? · GW] thread, or looking into the FAQ at https://ui.stampy.ai/
2. Quantum Suicide/Immortality, Roko's Basilisk and Acausal Extortion.
In theory, these are topics that have room for novel questions and contributions. In practice, they seem to attract people who seem... to be looking for something to be anxious about? I don't have great advice for these people, but my impression is that they're almost always trapped in a loop where they're trying to think about it in enough detail that they don't have to be anxious anymore, but that doesn't work. They just keep finding new subthreads to be anxious about.
For Acausal Trade, I do think Critch's Acausal normalcy [LW · GW] might be a useful perspective that points your thoughts in a more useful direction. Alas, I don't have a great primer that succinctly explains why quantum immortality isn't a great frame, in a way that doesn't have a ton of philosophical dependencies.
I mostly recommend... going outside, hanging out with friends and finding other more productive things to get intellectually engrossed in.
3. Needing help with depression, akrasia, or medical advice with confusing mystery illness.
This is pretty sad and I feel quite bad saying it – on one hand, I do think LessWrong has some useful stuff to offer here. But too much focus on this has previously warped the community in weird ways – people with all kinds of problems come trying to get help and we just don't have the resources to help all of them.
For your first post on LessWrong, think of it more like you're applying to a university. Yes, universities have mental health departments for students and faculty... but when we're evaluating "does it make sense to let this person into this university", the focus should be on "does this person have the ability to make useful intellectual contributions?" not "do they need help in a way we can help with?"
↑ comment by Sherrinford · 2023-04-11T05:36:30.602Z · LW(p) · GW(p)
Maybe an FAQ for the intersection of #1, #2 and #3, "depressed/anxious because of AI", might be a good thing to be able to link to, though?
↑ comment by Celarix · 2023-04-07T14:38:45.621Z · LW(p) · GW(p)
3. Needing help with depression, akrasia, or medical advice with confusing mystery illness.
Bit of a shame to see this one, but I understand it. It's crunch time for AGI alignment and there's a lot on the line. Maybe those of us interested in self-help can post our thoughts on some of the rationalsphere blogs, or start our own.
I got a lot of value out of the more self-help and theory of mind posts here, especially Kaj Sotala's and Valentine's work on multiagent models of mind, and it'd be cool to have another place to continue discussions around that.
↑ comment by Raemon · 2023-04-05T02:09:38.473Z · LW(p) · GW(p)
A key question when I look at a new user on LessWrong trying to help with AI is, well, are they actually likely to be able to contribute to the field of AI safety?
If they are aiming to make direct novel intellectual contributions, this is in fact fairly hard. People have argued back and forth about how much raw IQ, conscientiousness, or other signs of promise a person needs to have. There have been some posts arguing that people are overly pessimistic and gatekeeping-y about AI safety.
But, I think it's just pretty importantly true that it takes a fairly significant combination of intelligence and dedication to contribute. Not everyone is cut out for doing original research. Many people pre-emptively focus on community building and governance because that feels easier and more tractable to them than original research. But those areas still require you to have a pretty good understanding of the field you're trying to govern or build a community for.
If someone writes a post on AI that seems like a bad take, which isn't really informed by the real challenges, should I be encouraging that person to make improvements and try again? Or just say "idk man, not everyone is cut out for this?"
Here's my current answer.
If you've written a take on AI that didn't seem to hit the LW team's quality bar, I would recommend some combination of:
- Read ~16 hours of background content, so you're not just completely missing the point. (I have some material in mind that I'll compile later, but for now this highlights roughly the amount of effort involved.)
- Set aside ~4 hours to think seriously about the topic. Try to find one sub-question you don't know the answer to, and make progress answering that sub-question.
- Write up your thoughts as a LW post.
(For each of these steps, organizing some friends to work together as a reading or thinking group can be helpful to make it more fun)
This doesn't guarantee that you'll be a good fit for AI safety work, but I think this is an amount of effort where it's possible for a LW mod to look at your work, and figure out if this is likely to be a good use of your time.
Some people may object "this is a lot of work." Yes, it is. If you're the right sort of person you may just find this work fun. But the bottom line is yes, this is work. You should not expect to contribute to the field without putting in serious work, and I'm basically happy to filter out of LessWrong people who a) seem to superficially have pretty confused takes, and b) are unwilling to put in 20 hours of research work.
↑ comment by Raemon · 2023-04-06T23:01:22.962Z · LW(p) · GW(p)
Draft in progress. Common failure modes for AI posts that I want to reference later:
Trying to help with AI Alignment
"Let's make the AI not do anything."
This is essentially a very expensive rock. Other people will be building AIs that do do stuff. How does your AI help the situation over not building anything at all?
"Let's make the AI do [some specific thing that seems maybe helpful when parsed as an english sentence], without actually describing how to make sure they do exactly or even approximately that english sentence"
The problem is a) we don't know how to point an AI at doing anything at all, and b) your simple english sentence includes a ton of hidden assumptions.
(Note: I think Mark Xu sort of disagreed with Oli on something related to this recently, so I don't know that I consider this class of solution completely settled. I think Mark Xu thinks that we don't currently know how to get an AI to do moderately complicated actions with our current tech, but that our current paradigms for how to train AIs are likely to yield AIs that can do moderately complicated actions.)
I think the typical new user who says things like this still isn't advancing the current paradigm though, nor saying anything useful that hasn't already been said.
Arguing Alignment is Doomed
[less well formulated]
Lately there's been a crop of posts arguing alignment is doomed. I... don't even strongly disagree with them, but they tend to be poorly argued and seem confused about what good problem solving looks like.
Arguing AI Risk is a dumb concern that doesn't make sense
Lately (in particular since Eliezer's TIME article), we've had a bunch of people coming in to say we're a bunch of doomsday cultists and/or gish gallopers.
And, well, if you're just tuning in via the TIME article, or you have only been paying bits of attention over the years, I think this is a kinda reasonable belief-state to have. From the outside, when you hear an extreme-sounding claim, it's reasonable for alarm bells to go off and for you to assume it's maybe crazy.
If you were the first person bringing up this concern, I'd be interested in your take. But, we've had a ton of these, so you're not saying anything new by bringing it up.
You're welcome to post your take somewhere else, but if you want to participate on LessWrong, you need to engage with the object level arguments.
Here's a couple things I'll say about this:
- One particularly gishgallopy-feeling thing is that many arguments for AI catastrophe are disjunctive. So, yeah, there's not just one argument you can overturn and then we'll all be like "okay great, we can change our mind about this problem." BUT, it is the case that we are pretty curious about individual arguments getting overturned. If individual disjunctive arguments turned out to be flawed, that'd make the problem easier. So I'd be fairly excited about someone who digs into the details of various claims in AGI Ruin: A List of Lethalities [LW · GW] and either disproves them or finds a way around them. (A quick numerical illustration of the disjunctive point follows this list.)
- Another potentially gishgallopy-feeling thing is that if you're discussing things on LessWrong, you'll be expected to have absorbed the concepts from the sequences (such as how to think about subjective probability, tribalism, etc), either by reading the sequences or lurking a lot. I acknowledge this is pretty gishgallopy at first glance, if you came here to debate one particular thing. Alas, that's just how it is [see other FAQ question delving more into this]
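To make the disjunctive point concrete, here's a toy calculation; the probabilities are made up for illustration, and independence is assumed for simplicity:

```python
import math

def combined_risk(disjunct_probs):
    # P(at least one failure route succeeds), assuming independence.
    return 1 - math.prod(1 - p for p in disjunct_probs)

routes = [0.4, 0.3, 0.2]          # three hypothetical failure routes
print(combined_risk(routes))       # 0.664: no single rebuttal clears it

print(combined_risk([0.3, 0.2]))   # 0.44: but disproving one route
                                   # still meaningfully lowers total risk
```

So while no single counterargument dissolves the overall concern, each overturned disjunct genuinely shrinks it, which is why digging into individual claims is still valued.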
↑ comment by Ruby · 2023-04-06T20:22:44.011Z · LW(p) · GW(p)
Here's a quickly written draft for an FAQ we might send users whose content gets blocked from appearing on the site.
The “My post/comment was rejected” FAQ
Why was my submission rejected?
Common reasons that the LW Mod team will reject your post or comment:
- It fails to acknowledge or build upon standard responses and objections that are well-known on LessWrong.
- The LessWrong website is 14 years old and the community behind it older still. Our core readings [link] are over half a million words. So understandably, there’s a lot you might have missed!
- Unfortunately, as the amount of interest in LessWrong grows, we can’t afford to let cutting-edge content get submerged under content from people who aren’t yet caught up to the rest of the site.
- It is poorly reasoned. It contains some mix of bad arguments and obviously bad positions that it does not feel worth the LessWrong mod team or LessWrong community’s time or effort responding to.
- It is difficult to read. Not all posts and comments are equally well-written or make their points equally clearly. While established users might get more charity, for new users, we require that we (and others) can easily follow what you’re saying well enough to know whether or not you’re saying something interesting.
- There are multiple ways to end up difficult to read:
- Poorly crafted sentences and paragraphs
- Long and rambly
- Poorly structured, without signposting
- Poor formatting
- It’s a post and doesn’t say very much.
- Each post takes up some space and requires effort to click on and read, and the content of the post needs to make that worthwhile. Your post might have had a reasonable thought, but it didn’t justify a top-level post.
- If it’s a quick thought about AI, try an “AI No Dumb Questions Open Thread”
- Sometimes this just means you need to put in more effort, though effort isn’t the core thing.
- You are rude or hostile. C’mon.
Can I appeal or try again?
You are welcome to message us; however, note that due to volume, even though we read most messages, we will not necessarily respond, and if we do, we can’t engage in a lengthy back-and-forth.
In an ideal world, we’d have a lot more capacity to engage with each new contributor to discuss what was/wasn’t good about their content; unfortunately, with new submissions increasing on short timescales, we can’t afford that and have to be pretty strict in order to ensure site quality stays high.
This is censorship, etc.
LessWrong, while on the public internet, is not the general public. It was built to host a certain kind of discussion between certain kinds of users who’ve agreed to certain basics of discourse, who are on board with a shared philosophy, and can assume certain background knowledge.
Sometimes we say that LessWrong is a little bit like a “university”, and one way that it is true is that not anybody is entitled to walk in and demand that people host the conversation they like.
We hope to keep allowing for new accounts created and for new users to submit new content, but the only way we can do that is if we reject content and users who would degrade the site’s standards.
But diversity of opinion is important, echo chamber, etc.
That’s a good point and a risk that we face when moderating. The LessWrong moderation team tries hard not to reject things just because we disagree, and instead only does so if it feels like the content is failing on some other criterion.
Notably, one of the best ways to disagree (and that will likely get you upvotes) is to criticize not just a commonly-held position on LessWrong, but the reasons why it is held. If you show that you understand why people believe what they do, they’re much more likely to be interested in your criticisms.
How is the mod team kept accountable?
If your post or comment was rejected from the main site, it will be viewable alongside some indication for why it was banned. [we haven't built this yet but plan to soon]
Anyone who wants to audit our decision-making and moderation policies can review blocked content there.
↑ comment by Ben Pace (Benito) · 2023-04-06T20:33:14.294Z · LW(p) · GW(p)
It fails to acknowledge or build upon standard responses and objections that are well-known on LessWrong.
- The LessWrong website is 14 years old and the community behind it older still. Our core readings [link] are over half a million words. So understandably, there’s a lot you might have missed!
- Unfortunately, as the amount of interest in LessWrong grows, we can’t afford to let cutting-edge content get submerged under content from people who aren’t yet caught up to the rest of the site.
I do want to emphasize a subtle distinction between "you have brought up arguments that have already been brought up" and "you are challenging basic assumptions of the ideas here". I think challenging basic assumptions is well and good (like here [LW · GW]), while bringing up "but general intelligences can't exist because of no-free-lunch theorems" or "how could a computer ever do any harm, we can just unplug it" is quite understandably met with "we've spent 100s or 1000s of hours discussing and rebutting that specific argument, please go read about it <here> and come back when you're confident you're not making the same arguments as the last dozen new users".
I would like to make sure new users are specifically not given the impression that LW mods aren't open to basic assumptions being challenged, and I think it might be worth the space to specifically rule out that interpretation [LW · GW] somehow.
↑ comment by Ruby · 2023-04-06T23:26:03.335Z · LW(p) · GW(p)
Can be made more explicit, but this is exactly why the section opens with "acknowledge [existing stuff on topic]".
↑ comment by Ben Pace (Benito) · 2023-04-06T23:28:25.221Z · LW(p) · GW(p)
Well, I don't think you have to acknowledge existing stuff on this topic if you have a new and good argument.
Added: I think the phrasing I'd prefer is "You made an argument that has already been addressed extensively on LessWrong" rather than "You have talked about a topic without reading everything we've already written about on that topic".
↑ comment by Ruby · 2023-04-07T00:06:43.933Z · LW(p) · GW(p)
I do think there is an interesting question of "how much should people have read?" which is actually hard to answer.
There are people who don't need to read as much in order to say sensible and valuable things, and some people that no amount of reading seems to save.
The half a million words is the Sequences. I don't obviously want a rule that says you need to have read all of them in order to post/comment (nor do I think doing so is a guarantee), but I do want to say that if you make mistakes that the Sequences would teach you not to make, that could be grounds for not having your content approved.
A lot of the AI newbie stuff I'm disinclined to approve is the kind that makes claims that are actually countered in the Sequences, e.g. orthogonality thesis, treating the AI too much like humans, various fallacies involving words.
↑ comment by Ruby · 2023-04-06T23:36:45.543Z · LW(p) · GW(p)
How do you know you have a new and good argument if you don't know the standard things said on the topic?
And relatedly, why should I or other readers on LW assume that you have a new and good argument without any indication that you know the arguments in general?
This is aimed at users making their very first post/comment, and I think it is likely a good policy/heuristic for the mod team: a post that claims "AIs won't be dangerous because X" should tell me early on that you're not wasting my time, because you're already aware of all the standard arguments.
In a world where everyday a few dozen people who started thinking about AI two weeks ago show up on LessWrong and want to give their "why not just X?", I think it's reasonable to say "we want you to give some indication that you're aware of the basic discussion this site generally assumes".
↑ comment by Portia (Making_Philosophy_Better) · 2023-04-07T01:53:11.400Z · LW(p) · GW(p)
I find it hilarious that you can say this while, simultaneously, the vast majority of this community is deeply upset at being ignored by academia and companies, because they often have no formal degrees, peer-reviewed publications, or other evidence of having considered the relevant science. LessWrong fails the standards of those fields and areas. You routinely re-invent concepts that already exist. Or propose solutions that would be immediately rejected as infeasible if you tried to get them into a journal.
Explaining a concept your community takes for granted to outsiders can help you refresh it, understand it better, and spot potential problems. A lot of things taken for granted here are rejected by outsiders because they are not objectively plausible.
And a significant number of newcomers, while lacking LW canon, will have other relevant knowledge. If you make the bar too high, you deter them.
All this is particularly troubling because your canon is spread all over the place, extremely lengthy, individually usually incomplete or outdated, and filled in implicitly from prior forum interactions. Academic knowledge is more accessible by comparison.
↑ comment by Ruby · 2023-04-08T00:25:20.932Z · LW(p) · GW(p)
Writing hastily in the interests of time, sorry if not maximally clear.
Explaining a concept your community takes for granted to outsiders can help you refresh it, understand it better, and spot potential problems.
It's very much a matter of how many newcomers there are relative to existing members. If the number of existing members is large compared to newcomers, it's not so bad to take the time to explain things.
If the number of newcomers threatens to overwhelm the existing community, it's just not practical to let everyone in. Among other factors, certain conversation is possible because you can assume that most people have certain background and even if they disagree, at least know the things you know.
The need for getting stricter is because of the current (and forecasted) increase in new users. This means we can't afford to become 50% posts that ignore everything our community has already figured out.
LessWrong is an internet forum, but it's in the direction of a university/academic publication, and such publications only work because editors don't accept everything.
↑ comment by lc · 2023-04-07T05:09:36.553Z · LW(p) · GW(p)
the vast majority of this community is deeply upset they are being ignored by academia and companies, because they often have no formal degrees or peer reviewed publication, or other evidence of having considered the relevant science.
Source?
↑ comment by the gears to ascension (lahwran) · 2023-04-07T05:16:52.852Z · LW(p) · GW(p)
my guess is that that claim is slightly exaggerated, but I expect sources exist for an only mildly weaker claim. I certainly have been specifically mocked for my username in places that watch this site, for example.
↑ comment by lc · 2023-04-07T05:19:35.332Z · LW(p) · GW(p)
I certainly have been specifically mocked for my username in places that watch this site, for example.
This is an example of people mocking LW for something. Portia is making a claim about LW users' internal emotional states; she is asserting that they care deeply about academic recognition and feel infuriated they're not getting it. Does this describe you or the rest of the website, in your experience?
↑ comment by the gears to ascension (lahwran) · 2023-04-07T05:29:50.063Z · LW(p) · GW(p)
lens portia's writing out of frustrated tone first and it makes more sense. they're saying that recognition is something folks care about (yeah, I think so) and aren't getting to an appropriate degree (also seems true). like I said in my other comment - tone makes it harder to extract intended meaning.
↑ comment by lc · 2023-04-07T05:43:39.062Z · LW(p) · GW(p)
Well, I disagree. I have literally zero interest in currying the favor of academics, and think Portia is projecting a respect and yearning for status within universities onto the rest of us that mostly doesn't exist. I would additionally prefer if this community were able to set standards for its members without having to worry about or debate whether or not asking people to read canon is a status grab.
↑ comment by the gears to ascension (lahwran) · 2023-04-07T06:02:50.242Z · LW(p) · GW(p)
sure. I do think it's helpful to be academically valid sometimes though. you don't need to care, but some do some of the time somewhat. maybe not as much as the literal wording used here. catch ya later, anyhow.
↑ comment by the gears to ascension (lahwran) · 2023-04-07T05:13:08.159Z · LW(p) · GW(p)
strong agree, single upvote: harsh tone, but reasonable message. I hope the tone doesn't lead this point to be ignored, as I do think it's important. but leading with mocking does seem like it's probably why others have downvoted. downvote need not indicate refusal to consider, merely negative feedback to tone, but I worry about that, given the agree votes are also in the negative.
↑ comment by Portia (Making_Philosophy_Better) · 2023-04-07T12:37:51.492Z · LW(p) · GW(p)
Thank you. And I apologise for the tone. I think the back of my mind was haunted by Shoggoth with a Smiley face giving me advice for my weekend plans, and that emotional turmoil came out the wrong way.
I am in the strange position of being on this forum and in academia, and seeing both sides engage in the same barrier-keeping behaviour, each calling it elitist and misguided in the other but a necessary way to ensure quality and affirm a superior identity in its own group, is jarring. I've found valuable and admirable practices and insights in both, else I would not be there.
↑ comment by Thoth Hermes (thoth-hermes) · 2023-04-07T15:20:26.030Z · LW(p) · GW(p)
Any group that bears a credential, or performs negative selection of some kind, will bear the traits you speak of. 'Tis the nature of most task-performing groups human society produces. Alas, one cannot escape it, even coming to a group that once claimed to eschew credentialism. Nonetheless, it is still worthwhile to engage with these groups intellectually.
↑ comment by papetoast · 2023-04-10T02:06:17.358Z · LW(p) · GW(p)
I cannot access www.lesswrong.com/rejectedcontent [? · GW] (404 error). I suspect you guys forgot to give access to non-moderators, or you meant www.lesswrong.com/moderation [? · GW] (But there are no rejected posts there, only comments)
↑ comment by Raemon · 2023-04-10T02:13:43.361Z · LW(p) · GW(p)
We didn't build that yet but plan to soon. (I think what happened was Ruby wrote this up in a private google doc, I encouraged him to post it as a comment so I could link to it, and both of us forgot it included that explicit link. Sorry about that, I'll edit it to clarify)
comment by Raemon · 2023-04-09T22:19:41.906Z · LW(p) · GW(p)
Here's my best guess for overall "moderation frame", new this week, to handle the volume of users. (Note: I've discussed this with other LW team members, and I think there's rough buy-in for trying this out, but it's still pretty early in our discussion process, other team members might end up arguing for different solutions)
I think to scale the LessWrong userbase, it'd be really helpful to shift the default assumptions of LessWrong to "users by default have a rate limit of 1 comment per day" and "1 post per week."
If people get somewhat upvoted, they fairly quickly increase that rate limit to either "1 comment per hour" or "~3 comments per day" (I'm not sure which is better), so they can start participating in conversations. If they get somewhat more upvoted the rate limit disappears completely.
But to preserve this, users need to be producing content that is actively upvoted. If they get downvoted (or just produce a long string of barely-upvoted comments), they go back to the 1-per-day rate limit. If they're getting significantly downvoted, the rate limit ratchets up (to 1 per 3 days, then once per week, and eventually once per month, which is essentially saying "you're sort of banned, but you can periodically try again, and if your new comments get upvoted you'll get your privileges restored").
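As a rough sketch, the tiers described above might be encoded like this; all thresholds and rates here are my illustrative guesses, not numbers the team has committed to:

```python
# Hypothetical tier table mapping a user's recent net karma to an
# allowed comment rate. None means "no rate limit".

RATE_LIMIT_TIERS = [
    (10,  None),   # somewhat more upvoted: rate limit disappears
    (3,   3.0),    # somewhat upvoted: ~3 comments per day
    (0,   1.0),    # default: 1 comment per day
    (-5,  1 / 3),  # downvoted: 1 comment per 3 days
    (-15, 1 / 7),  # more downvoted: 1 comment per week
]

def comments_per_day(recent_net_karma: int):
    """Return the allowed comments per day, or None for unlimited."""
    for threshold, rate in RATE_LIMIT_TIERS:
        if recent_net_karma >= threshold:
            return rate
    return 1 / 30  # floor: once per month ("sort of banned, but can retry")
```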
Getting the tuning here exactly right, to avoid being really annoying to existing users who weren't doing anything wrong, is somewhat tricky, but a) I think there are at least some situations where the rules would be pretty straightforward, and b) I think it's an achievable goal to tune the system to basically work as intended.
When users have a rate limit, they get UI elements giving them some recommendations for what to do differently. (I think it's likely we can also build some quick-feedback buttons that moderators and some trusted users can use, so people have a bit more idea of what to do differently).
Once users have produced multiple highly upvoted posts/comments, they get more leniency (i.e. they can have a longer string of downvotes or longer non-upvoted back-and-forths before getting rate limited).
If we were starting a forum from scratch with this sort of design at its foundation, I think this could feel more like a positive thing (kinda like a videogame incentivizing good discussion and idea-generation, with built-in self-moderation).
Since we're not starting from scratch, I do expect this to feel pretty jarring and unfair to people. I think this is sad, but, I think some kind of change is necessary and we just have to pay the costs somewhere.
My model of @Vladimir_Nesov [LW · GW] pops up to warn about negative selection [LW · GW] here. (I'm not sure whether he thinks rate-limiting is as risky as banning, for negative-selection reasons; it certainly will still cause some people to bounce off.) I definitely see risks with negative selection punishing variance, but even the current number of mediocre comments has IMO been pretty bad for LessWrong, the growing amount I'm expecting in the coming year seems even worse, and I'm not sure what else to do.
↑ comment by Ben Pace (Benito) · 2023-04-09T23:18:08.802Z · LW(p) · GW(p)
shift the default assumptions of LessWrong to "users by default have a rate limit of 1-comment-per day"
Natural times I expect this to be frustrating are when someone's written a post, got 20 comments, and tries to reply to 5 of them, but is locked after the first one. 1 per day seems too strong there. I might say "unlimited daily comments on your own posts".
I also think I'd prefer a cut-off where after which you're trusted to comment freely. Reading the positive-selection post (which I agree with), I think some bars here could include having written a curated post or a post with 200+ karma or having 1000 karma on your account.
↑ comment by Raemon · 2023-04-10T00:20:55.679Z · LW(p) · GW(p)
I'm not particularly attached to these numbers, but fyi the scale I was originally imagining was "after the very first upvote, you get something like 3 comments a day, and after like 5-10 karma you don't have a rate limit." (And note, initially you get one post and one comment, so you get to reply to your post's first comment)
I think in practice, in the world where you receive 4 comments but a) your post hasn't been upvoted much and b) none of your responses to the first three comments got upvoted, my expectation is you're a user I'd indeed prefer to slow down, read up on site guidelines, and put more effort into subsequent comments.
I think having 1000 karma isn't actually a very high bar, but yeah I think users with 2+ posts that either have 100+ karma or are curated, should get a lot more leeway.
↑ comment by Ben Pace (Benito) · 2023-04-10T00:53:30.900Z · LW(p) · GW(p)
Ah good, I thought you were proposing a drastically higher bar.
↑ comment by Raemon · 2023-04-09T22:26:17.723Z · LW(p) · GW(p)
Here are some principles that are informing some of my thinking here, some pushing in different directions
- Karma isn't that great a metric – I think people often vote for dumb reasons, and they vote highest in drama-threads that don't actually reflect important new intellectual principles. I think there are maybe ways we can improve on the karma system, and I want to consider those soon. But I still think karma-as-is is at least a pretty decent proxy metric to keep the site running smoothly and scaling.
- Because karma is only a proxy metric, I'd still expect moderator judgment to play a significant role in making sure the system isn't going off the rails in the immediate future
- each comment comes with a bit of an attentional cost. If you make a hundred comments and get 10 karma (and no downvotes), I think you're most likely not a net-positive contributor. (i.e. each comment maybe costs 1/5th of a karma in attention or something like that)
- in addition, I think highly upvoted comments/posts tend to be dramatically more valuable than weakly upvoted comments/posts (i.e. a 50 karma comment is more than 10 times as valuable as a 5 karma comment, most of the time, with an exception IMO for drama threads). (See the toy calculation after this list.)
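Here's that toy calculation made explicit; the 1/5-karma attention cost comes from the bullet above, while the superlinearity exponent is my own illustrative stand-in:

```python
ATTENTION_COST = 0.2   # karma-equivalents of reader attention per comment
VALUE_EXPONENT = 1.7   # >1 encodes "highly upvoted is superlinearly valuable"

def net_value(comment_karmas):
    value = sum(max(k, 0) ** VALUE_EXPONENT for k in comment_karmas)
    cost = ATTENTION_COST * len(comment_karmas)
    return value - cost

# 100 comments totalling 10 karma: positive on raw karma,
# but net-negative once attention is priced in.
print(net_value([0] * 90 + [1] * 10))  # 10 - 20 = -10.0

print(net_value([50]))      # one 50-karma comment: ~772
print(net_value([5] * 10))  # ten 5-karma comments: ~152
```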
The current karma system kinda encourages people to write lots of comments that get slightly upvoted and gives them the impression of being an established regular. I think in most cases users with a total average karma of ~1-2 are typically commenting in ways that are persistently annoying in some way, in a way that'd be sort of fine with each individual comment but adds up to some kind of "death by a thousand cuts" thing that makes the site worse.
On the other hand, lots of people drawn to LessWrong have a lot of anxiety and scrupulosity issues and I generally don't want people overthinking this and spending a lot of time worrying about it.
My hope is to frame the thing more around positive rewards than punishments.
↑ comment by Elizabeth (pktechgirl) · 2023-04-10T05:24:17.801Z · LW(p) · GW(p)
I suggest not counting people's comments on their own posts towards the rate limit or the “barely upvoted” count. This both seems philosophically correct, and avoids penalizing authors of medium-karma posts for replying to questions (which often don’t get much if any karma).
↑ comment by Vladimir_Nesov · 2023-04-09T23:39:21.550Z · LW(p) · GW(p)
risks with negative selection
There should be fast tracks that present no practical limits to the new users. First few comments should be available immediately upon registration, possibly regenerating quickly. This should only degrade if there is downvoting or no upvoting, and the limits should go away completely according to an algorithm that passes backtesting on first comments made by users in good standing who started commenting within the last 3-4 years. That is, if hypothetically such a rate-limiting algorithm were to be applied 3 years ago to a user who started commenting then, who later became a clearly good contributor, the algorithm should succeed in (almost) never preventing that user from making any of the comments that were actually made, at the rate they were actually made.
If backtesting shows that this isn't feasible, implementing this feature is very bad. Crowdsource moderation instead, allow high-Karma users to rate-limit-vote on new users, but put rate-limit-level of new users to "almost unlimited" by default, until rate-limit-downvoted manually.
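A sketch of the proposed backtest: replay the real timestamps of a now-trusted user's first comments through a candidate limiter and count how many would have been blocked, with zero as the target. The token-bucket limiter and its parameters below are stand-ins, not a concrete proposal:

```python
from datetime import datetime

def backtest(comment_times: list[datetime],
             regen_hours: float = 8.0,
             max_chits: int = 4) -> int:
    """Count how many of this user's actual comments the candidate
    rate limiter would have blocked."""
    chits = float(max_chits)  # new users start with a full allowance
    last = comment_times[0]
    blocked = 0
    for t in comment_times:
        hours_elapsed = (t - last).total_seconds() / 3600
        chits = min(float(max_chits), chits + hours_elapsed / regen_hours)
        last = t
        if chits >= 1:
            chits -= 1
        else:
            blocked += 1
    return blocked
```

Run over a few years of historical data from users in good standing, this either validates a parameter setting or, if no setting passes, argues against building the feature at all, as the comment suggests.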
↑ comment by Ruby · 2023-04-10T04:11:40.053Z · LW(p) · GW(p)
I'm less optimistic than Ray about rate limits, but still think they're worth exploring. I think getting the limits/rules correct will be tricky since I do care about the normal flow of good conversation not getting impeded.
I think it's something we'll try soon, but not sure if it'll be the priority for this week.
↑ comment by Vladimir_Nesov · 2023-04-09T23:21:11.337Z · LW(p) · GW(p)
"users by default have a rate limit of 1-comment-per day" and "1 post per week."
Imagine a system that lets a user write their comments or posts in advance, and then publishes comments according to these limits automatically. Then these limits wouldn't be enough. On the other hand, if you want to write a comment, you want to write it right away, instead of only starting to write it the next day because you are out of commenting chits. It's very annoying if the UI doesn't allow you to do that and instead you need to write it down in a file on your own device, make a reminder to go back to the site once the timeout is up, and post it at that time, all the while remaining within the bounds of the rules.
Also, being able to reply to responses to your comments is important, especially when the responses are requests for clarification, as long as that doesn't turn into an infinite discussion. So I think commenting chits should accumulate to a maximum of at least 3-4, even if it takes a week to get there, possibly even more if it's been a month. But maybe an even better option is for all but one of these to be "reply chits" that are weaker than full "comment chits" and only work for replies-to-replies to your own comments or posts. While the full "comment chits" allow commenting anywhere.
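A minimal sketch of the two-chit idea; the names, caps, and regeneration schedule are all illustrative:

```python
class ChitWallet:
    MAX_COMMENT_CHITS = 1
    MAX_REPLY_CHITS = 3  # "a maximum of at least 3-4"

    def __init__(self):
        self.comment_chits = self.MAX_COMMENT_CHITS
        self.reply_chits = self.MAX_REPLY_CHITS

    def try_post(self, is_reply_in_own_thread: bool) -> bool:
        # Prefer the weaker reply chits for replies under your own
        # posts/comments, keeping full comment chits for anywhere else.
        if is_reply_in_own_thread and self.reply_chits > 0:
            self.reply_chits -= 1
            return True
        if self.comment_chits > 0:
            self.comment_chits -= 1
            return True
        return False

    def regen(self):
        # Called periodically (e.g. daily); chits accumulate up to their caps.
        self.comment_chits = min(self.MAX_COMMENT_CHITS, self.comment_chits + 1)
        self.reply_chits = min(self.MAX_REPLY_CHITS, self.reply_chits + 1)
```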
I don't see a way around either the annoyance or the feasibility of personally managed manual posting-schedule workarounds, other than implementing the queued-posting feature on LW, together with the ability to manage the queue, arranging the order/schedule in which the pending comments will be posted. Which is pretty convoluted, putting this whole development in question.
↑ comment by Raemon · 2023-04-09T23:32:21.065Z · LW(p) · GW(p)
LessWrong already stores comments you write in local storage, so you can edit one over the course of the day and post it later.
I… don’t see a reason to actively facilitate users having an easier time posting as often as possible, and not sure I understand your objection here.
↑ comment by Vladimir_Nesov · 2023-04-09T23:52:41.112Z · LW(p) · GW(p)
An obvious issue that could be fixed by the UI but isn't, that can be worked around outside the UI, is deliberate degradation of user experience. The blame is squarely on the developers, because it's an intentional decision by the developers. This should be always avoided, either by not creating this situation, or by fixing the UI. If this is not done, users will be annoyed, I think justifiably.
a reason to actively facilitate users having an easier time posting as often as possible
When users want to post, not facilitating that annoys them. If you actually knew that you wanted them to go away, you could've banned them already. You don't actually know; that's the whole issue here. Some of them are the reason there is a site at all, and it's very important to be a good host for them.
comment by Raemon · 2023-04-06T19:26:16.697Z · LW(p) · GW(p)
After chatting for a bit about what to do with low-quality new posts and comments, while being transparent and inspectably fair, the LW Team is currently somewhat optimistic about adding a section to lesswrong.com/moderation which lists all comments/posts that we've rejected for quality.
We haven't built it yet, so for the immediate future we'll just be strong-downvoting content that doesn't meet our quality bar. And for the immediate future, if existing users in good standing want to defend particular pieces as worth inclusion, they can do so here.
This is not a place for users who submitted rejected content to write an appeal (they can do that via PM, although we don't promise to reply since often we were just pretty confident in our take and the user hasn't offered new information), and I'll be deleting such comments that appear here.
(Is this maximally transparent? No. But, consider that it's still dramatically more transparent than a university or journal)
j/k I just tried this for 5 minutes and a) I don't actually want to approve users to make new posts (which is necessary currently to make their post appear), b) there's no current transparent solution that isn't a giant pain. So, not doing this for now, but we'll hopefully build a Rejected Content section at some point.
comment by Garrett Baker (D0TheMath) · 2023-04-05T13:26:34.870Z · LW(p) · GW(p)
What are the actual rationality concepts LWers are basically required to understand to participate in most discussions?
My prior is to have this bar set pretty high, like an 80-100%-of-the-Sequences level. I remember years ago when I finished the Sequences, I spent several months practicing everyday rationality in isolation, and only then deigned to visit LessWrong and talk to other rationalists. I was pretty disappointed with the average quality level, and felt like I dodged a bullet by spending those months thinking alone rather than with the wider community.
It also seems like average quality has decreased over the years.
Predictable confusion some will have: I’m talking about average quality here. Not 90th percentile quality posters.
↑ comment by MondSemmel · 2023-04-06T08:52:32.844Z · LW(p) · GW(p)
I think I'd prefer setting the bar lower, and instead using downvotes as a filter for merely low-quality (rather than abysmal-quality) content. For instance, most posts on LW receive almost no comments, so I'd suspect that filtering for even higher quality would just dry up the discussion even more.
↑ comment by Garrett Baker (D0TheMath) · 2023-04-06T13:28:57.287Z · LW(p) · GW(p)
The main reason I don’t reply to most posts is because I’m not guaranteed an interesting conversation, and it is not uncommon that I’d just be explaining a concept which seems obvious if you’ve read the sequences, which aren’t super fun conversations to have compared to alternative uses of my time.
For example, the other day I got into a discussion on LessWrong about whether I should worry about claims which are provably useless, and was accused of ignoring inconvenient truths for not doing so.
If the bar to entry was a lot higher, I think I’d comment more (and I think others would too, like TurnTrout).
↑ comment by MondSemmel · 2023-04-06T13:49:12.007Z · LW(p) · GW(p)
Maybe we have different experiences because we tend to read different LW content? I skip most of the AI content, so I don't have a great sense of the quality of comments there. If most AI discussions get a healthy amount of comments, but those comments are mostly noise, then I can certainly understand your perspective.
↑ comment by Ben Pace (Benito) · 2023-04-06T17:03:11.304Z · LW(p) · GW(p)
In my experience actively getting terrible comments can be more frustrating than a lack-of-comments is demotivating.
↑ comment by awg · 2023-04-06T17:06:47.981Z · LW(p) · GW(p)
Agreed. I think this also trends exponentially with the number of terrible comments. It is possible to be overwhelmed to death and have to completely relocate/start over (without proper prevention).
One thing that I think in the long term might be worth considering is something like the SomethingAwful approach: a one-time payment per account that is high enough to discourage trolls but low enough for most anyone to afford in combination with a strong culture and moderation (something LessWrong already has/is working on).
comment by lionhearted (Sebastian Marshall) (lionhearted) · 2023-04-04T23:54:50.237Z · LW(p) · GW(p)
Hey, first just wanted to say thanks and love and respect. The moderation team did such an amazing job bringing LW back from nearly defunct into the thriving place it is now. I'm not so active in posting now, but check the site logged out probably 3-5 times a week and my life is much better for it.
After that, a few ideas:
(1) While I don't 100% agree with every point he made, I think Duncan Sabien did an incredible job with "Basics of Rationalist Discourse" - https://www.lesswrong.com/posts/XPv4sYrKnPzeJASuk/basics-of-rationalist-discourse-1 [LW · GW] - perhaps a boiled-down canonical version of that could be created. Obviously the pressure to get something like that perfect would be high, so maybe something like "Our rough thoughts on how to be a good contributor here, which might get updated from time to time". Or just link Duncan's piece as "non-canonical for rules but a great starting place." I'd hazard a guess that 90% of regular users here agree with at least 70% of it? If everyone followed all of Sabien's guidelines, there'd be a rather high quality standard.
(2) I wonder if there are some reasonably precise questions you could ask new users to check for understanding, which could serve as a friendly-ish guidepost if a new user is going wayward. Your example - "(for example: "beliefs are probabilistic, not binary, and you should update them incrementally")" - seems like a really good one. Obviously those should be incredibly non-contentious, but something that would demonstrate a core understanding. Perhaps 3-5 of those, with new users maybe formally writing up some commentary on them on their personal blog before posting?
(3) It's fallen from its peak glory years, but sonsofsamhorn.net might be an interesting reference case to look at — it was one of the top analytical sports discussion forums for quite a while. At the height of its popularity, many users wanted to join but wouldn't understand the basics - for instance, that a poorly-positioned player on defense making a flashy "diving play" to get the baseball wasn't a sign of good defense, but rather a sign that that player has a fundamental weakness in their game, which could be investigated more deeply with statistics - and we can't just trust flashy replay videos to be accurate indicators of defensive skill. (Defense in American baseball is particularly hard to measure and sometimes contentious.) What SOSH did was create an area called "The Sandbox" which was relatively unrestricted — spam and abuse still weren't permitted of course, but the standard of rigor was a lot lower. Regular members would engage in Sandbox threads from time to time, and users who made excellent posts and comments in The Sandbox would get invited to full membership. Probably not needed at the current scale level, but might be worth starting to think about for a long-term solution if LW keeps growing.
Thanks so much for everything you and the team do.
↑ comment by Ruby · 2023-04-05T00:25:17.446Z · LW(p) · GW(p)
I did have the idea of there being regions with varying standards and barriers, in particular places where new users cannot comment easily and places where they can.
↑ comment by TekhneMakre · 2023-04-05T09:57:06.688Z · LW(p) · GW(p)
This feels like a/the natural solution. In particular, what occurred to me was:
- Make LW about rationality again.
- Expand the Alignment Forum:
  - By default, everything is as it is currently: a small set of users post, comment, and upvote, and that's what people see by default.
  - There's another section that's open to whoever.
The reasoning being that the influx is specifically about AI, not just a big influx.
↑ comment by Steven Byrnes (steve2152) · 2023-04-05T15:27:04.927Z · LW(p) · GW(p)
The idea of AF having both a passing-the-current-AF-bar section and a passing-the-current-LW-bar section is intriguing to me. With some thought about labeling etc., it could be a big win for non-alignment people (since LW can suppress alignment content more aggressively by default), and a big win for people trying to get into alignment (since they can host their stuff on a more professional-looking dedicated alignment site), and no harm done to the current AF people (since the LW-bar section would be clearly labeled and lower on the frontpage).
I didn’t think it through very carefully though.
↑ comment by niplav · 2023-04-05T11:46:47.173Z · LW(p) · GW(p)
I like this direction, but I'm not sure how broadly one would want to define rationality: Would a post collecting quotes about intracranial ultrasound stimulation for meditation enhancement be rationality related enough? What about weird quantified self experiments?
In general I appreciate LessWrong because it is so much broader than other fora, while still staying interesting.
↑ comment by TekhneMakre · 2023-04-05T14:58:43.611Z · LW(p) · GW(p)
Well, at least we can say, "whatever LW has been, minus most AI stuff".
↑ comment by Raemon · 2023-04-05T00:04:03.392Z · LW(p) · GW(p)
I do agree Duncan's post is pretty good, and while I don't think it's perfect I don't really have an alternative I think is better for new users getting a handle on the culture here.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-05T06:02:01.993Z · LW(p) · GW(p)
I'd be willing to put serious effort into editing/updating/redrafting the two sections that got the most constructive pushback, if that would help tip things over the edge.
↑ comment by Ilio · 2023-04-06T14:30:54.085Z · LW(p) · GW(p)
If you could add Q&As, this could turn into a certificate that one has verifiably read (or at least copy-pasted) what LessWrong expects its users to know before writing under some tag (with a specific set of Q&As for each main tag). Of course LW could also certify that this or that trusted user is welcome on all tags, and choose what tags can or can't appear on the front page.
↑ comment by gilch · 2023-04-07T00:19:21.585Z · LW(p) · GW(p)
I vaguely remember being not on board with that one and downvoting it. Basics of Rationalist Discourse doesn't seem to get to the core of what rationality is, and seems to preclude other approaches that might be valuable. Too strict and misses the point. I would hate for this to become the standard.
↑ comment by Vladimir_Nesov · 2023-04-05T20:42:19.100Z · LW(p) · GW(p)
I don't really have an alternative I think is better for new users getting a handle on the culture here
Culture is not systematically rationality (dath ilan wasn't built in a day). Not having an alternative that's better can coexist with this particular thing not being any good for the same purpose. And a thing that's any good could well be currently infeasible to make, for anyone.
Zack's post [LW · GW] describes the fundamental difficulty with this project pretty well. Adherence to most rules of discourse is not systematically an improvement for processes of finding truth [LW · GW], and there is a risk of costly cargo-cult activities, even if they are actually good for something else. The cost could be negative [LW · GW] selection by culture, losing its stated purpose [LW · GW].
↑ comment by Ben Pace (Benito) · 2023-04-05T02:25:52.486Z · LW(p) · GW(p)
I would vastly prefer new users to read it than to not read anything at all.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-05T11:33:07.594Z · LW(p) · GW(p)
There is no way that a post which some (otherwise non-banned) members of the site are banned from commenting on should be used as an onboarding tool for the site culture. The very fact of such bannings is a clear demonstration of the post’s unsuitability for purpose.
↑ comment by gjm · 2023-04-05T15:48:57.754Z · LW(p) · GW(p)
How does the fact of such bannings demonstrate the post's unsuitability for purpose?
I think it doesn't. For instance, I think the following scenario is clearly possible:
- There are users A and B who detest one another for entirely non-LW-related reasons. (Maybe they had a messy and distressing divorce or something.)
- A and B are both valuable contributors to LW, as long as they stay away from one another.
- A and B ban one another from commenting on their posts, because they detest one another. (Or, more positively, because they recognize that if they start interacting then sooner or later they will start behaving towards one another in unhelpful ways.)
- A writes an excellent post about LW culture, and bans B from it just like A bans B from all their posts (and vice versa).
If you think that Duncan's post specifically shouldn't be an LW-culture-onboarding tool because he banned you specifically from commenting on it, then I think you need reasons tied to the specifics of the post, or the specifics of your banning, or both.
(To be clear: I am not claiming that you don't have any such reasons, nor that Duncan is right to ban you from commenting on his frontpaged posts, nor that Duncan's "basics" post is good, nor am I claiming the opposite of any of those. I'm just saying that the thing you're saying doesn't follow from the thing you're saying it follows from.)
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-05T16:34:20.874Z · LW(p) · GW(p)
I suspect that you know perfectly well that the sort of scenario you describe doesn’t apply here. (If, by some chance, you did not know this: I affirm it now. There is no “messy and distressing divorce”, or any such thing; indeed I have never interacted with Duncan, in any way, in any venue other than on Less Wrong.)
The other people in question were, to my knowledge, also banned from commenting on Duncan’s posts due to their criticism of “Basics of Rationalist Discourse” (or due to related discussions, on related topics on Less Wrong), and likewise have no such “out of band” relationship with Duncan.
(All of this was, I think, obvious from context. But now it has been said explicitly. Given these facts, your objection does not apply.)
↑ comment by gjm · 2023-04-05T22:42:29.407Z · LW(p) · GW(p)
I did not claim that my scenario describes the actual situation; in fact, it should be very obvious from my last two paragraphs that I thought (and think) it likely not to.
What I claimed (and still claim) is that the mere fact that some people are banned from commenting on Duncan's frontpage posts is not on its own anything like a demonstration that any particular post he may have written isn't worthy of being used for LW culture onboarding.
Evidently you think that some more specific features of the situation do have that consequence. But you haven't said what those more specific features are, nor how they have that consequence.
Actually, elsewhere in the thread you've said something that at least gestures in that direction:
the point is that if the conversational norms cannot be discussed openly [...] there's no reason to believe that they're good norms. How were they vetted? [...] the more people are banned from commenting on the norms as a consequence of their criticism of said norms, the less we should believe that the norms are any good!
I don't think this argument works. (There may well be a better version of it, along the lines of "There seem to be an awful lot of people Duncan is unable or unwilling to get on with on LW. That suggests that there's something wrong with his style of interaction, or with how he deals with other people's styles of interaction. And that suggests that we should be very skeptical of anything he writes that purports to tell us what's a good style of interaction". But that would be a different argument.)
It's not true that the norms can't be discussed openly; at least, it seems to me that in general "there are a few people who aren't allowed to talk in place X" is importantly different from "there are a few people who aren't allowed to talk about the thing that's mostly talked about in place X" which in turn is importantly different from "the thing that's mostly talked about in place X cannot be discussed openly".
It does matter how extensively they've been discussed, and that question is worth asking. Part of the answer is that right now that post has 179 comments. Another part is that in fact quite a few of those comments are from you.
If it were true that criticizing Duncan's proposed norms gets you banned from interacting with Duncan (and hence from commenting on that post) then indeed that would be a problem. I do not get the impression that that's true. In particular, to whatever extent it's true that you were banned "as a consequence of [your] criticism of said norms", I think that has to be interpreted as "as a consequence of the manner of your criticism of said norms" as opposed to "as a consequence of the fact that you criticized said norms" or "as a consequence of the content of your criticism of said norms", and I don't think there's any particular conflict between "these norms have been subject to robust debate" and "some people argued about them in a way Duncan found disagreeable enough to trigger a ban".
(Again, it might be that Duncan is too sensitive to some styles of argument, or too ban-happy, or something, and that that is reason to be skeptical of his proposed principles. Again, that's a different argument.)
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-05T23:39:08.287Z · LW(p) · GW(p)
In particular, to whatever extent it’s true that you were banned “as a consequence of [your] criticism of said norms”, I think that has to be interpreted as “as a consequence of the manner of your criticism of said norms” as opposed to “as a consequence of the fact that you criticized said norms” or “as a consequence of the content of your criticism of said norms”
This is false.
I don’t think there’s any particular conflict between “these norms have been subject to robust debate” and “some people argued about them in a way Duncan found disagreeable enough to trigger a ban”.
There certainly is a conflict, if “the way Duncan found disagreeable” is “robust”.
(Again, it might be that Duncan is too sensitive to some styles of argument, or too ban-happy, or something, and that that is reason to be skeptical of his proposed principles. Again, that’s a different argument.)
Sorry, no. It’s the very argument in question. The “styles of argument” are “the ones that are directly critical of the heart of the claims being made”.
↑ comment by gjm · 2023-04-06T00:34:30.283Z · LW(p) · GW(p)
It is at best debatable that "this is false". Duncan (who, of course, is the person who did the banning) explicitly denies that you were banned for criticizing his proposed norms. Maybe he's just lying, but it's certainly not obvious that he is and it looks to me as if he isn't.
Duncan has also been pretty explicit about what he dislikes about your interactions with him, and what he says he objects to is definitely not simply "robust disagreement". Again, of course it's possible he's just lying; again, I don't see any reason to think he is.
You are claiming very confidently, as if it's a matter of common knowledge, that Duncan banned you from commenting on his frontpage posts because he can't accept direct criticism of his claims and proposals. I do not see any reason to think that that is true. You have not, so far as I can see, given any reason to think it is true. I think you should stop making that claim without either justification or it-seems-to-me-that hedging.
(Since it's fairly clear[1] that this is a matter of something like enmity rather than mere disagreement and in such contexts everything is liable to be taken as a declaration of What Side One Is On, I will say that I think both you and Duncan are clear net-positive contributors to LW, that I am pretty sure I understand what each of you finds intolerable about the other, and that I have zero intention of picking a side in the overall fight.)
[1] So it seems to me. I suspect you disagree, and perhaps Duncan does too, but this is the sort of thing it is very easy to deceive oneself about.
↑ comment by Vladimir_Nesov · 2023-04-06T18:32:22.171Z · LW(p) · GW(p)
You are claiming very confidently, as if it's a matter of common knowledge
As a decoupled aside, something not being a matter of common knowledge is not grounds for making claims of it less confidently; it's only grounds for a tiny bit of acknowledgment of this not being common knowledge, or of the claim not being expected to be persuasive in isolation.
↑ comment by gjm · 2023-04-06T18:59:00.195Z · LW(p) · GW(p)
I agree. If you are very certain of X but X isn't common knowledge (actually, "common knowledge" in the technical sense isn't needed, it's something like "agreed on by basically everyone around you") then it's fine to say e.g. "I am very certain of X, from which I infer Y", but I think there is something rude about simply saying "X, therefore Y" without any acknowledgement that some of your audience may disagree with X. (It feels as if the subtext is "if you don't agree with X then you're too ignorant/stupid/crazy for me to care at all what you think".)
In practice, it's rather common to do the thing I'm claiming is rude. I expect I do it myself from time to time. But I think it would be better if we didn't.
↑ comment by Vladimir_Nesov · 2023-04-06T19:29:01.910Z · LW(p) · GW(p)
My point is that this concern is adequately summarized by something like "claiming without acknowledgment/disclaimers", but not "claiming confidently" (which would change credence in the name of something that's not correctness).
I disagree that this is a problem in most cases (acknowledgment is a cost, and usually not informative), but acknowledge that this is debatable. This is similar to forms of politeness that require more words, as opposed to forms of politeness that, all else equal, leave the message length unchanged. Acknowledgment is useful where it's actually in doubt.
↑ comment by gjm · 2023-04-06T22:13:54.306Z · LW(p) · GW(p)
In this case, Said is both (1) claiming the thing very confidently, when it seems pretty clear to me that that confidence is not warranted, and (2) claiming it as if it's common knowledge, when it seems pretty clear to me that it's far from being common knowledge.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-06T00:56:50.968Z · LW(p) · GW(p)
It is at best debatable that “this is false”. Duncan (who, of course, is the person who did the banning) explicitly denies that you were banned for criticizing his proposed norms.
But of course he would deny it. As I’ve said, that’s the problem with giving members the power to ban people from their posts: it creates a conflict of interest. It lets people ban commenters for simply disagreeing with them, while being able to claim that it’s for some other reason. Why would Duncan say “yeah, I banned these people because I don’t like it when people point out the flaws in my arguments, the ways in which something I’ve written makes no sense, etc.”? It would make him look pretty bad to admit that, wouldn’t it? Why shouldn’t he instead say that he banned the people in question for some respectable reason? What downside is there, for him?
And given that, why in the world would we believe him when he says such things? Why would we ever believe any post author who, after banning a commenter who’s made a bunch of posts disagreeing with said author, claims that the ban was actually for some other reason? It doesn’t make any sense at all to take such claims seriously!
The reason why the “ban people from your own post(s)” feature is bad is that it gives people an incentive to make such false claims, not just to deceive others (that would be merely bad) but—much worse!—to deceive themselves about their reasons for issuing bans.
Duncan has also been pretty explicit about what he dislikes about your interactions with him, and what he says he objects to is definitely not simply “robust disagreement”. Again, of course it’s possible he’s just lying; again, I don’t see any reason to think he is.
The obvious reason to think so is that, having written something which is deserving of strong criticism—something which is seriously flawed, etc.—both letting people point this out, in clear and unmerciful terms, and banning them but admitting that you’ve banned them because you can’t take criticism, are unpleasant. (The latter more so than the former… or so we might hope!) Given the option to simply declare that the critics have supposedly violated some supposed norm (and that their violation is so terrible, so absolutely intolerable, that it outweighs the benefit of permitting their criticisms to be posted—quite a claim!), it would take an implausible, an almost superhuman, degree of integrity and force of will to resist doing just that. (Which is why it’s so bad to offer the option.)
it’s fairly clear[1] that this is a matter of something like enmity rather than mere disagreement
I have no idea where you think such “enmity” could come from. As I said, I haven’t interacted with Duncan in any venue except on Less Wrong, ever. I have no personal feelings, positive or negative, toward him.
↑ comment by gjm · 2023-04-06T02:59:41.017Z · LW(p) · GW(p)
We would believe it because
- on the whole, people are more likely to say true things than false things
- Duncan has said at some length what he claims to find unpleasant about interacting with you, it isn't just "Said keeps finding mistakes in what I have written", and it is (to me) very plausible that someone might find it unpleasant and annoying
- (I'm pretty sure that) other people have disagreed robustly with Duncan and not had him ban them from commenting on his posts.
You don't give any concrete reason for disbelieving the plausible explanations Duncan gives, you just say -- as you could say regardless of the facts of the matter in this case -- that of course someone banning someone from commenting on their posts won't admit to doing so for lousy reasons. No doubt that's true, but that doesn't mean they all are doing it for lousy reasons.
It seems pretty obvious to me where enmity could come from. You and Duncan have said a bunch of negative things about one another in public; it is absolutely commonplace to resent having people say negative things about you in public. Maybe it all started with straightforward disagreement about some matter of fact, but where we are now is that interactions between you and Duncan tend to get hostile, and this happens faster and further than (to me) seems adequately explained just by disagreements on the points ostensibly at issue.
(For the avoidance of doubt, I was not at all claiming that whatever enmity there might be started somewhere other than LW.)
↑ comment by Dagon · 2023-04-06T08:23:07.045Z · LW(p) · GW(p)
[ I don't follow either participant closely enough to have a strong opinion on the disagreement, aside from noting that it involves a lot of words, and not a lot of effort by either side to distill their own positions toward a crux, as opposed to attacking/defending. ]
on the whole, people are more likely to say true things than false things
In the case of contentious or adversarial discussions, people say incorrect and misleading things. "More likely true than false" is a uselessly low bar for seeking any truth or basing any decisions on.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-06T05:15:47.761Z · LW(p) · GW(p)
on the whole, people are more likely to say true things than false things
This is a claim so general as to be meaningless. If we knew absolutely nothing except “a person said a thing”, then retreating to this sort of maximally-vague prior might be relevant. But we in fact are discussing a quite specific situation, with quite specific particular and categorical features. There is no good reason to believe that the quoted prior survives that descent to specificity unscathed (and indeed it seems clear to me that it very much does not).
it isn’t just “Said keeps finding mistakes in what I have written”
It’s slightly more specific, of course—but this is, indeed, a good first approximation.
it is (to me) very plausible that someone might find it unpleasant and annoying
Of course it is! What is surprising about the fact that being challenged on your claims, being asked to give examples of alleged principles, having your theories questioned, having your arguments picked apart, and generally being treated as though you’re basically just some dude saying things which could easily be wrong in all sorts of ways, is unpleasant and annoying? People don’t like such things! On the scale of “man bites dog” to the reverse thereof, this particular insight is all the way at the latter end.
The whole point of this collective exercise that we’re engaged in, with the “rationality” and the “sanity waterline” and all that, is to help each other overcome this sort of resistance, and thereby to more consistently and quickly approach truth.
(I’m pretty sure that) other people have disagreed robustly with Duncan and not had him ban them from commenting on his posts.
Let’s see some examples, then we can talk.
You don’t give any concrete reason for disbelieving the plausible explanations Duncan gives, you just say—as you could say regardless of the facts of the matter in this case—that of course someone banning someone from commenting on their posts won’t admit to doing so for lousy reasons. No doubt that’s true, but that doesn’t mean they all are doing it for lousy reasons.
If Alice criticizes one of Bob’s posts, and Bob immediately or shortly thereafter bans Alice from commenting on Bob’s posts, the immediate default assumption should be that the criticism was the reason for the ban. Knowing nothing else, just based on these bare facts, we should jump right to the assumption that Bob’s reasons for banning Alice were lousy.
If we then learn that Bob has banned multiple people who criticized him robustly/forcefully/etc., and Bob claims that the bans in all of these cases were for good reasons, valid reasons, definitely not just “these people criticized me”… then unless Bob has some truly heroic evidence (of the sort that, really, it is almost never possible to get), his claims should be laughed out of the room.
(Indeed, I’ll go further and say that the default assumption—though a slightly weaker default—in all cases of anyone banning anyone else[1] from commenting on their posts is that the ban was for lousy reasons. Yes, in some cases that default is overridden by some exceptional circumstances. But until we learn of such circumstances, evaluate them, and judge them to be good reasons for such a ban, we should assume that the reasons are bad.)
And the problem here isn’t that our hypothetical Bob, or the actual Duncan, is a bad person, a liar, etc. Nothing of the sort need be true! (And in Duncan’s case, I think that probably nothing of the sort is true.) But it would be very foolish of us to simply take someone’s word in a case like this.
Again, this is the whole problem with the “post authors can ban people from their posts” feature—that it creates such situations, where people are tempted (by intellectual vanity—a terrible temptation indeed) to do things which instantly cast them in a bad light, because they cannot be distinguished from the actions of someone who is either unable (due to failure of reasoning or knowledge) or unwilling (due to weakness of ego or lack of integrity) to submit their ideas and writing to proper scrutiny. (That they truly believe themselves to be acting from only the purest motives is, of course, quite beside the point; the power of self-deception is hardly news to us, around these parts.)
It seems pretty obvious to me where enmity could come from.
If there’s any “enmity” (and I remain unsure that any such thing exists), it’s wholly one-sided. I haven’t said anything about or to Duncan that I wouldn’t say to anyone, should the situation warrant it. And I don’t think (though I haven’t gone through all my comments to verify this, but I can think of no exceptions) that I’ve said anything which I wouldn’t stand by.
[1] With the exception of, say, obvious spammers or cranks or other such egregious malefactors whom most reasonable observers would expect to just be banned from the whole forum.
↑ comment by gjm · 2023-04-06T13:31:21.878Z · LW(p) · GW(p)
You continue to assert, with apparent complete confidence, a claim about Duncan's motivations that (1) Duncan denies, (2) evidently seems to at least two people (me and dxu) to be far from obviously true, and (3) you provide no evidence for it that engages with any specifics at all. The trouble with 3 is that it cuts you off from the possibility of getting less wrong. If in fact Duncan's motivations were not as you think they are, how could you come to realise that?
(Maybe the answer is that you couldn't, because you judge that in the situation we're in the behaviour of someone with the motivations you claim is indistinguishable from that of someone with the motivations Duncan claims, and you're willing to bite that bullet.)
I don't agree with your analysis of the Alice/Bob situation. I think that in the situation as described, given only the information you give, we should be taking seriously at least these hypotheses: (1) Bob is just very ban-happy and bans anyone who criticizes him, (2) Bob keeps getting attacked in ban-worthy ways, but the reason that happens is that he's unreasonably provoking them, (3) Bob keeps getting attacked in ban-worthy ways, for reasons that don't reflect badly on Bob. And also various hybrids -- it's easy to envisage situations where 1,2,3 are all important aspects of what's going on. And to form a firm opinion between 1,2,3 we need more information. For instance, what did Alice's criticism actually look like? What did Bob say about his reasons? How have other people who are neither Alice nor Bob interpreted what happened?
Here's Duncan's actual description of what he says he finds unpleasant about interacting with you; it's from Zack's response to Duncan's proposed discourse norms [LW · GW]. (What happened next was that you made a brief reply, Duncan claimed that it was a demonstration of the exact dynamic he was complaining about, and said "Goodbye, Said"; I take it that's the point at which he banned you from commenting on his frontpage posts. So I think it's reasonable to take it as his account of why-Duncan-banned-Said.)
I find that interacting with Said is overwhelmingly net negative; most of what he seems to me to do is sit back and demand that his conversational partners connect every single dot for him, doing no work himself while he nitpicks with the entitlement of a spoiled princeling. I think his mode of engagement is super unrewarding and makes a supermajority of the threads he participates in worse, by dint of draining away all the energy and recursively proliferating non-cruxy rabbitholes. It has never once felt cooperative or collaborative; I can make twice the intellectual progress with half the effort with a randomly selected LWer. I do not care to spend any more energy whatsoever correcting the misconceptions that he is extremely skilled at producing, ad infinitum, and I shan't do so any longer; he's welcome to carry on being however confused or wrong he wants to be about the points I'm making; I don't find his confusion to be a proxy for any of the audiences whose understanding I care about.
Now, it seems clear to me that (1) if Duncan felt that way about interacting with you, it would be a very plausible explanation for the ban; (2) Duncan's claimed perception does somewhat match my impression of your commenting style on LW (though I would not frame it nearly as negatively as he did; as already mentioned I think your presence on LW is clearly net positive); (3) I accordingly find it plausible that Duncan is fairly accurately describing his own subjective reasons for not wanting to interact with you.
Of course all of that is consistent with Duncan actually being upset by being criticized and then casting about for rationalizations. But, again, it doesn't appear to me that he does anything like this every time someone strongly disagrees with him. In addition to dxu's example in a sibling of this comment (to which you objected that the disagreement there wasn't "robust"; like dxu I think it would be helpful to understand what you actually mean by "robust" and e.g. whether it implies what most people would call "rude"), here are a few more things people have said to Duncan without getting any sort of ban:
- https://www.lesswrong.com/posts/yepKvM5rsvbpix75G/you-don-t-exist-duncan?commentId=ZChwgNps7KactBaHh [LW(p) · GW(p)] (JBlack commenting on the "you don't exist" post; more dismissive than disagreeing; how much that matters depends on just what your model of Duncan is); see also https://www.lesswrong.com/posts/yepKvM5rsvbpix75G/you-don-t-exist-duncan?commentId=H65ffCkthuDfkJafp [LW(p) · GW(p)] (Charlie Steiner commenting on the same post, also kinda dismissive and rude, though my model of your model of Duncan isn't all that bothered by it)
- https://www.lesswrong.com/posts/k9dsbn8LZ6tTesDS3/sazen?commentId=LdDqFSijLievEQiQv [LW(p) · GW(p)] ("the gears to ascension" commenting on the "sazen" post; alleges that the whole post is a bad idea while apparently misunderstanding it badly; Duncan is clearly annoyed but no ban)
- https://www.lesswrong.com/posts/gbdqMaADqxMDwtgaf/exposure-to-lizardman-is-lethal?commentId=sneYCBckEFvL2hJdm [LW(p) · GW(p)] and the thread descending from it (link is to jaspax saying that one of Duncan's examples in the "lizardman" post is badly wrong; much of the subsequent discussion is a lengthy and vigorous dispute between tailcalled and Duncan; Duncan has not blocked either jaspax or tailcalled). There is a lot of other strong-disagreement-with-Duncan in that thread. One instance did result in Duncan blocking the user in question, but the fact that it's currently sitting at -31/-28 suggests that Duncan isn't the only person who considers it obnoxious.
I do, for the avoidance of doubt, think that Duncan is unusually willing to block people from commenting on his posts. But I don't think "Duncan blocks anyone who disagrees robustly with him" is a tenable position unless you are defining "robustly" in a nonstandard way.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-06T16:54:19.989Z · LW(p) · GW(p)
You continue to assert, with apparent complete confidence, a claim about Duncan’s motivations that (1) Duncan denies
Of course he denies it. I already explained that we’d expect him to deny it if it were true. Come on! This is extremely obvious stuff. Why would he not deny it?
And if indeed he’d deny it if it were true, and obviously would also deny it if it were false, then it’s not evidence. Right? Bayes!
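To spell the arithmetic out (a sketch, with illustrative labels rather than measured probabilities): if the denial is near-certain whether the accusation is true or false, the likelihood ratio is roughly 1, and the posterior odds simply equal the prior odds:

\[
\frac{P(\text{true} \mid \text{denial})}{P(\text{false} \mid \text{denial})}
= \frac{P(\text{denial} \mid \text{true})}{P(\text{denial} \mid \text{false})} \cdot \frac{P(\text{true})}{P(\text{false})}
\approx 1 \cdot \frac{P(\text{true})}{P(\text{false})}.
\]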
(2) evidently seems to at least two people (me and dxu) to be far from obviously true,
Yes, many people on Less Wrong have implausible degrees of “charity” in their priors on human behavior.
and (3) you provide no evidence for it that engages with any specifics at all. The trouble with 3 is that it cuts you off from the possibility of getting less wrong.
But of course it does no such thing! It means merely that I have a strong prior, and have seen no convincing evidence against.
If in fact Duncan’s motivations were not as you think they are, how could you come to realise that?
Same way I come to realize anything else: updating on evidence. (But it’d have to be some evidence!)
(Maybe the answer is that you couldn’t, because you judge that in the situation we’re in the behaviour of someone with the motivations you claim is indistinguishable from that of someone with the motivations Duncan claims, and you’re willing to bite that bullet.)
Pretty close to indistinguishable, yeah.
I don’t agree with your analysis of the Alice/Bob situation. I think that in the situation as described, given only the information you give, we should be taking seriously at least these hypotheses: (1) Bob is just very ban-happy and bans anyone who criticizes him, (2) Bob keeps getting attacked in ban-worthy ways, but the reason that happens is that he’s unreasonably provoking them, (3) Bob keeps getting attacked in ban-worthy ways, for reasons that don’t reflect badly on Bob. And also various hybrids—it’s easy to envisage situations where 1,2,3 are all important aspects of what’s going on.
(1) is the obvious default (because it’s quite common and ordinary). (2) seems to rest on the meaning of “unreasonably”; I think we can mostly conflate it with (3). And (3) certainly happens but isn’t anywhere close to the default.
Also, your (1) says “anyone”, but it could also be “anyone over a certain threshold of criticism strength/salience/etc.”. That makes it even more the obvious default.
And to form a firm opinion between 1,2,3 we need more information. For instance, what did Alice’s criticism actually look like? What did Bob say about his reasons? How have other people who are neither Alice nor Bob interpreted what happened?
Well, for one thing, “what did Bob say” can’t be given much weight, as I noted above.
The interpretation of third parties seems mostly irrelevant. If Carol observes the situation, she can reach her own conclusion without consulting Dave. Dave’s opinion shouldn’t be any kind of meaningful input into Carol’s evaluation.
As for “what did Alice’s criticism look like”, sure. We have to confirm that there aren’t any personal insults in there, for instance. Easy enough.
Here’s Duncan’s actual description of what he says he finds unpleasant about interacting with you … Now, it seems clear to me that (1) if Duncan felt that way about interacting with you, it would be a very plausible explanation for the ban … I accordingly find it plausible that Duncan is fairly accurately describing his own subjective reasons for not wanting to interact with you.
Yes, of course! I agree completely! In the quoted bit, Duncan says pretty much exactly what we’d expect him to say if he were very annoyed at being repeatedly questioned, challenged, and contradicted by some other commenter, in ways that he found himself unable to convincingly respond to, and which inability made him look bad. It makes sense that Duncan would, indeed, describe said commenter’s remarks in tendentious ways, using emotionally charged descriptions with strongly negative valence but few details, and that he would dismiss said commenter’s contributions as irrelevant, unimportant, and unworthy of engagement. It is totally unsurprising both that Duncan would experience aversive feelings when interacting with said commenter, and that he would report so experiencing.
Of course all of that is consistent with Duncan actually being upset by being criticized and then casting about for rationalizations.
You don’t say…
Actually, even “rationalizations” is too harsh. It’s more like “describing in a negative light things that are actually neutral or positive”. And the “casting about” absolutely need not (and, indeed, is unlikely to be) conscious.
And I know I’ve been hammering on this point, but I’m going to do it again: this is the problem with the “authors can ban users from their posts” feature. It gives LW participants an incentive to have these sorts of entirely genuine and not at all faked emotional responses. (See this old comment thread by Vladimir_M [LW(p) · GW(p)] for elaboration on the idea.) As I’ve said, I don’t think that Duncan is lying!
But, again, it doesn’t appear to me that he does anything like this every time someone strongly disagrees with him. In addition to dxu’s example in a sibling of this comment (to which you objected that the disagreement there wasn’t “robust”; like dxu I think it would be helpful to understand what you actually mean by “robust” and e.g. whether it implies what most people would call “rude”)
Nah, I don’t mean “rude”. But let’s take a look at your examples:
JBlack commenting on the “you don’t exist” post
User JBlack made a single comment on a single post by Duncan. Why ban him? What would that accomplish? Duncan isn’t some sort of unthinking ban-bot; he’s a quite intelligent person who, as far as I can tell, is generally capable of behaving reasonably with respect to his goals. Expecting that he’d ban JBlack as a result of this single comment doesn’t make much sense, even if we took everything I said about Duncan’s disposition to be wholly true! It’s not even much of a criticism!
“the gears to ascension” commenting on the “sazen” post
A brief exchange, at which point the user in question seems to have been deterred from commenting further. Their two comments were also downvoted, which is significant.
(Note, however, that if I were betting on “who will Duncan ban next”, user “the gears to ascension” would certainly be in the running—but not because of this one comment thread that you linked there.)
link is to jaspax saying that one of Duncan’s examples in the “lizardman” post is badly wrong; much of the subsequent discussion is a lengthy and vigorous dispute between tailcalled and Duncan
User jaspax made one comment on that post (and hasn’t commented on any of Duncan’s other posts, as far as I can find on a quick skim). User tailcalled is a more plausible candidate. I would likewise expect a ban if they commented in similar fashion on one or more subsequent posts by Duncan (this seems to have been the first such interaction).
I do, for the avoidance of doubt, think that Duncan is unusually willing to block people from commenting on his posts. But I don’t think “Duncan blocks anyone who disagrees robustly with him” is a tenable position unless you are defining “robustly” in a nonstandard way.
For one thing, I didn’t say “Duncan blocks anyone who disagrees robustly with him”. What I said (in response to your “other people have disagreed robustly with Duncan and not had him ban them from commenting on his posts”) was “Let’s see some examples, then we can talk”. Well, we’ve got one example now (the last one, with tailcalled), so we can talk.
Like I said before—Duncan’s a smart guy, not some sort of ban-bot who reflexively bans anyone who disagrees with him. Here’s what seems to me to be the heuristic:
- Some other user X comments on Duncan’s posts, criticizing Duncan in ways that he can’t easily counter.
- Duncan does not consider the criticism to be fair or productive.
- The critical comments are upvoted and/or endorsed or supported by others; there is no (or insufficient) convincing counter from any sympathetic third parties.
- The gestalt impression left with most readers seems likely to be one of Duncan being mistaken/unreasonable/wrong/etc.
- Duncan feels that this impression is false and unfair (he considers himself to have been in the right; see #2).
- User X comments on more than one post of Duncan’s, and seems likely to continue to comment on his future posts.
- X is undeterred by Duncan’s attempted pushback. (And why would they be? See #3.)
In such a scenario, the only way (other than leaving Less Wrong, or simply ceasing to write posts, which amounts to the same thing) to stop X from continuing to cast Duncan and his ideas and claims in a bad light—which Duncan feels is undeserved—is to ban X from his posts. So that’s what Duncan does.
Does this seem so far-fetched? I don’t think so. Indeed it seems almost reasonable, doesn’t it? I think there’s even a good chance that Duncan himself might endorse this characterization! (I certainly hope so, anyway; I’ve tried to ensure that there’s nothing in this description that would be implausible for Duncan to assent to.)
Do you disagree?
↑ comment by gjm · 2023-04-06T19:55:08.676Z · LW(p) · GW(p)
You say that if you were wrong about Duncan's motivations then you would discover "by updating on evidence" but I don't understand what sort of evidence you could possibly see that would make you update enough to make any difference. (Again, maybe this is a bullet you bite and you're content with just assuming bad faith and having no realistic way to discover if you're wrong.)
Although you say "Bayes!" it seems to me that what you're actually doing involves an uncomfortable amount of (something like) snapping probabilities to 0 and 1. That's a thing everyone does at least a bit, because we need to prune our hypothesis spaces to manageable size, but I think in this case it's making your reasoning invalid.
E.g., you say: Duncan would deny your accusation if it were true, and he would deny it if it were false, hence his denial tells us nothing. But that's all an oversimplification. If it were true, he might admit it; people do in fact not-so-infrequently admit it when they do bad things and get called out on it. Or he might deny it in a less specific way, rather than presenting a concrete explanation of what he did. Or he might just say nothing. (It's not like your accusation had been made when he originally said what he did.) Or he might present a concrete explanation that is substantially less plausible than the one he actually presented. So his denial does tell us something. Obviously not as much as if no one ever lied, deceived themselves, etc., but still something.
... I need to be a bit more precise, because there are two different versions of your accusation and the details of the calculation are different in the two cases. A1 is "Duncan blocked Said because he couldn't cope with being robustly criticized". A2 is something like "Duncan blocked Said because he couldn't cope with being robustly criticized, but he was unaware that that was his real reason and sincerely thought he was doing it because of how Said goes about criticizing, the things he objects to being things that many others might also object to". The claim "if it were true then he would behave that way" is much truer about A2 than about A1, but it is also much less probable a priori. (Compare "God created the universe 6000 years ago" with "God created the universe 6000 years ago, and carefully made it look exactly the same as it would have if it were billions of years old": the evidence that makes the former very unlikely is powerless against the latter, but that is not in fact an advantage of the latter.)
So, anyway, we have A1 and A2, and the alternative B: that Duncan blocked Said on account of features of Said's interaction-style that genuinely don't basically come down to "Said pointed out deficiencies in what Duncan said". (Because to whatever extent Duncan's objections are other things, whether they're reasonable or not, his blocking of Said after Said said negative things about Duncan's proposed norms doesn't indicate that robust discussion of those norms is impossible.) You say that some version of A is an "obvious default". Maybe so, but here again I think you're rounding-to-0-and-1. How obvious a default? How much more likely than B? It seems to me that A1 is obviously not more than, say, 4:1 favoured over B. (I am not sure it's favoured over B at all; 4:1 is not my best estimate, it's my most generous estimate.) And how much evidence does Duncan's own account of his motivations offer for B over A1? I think at least 2:1.
In other words, even if we agree that (1) Duncan's words aren't very much evidence of his motivations (most of the time, when someone says something it's much more than 2:1 evidence in favour of that thing being true) and (2) the bad-faith scenario is a priori substantially more likely than the good-faith one, with what seem to me actually realistic numbers we don't get more than 2:1 odds for bad faith over good faith.
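Making the odds arithmetic explicit (a sketch using the most-generous numbers above; \(E\) stands for Duncan's stated account of his motivations):

\[
\frac{P(A_1 \mid E)}{P(B \mid E)}
= \frac{P(E \mid A_1)}{P(E \mid B)} \cdot \frac{P(A_1)}{P(B)}
\le \frac{1}{2} \cdot \frac{4}{1} = \frac{2}{1}.
\]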
I claim that that is very much not sufficient grounds for writing as if the bad-faith explanation is definitely correct. (And I reiterate that 2:1 for bad over good is not my best estimate, it's my most charitable-to-Said's-position estimate.)
("Ah, but I meant A2 not A1!". Fine, but A2 implies A1 and cannot be more probable than A1.)
As you refine your proposed Duncan-blocking-model, it seems to me, it becomes less capable of supporting the criticism you were originally making on the basis of Duncan's blocking behaviour. You weren't very specific about exactly what that criticism was -- rather, you gestured broadly towards the fact of the blocking and invited us all to conclude that Duncan's proposed norms are bad, without saying anything about your reasoning -- and it still isn't perfectly clear to me exactly how we're supposed to get from the blocking-related facts to any conclusion about the merits of the proposed norms. But it seems like it has to be something along the lines of (a) "Duncan's blocking behaviour has ensured that his proposals don't get the sort of robust debate they should get before being treated as norms" and/or (b) "Duncan's blocking behaviour demonstrates that his discourse-preferences are bad, so we shouldn't be adopting guidelines he proposes"; and as we move from the original implicit "Duncan blocks everyone who disagrees with him" (which, no, you did not say explicitly in those words, nor did you say anything explicitly, but it definitely seemed like that was pretty much what you were gesturing at) to "Duncan blocks people who disagree with him persistently in multiple posts, in ways he considers unfair, and are in no way mollified by his responses to that disagreement", the amount of support the proposition offers for (a) and/or (b) decreases considerably.
Incidentally, I am fairly sure that the vagueness I am being a bit complainy about in the previous paragraph is much the same thing (or part of the same thing) as Duncan was referring to when he said
most of what he seems to me to do is sit back and demand that his conversational partners connect every single dot for him, doing no work himself while he nitpicks
and one reason why I find Duncan's account of his motivations more plausible than your rival account (not only as a description of the conscious motivations he permits himself to be aware of, but as a description of the actual causal history) is that the reasons he alleges do seem to me to correspond to elements of your behaviour that aren't just a matter of making cogent criticisms that he doesn't have good answers to.
I think you would do well to notice the extent to which your account of Duncan's motivations is seemingly optimized to present you in a good light, and consider whether some of the psychological mechanisms you think are at work in Duncan might be at work in you too. No one likes to admit that someone else has presented cogent criticisms of their position that they can't answer, true. But, also, no one likes to admit that when they thought they were presenting cogent criticisms they were being needlessly rude and uncooperative.
Also, while we're on the subject of your model of Duncan's motivations: a key element of that model seems to be that Duncan has trouble coping with criticisms that he can't refute. But when looking for examples of other people disagreeing robustly with Duncan, I found several cases where other people made criticisms to which he responded along the lines of "Yes, I agree; that's a deficiency in what I wrote.". So criticisms Duncan can't refute, as such, don't seem to send him off the rails: maybe there's an ingredient in the process that leads to his blocking people, but it can't be the only ingredient.
[EDITED to add:] It is not clear to me whether this discussion is making much useful progress. It is quite likely that my next reply in this thread will be along the lines of "Here are answers to specific questions/criticisms; beyond that, I don't think it's productive for me to continue".
↑ comment by Raemon · 2023-04-06T20:05:17.952Z · LW(p) · GW(p)
I think I do want to ask everyone to stop this conversation because it seems weirdly anchored on one particular example that, as far as I can tell, was basically a central example of what we wanted the author-moderation norms to be for in Meta-tations on Moderation [LW · GW], and they shouldn't be getting dragged through a trial-like thing for following the rules we gave them.
If I had an easy lock-thread button I'd probably have hit that ~last night. We do have lock-thread functionality but it's a bit annoying to use.
↑ comment by Vladimir_Nesov · 2023-04-08T21:32:15.725Z · LW(p) · GW(p)
they shouldn't be getting dragged through a trial-like thing for following the rules we gave them
They don't need to be personally involved. The rules protect authors' posts; they don't give the author immunity from being discussed somewhere else.
This situation is a question that merits discussion, with implications for general policy. It might have no place in this particular thread, but it should have a place somewhere convenient [LW · GW] (perhaps some sort of dedicated meta "subreddit", or under a meta tag [? · GW]). Not discussing particular cases restricts allowed forms of argument [LW · GW] and distorts understanding in systematic ways.
↑ comment by M. Y. Zuo · 2023-04-06T20:17:56.381Z · LW(p) · GW(p)
Tangentially, isn't there already plenty of onboarding material that's had input from most of the moderating team?
Simply excluding the material that hasn't been vetted by a large majority (or unanimity) of the team seems straightforward.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-06T20:40:15.720Z · LW(p) · GW(p)
Apologies, but I have now been forbidden from discussing the matter further [LW(p) · GW(p)].
Please feel free to contact me via private message if you’re interested in continuing the discussion. But if you want to leave things here, that’s also perfectly fine. (Strictly speaking, the ball at this point is in my court, but I wouldn’t presume to take the discussion to PM unilaterally; my guess is that you don’t think that’s particularly worth the effort, and that seems to me to be a reasonable view.)
↑ comment by dxu · 2023-04-06T05:54:19.930Z · LW(p) · GW(p)
This is a claim so general as to be meaningless. If we knew absolutely nothing except “a person said a thing”, then retreating to this sort of maximally-vague prior might be relevant. But we in fact are discussing a quite specific situation, with quite specific particular and categorical features. There is no good reason to believe that the quoted prior survives that descent to specificity unscathed (and indeed it seems clear to me that it very much does not).
The prior does in fact survive, in the absence of evidence that pushes one's conclusion away from it. And this evidence, I submit, you have not provided. (And the inferences you do put forth as evidence are—though this should be obvious from my previous sentence—not valid as inferences; more on this below.)
it isn’t just “Said keeps finding mistakes in what I have written”
It’s slightly more specific, of course—but this is, indeed, a good first approximation.
This is a substantially load-bearing statement. It would appear that Duncan denies this, that gjm thinks otherwise as well, and (to add a third person to the tally) I also find this claim suspicious. Numerical popularity of course does not determine the truth (or falsity) of a claim, but in such a case I think it behooves you to offer some additional evidence for your claim, beyond merely stating it as a brute fact. To wit:
What, of the things that Duncan has written in explanation of his decision to ban you from commenting on his posts (as was the subject matter being discussed in the quoted part of the grandparent comment, with the complete sentence being "Duncan has said at some length what he claims to find unpleasant about interacting with you, it isn't just 'Said keeps finding mistakes in what I have written', and it is (to me) very plausible that someone might find it unpleasant and annoying"), do you claim "approximates" the explanation that he did so because you "keep finding mistakes in what he has written"? I should like to see a specific remark from him that you think is reasonably construed as such.
(I’m pretty sure that) other people have disagreed robustly with Duncan and not had him ban them from commenting on his posts.
Let’s see some examples, then we can talk.
I present myself as an example [LW(p) · GW(p)]; I confirm that, after leaving this comment expressing clear disagreement with Duncan, I have not been banned from commenting on any of his posts.
I am (moreover) quite confident in my ability to find additional such examples if necessary, but in lieu of that, I will instead question the necessity of such: did you, Said Achmiz, (prior to my finding an example) honestly expect/suspect that there were no such examples to be found? This would seem to equate to a belief that Duncan has banned anyone and everyone who has dared to disagree with him in the past, which in turn would (given his prolific writing and posting behavior) imply that he should have a substantial fraction of the regular LW commentariat banned—which should have been extremely obviously false to you from the start!
Indeed, this observation has me questioning the reliability of your stance on this particular issue, since the tendency to get things like this wrong suggests a model of (this subregion of) reality so deeply flawed that little to no wisdom avails to be extracted.
If Alice criticizes one of Bob’s posts, and Bob immediately or shortly thereafter bans Alice from commenting on Bob’s posts, the immediate default assumption should be that the criticism was the reason for the ban. Knowing nothing else, just based on these bare facts, we should jump right to the assumption that Bob’s reasons for banning Alice were lousy.
As alluded to in the quote/response pair at the beginning of this comment, this is not a valid inference. What you propose is a valid probabilistic inference in the setting where we are presented only with the information you describe (although even then the strength of update justified by such information is limited at best). Nonetheless, there are plenty of remaining hypotheses consistent with the information in question, and which have (hence) not been ruled out merely by observing Bob to have banned Alice.
For example, suppose it is the case that Alice (in addition to criticizing Bob's object-level points) also takes it upon herself to include, in each of her comments, a remark to the effect that Bob is physically unattractive. I don't expect it controversial to suggest that this behavior would be considered inappropriate by the standards, not just of LW, but of any conversational forum that considers itself to have standards at all; and if Bob then proceeded to ban Alice for such provocations, we would not consider this evidence that he cannot tolerate criticism. The reason for the ban, after all, would have been explained, and thus screened off [LW · GW], leaving us with no reason to suspect him of banning Alice for "lousy reasons".
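In screening-off terms (a sketch, with the propositions named only for illustration): once the insults are observed, the ban itself adds essentially no further evidence about Bob's tolerance for criticism, because the insults already suffice to explain it:

\[
P(\text{Bob intolerant} \mid \text{ban}, \text{insults}) \approx P(\text{Bob intolerant} \mid \text{insults}).
\]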
No doubt you will claim, here, that the situation is not relevantly analogous, since you have not, in fact, insulted Duncan's physical appearance. But the claim that you have not, in any of your prior interactions with him, engaged in a style of discourse that made him think of you as an unusually unlikely-to-be-productive commenter, is, I think, unsupported. And if he had perceived you as such, why, this might then be perceived as sufficient grounds to remove the possibility of such unproductive interactions going forward, and to make that decision independent of the quality (or, indeed, existence) of your object-level criticisms.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-06T07:19:53.851Z · LW(p) · GW(p)
The prior does in fact survive, in the absence of evidence that pushes one’s conclusion away from it.
Categories like “conflicts of interest”, “discussions about who should be banned”, “arguments about moderation in cases in which you’re involved”, etc., already constitute “evidence” that push the conclusion away from the prior of “on the whole, people are more likely to say true things than false things”, without even getting into anything more specific.
I should like to see a specific remark from him that you think is reasonably construed as such.
You’ve misunderstood. My point was that “Said keeps finding mistakes in what I have written” is a good first approximation (but only that!) of what Duncan allegedly finds unpleasant about interacting with me, not that it’s a good first approximation of Duncan’s description of same.
I present myself as an example [LW(p) · GW(p)]; I confirm that, after leaving this comment expressing clear disagreement with Duncan, I have not been banned from commenting on any of his posts.
A single circumspectly disagreeing comment on a tangential, secondary (tertiary? quaternary?) point, buried deep in a subthread, having minimal direct bearing on the claims in the post under which it’s posted. “Robust disagreement”, this ain’t.
(Don’t get me wrong—it’s a fine comment, and I see that I strong-upvoted it at the time. But it sure is not anything at all like an example of the thing I asked for examples of.)
I am (moreover) quite confident in my ability to find additional such examples if necessary
Please do. So far, the example count remains at zero.
but in lieu of that, I will instead question the necessity of such: did you, Said Achmiz, (prior to my finding an example) honestly expect/suspect that there were no such examples to be found?
Given that you did not, in fact, find an example, I think that this question remains unmotivated.
This would seem to equate to a belief that Duncan has banned anyone and everyone who has dared to disagree with him in the past, which in turn would (given his prolific writing and posting behavior) imply that he should have a substantial fraction of the regular LW commentariat banned—which should have been extremely obviously false to you from the start!
Most people don’t bother to think about other people’s posts in sufficient detail and sufficiently critically to have anything much to say about them.
Of the remainder, some agree with Duncan.
Of the remainder of those, many don’t care enough to engage in arguments, disagreements, etc., of any sort.
Of the remainder of those, many are either naturally disinclined to criticize forcefully, to press the criticism, to make points which are embarrassing or uncomfortable, etc., or else are deterred from doing so by the threat of moderation.
That cuts the candidate pool down to a small handful.
Separately, recall that Duncan has (I think more than once now) responded to similar situations by leaving (or “leaving”) Less Wrong. (What is the significance of his choice to respond this time by banning people instead of leaving the site again, I do not know. I suppose it’s an improvement, such as it is, though obviously I’d prefer it if he did neither of these things.)
That cuts out a lot of “the past”.
We then observe that Duncan has now banned more people from commenting on his frontpage posts than any other user of the site (twice as many as the runner-up).
So my request for examples of the alleged phenomenon wherein “other people have disagreed robustly with Duncan and not had him ban them from commenting on his posts” is not so absurd, after all.
Indeed, this observation has me questioning the reliability of your stance on this particular issue, since the tendency to get things like this wrong suggests a model of (this subregion of) reality so deeply flawed that little to no wisdom avails to be extracted.
I think that, on the contrary, it is you who should re-examine your stance on the matter. Perhaps the absurdity heuristic, coupled with a too-hasty jump to a conclusion, has led you astray?
As alluded to in the quote/response pair at the beginning of this comment, this is not a valid inference. What you propose is a valid probabilistic inference in the setting where we are presented only with the information you describe (although even then the strength of update justified by such information is limited at best). Nonetheless, there are plenty of remaining hypotheses consistent with the information in question, and which have (hence) not been ruled out merely by observing Bob to have banned Alice.
That’s why I said “default”.
For example, suppose it is the case that Alice (in addition to criticizing Bob’s object-level points) also takes it upon herself to include, in each of her comments, a remark to the effect that Bob is physically unattractive.
That would be one of those “exceptional circumstances” I referred to. Do you claim such circumstances obtain in the case at hand?
I don’t expect it controversial to suggest that this behavior would be considered inappropriate by the standards, not just of LW, but of any conversational forum that considers itself to have standards at all
This is why I specifically noted that I was referring to people who hadn’t been banned from the site. Surely the LW moderators would see fit to censure a commenter for such behavior, since, as you suggest, it would be quite beyond the pale in any civilized discussion forum.
and if Bob then proceeded to ban Alice for such provocations, we would not consider this evidence that he cannot tolerate criticism. The reason for the ban, after all, would have been explained, and thus screened off, leaving us with no reason to suspect him of banning Alice for “lousy reasons”.
All of this, as I said, was quite comprehensively covered in the comment to which you’re responding. (I begin to suspect that you did not read it very carefully.)
No doubt you will claim, here, that the situation is not relevantly analogous, since you have not, in fact, insulted Duncan’s physical appearance.
Indeed…
But the claim that you have not, in any of your prior interactions with him, engaged in a style of discourse that made him think of you as an unusually unlikely-to-be-productive commenter, is, I think, unsupported.
But of course I never claimed anything like this. What the heck sort of strawman is this? Where is it coming from? And what relevance does it have?
And if he had perceived you as such, why, this might then be perceived as sufficient grounds to remove the possibility of such unproductive interactions going forward, and to make that decision independent of the quality (or, indeed, existence) of your object-level criticisms.
What is this passive-voice “might then be perceived” business? Do you perceive this to be the case?
It seems like you are saying something like “if Bob decides that he is unlikely to engage in productive discussion with Alice, then that is a good and honorable reason for Bob to ban Alice from commenting on his posts”. Are you, in fact, saying that? If not—what are you saying?
Replies from: dxu, Duncan_Sabien↑ comment by dxu · 2023-04-06T10:19:00.756Z · LW(p) · GW(p)
Categories like “conflicts of interest”, “discussions about who should be banned”, “arguments about moderation in cases in which you’re involved”, etc., already constitute “evidence” that push the conclusion away from the prior of “on the whole, people are more likely to say true things than false things”, without even getting into anything more specific.
The strength of the evidence is, in fact, a relevant input. And of the evidential strength conferred by the style of reasoning employed here, much has already been written [LW · GW].
You’ve misunderstood. My point was that “Said keeps finding mistakes in what I have written” is a good first approximation (but only that!) of what Duncan allegedly finds unpleasant about interacting with me, not that it’s a good first approximation of Duncan’s description of same.
Then your response to gjm's point seems misdirected, as the sentence you were quoting from his comment explicitly specifies that it concerns what Duncan himself said. Furthermore, I find it unlikely that this is an implication you could have missed, given that the first quote-block above speaks specifically of the likelihood that "people" (Duncan) may or may not say false things with regards to a topic in which they are personally invested; indeed, this back-and-forth stemmed from discussion of that initial point!
Setting that aside, however, there is a further issue to be noted (one which, if anything, is more damning than the previous), which is that—having now (apparently) detached our notion of what is being "approximated" from any particular set of utterances—we are left with the brute claim that "'Said keeps finding mistakes in what Duncan has written' is a good approximation of what Duncan finds unpleasant about interacting with Said"—a claim for which I don't see how you could claim even positive knowledge, much less establish its truth value! After all, neither of us has telepathic access to Duncan's inner thoughts, and so the claim that his ban of you was motivated by some factor X—a factor he in fact explicitly denies having exerted any influence—is speculation at best, and psychologizing at worst.
A single circumspectly disagreeing comment on a tangential, secondary (tertiary? quaternary?) point, buried deep in a subthread, having minimal direct bearing on the claims in the post under which it’s posted. “Robust disagreement”, this ain’t.
I appreciate the starkness of this response. Specifically, your response makes it quite clear that the word "robust" is carrying essentially the entirety of the weight of your argument. However, you don't appear to have operationalized this anywhere in your comment, and (unfortunately) I confess myself unclear as to what you mean by it. "Disagreement" is obvious enough, which is why I was able to provide an example on such short notice, but if you wish me to procure an example of whatever you are calling "robust disagreement", you will have to explain in more detail what this thing is, and (hopefully) why it matters!
I am (moreover) quite confident in my ability to find additional such examples if necessary
Please do. So far, the example count remains at zero.
but in lieu of that, I will instead question the necessity of such: did you, Said Achmiz, (prior to my finding an example) honestly expect/suspect that there were no such examples to be found?
Given that you did not, in fact, find an example, I think that this question remains unmotivated.
[...]
So my request for examples of the alleged phenomenon wherein “other people have disagreed robustly with Duncan and not had him ban them from commenting on his posts” is not so absurd, after all.
It is my opinion that the response to the previous quoted block also serves adequately as a response to these miscellaneous remarks.
Indeed, this observation has me questioning the reliability of your stance on this particular issue, since the tendency to get things like this wrong suggests a model of (this subregion of) reality so deeply flawed that little to no wisdom avails to be extracted.
I think that, on the contrary, it is you who should re-examine your stance on the matter. Perhaps the absurdity heuristic, coupled with a too-hasty jump to a conclusion, has led you astray?
This question is, in fact, somewhat difficult to answer as of this exact moment, since the answer depends in large part on the meaning of a term ("robustness") whose contextual usage you have not yet concretely operationalized. I of course invite such an operationalization, and would be delighted to reconsider my stance if presented with a good one; until that happens, however, I confess myself skeptical of what (in my estimation) amounts to an uncashed promissory note.
As alluded to in the quote/response pair at the beginning of this comment, this is not a valid inference. What you propose is a valid probabilistic inference in the setting where we are presented only with the information you describe (although even then the strength of update justified by such information is limited at best). Nonetheless, there are plenty of remaining hypotheses consistent with the information in question, and which have (hence) not been ruled out merely by observing Bob to have banned Alice.
That’s why I said “default”.
Well. Let's review what you actually said, shall we?
If Alice criticizes one of Bob’s posts, and Bob immediately or shortly thereafter bans Alice from commenting on Bob’s posts, the immediate default assumption should be that the criticism was the reason for the ban. Knowing nothing else, just based on these bare facts, we should jump right to the assumption that Bob’s reasons for banning Alice were lousy.
Rereading, it appears that the word you singled out ("default") was in fact part of a significantly longer phrase (which you even italicized for emphasis); and this phrase, I think, conveys a notion substantially stronger than the weakened version you appear to have retreated to in response to my pushback. We are presented with the idea, not just of a "default" state, but of an immediate assumption regarding Bob's motives—quite a forceful assertion to make!
An assumption with what confidence level, might I ask? And (furthermore) what kind of extraordinarily high "default" confidence level must you postulate, sufficient to outweigh other, more situationally specific forms of evidence, such as—for example—the opinions of onlookers (as conveyed through third-party comments such as gjm's or mine, as well as through voting behavior)?
For example, suppose it is the case that Alice (in addition to criticizing Bob’s object-level points) also takes it upon herself to include, in each of her comments, a remark to the effect that Bob is physically unattractive.
That would be one of those “exceptional circumstances” I referred to. Do you claim such circumstances obtain in the case at hand?
I claim that Duncan so claims, and that (moreover) you have thus far made no move to refute that claim directly, preferring instead to appeal to priors wherever possible (a theme present throughout many of the individual quote/response pairs in this comment). Of course, that doesn't necessarily mean that Duncan's claim here is correct—but as time goes on and I continue to observe [what appear to me to be] attempts to avoid analyzing the situation on the object level, I do admit that one side's position starts to look increasingly favored over the other!
(Having said that, I realize that the above may come off as "taking sides" to some extent, and so—both for your benefit and for the benefit of onlookers—I would like to stress for myself the same point gjm stressed upthread, which is that I consider both Said and Duncan to be strong positive contributors to LW content/culture, and would be accordingly sad to see either one of them go. That I am to some extent "defending" Duncan in this instance is not in any way a broader indictment of Said—only of the accusations of misconduct he [appears to me to be] leveling at Duncan.)
and if Bob then proceeded to ban Alice for such provocations, we would not consider this evidence that he cannot tolerate criticism. The reason for the ban, after all, would have been explained, and thus screened off, leaving us with no reason to suspect him of banning Alice for “lousy reasons”.
All of this, as I said, was quite comprehensively covered in the comment to which you’re responding. (I begin to suspect that you did not read it very carefully.)
Perhaps the topic of discussion (as you have construed it) differs substantially from how I see it, because this statement is, so far as I can tell, simply false. Of course, it should be easy enough to disconfirm this merely by pointing out the specific part of the grandparent comment you believe addresses the point I made inside of the nested quote block; and so I will await just such a response.
But the claim that you have not, in any of your prior interactions with him, engaged in a style of discourse that made him think of you as an unusually unlikely-to-be-productive commenter, is, I think, unsupported.
But of course I never claimed anything like this. What the heck sort of strawman is this? Where is it coming from? And what relevance does it have?
Well, by the law of the excluded middle, can I take your seeming disavowal of this claim as an admission that its negation holds—in other words, that you have, in fact, engaged with Duncan in ways that he considers unproductive? If so, the relevance of this point seems nakedly obvious to me: if you are, in fact, (so far as Duncan can tell) an unproductive presence in the comment section of his posts, then... well, I might as well let my past self of ~4 hours ago say it:
And if he had perceived you as such, why, this might then be perceived as sufficient grounds to remove the possibility of such unproductive interactions going forward, and to make that decision independent of the quality (or, indeed, existence) of your object-level criticisms.
What is this passive-voice “might then be perceived” business? Do you perceive this to be the case?
It seems like you are saying something like “if Bob decides that he is unlikely to engage in productive discussion with Alice, then that is a good and honorable reason for Bob to ban Alice from commenting on his posts”. Are you, in fact, saying that? If not—what are you saying?
And in response to this, I can only say: the sentence within quotation marks is very nearly the opposite of what I am saying—which, phrased within the same framing, would go like this:
"If Bob decides that Alice is unlikely to engage in productive discussion with him, then that is a good and honorable reason for Bob to ban Alice from commenting on his posts."
We're not talking about a commutative operation here; it does in fact matter whose name goes where!
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-06T17:17:00.321Z · LW(p) · GW(p)
I appreciate the starkness of this response. Specifically, your response makes it quite clear that the word “robust” is carrying essentially the entirety of the weight of your argument. However, you don’t appear to have operationalized this anywhere in your comment, and (unfortunately) I confess myself unclear as to what you mean by it. “Disagreement” is obvious enough, which is why I was able to provide an example on such short notice, but if you wish me to procure an example of whatever you are calling “robust disagreement”, you will have to explain in more detail what this thing is, and (hopefully) why it matters!
Please see my reply to gjm [LW(p) · GW(p)].
That’s why I said “default”.
Well. Let’s review what you actually said, shall we? …
Yes. A strong default. I stand by what I said.
An assumption with what confidence level, might I ask?
A high one.
And (furthermore) what kind of extraordinarily high “default” confidence level must you postulate
This seems to me to be only an ordinarily high “default” confidence level, for things like this.
sufficient to outweigh other, more situationally specific forms of evidence, such as—for example—the opinions of onlookers (as conveyed through third-party comments such as gjm’s or mine
See my above-linked reply to gjm, re: “the opinions of onlookers”.
as well as through voting behavior)?
People on Less Wrong downvote for things other than “this is wrong”. You know this. (Indeed, this is wholly consonant with the designed purpose of the karma vote.)
I claim that Duncan so claims, and that (moreover) you have thus far made no move to refute that claim directly, preferring instead to appeal to priors wherever possible (a theme present throughout many of the individual quote/response pairs in this comment). Of course, that doesn’t necessarily mean that Duncan’s claim here is correct—but as time goes on and I continue to observe [what appear to me to be] attempts to avoid analyzing the situation on the object level, I do admit that one side’s position starts to look increasingly favored over the other!
Likewise see my above-linked reply to gjm.
All of this, as I said, was quite comprehensively covered in the comment to which you’re responding. (I begin to suspect that you did not read it very carefully.)
Perhaps the topic of discussion (as you have construed it) differs substantially from how I see it, because this statement is, so far as I can tell, simply false. Of course, it should be easy enough to disconfirm this merely by pointing out the specific part of the grandparent comment you believe addresses the point I made inside of the nested quote block; and so I will await just such a response.
I refer there to the three quote–reply pairs above that one.
accusations of misconduct
I must object to this. I don’t think what I’ve accused Duncan of can be fairly called “misconduct”. He’s broken no rules or norms of Less Wrong, as far as I can tell. Everything he’s done is allowed (and even, in some sense, encouraged) by the site rules. He hasn’t done anything underhanded or deliberately deceptive, hasn’t made factually false claims, etc. It does not seem to me that either Duncan, or Less Wrong’s moderation team, would consider any of his behavior in this matter to be blameworthy. (I could be wrong about this, of course, but that would surprise me.)
Well, by the law of the excluded middle, can I take your seeming disavowal of this claim as an admission that its negation holds—in other words, that you have, in fact, engaged with Duncan in ways that he considers unproductive?
Yes, of course. Duncan has said as much, repeatedly. It would be strange to disbelieve him on this.
Just as obviously, I don’t agree with his characterization!
(As before, see my above-linked reply to gjm for more details.)
And if he had perceived you as such, why, this might then be perceived as sufficient grounds to remove the possibility of such unproductive interactions going forward, and to make that decision independent of the quality (or, indeed, existence) of your object-level criticisms.
What is this passive-voice “might then be perceived” business? Do you perceive this to be the case?
It seems like you are saying something like “if Bob decides that he is unlikely to engage in productive discussion with Alice, then that is a good and honorable reason for Bob to ban Alice from commenting on his posts”. Are you, in fact, saying that? If not—what are you saying?
And in response to this, I can only say: the sentence within quotation marks is very nearly the opposite of what I am saying—which, phrased within the same framing, would go like this:
“If Bob decides that Alice is unlikely to engage in productive discussion with him, then that is a good and honorable reason for Bob to ban Alice from commenting on his posts.”
We’re not talking about a commutative operation here; it does in fact matter whose name goes where!
This seems clearly wrong to me. The operation is of course commutative; it doesn’t matter in the least whose name goes where. In any engagement between Alice and Bob, Alice can decide that Bob is engaging unproductively, at the same time as Bob decides that Alice is engaging unproductively. And of course Bob isn’t going to decide that it’s he who is the one engaging unproductively with Alice (and vice-versa).
And both formulations can be summarized as “Bob decides that he is unlikely to engage in productive discussion with Alice” (regardless of whether Bob or Alice is allegedly to blame; Bob, clearly, will hold the latter view; Alice, the former).
In any case, you have now made your view clear enough:
“If Bob decides that Alice is unlikely to engage in productive discussion with him, then that is a good and honorable reason for Bob to ban Alice from commenting on his posts.”
All I can say is that this, too, seems clearly wrong to me (for reasons which I’ve already described in some detail).
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-06T18:32:29.141Z · LW(p) · GW(p)
My point was that “Said keeps finding mistakes in what I have written” is a good first approximation (but only that!) of what Duncan allegedly finds unpleasant about interacting with me, not that it’s a good first approximation of Duncan’s description of same.
This is incoherent. Said is hiding the supposer with this use of passive voice. A coherent rewrite of this sentence would either be:
My point was that "Said keeps finding mistakes in what I have written" is a good first approximation (but only that!) of what I, Said, allege that Duncan finds unpleasant about interacting with me, not that it's a good first approximation of Duncan's description of same.
or
My point was that "Said keeps finding mistakes in what I have written" is a good first approximation (but only that!) of what Duncan alleges that he finds unpleasant about interacting with me, not that it's a good first approximation of Duncan's description of same.
Both of these sentences are useless, since the first is just saying "I, Said, allege what I allege" and the second is just saying "what Duncan alleges is not what he alleges."
(Or I guess, as a third version, what dxu or others are alleging?)
I note that Said has now done something between [accusing me of outright lying] and [accusing me of being fully incompetent to understand my own motivations and do accurate introspection] at least four or five times in this thread. I request moderator clarification on whether this is what we want happening a bunch on LessWrong. @Raemon [LW · GW]
Replies from: Raemon, SaidAchmiz↑ comment by Raemon · 2023-04-06T19:00:18.279Z · LW(p) · GW(p)
My current take is "this thread seems pretty bad overall and I wish everyone would stop, but I don't have an easy succinct articulation of why and what the overall moderation policy is for things like this." I'm trying to mostly focus on actually resolving a giant backlog of new users who need to be reviewed while thinking about our new policies, but expect to respond to this sometime in the next few days.
What I will say immediately to @Said Achmiz [LW · GW] is "The point of this thread is not to prosecute your specific complaints about Duncan. Duncan banning you is the current moderation policy working as intended. If you want to argue about that, you should be directing your arguments at the LessWrong team, and you should be trying to identify and address our cruxes."
I have more to say about this but it gets into an effortcomment that I want to allocate more time/attention to.
I'd note: I do think it's an okay time to open up Said's longstanding disagreements with LW moderation policy, but, like, all the previous arguments still apply. Said's comments so far haven't added new information we didn't already consider.
I think it is better to start a new thread rather than engaging in this one, because this thread seems to be doing a weird mix of arguing moderation-abstract-policies while also trying to prosecute one particular case in a way that feels off.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-06T19:25:45.610Z · LW(p) · GW(p)
What I will say immediately to @Said Achmiz is “The point of this thread is not to prosecute your specific complaints about Duncan. Duncan banning you is the current moderation policy working as intended. If you want to argue about that, you should be directing your arguments at the LessWrong team, and you should be trying to identify and address our cruxes.”
But that seems to me to be exactly what I have been doing. (Why else would I bother to write these comments? I have no interest in any of this except insofar as it affects Less Wrong.)
And how else can I do this, without reference to the most salient (indeed, the only!) specific example in which I have access to the facts? One cannot usefully debate such things in purely abstract fashion!
(Please note that as I have said [LW(p) · GW(p)], I have not accused Duncan of breaking any site rules or norms; he clearly has done no such thing.)
Replies from: Raemon↑ comment by Raemon · 2023-04-06T19:48:22.701Z · LW(p) · GW(p)
You currently look like you're doing two things – arguing about what the author-moderation norms should be, and arguing whether/how we should adopt a particular set of norms that Duncan advocated. I think those two topics are getting muddied together and making the conversation worse.
My answer to the "whether/how should we adopt the norms in Basics of Rationalist Discourse?" is addressed here [LW(p) · GW(p)]. If you disagree with that, I suggest replying to that with your concrete disagreement on that particular topic.
If you also want to open up "should LW change our 'authors can moderate content' policy", I think it's better to start a separate thread for that. Duncan's blocking-of-you-and-others so far seems like a fairly central example of what the norms were intended to protect, on purpose, and so far you haven't noted any example relating to the Duncan thread that seems... at all particularly unusual for how we expected authors to use the feature?
Like, yes, you can't be confident whether an author blocks someone due to them disagreeing, or having a principled policy, or just being annoyed. But, we implemented the rules because "commenters are annoying" is actually a central existential threat to LessWrong.
If we thought it was actually distorting conversation in a bad way, we'd re-evaluate the policy. But I don't see reason to think that's happening (given that, for example, Zack went ahead and wrote a top-level post about stuff. It's not obvious this outcome was better for Duncan, so we might revisit the policy for that reason, but not for 'important arguments are getting suppressed' reasons).
Part of the whole point of the moderation policy is that it's not the job of individual users to have to defend their right to use the moderation tools, so I do now concretely ask you to stop arguing about Duncan-in-particular.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-06T20:34:10.869Z · LW(p) · GW(p)
You currently look like you’re doing two things – arguing about what the author-moderation norms should be, and arguing whether/how we should adopt a particular set of norms that Duncan advocated. I think those two topics are getting muddied together and making the conversation worse.
These two things are related, in the way to which I alluded in my very first comment on this topic. (Namely: the author-moderation feature shouldn’t exist [in its current form], because it gives rise to situations like this, where we can’t effectively discuss whether we should do something like adopting Duncan’s proposed norms.) I’m not just randomly conflating these two things for no reason!
My answer to the “whether/how should we adopt the norms in Basics of Rationalist Discourse?” is addressed here [LW(p) · GW(p)]. If you disagree with that, I suggest replying to that with your concrete disagreement on that particular topic.
Uh… sorry, I don’t see how that comment is actually an answer to that question? It… doesn’t seem to be…
Duncan’s blocking-of-you-and-others so far seems like a fairly central example of what the norms were intended to protect, on purpose, and so far you haven’t noted any example relating to the Duncan thread that seems… at all particularly unusual for how we expected authors to use the feature?
Yes, of course! That’s why it makes perfect sense to discuss this case as illustrative of the broader question!
What I am saying is not “you guys made this feature, but now look, people are using it in a bad way which is totally not the way you intended or expected”. No! What I’m saying is “you guys made this feature, and people are using it in exactly the way you intended and expected, but we can now see that this is very bad”.
Like, yes, you can’t be confident whether an author blocks someone due to them disagreeing, or having a principled policy, or just being annoyed.
Those are not different things!
This really bears emphasizing: there is no difference between “banning people who disagree with you [robustly / in some other specific way]” and “finding some people (who happen to be the ones disagreeing with you [robustly/etc.]) annoying, and banning them for (supposedly) that reason”[1] and “having a principled policy of doing any of the above”. “Finding people annoying, and quite reasonably banning them for being annoying” is simply how “banning people for disagreeing with you” feels from the inside.
But, we implemented the rules because “commenters are annoying” is actually a central existential threat to LessWrong.
Yes. Such things are central existential threats to many (perhaps most) discussion forums, and online communities in general.
But traditionally, this is handled by moderators.
The reason why this is necessary is well known: nemo judex in sua causa—no one should be a judge in his own cause. Alice and Bob engage in disputation. Bob complains to the moderators that Alice is being annoying. Moderator Carol comes along, reads the exchange, and says one of two things:
“You’re right, Bob; Alice was out of line. Alice, stop that—on pain of censure.”
or
“Sorry, Bob, it looks like Alice hasn’t done anything wrong. She’s just disagreeing with you. No action is warranted against her; you’ll just have to deal with it.”
But if Bob is the moderator, then there’s no surprise if he judges the case unfairly, and renders the former verdict when the latter would be just!
If we thought it was actually distorting conversation in a bad way, we’d re-evaluate the policy. But I don’t see reason to think that’s happening (given that, for example, Zack went ahead and wrote a top-level post about stuff. It’s not obvious this outcome was better for Duncan, so we might revisit the policy for that reason, but not for ‘important arguments are getting suppressed’ reasons).
Come now! Have you suddenly forgotten about “trivial inconveniences”, about “criticism being more expensive than praise”? You “don’t see reason to think” that any distortions result from this?! Writing top-level posts is effortful, costly in both time and willpower. What’s more, writing a top-level post just for the purpose of arguing with another member, who has banned you from his posts, is, for many (most?) people, something that feels socially awkward and weird and aversive (and quite understandably so). It reads like a social attack, in a way that simply commenting on their post does not. (I have great respect for Zack for having the determination and will to overcome these barriers, but not everyone is Zack. Most people, seeing that insistent and forceful criticism gets them banned from someone’s posts, will simply close the browser tab.)
In short, the suggestion that this isn’t distorting conversation seems to me to be manifestly untenable.
Part of the whole point of the moderation policy is that it’s not the job of individual users to have to defend their right to use the moderation tools, so I do now concretely ask you to stop arguing about Duncan-in-particular.
As you please.
However, you have also invited [LW(p) · GW(p)] me to discuss the matter of the author-moderation feature in general. How do you propose that I do that, if I am forbidden to refer to the only example I have? (Especially since, as you note, it is a central example of the phenomenon in question.) It seems pretty clearly unfair to invite debate while handicapping one’s interlocutor in this way.
Well, more properly, “banning people for disagreeing” is generally a subset of “banning people for being annoying”. But we can generally expect it to be a proper subset only if the traditional sort of moderation is not taking place, because the complement (within the set of “people being annoying”) of “people disagreeing”—that is, people who are being annoying for other reasons—is generally handled by mods. ↩︎
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-06T19:19:03.144Z · LW(p) · GW(p)
Sorry, that wasn’t meant to be ambiguous; I thought it would be clear that the intended meaning was more like (see below for details) the latter (“Duncan alleges that he”), and definitely not the former—since, as you say, the former interpretation is tautological.
(Though, yes, it also covers third parties, under the assumption—which so far seems to be borne out—that said third parties are taking as given what you [Duncan] claim re: what you find unpleasant.)
the second is just saying “what Duncan alleges is not what he alleges.”
No, not quite. Consider the following three things, which are all different:
(a) “Alice’s description of something which Alice says she finds unpleasant”
(b) “The thing Alice claims to find unpleasant, according to Alice’s description of it”
(c) “The thing Alice claims to find unpleasant, in (claimed, by someone who isn’t Alice) reality (which may differ from the thing as described by Alice)”
Obviously, (a) is of a different kind from (b) and (c). I was noting that I was not referring to (a), but instead to (c).
(An example: Alice may say “wow, that spider really scared me!”. In this case, (a) is “that spider” [note the double quote marks]; (b) is a spider [supposedly]; and (c) may be, for example, a harvestman [also supposedly].)
In other words: there’s some phenomenon which you claim to find unpleasant. We believe your self-report of your reaction to this thing. It remains, however, to characterize the thing in question. You offer some characterization. It seems to me that there’s nothing either incoherent or unusual about me disputing the characterization—without, in the process, doubting your self-report, accusing you of lying, claiming that you’re saying something other than what you’re saying, etc.
I note that Said has now done something between [accusing me of outright lying] and [accusing me of being fully incompetent to understand my own motivations and do accurate introspection]
Well, as I’ve said (several times), I don’t think that you’re lying. (You might be, of course; I’m no telepath. But it seems unlikely to me.)
Take a look, if you please, at my description of your perspective and actions, found at the end of this comment [LW(p) · GW(p)]. As I say there, it’s my hope that you’ll find that characterization to be fair and accurate.
And certainly I don’t think that anything in that description can be called an accusation of lying, or anything much like lying (in the sense of consciously attempted deception of people other than oneself).
(We do often speak of “lying to yourself”—indeed, I’ve done so, in this conversation—and that seems to me to be an understandable enough usage; but, of course, “you’re lying to yourself” is a very different thing from just plain “you’re lying”. One may accuse someone who says “you’re lying to yourself” of Bulverism, perhaps, of argument-by-armchair-psychoanalysis, or some such thing, but we don’t say “he just called me a liar!”—because that’s not what happened, in this scenario.)
As far as “being fully incompetent to understand my own motivations and do accurate introspection” goes, well… no, that’s not exactly right either. But I would prefer to defer discussion of this point until you comment on whether my aforementioned description of your perspective seems to you to be accurate and fair.
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-07T02:46:42.930Z · LW(p) · GW(p)
It was not; I both strong downvoted and, separately, strong disagreed.
(I missed the call to end the conversation; sorry for replying.)
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-05T20:19:52.666Z · LW(p) · GW(p)
(Adhering to Ray's request of making <1 reply per hour, though in this case I was already planning to do so.)
The above fails to note something analogous to "arrested while driving ≠ arrested for driving."
It is not in fact the case that anyone was blocked for disagreeing with or criticizing the things that I had written, though it is true that a couple of people have been blocked while disagreeing or criticizing.
EDIT: I went and looked up the fancy words for this: post hoc, ergo propter hoc.
What they were blocked for was not disagreement. I shall not enumerate the dozens-if-not-hundreds of people who have disagreed with me often and at length (and even sometimes with some vehemence) without being blocked, but I'll note that you can find multiple instances of people on my block list disagreeing with me previously in ways that were just fine.
Metaphor: if you were to disagree with someone while throwing bricks at them, subsequently going "aHA! They blocked me for disagreeing!" would be disingenuous.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-05T21:31:11.796Z · LW(p) · GW(p)
I didn’t say anything about “blocked for disagreeing [or criticizing]”. (Go ahead, check!)
What I said was:
The other people in question were, to my knowledge, also banned from commenting on Duncan’s posts due to their criticism of “Basics of Rationalist Discourse” (or due to related discussions, on related topics on Less Wrong)
To deny this, it seems to me, is untenable.
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-05T22:24:38.283Z · LW(p) · GW(p)
Here Said is, as far as I can tell, arguing that "blocked for disagreeing or criticizing" is not straightforwardly synonymous with "blocked due to disagreeing or criticizing."
In any event, none of the people in question were blocked for disagreeing or criticizing, and (saying it the other way too, just in case I'm missing some meaningful semantic difference) none of them were blocked due to disagreeing or criticizing, either.
I again mention that it's not at all hard to find instances of people disagreeing with me or criticizing me or my ideas quite hard, without getting blocked, and that there are even plentiful instances of several of the blocked people having done so in the past (which did not, in the past, result in them getting blocked).
Replies from: Raemon↑ comment by Raemon · 2023-04-05T23:27:12.281Z · LW(p) · GW(p)
I think the important bit from Said's perspective is that these people were blocked for reasons not-related-to whether they had something useful to say about those rules, so we may be missing important things.
I'll reiterate Habryka's take on "I do think if we were to canonize some version of the rules, that should be in a place that everyone can comment on." And I'd go on to say: on that post we also should relax some of the more opinionated rules about how to comment. i.e. we should avoid boxing ourselves in so that it's hard to criticize the rules in practice.
I think a separate thing Said cares about is that there is some period for arguing about the rules before they "get canonized." I do think there should be at least some period for this, but I'm not worried about it being particularly long, because
a) the mod team has had tons of time to think about what norms are good and people have had tons of time to argue, and I think this is mostly going to be cementing things that were already de-facto site norms,
b) people can still argue about the rules after the fact (and I think comments on The Rules post, and top level posts about site norms, should have at least some more leeway about how to argue. I think there'll probably still be some norms like 'don't do ad hominem attacks', but I don't expect that to actually cause an issue)
That said, I certainly don't promise that everyone will be happy with the rules, the process here will not be democratic, it'll be the judgment of the LW mod team.
Replies from: Duncan_Sabien↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-06T03:23:15.769Z · LW(p) · GW(p)
Strong upvote, strong agree.
↑ comment by Duncan Sabien (Deactivated) (Duncan_Sabien) · 2023-04-05T15:29:22.939Z · LW(p) · GW(p)
Or an indication that some otherwise non-banned members of the site are actually kind of poor at exhibiting one or more of the basics of rationalist discourse and have been tolerated on LW for other reasons unrelated to their quality as thinkers, reasoners, and conversational partners.
For instance, they might think that, because they can't think of a way, this means that there literally exists no way for a thing to be true (or be prone to using exaggerated language that communicates that even though it doesn't reflect their actual belief).
(The Basics post was written because I felt it was needed on LW, because there are people who engage in frequent violation of good discourse norms and get away with it because it's kind of tricky to point at precisely what they're doing that's bringing down the quality of the conversations. That doesn't mean that my particular formulation was correct (I have already offered above to make changes to the two weakest sections), but it is not, in fact, the case, that [a user who's been barred from commenting but otherwise still welcome on LW as a whole] is necessarily in possession of good critique. Indeed they might be, but they might also be precisely the kind of user who was the casus belli of the post in the first place.)
Replies from: SaidAchmiz, SaidAchmiz, lahwran↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-05T16:28:11.950Z · LW(p) · GW(p)
All of this is irrelevant, because the point is that if the conversational norms cannot be discussed openly (by people who otherwise aren’t banned from the site for being spammers or something similarly egregious), then there’s no reason to believe that they’re good norms. How were they vetted? How were they arrived at? Why should we trust that they’re not, say, chock-full of catastrophic problems? (Indeed, the more people[1] are banned from commenting on the norms as a consequence of their criticism of said norms, the less we should believe that the norms are any good!)
Of all the posts on the site, the post proposing new site norms is the one that should be subjected to the greatest scrutiny—and yet it’s also the post[2] from which more critics have been banned than from almost any other. This is an extremely bad sign.
Weighted by karma (as a proxy for “not just some random person off the street, but someone whom the site and its participants judge to have worthwhile things to say”). (The ratio of “total karma of people banned from commenting on ‘Basics of Rationalist Discourse’” to “karma of author of ‘Basics of Rationalist Discourse’” is approximately 3:1. If karma represents some measure of “Less Wrong, as a site and a community, has endorsed this person as someone whose participation in discussions here is a positive good”—and the OP here suggests that it does, indeed, represent that—what does it mean that the so-called “Basics of Rationalist Discourse” cannot even bear discussion by people who are, collectively, so relatively well-regarded?) ↩︎
Technically, the user. But that hardly changes the point. ↩︎
↑ comment by habryka (habryka4) · 2023-04-05T18:14:47.785Z · LW(p) · GW(p)
For what it's worth, the high level point here seems right to me (I am not trying to chime into the rest of the discussion about whether the ban system is a good idea in the first place).
If we canonize something like Duncan's post I agree that we should do something like copy over a bunch of it into a new post, give prominent attribution to Duncan at the very top of the post, explain how it applies to our actual moderation policy, and then we should maintain our own ban list.
I think Duncan's post is great, but I think when we canonize something like this it doesn't make sense for Duncan's ban list to carry over to the more canonized version.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-05T18:59:48.740Z · LW(p) · GW(p)
This is certainly well and good, but it seems to me that the important thing is to do something like this before canonizing anything. Otherwise, it’s a case of “feel free to discuss this, but nothing will come of it, because the decision’s already been made”.
The whole point of community discussion of something like this is to serve as input into any decisions made. If you decide first, it’s too late to discuss. This is exactly what makes the author-determined ban lists so extraordinarily damaging in a case like this, where the post in question is on a “meta” topic (setting aside for the moment whether they’re good or bad in general).
↑ comment by Thoth Hermes (thoth-hermes) · 2023-04-05T17:34:25.350Z · LW(p) · GW(p)
The most interesting thing you said is relegated to your footnote:
The ratio of “total karma of people banned from commenting on ‘Basics of Rationalist Discourse’” to “karma of author of ‘Basics of Rationalist Discourse’” is approximately 3:1.
I would love to see if this pattern is also present in other posts.
There should be some mechanism to judge posts / comments that have a mix of good and bad karma, especially since there are now two voting measures (karma and agreement). Even in the olden days this would have been possible, since an overall score of "0" could still be a very high-quality information signal if the total number of votes was high. That is even more true now. (A toy sketch of such a signal follows at the end of this comment.)
Indeed, the more people[1] [LW · GW] are banned from commenting on the norms as a consequence of their criticism of said norms, the less we should believe that the norms are any good!
At issue is whether low-quality debate is ever fruitful. That piece of data in your footnote suggests that the issue might instead be whether or not there even is low-quality debate, or at least whether or not we can rely on moderators' judgement calls or the sum of all (weighted) votes to make such calls.
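Concretely, a toy sketch of the kind of signal this implies; the saturation constant and the formula itself are invented for illustration, not anything the site actually computes:

```python
def controversy_signal(upvotes: int, downvotes: int) -> float:
    """Score in [0, 1]: high only when votes are both numerous and evenly split."""
    total = upvotes + downvotes
    if total == 0:
        return 0.0
    balance = 1 - abs(upvotes - downvotes) / total  # 1.0 when perfectly split
    volume = total / (total + 20)                   # saturates as votes accumulate
    return balance * volume

# A comment at +15/-15 signals real controversy; one at +1/-1 mostly signals silence,
# even though both have net karma of zero.
assert controversy_signal(15, 15) > controversy_signal(1, 1)
```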
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-05T17:47:24.933Z · LW(p) · GW(p)
That piece of data in your footnote suggests that the issue might instead be whether or not there even is low-quality debate, or at least whether or not we can rely on moderators’ judgement calls or the sum of all (weighted) votes to make such calls.
Certainly we can’t rely on the judgment of post authors! (And I am not even talking about this case in particular, but—just in general, for post authors to have the power to make such calls introduces a massive conflict of interest, and incentivizes ego-driven cognitive distortions. This is why the “post authors can ban people from their posts” feature is so corrosive to anything resembling good and useful discussion… truly, it seems to me like an egregious, and entirely unforced, mistake in system design.)
Replies from: thoth-hermes↑ comment by Thoth Hermes (thoth-hermes) · 2023-04-05T22:29:32.218Z · LW(p) · GW(p)
The mistake in system design started with the implementation of downvoting, but I have more complicated reasoning for this. If you have a system that implements downvoting, the reason for having that feature in place is to prevent ideas that are not easily argued away from being repeated. I tend to be skeptical of such systems because I tend to believe that if ideas are not easily argued away, they are more likely to have merit to them. If you have a post which argues for the enforcement of certain norms which push down specific kinds of ideas that are hard to argue away, one of which is the very idea that this ought to be done, it creates the impression that there are certain dogmas which can no longer be defended in open dialogue.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-05T23:24:55.166Z · LW(p) · GW(p)
I am not sure that I’d go quite that far, but I certainly sympathize with the sentiment.
If you have a post which argues for the enforcement of certain norms which push down specific kinds of ideas that are hard to argue away, one of which is the very idea that this ought to be done, it creates the impression that there are certain dogmas which can no longer be defended in open dialogue.
And with this, I entirely agree.
↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-05T16:42:07.980Z · LW(p) · GW(p)
… it is not, in fact, the case, that [a user who’s been barred from commenting but otherwise still welcome on LW as a whole] is necessarily in possession of good critique. Indeed they might be, but they might also be precisely the kind of user who was the casus belli of the post in the first place.)
Note, by the way, that these two things are not at all mutually exclusive. It might, indeed, be the case that the post was motivated by some kinds of people/critiques—and that those critiques are good ones. (Indeed that’s one of the most important and consequential sorts of criticism of the post: that among its motivations was one or more bad motivations, which we should not endorse, and which we should, in fact, oppose, as following it would have bad effects.)
Replies from: Raemon↑ comment by Raemon · 2023-04-05T16:53:04.773Z · LW(p) · GW(p)
(I think this is a thread that, if I had a "slow mode" button to make users take longer to reply, I'd probably have clicked it right about now. I don't have such a button, but Said and @Duncan_Sabien [LW · GW] can you guys a) hold off for a couple hours on digging into this and b) generally take ~an hour in between replies here if you were gonna keep going)
↑ comment by the gears to ascension (lahwran) · 2023-04-05T17:50:28.828Z · LW(p) · GW(p)
who are the banned users? I'm not sure how to access the list and would like it mildly immortalized in case you change it later.
Replies from: Raemon↑ comment by Raemon · 2023-04-05T18:04:13.136Z · LW(p) · GW(p)
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-04-05T18:18:13.526Z · LW(p) · GW(p)
I don't see any bans related to that post there.
Replies from: Benito↑ comment by Ben Pace (Benito) · 2023-04-05T18:33:50.252Z · LW(p) · GW(p)
See the lower section, on user bans.
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-04-05T18:37:24.299Z · LW(p) · GW(p)
Ah. So, these people have banned duncan from commenting on their frontpage posts? or, duncan has banned them from commenting on his frontpage posts? I guess you're implying the latter.
Zack_M_Davis
JenniferRM
AnnaSalamon
Said Achmiz
M. Y. Zuo
LVSN
Makes sense.
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-04-05T18:38:26.154Z · LW(p) · GW(p)
shrug. people can make posts in reply. Zack has done so. no great loss, I think - if anything, it created slightly more discussion.
↑ comment by Nicholas / Heather Kross (NicholasKross) · 2023-04-05T23:17:55.212Z · LW(p) · GW(p)
Agreed w.r.t. "basic questions" we could ask new users. The subreddit /r/ControlProblem now makes people take a quiz before they can post, to filter for people who e.g. know and care about the orthogonality thesis. (The quiz is pretty easy to pass if you're familiar with the basic ideas of AI safety/alignment/DontKillEveryoneism.)
comment by Vladimir_Nesov · 2023-04-05T05:34:11.230Z · LW(p) · GW(p)
New qualified users need to remain comfortable (trivial inconveniences [LW · GW] like interacting with a moderator at all are a very serious issue), and for new users there is not enough data to do safe negative [LW · GW] selection on anything subtle. So talk of "principles in the Sequences" is very suspicious; I don't see an operationalization that works for new users and doesn't bring either negative-selection or trivial-inconvenience woes.
Replies from: MondSemmel, Raemon↑ comment by MondSemmel · 2023-04-06T08:57:55.197Z · LW(p) · GW(p)
Agreed. Talking from the perspective of a very occasional author, suppose I post a LW essay and then link it in r/slatestarcodex. I want it to be as easy as possible for readers to comment on my essay wherever I link it. If it's impossible or even just inconvenient for them to simply comment on the essay, they might not do so.
In which case, why would the author post their stuff on LW, specifically?
Replies from: steve2152, pktechgirl↑ comment by Steven Byrnes (steve2152) · 2023-04-10T13:11:50.727Z · LW(p) · GW(p)
I also see that as a downside (although not the end of the world—it depends on the filters). Like, is LW a blogging platform? Or is it a discussion forum for a particular online community? Right now we kinda have it both ways—when describing why I post on LW, I might say something like “it’s a very nice blogging platform, and also has a great crowd of regular readers & commenters who I tend to know and like”. But the harder it is for random people to comment on my posts, the less it feels like a “blogging platform”, and the more it feels like I’m just talking within a gated community, which isn’t necessarily what I’m going for.
Right now I have a pretty strong feeling that I don’t want to start a substack / wordpress / whatever and cross-post everything to LW, mostly for logistical reasons (more annoying to post, need to fix typos in two places), plus it splits up the comment section. But I do get random people opening a LW account to comment on my posts sometimes, and I like that †, and if that stops being an option it would be a marginal reason for me to switch to “separate blog + crossposting”. Wouldn’t be the end of the world, just wanted to share. Hmm, I might also / alternatively mitigate the problem by putting an “email-me” link / invitation at the bottom of all my posts.
Random thought: Just like different authors get to put different moderation guidelines on their own posts, maybe different authors could also get to put different barriers-to-new-user-comments on their own posts?? I haven’t really thought it through, it’s just an idea that popped into my head.
† Hmm, actually, I’m happy about the pretty-low-friction ability of anyone to comment on my LW posts in the case of e.g. obscure technical posts, and neuroscience posts, and random posts. I haven’t personally written posts that draw lots of really bad takes on AI, at least not so far, and I can see that being very annoying.
↑ comment by Elizabeth (pktechgirl) · 2023-04-10T05:49:02.117Z · LW(p) · GW(p)
Some authors would view the moderation as a feature, not a bug.
↑ comment by Raemon · 2023-04-06T16:12:18.572Z · LW(p) · GW(p)
I think avoiding the negative selection failure modes is an important point. I'm mulling over how to think about it.
Do you have a thing you're imagining with "positive selection" that you expect to work?
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2023-04-06T18:18:47.877Z · LW(p) · GW(p)
Stop displaying the user's Karma total, so that there is no numbers-go-up reward for posting lots of mediocre stuff; instead, count the number of comments/posts in some upper quantile by Karma (which should cash out as something like 15+ Karma for comments). Use that number where Karma is currently used, like vote weights. (Also, display the number of comments below some negative threshold, like -3, in the last few months.)
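A minimal sketch of the proposal, assuming a per-user list of items carrying karma and a timestamp (the data shape is a placeholder; the thresholds are the ones suggested above):

```python
from datetime import datetime, timedelta

def contribution_counts(items: list[dict], months: int = 3) -> tuple[int, int]:
    """items: [{'karma': int, 'posted_at': datetime}, ...] for a single user.

    Returns (count of 15+ karma contributions overall,
             count of <= -3 karma contributions in the last `months` months).
    """
    cutoff = datetime.now() - timedelta(days=30 * months)
    good = sum(1 for it in items if it["karma"] >= 15)
    recent_bad = sum(
        1 for it in items if it["karma"] <= -3 and it["posted_at"] >= cutoff
    )
    return good, recent_bad
```

The `good` count would then stand in wherever the raw Karma total is used today (e.g. vote weights), with `recent_bad` displayed alongside it.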
Replies from: evand, Raemon↑ comment by evand · 2023-04-08T20:13:51.617Z · LW(p) · GW(p)
Something like an h-index might be better than a total.
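For concreteness, the textbook h-index computation applied to per-item karma instead of citations (just a sketch; nothing here is actual site code):

```python
def h_index(karma_scores: list[int]) -> int:
    """Largest h such that the user has h contributions with karma >= h."""
    scores = sorted(karma_scores, reverse=True)
    h = 0
    while h < len(scores) and scores[h] >= h + 1:
        h += 1
    return h

# One +500 post no longer dominates: [500] gives h = 1,
# while ten solid +10 comments give h = 10.
assert h_index([500]) == 1
assert h_index([10] * 10) == 10
```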
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2023-04-08T21:04:04.142Z · LW(p) · GW(p)
In some ways this sounds better than either my proposal or Raemon's [LW(p) · GW(p)]. But there is still the spurious upvoting [LW(p) · GW(p)] issue, so a metric should be able to not get too excited about a few highly upvoted things.
↑ comment by Raemon · 2023-04-06T18:32:50.937Z · LW(p) · GW(p)
Mmm. Yeah something in that space makes sense.
FYI, a similar idea I've been thinking about is "you see the total karma of the user's top ~20 comments/posts", which you can initially improve by writing somewhat-good comments but will quickly max out. That metric emphasizes "what was their best content like?"; your metric is something like "how much 'at least pretty solid' content do they have?", and I'm not sure which is better.
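Under the same assumed data shape as above, this variant is a one-liner (the cutoff of 20 being, as stated, approximate):

```python
def top_n_karma(karma_scores: list[int], n: int = 20) -> int:
    """Sum of the user's n highest-karma contributions; saturates past n good items."""
    # Unlike the threshold count, this rewards a few very highly upvoted items;
    # unlike an h-index, it can still be dominated by one viral post.
    return sum(sorted(karma_scores, reverse=True)[:n])
```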
Replies from: Vladimir_Nesov↑ comment by Vladimir_Nesov · 2023-04-06T19:02:30.289Z · LW(p) · GW(p)
There are some viral posts (including on community drama) where (half of) everything gets unusually highly upvoted, compared to normal. So my inclination is to recalibrate thresholds according to the post's popularity and the current year, so as to count fewer such comments (that's too messy and isn't worth it, but other things should be robust to this effect). This is why I specifically proposed the number of 15+ Karma comments, not their total Karma. Also, the total number still counts as some sort of "total contribution", as opposed to the less savory "user quality".
comment by Ben (ben-lang) · 2023-04-05T09:46:16.778Z · LW(p) · GW(p)
One of the things I really like about LW is the "atmosphere", the way people discuss things. So very well done at curating that so far. I personally would be nervous about over-pushing "The Sequences". I hadn't read much of them even a little while into my LW time, but I think I picked up the vibes fine without them.
I think the commenting guidelines are an excellent feature which has probably done a lot of good work in making LW the nice place it is (all that "explain, not persuade" stuff). I wonder how much difference it would make if, the first time a new user posts a comment, they were asked to tick a box saying "I read the commenting guidelines".
comment by Jon Garcia · 2023-04-05T00:26:47.365Z · LW(p) · GW(p)
Would it make sense to have a "Newbie Garden" section of the site? The idea would be to give new users a place to feel like they're contributing to the community, along with the understanding that the ideas shared there are not necessarily endorsed by the LessWrong community as a whole. A few thoughts on how it could work:
- New users may be directed toward the Newbie Garden (needs a better name) if they try to make a post or comment, especially if a moderator deems their intended contribution to be low-quality. This could also happen by default for all users with karma below a certain threshold.
- New users are able to create posts, ask questions, and write comments with minimal moderation. Posts here won't show up on the main site front page, but navigation to this area should be made easy on the sidebar.
- Voting should be as restricted here as on the rest of the site to ensure that higher-quality posts and comments continue trickling to the top.
- Teaching the art of rationality to new users should be encouraged. Moderated posts that point out trends and examples of cognitive biases and failures of rationality exhibited in recent newbie contributions, and that advise on how to correct for them in the future, could be pinned to the top of the Newbie Garden (still needs a better name). Moderated comments that serve a similar purpose could also be pinned to the top of comment sections of individual posts. This way, even heavily downvoted content could lead (indirectly) to higher quality contributions in the future.
- Newbie posts and questions with sufficient karma can be queued up for moderator approval to be posted to the main site.
I appreciate the high quality standards that have generally been maintained on LessWrong over the years, and I would like to see this site continue to act as both a beacon and an oasis of rationality.
But I also want people not to feel like they're being excluded from some sort of elitist rationality club. Anyone should feel like they can join in the conversation as long as they're willing to question their assumptions, receive critical feedback, and improve their ability to reason, about both what is true and what is good.
Replies from: Vaniver↑ comment by Vaniver · 2023-04-05T02:51:33.607Z · LW(p) · GW(p)
I think this works at universities because teachers are paid to grade things (they wouldn't do it otherwise) and students get some legible-to-the-world certificate once they graduate.
Like, we already have a wealth of curriculum / as much content for newbies as they can stand; the thing that's missing is the peer reading group and mentors. We could probably construct peer reading groups (my simple idea is that you basically put people into groups based on what <month> they join, varying the duration until you get groups of the right size, and then you have some private-to-that-group forum / comment system / whatever), but I don't think we have the supply of mentors. [This is a crux--if someone thinks they have supply here or funding for it, I want to hear about it.]
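A sketch of the join-month grouping idea, under the assumption that we batch by signup order and let the calendar window stretch until a cohort fills (the tuple schema is made up for illustration):

def assign_cohorts(users, target_size=8):
    # users: list of (username, signup_date) tuples -- hypothetical schema.
    cohorts, current = [], []
    for user in sorted(users, key=lambda u: u[1]):
        current.append(user)
        if len(current) == target_size:
            cohorts.append(current)
            current = []
    if current:                      # stragglers join the newest cohort
        if cohorts:
            cohorts[-1].extend(current)
        else:
            cohorts.append(current)
    return cohorts

Each cohort would then get its own private forum / comment system, per the proposal above.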
↑ comment by Elizabeth (pktechgirl) · 2023-04-05T04:40:09.014Z · LW(p) · GW(p)
I think the peer thing is pretty good, and recreates the blind-leading-the-blind aspect of early lesswrong.
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-04-05T18:23:41.550Z · LW(p) · GW(p)
You are implying that blind-leading-the-blind is good, not bad, here, correct? I'm interested to hear more of your thoughts on why that will result in collective intelligence and not collective decoherence; it seems plausible to me, but some swarm algorithms work and some don't.
Replies from: pktechgirl↑ comment by Elizabeth (pktechgirl) · 2023-04-06T00:10:50.507Z · LW(p) · GW(p)
I'm claiming that blind-leading-the-blind can work at all, and is preferable to a low-karma section containing both newbies and long time members whose low karma reflects quality issues. Skilled mentorship is almost certainly better, but I don't think that's available at the necessary scale.
comment by Ruby · 2023-04-07T00:36:51.500Z · LW(p) · GW(p)
A few observations led the LessWrong team to focus on moderation now. The initial trigger was watching new users sign up and seeing the quality distribution of new submissions worsen (I found myself downvoting many more posts), but some analytics help drive home the picture:
(Note that not every week does LessWrong get linked in the Times... but then again... maybe it roughly does from this point onwards.)
LessWrong data from Google Analytics: traffic has doubled Year-on-Year
comment by Ben Pace (Benito) · 2023-04-04T21:38:39.967Z · LW(p) · GW(p)
Comments from new users won't display by default until they've been approved by a moderator.
I'm pretty sad about this from a new-user experience, but I do think it would have made my LW experience much better these past two weeks.
it's just generally the case that if you participate on LessWrong, you are expected to have absorbed the set of principles in The Sequences (AKA Rationality A-Z).
Some slight subtlety: you can get these principles in other ways; for example, great scientists or builders or people who've read Feynman can have picked them up elsewhere. But I think the Sequences give them really well and also help a lot with setting the site culture.
How do we deal with low quality criticism? There's something sketchy about rejecting criticism. There are obvious hazards of groupthink. But a lot of criticism isn't well thought out, or is rehashing ideas we've spent a ton of time discussing and doesn't feel very productive.
My current guess is something in the genre of a "Schelling place to discuss the standard arguments" to point folks to. I tried to start one here [LW(p) · GW(p)], responding to basic AI x-risk questions people had.
Replies from: dkirmani
comment by tailcalled · 2023-04-04T21:45:07.895Z · LW(p) · GW(p)
I've been looking at creating a GPT-powered program which can automatically generate a test of whether one has absorbed the Sequences. It doesn't currently work that well and I don't know whether it's useful, but I thought I should mention it. If I get something that I think is worthwhile, then I'll ping you about it.
Though my expectation is that one cannot really meaningfully measure the degree to which a person has absorbed the Sequences.
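One possible shape for such a generator, assuming the openai Python package (v1 interface); the prompt and model choice here are illustrative guesses, not tailcalled's actual program:

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def generate_question(concept: str) -> str:
    # Ask for a question probing understanding rather than recall; whether
    # this avoids measuring password-guessing is exactly the open question.
    prompt = (
        "Write one multiple-choice question testing whether the reader has "
        "internalized this rationality concept (not just its name): "
        + concept + ". Mark the correct answer."
    )
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content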
Replies from: nim↑ comment by nim · 2023-04-05T03:15:09.370Z · LW(p) · GW(p)
I too expect that testing whether someone can talk as if they've absorbed the sequences would measure password-guessing more accurately than comprehension.
The idea gets me wondering whether it's possible to design a game that's easy to learn and win using the skills taught by the sequences, but difficult or impossible without them. Since the sequences teach a skill, it seems like we should be able to procedurally generate novel challenges that the skill makes it easy to complete.
As someone who's gone through the sequences yet isn't sure whether they "really" "fully" understand them, I'd be interested in taking and retaking such a test from time to time (if it was accurate) to quantify any changes to my comprehension.
Replies from: tailcalled↑ comment by tailcalled · 2023-04-05T07:36:57.570Z · LW(p) · GW(p)
I too expect that testing whether someone can talk as if they've absorbed the sequences would measure password-guessing more accurately than comprehension.
I think the test could end up working as an ideology measure rather than a password-guessing game.
The idea gets me wondering whether it's possible to design a game that's easy to learn and win using the skills taught by the sequences, but difficult or impossible without them. Since the sequences teach a skill, it seems like we should be able to procedurally generate novel challenges that the skill makes it easy to complete.
It's tricky, because the Sequences teach a network of ideas that are often not directly applicable to problem-solving/production tasks, but rather relevant for analyzing or explaining ideas. It's hard to really evaluate an analysis/explanation without considering its downstream applications, but it's also hard to come up with a task that is simultaneously:
- Big enough that it has both analysis/explanation and downstream applications,
- Small enough that it can be done quickly as a single person filling out a test.
comment by Max H (Maxc) · 2023-04-04T21:24:52.120Z · LW(p) · GW(p)
This isn't directly related to the moderation issue, but there are a couple of features I would find useful, given the recent increase in post and comment volume:
- A way to hide the posts I've read (or marked as read) from my own personal view of the front page. (Hacker News has this feature.)
- Keeping comment threads I've collapsed, collapsed across page reloads.
I support a stricter moderation policy, but I think these kinds of features would go a long way in making my own reading experience as pleasant as it's always been.
Replies from: lahwran↑ comment by the gears to ascension (lahwran) · 2023-04-04T21:32:25.101Z · LW(p) · GW(p)
Re: #1, could you comment on what additional functionality on top of this button would help you?
CSS - the repetition is there to inflate the selectors' specificity, so these rules win even against site styles that use !important; apply to all sites with the Stylus extension in Chrome:

/* Unvisited links: blue. The selector repeats :not(:visited) and [href]
   purely to boost specificity. */
a:not(:visited):not([href="#"])[href]:not(:visited)[href]:not(:visited)[href]:not(:visited)[href]:not(:visited)[href], a:not([href="#"]):not(:visited)[href]:not(:visited)[href]:not(:visited)[href]:not(:visited)[href]:not(:visited)[href] * {
  color: #32a1ce !important;
}

/* Visited links: grey, so already-read items fade out. */
a:visited[href]:not([href="#"]):visited[href]:visited[href]:visited[href]:visited[href], a:visited[href]:not([href="#"]):visited[href]:visited[href]:visited[href]:visited[href] * {
  color: #939393 !important;
}
Replies from: Maxc↑ comment by Max H (Maxc) · 2023-04-04T21:41:43.065Z · LW(p) · GW(p)
Oh, yes, that's basically what I'm looking for, not sure how I missed it. Thanks!
I think a bulk toggle for read / unread would still potentially be useful, but this is most of what I want.
comment by nim · 2023-04-04T21:08:54.832Z · LW(p) · GW(p)
Thank you for the transparency!
Comments from new users won't display by default until they've been approved by a moderator.
It sounds like you're getting ready to add a pretty significant new workload to the tasks already incumbent upon the mod team. Approving all comments from new users seems like a high volume of work compared to my impression of your current duties, and the moderation skill threshold for new-user comment approval might be lower than it is for moderators' other duties.
You may have already considered this possibility and ruled it out, but I wonder if it might make sense to let existing users above a given age and karma threshold help with the new-user comment queue. If LW is able to track who approved a given comment, it might be relatively easy to take away the newbie-queue-moderation permissions from anybody who lets too many obviously bad ones through.
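A sketch of that permission gate; every threshold and field name here is a made-up placeholder, not proposed policy:

def can_review_newbie_queue(user, min_karma=100, min_age_days=365,
                            max_bad_approvals=3):
    # Revoke queue access from reviewers whose approvals mods later
    # overturned too often ("bad_approvals" is an assumed counter).
    return (user.karma >= min_karma
            and user.account_age_days >= min_age_days
            and user.bad_approvals <= max_bad_approvals)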
I would be interested in helping out with a newbie comment queue to keep it moving quickly so that newbies can have a positive early experience on lesswrong, whereas I would not want to volunteer for the "real" mod team because I don't have the requisite time and skills for reliably showing up for the more nuanced aspects of the role. Others who lurk here might also like to help out in this way, and might like to help out intermittently by opting in to a notification that the newbie comment queue is longer than usual and needs some curation.
Thank you for the work you do in maintaining this well-kept garden, and I hope the necessary new work can be undertaken in a way that doesn't risk burnout for our dedicated mods.
Replies from: zac-hatfield-dodds, dxu, Benito↑ comment by Zac Hatfield-Dodds (zac-hatfield-dodds) · 2023-04-04T22:55:19.452Z · LW(p) · GW(p)
It's also quite plausible to me that carefully prompted language models, with a few dozen carefully explained examples and detailed instructions on the decision criteria, would do a good job at this specific moderation task. Less clear what the payoff period of such an investment would be so I'm not actually recommending it, but it's an option worth considering IMO.
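A sketch of what that prompted-LM triage could look like, assuming the openai Python package (v1 interface); the instructions and examples are placeholders for the "few dozen carefully explained examples" described above:

from openai import OpenAI

client = OpenAI()

FEW_SHOT = [
    # A few dozen labeled examples would go here; one made-up sample:
    {"role": "user", "content": "Comment: 'You're all wrong, AI is fine, read a book.'"},
    {"role": "assistant", "content": "REJECT: contentless hostility, no argument."},
]

def triage_comment(comment: str) -> str:
    # Returns an APPROVE/REJECT/UNSURE verdict for a human mod to review,
    # not an autonomous decision.
    messages = ([{"role": "system", "content":
                  "You pre-screen new-user comments for LessWrong moderators. "
                  "Apply the site's discourse norms; when unsure, say UNSURE."}]
                + FEW_SHOT
                + [{"role": "user", "content": "Comment: " + repr(comment)}])
    response = client.chat.completions.create(model="gpt-4", messages=messages)
    return response.choices[0].message.content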
Replies from: Ruby↑ comment by dxu · 2023-04-04T21:20:34.163Z · LW(p) · GW(p)
I would be interested in helping out with a newbie comment queue to keep it moving quickly so that newbies can have a positive early experience on lesswrong, whereas I would not want to volunteer for the "real" mod team because I don't have the requisite time and skills for reliably showing up for the more nuanced aspects of the role.
Were such a proposal to be adopted, I would be likewise willing to participate.
↑ comment by Ben Pace (Benito) · 2023-04-04T21:37:09.902Z · LW(p) · GW(p)
I really appreciate the thought here (regardless of whether it works out) :)
comment by Shmi (shminux) · 2023-04-04T22:29:27.249Z · LW(p) · GW(p)
One thing I would love to see, but that is missing on a lot of posts, is a summary upfront that makes the context and the main argument (or just the content) clear to the reader. (Zvi's posts are an excellent example of this.) At least from the newbies. Good writers like Eliezer and Scott Alexander can produce quality posts without a summary; most people posting here are not in that category. It is not wrong to post a stream of consciousness or an incomplete draft, but at least spend 5 minutes writing up the gist in a paragraph upfront. If you can't be bothered, or do not have the skill of summarizing, GPT will happily do it for you if you paste your text into the prompt, as a whole or in several parts. GPT-4/Bing can probably also evaluate the quality of the post and give feedback on how well it fits into the LW framework and what might be missing or could be improved. Maybe this part can even be automated.
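A sketch of that workflow, assuming the openai Python package (v1 interface); the prompt wording is a guess, and the output is a draft for the author to edit, not a replacement for their own summary:

from openai import OpenAI

client = OpenAI()

def summarize_post(post_text: str) -> str:
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[
            {"role": "system",
             "content": "Summarize this LessWrong post in one upfront paragraph: "
                        "state the context and the main argument."},
            {"role": "user", "content": post_text},
        ],
    )
    return response.choices[0].message.content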
Replies from: Benito↑ comment by Ben Pace (Benito) · 2023-04-04T22:32:52.918Z · LW(p) · GW(p)
Scott Garrabrant once proposed being able to add abstracts to posts that would appear if you clicked on posts on the frontpage. Then you could read the summary, and only read the rest of the post if you disagreed with it.
comment by Raemon · 2023-04-12T00:31:28.010Z · LW(p) · GW(p)
I'm not entirely sure what I want the long-term rule to be, but I do think it's bad for the comment section of Killing Socrates [LW · GW] to be basically discussing @Said Achmiz [LW · GW] specifically where Said can't comment. It felt a bit overkill to make an entire separate overflow post just so Said could argue back, but it seemed like this post might be a good venue for it.
I will probably weigh in here with my own thoughts, although not sure if I'll get to it today.
Replies from: SaidAchmiz↑ comment by Said Achmiz (SaidAchmiz) · 2023-04-12T05:03:26.007Z · LW(p) · GW(p)
I appreciate the consideration. I don’t know that I particularly have anything novel or interesting to say about the post in question; I think it mostly stands (or, rather, falls) on its own, and any response I could make would merely repeat things that I’ve said many times. I could say those things again, but what would be the point? Nobody will hear them who hasn’t already heard. (In any case, some decent responses have already been written by other commenters.)
There is one part (actually a quote from Vaniver) which I want to object to, specifically in the context of my work:
There’s a claim I saw and wished I had saved the citation of, where a university professor teaching an ethics class or w/e gets their students to design policies that achieve ends, and finds that the students (especially more ‘woke’ ones) have very sharp critical instincts, can see all of the ways in which policies are unfair or problematic or so on, and then are very reluctant to design policies themselves, and are missing the skills to do anything that they can’t poke holes in (or, indeed, missing the acceptance that sometimes tradeoffs require accepting that the plan will have problems). In creative fields, this is sometimes called the Taste Gap, where doing well is hard in part because you can recognize good work before you can do it, and so the experience of making art is the experience of repeatedly producing disappointing work.
In order to get the anagogic ascent, you need both the criticism of Socrates and the courage to keep on producing disappointing work (and thus a system that rewards those in balanced ways).
In my professional (design) experience, I have found the above to be completely untrue.
My work is by no means perfect now, nor was it perfect when I started; nor will I claim that I’ve learned nothing and have not improved. But it’s simply not the case that I started out “repeatedly producing disappointing work” and only then (and thereby) learned to make good work. On the contrary, I started out with a strong sense and a good understanding of what bad design was, and what made it bad; and then I just didn’t do those things. Instead of doing bad and wrong things, I did good and correct things. Knowing what is good and what is bad, and why, made that relatively straightforward.
(Is there a “rationality lesson” to be drawn from this? I don’t know; perhaps, perhaps not. But it stands as a non-metaphorical point, either way.)
comment by tremmor19 · 2023-04-08T16:15:09.536Z · LW(p) · GW(p)
I'm noticing two types of comments I would consider problematic increasing lately: poorly thought out or reasoned long posts, and snappy reddit-esque one-line comments. The former are more difficult to filter for, but dealing with the second seems much easier to automate; for example, a filter which catches any comment below a certain length to be approved manually (potentially with exceptions for established users), along the lines of the sketch below.
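A sketch of that filter; the length and karma thresholds are placeholders, not proposed site policy:

def needs_manual_approval(comment_text, author_karma,
                          min_length=140, trusted_karma=100):
    # Route very short comments from low-karma users to the mod queue;
    # established users (by karma) are exempt, per the exception above.
    return (len(comment_text.strip()) < min_length
            and author_karma < trusted_karma)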
There's also a general attitude that goes along with that: in general, not reading full posts, nitpicking things to be snarky about, not participating in discussion or responding when someone attempts to engage. Honestly, I'd much rather see twenty people making poorly-thought-out longposts who don't know what they're talking about, as long as they're willing to participate in discussion, than an increase of these guys. At least some of the confused but sincere guys can be curious and engaging.
So to that end, I'd like to support moderation that focuses on culling low-effort comments, failure to participate, and failure to read the damn post. A flag for users who regularly make short comments could be that, or better methods for regular users to flag comments for review for lack of engagement. Or perhaps an automatic flag for how often someone posts once and doesn't respond to any comments.
I do think this is mostly what you meant anyway; I just wanted to point out what exactly I personally see as the issue, versus just "underinformed newbies".
comment by Dagon · 2023-04-04T21:03:55.912Z · LW(p) · GW(p)
Do you keep metrics on moderated (or just downvoted) posts from users, in order to analyze whether a focus on "new users" or "low total karma" users is sufficient?
I welcome a bit stronger moderation, or at least encouragement of higher-minimum-quality posts and comments. I'm not sure that simple focus on newness or karma for the user (as opposed to the post/comment) is sufficient.
I don't know whether this is workable, but encouraging somewhat stronger downvoting norms, as opposed to ignoring and moving on, might be a way to distribute this gardening work a bit, so it's not all on the moderators.
comment by David Bravo (davidbravocomas) · 2023-04-05T15:11:00.625Z · LW(p) · GW(p)
Nice to hear the high standards you continue to pursue. I agree that LessWrong should set itself much higher standards than other communities, even than other rationality-centred or -adjacent communities.
My model of this big effort to raise the sanity waterline [LW · GW] and prevent existential catastrophes contains three concentric spheres. The outer sphere is all of humanity; ever-changing yet more passive. Its public opinion is what influences most of the decisions of world leaders and companies, but this public opinion can be swayed by other, more directed forces.
The middle sphere contains communities focused on spreading important ideas and doing so by motivating a rationalist discourse (for example, ACX, Asterisk Magazine, or Vox's Future Perfect). It aims, in other words, for this capacity to sway public opinion, to make key ideas enter popular discussion.
And the inner sphere is LessWrong, which shares the same aims as the middle sphere, and in addition is the main source of generation of ideas and patterns of thought. Some of these ideas (hopefully a concern for AI alignment, awareness of the control problem, or Bayesianism, for instance) will eventually trickle down to the general public; others, such as technical topics related to AI safety, don't need to go down to that level because they belong to the higher end of the spectrum which is directly working to solve these issues.
So I very much agree with the vision to maintain LW as a sort of university, with high entry barriers in order to produce refined, high-quality ideas and debates, while at the same time keeping in mind that for some of these ideas to make a difference, they need to trickle down and reach the public debate.
comment by memeticimagery · 2023-04-05T14:53:00.366Z · LW(p) · GW(p)
Disclaimer: I myself am a newer user from last year.
I think trying to change downvoting norms and behaviours could help a lot here and save you some workload on the moderation end. Generally, poor quality posters will leave if you ignore and downvote them. Recently, there has been an uptick in these posts and of the ones I have seen many are upvoted and engaged with. To me, that says users here are too hesitant to downvote. Of course, that raises the question of how to do that and if doing so is undesirable because it will broadly repel many new users some of whom will not be "bad". Overall though I think encouraging existing users to downvote should help keep the well-kept garden.
Replies from: Legionnaire↑ comment by Legionnaire · 2023-04-06T04:09:02.835Z · LW(p) · GW(p)
I think more downvoting being the solution depends on the goals. If our goal is only to maintain the current quality, that seems like a solution. If the goal is to grow in users and quality, I think diverting people to a real-time discussion location like Discord could be more effective.
E.g. a new user coming to this site might not have any idea a particular article exists that they should read before writing and posting their 3-page thesis on why AI will/won't be great, only to have their work downvoted (it is insulting and off-putting to be downvoted), and in the end we may miss out on persuading/gaining people. In a chat, a quick back and forth could steer them in the right direction right off the bat.
Replies from: dxu↑ comment by dxu · 2023-04-06T04:53:07.814Z · LW(p) · GW(p)
I think diverting people to a real-time discussion location like Discord could be more effective.
Agreed—which brings to mind the following question: does LW currently have anything like an official/primary public chatroom (whether hosted on Discord or elsewhere)? If not, it may be worth creating one, announcing it in a post (for visibility), and maintaining a prominently visible link to it on e.g. the sidebar (which is what many subreddits do).
comment by carboniferous_umbraculum (Spencer Becker-Kahn) · 2023-04-05T14:21:30.759Z · LW(p) · GW(p)
I've always found it a bit odd that Alignment Forum submissions are automatically posted to LW.
If you apply some of these norms, then imo there are questionable implications, i.e. it seems weird to say that one should have read the sequences in order to post about mechanistic interpretability on the Alignment Forum.
↑ comment by habryka (habryka4) · 2023-04-05T18:21:51.061Z · LW(p) · GW(p)
If you apply some of these norms, then imo there are questionable implications, i.e. it seems weird to say that one should have read the sequences in order to post about mechanistic interpretability on the Alignment Forum.
The AI Alignment Forum was never intended as the central place for all AI Alignment discussion. It was founded at a time when basically everyone involved in AI Alignment had read the sequences, and the goal was to just have any public place for any alignment discussion.
Now that the field is much bigger, I actually kind of wish there was another forum where AI Alignment people could go to, so we would have more freedom in shaping a culture and a set of background assumptions that allow people to make further strides and create a stronger environment of trust.
I personally am much more interested in reading about mechanistic interpretability from people who have read the sequences. That topic in particular is one where a good understanding of probability theory, causality and philosophy of science seems particularly important (again, it's not that important that someone has acquired that understanding via the sequences instead of some other means, but it does actually really benefit from a bunch of skills that are not standard in the ML or general scientific community).
I expect we will make some changes here in the coming months, maybe by renaming the forum or starting off a broader forum that can stand more on its own, or maybe just shutting down the AI Alignment Forum completely and letting other people fill that niche.
↑ comment by the gears to ascension (lahwran) · 2023-04-05T14:31:57.392Z · LW(p) · GW(p)
similarly, I've been frustrated that medium quality posts on lesswrong about ai often get missed in the noise. I want alignmentforum longform scratchpad, not either lesswrong or alignmentforum. I'm not even allowed to post on alignmentforum!
some recent posts I've been frustrated to see get few votes and generally less discussion:
- https://www.lesswrong.com/posts/JqWQxTyWxig8Ltd2p/relative-abstracted-agency [LW · GW] - this one deserves at least 35 imo
- https://www.lesswrong.com/posts/fzGbKHbSytXH5SKTN/penalize-model-complexity-via-self-distillation [LW · GW]
- https://www.lesswrong.com/posts/bNpqBNvfgCWixB2MT/towards-empathy-in-rl-agents-and-beyond-insights-from-1 [LW · GW]
- https://www.lesswrong.com/posts/LsqvMKnFRBQh4L3Rs/steering-systems [LW · GW]
- ... many more open in tabs I'm unsure about.
↑ comment by Gordon Seidoh Worley (gworley) · 2023-04-05T16:58:38.177Z · LW(p) · GW(p)
There have been a lot of really low quality posts lately, so I've been having to skim more and read fewer things from new authors. I think resolving general issues around quality should help valuable stuff rise to the top, regardless of whether it's on AF or not.
↑ comment by Garrett Baker (D0TheMath) · 2023-04-06T04:08:57.907Z · LW(p) · GW(p)
[Justification for voting behavior, not intending to start a discussion. If I were I would have commented on the linked post]
I’ve read the model distillation post, and it is bad, so strong disagree. I don’t think that person understands the arguments for AI risk and in particular don’t want to continuously reargue the “consequentialism is simpler, actually” line of discussion with someone who hasn’t read pretty basic material like risks from learned optimization.
Replies from: lahwran, lahwran↑ comment by the gears to ascension (lahwran) · 2023-04-06T17:15:32.664Z · LW(p) · GW(p)
I still think this one is interesting and should get more attention, though: https://www.lesswrong.com/posts/JqWQxTyWxig8Ltd2p/relative-abstracted-agency [LW · GW]
↑ comment by the gears to ascension (lahwran) · 2023-04-06T04:12:08.939Z · LW(p) · GW(p)
fair enough. I've struck it from my comment.
comment by DanielFilan · 2023-04-05T01:04:52.791Z · LW(p) · GW(p)
What's a "new user"? It seems like this category matters for moderation but I don't see a definition of it. (Maybe you're hoping to come up with one?)
Replies from: Raemon↑ comment by Raemon · 2023-04-05T01:29:43.211Z · LW(p) · GW(p)
There's maybe two tiers of new user:
- User that has never posted or commented before
- User that has posted or commented a couple times, but has not been given a stamp-of-approval by moderators, which would mean we don't have to pay special attention to them anymore.
Until a user has been approved, moderators at least glance at every comment and post they make.
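A small sketch of that gate in code (field names are hypothetical, not LessWrong's actual schema):

def comment_is_publicly_visible(comment, author):
    # Approved users post normally; until then, each comment is held
    # until a moderator has looked at it.
    if author.approved_by_moderator:
        return True
    return comment.approved_by_moderator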
comment by ChristianKl · 2023-04-05T12:03:36.417Z · LW(p) · GW(p)
Stackoverflow has a system where users with more karma get more power. When it comes to the job of deciding whether or not to approve comments of new users, I don't see why that power should be limited to a handful of mods. Have you thought about giving that right out at a specific amount of karma?
comment by trevor (TrevorWiesinger) · 2023-04-05T01:26:32.428Z · LW(p) · GW(p)
Nice! Through much of 2022, I was pretty worried that Lesswrong would eventually stop thriving for some reason or another. This post is a strong update in the direction of "I never had anything to worry about, because the mods will probably adapt ahead of time and stay well ahead of the curve on any issue".
I'm also looking forward to the results of the Sequences requirement. I've heard some good things about rationality engines contributing to humans solving alignment, but I'm not an expert on that approach.
comment by mako yass (MakoYass) · 2024-08-20T03:58:35.254Z · LW(p) · GW(p)
How do we deal with low quality criticism?
Criticism that's been covered before should be addressed by citing prior discussion and flagging the post as a duplicate, unless the author can point out some way their phrasing is better.
Language models are potentially very capable of making the process of citing dupes much more efficient, and I'm going to talk to AI Objectives about this stuff [LW · GW] at some point in the next week and this is one of the technologies we're planning on discussing.
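One way such a dupe-citer could work: embed the archive of prior criticisms once, then rank by cosine similarity against each new post. A sketch assuming the openai Python package and numpy; this is illustrative, not whatever AI Objectives ends up building:

from openai import OpenAI
import numpy as np

client = OpenAI()

def embed(text):
    resp = client.embeddings.create(model="text-embedding-3-small", input=text)
    return np.array(resp.data[0].embedding)

def find_prior_discussions(new_criticism, archive, top_k=3):
    # archive: list of (title, text) pairs of prior critical posts.
    query = embed(new_criticism)
    scored = []
    for title, text in archive:
        vec = embed(text)
        sim = float(query @ vec / (np.linalg.norm(query) * np.linalg.norm(vec)))
        scored.append((sim, title))
    return sorted(scored, reverse=True)[:top_k]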
(less relevant to the site, but general advice: In situations where a bad critic is exhibiting vices that make good faith conversation impossible, it's not ad hominem to redirect the conversation and focus on that. To have a viable discourse, it is necessary to hold bad critics accountable.)
What are the actual rationality concepts LWers are basically required to understand to participate in most discussions?
Being interested in finding out when you're wrong about something (and so promoting good critics) is one of the defining norms of rationality, but you have a big problem with this: the way a user/community communicates interest and promotes posts in a redditlike is by voting, and voting is anonymous, so you have no way of punishing violations of this norm.
I think this is actually a serious problem with the site, and the approach I'd take to fixing it is to make a different kind of site.
Replies from: Raemon↑ comment by Raemon · 2024-08-20T16:40:57.533Z · LW(p) · GW(p)
Language models are potentially very capable of making the process of citing dupes much more efficient, and I'm going to talk to AI Objectives about this stuff [LW · GW] at some point in the next week and this is one of the technologies we're planning on discussing.
This is a cool suggested use of language models. I'll think about whether/how to implement it on LW.
comment by LoganStrohl (BrienneYudkowsky) · 2023-04-11T18:22:18.144Z · LW(p) · GW(p)
Cheering over here! This seems like a tricky problem and I'm so happy about how you seem to be approaching it. :)
I'm especially pleased with the stuff about "people need to read the sequences, but shit the sequences are long, which particular concepts are especially crucial for participation here?", as opposed to wishing people would read the sequences and then giving up because they're long and stylistically polarizing (which is a mental state I've often found myself occupying).
comment by the gears to ascension (lahwran) · 2023-04-04T21:28:28.254Z · LW(p) · GW(p)
re: discussing criticism - I'd love to see tools to help refer back to previous discussions of criticism and request clarification of the difference. Though of course this can be an unhelpful thought-stopper, I think more often than not it's simply context retrieval and helps the new criticism clarify its additions. ("paper does not clarify contributions"?)
Replies from: Raemon↑ comment by Raemon · 2023-04-04T21:37:56.902Z · LW(p) · GW(p)
A thing I'm finding difficult at the moment is that criticism just isn't really indexed in a consistent way, because, well, everyone has subtly different frames on what they're criticizing and why.
It's possible there should be, like, 3 major FAQs that just try to be a comprehensive index on frequent disagreements people have with LW consensus, and people who want to argue about it are directed there to leave comments, and maybe over time the FAQ becomes even more comprehensive. It's a lot of work, might be worth it anyway
(I'm imagining such an FAQ mostly linking to external posts rather than answering everything itself)
comment by metachirality · 2023-04-05T02:06:43.878Z · LW(p) · GW(p)
Will the karma thing affect users who've joined before a certain period of time? Asking this because I joined quite a while ago but have only 4 karma right now.
Replies from: Raemon
comment by trevor (TrevorWiesinger) · 2023-04-06T22:11:01.929Z · LW(p) · GW(p)
I'm strongly in favor of the sequences requirement. If I had been firmly encouraged/pressured into reading the sequences when I joined LW around March/April 2022, my life would have been much better and more successful by now. I suspect this would be the case for many other people. I've spent a lot of time thinking about ways that LW could set people up to steer themselves (and each other) towards self-improvement, like the Battle School in Ender's Game, but it seems like it's much easier to just tell people to read the Sequences.
Something that I'm worried about is that the reading sequences requirement actually makes lesswrong too reputable, i.e. that it makes LW into an obvious elite exclusive club that people race to become a member of in order to show status. This scenario is in contrast with the current paradigm, where people who are good enough to notice LW's value, often at a glance, are the ones who stay and hang around the most.
Replies from: MakoYass↑ comment by mako yass (MakoYass) · 2024-08-20T08:43:36.204Z · LW(p) · GW(p)
Building trust by somehow confirming that a person understands certain important background knowledge seems valuable to me (some might call this knowledge a "religious story": those stories that inspire a certain social order wherever they're common knowledge), but I haven't ever seen a nice, efficient social process for confirming the presence of knowledge within a community. It always seems very ad hoc. The processes I've seen either demand very uncritical, entry-level understandings of the religious stories, or just randomly misfire sometimes, or are vulnerable to fakers who have no deep or integrated understanding of the stories; or sometimes there will be random holes in people's understandings of the stories that cause problems even when everyone's acting in good faith. Maybe this stuff just inherently requires good old fashioned time and effort and I should stop looking for an easy way through.
comment by Metacelsus · 2023-04-05T02:05:59.384Z · LW(p) · GW(p)
Thanks. It's gotten to the point where I have completely hidden the "AI" tag from my list of latest posts.
Replies from: pktechgirl, Benito↑ comment by Elizabeth (pktechgirl) · 2023-04-05T04:47:28.728Z · LW(p) · GW(p)
Highly recommend this; wish the UI was more discoverable. For people who want this themselves: you can change how posts are weighted for you by clicking "customize feed" to the right of "front page". You'll be shown some default tags, and can also add more. If you hover over a tag you can set it to hidden, reduced (posts with the tag treated as having 50% less karma), promoted (+25 karma), or a custom amount.
It occurs to me I don't know how tags stack on a given post, maybe staff can clarify?
↑ comment by jimrandomh · 2023-04-05T07:55:08.434Z · LW(p) · GW(p)
All of the additive modifiers that apply (e.g. +25 karma, for each tag the post has) are added together and applied first, then all of the multiplicative modifiers (i.e. the "reduced" option) are multiplied together and applied, then time decay (which is multiplicative) is applied last. The function name is filterSettingsToParams.
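A toy version of that ordering (not the real filterSettingsToParams; the decay shape in particular is a guess modeled on HN-style ranking):

import math

def filtered_score(base_karma, additive_mods, multiplicative_mods,
                   age_hours, decay=1.15):
    score = base_karma + sum(additive_mods)   # e.g. +25 per promoted tag
    for m in multiplicative_mods:             # e.g. 0.5 for a "reduced" tag
        score *= m
    return score / math.pow(age_hours + 2, decay)  # time decay applied last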
↑ comment by Raemon · 2023-04-05T07:11:44.221Z · LW(p) · GW(p)
I think @jimrandomh is most likely to know.
↑ comment by Ben Pace (Benito) · 2023-04-05T17:59:42.722Z · LW(p) · GW(p)
I have it at -200.
comment by Review Bot · 2024-04-03T08:18:15.229Z · LW(p) · GW(p)
The LessWrong Review [? · GW] runs every year to select the posts that have most stood the test of time. This post is not yet eligible for review, but will be at the end of 2024. The top fifty or so posts are featured prominently on the site throughout the year.
Hopefully, the review is better than karma at judging enduring value. If we have accurate prediction markets on the review results, maybe we can have better incentives on LessWrong today. Will this post make the top fifty?
comment by Douglas_Reay · 2023-04-12T13:50:29.834Z · LW(p) · GW(p)
I wonder if there would be a use for an online quiz, of the sort that asks 10 questions picked randomly from several hundred possible questions, and which records the time taken to complete the quiz and the number of times that person has started an attempt at it (with uniqueness of person approximated by IP address, email address or, ideally, LessWrong username)?
Not as prescriptive as tracking which sequences someone has read, but perhaps a useful guide (as one factor among many) about the time a user has invested in getting up to date on what's already been written here about rationality?
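A sketch of the mechanics (the question bank and storage are stand-ins; a real version would key attempts on LW username as suggested):

import random
import time

QUESTION_BANK = list(range(300))  # placeholder for several hundred question IDs

def run_quiz(user_id, attempts_log, n_questions=10):
    attempts_log[user_id] = attempts_log.get(user_id, 0) + 1
    questions = random.sample(QUESTION_BANK, n_questions)
    start = time.monotonic()
    # ... present questions and collect answers here ...
    elapsed = time.monotonic() - start
    return questions, elapsed, attempts_log[user_id]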
comment by segfault (caleb-ditchfield) · 2023-04-05T03:24:18.443Z · LW(p) · GW(p)
So, I've been a user for a while now, but I entirely lurk and don't comment. I only recently made an account to vote on things.
...therefore, I have no comment karma. Thoughts on this case? I regularly vote on things (now that I have an account), but since I don't have any karma, I suppose this is my first comment, written partly to earn a single point of karma, but also to raise the case that there are probably many others like me. Maybe make this contingent on account age as well?