[Meta] New moderation tools and moderation guidelines

post by habryka (habryka4) · 2018-02-18T03:22:45.142Z · LW · GW · 74 comments


[I will move this into meta in a few days, but this seemed important enough to have around on the frontpage for a bit]

Here is a short post with some of the moderation changes we are implementing. Ray, Ben, and I are working on some more posts explaining our deeper reasoning, so this is just a list of quick updates.

Even before the start of the open beta, I intended to allow trusted users to moderate their personal pages. The reasoning I outlined in our initial announcement post was as follows:

“We want to give trusted authors moderation powers for the discussions on their own posts, allowing them to foster their own discussion norms, and giving them their own sphere of influence on the discussion platform. We hope this will both make the lives of our top authors better and will also create a form of competition between different cultures and moderation paradigms on Lesswrong.”

I also gave some further perspectives on this in my “Models of Moderation” post from a week ago.

We have now finally gotten around to implementing the technology for this. But the big question on my mind while working on the implementation has been:

How should we handle moderation on frontpage posts?

Ray, Ben, Vaniver, and I talked for quite a while about the pros and cons, and considered a bunch of perspectives, but the two major considerations on our minds were:

  1. The frontpage is a public forum that should reflect the perspectives of the whole community, as opposed to just the views of the active top-level authors.
  2. We want our best authors to feel safe posting to LessWrong, and our promoting a post from your private blog to the frontpage shouldn’t feel like a punishment (which it might if it also entails losing control over it).

After a good amount of internal discussion, as well as feedback from some of the top content contributors on LW (including Eliezer), we settled on allowing users above 2000 karma to moderate their own frontpage posts, and allowing users above 100 karma to moderate their personal blogs. This strikes me as the best compromise between the different considerations we had.
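
(As an illustration only, and not the actual LW2 code: the threshold rule amounts to something like the following sketch, where the User and Post shapes are assumptions made up for the example.)

```typescript
// Illustrative sketch only -- not the actual LW2 implementation.
// The User and Post shapes are assumed for the example.
interface User {
  id: string;
  karma: number;
}

interface Post {
  authorId: string;
  frontpage: boolean; // true if promoted to the frontpage
}

const FRONTPAGE_MODERATION_KARMA = 2000;
const PERSONAL_MODERATION_KARMA = 100;

// Can this user moderate the comments on this post?
function canModerate(user: User, post: Post): boolean {
  if (user.id !== post.authorId) return false; // authors only
  return post.frontpage
    ? user.karma >= FRONTPAGE_MODERATION_KARMA
    : user.karma >= PERSONAL_MODERATION_KARMA;
}
```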

Here are the details about the implementation:

I also want to allow users to create private comments on posts, visible only to themselves and the author of the post, and to allow authors to make comments private (as an alternative to deleting them). But that will have to wait until we get around to implementing it.

We tested this reasonably thoroughly, but there is definitely a chance we missed something, so let us know if you notice any weird behavior around commenting on posts, or using the moderation tools, and we will fix it ASAP.

74 comments

Comments sorted by top scores.

comment by Said Achmiz (SaidAchmiz) · 2018-02-18T04:25:18.351Z · LW(p) · GW(p)

While I certainly have thoughts on all of this, let me point out one aspect of this system which I think is unusually dangerous and detrimental:

The ability (especially for arbitrary users, not just moderators) to take moderation actions that remove content, or prevent certain users from commenting, without leaving a clearly and publicly visible trace.

At the very least (if, say, you’re worried about something like “we don’t want comments sections to be cluttered with ‘post deleted’”), there ought to be a publicly viewable log of all moderation actions. (Consider the lobste.rs moderation log feature as an example of how such a thing might work.) This should apply to removal of comments and threads, and it should definitely also apply to banning a user from commenting on a post / on all of one’s posts.

Let me say again that I consider a moderation log to be the minimally acceptable moderation accountability feature on a site like this—ideally there would also be indicators in-context that a moderation action has taken place. But allowing totally invisible / untraceable moderation actions is a recipe for disaster.
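
To make the shape of such a log concrete, here is a minimal sketch of what one public log entry might record (the field names are my assumptions for illustration, not lobste.rs’s actual schema):

```typescript
// Hypothetical shape of one public moderation-log entry.
// Field names are assumptions for illustration, not a real schema.
type ModerationAction =
  | "delete_comment"
  | "delete_thread"
  | "ban_user_from_post"
  | "ban_user_from_all_posts";

interface ModerationLogEntry {
  timestamp: Date;
  action: ModerationAction;
  moderator: string;   // who acted: the post's author or a site moderator
  targetUser?: string; // the affected commenter, if any
  postId: string;      // where the action took place
  reason?: string;     // optional public explanation
}
```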

Edit: For another example, note Scott’s register of bans/warnings, which is notable for the fact that Slate Star Codex is one guy’s personal blog and explicitly operates on a “Reign of Terror” moderation policy—yet the ban register is maintained, all warnings/bans/etc. are very visibly marked with red text right there in the comment thread which provokes them—and, I think, this greatly contributes to the atmosphere of open-mindedness that SSC is now rightly famous for.

Replies from: Thrasymachus, habryka4, Benito, skybrian, srdjan-miletic, adamzerner
comment by Thrasymachus · 2018-02-18T16:25:36.952Z · LW(p) · GW(p)

I'm also mystified as to why traceless deletion/banning are desirable properties to have on a forum like this. But (with apologies to the moderators) I think consulting the realpolitik will spare us the futile task of litigating these issues on the merits. Consider it instead a fait accompli with the objective of attracting a particular writer LW2 wants by catering to his whims.

For whatever reason, Eliezer Yudkowsky wants the ability to block commenters and to tracelessly delete comments on his own work, and he's been quite clear this is a condition for his participation. Lo and behold, precisely these features have been introduced, with suspiciously convenient karma thresholds which allow EY (at his current karma level) to tracelessly delete/ban on his own promoted posts, yet exclude (as far as I can tell) the great majority of other writers with curated/front-page posts from being able to do the same.

Given the popularity of EY's writing (and LW2 wants to include future work of his), the LW2 team are obliged to weigh up the (likely detrimental) addition of these features versus the likely positives of his future posts. Going for the latter is probably the right judgement call to make, but let's not pretend it is a principled one: we are, as the old saw goes, just haggling over the price.

Replies from: habryka4, Benito
comment by habryka (habryka4) · 2018-02-18T20:28:53.871Z · LW(p) · GW(p)

Yeah, I didn't want to make this a thread about discussing Eliezer's opinion, so I didn't put that front and center, but Eliezer only being happy to crosspost his writing if he has the ability to delete comments was definitely a big consideration.

Here is my rough summary of how this plays into my current perspective on things:

1. Allowing users to moderate their own posts and set their own moderation policies on their personal blogs is something I wanted before we even talked to Eliezer about LW2 the first time.

2. Allowing users to moderate their own front-page posts is not something that Eliezer requested (I think he would be happy with them just being personal posts), but is a natural consequence of wanting to allow users to moderate their own posts, while also not giving up our ability to promote the best content to the front-page and to the curated section.

3. Allowing users to delete things without a trace was a request by Eliezer, but is also something I had thought about independently to deal with stuff like spam and repeated offenders (for example, Eugine has created over 100 comments on one of Ozy's posts, and you don't want all of them to show up as deleted stubs). I expect we wouldn't have built the feature as it currently stands without Eliezer, but I hadn't actually considered a moderation-log page like the one Said pointed out. I actually quite like that idea, and don't expect Eliezer to object too much to it, so that might be a solution that makes everyone reasonably happy.

comment by Ben Pace (Benito) · 2018-02-18T20:59:33.364Z · LW(p) · GW(p)
I think consulting the realpolitik will spare us the futile task of litigating these issues on the merits. Consider it instead a fait accompli with the objective to attract a particular writer LW2 wants by catering to his whims.

As usual, Greg, I will always come to you first if I ever need to deliver a well-articulated sick burn that my victim needs to read twice before they can understand ;-)

Edit: Added a smiley to clarify this was meant as a joke.

Replies from: Thrasymachus
comment by Thrasymachus · 2018-02-18T23:45:03.991Z · LW(p) · GW(p)

Let's focus on the substance, please.

comment by habryka (habryka4) · 2018-02-18T20:31:35.544Z · LW(p) · GW(p)

I actually quite like the idea of a moderation log, and Ben and Ray also seem to like it. I hadn't really considered that as an option, and my model is that Eliezer and other authors wouldn't object to it either, so this seems like something I would be quite open to implementing.

Replies from: jimrandomh
comment by jimrandomh · 2018-02-25T06:11:36.292Z · LW(p) · GW(p)

I actually think some hesitation and thought is warranted on that particular feature. A naively-implemented auto-filled moderation log can significantly tighten the feedback loop for bad actors trying to evade bans. Maybe if there were a time delay, so moderation actions only become visible when they're a minimum number of days old?

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-02-26T20:03:03.714Z · LW(p) · GW(p)

There is some sense in what you say, but… before we allow concerns like this to guide design decisions, it would be very good to do some reasonably thorough investigating about whether other platforms that implement moderation logs have this problem. (The admin and moderators of lobste.rs, for example, hang out in the #lobsters IRC channel on Freenode. Why not ask them if they have found the moderation log to result in a significant ban evasion issue?)

comment by Ben Pace (Benito) · 2018-02-18T20:59:58.765Z · LW(p) · GW(p)

I really like the moderation log idea - I think it could be really good for people to have a place where they can go if they want to learn what the norms are empirically. I also propose there be a similar place which stores the comments explaining why posts are curated.

(Also note that Satvik Beri said to me I should do this a few months ago and I forgot and this is my fault.)

comment by skybrian · 2018-02-18T05:30:30.162Z · LW(p) · GW(p)

I'm just a lurker, but as an FYI, on The Well, hidden comments were marked <hidden> (and clickable) and deleted comments were marked <scribbled> and it seemed to work out fine. I suppose with more noise, this could be collapsed to one line: <5 scribbled>.

comment by Srdjan Miletic (srdjan-miletic) · 2018-02-18T12:47:31.780Z · LW(p) · GW(p)

I agree. There are a few fairly simple ways to implement this kind of transparency.

  • When a comment is deleted, replace its text with [deleted] and remove any content (see the sketch below). This at least shows when censorship is happening and roughly how much.
  • When a comment is deleted, do as above, but give users the option to show it by clicking on a "show comment" button or something similar.
  • Have a "show deleted comments" button on users' profile pages. Users who want to avoid seeing the kind of content that is typically censored can do so. Those who would prefer to see everything can just enable the option and see all comments.

I think these features would add at least some transparency to comment moderation. I'm still unsure how to make user bans transparent. I'm worried that without doing so, bad admins can just ban users they dislike and give the impression of a balanced discussion with little censorship.
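
A rough sketch of the first option, with a made-up Comment shape purely for illustration:

```typescript
// Sketch of option 1: render a visible stub rather than removing the
// comment entirely. The Comment shape is assumed for illustration.
interface Comment {
  author: string;
  body: string;
  deleted: boolean;
}

function renderComment(comment: Comment): string {
  if (comment.deleted) {
    // The censorship stays visible even though the content is gone.
    return "[deleted]";
  }
  return `${comment.author}: ${comment.body}`;
}
```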

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-02-18T18:58:31.148Z · LW(p) · GW(p)

User bans can be made transparent via the sort of centralized moderation log I described in my other comment [LW(p) · GW(p)]. (For users banned by individual users, from their own personal blogs, there should probably also be a specific list, on the user page of the one who did the banning, of everyone they’ve banned from their posts.)

Replies from: srdjan-miletic
comment by Srdjan Miletic (srdjan-miletic) · 2018-02-18T19:46:35.771Z · LW(p) · GW(p)

A central log would indeed allow anyone to see who was banned and when. My concern is more that such a solution would be practically ineffective. I think that most people reading an article aren't likely to navigate to the central log and search the ban list to see how many people have been banned by said article's author. I'd like to see a system for flagging up bans which is both transparent and easy to access, ideally so anyone reading the page/discussion will notice if banning is taking place and to what extent. Sadly, I haven't been able to think of a good solution which does that.

Replies from: habryka4, SaidAchmiz
comment by habryka (habryka4) · 2018-02-18T20:19:48.766Z · LW(p) · GW(p)

Yeah, I agree it doesn't create the ideal level of transparency. In my mind, a moderation log is more of an accounting solution than an educational one: the purpose of accounting is not to constantly broadcast information to the whole system, but to make it possible to backtrack if something has gone wrong, or if people are suspicious that there is some underlying systematic problem. That might get you a lot of the value you want, at a significantly lower UI-complexity cost.

comment by Said Achmiz (SaidAchmiz) · 2018-02-18T21:06:13.574Z · LW(p) · GW(p)

I believe it was Eliezer who (perhaps somewhere in the Sequences) enjoined us to consider a problem for at least five minutes, by the clock, before judging it to be unsolvable—and I have found that this applies in full measure in UX design.

Consider the following potential solutions (understanding them to be the products of a brainstorm only, not a full and rigorous design cycle):

  1. A button (or other UI element, etc.) on every post, along the lines of “view history of moderation actions which apply to this post”.
  2. A flag, attached to posts where moderation has occurred; which, when clicked, would take you to the central moderation log (or the user-specific one), and highlight all entries that apply to the referring post.
  3. The same as #2, but with the flag coming in two “flavors”—one for “the OP has taken moderation actions”, and one for “the LW2 admin team has taken moderation actions”.

This is what I was able to come up with in five minutes of considering the problem. These solutions all seem to me to be quite unobtrusive, and yet at the same time, “transparent and easy to access”, as per your criteria. I also do not see any fundamental design or implementation difficulties that attach to them.

No doubt other approaches are possible; but at the very least, the problem seems eminently solvable, with a bit of effort.
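
(To illustrate how little machinery solution #2 requires, here is a sketch with assumed types, not a spec:)

```typescript
// Sketch of solution #2: a per-post flag linking to the central
// moderation log, pre-filtered to that post's entries.
// ModerationLogEntry is an assumed type, for illustration only.
interface ModerationLogEntry {
  postId: string;
  action: string;
  moderator: string;
}

function entriesForPost(
  log: ModerationLogEntry[],
  postId: string
): ModerationLogEntry[] {
  return log.filter((entry) => entry.postId === postId);
}

// The flag appears only on posts where moderation has occurred.
function showModerationFlag(
  log: ModerationLogEntry[],
  postId: string
): boolean {
  return entriesForPost(log, postId).length > 0;
}
```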

comment by Adam Zerner (adamzerner) · 2018-02-21T18:09:31.359Z · LW(p) · GW(p)

Why exactly do you find it to be unusually dangerous and detrimental? The answer may seem obvious, but I think that it would be valuable to be explicit.

comment by Adam Zerner (adamzerner) · 2018-02-21T17:53:49.186Z · LW(p) · GW(p)
  1. I love the idea of having private comments on posts. Sometimes I want to point out nitpicky things like grammatical errors or how something could have been phrased differently. But I don't want to "take up space" with an inconsequential comment like that, and don't want to appear nitpicky. Private comments would solve those sorts of problems. Another alternative feature might be different comment sections for a given post. Like, a "nitpicks" section, a "stupid questions" section, a "thanks" section.
  2. I have the impression that, as Said Achmiz already noted, people would feel much less averse to this policy if deleting a comment required an explanation of why. I feel like there's something particularly frustrating about having a comment of yours just deleted out of thin air without any explanation. Feels very Big Brother-y.
  3. One thing that I like about this is that regardless of whether or not it works, it's an experiment. You can't improve without trying new things. I generally applaud efforts to experiment. It makes me feel excited about the future of Less Wrong. "What cool features will we end up stumbling upon over the next 12 months?"
  4. I personally don't see that there is much of a need for this comment moderation. On the rest of the internet, there are tons of trolls and idiots. But here I feel like very, very few comments are so bad that someone would want to delete them. And in the few cases where they are that bad, they get downvoted heavily and appear minimized so as to be unintrusive. I think you guys do a great job with product development and are all really smart, so I fear that I'm being too uncharitable in asking this, but how much user research was done before spending time developing this feature? One exercise that I think would be useful is to go through some sample of comments and judge how many of them are delete-worthy. If that percentage is under some number (e.g. 1%), perhaps the feature isn't needed, or is at least worth deprioritizing. Very few comments seem to be downvoted to less than, say, -3, which makes me think that the result of the experiment would show that very few comments are delete-worthy.
comment by Kaj_Sotala · 2018-02-18T19:24:04.017Z · LW(p) · GW(p)

Dividing the site into smaller sub-fiefs where individual users have ultimate moderation power seems to have been a big part of why Reddit (and to some extent, Facebook) got so successful, so I have high hopes for this model.

comment by lionhearted (Sebastian Marshall) (lionhearted) · 2018-02-20T13:55:12.989Z · LW(p) · GW(p)

I don't have an opinion on the moderation policy, but I did want to say thanks for all the hard work in bringing the new site to life.

LessWrong 1.0 was basically dead, and 2.0 is very much alive. Huge respect and well-wishes.

Replies from: habryka4
comment by habryka (habryka4) · 2018-02-20T17:54:00.994Z · LW(p) · GW(p)

Thank you! :)

comment by alkjash · 2018-02-18T21:14:14.755Z · LW(p) · GW(p)

Just want to say this moderation design addresses pretty much my only remaining aversion to posting on LW and I will be playing around with Reign of Terror if I hit the karma. Also really prefer not to leave public traces.

Replies from: PDV
comment by PDV · 2018-02-21T02:07:23.232Z · LW(p) · GW(p)

If you don't want to leave public traces, others must assume that we wouldn't like what we saw if the traces were public.

Replies from: alkjash
comment by alkjash · 2018-02-21T03:24:47.501Z · LW(p) · GW(p)

No, others could be a bit more charitable than that. Looking back at the very few comments I would have considered deleting, I would use it exclusively to remove low-effort comments that could reasonably be interpreted as efforts to derail the conversation into demon threads.

Replies from: SaidAchmiz, PDV
comment by Said Achmiz (SaidAchmiz) · 2018-02-21T04:10:09.564Z · LW(p) · GW(p)

Consider the possible reasons why you, as the OP, would not want a comment to appear in the comments section of your post. These fall, I think, into two broad categories:

Category 1: Comments that are undesirable because having other people respond to them is undesirable.

Category 2: Comments that are undesirable because having people read them is undesirable (regardless of whether anyone responds to them).

Category 1 (henceforth, C1) includes things like what you just described. Trolling (comments designed to make people angry or upset and thus provoke responses), off-topic comments (which divert attention and effort to threads that have nothing to do with what you want to discuss), low-effort or malicious or intentionally controversial or etc. comments that are likely to spawn “demon threads”, pedantry, nitpicking, nerdsniping, and similar things, all fall into this category as well.

Category 2 (henceforth, C2) is quite different. There, the problem is not the fact that the comment provokes responses (although it certainly might); the problem is that the actual content of the comment is something which you prefer people not to see. This can include everything from doxxing to descriptions of graphic violence to the most banal sorts of spam (CHEAP SOCKS VERY GOOD PRICE) to things which are outright illegal (links to distribution of protected IP, explicit incitement to violence, etc.).

And, importantly, C2 also includes things like criticism of your ideas (or of you!), comments which mention things about you that paint you in a bad light (such as information about conflicts of interest), and any number of similar things.

It should be clear from this description that Category 2 cleaves neatly into two subtypes (let’s call them C2a and C2b), the key distinction between which is this: for comments in C2a, you (the OP) do not want people to read them, and readers themselves also do not want to read them; your interests and those of your readers are aligned. But for comments in C2b, you—even more so than for C2a!—don’t want people to read them… but readers may (indeed, almost certainly do) feel very differently; your interests and those of your readers are at odds.

It seems clear to me that these three types of comments require three different approaches to handling them.

For comments in C1 (those which are undesirable because it’s undesirable for people to respond to them), it does not seem necessary to delete them at all! In fact, they need not even be hidden; simply disable responses to the comment (placing an appropriate flag or moderator note on it; and, ideally, an explanation). I believe LW2 already includes this capability.

Comments in C2a (those which are undesirable because you do not want people to read them, and readers also have no desire to read them) clearly need to be hidden, at the least; by construction, anything less fails to solve the problem. Should they be entirely deleted, however? Well, read on.

Comments in C2b (those which are undesirable to you because you prefer that people not see them, but which may be quite desirable indeed to your readers)… well, this is the crux of the matter. It’s a very dubious proposition, to say that such comments are a problem in the first place. Indeed, I’d claim the opposite is true. Of course, you (the OP) might very much like to delete them without a trace—if you are dishonest, and lacking in integrity! But your readers don’t want you to be able to delete them tracelessly; and it seems obvious to me that the admins of any forum which aims to foster honest seeking after truth, should be on the readers’ side in such cases.

Now let’s go back to comments of type C2a. Should they be entirely deleted, without a trace? No, and here’s why: if you delete a comment, then that is evidence for that comment having been in C2b; certainly it casts a shadow of suspicion on whoever deleted it. Is that really what you want? Is an atmosphere of mistrust, of uncertainty, really what we want to foster? It seems a wholly undesirable side effect of merely wanting to protect your readers from things that they themselves don’t wish to see! Much better simply to hide the comments (in some suitably unobtrusive way—I won’t enumerate possible implementations here, but they are legion). That way, anyone who wishes to assure themselves of your integrity can easily do so, while at the same time, saving readers from having to view spam and other junk text.

Replies from: alkjash
comment by alkjash · 2018-02-21T05:29:22.366Z · LW(p) · GW(p)

My primary desire to remove the trace is that there are characters so undesirable on the internet that I don't want to be reminded of their existence every time I scroll through my comments section, and I certainly don't want their names to be associated with my content. Thankfully, I have yet to receive any comments anywhere close to this level on LW, but give a quick browse through the bans section of SlateStarCodex and you'll see they exist.

I am in favor of a trace if it were on a moderation log that does not show up on the comment thread itself.

Replies from: Gurkenglas
comment by Gurkenglas · 2018-02-26T09:15:15.653Z · LW(p) · GW(p)

Wouldn't someone just make a client or mirror like greaterwrong that uses the moderation log to unhide the moderation?

Replies from: SaidAchmiz, gjm
comment by Said Achmiz (SaidAchmiz) · 2018-02-26T19:56:50.060Z · LW(p) · GW(p)

This is a valid concern, one I would definitely like to respond to. I obviously can’t speak for anyone else who might develop another third-party client for LW2, but as far as GreaterWrong goes—saturn and I have discussed this issue. We don’t feel that it would be our place to do what you describe, as it would violate the LW2 team’s prerogative to make decisions on how to set up and run the community. We’re not trying to undermine them; we’re providing something that (hopefully) helps them, and everyone who uses LW2, by giving members of the community more options for how to interact with it. So you shouldn’t expect to see GW add features like what you describe (i.e. those that would effectively undo the moderation actions of the LW2 team, for any users of GW).

comment by gjm · 2018-02-26T11:20:37.697Z · LW(p) · GW(p)

They might. But that would unhide it only for them. For most undesirable comments, the point of deleting them is to keep them out of everyone's face, and that's perfectly compatible with there being other ways of viewing the content on LW that reinstate the comments.

What fraction of users who want the ability to delete comments without trace would be satisfied with that, I don't know.

(A moderation log wouldn't necessarily contain the full text of deleted comments, anyway, so restoring them might not be possible.)

Replies from: habryka4
comment by habryka (habryka4) · 2018-02-26T19:14:16.057Z · LW(p) · GW(p)

Yeah, I wasn’t thinking of showing the full text of deleted comments, but just a log of their deletion. This is also how lobste.rs does it.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-02-26T19:47:30.098Z · LW(p) · GW(p)

You’re right about lobste.rs, but in this case I would strongly suggest that you do show the full text of deleted comments in the moderation log. Hide them behind a disclosure widget if you like. But it is tremendously valuable, for transparency purposes, to have the data be available. It is a technically insignificant change, and it serves all the same purposes (the offending comment need not appear in the thread; it need not even appear by default in the log—hence the disclosure widget); but what you gain is very nearly absolute immunity to accusations of malfeasance, to suspicion-mongering, and to all the related sorts of things that can be so corrosive to an internet community.

Replies from: habryka4
comment by habryka (habryka4) · 2018-02-26T20:04:09.108Z · LW(p) · GW(p)

Hmm, so the big thing I am worried about is the Streisand effect, with deleted content ending up getting more attention than normal content (which I expect is the primary reason why lobste.rs does not show the original content).

Sometimes you also delete things because they reveal information that should not be public (such as doxing and similar things) and in those situations we obviously still want the option of deleting it without showing the original content.

This might be solvable by making the content of the deleted comments available only to people who have an account, or only to those above a certain level of karma, or by making it hard to link to individual entries in the moderation log (though that seems like it destroys a bunch of the purpose of the moderation log).

Currently, I would feel uncomfortable having the content of the old comments be easily available, simply because I expect that people will inevitably start paying more attention to the deleted content section than the average comment with 0 karma, completely defeating the purpose of reducing the amount of attention and influence bad content has.

The world where everyone can see the moderation log, but only people above a certain karma threshold can see the content seems most reasonable to me, though I still need to think about it. If the karma threshold is something like 100, then this would drastically increase the number of people who could provide information about the type of content that was deleted, while avoiding the problem of deleted contents getting tons of attention.
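
A minimal sketch of that rule (the data shapes are assumptions for illustration):

```typescript
// Sketch: everyone can see that a deletion happened; only users above
// a karma threshold can see what was deleted. Shapes are assumed.
const CONTENT_VISIBILITY_KARMA = 100;

interface Viewer {
  karma: number;
}

interface DeletedEntry {
  action: string;      // visible to everyone
  deletedBody: string; // gated behind the karma threshold
}

function visibleEntry(entry: DeletedEntry, viewer: Viewer) {
  return {
    action: entry.action,
    deletedBody:
      viewer.karma >= CONTENT_VISIBILITY_KARMA ? entry.deletedBody : null,
  };
}
```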

Replies from: SaidAchmiz, Gurkenglas
comment by Said Achmiz (SaidAchmiz) · 2018-02-26T20:39:52.715Z · LW(p) · GW(p)

Hmm, so the big thing I am worried about is the Streisand effect, with deleted content ending up getting more attention than normal content (which I expect is the primary reason why lobste.rs does not show the original content).

This view seems to imply some deeply worrying things about what comments you expect to see deleted—and that you endorse being deleted! Consider again my taxonomy of comments that someone might want gone. What you say applies, it seems to me, either to comments of type C1 (comments whose chief vice is that they provoke responses, but have little or no intrinsic value), or to comments of type C2b (criticism of the OP, disagreement, relevant but embarrassing-to-the-author observations, etc.).

The former sort of comment is unlikely to provoke a response if it is in the moderation log and not in the thread. No one will go and dig a piece of pedantry or nitpickery out of the mod-log just to respond to it. Clearly, such comments will not be problematic.

But the latter sort of comment… the latter sort of comment is exactly the type of comment which it should be shameful to delete; the deletion of which reflects poorly on an author; and to whose deletion, attention absolutely should be paid! It is right and proper that such comments, if removed, should attract even more attention than if they remain unmolested. Indeed, if the Streisand effect occurs in such a case, then the moderation log is doing precisely that which it is meant to do.

Sometimes you also delete things because they reveal information that should not be public (such as doxing and similar things) and in those situations we obviously still want the option of deleting it without showing the original content.

This category of comment ought not meaningfully inform your overall design of the moderation log feature, as there is a simple way to deal with such cases that doesn’t affect anything else:

Treat it like any other deleted comment, but instead of showing the text of the comment in the mod-log, display a message (styled and labeled so as to clearly indicate its nature—perhaps in bold red, etc.) to the effect of “The text of this comment has been removed, as it contained non-public information / doxxing / etc.”. (If you were inclined to go above and beyond in your dedication to transparency, you might even censor only part of the offending comment—after all, this approach is good enough for our government’s intelligence organizations… surely it’s good enough for a public discussion forum? ;) )

The world where everyone can see the moderation log, but only people above a certain karma threshold can see the content seems most reasonable to me, though I still need to think about it. If the karma threshold is something like 100, then this would drastically increase the number of people who could provide information about the type of content that was deleted, while avoiding the problem of deleted contents getting tons of attention.

This is certainly not the worst solution in the world. If this is the price to be paid for having the text of comments be visible, then I endorse this approach (though of course it is still an unfortunate barrier, for the reasons I outline above).

comment by Gurkenglas · 2018-02-27T15:01:18.166Z · LW(p) · GW(p)

Whoever provides a mirror would only need the cooperation of some user with 100 karma to circumvent that restriction. Unless you log which users viewed which deleted posts, and track which deleted posts have been published. Then the mirror might become a trading hub where you provide content from deleted posts in exchange for finding out content from other deleted posts. And at some point money might enter into it, incentivizing karma farms.

comment by PDV · 2018-02-21T03:58:08.802Z · LW(p) · GW(p)

Others could, if they are unwise. But they should not. There is no shame in deleting low-effort comments and so no reason to hide the traces of doing so. There is shame in deleting comments for less prosocial reasons, and therefore a reason to hide the traces.

The fact that you desire to hide the traces is evidence that the traces being hidden are of the type it is shameful to create.

Replies from: alkjash
comment by alkjash · 2018-02-21T05:39:29.858Z · LW(p) · GW(p)

I agree that desiring to hide traces is evidence of such a desire, but it's simply not my motivation:

The primary reasons I want comments at all are (a) to get valuable corrective feedback and discussion, and (b) as motivation and positive reinforcement to continue writing frequently. There are comments that provide negligible-to-negative amounts of (a), and even leaving a trace of them stands a serious chance of fucking with (b) when I scroll past in the future. These I would like to delete without trace.

Now I would like to have a discussion about whether a negative reaction to seeing even traces of the comments of trolls is a rational aversion to have, but I know I currently have it and would guess that most other writers do as well.

Replies from: Gurkenglas, PDV, SaidAchmiz
comment by Gurkenglas · 2018-02-27T15:04:49.132Z · LW(p) · GW(p)

Can't you just use AdBlock to hide such comments from your browser?

comment by PDV · 2018-02-21T16:45:58.739Z · LW(p) · GW(p)

I agree that desiring to hide traces is evidence of such a desire, but it's simply not my motivation

Irrelevant. Stated motivation is cheap talk, not reliable introspectively, let alone coming from someone else.

Or, in more detail:

1) Unchecked, this capability being misused will create echo chambers.

2) There is a social incentive to misuse it; lack of dissent increases perceived legitimacy and thus status.

3) Where social incentives to do a thing for personal benefit exist, basic social instincts push people to do that thing for personal benefit.

4) These instincts operate at a level below and before conscious verbalization.

5) The mind's justifier will, if feasible, throw up more palatable reasons why you are taking the action.

6) So even if you believe yourself to be using an action for good reasons, if there is a social incentive to be misusing it, you are very likely misusing it a significant fraction of the time.

7) Even doing this a fraction of the time will create an echo chamber.

8) For good group epistemics, preventing the descent into echo chambers is of utmost importance.

9) Therefore no given reason can be an acceptable reason.

10) Therefore this capability should not exist.

comment by Said Achmiz (SaidAchmiz) · 2018-02-21T06:01:46.001Z · LW(p) · GW(p)

I think you are seriously missing the point of the concerns that PDV is (and that I am) raising, if you respond by saying “but I don’t plan to use traceless deletion for the bad reason you fear!”.

Do I really need to enumerate the reasons why this is so? I mean, I will if asked, but every time I see this sort of really very frustrating naïveté, I get a bit more pessimistic…

Replies from: Raemon
comment by Raemon · 2018-02-21T06:55:27.291Z · LW(p) · GW(p)

This seems to be missing the point of Alkjash's comment, though. I don't think Alkjash is missing the concerns you and PDV have.

PDV said "others can only assume that we wouldn't like what we saw if the traces were public." This sounded to me like PDV could only imagine one reason why someone might delete a comment with no trace. Alkjash provided another possible reason. (FYI, I can list more).

(If PDV was saying ‘it’s strategically advisable to assume the worst reason’, that’s... plausible, and would lead me to respond differently.)

FYI, I agree with most of your suggested solutions, but think you’re only looking at one set of costs and ignoring others.

Replies from: clone of saturn, PDV, SaidAchmiz
comment by clone of saturn · 2018-02-21T09:14:23.422Z · LW(p) · GW(p)

(If PDV was saying ‘it’s strategically advisable to assume the worst reason’, that’s… plausible, and would lead me to respond differently.)

Making it easier to get away with bad behavior is bad in itself, because it reduces trust and increases the bad behavior's payoff, even if no bad behavior was occurring before. It's also corrosive to any norm that exists against the bad behavior, because "everyone's getting away with this except me" becomes a plausible hypothesis whether or not anyone actually is.

I interpret PDV's comments as an attempt to implicitly call attention to these problems, but I think explicitly spelling them out would be more likely to be well-received on this particular forum.

comment by PDV · 2018-02-21T16:48:15.875Z · LW(p) · GW(p)

It is strategically necessary to assume that social incentives are the true reason, because social incentives disguise themselves as any acceptable reason, and the corrosive effect of social incentives is the Hamming Problem for group epistemics. (I went into more detail here.)

comment by Said Achmiz (SaidAchmiz) · 2018-02-21T07:44:02.164Z · LW(p) · GW(p)

I don’t think Alkjash is missing the concerns you and PDV have.

Then his comments are simply non-responsive to what I and PDV have said, and make little to no sense as replies to either of our comments. I assumed (as I usually do) compliance with the maxim of relation.

FYI I agree with most of your suggestion solutions, but think you’re only look at one set of costs and ignoring others.

Indeed I am, and for good reason: the cost I speak of is one which utterly dwarfs all others.

PDV said “others can only assume that we wouldn’t like what we saw if the traces were public.” This sounded to me like PDV could only imagine one reason why someone might delete a comment with no trace. Alkjash provided another possible reason. (FYI, I can list more).

I think here I’m going to say “plausible deniability” and “appearance of impropriety” and hope that those keywords get my point across. If not, then I’m afraid I’ll have to bow out of this for now.

Replies from: dxu
comment by dxu · 2018-02-21T09:29:28.093Z · LW(p) · GW(p)
Indeed I am, and for good reason: the cost I speak of is one which utterly dwarfs all others.

This is a claim that requires justification, not bald assertion--especially in this kind of thread, where you are essentially implying that anyone who disagrees with you must be either stupid or malicious. Needless to say, this implication is not likely to make the conversation go anywhere positive. (In fact, this is a prime example of a comment that I might delete were it to show up on my personal blog--not because of its content, but because of the way in which that content is presented.)

Issues with tone aside, the quoted statement strongly suggests to me that you have not made a genuine effort to consider the other side of the argument. Not to sound rude, but I suspect that if you were to attempt an Ideological Turing Test of alkjash's position, you would not in fact succeed at producing a response indistinguishable from the genuine article. In all charitability, this is likely due to differences of internal experience; I'm given to understand that some people are extremely sensitive to status-y language, while others seem blind to it entirely, and it seems likely to me (based on what I've seen of your posts) that you fall into the latter category. In no way does this obviate the existence or the needs of the former category, however, and I find your claim that said needs are "dwarfed" by the concerns most salient to you extremely irritating.

Footnote: Since feeling irritation is obviously not a good sign, I debated with myself for a while about whether to post this comment. I decided ultimately to do so, but I probably won't be engaging further in this thread, so as to minimize the likelihood of it devolving into a demon thread. (It's possible that it's already too late, however.)

comment by alkjash · 2018-02-21T16:50:46.796Z · LW(p) · GW(p)

Here's a hypothesis for the crux of the disagreement in this comments section:

There's a minor identity crisis about whether LW is/should primarily be a community blog or a public forum.

If it is to be a community blog, then the focus is in the posts section, and the purpose of moderation should be to attract all the rationality bloggers to post their content in one place.

If it is to be a public forum/reddit (I was surprised at people referring to it as such), then the focus is in the comments section, and the main purpose of moderation should be to protect all viewpoints and keep a bare minimum of civility in a neutral and open discussion.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-02-21T17:23:26.384Z · LW(p) · GW(p)

No, I don’t think that’s the crux. In fact, I’ll go further and say that believing these two things are somehow distinct is precisely what I disagree with.

Ever read the sequences? Probably you have. Now go back through those posts, and count how many times Eliezer is responding to something a commenter said, arguing with a commenter, using a commenter’s argument as an example, riffing on a commenter’s objection… and then go back and read the comments themselves, and see how many of them are full of critical insight. (Robin Hanson’s comments alone are a gold mine! And he’s only the first of many.)

Attracting “rationality bloggers” is not just useless, but actively detrimental, if the result is that people come here to post “rationality content” which is of increasingly questionable value and quality—because it goes unchallenged, unexamined, undiscussed. “Rationality content” which cannot stand up to (civil, but incisive) scrutiny is not worthy of the name!

LW should be a community blog and a public forum, and if our purpose is the advancement of “rationality” in any meaningful sense whatsoever, then these two identities are not only not in conflict—they are inseparable.

Replies from: habryka4
comment by habryka (habryka4) · 2018-02-21T18:18:02.849Z · LW(p) · GW(p)

While it seems clearly correct to me that all content should have a space to be publicly discussed at some point, it is not at all clear to me that all of that needs to happen simultaneously.

If you create an environment where people feel uncomfortable posting their bad ideas and initial guesses on topics, for fear of being torn to shreds by critical commenters, then you simply won’t see that content on this site. And often this means those people will not post that content anywhere, or will post it privately on Facebook, and then a critical step in the idea pipeline will be missing from this community.

Most importantly, the person you are using as the central example here, namely Eliezer, has always deleted comments and banned people, and was only comfortable posting his content in a place where he had control over the discussion. The amazing comment sections you are referring to are not the result of a policy of open discussion, but of a highly moderated space in which unproductive contributions got moderated and deleted.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-02-21T19:29:58.382Z · LW(p) · GW(p)

If you create an environment where people feel uncomfortable posting their bad ideas and initial guesses on topics, for fear of being torn to shreds by critical commenters, then you simply won’t see that content on this site. And often this means those people will not post that content anywhere, or will post it privately on Facebook, and then a critical step in the idea pipeline will be missing from this community.

… good?

I… am very confused, here. Why do you think this is bad? Do you want to incentivize people to post bad ideas? Why do you want to see that content here?

What makes this “step in the idea pipeline”—the one that consists of discussing bad ideas without criticism—a “critical” one? Maybe we’re operating under some very different assumptions here, so I would love it if you could elaborate on this.

Most importantly, the person you are using as the central example here, namely Eliezer, has always deleted comments and banned people, and was only comfortable posting his content in a place where he had control over the discussion. The amazing comment sections you are referring to are not the result of a policy of open discussion, but of a highly moderated space in which unproductive contributions got moderated and deleted.

This is only true under a very, very different (i.e., much more lax) standard of what qualifies as “unproductive discussion”—so different as to constitute an entirely other sort of regime. Calling Sequence-era OB/LW “highly moderated” seems to me like a serious misuse of the term. I invite you to go back to many of the posts of 2007-2009 and look for yourself.

Replies from: Gurkenglas, habryka4
comment by Gurkenglas · 2018-02-27T15:21:29.671Z · LW(p) · GW(p)
This is only true under a very, very different (i.e., much more lax) standard of what qualifies as “unproductive discussion”—so different as to constitute an entirely other sort of regime.

Weren't you objecting to the poster tracelessly moderating at all, rather than the standard they intended to enforce? Surely present-you would object to a reinstatement of OB as it was?

comment by habryka (habryka4) · 2018-02-21T19:38:26.217Z · LW(p) · GW(p)

People being able to explore ideas strikes me as a key part of making intellectual progress. This involves discussing bad arguments and ideas, and involves discussing people’s initial hunches about various things that might or might not turn out to be based in reality, or point to good arguments.

I might continue this discussion at some later point in time, but am tapping out for at least today, since I need to deal with a bunch of deadlines. I also notice that I am pretty irritated, which is not a good starting point for a productive discussion.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2018-02-21T20:15:56.128Z · LW(p) · GW(p)

Fair enough. And thanks for the elaboration—I have further thoughts, of course, but we can certainly table this for now.

comment by Davis_Kingsley · 2018-02-18T05:12:41.656Z · LW(p) · GW(p)

I quite dislike the idea of people being able to moderate their content in this fashion - that just isn't what a public discussion is in my view - but thanks for being transparent about this change.

Replies from: habryka4
comment by habryka (habryka4) · 2018-02-18T20:27:08.075Z · LW(p) · GW(p)

Yeah, I agree that there is an important distinction between a public discussion that you know isn't censored in any way, and one that intentionally limits what can be said.

I would be worried about a world where the majority of frontpage posts was non-public in the sense you said, but do think that the marginal non-fully-public conversation doesn't really cause much damage, as long as it's easy to create a public conversation in another thread that isn't limited in the same way.

I do think it's very important for users to see whether a post is moderated in any specific way, which is why I tried to make the moderation guidelines at the top of the comment thread pretty noticeable.

comment by Elizabeth (pktechgirl) · 2018-02-21T18:30:13.342Z · LW(p) · GW(p)

What was the logic behind having a karma threshold for moderation? What were you afraid would happen if low karma people could moderate, especially on their personal blog?

Replies from: habryka4
comment by habryka (habryka4) · 2018-02-21T18:48:09.371Z · LW(p) · GW(p)

The karma threshold for personal blogs is mostly just to avoid bad first interactions for posters and commenters. If you create a post that is super incendiary, and then you go on and delete all comments that disagree with you on it, then we would probably revoke your moderation privileges, or have to ban you or something like that, or delete the posts, which seems like a shitty experience for everyone. And similarly as a commenter, it’s a pretty shitty experience to have your comment deleted. And if you have someone who doesn’t have any experience with the community and who just randomly showed up from the internet, then either of these seems pretty likely to happen, and it seemed better to me to avoid them from the start by requiring a basic level of trust before we got out the moderation tools.

Replies from: pktechgirl
comment by Elizabeth (pktechgirl) · 2018-02-21T23:21:14.075Z · LW(p) · GW(p)

That makes sense. Why such a high threshold for front page posts?

Replies from: habryka4
comment by habryka (habryka4) · 2018-02-22T21:16:40.481Z · LW(p) · GW(p)

Allowing someone to moderate their own frontpage posts is similar to making them a site-wide moderator. They can now moderate a bunch of public discussion that is addressed to the whole community. That requires a large amount of trust, and so a high karma threshold seemed appropriate.

comment by ryan_b · 2018-02-18T20:19:02.152Z · LW(p) · GW(p)

Does allowing users to moderate mean the moderation team of the website will not also be moderating those posts? If so, that seems to have two implications: one, this eases the workload of the moderation team; two, this puts a lot more responsibility on the shoulders of those contributors.

Replies from: habryka4
comment by habryka (habryka4) · 2018-02-18T20:23:20.175Z · LW(p) · GW(p)

Ah, sorry, looks like I forgot to mention that in the post above. There is a checkbox you can check on your profile that says "I'm happy for LW site moderators to help enforce my policy", which then makes it so that the sitewide moderators will try to help with your moderation.

We will also continue enforcing the frontpage guidelines on all frontpage posts, in addition to whatever guidelines the author has set up.

Replies from: ryan_b
comment by ryan_b · 2018-02-19T18:11:27.868Z · LW(p) · GW(p)

No worries, thank you for the clarification.

I would like to state plainly that I am in favor of measures taken to mitigate the workload of the moderation team: I would greatly prefer shouldering some of the burden myself and dealing with known-to-be-different moderation policies from some contributors in exchange for consistent, quality moderation of the rest of the website.

comment by Chris_Leong · 2018-02-18T04:13:13.873Z · LW(p) · GW(p)

I'm still somewhat uncomfortable with authors being able to moderate front-page comments, but I suppose it could be an interesting experiment to see if they use this power responsibly or if it gets abused.

I think that there should also be an option to collapse comments (as per Reddit), instead of actually deleting them. I would suggest that very few comments are actually so bad that they need to be deleted; most of the time it's simply a matter of reducing the incentive to incite controversy in order to get more people replying to your comment.

Anyway, I'm really hoping that it encourages some of the old guard to post more of their content on Less Wrong.

Replies from: PDV
comment by PDV · 2018-02-21T01:55:08.481Z · LW(p) · GW(p)

I don't think it's an interesting experiment. The outcome is obvious: it will be abused to silence competing points of view.

comment by ChristianKl · 2018-02-20T20:13:05.416Z · LW(p) · GW(p)
If a comment of yours is ever deleted, you will automatically receive a PM with the text of your comment, so you don’t lose the content of your comment.

My intuition is that it would be better to allow users to see their own deleted posts in a grayed-out way, instead of sending a PM.

If there's a troll, sending them a PM that one of their posts got deleted creates a stronger invitation to respond. That goes especially for deletions without a stated reason.

In addition, I would advocate that posts deleted in this way stay visible to the Sunshine Regiment in a grayed-out way. For the Sunshine Regiment it's important to understand what gets deleted.

Replies from: habryka4
comment by habryka (habryka4) · 2018-02-20T22:46:55.056Z · LW(p) · GW(p)

We considered the grayed-out option, but it was somewhat technically annoying, and I also felt it is justified to notify people when one of their comments is deleted, without them having to manually check the relevant section of the comment area.

The PM comes from a dummy account, and I think makes it clear that there is no use in responding. But unsure whether that was what you were pointing to with "stronger invitation to respond".

And yep, all deleted comments are visible to sunshines.

Replies from: ChristianKl
comment by ChristianKl · 2018-02-21T13:10:01.263Z · LW(p) · GW(p)

If you have a person who writes a trolling post out of anger, the event of them getting a PM that their post was deleted triggers the anger again. This can lead to more engagement.

On the other hand, just greying out the post doesn't produce engagement with the topic to the same degree.

Given that we don't have that many angry trolls at the moment, however, I don't think this is an important issue.

comment by Daniel_Armak · 2018-02-20T16:28:56.380Z · LW(p) · GW(p)

Will there be a policy on banned topics, such as politics, or will that be left to author discretion as part of moderation? Perhaps topics that are banned from promotion / the front page (regardless of upvotes and comments) but are fine otherwise?

If certain things are banned, can they please be listed and defined more explicitly? This came up recently in another thread and I wasn't answered there.

Replies from: habryka4
comment by habryka (habryka4) · 2018-02-20T17:52:41.796Z · LW(p) · GW(p)

We have the frontpage post and commenting guidelines here, which are relatively explicit:

https://www.lesserwrong.com/posts/tKTcrnKn2YSdxkxKG/frontpage-posting-and-commenting-guidelines

comment by PDV · 2018-02-21T02:04:52.534Z · LW(p) · GW(p)

I think this is extremely bad. Letting anyone, no matter how prominent, costlessly remove/silence others is toxic to the principle of open debate.

At minimum, there should be a substantial penalty for banning and deleting comments. And not a subtraction, a multiplication. My first instinct would be to use the fraction of users you have taken action against as a proportional penalty to your karma, for all purposes. Or, slightly more complex, take the total "raw score" of karma of all users you've taken action against, divide by the total "raw score" of everyone on the site, double it, and use that as the penalty factor. If Eliezer actually only bans unhelpful newbies, then this will be a small penalty. If he starts taking repeated action against many people otherwise regarded as serious contributors, then it will be a large penalty.
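
Concretely, one reading of that proposal (data shapes invented for illustration):

```typescript
// Sketch of the proposed multiplicative penalty. One reading: reduce a
// moderator's effective karma in proportion to the penalty factor.
// Data shapes are invented for illustration.
interface SiteUser {
  rawKarma: number;
  moderatedUserIds: Set<string>; // users this person has acted against
}

function penaltyFactor(
  moderator: SiteUser,
  allUsers: Map<string, SiteUser>
): number {
  let moderatedKarma = 0;
  let totalKarma = 0;
  for (const [id, user] of allUsers) {
    totalKarma += user.rawKarma;
    if (moderator.moderatedUserIds.has(id)) {
      moderatedKarma += user.rawKarma;
    }
  }
  if (totalKarma === 0) return 0;
  // Double the moderated share; the cap at 1 is an added assumption so
  // that effective karma never goes negative.
  return Math.min(1, (2 * moderatedKarma) / totalKarma);
}

function effectiveKarma(
  moderator: SiteUser,
  allUsers: Map<string, SiteUser>
): number {
  return moderator.rawKarma * (1 - penaltyFactor(moderator, allUsers));
}
```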

The intended use case of this may be positive, but let's be real: even among rationalists, status incentives always win out. Put on your David Monroe/ialdabaoth hats and remember that for a group rationality project, priorities 1, 2, and 3 must be defanging social incentives to corrupt group epistemics.

Replies from: ChristianKl
comment by ChristianKl · 2018-02-21T13:05:50.846Z · LW(p) · GW(p)

Nobody is silenced here in the sense that their ability to express themselves gets completely removed.

If someone has a serious objection to a given post they are free to write a rebuttal to the post on their personal page.

This policy rewards people for writing posts instead of writing comments and that's a good choice. The core goal is to get more high quality posts and comments are a lesser concern.

Replies from: PDV
comment by PDV · 2018-02-21T16:32:44.261Z · LW(p) · GW(p)

People absolutely are silenced by this, and the core goal is to get high quality discussion, for which comments are at least as important as posts.

Writing a rebuttal on your personal page, if you are low-status, is still being silenced. To be able to speak, you need not just a technical ability to say things, but an ability to say them to the audience that cares.

Under this moderation scheme, if I have a novel, unpopular dissenting view against a belief that is important to the continuing power of the popular, they can costlessly prevent me from getting any traction.

Replies from: habryka4, ChristianKl
comment by habryka (habryka4) · 2018-02-21T18:40:18.851Z · LW(p) · GW(p)

No, you can still get traction, if your argument is good enough. It just requires that your rebuttal itself, on the basis of its own content and quality, attracts enough attention to be read, instead of you automatically getting almost as much attention as the original author got just because you are the first voice in the room.

If you give exposure to whoever first enters a conversation opened by someone with a lot of trust, then you will have a lot of people competing just to be the first ones dominating that discussion, because it gives their ideas a free platform. Bandwidth is limited, and you need to allocate it by some measure of expected quality, and authors should feel free not to have their own trust and readership given to bad arguments, or to people furthering an agenda that is not aligned with what they want.

There should be some mechanisms by which the best critiques of popular content get more attention than they would otherwise, to avoid filter bubble effects, but critiques should not be able to just get attention by being aggressive in the comment section of a popular post, or by being the first comment, etc. If we want to generally incentivize critiques, then we can do that via our curation policies, and by getting people to upvote critiques more, or maybe by other technical solutions, but the current situation does not strike me as remotely the best at giving positive incentives towards the best critiques.

Replies from: aNeopuritan
comment by aNeopuritan · 2018-03-08T20:44:49.172Z · LW(p) · GW(p)

If a nobody disagrees with Yudkowsky (while being less wrong than him), they'll be silenced for all practical purposes. And I do think there was a time when people signalled by going against him, which was the proof of non-phyggishness. Phygs are bad.

You could try red-letter warnings atop posts saying, "there's a rebuttal by a poster banned from this topic: [link]", but I don't expect you will, because the particular writer obviously won't want that.

comment by ChristianKl · 2018-02-21T18:46:20.073Z · LW(p) · GW(p)

Comments on the personal page show up for people who browse Popular Posts/Community and also for people who look at the Daily list of posts.

Giving people with a history of providing valuable contributions (=high status) a better ability to have an audience is desirable.

Replies from: aNeopuritan
comment by aNeopuritan · 2018-03-08T20:46:35.858Z · LW(p) · GW(p)

Definitely put on the Ialdabaoth hat. You do not in any circumstances have to consciously devise any advantage to hand to high-status people, because they already get all conceivable advantages for free.

Replies from: ChristianKl
comment by ChristianKl · 2018-03-09T18:33:50.351Z · LW(p) · GW(p)

High-status people get advantages for free because it's beneficial for agents to give them advantages. For a high-status person, it's easy to stay away and publish their content on their own blog, where they already have an audience. This makes it more important to incentivize them to contribute here.

Companies have bonus systems to reward the people who already have the most success in the company, because it's very important to keep high performers happy.