META: Deletion policy

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-26T01:46:53.561Z · LW · GW · Legacy · 92 comments

This is my attempt to codify the informal rules I've been working by.

I'll leave this post up for a bit, but strongly suspect that it will have to be deleted not too long thereafter.  I haven't been particularly encouraged to try responding to comments, either.  Nonetheless, if there's something I missed, let me know.


Comments sorted by top scores.

comment by buybuydandavis · 2012-12-26T03:13:49.113Z · LW(p) · GW(p)

Suggestion: I recommend sending people their deleted posts.

I find it annoying to spend the effort to type a post, only to have it disappear into a bit bucket. If you want it gone, that's your prerogative, but I think it is a breach of etiquette for a forum to destroy information created by a forum user.

Now I assume you found the original post a breach of etiquette, and so may feel that tit for tat is the right policy here. I'd consider an intentional breach of etiquette an unnecessary escalation.

Replies from: Vladimir_Nesov, gwillen, Kawoomba, shminux
comment by Vladimir_Nesov · 2012-12-26T13:32:15.649Z · LW(p) · GW(p)

You can still see your own banned comments on your user page. This might be false for posts, I'm not sure.

Replies from: ahartell
comment by ahartell · 2012-12-27T22:59:52.240Z · LW(p) · GW(p)

Judging by Kodos96's user page, the same is the case for posts, i.e., they are still visible after being "censored."

comment by gwillen · 2012-12-26T04:37:20.856Z · LW(p) · GW(p)

This seems like a good thing to do as a courtesy in cases where it seems reasonable.

If it were an actual policy, you'd want to put some limits on it, e.g. "if the post is longer than X words and/or contains something that was clearly meant to be intelligent thought."

comment by Kawoomba · 2012-12-26T07:06:08.961Z · LW(p) · GW(p)

Suggestion: I recommend sending people their deleted posts.

I used to do that for a long time on a large-ish subreddit I mod. Eventually it became too much of a burden; the workload was too large. It may be a feasible policy to try on LW, given the (hopefully) very low volume of deleted content.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-12-26T19:15:18.976Z · LW(p) · GW(p)

This sounds like something that could be handled by a script, so as to be an utterly transparent process. In your role as a subreddit mod it wouldn't be so easy, but they have source access.
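A minimal sketch of such a script, assuming hypothetical hook and function names (`on_delete`, `send_private_message`); a real implementation would use the forum's actual moderation API rather than these stand-ins:

```python
# Sketch of an automated "return deleted content to its author" hook.
# All names here are hypothetical stand-ins for whatever moderation
# hooks the forum software actually exposes.

def build_return_message(author, title, body):
    """Compose the courtesy message sent when a post is removed."""
    return (
        f"Hi {author},\n\n"
        f"Your post '{title}' was removed under the deletion policy. "
        f"For your records, here is the text you submitted:\n\n{body}"
    )

def on_delete(post, send_private_message):
    """Hypothetical hook: called by the forum after a post is deleted."""
    message = build_return_message(post["author"], post["title"], post["body"])
    send_private_message(post["author"], message)

# Example usage with a stub message transport:
outbox = []
on_delete(
    {"author": "buybuydandavis", "title": "Example", "body": "Original text."},
    lambda user, msg: outbox.append((user, msg)),
)
```

Because the whole action is mechanical, the author receives their text automatically and no moderator effort is spent per deletion.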

Replies from: Kawoomba
comment by Kawoomba · 2012-12-26T19:38:57.046Z · LW(p) · GW(p)

Good idea, that difference escaped my notice.

comment by shminux · 2012-12-26T08:48:36.111Z · LW(p) · GW(p)

I find it annoying to spend the effort to type a post, only to have it disappear into a bit bucket.

Post deletion is apparently rare and will remain so. If you type a post which clearly falls under the deletion policy, you deserve to have it disappeared without a trace. I'm sure that borderline cases would be discussed first and you would have a chance to edit your submission.

comment by fubarobfusco · 2012-12-26T10:25:00.700Z · LW(p) · GW(p)

Concrete suggestions:

1. Bring the policy statements to the forefront; put the lengthy "background" discussion of "free speech" vs. "walled gardens" and the like in a brief FAQ or discussion section at the end. The first line of the policy statement should be the one beginning "Most of the burden of moderation ..."

Reason: Most readers want to know what the policy is — so that should come first. Most of the people who want to argue about the theory of the policy are looking to have an enjoyably clever argument, which the "background" provides — so that should be there, but not in front.

2. Use formatting to emphasize the document's structure. As it stands, there's not enough visual structure for the eye to pick out the little numbers that indicate new points. More notably, the paragraph that separates the "more controversial" items looks structurally like it should be the explanation of the spam item.

3. Readers have heard of the common cases. Spam, harassment, and posting of personal information are things that lots of forums ban; LW is not unusual in this regard. In gist, if it's against Reddit's policy, it doesn't need a lot of explanation.

4. Careful about spam and SEO. A major (possibly primary) reason to delete spam is that it is clutter that gets in the way of people reading the forum. If someone posted a thousand posts that just contained "foo", that would be spam and would be deleted, even though it has nothing to do with SEO. Commercial spam is bad because allowing it creates a monetary incentive for endless clutter production.

5. Harassment section is too specific. There are a lot of forms of harassment that I suspect you'd want to get rid of that don't involve "following a particular user around and leaving insulting comments".


The violence section is much better explained than in the previous post discussing it. Specifically, the unwelcoming effect of "hypothetical" violence proposals is a really good point.

Replies from: Eliezer_Yudkowsky, wedrifid
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-26T13:09:12.127Z · LW(p) · GW(p)

Formatting added.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-12-26T19:33:18.341Z · LW(p) · GW(p)

Yay! Thank you.

comment by wedrifid · 2012-12-26T11:47:50.832Z · LW(p) · GW(p)

Harassment section is too specific.

Either that or it isn't specific enough and he could have come out and said what he really meant.

Replies from: Barry_Cotter
comment by Barry_Cotter · 2012-12-28T12:39:45.455Z · LW(p) · GW(p)

Harassment section is too specific.

Either that or it isn't specific enough and he could have come out and said what he really meant.

It was annoying to think I knew what you were referring to by reading this comment in isolation but it was depressing to be right.

comment by Wei_Dai · 2012-12-26T13:58:42.812Z · LW(p) · GW(p)

I own the "everything-list" Google Group, which has no explicit moderation policy, although I do block spam and the occasional completely off-topic post from newbies who seemingly misunderstood the subject matter of the forum. It worked fine without controversy or anything particularly bad happening, at least in the first decade or so of its existence, when I still paid attention to it. I would prefer if Eliezer also adopted an informal but largely "hands off" policy here. But looking at Eliezer's responses to recent arguments as well as past history, the disagreement seems to be due to some sort of unresolvable differences in priors/values/personality and not amenable to discussion. So I disagree but feel powerless to do anything about it.

Replies from: Emile, Dr_Manhattan
comment by Emile · 2012-12-26T14:24:45.018Z · LW(p) · GW(p)

Interesting. A couple hypotheses:

1) Admins overestimate the effect that certain policies have on behavior (they may underestimate random effects, or assign effects to the wrong policy); just like parents might overestimate the effect of parenting choices, or managers overestimate the impact of their decisions ("we did daily stand-up meetings, and the project was completed on time - the daily stand-up meetings must be the cause!").

2) Eliezer is more concerned about the public image of LessWrong (both because of how it reflects on CFAR and SIAI, and on the kind of people it may attract) than you are (were?) about the everything-list.

For what it's worth I'm fine with moderation of stupid things like discussing assassinations, and of banning obnoxious trolls and cranks and idiots, and the main reason to refrain from those kind of mod actions would be to avoid scaring naive young newcomers who might see it as an affront against Sacred Free Speech.

Your testimony of a case where you still have quality discussion with very light moderation makes me slightly less in favor of heavy-handed moderation.

(I'm not sure that the moderation here is becoming "stronger" recently, as opposed to merely a bit more explicit)

Replies from: drethelin, Wei_Dai, Eugine_Nier, handoflixue
comment by drethelin · 2012-12-26T15:29:47.919Z · LW(p) · GW(p)

3) Eliezer's tolerance for "crazy" or stupid posts is so low that he's way more pissed off by even a small number of them existing than other people are.

comment by Wei_Dai · 2012-12-26T18:32:23.795Z · LW(p) · GW(p)

It seems to me the occasional crazy idea posted here wouldn't reflect that badly on CFAR and SIAI, if they had a policy of "LW is an open forum and we're not responsible for other people's posts", especially if the bad ideas are heavily voted down and argued against, with the authors often apologizing and withdrawing their own posts.

Replies from: crap
comment by crap · 2012-12-27T10:32:52.629Z · LW(p) · GW(p)

A crazy idea reflects badly on the ideology that spawned the crazy idea.

Replies from: handoflixue
comment by handoflixue · 2012-12-27T19:38:28.739Z · LW(p) · GW(p)

If that were true, LessWrong would have such an INCREDIBLY HUGE advantage over most every major religion. LessWrong hasn't managed to raise armies and invade sovereign nations yet, after all.

Thinking in those terms, it makes me strongly suspect anyone turned away by a single bad post is engaging in some VERY motivated cognition, and probably would not have stayed long. (A high noise:signal ratio, on the other hand, would be genuinely damaging)

Replies from: crap
comment by crap · 2012-12-27T22:26:37.066Z · LW(p) · GW(p)

No one here felt distraught with religion? Not even a little? :)

comment by Eugine_Nier · 2012-12-26T23:32:48.600Z · LW(p) · GW(p)

For what it's worth I'm fine with moderation of stupid things like discussing assassinations, and of banning obnoxious trolls and cranks and idiots, and the main reason to refrain from those kind of mod actions would be to avoid scaring naive young newcomers who might see it as an affront against Sacred Free Speech.

No, the main reason is to avoid evaporative cooling and slippery slopes, a.k.a., the reasons free speech is such a sacred value.

Keep in mind Eliezer himself would be considered a crank by most "mainstream skeptics".

Replies from: Emile
comment by Emile · 2012-12-27T11:45:43.340Z · LW(p) · GW(p)

Do you think there's a big risk of evaporative cooling because Eliezer bans too many things? (assuming his current level of banning, not a much higher one) It's true that the infamous Roko case seems to fit the bill, and Wei Dai's concerns make me at least think it's possible - but I would expect a greater risk in the opposite direction, of the quality of discussion being watered down by floods of comments on stupid topics, meaning that people who don't have time to sort through all the clutter may end up giving up participating in most discussions.

Replies from: Elithrion
comment by Elithrion · 2013-01-25T05:07:36.250Z · LW(p) · GW(p)

I would expect a greater risk in the opposite direction, of the quality of discussion being watered down by floods of comments on stupid topics, meaning that people who don't have time to sort through all the clutter may end up giving up participating in most discussions.

Having spent a few years chatting on karma-less, completely unmoderated fora (spam would be deleted, but nothing else), I can say that this does not seem to occur. The pattern seems to be that when someone says something the forum considers stupid, this is remarked upon, and then they either attempt to improve to be more in line with the general opinion, or leave. People are not really gluttons for punishment - if a community does not welcome them, they (usually) will not continue participating in it - and the ratio of new users to old users is typically very low, so norms are maintained in the medium term (barring major news coverage or something).

Although I guess without the deletion policy discussion may drift further away from rationality, so if you think most of that would be boring or mindkilling, it may be of value.

comment by handoflixue · 2012-12-27T19:36:10.363Z · LW(p) · GW(p)

Eliezer has pretty blatantly stated that the reasoning was #2

comment by Dr_Manhattan · 2012-12-26T17:21:47.650Z · LW(p) · GW(p)

There is a large difference between running a private list and a more accessible forum associated with an organization (the logos on top).

comment by Morendil · 2012-12-26T09:05:07.509Z · LW(p) · GW(p)

The section on "information hazards" has an actual live link to TVTropes. Irony much?

Replies from: Eliezer_Yudkowsky, Dorikka
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-26T13:08:34.500Z · LW(p) · GW(p)

Heh! Irony emphasized.

comment by Dorikka · 2012-12-27T02:02:52.428Z · LW(p) · GW(p)

This started me on a trope-walk, though I was eventually able to pull myself back to what I was doing. :P

Irony indeed.

comment by Nick_Tarleton · 2012-12-26T05:50:32.462Z · LW(p) · GW(p)

I agree with this policy.

comment by JonathanLivengood · 2012-12-26T06:18:41.578Z · LW(p) · GW(p)

When a certain episode of Pokemon contained a pattern of red and blue flashes capable of inducing epilepsy, 685 children were taken to hospitals, most of whom had seen the pattern not on the original Pokemon episode but on news reports showing the episode which had induced epilepsy.

At the very least, this needs a citation or two, since the following sources cast doubt on the story as presented:

WebMD's account

CNN's account

Snopes' account

And CSI's account, which includes the following:

At about 6:51, the flashing lights filled the screens. By 7:30, according to the Fire-Defense agency, 618 children had been taken to hospitals complaining of various symptoms.

News of the attacks shot through Japan, and it was the subject of media reports later that evening. During the coverage, several stations replayed the flashing sequence, whereupon even more children fell ill and sought medical attention. The number affected by this “second wave” is unknown.

And then goes on to argue that the large number of cases was due to mass hysteria.

comment by [deleted] · 2012-12-26T06:17:53.101Z · LW(p) · GW(p)

Please link to the wiki page somewhere so that it's not an orphan. Official policies need to be readily accessible. Also consider making it visible on the main site somewhere, if at all possible.

Replies from: Vladimir_Nesov, Eliezer_Yudkowsky
comment by Vladimir_Nesov · 2012-12-26T14:26:04.417Z · LW(p) · GW(p)

Linked to the new page from Moderation tools and policies, linked to 'Moderation tools and policies' from the wiki sidebar (section 'Community').

Replies from: None
comment by [deleted] · 2012-12-26T15:24:30.378Z · LW(p) · GW(p)

Thank you.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-26T13:08:49.213Z · LW(p) · GW(p)

This can be carried out by non-admins (at least the first part).

Replies from: Vladimir_Nesov
comment by Vladimir_Nesov · 2012-12-26T14:24:28.584Z · LW(p) · GW(p)

It usually doesn't happen.

comment by RobertLumley · 2012-12-26T02:57:20.330Z · LW(p) · GW(p)

As I read it, the policy does not address the basilisk and basilisk-type issues, which, while I don't think they should be moderated, are. "Information Hazards" specifically says "not mental health reasons."

Replies from: evand, Manfred, wedrifid
comment by evand · 2012-12-26T03:38:59.229Z · LW(p) · GW(p)

A true basilisk is not a mental health risk, or at least not only such. Whether one such has been found is a separate question (I lean toward no).

Replies from: None, Username
comment by [deleted] · 2012-12-27T00:12:31.807Z · LW(p) · GW(p)

IIRC, allegedly there were a few people with OCD having nightmares after reading that post by Roko.

Replies from: evand
comment by evand · 2012-12-27T00:28:14.319Z · LW(p) · GW(p)

My point was that it doesn't cause mental health problems, not that it can't trigger them. Perhaps that's a bad way to put it. If it does, there's something beyond the information hazard going on, either an existing problem being triggered, or a multiple hazard. As I understand it, a basilisk is hazardous because you know the argument, without it needing to corrupt your reasoning abilities. Roko's is alleged to be hazardous even to a rational agent. (I don't think it is, and I think censoring it prevents an interesting debate about why. I don't plan to say any more, given the existing censorship policies. If this is already too much, please let me know and I will edit accordingly.)

comment by Username · 2012-12-31T16:59:04.995Z · LW(p) · GW(p)

Quantum roulette is a possible candidate.

comment by Manfred · 2012-12-26T21:39:17.306Z · LW(p) · GW(p)

Well, the "LW basilisk" just turned out to be a knife sharp enough to cut yourself with. And sometimes you need sharp knives.

comment by wedrifid · 2012-12-26T04:18:03.738Z · LW(p) · GW(p)

As I read it, the policy does not address the basilisk and basilisk-type issues

It does, in as much as it includes:

8) Topics we have asked people to stop discussing.

This particular entry makes all the others more or less redundant. This is perhaps better than only having the "Information Hazard" clause, because deleting something based on "Eliezer says so" is at least coherent and unambiguous. It doesn't matter whether a post by Roko is actually dangerous; the says-so clause can still cover it and we can just roll our eyes and tolerate Eliezer's quirks.

Replies from: RobertLumley
comment by RobertLumley · 2012-12-26T06:11:04.186Z · LW(p) · GW(p)

Because Eliezer deleting something based on the "Eliezer says so" is at least coherent and unambiguous.

Well his attempt here is to lay out a bit more than "Because Eliezer says so" as a reason.

comment by Emile · 2012-12-27T12:14:36.022Z · LW(p) · GW(p)

I suspect a good deal of angst around the topic has been from people seeing the issues in online communities as symbolic of real-world issues - opposing policies not because they are bad for an online community, but because they would be bad if applied by a real-world government to a real-world nation; real-world governments come to mind because we have reasons to care more strongly about them, and we hear much more about them. But there are important differences! The biggest is that you can easily leave an online community any time you're not happy about it. I don't think an online community is more similar to a nation than it is to a bridge club, or a company, or a supermarket, or the people making an encyclopedia.

I don't think the concern about the symbolism of censorship is completely wrong; it's quite possible that China could argue that real-world censorship is important for the same reasons it is in online communities!

Somewhat off-topic, but this makes me think that maybe school should teach a bit about "online history" - the history of Usenet and Wikipedia for example.

comment by SilasBarta · 2012-12-26T23:26:41.503Z · LW(p) · GW(p)

This seems like a good deletion policy, but doesn't cover all the actual deletions that have been threatened. Edit: specifically, the policy of allowing certain parties to ban direct refutations of their arguments (edit2: from particular users).

comment by RichardKennaway · 2012-12-26T18:30:37.805Z · LW(p) · GW(p)

At the end, the policy says that the policy does not force the mods to delete anything. Perhaps it should in the same breath also say that it does not prevent them from deleting anything. The judgement of the mods and admins is final and above the policy; the purpose of the policy is to inform them and the readership of the general principles that will be applied.

comment by Eugine_Nier · 2012-12-28T01:20:37.004Z · LW(p) · GW(p)

I was asked to post the following by an anonymous member.

There is a very big issue which this new policy fails to address:

Self defense is a widely advocated legal right in most jurisdictions. For instance, if someone is about to press a button that will activate a bomb which would kill you, and you have no other means of stopping them, in many jurisdictions you have a right to shoot them. Even when the offending party is not legally at fault (e.g. is insane).

This right puts extra burden of moral responsibility on the people that make certain claims. If someone made an unjustified claim that a button on your cellphone would trigger a bomb, and you get your face smashed against the ground by the concerned bystanders or the police - or get shot - the person that made that claim will take the fall for the incident even though formally it can be said that this person has never advocated any violence.

One can clearly see relevance of the above hypothetical to organizations and individuals which make broad and specific claims with regards to dangers and existential risks. Such as Singularity Institute, or a famous Friendly AI proponent Eliezer S. Yudkowsky, known for his somewhat dramatic statements with regards to dangers and risks posed by certain types of software and by completion of some specific projects.

Many replies in comments section on the censorship(sic) proposal on LessWrong do not seem to indicate that the authors accept this moral burden, instead seeing it as a fallacy. For instance, in , Eliezer S. Yudkowsky writes:

Point one: We never said X->Y. We said X, and a bunch of people too stupid to understand the fallacy of appeal to consequences said 'X->violence, look what those bad people advocate' as an attempted counterargument. Since no actual good can possibly come of discussing this on any set of assumptions, it would be nice to have the counter-counterargument, "Unlike this bad person here, we have a policy of deleting posts which claim Q->specific-violence even if the post claims not to believe in Q because the identifiable target would have a reasonable complaint of being threatened".

Replies from: drethelin
comment by drethelin · 2012-12-29T17:59:46.006Z · LW(p) · GW(p)

Regardless of whether the authors "accept" this moral burden, to "indicate" that they do would be unwise. If you can get in serious trouble for saying something, the public statements of smart people are a lot less evidence for what they actually think on that topic.

comment by Kaj_Sotala · 2012-12-26T05:38:35.262Z · LW(p) · GW(p)

I agree with this policy.

comment by gjm · 2012-12-26T02:53:43.959Z · LW(p) · GW(p)

Is the Pokemon story actually true? Casual googling suggests probably not, but I haven't investigated carefully enough to have a very strong opinion. Specifically, I didn't find corroboration of the claim that most of the children who went to hospital had seen news reports rather than the original programme.

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-12-26T03:42:47.905Z · LW(p) · GW(p)

This just says that some of the children were stricken later -- if I had to guess I'd say that the vast majority was done during the actual show.

Replies from: Eliezer_Yudkowsky, arundelo
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-26T08:37:36.573Z · LW(p) · GW(p)

So noted. Will try to remember to edit at some point.

comment by arundelo · 2012-12-26T04:40:02.207Z · LW(p) · GW(p)

"[...] 'Pikachu,' a rat-like creature [...]"

comment by [deleted] · 2012-12-27T00:06:36.455Z · LW(p) · GW(p)

That looks quite wall-of-text-y. It could be made more concise. Also, “We live in a society” -- “we” who? Not all LW users are from the US, or even from the Anglosphere, or even from the Western world. Whereas probably each LWer comes from some society with some stupid laws, that sentence still sounds kind of off, to me.

comment by shminux · 2012-12-26T08:42:22.057Z · LW(p) · GW(p)

It's nice to have written ground rules, even if they are basically common sense.

comment by gwillen · 2012-12-26T17:53:53.732Z · LW(p) · GW(p)

I think this seems like a basically fine policy.

I will also say that my own experience being a moderator is firmly in agreement with , and thus in opposition to those who would rather see a totally hands-off approach to moderation.

comment by evand · 2012-12-26T03:40:54.779Z · LW(p) · GW(p)

Why would this post need to be deleted?

Replies from: wedrifid
comment by wedrifid · 2012-12-26T04:08:10.739Z · LW(p) · GW(p)

Why would this post need to be deleted?

Because people can reply to it and some replies are disagreements.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-26T04:31:06.739Z · LW(p) · GW(p)

So, there might be comments on LW of people disagreeing with Eliezer's policy. The horror.

Replies from: Multiheaded
comment by Multiheaded · 2012-12-26T13:07:40.966Z · LW(p) · GW(p)

Nah, he likely means that the comments might become so full of censorable examples that the entire branch of discussion would get tainted. I hope not.

(I'm moderately against the tightening of censorship policy, BTW, but I understand Eliezer's reasoning, and I'm fine with it.)

comment by [deleted] · 2012-12-26T07:19:13.296Z · LW(p) · GW(p)

I agree with this policy. It sounds totally benign and ordinary.

I haven't been particularly encouraged to try responding to comments, either.

If you mean comment karma, consider that in the case where people appreciate your responses, but strongly disagree with their content, they will downvote you instinctively, as soon as they would furrow their brows: it's an immediately available, low effort way to scratch the itch of dissenting feelings. Since downvotes seem to give you cold-stabbies, but don't make you reevaluate your positions, instinct-downvoting is doubly ineffective, but still the default. We've now learned that saying "This isn't a poll. You have to correct me or I won't stop being wrong", isn't enough to break that habit.

Replies from: None
comment by [deleted] · 2012-12-26T19:10:06.047Z · LW(p) · GW(p)

Indeed, and we (the LW community) have to learn to tell the difference between deliberate trolls and misguided rationalists for our moderation to be effective. In the same way that replying to a troll is a mistake in that it feeds their attention craving, not replying to a wrong non-troll can be a mistake in that they don't notice their error. Maybe a lower downvote limit (4xkarma) would help break aforementioned habit.
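The proposed cap could be expressed as a simple predicate (a sketch only; the multiplier and the idea that the limit scales with karma are taken from the "4xkarma" suggestion above, not from any actual LW implementation):

```python
# Hypothetical downvote cap: a user may cast at most 4x their karma
# in total downvotes, per the suggestion in the parent comment.
DOWNVOTE_KARMA_MULTIPLIER = 4

def can_downvote(user_karma, downvotes_already_cast):
    """Return True if the user is still under their downvote budget."""
    budget = DOWNVOTE_KARMA_MULTIPLIER * max(user_karma, 0)
    return downvotes_already_cast < budget
```

Under this rule a user with 10 karma could cast up to 40 downvotes, while a zero- or negative-karma user could cast none, which would blunt instinct-downvoting without removing the mechanism entirely.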

Replies from: Epiphany
comment by Epiphany · 2012-12-27T02:43:39.844Z · LW(p) · GW(p)

Then there's the possibility that someone enjoys intentionally pretending to be clueless as a means of trolling and further enjoys that it disrupts people's instinct to provide guidance to misguided rationalists.

Replies from: None
comment by [deleted] · 2012-12-27T06:07:24.443Z · LW(p) · GW(p)

That would be incredibly difficult on the moderators. Thankfully, being smart enough to think of that and dumb enough to be a troll isn't a very plausible interval for human intellect.

Replies from: Epiphany
comment by Epiphany · 2012-12-27T08:07:19.262Z · LW(p) · GW(p)

Unfortunately, sometimes gifted people are trolls.

comment by Paul Crowley (ciphergoth) · 2012-12-26T14:20:44.525Z · LW(p) · GW(p)

I would repeat the thing about not binding at the top.

comment by pleeppleep · 2012-12-26T03:31:10.283Z · LW(p) · GW(p)


I'm upset by this.

Not sure why, exactly, but yeah, definitely upset by this. Just felt like sharing.

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-12-26T19:28:12.765Z · LW(p) · GW(p)

Not sure why, exactly, but yeah, definitely upset by this.

If you could figure that out, that would be helpful.

Replies from: pleeppleep
comment by pleeppleep · 2012-12-26T21:33:42.201Z · LW(p) · GW(p)

Intuitive gut reaction. If I had an argument to make I would have said so. Any case I make would have been formed from backtracking from my initial feeling, and I'm probably not the only commenter here arguing based on an "ick" or "yay" gut reaction to the idea of censorship. I thought it was worth pointing out.

Replies from: Epiphany, None
comment by Epiphany · 2012-12-27T02:49:03.444Z · LW(p) · GW(p)

As I see it, this is sort of like that quote on truth that goes something like "You may as well acknowledge the truth - you're already dealing with it."

Censorship was already happening on LessWrong. Now that Eliezer is making an effort to share some of his decision-making process, there is less to fear in a way since you get to have that additional info for guessing what he's likely to do.

Fear of the unknown can feel a lot worse than fear of the known.

Replies from: pleeppleep
comment by pleeppleep · 2012-12-27T03:19:22.680Z · LW(p) · GW(p)

I think you mean the Litany of Gendlin, and I believe some of these rules are being newly implemented, but I could be wrong about that.

He can run his site any way he wants, and most of the ideas here are reasonable precautions given his values. That doesn't change the fact that I intuitively don't like them when I read them, and that gut reaction (or possibly its opposite) is probably shared with others here, who probably allow it to color their arguments one way or the other. Just something to keep in mind, is all.

Replies from: Epiphany
comment by Epiphany · 2012-12-27T03:40:33.161Z · LW(p) · GW(p)

Oh thank you. I kept wondering what that quote was.

others here who probably allow it to color their arguments one way or the other.

Oh, that is a good point.

I was trying to make you feel better.

comment by [deleted] · 2012-12-28T14:40:57.110Z · LW(p) · GW(p)

Status quo bias: I'm reasonably sure that if this policy had been in place from Day 1, very few people would have given it a second thought.

Replies from: None
comment by [deleted] · 2012-12-28T15:19:22.046Z · LW(p) · GW(p)

I remember that one way to combat status quo bias is re-framing. I am about to read the new deletion policy for the first time, but I am going to consciously frame it as "this is a deletion policy already in place for a site I am considering joining" rather than "this is a change to a deletion policy for a site I have already joined."

[Goes to read the policy]

In that frame, I would like the deletion policy and it wouldn't otherwise discourage me from joining the site. I would appreciate that the moderators would be taking moderation seriously, as opposed to some other sites I know of. In particular, the example about academic conferences is a great illustration of the argument.

My only concern is about the broad language used under the sections "Prolific trolls" and "Trollfeeding." The policy refers to commentators who

been downvoted sufficiently strongly sufficiently many times

as well as

Sufficiently downvoted comments.

Can the policy be amended to quantify those qualitative standards? Or, if for practical purposes we can't quantify those standards, include a sentence emphasizing that interpretation of the standard is at the moderator's individual discretion.

comment by DanArmak · 2012-12-28T12:13:17.058Z · LW(p) · GW(p)

LessWrong is focused on rationality and will remain focused on rationality. There's a good deal of side conversation which goes on and this is usually harmless. Nonetheless, if we ask people to stop discussing some other topic instead of rationality, and they go on discussing it anyway, we may enforce this by deleting posts, comments, or comment trees.

This has always been the LW mission, and it's true that some threads are not at all on subject. And then it makes sense to delete them if their net value is even slightly negative, perhaps even if they are merely shown to take too much attention away from rationality topics. Although, I would appreciate it if the first tool used was a request or warning by a moderator to stop discussing something, rather than just deleting it.

People do want to discuss off-topic things, and I at least would like to do it with fellow LW users. (And I prefer forums or mailing lists to IRC.) Perhaps there is enough interest now to establish an offsite, unaffiliated, lightly moderated, Offtopic Discussion forum for LW users. Perhaps such a splitting off would also benefit LW by keeping it more focused on rationality. What do people think?

comment by Epiphany · 2012-12-27T02:35:54.877Z · LW(p) · GW(p)

I see no definition for the word "troll." It seems like something that should be obvious, but I've seen people use the word "troll" to describe people who are simply ignorant. I also think I'm picking up on a trend where, if a comment is downvoted, it is considered trolling even when it was simply an unpopular comment by an otherwise likable user. LessWrong seems to use a broader definition of the word "trolling" than I am used to. If you guys have your own twist on "trolling," it would be good to add LessWrong's definition to the wiki.

Replies from: Emile, ArisKatsaris
comment by Emile · 2012-12-27T12:03:44.515Z · LW(p) · GW(p)

I don't think a formal definition of the word "troll" would be useful; the term is used somewhat informally to refer to the general blob of "problematic users" - trolls, idiots, cranks, aggressive and self-centered users, people who won't shut up about their pet topic, etc. The borders are somewhat fuzzy, and any attempt to formalize them is likely to be too broad or too narrow. Would you be able to properly formalize the kind of behavior you don't want on a website you run, without being too broad or too narrow?

"Troll" is a bit like an unambiguous example of the class of behaviors to be discouraged, but if the policies hit a broader target and also discourage non-trolling obnoxious cranks and idiots, that's a feature, not a bug.

Incidentally, I agree that using "trolling" to describe any downvoted comment (like the "troll toll") is somewhat unfortunate; many downvoted comments are from users who sincerely want to convince everybody that if they would stop being blinded by politically correct groupthink, they would recognize that lizard-men are controlling the government. But then, "troll toll" has a nice ring to it.

Replies from: Epiphany
comment by Epiphany · 2012-12-27T18:26:12.913Z · LW(p) · GW(p)

term is used somewhat informally to the general blob of "problematic users"

I can see how this would be more useful from the perspective of the person doing the banning, but I don't see why it would be useful from the perspective of the person who is attempting to avoid being banned. Flexible for one purpose, too vague for the other.

Would you be able to properly formalize the kind of behavior you don't want on a website you run, without being too broad or too narrow?

Somebody has probably already done so - not perfectly, of course, but probably. There might even be a description of undesired behavior in an open source context, either as part of a free legal terms-of-service agreement or as part of a piece of open source software. It is quite possible that a good free description has already been written and just needs editing. It's also possible to do better than flexible/vague: provide a list of behaviors (such as the one you created above) that briefly describes the main concerns, without it being perfect, and simply aim to make an improvement on flexible/vague.

if the policies hit a broader target and also discourage non-trolling obnoxious cranks and idiots

The problem is that people with idiotic ideas do not know they are being idiotic, and I think that although some cranks do know they're wrong and are content to scam people, other cranks are just as clueless as their customers and have no idea that what they're selling is a ripoff. For instance: I'm not religious, but do I consider a priest a crank? No. I consider a priest somebody who genuinely believes the ideas they're selling, not somebody intentionally deceiving people in order to collect donation money. For this reason, using the words "cranks" and "idiots" is unlikely to work - something like "If you don't bother to support your points with rational arguments, don't update, and keep bothering us, we'll boot you" would be more likely to help them realize it's targeted at them.

Replies from: Emile
comment by Emile · 2012-12-28T09:41:49.426Z · LW(p) · GW(p)

I agree with most of what you say here; there are probably some places where "troll" could have been replaced by something more precise in a way that would be more useful.

I agree that it's important to help "borderline problematic users" to mend their ways, but I don't think the deletion policy is the best place to do that; a precise and detailed deletion policy risks increasing the amount of nitpicking over whether such-and-such moderator action was really justified by the rules (even if those "rules" are actually just said moderator trying to explain by what principles he acts, not a binding legal document!), or nitpicking about whether such-and-such hypothetical case should be banned or not; neither of those two conversations are things I'm particularly interested in reading.

So I think it may be more efficient to help good faith users by improving welcome pages, or talking to them in welcome threads, etc.

Replies from: Epiphany
comment by Epiphany · 2012-12-28T20:05:27.068Z · LW(p) · GW(p)

Not wanting to nitpick is a good point. I don't know whether a more specific definition of troll would necessarily result in more nitpicking. If readers take "troll" by the stereotypical definition (like what ArisKatsaris provided over here), and then somebody gets deemed a troll and censored for saying idiotic things without an intent to annoy (or for some other reason not typically associated with the stereotypical troll), then this could spark controversy, and you still get the nitpicking conversation. Verbiage like "anybody who trolls, but not limited to that" or "we think trolls are this, that, and the other, but not limited to that" may make any nitpicking conversations rather short. "We said it wasn't limited to that. End of conversation."

comment by ArisKatsaris · 2012-12-28T13:32:22.222Z · LW(p) · GW(p)

Trolls are generally people who post with the hope of invoking a negative reaction (e.g. negative responses, flames, downvotes, censorship, bans). Identifying trolls is often a harder job than defining them.

Replies from: Eugine_Nier, Epiphany
comment by Eugine_Nier · 2012-12-28T23:24:50.442Z · LW(p) · GW(p)

So does asking for criticism of your argument count as trolling?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-12-29T02:40:10.743Z · LW(p) · GW(p)

There's a difference between asking for criticism of a post/argument that you nonetheless hope to be good, and intentionally making a bad argument so that you will be criticized.

I think the difference I'm talking about is well understood.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-29T21:56:22.038Z · LW(p) · GW(p)

Basically, would Socrates be considered a troll?

comment by Epiphany · 2012-12-28T20:11:25.857Z · LW(p) · GW(p)

Thanks. That looks like the stereotypical definition of troll to me. Are you saying LessWrong does not use the word "troll" differently, and the ambiguity is just due to people having a hard time figuring out who is a troll?

Replies from: ArisKatsaris
comment by ArisKatsaris · 2012-12-28T22:55:27.887Z · LW(p) · GW(p)

'LessWrong' is composed of many people. I'm sure that some use it the way I use it, and some have different definitions. I don't think that LessWrong differs in this respect from any other forum or community.

comment by Cyan · 2012-12-26T02:37:20.013Z · LW(p) · GW(p)

I'm really disappointed in EY -- the wiki page is incredibly careless of the safety of Brittany Fleegelburger, purple-eyed people, and congohelium producers. Large amounts of common sense indeed!

Replies from: Cyan
comment by Cyan · 2012-12-26T21:36:20.059Z · LW(p) · GW(p)

(The parent has an intended meaning over and above the feeble attempt at humor. It lies in the fact that I could have posted about a genuine concern -- if I had one.)

comment by James_Miller · 2012-12-26T03:20:09.621Z · LW(p) · GW(p)

Consider adding something like "in return for donating $X to Y you will get a detailed reason for why your post was deleted."

Replies from: evand, BrassLion
comment by evand · 2012-12-26T03:39:58.308Z · LW(p) · GW(p)

Doesn't this create a very poor set of incentives?

Replies from: James_Miller
comment by James_Miller · 2012-12-26T03:56:29.299Z · LW(p) · GW(p)

Not if X is small or Y is unaffiliated with the censors.

comment by BrassLion · 2012-12-26T06:10:06.885Z · LW(p) · GW(p)

A (short) reason should be common courtesy except for spam and egregious trolls.

EDIT: Assuming this sort of thing is low enough volume not to substantially add to the work the deleter does in deleting posts.