New censorship: against hypothetical violence against identifiable people

post by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-23T21:00:31.045Z · LW · GW · Legacy · 448 comments

New proposed censorship policy:

Any post or comment which advocates or 'asks about' violence against sufficiently identifiable real people or groups (as opposed to aliens or hypothetical people on trolley tracks) may be deleted, along with replies that also contain the info necessary to visualize violence against real people.

Reason: Talking about such violence makes that violence more probable, and makes LW look bad; and numerous message boards across the Earth censor discussion of various subtypes of proposed criminal activity without anything bad happening to them.

More generally: Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad and that anyone talking about a proposed crime on the Internet fails forever as a criminal (i.e., even if a proposed conspiratorial crime were in fact good, there would still be net negative expected utility from talking about it on the Internet; if it's a bad idea, promoting it conceptually by discussing it is also a bad idea; therefore and in full generality this is a low-value form of discussion).  

This is not a poll, but I am asking in advance if anyone has non-obvious consequences they want to point out or policy considerations they would like to raise. In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole - but rather, 'Are there any predictable consequences we didn't think of that you would like to point out, and possibly bet on with us if there's a good way to settle the bet?'

Yes, a post of this type was just recently made.  I will not link to it, since this censorship policy implies that it will shortly be deleted, and reproducing the info necessary to say who was hypothetically targeted and why would be against the policy.

448 comments

Comments sorted by top scores.

comment by [deleted] · 2012-12-24T00:27:38.753Z · LW(p) · GW(p)

I'm starting to feel strongly uncomfortable about this, but I'm unsure if that's reasonable. Here are some arguments ITT that concern me:

Does advocating gun control, or increased taxes, count? They would count as violence if private actors did them, and talking about them makes them more likely (by states).

Violence is a very slippery concept. Perhaps it is not the best one to base mod rules on. (more at end)

We're losing Graham cred by being unwilling to discuss things that make us look bad.

This one is really disturbing to me. I don't like all the self-conscious talk about how we are perceived outside. Maybe we need to fork LW to accomplish it, but I want to be able to discuss what's true and good without worrying about getting moderated. My post-rationality opinions have already diverged so far from the mainstream that I feel I can't talk about my interests in polite society. I don't want this here too.

If I see any mod action that could be destroyed by the truth, I will have to conclude that LW management is borked and needs to be forked. Until then I will put my trust in the authorities here.

Would my pro-piracy arguments be covered by this? What about my pro-coup d'état ones?

Would it censor a discussion of, say, compelling an AI researcher by all means necessary to withhold their research from, say, the military?

The whole purpose of discussing such plans is to reduce uncertainty over their utility; you haven't proven that the utility gain of a plan turning out to be good must be less than the cost of discussing it in public.

Yeah seriously. What if violence is the right thing to do? (EDIT: Derp. Don't discuss it in public, except for stuff like Konkvistador's piracy and reaction advocacy, which are supposed to be public.)

My post was indeed inappropriate. I have used the "Delete" function on it.

This is important. If the poster in question agrees when it is pointed out that their post is stupid, go ahead and delete it. But if they disagree in some way that isn't simple defiance, please take a long look at why.

In general, two conclusions:

I support censorship, but only if it is based on the unaccountable personal opinion of a human. Anything else is too prone to lost purposes. If a serious rationalist (e.g. EY) seriously thinks about it and decides that some post has negative utility, I support its deletion. If some unintelligent rule like "no hypothetical violence" decides that a post is no good, why should I agree? Simple rules do not capture all the subtlety of our values; they cannot be treated as Friendly.

And, as usual, that which can be destroyed by the truth should be. If moderator actions start serving some force other than truth and good, LW, or at least the subset dedicated to truth and rationality, should be forked.

Replies from: AlexMennen, handoflixue, Multiheaded, Eliezer_Yudkowsky
comment by AlexMennen · 2012-12-24T01:06:43.549Z · LW(p) · GW(p)

I support censorship, but only if it is based on the unaccountable personal opinion of a human. Anything else is too prone to lost purposes. If a serious rationalist (e.g. EY) seriously thinks about it and decides that some post has negative utility, I support its deletion. If some unintelligent rule like "no hypothetical violence" decides that a post is no good, why should I agree? Simple rules do not capture all the subtlety of our values; they cannot be treated as Friendly.

It makes sense to have mod discretion, but it also makes sense to have a list of rules that the mods can point to so that people whose posts get censored are less likely to feel that they are being personally targeted.

Replies from: None
comment by [deleted] · 2012-12-24T01:23:36.380Z · LW(p) · GW(p)

Yes. Explanatory rules are good. Letting the rules drive is not.

Replies from: Eliezer_Yudkowsky, Luke_A_Somers
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T02:17:54.597Z · LW(p) · GW(p)

These are explanations, not rules, check.

comment by Luke_A_Somers · 2012-12-24T05:10:10.398Z · LW(p) · GW(p)

Hence "may at the admins' option be censored"

comment by handoflixue · 2012-12-24T20:50:09.374Z · LW(p) · GW(p)

If I see any mod action that could be destroyed by the truth, I will have to conclude that LW management is borked and needs to be forked. Until then I will put my trust in the authorities here.

I want to upvote these two sentences again and again.

comment by Multiheaded · 2012-12-24T07:15:17.363Z · LW(p) · GW(p)

I support censorship, but only if it is based on the unaccountable personal opinion of a human.

I think that there's the usual paradox of benevolent dictatorship here; you can only trust humans who clearly don't seek this position for selfish ends and aren't likely to present a rational/benevolent front just so you would give them political power.

In a liberal/democratic political atmosphere, self-proclaimed benevolent dictators are a rare and prized resource; you can pressure one to run a website, an organization, etc., to the best of their ability. But if dictatorship were to be seen as the norm, and you couldn't easily fall back on democracy, rule by committee, anarchy, etc., and had to choose between a few dictators, then the standards of dictatorial control would surely plummet and it would be psychologically much more difficult to change the form of organization. So, IMO, isolated experiments with dictatorship are fine; overall preference for it is terribly dangerous.

(All of the above goes only for humans, of course; I have no qualms about FAI rule.)

P.S.: I googled for "benevolent dictator" + "paradox" and found an argument similar to mine.

Being governed by people instead of a system isn’t just dangerous, it suffers from a limited attention span, too. The Chinese oligarchy is, indeed, very effective. Beijing was cleaner for the Olympics and those pesky plastic bags are gone, but there is only so much bandwidth for the authorities to enforce regulation and address new concerns. Pollution is a serious problem in China that no one denies, but little is done so far. The people and the government are both troubled, but frankly, they have bigger fish to stir fry. Three hundred million people may be living middle class western lives, but that leaves another billion in a falling apart shack.

The Chinese have every reason to be proud of their beautiful country and amazing progress. There is much to enjoy and appreciate and, even if it pained me to admit it, their system works far better than I would like to give it credit. My worry for them is if it’s sustainable. Can those billion people rely on replacing great technocrats with new ones who also make the right decisions? Is it even possible for a system which depends on the vagaries of people to even effectively address all the concerns and needs of the people they govern and the society they guide?

Replies from: None
comment by [deleted] · 2012-12-24T07:22:35.319Z · LW(p) · GW(p)

But if dictatorship were to be seen as the norm, and you couldn't easily fall back on democracy, rule by committee, anarchy, etc., and had to choose between a few dictators, then the standards of dictatorial control would surely plummet and it would be psychologically much more difficult to change the form of organization.

Interesting. Do you think there are dictator-selection procedures that don't have either set of failure modes (selecting for looks/promises to loot the commons/lack of leadership, selecting for power-hungry tyrants)?

Replies from: Multiheaded
comment by Multiheaded · 2012-12-24T07:33:14.372Z · LW(p) · GW(p)

Do you think there are dictator-selection procedures that don't have either set of failure modes (selecting for looks/promises to loot the commons/lack of leadership, selecting for power-hungry tyrants)?

Only a single one: a great, actually benevolent dictator, with good insight into people and lots of rationality, personally selects his successor from among several candidates, after lengthy consideration and hidden testing. But, of course, remove one of the above qualifiers, and it can blow up regardless of the first dictator's best intentions. See e.g. Marcus Aurelius and Commodus. So, on a meta level, no, there's likely no system that would work for humans.

(I think that "real" democracy is also too dangerous - see the 19th and early 20th century - so either some form of sophisticated rule by committee or a state of anarchy could be the safest option for baseline humanity.)

Replies from: None
comment by [deleted] · 2012-12-24T07:41:13.443Z · LW(p) · GW(p)

What about technocracy à la China?

And FAI, obviously.

so either some form of sophisticated rule by committee or a state of anarchy could be the safest option for baseline humanity.

Really? Safe in the sense of "too incompetent to execute a mass murder"? Also, anarchy is a military vacuum.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T02:17:18.764Z · LW(p) · GW(p)

Yeah seriously. What if violence is the right thing to do?

Then discussing it on the public Internet is the wrong thing to do. I can't compare it to anything but juvenile male locker-room boasting.

Replies from: None, DataPacRat, Kawoomba, AdeleneDawner, MugaSofer, None
comment by [deleted] · 2012-12-24T14:11:58.217Z · LW(p) · GW(p)

What if you aren't sure if violence is the right thing to do? You obviously should want as many eyeballs as possible to debug your thinking on that, no?

Replies from: Plasmon
comment by Plasmon · 2012-12-24T15:21:50.432Z · LW(p) · GW(p)

If you actually believe that violence might be the right thing to do, then you assign non-negligible probability to the following:

  • the discussion will convince you that violence is indeed the right thing to do;
  • you will then have a moral imperative to do violence, and will act on it or convince others to act on it;
  • you will want the discussion never to have occurred in the first place, because the authorities can use it to track you down and suppress your justified violence.

If you want to discuss a coup or something, do it in a less easily traceable fashion (not on a public forum; use encryption).

Replies from: None, None
comment by [deleted] · 2012-12-24T16:38:23.582Z · LW(p) · GW(p)

You do realize this argument generalizes to discussing many things beyond violence, right? So if this is your true rejection, I hope you've spent some time decompartmentalizing on this.

Replies from: None
comment by [deleted] · 2012-12-25T00:22:10.355Z · LW(p) · GW(p)

I don't see how to decompartmentalize that, so I'm interested in what you are referring to.

comment by [deleted] · 2012-12-24T16:30:18.687Z · LW(p) · GW(p)

The thing is, discussing the desirability of violence and carrying out violence are not necessarily done by the same person. Indeed, historically they usually aren't. This does not remove moral culpability, but it does provide some legal protection.

Replies from: Plasmon
comment by Plasmon · 2012-12-25T07:03:31.500Z · LW(p) · GW(p)

The thing is, discussing the desirability of violence and carrying out violence are not necessarily done by the same person. Indeed, historically they usually aren't.

Certainly. I consider this to be evidence that the people discussing the desirability of violence do not actually believe what they are saying. They are merely attempting to raise their status in an in-group which hates the group against which violence is being discussed.

Due to hate speech laws, you may have less legal protection than you expect.

Replies from: None, None, Eugine_Nier
comment by [deleted] · 2012-12-25T10:28:30.599Z · LW(p) · GW(p)

Certainly. I consider this to be evidence that the people discussing the desirability of violence do not actually believe what they are saying. They are merely attempting to raise their status in an in-group which hates the group against which violence is being discussed.

This fits with gwern's model of terrorist groups as not being about political objectives but about dysfunctional support groups of people who bully each other into action because of all-too-human social games.

But you are making a mistake here, similar to the one people make after hearing about the evolutionary origins of altruism, when they go on to behave as if altruism doesn't really exist. It's like thinking a mother was really doing fitness-maximizing calculations when deciding to give the runt cub less food than the strong ones. She just feels less inclined to give it food because it isn't as cute or something. The mechanism that produced that feeling certainly was optimized with fitness maximization as a goal in the past, but that isn't what is going on in her brain.

I'm pretty sure Ayatollah Khomeini, Thomas Paine, or Lenin honestly believed in the desirability of the violence they were promoting. They weren't faking it. But I think they probably did believe in it because of the social reasons you mention.

Replies from: Plasmon
comment by Plasmon · 2012-12-25T11:08:35.826Z · LW(p) · GW(p)

It's like thinking a mother was really doing fitness-maximizing calculations when deciding to give the runt cub less food than the strong ones. She just feels less inclined to give it food because it isn't as cute or something.

The fitness maximising calculations are encoded, by evolution, in the neural patterns relating to the cuteness response. The individuals whose cuteness response correlates with fitness are themselves more fit. Those who would give more food to their malformed three-legged offspring will go extinct. So of course the mother is doing fitness calculations. It doesn't feel like that from the inside, but that doesn't make it any less true.

And the religious people who merely believe that they believe in their god feel very religious from the inside.

There is a distinction to be made between the internal state of those who argue that violence against a certain group is a good thing but don't lift a finger themselves (BELIEF1), and those who, upon being convinced that violence against this group is a good thing, actually attack said group (BELIEF2). It is that distinction I mean when I say they don't really believe, even though both feel like honest belief from the inside. Neither is faking it, but there's still a distinction.

Replies from: None
comment by [deleted] · 2012-12-25T11:34:45.756Z · LW(p) · GW(p)

The fitness maximising calculations are encoded, by evolution, in the neural patterns relating to the cuteness response. The individuals whose cuteness response correlates with fitness are themselves more fit. Those who would give more food to their malformed three-legged offspring will go extinct. So of course the mother is doing fitness calculations. It doesn't feel like that from the inside, but that doesn't make it any less true.

No, she is not, except in the sense that she is perhaps one small step in something that does fitness calculations; looking at her brain, you wouldn't find fitness-maximization calculations being done, just the execution of old adaptations.

And the religious people who merely believe that they believe in their god feel very religious from the inside.

Religious people believe they believe in God. And many of them are correct on this.

There is a distinction to be made between the internal state of those who argue that violence against a certain group is a good thing but don't lift a finger themselves (BELIEF1), and those who, upon being convinced that violence against this group is a good thing, actually attack said group (BELIEF2). It is that distinction I mean when I say they don't really believe, even though both feel like honest belief from the inside. Neither is faking it, but there's still a distinction.

It can also be called division of labour. My comparative advantage may lie in bashing Wiggin heads or crafting arguments for why bashing Wiggin heads is good or organizing the logistics so our heads don't get bashed by Wiggins so that we can bash more of theirs.

I don't see from a consequentialist standpoint what is so different between me physically bashing a Wiggin head, pressing a button that activates a machine that bashes a Wiggin head, and manipulating someone into bashing a Wiggin head. To call only one of these an indication of "real" belief that bashing Wiggin heads is good (and I hope we all agree it is!) seems not very useful at all, especially since it is perfectly possible that the preferences are literally identical inside their brains and merely the means available to them are what varies.

The distinction seems useful only in very peculiar circumstances, like trying to discover my preferences with regards to personal physical confrontation or combat.

Replies from: Plasmon, Plasmon
comment by Plasmon · 2012-12-25T18:58:10.405Z · LW(p) · GW(p)

No, she is not, except in the sense that she is perhaps one small step in something that does fitness calculations; looking at her brain, you wouldn't find fitness-maximization calculations being done, just the execution of old adaptations.

These old adaptations encode rough heuristics of limited applicability which approximate fitness calculations (if the environment has been fairly constant for long enough). They are actually there inside the brain. What else do you think the cuteness response, as you used it here, is?

Religious people believe they believe in God. And many of them are correct on this.

Are they? So very few of them actually take their beliefs seriously. So very few of them actually behave as if their expected utility calculations are dominated by threats of eternal damnation and promises of eternal salvation.

comment by Plasmon · 2012-12-25T18:48:11.486Z · LW(p) · GW(p)

It can also be called division of labour. My comparative advantage may lie in bashing Wiggin heads or crafting arguments for why bashing Wiggin heads is good or organizing the logistics so our heads don't get bashed by Wiggins so that we can bash more of theirs.

Yes. The problem is that this is exactly the rationalisation that someone would use if it weren't true. Then again, it might be true.

We need to distinguish

  • (type A) Someone wants to rise in power within a certain group, advocates violence against a hated out-group, and remains largely protected from legal consequences himself because he doesn't actually commit any violent acts. When asked, he claims his non-action is due to division-of-labour reasoning.
  • (type B) Someone actually thinks violence against a certain out-group is a good thing (in the greater-good sense), and doesn't commit any violent acts himself based on division-of-labour reasoning. When asked about his motivations, he is not (easily?) distinguishable from (A).

What's the difference? The difference is that (type A) should be discouraged from encouraging violence. If a (type A) successfully encourages a group of followers to commit violence against a hated out-group, people get hurt. This was not the (type A)'s intention; it's just an unfortunate side effect that he doesn't really care about.

(type B)s, on the other hand, should be listened to, and their arguments weighed carefully. For the greater good, you know. In fact this seems like a good reason for (type B)s to signal that they themselves do not in any way profit from the violence.

What are your priors? More (type B)? More (type A)?

I don't see from a consequentialist standpoint what is so different between me physically bashing a Wiggin head, pressing a button that activates a machine that bashes a Wiggin head, and manipulating someone into bashing a Wiggin head.

You said it yourself: not being the one who actually commits the violent acts provides some legal protection. Your not ending up in jail is a consequence. (I don't actually know what a Wiggin head is; I assume "bashing a Wiggin head" is some socially unaccepted form of violence.)

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-12-25T23:40:58.249Z · LW(p) · GW(p)

See here: wiggin

comment by [deleted] · 2012-12-25T10:19:59.057Z · LW(p) · GW(p)

Due to hate speech laws, you may have less legal protection than you expect.

lol. Hate speech laws are primarily about punishing ethnocentric white people, and secondarily about protecting very specific minorities. Even when they are written so as to protect people of a certain profession or class or education level or political ideology, as they are in my country, they are never used that way.

An example: Do you think that saying you want to take stuff from or harm rich people without getting into specifics about a particular person will ever get you into legal trouble?

Replies from: TheOtherDave, Plasmon
comment by TheOtherDave · 2012-12-25T16:03:44.119Z · LW(p) · GW(p)

Do you think that saying you want to take stuff from or harm rich people without getting into specifics about a particular person will ever get you into legal trouble?

I imagine it depends a lot on the extent to which the legal jurisdiction I'm in at the time is influenced by rich people, and the extent to which those rich people take my having said that seriously. In most jurisdictions and for most audiences, very likely not, unless I'm a far more compelling speaker than I think I am.

comment by Plasmon · 2012-12-25T10:49:42.246Z · LW(p) · GW(p)

I was talking in general, not about you specifically. In fact I much appreciate your out-of-the-box view on many subjects, and I can guess why you would argue against any form of censorship here, slippery slopes and all that.

Example of hate-speech laws being used

Former Sharia4Belgium spokesman ... convicted of charges relating to incitement to hatred and violence and the discrimination of non-Muslims, receiving a 6-month prison sentence.

Replies from: None
comment by [deleted] · 2012-12-25T11:49:08.877Z · LW(p) · GW(p)

I think your example is rather atypical, to be honest, at least in the wider West; Emma West is the more typical case. Very much as with hate-crime laws, there is controversy over whether hate speech against white people even counts as hate speech.

What would be considered unacceptable for one group is not unacceptable for another. The star of the recent popular movie Django Unchained, Jamie Foxx, joked, for example:

"I get free. I save my wife and I kill all the white people in the movie," Foxx said to thunderous applause. "How great is that?”

In light of his other comments, this is interesting:

"As a black person it's always racial. ... when I get home my other homies are like how was your day? Well, I only had to be white for at least eight hours today, [or] I only had to be white for four hours."

Foxx went on to say that "black is the new white."

Whether all this, combined, is ominous, righteous, or innocuous depends on your model of the world. That how such laws are applied depends heavily on what model of the world judges or police officers are likely to use is, however, hardly disputable.

Replies from: Plasmon
comment by Plasmon · 2012-12-25T18:01:07.121Z · LW(p) · GW(p)

Oh, I agree fully that such laws are problematic and open to abuse, and that it might well be better for no such laws to exist at all. Nonetheless they exist, and they should enter as a (possibly very low) cost into the calculation of the expected utility of advocating violence.

comment by Eugine_Nier · 2012-12-26T02:57:38.459Z · LW(p) · GW(p)

Certainly. I consider this to be evidence that the people discussing the desirability of violence do not actually believe what they are saying.

Not necessarily. It could be division of labor, since the people who are good at figuring out which violence to do are not necessarily the same people who are good at doing violence.

comment by DataPacRat · 2012-12-24T02:45:23.259Z · LW(p) · GW(p)

A friend and I once put together a short comic trying to analyze democracy from an unusual perspective, including presenting the idea that an underlying threat of violent popular uprising should the system be corrupted helps keep it running well. This was closely related to a shorter comic presenting some ideas on rationality. The project led to some interesting discussions with interesting people, which helped me figure out some ideas I hadn't previously considered, and I consider it to have been worth the effort; but I'm unsure whether or not it would fall afoul of the new policy.

How 'identifiable' do the targets of proposed violence have to be for the proposed policy to apply, and how 'hypothetical' would they have to be for it not to? Some clarification there would be appreciated.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-25T09:40:58.381Z · LW(p) · GW(p)

How 'identifiable' do the targets of proposed violence have to be for the proposed policy to apply, and how 'hypothetical' would they have to be for it not to? Some clarification there would be appreciated.

It's only applied if a mod feels like it.

comment by Kawoomba · 2012-12-24T08:26:35.294Z · LW(p) · GW(p)

Then discussing it on the public Internet is the wrong thing to do.

Also, it implies that violence is best discussed in private, rather than not discussed at all. It's like saying in public, "But let's talk about our illegal activities in a more private venue." There should be no perception of LW being associated with such, period.

comment by AdeleneDawner · 2012-12-24T14:31:58.075Z · LW(p) · GW(p)

Actually, I can think of at least one type of situation where this isn't true, though it seems unwise to explain it in public and in any case it's still not something you'd want associated with LW, or in fact happening at all in most cases.

comment by MugaSofer · 2012-12-24T22:37:46.888Z · LW(p) · GW(p)

What if someone is, y'know, unsure?

Replies from: Eliezer_Yudkowsky, wedrifid
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T23:02:18.959Z · LW(p) · GW(p)

Generally speaking, there's a lot of options grownups in real life resort to before they resort to violence, and I would have no problem with a post describing the fully generic considerations and how far you'd actually have to go down the decision tree before you got to violence, without any identifiables being named. People who honestly don't realize this would be welcome to read that post. I may be somewhat prejudiced by considering it completely obvious that jumping straight to violence as a cognitive answer and then blathering about your conspiracy on the Internet is merely stupid.

Replies from: MugaSofer
comment by MugaSofer · 2012-12-25T13:00:56.071Z · LW(p) · GW(p)

That ... doesn't seem to answer my question.

Perhaps an example is in order.

Someone lives in an area where there have recently been a number of violent muggings. They are considering bringing a gun with them when they go out, in order to defend themselves; they suspect they may be overestimating the danger based on news reports. So they decide to ask here if there are any relevant biases that may be coloring their judgment, and this leads into a general discussion of what chance there should be of encountering violent criminals before it becomes rational to arm yourself (and risk accidentally injuring or killing yourself or passersby).

Does this help clarify my problem?

comment by wedrifid · 2012-12-24T22:46:08.939Z · LW(p) · GW(p)

What if someone is, y'know, unsure?

Discussion on LessWrong is not likely to give them an answer. Honestly, I can't think of any public place on the internet that is likely to be all that helpful, unfortunately.

comment by [deleted] · 2012-12-24T02:33:08.767Z · LW(p) · GW(p)

Good point.

comment by quintopia · 2012-12-24T01:03:56.585Z · LW(p) · GW(p)

EY has publicly posted material that is intended to provoke thought on the possibility of legalizing rape (which is considered a form of violence). If he believed that there was positive utility in considering such questions before, then he must consider them to have some positive utility now, and determining whether the negative utility outweighs that is always a difficult question. This is why I will be opposed to any sort of zero tolerance policy in which the things to be censored are not well-defined: that is a definite impediment to balanced and rationally-considered discussion. It's clear to me that speaking about violence against a particular person or persons is far more likely to have negative consequences on balance, but discussion of the commission of crimes in general seems like something that should be weighed on a case-by-case basis.

In general, I prefer my moderators to have a fuzzy set of broad guidelines about what should be censored in which not deleting is the default position, and they actually have to decide that it is definitely bad before they take the delete action. The guidelines can be used to raise posts to the level of this consideration and influence their judgment on this decision, but they should never be able to say "the rules say this type of thing should be deleted!"

Replies from: Error, army1987, Eliezer_Yudkowsky, prase, jimrandomh
comment by Error · 2012-12-24T03:05:52.534Z · LW(p) · GW(p)

EY has publicly posted material that is intended to provoke thought on the possibility of legalizing rape (which is considered a form of violence)

I'm not sure how this is relevant; there's a good bit of difference between discussion of breaking a law and discussion of changing it. That said, I think I'm reading this differently than most in the thread. I'm understanding it as aimed against hypotheticals that are really "hypotheticals".

In answer to the question that was actually asked in the post, here is a non-obvious consequence: My impression of the atheist/libertarian/geek personspace cluster that makes up much of LW's readership is that they're generally hostile to anything that smells like conflating "legal" with "okay"; and also to the idea that they should change their behavior to suit the rest of the world. You might find you're making LW less off-putting to the mainstream at the cost of making it less attractive to its core audience. (But you might consider it worth that cost.)

As both a relatively new contributor and a member of said cluster, this policy makes me somewhat uncomfortable at first glance. Whether that generalizes to other potential new contributors, I cannot say. I present it as proof-of-concept only.

comment by A1987dM (army1987) · 2012-12-24T10:24:12.212Z · LW(p) · GW(p)

IAWYC, but that was a story set in the far future, with a discussion that makes clear (to me at least) that our present is so different that the author wouldn't ever even dream of suggesting anything remotely like that in our times. It isn't remotely similar to (what Poe's Law predicts people will get from) the recent suggestion about tobacco CEOs.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T02:15:33.739Z · LW(p) · GW(p)

EY has publicly posted material that is intended to provoke thought on the possibility of legalizing rape (which is considered a form of violence).

That's an... interesting way of putting it, where by "interesting" I mean "wrong". I could go off on how the idea is that there's particular modern-day people who actually exist and that you're threatening to harm, and how a future society where different things feel harmful is not that, but you know, screw it.

This is why I will be opposed to any sort of zero tolerance policy

The 'rules' do not 'mandate' that I delete anything. They hardly could. I'm just, before I start deleting things, giving people fair notice that this is what I'm considering doing, and offering them a chance to say anything I might have missed about why it's a terrible idea.

Replies from: wedrifid, MugaSofer
comment by wedrifid · 2012-12-24T03:00:54.065Z · LW(p) · GW(p)

That's an... interesting way of putting it, where by "interesting" I mean "wrong".

If you genuinely can't see how similar considerations apply to you personally publishing rape-world stories and the reasoning you explicitly gave in the post, then I suggest you have a real weakness in evaluating the consequences of your own actions on perception.

I could go off on how the idea is that there's particular modern-day people who actually exist and that you're threatening to harm, and how a future society where different things feel harmful is not that, but you know, screw it.

I approve of your Three Worlds Collide story (in fact, I love it). I also approve of your censorship proposal/plan. I also believe there is no need to self-censor that story (particularly at the position you were in when you published it). That said:

This kind of display of evident obliviousness and arrogant dismissal, rather than engagement or (preferably) just outright ignoring it, may well do more to make LessWrong look bad than half a dozen half-baked speculative posts by CronoDAS. There are times to say "but you know, screw it" and "where by interesting I mean wrong", but those times don't include when concern is raised about your legalised-rape-and-it's-great story in the context of your own "censor hypothetical violence 'cause it sounds bad" post.

comment by MugaSofer · 2012-12-24T22:33:13.174Z · LW(p) · GW(p)

That's an... interesting way of putting it, where by "interesting" I mean "wrong". I could go off on how the idea is that there's particular modern-day people who actually exist and that you're threatening to harm, and how a future society where different things feel harmful is not that, but you know, screw it.

So if I suggest killing people in the context of futurism, that's OK with you?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T23:12:05.776Z · LW(p) · GW(p)

This seems to me like a deliberate misunderstanding. But taking it at face value, a story in which violence is committed against targets not analogous to any present-day identifiable people, or which is not committed for any reasons obviously analogous to present-day motives, is fine. The Sword of Good is not advocating for killing wizards who kill orcs, although Dolf does get his head cut off. Betrayed-spouse murder mysteries are not advocating killing adulterers - though it would be different if you named the victim after a specific celebrity and depicted the killer in a sympathetic light. As much as people who don't like this policy might wish that it were impossible for anyone to tell the difference, so that they could thereby argue against the policy, it's not actually very hard to tell the difference.

Replies from: kodos96, MugaSofer
comment by kodos96 · 2012-12-24T23:23:38.601Z · LW(p) · GW(p)

As much as people who don't like this policy might wish that it were impossible for anyone to tell the difference, so that they could thereby argue against the policy, it's not actually very hard to tell the difference.

I didn't interpret CronoDAS's post as intending to actually advocate violence. I viewed it as really silly and kind of dickish, and a good thing that he ultimately removed it, but an actual call to violence? No. It was a thought experiment. His thought experiment was set in the present day, while yours was set in the far future, but other than that I don't see a bright line separating them.

It may not be very hard for you to tell the difference, since you wrote the policy, so you may very well have a clear bright line separating the two in your head, but we don't.

comment by MugaSofer · 2012-12-25T12:19:17.092Z · LW(p) · GW(p)

a story in which violence is committed against targets not analogous to any present-day identifiable people, or which is not committed for any reasons obviously analogous to present-day motives, is fine

I was unsure if people who do not currently exist might also be considered "identifiable real-world individuals", if discussed in the context of futurism. Thank you for clarifying.

comment by prase · 2012-12-24T14:11:51.280Z · LW(p) · GW(p)

If he believed that there was positive utility in considering such questions before, then he must consider them to have some positive utility now, and determining whether the negative utility outweighs that is always a difficult question.

He was in a different position then. Trying to gain a reputation for being an original thinker requires different public outputs than attempting to earn mainstream recognition for the organisation one is the head of.

comment by jimrandomh · 2012-12-26T21:29:14.174Z · LW(p) · GW(p)

EY has publicly posted material that is intended to provoke thought on the possibility of legalizing rape

This looks like a complete misinterpretation, albeit one I've seen several times. The context of this is the novella Three Worlds Collide. (Spoilers follow.) In that story, humans meet two races of aliens with incompatible values, the babyeaters and the superhappies. The superhappies demand to modify human values to be more compatible with their own; and the author's perspective is that this would be a very bad thing, worth sacrificing billions of lives to prevent. This is the focus of the story.

Then we find out that in this universe rape has been legalized, and it's little more than a throwaway remark. What are we to make of this? Well, it's a concrete example of why changing human values would be bad, which, given the overall story, seems like the obvious intended interpretation. But hey, male author mentioning rape - let's all be offended! His condemnation of it wasn't strong enough!

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-26T23:43:43.397Z · LW(p) · GW(p)

But hey, male author mentioning rape - let's all be offended! His condemnation of it wasn't strong enough!

Who said anything about being offended?

comment by [deleted] · 2012-12-24T10:09:44.903Z · LW(p) · GW(p)

Fun Exercise

Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad and that anyone talking about a proposed crime on the Internet fails forever as a criminal

Consider what would have been covered by this 250, 100 and 50 years ago.

Bonus: Consider what wouldn't have been covered by this 250, 100 and 50 years ago but would be today.

Replies from: Qiaochu_Yuan, ChristianKl
comment by Qiaochu_Yuan · 2012-12-24T11:35:32.808Z · LW(p) · GW(p)

I see the point you're trying to make, but I don't think it constitutes a counterargument to the proposed policy. If you were an abolitionist back when slavery was commonly accepted, it would've been a dumb idea to, say, yell out your plans to free slaves in the Towne Square. If you were part of an organization that thought about interesting ideas, including the possibility that you should get together and free some slaves sometime, that organization would be justified in telling its members not to do something as dumb as yelling out plans to free slaves in the Towne Square. And if Ye Olde Eliezere Yudkowskie saw you yelling out your plans to free slaves in the Towne Square, he would be justified in clamping his hand over your mouth.

Replies from: None
comment by [deleted] · 2012-12-24T12:18:32.917Z · LW(p) · GW(p)

It wouldn't be dumb to argue for the moral acceptability of freeing slaves (even by force), however.

Replies from: Qiaochu_Yuan, Multiheaded
comment by Qiaochu_Yuan · 2012-12-24T12:28:46.798Z · LW(p) · GW(p)

It wouldn't be dumb for an organization to decide that society at large might be willing to listen to them argue for the moral acceptability of freeing slaves, even by force. It would be dumb for an organization to allow its individual members to make this decision independently because that substantially increases the probability that someone gets the timing wrong.

Replies from: prase, Decius
comment by prase · 2012-12-24T13:53:37.921Z · LW(p) · GW(p)

Beware selective application of your standards. If the members can't be trusted with one type of independent decision, why can they be trusted with other sorts of decisions?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-24T23:10:39.546Z · LW(p) · GW(p)

Because the decision to initiate a particular kind of public discussion entails everyone else in the organization taking on a certain level of risk, and an organization should be able to determine what kinds of communal risk it's willing to allow its individual members to force on everyone else. There are jurisdictions where criminal incitement is itself a crime.

Replies from: prase
comment by prase · 2012-12-25T00:01:40.897Z · LW(p) · GW(p)

I can't say whether I agree or disagree until you make precise the meaning of the qualifiers "particular" and "certain". But my question was in any case probably directed a bit elsewhere: if the members shouldn't be free to write about a certain class of topics because they may misjudge how society at large would react, doesn't it imply that they shouldn't be free to write about anything, because they may misjudge what society at large might think? If the rationale is the one you give, then returning from abolitionists to LW, shouldn't the policy be "any post that is in conflict with LW's interest can be deleted" rather than the overly specific rule concerning violence and only violence?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-25T00:05:42.004Z · LW(p) · GW(p)

I can't say whether I agree or disagree until you make precise the meaning of the qualifiers "particular" and "certain".

"Criminal incitement" and "the risk of being arrested," then. In other time periods, substitute "blasphemy" and "the risk of being burned at the stake."

if the members shouldn't be free to write about a certain class of topics because they may misjudge how society at large would react

They shouldn't be free to write about certain topics with the name of their organization attached to that writing, which is the case here. They can write about anything they want anonymously and with no organization's name attached because that doesn't entail the other members of the organization taking on any risk.

If the rationale is the one you give, then returning from abolitionists to LW, shouldn't the policy be "any post that is in conflict with LW's interest can be deleted" rather than the overly specific rule concerning violence and only violence?

Sure.

comment by Decius · 2012-12-26T21:36:23.285Z · LW(p) · GW(p)

How does an organization make decisions independently of the members of the organization?

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-26T22:52:20.884Z · LW(p) · GW(p)

It doesn't. The distinction is between decisions that individual members make independently and decisions that individual members make communally.

If it helps, the underlying moral principle I'm working from here is "try to avoid making decisions that entail other people taking on risks without their consent."

Replies from: Decius
comment by Decius · 2012-12-27T01:28:49.916Z · LW(p) · GW(p)

Did they take on the risks when they entered the conspiracy, or do they only take on those risks when events beyond their control happen? It would be foolish to conspire with foolish or rash people, which is one reason why I don't.

comment by Multiheaded · 2012-12-27T12:07:46.806Z · LW(p) · GW(p)

But I thought you'd support American slavery on general reactionary grounds? That the slaves were just wonderfully happy and content until religiously-minded Abolitionist meddlers tried to teach them to read, to disrespect their masters, etc?

(semi-trolling)

comment by ChristianKl · 2012-12-25T02:19:29.509Z · LW(p) · GW(p)

Bonus:

Consider what's likely to be covered 50 years in the future.

Replies from: Eugine_Nier, None
comment by Eugine_Nier · 2012-12-26T03:12:19.847Z · LW(p) · GW(p)

For something like that, consider the algorithm you use to answer it. Then consider why the output of said algorithm should at all correlate with future social trends.

comment by [deleted] · 2012-12-25T10:17:54.625Z · LW(p) · GW(p)

I considered adding that too. :)

comment by [deleted] · 2012-12-23T23:51:07.797Z · LW(p) · GW(p)

Would my pro-piracy arguments be covered by this? What about my pro-coup d'état ones?

Replies from: None, Jabberslythe, army1987
comment by [deleted] · 2012-12-23T23:58:06.192Z · LW(p) · GW(p)

Possibly. I hope not. I'm all for mod action, but not at the expense of political diversity.

comment by Jabberslythe · 2012-12-24T06:12:35.981Z · LW(p) · GW(p)

I think piracy cases are pretty similar to marijuana cases (they are actually even less likely to be enforced), which he said won't be banned.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-24T07:54:07.133Z · LW(p) · GW(p)

I don't think Konkvistador was talking about software piracy.

Replies from: Jabberslythe
comment by Jabberslythe · 2012-12-24T18:01:55.090Z · LW(p) · GW(p)

Hahaha, whoops.

comment by A1987dM (army1987) · 2012-12-24T10:17:37.940Z · LW(p) · GW(p)

You mean copyright piracy or sea piracy?

Replies from: None
comment by [deleted] · 2012-12-24T10:20:09.600Z · LW(p) · GW(p)

Sea piracy obviously. What kind of a person do you think I am?!

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-24T11:12:24.917Z · LW(p) · GW(p)

As someone unfamiliar with your views, I can't tell whether this is sarcasm or not, especially because of the interrobang. Can you clarify? Is there anywhere on the internet where your views are concisely summarized? (Is it in any way associated with your real name?)

Replies from: None
comment by [deleted] · 2012-12-24T12:27:37.977Z · LW(p) · GW(p)

The levels can be hard to disambiguate, so I sympathize. I'll write my opinions out unironically. You can find the full arguments in my comment history (I can dig up links to them too).

  • I'm assuming you are familiar with the arguments for efficient charity and optimal employment? If not, I can provide citations and links. I don't think sea piracy as a means of funding efficient charity is obviously worse from a utilitarian perspective than a combo with many legal professions. It may or may not be justified; I'm leaning towards it being justified on the same utilitarian grounds as government taxation can be. If not, cheating on taxes to fund efficient charity is a pretty good idea. Some people's comparative advantage will lie in sea piracy.

  • Violating copyright on software or media products in the modern West is in general not a bad thing. But indiscriminately pirating everything may be bad.

In the grandfather comment I was aiming for ambiguity and humour.

Replies from: MBlume
comment by MBlume · 2012-12-24T18:45:26.572Z · LW(p) · GW(p)

I mean, assuming that sea piracy to fund efficient charity is good, media piracy to save money that you can give to efficient charity is just obviously good.

Replies from: None
comment by [deleted] · 2012-12-24T18:51:54.994Z · LW(p) · GW(p)

media piracy to save money that you can give to efficient charity

Is so incredibly obviously good that I'm mystified no one is promoting it. I think the main reason is that it is "illegal".

Replies from: FiftyTwo, army1987
comment by FiftyTwo · 2012-12-24T19:24:44.853Z · LW(p) · GW(p)

We often separate endorsing things from believing they are good, as endorsing them implies you would like them to be prevalent, which leads to collective action issues. (E.g., I think it is OK to occasionally take more than your share of the cake if you're hungry, but I wouldn't encourage it, as then there wouldn't be any cake left.)

comment by A1987dM (army1987) · 2012-12-25T01:44:26.828Z · LW(p) · GW(p)

Because few people actually spend much money on copyrighted stuff they could pirate instead these days, so it's just assumed that anyone trying to do efficient charity already has more money than if they had paid for all the copyrighted media they've consumed?

Replies from: None
comment by [deleted] · 2012-12-25T10:37:58.855Z · LW(p) · GW(p)

Efficient-charity folks are really serious about morality, though, and generally well off. They may have compartmentalized deontological beliefs that lead them to pay for the media they consume. I bet they are more likely to pay for copyrighted works than the average person.

And if you are going out to the movies or buying popular books off Amazon, you should be reminded to pirate more.

comment by ChristianKl · 2012-12-25T16:58:04.327Z · LW(p) · GW(p)

I am asking in advance if anyone has non-obvious consequences they want to point out or policy considerations they would like to raise.

I'm not sure what's obvious for you. In an environment without censorship, you don't endorse a post by not censoring it. If, however, you start censoring, you do endorse a post by letting it stand.

Your legal and PR obligations for those posts that LessWrong hosts get bigger if you make editorial censorship decisions.

Replies from: David_Gerard, Viliam_Bur
comment by David_Gerard · 2013-01-01T22:01:00.011Z · LW(p) · GW(p)

Your legal and PR obligations for those posts that LessWrong hosts get bigger if you make editorial censorship decisions.

AIUI this is legally true: CDA section 230, mere hosting versus moderation.

comment by Viliam_Bur · 2012-12-26T00:00:38.325Z · LW(p) · GW(p)

Is there any way out of this dilemma? For example, having a policy where the moderator flips a coin for each offending article or comment: heads = delete, tails = keep.

:D

Replies from: jkaufman
comment by jefftk (jkaufman) · 2012-12-27T18:30:14.161Z · LW(p) · GW(p)

While I don't know about the legality, practically what this does is add noise to the moderation signal. Posts that remain are still more likely to be ones that the moderator approves of, but might not be.

This is actually very similar to the current system, with the randomness of coin flipping substituted for the semi-randomness of what the moderator happens to see.
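To make the noise concrete, here is a minimal simulation sketch (the function name, post labels, and flagging predicate are all hypothetical illustrations, not anything from the actual site):

    import random

    def coin_flip_moderation(posts, is_offending, p_delete=0.5, seed=None):
        # Delete each post the moderator flags, with probability p_delete.
        # `is_offending` is a stand-in for the moderator's judgment.
        rng = random.Random(seed)
        survivors = []
        for post in posts:
            if is_offending(post) and rng.random() < p_delete:
                continue  # "heads": the flagged post is deleted
            survivors.append(post)  # "tails", or never flagged at all
        return survivors

    # Hypothetical example: flagged posts are labelled "bad".
    posts = ["ok-1", "bad-1", "ok-2", "bad-2", "bad-3"]
    print(coin_flip_moderation(posts, lambda p: p.startswith("bad")))

Every "ok" post always survives, while each flagged post survives about half the time, so a post's survival remains evidence of moderator approval, just weaker evidence than under deterministic deletion.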

comment by CronoDAS · 2012-12-23T23:15:21.459Z · LW(p) · GW(p)

My post was indeed inappropriate. I have used the "Delete" function on it.

comment by Suryc11 · 2012-12-24T07:04:39.192Z · LW(p) · GW(p)

I'm disappointed by EY's response so far in this thread, particularly here. The content of the post above did not in itself significantly dismay me, but upon reading what appeared to be a serious lack of any rigorous updating on EY's part in response to what I and many LWers seemed to think were valid concerns, my motivation to donate to the SI has substantially decreased.

I had originally planned to donate around $100 (starving college student) to the SI by the start of the new year, but this is now in question. (This is not an attempt at some sort of blackmail, just a frank response by someone who reads LW precisely to sift through material largely unencumbered by mainstream non-epistemic factors.) This is not to say that I will not donate at all, just that the warm fuzzies I would have received on donating are now compromised, and that I will have to purchase warm fuzzies elsewhere, instead of getting utilons and fuzzies all at once through the SI.

Replies from: drethelin
comment by drethelin · 2012-12-24T07:19:14.521Z · LW(p) · GW(p)

This is similar to how I feel. I was perfectly happy with his response to the incident but became progressively less happy with his responses to the responses.

Replies from: orthonormal, Eliezer_Yudkowsky
comment by orthonormal · 2012-12-26T05:28:33.083Z · LW(p) · GW(p)

There is a rare personality trait which allows a person to read and respond to hundreds of critical comments without compromising their perspicacity and composure. Luke, for instance, has demonstrated this trait; Eliezer hasn't (to the detriment of this discussion and some prior ones).

(I'd bet at 10-to-1 that Eliezer agrees with this assessment.)

Replies from: lukeprog, wedrifid
comment by lukeprog · 2013-02-02T00:31:10.914Z · LW(p) · GW(p)

There is a rare personality trait which allows a person to read and respond to hundreds of critical comments without compromising their perspicacity and composure. Luke, for instance, has demonstrated this trait

Not sure I totally agree. My LW comments may show retained composure in most cases, but I can think of two instances in the past few months in which I became (mildly) emotional in SI meetings in ways that disrupted my judgment until after I had cooled down. Anna can confirm, as she happens to have been present for both meetings. (Eliezer could also confirm if he had better episodic memory.) The first instance was a board meeting at which we discussed different methods of tracking project expenses; the second was at a strategy meeting which Anna compared to a Markov chain.

Anyway, I'm aware of people who are better at this than I am, and building this skill is one of my primary self-improvement goals at this time.

Replies from: orthonormal
comment by orthonormal · 2013-02-03T16:02:15.487Z · LW(p) · GW(p)

I appreciate you sharing this.

Keeping one's composure in person and keeping one's composure on the Internet are distinct aptitudes (and only somewhat correlated, as far as I can tell), and it still looks to me like you've done well at the latter.

comment by wedrifid · 2012-12-26T06:01:31.884Z · LW(p) · GW(p)

There is a rare personality trait which allows a person to read and respond to hundreds of critical comments without compromising their perspicacity and composure. Luke, for instance, has demonstrated this trait; Eliezer hasn't (to the detriment of this discussion and some prior ones).

There is another personality trait (or skill) that allows one to be comfortable with acknowledging areas of weakness and delegating to people more capable. Fortunately, Eliezer was able to do this with respect to managing SIAI. He seems to have lost some of that capability or awareness in recent times.

Replies from: hairyfigment
comment by hairyfigment · 2012-12-27T07:04:12.358Z · LW(p) · GW(p)

Quite possibly. I'd still oppose putting you in charge of the website. :)

Replies from: wedrifid
comment by wedrifid · 2012-12-28T05:11:42.942Z · LW(p) · GW(p)

Quite possibly. I'd still oppose putting you in charge of the website.

Irrelevant and unnecessary. Particularly since I happen to have the self-awareness in question, know that becoming Luke would be exhausting to me, and so would immediately hand off the responsibility. What I wouldn't do is go about acting how I naturally wish to act and be unable to comprehend why my actions have the consequences that they inevitably have.

:)

Why are you smiling? This makes your remark all the more objectionable (inasmuch as it indicates either wit or rapport, neither of which is present).

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T19:46:42.849Z · LW(p) · GW(p)

Punishment received! Brain has learned to stop responding to responses.

comment by NancyLebovitz · 2012-12-24T01:17:07.690Z · LW(p) · GW(p)

Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored on the grounds that it makes LW look bad

I'm dubious about this because laws can change. I'm also sure I don't have a solid grasp of which laws can be enforced against middle-class people, but I do know that they aren't all like laws against kidnapping. For example, doctors can get into trouble for prescribing "too much" pain medication.

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-25T00:43:55.081Z · LW(p) · GW(p)

BTW, I know it's not terribly rare for anti-marijuana laws to be enforced against middle-class people where I am; so he should have either specified “against middle-class people in Northern California” (but how is someone from (say) rural Poland supposed to know?) or used a different example, such as copyright infringement for personal use (hoping that no country actually enforces that non-negligibly often).

EDIT: A better criterion, one that would include laws against kidnapping but not laws against marijuana or copyright infringement (though by far not a perfect one), in the context of ‘suggesting breaking those laws publicly on the internet would look bad’, would be ‘laws that a supermajority of internet users aged between 18 and 35 and with IQ above 115 would likely not find ridiculous’. (Though I might be excessively Generalizing From One Example when thinking about what other people would think of anti-marijuana laws or copyright laws.)

Replies from: kodos96
comment by kodos96 · 2012-12-25T01:08:00.183Z · LW(p) · GW(p)

BTW, I know it's not terribly rare for anti-marijuana laws to be enforced against middle-class people where I am; so he should have either specified “against middle-class people in Northern California”

Also, even in California, and even for middle-class people, you'll get marijuana laws enforced against you if you manage to piss off the wrong cop/prosecutor.

comment by [deleted] · 2012-12-24T05:30:00.514Z · LW(p) · GW(p)

Just because I think responses to this post might not have been representative:

I think this is a good policy.

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2012-12-24T06:17:06.779Z · LW(p) · GW(p)

I also agree with this policy, and feel that many of the raised or implied criticisms of it are mostly motivated by an emotional reaction against censorship. The points do have some merit, but their significance is vastly overstated. (Yes, explicit censorship of some topics does shift the Schelling fence somewhat, but suggesting that violence is such a slippery topic that next we'll be banning discussion of gun control and taxes? That's just being silly.)

Replies from: kodos96
comment by kodos96 · 2012-12-24T06:36:28.179Z · LW(p) · GW(p)

You may think it's silly; others do not. Even if Eliezer has no intention of interpreting "violence" that way, how do we know that? Ambiguity about what is and is not allowed chills far more speech than the policy author may have originally intended.

Also, the policy is not limited to violence; it covers anything illegal (and commonly enforced against middle-class people). What the hell does that even mean? Illegal according to whom? Under what jurisdiction? What about conflicts between state/federal/constitutional law? I mean, don't get me wrong, I think I have a pretty good idea what Eliezer meant by that, but I could well be wrong, and other people will likely have different ideas of what he meant. Again, ambiguity is what ends up chilling speech, far more broadly than the original policy author may have actually intended.

And I will again reiterate what I consider to be the most slam-dunk argument against this policy: in the incident that provoked this policy change, the author of the offending post voluntarily removed it, after discussion convinced him it was a bad idea. Self-policing worked! So what exactly is the necessity for any new policy at all?

Replies from: Kaj_Sotala, DanArmak
comment by Kaj_Sotala · 2012-12-24T07:43:26.839Z · LW(p) · GW(p)

I agree that your points about ambiguity have some merit, but I don't think there's much of a risk of free speech being chilled more than was intended, because there will be people who test these limits. Some of their posts will be deleted, some of them will not. And then people can see directly roughly where the intended line goes. The chilling effect of censorship would be a more worrying factor if the punishment for transgressing was harsher: but so far Eliezer has only indicated that at worst, he will have the offending post deleted. That's mild enough that plenty of people will have the courage to test the limits, as they tested the limits in the basilisk case.

As for self-policing, well, it worked once. But we've already had trolls in the past, and the userbase of this site is notoriously contrarian, so you can't expect it to always work - if we could just rely on self-policing, we wouldn't need moderators in the first place.

comment by DanArmak · 2012-12-26T20:13:56.044Z · LW(p) · GW(p)

What about conflicts between state/federal/constitutional law?

What about *gasp* whole other countries outside the US?

Replies from: kodos96
comment by kodos96 · 2012-12-26T20:50:20.903Z · LW(p) · GW(p)

Yes, that was covered by the previous question: "Under what jurisdiction?"

comment by ChristianKl · 2012-12-25T02:16:06.170Z · LW(p) · GW(p)

One of the most challenging moderation decisions I had to make at another forum was whether someone who argued the position "Homosexuality is a crime. In my country it's punishable by death. I like the laws of my country" should keep his right to free speech. I think the author of the post was living in Uganda.

The basic question is: should someone who has been raised in Uganda feel free to share his moral views, even if those views are offensive to Western ears and people might die because of them?

If you want to have an open discussion about morality, I think it's very valuable to have people who weren't raised in Western society participating openly in that discussion. I don't think LessWrong is supposed to be a place where someone from Uganda is prevented from arguing for the moral views he believes in.

When it comes to politics, communists frequently argue for the necessity of a revolution. A revolution is an illegal act that includes violence against real people. Moldbug frequently argues for the necessity of a coup d'état.

This policy allows for censoring both the political philosophy of communism and the political philosophy of Moldbuggianism. Even though I disagree with both political philosophies, I think they should stay within the realm of discourse on LessWrong.

A community which has the goal of finding the correct moral system shouldn't ban ideas because they conflict with the basic Western moral consensus.

TDT suggests that one should push the fat man. It's a thought exercise, and it's easy to say "I would push the fat man". In a discussion about pushing fat men in front of trolleys, I think it's valid to switch from trolley cars to real-world examples.

Discussion of torture is similar. If you say "Policemen should torture kidnappers to get the location where the kidnapper hid the victim" you are advocating a crime against real people.

Corporal punishment is illegal violence.

Given the examples I listed in this post, in which cases would you choose to censor? Do you think you could articulate a public criterion for which cases you would censor and which you would allow?

Replies from: army1987, Eugine_Nier
comment by A1987dM (army1987) · 2012-12-25T18:00:51.491Z · LW(p) · GW(p)

TDT suggests that one should push the fat man.

Does it? CDT most certainly does, but...

Replies from: ChristianKl
comment by ChristianKl · 2012-12-25T18:26:33.966Z · LW(p) · GW(p)

Okay, you can argue about whether it does. Regardless, that's an argument it should be possible to have in depth. And it should be possible to exchange the trolley cars for more real-world examples.

comment by Eugine_Nier · 2012-12-26T03:26:48.903Z · LW(p) · GW(p)

Discussion of torture is similar. If you say "Policemen should torture kidnappers to get the location where the kidnapper hid the victim" you are advocating a crime against real people.

No, you're advocating changing the law. It's not a crime once/if the law is changed.

Corporal punishment is illegal violence.

Depends on where you are.

Replies from: ChristianKl
comment by ChristianKl · 2012-12-26T12:53:23.470Z · LW(p) · GW(p)

No you're advocating changing the law. It's not a crime once/if the law is changed.

No, that sentence doesn't include the word 'law'. It's a valid position to argue that a policeman has a moral duty to do everything he can to save a life, even when that involves breaking the law.

comment by DataPacRat · 2012-12-24T12:35:23.621Z · LW(p) · GW(p)

I currently find myself tempted to write a new post for Discussion, on the general topic of "From a Bayesian/rationalist/winningest perspective, if there is a more-than-minuscule threat of political violence in your area, how should you go about figuring out the best course of action? What criteria should you apply? How do you figure out which group(s), if any, to try to support? How do you determine what the risk of political violence actually is? When the law says rebellion is illegal, that preparing to rebel is illegal, that discussing rebellion even in theory is illegal, when should you obey the law, and when shouldn't you? Which lessons from HPMoR might apply? What reference books on war, game-theory, and history are good to have read beforehand? In the extreme case... where do you draw the line between choosing to pull a trigger, or not?".

If it were simply a bad idea to have such a post, then I'd expect to take a karma hit from the downvotes and take it as a lesson learned. However, I also find myself unsure whether such a post would pass muster under the new deletionist criteria, and so I'm not sure whether I would be able to air that idea at all - let alone whatever good ideas might result if such a thread was, in fact, something that interested other LessWrongers.

This whole thread-idea seems to fall squarely in the middle, between the approved 'hypothetical violence near trolleys' and the banned 'discussing violence against real groups'. Would anyone be interested in helping me put together a version of such a post that would generate the most constructive discourse possible? Or, alternatively, would somebody like to clarify that no version of such a post would pass muster under the new policy?

Replies from: MixedNuts
comment by MixedNuts · 2012-12-25T16:48:35.224Z · LW(p) · GW(p)

Do you have answers to those questions? Just "Hey, this problem exists" has not historically been shown to be productive.

Replies from: DataPacRat
comment by DataPacRat · 2012-12-25T19:40:51.597Z · LW(p) · GW(p)

I have /a/ set of answers, based on what I've learned so far of economics, politics, human nature, and various bits of evidence. However, I peg my confidence levels for at least some of those answers as low enough that I could easily be persuaded to change my mind, especially by the well-argued points that tend to crop up around here.

comment by CronoDAS · 2012-12-23T23:42:17.932Z · LW(p) · GW(p)

The "interesting" thing about violence is that it's one of the few ways that a relatively small group of (politically) powerless people with no significant support can cause a big change in the world. However, the change rarely turns out the way the small group would hope; most attempts at political violence by individuals or small groups fail miserably at achieving the group's aims.

Replies from: BrassLion
comment by BrassLion · 2012-12-24T06:33:53.517Z · LW(p) · GW(p)

Non-violent action has a reasonable track record, considering how rarely it's been used in an organized way by the oppressed. The track record is particularly good in the first world, where people care about appearances.

Replies from: None
comment by [deleted] · 2012-12-25T00:16:10.764Z · LW(p) · GW(p)

I can't think of any cases. Can you give some specific examples?

Replies from: BrassLion
comment by BrassLion · 2012-12-25T07:07:21.793Z · LW(p) · GW(p)

Gandhi and Martin Luther King, Jr. are the headliners, as usual. Both used pacifism as a tool against regimes that, in the end, needed to think of themselves as decent people, and that had to bow to political pressure both at home and abroad. There are far more examples, though, that people don't think about - when you're looking for social change in the modern first world, non-violence is the default. Women's rights were secured without violence. Black civil rights in America were gained through non-violent activists like King and through the courts - there were violent groups like the Black Panthers, but in the end King's approach worked and violence... just didn't. Gay rights might be another example, although gays are marginalized but not powerless, since they can show up anywhere - still, the gay rights movement has been well organized, has never used violence, and has brought the first world to the point where full equality for homosexuals seems inevitable in about a generation.

Replies from: None, JonathanLivengood, MixedNuts, MC_Escherichia
comment by [deleted] · 2012-12-25T08:05:13.165Z · LW(p) · GW(p)

there were violent groups like the Black Panthers, but in the end King's approach worked and violence... just didn't.

Interesting. I've also seen analyses that argue that Gandhi and MLK were substantially helped by being backed by violent terrorist groups. Of course those analyses don't explain female and gay rights.

I don't have the history and poli sci qualifications to judge the factors involved, but thanks for your take.

comment by JonathanLivengood · 2012-12-26T18:59:39.050Z · LW(p) · GW(p)

I'm not sure what you count as violence, but if you look at the history of the suffrage movement in Britain, you will find that while the movement started out as non-violent, it escalated to include campaigns of window-breaking, arson, and other destruction of property. (Women were also involved in many violent confrontations with police, but it looks like the police always initiated the violence. To what degree women responded in kind, and whether that would make their movement violent, is unclear to me.) Historians usually describe the vandalism campaigns as violent, militant, or both, though maybe you meant to restrict attention to violence against persons. Of course, the women agitating for the vote suffered much more violence than they inflicted.

comment by MixedNuts · 2012-12-25T16:11:40.003Z · LW(p) · GW(p)

How violent is violence? Stonewall was a throw-bricks-type riot, but there were no assassinations or the like. Also there were some violent feminists, but as you say, Black Panthers.

comment by MC_Escherichia · 2012-12-25T15:22:01.031Z · LW(p) · GW(p)

the gay rights movement ... never used violence

Never say never. http://en.wikipedia.org/wiki/Stonewall_riots

comment by drethelin · 2012-12-23T21:08:40.817Z · LW(p) · GW(p)

Got it. Posts discussing our plans for crimes will herewith be kept to the secret boards only.

Replies from: David_Gerard, timtyler, Kawoomba
comment by David_Gerard · 2012-12-23T22:04:15.414Z · LW(p) · GW(p)

And the mailing lists, apparently.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T02:24:07.398Z · LW(p) · GW(p)

The Surgeon General recommends that you not discuss criminal activities, with respect to laws actually enforced, on any mailing list containing more than 5 people.

Replies from: None, David_Gerard, AndrewH
comment by [deleted] · 2012-12-24T16:36:10.889Z · LW(p) · GW(p)

Why 5?

Replies from: Waffle_Iron
comment by Waffle_Iron · 2012-12-25T01:29:10.682Z · LW(p) · GW(p)

Have you ever tried to get a group of more than 5 people to keep a secret?

Replies from: None
comment by [deleted] · 2012-12-25T10:39:09.808Z · LW(p) · GW(p)

Have you ever tried to get a group of 4 people to keep a secret?

I'm just wondering where the particular number comes from. Three people can keep a secret if two are dead and all that...

comment by David_Gerard · 2013-01-01T22:05:27.979Z · LW(p) · GW(p)

I was thinking of the London list, and this thread about a drug which isn't actually illegal in the UK (it's prescription-restricted, but not at all illegal to possess), but which was being sold in public in a pub as if it were. I mean, WHAT. There's stupidity that isn't actually illegal but is nevertheless blithering.

Replies from: FiftyTwo
comment by FiftyTwo · 2013-01-01T23:38:24.391Z · LW(p) · GW(p)

That seems sensible enough: you are allowed the drug if a competent expert has determined it is in your best interests to have it, but since you are not yourself qualified to make that decision, you can't transfer ownership to others.

comment by AndrewH · 2012-12-24T02:57:44.008Z · LW(p) · GW(p)

Intriguing: is this actual paraphrasing of a US "The Surgeon General"? I can imagine it being something someone in high office might say.

Replies from: katydee, Alicorn
comment by katydee · 2012-12-24T11:37:32.170Z · LW(p) · GW(p)

The Surgeon General is someone who issues national health recommendations. The implication of Eliezer's post is that discussing criminal activity may be hazardous to your health.

comment by Alicorn · 2012-12-24T03:04:12.975Z · LW(p) · GW(p)

We have a The Surgeon General, but he recommends things about smoking and whatnot; I'm pretty sure he doesn't issue warnings about mailing lists.

comment by timtyler · 2012-12-24T02:52:42.677Z · LW(p) · GW(p)

I believe the traditional structure is a clandestine cell system.

comment by Kawoomba · 2012-12-23T21:13:16.525Z · LW(p) · GW(p)

Back in line with you!

comment by pleeppleep · 2012-12-23T22:42:32.716Z · LW(p) · GW(p)

Deleting comments for being perceived as dangerous might get in the way of conversation. I think that if we're worried about how the site looks to outsiders then it's probably only necessary to worry about actual posts. Nobody expects comments to be appropriate on the internet, so it probably doesn't hurt us that much.

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-23T22:56:35.468Z · LW(p) · GW(p)

It was a top-level post (though one in Discussion) he was thinking about.

Replies from: pleeppleep
comment by pleeppleep · 2012-12-23T23:34:04.243Z · LW(p) · GW(p)

I know, but he said that the suggested policy change would include comments.

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-24T10:28:48.789Z · LW(p) · GW(p)

That's the usual Yudkowskian overreaction, which he will likely get tired of implementing within a couple of years or less.

Replies from: pleeppleep
comment by pleeppleep · 2012-12-24T14:35:05.294Z · LW(p) · GW(p)

.......

But the site's only been around for a couple of years in the first place

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-24T21:34:43.839Z · LW(p) · GW(p)

Well, when saying that, I was thinking of That Thing that happened in July 2010 about which EY appears to no longer have as many bees in his bonnet.

Replies from: MixedNuts
comment by MixedNuts · 2012-12-25T16:04:51.183Z · LW(p) · GW(p)

Does he appear so?

Replies from: army1987
comment by A1987dM (army1987) · 2012-12-25T17:57:57.753Z · LW(p) · GW(p)

On a couple of occasions in recent months I saw people discussing That Thing openly, and I myself encouraged them to rot13 stuff. And very recently there was a discussion which could be considered relevant, in which I made non-rot13ed comments. Now maybe EY just didn't see those, but maybe he just didn't care too much.

comment by [deleted] · 2012-12-23T21:56:51.335Z · LW(p) · GW(p)

Yes, a post of this type was just recently made.

Well then.

I've heard that firemen respond to everything not because they actually have to, but because it keeps the drill sharp, so to speak. The same idea may apply to mod action... (in other words, MOAR "POINTLESS" CENSORSHIP)

More seriously, does this policy apply to things like gwern's hypothetical bombing of Intel?

Replies from: RomeoStevens, timtyler, MugaSofer
comment by RomeoStevens · 2012-12-23T22:26:36.088Z · LW(p) · GW(p)

gwern specifically argued that small-scale terrorism would be ineffective.

Replies from: printing-spoon, TheOtherDave
comment by printing-spoon · 2012-12-24T01:27:40.610Z · LW(p) · GW(p)

Implying that whether his post should be censored hinges on the conclusion reached and not just the topic?

Replies from: RomeoStevens
comment by RomeoStevens · 2012-12-24T01:28:57.463Z · LW(p) · GW(p)

discussion of violence by state actors is quite a bit different from discussion of individual violence.

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-12-24T04:08:20.404Z · LW(p) · GW(p)

discussion of violence by state actors is quite a bit different from discussion of individual violence.

Sure, but why is that a difference that makes a difference?

Replies from: MixedNuts
comment by MixedNuts · 2012-12-25T17:15:35.333Z · LW(p) · GW(p)

Individuals are somewhat likely to become violent because of Internet sophistry. If big oils (or likely future big oils) become violent because of Internet sophistry, we have bigger problems.

comment by TheOtherDave · 2012-12-23T23:25:43.231Z · LW(p) · GW(p)

I suppose the next question is whether it would apply to things like comments in response to gwern's hypothetical bombing of Intel arguing that his conclusion is incorrect.

Given the stated principles governing the new censorship policy, I think the answer would be "yes, of course."

Replies from: None
comment by [deleted] · 2012-12-23T23:50:04.652Z · LW(p) · GW(p)

Let's not delete posts for disagreeing on uncomfortable empirical questions.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-12-23T23:53:41.889Z · LW(p) · GW(p)

I don't think the policy EY is proposing involves banning people, just deleting the stuff we write that violates policy.

Replies from: None
comment by [deleted] · 2012-12-23T23:54:42.215Z · LW(p) · GW(p)

fixed, thanks

comment by timtyler · 2012-12-24T02:38:50.245Z · LW(p) · GW(p)

More seriously, does this policy apply to things like gwern's hypothetical bombing of intel?

It looks as though that was on gwern.net - outside the zone.

Replies from: None, gwern
comment by [deleted] · 2012-12-24T02:40:47.231Z · LW(p) · GW(p)

It was in Discussion too.

Replies from: Epiphany
comment by Epiphany · 2012-12-24T09:08:33.568Z · LW(p) · GW(p)

If you're talking about his Slowing Moore's Law: Why You Might Want To and How You Would Do It, it's not there anymore.

I didn't thoroughly read the new version on his site, so there's a chance that it still links to an article that will be confused for a pro-terrorism piece (that was the problem with the previous version) or that sounds like it's advocating governments attacking chip fabs.

comment by gwern · 2012-12-24T23:37:47.139Z · LW(p) · GW(p)

I posted a draft here. A while after the initial discussion, because I had expanded it massively, I deleted the draft version so readers of that post had no choice but to go to the updated master copy. (I also did this for all similar posts like my Melatonin post, for similar reasons.)

comment by MugaSofer · 2013-01-01T20:47:07.676Z · LW(p) · GW(p)

As worded, yes. However, I suspect it wouldn't have been enforced in that case.

comment by [deleted] · 2012-12-24T05:44:23.760Z · LW(p) · GW(p)

violence against real people.

Abortion, euthanasia and suicide fit that description, some say. For them and those who disagree with them this proposal may have unforeseen consequences. Edit: all three are illegal in parts of the world today.

comment by BrassLion · 2012-12-24T06:48:53.447Z · LW(p) · GW(p)

I think this is an overreaction to (deleted thing) happening, and the proposed policy goes too far. (Deleted thing) was neither a good idea nor good to talk about in this public forum, but it was straight-out advocating violence, in an obvious and direct way, against specific, real people who aren't in some hated group. That's not okay, and it's not good for the community, for the reasons you (EY) said. But the proposed standard is too loose, and it's going to have a chilling effect on some fringe discussion that's probably going to be useful in teasing out some of the consequences of ethics (which is where this stuff comes up). Having this be a guideline rather than a hard rule seems good, but it still seems like we're scarring on the first cut, as it were.

I think we run the risk of adopting a censorship policy that makes it difficult to talk about or change the censorship policy, which is also a really terrible idea.

I agree with the general idea of protecting LW's reputation to outsiders. After all, if we're raising the sanity waterline (rather than researching FAI), we want outsiders to become insiders, which they won't do if they think we're crazy.

"No advocating violence against real world people, or opening a discussion on whether to commit violence on real world people" seems safe enough as a policy to adopt, and specific enough to not have much of a chilling effect on discussion. We ought to restrict what we talk about as little as possible, in the absence of actual problems, given that any posts we don't want here can be erased by a few keystrokes from an admin.

comment by Nominull · 2012-12-24T03:29:35.463Z · LW(p) · GW(p)

Censorship is particularly harmful to the project of rationality, because it encourages hypocrisy and the thinking of thoughts for reasons other than that they are true. You must do what you feel is right, of course, and I don't know what the post you're referring to was about, but I don't trust you to be responding to some actual problematic post instead of self-righteously overreacting. Which is a problem in and of itself.

Replies from: kodos96, twanvl
comment by kodos96 · 2012-12-24T04:17:08.032Z · LW(p) · GW(p)

You must do what you feel is right, of course

Passive-aggression level: Obi-Wan Kenobi

Replies from: gjm
comment by gjm · 2012-12-24T11:06:20.222Z · LW(p) · GW(p)

I don't see that that's passive-aggressive when it's accompanied by a clear and explicit statement that Nominull thinks Eliezer is wrong and why. What would be passive-aggressive is just saying "Well, I suppose you must do what you feel is right" and expecting Eliezer to work out that disapproval is being expressed and what sort.

Replies from: kodos96
comment by kodos96 · 2012-12-24T16:26:05.598Z · LW(p) · GW(p)

I didn't mean it as a criticism, just that my brain pattern-matched his choice of words and read it in Alec Guinness's voice.

comment by twanvl · 2012-12-24T12:37:25.757Z · LW(p) · GW(p)

because it encourages hypocrisy and the thinking of thoughts for reasons other than that they are true

In particular, this comment seems to suggest that EY considers public opinion to be more important than truth. Of course this is a really tough trade-off to make. Do you want to see the truth no matter what impact it has on the world? But I think this policy vastly overestimates the negative effect of posts on abstract violence. First of all, the people who read LW are hopefully rational enough not to run out and commit violence based on a blog post. Secondly, there is plenty of more concrete violence on the internet, and that doesn't seem to have too many bad direct consequences.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-12-26T00:36:03.765Z · LW(p) · GW(p)

the people who read LW are hopefully rational enough not to run out and commit violence based on a blog post

Anyone can read LW. There is no IQ test, rationality test, or a mandatory de-biasing session before reading the articles and discussions.

I am not concerned about someone reading LW and committing violence. I am concerned about someone committing violence after coincidentally having read LW the day before (perhaps just one article randomly found via Google), and the police collecting a list of recently visited websites, and a journalist looking at that list and then looking at some articles on LW.

Shortly, we don't live on a planet full of rationalists. It is a fact of life that anything we do can be judged by any irrational person who notices. Sure, we can't make everyone happy. But we should avoid some things that can predictably lead to unnecessary trouble.

comment by Pentashagon · 2012-12-24T06:01:49.732Z · LW(p) · GW(p)

Do wars count? I find it strange, to say the least, that humans have strong feelings about singling out an individual for violence but give relatively little thought to dropping bombs on hundreds or thousands of nameless, faceless humans.

Context matters, and trying to describe an ethical situation in enough detail to arrive at a meaningful answer may indirectly identify the participants. Should there at least be an exception for notorious people or groups who happen to still be living, rather than being relegated to the historical "bad guys" who are almost universally accepted to have been worth killing? I can think of numerous examples, living and dead, who were or are the targets of state-sponsored violence, some with fairly good reason.

comment by MixedNuts · 2012-12-23T22:30:17.910Z · LW(p) · GW(p)

Your generalization is averaging over clairvoyance. The whole purpose of discussing such plans is to reduce uncertainty over their utility; you haven't proven that the utility gain of a plan turning out to be good must be less than the cost of discussing it in public.
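
A minimal sketch of that first point in expected-utility terms (the symbols $p$, $G$, and $C$ below are illustrative placeholders, not anything from the original comment):

$$
\mathbb{E}[U(\text{discuss})] = p \cdot G - C
$$

where $p$ is the probability that the plan is actually good, $G$ is the gain from discovering that through discussion, and $C$ is the cost of discussing it in public. The generalization being objected to amounts to asserting $p \cdot G < C$ for every plan, which is precisely what has not been shown.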

Does the policy apply to violence against oneself? (I'm guessing not, since it's not illegal.) Talking about it is usually believed to reduce risk.

There's a scarcity effect whereby people believe pro-violence arguments to be stronger, since if they weren't convincing they wouldn't be censored. Not sure how strong it is, likely depends on whether people drop the topic or say things like "I'm not allowed to give more detail, wink wink nudge nudge".

It's a common policy, so there don't seem to be any slippery-slope problems.

We're losing Graham cred by being unwilling to discuss things that make us look bad. Probably a good thing: we're getting more mainstream.

Replies from: RomeoStevens, DanArmak
comment by RomeoStevens · 2012-12-24T00:13:54.065Z · LW(p) · GW(p)

since when is violence against oneself or even discussion of violence against oneself fully legal?

Replies from: wedrifid, MixedNuts
comment by wedrifid · 2012-12-24T00:20:31.975Z · LW(p) · GW(p)

since when is violence against oneself or even discussion of violence against oneself fully legal?

In most times and places throughout history, including all countries whose legal systems I am familiar with.

Replies from: Caspian
comment by Caspian · 2012-12-24T00:34:50.561Z · LW(p) · GW(p)

Suicide in particular is often illegal.

ETA: possibly this statement of mine was outdated.

Replies from: wedrifid
comment by wedrifid · 2012-12-24T06:51:06.918Z · LW(p) · GW(p)

Suicide in particular is often illegal.

Either you or some of the people reading your comment seem to have been misled into thinking that, because one thing that is violence against oneself is illegal, it can be generalised that violence against oneself, or even discussion of violence against oneself, is illegal. That seems to be a rather blatant confusion.

Replies from: kodos96
comment by kodos96 · 2012-12-24T07:09:45.406Z · LW(p) · GW(p)

I'm not sure what RomeoStevens meant about discussion of violence against oneself being illegal, but aside from that aspect, his point is entirely valid. You seem to be suggesting that we're generalising from "suicide is illegal" to "any form of violence against oneself is illegal". We're not. We're simply noting that suicide is one type of violence against oneself, and it's illegal.

Your statement expands to "In most times and places throughout history, including all countries whose legal systems I am familiar with, violence against oneself is fully legal." Unless you're familiar only with very odd legal systems, that seems to be a rather blatant confusion.

Replies from: wedrifid
comment by wedrifid · 2012-12-24T07:23:56.288Z · LW(p) · GW(p)

but aside from that aspect, his point is entirely valid

No. MixedNuts's point. RomeoStevens' reply was confused and mistaken. Unfortunately Caspian has misled you about the context.

We're simply noting that suicide is one type of violence against onself, and it's illegal.

That was my original impression, and why I refrained from downvoting him. Until, that is, it became apparent that he and some readers (evidently yourself included) believe that his statement of trivia in some way undermines the point made by MixedNuts and supported by myself, or supports RomeoStevens' ungrammatical rhetorical interjection.

Replies from: kodos96
comment by kodos96 · 2012-12-24T07:40:43.427Z · LW(p) · GW(p)

I had read the entire context, and re-read it just now to make sure I hadn't missed anything. You're correct that RomeoStevens' reply doesn't really undermine MixedNuts' point, and is therefore "trivia". But it's nonetheless correct trivia (modulo the above-mentioned caveat) and your refutation of it is therefore quite confusing.

But it's pointless to continue arguing this trivial point, as it's irrelevant to the thread topic, except in the meta sense that these kinds of pointless semantic debates will be the inevitable result of implementing this extremely ill-advised and poorly thought-through censorship policy.

comment by MixedNuts · 2012-12-24T09:20:33.892Z · LW(p) · GW(p)

What are you thinking of? Non-assisted suicide that doesn't put third parties in danger is legal most places (exceptions: India, Singapore, North Korea, Virginia). Self-injury is legal in the US at least. Discussion of suicide is allowed as long as it's even slightly more hypothetical than "I intend to kill myself in the near future". Discussion of self-injury is AFAIK completely legal (in the US?).

Replies from: RomeoStevens, DanArmak
comment by RomeoStevens · 2012-12-24T11:15:15.569Z · LW(p) · GW(p)

My understanding has always been that self-harm, or plausible discussion of self-harm, in the US leads to a loss of autonomy, in that you can be diagnosed with a mental illness and lose access to things like voting, driving, firearms, etc. (depending on the diagnosis).

Replies from: MixedNuts
comment by MixedNuts · 2012-12-24T12:45:15.426Z · LW(p) · GW(p)

Trigger warning for, obviously, self-harm.

There's a huge chasm between a mental illness diagnosis (which self-harm is very likely to cause, especially in the US, where you need a diagnosis other than "ain't quite right - not otherwise specified" for insurance) and actual repercussions. Members of online support groups report that their psychiatrists either treat self-injury like any other symptom (asking about it, describing decreases as good but not praiseworthy) or recommend they stop but do not enforce it. If it gets life-threatening it's treated like suicide, but that almost never comes up.

comment by DanArmak · 2012-12-26T20:08:10.350Z · LW(p) · GW(p)

What does it mean to make suicide illegal, anyway? You can't punish the perpetrator; they're dead. You can punish their relatives by e.g. taking away their inheritance, but someone who plans their suicide in advance can circumvent that by transferring ownership of the important things before killing themselves.

Replies from: MixedNuts, Decius
comment by MixedNuts · 2012-12-26T21:57:04.592Z · LW(p) · GW(p)

Punish attempts. Punish in ways that are avoidable (e.g. inheritance) but work for insufficiently planned suicides. If there's a state religion, predict punishment in the afterlife. Punish relatives directly (North Korea does that).

comment by Decius · 2012-12-26T21:24:57.912Z · LW(p) · GW(p)

It means that you prosecute failed suicides as crimes.

Replies from: DanArmak
comment by DanArmak · 2012-12-26T23:31:28.127Z · LW(p) · GW(p)

Is there good data on whether this is effective as deterrence? I don't expect it could be effective as punishment: I would expect it to increase despair and poverty, and so to increase chances of recurrent attempts.

comment by DanArmak · 2012-12-26T20:05:27.152Z · LW(p) · GW(p)

Probably a good thing, we're getting more mainstream.

I'm not sure if that's a good thing...

Replies from: MixedNuts
comment by MixedNuts · 2012-12-26T22:10:50.306Z · LW(p) · GW(p)

Once you have so many smart contrarians that you run into sharp diminishing returns trying to recruit more, you want to attract smart non-contrarians. To pick a very silly example, a group of mostly Gentiles musing aimlessly on the ethics of genociding Jews (because it's a local point of pride to play with any idea no matter how evil or stupid) is going to have a hard time attracting Jews.

Replies from: DanArmak
comment by DanArmak · 2012-12-26T23:35:26.508Z · LW(p) · GW(p)

Once you have so many smart contrarians that you run into sharp diminishing returns trying to recruit more

Why are there diminishing returns? Because too many smart contrarians cannot coexist? Because we ran out of smart contrarians to recruit? Because a group requires non-smart or non-contrarian people too in order to function better?

Also: over the last year many people joined LW, many of them referred here by HPMOR. I would expect these people to be less smart-contrarian.

comment by [deleted] · 2012-12-24T10:07:52.101Z · LW(p) · GW(p)

Would pro-suicide and general anti-natalist posts be covered by this?

Replies from: Viliam_Bur, eurg, MugaSofer
comment by Viliam_Bur · 2012-12-26T00:45:44.358Z · LW(p) · GW(p)

Suggesting that specific people commit suicide, obviously yes. People in general... maybe no.

I am not going to explain why, but although the death of all people technically includes the death of any specific person X.Y., saying "X.Y. should die" sounds worse than saying "all humans should die".

comment by eurg · 2012-12-24T16:25:58.985Z · LW(p) · GW(p)

Forget about it.

Replies from: None
comment by [deleted] · 2012-12-24T16:28:12.880Z · LW(p) · GW(p)

I'm not trolling. I quite like reading Sister Y's stuff and have said so in the past.

Replies from: eurg
comment by eurg · 2012-12-27T17:44:42.939Z · LW(p) · GW(p)

Luckily enough, that blog seems much better than your introduction of it. My troll accusation is founded on your highly repetitive deliberate misunderstanding of the OP. It must be deliberate, as you are usually much smarter than that, and also better in style.

Also, Sister Y is not pro-suicide per se, but against anti-suicide positions; at least that's how I read it.

comment by MugaSofer · 2012-12-24T22:26:47.740Z · LW(p) · GW(p)

Um, yes. We don't want to look like a suicide phyg.

comment by Eugine_Nier · 2012-12-24T08:40:40.863Z · LW(p) · GW(p)

I don't necessarily object to this policy, but I find it troubling that you can't give a better reason than PR for not discussing whether violence is a good idea.

Frankly, I find it even more troubling that your standard reasons for why violence is not in fact a good idea seem to be "it's bad PR" and "even if it is we shouldn't say so in public".

As I quote here:

if your main goal is to show that your heart is in the right place, then your heart is not in the right place.

Edit: added link to an example of SIAI people being unable to give a better reason against violence than PR.

Replies from: quiet, jimrandomh, Desrtopa
comment by quiet · 2012-12-24T16:55:29.171Z · LW(p) · GW(p)

I appreciate the honesty of it. No one here is going to enact any of these thought experiments in real life. The likely worst outcome is putting off potential SI donors. It must be hard enough to secure funding for a fanfic-writing apocalypse cult; prepending "violent" to that description isn't going to loosen up many wallets.

comment by jimrandomh · 2012-12-24T18:25:17.328Z · LW(p) · GW(p)

I don't necessarily object to this policy, but I find it troubling that you can't give a better reason than PR for not discussing whether violence is a good idea.

I would find this troubling if it were true, but the better reason is right there in the post: "Talking about such violence makes that violence more probable".

comment by Desrtopa · 2012-12-24T19:50:10.179Z · LW(p) · GW(p)

If the violence is a bad idea, which in nearly all cases it probably would be, other commenters are likely to point that out. Having people inspired to carry out acts of violence in spite of other members pointing out that it's unlikely to bear good results is possible, but unlikely, whereas having people judge the community negatively for discussing such things at all is considerably more likely.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-24T20:00:34.332Z · LW(p) · GW(p)

If the violence is a bad idea, which in nearly all cases it probably would be, other commenters are likely to point that out.

Can you point to an example of this actually happening?

Replies from: Desrtopa, DanArmak
comment by Desrtopa · 2012-12-24T20:10:28.565Z · LW(p) · GW(p)

Here.

comment by DanArmak · 2012-12-26T20:04:52.513Z · LW(p) · GW(p)

The post in question was heavily downvoted before it was deleted.

comment by MixedNuts · 2012-12-25T19:12:18.292Z · LW(p) · GW(p)

The freaky consequences are not of the policy, they're of the meta-policy. You know how communities die when they stop being fun? Occasional shitstorms are not fun, and fear of saying something that will cause a shitstorm is not fun. Benevolent dictators work well to keep communities fun; the justifications don't apply when the dictator is pursuing goals that aren't in the selfish interest of members and interested lurkers; making the institute the founder likes look bad only weakly impacts community fun.

Predictable consequences are bright iconoclasts leaving, and shitstorm frequency increasing. (That's kinda hard to settle: the former is imprecise and the latter can be rigged.)

Every time, people complain much less about the policy than about not being consulted. There are at least two metapolicies that avoid this:

  • Avoid kicking up shitstorms. In this particular instance, you could have told CronoDAS his post was stupid and suggested he delete it, and then said "Hey, everyone, let's stop talking about violence against specific people, it's stupid and makes us look bad" without putting your moderator hat on.

  • Produce a policy, possibly ridiculously stringent, that covers most things you don't like, which allows people to predict moderator behavior and doesn't change often. Ignore complaints when enforcing, and do what you wish with complaints on principle.

Replies from: drethelin
comment by drethelin · 2012-12-25T20:51:45.179Z · LW(p) · GW(p)

I actually kind of enjoy occasional shitstorms

comment by shware · 2012-12-25T03:33:14.183Z · LW(p) · GW(p)

Taking this post in the way it was intended, i.e. 'are there any reasons why such a policy would make people more likely to attribute violent intent to LW', I can think of one:

The fact that this policy is seen as necessary could imply that LW has a particular problem with members advocating violence. Basically, I could envision the one as saying: 'LW members advocate violence so often that they had to institute a specific policy just to avoid looking bad to the outside world'

And, of course, statements like 'if a proposed conspiratorial crime were in fact good you shouldn't talk about it on the internet' make for good out-of-context excerpts.

Replies from: fubarobfusco, ChristianKl
comment by fubarobfusco · 2012-12-25T06:19:56.728Z · LW(p) · GW(p)

See also xkcd.

comment by ChristianKl · 2012-12-25T16:03:04.326Z · LW(p) · GW(p)

The fact that this policy is seen as necessary could imply that LW has a particular problem with members advocating violence. Basically, I could envision the one as saying: 'LW members advocate violence so often that they had to institute a specific policy just to avoid looking bad to the outside world'

I don't think that's probable. There are many online forums that have rules preventing those discussions.

comment by [deleted] · 2012-12-24T16:11:56.814Z · LW(p) · GW(p)

Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people (e.g., kidnapping, not anti-marijuana laws) may at the admins' option be censored

The blasphemy laws of many countries fit this description - another possible unintended consequence.

comment by Eugine_Nier · 2012-12-24T02:03:16.955Z · LW(p) · GW(p)

I find that threatening hypothetical violence against my interlocutor can be a useful rhetorical device for getting them to think about ethical problems in near mode.

Replies from: FiftyTwo
comment by FiftyTwo · 2012-12-24T02:46:41.600Z · LW(p) · GW(p)

I'm going to hit you with a stick unless you can give me an example of where that has been effective.

Replies from: Pentashagon, kodos96
comment by Pentashagon · 2012-12-24T05:36:42.221Z · LW(p) · GW(p)

THREE examples.

comment by kodos96 · 2012-12-24T04:14:57.670Z · LW(p) · GW(p)

For all the whining I do about how LWers lack a sense of humor.... I absolutely love it when I'm proven wrong.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-24T11:19:35.097Z · LW(p) · GW(p)

Do you really feel like LWers lack a sense of humor? LWers have posted some of the funniest things I've ever read. Their sense-of-humor distribution has heavy tails, at least.

Replies from: kodos96
comment by kodos96 · 2012-12-24T16:27:32.865Z · LW(p) · GW(p)

Their sense-of-humor distribution has heavy tails, at least.

Yeah, I'd say that's a fair assessment.

comment by CronoDAS · 2012-12-25T23:57:43.357Z · LW(p) · GW(p)

Maybe something like "Moderators, at their discretion, may remove comments that can be construed as advocating illegal activity" would work for a formal policy - it reads like relatively inoffensive boilerplate and would be something to point to if a post like mine needs to go, but is vague enough that it doesn't scream "CENSORSHIP!!!" to people who feel strongly about it. The "at their discretion" is key; it doesn't create a category of posts that moderators are required to delete, so it can't be used by non-moderators as a weapon to stifle otherwise productive discussion. (If you don't trust the discretion of the moderators, that's not a problem that can be easily solved with a few written policies.)

comment by asparisi · 2012-12-25T06:21:06.157Z · LW(p) · GW(p)

Yeesh. Step out for a couple days to work on your bodyhacking and there's a trench war going on when you get back...

In all seriousness, there seems to be a lot of shouting here. Intelligent shouting, mind you, but I am not sure how much of it is actually informative.

This looks like a pretty simple situation to run a cost/benefit on: will censoring of the sort proposed help, hurt, or have little appreciable effect on the community.

Benefits:

  • May help public image. (Sub-benefits: makes LW more friendly to new persons, advances SIAI-related PR.)

  • May reduce brain-eating discussions. (If I advocate violence against group X, even as a hypothetical, and you are a member of said group, then you have a vested political interest whether or not my initial idea was good, which leads to worse discussion.)

  • May preserve what is essentially a community norm now (as many have noted) in the face of future change.

  • Will remove one particularly noxious and bad-PR-generating avenue for trolling. (Which won't remove trolling, of course. In fact, fighting trolls gives them attention, which they like: see Costs.)

Costs:

  • May increase bad PR for censoring. (Rare in my experience, provided that the rules are sensibly enforced.)

  • May lead to people not posting important ideas for fear of violating the rules. (Corollary: may help lead to an environment where people post less.)

  • May create "silly" attempts to get around the rule by gray-areaing it (where people say things like "I won't say which country, but it starts with United States and rhymes with Bymerica"), which is a headache.

  • May increase trolling. (Trolls love it when there are rules to break, as these violations give them attention.)

  • May increase the odds of LW community members acting in violence.

Those are all the ones I could come up with in a few minutes after reading many posts. I am not sure what weights or probabilities to assign: probabilities could be determined by looking at other communities and incidents of media exposure, possibly comparing community size to exposure and total harm done and comparing that to a sample of similarly-sized communities. Maybe with a focus on communities about the size LW is now to cut down on the paperwork. Weights are trickier, but should probably be assigned in terms of expected harm to the community and its goals and the types of harm that could be done.
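
A toy sketch, in Python, of how such a tally might be run; every probability and weight below is a made-up placeholder for illustration, not an estimate of the actual risks:

```python
# Toy expected-value tally for the proposed moderation policy.
# Every probability and weight is a made-up placeholder, not an estimate.
benefits = {
    "better public image": (0.6, 3.0),  # (assumed probability, assumed weight)
    "fewer brain-eating discussions": (0.4, 2.0),
    "community norm preserved": (0.5, 1.0),
    "one trolling avenue removed": (0.3, 1.0),
}
costs = {
    "bad PR from censoring": (0.2, 3.0),
    "chilling effect on posting": (0.3, 2.0),
    "gray-area rules-lawyering": (0.5, 0.5),
    "more trolling (rules to break)": (0.3, 1.0),
    "members acting violently anyway": (0.01, 10.0),
}

def expected_value(outcomes):
    """Sum of probability times weight over the listed outcomes."""
    return sum(p * w for p, w in outcomes.values())

net = expected_value(benefits) - expected_value(costs)
print(f"Net expected value of adopting the policy: {net:+.2f}")
```

Getting defensible numbers, e.g. from the comparison with similarly-sized communities suggested above, is of course the hard part; the code only makes the weighting explicit.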

comment by kodos96 · 2012-12-24T05:14:20.060Z · LW(p) · GW(p)

Aside from the fact that "it might make us look bad" is a horrible argument in general, have you not considered the consequence that censorship makes us look bad? And consider the following comment below:

Got it. Posts discussing our plans for crimes will herewith be kept to the secret boards only.

It was obviously intended as a joke, but is that clear to outsiders? Does forcing certain kinds of discussions into side-channels, which will inevitably leak, make us look good?

Consideration of these kinds of meta-consequences is what separates naive decision theories from sophisticated decision theories. Have you considered that it might hurt your credibility as a decision theorist to demonstrate such a lack of application of sophisticated decision theory in setting policies on your own website?

And now, what I consider to be the single most damning argument against this policy: in the very incident that provoked this rule change, the author of the post in question, after discussion, voluntarily withdrew the post, without this policy being in effect! So self-policing has demonstrated itself, so far, to be 100% effective at dealing with this situation. So where exactly is the necessity for such a policy change?

Replies from: handoflixue
comment by handoflixue · 2012-12-24T21:18:27.096Z · LW(p) · GW(p)

"it might make us look bad" is a horrible argument

You can argue that LessWrong shouldn't care about PR, or that censorship is going to be bad PR, or that censorship is unnecessary, but you can't argue that PR is a fundamentally horrible idea without some very strong evidence (which you did not provide).

-

It's almost tautological that if a group cares about PR, it HAS to care about what makes them look bad:

If Obama went on record saying that we should kill everyone on Less Wrong, and made it clear he was serious, I'd hope to high hell that there would be an impeachment trial.

If Greenpeace said we should kill all the oil CEOs, people would consider them insane terrorists.

If the oil CEOs suggested that there might be... incentives... should Greenpeace members be killed...

Replies from: kodos96
comment by kodos96 · 2012-12-24T21:25:23.526Z · LW(p) · GW(p)

You can argue that LessWrong shouldn't care about PR, or that censorship is going to be bad PR, or that censorship is unnecessary, but you can't argue that PR is a fundamentally horrible idea without some very strong evidence (which you did not provide).

That was perhaps a bit of an overstatement on my part. Considering PR consequences of actions is certainly a good thing to do. But if PR concerns are driving your policy, rather than simply informing it, that's bad.

Replies from: handoflixue
comment by handoflixue · 2012-12-24T21:29:26.869Z · LW(p) · GW(p)

Taboo "driving" and "informing" and explain the difference between those two to me?

Or we can save ourselves some time if this resolves your objection: Eliezer is saying that he is adding the OPTION to censor things if they are a PR problem OR because the person is needlessly incriminating themselves. I'm not sure how that's a bad OPTION to have, given that he's explicitly stated he will not mindlessly enforce it, and in fact has so far enforced it zero (0) times to my knowledge (the post that prompted this was voluntarily withdrawn by its author).

Replies from: kodos96
comment by kodos96 · 2012-12-24T21:32:05.769Z · LW(p) · GW(p)

On the one hand, you're deciding policy based on non-PR-related factors, then thinking about the most PR-friendly way to proceed from there. On the other hand, you're letting PR actually determine policy.

Replies from: handoflixue
comment by handoflixue · 2012-12-25T00:05:56.130Z · LW(p) · GW(p)

Which category is it if you decide based on multiple factors, ONE of which is PR? And why is this a bad thing, if that's what you believe?

Replies from: kodos96
comment by kodos96 · 2012-12-25T00:43:53.523Z · LW(p) · GW(p)

Before I spend any more time replying to this, can you clarify for me... do you and I actually disagree about something of substance here? I.e. how an organization should, in the real world, deal with PR concerns? Or are we just arguing about the most technically correct way to go about stating our position?

comment by [deleted] · 2012-12-24T10:10:23.802Z · LW(p) · GW(p)

(i.e., even if a proposed conspiratorial crime were in fact good, there would still be net negative expected utility from talking about it on the Internet; if it's a bad idea, promoting it conceptually by discussing it is also a bad idea; therefore and in full generality this is a low-value form of discussion).

This seems to be a fully general argument against Devil's Advocacy. Was it meant as such?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T19:55:48.570Z · LW(p) · GW(p)

I don't see the link, but it does so happen that I think "devil's advocate!" is mentally poisonous. I'd even call it an evolutionary precursor of trolling. http://lesswrong.com/lw/r3/against_devils_advocacy/

Replies from: None, MugaSofer
comment by [deleted] · 2012-12-24T21:03:06.400Z · LW(p) · GW(p)

I recall reading this before, but I reread it and gave it some more thought. We may have a sort of trade-off here. I agree that Devil's Advocacy might be risky for the person doing it, but I see obvious benefits in having one or two Devil's Advocates in a group, as it emulates some of the epistemic benefits of value diversity and signals to the group that its beliefs shouldn't be taken that seriously.

You, dear reader, are probably a sophisticated enough reasoner that if you manage to get yourself stuck in an advanced rut, dutifully playing Devil's Advocate won't get you out of it. You'll just subconsciously avoid any Devil's arguments that make you genuinely nervous, and then congratulate yourself for doing your duty. People at this level need stronger medicine. (So far I've only covered medium-strength medicine.)

This is the major problem I see with my view, and I'm unsure how to resolve it: Devil's Advocates may in practice be merely straw-man generators.

Added: Brandon argues that Devil's Advocacy is most importantly a social rather than an individual process, an aspect I confess I wasn't thinking about.

Interesting; I suppose I agree with it.

Replies from: buybuydandavis
comment by buybuydandavis · 2012-12-24T22:43:58.133Z · LW(p) · GW(p)

This is the major problem I see with my view, and I'm unsure how to resolve it: Devil's Advocates may in practice be merely straw-man generators.

You may only generate straw men while you attempt to generate steel men, but how many steel men are you likely to make if you don't even try to make them?

You can always fail, but not trying guarantees failure.

comment by MugaSofer · 2012-12-24T22:25:51.778Z · LW(p) · GW(p)

Sounds more like a precursor of "Policy Debates Should Not Appear One-Sided".

comment by Joshua Hobbes (Locke) · 2012-12-23T21:09:53.533Z · LW(p) · GW(p)

Would this censor posts about robbing banks and then donating the proceeds to charity?

Replies from: Eliezer_Yudkowsky, Alicorn
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-23T21:43:43.917Z · LW(p) · GW(p)

Depends on exactly how it was written, I think. "The paradigmatic criticism of utilitarianism has always been that we shouldn't rob banks and donate the proceeds to charity" - sure, that's not actually going to conceptually promote the crime and thereby make it more probable, or make LW look bad. "There's this bank in Missouri that looks really easy to rob" - no.

Replies from: None, None, Decius
comment by [deleted] · 2012-12-24T10:16:36.749Z · LW(p) · GW(p)

Uncharitable reading: As long as taking utilitarianism seriously doesn't lead to arguments for violating formalized 21st-century Western norms too much, it is OK to argue for taking utilitarianism seriously. You are, however, free to debunk how it supposedly leads to things considered unacceptable on the Berkeley campus in 2012, since it obviously can't.

comment by [deleted] · 2012-12-23T23:46:26.393Z · LW(p) · GW(p)

What about pro-bank-robbery arguments in general?

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-12-25T23:25:43.493Z · LW(p) · GW(p)

The best way would be to construct the comment in a way that makes it least likely to seem bad when quoted outside of LW. For example, we could imagine an alternative universe with intelligent bunnies and carrot-banks. Would it be good if a bunny robbed the carrot-bank and donated the carrots to charity?

If someone copied this comment on a different forum, it would seem silly, but not threatening. It is more difficult to start a wave of moral panic because of carrots and bunnies.

comment by Decius · 2012-12-24T05:15:44.687Z · LW(p) · GW(p)

What about discussions of flaws in security systems in general? E.g., "Banks often have this specific flaw, which can be mitigated in this cost-ineffective manner"?

comment by Alicorn · 2012-12-23T22:35:54.039Z · LW(p) · GW(p)

Or Really Extreme Altruism?

Replies from: Larks, Eliezer_Yudkowsky, wedrifid
comment by Larks · 2012-12-23T22:58:58.729Z · LW(p) · GW(p)

Note to all: Alicorn is referring to something else. Robbing banks may be extreme but it is not altruism.

Replies from: Alicorn
comment by Alicorn · 2012-12-23T23:23:19.018Z · LW(p) · GW(p)

Edited in a link.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T02:20:44.289Z · LW(p) · GW(p)

This does indeed seem like something that's covered by the new policy. It's illegal. In the alternative where it's a bad idea, talking about it has net negative expected utility. If it were for some reason a good idea, it would still be incredibly stupid to talk about it on the &^%$ing Internet. I shall mark it for deletion if the policy passes.

Replies from: Tenoke, CronoDAS, saturn
comment by Tenoke · 2012-12-24T02:24:53.743Z · LW(p) · GW(p)

So you don't see value in discussions like these? Thought experiments that give some insight into morality? Is the (probably negligible) effect of those posts on LW's reputation really greater than the benefit of the discussion?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T02:34:03.778Z · LW(p) · GW(p)

I think that post had a net negative effect on reality and that diminishing the number of people who read it again is a net positive. No, the conversation isn't worth it.

Replies from: Tenoke
comment by Tenoke · 2012-12-24T02:40:01.618Z · LW(p) · GW(p)

Oh, come on, you're invoking your basilisk-related logic here? How does it have a negative effect? Please don't tell me it's because you think there will be more suicides in the world if more people read the post? And further, please don't tell me that, if you thought that, you'd think this leads to a net negative effect for the world? But please do answer me.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T02:44:30.155Z · LW(p) · GW(p)

It has a net negative effect because people then go around saying (this post will be deleted after policy implementation), "Oh, look, LW is encouraging people to commit suicide and donate the money to them." That is what actually happens. It is the only real significant consequence.

Now it's true that, in general, any particular post may have only a small effect in this direction, because, for example, idiots repeatedly make up crap about how SIAI's ideas should encourage violence against AI researchers, even though none of us have ever raised it even as a hypothetical, and so themselves become the ones who conceptually promote violence. But it would be nice to have a nice clear policy in place we can point to and say, "An issue like this would not be discussable on LW because we think that talking about violence against individuals can conceptually promote such violence, even in the form of hypotheticals, and that any such individuals would justly have a right to complain. We of course assume that you will continue to discuss violence against AI researchers on your own blog, since you care more about making us look bad and posturing your concern than about the fact that you, yourself, are the one who has actually invented, introduced, talked about, and given publicity to, the idea of violence against AI researchers. But everyone else should be advised that any such 'hypothetical' would have been deleted from LW in accordance with our anti-discussing-hypothetical-violence-against-identifiable-actual-people policy."

Replies from: fubarobfusco, kodos96, Eugine_Nier, CronoDAS, Tenoke, DanArmak
comment by fubarobfusco · 2012-12-24T19:19:01.708Z · LW(p) · GW(p)

idiots repeatedly make up crap

Idiots make up crap. You probably can't change this. The more significant you are, the more crap idiots will make up about you. Idiots claim that Barack Obama is a Kenyan Muslim terrorist and that George Bush is mentally subnormal. Not because they have sufficient evidence of these propositions, but because gossip about Obama and Bush is thereby juicier than gossip about my neighbor Marty whom you've never heard of.

Idiots make up crap about projects, too. They say that NASA faked the moon landing, that vaccines cause autism, and that international food aid contains sterility drugs. It turns out that scurrilous rumors about NASA and the United Nations spread farther than scurrilous rumors about that funny-looking building in the town park which is totally a secret drug lab for the mayor.

But everyone else should be advised that any such 'hypothetical' would have been deleted from LW in accordance with our anti-discussing-hypothetical-violence-against-identifiable-actual-people policy."

How about treating the hypothetical as the stupidity it is? "Dude, beating up AI researchers wouldn't work and you're a jerk for posting it. There are a half dozen obvious reasons it wouldn't work, if you take five minutes to think about it ... and you're a jerk for posting it because it's stirring up shit for no good reason. Seriously, quit it. This is LW, not Conspiracy Hotline."

Replies from: Eugine_Nier, Viliam_Bur, Eliezer_Yudkowsky
comment by Eugine_Nier · 2012-12-24T20:08:16.733Z · LW(p) · GW(p)

There are a half dozen obvious reasons it wouldn't work, if you take five minutes to think about it

And yet, when attempting to list them, the only one anyone from SIAI can seem to think of is bad PR.

comment by Viliam_Bur · 2012-12-25T23:14:24.760Z · LW(p) · GW(p)

Idiots claim that Barack Obama is a Kenyan Muslim terrorist and that George Bush is mentally subnormal.

The important difference is that in these cases the given idiot is less famous than the person they make crap about.

Imagine an alternative universe where Barack Obama is just an unknown guy, and some idiots for whatever reason start claiming that he is a Muslim terrorist. I can imagine an anonymous phone call to the police, a police action with some misunderstanding, resulting in too many negative utilons for Mr. Obama.

In our universe, Mr. Obama has the advantage of being more famous than any such possible accuser. However, SIAI does not have the same advantage.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T19:41:58.543Z · LW(p) · GW(p)

Seriously, quit it. This is LW, not Conspiracy Hotline.

Sounds like a fine reply on LW. I think it will be useful, on forums other than LW, to have an LW policy to point to.

comment by kodos96 · 2012-12-24T04:07:19.684Z · LW(p) · GW(p)

It has a net negative effect because people then go around saying (this post will be deleted after policy implementation), "Oh, look, LW is encouraging people to commit suicide and donate the money to them." That is what actually happens. It is the only real significant consequence.

This is where the rubber meets the road as far as whether we really mean it when we say "that which can be destroyed by the truth, should be." If we accept this argument, then by "mere addition" of censorship rules, you eventually end up renaming SIAI "The Institute for Puppies and Unicorn Farts", and completely lying to the public about what it is you're actually about, in order to benefit PR.

comment by Eugine_Nier · 2012-12-24T07:47:28.960Z · LW(p) · GW(p)

"Oh, look, LW is encouraging people to commit suicide and donate the money to them."

Well, are you?

idiots repeatedly make up crap about how SIAI's ideas should encourage violence against AI researchers, even though none of us have ever raised it even as a hypothetical,

True, but you have said things that seem to imply it. Seriously, you can't go around saying "X" and "X->Y" and then object when people start attributing position "Y" to you.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T19:51:55.321Z · LW(p) · GW(p)

Well, are you?

No. To prove this, I shall shortly delete the post advocating it.

True, but you have said things that seem to imply it. Seriously, you can't go around saying "X" and "X->Y" and then object when people start attributing position "Y" to you.

Point one: We never said X->Y. We said X, and a bunch of people too stupid to understand the fallacy of appeal to consequences said 'X->violence, look what those bad people advocate' as an attempted counterargument. Since no actual good can possibly come of discussing this on any set of assumptions, it would be nice to have the counter-counterargument, "Unlike this bad person here, we have a policy of deleting posts which claim Q->specific-violence even if the post claims not to believe in Q because the identifiable target would have a reasonable complaint of being threatened".

Replies from: kodos96
comment by kodos96 · 2012-12-24T21:11:07.024Z · LW(p) · GW(p)

it would be nice to have the counter-counterargument, "Unlike this bad person here, we have a policy of deleting posts which claim Q->specific-violence even if the post claims not to believe in Q because the identifiable target would have a reasonable complaint of being threatened".

I would find this counter-counter-argument extremely uncompelling if made by an opponent. Suppose you read the blog of someone who made statements which could be interpreted as vaguely anti-Semitic, but it could go either way. Now suppose someone in the comments of that blog post replied by saying "Yeah, you're totally right, we should kill all the Jews!".

Which type of response from the blog owner do you think would be more likely to convince you that he was not actually an anti-Semite: 1) deleting the comment, covering up its existence, and never speaking of it, or 2) leaving the comment in place and refuting it, carefully laying out why the commenter is wrong?

I know that I for one would find the latter response much more convincing of the author's benign intent.

Note: in order to post this comment, despite it being, IMHO, entirely on-point and important to the conversation, I had to take a 5-point karma hit... due to the LAST poorly-thought-out, dictatorially imposed, consensus-defying policy change.

Replies from: Viliam_Bur
comment by Viliam_Bur · 2012-12-25T23:06:16.998Z · LW(p) · GW(p)

Which type of response from the blog owner do you think would be more likely to convince you that he was not actually an anti-Semite: 1) deleting the comment, covering up its existence, and never speaking of it, or 2) leaving the comment in place and refuting it, carefully laying out why the commenter is wrong?

If someone really wants to get some cheap internet points by accusing the author of antisemitism, either option can be used. In both cases, the fact that the comment was written on the blog would be interpreted as evidence of the blog somehow evoking this kind of comment. Both deleting and refuting would be interpreted as "the author pretends to disagree, for obvious PR reasons, but he cannot fool us".

The advantage of deleting the comment is that a potential accuser has a smaller chance of noticing it (well, unless some readers make "why did this specific comment disappear?" their topic of the month), and they cannot support their attacks with hyperlinks and screenshots. Also, if someone puts specific keywords into Google, they will not get that blog among the results.

comment by CronoDAS · 2012-12-24T03:01:58.134Z · LW(p) · GW(p)

I wasn't thinking of SIAI as the charity.

Replies from: AdeleneDawner, Eliezer_Yudkowsky
comment by AdeleneDawner · 2012-12-24T12:16:48.911Z · LW(p) · GW(p)

Regardless of your intentions, I know of one person who somewhat seriously considered that course of action as a result of the post in question. (The individual in question has been talked out of it in the short term, by way of 'the negative publicity would hurt more than the money would help', but my impression is that the chance that they'll try something like that has still increased, probably permanently.)

Replies from: CronoDAS, spzx
comment by CronoDAS · 2012-12-25T02:57:35.198Z · LW(p) · GW(p)

[sincerity mode]So... is that a good thing, or a bad thing?[/sincerity mode]

In many circumstances, sacrificing one's own life in order to save others is considered a good thing, and people who do it are called "heroes". A famous example is the story of railroad engineer Casey Jones, who, after realizing that a collision with a stalled train was inevitable, chose to remain in the engine and slow his own train as much as possible, saving the rest of the passengers and crew at the cost of his own life.

"Really Extreme Altruism" (with the money going to one of GiveWell's top charities) isn't as dramatic as a "typical" real-life Heroic Sacrifice, but the outcome is the same: one person dies, a lot of other people live who would have otherwise died. It's the manner of the sacrifice (and the distributed, distant nature of the benefit) that makes it far more disturbing.

comment by spzx · 2013-09-25T06:47:35.947Z · LW(p) · GW(p)

There should be a warning on the donate page: "For reasons of public relations, please refrain from donating and minimize your association with us if you are or may in the future become suicidal."

Of course, if I were, not being able to contribute would be one less reason to stick around. I could shop for some less controversial group to support (possibly one that indirectly helped SIAI/MIRI), but it wouldn't be quite as motivating or as obviously sufficient to offset the cost of living.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T03:09:50.627Z · LW(p) · GW(p)

This intention of yours is not transparent. Plus, they don't care.

Replies from: CronoDAS
comment by CronoDAS · 2012-12-25T03:00:31.251Z · LW(p) · GW(p)

I edited the original post to link to GiveWell's top charities list.

comment by Tenoke · 2012-12-24T11:01:34.998Z · LW(p) · GW(p)

I thought I posted this comment last night, but it seems I didn't (and now I have to pay karma to post it). Aren't we just encouraging belief bias this way? That carries an additional negative utility on top of the loss of the positive utility from the discussion, and on top of the loss of utility from people seeing us as a heavily-censored community and forming another type of negative opinion of us.

comment by DanArmak · 2012-12-26T19:22:39.734Z · LW(p) · GW(p)

Idiots make up crap about all kinds of things, not just violence or other illegal acts. Ideas outside societal norms often attract bad PR. If your primary goal here is to improve PR, you would have to censor posts by explicit PR criteria. The proposed criterion of discussion of violence or law-breaking is not optimized for this goal. So, what is it you really want?

Discussion of violence is something that (you claim) has no positive value, even ignoring PR. So it's easy to decide to censor it. But have you really considered what else to censor according to your goals? Violence clearly came up because of the now-deleted post; it was an available example. But you shouldn't just react to it and ignore other things, if your goal is not to prevent discussion of violence or crime in itself.

comment by CronoDAS · 2012-12-24T02:43:36.677Z · LW(p) · GW(p)

As far as I can tell, Really Extreme Altruism actually is legal.

comment by saturn · 2012-12-24T03:50:10.022Z · LW(p) · GW(p)

In the alternative where it's a bad idea, talking about it has net negative expected utility.

What about the possibility that someone who thought it was a good idea would change their mind after talking about it?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T04:07:06.230Z · LW(p) · GW(p)

This seems an order of magnitude less likely than somebody who wouldn't naturally have thought of the dumb idea seeing the dumb idea.

Replies from: Decius
comment by Decius · 2012-12-24T05:00:22.329Z · LW(p) · GW(p)

Therefore censor uncommon bad ideas generally?

comment by wedrifid · 2012-12-24T00:59:11.592Z · LW(p) · GW(p)

Or Really Extreme Altruism?

This is an example of why I support this kind of censorship. Lesswrong just isn't capable of thinking about such things in a sane way anyhow.

The top comment in that thread demonstrates AnnaSalamon being either completely and utterly mindkilled or blatantly lying about simple epistemic facts for the purpose of public relations. I don't want to see the (now) Executive Director of CFAR doing either of those things. And most others are similarly mindkilled, meaning that I just don't expect any useful or sane discussion to occur on sensitive subjects like this.

(i.e., I consider this censorship about as intrusive as forbidding peanuts to someone with a peanut allergy.)

Replies from: fubarobfusco, Eugine_Nier, jbeshir
comment by fubarobfusco · 2012-12-24T09:26:07.761Z · LW(p) · GW(p)

The top comment in that thread demonstrates AnnaSalamon being either completely and utterly mindkilled or blatantly lying

This seems an excessively hostile and presumptuous way to state that you disagree with Anna's conclusion.

Replies from: wedrifid
comment by wedrifid · 2012-12-24T10:09:13.620Z · LW(p) · GW(p)

This seems an excessively hostile and presumptuous way to state that you disagree with Anna's conclusion.

No it isn't; the meaning of my words is clear, and they quite simply do not mean what you say I am trying to say.

The disagreement with the claims of the linked comment is obviously implied as a premise somewhere in the background, but the reason I really support this policy is that these topics produce mindkilled responses and near-obligatory dishonesty. I don't want to see bullshit on lesswrong. The things Eliezer plans to censor consistently encourage people to speak bullshit. Therefore, I support the censorship. Not complicated.

You may claim that it is rude or otherwise deprecated-by-fubarobfusco, but if you say that my point is different from both what I intended and what the words could possibly mean, then you're wrong.

Replies from: fubarobfusco, jsalvatier, Pentashagon
comment by fubarobfusco · 2012-12-25T02:06:22.624Z · LW(p) · GW(p)

No it isn't; the meaning of my words is clear, and they quite simply do not mean what you say I am trying to say.

Well, taking your words seriously, you are claiming to be a Legilimens. Since you are not, maybe you are not as clear as you think you are.

It sure looks from what you wrote that you drew an inference from "Anna does not agree with me" to "Anna is running broken or disreputable inference rules, or is lying out of self-interest" without considering alternate hypotheses.

comment by jsalvatier · 2012-12-24T19:24:49.316Z · LW(p) · GW(p)

This also seems like an excessively hostile way of disagreeing! I think there's some illusion of transparency going on.

I think

Sorry, I think you've misunderstood me. I don't want to see bullshit on lesswrong. [Elaboration] The things Eliezer plans to censor consistently encourage people to speak bullshit. Therefore, I support the censorship.

Might have worked better

Replies from: wedrifid
comment by wedrifid · 2012-12-24T22:29:41.564Z · LW(p) · GW(p)

This also seems like an excessively hostile way of disagreeing!

It is unfortunate that the one word in your comment that you gave emphasis to is the one word that invalidates it (rather than being a mere subjective disagreement). Since I have already been quite clear that I consider fubarobfusco's comment to be both epistemically flawed and an unacceptable violation of lesswrong's (or at the very least my) ideals, you ought to be able to predict that this would make me dismiss you as merely supporting toxic behavior. It means that the full weight of the grandparent comment applies to you, with additional emphasis given that you are persisting despite the redundant explanation.

Sorry

Wedrifid writing 'Sorry' in response to fubarobfusco's behavior (or anything else involving untenable misrepresentations of the words of another) would have been disingenuous. Moreover, anyone who is remotely familiar with wedrifid would interpret him making that particular political move in that context as passive-aggressive dissembling... and would have been entirely correct in doing so.

Replies from: jsalvatier
comment by jsalvatier · 2012-12-25T01:34:01.872Z · LW(p) · GW(p)

Part of my point was that your words are not nearly as clear as you think they are. Merely telling people your words are clear doesn't make people understand them.

I probably won't respond further because this conversation quickly became frustrating for me.

comment by Pentashagon · 2012-12-26T22:28:50.021Z · LW(p) · GW(p)

The disagreement with the claims of the linked comment is obviously implied as a premise somewhere in the background, but the reason I really support this policy is that these topics produce mindkilled responses and near-obligatory dishonesty. I don't want to see bullshit on lesswrong. The things Eliezer plans to censor consistently encourage people to speak bullshit. Therefore, I support the censorship. Not complicated.

There are a lot of topics about which most people have only bullshit to say. The solution is to downvote bullshit instead of censoring potentially important topics. If not enough people can detect bullshit that's an entirely different (and far worse) problem.

comment by Eugine_Nier · 2012-12-24T09:32:50.841Z · LW(p) · GW(p)

The top comment in that thread demonstrates AnnaSalamon being either completely and utterly mindkilled or blatantly lying about simple epistemic facts for the purpose of public relations. I don't want to see the (now) Executive Director of CFAR doing either of those things.

Yes and if the CFAR Executive Director is either mindkilled or willing to lie for PR, I want to know about it.

comment by jbeshir · 2012-12-24T01:47:57.200Z · LW(p) · GW(p)

I think that a discussion in which only most people are mindkilled can still be a fairly productive one on these questions in the LW format. LW is actually one of the few places where you would get some people who aren't mindkilled, so I think it is actually good that it achieves this much.

These questions seem fairly ancillary to LW as a place for improving instrumental or epistemic rationality, though. If you think testing the extreme cases of your models of your own decision-making is likely to result in practical improvements in your thinking, or just want to test yourself on difficult questions, these things seem like they might be a bit helpful, but I'm comfortable with them being censored as a side effect of a policy with useful effects.

Replies from: wedrifid
comment by wedrifid · 2012-12-24T01:58:56.979Z · LW(p) · GW(p)

I think that a discussion in which only most people are mindkilled can still be a fairly productive one on these questions in the LW format. LW is actually one of the few places where you would get some people who aren't mindkilled, so I think it is actually good that it achieves this much.

Unfortunately, the non-mindkilled people would also have to be comfortable simply ignoring all the mindkilled people so that they can talk among themselves and build the conversation toward improved understanding. That isn't something I see often. More often the efforts of the sane people are squandered trying to beat back the tide of crazy.

comment by kodos96 · 2012-12-24T22:44:37.891Z · LW(p) · GW(p)

I am asking in advance if anyone has non-obvious consequences they want to point out or policy considerations they would like to raise. In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole - but rather, 'Are there any predictable consequences we didn't think of that you would like to point out

Eliezer, at this point I think it's fair to ask: has anything anyone has said so far caused you to update? If not, why not?

I realize some of my replies to you in this thread have been rather harsh, so perhaps I should take this opportunity to clarify: I consider myself a big fan of yours. I think you're a brilliant guy, and I agree with you on just about everything regarding FAI, x-risk, SIAI's mission.... I think you're probably mankind's best bet if we want to successfully navigate the singularity. But at the same time, I also think you can demonstrate some remarkably poor judgement from time to time... hey, we're all running on corrupted hardware after all. It's the combination of these two facts that really bothers me.

I don't know of any way to say this that isn't going to come off sounding horribly condescending, so I'm just going to say it, and hope you evaluate it in the context of the fact that I'm a big fan of your work, and in the grand scheme of things, we're on the same side.

I think what's going on here is that your feelings have gotten hurt by various people misattributing various positions to you that you don't actually hold. That's totally understandable. But I think you're confusing the extent to which your feelings have been hurt with the extent to which actual harm has been done to SIAI's mission, and are overreacting as a result. I'm not a psychologist - this is just armchair speculation.... I'm just telling you how it looks from the outside.

Again, we're all running on corrupted hardware, so it's entirely natural for even the best amongst us to make these kinds of mistakes... I don't expect you to be an emotionless Straw Vulcan (and indeed, I wouldn't trust you if you were)... but your apparent unwillingness to update in response to other's input when it comes to certain emotionally charged issues is very troubling to me.

So to answer your question "Are there any predictable consequences we didn't think of that you would like to point out"... well I've pointed out many already, but the most concise, and most important predictable consequence of this policy which I believe you're failing to take into account, is this: IT LOOKS HORRIBLE... like, really really bad. Way worse than the things it's intended to combat.

comment by CronoDAS · 2012-12-24T04:20:57.636Z · LW(p) · GW(p)

I don't know if we actually need a specific policy on this. We didn't in the case of my post...

Replies from: DanArmak, Viliam_Bur
comment by DanArmak · 2012-12-26T19:37:12.427Z · LW(p) · GW(p)

I agree. We should trust the community more where a guarantee of moderation (via an established policy) is not needed.

Your post was quickly downvoted, and you deleted it yourself. This is an example of a good outcome that demonstrates we didn't need moderation.

comment by Viliam_Bur · 2012-12-25T23:28:54.916Z · LW(p) · GW(p)

On the other hand, if we need it for some post in the future, it will be an advantage that the policy already exists and does not have to be invented ad hoc.

comment by SoftFlare · 2012-12-24T12:09:48.925Z · LW(p) · GW(p)

Beware Evaporative Cooling of Group Beliefs.

I am for the policy, although heavy-heartedly. I feel that one of the pillars of Rationality is that there should be no Stop Signs, and this policy might produce some. On the other hand, I think PR is important, and that we must be aware of the evaporative cooling that might happen if the policy is not applied.

On a neutral note - We aren't enemies here. We all have very similar utility functions, with slightly different weights on certain terminal values (PR) - which is understandable as some of us have more or less to lose from LW's PR.

To convince Eliezer, you must show him a model of the world, given the policy, that causes ill effects he finds worse than the positive effects of enacting the policy. If you just tell him "Your policy is flawed due to ambiguity in its description" or "You have, in the past, said things that are not consistent with this policy", I place low probability on him significantly changing his mind. You should take this as a sign that you are straw-manning Eliezer when you should be steel-manning him.

Also, how about some creative solutions? A special post tag that must be applied to posts that condone hypothetical violence, which causes them to be visible only to registered users and displays a disclaimer above the post warning about its nature? That should mitigate 99% of the PR effect. Or your better, more creative idea. Go.

Replies from: kodos96, NancyLebovitz, Tenoke
comment by kodos96 · 2012-12-24T16:49:01.859Z · LW(p) · GW(p)

On a neutral note - We aren't enemies here. We all have very similar utility functions, with slightly different weights on certain terminal values (PR) - which is understandable as some of us have more or less to lose from LW's PR.

I disagree that this is the entire source of the dispute. I think that even when constrained to optimizing only for good PR, this is an instrumentally ineffective method of achieving it. Censorship is worse for PR than the problem in question, especially given that the problem in question is thus far nonexistent.

To convince Eliezer, you must show him a model of the world, given the policy, that causes ill effects he finds worse than the positive effects of enacting the policy.

This is trivially easy to do, since the positive effects of enacting the policy are zero, given that the one and only time this has ever been a problem, the problem resolved itself without censorship, via self-policing.

Well... the showing him the model part is trivially easy anyway. Convincing him... apparently not so much.

Replies from: jbeshir
comment by jbeshir · 2012-12-25T03:10:14.558Z · LW(p) · GW(p)

This model trivially shows that censoring espousing violence is a bad idea, if and only if you accept the given premise that censorship of espousing violence is a substantial PR negative. This premise is a large part of what the dispute is about, though.

Not everyone is you; a lot of people feel positively about refusing to provide a platform for certain messages. I observe a substantial amount of time expended by organisations on simply signalling opposition to things commonly accepted as negative, and avoiding association with those things. LW barring the espousal of violence would certainly have a positive effect through this channel.

Negative effects from the policy would be that people who do feel negatively about censorship, even of espousing violence, would view LW less well.

The poll in this thread indicates that a majority of people here would be for moderators being able to censor people espousing violence. This suggests that for the majority here it is not bad PR for the reason of censorship alone, since they agree with its imposition. I would myself expect people outside LW to have an even stronger preference in favour of censorship of advocacy of unthinkable dangerous ideas, suggesting a positive PR effect.

Whether people should react to it in this manner is a completely different matter, a question of the just world rather than the real one.

And this is before requiring any actual message be censored, and considering the impact of any such censorship, and before considering what the particular concerns of the people who particularly need to be attracted are.

comment by NancyLebovitz · 2012-12-24T18:43:22.259Z · LW(p) · GW(p)

Ambiguity is actually a problem. If people don't know what the policy means, then the person who makes the policy doesn't know what they are forbidding or permitting.

Replies from: SoftFlare
comment by SoftFlare · 2012-12-24T19:29:47.696Z · LW(p) · GW(p)

True. I was giving the ambiguity as an example of something people say to claim a policy won't work, without hashing out what that actually means in real execution. Almost every policy is somewhat ambiguous, yet there are many good policies.

comment by Tenoke · 2012-12-24T16:35:57.451Z · LW(p) · GW(p)

There are better ideas in this thread but apparently LW can't afford software changes.

comment by fubarobfusco · 2012-12-24T09:48:17.971Z · LW(p) · GW(p)

Two thoughts:

One: When my partner worked as the system administrator of a small college, her boss (the head of IT, a fatherly older man) came to her with a bit of an ethical situation.

It seems that the Dean of Admissions had asked him about taking down a student's personal web page hosted on the college's web server. Why? The web page contained pictures of the student and her girlfriend engaged in public displays of affection, some not particularly clothed. The Dean of Admissions was concerned that this would give the college a bad reputation.

Naturally, the head of IT completely rejected the request out of hand, but was interested in discussing the implications. One implication that came up was that taking down a student web page about a lesbian relationship would be worse for the college's reputation than hosting it could ever be. Another was that the IT staff did not feel like being censors of student expression, and certainly did not feel like being so on behalf of the Admissions office.

It's not clear to me that this case is especially analogous. It may be rather irrelevant, all in all.

Two: There is the notion that politics is about violence, not about agreement. That is to say, it is not about what we do when everyone agrees and goes along; but rather what we do when someone refuses to go along; when there is contention over shared resources because not everyone agrees what to do with them; when someone is excluded; when someone gets to impose on someone else (or not); and so on. Violence is often at least somewhere in the background of such discussions, in judicial systems, diplomacy, and so on. As Chairman Mao put it (at least, as quoted by Bob Wilson), political power grows out of the barrel of a gun. And a party with no ability to disrupt the status quo is one that nobody has to listen to.

As such, a position of nonviolence goes along with a position of non-politics. Avoiding threatening people — taken seriously enough — may require disengaging from a lot of political and legal-system stuff. For instance, proposing to make certain research illegal or restricted by law entails proposing a threat of violence against people doing that research.

comment by Incorrect · 2012-12-24T04:40:09.933Z · LW(p) · GW(p)

Would your post on eating babies count, or is it too nonspecific?

http://lesswrong.com/lw/1ww/undiscriminating_skepticism/1scb?context=1

(I completely agree with the policy; I'm just curious.)

Replies from: quiet
comment by quiet · 2012-12-24T17:12:40.670Z · LW(p) · GW(p)

We should exempt any imagery fitting of a Slayer album cover, lest we upset the gods of metal with our weakness.

comment by Tenoke · 2012-12-24T00:25:22.273Z · LW(p) · GW(p)

So I finally downvoted Yudkowsky.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-12-24T01:11:15.613Z · LW(p) · GW(p)

What was your line of thought?

Replies from: Tenoke
comment by Tenoke · 2012-12-24T01:19:44.025Z · LW(p) · GW(p)

That censorship because of what people think of LessWrong is ridiculous. That the negative effect on its reputation is probably significantly less than what is assumed. And that if EY thought that censorship of content for the sake of LW's image was in order, he should logically have thought that omitting fetishes from his public OKCupid profile (for the record, I've defended the view that this is his right), among other things, was also in order. And some other thoughts of this kind.

comment by Wei Dai (Wei_Dai) · 2012-12-24T21:05:35.689Z · LW(p) · GW(p)

Eliezer, could you clarify whether this policy applies to discussions like "maybe action X, which some people think constitutes violence, isn't really violence"? And what about nuclear war strategies?

comment by ewbrownv · 2012-12-26T16:39:38.518Z · LW(p) · GW(p)

Censorship is generally not a wise response to a single instance of any problem. Every increment of censorship you impose will wipe out an unexpectedly broad swath of discussion, make it easier to add more censorship later, and make it harder to resist accusations that you implicitly support any post you don't censor.

If you feel you have to Do Something, a more narrowly-tailored rule that still gets the job done would be something like: "Posts that directly advocate violating the laws of [the relevant jurisdiction] in a manner likely to create criminal liability will be deleted."

Because, you know, it's just about impossible to talk about specific wars, terrorism, criminal law or even many forms of political activism without advocating real violence against identifiable groups of people.

comment by twanvl · 2012-12-24T12:40:43.740Z · LW(p) · GW(p)

What if some violence helps reduce further violence? For example corporal punishment could reduce crime (think of Singapore). Note that I am not saying that this is necessarily true, just that we should not a priori ban all discussion on topics like this.

Replies from: prase
comment by prase · 2012-12-24T13:44:47.780Z · LW(p) · GW(p)

The proposal is to ban such discussions not because violence is bad, but because discussing violence is bad PR. I am pretty sure advocacy of corporal punishment belongs to this category too.

Replies from: twanvl
comment by twanvl · 2012-12-24T15:48:36.704Z · LW(p) · GW(p)

Is it really bad PR, though? IMO one of the strengths of LW is that almost any weird topic can be discussed as long as the discussion is rational and civilized. If some interesting posts are banned by the moderators, then this diminishes the value of LW to me.

Replies from: prase, fubarobfusco
comment by prase · 2012-12-24T19:58:21.367Z · LW(p) · GW(p)

I don't know whether it's indeed bad PR. It probably depends on one's expectations. I agree with you that banning weird discussions makes LW less attractive (to me, to you, to a certain kind of person), but the site owners want to become more respected by the mainstream, and in order to achieve that, it is probably a good strategy to remove the weirdest discussions from sight.

comment by fubarobfusco · 2012-12-24T18:27:42.542Z · LW(p) · GW(p)

But the point of LW is not merely having a forum that is valuable to you for discussing weird topics. If you want Reddit, you know where to find it. The point of LW is advancing human rationality, and being a place where people air proposals of violence may get in the way of that. How would we tell?

Eliezer and other big names here have been on the receiving end of scandal-sheet gossip-mongering before and may be particularly sensitive to some of these issues. One thing that worries me about this proposal is that Eliezer may be conflating "LW has a bad reputation" with "I have to answer snarky, demeaning questions about foolish things people posted on LW more often than I'm comfortable with" or "People publish articles making fun of my friends and I wish to heck they would stop doing that." But I infer there is also evidence that Eliezer is withholding.

But it seems to me that the best way to have a good reputation is to actually be good. For instance, I would like it if people did not see LW as a place to air demeaning, privileged hypotheses (pun intended) about, say, race or gender — in part because many people's evidence standards for these topics are appallingly low; in part because it drives away members of the less-privileged sets (I would rather cooperate with women and defect against PUAs than vice versa; for one thing, there are more women). I would accept the same restriction on discussions of political economy (viz. libertarianism and socialism); although I've talked politics here, it's not exactly an area where humans are renowned for being exemplary rationalists.

Replies from: duckduckMOO
comment by duckduckMOO · 2012-12-30T14:31:20.531Z · LW(p) · GW(p)

"is valuable to you for discussing weird topics"

"reddit"

pick one.

comment by fubarobfusco · 2012-12-24T10:05:14.501Z · LW(p) · GW(p)

Counter-proposal:

We don't contemplate proposals of violence against identifiable people because we're not assholes.

I mean, seriously, what the fuck, people?

Replies from: Manfred, kodos96
comment by Manfred · 2012-12-24T16:12:23.234Z · LW(p) · GW(p)

Generalizations: on average, accurate. In specific cases, wrong.

comment by kodos96 · 2012-12-24T17:02:42.837Z · LW(p) · GW(p)

Yes, this is the unstated policy we've all been working under up until this point, and it's worked. Which is why it's so irrational to propose a censorship rule.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-12-24T18:22:07.764Z · LW(p) · GW(p)

First: "Rational" and "irrational" describe mental processes, not conclusions. A social rule can be useful or useless, beneficial or harmful, well- or ill-defended ....

("If deleting posts that propose violence would benefit Less Wrong, I want to believe that deleting posts that propose violence would benefit Less Wrong. If deleting posts that propose violence would not benefit Less Wrong, I want not to believe that deleting posts that propose violence would ...")

Second: Consider the difference between "we're not assholes" and "we don't want to look like assholes".

Or between "I will cooperate" and "I want you to think that I will cooperate." A defector can rationally conclude the latter, but not the former (since it is false of defectors).

comment by Mestroyer · 2012-12-24T02:08:50.536Z · LW(p) · GW(p)

I'll restate a third option here that I proposed in the censored thread (woohoo, I have read a thread Eliezer Yudkowsky doesn't want people to read, and that you, dear reader of this comment, probably can't!): make an option, while creating a post, to have it be viewable only by people with a certain karma or above, or to have it disappear after a week or so for people without that karma. This is based on an idea 4chan uses, where it deletes all threads after they become inactive, to encourage people to discuss freely.

This would keep these threads from showing up when people Googled LessWrong. It could also let us discuss phyggishness without making LessWrong look bad on Google.

Replies from: drethelin, NancyLebovitz, Tenoke, Eliezer_Yudkowsky, handoflixue
comment by drethelin · 2012-12-24T09:16:55.788Z · LW(p) · GW(p)

Yes, and if we all put on black robes and masks to hide our identities when we talk about sinister secrets, no one will be suspicious of us at all!

comment by NancyLebovitz · 2012-12-24T03:19:50.454Z · LW(p) · GW(p)

You can't reliably make things on the internet go away.

Replies from: Mestroyer, kodos96
comment by Mestroyer · 2012-12-24T03:24:42.446Z · LW(p) · GW(p)

You can make them hard enough to access that they won't be stumbled upon by random people wondering what LessWrong is about, which is basically good enough for preserving LessWrong's reputation.

Replies from: NancyLebovitz
comment by NancyLebovitz · 2012-12-24T05:07:03.322Z · LW(p) · GW(p)

I was thinking about people posting screen shots.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-24T10:45:45.936Z · LW(p) · GW(p)

Agreed. It only takes one high-karma user posting a screenshot on reddit of LW's Secret Thread Where They Discuss Terrorism or whatever...

comment by kodos96 · 2012-12-24T04:23:48.136Z · LW(p) · GW(p)

I can think of a few different ways, requiring no more than a few dozen software-engineer-hours, that this could be solved effectively enough to make it a non-issue.

Replies from: fubarobfusco
comment by fubarobfusco · 2012-12-24T09:10:42.730Z · LW(p) · GW(p)

If my browser displays it as text, I can copy it. If you try dickish JavaScript hacks to stop me from copying it the normal way, I can screenshot it. If you display it as some kind of hardware-accelerated DRM'd video that can't be screenshotted, I can get out a fucking camera and take a fucking picture. If I post it somewhere and you try to shut me down, you invoke the Streisand Effect and now all of Reddit wants (and has) a copy, to show their Censorship Fighter status.

tl;dr: No, you can't stop people from copying things on the Internet.

Replies from: kodos96
comment by kodos96 · 2012-12-24T09:31:51.137Z · LW(p) · GW(p)

Of course. But a "good enough" solution to the stated problem doesn't need to be able to do that. There are a number of different approaches I can think of off the top of my head, in increasing order of complexity:

  • Just keep it from getting indexed by Google, and expire it after a certain period. Sure, a sufficiently determined attacker could just spider LW every day, but do we actually think there's an organized conspiracy out there against us?
  • Limit access to people who can be trusted not to copy it - either based on karma as suggested, or individual vetting. I'm not a fan of this option, but it could certainly be made to work, for certain values of "work". (A rough sketch of these first two options follows after this list.)
  • Implement a full-on OTR-style system providing full deniability through crypto. Rather than stopping content from being copied, just make sure you can claim any copy is a forgery, and nobody can prove you wrong. A MAJOR engineering effort of course, but totally possible, and 100% effective.
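
To make the first two options concrete, here is a minimal sketch of the gating logic. Everything in it (the function names, the 100-karma threshold, the one-week window) is invented for illustration and is not the actual LW codebase's API:

```python
# Sketch of options 1 and 2: karma-gated visibility with an expiry window,
# plus a "noindex" header to keep pages out of search engines.  All names
# and numbers here are invented for illustration, not taken from LW's code.
from datetime import datetime, timedelta

KARMA_THRESHOLD = 100              # assumed gate; the mods would pick a value
PUBLIC_WINDOW = timedelta(days=7)  # assumed window before a post "expires"

def can_view(post_created_at: datetime, viewer_karma: int) -> bool:
    """Low-karma readers lose access once the post is older than the window."""
    expired = datetime.utcnow() - post_created_at > PUBLIC_WINDOW
    return (not expired) or viewer_karma >= KARMA_THRESHOLD

def crawler_headers() -> dict:
    """Ask well-behaved crawlers not to index, cache, or archive the page."""
    return {"X-Robots-Tag": "noindex, noarchive"}

# A fresh post is visible to everyone; a month-old one only to high karma.
assert can_view(datetime.utcnow(), viewer_karma=0)
assert not can_view(datetime.utcnow() - timedelta(days=30), viewer_karma=0)
assert can_view(datetime.utcnow() - timedelta(days=30), viewer_karma=250)
```

None of this stops a determined copier, of course; it only keeps restricted posts out of casual Google results and away from low-karma drive-by readers.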
Replies from: handoflixue, roystgnr
comment by handoflixue · 2012-12-24T21:51:17.225Z · LW(p) · GW(p)

Implement a full-on OTR-style system providing full deniability through crypto. Rather than stopping content from being copied, just make sure you can claim any copy is a forgery, and nobody can prove you wrong. A MAJOR engineering effort of course, but totally possible, and 100% effective.

I can't help but see three major flaws:

1) If I link to a major, encrypted offshoot of LessWrong, people will AUTOMATICALLY be suspicious and it will damage PR.

2) Why would it be any easier to cry "it's a forgery" in this situation versus me posting a screenshot of an unencrypted forum? o.o Especially given #1...

3) I can share my password / decryption key / etc..

Replies from: kodos96
comment by kodos96 · 2012-12-24T21:58:55.130Z · LW(p) · GW(p)

Well, point 3 can be eliminated by proper use of crypto. See OTR

The response to point 2 is that having it be publicly known to everyone that message contents are formally, mathematically, provably deniable (as can be guaranteed by a proper crypto implementation) disincentivizes people from even bothering to re-post content in the first place.

Point 1, however, I agree with completely, and that's why I'm not actually advocating this solution.

Replies from: handoflixue
comment by handoflixue · 2012-12-25T00:00:50.395Z · LW(p) · GW(p)

You're flat-out wrong about #3. Encryption is just a mathematical algorithm; it doesn't care who uses it, only that you have the key.

In short, encryption is just a very complex function, so you feed Key + Message in, and you get an Output. f(K,M) = O

I already have access to Key and Message, so I can share both of those. The only thing you can possibly secure is f().

If you have a cryptographic program, like OTR, I can just decompile it and get f(), and then post a modified version that lets the user manually configure their key (I think this is actually trivial in OTR, but it's been years since I poked at it).

If it's a website where I login and it auto-decrypts things for me, then I can just send someone the URL and the key I use.

Point 2 seems to rely on point 3, and as far as I'm aware the only formally mathematically provably deniable method WHEN THE KEY IS COMPROMISED is a one-time pad.

I'm not sure how much crypto experience you have, but "and no one else knows the key" is a foundation of every algorithm I have ever worked on, and I'm reasonably confident that it's a mathematical requirement. I simply cannot imagine how you could possibly write a crypto algorithm that is secure EVEN with a compromised key.
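
To make the shared-key point concrete, here is a minimal sketch using Python's standard hmac module; the key and messages are invented for illustration:

```python
# Sketch of why a shared-key MAC gives deniability: both parties hold the
# same key, so a valid tag proves nothing to a third party about who wrote
# the message.  Key and messages are invented for illustration.
import hashlib
import hmac

key = b"shared-secret-between-alice-and-bob"  # both parties know this

def tag(message: bytes) -> bytes:
    """Authentication tag that anyone holding `key` can compute."""
    return hmac.new(key, message, hashlib.sha256).digest()

# Alice sends an authenticated message; Bob checks it came from a key-holder.
msg = b"meet at noon"
t = tag(msg)
assert hmac.compare_digest(t, tag(msg))  # verifies for Bob

# But Bob holds the same key, so he can "forge" a message Alice never sent,
# with a tag indistinguishable from one Alice would have produced:
forged_tag = tag(b"something incriminating")

# Between the two parties the MAC authenticates; to any third party a validly
# tagged transcript is worthless as evidence of authorship.  That is the
# deniability described above -- and also the forgeability-once-the-key-is-
# shared that this comment describes.
```

This is the trade-off OTR-style deniability rests on: the same key-sharing that makes a transcript deniable to outsiders is exactly what lets any key-holder forge it.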

EDIT: If you still think I'm wrong, can you please give me a sense of your crypto experience? For reference: I've met with the people who wrote OTR and hang out in a number of crypto circles, but only do fairly basic stuff in my actual work. I do still have a hobby interest in it, and follow it, but the last time I did any serious code breaking was about a decade ago.

Replies from: kodos96
comment by kodos96 · 2012-12-25T00:36:23.173Z · LW(p) · GW(p)

You seem to be using a very narrow definition of "crypto"... I'm not sure whether you're just being pedantic about definitions, in which case you may be correct, or if you're actually disputing the substance of what I'm saying. To answer your question, I'm not a cryptographer, but I have a CS degree and am quite capable of reading and understanding crypto papers (though not of retaining the knowledge for long)... it's been several years since I read the relevant papers, so I might be getting some of the details wrong in how I'm explaining it, but the basic concept of deniable message authentication is something that's well understood by mainstream cryptographers.

You seem to be aware of the existence of OTR, so I'm confused - are you claiming that it doesn't accomplish what it says it does? Or just that something about the way I'm proposing to apply similar technology to this use case would break some of its assumptions? The latter case is entirely possible, as so far I've put a grand total of about 5 minutes thought into it... if that's the case I'd be curious to know what are the relevant assumptions my proposed use case would break?

Replies from: handoflixue
comment by handoflixue · 2012-12-25T00:52:17.868Z · LW(p) · GW(p)

If I give you my key, you can pretend to be me on OTR. I've had friends demonstrate this to me, but I've never done it myself, so 99% confidence.

Technical disagreement, as near as I can tell, since you're not advocating for the solution.

comment by roystgnr · 2012-12-24T19:10:51.886Z · LW(p) · GW(p)

This must be why the media companies haven't given up on DRM yet. They think if they can just unmask and arrest the ringleaders of the "organized conspiracy out there" then copy protection will start working, when in reality any random person can become a "conspiracy" member with nothing more than a little technical knowledge, a little free time, and a moral code that encourages copying.

To be fair, the "vetting" and "full deniability" options don't really apply to the ??AA. The best pre-existing example for those kinds of policies might be the Freemasons or the Mormons? In neither case would I be confident that the bad PR they've avoided by hiding embarrassing things hasn't been worse than the bad PR they've abetted by obviously dissembling and/or by increasing the suspicion that they're hiding even worse things.

Replies from: kodos96
comment by kodos96 · 2012-12-24T20:14:06.074Z · LW(p) · GW(p)

In neither case would I be confident that the bad PR they've avoided by hiding embarrassing things hasn't been worse than the bad PR they've abetted by obviously dissembling and/or by increasing the suspicion that they're hiding even worse things.

Exactly. That's why I'm not actually advocating any of these technical solutions, just pointing out that they do exist in solution-space.

The solution that I'm actually advocating is even simpler still: do nothing. Rely on self-policing and the "don't be an asshole" principle, and in the event that that fails (which it hasn't yet), then counter bad speech with more speech: clearly state "LW/SIAI does not endorse this suggestion, and renounces the use of violence." If people out there still insist on slandering SIAI by association to something some random guy on LW said, then fuck em - haters gonna hate.

comment by Tenoke · 2012-12-24T02:14:15.327Z · LW(p) · GW(p)

Not a bad option, indeed. It has merit if we are really that bothered about the general view of LW.

And for the record, the post is still accessible, albeit deleted.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T02:27:00.318Z · LW(p) · GW(p)

LW has effectively zero resources to implement software changes.

Replies from: kodos96
comment by kodos96 · 2012-12-24T04:24:41.109Z · LW(p) · GW(p)

If this were your real rejection, you would be asking for volunteer software-engineer-hours.

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T05:00:00.427Z · LW(p) · GW(p)

Tried.

Replies from: gelisam
comment by gelisam · 2012-12-24T07:24:29.805Z · LW(p) · GW(p)

Are you kidding? Sign me up as a volunteer polyglot programmer, then!

Although, my own eagerness to help makes me think that the problem might not be that you tried to ask for volunteers and didn't get any, but rather that you tried to work with volunteers and something else didn't work out.

Replies from: yli, Risto_Saarelma
comment by yli · 2012-12-24T11:08:28.697Z · LW(p) · GW(p)

Maybe it's just that volunteers that will actually do any work are hard to find. Related.

Personally, I was excited about doing some LW development a couple of years ago and emailed one of the people coordinating volunteers about it. I got some instructions back but procrastinated forever on it and never ended up doing any programming at all.

Replies from: gelisam
comment by gelisam · 2012-12-24T16:27:20.781Z · LW(p) · GW(p)

I understand how that might have happened. Now that I am no longer a heroic volunteer saving my beloved website maiden, but just a potential contributor to an open source project, my motivation has dropped.

It is a strange inversion of effect. The issue list and instructions both make it easier for me to contribute, but since they reveal that the project is well organized, they also demotivate me because a well-organized project makes me feel like it doesn't need my help. This probably reveals more about my own psychology than about effective volunteer recruitment strategies, though.

comment by Risto_Saarelma · 2012-12-24T07:29:06.865Z · LW(p) · GW(p)

The site is open source, you should be able to just write a patch and submit it.

Replies from: kodos96
comment by kodos96 · 2012-12-24T07:59:05.123Z · LW(p) · GW(p)

This would be a poor investment of time without first getting a commitment from Eliezer that he will accept said patch.

Replies from: Risto_Saarelma, gelisam
comment by Risto_Saarelma · 2012-12-24T08:05:00.095Z · LW(p) · GW(p)

It'd get you familiar with the code base, which you'd need to be anyway if you wanted to be a volunteer contributor.

comment by gelisam · 2012-12-24T15:40:49.435Z · LW(p) · GW(p)

After finding the source and the issue list, I found instructions which indicate that there are, after all, non-zero engineering resources for lesswrong development. Specifically, somebody is sorting the incoming issues into "issues for which contributions are welcome" versus "issues which we want to fix ourselves".

The path to becoming a volunteer contributor is now very clear.

Replies from: AdeleneDawner
comment by AdeleneDawner · 2012-12-25T03:35:03.222Z · LW(p) · GW(p)

non-zero engineering resources

effectively zero

Getting someone to sort a list, even on an ongoing basis, is not functionally useful if there's nobody to take action on the sorted list.

comment by handoflixue · 2012-12-24T21:38:58.227Z · LW(p) · GW(p)

I like the idea, but I have to agree that the PR cost of such a thing being leaked is probably vastly worse than simply being open about it in the first place.

comment by jimrandomh · 2012-12-23T23:27:14.431Z · LW(p) · GW(p)

Posts advocating or "asking about" violence against identifiable real people or groups should be deleted at the admins' discretion:

[pollid:374]

Posts advocating or "asking about" violation of laws that are actually enforced against middle-class people, other than the above, should be deleted at the admins' discretion:

[pollid:375]

Replies from: gjm, jimrandomh, Eliezer_Yudkowsky
comment by gjm · 2012-12-24T02:08:22.102Z · LW(p) · GW(p)

This poll, like EY's original question, conflates two things that don't obviously belong together. (1) Advocating certain kinds of act. (2) "Asking about" the same kind of act.

I appreciate that in some cases "asking about" might just be lightly-disguised advocacy, or apparent advocacy might just be a particularly vivid way of asking a question. I'm guessing that the quotes around "asking about" are intended to indicate something like the first of these. But what, exactly?

Replies from: jbeshir
comment by jbeshir · 2012-12-24T02:51:36.388Z · LW(p) · GW(p)

I think in this context, "asking about" might include raising a topic for neutral discussion without drawing moral judgements.

The connection I see between them is that if someone starts neutral discussion about a possible action, actions which would reasonably be classified as advocacy have to be permitted if the discussion is going to progress smoothly. We can't discuss whether some action is good or bad without letting people put forward arguments that it is good.

Replies from: gjm
comment by gjm · 2012-12-24T03:15:34.295Z · LW(p) · GW(p)

There's certainly a connection. I'm not convinced the connection is so intimate that if censoring one is a good idea then so is censoring the other.

comment by jimrandomh · 2012-12-23T23:35:02.144Z · LW(p) · GW(p)

This is not a poll, but

...but it'd be nice to have a poll to point at later, to show consensus, and I'd be surprised if people disagreed.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T02:27:55.040Z · LW(p) · GW(p)

Posts like these are selectively read. Then not everyone votes in the poll. Shrug.

Replies from: Eugine_Nier, Jay_Schweikert
comment by Eugine_Nier · 2012-12-24T07:52:26.110Z · LW(p) · GW(p)

Translation: these polls frequently don't go the way I want, so I need an excuse to dismiss them.

Replies from: TheOtherDave
comment by TheOtherDave · 2012-12-24T08:33:32.490Z · LW(p) · GW(p)

He needs an excuse? Recall the original post said, straight up:

In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole

comment by Jay_Schweikert · 2012-12-25T20:42:41.232Z · LW(p) · GW(p)

Is that your true rejection? That is, if this poll were posted to Main, and all readers were encouraged to answer, and the results came back essentially the same, would you then allow the results to influence what kind of policy to adopt? Or are you just sufficiently confident about the need for such a moderation policy that, absent clear negative consequences not previously considered, you'll implement it anyway?

I don't mean at all to suggest that the latter answer is inappropriate. Overall I trust your moderating judgment, and you clearly have more experience with and have thought more about LW's public image than probably anyone. If you decide the strong version of this policy is needed, notwithstanding disagreement from most LW members, I'm happy to give substantial deference to that decision. But does it matter either way whether this post is selectively read?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-25T21:17:57.345Z · LW(p) · GW(p)

Not if readers were "encouraged" to answer. If there were some way of knowing the population was representative (i.e. we selected at random and got back responses from everyone selected)... hm, possibly. I know that what people say at local LW gatherings has a stronger influence on me than what I hear online, but that could be for 'improper' reasons of face-to-face contact or greater personal familiarity.

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-26T03:42:26.639Z · LW(p) · GW(p)

I know that what people say at local LW gatherings has a stronger influence on me than what I hear online, but that could be for 'improper' reasons of face-to-face contact or greater personal familiarity.

The Bay Area meetup is most definitely not representative of LW in general. Heck, the Bay Area is an extreme outlier even by California standards, not to mention US or world standards.

comment by handoflixue · 2012-12-24T23:43:15.695Z · LW(p) · GW(p)

It seems like this represents, not simply a new rule, but a change in the FOCUS of the community. Specifically, it used to be entirely about generating good ideas, and you are now adding a NEW priority which is "generating acceptable PR".

Quite possibly there is an illusion of transparency here, because there hasn't really BEEN (to my knowledge) any discussion about this change in purpose and priorities. It seems reasonable to be worried that this new priority will get in the way of, or even supersede, the old priority, especially given the phrasing of this.

At a minimum, it's a slippery slope - if we make one concession to PR, it's reasonable to assume others will be made as well. I don't know if that's the case - if I'm in error on that point, feel free to mention it.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-24T23:56:53.564Z · LW(p) · GW(p)

When you go on a first date with someone, would you tell them "hey, I've got this great idea about how I should [insert violence here] in order to [insert goal here]. What do you think?" Of course not, because whether or not this is a good idea, you are not getting a second date.

PR isn't inherently Dark Arts. It's about providing evidence to another party about yourself or your organization in a way which is conducive to further provision of evidence. If you start all your dates by talking about your worst traits first, you aren't giving your date incentives to stick around and learn about your best traits. If LW becomes known for harboring discussions of terrorism or whatever, you aren't giving outsiders incentives to stick around and learn about all the other interesting things happening on LW, or work for SIAI, etc.

Replies from: handoflixue, DanArmak, ChristianKl
comment by handoflixue · 2012-12-25T00:55:03.753Z · LW(p) · GW(p)

You'd be amazed how many second dates I get...

That said, I don't think PR is Dark Arts, I just think it's an UNSPOKEN change in community norms, and... from a PR standpoint, this post is a blatantly stupid way of revealing that change.

Huh. Either the original post is bad because PR is bad, or this post is bad because it demonstrates bad PR. Lose/lose :)

comment by DanArmak · 2012-12-26T19:57:25.882Z · LW(p) · GW(p)

If you start all your dates by talking about your worst traits first

This begs the question by assuming the proposed violence is a bad trait.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-26T22:51:05.522Z · LW(p) · GW(p)

All I'm assuming is that a typical date will assume that people who talk about violence on the first date are crazy and/or violent themselves. This is an argument about first impressions, not an argument about goodness or badness.

comment by ChristianKl · 2012-12-25T16:40:13.921Z · LW(p) · GW(p)

If you start all your dates by talking about your worst traits first, you aren't giving your date incentives to stick around and learn about your best traits.

If I went on a date with a girl who believed in the necessity of a communist revolution, I wouldn't judge her negatively for that political belief. There are character traits that I would judge much more harshly.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-25T22:54:53.388Z · LW(p) · GW(p)

Okay, but 1) the fact that you post on LW is already evidence that you're not representative of the general population in various ways, and 2) communist revolution is at least an idea that people learn about in college, and it's not too unusual to hear a certain type of person say stuff like that. I had in mind the subject of the deleted post; if a typical person heard someone talking like that, their first reaction would be that that person is crazy, and with a reasonable choice of priors this would be a reasonable inference to make.

Replies from: ChristianKl, DanArmak
comment by ChristianKl · 2012-12-25T23:39:03.206Z · LW(p) · GW(p)

I haven't read the deleted post. If someone who knows what the case is about would write to me via private message, I would appreciate it.

The communist revolution is a classic example of an idea that involves the advocacy of illegal violence against a specific group of people. There are certainly internet forums where that kind of political speech isn't welcome and will get deleted.

On LessWrong I think that's a position that should be allowed to be argued. Moldbuggian advocacy of a coup d'état should also be allowed.

Some people might think that you are crazy if you argue Moldbuggianism on a first date. At the same time, I think that idea should be within the realm of permissible discourse on LessWrong.

comment by DanArmak · 2012-12-26T19:59:06.022Z · LW(p) · GW(p)

the fact that you post on LW is already evidence that you're not representative of the general population in various ways

If LW-compatible people are more welcoming of discussion of violence than the general population, then the bad PR would affect them less than it would other people, so we should care less about bad PR.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T23:05:37.973Z · LW(p) · GW(p)

If there's a general policy against discussing violence on LW, and I can point to statements from the same timeframe of mine condemning such violence, it may help. It may not. Reporters are stupid. Your argument does not actually say why the anti-violence-discussion policy is a bad idea, and seems to be ad hominem tu quoque.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T20:03:10.853Z · LW(p) · GW(p)

Everyone even slightly famous gets arbitrary green ink. Choosing which green ink to 'complain' about on your blog, when it makes an idea look bad which you would find politically disadvantageous, is not a neutral act. I'm also frankly suspicious of what the green ink actually said, and whether it was, perhaps, another person who doesn't like the "UFAI is possible" thesis saying that "Surely it would imply..." without anyone ever actually advocating it. Why would somebody who actually advocated that, contact Ben Goertzel when he is known as a disbeliever in the thesis?

No, I don't particularly trust Ben Goertzel to play rationalist::nice with his politics. And describing him as a "former researcher at SIAI" is quite disingenuous of you, by the way; he never received any salary from us and is a long-time opponent of these ideas. At one point Tyler Emerson thought it would be a good idea to fund a project of his, but that's it.

Replies from: saturn
comment by saturn · 2012-12-24T22:05:49.539Z · LW(p) · GW(p)

And describing him as a "former researcher at SIAI" is quite disingenuous of you, by the way; he never received any salary from us and is a long-time opponent of these ideas. At one point Tyler Emerson thought it would be a good idea to fund a project of his, but that's it.

If that's the case, it seems like giving him the title Director of Research could cause a lot of confusion. I certainly find it confusing. Maybe that was a different Ben Goertzel?

Replies from: timtyler, Eliezer_Yudkowsky
comment by timtyler · 2012-12-28T00:24:18.209Z · LW(p) · GW(p)

Reportedly, Ben Goertzel and OpenCog were intended to add credibility through association with an academic:

It has similarly been a general rule with the Singularity Institute that, whatever it is we're supposed to do to be more credible, when we actually do it, nothing much changes. "Do you do any sort of code development? I'm not interested in supporting an organization that doesn't develop code"—> OpenCog—> nothing changes. "Eliezer Yudkowsky lacks academic credentials"—> Professor Ben Goertzel installed as Director of Research—> nothing changes.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T23:16:42.031Z · LW(p) · GW(p)

Honestly, at this point I'm willing to just call that a mistake on Tyler Emerson's part.

comment by Decius · 2012-12-24T05:23:31.537Z · LW(p) · GW(p)

Why the explicit class distinction?

It would be prohibited to discuss how to speed and avoid being cited for it. (I thought that this was already policy, and I believe it to be a good policy.)

It would not be prohibited to discuss how to be a vagrant and avoid being cited for it. (Middle class people temporarily without residences typically aren't treated as poorly as the underclass.)

Should the proper distinction be 'serious' crimes, or perhaps 'crimes of infamy'?

Replies from: DanArmak
comment by DanArmak · 2012-12-26T20:10:50.990Z · LW(p) · GW(p)

Should the proper distinction be 'serious' crimes, or perhaps 'crimes of infamy'?

As judged by who?

(I don't endorse EY's original proposal, either.)

Replies from: Decius
comment by Decius · 2012-12-26T20:48:55.223Z · LW(p) · GW(p)

As judged by the person making the decision to delete.

Replies from: DanArmak
comment by DanArmak · 2012-12-26T20:52:50.840Z · LW(p) · GW(p)

I don't think the words "serious crime" have the property that different judges would make very similar judgements about a given discussion.

Replies from: Decius
comment by Decius · 2012-12-26T21:05:00.837Z · LW(p) · GW(p)

Is that phrase better or worse than

laws that are actually enforced against middle-class people

Replies from: DanArmak
comment by DanArmak · 2012-12-26T23:20:38.498Z · LW(p) · GW(p)

"Laws that are actually enforced" is at least an empirical question. "Serious crime" is just a value judgement.

Replies from: Decius
comment by Decius · 2012-12-27T01:25:26.775Z · LW(p) · GW(p)

"Middle class" is just as much an undefined term as "serious crime".

It's concerning that we are having trouble agreeing on where the edge cases are, much less how to decide them.

comment by blacktrance · 2012-12-27T06:22:32.620Z · LW(p) · GW(p)

I don't have any principled objection to this policy, other than that as rationalists, we want to have fun, and this policy makes LW less fun.

comment by Larks · 2012-12-23T23:01:04.748Z · LW(p) · GW(p)

Does advocating gun control, or increased taxes, count? They would count as violence if private actors did them, and talking about them makes them more likely (by states). Is the public-private distinction the important thing - would advocating/talking about state-sanctioned genocide be ok?

Replies from: ikrase, Eugine_Nier, Luke_A_Somers, kodos96
comment by ikrase · 2012-12-24T01:04:55.362Z · LW(p) · GW(p)

While an interesting question, I think that the answer to that is reasonably obvious.

comment by Eugine_Nier · 2012-12-24T01:54:53.017Z · LW(p) · GW(p)

What about capital punishment and/or corporal punishment?

comment by Luke_A_Somers · 2012-12-24T05:23:45.270Z · LW(p) · GW(p)

To call either gun control or taxation violence is stretching matters beyond reasonable limits. The only sense in which they are is the sense in which any public policy is - that it is backed by the government. If anything to do with the government has to be considered as 'about violence'... bah.

Replies from: DanArmak, kodos96
comment by DanArmak · 2012-12-26T20:29:55.926Z · LW(p) · GW(p)

If anything to do with the government has to be considered as 'about violence'... bah.

Of course all laws enforced by governments are enforced with the threat of violence, and actual violence against violators. The law itself may not be violent, but violence will be used if necessary to enforce it.

Violence is not necessarily bad; it is a tool that may be the right one to use. Just as government and laws are not bad in themselves. If you object to saying government uses violence, you must be disagreeing with me over the meaning of the word "violence".

Replies from: Luke_A_Somers
comment by Luke_A_Somers · 2012-12-27T03:45:01.496Z · LW(p) · GW(p)

Sufficiently proximate to the point that simply talking about government is in fact a derail advocating violence? I think not.

Replies from: DanArmak
comment by DanArmak · 2012-12-28T12:25:04.452Z · LW(p) · GW(p)

Of course simply talking about government, or any particular government policy, is not about violence. And so it's not a derail that needs to be moderated.

But violence is an essential part of government. That's all I was saying.

Compare: simply talking about cryonics is not about quantum mechanics. If discussion of quantum mechanics were counterfactually forbidden, talking about cryonics would not be forbidden thereby. But cryonics, like all physical systems, is "implemented" or backed by quantum mechanics; you can't have one without the other.

comment by kodos96 · 2012-12-24T05:30:23.367Z · LW(p) · GW(p)

I don't think it's silly, and based on the LW survey results, neither do approximately 30.3% of LW users.

But aside from that, OP said "More generally: Posts or comments advocating or 'asking about' violation of laws that are actually enforced against middle-class people". Gun control (though not taxation) clearly falls under this illegality clause, without resort to classifying it as "violence".

Replies from: jsalvatier, Luke_A_Somers
comment by jsalvatier · 2012-12-24T12:25:54.151Z · LW(p) · GW(p)

I identify as libertarian and have been an Objectivist, but calling taxation theft (and making other similar claims) is almost always sneaking in connotations.

Replies from: kodos96
comment by kodos96 · 2012-12-24T16:31:09.920Z · LW(p) · GW(p)

True, but that doesn't change the fact that the wording of the proposed policy is heavily subject to interpretation, which is the point I was trying to make.

comment by Luke_A_Somers · 2012-12-24T06:17:54.737Z · LW(p) · GW(p)

'Libertarian' does not mean 'believes all government action is violence'.

comment by kodos96 · 2012-12-24T04:18:48.485Z · LW(p) · GW(p)

Does advocating gun control, or increased taxes, count? They would count as violence if private actors did them

In the event of gun control, it would in fact be illegal even if done by a state actor.

Edit: assuming USA of course.

comment by drethelin · 2012-12-25T07:46:56.980Z · LW(p) · GW(p)

Regardless of whether you think censoring is net good or bad (for the forums, for the world, for SIAI), you have to realize the risks fall far more on Eliezer than they do on any poster. His low-tolerance responses and angry swearing are exactly what you should expect of someone who feels the need to project an image of complete disassociation from any lines of speculation that could possibly get him in legal trouble. Eliezer's PR concerns are not just about the forums in general. If he's being consistent, they should be informing every one of his comments on this topic. There's little to nothing to be gained by trying to apply logic against this sort of totally justifiable (in my mind) conversational caution. Eliezer should probably also delete any comments about keeping criminal discussions off the public internet.

This is also why trying to point out his Okcupid profile as a PR snafu is a non sequitur. Nothing there can actually get him in trouble with the law.

Replies from: ChristianKl, kodos96
comment by ChristianKl · 2012-12-25T15:43:52.340Z · LW(p) · GW(p)

PR is something different than legal trouble. Eliezer didn't express any concern that the speech he wants to censor would create legal trouble. If Eliezer wants to make that argument, he should make it directly instead of speaking about PR.

In the US, speech that encourages violent conduct is protected under the First Amendment. See Brandenburg v. Ohio.

If you actually want to protect against legal threats, you should also forbid a lot of discussion that could fall under UK libel law.

comment by kodos96 · 2012-12-25T08:36:17.951Z · LW(p) · GW(p)

you have to realize the risks are far more on Eliezer than they are on any poster

This, I think, is the fundamental point of disagreement here. The emotional valence is far greater for Eliezer than for us, but if we're taking seriously the proposition that the singularity is coming in our lifetimes (and I do), then the risks are the same for all of us.

His low tolerance responses and angry swearing are exactly what you should expect

Angry swearing? Did I miss some posts? Link please.

This is also why trying to point out his Okcupid profile as a PR snafu is a non sequitur.

I suppose I should point out that when I referred earlier to Eliezer's occasional lapses in judgement, I was absolutely NOT intending to refer to this. I wasn't actively commenting at the time, but looking back on that episode, I found a lot of the criticism of Eliezer regarding his OKcupid profile to be downright offensive. When I first read the profile, I was actually incredibly impressed by the courage he displayed in not hiding anything about who he is.

Replies from: TheOtherDave, drethelin
comment by TheOtherDave · 2012-12-25T16:09:53.662Z · LW(p) · GW(p)

When I first read the profile, I was actually incredibly impressed by the courage he displayed in not hiding anything about who he is.

In my experience, it's difficult to display a high level of courage about revealing the truth about myself and at the same time commit to moderating the image I present so as to avoid public-relations failures. At some point, tradeoffs become necessary.

comment by drethelin · 2012-12-25T08:46:19.578Z · LW(p) · GW(p)

http://lesswrong.com/lw/g24/new_censorship_against_hypothetical_violence/84fd but I think there's others

I think it's fairly reasonable for Eliezer to guard against large risks to himself in exchange for small and entirely theoretical risks to the effectiveness of the forums. I don't think this censorship decision has a very meaningful impact either way on FAI. You can also factor in his own (possibly inflated) evaluation of his value to the cause. A 0.01 percent chance of Eliezer getting jailed might be worse than a 10 percent chance of stifling a mildly useful conversation or having someone quit the forums due to censorship.

Replies from: kodos96
comment by kodos96 · 2012-12-25T08:58:16.867Z · LW(p) · GW(p)

I don't think this censorship decision has a very meaningful impact either way on FAI.

I disagree. One of the most common objections to the idea of FAI/CEV is "so will this new god-like AI restrict fundamental civil liberties after it takes over?"

Replies from: drethelin
comment by drethelin · 2012-12-25T09:03:15.760Z · LW(p) · GW(p)

I've never heard this objection.

Fundamental civil liberties is also a fundamentally diseased concept.

Replies from: ChristianKl, kodos96
comment by ChristianKl · 2012-12-25T15:49:35.845Z · LW(p) · GW(p)

Fundamental civil liberties is also a fundamentally diseased concept.

If you explain that position in huge detail, there's a plausible chance that it includes advocacy of illegal conduct and could therefore be censored under this policy.

Replies from: Desrtopa
comment by Desrtopa · 2012-12-25T16:10:48.519Z · LW(p) · GW(p)

Keep in mind that the policy is going to be done through human implementation with the specific intention of avoiding inconveniently broad interpretations like this.

It's not enough to show that the censorship policy could theoretically be used to stifle conversation we actually want here, the important question is whether it actually would be.

Replies from: ChristianKl
comment by ChristianKl · 2012-12-25T16:54:06.254Z · LW(p) · GW(p)

It's not enough to show that the censorship policy could theoretically be used to stifle conversation we actually want here, the important question is whether it actually would be.

I think that's a very dangerous idea. This community is about developing FAI. FAI should be expected to act according to the rules that you give it. I think the policy should be judged by the way it would actually work if it were applied as proposed.

There's also the problem of encouraging groupthink: advocacy of illegal conduct gets censored when it goes against the morality of this group, but is allowed when it falls within that morality. That is bad.

This community should have consistent rules about which discussions are allowed and which aren't. Censoring on a case-by-case basis is problematic.

If you censor certain speech that advocates violence while avoiding censoring other speech that advocates violence, you also take on more responsibility for the speech that you allow.

In the absence of a censorship policy, you don't endorse a viewpoint by not censoring it. If, however, you do censor, then a decision not to censor specific speech is an endorsement of that speech.

Replies from: Desrtopa
comment by Desrtopa · 2012-12-25T23:42:06.532Z · LW(p) · GW(p)

The way it's proposed is to be applied according to the judgment of a moderator. It makes no sense to pretend that we're beholden to the strictest letter of the rule when that's not how it's actually going to work.

What speech that advocates violence do you think would get a pass while the rest would get censored?

Replies from: ChristianKl
comment by ChristianKl · 2012-12-25T23:59:03.599Z · LW(p) · GW(p)

I don't know exactly how much speech Eliezer wants to censor. I wrote a post with a bunch of examples. I would like to see which of those examples Eliezer considers worthy of censorship.

comment by kodos96 · 2012-12-25T09:09:52.577Z · LW(p) · GW(p)

Fundamental civil liberties is also a fundamentally diseased concept.

Please explain. (I've heard this argued before, but I'm curious what your particular angle on it is)

Replies from: Jayson_Virissimo
comment by Jayson_Virissimo · 2012-12-25T09:23:05.273Z · LW(p) · GW(p)

Please explain. (I've heard this argued before, but I'm curious what your particular angle on it is)

He is probably pattern-matching "fundamental civil liberties" to Natural Rights, which are not taken very seriously around these parts, since they are mostly myth.

comment by buybuydandavis · 2012-12-24T22:53:22.027Z · LW(p) · GW(p)

If the point was to "make a good impression" by distorting the impression given by people on the list to potential donors, maybe a more effective strategy is to shut up and do it, instead of making an announcement about it and causing a ruckus. "Disappear" the problems quietly and discreetly.

This reminds me of the phyg business. Prompting long discussion threads of "We are not a phyg! We are not a phyg!" is not recommended behavior if you don't want people to think you're a phyg.

Replies from: Qiaochu_Yuan
comment by Qiaochu_Yuan · 2012-12-24T23:58:56.411Z · LW(p) · GW(p)

People notice when this happens, and the resulting uproar might have been worse; then accusations would be flying about lack of transparency.

Replies from: buybuydandavis
comment by buybuydandavis · 2012-12-25T03:36:26.582Z · LW(p) · GW(p)

Might have been. I didn't see it, and I didn't see any brouhaha over a deleted post. Was there one? More than the 318 comments in this thread announcing the policy? I see lots of downvotes for EY. I don't think it's going well.

The advantage of just doing it is that many people will not notice it at all, and those who do notice have seen the offending post, and so have a concrete context to go by. When people haven't seen the thread, they talk about censorship in the abstract. My guess is that in a concrete case, at worst it will come across as an overreaction to some boorish behavior.

If the concrete case really involved stifling ideas, I'd expect people to make a huge stink about it. The big stink about policy, and the absence (as far as I can detect) of any stink about the concrete case, tells me that people are getting their undies in a bunch over nothing.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T20:18:19.427Z · LW(p) · GW(p)

If combined with a "Please write him and ask him to shut down!", sure. I think it's understood by default in most civilized cultures that violence is not being advocated by default when other courses of action are being presented. If the action to be taken is mysteriously left unspecified, it'd be a judgment call depending on other language used.

comment by Qiaochu_Yuan · 2012-12-24T11:13:53.935Z · LW(p) · GW(p)

I wouldn't have posted the following except that I share Esar's concerns about representativeness:

I think this is a good idea. I think using the word "censorship" primes a large segment of the LW population in an unproductive direction. I think various people are interpreting "may be deleted" to mean "must be deleted." I think various people are blithely ignoring this part of the OP (emphasis added):

In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole

In particular, I think people are underestimating how important it is for LW not to look too bad, and also underestimating how bad LW could be made to look by discussions of the type under consideration.

Finally, I strongly agree that

anyone talking about a proposed crime on the Internet fails forever as a criminal[.]

Replies from: kodos96
comment by kodos96 · 2012-12-24T17:34:33.787Z · LW(p) · GW(p)

In particular, I think people are underestimating how important it is for LW not to look too bad

I'm not underestimating that at all... I'm saying that this policy makes us look bad... WAY worse than the disease it's intended to cure, especially in light of the fact that that disease cleared itself up in a few hours with no intervention necessary.

comment by kodos96 · 2012-12-24T03:56:25.190Z · LW(p) · GW(p)

How about instead of outright censorship, such discussions be required to be encrypted, via double-rot13?

Replies from: None
comment by [deleted] · 2012-12-24T04:13:38.067Z · LW(p) · GW(p)

Rot13 applied twice is just the original text...
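(To spell it out: rot13 shifts each letter 13 places, and the alphabet has 26 letters, so applying it twice returns the original text. A minimal Python sketch, purely as an illustration, using the standard library's rot13 codec:)

    import codecs

    message = "any sufficiently censorable hypothetical"
    once = codecs.encode(message, "rot13")   # each letter shifted 13 places
    twice = codecs.encode(once, "rot13")     # shifted 13 more, completing the 26-letter cycle
    assert twice == message                  # "double-rot13" is the identity: no encryption at all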

Replies from: kodos96
comment by kodos96 · 2012-12-24T04:30:02.216Z · LW(p) · GW(p)

..............whooosh................

Replies from: None
comment by [deleted] · 2012-12-24T13:52:30.935Z · LW(p) · GW(p)

In light of the above getting upvotes, I'm not sure if it's the "whoosh" of double-rot13 going over your head as I originally thought, or if it's indicating intended sarcasm going over my head, or some other meaning not readily obvious to me (inferential distance and all that.)

Replies from: Manfred
comment by Manfred · 2012-12-24T16:17:52.699Z · LW(p) · GW(p)

Yes, the original comment was a joke.

comment by wedrifid · 2012-12-24T00:26:01.992Z · LW(p) · GW(p)

Yes, a post of this type was just recently made. I will not link to it, since this censorship policy implies that it will shortly be deleted, and reproducing the info necessary to say who was hypothetically targeted and why would be against the policy.

Someone please send me a link via PM? Or perhaps the author could PM me? Not because the censorship of that class bothers me, but because talking to wedrifid is not posting things on the internet; I'm curious, and there are negligible consequences for talking to me about interesting hypothetical questions.

(Disregard the above if the post or comment was boring.)

Replies from: None
comment by [deleted] · 2012-12-24T00:34:57.352Z · LW(p) · GW(p)

tl;dr: tobacco kills more people than guns and cars combined. Should we ?

PS: fuck the police

Replies from: wedrifid
comment by wedrifid · 2012-12-24T00:47:37.599Z · LW(p) · GW(p)

tl;dr: tobacco kills more people than guns and cars combined. Should we ?

PS: fuck the police

(I laughed). Thanks nyan. (I hope this kind of satirical summary is considered acceptable.)

Replies from: kodos96, CronoDAS
comment by kodos96 · 2012-12-24T04:27:56.289Z · LW(p) · GW(p)

I hope this kind of satirical summary is considered acceptable

This kind of uncertainty about what is and is not acceptable is perhaps the primary reason why such censorship policies are evil.

Replies from: Viliam_Bur, William_Quixote
comment by Viliam_Bur · 2012-12-26T01:24:52.273Z · LW(p) · GW(p)

This is a huge exaggeration!

I mean, yes, in a far mode, censorship creates fear and so on... but let's come back to near mode and ask: "What is the worst consequence of stepping just a little on the wrong side of this uncertain line?"

Well, Eliezer would delete my comment or article, and that's it. It does not really make my legs shake.

My guess is that "tobacco kills more people than guns and cars combined. Should we ?", written literally like this, is acceptable. Probability estimate? It would be 98% in a different discussion on a different day, and perhaps 95% here and now because Eliezer may still be in the deleting mood. So what? If I am wrong, he will delete that comment, and perhaps also my comment for quoting it. And that's all. Am I afraid? No. Actually, I would probably not even notice if that happened.

Generally, I also prefer precise rules to imprecise ones, but there are limits to how precise one can be in topics like this. Trying to make the rules exact (to avoid all harm, but allow all harmless discussion) is a FAI-complete problem. Even real-world laws often have imprecise parts. Also, the more precise the rules, the greater the pressure on moderators to follow them literally; but I would prefer them using their own judgement.

Replies from: ChristianKl
comment by ChristianKl · 2012-12-27T15:42:29.001Z · LW(p) · GW(p)

Trying to make the rules exact (to avoid all harm, but allow all harmless discussion) is a FAI-complete problem.

Eliezer should engage in deliberate practice that involves solving FAI-complete problems, rather than shying away from putting thought into them and leaving the rules vague because the problem is too hard.

comment by William_Quixote · 2012-12-25T23:54:20.810Z · LW(p) · GW(p)

When time passes and the above post is not censored, uncertainty will decrease. Arguments about chilling effects due to uncertainty are probably systematically overstated, because the amount of uncertainty we have now is much higher than the average uncertainty over the life of the policy.

Also the post in question is great.

P.S. fuck the police.

comment by CronoDAS · 2012-12-24T02:45:39.757Z · LW(p) · GW(p)

As the author of the offending Discussion post in question, I'd say it's an adequate summary.

comment by shminux · 2012-12-23T22:48:40.317Z · LW(p) · GW(p)

Would it censor a discussion of, say, compelling an AI researcher by all means necessary to withhold their research from, say, the military?

Replies from: Eliezer_Yudkowsky
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T02:25:27.653Z · LW(p) · GW(p)

Yes. This seems like yet another example of "First of all, it's a bad fucking idea, second of all, talking about it makes everyone else look bad, and third of all, if hypothetically it was actually a good idea you'd still be a fucking juvenile idiot for blathering about it on the public Internet." What part of "You fail conspiracies forever" is so hard for people to understand? Talk like this serves no purpose except to serve as fodder for people who claim that it leads to violence and is therefore false, and your comment shall be duly deleted once this policy is put into place.

Replies from: kodos96
comment by kodos96 · 2012-12-24T04:21:10.284Z · LW(p) · GW(p)

I don't see how this comment even fits the proposed policy, except under a motivatedly-broad reading of "by all means necessary"

Replies from: CarlShulman, ciphergoth
comment by CarlShulman · 2012-12-24T04:46:56.224Z · LW(p) · GW(p)

Wikipedia thinks otherwise:

By any means necessary is a translation of a phrase coined by the French intellectual Jean Paul Sartre in his play Dirty Hands. It entered the popular culture through a speech given by Malcolm X in the last year of his life. It is generally considered to leave open all available tactics for the desired ends, including violence; however, the “necessary” qualifier adds a caveat—if violence is not necessary, then presumably, it should not be used.

Replies from: kodos96
comment by kodos96 · 2012-12-24T04:51:41.204Z · LW(p) · GW(p)

I was unaware of that connotation. But I don't think it changes the equation. There are a million different ways to interpret "by all means necessary", the vast majority of which would not be construed to include violence. If this were a forum in which Sartre/Malcolm X references were the norm, then that would be different. But it isn't.

Replies from: Nick_Tarleton, Decius
comment by Nick_Tarleton · 2012-12-24T05:12:23.063Z · LW(p) · GW(p)

I and the one person currently in the room with me immediately took "by all means necessary" to suggest violence. I think you're in a minority in how you interpret it.

Replies from: kodos96
comment by kodos96 · 2012-12-24T05:19:18.004Z · LW(p) · GW(p)

OK, I'll update on that.

comment by Decius · 2012-12-26T21:15:10.480Z · LW(p) · GW(p)

"By all means necessary" very much means "don't hesitate to use violence". When that phrase isn't required to grant sanction to violence (as when used in military orders), it instead gives sanction to whatever acts aren't already implied (such as the violation of military protocol and/or use of prohibited weapons/tactics).

comment by Paul Crowley (ciphergoth) · 2012-12-24T12:48:14.463Z · LW(p) · GW(p)

Just checked with my houseguest; his interpretation is also "a call to violence".

comment by NancyLebovitz · 2012-12-25T06:28:29.016Z · LW(p) · GW(p)

The overall emotional tone. The lack of calls to direct action. The encouragement to think about the effects of one's actions, with thinking including that you take an honest look at opposing points of view.

comment by Kevin · 2012-12-25T05:55:50.799Z · LW(p) · GW(p)

What would the response to this have been if instead of "censorship policy" the phrase would have been "community standard"?

Replies from: katydee
comment by katydee · 2012-12-29T01:09:20.353Z · LW(p) · GW(p)

It probably would have been more positive but less honest.

comment by DanArmak · 2012-12-26T19:55:03.791Z · LW(p) · GW(p)

laws that are actually enforced against middle-class people

Different countries can have very different laws. Are you going to enforce this policy with reference to U.S. laws only, as they exist in 2012? If not, what is your standard of reference?

As I commented elsewhere, if your goal is to prevent bad PR, it is not obvious to me that this policy is the right way to optimize for it. Perhaps you have thought this out and have good reasons for believing that this policy is best for this goal, but it is not clear to me, so please elaborate on this if you can.

comment by MugaSofer · 2012-12-24T22:24:41.435Z · LW(p) · GW(p)

So ... we can't discuss assassinating the president; that seems fine, if unnecessary. But we can't debate kidnapping people and giving the ransom to charity without the "pro-" side being deleted? That seems like it will either destroy a lot of valuable conversations for minimal PR gain, or be enforced inconsistently, leading to accusations of bias. Probably both.

comment by handoflixue · 2012-12-24T20:46:54.402Z · LW(p) · GW(p)

"anyone talking about a proposed crime on the Internet fails forever as a criminal"

I realize this isn't your TRUE objection, just a bit of a tangential "Public Service Announcement". The real concern is simply PR / our appearance to outsiders, right? But... I'm confused why you feel the need to include such a PSA.

Do we have a serious problem with people saying "Meet under the Lincoln Memorial at midnight, the pass-phrase is Sic semper tyrannis" or "I'm planning to kill my neighbor's dog, can you please debug my plot, I live in Brooklyn at 123 N Stupid Ave"?

You can use Private Messaging to send me actual examples, without causing a public reputation hit. I can't recall ever reading anything like that on this site.

Replies from: All_work_and_no_play
comment by All_work_and_no_play · 2012-12-25T04:56:55.122Z · LW(p) · GW(p)

Look. He's a man who earns his money by fear-mongering. Passably bright, which makes it so much worse from a moral standpoint. He has the opportunity to actually condemn violence. But that would require him to apologize for having himself posted very inflammatory statements about other people's projects, because by those moral standards they are equally bad. So he chooses to make it abundantly clear that it's only for PR and, no, not a real condemnation.

(As to the point at hand, there was a post by someone which can be summarized as "tobacco kills a lot of people, utilitarian bla bla, why not off tobacco executives".)

comment by NancyLebovitz · 2012-12-24T18:41:13.229Z · LW(p) · GW(p)

LessWrong attracts all sorts of people including would-be unabombers of varying level of dedication.

Why do you think LW attracts would-be unabombers?

comment by Dahlen · 2012-12-24T15:06:26.228Z · LW(p) · GW(p)

(I seriously should've posted this question back when the thread only had 3 comments.)

I have no qualms about the policy itself, it's only commonsensical to me; my question is only tangentially related:

Do you believe "censorship" to be a connotatively better term than "moderation"?

Replies from: fubarobfusco
comment by fubarobfusco · 2012-12-24T18:25:43.588Z · LW(p) · GW(p)

To me this sounds like a mix of counter-signaling and a way of saying, "Yes, this proposal is controversial, as policy debates should not appear one-sided. There are ups and downs. There are reasons to believe that some of the negatives that attach to other things called 'censorship' may happen here."

comment by buybuydandavis · 2012-12-24T22:38:10.485Z · LW(p) · GW(p)

"Reason: Talking about such violence makes that violence more probable,"

Does it? In some cases yes, and some cases no. Wacky people advocating violence can be smacked down by the crowd. Wacky violent loners need perspective from other people.

Talking about socially approved violence probably makes it more likely. Talking about socially disapproved violence might make it less likely.

The problem with these conversations is the generalities involved. We don't have examples of the offending material, and worse, the whole point is to hide the examples of offending material.

SUGGESTION: Move disapproved posts to their own thread where replies aren't allowed. Wouldn't you get some brownie points from the people you're trying to impress by showing that you ban all this stuff you think they disapprove of?

My guess is that the policy will be applied reasonably, and were people allowed to see what was banned, they'd think so too.

comment by handoflixue · 2012-12-24T21:34:52.930Z · LW(p) · GW(p)

I'd ask if there's any evidence of removal but... I can't imagine anything other than Eliezer saying "Yep, I deleted it" would do the trick.

Well, actually, does this software ALLOW deletions, or does a deleted comment get replaced with a "Deleted Content" placeholder? Because if it's the latter, a link to that placeholder would be decent evidence :)

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-25T00:05:45.956Z · LW(p) · GW(p)

It only gets replaced with a placeholder if there are replies.

Replies from: Document
comment by Document · 2012-12-29T16:17:38.818Z · LW(p) · GW(p)

Do deleted comments still appear on the list of comments by that user?

Replies from: Eugine_Nier
comment by Eugine_Nier · 2012-12-29T21:58:33.434Z · LW(p) · GW(p)

I'm not sure. I suspect it might depend on who deleted them.

comment by JonathanLivengood · 2012-12-24T19:55:45.647Z · LW(p) · GW(p)

Do you have worked-out numbers (in terms of community participation, support dollars, increased real-world violence, etc.) comparing the effect of having the censorship policy and the effect of allowing discussions that would be censored by the proposed policy? The request for "consequences we haven't considered" is hard to meet until we know in sufficient detail what exactly you have considered.

My gut thinks it is unlikely that having a censorship policy has a smaller negative public relations effect than having occasional discussions that violate the proposed policy. I know that I am personally much less okay with the proposed censorship policy than with having occasional posts and comments on LW that would violate the proposed policy.

comment by Epiphany · 2012-12-24T06:56:00.373Z · LW(p) · GW(p)

If virtualizing people is violence (since it does imply copying their brains and, uh, removing the physical original) you may want to censor Wei_Dai over here, as he seems to be advocating that the FAI could hypothetically (and euphemistically) kill the entire population of earth:

Wei Dai's Ironic Security Idea

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-12-24T14:39:52.797Z · LW(p) · GW(p)

My hypothetical scenario was that replacing a physical person with a software copy is a harmless operation and the FAI correctly comes to this conclusion. It doesn't constitute hypothetically (or euphemistically) killing, since in the scenario, "virtualizing" doesn't constitute "killing".

Replies from: Epiphany
comment by Epiphany · 2012-12-24T19:57:40.030Z · LW(p) · GW(p)

An FAI would have some security advantages. It can achieve physical security by taking over the world and virtualizing everyone else

That is your exact wording. Not "In the event that the AGI determines that it's safe to [euphemism for doing something that could mean killing the entire human race] because there are software copies." or "if virtualizing is safe..."

Even if your wording was that, I'd still disagree with it.

I thought the most important reason to do friendliness research was to give the AGI what it needs to avoid making decisions that could kill all of humanity. It is humanity's responsibility to dictate what should happen in this case and ensure that the AGI understands enough to choose the option we dictate. If you aren't in favor of micromanaging the millions of tiny ethical decisions it will have to make like exactly how many months to put a lawbreaker in jail, that's one thing. If you aren't in favor of making sure it decides correctly on issues that could kill all of humanity, that's negligent beyond imagining. If you are aware of a decision that an AGI could make that could kill all of humanity, and you are in favor of creating an AGI that hasn't been given guidance on that issue, then you're in favor of creating a very dangerous AGI.

Advocating for an AGI that will kill all of humanity vs. advocating for an AGI that could kill all of humanity is a variation on "advocating violence" (it's advocating possible violence) but, to me, it's no different from saying: "I'm going to put one bullet in my gun, aim at so-and-so, and pull the trigger!" - Just because the likelihood of killing so-and-so is reduced to 1 in 6 from what's more or less a certainty does not mean it's not a murder threat.

Likewise, adding the word "possibly" into a sentence that would otherwise break the censorship policy is a cheap way of trying to get through the filter. That should not work. "We should possibly go on a killing rampage." - no.

What's most alarming is that you've done work for SIAI.

The whole point of SIAI is not to go "Let's let the AGI decide what is ethical" but "Let's iron out all the ethical problems before making an AGI!"

If Eliezer doesn't want to look bad, he should consider this.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-12-24T20:46:43.012Z · LW(p) · GW(p)

As I clarified in a subsequent comment in that thread, "if the FAI concludes that replacing a physical person with a software copy isn't a harmless operation, it could instead keep physical humans around and place them into virtual environments Matrix-style."

We could argue about whether to build an FAI that can make this kind of decision on its own, but I had no intention of doing anyone any harm. Yes the attempted-FAI may reach this conclusion erroneously and end up killing everyone, but then any method of building an FAI has the possibility of something going wrong and everyone ending up dead.

What's most alarming is that you've done work for SIAI.

I've never received any money from them and am not even a Research Associate. I have independently done work that may be useful for SIAI, but I don't think that's the same thing from a PR perspective.

The whole point of SIAI is not to go "Let's let the AGI decide what is ethical" but "Let's iron out all the ethical problems before making an AGI!"

Actually I think SIAI's official position is something like "Let's work out all the problems involved in letting the AGI decide what is ethical." If you disagree with this, let's argue about it, but could you please stop saying that I advocate killing people?

Replies from: Epiphany
comment by Epiphany · 2012-12-24T22:18:04.764Z · LW(p) · GW(p)

I had no intention of doing anyone any harm.

I know.

could you please stop saying that I advocate killing people?

reviews my wording very carefully

"If virtualizing people is violence ... Wei_Dai ... seems to be advocating "

"Advocating for an AGI that will kill all of humanity (in the context of this is not what you said) vs. advocating for an AGI that could kill all of humanity (context: this is what you said)"

My understanding is that it's your perspective that copying people and removing the physical original might not be killing them, so my statements reflect that but maybe it would make you feel better if I did this:

"If virtualizing people is violence ... Wei_Dai ... seems to be advocating ... kill the entire population of earth (though he isn't convinced that they would die)"

And likewise with the other statement.

Sorry for the upset that has probably caused. It wasn't my intent to accuse you of actually wanting to kill everyone. I just disagree with you and am very concerned about how your statement looks to others with my perspective. More importantly, I feel concerned about the existential risk if people such as yourself (who are prominent here and connected with SIAI) are willing to have an AGI that could (in my view) potentially kill the entire human race. My feeling is not that you are violent or intend any harm, but that you appear to be confused in a way that I deem dangerous. Someone I'm close to holds a view similar to yours and although I find this disturbing, I accept him anyway. My disagreement with you is not personal, it's not a judgment about your moral character, it's an intellectual disagreement with your viewpoint.

As I clarified in a subsequent comment in that thread, "if the FAI concludes that replacing a physical person with a software copy isn't a harmless operation, it could instead keep physical humans around and place them into virtual environments Matrix-style."

I think the purpose of this part is to support your statement that you have no intention to harm anyone, but if it's an argument against some specific part of my comment, would you mind matching them up because I don't see how this refutes any of my points.

I've never received any money from them and am not even a Research Associate. I have independently done work that may be useful for SIAI, but I don't think that's the same thing from a PR perspective.

It's not easy for me to determine your level of involvement from the website. This here suggests that you've done important work for SIAI:

Vladimir Nesov, a decision theory researcher, holds an M.S. in applied mathematics and physics from Moscow Institute of Physics and Technology. He has worked on Wei Dai’s updateless decision theory, in pursuit of one of the Singularity Institute’s core research goals: that of developing a “reflective decision theory.”

http://singularity.org/blog/2011/07/22/announcing-the-research-associates-program/

If one is informed of the exact relationship between you and SIAI, it is not as bad, but:

A. If someone very prominent on LessWrong (a top contributor) who has been contributing to SIAI's decision theory ideas (independently) does something that looks bad, it still makes them look bad.

B. The PR effect for SIAI could be much worse considering that there are probably lots of people who read the site and see a connection there but do not know the specifics of the relationship.

"Let's work out all the problems involved in letting the AGI decide what is ethical."

Okay but how will you know it's making the right decision if you do not even know what the right decision is for yourself? If you do not think it is safe to simply give the AGI an algorithm that looks good without testing to see whether running the algorithm outputs choices that we want it to make, then how do you test it? How do you even reason about the algorithm? How do you make those beliefs "pay rent", as the sequence post puts it?

I see now that the statement could be interpreted in one of two ways:

"Let's work out all the problems involved in letting the AGI define ethics."

"Let's work out all the problems involved in letting the AGI make decisions on it's own without doing any of the things that are wrong by our definition of what's ethical."

Do you not think it better to determine for ourselves whether virtualizing everyone means killing them, and then ensure that the AGI makes the correct decision? Perhaps the reason you approach it this way is because you don't think it's possible for humans to determine whether virtualizing everyone is ethical?

I do think it is possible, so if you don't think it is possible, let's debate that.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-12-24T23:44:42.763Z · LW(p) · GW(p)

Perhaps the reason you approach it this way is because you don't think it's possible for humans to determine whether virtualizing everyone is ethical?

I think it may not be possible for humans to determine this, in the time available before someone builds a UFAI or some other existential risk occurs. Still, I have been trying to determine this, for example just recently in Beware Selective Nihilism. Did you see that post?

Were you serious about having Eliezer censor my comment? If so, now that you have a better understanding of my ideas and relationship with SIAI, would you perhaps settle for me editing that comment with some additional clarifications?

Replies from: Epiphany
comment by Epiphany · 2012-12-28T21:53:04.604Z · LW(p) · GW(p)

Sorry for not responding sooner. The tab explosion triggered by the links in your article and related items was pretty big. I was trying to figure out how to deal with the large amount of information that was provided.

If you want to consider my take on it uninformed, fine. I haven't read all of the relevant information in the tab explosion (that would take a long time). Here is my take and my opinion on the situation:

If a person is copied, the physical original will not experience what the copy experiences. Therefore, if you remove the physical original, the physical original's experiences will end. This isn't perfectly comparable to death seeing as how the person's experiences, personality, knowledge, and interaction with the world will continue. However, the physical original's experiences will end. That, for many, would be an unacceptable result of being virtualized.

I believe in the right to die, so regardless of whether I think being virtualized should be called "death", I believe that people have the right to choose to do it to themselves. I do not believe that an AGI has the right to make that decision for them. To decide to end someone else's experiences without first gaining their consent qualifies as violence to me and it is alarming to see someone as prominent as you advocating this.

My opinion is that it's better for PR for you to edit your comment. Even if, for some reason, reading the entire tab explosion would somehow reveal to me that yes, the physical original would experience what the copy experiences even after being destroyed, I think it is likely that people who have not read all of that information will interpret it the way that I did, and may become alarmed, especially after realizing that it was you who wrote this.

I would be really happy to see you edit your own "virtualize everyone" comments. I do think something needs to be done. My suggestion would be to either:

A. Clearly state that you believe the physical original will experience the copy's experiences even after being removed if that's your view.

B. In the event that you agree that the physical original's experiences would end, to refrain from talking about virtualizing everyone without their consent.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2012-12-29T01:45:39.440Z · LW(p) · GW(p)

I added a disclaimer to my comment. I had to write my own since neither of yours correctly describes my current beliefs. I'll also try to remember to be a bit more careful about my FAI-related comments in the future, and keep in mind that not all the readers will be familiar with my other writings.

Replies from: Epiphany
comment by Epiphany · 2012-12-29T02:04:18.336Z · LW(p) · GW(p)

Thanks for listening to me. I feel better about this now.

comment by wedrifid · 2012-12-24T00:23:26.426Z · LW(p) · GW(p)

may at the admins' option be censored on the grounds that ... anyone talking about a proposed crime on the Internet fails forever as a criminal

I like it.

comment by ChristianKl · 2012-12-27T15:47:05.251Z · LW(p) · GW(p)

For most people it doesn't really matter when they trade away higher values, such as respecting free speech, for better PR.

If you want to design an FAI that values human values, it matters. You should practice following human values yourself. You should take situations like this as opportunities for deliberate practice in making the kind of moral decisions that an FAI has to make.

Power corrupts. It's easy to censor criticism of your own decisions. You should use those opportunities to practice being friendly instead of being corrupted.

In the past there was a case of censorship that led to bad press for LessWrong. Given that past performance, why should we believe that increasing censorship will be good for PR?

In other words, the form of this discussion is not 'Do you like this?' - you probably have a different cost function from people who are held responsible for how LW looks as a whole

The first instinct of an FAI shouldn't be: "Hey, the cost function of those humans is probably wrong, let's use a different cost function."

A few days ago someone wrote a post about how rationalists should make group decisions. I argued that his proposal was unlikely to be effectively implementable.

A decision about what the ideal censorship policy for LessWrong would look like could be made via the Delphi method.

comment by katydee · 2012-12-24T11:42:33.681Z · LW(p) · GW(p)

I am extremely in favor of this policy and would be in favor of extending it to posts or comments asking about the violation of any and all laws.

To all voters: please think about this for five minutes before upvoting or downvoting.

Replies from: kodos96, prase, MugaSofer, Eliezer_Yudkowsky
comment by kodos96 · 2012-12-24T17:14:10.849Z · LW(p) · GW(p)

It would clearly seem to be you who has not thought about this for five minutes. The absurdly broad extension you propose to the already absurdly broad policy would effectively chill all speech on any topic other than puppies or unicorn farts. Actually, maybe just puppies... unicorn farts might, after all, violate EPA air quality standards.

Keep in mind, the average American commits Three Felonies a Day

The average professional in this country wakes up in the morning, goes to work, comes home, eats dinner, and then goes to sleep, unaware that he or she has likely committed several federal crimes that day. Why? The answer lies in the very nature of modern federal criminal laws, which have exploded in number but also become impossibly broad and vague.

Replies from: katydee
comment by katydee · 2012-12-24T20:13:22.399Z · LW(p) · GW(p)

The absurdly broad extension you propose to the already absurdly broad policy would effectively chill all speech on any topics other than puppies or unicorn farts.

You are wrong.

The top post on LW (discounting HoldenKarnofsky's "Thoughts on the Singularity Institute (SI)") is Yvain's "Generalizing from One Example." This post has nothing to do with crimes except for a mention of shoplifting rates in one footnote.

I seriously doubt you have actually read "Three Felonies a Day," or else you would not be citing it here - it is better classified as propaganda than as research.

Replies from: kodos96
comment by kodos96 · 2012-12-24T20:22:40.960Z · LW(p) · GW(p)

You do realize that much of the world, including much of the supposedly "civilized" world, has blasphemy laws on the books? What percentage of articles on LW (including their comment sections) do you think would run afoul of strict readings of such laws?

Also, I said "chill all speech", not forbid it outright. If you're forced, while writing, to wonder "is this violating some rule? Should I rephrase it to make it not violate?", that's what "chilled" speech means - forcing on you the cognitive burden of thinking in terms of "what won't get me in trouble" rather than "what will communicate most effectively".

Or how about this: you characterized "Three Felonies a Day" as propaganda... I'm sure the author of the book would be quite upset to hear that. He might consider it to constitute some manner of defamation, or perhaps intentional infliction of emotional distress. Tortious interference perhaps? Disturbing the peace? YOUR COMMENT IS NOW BANNED!

comment by prase · 2012-12-24T13:49:55.164Z · LW(p) · GW(p)

please think about this for five minutes before upvoting or downvoting

Before upvoting your comment, or Eliezer's post?

(If the latter, it seems that you are operating from the assumption that votes on the post reflect how much people agree with the proposed policy. This may not be true. I have upvoted the post although I oppose the policy, because I want to encourage discussing similar policies beforehand.)

comment by MugaSofer · 2012-12-25T13:12:56.888Z · LW(p) · GW(p)

Advocating? Maybe. But asking? Hell no.

comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-12-24T20:05:55.363Z · LW(p) · GW(p)

Please interpret the comment charitably; poster means real laws, not fake laws passed for purposes of selective enforcement.

Replies from: kodos96, MugaSofer
comment by kodos96 · 2012-12-24T20:42:11.795Z · LW(p) · GW(p)

But that's the whole point of my objection. This distinction is what makes this policy such a bad idea. To ignore the distinction is to ignore the point.

comment by MugaSofer · 2012-12-25T13:15:10.997Z · LW(p) · GW(p)

Because if a law is enforced selectively, it's actually ... what, exactly?