Downvote stalkers: Driving members away from the LessWrong community?

post by Ander · 2014-07-02T00:40:26.138Z · score: 41 (51 votes) · LW · GW · Legacy · 128 comments

Last month I saw this post: http://lesswrong.com/lw/kbc/meta_the_decline_of_discussion_now_with_charts/ addressing whether the discussion on LessWrong was in decline.  As a relatively new user who had only just started to post comments, my reaction was: “I hope that LessWrong isn’t in decline, because the sequences are amazing, and I really like this community.  I should try to write a couple articles myself and post them!  Maybe I could do an analysis/summary of certain sequences posts, and discuss how they had helped me to change my mind”.   I started working on writing an article.

Then I logged into LessWrong and saw that my Karma value was roughly half of what it had been the day before.   Previously I hadn’t really cared much about Karma, aside from whatever micro-utilons of happiness it provided to see that the number slowly grew because people generally liked my comments.   Or at least, I thought I didn’t really care, until my lizard brain reflexes reacted to what it perceived as an assault on my person.

Had I posted something terrible and unpopular that had been massively downvoted during the several days since my previous login?  No, in fact my ‘past 30 days’ Karma was still positive.  Rather, it appeared that everything I had ever posted to LessWrong now had a -1 on it instead of a 0. Of course, my loss probably pales in comparison to that of other, more prolific posters who I have seen report this behavior.

So what controversial subject must I have commented on to trigger this assault?  Well, let’s see: in the past week I had asked whether anyone had opinions on good software engineer interview questions I could ask a candidate; I posted in http://lesswrong.com/lw/kex/happiness_and_children/ that I was happy not to have children; and finally, in what appears to me to be by far the most promising candidate, http://lesswrong.com/r/discussion/lw/keu/separating_the_roles_of_theory_and_direct/ , I replied to a comment about global warming data, stating that I routinely saw headlines about data supporting global warming.

Here is our scenario: a new user attempting to participate on a message board that values empiricism and rationality posted that the evidence supports climate change being real.  (Wow, really rocking the boat here!)  Then, apparently in an effort to ‘win’ the discussion by silencing opposition, someone went and downvoted every comment this user had ever made on the site.  Apparently they would like to see LessWrong be a bastion of empiricism and rationality and *climate change denial* instead? And the way to achieve this is not to have a fair and rational discussion of the existing empirical data, but rather to simply Karmassassinate anyone who would oppose them?

Here is my hypothesis: the continuing problem of karma downvote stalkers is contributing to the decline of discussion on the site.  I definitely feel much less motivated to try to contribute anything now, and multiple people at LessWrong meetings have told me things such as “I used to post a lot on LessWrong, but then I posted X, and got mass downvoted, so now I only comment on Yvain’s blog”.  These anecdotes are, of course, only weak evidence for my claim; I wish I could provide more, and I will have to defer to any readers who can.

Perhaps this post will simply trigger more retribution, or maybe it will trigger an outpouring of support, or perhaps it will just be dismissed by people saying I should’ve posted it to the weekly discussion thread instead.  Whatever the outcome, rather than meekly leaving LessWrong and letting my 'stalker' win, I decided to open a discussion about the issue.  Thank you!

128 comments

Comments sorted by top scores.

comment by Kaj_Sotala · 2014-07-02T04:31:38.103Z · score: 40 (44 votes) · LW(p) · GW(p)

It looks like the person who has been downvoting you is the same person mentioned in this thread. Follow-up queries also indicated that the same person had been downvoting several others who had previously complained of downvote stalking.

The said person failed to respond to my first private message on the subject; because there's the chance that they might have just missed it, I finally got around to sending them another message yesterday, explicitly mentioning the possibility of a ban unless they provide a very good explanation within a reasonable time. I apologize for taking so long - I procrastinated on this for a while, as I find it quite uncomfortable to initiate conflict with people.

comment by Viliam_Bur · 2014-07-02T16:15:29.285Z · score: 38 (44 votes) · LW(p) · GW(p)

Just to encourage you, I want to put things in context:

  • This is one person who is significantly destroying the social capital of the LW community. And in our community, social capital is scarce.

  • They probably do this to promote their political views; to silence perceived political opponents. (Including new users.) This is completely against LW values.

If you'd just block their account without further notice right now, I would say: "Well done!". It is extremely generous to give them a chance to explain themselves; and there probably is no good explanation anyway, so it's just playing for time.

I mean, really, if one person keeps terrorizing the community, and the community is unwilling to defend itself, then all the lessons about how rationalists are supposed to win have failed.

A person who has done so much damage does not deserve a second chance. If you decide to give them a second chance, I won't complain. But I would complain about inaction while they continue to do more damage. If you are the only person who has access to the "Ban User" button, just press it already, before everyone leaves.

EDIT: This whole thread (and it is far from being the first one) is additional damage caused by a single person. People keep proposing solutions without evidence, then they argue with each other. There is a growing frustration when they realize that most of the proposed changes won't get implemented anyway (either because other people oppose it, or because making changes to LW codebase always takes a lot of time). We keep generating negative emotions, because... why exactly?

comment by Ander · 2014-07-02T18:26:54.998Z · score: 13 (13 votes) · LW(p) · GW(p)

I mean, really, if one person keeps terrorizing the community, and the community is unwilling to defend itself, then all the lessons about how rationalists are supposed to win have failed.

I agree, rationalists should win! And in this case, winning doesn't mean turning into straw-man Vulcans who say "you shouldn't have any emotional reaction to people mass-downvoting you", as I see in a couple of other places in this thread. Rather, it means that we should be able to design a community system that makes everyone feel cared for, and also provides them useful feedback on how they should or shouldn't post things.

Emotions matter, and making people feel valued and loved by other members is how a community thrives. (That's why religions can do so well even though they make silly claims about the nature of reality.)

comment by gjm · 2014-07-02T12:46:23.692Z · score: 12 (12 votes) · LW(p) · GW(p)

I suggest that, whether or not they're banned, unless they provide a very good explanation, their identity and a description of the mass-downvoting they've done should be posted on LW. And (if anyone has the bandwidth to do it) future mass-downvoting should be exposed when it happens, and it should be known that it will be.

Because otherwise the obvious response to "hey, we're banning you for abusing the system" is "OK, thanks. I'll make another account.".

comment by Tenoke · 2014-07-02T15:11:33.445Z · score: 7 (7 votes) · LW(p) · GW(p)

Because otherwise the obvious response to "hey, we're banning you for abusing the system" is "OK, thanks. I'll make another account.".

I don't necessarily disagree, but given that the offender will lose > 9k Karma, and will have to grind a bit to be able to keep mass-downvoting, I'd say it is more than a trivial inconvenience.

comment by ialdabaoth · 2014-07-03T13:13:19.458Z · score: 7 (7 votes) · LW(p) · GW(p)

The person in question has got Rationality Quotes karma-mining down to a science. Ban them, and they'll be back up to 5K karma on their new account within weeks.

HEY! Suggestion:

Can the Rationality Quotes threads be pulled off into their own section, where upvotes and downvotes still happen but don't affect the user's karma?

This makes sense for multiple reasons:

  • you shouldn't get karma for just quoting things someone else said, without analysis or context; if you can't be original, at least be relevant/topical.

  • it prevents karma-mining.

  • it keeps the Rationality Quotes threads from turning into a distracting meta-game.
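A rough sketch of what this proposal amounts to, with hypothetical names (none of this reflects the actual LW codebase): votes in a karma-exempt section still move the comment's score, but leave the author's karma untouched.

```python
# Hypothetical section identifier; an assumption for illustration only.
KARMA_EXEMPT_SECTIONS = {"rationality_quotes"}

def apply_vote(section, comment_score, author_karma, delta):
    """Apply an up/down vote (delta = +1 or -1) to a comment.

    The comment's score always moves; the author's karma moves only
    outside karma-exempt sections.
    """
    comment_score += delta
    if section not in KARMA_EXEMPT_SECTIONS:
        author_karma += delta
    return comment_score, author_karma
```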

comment by buybuydandavis · 2014-07-03T21:14:48.371Z · score: 1 (1 votes) · LW(p) · GW(p)

it prevents karma-mining.

So that's the trick!

comment by Viliam_Bur · 2014-07-02T15:50:14.044Z · score: 6 (10 votes) · LW(p) · GW(p)

You only need maybe 10 karma to be able to significantly hurt new users.

Maybe there should be some threshold, e.g. 100 karma, before you can downvote; past that, you can downvote as much as you can today. This could probably be done with one "if" line in the code.

We need downvoting, but we don't quite need to have new users able to destroy other new users.
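A minimal sketch of that one "if" line, with assumed names (not the actual LW code):

```python
# Suggested cutoff from the comment above; the exact number is debatable.
DOWNVOTE_KARMA_THRESHOLD = 100

def can_downvote(user_karma):
    """New users below the threshold cannot downvote at all;
    above it, downvoting works exactly as it does today."""
    return user_karma >= DOWNVOTE_KARMA_THRESHOLD
```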

comment by Jinoc · 2014-07-02T17:40:57.992Z · score: 1 (5 votes) · LW(p) · GW(p)

Actually, I was wondering about this: do we need downvoting?

I mean, is there a discussion somewhere on the relative merits of up/down-voting versus upvoting only?

comment by Nornagest · 2014-07-02T19:25:06.902Z · score: 10 (10 votes) · LW(p) · GW(p)

is there a discussion somewhere on the relative merits of up/down-voting versus upvoting only?

Yes, it came up here the last time someone made a Discussion post about retributive downvoting. Not to toot my own horn, but I feel I outlined some reasonable issues with that plan in my response.

(Short version: I feel that upvote-only systems encourage cliques and pandering, neither of which align well with LW's culture or goals.)

comment by Jinoc · 2014-07-03T16:09:52.401Z · score: 1 (1 votes) · LW(p) · GW(p)

Thank you!

comment by Luke_A_Somers · 2014-07-02T22:46:06.938Z · score: 2 (2 votes) · LW(p) · GW(p)

I think downvoting is good to have, but I'm not at all sure that we need downvoting to below 0.

comment by Viliam_Bur · 2014-07-03T08:04:25.540Z · score: 11 (11 votes) · LW(p) · GW(p)

That depends on the comment. Some comments display so much ignorance that they deserve to be downvoted and hidden.

Imagine a new user who simply asserts that the theory of relativity is wrong, and provides their own "theory" based on some mumbo-jumbo or a misunderstanding of the basic concepts of physics. That specific comment deserves to be downvoted below zero. It is not spam and it is not offensive, so it should not be reported to moderators. It is just too stupid. Zero is for the "meh" comments; this would be below that level.

This is different from mass-downvoting all comments of other users because someone does not agree with them for political reasons.

It seems to me that many people are thinking in the direction of "design a system that cannot be abused, and it will not be abused". But anything can be abused. Imagine that we adopted a system with upvotes only, plus a separate button for "report spam". Would this be safe against abuse? A malicious user could decide to mass-report all comments of their political enemies as spam. And then what? If the spam reports are handled automatically, new users would suddenly find themselves blocked by the system and their comments removed. (We could make the algorithm remove a comment only if three users report it as spam; then the abuser creates two sockpuppet accounts.) Or if the reports are not handled automatically, then some moderator must spend hours reading them and clicking "no, this is not spam". At that point, wouldn't it be much simpler to ban the offender? Or perhaps specifically remove their ability to report spam? Analogously, we can ban the user now, or perhaps make a change that will prevent this specific user from downvoting.

At this moment, there is just one specific user abusing the system. Most of the debates about whether downvotes are bad are started by their actions. Spending energy redesigning the whole system, which works okay for the other N-1 users, instead of banning the one disruptive user is a waste of everyone's time.

comment by Luke_A_Somers · 2014-07-03T10:36:44.796Z · score: 7 (7 votes) · LW(p) · GW(p)

I am now convinced that going negative is useful.

comment by Dentin · 2014-07-03T14:59:51.163Z · score: 1 (3 votes) · LW(p) · GW(p)

What about requiring a karma payment to downvote below zero?

comment by Squark · 2014-07-02T18:48:58.255Z · score: -2 (6 votes) · LW(p) · GW(p)

Personally, I'm in favor of a system similar to Stack Exchange's: a comment cannot be downvoted but can be "flagged as inappropriate" to draw moderator attention.

comment by Viliam_Bur · 2014-07-02T19:19:49.749Z · score: 2 (4 votes) · LW(p) · GW(p)

Realistically, considering how much time it takes to change anything about the LW software, I don't see that as likely.

But I can imagine this system working if we had multiple moderators - partly so the website would not be left unattended if one moderator spends a day offline, and partly to give the moderators some kind of plausible deniability, so they wouldn't feel they were starting a personal conflict with someone whenever they removed a comment.

comment by Squark · 2014-07-03T07:28:07.853Z · score: 0 (0 votes) · LW(p) · GW(p)

Regarding changes to the LW software, I think the process could be improved if the people responsible allowed LWers with coding skills to volunteer their time.

comment by Vladimir_Nesov · 2014-07-03T10:16:05.185Z · score: 5 (5 votes) · LW(p) · GW(p)

It's open source, and contributions (at least on some issues) are welcome.

comment by Squark · 2014-07-07T18:35:04.755Z · score: 0 (0 votes) · LW(p) · GW(p)

jackk, Vladimir, thx for commenting!

I think those links should be on the main page to be easier to discover.

comment by jackk · 2014-07-03T07:39:27.788Z · score: 3 (3 votes) · LW(p) · GW(p)

Part of my job is to review pull requests.

comment by Nornagest · 2014-07-02T19:17:30.264Z · score: 1 (7 votes) · LW(p) · GW(p)

That depends on two things we don't have: (a) an active mod community that's reasonably large in proportion to the userbase, and (b) a culture that accepts and ideally applauds an authoritarian approach to dealing with trolls and other assorted troublemakers.

Having the button without having the support for it is useless at best, and at worst can be actively counterproductive by creating an expectation that the mods can't possibly meet, or by encouraging an adversarial relationship between mods and users. Scott Alexander's got a similar system going over at slatestarcodex (which, to be fair, is excellent in terms of top-level content, and above average in terms of commentariat as long as you don't mind the occasional insane diatribe), and it doesn't seem to be doing a very good job of deterring the type of commentary it was instituted to prevent.

comment by Squark · 2014-07-03T07:25:26.565Z · score: 0 (4 votes) · LW(p) · GW(p)

We could set up a system in which mods are elected. This might provide a sufficient number of mods and wouldn't be authoritarian.

comment by NancyLebovitz · 2014-07-03T10:06:37.029Z · score: 2 (4 votes) · LW(p) · GW(p)

Does anyone have experience with a board that elects its mods?

I'm not saying it's a bad idea, though it seems like it's got some interesting complications, such as who gets to vote and keeping the voting honest-- I've just only been on boards where the mods were chosen from the top.

comment by [deleted] · 2014-07-03T12:21:30.659Z · score: 3 (3 votes) · LW(p) · GW(p)

I've seen a board occasionally elect a moderator (with other mods appointed). The resulting drama was way too high for whatever benefits the election may have had.

comment by Nornagest · 2014-07-03T16:36:46.554Z · score: 2 (2 votes) · LW(p) · GW(p)

Formal elections are rare, but vague consensus processes (along the lines of "anyone who cares can nominate a mod; we'll pick whoever gets the most nods as long as they aren't blatantly electioneering") seem pretty common. Honestly I think I'd prefer the latter to the former.

comment by Squark · 2014-07-07T18:36:45.842Z · score: 0 (0 votes) · LW(p) · GW(p)

AFAIK, Wikipedia and Stack Exchange use elected mods. They don't seem to be faring too badly.

comment by Dentin · 2014-07-03T14:57:56.830Z · score: 1 (1 votes) · LW(p) · GW(p)

It's possible to make hundreds of karma with minutes of effort simply by copy/pasting somebody else's awesome quote into a monthly quote thread. The amount of grinding required is paltry, and not at all a stumbling block to persistent offenders.

comment by ThisSpaceAvailable · 2014-07-03T02:52:09.655Z · score: 0 (0 votes) · LW(p) · GW(p)

By "identity", I take it you mean not merely the user name, but whatever other identifying information the mods have? I don't understand how your second paragraph follows from your first. What is your motive for wanting the information released? If it's retribution, that has nothing to do with your second paragraph. I don't see a deterrence value, since anyone concerned about keeping their information private to avoid downvote stalking will presumably just not use their actual information in registering in the first place. I don't see a preventative justification, either; if the mods can verify identity, they should just block any new account from that person, and if they can't verify identity, then how is this an answer to people making new accounts?

comment by gjm · 2014-07-03T18:38:10.893Z · score: 0 (0 votes) · LW(p) · GW(p)

I meant the user name, not any other information the moderators may have.

The second paragraph is intended to follow from the first because:

  • I expect posting information about mass-downvoting to reduce its effectiveness, because
    • people will feel less bothered by getting lots of downvotes if they know they come from a low-quality mass-downvoter
    • readers who know that A has been mass-downvoting B will be aware of that when looking at B's comments and may discount downvotes on them accordingly.
  • I expect posting information about mass-downvoting to reduce its attractiveness, because
    • prospective mass-downvoters will anticipate getting exposed, with likely consequences for their own reputation (and in particular their ability to amass the karma they need for the mass-downvoting).
  • I expect the promise of future exposure to inhibit mass-downvoting by a further mechanism:
    • prospective mass-downvoters will fear that they may get not only exposed but banned, which would (at least) be an inconvenience.

comment by jsteinhardt · 2014-07-02T05:29:37.814Z · score: 5 (7 votes) · LW(p) · GW(p)

Thanks for following up on this. Any possibility we can know what "within a reasonable time" means concretely? (E.g. days, weeks, months? I think a quicker resolution will be better, though I empathize with your situation.)

comment by Kaj_Sotala · 2014-07-02T09:48:43.096Z · score: 6 (10 votes) · LW(p) · GW(p)

Around a week.

comment by ChristianKl · 2014-07-02T09:08:39.738Z · score: 3 (3 votes) · LW(p) · GW(p)

Yes, when it comes to instances like this, where you ask people to respond in a reasonable timeframe, setting a deadline is useful. It makes it easier for you to simply wait for the deadline instead of asking yourself every day: "Has enough time passed that I should do something?"

comment by shminux · 2014-07-02T16:23:08.715Z · score: 4 (16 votes) · LW(p) · GW(p)

No need for a conflict or a ban, just let them know that their user name will be made public.

I find it quite uncomfortable to initiate conflict with people.

Not sure why the parent is upvoted. If you have trouble confronting people, you make a poor admin. Is there another active admin on LW who is more competent?

EDIT: I assumed too much, Kaj was probably not expected to moderate and ended up in this position by default. Sorry.

comment by JGWeissman · 2014-07-02T16:41:55.620Z · score: 29 (33 votes) · LW(p) · GW(p)

If you have trouble confronting people, you make a poor admin.

Can we please act like we actually know stuff about practical instrumental rationality given how human brains work, and not punish people for openly noticing their weaknesses.

You could have more constructively said something like "Thank you for taking on these responsibilities even though it sometimes makes you uncomfortable. I wonder if anyone else who is more comfortable with that would be willing to help out."

comment by shminux · 2014-07-02T18:14:37.164Z · score: 7 (7 votes) · LW(p) · GW(p)

not punish people for openly noticing their weaknesses.

Thanks! Yes, that's a good point. On the other hand, willingness to confront problem users is one of the absolute minimum requirements for a forum moderator. I suppose Kaj was not expected to do the moderator's job, probably just behind-the-scenes maintenance, and I assumed too much. Sorry, Kaj!

That said, a competent active forum moderator is required to deal with this particular issue, and I have yet to see one here.

comment by Viliam_Bur · 2014-07-02T19:22:00.636Z · score: 6 (6 votes) · LW(p) · GW(p)

Preferably more than one moderator.

comment by jackk · 2014-07-03T00:32:06.367Z · score: 4 (4 votes) · LW(p) · GW(p)

Quoting from the other thread about downvote stalking:

I'm messaging you because you're the moderator who commented most recently.

comment by Kaj_Sotala · 2014-07-03T12:38:41.975Z · score: 0 (2 votes) · LW(p) · GW(p)

No problem. :-)

comment by David_Gerard · 2014-07-03T21:50:43.642Z · score: 5 (7 votes) · LW(p) · GW(p)

I'm brash, extroverted, outgoing, confrontational, have the subtlety of a head-on collision with a Mack truck, and I still find this sort of admin duty unpleasant. So this leads me to suspect it's just horrible work.

comment by John_Maxwell (John_Maxwell_IV) · 2014-07-02T00:56:37.735Z · score: 10 (10 votes) · LW(p) · GW(p)

I have been told by multiple other people at LessWrong meetings things such as “I used to post a lot on LessWrong, but then I posted X, and got mass downvoted, so now I only comment on Yvain’s blog”.

That's interesting, and is causing me to update in the direction of thinking that this is a real problem that resources should be devoted to solving. I think I know of one other person who's not you who has left LW because of downvoting. It's interesting how seriously we take the arbitrary numbers associated with our profiles & contributions. (I do it too.)

And it looks as though there are many people who have reported similar in this thread. Maybe talk to Kaj Sotala? Perhaps he is privately reprimanding mass downvoters?

I do think this comment of yours was a reasonable downvote candidate:

Then why do I see reddit links to NOAA articles, every single month, with titles like: "May 2014 the hottest May since 1880. Four of the five warmest Mays on record have occurred in the past five years. May 2014 marked the 39th consecutive May and 351st consecutive month (more than 29 years) with a global temperature above the 20th century average."

Not because I think you are wrong about global warming, but because frequency of newspaper headlines seems like a bad way to infer statistical trends. Newspapers report on what's interesting, what their readers will read, what's unusual, etc. So news stories are not all that representative of what's actually going on in the world.

comment by shminux · 2014-07-02T03:01:33.450Z · score: 9 (9 votes) · LW(p) · GW(p)

is causing me to update in the direction of thinking that this is a real problem that resources should be devoted to solving

I don't believe that it's more than a day or two of work for a developer. The SQL queries one would run are pretty simple, as we previously discussed, and as Jack from Trike confirmed. The reason that nothing has been done about it is that Eliezer doesn't care. And he may well have good reasons not to, but he never commented on the issue, except maybe once when he mentioned something about not having technical capabilities to identify the culprits (which is no longer a valid statement).

My guess is that he cares not nearly as much about LW in general now as he used to, as most of the real work is done at MIRI behind the scenes, and this forum is mostly noise for him these days. He drops by occasionally as a distraction from important stuff, but that's it.
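For illustration, the kind of simple query alluded to above might look like the following. The schema and names are assumptions (the real LW vote tables aren't public); SQLite is used here just to make the query concrete and runnable.

```python
import sqlite3

# Hypothetical vote-log schema; the actual LessWrong tables are assumptions.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE votes (voter TEXT, author TEXT, direction INTEGER)")

# Sample data: one mass-downvoter hitting a single author 30 times,
# and one ordinary user casting a mix of votes.
conn.executemany(
    "INSERT INTO votes VALUES (?, ?, ?)",
    [("stalker", "ander", -1)] * 30
    + [("normal", "ander", -1), ("normal", "ander", 1)],
)

# Flag voters who cast an unusually high number of downvotes
# against a single author.
rows = conn.execute(
    """
    SELECT voter, author, COUNT(*) AS downvotes
    FROM votes
    WHERE direction = -1
    GROUP BY voter, author
    HAVING COUNT(*) >= 10
    """
).fetchall()
```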

comment by John_Maxwell (John_Maxwell_IV) · 2014-07-02T06:34:20.940Z · score: 10 (14 votes) · LW(p) · GW(p)

The reason that nothing has been done about it is that Eliezer doesn't care.

This sounds like moralizing to me. Of the following two scenarios, which do you have in mind?

  • Someone had an idea for a solution to the problem and ran it by Eliezer. Eliezer vetoed it (because he was feeling spiteful?)

  • Eliezer is a busy person trying to do lots of things. Because Eliezer has historically been LW's head honcho, no one feels comfortable taking decisive action without his approval. But folks are hesitant to bother him because they know he's got lots to do, or else they do send him email but he doesn't respond to them because he's behind on his email, or he skips reading those emails in favor of higher-priority emails.

I think the second scenario is far more likely. If the second scenario is the case, I don't see any reason to bother Eliezer. We just have to stop acting as though all important forum decisions must go through him. Personally I don't see any reason why Eliezer would know best how to run LW. Expertise at blogging is not the same as expertise at online community management. And empirically, there have been lots of complaints about the way LW is moderated, which is evidence that Eliezer is bad at it (I know there are other moderators, but I'm assuming he sets the tone and has the final word). My guess is that to the extent he's given deference, it's due to his high status or some kind of halo effect. (Speaking of which, the halo effect seems like a bias that LWers fall prey to really often regarding high-status LW figures like Eliezer, Lukeprog, and Matt Fallshaw. But I digress.)

I don't know if this one particular issue is worth a revolt. But if we can brainstorm enough issues that would benefit from an overhaul of the moderation/LW leadership team, perhaps it would be worthwhile to start another thread devoted to that topic.

comment by ChristianKl · 2014-07-02T09:13:19.839Z · score: 10 (10 votes) · LW(p) · GW(p)

Eliezer is a busy person trying to do lots of things. Because Eliezer has historically been LW's head honcho, no one feels comfortable taking decisive action without his approval.

If you are a busy person wanting to get a lot of things done, delegate tasks and give someone else the authority to solve them. To the extent that he doesn't want to solve tasks like this himself, he should clearly delegate the authority to someone else.

comment by shminux · 2014-07-02T07:04:23.771Z · score: 9 (9 votes) · LW(p) · GW(p)

Of course it's the second scenario. My point is that this forum has dropped in priority for Eliezer and MIRI in general in the last year or so. And, as I said, probably for a good reason.

comment by ChrisHallquist · 2014-07-02T05:31:11.305Z · score: 8 (10 votes) · LW(p) · GW(p)

The reason that nothing has been done about it is that Eliezer doesn't care. And he may well have good reasons not to, but he never commented on the issue, except maybe once when he mentioned something about not having technical capabilities to identify the culprits (which is no longer a valid statement).

My guess is that he cares not nearly as much about LW in general now as he used to...

This. Eliezer clearly doesn't care about LessWrong anymore, to the point that these days he seems to post more on Facebook than on LessWrong. Realizing this is a major reason why this comment is the first thing I've posted on LessWrong in well over a month.

I know a number of people have been working on launching a LessWrong-like forum dedicated to Effective Altruism, which is supposedly going to launch very soon. Here's hoping it takes off—because honestly, I don't have much hope for LessWrong at this point.

comment by XiXiDu · 2014-07-02T09:41:08.871Z · score: 5 (23 votes) · LW(p) · GW(p)

Eliezer clearly doesn't care about LessWrong anymore, to the point that these days he seems to post more on Facebook than on LessWrong.

He receives a massive number of likes there, no matter what he writes. My guess is that he needs that kind of feedback, and he doesn't get it here anymore. Recently he requested that a certain topic should not be mentioned on the HPMOR subreddit, or otherwise he would go elsewhere. On Facebook he can easily ban people who mention something he doesn't like.

comment by [deleted] · 2014-07-02T16:22:10.206Z · score: 1 (17 votes) · LW(p) · GW(p)

Given that you directly caused a fair portion of the thing that is causing him pain (i.e., spreading FUD about him, his orgs, and etc.), this is like a win for you, right?

Why don't you leave armchair Internet psychoanalysis to experts?

comment by ChrisHallquist · 2014-07-03T04:13:33.634Z · score: 11 (15 votes) · LW(p) · GW(p)

I'm not sure how to respond to this comment, given that it contains no actual statements, just rhetorical questions, but the intended message seems to be "F you for daring to cause Eliezer pain, by criticizing him and the organization he founded."

If that's the intended message, I submit that when someone is a public figure, who writes and speaks about controversial subjects and is the founder of an org that's fairly aggressive about asking people for money, they really shouldn't be insulated from criticism on the basis of their feelings.

comment by [deleted] · 2014-07-03T12:01:09.864Z · score: -2 (8 votes) · LW(p) · GW(p)

I'm not sure how to respond to this comment

You could have simply not responded.

If that's the intended message

It wasn't, no. It was a reminder to everyone else of XiXi's general MO, and the benefit he gets from convincing others that EY is a megalomaniac, using any means necessary.

comment by David_Gerard · 2014-07-03T12:28:48.044Z · score: 3 (5 votes) · LW(p) · GW(p)

It wasn't, no. It was a reminder to everyone else of XiXi's general MO, and the benefit he gets from convincing others that EY is a megalomaniac, using any means necessary.

You keep saying this and things like it, and not providing any evidence whatsoever when asked, directly or indirectly.

comment by [deleted] · 2014-07-03T12:41:23.438Z · score: -4 (6 votes) · LW(p) · GW(p)

People who provide evidence for these things just end up starting long, pointless diatribes. I'm not interested in that kind of time commitment.

comment by XiXiDu · 2014-07-04T08:10:45.879Z · score: 1 (1 votes) · LW(p) · GW(p)

People who provide evidence for these things just end up starting long, pointless diatribes.

Aris Katsaris made a big deal about a comment by Eliezer Yudkowsky that I forgot about. His accusation that I deliberately ignored the comment made no sense at all, because I was the one who spread the new comment, in which he expressed a similar opinion. If I were interested in hiding that opinion, I would not have told other people about the new one as well - a comment made in an obscure subreddit.

And if we are talking about people forgetting something. Eliezer Yudkowsky even forgot that he deleted the post that gave him so much pain.

comment by Viliam_Bur · 2014-07-04T14:03:34.093Z · score: 0 (0 votes) · LW(p) · GW(p)

Maybe it was a bit more complicated?

comment by David_Gerard · 2014-07-03T21:24:56.245Z · score: 0 (4 votes) · LW(p) · GW(p)

You just keep repeating a claim that you literally cannot back up.

Really this is the universe extending you the Crackpot Offer: accuse someone of "lies and slander", and when called on it, just keep repeating the claim. This is what actual cranks do.

comment by [deleted] · 2014-07-03T21:59:43.512Z · score: -2 (2 votes) · LW(p) · GW(p)

Oh, now I remember our last meeting.

This seems to imply that even if I do back up this statement with links and argument and such, you'll just ignore it and disengage. That's not a lot of incentive for me to get involved.

comment by XiXiDu · 2014-07-03T12:51:35.387Z · score: 2 (2 votes) · LW(p) · GW(p)

It wasn't, no. It was a reminder to everyone else of XiXi's general MO...

Circa 2005 I had a link to MIRI (then called the Singularity Institute) on my homepage. Circa 2009 I was even advertising LessWrong.

I am on record as saying that I believe most of the sequences to consist of true and sane material. I am on record as saying that I believe LessWrong to be the most rational community.

But in 2010, due to a certain incident that may not be mentioned here, I noticed that there are some extreme tendencies and beliefs that might easily outweigh all the positive qualities. I also noticed that a certain subset of people has a very weird attitude toward criticism pertaining to Yudkowsky, MIRI, or LW.

I've posted a lot of arguments that were never meant to decisively refute Yudkowsky or MIRI, but to show that many of the extraordinary claims can be weakened. The important point here is that I did not even have to do this, as the burden of evidence is not on me to disprove those claims, but on the people who make them. They need to show that their claims are robust and not just speculations about possible bad outcomes.

comment by XiXiDu · 2014-07-03T11:58:33.319Z · score: 3 (13 votes) · LW(p) · GW(p)

Given that you directly caused a fair portion of the thing that is causing him pain (i.e., spreading FUD about him, his orgs, and etc.), this is like a win for you, right?

A win would be if certain people became a little less confident about the extraordinary claims he makes, and more skeptical of the mindset that CFAR spreads.

A win would be if he became more focused on exploration rather than exploitation, on increasing the robustness of his claims, rather than on taking actions in accordance with his claims.

A world in which I don't criticize MIRI is a world where they ask for money in order to research whether artificial intelligence is an existential risk, rather than asking for money to research a specific solution in order to save an intergalactic civilization.

A world in which I don't criticize Yudkowsky is a world in which he does not make claims such as that if you don’t sign up your kids for cryonics then you are a lousy parent.

A world in which I don't criticize CFAR/LW is a world in which they teach people to be extremely skeptical of back-of-the-envelope calculations, a world in which they tell people to strongly discount claims that cannot be readily tested.

Why don't you leave armchair Internet psychoanalysis to experts?

I speculate that Yudkowsky has narcissistic tendencies. Call it armchair psychoanalysis if you like, but I think there is enough evidence to warrant such speculations.

comment by Viliam_Bur · 2014-07-04T07:21:31.080Z · score: 0 (0 votes) · LW(p) · GW(p)

more skeptical of the mindset that CFAR spreads

Just curious: what else do you consider the big problems of CFAR (other than being associated with MIRI)?

comment by Squark · 2014-07-03T14:03:48.536Z · score: -1 (5 votes) · LW(p) · GW(p)

I speculate that Yudkowsky has narcissistic tendencies. Call it armchair psychoanalysis if you like, but I think there is enough evidence to warrant such speculations.

I call it an ignoble personal attack which has no place on this forum.

comment by XiXiDu · 2014-07-03T15:28:52.202Z · score: 4 (15 votes) · LW(p) · GW(p)

I call it an ignoble personal attack which has no place on this forum.

Sorry. It wasn't meant as an attack, just something that came to my mind reading the comment by Chris Hallquist.

My initial reply was based on the following comment by Yudkowsky:

I'm really impressed by Facebook's lovely user experience - when I get a troll comment I just click the x, block the user and it's gone without a trace and never recurs.

And regarding narcissism, the definition is: "an inflated sense of one's own importance and a deep need for admiration."

See e.g. this conversation between Ben Goertzel and Eliezer Yudkowsky (note that MIRI was formerly known as SIAI):

Striving toward total rationality and total altruism comes easily to me. […] I’ll try not to be an arrogant bastard, but I’m definitely arrogant. I’m incredibly brilliant and yes, I’m proud of it, and what’s more, I enjoy showing off and bragging about it. I don’t know if that’s who I aspire to be, but it’s surely who I am. I don’t demand that everyone acknowledge my incredible brilliance, but I’m not going to cut against the grain of my nature, either. The next time someone incredulously asks, “You think you’re so smart, huh?” I’m going to answer, “Hell yes, and I am pursuing a task appropriate to my talents.” If anyone thinks that a Friendly AI can be created by a moderately bright researcher, they have rocks in their head. This is a job for what I can only call Eliezer-class intelligence.

Also see e.g. this comment by Yudkowsky:

Unfortunately for my peace of mind and ego, people who say to me "You're the brightest person I know" are noticeably more common than people who say to me "You're the brightest person I know, and I know John Conway". Maybe someday I'll hit that level. Maybe not.

Until then... I do thank you, because when people tell me that sort of thing, it gives me the courage to keep going and keep trying to reach that higher level.

...and from his post...

When Marcello Herreshoff had known me for long enough, I asked him if he knew of anyone who struck him as substantially more natively intelligent than myself. Marcello thought for a moment and said "John Conway—I met him at a summer math camp." Darn, I thought, he thought of someone, and worse, it's some ultra-famous old guy I can't grab. I inquired how Marcello had arrived at the judgment. Marcello said, "He just struck me as having a tremendous amount of mental horsepower," and started to explain a math problem he'd had a chance to work on with Conway.

Not what I wanted to hear.

And this kind of attitude started early. See for example what he wrote in his early "biography":

I think my efforts could spell the difference between life and death for most of humanity, or even the difference between a Singularity and a lifeless, sterilized planet [...] I think that I can save the world, not just because I’m the one who happens to be making the effort, but because I’m the only one who can make the effort.

Also see this video:

So if I got hit by a meteor right now, what would happen is that Michael Vassar would take over responsibility for seeing the planet through to safety, and say ‘Yeah I’m personally just going to get this done, not going to rely on anyone else to do it for me, this is my problem, I have to handle it.’ And Marcello Herreshoff would be the one who would be tasked with recognizing another Eliezer Yudkowsky if one showed up and could take over the project, but at present I don’t know of any other person who could do that, or I’d be working with them.

comment by Nornagest · 2014-07-03T22:32:09.314Z · score: 5 (7 votes) · LW(p) · GW(p)

regarding narcissism, the definition is: "an inflated sense of one's own importance and a deep need for admiration."

That's the dictionary definition. When throwing around accusations of mental pathology, though, it behooves one not to rely on pattern-matching to one-sentence definitions; doing so overestimates the prevalence of problems, suggests the wrong approaches to them, and tends to be considered rude.

Having a lot of ambition and an overly optimistic view of intelligence in general and one's own intelligence in particular doesn't make you a narcissist, or every fifteen-year-old nerd in the world would be a narcissist.

(That said, I'm not too impressed with Eliezer's reasons for moving to Facebook.)

comment by Viliam_Bur · 2014-07-03T16:17:54.429Z · score: 3 (5 votes) · LW(p) · GW(p)

I feel that a similar accusation could be used against anyone who feels that more is possible and, instead of whining, tries to win.

I am not an expert on narcissism (though I could be expert at it, heh), but it seems to me that a typical narcissistic person would feel they deserve admiration without doing anything awesome. They probably wouldn't be able to work hard for years. (But as I said, I am not an expert; there could be multiple types of narcissism.)

comment by buybuydandavis · 2014-07-03T21:10:37.435Z · score: 1 (3 votes) · LW(p) · GW(p)

I think that I can save the world, not just because I’m the one who happens to be making the effort, but because I’m the only one who can make the effort.

Thinking that one person is going to save the world, and you're him, qualifies as "an inflated sense of one's own importance", IMO.

First mistake: believing that one person will be saving the world. Second mistake: believing that there is likely only one person who can do it, and that he's that person.

“You think that you are potentially the greatest who has yet lived, the strongest servant of the Light, that no other is likely to take up your wand if you lay it down.”

comment by Viliam_Bur · 2014-07-04T07:19:00.145Z · score: 1 (1 votes) · LW(p) · GW(p)

To put the first quotation into some context, Eliezer argued that his combination of high SAT scores and spending a lot of effort in studying AI puts him in a unique position that can make a "difference between cracking the problem of intelligence in five years and cracking it in twenty-five". (Which could make a huge difference, if it saves Earth from destruction by nanotechnology, presumably coming during that interval...)

Of course, knowing that it was written in 2000, the five-year estimate was obviously wrong. And there is a Sequence about it, which explains that Friendly AI is more complicated than just any AI. (Which doesn't prove that the five-year estimate would be correct for any AI.)

comment by buybuydandavis · 2014-07-04T12:45:45.386Z · score: 1 (1 votes) · LW(p) · GW(p)

Most people very seriously studying AI probably have high SATs too. High IQs. High lots of things. And some likely have other unique qualities and advantages that Eliezer doesn't.

Unique in some qualities doesn't mean uniquely capable of the task in some timeline.

My main objection is that until it's done, I don't think people are very justified in claims to know what it will take to get done, and therefore unjustified in claiming some particular person is best able to do it, even if he is best suited to pursue one particular approach to the problem.

Hence, I conclude he is overestimating his importance, per the definition. Not that I see it as some heinous crime. He's overconfident. So what? It seems to be an ingredient of high achievement. Better to be overconfident epistemologically than underconfident instrumentally.

comment by TheAncientGeek · 2014-07-04T13:21:14.031Z · score: -1 (1 votes) · LW(p) · GW(p)

Private overconfidence is harmless. Public overconfidence is how cults start.

comment by Nornagest · 2014-07-04T20:23:05.092Z · score: 0 (0 votes) · LW(p) · GW(p)

I'd say that's, at the very least, an oversimplification; when you look at the architecture of organizations generally recognized as cults, you end up finding they share a fairly specific cluster of cultural characteristics, one that has more to do with internal organization than claims of certainty. My favorite framework for this is the amusingly named ABCDEF: though aimed at new religions in the neopagan space, it's general enough to be applied outside it.

(Eliezer, of course, would say that every cause wants to be a cult. I think he's being too free with the word, myself.)

comment by Squark · 2014-07-07T18:47:08.374Z · score: 0 (4 votes) · LW(p) · GW(p)

Sorry. It wasn't meant as an attack, just something that came to my mind reading the comment by Chris Hallquist.

Well, I'm sorry, but when you dig up quotes of your opponent to demonstrate purported flaws in his character, it is a personal attack. I didn't expect to encounter this sort of thing on LessWrong. Given the number of upvotes your comment received, I can understand why Eliezer prefers Facebook.

comment by XiXiDu · 2014-07-08T08:30:48.858Z · score: 1 (8 votes) · LW(p) · GW(p)

Yudkowsky tells other people to get laid. He is asking the community to downvote certain people. He is calling people permanent idiots.

He is a forum moderator. He asks people for money. He wants to create the core of the future machine dictator that is supposed to rule the universe.

Given the above, I believe that remarks about his personality are warranted, and not attacks, if they are backed up by evidence (which I provided in other comments above).

But note that in my initial comment, which got this discussion started, I merely ventured a guess as to why Yudkowsky might now prefer Facebook over LessWrong. Then a comment forced me to escalate by providing further justification for that guess. Your comments further forced me to explain myself. Which resulted in a whole thread about Yudkowsky's personality.

comment by Ander · 2014-07-02T01:26:16.208Z · score: 8 (10 votes) · LW(p) · GW(p)

Indeed, it is perfectly fine if someone downvoted that post. I probably deserved a -3 there. However, rather than be given the opportunity to learn from that feedback in the way karma is supposed to work, I instead received one downvote to every post I ever made on the site.

comment by buybuydandavis · 2014-07-02T03:29:32.481Z · score: 1 (9 votes) · LW(p) · GW(p)

I don't think people are entirely on the same page about how karma is "supposed to work". For some, it may be feedback to get people to post better. For others, it may be a way of stifling posts from those they perceive as low-quality posters.

Karma bombing seems rather jerk faced to me, but do you really need to care? You've got enough karma to post articles. You have good evidence that the karma drop was due to one lone jerk off.

Therefore, what does he matter? Why is this a problem for you?

comment by shminux · 2014-07-02T03:53:28.833Z · score: 16 (16 votes) · LW(p) · GW(p)

Why is this a problem for you?

I suppose if you use comment karma to evaluate how people like what you write, blank downvoting masks the useful signal.

comment by bbleeker · 2014-07-02T11:23:20.653Z · score: 9 (11 votes) · LW(p) · GW(p)

Yes. A while ago I suddenly lost like 50 points (which is a lot for me). The signal that gives isn't 'don't write stuff like this', but 'we don't want you here, go away', and I almost did.

comment by buybuydandavis · 2014-07-03T00:00:06.013Z · score: 2 (2 votes) · LW(p) · GW(p)

But he knows the source of the karma drop, therefore the useful signal has been unmasked.

comment by NancyLebovitz · 2014-07-02T15:37:28.245Z · score: 11 (13 votes) · LW(p) · GW(p)

Therefore, what does he matter? Why is this a problem for you?

I don't see the point in telling people that they shouldn't have the emotional reactions that they keep having. It may be possible to fade those reactions out in the long haul, but if caring about karma is a typical reaction (and it seems to be at least common), then it's better to take it into account.

comment by buybuydandavis · 2014-07-03T00:20:53.296Z · score: -5 (9 votes) · LW(p) · GW(p)

from the OP

The continuing problem of karma downvote stalkers is contributing to the decline of discussion on the site. I definitely feel much less motivated to try and contribute anything now, and I have been told by multiple other people at LessWrong meetings things such as “I used to post a lot on LessWrong, but then I posted X, and got mass downvoted, so now I only comment on Yvain’s blog”.

It's simply dysfunctional to let yourself be controlled by the opinions of others, particularly when it's one random internet bozo who whacks your karma.

comment by satt · 2014-07-03T03:51:53.079Z · score: 9 (11 votes) · LW(p) · GW(p)

It's simply dysfunctional to let yourself be controlled by the opinions of others, particularly when it's one random internet bozo who whacks your karma.

Boo! Yes, it's useful to be able to shrug off a downvote bombing, but it's pushing that grain of truth too far to imply someone's broken if they can't. Three reasons.

One: this is, or is supposed to be, a community, and when someone's part of a community they put some weight on what the rest of the community thinks of them. (This is one of the things distinguishing a community from a mere ad hoc group of strangers.) Advising LWers to write off the opinions of the rest of this community erodes this neighbourly norm.

Two, the Michael Bolton principle: why put the onus on Ander to change when the downvoter's the one who's being obnoxious?

Three: this argument proves too much. If I started running around LW insulting and swearing at everyone else here, I'd piss off a lot of people, and it'd be bullet-headed to dismiss their annoyance with "It's simply dysfunctional to let yourself be controlled by the opinions of others".

comment by buybuydandavis · 2014-07-03T20:32:11.473Z · score: -2 (4 votes) · LW(p) · GW(p)

See the quote I referred to, see my comment on it.

Everyone who thinks it's perfectly functional to let yourself be driven off of participating in an internet community because some random bozo gives you a karma bombing is dysfunctional too.

It's disturbing that so many people think that's a wonderful way to live around here. Is that what "winning" is, curling up in fetal position because one person in the world doesn't like you?

Not to me.

One: One bozo indicated his dislike. That is no indication of what the rest of the community thinks of him. Suppose it's a few. Suppose it's many. So what? Does everyone in the world have to like you every second of the day for you to function?

Two: Reality puts the onus on everyone to make their own decisions about their own actions. I've already expressed that the karma bomber is an asshole. He should knock it off. But what you do in response to an asshole is your choice.

Three: Invalid analogy, leaving out the key point - refraining from doing something they want to do because you're an asshole.

comment by satt · 2014-07-04T05:40:25.320Z · score: 0 (2 votes) · LW(p) · GW(p)

See the quote I referred to, see my comment on it.

Already did.

Everyone who thinks it's perfectly functional to let yourself be driven off of participating in an internet community because some random bozo gives you a karma bombing is dysfunctional too.

It's disturbing that so many people think that's a wonderful way to live around here. Is that what "winning" is, curling up in fetal position because one person in the world doesn't like you?

Perhaps you should've read my comment; if you think that's responsive to it I can only conclude you've got a warped, exaggerated idea of what I wrote.

One: One bozo indicated his dislike. That is no indication of what the rest of the community thinks of him.

It actually is, given that that bozo is themselves part of the community.

Suppose it's a few. Suppose it's many. So what?

You see nothing reasonable about being perturbed if "many" people in your community not only dislike you but make a point of indicating that dislike?

Does everyone in the world have to like you every second of the day for you to function?

How is that rhetorical question remotely proportionate or responsive? I nowhere suggested that Ander should require "everyone in the world" to like them "every second of the day" for them "to function".

Two: Reality puts the onus on everyone to make their own decisions about their own actions.

Yet you felt the need to butt in and ultimately insult Ander & me regardless. Almost as if "Reality" were really just standing in for "buybuydandavis" all along.

Three: Invalid analogy, leaving out the key point - refraining from doing something they want to do because you're an asshole.

Ignoring the fact that if I'm being enough of an arsehole, that in itself can change "something they want to do" to "something they no longer want to do".

Let's take your "Invalid analogy" complaint as given, for argument's sake, and explicitly suppose that by running around insulting and swearing at people here I'd drive some of them away. (This incorporates what you call "the key point".) I maintain that it'd be bullet-headed to shrug off people being driven away with "It's simply dysfunctional to let yourself be controlled by the opinions of others".

comment by buybuydandavis · 2014-07-06T06:31:47.199Z · score: -4 (4 votes) · LW(p) · GW(p)

You see nothing reasonable about being perturbed if "many" people in your community not only dislike you but make a point of indicating that dislike?

Many people in a community of hundreds or thousands. To the extent that anyone has noticed you, some like you, and some don't. This is a fact you should have been able to infer without seeing any karma votes.

Be perturbed if it floats your boat. Nurse and cherish your perturbation. The issue again is failing to do something you want to do because some people have publicly indicated something you should have known in the first place.

It actually is, given that that bozo is themselves part of the community.

See phrase "rest of the community", and context clearly distinguishing them as distinct from "one bozo".

Yet you felt the need to butt in and ultimately insult Ander & me

And what statements specifically are you calling an "insult"?

I'm being enough of an arsehole, that in itself can change "something they want to do"

How many hundreds or thousands of people are on this list? If they're going to stop wanting to talk to all of them because one other guy is an asshole, they are dysfunctional.

Note that we really should have a killfile around here. Ah, if only there was an "ignore votes" option as well! Technology saves the day again!

comment by Nornagest · 2014-07-08T22:02:41.862Z · score: 3 (3 votes) · LW(p) · GW(p)

Note that we really should have a killfile around here. Ah, if only there was an "ignore votes" option as well! Technology saves the day again!

Killfiles are shit. The incentive structure they create is all screwed up: not only would they contribute to fragmenting the community into little incestuous clumps of people all vigorously pandering to each other (if someone killfiles you, you don't have to worry about their votes), but they raise barriers to entry (by making it necessary for new users to killfile every troll and douchebag in the community) and don't materially discourage trolling (because people need to read your stuff to killfile you, and because hundreds of clicks are a lot more effort than a few sentences of drivel). At best they can function as a patch over an inadequate moderation policy, which at least I'll confess we've historically had.

"Ignore votes" is actually kind of interesting, but it doesn't solve the problem of people's privs getting affected by mass downvotes, makes voting a lot less anonymous if it's reversible, and still creates somewhat ugly incentives. It should never be harder, as a sum of effort, for a forum to correct for the presence of a problem user than it is for that user to create problems.

comment by Lumifer · 2014-07-09T03:22:26.653Z · score: 0 (0 votes) · LW(p) · GW(p)

Killfiles are not efficient for communities of people who think alike. They are pretty good for collections of radically diverse people.

I understand your point about scaling. But I am also highly suspicious of one-size-fits-all solutions.

comment by Nornagest · 2014-07-09T04:25:30.283Z · score: 2 (2 votes) · LW(p) · GW(p)

I don't think they work too well in a diverse community, either: I used to moderate such a community, on a codebase that introduced killfile features during my tenure, and its only substantial effect on moderation seemed to be cutting down on complaints from long-term users that had well-developed killfiles. (I've gone into more detail elsewhere in this thread on its cultural effects.) Since all communities are mostly newer/transient people during the active phase of their lifecycle, this wasn't much consolation.

That is a much harder administrative problem, though, and I've never found a solution that works other than "have a good seed culture, create strong norms against empty rhetoric and generally being a dick, and choose your mods very carefully". With the LW experience in mind I'm actually kind of a fan of karma as a self-moderation tool, but it introduces some problems of its own (see: Recent Unpleasantness), isn't anywhere close to a panacea (see: half of Reddit), and doesn't completely eliminate the need for good people with higher perm levels.

comment by buybuydandavis · 2014-07-08T23:03:42.295Z · score: -1 (1 votes) · LW(p) · GW(p)

A killfile is awesome.

Some people are concerned about signal/noise. Filter some people, and poof, signal/noise is improved for you, according to your tastes.

Some people like walled gardens. Great! A killfile is your personal wall. Throw everyone you don't want to see over that wall.

not only would they contribute to fragmenting the community into little incestuous clumps of people

You mean, like life, where people associate with the people they like, and don't associate with those they don't?

making it necessary for new users to killfile every troll and douchebag in the community

It's hardly necessary, as it's impossible to do now. It merely gives you an option to do so.

because people need to read your stuff to killfile you, and because hundreds of clicks are a lot more effort than a few sentences of drivel

What? I read a post, get annoyed, and click, that person drops into my bit bucket never to be seen again. Nothing could possibly be easier.

At best they can function as a patch over an inadequate moderation policy

It's a personal moderation policy that you control. I would rather have Eugine on the list than banned. Judging from his 9000 karma, I doubt that I'm alone. But I don't have that option.

It should never be

This is simply a category error. What is the "it" that has a moral duty to "never be"?

for a forum to correct for the presence of a problem user than it is for that user to create problems.

Problem user, according to who? Eugine's downvoting was a minor problem in the karma system. That's hardly the only deficiency of it. That's hardly the only problem around here.

My problem is having a valuable poster banned from the list.

comment by Nornagest · 2014-07-08T23:12:04.797Z · score: 0 (0 votes) · LW(p) · GW(p)

It's hardly necessary, as it's impossible to do now. It merely gives you an option to do so.

Options have a habit of becoming mandatory over time as norms adjust to their presence. Make it possible to ignore people and I guarantee that a year later, when the next white supremacist or militant Maoist or Randroid or whatever shows up, you'll get people saying that it's not a problem, everyone just needs to ignore them and they'll never need to see them again. I further guarantee that said white supremacists etc. will respond to this by settling down and carving out hateful little niches for themselves in the forum ecosystem, as the people that care start dropping them into their killfiles and stop downvoting their posts or leaving angry responses or, y'know, actually proving them wrong.

All of which comes to a huge waste of effort, because...

What? I read a post, get annoyed, and click, that person drops into my bit bucket never to be seen again. Nothing could possibly be easier.

...you should now imagine that process being repeated by some large fraction of the two thousand users on this forum, every time a problem (excuse me, controversial) user shows up or creates a new sockpuppet. Doesn't look so trivial now, does it?

comment by buybuydandavis · 2014-07-09T01:59:01.720Z · score: 0 (0 votes) · LW(p) · GW(p)

the next white supremacist or militant Maoist or Randroid

Some people want centrally enforced ideological litmus tests, and some don't.

Doesn't look so trivial now, does it?

Scales linearly. One click, and they're gone, for everyone who doesn't want to see them. Nothing could be simpler. An order of magnitude (or two) less sound and fury than we've spent on Eugine.

comment by Nornagest · 2014-07-09T02:00:28.870Z · score: 0 (0 votes) · LW(p) · GW(p)

Scales linearly.

The entire point is that we can and should do a lot better than O(n).

comment by buybuydandavis · 2014-07-09T02:13:10.383Z · score: 0 (0 votes) · LW(p) · GW(p)

We haven't. To quote myself:

Nothing could be simpler. An order of magnitude (or two) less sound and fury than we've spent on Eugine.

comment by Nornagest · 2014-07-09T02:17:49.678Z · score: 1 (1 votes) · LW(p) · GW(p)

The recent ban was executed through administrative action. That's O(1), albeit apparently with a high constant factor if Kaj's posts are to be trusted. There's been a lot of drama surrounding it, but that doesn't have anything to do with scalability.

(Personally, I'd say most of the drama has to do with preexisting cultural and administrative issues that this has dragged squirming into the light, and takes the late unpleasantness as a proximate rather than an ultimate cause, but we may reasonably disagree on that point.)

comment by satt · 2014-07-08T21:27:48.292Z · score: 1 (1 votes) · LW(p) · GW(p)

You see nothing reasonable about being perturbed if "many" people in your community not only dislike you but make a point of indicating that dislike?

Many people in a community of hundreds or thousands. To the extent that anyone has noticed you, some like you, and some don't. This is a fact you should have been able to infer without seeing any karma votes.

Keep your eye on the ball: I wrote, adding emphasis this time, "not only dislike you but make a point of indicating that dislike". You, for whatever reason, then skipped over that second part and zoomed in on the mundane fact that some people who post here dislike some other people who post here.

Be perturbed if it floats your boat. Nurse and cherish your perturbation. The issue again is failing to do something you want to do because some people have publicly indicated something you should have known in the first place.

Again I find I have to repeat myself with emphasis: "that in itself [i.e. having people publicly display their contempt for you] can change 'something they want to do' to 'something they no longer want to do'."

It sounds like the mental model you have of this kind of situation is missing a dimension. The information transmitted when X pointedly & publicly signals their dislike of Y to Y is not simply, "I dislike you". It's closer to "I dislike you, and I dislike you to such a degree that I'm willing to express that fact in spite of whatever social friction it causes between me and everyone else, and in spite of whatever time & effort it costs me, because I think it's totes worth making my dislike of you cognitively salient to you." It can also be a show of social power.

It actually is, given that that bozo is themselves part of the community.

See phrase "rest of the community", and context clearly distinguishing them as distinct from "one bozo".

The context, as I saw it, was that I'd already used the phrase "rest of the community" to refer to everyone on LW apart from Ander, including Eugine_Nier. You used the same phrase when responding to me on that point, so I presumed you were following my usage, and simply indicating a subset of the "rest of the community" with "one bozo". Evidently I was mistaken on that point; perhaps you weren't distinguishing the phrases as clearly as you thought?

And what statements specifically are you calling an "insult"?

  1. "It's simply dysfunctional to let yourself be controlled by the opinions of others, particularly when it's one random internet bozo who whacks your karma."

  2. "Everyone who thinks it's perfectly functional to let yourself be driven off of participating in an internet community because some random bozo gives you a karma bombing is dysfunctional too."

I expect you'll argue that (1) isn't actually an insult since it's denigrating a behaviour rather than a person, and that (2) wasn't actually directed at me. But a little thought would give the lie to such an argument: (2) doesn't make much sense as a germane reply to me unless it's a dig at me for being "dysfunctional", and the "too" at the end of (2) shows your hand by implying that (1) was actually denigrating a person, not just a behaviour.

How many hundreds or thousands of people are on this list? If they're going to stop wanting to talk to all of them because one other guy is an asshole, they are dysfunctional.

And with that, the conversation is back where it started. I don't see the point in completing another circuit, given your sneery hyperbole and such; unless you can pull the quality of your argumentation out of its current nosedive, don't expect another reply.

comment by buybuydandavis · 2014-07-09T02:11:04.331Z · score: -1 (1 votes) · LW(p) · GW(p)

Whatever. Have a nice life.

comment by ThisSpaceAvailable · 2014-07-03T02:37:28.854Z · score: 1 (3 votes) · LW(p) · GW(p)

If you were mugged, but the cops caught the mugger and you got all your money back, would you not care about the mugging? You seem to be putting results over process.

comment by buybuydandavis · 2014-07-03T20:36:41.690Z · score: 0 (2 votes) · LW(p) · GW(p)

I would want to see the guy strung up. But I wouldn't refrain from going out of my house because I had once been mugged. I consider that a dysfunctional response. If I knew someone who was "living" that way, I'd encourage them to change.

See previous comment (downvoted into oblivion) on people refraining from posting because people downvoted them. I walk the talk. It just isn't that hard.

http://lesswrong.com/lw/kfj/downvote_stalkers_driving_members_away_from_the/b255

comment by polymathwannabe · 2014-07-02T15:32:44.030Z · score: 3 (5 votes) · LW(p) · GW(p)

I do think this comment of yours was a reasonable downvote candidate [...] Not because I think you are wrong about global warming, but because frequency of newspaper headlines seems like a bad way to infer statistical trends.

Then taking the trouble to explain why the comment is problematic is much more helpful to the discussion than simply clicking the thumbs-down.

comment by Squark · 2014-07-02T19:00:54.606Z · score: 5 (13 votes) · LW(p) · GW(p)

I hope you'll forgive me for reiterating what other commenters have already said, but I want to add my own voice here. The problem is not just serial karmakillers. The problem is the culture of using downvotes as a disagree button rather than as a moderation tool. I talked about this before, but ironically most of those comments got downvoted.

comment by EGarrett · 2014-07-02T10:39:58.601Z · score: 5 (13 votes) · LW(p) · GW(p)

I've also commented that the karma system, as it currently stands, reduces participation on the site. Just to save time I'll paste it here.

"The fundamental flaw that I see with LessWrong's main site is that its karma/moderating system has the effect of silencing and banning people for being disagreed with or misunderstood. This is a major problem. You cannot mix "I don't agree with you" or "I don't understand you" with "you will be punished and silenced."

People who spam, flame, or otherwise destroy conversation are the ones who need to be silenced, ignored or banned, and a lot of sites like Facebook have separate buttons to perform exactly that function. People in the other category, who are misunderstood or disagreed with, but who discuss constructively and rationally, are the ones who MOST need to be able to speak. I think the punishment and silencing, and the threat of it, contributes largely to any lack of new posters or threads that you might see. I know I personally refrain from posting theories or models I have that are counter-intuitive and would actually start good discussions specifically for this reason...and I put them on the LessWrong Facebook page or bring them up at Meetups instead, where I've had some great conversations and made some good friends because of it."

comment by John_Maxwell (John_Maxwell_IV) · 2014-07-03T00:30:23.208Z · score: 5 (5 votes) · LW(p) · GW(p)

Good criticism is frequently upvoted on LW. But overall, I agree with you that this is an issue.

comment by Tenoke · 2014-07-02T15:06:30.151Z · score: 5 (7 votes) · LW(p) · GW(p)

Are you seriously implying that the Facebook group for LessWrong has better discussions than the site? I can't say that I agree.

comment by polymathwannabe · 2014-07-02T15:29:27.908Z · score: 2 (2 votes) · LW(p) · GW(p)

For the past couple of months, I've found the Facebook LW group to debate more interesting subjects than the LW website. But that's only my appraisal of what's interesting and what's not.

comment by philh · 2014-07-02T11:41:36.314Z · score: 4 (4 votes) · LW(p) · GW(p)

People in the other category, who are misunderstood or disagreed with, but who discuss constructively and rationally, are the ones who MOST need to be able to speak.

Do such people usually get downvoted on LW? Outside of this one downvote stalker, that is.

(This is a separate question from whether or not people think they'll get downvoted for constructive disagreement, which is also important.)

comment by EGarrett · 2014-07-02T13:53:39.112Z · score: 1 (3 votes) · LW(p) · GW(p)

Well I'm sure each person downvotes for their own reasons, but I have noticed several people who, when they are disagreeing with someone, tend to have a consistent series of "-1" votes showing up on the posts of the person with whom they're disagreeing.

If they are doing what it seems, I would say this is an example of the problem. Downvoting also allows people to express disagreement without having to give reasons or even pay much attention to what's said. I think this also goes against the purpose of the site.

comment by Nornagest · 2014-07-03T00:05:44.125Z · score: 4 (4 votes) · LW(p) · GW(p)

Downvoting also allows people to express disagreement without having to give reasons [...] I think this also goes against the purpose of the site.

Maybe not disagreement as such, but it's very often good to express disapproval without detailing the reasons for it. The basic issue here is that a response increases visibility (more, in fact, than an upvote does), and you generally don't want to make things you disapprove of more visible.

The classic example would be deliberate trolling, where a lovingly crafted response detailing everything that's wrong with the post is precisely what you don't want: it wastes your time and encourages the troll. But it's not much different for incoherent crankery or political diatribes or cat pictures: the author might not be encouraged by a response, but you're still wasting other people's time as long as the thread's clogging up Recent Comments.

That said, while I don't feel that downvoting your conversational partners to express disapproval is an abuse of the system in the same way that block downvoting is, I do think it's a bad idea and wouldn't be opposed to a feature limiting it.

comment by EGarrett · 2014-07-03T00:58:24.655Z · score: 2 (2 votes) · LW(p) · GW(p)

I think you're definitely right that we need to be able to rein in people who keep the site from being an honest exchange of ideas and good-faith discussion. It might be better to have a button to report trolling, flaming or spamming, but not an all-purpose downvote that might be used for other reasons.

The example I think about is a Religious Forum. If they had a "downvoting" feature that was implemented in the same way that the Less Wrong feature is...anyone showing up who asks too many skeptical questions could just be downvoted out of existence without anyone answering their arguments.

Perhaps this demonstrates how it could be an Anti-Rational tool or encourage groupthink...which I think is dangerous.

comment by Nornagest · 2014-07-03T01:42:06.900Z · score: 4 (4 votes) · LW(p) · GW(p)

The example I think about is a Religious Forum. If they had a "downvoting" feature that was implemented in the same way that the Less Wrong feature is...anyone showing up who asks too many skeptical questions could just be downvoted out of existence without anyone answering their arguments.

Bidirectional voting has its disadvantages, but I don't think this is one of them.

Sure, if you get a seed culture that's skewed enough in one direction, karma-like systems can be used to enforce conformity with it. But that's hardly unique; if you wander into a LiveJournal (a voteless format) or a Facebook discussion (a unidirectional format) and start spouting off opinions outside the local Overton window, you'll quickly find yourself getting shouted down. There's no purely technical way I know of to break what I'll politely describe as an ideological consensus cluster.

That being the case, I find myself thinking more of the incentives karma creates in an ideologically mixed environment that values things other than conformity, like clarity and originality. Sure, offending someone's ideology is risky; but people on the other side aren't mindless political monsters, they care about those other values as much as you do, and if you respect them you won't get many downvotes. But ignore those norms to dribble content-free "hooray for our side", and the best you can hope for is a few upvotes from people suffering from halo effects.

What happens if you don't have the option of downvoting? Well, suddenly it doesn't matter what your opponents think, since they can't effectively punish you for it. People don't stop caring about discourse norms, they still have the same reactions to following them that they always did, but the thing is that being clever and polite and original is hard; it takes effort and care and some facility with the language. Repeating buzzwords for a few safe upvotes from true believers, on the other hand, doesn't. Stripped of downside, that's what people are going to fall back on -- which of course leads to a self-perpetuating cycle of radicalization.

(Twitter and Tumblr make salient examples, although they both have other issues going on. Open Facebook comment threads are a somewhat purer case.)

comment by Viliam_Bur · 2014-07-03T08:30:13.216Z · score: 2 (2 votes) · LW(p) · GW(p)

It might be better to have a button to report trolling, flaming or spamming

And if some user decides to use this button to report all comments of the people with different political opinion, then what? Would it then be acceptable to ban the user, because they abused the button? Well, they are abusing the downvote button now.

At some point you just have to use the banhammer. It might as well be now.

comment by EGarrett · 2014-07-03T09:20:58.779Z · score: 3 (3 votes) · LW(p) · GW(p)

To Viliam,

Trolling/flaming/spamming report buttons are clearly labeled for their purpose. The downvote button isn't.

To Nornagest,

Here's the big difference: On Facebook, you can't stop OTHER people from seeing what the person has to say, no matter how much you scream at them. With the system here, you can. Their posts will be hidden and they can even lose their posting privileges when they are downvoted. And when I say that's a big difference, I mean that's a BIG difference. Again, think of the religious forum. This same karma system would allow them to literally stop you from speaking to or influencing people who are on the fence or more open to rationality, instead of just posting replies that highlight their own immaturity or irrationality. I think the issue here is clear.

Secondly, when you refer to (I presume) LessWrong as "an ideologically-mixed environment that values things other than conformity," you're assuming that everyone here views it that way. If everyone saw the downvote button in the same idealized form, we wouldn't have a problem. The issue is that the downvote button does not have such a clear and apparent definition, and there doesn't appear to be any actual enforced policy by the LessWrong admins to stop people from using the downvote button to simply express disagreement.

comment by gwern · 2014-07-03T17:39:18.824Z · score: 1 (1 votes) · LW(p) · GW(p)

On Facebook, you can't stop OTHER people from seeing what the person has to say, no matter how much you scream at them. With the system here, you can.

Can't you? Eliezer cites the ease of clicking a button and making the other person Go Away as a major perceived advantage of posting on FB rather than LW. And even if you downvote someone on LW, well, someone can undo that with an upvote.

comment by EGarrett · 2014-07-03T18:59:51.742Z · score: 0 (0 votes) · LW(p) · GW(p)

Hi gwern, I'm not sure exactly what you mean. In Facebook groups, you can ignore someone, but the person in question can still participate in discussions that don't involve you, or discuss what you've said outside of your own threads. I think this is actually a good thing, since it lets you avoid unconstructive people, but doesn't allow you to censor people from being heard by others if that person has something valuable to add.

Regarding downvoting vs upvoting, counteracting mass downvoters (who apparently have gone to the extent of downvoting someone over 1000 times) is a huge burden on other people and not something they should have to do.

comment by gwern · 2014-07-03T19:12:06.189Z · score: 2 (2 votes) · LW(p) · GW(p)

In Facebook groups, you can ignore someone, but the person in question can still participate in discussions that don't involve you, or discuss what you've said outside of your own threads.

I believe Eliezer was referring to starting posts. So the question is, which is better: a ban-happy omnipotent OP, or gradual, undoable community moderation?

counteracting mass downvoters (who apparently have gone to the extent of downvoting someone over 1000 times) is a huge burden on other people and not something they should have to do.

And indeed, it's not something that happens often. Eugine is so far the only person to be banned for mass downvoting in the ~5 year history of a very active site.

comment by Nornagest · 2014-07-03T17:08:54.903Z · score: 1 (1 votes) · LW(p) · GW(p)

Sorry, didn't see this until now. In future, it works better if you put responses to a post under that post; I'm not alerted if you respond to me in another branch of the thread.

Secondly, when you refer to (I presume) LessWrong as "an ideologically-mixed environment that values things other than conformity," you're assuming that everyone here views it that way. If everyone saw the downvote button in the same idealized form, we wouldn't have a problem.

I'm presuming no such thing; I was talking about the composition of LW, not the purpose of the downvote button. People's personal downvote policies are going to vary (quite a bit, really), but as long as the forum as a whole contains people with a mix of values similar to those I mentioned, their votes are going to average out to something like the behavior I described: some votes for conformity, some for contrarianism, some for unrelated norms. Note however that this doesn't take into account retributive downvoting; there needs to be policy in place to deal with that, but hey! Now there is, and we've just seen it in action.

The visibility effects of karma, I suspect, are overrated as a driver of behavior except in the case of top-level posts (where they're taken off most of the interface and become something of a pain to get to): leaving that "downvoted below threshold" notification seems to incite people's curiosity as much as anything. Some of my highest-ranked posts are replies to comments below the threshold; they wouldn't have gotten there if people weren't reading the thread.

The karma toll for replying to heavily downvoted comments does shape behavior, but I've only seen one person get that low for politely expressing political views, and he was a white supremacist.

comment by EGarrett · 2014-07-03T18:56:17.025Z · score: 0 (0 votes) · LW(p) · GW(p)

Hi Nornagest, I'm used to forums with a multi-quote feature. I wasn't aware it wouldn't notify you if I just replied to the bottom comment.

I'm presuming no such thing; I was talking about the composition of LW, not the purpose of the downvote button. People's personal downvote policies are going to vary (quite a bit, really), but as long as the forum as a whole contains people with a mix of values similar to those I mentioned, their votes are going to average out to something like the behavior I described: some votes for conformity, some for contrarianism, some for unrelated norms.

This doesn't work in practice precisely because mass and retributive downvoting are disproportionately effective. One person with a skewed concept of downvoting can outweigh many other people who are using the functions as intended. I might vote up a comment by someone I like, but I'm not going to go through their profile and give them hundreds (or even thousands) of upvotes, while we've seen the downvote-abusers do exactly that. So the votes won't average out properly.

The visibility effects of karma, I suspect, are overrated as a driver of behavior except in the case of top-level posts (where they're taken off most of the interface and become something of a pain to get to): leaving that "downvoted below threshold" notification seems to incite people's curiosity as much as anything. Some of my highest-ranked posts are replies to comments below the threshold; they wouldn't have gotten there if people weren't reading the thread.

We don't have a lot of clear data on this because an "ugh field", or people refraining from posting, is often an invisible cost. Several times I've had a notion I wanted to post about here, even considered an entire sequence or at least a largely new area of discussion, then thought of this type of behavior and changed my mind.

Even if the "downvote below threshold" might incite curiosity, the person in question still loses privileges on site. Lastly, the Eugine_Nier news is quite encouraging and may indicate some solutions to this issue.

comment by Nornagest · 2014-07-03T19:09:52.631Z · score: 0 (0 votes) · LW(p) · GW(p)

This doesn't work in practice precisely because mass and retributive downvoting are disproportionately effective. [...] So they won't average out properly.

See the next sentence of my comment.

Even if the "downvote below threshold" might incite curiosity, the person in question still loses privileges on site.

That's a very different case. Downvoting a person into losing privileges can be done by a single user if the target's posted a lot of marginal or controversial comments, but unless they're very new it takes a lot of patience or a downvote script (Eugine seems to have been using patience), and AFAICT most people have karma ratios high enough that it'd take sockpuppets or other abuses that could be targeted by narrower rules. I only know of one illegitimate case, although others may emerge as the consequences of Eugine's behavior become more apparent. Conversely, downvoting a post below the visibility threshold is much more common but can't be done by a single user.

comment by EGarrett · 2014-07-03T19:24:12.773Z · score: 0 (0 votes) · LW(p) · GW(p)

See the next sentence of my comment.

Yes, but I feel that problem nullifies the paragraph.

That's a very different case. Downvoting a person into losing privileges can be done by a single user if the target's posted a lot of marginal or controversial comments, but unless they're very new it takes a lot of patience or a downvote script (Eugine seems to have been using patience), and AFAICT most people have karma ratios high enough that it'd take sockpuppets or other abuses that could be targeted by narrower rules.

I would have agreed that the patience required is a barrier, until I found out about the 1000-vote attacks. Also, even giving someone a smaller number of downvotes can become a problem if it's disproportionate to the upvotes, such as downvoting the person's last 30-50 comments; it simply requires a larger number of people to be doing it. When there was no indication that mass downvoting would be moderated, I actually downvoted Eugine several times in a row out of annoyance when I realized what he was doing to other people, since I figured there was no other option to control it.

Anyway, it may be of course that Eugine is the first person to be outed for this behavior and it will become a regular thing. In which case this issue may cease to be a problem at all.

comment by Nornagest · 2014-07-03T19:32:10.780Z · score: 0 (0 votes) · LW(p) · GW(p)

Eugine may be the only person to have (recently) been using this as a tool of policy, aside from a couple people downvoting him in retribution. If you look at the patterns of people targeted for retributive downvoting (here, here, and here, plus this thread and its relatives), most of the situations seem to fit his MO and apparent set of grievances. Perhaps most tellingly, I don't know of anyone besides Eugine himself who's been mass-downvoted by two users (which is easy to tell from karma on obscure or unremarkable posts).

(I'm not sure about Will_Newsome, but that was three years ago.)

comment by philh · 2014-07-02T23:45:56.264Z · score: 0 (0 votes) · LW(p) · GW(p)

For what it's worth, I haven't noticed that myself, and I don't think it's ever happened to me here. But I agree that when it happens, it's an example of the problem you're talking about.

Downvoting also allows people to express disagreement without having to give reasons or even pay much attention to what's said. I think this also goes against the purpose of the site.

I agree with this too. I think maybe we have just different intuitions of how commonly it's actually used like that.

comment by EGarrett · 2014-07-03T01:00:02.228Z · score: 1 (1 votes) · LW(p) · GW(p)

You probably have more experience than I do with how people as a whole do the voting. I'm just concerned with potential problems.

comment by David_Gerard · 2014-07-03T21:48:55.705Z · score: 3 (3 votes) · LW(p) · GW(p)

In my personal experience, I have posted things that are quite critical of LW ideas, but if I show I've done my homework they get upvotes.

comment by selylindi · 2014-07-02T15:23:06.884Z · score: 2 (8 votes) · LW(p) · GW(p)

Would it be problematic to put a blanket ban on upvotes and downvotes of posts that are older than 30 days? Changes in karma to old posts are no longer an especially useful signal to their author anyway. Such a ban could be a cheap way to mitigate downvote stalking without significantly impacting current discussions.

An attacker could still use multiple accounts to mass-downvote everything from a user in the past 30 days. On the other hand, it's possible that some users' comments were uniformly bad. For the purpose of providing a useful signal, I think we only need enough downvotes to go just a bit negative. People respond more strongly to loss than to gain, after all! The karma of a particular comment could be capped at no worse than, say, -3, regardless of how many downvotes it received. That would be a cheap way to reduce the possibility of malicious mass-downvoting.
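A minimal sketch of the cap proposed above, assuming a comment's displayed score is simply a floored sum of its votes (the function name and the -3 constant are illustrative, not LessWrong's actual code):

```python
CAP = -3  # hypothetical floor on any single comment's displayed score

def displayed_score(votes):
    """votes: iterable of +1/-1 vote values cast on one comment.

    The raw sum is computed as usual, but a pile-on of downvotes
    can never push the displayed score below CAP.
    """
    return max(sum(votes), CAP)
```

Under this rule, ten downvotes and a hundred downvotes look identical to the author, which is the point: the "this was bad" signal survives while the incentive to mass-downvote disappears.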

comment by Adele_L · 2014-07-02T16:03:44.400Z · score: 11 (11 votes) · LW(p) · GW(p)

Would it be problematic to put a blanket ban on upvotes and downvotes of posts that are older than 30 days?

This is one of those little things I really like about LW; I would miss it if it were gone. The best content here is on posts that are years old, and discouraging discussion/engagement there would just make the current content problem worse.

The karma of a particular comment could be capped at no worse than, say, -3, regardless of how many downvotes it received. That would be a cheap way to reduce the possibility of malicious mass-downvoting.

This doesn't do anything to solve the problem of one mass-downvoter.

comment by selylindi · 2014-07-02T20:15:27.705Z · score: 2 (2 votes) · LW(p) · GW(p)

The best content here is on posts that are years old, and discouraging discussion/engagement there would just make the current content problem worse.

To be sure, commenting on old posts is great. That definitely shouldn't be banned. It's not so clear about the karma system, which serves several functions, one of which is signalling "more like this" or "less like this" in varying degrees to users so that they can modify their commenting habits. For you and all those who value upvoting/downvoting old comments for its function of engaging with old conversations, perhaps there could be an alternative course between banning late votes and maintaining the status quo? For instance, the upvote/downvote buttons could still increment/decrement scores on comments after 30 days, but not the karma of the commenters. Since a commenter would still have to look back through their old posts to notice the change anyway, the signalling effect would remain unchanged from the status quo, but the possibility of using old posts to attack karma would be removed. (Downside: karma wouldn't be the sum of comment scores.)
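A minimal sketch of that decoupling, under the assumption that comment scores and author karma are tracked separately (all class and function names here are hypothetical, not the site's implementation):

```python
from datetime import datetime, timedelta

LATE_CUTOFF = timedelta(days=30)  # assumed window from the proposal

class Comment:
    def __init__(self, posted_at):
        self.posted_at = posted_at
        self.score = 0

class User:
    def __init__(self):
        self.karma = 0

def apply_vote(comment, author, value, now):
    """value is +1 or -1; `now` is when the vote is cast."""
    comment.score += value                      # every vote moves the comment's score
    if now - comment.posted_at <= LATE_CUTOFF:  # only fresh votes touch author karma
        author.karma += value
```

A late downvote still shows up on the old comment (preserving the "less like this" signal), but the author's total karma, the thing a stalker attacks, is untouched.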

This doesn't do anything to solve the problem of one mass-downvoter.

Right, the problem it was stated to mitigate is that "An attacker could still use multiple accounts to mass-downvote everything from a user in the past 30 days." I forgot to state but also intended it as helping with the problem Ander brought up in the OP that getting a single comment massively downvoted has discouraged people from staying around LW.

Jiro correctly pointed out below that vigilance is the technologically simplest solution, albeit more laborious for everyone involved. My preference would be a community that prevented the problem rather than punished it afterwards. There's no guarantee that there exists a rule that would be the perfect solution, but no doubt we can come up with simple rules that put trivial inconveniences (or nontrivial ones) in the way of undesirable behavior! There are probably many such imperfect-but-helpful rules.

comment by Jiro · 2014-07-02T16:13:02.208Z · score: 2 (8 votes) · LW(p) · GW(p)

The simplest solution would be 1) to show the names of downvoters, and 2) to have moderators who are willing to kick people out for abusive downvoting.

1) could be dispensed with if users could ask moderators to look for abusive downvoting and publicize the name, but that would be more work for moderators.

comment by NancyLebovitz · 2014-07-02T23:01:06.141Z · score: 5 (5 votes) · LW(p) · GW(p)

Having a "gave most downvotes in the past month" list (with the numbers of downvotes, of course) would be awesome.

comment by Nornagest · 2014-07-02T23:55:53.063Z · score: 2 (2 votes) · LW(p) · GW(p)

Well, I don't think that'd have most of the social effects that make me think open votes are a bad idea. It does have some odd features, though -- not everyone votes (or indeed contributes) at the same rate, so a prolific contributor with perfectly normal voting habits might end up being flagged over a less prolific retributive downvoter. Not that looking at downvote ratios would be much better -- those would be fairly easy to mask. Either option would be a disincentive to downvoting in general, and I'm not sure that's a good thing.

Still, this doesn't strike me as an obviously bad idea. I'd probably prefer something more narrowly targeted at retributive behavior, but if that's not in the cards this might be a good option.

comment by satt · 2014-07-03T03:13:15.180Z · score: 1 (1 votes) · LW(p) · GW(p)

A variation on NancyLebovitz's idea: instead of listing individual users with the most downvotes in the past month, list the pairs of users A & B with the highest number of downvotes given by A to B in the past month. With the latter, merely prolific users should rank visibly below the blanket downvoters.
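A sketch of how that pair report might be tallied, assuming the vote log can be read as (voter, target, value) triples (the function name and data shape are illustrative assumptions):

```python
from collections import Counter

def top_downvote_pairs(votes, n=10):
    """votes: iterable of (voter, target, value) with value in {+1, -1}.

    Returns the n (voter, target) pairs with the most downvotes
    given by voter to target, most prolific pair first.
    """
    pairs = Counter()
    for voter, target, value in votes:
        if value < 0:
            pairs[(voter, target)] += 1
    return pairs.most_common(n)
```

Because the count is per pair rather than per voter, a prolific but even-handed downvoter spreads their votes across many targets and ranks low, while a blanket downvoter concentrates on one target and floats to the top.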

comment by SilentCal · 2014-07-02T18:08:42.716Z · score: 4 (6 votes) · LW(p) · GW(p)

On the technical solution side, how feasible would it be to institute a more complex karma aggregation algorithm, with diminishing effects from repeated downvotes from the same user?
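One possible instantiation of that suggestion, as a sketch: weight each successive downvote from the same user geometrically, so any single voter's total negative effect on a target is bounded (the decay factor and all names are assumptions, not a concrete proposal from the thread):

```python
def aggregate_karma(votes, decay=0.5):
    """votes: ordered list of (voter, value) cast against one target,
    with value in {+1, -1}.

    Upvotes count fully. Each successive downvote from the same voter
    counts for `decay` times the previous one, so one voter's downvotes
    sum to at most 1 / (1 - decay) points, no matter how many they cast.
    """
    downvote_counts = {}
    total = 0.0
    for voter, value in votes:
        if value < 0:
            k = downvote_counts.get(voter, 0)
            total += value * (decay ** k)
            downvote_counts[voter] = k + 1
        else:
            total += value
    return total
```

With decay=0.5, a single user's downvotes can never subtract more than 2 points in total, so a 1000-vote attack costs the attacker a thousand clicks and the victim almost nothing, while ordinary scattered downvotes from many users still register at nearly full weight.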

comment by someonewrongonthenet · 2014-12-08T18:01:56.813Z · score: 0 (0 votes) · LW(p) · GW(p)

Do we have more than one downvote stalker? If so, it really sucks that it takes only a single person to bring down an entire community.

comment by Agathodaimon · 2014-07-18T22:04:08.647Z · score: 0 (2 votes) · LW(p) · GW(p)

Your intuition appears to be good. There was a recent paper published on this very topic.

http://arxiv.org/abs/1405.1429