Dealing with trolling and the signal to noise ratio
post by JoshuaZ · 2012-08-31T13:26:13.809Z · LW · GW · Legacy · 236 comments
The recent implementation of a -5 karma penalty for replying to comments that are at -3 or below has clearly met with some disagreement and controversy. See http://lesswrong.com/r/discussion/lw/eb9/meta_karma_for_last_30_days/7aon . However, at the same time, it seems that Eliezer's observation that trolling and related problems have gotten worse here over time may be correct. It may be that this is an inevitable consequence of growth, but it may be that it can be handled or reduced with some solution or set of solutions. I'm starting this discussion thread for people to propose possible solutions. To minimize anchoring bias and related problems, I'm not including my ideas in this header but in a comment below. People should think about the problem before reading proposed solutions (again, to minimize anchoring issues).
236 comments
Comments sorted by top scores.
comment by DanArmak · 2012-08-31T19:19:42.205Z · LW(p) · GW(p)
Can someone please provide hard data on trolling, to assess its kind and scale? I can only remember a single example of repeated apparent trolling - comments made by private_messaging and presumed sockpuppets. I'm not very active though, and miss many discussions while they're still unvoted-on.
Replies from: Kaj_Sotala, John_Maxwell_IV↑ comment by Kaj_Sotala · 2012-09-01T12:34:02.420Z · LW(p) · GW(p)
Seconded. The first time that I saw any indications of LW having a troll problem was a couple of days back, when people started complaining about us having one.
Replies from: Will_Newsome↑ comment by Will_Newsome · 2012-09-01T13:30:07.355Z · LW(p) · GW(p)
Well now that you guys have started talking about a trolling problem, I'm quite happy to provide one for you.
(Eliezer, this is what happens when you use words whose meanings you don't understand.)
Replies from: Kaj_Sotala↑ comment by Kaj_Sotala · 2012-09-01T13:32:26.292Z · LW(p) · GW(p)
Point taken.
But you're a cute troll, so you only count partially.
Replies from: None, Will_Newsome↑ comment by [deleted] · 2012-09-01T21:12:35.154Z · LW(p) · GW(p)
Can someone please provide hard data on Will_Newsome's cuteness, to assess its kind and scale?
Replies from: gwern↑ comment by gwern · 2012-09-02T00:30:24.573Z · LW(p) · GW(p)
According to Google Images & Twitter, he's dangerously Bieber-licious!
Replies from: army1987↑ comment by A1987dM (army1987) · 2012-09-02T08:30:34.902Z · LW(p) · GW(p)
Yes, that surprised me. For some reason, my mental image of him was that of a man in his late forties.
↑ comment by Will_Newsome · 2012-09-01T13:34:25.346Z · LW(p) · GW(p)
Eliezer doesn't think I'm so cute. But of course, when it comes to his walled garden, Eliezer likes to swim in the shallow end of the meme pool.
↑ comment by John_Maxwell (John_Maxwell_IV) · 2012-09-01T05:46:40.302Z · LW(p) · GW(p)
I'm not even sure he's a troll--my impression is that he isn't very fluent in English and inadvertently got himself anti-LW-mindkilled.
Replies from: More_Right↑ comment by More_Right · 2014-04-24T06:12:58.156Z · LW(p) · GW(p)
Can anyone "name that troll?" (Rumplestiltskin?)
comment by Sly · 2012-08-31T23:49:19.582Z · LW(p) · GW(p)
This rule is asinine.
If I see a post at -3 that I desire to reply to, I am incentivized to upvote it so that I may enter my comment.
Furthermore, it stifles debate.
Look at this post of Eliezer's at -19. Under the new system, the worthwhile replies to that post are not encouraged.
Under the new system, instead of expressing their disagreement, people will simply not want to reply. The negatives of this system grossly override any upsides.
I have not noticed a worsening trolling problem. Does anyone have any evidence of such a claim?
Replies from: common_law↑ comment by common_law · 2012-09-02T04:21:19.042Z · LW(p) · GW(p)
This rule is asinine.
Indeed. But why is our Rational Leader overreacting to what would seem a minor issue? The question bears analysis. Have some of the public exposures of SIAI left him feeling particularly vulnerable to criticism?
comment by [deleted] · 2012-08-31T15:06:12.234Z · LW(p) · GW(p)
Problem:
General signal to noise. Tags on articles are used very badly. This makes finding interesting content on a topic harder than it needs to be.
Idea:
Let's let users with enough karma edit tags on articles! Seriously, why aren't we doing this already?
Replies from: RolfAndreassen, Eugine_Nier↑ comment by RolfAndreassen · 2012-08-31T15:17:59.504Z · LW(p) · GW(p)
This seems to assume that having correct tags will reduce noise, actual or perceived. That's not clear to me. I never look at the tags when deciding what to read, only the titles. Why would accurate tags be useful?
Replies from: None, dbaupp, magfrump↑ comment by [deleted] · 2012-08-31T15:26:01.973Z · LW(p) · GW(p)
It would make research easier when writing new articles. It would help people interested in a particular topic read more about it. I use tags a lot, and even as poorly used as they currently are, I've found a lot of interesting material through them.
Also, most importantly, it would be a step towards better indexing.
↑ comment by dbaupp · 2012-08-31T15:31:57.038Z · LW(p) · GW(p)
It allows people to read only the things they are interested in; essentially it could provide multiple topic-based discussion areas.
I don't think one can look at how tags have previously been used on LW to answer "why would accurate tags be useful?", since the tagging situation is pretty horrible. On StackOverflow the tags can be edited (and are, all the time) by "high"[1] reputation users, and they are used to filter content extensively. However, the infrastructure there is a little different, since users can select "Favorite Tags" and "Ignored Tags" to control what shows up on their front page.
(Granted that the comparison isn't necessarily a good one, since StackOverflow is much higher volume, and it has a slightly different purpose.)
[1]: 500 reputation, but reputation is received in increments of 5 or 10 points per vote (depending on the situation), so it is actually quite low. But the possible tags come from a finite set, and one needs 1500 reputation to add a new tag to this set.
↑ comment by Eugine_Nier · 2012-09-01T03:56:49.553Z · LW(p) · GW(p)
One thing we can do now is use the wiki to index articles.
comment by buybuydandavis · 2012-08-31T18:59:25.987Z · LW(p) · GW(p)
Instead of trying to stop noise, you can filter it. Instead of designing to prevent errors, you can design to be robust to them.
I'll repeat something I said in the other thread:
To the extent that all the griping over signal to noise is about a desire to control what you see, and not control what others see or say, there are decades-old solutions to discussion filtering. The fancy shmancy Web has been a marked devolution of capabilities in this regard. It's pitiful. No web discussion forum I know of has filtering capabilities even in the ball park of Usenet, which was available in the 80s. Pitiful.
I also suggest that any solution which is not fundamentally about user customization is a failure under my assumption above, because one man's noise is another man's signal.
Replies from: moridinamael, More_Right↑ comment by moridinamael · 2012-08-31T20:26:12.282Z · LW(p) · GW(p)
You've made me understand the root of one of my own dissatisfactions with the current system. If I look through my post history and roughly group my posts into bins based on how I would summarize them, this is what I see:
Silly posts in the HP:MOR threads: ~ +20 karma
Posts of mine having little content except to express agreement with other high-karma posts: ~ +10 karma
Important information or technical corrections in serious discussions: ~ +1 karma
Posts in which I try to say something technical, which I retrospectively realize were poorly worded but could have been clarified if someone had pointed out an issue instead of just downvoting: ~ -5 karma
Perhaps I exaggerate slightly but my point is that if I were to formulate a posting strategy aimed at obtaining karma, then I would avoid saying anything technical or discussing anything serious and stick to applause lights and fluff.
On top of this, I tend to watch how the karma of my most recent comments behaves, and so I notice that, for example, a comment might have +5 upvotes and -3 downvotes, with no replies. This is just baffling to me. Was there something wrong with the post that three people noticed? Were there three separate things wrong with it? Was it just a response to the tone? What about the upvotes: is it being upvoted because of the witticism at the end, or because of the technical content in the middle? My point is that something like Slashdot's system, where things are voted "funny" or "insightful," would be infinitely more useful.
Replies from: buybuydandavis, JenniferRM, Dan_Moore↑ comment by buybuydandavis · 2012-08-31T21:49:47.351Z · LW(p) · GW(p)
Perhaps I exaggerate slightly but my point is that if I were to formulate a posting strategy aimed at obtaining karma, then I would avoid saying anything technical or discussing anything serious and stick to applause lights and fluff.
That's about right. Also, stick to high traffic threads. Hit the HPMOR threads hard!
As I pointed out, people want different things out of the list; you finish by pointing out that the karma votes themselves are clearly used differently by different people. They're also used to a different extent by different people.
One nice thing that Slashdot does is limit your karma votes. That keeps individual Karma Kops from having a disproportionate effect on total score. But I don't think the Slashdot system of multiple scores is that helpful.
From my experience in the grand old days of Usenet, the most useful filters were on people, and the important ease of use features were a single screen view of all threads, expand and contract, sort by date or thread, and sort by date for a subset of threads.
↑ comment by JenniferRM · 2012-09-11T07:04:44.508Z · LW(p) · GW(p)
I think you might be falling prey to a sort of fundamental attribution error for comments... thinking of all votes on a comment as being about the internal traits of the comment itself.
I generally vote to enact a total ordering on all current content, aiming to raise valuable/unique/informative/pro-social content to reader-serving prominence. This involves determining an ideal arrangement of content and voting everything down that is too high, and voting up everything that is too low... except, I try to keep the floor at zero total except where content is (sadly) above the sanity waterline of LW, as with some discussions of gender relations and politics.
About the only "pure knee jerk voting" I engage in is upvoting of content that isn't mind-killing in itself but that has a negative total. Sometimes I upvote a comment simply because someone said something really awesome as a rebuttal to it, and that comment/rebuttal pair is worth reading, and the way to give the conversation the order-of-reading-attention it deserves is to upvote the parent.
↑ comment by Dan_Moore · 2012-09-04T14:17:10.817Z · LW(p) · GW(p)
(+1) I rarely downvote, but from now on, I will accompany any downvote with a reply stating "-1: reason for downvote."
Replies from: wedrifid↑ comment by wedrifid · 2012-09-05T02:04:48.135Z · LW(p) · GW(p)
(+1) I rarely downvote, but from now on, I will accompany any downvote with a reply stating "-1: reason for downvote."
I will downvote every such comment because I oppose this being used as a general policy. I don't want to see the spam, and sometimes it just isn't useful to criticize explicitly. Even when downvoting is accompanied by such a criticism, it is sometimes better to just speak directly rather than dragging talk about your downvotes into the conversation.
Replies from: Dan_Moore↑ comment by Dan_Moore · 2012-09-05T13:34:38.020Z · LW(p) · GW(p)
This seems like a reasonable approach. The reason for the downvote could force a defamatory statement, which I prefer to avoid. Otherwise, you are right that dragging in a downvote mention doesn't add anything to just saying what you want to say. Thanks for the comment, by the way.
I was thinking that upon downvoting, maybe an option (not a requirement) should be given to state a reason why. Then I realized that there is no need to program such a thing; this option exists already.
Replies from: More_Right↑ comment by More_Right · 2014-04-24T05:46:36.993Z · LW(p) · GW(p)
Too much information can be ignored; too little information is sometimes annoying. I'd always welcome your stated reason for a downvote, especially if it seems legitimate to me.
If we were going to get highly technical, a somewhat interesting thing to do would be to allow a double click to differentiate your downvote, dividing it into several "slider bars." People who didn't differentiate their downvotes would be listed as a "general downvote"; those who did differentiate would be listed as a "specific reason downvote." A small number of "common reasons for downvoting that don't merit an individualized comment" on LessWrong would be present, plus an "other" box. If you clicked on the light gray "other," it would be replaced with a dropdown selection box, one whose default position you could type into, limited to 140 characters. Other options could be "poorly worded, but likely to be correct," "poorly constructed argument," "well-worded but likely incorrect," "ad hominem attack," "contains logical fallacies," "bad grammar," "bad formatting," "ignores existing body of thought, seems unaware of existing work on the subject," "anti-consensus, likely wrong," "anti-consensus, grains of truth."
There could also be a "reason for upranking," with options that are the polar opposites of the prior ones, so one need only adjust one slider bar for "positive and negative" common reasons. This would allow a + and - value to be associated with comments, to obtain a truer picture of the comment more quickly. "Detailed rankings" (listed next to the general ranking) could give commenters a positive and a negative for various reasons, dividing up two possible points and adjusting the remaining percentages for remaining portions of a point as the slider bar was raised. "General argument is true" could be the positive "up" value; "general argument is false" could be its polar opposite.
It also might be interesting to indicate how long people took to write their comments, if they were written in the edit window, and not copied and pasted. A hastily written comment could be downranked as "sloppily written" unless it was an overall optimal comment.
Then, when people click on the comment ranking numbers, they could see a popup window with all the general up- and downvotes, many of them providing the specific reasoning behind them. Clicking on a big "X" would close the window.
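For concreteness, here is a minimal sketch of how such reason-tagged votes might be modeled. Every class name, field name, and reason label below is a hypothetical illustration, not anything from LessWrong's actual codebase:

```python
from collections import Counter
from dataclasses import dataclass, field

# Hypothetical reason labels; a real deployment would choose its own set.
REASONS = {
    "general",  # undifferentiated up/downvote
    "poorly worded, but likely to be correct",
    "well-worded but likely incorrect",
    "ad hominem attack",
    "bad formatting",
    "other",    # free text, limited to 140 characters
}

@dataclass
class Vote:
    voter_id: int
    direction: int           # +1 for upvote, -1 for downvote
    reason: str = "general"
    other_text: str = ""     # used only when reason == "other"

    def __post_init__(self):
        assert self.direction in (+1, -1)
        assert self.reason in REASONS
        assert len(self.other_text) <= 140

@dataclass
class CommentVotes:
    votes: list = field(default_factory=list)

    def summary(self):
        """The aggregate a popup window could display: the total score
        plus a breakdown of general vs. differentiated votes."""
        score = sum(v.direction for v in self.votes)
        breakdown = Counter((v.direction, v.reason) for v in self.votes)
        return score, breakdown
```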
I also like letting unregistered users vote in a separate "unregistered users" ranking. Additionally, it would be interesting to create a digital currency for the site that can be traded or purchased, in order to create market karma. Anyone who produces original work for LW could be paid corresponding to the importance of the work, according to their per-hour payscale and the number of hours (corresponding to "real world pay" from CFAR or other cooperating organizations).
A friend of mine made $2M off of an initial small investment in bitcoin, and never fails to rub that in when I talk to him. I'd like it if a bunch of LW people made similar profits off of ideas they almost inherently understand. Additionally, it would be cool to get paid for "intellectual activity" or "actual useful intellectual work" (depending on one's relationship with the site) :)
↑ comment by More_Right · 2014-04-24T05:31:01.021Z · LW(p) · GW(p)
No web discussion forum I know of has filtering capabilities even in the ball park of Usenet, which was available in the 80s. Pitiful.
I strongly share your opinion on this. LW is actually one of the better fora I've come across in terms of filtering, and it still is fairly primitive. (Due to the steady improvement of this forum based on some of the suggestions that I've seen here, I don't want to be too harsh.)
It might be a good idea to increase comment-ranking values for people who turn on anti-kibitzing. (I'm sure other people have suggested this, so I claim no points for originality.) ...What a great feature!
(Of course, that option of "stronger karma for enabled anti-kibitzers" would give an advantage to the malevolent people who want to "game the system," who could turn it on and off, or turn it on on another device, see the information necessary to "send out their political soldiers," and use that to win arguments with higher-ranking karma. Then again, one might want to reward malevolent players, because they are frequent users of the site who thus increase the overall activity level, even if they do so dishonestly. They then become "invested players" for when the site is optimized further. Also, robust sites should be able to filter even malevolent players, emphasizing constructive information flow. So, even though I'm a "classical liberal" or "small-L libertarian," this site could theoretically be made stronger if there were a lot of paid government goons on it, purposefully trying to prevent benevolent or "friendly" AGI that might interfere with their plans for continuing domination.)
A good way to defeat this would be to "mine" for "anti-kibitzing" karma. Another good idea would be to allow users to "turn off karma." Another option would be to allow those with lots of karma to turn off their own karma, and show a ratio of "possible karma" next to "visible karma," as an ongoing vote, by those in a position of power to benefit from the system, on what system makes the most sense. This still wouldn't tell you if it was a good system, but everyone exercising the option would indicate that the karma-based system was a bad one.
Also, I think that in a perfect world, karma in its entirety should be eliminated here. "One man's signal is another man's noise," indeed! If a genius-level basement innovator shows up tomorrow and begins commenting here, I'd like him to stick around. (Possibly because I might be one myself, and have noticed that some of the people who most closely agree with certain arguments of mine are here briefly as "very low karma" participants, agree with one or two points I make, and then leave. Also, if I try to encourage them but others vote them down, I'm encouraged to eliminate dissent in the interest of eliminating "noise." Why not just allow users to automatically minimize anyone who comments on a heavily-downranked, already-minimized comment? Problem solved.)
LessWrong is at risk of becoming another "unlikeliest cult," to the same extent that Ayn Rand Institute became an "unlikely cult." (ARI did, to some extent, become a cult, and that made it less successful at its intended goal, which was similar to the stated goal of LessWrong. It became too important what Ayn Rand personally thought about an idea, and too unimportant what hierarchical importance there inherently was to the individual ideas themselves. Issues became "settled" once she had an opinion on them. Much the way that "mind-killing" is now used to "shut down" political debate, or debate over the importance of political engagement, and thus cybernetics, itself.)
There are certain subjects that "most humans in general" have an incredibly difficult time discussing, and unthinking agreement with respected members of the community is precisely what makes it "safe" to disregard novel "true" or "valuable" solutions or problem-solving ideas, ...rare as they may admittedly be.
Worse still, any human network is more likely to benefit from solutions outside of its own area of expertise. After all, the experts congregate in the same place and familiarize themselves with the same incremental pathways toward the solution of their problems. In any complex modern discipline this requires immense knowledge and discipline. But what if there is a more direct but unanticipated solution that can arise from outside of that community? This is frequently the case, as indicated in Kurzweil's quote of Wiener's "Cybernetics" in "How to Create a Mind."
It may be that the rise of a simple algorithm designed by a nanotech pioneer rapidly builds a better brain than AGI innovators can build, and that this brain "slashes the Gordian knot" by out-thinking humans and building better and better brains that ultimately are highly logical, highly rational, and highly benevolent AGI. This constitutes some of the failure of biologists and computer scientists to understand the depth of each others' points at a recent Singularity Summit meeting. http://www.youtube.com/watch?v=kQ2snfsnroM - Dennis Bray on the Complexity of Biological Systems (author of "Wetware," describing computational processes within cells).
Also, if someone can be a "troll" and bother other people with his comments, he's doing you a small favor, because he's showing that there are weaknesses in your commenting system that actually rise to the level of interfering with your mission. If we were all being paid minimum wage to be here, that might represent significant losses. (And shouldn't we put a price on how valuable this time is to us?) The provision of garbled blather as a steady background of "chatter" can be analyzed by itself, and I believe it exists on a fluid scale from "totally useless" to "possibly useful" to "interesting." Also, it indicates a partial value: the willingness to engage. Why would someone be willing to engage a website about learning an interesting subject, but not actually learn it? They might be unintelligent, which then gives you useful information about what people are willing to learn, and what kinds of minds are drawn to the page without the intelligence necessary to comprehend it, but with the willingness to try to interact with it to gain some sort of value. (Often these are annoying religious types who wish to convert you to their religion, who are unfamiliar with the reasons for unbelief. However, occasionally there's someone who has logic and reason on their side, even though they are "unschooled." I'm with Dawkins on this one: A good or bad meme can ride an unrelated "carrier meme" or "vehicle.")
Site "chatter" might normally not be too interesting, and I admit it's sub-optimal next to people who take the site seriously, but it's also a little bit useful, and a little bit interesting, if you're trying to build a network that applies rationality.
For example, there are, no doubt, people who have visited this website who are marketing majors, or who were simply curious about the current state of AGI due to a question about when will a "Terminator" or "skynet"-like scenario be possible, (if not likely). Some of them might have been willing participants in the more mindless busywork of the site, if there had been an avenue for them to pursue in that direction. There are very few such avenues on this "no nonsense" (but also no benevolent mutations) version of the site.
There also doesn't appear to be much of an avenue for people who hold significant differences of opinion that contradict or question the consensus. Such ideas will be downvoted, and likely out of destructive conformity. As such, I agree that it's best to allow users to eliminate or "minimize" their own conceptions of "what constitutes noise" and "what constitutes bias."
comment by Emile · 2012-08-31T19:56:53.091Z · LW(p) · GW(p)
The recent implementation of a -5 karma penalty for replying to comments that are at -3 or below has clearly met with some disagreement and controversy.
How about we wait a couple of weeks to try the new feature, instead of jumping up in outrage and proposing even more complicated schemes?
I'd be in favor of an official "no complaining about feature X for the first two weeks" rule, after which a post could be created for discussion. That way the discussion could be about what actually happened, and not about what people imagine might happen.
It's not as if two weeks of using an experimental feature was some unbearable burden.
Replies from: drethelin, DanArmak↑ comment by DanArmak · 2012-08-31T20:17:29.501Z · LW(p) · GW(p)
It's not a huge burden, but we're already seeing some negative effects worth discussing. For example, I have now twice paid the 5 karma penalty replying to downvoted comments which were not trolling at all; they were downvoted because people disagreed with what they were proposing.
If we decide to wait two weeks, we need to decide on specific criteria that we will judge in two weeks' time to decide whether to modify or remove the new feature. If the new feature stays anyway because a few people decide unilaterally, then we might as well discuss it now.
comment by [deleted] · 2012-08-31T15:15:36.722Z · LW(p) · GW(p)
Wiki
We would benefit from more and better wiki articles since they seem the best way to compress information that is often scattered across several articles and dozens of comments. This should help us maintain our level of discussion by making it easier to bring users up to speed on topics.
I used to think the most straightforward fix would be:
Eliminate the trivial inconvenience of creating a separate account for the wiki. Let's just make it so you use your LW login. Also, perhaps limit edits to people with more than 100 karma, since I hear they had some problems with spamming.
Let people upvote and downvote edits. Let karma whoring work for us!
But when I talked about this on IRC with gwern, he thought it probably wouldn't do much good and isn't worth the effort to implement. What do fellow rationalists think might be a good way to encourage more quantity and quality in the wiki?
Replies from: DanArmak, Bruno_Coelho↑ comment by DanArmak · 2012-08-31T18:45:01.778Z · LW(p) · GW(p)
If we're seriously having troll issues, then wiki + trolls = edit wars. Also polarization over schools of editing styles, the way Wikipedia has its Deletionists.
Replies from: None↑ comment by Bruno_Coelho · 2012-08-31T17:04:58.500Z · LW(p) · GW(p)
If most of the topics generate disagreement, growing a wiki makes this site less dynamic. Or maybe it's fine to change (or improve) definitions all the time.
comment by David_Gerard · 2012-08-31T13:50:01.146Z · LW(p) · GW(p)
(as I noted in the buried thread)
The mental model being applied appears to be sculpting the community in the manner of sculpting marble with a hammer and chisel. Whereas how it'll work will be rather more like sculpting human flesh with a hammer and chisel. Giving rather a lot of side effects and not quite achieving the desired aims. Sculpting online communities really doesn't work very well. But, geeks keep assuming the social world is simple, even when they've been in it for years.
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2012-08-31T16:22:55.929Z · LW(p) · GW(p)
"Technical solutions for social problems almost never work."
Replies from: ciphergoth, NancyLebovitz, rocurley↑ comment by Paul Crowley (ciphergoth) · 2012-08-31T21:00:46.834Z · LW(p) · GW(p)
Why on Earth do people keep saying this? Sending out a party invite via email is a technical solution to a social problem, and it's great! For God's sake, taking the train to see a friend is a technical solution to a social problem. This phrase seems to have gained currency through repetition despite being trivially, obviously false on the face of it.
Replies from: Risto_Saarelma, army1987↑ comment by Risto_Saarelma · 2012-08-31T23:05:50.311Z · LW(p) · GW(p)
How about if you substitute "nontrivial social conflict" for "social problem"?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2012-09-01T06:49:40.776Z · LW(p) · GW(p)
Burglar alarms, voting, Pagerank? Pagerank is definitely a very technological solution to a serious conflict of interest problem, and its effectiveness is a key driver of Google's initial success. Why would you expect technology not to be helpful here?
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2012-09-01T07:26:12.712Z · LW(p) · GW(p)
Ok, those are pretty good examples. Though none of them are quite complete, in the sense that there's still a bunch of human messiness with circumvention and countermeasures involved. Burglar alarms need human security personnel to back up the threat, voting is being gamed with gerrymandering and who knows what, and PageRank is probably in a constant arms race between SEO operators and Google engineers tweaking the system. They don't work in a way where you just drop in the tech, go to sleep, and have the tech solve the social conflict, though they obviously help manage the conflict, possibly to a very large degree.
The idea with discussion forums, where people spout the epigram, often seems to be that the technical solution would just tick away without human supervision and solve the social conflict. Stuff that does that is extremely hard. Stuff that's more a tool than a complete system will need a police department or a Google or full-time discussion forum moderators to do the actual work while being helped by the tool.
Modern Bayesian spam filters are another example of a well-working technical solution to a social conflict though. Don't know how much of an arms race something like Gmail's filter is. This is something that is giving me the vibe of a standalone system actually solving the problem, even more than PageRank, though I don't know the inner details of either very well.
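For concreteness, the technique being referred to can be sketched as a toy naive Bayes text filter. This is an illustrative simplification with invented training data; it makes no claim about how Gmail's actual filter works:

```python
import math
from collections import Counter

class NaiveBayesFilter:
    """Toy naive Bayes spam filter, for illustration only.
    Assumes both classes get at least one training example."""

    def __init__(self):
        self.word_counts = {"spam": Counter(), "ham": Counter()}
        self.doc_counts = {"spam": 0, "ham": 0}

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def p_spam(self, text):
        # Score each class in log space, with Laplace smoothing so that
        # unseen words don't zero out the probability.
        total_docs = sum(self.doc_counts.values())
        scores = {}
        for label in ("spam", "ham"):
            total_words = sum(self.word_counts[label].values())
            vocab = len(self.word_counts[label]) + 1
            score = math.log(self.doc_counts[label] / total_docs)
            for word in text.lower().split():
                score += math.log(
                    (self.word_counts[label][word] + 1) / (total_words + vocab))
            scores[label] = score
        # Normalize the two log scores into P(spam | text).
        m = max(scores.values())
        exp = {k: math.exp(v - m) for k, v in scores.items()}
        return exp["spam"] / (exp["spam"] + exp["ham"])

f = NaiveBayesFilter()
f.train("buy cheap pills now", "spam")
f.train("meeting notes for tuesday", "ham")
print(f.p_spam("cheap pills"))  # well above 0.5
```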
Replies from: ciphergoth, ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2012-09-01T07:47:25.423Z · LW(p) · GW(p)
When I hear people say "you're proposing a technical improvement to a social problem", they are not cheering on the effort to continually tweak the technology to make it more effective at meeting our social ends; they are calling for an end to the tweaks. From what you say above, that's the wrong direction to move in. Pagerank got worse as it was attacked and needed tweaking, but untweaked Pagerank today would still be better than untweaked AltaVista. "This improvement you're proposing may be open to even greater improvement in the future!" doesn't seem like a counter argument.
In many instances, the technology doesn't directly try to determine the best page, or candidate; it collects information from people. The technology is there to make a social solution to a social problem possible. That's what we're trying to do here.
Replies from: Caspian↑ comment by Caspian · 2012-09-02T03:16:01.101Z · LW(p) · GW(p)
I mostly agree with you that the statement against technical solutions is false on the face of it.
How about this: if you want to prevent certain types of discussion and interaction in an online community, the members need to have some kind of consensus against it (the "social" part of the solution). Otherwise technical measures will either be worked around (if plenty of communication can still happen) or the community will be damaged (if communication is blocked enough to achieve the stated aim).
Technical measures can change the required amount of consensus needed from complete unanimity to something more achievable.
In our case, we may not have had the required amount of consensus against feeding trolls, or of what counts as a troll to avoid feeding.
↑ comment by Paul Crowley (ciphergoth) · 2012-09-01T07:52:25.973Z · LW(p) · GW(p)
Because this involves conflict of interest, it is a security issue, and people aren't very good at thinking about those. Often they fail to take the basic step of asking "if I were the attacker, how would I respond to this?". See Inside the twisted mind of the security professional.
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2012-09-01T08:14:26.974Z · LW(p) · GW(p)
When you think of discussion forum design as a security issue, determining just what should be considered an attack can get pretty tricky. Trying to hack other people's passwords, sure. Open spamming and verbal abuse in messages, most likely. Deliberate trolling, probably, but how easy is it to tell what the intent of a message was? Formalizing "good faith discussion" isn't easy. What about people sincerely posting nothing but "rationalist lolcat" macro pictures on the front page and other people sincerely upvoting them? Is a clueless commenter a 14-year-old who is willing to learn forum conventions and is a bit too eager to post in the meantime, or a 57-year-old who would like to engage you in a learned debate to show you the error of your ways of thought and then present you the obvious truth of the Space Tetrahedron Theory of Everything?
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2012-09-01T10:52:57.712Z · LW(p) · GW(p)
I'm not sure how what you say above is meant to influence what we recommend wrt possible changes to LW.
Replies from: Risto_Saarelma↑ comment by Risto_Saarelma · 2012-09-01T11:16:53.934Z · LW(p) · GW(p)
Basically that discussion forum failure modes seem to be very complex compared to what an autonomous technical system can handle, and the discussion on improving LW seems to often skirt around the role of human moderators in favor of trying to make a forum work with simple autonomous mechanisms.
↑ comment by A1987dM (army1987) · 2012-09-03T08:15:47.017Z · LW(p) · GW(p)
ADBOC. Email and trains might be “technical solutions for social problems” in the literal sense, but that's not what that phrase normally means.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2012-09-03T14:24:17.884Z · LW(p) · GW(p)
What does the phrase normally mean? Risto_Saarelma had one go, in reply to me, at restricting it to the relevant domain, but that didn't work. Could you describe what the phrase does normally mean? I'm not asking for a perfect, precise definition, just a pointer to a cluster of correlations that it identifies.
Replies from: Richard_Kennaway, army1987↑ comment by Richard_Kennaway · 2012-09-03T15:17:53.781Z · LW(p) · GW(p)
I think it's a misidentification of the reason why a certain class of proposed solutions to social problems do not work. The class consists of solutions which fail to take into account that people will change their behaviour as necessary to achieve whatever their purposes are, and will simply step around any easily-avoided obstacles that may be placed in their way.
The famous picture that Bruce Schneier once posted as an allegory of useless security measures, of car tracks in the snow going around barriers across the road, is an excellent example. "The Internet routes around censorship" is another.
Replies from: ciphergoth↑ comment by Paul Crowley (ciphergoth) · 2012-09-03T15:26:54.584Z · LW(p) · GW(p)
(from The Weakest Link)
That seems like a plausible story! And was also the message I was pointing at here.
↑ comment by A1987dM (army1987) · 2012-09-03T22:23:53.266Z · LW(p) · GW(p)
“I know it when I see it”, but I'd say that e-mail and trains enable people to do what they want to do (namely, communicate and travel), whereas the prototypical “technical solutions for social problems” try to discourage people from doing what they want to do (e.g. Prohibition).
↑ comment by NancyLebovitz · 2012-08-31T16:45:10.989Z · LW(p) · GW(p)
Making Light and Ta-Nehisi Coates have notably good comment sections, and they have strong moderation by humans.
Replies from: None↑ comment by rocurley · 2012-09-04T15:15:27.700Z · LW(p) · GW(p)
This seems intuitively likely, and is likely true in many cases. In the end, if you don't have good commenters, there may not be much to be done about it on a technical level. However, it's not obvious to me it applies here. For example, the entire karma system is a technical solution that seems to be, if not ideal, better than nothing in dealing with the social problem of filtering content on this site and Reddit.
comment by khafra · 2012-08-31T22:56:18.224Z · LW(p) · GW(p)
Consider the prior art, here. The first place I saw the "reply to any negatively-scored comment inherits the parent's score" concept was at SensibleErection. That policy has been in place there longer than LW has been a forum (possibly longer than reddit), so it seems to work for them.
Prior art for special "oldschool/karma-proven" sections: Hacker News. Paul Graham is intensely interested in keeping a high-quality forum going, and is very willing to experiment. Here's the normal frontpage, here's the oldschool view, and here's the recent members view. Hacker News also has several thresholds for voting privileges.
One more step HN took is hiding comment scores, while continuing to sort comments from highest to lowest. It's dramatic, almost draconian, but it definitely had an effect on the karma tournament system.
Replies from: dbaupp
comment by Aharon · 2012-09-01T11:33:43.361Z · LW(p) · GW(p)
Could someone please point out some examples of trolling to me? I find this discussion surprising because I perceived the trolling rate as low to non-existent. Perhaps I've frequented the wrong threads.
Replies from: dbaupp↑ comment by dbaupp · 2012-09-01T12:07:38.416Z · LW(p) · GW(p)
The most obvious example of trolls right now is this post and some of its comments, although as far as trolling goes, neither are very effective.
Replies from: phonypapercut↑ comment by phonypapercut · 2012-09-01T16:02:17.133Z · LW(p) · GW(p)
That post wouldn't exist if the karma penalty hadn't been implemented.
Replies from: Viliam_Bur, NancyLebovitz, dbaupp↑ comment by Viliam_Bur · 2012-09-02T12:46:33.024Z · LW(p) · GW(p)
That post wouldn't exist if the karma penalty hadn't been implemented.
Agree denotationally... but I hope you are not proposing a policy of avoiding things that could make trolls unhappy with this site.
↑ comment by NancyLebovitz · 2012-09-02T08:44:44.368Z · LW(p) · GW(p)
And even so, it's only 15 comments.
comment by shokwave · 2012-08-31T15:42:59.449Z · LW(p) · GW(p)
I'm not proposing a solution. I'm thinking about the problem for five minutes.
edit: Well, it didn't even take five minutes!
We need a reliable predictor of troll-nature. I mean, I'm not even sure that P( troll comment | at -3 ) is above, say, 0.25 - much less anywhere high enough to be comfortable with a -5 penalty.
Of course, I'd be comfortable with asserting that P( noise comment | at -3 ) is pretty high, like 0.6 or something. Still not high enough to justify a penalty, in my opinion, but high enough that I can see how another's opinion might be that it justifies a penalty. If that is the case, well, the discussion is being severely negatively impacted by conflating noise and trolling.
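To make the conflation concrete, here is the shape of that calculation with purely invented numbers; none of these rates come from actual LW data:

```python
# Illustrative base rates only: invented, not measured LW statistics.
p_troll = 0.02            # prior: fraction of comments that are trolling
p_noise = 0.20            # prior: fraction of comments that are mere noise
p_neg3_given_troll = 0.8  # chance a troll comment reaches -3
p_neg3_given_noise = 0.5  # chance a noise comment reaches -3
p_neg3_given_other = 0.02 # chance an ordinary comment reaches -3

p_other = 1 - p_troll - p_noise
p_neg3 = (p_troll * p_neg3_given_troll
          + p_noise * p_neg3_given_noise
          + p_other * p_neg3_given_other)

# Posteriors via Bayes' theorem: very different answers, hence the danger
# of conflating noise with trolling when setting a penalty.
print(p_troll * p_neg3_given_troll / p_neg3)  # P(troll | at -3) ~= 0.12
print(p_noise * p_neg3_given_noise / p_neg3)  # P(noise | at -3) ~= 0.76
```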
I might go and figure out how to get some data off of the LessWrong commenting system, to try and determine a good indicator of troll-nature. (I don't plan to try to figure out noise-nature. That's the problem that the Internet has faced for the last 15 years; I'm not that hubristic.) That in turn would put some numbers into this discussion. I don't know that arguing over how many genuine comments can be inadvertently caught in a filter is any better than arguing over whether there should be a filter at all, but to my mind it's more constructive.
Replies from: None↑ comment by [deleted] · 2012-08-31T17:18:07.818Z · LW(p) · GW(p)
I might go and figure out how to get some data off of LessWrong commenting system, to try and determine a good indicator for troll-nature.
Master, you have meditated on this for under five minutes, so I wish to ask two things:
- Does not asking about what has the troll-nature bring one closer to the troll-nature?
- If you meet a Socrates on the road does it have the troll-nature?
↑ comment by Luke_A_Somers · 2012-08-31T18:20:01.082Z · LW(p) · GW(p)
- No - I know you aren't serious, but... seriously?
- If you meet a Socrates anywhere it has troll-nature. That's why he got permabanned from the universe. It also has other less irritating natures.
↑ comment by [deleted] · 2012-08-31T18:24:33.138Z · LW(p) · GW(p)
No - I know you aren't serious, but... seriously?
I have often seen trolls trolling by discussing the troll-nature.
Replies from: Luke_A_Somers↑ comment by Luke_A_Somers · 2012-09-04T16:08:06.605Z · LW(p) · GW(p)
Trolls can troll on any topic at hand. Where there are trolls, trolling will often be a topic at hand.
That doesn't make the nature of trolls a trollish topic. You're going to have to do a lot better than a correlation.
↑ comment by shokwave · 2012-09-07T01:53:29.862Z · LW(p) · GW(p)
Asking what has the troll-nature brings one closer to being a noisemaker. Asking what distinguishes troll-nature from noisemaker brings one closer to having the troll-nature.
Notes
Ask not what separates noise from trolling; instead ask for that which makes a thing neither.
comment by CronoDAS · 2012-09-02T21:10:25.005Z · LW(p) · GW(p)
I think we forgot to hold off on proposing solutions.
Replies from: More_Right↑ comment by More_Right · 2014-04-24T06:08:58.891Z · LW(p) · GW(p)
The proposals here exist outside the space of people who will "solve" any problems that they decide are problems. Therefore, they can still follow that advice, and this is simply a discussion area discussing potential problems and their potential solutions. All of which can be ignored.
My earlier comment, to the effect of "I'm more happy with LessWrong's forum than I am unhappy with it, but it still falls far short of an ideally-interactive space," should be construed as meaning that "doing nothing to improve the forum" is definitely a valid option. "If it ain't broke, don't fix it."
I don't view it as either totally broken, or totally optimal. Others have indicated similar sentiments. Likely, improvements will be made when programmers have spare time, and we have no idea when that will be.
Now, if I was aggressively agitating for a solution to something that hadn't been clearly identified as a problem, that might be a little obnoxious. I hope I didn't come off that way.
comment by AlexMennen · 2012-09-01T00:07:17.641Z · LW(p) · GW(p)
Proposed solution: remove the karma penalty and do exactly the same thing we were doing before. That is, if someone is pretty sure that they will not benefit from reading the replies to a particular thread, they don't read them. No disincentives from posting such a reply needed. What is the problem with that system?
Edit: As of this edit, if one more person decides they don't like my comment, then no one can tell me why they don't like my comment without losing 5 karma. One of many reasons the new system is terrible.
comment by Vladimir_Nesov · 2012-08-31T21:38:38.831Z · LW(p) · GW(p)
To address the Big Bad Threads problem specifically (as opposed to other problems), what we need is the ability to close threads in some sense, but not necessarily as a side effect of voting on individual comments.
For example, moderators could be given the power to declare a thread (or a post) a Toll Thread, so that making a comment within that thread would start costing 5 Karma or something like that, irrespective of what you reply to or what the properties of your user account are. This would work like a mild variant of the standard (but not on LW) closed thread feature.
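A minimal sketch of how such a check might compose with the existing reply penalty; the is_toll_thread flag and all other names are invented for illustration:

```python
# Hypothetical sketch of the proposed Toll Thread rule; the flag, the
# attribute names, and the costs are illustrative assumptions only.
TOLL_COST = 5          # flat cost for commenting anywhere in a Toll Thread
REPLY_THRESHOLD = -3   # existing reply-penalty trigger
REPLY_PENALTY = 5

def comment_cost(thread, parent_comment):
    """Karma cost of posting a comment under the proposed combined rules."""
    if thread.is_toll_thread:
        # Moderator-declared Toll Thread: every comment costs the toll,
        # irrespective of what it replies to or who posts it.
        return TOLL_COST
    if parent_comment is not None and parent_comment.score <= REPLY_THRESHOLD:
        return REPLY_PENALTY
    return 0
```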
Replies from: lukeprog, metaphysicist↑ comment by lukeprog · 2012-09-01T22:26:34.514Z · LW(p) · GW(p)
Nesov, you are a particularly active and helpful moderator. I'm less familiar with how much effort is invested by other moderators. I believe you could do this well, but I'm not sure this solution can be scaled, or even run without you (right now).
Replies from: Alicorn, Vladimir_Nesov↑ comment by Alicorn · 2012-09-01T22:49:44.912Z · LW(p) · GW(p)
I'm active (I read literally everything on Less Wrong, or at least skim) but I'm timid. I don't know what I am and am not supposed to be banning/editing, so I confine banning to spam and editing to obvious errors of formatting or spelling/grammar.
In June I asked Eliezer for moderation guidelines, since there has been an uptick in trolling or just timewasting poorly-informed ranters, but he just said that he thought it needed a software fix (the recent controversial one).
Replies from: lukeprog↑ comment by lukeprog · 2012-09-01T23:04:03.670Z · LW(p) · GW(p)
Thanks for your contributions. I scanned this whole thread and am talking to Eliezer about possible solutions. Right now the troll toll isn't enough, but maybe that's because nothing will deter a SuperTroll like Will Newsome.
ETA: I should clarify that I like Will Newsome in person, but on Less Wrong his comments very often seem to be deliberately obscurantist, unhelpful, and misleading.
Replies from: Sly, drethelin, common_law, common_law↑ comment by drethelin · 2012-09-02T04:51:05.212Z · LW(p) · GW(p)
Ban him and ostracize him socially.
Replies from: wedrifid, Viliam_Bur↑ comment by wedrifid · 2012-09-02T04:58:32.096Z · LW(p) · GW(p)
Ban him and ostracize him socially.
You're right. It seems silly to say that nothing, with emphasis, will stop Will when banning him and any obvious sockpuppets hasn't even been tried. (This isn't particularly advocating that course of action, just agreeing that Luke's prediction is absurd.)
↑ comment by Viliam_Bur · 2012-09-02T13:02:45.764Z · LW(p) · GW(p)
"Ostracize" does not work well online. You don't get direct feedback on how many people read what. (Even the downvotes are evidence that someone did read the comment, and expended some of their energy to downvote it -- which supposedly is part of what the trolls want.)
There is no online equivalent of a group turning their backs on someone in ice-cold silence. Just "not answering" is not the same thing... that happens to many normal comments too.
Replies from: drethelin↑ comment by common_law · 2012-09-02T02:32:27.223Z · LW(p) · GW(p)
Newsome a SuperTroll? Do you really think Newsome contributes less, substantively, than, say, you?
↑ comment by common_law · 2012-09-03T21:05:37.313Z · LW(p) · GW(p)
Will Newsome earned his karma, and he is now entitled to spend it as he pleases. Any interference with that right would be dishonorable, a moral breach of contractual obligation. Libeling him as a SuperTroll is scarcely better; posting provocative comments does not make one a troll simply because it's mildly annoying. A malicious or disruptive intent is required, and that's patently absent.
[A few months ago, Will Newsome corrected E.Y.'s definition of "troll"; E.Y. called one Loosemore a troll on account of the latter's being a liar (which he was even less than a troll). Correcting E.Y. turned Will Newsome into something of an overnight authority on the definition of "troll." This is unfortunate, since Will's understanding shows itself a bit defective when it faces sterner tests than Loosemore. Newsome is more trollish than Loosemore, but Newsome is no troll.]
Replies from: Randaly↑ comment by Randaly · 2012-09-04T00:48:04.210Z · LW(p) · GW(p)
I strongly disagree. As far as I am aware, there is no contract between Will Newsome and LessWrong/the SI/FHI/CFAR that states that he is entitled to do whatever he likes. In fact, per community norms, the opposite is true. The claim that he is a SuperTroll seems to be self-evidently true, and is almost certainly not libel, as Will Newsome has done his best to encourage the idea that he is a troll, and possibly even began this view. (Consent is usually considered a defense against libel.) Newsome also seems to have disruptive intent- he's explicitly stated that he's trying to burn his credibility as fast as possible. So, the mods have fairly solid reasons for moving against him- at present rates, it'll take ~27 months for him to burn through the rest of his karma, which is far too long.
↑ comment by Vladimir_Nesov · 2012-09-01T23:05:24.062Z · LW(p) · GW(p)
The distinction between "toll threads" and "closed threads" was an attempt to make the action easier, bear less responsibility and provoke less agitation if applied in controversial cases (it could be un-applied by a moderator as well), so that the button could be more easily given to more people.
Right now the only tool anyone has is the banhammer that either destroys a post with all its comments completely (something I strongly disapprove of being at all possible, but my discussion in the tickets didn't evoke much support) or needs to be applied in a whac-a-mole manner to each new comment, and neither decision should be made lightly. Since there are no milder measures, nothing at all can be done in milder or more controversial cases. I don't believe there is much of a fixable-by-moderation signal-to-noise problem right now except for the occasional big bad threads, so most of the motivation for this tool is to make their inhibition more effective than it currently is. Since big bad threads are rare, you don't need a lot of moderators to address them.
(It's probably not worth the effort to implement it right now, so bringing it up is mostly motivated as being what I see as a better alternative to the punish-all-subcomments-automatically measure Eliezer was suggesting, although I still expect the current punish-direct-replies to suffice on its own.)
↑ comment by metaphysicist · 2012-09-01T22:40:57.126Z · LW(p) · GW(p)
moderators could be given the power
By whom?
Replies from: common_law↑ comment by common_law · 2012-09-02T02:34:30.749Z · LW(p) · GW(p)
Why vote down this simple question? Is it a point of sensitivity--sufficient to drive Nesov to the passive voice? Don't other readers want to know who decides forum policies?
comment by Shmi (shminux) · 2012-08-31T17:55:40.337Z · LW(p) · GW(p)
Warning: a rant follows!
The general incompetence of the replies to the OP is appalling. Fantastically complicated solutions with many potential harmful side effects are offered and defended. My estimate of the general intelligence of the subset of LWers who replied to this post has gone way down. This reminds me of the many pieces of software I have had the misfortune to browse the source code of: half-assed solutions patched over and over to provide some semblance of the desired functionality without breaking into pieces.
For comparison, I have noted a trivial low-risk one-line patch that would fix a potential exploit in the recent (and also easy to implement) anti-troll feature: paying 5 karma to reply to comments downvoted to -3 or lower (patch: only if the author has negative 30-day karma). Can you do cheaper and better? If not, why bother suggesting something else?
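In (pseudo)code, the patched condition might look like the following sketch; the function and attribute names are invented, and "the author" is the author of the downvoted parent comment:

```python
# Sketch of the one-line patch to the reply toll; hypothetical names,
# not the actual LW codebase.
def reply_penalty(parent_score, parent_author_karma_30d,
                  threshold=-3, penalty=5):
    """Charge the toll only when the parent is at -3 or below AND its
    author has negative karma over the last 30 days."""
    if parent_score <= threshold and parent_author_karma_30d < 0:
        return penalty
    return 0
```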
After a long time in the software business, one of the lessons I have learned (thanks, Steve McConnell) is that every new feature can be implemented cheaply or expensively, with very little effect on its utility. Unfortunately, I have not heard of any university teaching design simplification beyond using some Boolean algebra (and even that trivial bit is largely ignored by the programmers, who tend to insert a bunch of ad hoc nested if statements rather than think through the possible outcomes and construct and minimize the relevant CNF or DNF). There is also no emphasis on complexity leading to fragility, and how spending 5 minutes thinking through solutions can save months and years of effort during the maintenance stage.
To sum up: every (software) solution can be simplified without perceptible loss of functionality. Simpler solutions are more reliable and easier to maintain. One ought to spend time upfront attempting such simplifications. Pretend that you are being charged per line of (pseudo)code, per use case to test (10x more) and per bug fixed (10x more still), then see if you can save some money before rushing to design and code (or, in this particular case, before posting it here for others to see).
Replies from: Kawoomba, KPier, palladias, Will_Newsome↑ comment by Kawoomba · 2012-08-31T18:42:16.591Z · LW(p) · GW(p)
Changes to a functioning system that is in use should be done with care akin to pruning a bonsai tree, not by introducing sweeping changes that are then potentially scaled back.
That would only make sense for posters with, say, negative karma in the last month. Otherwise this results in (self-)censoring of controversial comments.
I very much agree with your proposal, it should have been an obvious first step (and if there had been some public discourse in which you would probably have suggested it, it might well have been). Upon unsatisfactory results, it could have been escalated to a more profound change.
How long is it going to take for some forum regulars to pick up a dedicated downvoter with two sockpuppets, strongly impeding the discussion on their new +0 -> -3 comments, at least temporarily? The karma hit is a negligible inconvenience for old-timers, but the temporary obstruction of their conversational flow isn't. Empowering trolls inadvertently, how ironic. Who would have thought there were such unforeseen consequences when non-professionals implement such changes without discussion? Why, I suspect you would have, as someone experienced with software design failure modes.
It is a bit disconcerting that such an a-priori type proposal was implemented not only without considering expertise from LW's very own base of professionals, but without first gathering some evidence on its repercussions as well. From the resident Bayesianists, you'd think that there'd be some more updating on evidence through trials first, e.g. by implementing similar (such as yours) but less impactful changes first.
Concerning your software development paradigm:
With the caveat of not having spent a long time in the software industry, there is an argument for the converse case as well.
While I'm all for using k-CNFs, DNFs, and all sorts of normal forms, penalising lines of code can easily lead to a shorter but much more opaque piece of software, regardless of Doxygen or other documentation. Getting rid of unnecessary conjuncts in a Boolean formula sounds good in theory, but just spelling out each case, e.g. throwing an exception and commenting that this or that case should not happen, can make it much easier for someone else to follow your logic.
A maximally compressed pile of ultra efficient goodness, resembling a quine in terms of comprehensibility, would satisfy your "minimize the code" requirement, yet make the code less accessible, not more so.
But I'm happy to consider your testimony to the contrary. As the saying goes:
All theory, dear friend, is gray, but the golden tree of life springs ever green.
Replies from: shminux
↑ comment by Shmi (shminux) · 2012-08-31T18:58:37.724Z · LW(p) · GW(p)
penalising lines of code can easily lead to a shorter, but much more opaque piece of software,
Yes indeed, hence the weighting:
Pretend that you are being charged per line of (pseudo)code, per use case to test (10x more) and per bug fixed (10x more still)
↑ comment by KPier · 2012-09-02T07:03:25.534Z · LW(p) · GW(p)
My estimate of the general intelligence of the subset of LWers who replied to this post has gone way down.
It seems like it's your estimate of the programming knowledge of the commenters that should go down. Most of the proposed solutions have in common that they sound really simple to implement, but would in fact be complicated - which someone with high general intelligence and rationality, but limited domain-specific knowledge, might not know.
Should people who can't program refrain from suggesting programming fixes? Maybe. But maybe it's worth the time to reply to some of the highly-rated suggestions and explain why they're much harder than they look.
(I agree with your proposed solution to attempt simplifications.)
↑ comment by palladias · 2012-08-31T18:26:35.450Z · LW(p) · GW(p)
There seem to be two functions of this discussion:
- come up with practical solutions
- diagnose the problem
Terrible-to-implement comment tweaks can still spur helpful discussion. A poster may have articulated a problem or an incentive in a way most of us haven't considered yet. Not everyone who has an interesting description of the problem may have that much coding-fu. Better to throw up your diagnosis without worrying too much about the cure, and let everyone critique, counter-suggest, and fix the implementation.
↑ comment by Will_Newsome · 2012-09-01T20:33:59.516Z · LW(p) · GW(p)
My estimate of the general intelligence of the subset of LWers who replied to this post has gone way down.
Why was it so high to begin with? What mistakes do you think you made? (Honest questions.)
Replies from: shminux↑ comment by Shmi (shminux) · 2012-09-02T04:13:10.068Z · LW(p) · GW(p)
I never said it was very high to begin with :) Though the level of discourse here is much higher than on most other online forums I followed, and that threw me off.
comment by JoshuaZ · 2012-08-31T13:35:46.981Z · LW(p) · GW(p)
The proposals I have (not all of which are mutually exclusive) are:
Make comments within highly downvoted subthreads not appear on recent comments. Since the main problem with trolling is drowning out of recent comments, this will solve many of the issues. Moreover, it will discourage continued replies.
Have a separate section of the website where threads can be moved to or have a link to continue to. This section would have its own recent changes section. Moderators could move threads there or make it so that replies went to that section, and would be used for subthreads that are fairly downvoted. This has the advantage of quarantining the worst threads. This is a variation of an old system used at The Panda's Thumb which works well for that website.
Use the -5 penalty system but adjust either the trigger level or the penalty size. It isn't obvious that -3 and -5 are the best values for such a system, if it is a good idea at all. The fact is that -3 isn't that negative as comment scores go, so a score of -3 can be reached without it saying much about a comment's quality. -5 and -5, or -5 and -1, may be better values; the second pair would offer softer discouragement, triggered only by more severely downvoted comments.
Use the penalty system but have karma penalties be restored if the comment it replies to is subsequently voted up more. This may be slightly more technically advanced, but this will help encourage people to speak out when they think a comment is unfairly downvoted.
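A minimal sketch of that fourth proposal, with all names hypothetical (nothing here reflects the actual LW codebase): charge the penalty when the reply is made, and refund it if the parent comment later climbs back above the trigger.

    TRIGGER = -3  # parent score at or below this triggers the penalty
    PENALTY = 5   # karma charged per reply

    def reply_penalty(parent_score):
        """Karma charged at reply time."""
        return PENALTY if parent_score <= TRIGGER else 0

    def settle(parent_score_now, charged):
        """Called whenever the parent's score changes: refund the
        charge if the parent has been voted back above the trigger."""
        return charged if charged and parent_score_now > TRIGGER else 0

    charged = reply_penalty(-4)  # replier loses 5 karma
    refund = settle(1, charged)  # parent recovered: 5 karma restored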
↑ comment by matt · 2012-08-31T18:46:22.389Z · LW(p) · GW(p)
Downvoted for putting more than one suggestion in a single comment.
Punish me for this anti-social act if you must, but as one of the dudes who tries to act after reading these suggestions (and tries hard to discount his own opinion and be guided by the community), this practice makes it much harder for me to judge community support for ideas. Does your comment having a score of 10 suggest 2.5 points per suggestion? ~10 points per suggestion? 15 points each for 3 of your suggestions and -35 for one of them (and which one is the -35)?
Can we please adopt a community norm of atomicity in suggestions?
Replies from: JoshuaZ
↑ comment by Kindly · 2012-08-31T14:36:51.428Z · LW(p) · GW(p)
I think #1 is the way to go here, and the only method that will have any effect in most cases.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2012-09-01T03:39:32.042Z · LW(p) · GW(p)
As long as there is a separate "uncensored" recent comments page where they still show up.
Replies from: Kindly
↑ comment by Douglas_Knight · 2012-08-31T16:58:24.999Z · LW(p) · GW(p)
Since the main problem with trolling is drowning out of recent comments
I agree that if I try to extract coherent beliefs from Eliezer's claim, particularly the claim that people are fleeing the invisible threads, this is what I must conclude.
But I'm not sure I should try to extract coherent beliefs from Eliezer's claims. Do you directly claim this? Do you agree with his claim that trolling has increased in recent months? Do you think invisible comments are a good proxy for trolls?
I stopped looking at recent comments long ago for reasons of volume. I think it keeps large threads going. But I think large threads tend to be equally worthless, regardless of average karma or karma of the initial comment.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2012-09-01T00:47:08.688Z · LW(p) · GW(p)
I guess that I agree with Eliezer that the signal to noise ratio is not as good as it used to be, at least not as good as when I first joined here. But I'm not that highly active a user, and my karma is only around 9000, so my impression may not be that important in this context.
Replies from: CronoDAS
comment by wedrifid · 2012-09-01T08:10:35.842Z · LW(p) · GW(p)
The recent implementation of a -5 karma penalty for replying to comments that are at -3 or below has clearly met with some disagreement and controversy. See http://lesswrong.com/r/discussion/lw/eb9/meta_karma_for_last_30_days/7aon .
Be sure to distinguish between the controversy surrounding Eliezer's provocative comments and the policy as he declares he wishes it, and actual disagreement with the implementation as it currently stands. I, for example, am tentatively in favor of the current implementation, but not in favor of the policy as he intends it to be implemented.
comment by TraderJoe · 2012-11-05T10:45:43.074Z · LW(p) · GW(p)
Surely this makes it very tough for a non-trolling user to figure out what was wrong with his post? Few people are going to explain it to him. You need to be familiar with LW jargon before you can expect to write a technical comment and not be downvoted for it, so this would very easily deter a lot of new users. "These guys all downvoted my post and nobody will explain it to me. Jerks. I'll stick to rationalwiki."
comment by Pentashagon · 2012-08-31T18:08:55.599Z · LW(p) · GW(p)
As it's currently implemented it appears that replies to the -3 comments still start at a rating of 0. Why not match the -5 karma and set the new comment's rating to -5 as well? This would be a strong disincentive to others replying to the new 0-rated comment and extending the thread at no cost.
Replies from: prase, Viliam_Bur
↑ comment by prase · 2012-08-31T19:29:18.314Z · LW(p) · GW(p)
This would be an improvement, since one's karma would then still, in principle, be obtainable by summing the karma of all one's comments and posts. But then, why have the arbitrary numbers -3 and -5? Wouldn't it be better if a reply to a negatively rated comment started at the same karma as the parent comment? Smooth rewarding schemes usually work better than those with thresholds and steps.
(I still don't support karma penalties for replies in general.)
Replies from: Pentashagon
↑ comment by Pentashagon · 2012-09-01T00:36:47.274Z · LW(p) · GW(p)
(I still don't support karma penalties for replies in general.)
Reading troll comments has negative utility. Replying to a troll means causing that loss of utility to each reader who wants to read the reply (times the probability that they read the troll when reading the reply). Perhaps giving the reply the same rating as the troll would be a more equitable mapping of utility cost to karma.
Replies from: common_law, prase, More_Right
↑ comment by common_law · 2012-09-02T04:07:40.756Z · LW(p) · GW(p)
Reading troll comments has negative utility. Replying to a troll means causing that loss of utility to each reader who wants to read the reply (times the probability that they read the troll when reading the reply)
That's exactly the kind of consideration that should lead people to downvote responses to "trolls." If you think someone is stupidly "feeding trolls," you should downvote them.
It seems that E.Y. is miffed that readers aren't punishing troll feeders enough and that he's personally limited to a single downvote. As an end-run around this sad limitation, he seeks to multiply his downvote by 6 by instituting an automatic penalty for this class of downvotable comment.
Nothing is so outrageously bad about troll feeding that it can't be controlled by the normal means of karma allocation. The bottom line is that readers simply don't mind troll feeding as much as E.Y. minds it; otherwise they'd penalize it more by downvotes. E.Y. is trying to become more of an autocrat.
Replies from: army1987, Pentashagon
↑ comment by A1987dM (army1987) · 2012-09-02T08:43:57.414Z · LW(p) · GW(p)
Thank you. The last paragraph perfectly articulates why I disagree with this feature.
↑ comment by Pentashagon · 2012-09-02T06:33:44.639Z · LW(p) · GW(p)
It sounds like the real fix is a user-defined threshold. Anyone who only likes the highest rated comments can browse at +3 or whatever, and anyone who isn't bothered by negatively rated comments can browse at a lower threshold.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2012-09-02T08:42:19.413Z · LW(p) · GW(p)
Isn't it already there?
Replies from: Alicorn, Pentashagon
↑ comment by Pentashagon · 2012-09-03T08:13:25.020Z · LW(p) · GW(p)
Thanks, I had only looked on the article's page for something like the "sort by" dropdown, but found the setting in the preferences.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2012-09-03T22:26:26.090Z · LW(p) · GW(p)
(Now, if it also hid replies to downvoted comments in the Recent Comments page, it'd fully solve the ‘problem’, IMO.)
↑ comment by prase · 2012-09-01T15:01:08.850Z · LW(p) · GW(p)
As your comment stands now, you are just one point above the reply penalty threshold. You aren't a troll. I think this illustrates well that the problem with reply penalties isn't particularly strongly related to trolling. Since the penalty was introduced I have already twice refrained from answering a fairly reasonable comment because the comment had less than -3 karma. I have seen no trollish comments for weeks.
Replies from: More_Right, Kawoomba
↑ comment by More_Right · 2014-04-24T06:34:10.219Z · LW(p) · GW(p)
Also, the thresholds for "simple majoritarianism" usually need to be much higher in order to obtain intelligent results. No threshold should be reachable by just three people. Three people could be goons who are being paid to interfere with the LW forum. That means that if other people are uninterested, or those goons are "johnny on the spot" (the one likely characteristic of the real-life agents provocateurs I've encountered), then legitimate karma is lost.
Of course, karma itself has been abused on this site (and all other karma-using sites), in my opinion. I really like the intuitions of Kevin Kelly, since they're highly emergence-optimizing, and often genius when it comes to forum design. :) Too bad too few programmers have implemented his well-spring of ideas!
↑ comment by More_Right · 2014-04-24T06:26:26.449Z · LW(p) · GW(p)
Intelligently replying to trolls provides useful "negative intelligence." If someone has a witty counter-communication to a troll, I'd like to read it, the same way George Carlin slows down for auto wrecks. Of course, I'm kind of a procrastinator.
I know: A popup window could appear that asks [minutes spent replying to this comment] × [hourly rate you charge for work] × (1/60) = "[$###.##] is the money you lost telling us how to put down a troll. We know faster ways: don't feed them."
Of course, any response to a troll MIGHT mean that a respected member of the community disagrees with the "valueless troll comment" assessment. That's a great characteristic to have: one who selflessly provides protection against the LW community becoming an insular backwater of inbred thinking.
Our ideas need cross pollination! After all, "Humans are the sex organs of technology." -Kevin Kelly
↑ comment by Viliam_Bur · 2012-09-02T12:52:28.332Z · LW(p) · GW(p)
In some situations it may be worth replying to a comment with negative value.
Imagine a comment which was made in good faith, but just happens to be incredibly stupid, or is heavily downvoted for some other reason.
Now imagine a reply that contains just a word or two and a hyperlink to an article which explains why the parent comment was wrong. Does this reply deserve an automatic downvote?
Generally: It is a bad idea to think about one specific example and use it to create a rule for all examples. For instance, not all negative-karma comments are trolling; yet we create a rule for all negative-karma comments based on our emotional reaction to trolling.
comment by Epiphany · 2012-09-05T04:55:57.628Z · LW(p) · GW(p)
Solution: Ban their IP addresses. This actually works, and I'll tell you why: not because they can't get new ones, but because they can't get new ones indefinitely. If you've ever sought an unsecured proxy (a key way of obscuring your IP address), you'll know that it's tough to find a good proxy, they're slow, and they frequently leak your IP address regardless. Even programs like Tor only have so many IP addresses. To make it worse (for them), it's no fun to use proxies that are far away; they're slow as all get out. This technique worked on spammers on a forum that was growing fast. It took a while to collect all the IP addresses they were using, but it was an extremely effective method of stopping spam. Banning the account will make this more effective. Doing both will make it a real hassle for them to post, and after a while it will be hard for them to find good proxies.
A solution also requires that volunteers from the forum / moderators can ban the trolls.
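In sketch form, the mechanic is simple (hypothetical code; real forum software would persist the list and hook it into request handling): keep a ban list of addresses and networks, and check each poster against it.

    import ipaddress

    banned = set()  # collected addresses and networks

    def ban(spec):
        # Ban a single address ("203.0.113.7") or a range ("198.51.100.0/24").
        banned.add(ipaddress.ip_network(spec, strict=False))

    def is_banned(addr):
        ip = ipaddress.ip_address(addr)
        return any(ip in net for net in banned)

    ban("203.0.113.7")        # a troll's home address
    ban("198.51.100.0/24")    # a leaky proxy range they rotated through
    assert is_banned("198.51.100.42")

The slow accumulation the comment describes is just the ban set growing until the troll runs out of usable proxies.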
comment by moridinamael · 2012-08-31T15:14:53.194Z · LW(p) · GW(p)
There should be a different discussion forum which is readable by all but can only be posted in by those with over, say, 1000 karma. This solution seems "obvious", as we already have a robust karma system, and it's very difficult to acquire that much karma without actually being a good poster (I don't have that much and I've been here for over a year).
This system could lead to the use of the "open" discussion forum as a kind of training grounds where you prove your individual signal-to-noise ratio and "rationality" is high enough to deserve the right to post in the "exclusive" discussions.
I can already foresee a number of objections to this idea, so I will try to forestall some:
"We already have this, it's called main." Anybody can post in the comments section of a Main article. And the threshold to post to Main is too low in the first place. Have you seen some of the things that make it on there?
"We'll get a reputation as being snooty and intellectually insular!" We already have that reputation, and in any case, it is better to be thought arrogant while having high-quality discourse than it is to be thought humble while having mediocre discussions.
"This would be repellent or confusing to newcomers." I'd argue it's really no worse than the current two-forum system with the current karma system. And I thought part of the point was to put up a filter so we bring in the right newcomers?
(An aside: It's pretty hard to write up a suggestion like this without the tone coming out arrogant and self-important sounding, so sorry about that.)
Replies from: None, William_Quixote, Emile, Randaly
↑ comment by [deleted] · 2012-08-31T15:42:35.366Z · LW(p) · GW(p)
There should be a different discussion forum which is readable to all but can only be posted in by those with over, say 1000 karma.
Actually, I'd find restrictions on who can or can't vote on the comments to be a more interesting option. What would a forum look like if only those with over 1000 karma on LW could vote?
Replies from: Kaj_Sotala, falenas108, moridinamael, Emile, DuncanF, DanArmak, thomblake
↑ comment by Kaj_Sotala · 2012-09-03T08:24:38.075Z · LW(p) · GW(p)
The Stack Exchange sites provide new users with an increasing amount of privileges based on their karma (example). In principle, something similar could be implemented here, with separate privileges such as (in no particular order):
- Vote comments up
- Vote comments down
- Vote posts up
- Vote posts down
- Create Discussion posts
- Create Main posts
- Create meetups
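Schematically, such a ladder might look like this; the thresholds and names below are purely illustrative placeholders (Stack Exchange's actual numbers differ, and none of this is in the LW codebase):

    # Karma needed for each privilege; all numbers are placeholders.
    PRIVILEGES = {
        "upvote_comments":     15,
        "upvote_posts":        15,
        "create_discussion":   50,
        "downvote_comments":  125,
        "downvote_posts":     125,
        "create_main":        200,
        "create_meetup":        0,
    }

    def can(user_karma, privilege):
        return user_karma >= PRIVILEGES[privilege]

    assert can(130, "downvote_comments")
    assert not can(40, "create_main")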
↑ comment by [deleted] · 2012-09-03T08:36:47.757Z · LW(p) · GW(p)
Meetup creation doesn't seem to need a barrier. Perhaps a useful privilege that could come with enough karma would be to allow users to edit tags on articles. Separating voting on comments from voting on Main seems reasonable, but I don't quite see why separating downvoting from upvoting would do any good.
↑ comment by falenas108 · 2012-08-31T17:42:10.148Z · LW(p) · GW(p)
This would make it very difficult for people who aren't already over 1000 to get there, because there would be so much less upvoting happening.
Replies from: None
↑ comment by [deleted] · 2012-08-31T17:44:05.165Z · LW(p) · GW(p)
I didn't originally propose this for LW in general, but for a different forum or section. People can earn their LW karma elsewhere. But let us, for the sake of this exchange, suppose we make this a general rule. I actually like it much more than what I had in mind at first!
It should be emphasised that the reverse of what you describe is constantly happening: it is easier and easier to amass 1000 karma as LessWrong grows. Comparing older to newer articles shows clear evidence of ongoing karma inflation.
There aren't that few people with karma over 1000; I'd guesstimate there are at least 100 of them, many of whom are currently active. But again, making it harder to get the 1000 karma needed to vote might be a good thing. A key feature of the Eternal September problem is that when newcomers to a community interact mostly with other new members, old norms have a hard time taking root. And since users take the karma mechanism, especially negative votes, so seriously, it is a very strong kind of interaction. Putting the karma mechanism in the hands of proven members should produce better poster quality. It somewhat alleviates the problems of rapid growth.
It also further subsidizes the creation of new articles. Recall that karma from writing a Main article is boosted 10-fold.
Replies from: drethelin, falenas108
↑ comment by drethelin · 2012-08-31T20:05:08.380Z · LW(p) · GW(p)
It especially controls how easy it is to post to Main. 20 karma from 1000+ users is worth way more than 20 random karma.
Replies from: None
↑ comment by [deleted] · 2012-09-01T07:13:54.541Z · LW(p) · GW(p)
Getting about 10 karma from introductory posts in the Welcome to LW threads wouldn't be hard. Also people can publish a draft in comment form or just ask for karma in order to write a particular article.
What do you think of the idea in general, for some other karma limit? Perhaps 500 which is probably close to what the average LWer has.
Replies from: drethelin
↑ comment by drethelin · 2012-09-01T07:24:37.816Z · LW(p) · GW(p)
I like it but then again I have around a thousand karma so it wouldn't impact me very hard. On the other hand, I don't think it does a lot of work to actually fix the Monkeymind situation that EY and company seem to be so distressed by.
Replies from: Eugine_Nier
↑ comment by Eugine_Nier · 2012-09-02T00:26:53.983Z · LW(p) · GW(p)
I'm not at all convinced the Monkeymind situation is nearly as serious a problem as EY and company seem to think.
↑ comment by falenas108 · 2012-08-31T17:53:59.126Z · LW(p) · GW(p)
Ah, okay. Never mind then, sounds like an interesting idea.
Replies from: None
↑ comment by [deleted] · 2012-08-31T18:03:11.475Z · LW(p) · GW(p)
I hope this says what I wanted it to say:
Right, but recall that I didn't propose this for LW in general, but for a different forum or section. People can earn their LW karma elsewhere. But let us, for the sake of this exchange, suppose we make this a general rule.
Your interpretation was an interesting question in itself. So please offer criticism of this modified idea!
Replies from: falenas108
↑ comment by falenas108 · 2012-08-31T18:25:46.842Z · LW(p) · GW(p)
Okay, in regards to the misinterpretation:
The reverse is happening precisely because there are so many new users who are voting. I'd say that the way LW started out could be used as an estimate of what that would look like. It was very rare for a comment to reach as many as 5 upvotes, and if you see an old comment that has more than that, most likely it had help from someone more recently upvoting it.
Obviously, it would not be entirely the same, and I would place more weight on the up and downvotes being more accurate if this were put into place now, but it would make it much more difficult to get to that point.
Replies from: None
↑ comment by [deleted] · 2012-08-31T18:34:36.305Z · LW(p) · GW(p)
The reverse is happening precisely because there are so many new users who are voting. I'd say that the way LW started out could be used as an estimate of what that would look like. It was very rare for a comment to reach as many as 5 upvotes, and if you see an old comment that has more than that, most likely it had help from someone more recently upvoting it.
I agree that LW in, say, 2010 seems an OK proxy for what it would be like, with one key difference: posting Main articles is much more karma-rewarding than it was back then. Articles did get over 10 or 20 karma even back then.
We should remember that we don't really care how many of the lurkers become posters. Growing the number of users is not a goal in itself, though I think for some communities it becomes a lost purpose. What we actually care about is having as much high quality content that has as many readers as possible.
I would argue the median high quality comment is already made by a 1000+ user. In any case the limit is something we can easily change based on experience and isn't something that should be set without at least first seeing a graph of karma distribution among users.
Replies from: Pentashagon
↑ comment by Pentashagon · 2012-09-03T09:16:32.294Z · LW(p) · GW(p)
What we actually care about is having as much high quality content that has as many readers as possible.
I would argue the median high quality comment is already made by a 1000+ user. In any case the limit is something we can easily change based on experience and isn't something that should be set without at least first seeing a graph of karma distribution among users.
There's probably a corollary to Löb's theorem that says a community of rationalists can't add new members to the community and guarantee that it remains a rational community indefinitely. Karma from ratings is probably an especially poor way to indicate a judgement of rationality because it's also used to signal interest in humor (to the point that slashdot doesn't even grant karma for Funny moderations), eloquence, storytelling, and other non-rational things. Any karma-increasing behavior will be reinforced and gain even more karma, and the most efficient ways of obtaining karma will prosper contrary to the goal of creating high quality content. Does every user with more than 1000 karma understand that concept sufficiently to never allow a user who does not understand it to reach 1000 karma?
To be honest I didn't fully grasp the concept until just now. I was ready to start talking economics with karma as the currency until I realized that economics can not solve the problem.
↑ comment by moridinamael · 2012-08-31T18:11:12.136Z · LW(p) · GW(p)
I agree. This idea is better than my originally proposed idea. Easier to implement too, and with fewer drawbacks.
↑ comment by Emile · 2012-08-31T16:52:53.050Z · LW(p) · GW(p)
This seems like one of the best ideas in this thread to me. It's a simple rule (low drama, low meta), and is a bit like a distributed sponsorship system (where instead of needing to be sponsored by one member, you get partial sponsorship by several).
↑ comment by DuncanF · 2012-11-04T07:44:55.287Z · LW(p) · GW(p)
Hmmm. My unease with this idea would be entirely resolved if the upvotes were cached until the user reached 1000 karma rather than merely prohibited/lost.
Consider EY's article on how we fail to co-operate; I'd like to be able to stand up and say "yes, more of this please". I don't mind at all if the effect of that upvoting is delayed, but if I reach 1000 karma I don't expect to find the energy to go back over all the old threads to upvote those I liked in the past; so in that world my expression of support will be forever missing.
That said, something really is necessary - on more recent posts the comments have had such a disheartening effect that I was beginning to decide that I should only read articles.
Replies from: None
↑ comment by [deleted] · 2012-11-04T08:00:05.464Z · LW(p) · GW(p)
The thing is, your early upvotes and downvotes are probably different from your later ones.
Replies from: DuncanF
↑ comment by DanArmak · 2012-08-31T18:51:07.955Z · LW(p) · GW(p)
Edited: I got the wrong impression from reading too quickly. Corrected comment:
If needed we can choose a different level than 1000 karma, and change it over time in response to experience, so it's a flexible system.
However, I'm not certain the idea itself is sound. I don't have the feeling that mutual upvoting by new users is a real problem that needs solving. Can you give links to example comments where you think the proposed rule would have helped?
↑ comment by thomblake · 2012-09-21T18:57:01.911Z · LW(p) · GW(p)
I feel like if implemented generally, this would punish lurkers who have presumably been contributing to voting patterns for quite some time.
Replies from: wedrifid
↑ comment by wedrifid · 2012-09-22T10:18:21.093Z · LW(p) · GW(p)
I feel like if implemented generally, this would punish lurkers who have presumably been contributing to voting patterns for quite some time.
It would limit or remove the voting capability of lurkers. I'm not sure this is a bad thing (even though some of the people who do not comment probably do have good judgement). Either way, "punishment" isn't the right word.
↑ comment by William_Quixote · 2012-08-31T18:09:42.237Z · LW(p) · GW(p)
On prior forums I have been on, attempts to split into a some-posters forum and an all-posters forum have ended badly.
When there are enough high-class posters, everything goes into the high-class forum and the open forum collapses, leaving no worthwhile "in" for new users. When there are too few high-class users, everyone double-posts to both forums in order to get discussion, and you wind up with a functional one-forum system, except with lots of links, more burden, and more top-level menus.
I have not seen an open / closed forum system with exactly the goldilocks number of high class users to maintain stable equilibria in both forums.
↑ comment by Randaly · 2012-08-31T19:29:38.672Z · LW(p) · GW(p)
Alternately: a (very low) karma requirement for comments and posts to the discussion subreddit (~20?), a somewhat higher karma requirement for comments and posts to Main (~200?), and a newbie subreddit for new posters to post in. Regular newbie forum topics would be welcome threads and stupid-question threads; anybody could post in the newbie forum, so there would be responses from LW veterans, and anybody would still be able to ask Stupid Questions. (Meetup threads possibly should go there too.) (Depending on how many other topics there are, it might be worthwhile for every introduction, meetup, and question to get its own thread there.)
comment by beoShaffer · 2012-08-31T19:08:43.277Z · LW(p) · GW(p)
I think we need a better way to separate true trolls (an admittedly loose category) from people who can be reasoned with and/or raise interesting points but are being downvoted for other reasons (like poor grammar/writing). Once we have this, we need to convince people to stop feeding the trolls. One way that I have seen proposed is tagged karma (e.g. +1 insightful, -1 trolling). Additionally, sockpuppets haven't been a major problem here, but given the role they tend to play in website decline, we should have a strategy for dealing with them in advance.
comment by dbaupp · 2012-08-31T13:46:53.945Z · LW(p) · GW(p)
Two more variations on the penalty system:
Have the penalty dependent on history, e.g. replies to a comment with -3 score are penalised only if the original commenter has negative karma over the last 30 days (or maybe if the commenter has total karma less than 10 or 50 etc.). (also suggested by shminux)
Use comment score to compute the penalty, so a comment with -2 only takes 2 karma to reply to, while one with -10 takes 10. (Obviously some other proportionality factor could be used, or even a different relationship between comment karma and reply penalty.) (ETA: Just thinking about it, this would encourage people to vote up before replying, so the penalty computation should probably ignore the reply author's vote.)
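A minimal sketch of that second variation (hypothetical names, assuming the vote totals are available): the penalty tracks the parent's score, computed with the would-be replier's own vote backed out so that upvoting-before-replying gains nothing.

    def reply_penalty(parent_score, repliers_vote=0):
        # Penalty equals the parent's negative score, ignoring the
        # replier's own vote; non-negative scores cost nothing.
        effective = parent_score - repliers_vote
        return max(0, -effective)

    assert reply_penalty(-2) == 2
    assert reply_penalty(-10) == 10
    # Upvoting a -2 parent first doesn't help: the vote is backed out.
    assert reply_penalty(-1, repliers_vote=1) == 2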
↑ comment by Caspian · 2012-09-01T15:21:52.825Z · LW(p) · GW(p)
I like the proportionality idea but still want to be able to express a vote that says "I want less posts like this" without also saying "I don't want replies to this". My proposal is that replies to comments at -5 or better are free, and replies to other comments start at a score 5 better than the parent.
comment by Eliezer Yudkowsky (Eliezer_Yudkowsky) · 2012-09-01T22:44:48.583Z · LW(p) · GW(p)
Please keep your suggestions programmatically very simple. There are all sorts of bright ideas we could be trying, but the actual strong filter by which only a very few are implemented is that programming resources for LW are very scarce and very expensive.
(This filter is so strong that it's the main reason why discussion of potential LW features didn't in-advance-to-me seem very publicky - most suggestions are too complicated, and the critical discussion is the one where SIAI/CFAR decides what we can actually afford to pay for. It hadn't occurred to me that anyone would dislike this particular measure, and I'll try to update more in that direction in the future.)
Replies from: common_law
↑ comment by common_law · 2012-09-02T03:36:29.410Z · LW(p) · GW(p)
why discussion of potential LW features didn't in-advance-to-me seem very publicky
Do we really need another layer of verbal obfuscation, particularly of the cutesy variety?
comment by duckduckMOO · 2012-09-01T02:26:28.701Z · LW(p) · GW(p)
can't you just not read the replies to downvoted comments? How is it hurting anybody when someone replies to a comment with a score at or below -3? I don't see a reason to disincentivise it.
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2012-09-01T03:15:39.417Z · LW(p) · GW(p)
Many people use the recent comments to see what is being discussed. So off-topic comments or replies to trolls that show up there make it more difficult to use efficiently.
Replies from: Xachariah
↑ comment by Xachariah · 2012-09-01T03:35:22.493Z · LW(p) · GW(p)
If the problem is with spam in the 'recent comments' sidebar, then it seems like we should fix that. I would be on board with a rule that posts in hidden sub-threads don't show up on the 'recent comments' sidebar. If we can remove posts from the sidebar, then perhaps posts that drop to -3 could be removed from the 'recent comments' sidebar as well.
Replies from: MugaSofer
comment by MileyCyrus · 2012-08-31T15:09:41.047Z · LW(p) · GW(p)
If you want to nuke trolling, use the Metafilter strategy: new accounts have to pay $5 (once). Troll too much, lose your account and pay $5 for a new one. Hurts a lot more than downvotes.
This will deter some (a lot?) of non-trolls from making new accounts. It will slow community growth. On the other hand, it will tighten the community and align interests. Casual users don't contribute to Less Wrong's mission: we need more FAI philanthropist/activists. Requiring a small donation will make it easier for casual users to make the leap to FAI philanthropist/activists, even if it makes it harder for lurkers to become casual users. And it will stop the trolling.
Replies from: Vaniver, None, CarlShulman, prase, David_Gerard, JGWeissman, CronoDAS, palladias, army1987, drethelin
↑ comment by Vaniver · 2012-08-31T15:35:23.501Z · LW(p) · GW(p)
If you want to nuke trolling, use the Metafilter strategy: new accounts have to pay $5 (once).
I don't know if I would have made my account here if I had to pay $5 to do so. I would pay $5 now to remain a member of the community- but I've already sunk a lot of time and energy into it. I mean, $5 is less cost to me than writing a new post for main!
I am deeply reluctant to endorse any strategy that might have turned me away as a newcomer.
Replies from: JGWeissman, MileyCyrus
↑ comment by JGWeissman · 2012-08-31T15:50:45.084Z · LW(p) · GW(p)
What if you had to associate your account with a mobile phone number, by getting an activation code by text message? It still has the effect of requiring some real resource to make an account, but the first one is effectively free. There may be some concern about your number being sold to scammers.
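The verification flow itself is straightforward; here is a sketch with the actual SMS delivery left as a caller-supplied stub (everything below is hypothetical):

    import secrets

    pending = {}  # phone number -> code awaiting confirmation

    def start_verification(phone, send_sms):
        code = "%06d" % secrets.randbelow(10**6)  # six-digit one-time code
        pending[phone] = code
        send_sms(phone, "Your activation code: " + code)

    def confirm(phone, entered):
        ok = pending.get(phone) == entered
        if ok:
            del pending[phone]  # one number, one account
        return ok

The real cost is the delivery and the bookkeeping of which numbers have already been used, not the code itself.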
Replies from: DanArmak, Emile, Vaniver, Kindly, David_Gerard, MileyCyrus
↑ comment by Vaniver · 2012-08-31T15:55:43.189Z · LW(p) · GW(p)
Hard to say. So far, I've only given out my phone number to online services like gmail (woo 2 factor authentication!) or banks, but that's because my email and bank accounts are more powerful than my phone number and because very few services ask for it. I think there's a chance I wouldn't give out my phone number, and I can't clearly feel whether that chance is larger or smaller than my reluctance to pay $5. (Modeling myself from over a year ago is tough.)
This also runs into the trouble that instead of getting resources from users, you're spending them on users: texting activation codes is cheap but not free.
↑ comment by David_Gerard · 2012-08-31T17:15:17.522Z · LW(p) · GW(p)
Do you know how many offers of free SIMs I get here in the UK? Really quite a lot. Phone numbers are as easy as email accounts.
Replies from: ciphergoth, army1987
↑ comment by Paul Crowley (ciphergoth) · 2012-08-31T20:56:05.359Z · LW(p) · GW(p)
Err, really? I'd like to make some sort of bet on this: how many phone numbers you can receive texts at versus how many email addresses I can receive mail at, by some deadline. Interested? You wouldn't have to actually receive on them all, of course; we'll both use sampling to check.
Replies from: David_Gerard
↑ comment by David_Gerard · 2012-08-31T21:45:16.289Z · LW(p) · GW(p)
You are, of course, correct. There'd be a bit of a delay; I was thinking of different email providers, not creating lots on one domain. And SIMs are sorta slow to turn over. But accumulating a pile of phone numbers for trolling would not be hard.
Replies from: ciphergoth
↑ comment by Paul Crowley (ciphergoth) · 2012-09-01T06:55:04.818Z · LW(p) · GW(p)
"A pile", sure, but not millions.
The "different email providers" thing is an interesting caveat, but how are you proposing to make use of that caveat in software? It's not that it's impossible on the face of it, but any software that wanted to make use of it would AFAICT have to have a painstakingly hand-crafted database of domain rules, so that you accept lots of gmail.com addresses but not lots of ciphergoth.org addresses.
↑ comment by A1987dM (army1987) · 2012-09-02T01:00:47.013Z · LW(p) · GW(p)
It's not like that in all countries. In Italy (unless the law has recently changed) you have to provide an identity document in order to activate a new SIM.
↑ comment by MileyCyrus · 2012-08-31T16:34:48.519Z · LW(p) · GW(p)
An even lower barrier would be 100 captchas. That would be accessible to almost everyone, and annoying to do repeatedly. Being a lower barrier, though, means it deters fewer trolls and doesn't tighten the community as much.
Replies from: DanArmak
↑ comment by DanArmak · 2012-08-31T19:02:32.372Z · LW(p) · GW(p)
I can't even solve single captchas and need to retry many times. You'd be seriously disadvantaging some people.
Replies from: army1987
↑ comment by A1987dM (army1987) · 2012-09-02T01:05:45.196Z · LW(p) · GW(p)
So I'm not the only one. (Is it my impression or did they use to be much easier until not long ago?)
↑ comment by MileyCyrus · 2012-08-31T15:40:07.099Z · LW(p) · GW(p)
Have you donated any money to the SIAI? Sorry for the personal question, but you did post a personal anecdote.
Replies from: Vaniver
↑ comment by [deleted] · 2012-08-31T16:15:44.278Z · LW(p) · GW(p)
An unintended side-effect: readers without credit/debit cards may find it harder to join the site. This disproportionately affects younger people, a demographic that may be more open to LW ideas.
Another unintended side-effect is that it may increase phyg pattern-matching. Now new recruits have to pay to join the site, and surely that money is being secretly funneled into EY's bank account.
That said, I think that on balance this is a good policy proposal. I also think that the similar proposal using phone verification is plausible, and doesn't run into the above two problems.
Replies from: Kindly
↑ comment by Kindly · 2012-08-31T16:58:19.771Z · LW(p) · GW(p)
Heck, there's no pattern-matching about it. It will increase phyg.
Replies from: drethelin
↑ comment by CarlShulman · 2012-09-01T01:36:53.563Z · LW(p) · GW(p)
I don't think anyone at SI agrees with you about Less Wrong's mission. The site is supposed to be about rationality. There is hope (and track record) of the Less Wrong rationality community having helpful spinoffs for SI's mission, but those benefits depend on it having its own independent life and mission. An open forum on rationality and a closed board for donors to a charity aren't close substitutes for one another.
↑ comment by prase · 2012-08-31T18:32:46.069Z · LW(p) · GW(p)
we need more FAI philanthropist/activists
Who is "we"?
I think the percentage of "casual" users who participate on this site because they enjoy intelligent conversations on rationality-related topics while having no FAI agenda is non-negligible. I suspect that reinforcing the idea of equality between LW and FAI activism will make many of them leave. It may be a net negative even if LW's mission is FAI activism as there are positive externalities of greater diversity of both discussion topics and participant opinions (less boredom, more new ideas, better critical scrutiny of ideas, less danger of community evaporative cooling, greater ability to attract new readers...)
Also, I don't like the idea of LW's mission being FAI activism. The header still reads "A community blog devoted to refining the art of human rationality", and I'd appreciate being able to continue believing that description. Of course I realise that the owners of the site are FAI enthusiasts, but that's not true of the community as a whole. LW is a great rationality blog even without all its FAI/philanthropy stuff, not only for the texts already written, but also for the productive debating standards used here and the many intelligent people around. I would regret having to leave, which I would do if LW turned into a solely FAI-activist webpage.
Replies from: MileyCyrus
↑ comment by MileyCyrus · 2012-09-01T00:01:16.512Z · LW(p) · GW(p)
I think the percentage of "casual" users who participate on this site because they enjoy intelligent conversations on rationality-related topics while having no FAI agenda is non-negligible.
I think they're a majority, and that's the problem. There's a social norm that tolerates doing nothing about the most important problem humanity has ever faced.
It may be a net negative even if LW's mission is FAI activism as there are positive externalities of greater diversity of both discussion topics and participant opinions (less boredom, more new ideas, better critical scrutiny of ideas, less danger of community evaporative cooling, greater ability to attract new readers...)
Charging a fee to comment is not the same as banning non-FAI topics. I agree that discussion of other topics can provide instrumental value to the SIAI.
↑ comment by David_Gerard · 2012-08-31T19:50:18.641Z · LW(p) · GW(p)
Casual users don't contribute to Less Wrong's mission: we need more FAI philanthropist/activists.
The tagline is still "A community blog devoted to refining the art of human rationality". If you want FAI and philanthropy, you should I suspect be asking for those specifically up front.
Replies from: MileyCyrus
↑ comment by MileyCyrus · 2012-08-31T23:31:01.288Z · LW(p) · GW(p)
The tagline is still "A community blog devoted to refining the art of human rationality". If you want FAI and philanthropy, you should I suspect be asking for those specifically up front.
The rationality discussion is a loss-leader, which brings smart, open-minded people into the shop. FAI activism is the high margin item LW needs to sell to remain profitable.
Replies from: David_Gerard, JoshuaZ
↑ comment by David_Gerard · 2012-09-01T02:46:52.486Z · LW(p) · GW(p)
Right, but a tagline that knowingly omits important information about what you see as the actual mission will fairly obviously lead to (a) your time being wasted (b) their time being wasted. (And I'm not convinced a little logo to the side counts.) When you think the people who actually believe your tagline need to be made to go away, you may be doing something wrong.
↑ comment by JoshuaZ · 2012-09-02T01:01:15.373Z · LW(p) · GW(p)
The rationality discussion is a loss-leader, which brings smart, open-minded people into the shop. FAI activism is the high margin item LW needs to sell to remain profitable.
If that's the case, then LW is failing badly. There are a lot of people here like me who have been convinced by LW to be much more worried about existential risk in general, but are not at all convinced that AI is a major segment of existential risk, and moreover, even granting that, aren't convinced that the solution is some notion of Friendliness in any useful sense. Moreover, this sort of phrasing makes the ideas about FAI sound dogmatic in a very worrying way. The Litany of Tarski seems relevant here: I want to believe that AGI is a likely existential risk threat if and only if AGI is a likely existential risk threat. If LW attracts or creates a lot of good rationalists and they find reasons why we should focus more on some other existential risk problem, that's a good thing.
↑ comment by JGWeissman · 2012-08-31T16:04:36.138Z · LW(p) · GW(p)
If you want to nuke trolling, use the Metafilter strategy: new accounts have to pay $5 (once). Troll too much, lose your account and pay $5 for a new one. Hurts a lot more than downvotes.
It's a good idea. Some variations, like associating accounts with mobile phone numbers, may slow good growth less. Maybe it would help to have multiple options to signal being a legitimate new user.
Casual users don't contribute to Less Wrong's mission: we need more FAI philanthropist/activists.
I would like to see more x-risk philanthropists/activists, but I don't want to make that a requirement for LW users. It would be good to have more users who want to be stronger because they have something to protect, rather than thinking rationality is shiny.
Replies from: Alicorn
↑ comment by Alicorn · 2012-08-31T17:00:47.741Z · LW(p) · GW(p)
associating accounts with mobile phone numbers
I don't have a phone, and if I did I would refuse to give it out in case someone did something horrible like call me. I'm not the only phone-hater around; we overlap with phone-hater demographics a fair amount.
Replies from: JGWeissman, DanArmak
↑ comment by JGWeissman · 2012-08-31T19:42:09.923Z · LW(p) · GW(p)
How would you feel about the $5 per account option?
Any other ideas on how someone could signal that the account they are creating is not yet another sock puppet or identity reset that you would be comfortable with? Maybe associating your account with your website?
I'm thinking the phone idea, if it is used at all, should be one of several options, so the user can choose one that works for them.
Replies from: Alicorn, Eugine_Nier
↑ comment by Alicorn · 2012-08-31T19:48:56.258Z · LW(p) · GW(p)
By strong default, I do not pay money for Internet intangibles, but $5 is low enough that I think we might see people buying accounts for their likely-valuable-commenter friends or something, so I'm not quite so opposed (but I think it would sharply slow community growth, and prevent people we'd love to have around, like folks whose books get reviewed here, from dropping in to just say a few things).
I wouldn't mind associating my website with my account - I already do, now that that's an available field. But even fewer people have websites than phones.
Wouldn't some kind of IP address thing suffice to rule out casually created socks?
Replies from: JGWeissman, DanArmak
↑ comment by JGWeissman · 2012-08-31T20:05:50.397Z · LW(p) · GW(p)
like folks whose books get reviewed here
This is an important point: we should be welcoming to people we talk about, and I'm not sure how that fits into any scheme. Send out preemptive invitations when we talk about people? Who would keep on top of that?
I wouldn't mind associating my website with my account - I already do, now that that's an available field. But even fewer people have websites than phones.
Well, that was the result of me trying to find a mechanism that wouldn't exclude you. But if we let people associate their account with a phone or a website, we include more people. It would be better to have more options to be more inclusive, if we can think of more specific options.
Wouldn't some kind of IP address thing suffice to rule out casually created socks?
Yes, for certain values of casual. You can hide your IP address by going through proxies.
↑ comment by DanArmak · 2012-08-31T20:13:42.625Z · LW(p) · GW(p)
Wouldn't some kind of IP address thing suffice to rule out casually created socks?
It would have false positives due to people sharing public IPs (but not computers) on workplace or campus networks.
Replies from: Kaj_Sotala
↑ comment by Kaj_Sotala · 2012-09-01T13:22:36.196Z · LW(p) · GW(p)
And due to e.g. family members sharing IPs and computers.
↑ comment by Eugine_Nier · 2012-09-01T04:07:55.139Z · LW(p) · GW(p)
How would you feel about the $5 per account option?
That would show up on my credit card bill, which may cause certain inconveniences and I suspect we have people for whom that would cause a lot more than an inconvenience.
↑ comment by DanArmak · 2012-08-31T19:04:26.309Z · LW(p) · GW(p)
Strongly agree. There are also several objections raised on that comment.
↑ comment by palladias · 2012-08-31T15:22:28.478Z · LW(p) · GW(p)
What percentage of current posters do you estimate are FAI philanthropists/activists?
Can you give a couple specific examples of what distinguishes them from casual users? (donates to SI? works in a relevant field? volunteers for SI? etc)
Replies from: army1987, MileyCyrus
↑ comment by A1987dM (army1987) · 2012-09-02T01:09:43.004Z · LW(p) · GW(p)
Now that I think about that, neither poll asked takers how much they had donated for existential risk mitigation. (In case you're wondering, the answer in my case would be “zero”.)
↑ comment by MileyCyrus · 2012-08-31T16:22:16.018Z · LW(p) · GW(p)
What percentage of current posters do you estimate are FAI philanthropists/activists?
I don't want to pull a number out of my butt, but look at it this way: our most recent open thread had 287 comments. Our summer fundraising thread had 17. We should probably have a survey.
Can you give a couple specific examples of what distinguishes them from casual users? (donates to SI? works in a relevant field? volunteers for SI? etc)
If they donate a unit of caring to the SIAI (or perhaps the FHI), then I would lump them in the FAI philanthropist/activists category. There are some people who make contributions without donating money, such as Holden Karnofsky. But for the most part, the people who don't give money are freeloaders.
They go to Less Wrong for entertainment, or for practical advice, or for social interaction. They don't add value, and some take value away. They'll direct the conversation to things that are fun to argue about instead of things that might save the world. Or they'll write comments that show off their intellect instead of comments that raise the sanity waterline. I'm glad these people are gaining value from Less Wrong, but if we're trying to save the world, they shouldn't be our priority.
Edit: Changed the last paragraph to make it less mean.
Replies from: Vaniver, Alicorn
↑ comment by Vaniver · 2012-08-31T16:55:57.960Z · LW(p) · GW(p)
But for the most part, the people who don't give money are freeloaders.
They go to Less Wrong for entertainment, or for practical advice, or for social interaction. They don't add value, and some take value away.
I think you really should separate SIAI and LW here. I'd like to think that I've added value to LW in ways that don't involve donating money to SIAI.
Replies from: palladias
↑ comment by Alicorn · 2012-08-31T17:03:56.728Z · LW(p) · GW(p)
Fixed your link.
Replies from: MileyCyrus
↑ comment by MileyCyrus · 2012-08-31T17:09:58.698Z · LW(p) · GW(p)
Is this comment ironic? My link goes exactly where I intended: to a page that discusses the problems with attaching numerical estimates to everything.
Replies from: ArisKatsaris
↑ comment by ArisKatsaris · 2012-08-31T17:18:53.962Z · LW(p) · GW(p)
It was a joke: Alicorn's link in her own "quote" takes you to a comment that among other things says "(electric slide) I like to shake my butt, I like to make stuff up (electric slide) Is there published data? Maybe! Doesn't matta! I'll pull it out of my butt! (butt shake!)"
↑ comment by A1987dM (army1987) · 2012-09-02T08:56:42.086Z · LW(p) · GW(p)
Where did you get that idea about "Less Wrong's mission" from? Actually, when LW was created, discussing AI wasn't even allowed.
comment by [deleted] · 2012-08-31T15:05:06.454Z · LW(p) · GW(p)
Problem:
Karma inflation due to more users means old articles aren't as upvoted as they should be. Also, because they are old, they don't get read or updated as much as they should. We tried to at least correct people not reading the sequences with reruns. It didn't exactly work.
Idea:
Currently, karma earned from posting a Main article is boosted by a factor of 10. Let's boost the value of karma by a factor of 2, or some other low value, for any new comments on articles older than 2 years.
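In sketch form (the 2-year cutoff and factor of 2 are the proposal above; function and variable names are hypothetical):

    from datetime import datetime, timedelta

    OLD_CUTOFF = timedelta(days=2 * 365)  # "older than 2 years"
    OLD_FACTOR = 2                        # proposed multiplier

    def comment_karma(votes, article_date, now=None):
        # Karma from votes on a comment, boosted when the comment
        # is made on a sufficiently old article.
        now = now or datetime.utcnow()
        factor = OLD_FACTOR if now - article_date > OLD_CUTOFF else 1
        return votes * factor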
Replies from: None, Alicorn
↑ comment by [deleted] · 2012-08-31T15:14:38.766Z · LW(p) · GW(p)
We tried to at least correct people not reading the sequences with reruns. It didn't exactly work.
I also didn't like how they fragmented the commentary. Many found that a feature rather than a bug. I found it plain annoying that, when reading an old article, I had to do a search to see if there was any recent discussion in the rerun threads too.
I mean, surely some of the things we wrote back in 2007 or 2009 will eventually turn out to have been plain wrong, obsolete, or incomplete, right? It would be neat to see that noted at least in their comment sections.
Many people read through the sequences much like they would a textbook. We practically encourage them to do so. New, well-written comments on old articles might be very useful.
comment by CronoDAS · 2012-09-02T08:03:30.989Z · LW(p) · GW(p)
Whatever happened to tolerating tolerance?
Replies from: JoshuaZ
↑ comment by JoshuaZ · 2012-09-02T13:53:39.539Z · LW(p) · GW(p)
Yes, tolerating tolerance is important. But also well kept gardens and all that. Indeed, part of this discussion is an attempt to try to find a way which will improve the signal to noise ratio without methods that run afoul of your concern.
comment by [deleted] · 2012-08-31T14:58:47.565Z · LW(p) · GW(p)
Problem:
There are many, many polite or on-topic posts that are not very good, or even inane, which hover at 0 or 1 karma. For many readers they simply aren't worth the opportunity cost.
Idea:
Set the default visible level not to 0 but to 2 karma, or some such number, much as people can currently set negative comments to unhidden. The exception to this should be when "Sort By" is set to "New".
Replies from: DanArmak, dbaupp, Pentashagon
↑ comment by Pentashagon · 2012-08-31T17:43:30.113Z · LW(p) · GW(p)
Maybe have the comment threshold increase with the age of the comment. New comments should be visible at the current -1 threshold for a day or two but after that anything below 2 could be hidden. I think that immediately hiding comments would probably lead to fewer accurate comment ratings.
Another possibility is a user-selectable threshold.
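A sketch of that combination (all numbers and names are illustrative, not actual LW settings): new comments stay visible at the current floor for a grace period, after which the bar rises, and a user-set threshold overrides both.

    GRACE_HOURS = 48        # "a day or two"
    YOUNG_THRESHOLD = -1    # current default floor for fresh comments
    OLD_THRESHOLD = 2       # stricter bar once the grace period ends

    def is_visible(score, age_hours, user_threshold=None):
        if user_threshold is not None:   # user-selectable override
            return score >= user_threshold
        floor = YOUNG_THRESHOLD if age_hours < GRACE_HOURS else OLD_THRESHOLD
        return score >= floor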
Replies from: None
comment by [deleted] · 2012-08-31T15:02:15.195Z · LW(p) · GW(p)
Problem:
Huge comment threads on certain articles consume a lot of users' energy, with page-long comments and counter-comments on more and more tangential issues. Clearly we'd be better off if smaller debates produced spin-off threads of their own, either in the open thread or as discussion-level articles.
Idea:
Introduce a -2 karma penalty for posting in the comment section of an article with more than 1000 comments. A warning, as currently implemented for the -5 karma penalty, should be used.
Replies from: DanArmak
↑ comment by DanArmak · 2012-08-31T19:10:37.137Z · LW(p) · GW(p)
Data please: how many articles with >1000 comments are there, and what are some recent examples?
I'll also note that I just paid 5 karma to make this reply. Yet your comment was downvoted presumably because people disagreed with your suggestion, not because they thought you were trolling. The new feature of discouraging discussion on downvoted comments is counterproductive.
Replies from: None
↑ comment by [deleted] · 2012-08-31T19:17:20.717Z · LW(p) · GW(p)
Data please: how many articles with >1000 comments are there, and what are some recent examples?
This thread is the example that made me think we would be better off if spin off debates had been made into discussion articles or at least open thread posts.
There is some very interesting off topic material but it is lost in the sheer volume. I now think there are probably far too few such threads to make this feature worth implementing.
Edit: I'm a bit confused as to why I'm being downvoted for changing my mind?
Replies from: DanArmak
comment by GeraldMonroe · 2012-08-31T14:45:05.224Z · LW(p) · GW(p)
Problem: Comments are anonymously downvoted for no evident reason.
Idea: The names of the commenters who downvote should be visible. If a person downvotes a comment, it should cost them karma points to do so unless they give an explanation of why they chose this action.
Replies from: palladias, Emile, None
↑ comment by Emile · 2012-08-31T15:41:54.436Z · LW(p) · GW(p)
The names of the commentors who down-vote should be visible.
That would probably be an improvement (though there's a risk of increasing focus on meta); however:
If a person down votes a comment, it should cost them karma points to do so unless an explanation why they choose this action is made.
... would be quite bad: adding more tedious and repetitive explanations would just increase the noise, especially if the OP then answers to the explanations with even more nitpicks.
(unless you mean that downvoting would include an interface with a set of choices, but that would be cumbersome and probably incomplete)
Replies from: GeraldMonroe
↑ comment by GeraldMonroe · 2012-08-31T16:02:29.317Z · LW(p) · GW(p)
Well, for example, in this very thread. The post I made bringing up the idea is down to -2. Why is this?
Is the idea irrational?
Is the idea so impractical that I should have immediately realized it was a bad one and avoided a low SNR post?
Did I make spelling or grammar mistakes?
Did a particular person spot other posts of mine they disagreed with and then search for more posts to troll-moderate?
Does someone simply disagree with the idea of there being any accountability at all?
Now, given that I've tried to enumerate each of the major possibilities, and the first 3 seem unlikely, I have to conclude it is possible the reason is one of the last 2. If so, the downvoting system is not doing its job.
Replies from: evand, cousin_it, prase
↑ comment by evand · 2012-08-31T16:33:17.809Z · LW(p) · GW(p)
Having thought about downvoting your post (but not doing so; -2 seemed low enough): the problem is not obviously one worth solving. Your idea does not obviously solve the problem (if your comment had lots of downvotes with the reason given as "troll", would you feel more informed?). There are obvious downsides to your idea, as discussed in the other replies, that you should have addressed. Any benefit from your idea, where it has been implemented on other sites (see e.g. Slashdot), is dubious.
Replies from: NancyLebovitz
↑ comment by NancyLebovitz · 2012-08-31T17:31:43.766Z · LW(p) · GW(p)
Actually, "troll" does offer some information. It means the downvote was because the comment was perceived as an effort to stir up pointless conflict, rather than for it being wrong, poorly proofread, redundant, or a multitude of other possible reasons.
↑ comment by cousin_it · 2012-08-31T16:19:52.473Z · LW(p) · GW(p)
I downvoted your comment because you seem to be trying to solve the problem that your first post got downvoted, not the problem posed by the OP. Do you have an argument why requiring explanations for downvoting will reduce trolling and increase the signal to noise ratio on LW? Note that the founder of LW has argued for being more downvote-happy.
↑ comment by prase · 2012-08-31T19:18:27.047Z · LW(p) · GW(p)
I intended* to downvote the comment to express disagreement. It seems pretty standard to vote this way on proposals for changes. The reasons why I disagree with obligatory downvote explanations are:
- It brings heavy asymmetry between downvotes and upvotes, making downvoting much less convenient. A comment which ten people find worth downvoting and two find worth upvoting now stands at -8 (within the most naïve voting model), which shows that it is very probably a bad comment (we already vote more up than down on average). After the suggested change the comment could easily be at zero or positive, as the change wouldn't affect the upvoters while most potential downvoters would be too lazy to explain.
- The best strategy to deal with trolls is to downvote without explanation, which your proposal would make impossible.
- If a comment is stupid for one reason, most downvoters vote down for that single reason, but there is indeed no benefit from stating the same reason n times while the comment sinks down to -n.
- There is no practical way how to automatically verify the merit of explanations. Therefore it would be easy to game the system, by voting down and posting an empty explanation, or, less explicitly, saying e.g. "the comment is stupid".
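(A minimal sketch, in Python, of the arithmetic behind the first point. The vote counts come from the example above; the 0.2 willingness-to-explain rate is a pure assumption for illustration, not site data.)

# Naive voting model: each vote counts +/-1.
def score(upvotes, downvotes):
    return upvotes - downvotes

would_upvote, would_downvote = 2, 10

# Today: everyone inclined to vote actually votes.
print(score(would_upvote, would_downvote))  # -8

# With obligatory explanations, suppose only 20% of downvoters
# bother to write one (an assumed rate, not a measured one).
explain_rate = 0.2
print(score(would_upvote, int(would_downvote * explain_rate)))  # 0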
As for the more general idea, i.e. visible names of downvoters, I oppose it basically because it introduces a rather dangerous social dynamic into the forum. There is a good reason why political voting is usually secret, and the reason extends beyond politics. Not knowing who voted us down makes us less likely to succumb to the typical failure mode of debating to win and supporting our allies instead of debating to find the truth and supporting good arguments.
Note also that your enumeration is very incomplete, as you can see from the fact that my reasons barely overlap with any possibility you listed. Furthermore, the last one is unnecessarily polarising and amounts to a false dilemma between only two options: either agreeing with your proposal, or rejecting the general idea of accountability.
*) In the end I retracted my downvote, because I wanted to reply to your comment and voting it down to -4 would have cost me 5 karma points for the reply. So I tried to vote it up to -2, reply, and then, after replying, retract my upvote and downvote again. I didn't manage this because you retracted your comment in the meantime, but it still illustrates one rather bizarre feature of the new anti-trolling "improvement".
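(A sketch of the penalty rule being gamed in that footnote. The -3 threshold and 5-point penalty are the ones discussed in this thread; the function and variable names are my own, for illustration only.)

# Replying to a comment at -3 or below costs the replier 5 karma.
PENALTY_THRESHOLD = -3
REPLY_PENALTY = 5

def reply_cost(comment_score):
    return REPLY_PENALTY if comment_score <= PENALTY_THRESHOLD else 0

print(reply_cost(-3))      # 5: replying directly is taxed
print(reply_cost(-3 + 1))  # 0: upvote to -2, reply for free, retract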
↑ comment by [deleted] · 2012-08-31T15:22:27.715Z · LW(p) · GW(p)
As a neutral (I neither upvoted nor downvoted) critique of that article:
The conclusion section was fairly good.
But if it was in Main when you posted it, then yes, it was supposed to be fully edited. I'm pretty sure drafts usually go in Discussion.
And as for a guideline: actually yes, I think there's a part of the Wiki that mentions this:
I strongly disagree with Less Wrongers on something. Can I write a top-level post about it?
Yes, if you do your homework first.
...
If you have a question regarding a "consensus" view, or don't want to do any homework, consider posting in an open thread.
I think you fell into the bolded section. Admittedly, Eliezer's views aren't necessarily consensus. For instance, a lot of people disagreed with him in the thread linked at the top of the page. But it probably applies in the case of the WBE vs AI idea.
Hopefully my tone came through. I'm trying to say this in a helpful/educational manner, which seems slightly silly because you've written one more article in Main than I have, so I don't know if I'm the best judge. But hopefully it helps.
comment by GeraldMonroe · 2012-08-31T16:13:09.425Z · LW(p) · GW(p)
I've GOT IT! Eureka! Oh, I'm a genius.
Let's make the moderation system on this site freemium. See, upon logging into the site, for every 100 positive karma you gain, you can make a single up or down moderation. However, if you purchase "rational coins" (by demonstrating the rational operation of our money system with your credit card) you are allowed to make a single moderation for each coin.
Furthermore, you will be able to purchase 'special ability kits' that would do various interesting things to help you out in the moderation wars.
- Hard Reset Pack: resets the moderation of any of your posts back to zero, no matter how negative it is
- Self-Flagellation Kit: gives +10 moderation to any of your posts, at the cost of 100 rational coins.
And tons more!
What could possibly go wrong?
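(Purely in the spirit of the joke, a sketch of the proposed economy. The 100-karma vote ratio, the rational coins, and the kit pricing come from the comment above; every name and number beyond those is invented.)

class Moderator:
    def __init__(self, karma=0, rational_coins=0):
        self.karma = karma
        self.rational_coins = rational_coins

    def free_votes(self):
        # One up- or down-moderation per 100 positive karma.
        return max(self.karma, 0) // 100

    def buy_coins(self, n):
        # One moderation per purchased "rational coin".
        self.rational_coins += n

    def self_flagellation_kit(self):
        # +10 moderation to one of your posts, for 100 rational coins.
        if self.rational_coins < 100:
            raise ValueError("insufficiently rational")
        self.rational_coins -= 100
        return +10

user = Moderator(karma=50)
user.buy_coins(100)                  # credit card demonstrates rationality
print(user.free_votes())             # 0 -- 50 karma buys no votes
print(user.self_flagellation_kit())  # 10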