The types of manipulation on vote-based forums
post by pepe_prime · 2017-02-11T17:09:49.319Z · LW · GW · Legacy · 7 comments
This is a link post for https://www.reddit.com/r/TheoryOfReddit/comments/36wwr6/the_types_of_manipulation_on_votebased_forums/
Comments sorted by top scores.
comment by Viliam · 2017-02-13T10:41:24.333Z · LW(p) · GW(p)
One thing I noticed: this article talks about Reddit, Hacker News, and Stack Exchange as examples of vote-based forums. The first two are essentially "linking to someone else's content and talking about it", and the third one is "narrow questions and answers".
None of them is about "writing high-quality articles". It sometimes happens that there is a very-high-quality comment or answer, but those are a microscopic fraction of all comments and answers.
Maybe this is an answer to why LW has failed at generating high-quality content recently. We are using a fundamentally wrong tool for the job. The Reddit system is good for linking to interesting stuff, and sometimes it happens to generate good stuff as a side effect, simply because with millions of comments, even if only one in a thousand is great, that means thousands of great comments in absolute numbers -- but you wouldn't want to read the whole of Reddit systematically to find them. Reading Reddit is an archetypal example of procrastination.
It is not a mystery that LW fails to generate great content. That is the default behavior of Reddit-like systems. The part that needs explanation is how LW managed to have great content in the past. But the answer seems quite simple -- at the beginning, LW didn't play by the usual Reddit rules. Actually, Eliezer's first articles were not even written on the Reddit-like platform; they were imported from Overcoming Bias.
And the first months on the Reddit-like platform were probably sustained by momentum -- people saw the Sequences as an example of what was supposed to be published on LW, so they tried to write similar kinds of stuff, and some of them succeeded. But gradually the content followed the form, and the website became more Reddit-like, to the point where the Sequences became publicly treated as a joke, and the emphasis moved from the debated articles to the debates themselves, which of course disincentivizes writing high-quality articles.
So many people who still wanted to write high-quality articles did the obvious thing: they started publishing their articles elsewhere, at first reposting them, and later only linking them from LW. They updated (consciously or not) on the fact that LW had become more Reddit-like, and started approaching it just like one approaches Reddit.
I guess the lesson is: if we don't want to become yet another Reddit, we shouldn't use the Reddit architecture. The Reddit architecture naturally converges towards Reddit-like quality, which is synonymous with procrastination, vote manipulation, etc.
While thinking about how to prevent vote manipulation and other undesired things in the forum, we should not forget that creating content needs to happen outside of the forum, under different rules. Reddit without vote manipulation would probably still suck at generating Sequences-like content. (Essentially, great articles are supposed to be "timeless", while a forum is by its nature ephemeral, and you just can't optimize for two contradictory things at the same time. Rationalist chat and rationality books need to be two different things.)
Replies from: WhySpace_duplicate0.9261692129075527
↑ comment by WhySpace_duplicate0.9261692129075527 · 2017-02-13T20:31:40.554Z · LW(p) · GW(p)
It seems weird to me to talk about Reddit as a bad example. Look at /r/ChangeMyView, /r/AskScience, /r/SpaceX, etc., not the joke subs that aren't even trying for epistemic honesty. /r/SpaceX is basically a giant mechanism that takes thousands of nerds and geeks in on one end, slowly converts them into dozens of rocket scientists, and spits them out the other side. For example, this is from yesterday. Even the majority who never buy a textbook or take a class learn a lot.
I think this is largely because Reddit is one of the best available architectures for sorting the gems out from the rest, when there is a critical mass of people who want gems. If you want more gems, you need to put more dirt through the filter.
Where this rule fails is the default subreddits, because everyone can vote, not just those with high standards. An ideal solution would be to ask people to predict whether something will get, say, a 1-5 star rating from mods/curators, as a means of signal boosting. If you take a weighted average of these predictions based on each person's historical accuracy, you can just use that as a rating, and spare the mods from having to review almost everything.
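A minimal sketch of that accuracy-weighted aggregation, with invented usernames and numbers (the comment only describes the idea in outline):

```python
def aggregate_rating(predictions, accuracy):
    """Combine users' predictions of the 1-5 star rating a mod/curator
    would give, weighting each prediction by that user's historical
    accuracy. Both dicts map username -> value; all values are invented.
    """
    total_weight = sum(accuracy.get(user, 0.0) for user in predictions)
    if total_weight == 0:
        return None  # no trusted predictors yet; fall back to mod review
    weighted_sum = sum(rating * accuracy.get(user, 0.0)
                       for user, rating in predictions.items())
    return weighted_sum / total_weight

# Hypothetical example: two historically accurate predictors outweigh
# a noisy one, so the post is rated close to their estimates.
predictions = {"alice": 4.5, "bob": 4.0, "mallory": 1.0}
accuracy = {"alice": 0.9, "bob": 0.8, "mallory": 0.1}
print(round(aggregate_rating(predictions, accuracy), 2))  # 4.08
```

Mods would then only need to spot-check a sample of posts to keep each predictor's accuracy score honest.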
Really, I like extending this to multiple axes, rather than lumping everything into one. For example, some topics are only new to new people, and such threads can be good places to link to older discussions and maybe even build on them. However, older members may have little interest, and may not want to engage with commenters who aren't familiar with older discussions. Arbital seems to be moving in this direction, just by having more coherent chains of prerequisite concepts, rather than whatever couple of links the author thought to include in an essay.
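One minimal way to represent such multi-axis votes (the axis names here are my own guesses; the comment doesn't specify any):

```python
from dataclasses import dataclass

@dataclass
class MultiAxisVote:
    """One user's vote on a post, split across independent axes
    instead of a single up/down score. Axis names are hypothetical."""
    insight: int = 0   # -1, 0, or +1
    novelty: int = 0   # new to the community, or a retread?
    humor: int = 0

def tally(votes):
    """Sum each axis separately, so readers can sort by the axis
    they actually care about rather than one conflated score."""
    axes = ("insight", "novelty", "humor")
    return {axis: sum(getattr(v, axis) for v in votes) for axis in axes}

votes = [MultiAxisVote(insight=1, novelty=-1),
         MultiAxisVote(insight=1, humor=1)]
print(tally(votes))  # {'insight': 2, 'novelty': -1, 'humor': 1}
```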
Just some musings.
Replies from: Viliam
↑ comment by Viliam · 2017-02-14T09:58:54.745Z · LW(p) · GW(p)
That is a great point! Can we turn it into actionable advice for "creating a better LW"?
Maybe there is a chance to create a vibrant (LW-style) rationalist community on Reddit. We just have to find out what secret makes the three subreddits you mentioned different from the rest of Reddit, and make it work for a LW-style forum.
I noticed CMV has about 30 moderators, AskScience has several hundred, and SpaceX has nine. I don't know what the average is, but at this moment I have the impression that a large-ish number of active moderators is a must.
Another ingredient is probably that these sites seem to have clear rules on what is okay -- on what one should optimize for. In CMV, it's replies that "change OP's mind"; in AskScience it's replies compatible with respected science. -- I am afraid we couldn't have a similarly clear rule for "x-rationality".
EDIT:
I like the anti-advice page for CMV. (And I find it quite amusing that a lot of the items there pretty much describe how RationalWiki works.) I posted that link on LW.
comment by Viliam · 2017-02-12T18:55:42.798Z · LW(p) · GW(p)
what does karma do? Votes determine what gets seen and what doesn't. [...] Views determine what kind of content gets circulated, which determines who makes money. Views also determine what ideas people are exposed to and influenced by, which in turn determines how they will attempt to change the minds of others concerning those ideas. Karma is the currency of attention and influence.
Let me be clear: you cannot push dogshit to the top of reddit. [...] However, you can push kind-of-good-but-not-great stuff to the top of reddit. This happens all the time. There is a lot of money in it.
Redditors are often shocked at the dishonesty that goes on in the corporate world, but I think that's because so many redditors work as programmers, which is comparatively way more honest than most professions. If you deal with people for a living, you're usually dealing with bullshit, because most people are bullshit factories. [...] Wikipedia overwhelmingly considers articles from organizations "reliable" and articles from individuals "not reliable" -- note that these are the same organizations where a PR person calls up a friend and contrives a story.
Vote nudging is perhaps the most common type of vote manipulation on reddit. [...] Vote nudging happens when someone arranges themselves or a few other people to give a link a boost of upvotes during its initial appearance on a subreddit, and then leave the link to grow organically. Vote nudging is extremely successful because the first five upvotes are the most influential portion of a link's lifespan, because when a link has one vote you can kill 100% of its votes by downvoting it. [...] So, the vast majority of links will die in the first few upvotes. You might think of it as surviving infant mortality -- once children don't die from the vast majority of death-causing things, their life expectancy rockets forward. [...] Vote nudging is difficult to call manipulation because (a) it's very easy to organize and (b) this is undoubtedly common practice at countless marketing firms that promote their content.
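A toy random-walk simulation of that "infant mortality" dynamic (all probabilities and numbers are invented for illustration; the excerpt gives no model):

```python
import random

def survives(seed_upvotes, rounds=20, p_up=0.55):
    """Toy model: each round the link gains or loses one vote; it dies
    if its score ever reaches zero. Seed upvotes from vote nudging act
    as a buffer against early random downvotes."""
    score = 1 + seed_upvotes  # the submitter's own vote plus the nudge
    for _ in range(rounds):
        score += 1 if random.random() < p_up else -1
        if score <= 0:
            return False
    return True

def survival_rate(seed_upvotes, trials=10_000):
    return sum(survives(seed_upvotes) for _ in range(trials)) / trials

for seed in (0, 2, 5):
    print(f"{seed} seed votes -> {survival_rate(seed):.0%} survive")
```

Even in this crude model, a handful of coordinated early upvotes multiplies a link's odds of surviving long enough to grow organically.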
Reverse-nudging is more insidious and it's when you vote negatively on all content blocking your link's position to a higher page. So let's say your link is 40th -- you'd downvote the 10 submissions above 40 [...] This would probably raise you to the 35th spot
Here is something you need to understand about reddit: The number of people who don't read the article and "skip to the comments" is immense, and people who do this register upvotes as some kind of truth rating (as opposed to "content we want you to see" rating). This viewpoint could not be more erroneous, but vote manipulation exploits people who think this way. [...] people don't tend to read the whole discussion, so a completely crushing counterargument could be at +1 or +2 forever.
Asch conformity experiment. If conformity pressure can delude someone about the size of lines right in front of their face, they can be influenced by a number telling them that they're disagreeing with 1000 people.
How many people are uninfluenced by votes? In other words, how many people think for themselves? There is no way to know for sure unless you have access to data a normal user does not, but I'd suspect it's something like 20-30%.
Replies from: pepe_prime
↑ comment by pepe_prime · 2017-02-12T23:44:02.427Z · LW(p) · GW(p)
Thanks, these are excellent highlight reels.
comment by pepe_prime · 2017-02-11T17:11:58.404Z · LW(p) · GW(p)
Self-explanatory title. The list is rather rambling and not terribly comprehensive, but I found it worthwhile nonetheless.
Here's a summary, of sorts. OP also discusses what can be done about manipulation, in a way relevant to LessWrong.
When I first tried to post this I accidentally saved as draft first and it ended up pointing to itself, so I'm reposting. Thanks satt.
comment by Viliam · 2017-02-12T21:05:44.035Z · LW(p) · GW(p)
In comments:
an easy way to stop vote manipulation is to just force the content you submit to be good and/or make votes unequal. But maximizing for quality content and making votes unequal both have problems.
you have to define 'quality'. Even though reddit can be hilarious, if jokes are more visible than insightful discussion (and they will be, because it doesn't take as long to process/vote on them) then insightful discussion will be buried and no one will have an incentive to do it. But, people will always make jokes anyway
you have to keep out non-quality [...] This is by design a form of elitism, and elitism is rarely as profitable as populism. Romance outsells conceptual fiction; Time magazine outsells the New Yorker. But at least those models have some way to make money [...] if reddit is not the source of startup wealth, well, it's not like you're going to find it by making an even more selective reddit [...] and profitability determines whether a solution will ever be implemented at all. [...] If "reddit but better" were something someone thought had real monetary potential, you'd have seen it by now.
HackerNews in particular requires that you get 500 upvotes before you can downvote someone, so every user has to think about how they are voting. Stack Exchange is even more stringent, and requires that you earn this privilege per discussion area as opposed to sitewide.
Ultimately, you cannot have some kind of content quality permanence unless you curate who votes in some fashion. This is done indirectly on most websites by simply creating discussion boards around subject matter that filter out idiots. If you start a message board around Algebraic Topology, well, your "off topic" forum is probably going to be a bit more rigorous than usual. But nothing is stopping you from being a bunch of idiots if you all decide to talk about normative ethics or exercise science or cognitive psychology or whatever
So, the solution would be a reddit-like entity where:
- Some kind of 'curator curator' exists whose job it is to approve who can vote/not vote. [...]
- Users have to earn the ability to upvote, and especially the ability to downvote, on a per-subreddit basis [...]
- New subreddits are made by application -- i.e. someone has to make the case for this subreddit's creation before it's actually approved [...]
- Links with very few words require captcha entry -- no actual penalty, but enough to make submitters second-guess whether they really want to submit, say, a link to a tweet.
- Subreddits have the ability to separately set permissions on who can submit and who can comment
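A rough sketch of what per-subreddit, earned voting privileges might look like (the thresholds, names, and structure are invented, loosely echoing the HN/Stack Exchange numbers quoted above):

```python
from dataclasses import dataclass, field

UPVOTE_KARMA = 50     # invented thresholds; a real site would tune these
DOWNVOTE_KARMA = 500

@dataclass
class Membership:
    """A user's standing in one subreddit; privileges are earned
    per community rather than sitewide."""
    karma: int = 0
    can_submit: bool = True
    can_comment: bool = True

    def can_upvote(self) -> bool:
        return self.karma >= UPVOTE_KARMA

    def can_downvote(self) -> bool:
        return self.karma >= DOWNVOTE_KARMA

@dataclass
class User:
    name: str
    memberships: dict = field(default_factory=dict)  # subreddit -> Membership

    def membership(self, subreddit: str) -> Membership:
        return self.memberships.setdefault(subreddit, Membership())

alice = User("alice")
alice.membership("algebraic_topology").karma = 600
print(alice.membership("algebraic_topology").can_downvote())  # True
print(alice.membership("normative_ethics").can_downvote())    # False
```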
It's about prior expectation. If you give someone a right or power and then take that right or power away, they will feel cheated and be angry. [...] Reddit is probably beyond repair in this respect. [...] it'd be better to start from scratch
three major things people feel in response to internet comments [...] laughter, insight, and [...] we could vaguely call the third one "feels." [...] splitting up the third category further would probably have diminishing returns [...] people just naturally gravitate toward stuff like laughter and feeling good. Trying to be insightful is hard. People were making other people laugh and cry before writing existed, but logic by comparison is really new and our brains clash with it a lot. Users will manage to insert jokes and moving anecdotes into their content even if you make insight a priority. But if you don't make insight a priority people aren't going to work to pursue that