Downvotes temporarily disabled
post by Vaniver · 2016-12-01T17:31:41.763Z · LW · GW · Legacy · 28 comments
This is a stopgap measure until admins get visibility into comment voting, which will allow us to find sockpuppet accounts more easily.
The best place to track changes to the codebase is the GitHub LW issues page.
28 comments
Comments sorted by top scores.
comment by RomeoStevens · 2016-12-02T03:00:24.500Z · LW(p) · GW(p)
Why aren't we piggybacking on the experience of Hacker News and just permanently disabling downvotes for people under, say, 100 karma? Letting the community wither because of hand-wringing over elitism seems ridiculous.
Replies from: Vaniver, ChristianKl, Pimgd, Error, Daniel_Burfoot
↑ comment by Vaniver · 2016-12-02T18:07:32.942Z · LW(p) · GW(p)
Currently the limit is set at 10; it would be fairly easy to change it to 100 or to 1000. The problem is that we don't know what accounts Eugine has yet, and so even if we set the limit at 1k he might still have twenty accounts available to downvote things. Once we get the ability to investigate comment voting, then we can keep the limit fairly low.
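As a rough illustration of why the threshold itself is cheap to change (the names below are hypothetical, not taken from the actual LW code), the check is essentially a one-line comparison against a configurable constant, which is also why raising it does nothing against sockpuppets that already sit above it:

```python
# Hypothetical sketch of a karma-gated downvote check -- names are illustrative,
# not taken from the actual LessWrong codebase.
DOWNVOTE_KARMA_THRESHOLD = 10  # trivially changeable to 100, 1000, ...

def can_downvote(user_karma: int, downvoting_enabled: bool = True) -> bool:
    """A user may downvote only if downvoting is enabled site-wide
    and their karma meets the configured threshold."""
    return downvoting_enabled and user_karma >= DOWNVOTE_KARMA_THRESHOLD

# Raising the threshold blocks fresh accounts, but not sockpuppets that
# have already accumulated karma:
print(can_downvote(5))      # False -- new account
print(can_downvote(1200))   # True  -- could still be a well-fed sockpuppet
```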
Replies from: RomeoStevens, steven0461
↑ comment by RomeoStevens · 2016-12-03T03:22:37.379Z · LW(p) · GW(p)
okay, so a temp measure until then... the test is cheap.
↑ comment by steven0461 · 2016-12-15T16:45:11.254Z · LW(p) · GW(p)
How about 10k?
Replies from: Vaniver
↑ comment by Vaniver · 2016-12-15T18:39:29.468Z · LW(p) · GW(p)
One problem with setting the limit too high is that the voter base becomes unbalanced in a problematic way; is it really useful to have downvotes if only ~50 people can use them, but ~5000 people can use upvotes?
Replies from: steven0461, Lumifer
↑ comment by steven0461 · 2016-12-15T20:29:56.507Z · LW(p) · GW(p)
Yes, it seems to me that would be useful. Concretely, it might make the difference between whether clearly-bad-but-not-ban-worthy content ends up at +1 or -2. If >10k isn't enough people, something like >3k would still come with a pretty minimal risk of abuse.
edit: On rereading your comment, it sounds like you're saying a high threshold for downvoting has problems relative to a low threshold. I agree with this, but what we currently have is no downvoting. I suspect the ideal policy in terms of site quality (but not politics/PR/attractiveness to newcomers) is a medium-sized whitelist of voters selected by a trusted, anonymous entity, with no voting (up or down) outside this whitelist.
Replies from: Lumifer
↑ comment by Lumifer · 2016-12-15T22:06:02.220Z · LW(p) · GW(p)
trusted, anonymous
These two words do not match well.
Replies from: steven0461
↑ comment by steven0461 · 2016-12-16T03:33:48.710Z · LW(p) · GW(p)
Trusted by the site owners, anonymous to others. (This is not actually a practical suggestion, so it doesn't matter.)
↑ comment by ChristianKl · 2016-12-02T11:55:30.531Z · LW(p) · GW(p)
I would agree with such a change. I'm not sure it's enough, though, given that the existing sockpuppets can upvote each other.
↑ comment by Error · 2016-12-02T15:27:17.340Z · LW(p) · GW(p)
I'd second this, but it might not be as easy as it sounds. It seems the site is a technical black hole and the mods are effectively operating in a straitjacket; the intersection of the set of people who have a stake in the site and the set of people who actually have direct DB access is the empty set. Also, outsiders who see its internal structure for the first time have reactions like this, which really isn't a good sign.
I've nosed around contributing to the effort a time or two, but always end up backing off when I realize just how aggravating trying to work on it would be. This makes me feel bad.
...that being said, if your suggestion is actually easy, this is a no-brainer. It won't solve puppets that have been around for a while, but it limits how long the mods have to play whack-a-mole.
[edit: depending on how the disabling is coded, it might be easier to disable downvoting against known targets]
Replies from: Viliam, WhySpace_duplicate0.9261692129075527, scarcegreengrass
↑ comment by Viliam · 2016-12-05T13:47:44.886Z · LW(p) · GW(p)
I looked at the LW code again, because in the meantime I have gained a little experience with Python. Now some parts of the code make more sense than before. But still:
The code uses so many different technologies that making all of them run is already a full-time job. The overhead for an individual volunteer working in their free time is insane. And contributing code that you are not able to run on your own machine is, uhm, unlikely to result in correctly working code. -- Luckily, some people are already working on this issue by building a virtual machine that has everything installed, so that everyone else can simply download the VM and start coding.
Adding a new feature often requires you to get familiar with all layers of the code. (How the URLs are mapped to function calls; how the parameters are passed; how the page is rendered; how the data are stored in the database and how they should be manipulated.) Each of these layers uses specific solutions with many sparsely documented details, so you more or less have to find an existing functionality that seems similar to what you want, and then track step-by-step how exactly it works. Even then it is easy to get stuck. I spent a few hours this weekend just finding the simplest existing functionality (in short: setting an integer value for a user, and storing this value in the database), and trying to connect all its pieces together; some connections are still missing (for example, I already found a controller for the functionality, and the HTML template, but I have no idea how the program knows that this controller is connected to this template; searching for the name of the template in the source code gives no results, so there is probably some code that says something like "take the name of the controller, remove the Controller suffix, convert to lowercase, etc.", but good luck finding it).
I mean, the code is not completely bad. It's probably better than most projects. But there are many frustrating things, for example functions that receive five or seven arguments whose names are 1-3 characters long, and most of those arguments are just passed to other functions, which pass them to other functions... I have only an approximate idea of what those arguments mean, and the documentation says nothing about it. (Such code would definitely not pass code review at my current job.) And it doesn't help that many of these functions are not called directly from other functions (so that I could backtrack where the value came from); instead there is some dispatching system, for example in some configuration file you write "controller = promoted, action = listing", and then when you access the given URL, a method "PromotedController.GET_listing" is called (but you have to do a separate investigation to find out where some of the method's arguments actually come from). Essentially, if you are able to contribute a new feature to the LW code, Reddit should be happy to hire you, because you will save them the money they would otherwise have to spend on your training.
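For illustration only -- this is a toy sketch of convention-based dispatch with made-up names, not the actual Reddit/Pylons routing code -- something along these lines would explain why searching for the template name or for a direct call site turns up nothing: the method and template names are assembled at runtime from the config entry.

```python
# Toy sketch of convention-based dispatch (hypothetical names, not the real LW/Reddit code).
# A config entry like "controller = promoted, action = listing" never appears verbatim
# in the source; the method and template names are built from it at runtime.

class PromotedController:
    def GET_listing(self):
        # Template name is derived from the controller name by convention:
        # "PromotedController" -> "promoted" -> "promoted.html"
        return "render " + self.template_name()

    def template_name(self):
        cls = type(self).__name__            # "PromotedController"
        base = cls[:-len("Controller")]      # "Promoted"
        return base.lower() + ".html"        # "promoted.html"


def dispatch(controller_name, action, verb="GET"):
    """Look up <Name>Controller in globals() and call its <VERB>_<action> method."""
    cls = globals()[controller_name.capitalize() + "Controller"]
    handler = getattr(cls(), verb + "_" + action)
    return handler()


# dispatch("promoted", "listing") ends up calling PromotedController.GET_listing(),
# even though that name is written nowhere in the calling code.
print(dispatch("promoted", "listing"))
```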
Anyway, as soon as someone gets the virtual machine running, mere mortals like me will at least have a hope of providing a useful contribution.
For a more constructive approach: there is a lot of low-hanging fruit, and one can make a very useful contribution simply by adding comments to the existing code and renaming variables; i.e. by making obvious the things every contributor would otherwise have to discover independently.
↑ comment by WhySpace_duplicate0.9261692129075527 · 2016-12-02T19:29:48.862Z · LW(p) · GW(p)
...reactions like this,...
The relevant bit from the link:
... I'll happily volunteer a few hours a week.
EDIT: AAAUUUGH REDDIT'S DB USES KEY-VALUE PAIRS AIIEEEE IT ONLY HAS TWO TABLES OH GOD WHY WHY SAVE ME YOG-SOTHOTH I HAVE GAZED INTO THE ABYSS AAAAAAAIIIIGH okay. I'll still do it. whimper
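For readers wondering what is so alarming here: the sketch below shows the general shape of a two-table key-value (entity-attribute-value) layout, where every attribute of every object lives as a (thing_id, key, value) row instead of in a typed column. The table and column names are made up for illustration, not taken from the actual Reddit schema.

```python
# Hypothetical sketch of a two-table key-value layout -- names are illustrative only.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE thing (id INTEGER PRIMARY KEY, kind TEXT);       -- every object
    CREATE TABLE data  (thing_id INTEGER, key TEXT, value TEXT);  -- every attribute
""")

# Storing a "comment" means one row in `thing` plus one row in `data` per attribute.
cur = conn.execute("INSERT INTO thing (kind) VALUES ('comment')")
comment_id = cur.lastrowid
conn.executemany(
    "INSERT INTO data (thing_id, key, value) VALUES (?, ?, ?)",
    [(comment_id, "author", "Error"),
     (comment_id, "karma", "3"),
     (comment_id, "body", "AAAUUUGH")],
)

# Even "give me this comment's karma" becomes a lookup by key plus a string-to-int cast.
row = conn.execute(
    "SELECT value FROM data WHERE thing_id = ? AND key = 'karma'", (comment_id,)
).fetchone()
print(int(row[0]))  # -> 3
```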
↑ comment by scarcegreengrass · 2016-12-03T16:30:10.705Z · LW(p) · GW(p)
Modifying the site takes time but isn't impossible. Volunteers are making changes, although some people are bottlenecked by the difficulty of setting up test environments.
↑ comment by Daniel_Burfoot · 2016-12-02T14:59:48.609Z · LW(p) · GW(p)
Great suggestion. 100 Karma is not particularly hard to obtain if you comment regularly and post a few articles.
Replies from: Viliam
↑ comment by Viliam · 2016-12-03T15:13:06.797Z · LW(p) · GW(p)
Also if you have a dozen sockpuppets that upvote each other. Just saying.
Replies from: AspiringRationalist
↑ comment by NoSignalNoNoise (AspiringRationalist) · 2016-12-03T21:38:57.649Z · LW(p) · GW(p)
If all votes required 100 karma, using sockpuppets for votes would get a lot harder.
Replies from: ChristianKl
↑ comment by ChristianKl · 2016-12-05T12:35:13.965Z · LW(p) · GW(p)
Only if there were an effective way to delete sockpuppets.
comment by turchin · 2016-12-02T15:29:22.908Z · LW(p) · GW(p)
I prefer to get informative downvotes, because based on them I can more easily work out what was wrong with my post. There are two types of such informative downvoting:
On Longecity there are many different downvote types, like "non-informative" or "wrong".
On Astroforum.ru they provide a short written explanation for downvotes.
But such downvoting should be anonymous to prevent "wars".
Replies from: Vaniver
↑ comment by Vaniver · 2016-12-02T18:11:44.339Z · LW(p) · GW(p)
So, this also comes up in proposals of informative upvotes. My impression is that the halo effect makes this less effective in practice than it seems like it will be in theory.
Replies from: Lumifer, scarcegreengrass, root
↑ comment by scarcegreengrass · 2016-12-03T16:23:48.759Z · LW(p) · GW(p)
Can you go into a little more detail? Readers suffer a halo bias about whom?
Replies from: Vaniver
↑ comment by Vaniver · 2016-12-03T18:18:52.943Z · LW(p) · GW(p)
What we would like to have is different signals--posts that are informative but wrong have high 'informative' scores and negative 'right' scores. Or posts that are 'right' but not 'funny', or 'funny' but not 'right.' But what I suspect will happen is that the various signals will be correlated together strongly, so that you end up with posts that are informative and right and funny, vs. posts that are uninformative and wrong and boring. At which point you could have just stuck with a single karma rating.
It's possible that this is reasoning about what happens with extreme posts, when what matters is marginal posts--it might be that if you get a single downvote, it's not one person clicking all three buttons, but instead one person clicking one of those buttons, and so you can figure out which one it is.