Karma fluctuations?

post by eapache (evan-huus) · 2020-06-11T00:33:26.604Z · LW · GW · No comments

This is a question post.


I see karma on posts fluctuating (in particular going down) more than I would expect coming from other vote-based websites. Is downvoting really used here for posts that are not spam or trolling? Or do people just change their minds a lot?

The FAQ says: "We encourage people to vote such that upvote means 'I want to see more of this' and downvote means 'I want to see less of this.'" But I guess I'm surprised if people actually behave that way? And that some posts are controversial enough to receive active downvotes vs passive ignoring.

Answers

answer by paulfchristiano · 2020-06-11T00:47:47.181Z · LW(p) · GW(p)
Is downvoting really used here for posts that are not spam or trolling?

Yes.

But I guess I’m surprised if people actually behave that way?

What makes this surprising?

some posts are controversial enough to receive active downvotes vs passive ignoring.

The point is to downvote content that you want to see less of, not content that you disagree with. If by "controversial" you mean "that some people don't want to see it," then I can't speak for others but I can say that personally the whole internet is full of content that I don't want to see (including and indeed especially content that I mostly agree with).

comment by G Gordon Worley III (gworley) · 2020-06-11T17:57:34.600Z · LW(p) · GW(p)

The point is to downvote content that you want to see less of, not content that you disagree with. If by "controversial" you mean "that some people don't want to see it," then I can't speak for others but I can say that personally the whole internet is full of content that I don't want to see (including and indeed especially content that I mostly agree with).

In practice I think people don't do a great job separating "disagree" from "I don't want to see this", because for many people disagreeing implies not wanting to see something. I wish the norm were less focused on what I want to see and more on what I think is worth being seen by people reading LW.

I think this shift from personal preference to curating content for others is likely to produce votes that reflect what is worth reading when a person comes to the site, rather than what people on LW happen to like.

(I previously have had more to say on voting on LW) [LW · GW]

comment by Ben Pace (Benito) · 2020-06-11T18:45:37.589Z · LW(p) · GW(p)

Oh I'm pretty sure I disagree with this.

I think it's often very hard to get direct social data about the world, when people are always trying to say what they think everyone else wants them to say, or be in line with what everyone else thinks. This is like replacing survey questions about "Did you enjoy the movie?" with "Do you think most people who answer this survey will have enjoyed the movie?" You can end up in places where an outcome is reached just because everyone thinks that everyone else wants that outcome. "No, *I* don't think your content is bad, I just think *other people* will think your content is bad, so I think it shouldn't be on the site." This is how bubbles and information cascades form. 

Vote according to your own assessment of quality, not your low resolution model of a mass of tens of thousands of other people's assessment of quality.

comment by eapache (evan-huus) · 2020-06-11T21:17:02.284Z · LW(p) · GW(p)

This is interesting. I am mostly uninterested in AI research topics, but have avoided downvoting them on less wrong because there seems to be a lot of value and interest from other parts of the community. I could start downvoting every AI post that I see, but I’m afraid it would turn into a factional war between AI enthusiasts and everyone else.

comment by Ben Pace (Benito) · 2020-06-11T22:22:04.872Z · LW(p) · GW(p)

Okay, I admit my model was oversimplified. No, I wouldn't recommend downvoting all AI content. It's a pretty core part of the site, and a lot of core users care about it. It's like turning up to a forum for software engineering enthusiasts and downvoting everything because you're primarily interested in discussing national politics in your country. It would be a waste of effort. 

(FYI If you don't want to see the AI content, you can now use the tags to exclude / reduce the amount of those posts on your frontpage.)

Neither of the extremes, "vote how you think others will vote" and "vote with no regard to the context of the site and its community", is accurate. The main thing I mean to say is that "vote how you think others will vote" has a lot of pathologies, related to things like first-past-the-post voting systems and market bubbles.

(What is the correct simplified recommendation? Perhaps it is to vote mostly on the average and occasionally on the margin. This is a site for rationality content, and lots of other smaller things like AI, world optimisation, practical advice, and so on, and as part of this site's community you can help out by upvoting and downvoting good/bad content in those areas. You can also help out by adding your taste, where you see things you feel are underrated or overrated, and adding votes there too. I'm not sure how to put into words the right way to combine these things, although I do it myself very frequently.)

comment by Sherrinford · 2020-06-13T21:39:05.609Z · LW(p) · GW(p)

I like that you are trying to make your approach to voting explicit.

I believe people will usually vote based on something like "upvote if this content represents what I think LessWrong should be". What this means exactly depends on the content of the post or comment. In many cases it should mean whether a post or comment contributes to truth-finding on a topic. This implies, for example, that posts making empirical claims should contain some kind of empirical evidence, and that their conclusions should be valid. It also implies that the identity of the author should not be a reason for upvoting.

Compared to websites without a voting system, I see the disadvantage that anonymous voting lets you judge things without giving or evaluating arguments. This does not matter much as long as the content is not very emotional. It matters when we are talking about politics; partisan fellow-feeling can then drive behavior.

(Personally, I get the impression that the pandemic situation has brought more politics to LW and that a certain kind of partisan voting has become more common, and I don't like this trend. But my sample is not yet large enough to form a good judgement.)

comment by Ben Pace (Benito) · 2020-06-13T22:01:02.183Z · LW(p) · GW(p)

Yeah. I think I agree that it gets worse as you move towards news-like topics, and focusing on covid definitely has tradeoffs on that front. Though overall covid content is pretty small, and I don't expect to continue to move in that direction; I expect crises of this level, worth engaging with, no more than once per decade.

I think the karma system generally does a stand-up job of promoting the good stuff to my attention and occasionally punishing (with a negative score) stuff that's bad. I do think there are worries about short-term incentives and scoring, and I'd like to remedy that in part by creating better long-term incentives. I'm just about to get back to work on the book of the LW 2018 Review [LW · GW], to set things in motion for us to output a book like that annually. I hope the combination of Review + book will feel much more valuable and rewarding to authors than week-to-week karma scores.

comment by Sherrinford · 2020-06-14T05:34:31.136Z · LW(p) · GW(p)

I would like to add that I find politics important and found the pandemic posts mostly very useful and readable. But writing about politics requires strong self-discipline if a rationalist standard of discussion is to be maintained, in particular by readers who vote.

comment by eapache (evan-huus) · 2020-06-14T00:08:28.203Z · LW(p) · GW(p)

Related: do mods consider karma when deciding what to curate? Obviously something in the negatives is unlikely to warrant curation, but is a higher karma score considered a positive signal past whatever minimum bar?

(Speaking for my own internal reward function, I like writing posts that get high karma, but I'd like writing a post that gets curated much more)

comment by Ben Pace (Benito) · 2020-06-14T08:37:14.324Z · LW(p) · GW(p)

Speaking for myself, when I consider what post to curate, I let my attention naturally go to the top 5 or so recent high karma posts, as well as to the posts that other mods nominated (we have a mod-only UI that shows all curation nominations). I also ask myself what posts I liked lately, and occasionally when I read a post I immediately think "wow this was excellent, I bet I'll want to curate this 5 days from now" and nominate it. For example, this happened with a post recently, where I wrote a comment about why I liked the post as soon as I read it [LW(p) · GW(p)], and then still endorsed it 4 days later and curated it.

Overall karma plays an important role in what posts I consider. But to answer your specific question, about whether a higher karma score is something I consider a positive signal, the answer is no.

My rough internal question is "Was this idea important/interesting/useful enough, and was it written clearly/concisely/enjoyably enough?", and I don't consider the karma. That generally produces a binary yes/no.

The main way I use karma is to second-guess myself. If I think a post should be curated, but it only got like 35 karma, then I will spend some time considering the hypothesis that "This post is really well aimed at Ben in particular, and actually a lot of people won't be that interested to read this."

comment by Pongo · 2020-06-12T01:18:54.201Z · LW(p) · GW(p)

I don't know that I buy that it would be a waste of time and effort. Downvoting something is a very cheap action, particularly if eapache is voting as if controlling the votes of all other LessWrongers uninterested in AI safety.

I like the handle "the context of the site", and your final parenthetical paragraph.

comment by G Gordon Worley III (gworley) · 2020-06-12T01:44:47.183Z · LW(p) · GW(p)

You seem to be rejecting a position I'm not taking, possibly because I didn't explain it in a maximally clear way.

I'm not saying to vote up/down things you think others will like/dislike, I'm saying vote up/down the things you want other people to read/not read.

Notice how this is not the same as voting up/down what you like/dislike or what you personally want to read/not read or what you think others will like/dislike or what you think others will themselves want to read/not read. I'm saying think of it as saying "I want/don't want this to be seen by others".

Given this framing I end up rarely downvoting things, mostly reserving my downvote for things that feel like an obvious waste of time for all readers. I upvote lots of things by this criterion, especially including things I disagree with or think are wrong, because they seem worth engaging with. And of course lots of things get no vote from me, because I fail to form a judgement of whether or not they're worth reading.

comment by Ben Pace (Benito) · 2020-06-12T03:00:50.499Z · LW(p) · GW(p)

You seem to be rejecting a position I'm not taking, possibly because I didn't explain it in a maximally clear way.

Gotcha. FYI I wrote and published the comment within the first 5 minutes of waking up this morning, a time in which I write my thoughts more starkly than normal. Not that it's a bad thing.

Even now, I still feel a 'push away strongly' feeling toward the level of focus on others you're suggesting with sentences like "I want/don't want this to be seen by others" and "vote up/down the things you want other people to read/not read". So let me write down a bunch of ways I think about what to vote on that seem different from that.

A lot of the time, I ask myself what incentive this will have on the person whose content I'm voting on, and from time to time I also consider second order effects on what norms others will infer.

I, too, regularly upvote stuff that I (a) disagree with and (b) am not personally interested in reading, because I want to reward people for that content, since it will improve my experience of the site. For the former: many posts do real thinking and give me data even though I don't reach the same conclusions as the author. For the latter: lots of comments do valuable legwork, like math proofs or data collection, or provide a reference people will want in the future.

I also downvote stuff that I agree with if it seems super aggressive or it seems like the person is wasting a lot of space/time for readers (like if they found the site yesterday and today they've written a low-effort bad-grammar confused post).

Reflecting more, I do also consider how much visibility a post will get, especially if it's in the 5-20 karma range and a strong upvote of +9 would give it much stronger visibility. I guess, as we knew all along, karma does many things; the thing you're talking about is one of them, but I think it's misleading to imagine it should be the only one.

comment by frontier64 · 2020-06-11T21:43:01.994Z · LW(p) · GW(p)

Vote aggregation is how we get "this is worth being seen by people reading LW" from "I want to see this." Individuals know a bit more about their own personal preferences than they do about the personal preferences of others. Asking people to judge the personal preferences of others can only lead to a decline in accuracy of reporting.

I think this shift from personal preference to curating content for others is likely to produce votes that reflect what is worth reading when a person comes to the site, rather than what people on LW happen to like.

I think the point is that what people on LW like should be worth reading. I can imagine a few different general situations and only in the most ridiculous situation does changing the voting basis from "what you want to see/don't want to see" to "what you think the community should read/should not read" improve the content.

Situation 1: LW users have good taste and want to see more worthy posts.

In this case the switch would at best cause no change and at worst decrease the quality of posts, because users may be worse at judging the opinions of others than at judging their own.

Situation 2: LW users are unable to judge a post's worthiness.

If the users aren't able to judge worthiness, voting on what they think is worthy can't actually improve their score.

Situation 3: LW users have bad taste and what they want to see has little/negative correlation with worthiness, but they can still judge worthiness well if asked.

In this case switching the voting system would be effective. But this sounds ridiculous on its face! How could someone capable of judging worthiness not actually prefer worthy content to unworthy content?

comment by Pattern · 2020-07-20T21:30:52.808Z · LW(p) · GW(p)

There might be multiple dimensions of quality, or types of content that require an investment of time (to appreciate or recognize quality), which not everyone currently wants to make. See here [LW(p) · GW(p)], from further up the page: LW users aren't homogeneous with respect to taste, and the population of a topic-related group can affect its members' ability to judge.

Situation 3: LW users have bad taste and what they want to see has little/negative correlation with worthiness, but they can still judge worthiness well if asked.

Basically this, with 'and time investment is required to understand X, like the Sequences (except hopefully it takes less time)'.

(Perhaps this could be solved via recognition of different types (or groups) which can be filtered on, and tools for doing so.)

Does that make sense?

comment by frontier64 · 2020-07-24T20:06:53.372Z · LW(p) · GW(p)

I understand this position and it's totally relevant to the question of when to downvote. However, I don't think it has much relevance to the question of when a user should upvote. If a person isn't interested in certain genres of topic, downvoting every post on one of those topics wouldn't improve discourse; it would lead to uniformity of topics. Only the few topics that more people (accounting for karma weights) are interested in than uninterested in would remain more upvoted than downvoted. However, with the current system most people understand that this situation is exactly what the novote is for. If one doesn't have any interest in AI research, one should filter those posts where one can and disregard them where one can't.

I like the idea of automatically figuring out what topic a post is about based on the upvote, novote, and downvote patterns of different users. Maybe some combination of that and the topic tags on posts could lead to a different, individualized karma system: votes from users with similar interests in topics would carry more weight for each other than for users with disparate interests. Seems a little echo-chambery, but I see value in the idea.
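That individualized karma idea could be sketched as similarity-weighted vote aggregation. This is only an illustration of the concept, not anything LW actually implements; the cosine-similarity choice and all names here are my own assumptions:

```python
from math import sqrt

def cosine(u, v):
    """Cosine similarity between two users' vote histories (dicts: post id -> vote)."""
    shared = set(u) & set(v)
    dot = sum(u[p] * v[p] for p in shared)
    norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

def personalized_karma(post_votes, viewer, all_votes):
    """Score one post as seen by `viewer`: each vote on the post is weighted by
    how similar that voter's overall voting pattern is to the viewer's.
    Voters with anticorrelated tastes are ignored rather than inverted."""
    viewer_history = all_votes[viewer]
    return sum(
        max(cosine(viewer_history, all_votes[voter]), 0.0) * vote
        for voter, vote in post_votes.items()
    )
```

Voters whose history correlates with the viewer's count at nearly full weight, while dissimilar voters contribute nothing, which is exactly where the echo-chamber worry comes from.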

I do see a bit of an incongruity between what you're describing and the comment you linked to, which I can't square. In actuality, eapache seems able to see the value in AI research topics but is nonetheless uninterested in the topic himself. But what you're describing would lead to eapache being unable to discern value, or the lack of it, in AI research topics, because he's uninterested and thus hasn't invested the time needed to appreciate them.

comment by Pattern · 2020-07-25T02:14:19.831Z · LW(p) · GW(p)

I understand this position and it's totally relevant to the question of when to downvote. However, I don't think it has much relevance to the question of when a user should upvote.

Yes.

In actuality, eapache seems to have the ability to see the value in AI research topics, but regardless is uninterested in the topic himself.

Yes.

what you're describing would lead to eapache not being able to discern value or lack of value in AI research topics because he's uninterested and thus hasn't invested the time to be able to appreciate them.

Perhaps. I might be able to appreciate AI research topics, while

  • maybe not appreciating (all of) them as much
  • or not being as good at detecting flaws as, say, an expert

comment by eapache (evan-huus) · 2020-07-25T11:02:26.117Z · LW(p) · GW(p)

The new tagging system actually works really well for this. I set AI to have a moderate negative homepage modifier, and still get to see the top AI posts but mostly only those.

comment by paulfchristiano · 2020-07-02T16:03:28.423Z · LW(p) · GW(p)
(including and indeed especially content that I mostly agree with)

In retrospect this was too self-flattering. Plenty of the stuff I don't want to see expresses ideas that I agree with, but the majority expresses ideas I disagree with.

comment by eapache (evan-huus) · 2020-06-11T01:02:55.950Z · LW(p) · GW(p)

It’s surprising to me because it feels very different from other similar voting systems, and I was expecting people to carry over habits from those places.

Maybe controversial was the wrong word. It still feels like "actively want to see less of" is a much stronger reaction than my default to posts I didn't think were particularly great. But quite possibly that's on me. It's also surprising to me that there is so little consensus on whether people want more or less of certain posts.

comment by Raemon · 2020-06-11T02:55:22.863Z · LW(p) · GW(p)

This blogpost [LW · GW] is one of the defining cultural elements of LessWrong, which may be a bit relevant. 

answer by Zvi · 2020-07-02T17:24:46.451Z · LW(p) · GW(p)

My model for how people *actually* vote, in practice, is they ask the question: "Does this post have too much or too little Karma on it right now?"

It's not "do I want to see less/more of this?"; it's "do I want to see less/more of this than the current vote implies?"

Thus, posts generally get a lot of their steady-state karma quickly (assuming they are going to get any), then voting starts to stabilize, and at the end you get an almost equal mix of plus and minus.

My prediction is that if I were to withdraw my own upvote from my posts once they got 20+ karma, the final karma numbers would change by less than half the amount of my vote.
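Zvi's model can be simulated crudely. The thermostat-style update rule and all the numbers here are my own assumptions, purely to illustrate the claimed dynamics:

```python
import random

def simulate(targets, author_vote=3, rounds=200, seed=0):
    """Each voter has a personal target score for the post. On each arrival a
    random voter compares the current total to their target: upvote if the post
    is below it, downvote if above, abstain if it matches. Voters may revise
    their vote on later arrivals, so we track each voter's current vote."""
    rng = random.Random(seed)
    votes = {"author": author_vote}
    for _ in range(rounds):
        voter = rng.randrange(len(targets))
        score = sum(votes.values())
        if score < targets[voter]:
            votes[voter] = 1
        elif score > targets[voter]:
            votes[voter] = -1
        else:
            votes[voter] = 0
    return sum(votes.values())
```

Under this rule the score hovers at the voters' consensus target, so changing the author's initial self-vote barely moves the final number, which is the prediction being made.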

comment by Pongo · 2020-07-02T21:46:21.421Z · LW(p) · GW(p)

Indeed, recently I saw a post and thought, "I really hope I don't see a lot of this. Perhaps I should downvote", but saw that it had low karma and was oldish, so decided against pushing it further down.

This suggests I may be asking "do I want to see less/more of this than my prediction of its karma total implies". Which is perhaps silly if I can make little difference to the steady state score.

answer by ChristianKl · 2020-06-11T22:09:29.189Z · LW(p) · GW(p)

Most other vote-based websites are run by businesses that want a lot of traffic and prefer quantity over quality. 

LessWrong keeps up relatively high norms for content by downvoting a lot of posts that are neither spam nor trolling.
