Rationality tip: Predict your comment karma

post by Will_Newsome · 2011-09-14T05:07:41.361Z · LW · GW · Legacy · 51 comments

For the last few months I've taken up the habit of explicitly predicting how much karma I'll get for each of my contributions on LW. I picked up the habit of doing so for Main posts back in the Visiting Fellows program, but I've found that doing it for comments is way more informative.

It forces you to build decent models of your audience and their social psychology, the game theoretic details of each particular situation, how information cascades should be expected to work, your overall memetic environment, etc. It also forces you to be reflective and to expand on your gut feeling of "people will upvote this a lot" or "people will downvote this a little bit"; it forces you to think through more specifically why you expect that, and how your contributions should be expected to shape the minds of your audience on average.

It also makes it easier to notice confusion. When one of my comments gets downvoted to -6 when I expected -3, I know some part of my model is wrong; or, as is often the case, it will get voted back up to -3 within a few hours.

Having powerful intuitive models of social psychology is important for navigating disagreement. It helps you realize when people are agreeing or disagreeing for reasons they don't want to state explicitly, why they would find certain lines of argument more or less compelling, why they would feel justified in supporting or criticizing certain social norms, what underlying tensions they feel that cause them to respond in a certain way, etc, which is important for getting the maximum amount of evidence from your interactions. All the information in the world won't help you if you can't interpret it correctly.

Doing it well also makes you look cool. When I write from a social psychological perspective I get significantly more karma. And I can help people express things that they don't find easy to explicitly express, which is infinitely more important than karma. When you're taking into account not only people's words but the generators of people's words you get an automatic reflectivity bonus. Obviously, looking at their actual words is a prerequisite and is also an extremely important habit of sane communication.

Most importantly, gaining explicit knowledge of everyday social psychology is like explicitly understanding a huge portion of the world that you already knew. This is often a really fun experience.

There are a lot of subskills necessary to do this right, but maybe doing it wrong is also informative, if you keep trying.
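The bookkeeping behind the habit described above is trivial to automate. A minimal sketch of what a tracker might look like (the names, data format, and comment ids are all made up for illustration; nothing here is from the post):

```python
# Hypothetical sketch: log an explicit karma prediction for each
# comment, record the actual outcome later, and look at the error
# to see where your model of the audience is off.

predictions = []  # list of (comment_id, predicted_karma)
outcomes = {}     # comment_id -> actual karma

def predict(comment_id, karma):
    """Register an explicit prediction before (or when) posting."""
    predictions.append((comment_id, karma))

def record(comment_id, karma):
    """Record the karma the comment actually ended up with."""
    outcomes[comment_id] = karma

def mean_absolute_error():
    """Average |predicted - actual| over all resolved predictions."""
    errors = [abs(p - outcomes[cid]) for cid, p in predictions if cid in outcomes]
    return sum(errors) / len(errors)

predict("c1", -3)
record("c1", -6)   # the surprise from the post: expected -3, got -6
predict("c2", 4)
record("c2", 5)
print(mean_absolute_error())  # 2.0
```

A persistently large error on one kind of comment is exactly the "notice confusion" signal the post is after.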

51 comments

Comments sorted by top scores.

comment by Solvent · 2011-09-14T10:22:51.076Z · LW(p) · GW(p)

Why do you make a comment if you expect it to have net negative karma? I've always felt that I strongly agreed with how the community votes on things. If it has three downvotes, it's probably not worth seeing.

Do you think that Less Wrong is really that bad at voting that some things need to be said which we'll downvote anyway?

I would have thought that "will be downvoted" is fairly close to "should not post."

Replies from: Jack, ShardPhoenix, wedrifid, Richard_Kennaway, Will_Newsome
comment by Jack · 2011-09-14T15:27:41.000Z · LW(p) · GW(p)

If it has three downvotes, it's probably not worth seeing.

I find that seeing a comment with a lot of down-votes has the exact opposite effect on me. "Six down-votes! What crazy half-formed idea is Will_Newsome talking about now!?"

The quality-control benefits of down-voting are mostly deterrence, which requires that people feel good about up-votes and bad when they get down-voted.

(Just teasing you, Will)

Replies from: Will_Newsome, komponisto
comment by Will_Newsome · 2011-09-14T16:09:19.419Z · LW(p) · GW(p)

The quality-control benefits of down-voting are mostly deterrence, which requires that people feel good about up-votes and bad when they get down-voted.

Which is pretty dangerous, on the whole. It's reinforcement learning for meshing with local preconceptions. Luckily local preconceptions 'round these parts are pretty top notch relatively speaking, but unluckily they're really suboptimal objectively speaking. This is extra dangerous for people like me who do a lot of associative learning. Please everyone, do be careful when it comes to letting people reward or punish you for thinking in certain ways.

Replies from: Jack
comment by Jack · 2011-09-14T16:41:16.572Z · LW(p) · GW(p)

It is, in principle, dangerous. But the vast majority of down-votes are for rhetorical and stylistic violations rather than unpopular beliefs. Usually, my comments that contradict local preconceptions just end up with low karma relative to the replies, not negative karma.

I'd say this goes for a lot of your down-votes, too, especially if we include 'paying insufficient attention to likely inferential distances' as a rhetorical violation. Though of course at this point a number of people may be especially sensitive to your comment quality, and mediocre comments that would usually be ignored get down-voted if your name is attached.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-09-15T02:00:20.383Z · LW(p) · GW(p)

Thinking about what standards you should hold yourself to when it comes to choosing rhetoric and style is also an important kind of thinking, though. Like, using the phrase "it seems to me as if" habitually is a cheap way to get karma, but it's also a good habit of thought. But sometimes my rhetoric is negatively reinforced when the only other option I had was in some should-world where I wasn't prodromally schizophrenic; so by letting people's downvotes affect my perception of what I should or shouldn't have been able to do, it's like I'm implicitly endorsing an inaccurate model of how justification should work, or of how my mind is structured, or of how people should process justification when they have uncertainty about how others' minds are structured. Policies spring from models, and letting policies be punished based on inaccurate models is like saying it's okay to have inaccurate models. (Disclaimer: Policy debates always have fifteen trillion sides and fifteen gazillion ways to go meta; this is just one of them.)

Replies from: Will_Newsome
comment by Will_Newsome · 2011-09-15T02:04:50.934Z · LW(p) · GW(p)

(Note that it doesn't necessarily matter that beliefs and policies tend to be mutually supporting rationalizations linked only in the mind of the believer.)

comment by komponisto · 2011-09-14T22:43:27.366Z · LW(p) · GW(p)

I find that seeing a comment with a lot of down-votes has the exact opposite effect on me. "Six down-votes! What crazy half-formed idea is Will_Newsome talking about now!?"

Fittingly, this comment currently has six upvotes. :-)

comment by ShardPhoenix · 2011-09-14T11:50:15.174Z · LW(p) · GW(p)

There has been at least one occasion when I've posted something despite correctly expecting it to be downvoted. In that case it was a topic where I felt the LW community was letting politeness/pro-social-ness get in the way of rationality/actually-being-right.

Replies from: Jack, Metus
comment by Jack · 2011-09-14T15:34:38.057Z · LW(p) · GW(p)

I'd love to see a collection of the lowest karma comments of high karma contributors.

Replies from: Wei_Dai, Jack, prase
comment by Wei Dai (Wei_Dai) · 2011-09-19T11:06:42.993Z · LW(p) · GW(p)

My script for downloading all comments of a user has been updated to allow sorting by votes. After sorting, scroll to the bottom to see the lowest karma comments.
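Wei_Dai's actual script isn't reproduced here, but the sort-by-votes step he describes amounts to something like the following sketch (the field names and data format are assumptions for illustration, not taken from his code):

```python
# Hypothetical sketch: sort a user's downloaded comments by karma,
# so that scrolling to the bottom shows the lowest-karma comments.
# Assumes comments have already been fetched into a list of dicts
# with "score" and "body" keys (illustrative field names).

def sort_by_votes(comments):
    """Return comments ordered from highest to lowest karma."""
    return sorted(comments, key=lambda c: c["score"], reverse=True)

comments = [
    {"score": 12, "body": "a well-received comment"},
    {"score": -6, "body": "a heavily downvoted comment"},
    {"score": 3,  "body": "an ordinary comment"},
]

for c in sort_by_votes(comments):
    print(c["score"], c["body"])
```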

Replies from: Jack
comment by Jack · 2011-09-19T11:56:55.851Z · LW(p) · GW(p)

Awesome! Thanks.

Going through some now. Preliminary findings: looking at the most downvoted comments a user ever made is a really good way to induce attribution bias. ("You people are jerks!")

comment by Jack · 2011-09-15T08:51:19.214Z · LW(p) · GW(p)

This was not the karma I predicted for this comment.

comment by prase · 2011-09-14T20:00:52.918Z · LW(p) · GW(p)

Me too, but even more interesting would be the lowest karma comments of mediocre contributors. High karma contributors are rarely downvoted when they formulate an idea, because people suppose that even if it sounds crazy it must not be, since it originated from an elite contributor. Therefore I suppose that the lowest karma comments of high karma contributors would mostly be trivial snarky remarks, downvoted for incompatible senses of humor. At least this is my hypothesis; let it be tested, if we can collect such comments somehow.

Replies from: wedrifid
comment by wedrifid · 2011-09-15T13:08:41.289Z · LW(p) · GW(p)

Therefore I suppose that the lowest karma comments of high karma contributors would mostly be trivial snarky remarks, downvoted for incompatible senses of humor. At least this is my hypothesis; let it be tested, if we can collect such comments somehow.

In response to Jack's expression of interest I used Wei_Dai's script to download all my comments and had a search through with some regex. By approximate count, the greatest number of downvoted comments were jests of the type you mentioned, followed by comments of the form "I don't approve of the grandparent either, but your specific criticism Y is wrong for this logical reason". The lowest vote that I spotted was -6, for a comment along the lines of "I fundamentally disagree with your accusations of me and do not wish to continue this conversation".

The selection here is somewhat biased, inasmuch as I am comfortable deleting comments if for any reason a conversation is unsatisfactory to me. There are quite possibly comments or jokes that would have gone into free-fall if I had not deleted them when they reached -3 in 10 seconds flat. The downvoted comments that remain I either still endorse, consider important for the conversation to make sense, haven't noticed, or don't care enough about to click on. (This isn't to say that deleting a comment indicates that I do not endorse it entirely; I also have no problem with choosing my battles.)

I'm afraid Jack would be disappointed, in that few of the most downvoted comments are about object-level subject matter. Or, if they are, it is object-level conversation about something that people are... passionate about. So it isn't a source of the ideas of mine that people most disagree with, which would have been interesting to see!

comment by Metus · 2011-09-14T15:17:47.820Z · LW(p) · GW(p)

Can you please give us more information? Or even better, link to the actual instance.

comment by wedrifid · 2011-09-14T15:28:11.956Z · LW(p) · GW(p)

I would have thought that "will be downvoted" is fairly close to "should not post."

It is certainly close to 'best not to post'. There are times when posting things that you know will be negatively received is worth doing anyway, but it doesn't work well if you try it too frequently. You end up with a reputation as a crackpot at best, either in general or on one specific topic. 'Qualia' and 'quantum monadology' spring to mind as past examples. Will is risking getting his own reputation on the subject of theism.

Replies from: lessdazed, Will_Newsome
comment by lessdazed · 2011-09-14T19:27:28.143Z · LW(p) · GW(p)

There are times when posting things that you know will be negatively received is worth doing anyway, but it doesn't work well if you try it too frequently. You end up with a reputation as a crackpot

You end up with a reputation as a poor communicator. Unpopular but non-obviously stupid ideas that a person (or paperclip maximizer, etc.) tries to articulate are not punished, like here. That came to mind since I participated in the conversation but there are many other examples on LW.

Replies from: wedrifid
comment by wedrifid · 2011-09-14T19:36:34.353Z · LW(p) · GW(p)

You end up with a reputation as a poor communicator.

What? No you don't. You end up with a reputation as a poor communicator when you communicate poorly, which is a different thing altogether.

Replies from: lessdazed
comment by lessdazed · 2011-09-14T19:52:45.766Z · LW(p) · GW(p)

I am assuming that non-stupid ideas well communicated will not be negatively received.

comment by Will_Newsome · 2011-09-14T15:48:02.976Z · LW(p) · GW(p)

I'm pretty sure I passed that threshold a while ago. At least, many of my comments get systematically downvoted without being read these days. ETA: And not just ones that have to do with "theism" (more like theology).

Replies from: prase
comment by prase · 2011-09-14T19:52:36.706Z · LW(p) · GW(p)

There is not one reputation on a forum like this, where people don't engage in gossip about other users; you have as many reputations as there are users here. So even if somebody is downvoting your comments without reading them (by the way, are you sure about that?), it still doesn't mean that you can't lose more reputation.

Replies from: Will_Newsome, lessdazed
comment by Will_Newsome · 2011-09-14T23:30:31.978Z · LW(p) · GW(p)

(by the way, are you sure about that?)

Yes. (It would be a weird hypothesis for me to come up with on little evidence and then assert confidently.)

I wasn't claiming I can't lose more reputation.

Replies from: shminux
comment by Shmi (shminux) · 2011-09-14T23:59:01.021Z · LW(p) · GW(p)

How did you manage to test it?

Replies from: Will_Newsome
comment by Will_Newsome · 2011-09-15T00:03:34.316Z · LW(p) · GW(p)

Refreshed the page every 5 seconds. If all my comments get downvoted at once, that is strong evidence that they weren't actually read, especially if it happens more than once.

There's also a precedent, I dunno if you saw my discussion post about karmassassination.

comment by lessdazed · 2011-09-14T20:01:50.986Z · LW(p) · GW(p)

I am considering making a thread in which people type, for reinforcement, a sentence emphasizing how little they know about votes they receive. What do you think of "I do not know why my comment got the votes it got"? It doesn't reflect partial knowledge or educated guesses enough to be perfect; can you think of anything better?

Replies from: handoflixue
comment by handoflixue · 2011-09-14T22:40:06.297Z · LW(p) · GW(p)

"I do not know why my comment got the votes it got"

The point of Bayesian thinking is that you should have an idea why things are happening. If you genuinely don't know why your comments are getting the votes they do, then ask. This is not a shy forum. You'll build up a few data points and can resume being a competent Bayesian with a pretty good idea of why you're getting the votes you do.
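The "build up a few data points" idea can be made concrete with a toy model. One hedged way to do it, under the simplifying assumption (mine, not handoflixue's) that each vote on a comment is an independent trial:

```python
# Hypothetical sketch: keep a Beta posterior over the probability
# that a reader who votes on your comments will upvote, updating
# it as vote data points accumulate. Prior parameters below are
# an illustrative uniform prior, Beta(1, 1).

def update_upvote_belief(upvotes, downvotes, prior_a=1, prior_b=1):
    """Posterior mean of P(upvote) under a Beta(prior_a, prior_b) prior."""
    a = prior_a + upvotes
    b = prior_b + downvotes
    return a / (a + b)

# A comment sitting at +2 from 5 upvotes and 3 downvotes:
print(round(update_upvote_belief(5, 3), 2))  # 0.6
```

With no data the estimate sits at the prior's 0.5; each asked-about or observed vote nudges it, which is the "few data points" effect in miniature.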

comment by Richard_Kennaway · 2011-09-14T14:03:53.833Z · LW(p) · GW(p)

Do you think that Less Wrong is really that bad at voting that some things need to be said which we'll downvote anyway?

It doesn't have to be bad at judging, to judge some things wrongly.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-09-14T14:47:39.022Z · LW(p) · GW(p)

(50% good/bad individual judgments seems natural as a baseline for the overall good/bad judgment cutoff, but personally-empirically I think it might be more like 75% (I guess due to selection effects of some kind). Sorta like school grades. Do others have other impressions?)

comment by Will_Newsome · 2011-09-14T13:47:09.057Z · LW(p) · GW(p)

A question of moral philosophy?

Because Less Wrong's extrapolated volition would have upvoted it, and if you didn't post it anyway then Less Wrong's extrapolated volition would be justified in getting mad at you for having not even tried to help Less Wrong's extrapolated volition to obtain faster than it otherwise would have (by instantiating its decision policy earlier in time, because there's no other way for the future to change the present than by the future-minded present thinkers' conscious invocation).

Because definitions of reasonableness get made up after the fact as if teleologically, and it doesn't matter whether or not your straightforward causal reasons seemed good enough at the time, it matters whether or not you correctly predict the retrospective judgment of future powers who make things up after the fact to apportion blame or credit according to higher level principles than the ones that appeared salient to you, or the ones that seemed salient as the ones that would seem salient to them.

This is how morality has always worked, this is how we ourselves look back on history, judging the decisions of the past by our own ideals, whether those decisions were made by past civilizations or past lovers. This pattern of unreasonable judging itself seems like an institution that shouldn't be propped up, so there's no safety in self-consistency either. And if you get complacent about the foundational tensions here, or oppositely if you act rashly as a result of feeling those tensions, then that itself is asking to be seen as unjustified in retrospect.

And if no future agent manages to become omniscient and omnibenevolent then any information you managed to propagate about what morality truly is just gets swallowed by the noise. And if an omniscient and omnibenevolent agent does pop up then it might be the best you can hope for is to be a martyr or a scapegoat, and all that you value becomes a sacrifice made by the ideal future so that it can enter into time. Assuming for some crazy reason that you were able to correctly intuit in the first place what concessions the future will demand that you had already made.

You constantly make the same choices as Sophie and Abraham, it's just less obvious to you that you're making them, less salient because it's not your child's life on the line. Not obviously at this very moment anyway.

Go meta, be clever.

Replies from: Desrtopa, cousin_it, Will_Newsome
comment by Desrtopa · 2011-09-14T15:26:06.927Z · LW(p) · GW(p)

In other words, when everyone thinks you're wrong, do it anyway because you're sure it's right and they'll come around eventually?

This has been the foundation for pretty much all positive social disobedience, but it's wrong a lot more often.

People who disagree with mainstream opinions here, but do so for well articulated and coherent reasons are usually upvoted. If you think something will be downvoted, I think you should take very seriously the idea that it's either very wrong or you're not articulating it well enough for it to be useful.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-09-14T15:40:00.141Z · LW(p) · GW(p)

In other words, when everyone thinks you're wrong, do it anyway because you're sure it's right and they'll come around eventually?

No.

comment by cousin_it · 2011-09-20T21:07:20.796Z · LW(p) · GW(p)

And if no future agent manages to become omniscient and omnibenevolent then any information you managed to propagate about what morality truly is just gets swallowed by the noise. And if an omniscient and omnibenevolent agent does pop up then it might be the best you can hope for is to be a martyr or a scapegoat, and all that you value becomes a sacrifice made by the ideal future so that it can enter into time.

I still don't see the point of writing obfuscated comments, though. If serving a possible future god is your cup of tea, it seems to me that making your LW comments more readable should help you in reaching that goal. If that demands sacrifice, Will, could you please make that sacrifice?

Replies from: pedanterrific
comment by pedanterrific · 2011-09-20T21:15:18.015Z · LW(p) · GW(p)

Okay, I haven't even been here that long and I'm already getting tired of this conversation.

comment by Will_Newsome · 2011-09-14T23:27:15.354Z · LW(p) · GW(p)

Did no one understand this? (Desrtopa was off by like 3 meta levels.)

Replies from: Will_Newsome
comment by Will_Newsome · 2011-09-15T02:54:22.888Z · LW(p) · GW(p)

Do people at least feel guilty about upvoting Desrtopa's comment to +7 despite it being off by like 3 meta levels? Someone, anyone? User:wedrifid, do you see what I'm saying? User:Vladimir_Nesov?

Replies from: wedrifid
comment by wedrifid · 2011-09-15T06:53:59.228Z · LW(p) · GW(p)

I think Desrtopa may have missed a level or two. For example, downvotes do not only represent an evaluation of whether a given comment is useful for Less Wrong, and while votes are always evidence of something, they are not strictly evidence that you are wrong. On the other hand, there is something to the message he is attempting to convey that applies to a subset of the comments you write knowing that you will be downvoted.

You do write some comments that you know will be disapproved of, that you know will be considered incomprehensible, and that you know others will think are wrong. If you reason in these cases that "Less Wrong's extrapolated volition would have upvoted it", you are saying that everyone else is wrong and that you are right. This means some combination of:

  • You think voters who downvote you will be doing so for political reasons - political reasons that Less Wrong's extrapolated volition will not respect.
  • If people don't understand you then that is their fault and not yours. Perhaps because they should consider your say so sufficiently important that they go and do background reading until they can piece together your meaning. Perhaps because you believe they are not trying hard enough to understand you due to their biases against topics that are incorrectly considered 'enemy' memes.
  • You think those who do actually make the effort to parse your comment and still disagree with you are wrong, because you are better at rational philosophy than they are.

Now, obviously you should expect me to disagree with you about whether it is good for you to make certain comments. This is the inevitable result of having different priors. I disagree with you about several premises which impact the value-of-comment evaluation. These relate to what the implications of TDT are on morality and the degree to which preferences of humans would be convergent when undergoing extrapolation.

There is another meta-consideration that I expect you have made. That is, when you think something is a good idea, know that other intelligent people think it is a bad idea, and are able to update on their belief such that you are no longer as confident in your own, it can sometimes still be useful to express your idea anyway. This prevents premature convergence, allows more of the search space to be explored, and distributes attention somewhat closer to what the ideas deserve.

It is somewhat harder to apply the meta-consideration mentioned in the previous paragraph to comments along the lines of "I'm just so many levels above you, I don't care enough to write more clearly, and if you downvote me it is just because you're political. Screw you all!" (if you'll pardon the wedrifid-speak). When saying that and expecting to be downvoted, you have an implied disagreement on the meta-level benefit of people being told that they should respect you more or execute different political actions. That kind of say-it-even-if-they'll-hate-it decision is less often a good one than say-it-even-if-they'll-hate-it object-level comments.

That is a non-exhaustive list of the most obvious of the relevant meta stuff. It is somewhat frustrating that it took that much text to express thoughts that flew through my head in about five seconds. The vocabulary of English, or my own, is far more limited than the mental constructs.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-09-15T08:31:19.501Z · LW(p) · GW(p)

That is a non-exhaustive list of the most obvious of the relevant meta stuff. It is somewhat frustrating that it took that much text to express thoughts that flew through my head in about five seconds. The vocabulary of English or myself is far more limited than the mental constructs.

It's like trying to describe in words how you play a song on the guitar. Awkward, effortful, and unless people already know what song it is you're trying to describe it'll probably just sound like nonsense.

Thanks for at least temporarily restoring my faith in humanity, User:wedrifid. There's of course a lot of stuff you didn't hit on, but that's not because you couldn't.

Replies from: wedrifid, Will_Newsome
comment by wedrifid · 2011-09-15T08:58:55.495Z · LW(p) · GW(p)

Thanks for at least temporarily restoring my faith in humanity, User:wedrifid. There's of course a lot of stuff you didn't hit on, but that's not because you couldn't.

Thank you. I pride myself on being able to hit on all sorts of things.

comment by Will_Newsome · 2011-09-15T08:31:39.028Z · LW(p) · GW(p)

"Have You Got It, Yet?" is an unreleased song written by Barrett during the short time in which Pink Floyd was a five-piece. At the time, David Gilmour had been asked to join as a fifth member and second guitarist, while Barrett, whose mental state and difficult nature were creating issues with the band, was intended to remain home and compose songs, much as Brian Wilson had done for The Beach Boys. Barrett's unpredictable behaviour at the time and idiosyncratic sense of humour combined to create a song that, initially, seemed like an ordinary Barrett tune. However, as soon as the others attempted to join in and learn the song, Barrett changed the melodies and structure, making it impossible for the others to follow, while singing the chorus "Have you got it yet?" and having the rest of the band answer "No, no!". This would be his last attempt to write material for Pink Floyd before leaving the band. In fact, Roger Waters stated, in an interview for The Pink Floyd and Syd Barrett Story, that upon realizing Syd was deliberately making the tune impossible to learn, he put down his bass guitar, left the room, and never attempted to play with Syd again.

So the question is: who here is Roger, and who here is Syd? Good day.

comment by Bongo · 2011-09-16T01:48:13.161Z · LW(p) · GW(p)

If you really can predict your karma, you should post encrypted predictions* offsite at the same time as you make your post, or use some similar scheme so your predictions are verifiable.

Seems obviously worth the bragging rights.

* A prediction is made up of a post id, a time, and a karma score, and means that the post will have that karma score at that time.
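Bongo's suggestion is a standard hash commitment. A sketch of what it might look like (function names and the prediction encoding are illustrative; the prediction fields follow the footnote, and the salt guards against brute-forcing the small space of plausible predictions):

```python
# Hypothetical sketch of a verifiable karma-prediction commitment:
# publish the digest now, reveal the salt and prediction at the
# appointed time so anyone can check the hash matches.
import hashlib
import secrets

def commit(post_id, check_time, predicted_karma):
    """Return (public digest, private reveal material)."""
    salt = secrets.token_hex(16)
    prediction = f"{post_id}|{check_time}|{predicted_karma}"
    digest = hashlib.sha256((salt + prediction).encode()).hexdigest()
    return digest, (salt, prediction)

def verify(digest, salt, prediction):
    """Check a revealed prediction against the earlier commitment."""
    return hashlib.sha256((salt + prediction).encode()).hexdigest() == digest

# Will's actual prediction for this post, in this toy encoding
# (the post id is invented):
digest, (salt, prediction) = commit("lw-karma-tip", "2011-10-14", 8)
assert verify(digest, salt, prediction)
```

Posting only `digest` offsite at prediction time, then revealing `salt` and `prediction` later, gives the bragging rights without letting the predictor cheat.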

comment by Will_Newsome · 2011-09-14T23:34:32.543Z · LW(p) · GW(p)

So I guess now is a decent time to reveal that I'd predicted this post would receive 8 karma.

Replies from: satt, KPier
comment by satt · 2011-09-15T03:37:19.381Z · LW(p) · GW(p)

After how long?

Replies from: Will_Newsome
comment by Will_Newsome · 2011-09-15T03:57:58.803Z · LW(p) · GW(p)

A month or so. So, October 14th.

comment by KPier · 2011-09-15T00:03:30.765Z · LW(p) · GW(p)

That makes me tempted to downvote it so you'll be right. Which isn't the point, so I did what I originally intended and upvoted.

comment by Will_Newsome · 2011-09-14T05:51:38.188Z · LW(p) · GW(p)

(Suggestion: Vote on likelihood ratios, not posterior probabilities. (I find it unlikely that this post deserves to be downvoted on its own merits. The continued political downvoting saddens me.))

Replies from: Metus
comment by Metus · 2011-09-14T15:19:06.605Z · LW(p) · GW(p)

I am new to LessWrong, so can you please explain why you should be downvoted?

Replies from: orthonormal, Will_Newsome
comment by orthonormal · 2011-09-14T21:56:57.234Z · LW(p) · GW(p)

Will Newsome thinks he's a rationalist hipster. See this comment thread if you really want to know.

Replies from: Will_Newsome, pedanterrific
comment by Will_Newsome · 2011-09-14T23:25:40.232Z · LW(p) · GW(p)

Vladimir_Nesov correctly interpreted me in that thread. Everybody else just had a lot of fun taking turns talking about how I am bad at communication (bad at trying to communicate), as if that was something I didn't already have a detailed model of.

comment by pedanterrific · 2011-09-14T23:50:52.402Z · LW(p) · GW(p)

Well, to be fair he kind of is a rationalist hipster, for certain values of 'hipster'.

(No offense, Will.)

comment by Will_Newsome · 2011-09-14T15:53:48.012Z · LW(p) · GW(p)

It's like I'm not even trying to communicate an answer to your question with this comment. And it's like I'm trying to reserve the right to get upset with you if you don't see why this comment is clever.

Replies from: Will_Newsome
comment by Will_Newsome · 2011-09-15T01:41:47.910Z · LW(p) · GW(p)

Person who downvoted me, I am upset with you!!!!!! rah rah rah