April Fools: Announcing: Karma 2.0

post by habryka (habryka4) · 2018-04-01T10:33:39.961Z · LW · GW · 56 comments

Ever since we started work on the new LessWrong, improving the karma system has been a top priority for us.

We started giving users a higher vote-weight when they themselves had accumulated enough karma, and we have plans to implement various more complicated schemes, such as Eigenkarma. But we recently realized that more important than how karma is calculated is how karma actually influences the user experience.
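
(For the curious, "Eigenkarma" refers to a PageRank-style scheme in which votes from trusted users count for more, recursively. The sketch below is purely illustrative: the names and constants are invented for this example, and it is not our actual implementation.)

    // Illustrative Eigenkarma sketch (TypeScript): a user's karma is their
    // stationary weight in the "who-upvotes-whom" graph, found by power
    // iteration, PageRank-style. Everything here is made up for illustration.
    type Upvotes = number[][]; // upvotes[i][j] = how often user i upvoted user j

    function eigenkarma(upvotes: Upvotes, iterations = 50, damping = 0.85): number[] {
      const n = upvotes.length;
      let karma = new Array<number>(n).fill(1 / n); // start everyone out equal

      for (let step = 0; step < iterations; step++) {
        const next = new Array<number>(n).fill((1 - damping) / n);
        for (let i = 0; i < n; i++) {
          const votesGiven = upvotes[i].reduce((a, b) => a + b, 0);
          if (votesGiven === 0) continue;
          for (let j = 0; j < n; j++) {
            // Each vote from i passes a share of i's current karma to j.
            next[j] += damping * karma[i] * (upvotes[i][j] / votesGiven);
          }
        }
        karma = next;
      }
      return karma; // higher value = more trusted voter
    }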

The key purpose of a karma system is to allocate the attention-bandwidth on the site towards users who make the most valuable contributions. Historically we have done this via sorting, but sorting is only a small fraction of how we can possibly allocate attention. And as soon as we realized this, the next step was obvious.

Adjust the size of your content, based on your karma

This means, as you get more karma on the site, your comments literally get larger. At the beginning, you will be a mere dust-speck among giants in the comments, but after you engage with the site for a bit, your words can tower over those of your contemporaries.
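
(To make the mechanism concrete, here is a rough sketch of how karma might be mapped to a font size. The selector, data attribute, and scaling formula are illustrative assumptions, not the code actually running on the site.)

    // Illustrative only: map a commenter's total karma to a font size.
    // A logarithmic scale keeps dust-specks readable and giants merely towering.
    function commentFontSize(karma: number, basePx = 11, maxPx = 44): number {
      const safeKarma = Math.max(karma, 1);
      // Roughly +8px per order of magnitude of karma, capped at maxPx.
      return Math.min(basePx + 8 * Math.log10(safeKarma), maxPx);
    }

    // e.g. commentFontSize(10) ≈ 19px, commentFontSize(10000) ≈ 43px.
    // ".comment" and "data-author-karma" are assumed hooks, not real LessWrong markup.
    document.querySelectorAll<HTMLElement>(".comment").forEach((el) => {
      const karma = Number(el.dataset.authorKarma ?? "1");
      el.style.fontSize = `${commentFontSize(karma)}px`;
    });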

This changes everything. Vertical space on the page is now properly allocated according to our best guess about what content you will want to read. People's influence and history on the site is clearly communicated to anyone skimming a comment thread. Competing with your friends for karma now literally translates to determining who has the larger "D". The positive effects are too numerous to list exhaustively here.

We believe this truly revolutionizes the way karma works on LessWrong, which is why we are proud to call this new system "Karma 2.0". We also believe there are many more promising improvements in this direction. We are soon planning to experiment with coloring your comments green or red based on the ratio of upvotes to downvotes your content received, and with adjusting the line-height of your posts based on our estimate of how vacuous your claims are (to properly signal to the user the correct ratio of content to "hot air"). Stay tuned for the upcoming "Karma 2.1" and "Karma 2.2", which will implement these features.

However, if for some inscrutable reason, you want to disable Karma 2.0, you can do so by editing your profile (click on your username and then click "Edit Account") and checking the "Deactivate Karma 2.0" checkbox.

Signed Oliver Habryka, Ben Pace, Raymond Arnold, and Matthew 'Vaniver' Graves

56 comments

Comments sorted by top scores.

comment by TurnTrout · 2018-04-01T14:49:44.296Z · LW(p) · GW(p)

Frankly, I'm not a fan of this. The main worry that comes to mind is that it's possible for new people to get karma, so their opinions could (conceivably) be read.

comment by jimrandomh · 2018-04-01T17:15:31.082Z · LW(p) · GW(p)

Doesn't go far enough; people with less karma than me are still legible.

comment by MondSemmel · 2018-04-01T15:47:01.521Z · LW(p) · GW(p)

This is fantastic! Are you still collecting feature requests for Karma 3.0? I propose adjusting the default font of each comment based on some combination of karma, upvote ratio, and whether an ML algorithm considers it insightful.

The possibility space of this new feature is endless! To give just one example, if a comment is figuratively incomprehensible, Karma 3.0 could make it literally so, by changing its default font to Wingdings.

Replies from: Benito
comment by Ben Pace (Benito) · 2018-04-01T15:56:41.788Z · LW(p) · GW(p)

Yes please! Give us more feature requests (for the next time we do this).

comment by [deleted] (tommsittler) · 2018-04-01T20:54:25.491Z · LW(p) · GW(p)

This Berkeley Dad TRIPLED His Karma With One Weird Trick (Mods HATE Him!):

  1. Go To Your Profile
  2. Press 'Ctrl' And '+' At The Same Time
  3. Repeat Until Attain Desired Karma Level
Replies from: Luke A Somers
comment by Luke A Somers · 2018-04-02T18:46:42.628Z · LW(p) · GW(p)

Instructions unclear, comment stuck in ceiling fan?

comment by Jacob Falkovich (Jacobian) · 2018-04-02T05:17:21.501Z · LW(p) · GW(p)

Additional formatting suggestions:

  • wHEn sOmeONe's COMmeNt dOESn'T MaKe SEnSE.
  • wehn soenoeme's cmnenomt rlaley dseon't mkae sesne.
  • WHEN A COMMENT IS OVERLY AGGRESSIVE.
  • When the commenter is Italian.
  • כשהכותב יהודי. (When the writer is Jewish.)
  • Когда тот кто пишет скрытый коммунист. (When the writer is a secret communist.)
comment by orthonormal · 2018-04-01T20:41:28.136Z · LW(p) · GW(p)

I for one welcome our new typographical overlords.

comment by Donald Hobson (donald-hobson) · 2019-11-27T23:16:48.104Z · LW(p) · GW(p)

Instead of just voting comments up and down, can we vote comments north, south, east, west, past, and future to make a full 4D voting system? Position the comments in their appropriate position on the screen, using drop shadows to indicate depth. Access the inbuilt compasses on smartphones to make sure the direction is properly aligned. Use the GPS to work out the velocity and gravitational field exposure to make proper relativistic calculations. The comments voted into the future should only show up after a time delay, while those voted into the past should show up before they are posted. A potential feature for Karma.

comment by [deleted] (tommsittler) · 2018-04-01T20:19:00.847Z · LW(p) · GW(p)

This is a good start but you really need to implement differential kerning. Lofty words like 'Behoove' and 'Splendiferous' must be given the full horizontal space commanded by their dignity.

comment by JenniferRM · 2018-04-01T15:24:54.477Z · LW(p) · GW(p)

I'm sure this day will be remembered in history as the day that LessWrong became great again!

comment by tristanm · 2018-04-01T14:44:37.182Z · LW(p) · GW(p)

I would also like to have a little jingle or ringtone play every time someone passes over my comments, please implement for Karma 3.0 thanks

comment by Tetraspace (tetraspace-grouping) · 2018-04-01T14:09:39.584Z · LW(p) · GW(p)

I'd like to report a bug. My comments aren't larger than worlds, which is a pity, because the kind of content I produce is clearly the most insightful and intelligent of all. I'm also humble to boot - more humble than you could ever believe - which is one of the rationalist virtues that any non-tribal fellow would espouse.

Replies from: Kaj_Sotala, ciphergoth
comment by Kaj_Sotala · 2018-04-01T15:46:21.799Z · LW(p) · GW(p)

Yeah, sorry: we tried making some comments literally larger than worlds, but then our world was crushed under one such comment and the people running our simulation had to restore it from an earlier backup. Then we had to promise not to cause such trouble again.

comment by Paul Crowley (ciphergoth) · 2018-04-01T15:11:53.341Z · LW(p) · GW(p)

I notice your words are now larger thanks to the excellence of this comment!

comment by Anonymouse1 · 2018-04-01T12:12:17.440Z · LW(p) · GW(p)

I just made an account to say this: Please do not implement "Karma 2.0" or its successors on GreaterWrong. I do not want text to be as large as possible, I want it to be a particular size. When I go to GreaterWrong I habitually hit "-" three times as soon as the page loads because the default size is too big. This is not a problem, since zooming out does work properly. But if comments were all different sizes then I would probably ignore the biggest comments as well as the smallest just because they'd be annoying. Or I'd simply not read comments at all, just as if GW didn't exist.

Edit after 10 minutes: Naq V sryy sbe lbhe Ncey sbby'f wbxr.

Replies from: Charlie Steiner, SaidAchmiz
comment by Charlie Steiner · 2018-04-02T06:56:46.886Z · LW(p) · GW(p)

Ah, but it did get you to make an account :)

comment by Said Achmiz (SaidAchmiz) · 2018-04-01T21:25:51.679Z · LW(p) · GW(p)

Kidding aside, check out GW’s text size adjustment feature [? · GW]. Your browser will remember the text size you set, and you won’t have to adjust it each time you load a page.

(Caveat: this feature does require Javascript, and doesn’t work in Mozilla-based browsers. My apologies if that makes it unusable for your browsing setup.)

comment by Raemon · 2018-04-01T18:57:54.290Z · LW(p) · GW(p)

But won't the Biggest Luke simply eat all the other users?

comment by ozziegooen · 2018-04-01T17:41:44.507Z · LW(p) · GW(p)

It annoys me that all comments are the same color. At least you could change the shade of gray as well.

Alternatively, mean comments could be displayed in awful vampire-type fonts in red, and naive ones could be displayed in comic sans.

Replies from: eukaryote
comment by eukaryote · 2018-04-01T20:41:01.855Z · LW(p) · GW(p)

A fluid serif/sans-serif font, where the serifs get progressively bigger the more formal your comment is.

comment by namespace (ingres) · 2018-04-01T17:45:16.522Z · LW(p) · GW(p)

Thanks team, this is exactly the sort of work I've come to expect from you. ;)

comment by Unnamed · 2018-04-01T17:02:21.768Z · LW(p) · GW(p)

I expect that I (and many other users) would get more benefit out of this feature if it was more personalized. If I have personally upvoted a lot of posts by a user, then make that user's comments appear even larger to me (but not to other readers). That way, the people who I like would be a "bigger" part of my Less Wrong experience.

It's a bit concerning that you seem not to have considered this possibility. It seems like this sort of personalization would've naturally come under consideration if LW's leadership were paying attention to the state of the art in user experience, like the Facebook news feed.

Replies from: Razanleo
comment by Razanleo · 2018-04-02T16:44:02.841Z · LW(p) · GW(p)

Wouldn't this create an echo chamber where users keep noticing more of what they personally agree with, and consequently less of the rest?

Facebook caters to you, the user. Less Wrong, in my opinion, should revolve around the topic of discussion, not the users participating in it and what they value as individuals.

I think it wouldn't be harmful if activities from users you routinely upvote are made more visible to you in your "main page," "feed" or whatever it's called here. But once you enter an article and an open discussion is set in place, hierarchies should dissolve, not be accentuated. After all, what matters to us beyond a user's personal profile is the quality of their ideas, not who they are.

comment by alkjash · 2018-04-01T16:55:16.910Z · LW(p) · GW(p)

Smart and thoughtful change, what a wonderful Easter surprise!

comment by ChristianKl · 2018-04-01T14:35:41.927Z · LW(p) · GW(p)

So when does our old LW karma get imported to make our posts even larger?

Replies from: Benito
comment by Ben Pace (Benito) · 2018-04-01T21:06:30.649Z · LW(p) · GW(p)

Users who hit 10,000 karma will get their old karma imported.

comment by mingyuan · 2018-04-01T17:00:06.375Z · LW(p) · GW(p)

Please don't upvote me I don't want anyone to hear me

Replies from: Raemon
comment by Raemon · 2020-04-02T00:40:47.765Z · LW(p) · GW(p)

I'm so sorry.

comment by Wei Dai (Wei_Dai) · 2018-04-02T01:24:11.824Z · LW(p) · GW(p)

I had the feature in my "LW Power Reader" of using color and font size to highlight just the metadata line of highly-upvoted comments, and that was very helpful for scanning for especially good comments in the middle of large threads. See here for a screenshot of what it looked like. I suggest that might be useful if LW ever regularly gets hundreds of comments per post again.

comment by Benquo · 2018-04-01T23:09:01.109Z · LW(p) · GW(p)

This seems perverse; the higher my karma, the fewer words I can fit on a page.

Replies from: habryka4
comment by habryka (habryka4) · 2018-04-02T00:06:06.364Z · LW(p) · GW(p)

With great power comes great responsibility. As your influence grows larger, you should be urged to be more careful how you use the attention that you are given :P

comment by [deleted] · 2018-04-01T22:21:31.675Z · LW(p) · GW(p)

I was gonna comment something witty, but actually I just want to measure my D. So here goes

comment by avturchin · 2018-04-01T20:34:11.949Z · LW(p) · GW(p)

Test of my size.

comment by moridinamael · 2018-04-01T17:01:49.403Z · LW(p) · GW(p)

Finally.

comment by JohnGreer · 2018-04-02T04:19:01.854Z · LW(p) · GW(p)

No flashing neon text? A bit disappointing but happy to see refreshing innovation from the rationality community...

comment by Rohin Shah (rohinmshah) · 2018-04-01T21:38:09.869Z · LW(p) · GW(p)

I've noticed that the larger font annoys me enough that I just scroll past it looking for more reasonably-sized fonts, leading to the exact opposite of the desired effect :/

Replies from: Raemon
comment by Raemon · 2018-04-01T22:04:41.324Z · LW(p) · GW(p)

Huh. Looks like Karma 2.0 is a failure. We'll reverse it during our next available office hours.

Replies from: Benito
comment by Ben Pace (Benito) · 2018-04-01T22:14:06.465Z · LW(p) · GW(p)

Gosh-darn it.

comment by gwillen · 2018-04-01T20:16:50.811Z · LW(p) · GW(p)

Posting because my ego feels compelled to see how big my text is.

comment by MakerOfErrors · 2018-04-01T18:43:32.082Z · LW(p) · GW(p)

I'm loving this new Karma system!

Metaculus (a community prediction market for tech/science/transhumanist things) has a similar feature, where comments from people with higher prediction rankings have progressively more golden usernames. The end result is that you can quickly pick out the signal from the noise, and good info floats to the top while misinformation and unbalanced rhetoric sink.

But, karma is more than just a measure of how useful info is. It's also a measure of social standing. So, while I applaud the effort it took to implement this and don't want to discourage such contributions, I'd personally suggest tweaking it to avoid trying to do 2 things at once.

Maybe let users link to Metaculus/PredictionBook/prediction market accounts, and color their usernames based on Brier score?

Then, to handle the social side of things, make the font size of their posts and/or usernames scale with social standing. Maybe make a list of people from highest to lowest on the ladder? You could ask users to judge each other anonymously, or ideally use machine learning to detect submissive gestures and whether or not displays of dominance are recognized by other commenters.

As the power of AI improves exponentially and it learns to detect ever more subtle social cues, the social ranking would become more and more accurate! Eventually, it would be able to tell you your precise social standing, to within ±1 person, and predict exactly what concrete advice to follow if you want to get in front of the person ahead of you. You'd know their name, personality, knowledge-base, etc., and could see exactly what they were doing right that you were doing wrong. It would solve social awkwardness, by removing all the ambiguity and feelings of crippling uncertainty around how we're supposed to be acting!

comment by Yannick_Muehlhaeuser · 2018-04-01T15:02:03.066Z · LW(p) · GW(p)

I agree with others who commented here that the aesthetics of it aren't really that satisfying right now. But I think the system has the potential to be good overall, so I don't really want to turn it off. Maybe the differences should be less extreme?

Replies from: TurnTrout
comment by TurnTrout · 2018-04-01T15:05:10.142Z · LW(p) · GW(p)

This could be a good thing to try. Make it more subtle and also have more levels - I notice that my comments are of the same size as Qiaochu's. That's a little strange, since he has nearly 10x my karma. Honestly, only the karma he earned by commenting on my posts should count. Can the mod team look into this?

comment by Paul Crowley (ciphergoth) · 2018-04-01T14:31:54.211Z · LW(p) · GW(p)

Excellent, my words will finally get the prominence they deserve!

comment by Rafael Harth (sil-ver) · 2018-04-01T21:14:48.705Z · LW(p) · GW(p)

So this...

I assume this is an April Fool's joke, but it's also more than that. I take it as a social experiment of sorts, even if it's an involuntary one. Status is such a monster, it's such a big part of our motivation, that literally attaching a quantitative display of your karma size to posts is just not going to be harmless.

And I'm not that thrilled with the responses here. The biggest advance in how I handle signaling is to be more honest about it, which in this case means acknowledging that the motivation is real. Joking about it seems misguided. I think. Right now, I'm obviously struggling to phrase this post at an appropriate level of humility and am worried I'll miss it, so, clearly, I'm not beyond caring about it. And I do consider status concerns as motivations to be largely harmful, so that's bad.

I think it comes down to this: if it genuinely isn't a big deal to someone and that's why they joke about it, that's fantastic. I've not nearly come that far. If joking about it is mostly about signaling not-caring-about-signaling (counter-signaling?), that's bad – from a bird's-eye view onto the community. And I suspect the latter is almost entirely true.

If it was just done as a mere joke, that also doesn't seem good to me. I'll put it like this: when Robin Hanson was asked about LessWrong (this was in a recent podcast), his reply was that he worries about people mostly using it to look for little signs of affirmation that they are already being rational. In other words, even after whatever progress we've made regarding status, it is still the primary concern for this site's usefulness. So if it was done as a joke, insofar as that means that the people who decided to do it expected everyone to just take it as a joke, then, I think, they were just factually wrong.

This certainly makes me uncomfortable, but I'm also curious to see what comes of it.

Replies from: Wei_Dai
comment by Wei Dai (Wei_Dai) · 2018-04-02T07:34:53.843Z · LW(p) · GW(p)

Your comment made me wonder what humor/joking is about in general (in Robin Hanson's "X is not about Y" sense). It turns out there is a really nice chapter in The Elephant in the Brain about it. Highly recommended. (At least that chapter. I haven't read the rest of the book yet.)

And I do consider status concerns as motivations to be largely harmful, so that’s bad.

If you can make a case for this, I would be very interested to read it. My current view is that a lot of "good" motivations such as altruism and intellectual curiosity are actually linked to status concerns [LW · GW] at a deep level, and since it seems infeasible to get rid of status concerns as a motivation anyway, we should instead try to optimize our social norms and institutions to maximize the good side effects of status seeking, and minimize the bad ones.

Replies from: Kaj_Sotala, sil-ver
comment by Kaj_Sotala · 2018-04-02T09:14:06.297Z · LW(p) · GW(p)

There's also this paper, which I assume the Elephant in the Brain chapter draws on (though I didn't check the references) (emphasis added):

Since the dawn of Western thought, philosophers, scientists, and comedians have tried to explain what makes things funny. Theories of humor, however, tend to suffer from one of two drawbacks. Domain-specific theories, which address narrow sources of humor, such as jokes (Raskin, 1985) or irony (Giora, 1995), are incapable of explaining humor across domains. And general humor theories, which attempt to explain all types of humor by supposing broad antecedents, such as incongruity (Suls, 1972), superiority (Gruner, 1997), or tension release (Freud, 1928), often erroneously predict humor, as in the case of some unexpected tragedies. For example, unintentionally killing a loved one would be incongruous, assert superiority, and release repressed aggressive tension, but is unlikely to be funny. Moreover, most humor theories have difficulty predicting laughter in response to tickling or play fighting in primates (including humans). Consequently, evolutionarily primitive sources of laughter, such as tickling and play fighting (Gervais & Wilson, 2005), are typically treated as distinct from other sources of humor (Provine, 2000; Wyer & Collins, 1992).
Although existing theories do not agree on the specific necessary and sufficient antecedents of humor (Martin, 2007), a broad review of the literature suggests three conditions that facilitate humor. First, theorists since Aristotle have suggested that humor is often evoked by violations, including apparent threats, breaches of norms, or taboo content (Freud, 1928; Gruner, 1997; Provine, 2000; Veatch, 1998). Empirical work confirms that humor is aroused by displays of aggression, hostility, and disparagement (McCauley, Woods, Coolidge, & Kulick, 1983; Zillmann, 1983). For example, primates often laugh when they are play fighting, tickled, or in the presence of other physical threats (Gervais & Wilson, 2005; Provine, 2000).
A second, seemingly contradictory, condition is that humor occurs in contexts perceived to be safe, playful, nonserious, or, in other words, benign (Apter, 1982; Gervais & Wilson, 2005; Ramachandran, 1998; Rothbart, 1973). For example, apparent threats like play fighting and tickling are unlikely to elicit laughter if the aggressor seems serious or is not trusted (Gervais & Wilson, 2005; Rothbart, 1973).
A third condition provides a way to reconcile the first two: Several theories suggest that humor requires an interpretive process labeled simultaneity, bisociation, synergy, or incongruity (Apter, 1982; Koestler, 1964; Raskin, 1985; Veatch, 1998; Wyer & Collins, 1992). That is, humor requires that two contradictory ideas about the same situation be held simultaneously. For example, understanding puns, in which two meanings of a word or phrase are brought together, requires simultaneity (Martin, 2007; Veatch, 1998). Simultaneity, moreover, provides a way to interpret the threats present in play fighting and tickling as benign.
With the exception of Veatch (1998), researchers have not considered these three conditions together. Considered together, however, they suggest an untested hypothesis: Humor is aroused by benign violations. The benign-violation hypothesis suggests that three conditions are jointly necessary and sufficient for eliciting humor: A situation must be appraised as a violation, a situation must be appraised as benign, and these two appraisals must occur simultaneously.
Violations can take a variety of forms (Veatch, 1998). From an evolutionary perspective, humorous violations likely originated as apparent physical threats, similar to those present in play fighting and tickling (Gervais & Wilson, 2005). As humans evolved, the situations that elicited humor likely expanded from apparent physical threats to a wider range of violations, including violations of personal dignity (e.g., slapstick, physical deformities), linguistic norms (e.g., unusual accents, malapropisms), social norms (e.g., eating from a sterile bedpan, strange behaviors), and even moral norms (e.g., bestiality, disrespectful behaviors). The benign-violation hypothesis suggests that anything that is threatening to one’s sense of how the world “ought to be” will be humorous, as long as the threatening situation also seems benign.
Just as there is more than one way in which a situation can be a violation, there is more than one way in which a violation can seem benign. We propose and test three. A violation can seem benign if (a) a salient norm suggests that something is wrong but another salient norm suggests that it is acceptable, (b) one is only weakly committed to the violated norm, or (c) the violation is psychologically distant. [...]
We found that benign moral violations tend to elicit laughter (Study 1), behavioral displays of amusement (Study 2), and mixed emotions of amusement and disgust (Studies 3–5). Moral violations are amusing when another norm suggests that the behavior is acceptable (Studies 2 and 3), when one is weakly committed to the violated norm (Study 4), or when one feels psychologically distant from the violation (Study 5). These findings contribute to current understanding of humor by providing empirical support for the benign-violation hypothesis and by showing that negative emotions can accompany laughter and amusement. The findings also contribute to understanding of moral psychology by showing that benign moral violations elicit laughter and amusement in addition to disgust.
We investigated the benign-violation hypothesis in the domain of moral violations. The hypothesis, however, appears to explain humor across a range of domains, including tickling, teasing, slapstick, and puns. As previously discussed, tickling, which often elicits laughter, is a benign violation because it is a mock attack (Gervais & Wilson, 2005; Koestler, 1964). Similarly, teasing, which is a playful, indirect method of provocation that threatens the dignity of a target (Keltner et al., 2001), appears to be consistent with the benign-violation hypothesis. Targets are more likely to be amused by teasing that is less direct (multiple possible interpretations), less relevant to the targets’ self-concept (low commitment), and more exaggerated (greater hypotheticality or psychological distance; Keltner et al., 2001). Slapstick humor also involves benign violations because the harmful or demeaning acts are hypothetical and thus psychologically distant. Slapstick is less funny if it seems too real or if the viewer feels empathy for the victim. Humorous puns also appear to be benign violations. A pun is funny, at least to people who care about language, because it violates a language convention but is technically correct according to an alternative interpretation of a word or phrase (Veatch, 1998). [...]
Synthesizing seemingly disparate ideas into three jointly necessary and sufficient conditions (appraisal as a violation, appraisal as a benign situation, and simultaneity), we suggest that humor is a positive and adaptive response to benign violations. Humor provides a healthy and socially beneficial way to react to hypothetical threats, remote concerns, minor setbacks, social faux pas, cultural misunderstandings, and other benign violations people encounter on a regular basis. Humor also serves a valuable communicative function (Martin, 2007; Provine, 2000; Ramachandran, 1998): Laughter and amusement signal to the world that a violation is indeed okay.
comment by Rafael Harth (sil-ver) · 2018-04-02T19:47:10.897Z · LW(p) · GW(p)

If you can make a case for this, I would be very interested to read it. My current view is that a lot of "good" motivations such as altruism and intellectual curiosity are actually linked to status concerns [LW · GW] at a deep level, and since it seems infeasible to get rid of status concerns as a motivation anyway, we should instead try to optimize our social norms and institutions to maximize the good side effects of status seeking, and minimize the bad ones.

I can't really. I think our disagreement is subtle. I'll explain my view and try to pin it down.

What's bad about status is that it may cause us to optimize for the wrong goals, because it may motivate us for the wrong goals. By the time a goal is determined, status already has (or hasn't) done its damage. Given a fixed goal, I would not consider it negative if part of the motivation was related to status.

This means it comes down to a preference between two approaches: 1. minimizing status seeking as a motivation, to avoid being motivated toward the wrong things; 2. changing the field so that status motivations are better aligned with positive outcomes.

If I understood your position right, you think we should do #2. And I think we should do #2. I also agree that overcoming status concerns isn't possible. But the sentence you quoted is still true, at least insofar as it relates to me. For now I'm agnostic as to whether the rest of what I'll say here extends to anyone else.

To explain where I think your model stops working for me, I have to differentiate between the utility function U that I would like to have and the utility function V that I actually have. The difference is that V makes me play a game when according to U I should rather be reading a MIRI paper. Okay, now according to the master/slave post, the master has "the ability to perform surgery on the slave's terminal values" – but I think it can access only V, not U. Among other things, I think the example you give no longer works.

For example, the number theorist might one day have a sudden revelation that abstract mathematics is a waste of time and it should go into politics and philanthropy instead, all the while having no idea that the master is manipulating it to maximize status and power.

I don't think this is possible for me. Big decisions are governed by U; the master doesn't have access.

Now, on LW in particular, I can say with confidence that I'm not here to improve my status. I don't write posts to improve status. But once I have written a post, then at least 70% of my concern goes into how it will make me look; and even during writing, a lesser but still significant chunk goes into that (larger for comments). That seems clearly bad. And I think it's fairly representative of how I work in general. I'm trying to think of an area where status motivates me to do something positive that I wouldn't otherwise do... the search doesn't come up empty, but it's less than your model would suggest. Removing status as a motivation entirely should be net positive.

I'm fairly confident that this explanation is correct, but again I don't know whether it generalizes (actually, the fact that I fully support doing #2 suggests that I don't really expect it to generalize). The boldest thing I'll say is that I'd be surprised if LW didn't work better if status were taken out of the equation. I don't expect activity to drop drastically. The link to intellectual curiosity in particular seems questionable. But I can't make a stronger case for that, at least not yet.

Somewhat unrelated – The Elephant in the Brain suggests that the conscious part of your brain should be thought of as the press secretary, whose job it is to rationalize the things that the rest of the brain decides to do; to come up with bogus explanations for why you did what you did. This is fairly similar to your master/slave model, with the biggest differences being that the press secretary isn't given her own (terminal) values, and the emphasis on rationalization. Obviously, I think the conscious part does have terminal values. I'd take a hybrid of the two models over either one.

comment by son0fhobs · 2018-04-01T20:24:07.397Z · LW(p) · GW(p)

OMG, please tell me this text size thing is an April Fools joke. It's painful on my eyes, and as a web designer, painful to my soul.

comment by Dagon · 2018-04-01T14:37:25.597Z · LW(p) · GW(p)

This is ugly, and I do not want it. Can I disable it for my comments on others' feeds?

I'm already deeply uncomfortable with the way scoring and gamification alters interaction. Making it so visible and in-your-face may well be the final straw to get me out of here completely.

comment by Viliam · 2018-04-01T23:37:27.309Z · LW(p) · GW(p)

Oh, so this is how it works! Before reading this article I noticed that some comments used large fonts, and my guess was that it was the unread comments. And I thought that was an excellent solution! :D

Knowing the right answer, my feelings are ambiguous. I agree that karma should somehow translate into greater visibility. But the size of the largest comments is a bit uncomfortable; I suggest adjusting the size interval. (If there is a person with large karma who likes to write long comments, this will become unbearable.) Also, it probably should be some combination of user karma and comment karma, as MondSemmel [LW · GW] suggested.

comment by evolution-is-just-a-theorem · 2018-04-01T15:36:04.595Z · LW(p) · GW(p)

Joke's on you, I can just edit the CSS.

Replies from: ingres
comment by namespace (ingres) · 2018-04-01T19:25:23.914Z · LW(p) · GW(p)

I literally don't even know if this is a real thing or not because I use GreaterWrong.

comment by Razanleo · 2018-04-02T17:09:47.278Z · LW(p) · GW(p)

So... was this an April Fools' joke or not? >.>

Replies from: Benito, Luke A Somers
comment by Ben Pace (Benito) · 2018-04-02T21:11:56.457Z · LW(p) · GW(p)

An answer [LW · GW] to your question.

comment by Luke A Somers · 2018-04-02T18:50:20.556Z · LW(p) · GW(p)

I have no RSS monkeying going on, and Wei Dai and Kaj Sotala have the same font size as you or me.