The LessWrong 2019 Review

post by habryka (habryka4) · 2020-12-02T11:21:11.533Z · LW · GW · 47 comments

Contents

  Why run a review like this?
    Improving our incentives and rewards
    Creating a highly curated sequence and book
    Create common knowledge
  What does it look like concretely?
    Nominating
    Reviewing
      Zack's review "Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think":
    Voting
  How do I participate? 
    Nominations
    Reviews
    Voting
  Prizes and Rewards 
      Public Writeup and Aggregation
  Good luck, think well, and have fun!

Today is the start of the 2019 Review, continuing our tradition of checking which things written on LessWrong still hold up a year later, and of helping to build an ongoing canon of the most important insights developed here on LessWrong.

The whole process will span 8 weeks, starting on December 1st, and moves through three phases: nominations, reviews, and a final vote.

But before I get more into the details of the process, let's go up a level.

Why run a review like this?

The Review has three primary goals: 

  1. Improve our incentives, feedback, and rewards for contributing to LessWrong. 
  2. Create a highly curated "Best of 2019" sequence and physical book
  3. Create common knowledge about the LW community's collective epistemic state about the most important posts of 2019

Improving our incentives and rewards

Comments and upvotes are a really valuable tool for allocating attention on LessWrong, but they are ephemeral and frequently news-driven, with far-from-perfect correlation to the ultimate importance of an idea or an explanation. 

I want LessWrong to be a place for Long Content. A place where we can build on ideas over decades, and an archive that helps us collectively navigate the jungle of infinite content that spews forth on LessWrong every year.

One way to do that is to put some time between when you first see a post and when you evaluate it. That's why today we are starting the 2019 Review, not the 2020 Review. A year is probably long enough that we are no longer swept up in the news or excitement of the day, but recent enough that we can still remember and write down how an idea or explanation has affected us. 

I also want LessWrong to not be overwhelmed by research debt:

Research debt is the accumulation of missing interpretive labor. It’s extremely natural for young ideas to go through a stage of debt, like early prototypes in engineering. The problem is that we often stop at that point. Young ideas aren’t ending points for us to put in a paper and abandon. When we let things stop there the debt piles up. It becomes harder to understand and build on each other’s work and the field fragments.

There needs to be an incentive to clean up ideas that turned out to be important but were badly presented. This is the time for authors to get feedback on which of their posts turned out to be important, and to correct minor errors, clean up prose, and generally polish them. It's also the time for others to see which concepts still lack a good explanation after at least a whole year has passed, and maybe to finally take the time to write that good canonical reference post. 

Creating a highly curated sequence and book

The internet is not great at preserving things for the future. Books feel real to me in a way that is very hard to achieve for a website. And they look beautiful: 

One of the books printed for the 2018 Review.

Of course, when you show up to LessWrong, you can read Rationality: A-Z [? · GW], you can read The Codex [? · GW], and you can read HPMoR [? · GW], but historically we haven't done a great job at archiving and curating the best content of anyone who isn't Scott or Eliezer (and even for Scott and Eliezer, it's hard to find any curation of the content they wrote in recent years). When I showed up, I wish there had been a "Best of 2012" book and sequence to help me find the best content from the years before I was active (and maybe we should run a "10-year Review" so that I can figure out what the best posts from 2010 and beyond are).

Create common knowledge

Ray says it pretty well in last year's Review announcement post [LW · GW]: 

Some posts are highly upvoted because everyone agrees they're true and important. Other posts are upvoted because they're more like exciting hypotheses. There's a lot of disagreement about which claims are actually true, but that disagreement is crudely measured in comments from a vocal minority.

Now is the time to give your opinions in much more detail, distinguish between a post being an interesting hypothesis versus a robust argument, and generally help others understand what you think, so that we can discover exciting new disagreements and build much more robustly on past and future work. 

What does it look like concretely?

Nominating

Nominations really don't have to be very fancy. Some concrete examples from last year: 

Reading Alex Zhu's Paul agenda FAQ was the first time I felt like I understood Paul's agenda in its entirety as opposed to only understanding individual bits and pieces. I think this FAQ was a major contributing factor in me eventually coming to work on Paul's agenda. – evhub on "Paul's research agenda FAQ" [LW(p) · GW(p)]

And:

This post not only made me understand the relevant positions better, but the two different perspectives on thinking about motivation have remained with me in general. (I often find the Harris one more useful, which is interesting by itself since he had been sold to me as "the guy who doesn't really understand philosophy".) 

– Kaj Sotala on "Sam Harris and the Is-Ought Gap" [LW(p) · GW(p)]

But sometimes they can be a bit more substantial:

This post:

  • Tackles an important question. In particular, it seems quite valuable to me that someone who tries to build a platform for intellectual progress attempts to build their own concrete models of the domain and try to test those against history
  • It also has a spirit of empiricism and figuring things out yourself, rather than assuming that you can't learn anything from something that isn't an academic paper
  • Those are positive attributes and contribute to good epistemic norms on the margin. Yet at the same time, a culture of unchecked amateur research could end up in bad states, and reviews seem like a useful mechanism to protect against that

This makes this suitable for a nomination.

– jacobjacob on "How did academia ensure papers were correct in the early 20th Century?" [LW(p) · GW(p)]

Overall, a nomination doesn't need to require much effort. It's also OK to just second someone else's nomination [LW(p) · GW(p)] (though do make sure to actually add a new top-level nomination comment, so we can properly count things). 

Reviewing

We awarded $1500 in prizes for reviews [LW · GW] last year. The reviews that we awarded the prizes for really exemplify what I hope reviews can be. The top prize went to Vanessa Kosoy; here's an extract from one of her reviews.

From Vanessa Kosoy on "Clarifying AI Alignment" [LW(p) · GW(p)]:

In this essay Paul Christiano proposes a definition of "AI alignment" which is more narrow than other definitions that are often employed. Specifically, Paul suggests defining alignment in terms of the motivation of the agent (which should be, helping the user), rather than what the agent actually does. That is, as long as the agent "means well", it is aligned, even if errors in its assumptions about the user's preferences or about the world at large lead it to actions that are bad for the user.

[...]

In contrast, I will argue that the "motivation-competence" decomposition is not as useful as Paul and Rohin believe, and the "definition-optimization" decomposition is more useful.

[...]

The review makes good arguments against the main thrust of the post it is reviewing, while also putting the article into a broader context that helps me place it in relation to other work in AI Alignment. She argues for an alternative breakdown of the problem where, instead of modeling it as the problems of "motivation and competence", you model it as the problems of "definition and optimization". She connects both the decomposition proposed in the essay she is critiquing and the one she proposes to existing research (including some of her own), and generally makes a point I am really glad to see surfaced during the review. 

To be more concrete, this kind of ontology-level objection feels like one of the most valuable things to add during the review phase, even if you can't propose any immediate alternative (i.e. reviews of "I don't really like the concepts this post uses, it feels like reality is more neatly carved by modeling it this way" seem quite valuable and good to me).

Zack M. Davis was the joint second-place winner of last year's review prizes. Here's an extract from one of his reviews.

Zack's review "Firming Up Not-Lying Around Its Edge-Cases Is Less Broadly Useful Than One Might Initially Think": 

Reply to: Meta-Honesty: Firming Up Honesty Around Its Edge-Cases [LW · GW]

[...]

A potential problem with this is that human natural language contains a lot of ambiguity. Words can be used in many ways depending on context. Even the specification "literally" in "literally false" is less useful than it initially appears when you consider that the way people ordinarily speak when they're being truthful is actually pretty dense with metaphors that we typically don't notice as metaphors because they're common enough to be recognized legitimate uses that all fluent speakers will understand.

[...]

Zack wrote a whole post that I really liked, making the core argument that while it might make sense to try really hard to figure out the edge-cases of lying, it's probably better to focus on understanding the philosophy and principles behind reducing other forms of deception, like using strategic ambiguity, heavily filtering evidence, or relying on misleading metaphors.

Arguing that a post, while perhaps making accurate statements, puts its emphasis in the wrong place and encourages actions that are far from the most effective also seems generally valuable, and is a good class of review. 

Voting

You can trial-run the vote UI here (though you can't submit any votes yet). Here are also a couple of screenshots of what it looked like last year:

UI when first opening the review, during the basic voting pass
UI when using quadratic voting mode and selecting a post

How do I participate? 

Now for some more concrete instructions on how to participate:

Nominations

Starting today (December 1st), if you have an account that was registered before the 1st of January 2019, you will see a new button on all posts from 2019 that will allow you to nominate them for the 2019 review:

It's at the top of the triple-dot menu.

Since a major goal of the review is to see which posts had a long-term effect on people, we are limiting nominations to users who signed up before 2019. If you were actively reading LessWrong before then, but never registered an account, you can ping me on Intercom (the small chat bubble in the bottom right corner on desktop devices), and I will give your account nomination and voting privileges.

I recommend using the All Posts page for convenience, where you can group posts by year, month, week, and day. Here are the two I use the most:

Reviews

Starting on December 14th, you can write reviews on any post that received at least two nominations. For the following month, my hope is that you read the posts carefully, write comments on them, and discuss: 

  • How has this post been useful?
  • How does it connect to the broader intellectual landscape?
  • Is this post epistemically sound?
  • How could it be improved?
  • What further work would you like to see on top of the ideas proposed in this post?

I would consider the gold standard for post reviews to be SlateStarCodex book reviews (though obviously shorter, since posts are not as long as books). 

If you are an author, my hope is that you take this time to note where you disagree with the critiques, help other authors arrange follow-up work, and, if you have the time, update your post in response to the critiques (or just polish it up in general, if it seems to have a good chance of ending up in the book).

This page [? · GW] will also allow you to see all posts with at least two nominations and how many reviews they have, together with some fancy UI to help you navigate all the reviews and nominations that are coming in.

Voting

Starting on January 11th, any user who registered before 2019 can vote on any 2019 post that has received at least one review. The vote will use quadratic voting, with each participant having 500 points to distribute. To help handle the cognitive complexity of the quadratic voting, we also provide you with a more straightforward "No", "Neutral", "Good", "Important", "Crucial" scale that you can use to prepopulate your quadratic voting scores.
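
To make the mechanics more tangible, here is a minimal sketch of how a quadratic-voting budget works, assuming the textbook rule that a vote of strength k on a single post costs k² points. The cost rule, the way the 500 points get spent, and the mapping from the coarse scale to starting strengths are illustrative assumptions on my part, not necessarily the exact LessWrong implementation.

```python
# Minimal sketch of a quadratic-voting budget, assuming the textbook rule
# that a vote of strength k on a single post costs k**2 points. The cost
# rule and the mapping from the coarse scale to starting strengths are
# illustrative assumptions, not necessarily the exact LessWrong mechanics.

BUDGET = 500  # points each voter gets to distribute across posts

# Hypothetical prepopulation of vote strengths from the coarse scale.
COARSE_TO_STRENGTH = {"No": -2, "Neutral": 0, "Good": 1, "Important": 2, "Crucial": 4}

def cost(strength: int) -> int:
    """Quadratic cost of a single vote of the given strength."""
    return strength ** 2

def total_cost(votes: dict[str, int]) -> int:
    """Total points spent across all posts voted on."""
    return sum(cost(s) for s in votes.values())

# Example: start from coarse ratings, then fine-tune one post upward.
votes = {
    "Post A": COARSE_TO_STRENGTH["Crucial"],   # strength 4 costs 16 points
    "Post B": COARSE_TO_STRENGTH["Good"],      # strength 1 costs 1 point
    "Post C": COARSE_TO_STRENGTH["No"],        # strength -2 costs 4 points
}
votes["Post A"] = 10                           # strength 10 costs 100 points

spent = total_cost(votes)
print(f"Spent {spent} of {BUDGET} points; {BUDGET - spent} left to allocate")
```

The point of a quadratic cost is that doubling your vote on one post quadruples its price, so spreading votes across many posts is cheaper than piling everything onto a single favorite.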

You can give the vote system a spin here on the posts from 2018, to get a sense of how it works and what the UI will look like.

Last year, only users above 1000 karma could participate in the review and vote. This year, we are going to break the vote out into two categories, one for users above 1000 karma, and one for everyone. I am curious to see if and how they diverge. We might make some adjustments to how we aggregate the votes for the "everyone" category, like introducing some karma-weighting. Overall I expect we will give substantial prominence to both rankings, but weight the 1000+ karma users' ranking somewhat more heavily in our considerations for what to include in the final sequence and book. To be more concrete, I am imagining something like a 70:30 split of attention and prominence favoring the 1000+ karma users' vote.
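
As an illustration only, here is one hypothetical shape such a karma-weighting could take for the "everyone" category; the weighting function and the example numbers below are assumptions of mine, not a decided policy.

```python
import math

# Hypothetical sketch of the kind of karma-weighting mentioned above for the
# "everyone" category: each voter's vote strength on a post is scaled by a
# slowly growing function of their karma before summing. The weighting
# function and the example numbers are illustrative assumptions, not a
# decided LessWrong policy.

def karma_weight(karma: int) -> float:
    """Down-weight very-low-karma accounts without letting high karma dominate."""
    return math.log10(10 + max(karma, 0))

def aggregate(post_votes: list[tuple[int, int]]) -> float:
    """Sum karma-weighted vote strengths for one post.

    post_votes is a list of (voter_karma, vote_strength) pairs.
    """
    return sum(karma_weight(karma) * strength for karma, strength in post_votes)

# Example: three voters with different karma voting on the same post.
print(aggregate([(50, 4), (2000, 1), (0, -2)]))  # ≈ 8.4
```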

Prizes and Rewards 

I think this review process is really important. To put the LessWrong Team's money where its mouth is, we are again awarding $2000 in prizes to the top posts as judged by the review, and up to $2000 in prizes for the best reviews and nominations (as judged by the LW mod team). These are the nominations and reviews from last year that we awarded prizes for.

Public Writeup and Aggregation

At the end of the vote, we are going to publish an analysis with all the vote results again. 

Last year, we also produced an (according to me) really astonishingly beautiful book with all the top essays (thanks to Ben Pace and Jacob Lagerros!) and some of the best comments on reviews. I can't promise we are going to spend quite as much time on the book this year, but I expect it to again be quite beautiful. See Ben's post with more details [LW · GW] about the books, and with the link to buy last year's book if you want to get a visceral sense of them.

The book might look quite different this year than it did for last year's review, but anyone who is featured in the book will still get a copy of it. So even just writing a good comment can secure your legacy.

Good luck, think well, and have fun!

This year, just as we did last year, we are going to replace the "Recommendations & From the Archives" section of the site with a section that just shows you posts you haven't read that were written in 2019. 

I really enjoyed last year's review, and am looking forward to an even greater review this year. May our epistemics pierce the heavens!

47 comments

Comments sorted by top scores.

comment by riceissa · 2020-12-03T03:46:46.049Z · LW(p) · GW(p)

This is a minor point, but I am somewhat worried that the idea of research debt/research distillation seems to be getting diluted over time. The original article (which this post links to) says:

Distillation is also hard. It’s tempting to think of explaining an idea as just putting a layer of polish on it, but good explanations often involve transforming the idea. This kind of refinement of an idea can take just as much effort and deep understanding as the initial discovery.

I think the kind of cleanup and polish that is encouraged by the review process is insufficient to qualify as distillation (I know this post didn't use the word "distillation", but it does talk about research debt, and distillation is presented as the solution to debt in the original article), and to adequately deal with research debt.

There seems to be a pattern where a term is introduced first in a strong form, then it accumulates a lot of positive connotations, and that causes people to stretch the term to use it for things that don't quite qualify. I'm not confident that is what is happening here (it's hard to tell what happens in people's heads), but from the outside it's a bit worrying.

I actually made a similar comment a while ago about a different term.

Replies from: abramdemski, habryka4
comment by abramdemski · 2020-12-04T14:56:07.709Z · LW(p) · GW(p)

Personally, I believe I understood "research debt" in the strong way upon first reading (I hadn't encountered the term before, but the post included a definition), but was then immediately struck by the inadequacy of the review process to address the problem. Granted, it's a move in the right direction.

Replies from: Raemon
comment by Raemon · 2020-12-04T20:30:20.269Z · LW(p) · GW(p)

(note: this view isn't necessarily shared by other LW team members, and there's at least some kind of major disagreements about how to think about this)

In my ideal world, the review process outputs "here's what's inadequate about all our best posts", and there's a period where authors are expected to actually make substantial improvements before they go in the book. This is a lot to ask of authors who are often pretty busy, and I'm not 100% sure it's the right way to go about things but I lean that way.

Alternately, there might be a LessWrong Textbook that is "the level above the Review", where the Review takes stock of which posts are ready for being officially canonized (which has quite high standards), but that's pretty rare. 

I am somewhat confused about how to think about things like... say, Simulacra, where there have already been ongoing efforts to distill/refine/clarify. Benquo might choose to rewrite his original post in response to review-feedback, but Zvi has sort of already been working on that (but, on later posts that wouldn't show up till the 2020 review)

Replies from: abramdemski, RobbBB, Benito
comment by abramdemski · 2020-12-04T20:57:25.922Z · LW(p) · GW(p)

I think the distillation function is better served by new posts, as opposed to extensive revisions of old posts. The repeated attempts to re-explain simulacra are a good example of this.

Part of why I think this is because I believe editing old posts, especially extensive editing, loses some "aliveness" from the post which lets me get in touch with the author's thought process as they write. Let the rough off the cuff posts stay rough.

Another reason is that textbooks and high quality survey papers are not edited versions of old papers.

"Radical Probabilism" was in many ways a more mature version of "toward a new technical explanation of technical explanation", but could never be produced by editing the old post.

comment by Rob Bensinger (RobbBB) · 2020-12-04T21:04:26.013Z · LW(p) · GW(p)

Benquo might choose to rewrite his original post in response to review-feedback, but Zvi has sort of already been working on that (but, on later posts that wouldn't show up till the 2020 review)

If Benquo re-wrote his original post extensively enough, that would also sort of count as 2020-2021 content. Which makes me wonder whether it would make sense to distinguish 'this post is in the 2018 review because of its underlying content' vs. 'this post is in the 2018 review because of its implementation/presentation'? Then the 'underlying content' stuff could include multiple posts, or posts from later years, as warranted.

Replies from: RobbBB
comment by Rob Bensinger (RobbBB) · 2020-12-04T21:05:42.523Z · LW(p) · GW(p)

A bad situation to maybe try to avoid is: an important idea never ends up included in a review because the original exposition is in one post/year, and the best exposition is in another post/year, and the original exposition is not quite well-executed to warrant inclusion, while the best exposition is not quite noteworthy or innovative enough. (E.g., maybe the best exposition is just a shorter, lightly rephrased version of the original.)

Including content and not just posts also helps address the problem where you want to credit multiple different people for an idea (or idea+exposition), even where their collaboration was spread across multiple posts rather than a single post with multiple authors.

Replies from: Raemon
comment by Raemon · 2020-12-04T23:02:07.704Z · LW(p) · GW(p)

Yeah agreed. There's (relatedly) the thing where "post" isn't even always the natural category of thing-people-want-to-nominate (often I think people are nominating a sequence). It's a somewhat tricky question "how do we let people nominate 'concepts' with multiple related posts?" in a way that has good UI and is clear.

I think for immediate future, it's probably good to just manually spell out "I'm really nominating this overall concept, and am interested in comparing it to more recent work". I'm not 100% sure what'll make sense for handling that in the Review Phase but seems worth trying.

comment by Ben Pace (Benito) · 2020-12-04T20:36:06.817Z · LW(p) · GW(p)

I would be excited for the review to output "Here's a description of further work we'd like to see done".

comment by habryka (habryka4) · 2020-12-03T03:48:38.509Z · LW(p) · GW(p)

Yes, sorry. The concrete mechanism by which I hope to address research debt is not the editing of essays, but the identification of essays that have good ideas and bad presentation, and encouraging other authors to write better new explanations for them, as well as something more like thorough rewrites of existing posts. 

Replies from: riceissa
comment by riceissa · 2020-12-03T04:00:27.412Z · LW(p) · GW(p)

I see, that wasn't clear from the post. In that case I am wondering if the 2018 review caused anyone to write better explanations or rewrite the existing posts. (It seems like the LessWrong 2018 Book just included the original posts without much rewriting, at least based on scanning the table of contents.)

Replies from: Raemon
comment by Raemon · 2020-12-03T04:07:49.412Z · LW(p) · GW(p)

At least 3 people substantially rewrote their posts in the 2018 review, and my hope is that over time it becomes pretty common for there to be substantial rewriting. (albeit, two of those people were LessWrong team members)

But for what it's worth, here's the diff [? · GW] between the original version of my own post and the current version I wrote as a result of the review.

Replies from: riceissa
comment by riceissa · 2020-12-03T04:15:33.868Z · LW(p) · GW(p)

Thanks! That does make me feel a bit better about the annual reviews.

Replies from: Raemon
comment by Raemon · 2020-12-03T04:51:56.959Z · LW(p) · GW(p)

One of the pernicious issues with word-dilution is that often when people try to use a word to mean things, they're... kinda meaning those things "aspirationally." Where, yes, part of my original goal with the Review absolutely included Research Debt. But indeed there's a decent chance it won't succeed at that goal. (But, I do intend to put in a fair amount of optimization pressure towards making it succeed.)

comment by Kaj_Sotala · 2020-12-02T13:27:23.998Z · LW(p) · GW(p)

Nominated [? · GW] the eight 2019 posts that have felt the most long-term valuable to me.

(Posting this on the principle of "telling that you've nominated posts encourages others to do it as well".)

comment by abramdemski · 2020-12-04T14:41:49.956Z · LW(p) · GW(p)

Is there a way to see all of the 2019 posts which haven't yet been nominated, sorted by karma?

Replies from: Raemon, Raemon
comment by Raemon · 2020-12-09T22:15:53.048Z · LW(p) · GW(p)

It is now possible to see non-nominated posts. We've updated the links at the top of frontpage to go here:

https://www.lesswrong.com/allPosts?timeframe=yearly&after=2019-01-01&before=2020-01-01&limit=100&sortedBy=top&filter=unnominated2019 [? · GW]

which filters out posts with at least 1 nomination.

(To see already nominated posts, go to lesswrong.com/nominations, which is linked from the 'nominations' section of the timeline at the top of frontpage)

Replies from: abramdemski, abramdemski
comment by abramdemski · 2020-12-09T22:44:25.442Z · LW(p) · GW(p)

Now we can (easily) follow the algorithm:

  1. Go thru the 0-nomination posts and add a nomination to the most worthy ones.
  2. Go thru the 1-nomination posts and add a nomination to the most worthy ones.
Replies from: Raemon
comment by Raemon · 2020-12-09T23:10:47.441Z · LW(p) · GW(p)

Yup, it is a good algorithm indeed

comment by abramdemski · 2020-12-09T22:17:14.636Z · LW(p) · GW(p)

Great, thanks!

comment by Raemon · 2020-12-04T20:08:51.077Z · LW(p) · GW(p)

No but there totally should be (I also was annoyed by this, might try to get a quick PR up to fix this today, might get distracted by solstice stuff and not get around to it)

comment by adamShimi · 2020-12-28T16:38:29.083Z · LW(p) · GW(p)

I know it doesn't matter whatsoever. I know. But it's still quite satisfying to be the first reviewer on an important post.

comment by magfrump · 2020-12-27T19:48:50.024Z · LW(p) · GW(p)

The review process has been a nice way for me to feel good about getting more involved in rereading articles and posting comments.

comment by DanielFilan · 2020-12-05T05:18:39.164Z · LW(p) · GW(p)

I wish there were common knowledge of what properties of posts the nominations and voting are supposed to reveal. For instance, is it supposed to be posts we currently agree with? Posts we think would move our knowledge forward if people read them today? Posts that proved useful? I'm genuinely unsure.

Replies from: habryka4
comment by habryka (habryka4) · 2020-12-05T05:21:08.646Z · LW(p) · GW(p)

I did try to make a list of at least the things I (and I think Ray) think are more important: 

  • How has this post been useful?
  • How does it connect to the broader intellectual landscape?
  • Is this post epistemically sound?
  • How could it be improved?
  • What further work would you like to see on top of the ideas proposed in this post?
Replies from: DanielFilan
comment by DanielFilan · 2020-12-10T03:27:11.705Z · LW(p) · GW(p)

This was good, but the fact that its status as 'things to discuss in reviews' was sort of buried in the post made it less Schelling-y than I'd ideally wish.

comment by adamShimi · 2020-12-17T12:43:22.974Z · LW(p) · GW(p)

When writing a review for a sequence that spans more than 2019 (like this one [? · GW]), should I only consider the posts from 2019?

Replies from: Zvi, Kaj_Sotala
comment by Zvi · 2020-12-18T21:35:17.768Z · LW(p) · GW(p)

That's what I would assume is going on here for the Mazes sequence, as well, although of course you can consider that it leads to those other places etc.

Replies from: habryka4
comment by habryka (habryka4) · 2020-12-18T22:15:42.225Z · LW(p) · GW(p)

I think for the Moral Mazes stuff we considered just moving the review of the whole sequence to 2020, but I think Ben has been thinking through the relevant considerations more than I have, so I will ping him for an update on this.

comment by Kaj_Sotala · 2020-12-17T14:32:03.222Z · LW(p) · GW(p)

That's what I would do if I were to review my own sequence, especially given that the post-2019 articles happen to all be part of a subsequence that's somewhat distinct from all the preceding content.

Replies from: adamShimi
comment by adamShimi · 2020-12-17T19:37:55.737Z · LW(p) · GW(p)

That answers my specific question then. And in general it makes sense to do it. Thanks.

comment by Neel Nanda (neel-nanda-1) · 2020-12-04T09:26:41.488Z · LW(p) · GW(p)

A few of the links to previous reviews link to the wrong reviews. I noticed 'evhub on "Paul's research agenda FAQ"' and 'From Vanessa Kosoy on "Clarifying AI Alignment":' link to the wrong place.

Replies from: habryka4
comment by habryka (habryka4) · 2020-12-04T21:30:25.875Z · LW(p) · GW(p)

Oops, will fix. (Edit: Fixed)

comment by Zvi · 2020-12-03T16:14:41.026Z · LW(p) · GW(p)

Can I suggest the mods go through the top Personal Blogposts for last year to confirm they shouldn't have been frontpage? In particular You Have About Five Words (which I am likely going to nominate) and one post of mine seem out of place there in hindsight.

Replies from: Benito, habryka4
comment by Ben Pace (Benito) · 2020-12-05T02:31:34.949Z · LW(p) · GW(p)

Current policy is: you can nominate personal blogposts. (I have 3 I want to nominate.)

comment by habryka (habryka4) · 2020-12-03T19:18:03.274Z · LW(p) · GW(p)

Seems good, will do. You can also just nominate things on Personal, but it's good to get the record in order in any case.

comment by Bridgett Kay (bridgett-kay) · 2021-01-05T23:07:47.474Z · LW(p) · GW(p)

I was just wondering, on the subject of research debt, if there was any sort of system so that people could "adopt" the posts of others. Like, say, if someone posts an interesting idea that they don't have the time to polish or expand upon, they could post it somewhere for people who can. 

Replies from: Raemon, Ruby
comment by Raemon · 2021-01-05T23:41:42.699Z · LW(p) · GW(p)

There isn't a formal system, but in general people are free to write new distillations of old posts.

comment by Ruby · 2021-01-09T07:30:40.794Z · LW(p) · GW(p)

I think a good option here is to take the core idea of the post and make its own wiki page for it (we hope to shortly make wiki-page creation straightforward; for now it's fine to treat tag pages as wikis even when you don't want the tags).

This might be unconventional in the sense that wikis generally are more for "established" facts, but I think a wiki where people are fleshing out thoughts would be cool and good, definitely worth people trying.

comment by niplav · 2020-12-02T18:05:00.156Z · LW(p) · GW(p)

The URL for the 2018 review results [? · GW] gives a 404. This makes sense, since it has been reserved for the 2019 review. However, I'd like to finish my read-through of the 2018 results. Where (except in the new book series) can I do that?

Replies from: Kaj_Sotala
comment by Kaj_Sotala · 2020-12-02T18:28:45.387Z · LW(p) · GW(p)

This page [? · GW] has the results.

comment by Bucky · 2020-12-04T12:25:21.620Z · LW(p) · GW(p)

I think there was some talk after last year about adding an "endorse nomination" button so that not everyone had to write their own comment to provide a nomination if they just agreed with what someone else had already written. Is this available / planned?

Replies from: Vaniver, Raemon
comment by Vaniver · 2020-12-05T17:17:47.909Z · LW(p) · GW(p)

I believe last year some people wrote comments that just linked to other nominations; it's more clicks, but doesn't force you to pretend to have different reasons or write them in your own words or whatever.

comment by Raemon · 2020-12-04T20:09:48.106Z · LW(p) · GW(p)

I do think this is a good idea, but this December has turned out to be pretty hectic and I'm not sure we'll get around to it.

comment by Ideopunk · 2021-01-17T17:58:20.891Z · LW(p) · GW(p)

Is there a way to see all the nominations listed? I registered in 2020 so I can't vote but I'd still love to pick through the nominations. 

Replies from: habryka4
comment by habryka (habryka4) · 2021-01-17T18:59:53.311Z · LW(p) · GW(p)

Yep, on this page you can see all nominations and reviews, plus all the posts with at least two nominations: https://lesswrong.com/reviews 

comment by Douglas_Reay · 2021-01-02T13:42:31.628Z · LW(p) · GW(p)

Is this likely to bias people towards writing longer single posts rather than splitting their thoughts into a sequence of posts?

For example, back in 2018 (so not eligible for this) I wrote a sequence of 8 posts that, between them, got a total of 94 votes. Would I have been better off having made a single post (were it to have gotten 94 just by itself) ?

Replies from: habryka4
comment by habryka (habryka4) · 2021-01-02T18:45:44.455Z · LW(p) · GW(p)

There is probably a small bias here, yeah, but probably not overwhelmingly much. I think overall it's more likely for a post to appear in the final best-of collection if it's short, simply because we are dealing with very limited space, so that pushes in the opposite direction.