LessWrong is paying $500 for Book Reviews

post by Ruby · 2021-09-14T00:24:23.507Z · LW · GW · 21 comments

Contents

  How it Works
  Desired Reviews
  Examples of Desired and Undesired Book Reviews
    Positive Examples
    Negative Examples
  Why are you doing this?
  Conditions

Kudos to Kelsey Piper [LW · GW] and Buck [LW · GW] for this idea. See Buck’s shortform post [EA(p) · GW(p)] for another formulation.

LessWrong is trialing a new pilot program: paying USD500 for high-quality book reviews that are of general interest to LessWrong readers, subject to our judgment and discretion.

How it Works

The program will by default run for one month (until October 13). At the end of the month, a bonus $750 will be split evenly between the top three book reviews received, as judged by us.

Desired Reviews

Most non-fiction topics related to science, history, and rationality will merit payment if the book review is of sufficient quality. By “quality” I’m referring to both content and form. Do the inferences seem correct? Does the reviewer seem to be asking the right questions? Does the summary feel informative or lacking? Do I feel confused or enlightened? Is it riveting or a slog to get through? On the writing side, relevant aspects are sentence construction, word choice, pacing, structure, imagery, etc.

I don’t want to be too prescriptive about form, since sufficiently high quality (nebulously defined) can justify exceptions, but generally, I’m interested in book reviews that:

(An extra great format is to compare and contrast two or more books on the same topic.)

Examples of Desired and Undesired Book Reviews

Since it’s hard to give an explicit definition of “quality”, I’m going to fall back on examples and hope that these are better than nothing. Generally, the book reviews tag [? · GW] is a good guide to the kinds of book reviews that are popular on LessWrong and that we want to incentivize.

Below I’ve listed specific book reviews that were either particularly great or kind of poor. Again, most of these came down to quality rather than topic. 

Positive Examples

These book reviews all present engagingly on a topic of interest. They’re not difficult to read, and having read them, I know something more about the world than I did before. 

Negative Examples

I am reluctant to name and shame particular essays on LessWrong, and instead direct people to view the book reviews tag sorted by karma [? · GW] and look at the lowest-scoring posts (you’ll have to click load more to get the entire list). Karma is a strong correlate of quality (whether or not the bounty is paid out is not strictly contingent on the karma it gets, but is influenced by it).

Importantly, quality is not the automatic result of effort. Someone could expend a lot of effort writing an extremely long and detailed review that no one wants to read because it’s tedious or because the English is grating. To be explicit, the bounty will not be paid out just because someone put a lot of effort into their review. 

However, to make it easier to produce high-quality reviews, anyone writing a book review for this program is welcome to avail themselves of LessWrong’s feedback service [LW · GW], even if they don’t yet have 100+ karma. Just ping us on Intercom.

Why are you doing this?

Foremost, we want more valuable content on the site. We are beginning to experiment with offering people monetary compensation for their hard work. I estimate that our best blog posts generate much more than $500 of value. However, $500 is maybe enough to symbolically thank our writers and incentivize them.

Beyond the first-order benefit, there could be additional benefits, as listed by Buck [EA(p) · GW(p)]:

  • It might encourage people to practice useful skills, like writing, quickly learning about new topics, and thinking through what topics would be useful to know more about.
  • ...sometimes I worry that rationalists are too interested in thinking about the world by introspection or weird analogies relative to learning many facts about different aspects of the world; I think book reviews would maybe be a healthier way to direct energy towards intellectual development.
  • It might surface some talented writers and thinkers who weren’t otherwise known to EA [or LessWrong].

 

Of these, I’m especially interested in helping to develop new strong writers and researchers. 

We’re starting with compensation for book reviews because these feel like a more “approachable” kind of content for people to aim at writing. Because the format is more specific, I imagine it will be easier for people to get started than if the directive were simply “write good posts”. 

Conditions

This list will be expanded as things come up that I didn’t think of.

  1. You may submit multiple book reviews, although we might apply a higher quality bar for each subsequent submission.
  2. At this time, we’re not paying for reviews of fiction.
  3. At this time, we’re only paying for reviews of book-length written material (not podcasts or documentaries). If you listened to the audiobook version of a print book, that's fine.
  4. Your book review must be published after the posting of this announcement, i.e., no submitting book reviews you wrote a month ago and already published elsewhere on the Internet.
  5. You may review a book that was already reviewed on LessWrong (or SlateStarCodex/ACX), however your review must add significant value beyond the existing review(s).

21 comments

Comments sorted by top scores.

comment by adamzerner · 2021-09-14T04:23:23.544Z · LW(p) · GW(p)

Elicit [LW · GW] prediction for the probability that there will be more than 5.5 submissions that receive a payout by October 13th (the title doesn't mention receiving a payout, but I intended for it to).

My thinking: a quick scan of the Book Reviews tag [? · GW] indicates about 2.5 posts with that tag per month. I suppose another 0.5-1 or so are book reviews but just haven't been tagged as such. So that is my baseline. From there, I expect the $500 reward to give a solid bump, but nothing too crazy.

comment by MondSemmel · 2021-09-14T17:57:38.747Z · LW(p) · GW(p)

Sounds like a great idea, and like something you might want to ask Scott to publicize on the next ACX Open Thread.

comment by Yoav Ravid · 2021-09-17T18:39:18.746Z · LW(p) · GW(p)

I just found out that Pinker is releasing a book called "Rationality" (Hmm.. sounds familiar..) later this month, which apparently presents tools that "have never been presented clearly and entertainingly in a single book--until now." (Good that someone finally put in the effort!).

But slight sarcasm aside, it seems like the sort of thing our community should keep up with - so if someone was looking for a book to review, this sounds like a good option (though it doesn't leave much time; the book releases on the 28th of this month and the bounty closes on the 14th of the next).

I'm already working on a different book review (which is long and difficult to write) so I won't be doing this myself.

Replies from: Liron
comment by Liron · 2021-09-20T13:43:46.514Z · LW(p) · GW(p)

Good idea, I might give this a shot. I hope others give it a shot regardless of whether I do since I want to read others’ reviews.

comment by adamShimi · 2021-09-14T10:13:33.057Z · LW(p) · GW(p)

Does this apply to reviews of books that have already been reviewed on LW? I would assume that in this case you'd want a different approach for the review, but it's not clear whether that's valid or not.

Replies from: Sherrinford
comment by Sherrinford · 2021-09-15T16:44:44.951Z · LW(p) · GW(p)

"Conditions

...

5. You may review a book that was already reviewed on LessWrong (or SlateStarCodex/ACX), however your review must add significant value beyond the existing review(s)."

Replies from: adamShimi
comment by adamShimi · 2021-09-15T16:53:45.890Z · LW(p) · GW(p)

My bad, I looked there and failed to see it. :)

Replies from: Ruby
comment by Ruby · 2021-09-15T16:56:38.155Z · LW(p) · GW(p)

Might depend on when you looked at it. It wasn't there when I first posted; it got added after someone asked me.

comment by CraigMichael · 2021-09-15T04:38:54.331Z · LW(p) · GW(p)

I estimate that our best blog posts generate much more than $500 of value.

Just curious - how are you estimating value here? I’m totally excited for this policy, just wondering how you put a dollar amount on this.

Replies from: Ruby
comment by Ruby · 2021-09-15T21:29:29.774Z · LW(p) · GW(p)

I have to confess, it is a more personal intuition (and willingness to spend altruistic dollars) than a hard calculation. It has to be that way, because at some point I have to assign dollar value to something that isn't a dollar value.

A piece of it is that I think individuals (and collectively the community) understanding the world a bit better is worth quite a lot. I think that knowledge has compounding gains, so each additional piece that is known gets built upon and multiplied. I think that our community becomes stronger and wiser as a result of scholarship. And of course, each good piece of writing strengthens the community, gets more readership, and in turn generates more writing. This is a poor articulation, but it's what I can manage in a spare couple of minutes.

Another way to think about it is labor costs. Generating posts is difficult knowledge work, the likes of which you'd pay at least $50/hour for in the Bay Area. A solidly written post might take between 5 and 20 hours, which means if you were to pay someone an hourly wage for the post, it would be worth something like $500. If someone is going to bother writing that post, really there ought to be some "profit", with the post being worth more.
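A minimal sketch of that back-of-the-envelope arithmetic, using only the figures in the comment above (the $50/hour rate and the 5-20 hour range; the $500 bounty falls inside the resulting range):

```python
# Back-of-the-envelope labor value of a post, per the figures above:
# at least $50/hour, 5-20 hours per solidly written post.
HOURLY_RATE = 50               # USD per hour (Bay Area knowledge-work floor)
HOURS_LOW, HOURS_HIGH = 5, 20  # hours a solidly written post might take

cost_low = HOURLY_RATE * HOURS_LOW    # lower bound on labor cost
cost_high = HOURLY_RATE * HOURS_HIGH  # upper bound on labor cost

print(f"Labor cost per post: ${cost_low}-${cost_high}")  # $250-$1000
```

So the $500 figure sits comfortably inside the implied $250-$1000 range, which is the sense in which the bounty roughly matches the labor cost of a solid post.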

comment by gwillen · 2021-09-14T22:51:29.794Z · LW(p) · GW(p)

This seems really good! As a specific offer, and as part of a general class of LW initiatives.

Presumably most people who would take you up on it already trust LW, and wouldn't be too worried about the open-ended nature of the "if we like it". But I hope you can make that somewhat more concrete for people who do opt to contact you with a proposal / to seek feedback. It would be nice to be able to give assurances of the form "if you do X and Y, and the review has quality similar to your past posts, and length around Z, then we will pay for sure."

Replies from: Ruby
comment by Ruby · 2021-09-14T23:13:27.224Z · LW(p) · GW(p)

It's possible that after doing the first round, we'll be in a better position to clarify what makes for a review that we like, though I do expect it to be hard to express intensionally.

comment by Sherrinford · 2021-09-15T16:48:16.145Z · LW(p) · GW(p)

Slightly, but only slightly off-topic: 

Replies from: Ruby, hath
comment by Ruby · 2021-09-15T21:20:54.469Z · LW(p) · GW(p)

"When I change to a different sorting there, the 'load more' link disappears."

This is a bug when you change sort method after having clicked load more once.  Try refreshing.
 

comment by hath · 2021-09-15T17:37:57.913Z · LW(p) · GW(p)

You can vote for a specific tag on a page, and "most relevant" sorts by the post where that tag has been upvoted the most instead of the karma of the posts with the tag.

comment by tslarm · 2021-09-14T23:55:10.229Z · LW(p) · GW(p)

Your book review must be published after the posting of this announcement, i.e., no submitting book reviews you wrote a month ago and already published elsewhere on the Internet.

What's your policy on previously-partially-published reviews? The specific case I have in mind is a rough review I put up on Goodreads, which would need major reworking to be suitable here. (It's currently more of a notes-dump than a proper review.)

Replies from: Ruby
comment by Ruby · 2021-09-15T01:23:17.551Z · LW(p) · GW(p)

Converting a notes-dump into a proper review is fine.

comment by lsusr · 2021-09-22T06:10:05.430Z · LW(p) · GW(p)
  • ...sometimes I worry that rationalists are too interested in thinking about the world by introspection or weird analogies relative to learning many facts about different aspects of the world; I think book reviews would maybe be a healthier way to direct energy towards intellectual development.

Data is the foundation of empiricism. Abstract reasoning untethered to factual reality (and mathematical axioms) is not rationality.

comment by Sherrinford · 2021-09-15T17:07:59.323Z · LW(p) · GW(p)

"Karma is a strong correlate of quality (whether or not the bounty is paid out is not strictly contingent on the karma it gets, but is influenced by it).

Importantly, quality is not the automatic result of effort. Someone could expend a lot of effort writing an extremely long and detailed review that no one wants to read because it’s tedious or because the English is grating. "

Before anyone gets sad: 

While Karma is certainly a useful measure of the probability that a book review will be seen as reward-worthy by the LW team, nobody really knows how strongly it correlates with "quality" as defined by what non-LW readers would see as high quality. Saying that "Karma is a strong correlate of quality" is not an objective description, but a belief. 

Unreadable book reviews would probably be seen as having low quality both on LW and on Goodreads or other review sites. Readability is only one factor leading to more Karma points, however, and I assume that obvious things like topical fit with what the typical LW reader likes, and less obvious things like being close to the center of the social network of LW, will lead to higher Karma points. Therefore, we could say that Karma measures quality in the same sense that IQ measures intelligence: it would then just measure what it's defined to measure, so the word "quality" could just as well stand for "Karma points" without mixing words and associations. The correlation between Karma points and quality in this sense is then 1.

comment by Ruby · 2021-09-14T00:24:40.929Z · LW(p) · GW(p)

Footnote 1: For example, while Notes from “Don’t Shoot The Dog” [LW · GW] is ostensibly about animal training, it’s a fascinating read because it’s written from the angle of how it applies to human psychology. Book Review: Design Principles of Biological Circuits [LW · GW] is of extra interest because it bears upon the likelihood that deep neural nets are intelligible.