The Review Phase
by Ben Pace (Benito) · 2019-12-09
LessWrong is currently doing a major review of 2018 [LW · GW] — looking back at old posts and considering which of them have stood the test of time. Info about what features we added to the site for writing reviews is in December's monthly updates post [? · GW].
There are three phases:
- Nomination (completed)
- Review (ends Dec 31st [EDIT: Jan 13th])
- Voting on the best posts (ends January 7th [EDIT: Jan 13th])
We’re now in the Review Phase, and there are 75 posts that got two or more nominations. The full list is here [? · GW]. Now is the time to dig into those posts, and for each one ask questions like “What did it add to the conversation?”, “Was it epistemically sound?” and “How do I know these things?”.
The LessWrong team will award $2000 in prizes to the reviews that are most helpful to them for deciding what goes into the Best of 2018 book.
If you’re a nominated author and for whatever reason don’t want one or more of your posts to be considered for the Best of 2018 book, contact any member of the team - e.g. drop me an email at email@example.com.
Creating Inputs For LW Users' Thinking
The goal for the next month is for us to try to figure out which posts we think were the best in 2018.
Not which posts were talked about a lot when they were published, or which posts were highly upvoted at the time, but which posts, with the benefit of hindsight, you're most grateful to have been published, and which are well suited to be part of the foundation of future conversations.
This is in part an effort to reward the best writing, and in part an effort to solve the bandwidth problem (there were more than 2000 posts written in 2018) so that we can build common knowledge of the best ideas that came out of 2018.
With that aim, when I'm reviewing a post, the main question I'm asking myself is
What information can I give to other users to help them think clearly and accurately about whether a given post should be added to our annual journal?
A large part of the review phase is about producing inputs for our collective thinking. With that in mind, I’ve gathered some examples of things you can write that help others understand posts and their impacts.
1) Personal Experience Reports
There were a lot of examples of this in the nomination phase, which I found really useful, and would find useful to read more of. Here are some examples:
This post... may have actually had the single-largest effect size on "amount of time I spent thinking thoughts descending from it."
This post (and the rest of the sequence) was the first time I had ever read something about AI alignment and thought that it was actually asking the right questions. It is not about a sub-problem, it is not about marginal improvements. Its goal is a gears-level understanding of agents, and it directly explains why that's hard. It's a list of everything which needs to be figured out in order to remove all the black boxes and Cartesian boundaries, and understand agents as well as we understand refrigerators.
Used as a research source for my EA/rationality novel project, found this interesting and useful.
Until seeing this post, I did not have a clear way of talking about common knowledge. Despite understanding the concept fairly well, this post made the points more clearly than I had seen them made before, and provided a useful reference when talking to others about the issue.
One of my favorite posts, that encouraged me to rethink and redesign my honesty policy.
I have definitely linked this more than any other post.
More detail is also really great. I'd definitely encourage the above users to be more thorough about how the ideas in the post impacted them. Here's a nomination that had a bunch more detail about how the ideas have affected them.
In my own life, these insights have led me to do/considering doing things like:
• not sharing private information even with my closest friends -- in order for them to know in future that I'm the kind of agent who can keep important information (notice that there is the counterincentive that, in the moment, sharing secrets makes you feel like you have a stronger bond with someone -- even though in the long-run it is evidence to them that you are less trustworthy)
• building robustness between past and future selves (e.g. if I was excited about and had planned for having a rest day, but then started that day by working and being really excited by work, choosing to stop working and rest, so that different parts of me learn that I can make and keep inter-temporal deals (even if work seems higher-EV in the moment))
• being more angry with friends (on the margin) -- to demonstrate that I have values and principles and will defend those in a predictable way, making it easier to coordinate with and trust me in future (and making it easier for me to trust others, knowing I'm capable of acting robustly to defend my values)
• thinking about, in various domains, "What would be my limit here? What could this person do such that I would stop trusting them? What could this organisation do such that I would think their work is net negative?" and then looking back at those principles to see how things turned out
• not sharing passwords with close friends, even for one-off things -- not because I expect them to release or lose it, but simply because it would be a security flaw that makes them more vulnerable to anyone wanting to get to me. It's a very unlikely scenario, but I'm choosing to adopt a robust policy across cases, and it seems like useful practice
A special case here is data from the author themselves, e.g. “Yeah, this has been central to my thinking” or “I didn’t really think about it again” or “I actually changed my mind and think this is useful but wrong”. I would generally be excited for users to review their own posts now that they've had ~1.5 years of hindsight, and I plan to do that for all the posts I've written that were nominated.
If a post had a big or otherwise interesting impact on you, consider writing that up.
2) Big Picture Analysis (e.g. Book Reviews)
There are lots of great book reviews on the web that really help the reader understand the context of the book, and explain what it says and adds to the conversation.
Some good examples on LessWrong are the reviews of Pearl's Book of Why [LW · GW], The Elephant in the Brain [LW · GW], The Secret of Our Success [LW · GW], Consciousness Explained [LW · GW], Design Principles of Biological Circuits [LW · GW], The Case Against Education [LW · GW] (part 2 [LW · GW], part 3 [LW · GW]), and The Structure of Scientific Revolutions [LW · GW].
Many of these reviews do a great job of things like
- Talking about how the post fits into the broader conversation on that topic
- Trying to pass the ITT of the author by explaining how they see the world
- Looking at that same topic through their own worldview
- Pointing out places they see things differently and offering alternative hypotheses.
Many of the posts we’re reviewing are shorter than most of the reviews I linked to, so the format doesn’t apply literally, but much of the spirit of these reviews carries over. Also check out other short book reviews and consider writing something in that style (e.g. SSC, Thing of Things).
Consider picking a book review style you like and applying it to one of the nominated posts.
3) Testing Subclaims (e.g. Epistemic Spot Checks)
Elizabeth Van Nostrand has written several posts in this style.
- Epistemic Spot Check: The Role of Deliberate Practice in the Acquisition of Expert Performance [LW · GW]
- Epistemic Spot Check: Full Catastrophe Living (Jon Kabat-Zinn) [LW · GW]
- Epistemic Spot Check: The Dorito Effect (Mark Schatzker) [LW · GW]
For another example, in Scott's review of Secular Cycles [LW · GW], one way he tried to think about the ideas in the book was to gather a bunch of alternative data sets on which to test some of the author’s claims.
These aren't meant to be full reviews of the entire book or paper, or advice on how to judge it overall. They take narrower questions that are definitively answerable, such as whether a random sample of testable claims is literally true, and answer them as fully as possible.
If there is an important subclaim of a post you think you can check, consider trying to verify or falsify it and writing up your results, even partial ones.
Go forth and think out loud!