Ben Pace's Controversial Picks for the 2020 Review
post by Ben Pace (Benito) · 2021-12-27T18:25:30.417Z · LW · GW · 8 comments
This year, the LessWrong books had about 187,000 words in them, covering the top 59 posts from last year's review.
If we count up the posts in the early vote this year, then we get the top 43 posts. Basically, it’s everything up-to-and-including “Forecasting Thread: AI Timelines”. The spreadsheet where I did the math is here.
Now, we may do something fairly different with the results of the review this year. But for now I'm going to run with this as a "passed review" and "didn't pass review" watermark. Then in this post I'm going to make my case for 15 underrated posts in the review. (I encourage others to try this frame out for prioritizing which posts to review.)
Note that I'm about to defend my picks that were controversial within the LW crowd, which is a fun and weird optimization criterion. I'm not going to talk about the super defensible posts or the posts everyone here loved, but the posts many people don't share my impressions of, in the hope that people change their votes. Here goes.
Covid
First are my three Covid picks.
Covid-19: My Current Model [LW · GW] was where I got most of my practical Covid updates. It's all so obvious now, but: risk follows a power law (i.e. I should focus on reducing my riskiest 1 or 2 activities), surfaces are mostly harmless (this was when I stopped washing my packages), outdoor activity is relatively harmless (my housemates and I stopped avoiding people on the street around this time), and more. I give this +4.
A Significant Portion of COVID-19 Transmission is Presymptomatic [LW · GW] also argued for something that is blindingly obvious now, but a real surprise to me at the time. Covid has an incubation period of up to 2 weeks at the extreme, where you can have no symptoms but still give it to people. This totally changed my threat model, where I didn't need to know if someone was symptomatic, but instead I had to calculate how much risk they took in the last 7-14 days. The author got this point out fast (March 14th). I give this +4.
Crisis and opportunity during coronavirus [LW · GW] seemed cute to me at the time, and now I feel like an idiot for not taking it more seriously. My point here is "this post was really right in retrospect and I should've listened to it at the time". This post, combined with John's "Making Vaccine", has led me to believe I was in a position to create large amounts of vaccine during the pandemic, at least narrowly for my community, and (more ambitiously) to make very large amounts (100k+ doses) in some country with weak regulation, where I could have sold it. I'm not going to flesh out the argument here, and it's not airtight, but it was really bad that I didn't seriously consider this until 2021. The post was also out very early (March 12th). I give this a +4.
Mazes
Okay, it's time to review the Mazes sequence. [? · GW]
We have 17 posts, summing to 46,000 words. That's nearly a quarter of last year's book.
The sequence is an extended meditation on a theme, exploring from lots of perspectives how large projects and large coordination efforts end up being eaten by Moloch. The specific perspective reminds me a bit of The Screwtape Letters. In The Screwtape Letters, the two devils are focused on causing people to be immoral. The explicit optimization for vices and personal flaws helps highlight (to me) what it looks like when I'm doing something really stupid or harmful within myself.
Similarly, this sequence explores the perspective of large groups of people who live to game a large company, not to actually achieve the goals of the company: what that culture looks like, what is rewarded, what it feels like to be in it.
I've executed some of these strategies in my life. I don't think I've ever lived the life of the soulless middle-manager stereotyped by the sequence, but I see elements of it in myself, and I'm grateful to the sequence for helping me identify those cognitive patterns.
Something the sequence really conveys, is not just that individuals can try to game a company, but that a whole company's culture can change such that gaming-behavior is expected and rewarded. It contains a lot of detail about what that culture looks and feels like.
The sequence (including the essay "Motive Ambiguity") has led me to see how, in such an environment, groups of people can end up optimizing for the opposite of their stated purpose.
The sequence doesn't hold together as a whole to me. I don't get the perfect or superperfect competition idea at the top. Some of the claims seem like a stretch or not really argued for, just completing the pattern when riffing on a theme. But I'm not going to review the weaknesses here, my goal is mostly to advocate for the best parts of it that I'd like to see score more highly in the book.
My three picks are:
The Road to Mazedom [? · GW] is the best precis of the whole sequence. It's the one to read to get all the key points. High in gears, low in detail.
Create a Full Alternative Stack [LW · GW] is probably in the top 15 ideas I got from LW in 2020. Thinking through this as an option has helped me decide when and where to engage with "the establishment" in many areas (e.g. academia). In some parts of my life I work with the mazes while trying not to get too much of it on me, and in some parts I try to build alternative stacks. (Not the full version; I don't have the time to fix all of civilization.)
Protecting Large Projects Against Mazedom [LW · GW] is all key advice that seemed unintuitive to me when I was getting started doing things in the world, but now all the advice seems imperative to me. I've learned a bunch of this by doing it "the hard way" I guess.
(Also Moloch Hasn't Won [LW · GW] but that was in last year's review and books, so skipping it here.)
(Also Motive Ambiguity [LW · GW], but everyone already agrees with me on that, and also it's not technically part of the sequence.)
Overall, I don't know if this all works out, but it's my current bet on which posts should go into a hypothetical book. Also they're all short, only summing to 1200 + 2000 + 1200 + 1800 = 6200 words (including Motive Ambiguity), which is about 15% of the sequence length, but I claim gets like 50% of the value.
Agent Foundations
There were a couple of truly excellent posts in the quest to understand foundational properties of agents, an area of research that I would like to see go much further, that may eventually give us a strong tool for aligning the agents we one day build. (It's a pipe-dream at the minute, but it does seem like a piece of a solution, so even though I don't have the other pieces I am happy to pump resources into this piece when there's traction.)
And I really like the work done in 2020. My picks are:
An Orthodox Case Against Utility Functions [LW · GW] was a shocking piece to me. Abram spends the first half of the post laying out a view he suspects people hold, but which he thinks is clearly wrong: a perspective that approaches things "from the starting-point of the universe". I felt dread reading it, because it was a view I held at the time, and one I used as a key background perspective when discussing Bayesian reasoning. The rest of the post lays out an alternative perspective that "starts from the standpoint of the agent". Instead of my beliefs being about the universe, my beliefs are about my experiences and thoughts.
I generally nod along to a lot of the 'scientific' discussion in the 21st century about how the universe works and how reasonable the whole thing is. But I don't feel I knew in-advance to expect the world around me to operate on simple mathematical principles and be so reasonable. I could've woken up in the Harry Potter universe of magic wands and spells. I know I didn't, but if I did, I think I would be able to act in it? I wouldn't constantly be falling over myself because I don't understand how 1 + 1 = 2 anymore? There's some place I'm starting from that builds up to an understanding of the universe, and doesn't sneak it in as an 'assumption'.
And this is what this new perspective does that Abram lays out in technical detail. (I don't follow it all, for instance I don't recall why it's important that the former view assumes that utility is computable.) In conclusion, this piece is a key step from the existing philosophy of agents to the philosophy of embedded agents, or at least it was for me, and it changes my background perspective on rationality. It's the only post in the early vote that I gave +9.
(At this point in this post I'm getting tired and will try to write shorter comments.)
Introduction to Cartesian Frames [LW · GW] is a piece that also gave me a new philosophical perspective on my life.
I don't know how to simply describe it. I don't know what even to say here.
One thing I can say is that the post formalized the idea of having "more agency" or "less agency", in terms of "what facts about the world can I force to be true?". The more I approach the world by stating things that are going to happen, that I can't change, the more I'm boxing-in my agency over the world. The more I treat constraints as things I could fight to change, the more I have power and agency over the world. If I can't imagine a fact being false, I don't have agency over it. (This applies to mathematical and logical claims too, which ties into logical induction and decision theory.)
Writing this review, I realize the idea is of a piece with my post "Taking your environment as object" vs "Being subject to your environment" [LW · GW], which is another chunk of this element of growth I've experienced in the last year.
Anyway, that was a big deal — the first few times I read the math of cartesian frames I didn't get the idea at all, then after seeing some examples and reflecting on it, it clicked and helped me understand this whole thing better.
(Also that Scott has formalized it is very valuable and impressive, and even more so is this notion of factorizations of a set and the apparently new sequence he discovered which is insane and can't be true. Factorization of a set seems like the third thing you'd invent about sets once you thought of the idea, and if Scott discovered it in 2020 I'll be like wtaf.)
(But this is not the primary reason I'm endorsing it in the review. The primary reason is that it captures something that seems philosophically important to me.)
In retrospect I'm bumping this up to a +9 for the review. I didn't think about it properly in the early vote, and it's a lot of technical stuff and I forgot about the core concepts I got from it.
Radical Probabilism [LW · GW] and The Bayesian Tyrant [LW · GW] are both extensions of the Embedded Agency philosophical position. I remember reading the former and feeling a strong sense that I really got to see a well pinned-down argument in that philosophy. I won't reread it now because I'm busy. The Bayesian Tyrant is a story told using that understanding, and it is fun and fleshes out lots of parts of Bayesian rationality. I recommend them both. +4. Radical Probabilism might be a +9; I will have to re-read.
Simulacra
Okay, the Simulacra posts were another big idea I got from 2020. Basically, everyone is right that "Simulacra Levels and their Interactions" is the best single post to read on the subject, and I'm satisfied it's in the top half of the posts making the hypothetical cut.
So here's my controversial pick.
The Four Children of the Seder as the Simulacra Levels [LW · GW] is an interpretation of a classic Jewish reading through the lens of simulacra levels. It makes an awful lot of sense to me, helps me understand them better, and also engages the simulacra levels with the perspective of "how should a society deal with these sorts of people/strategies". I feel like I got some wisdom from that, but I'm not sure how to describe it. Anyway, I give this post a +4.
Assorted Posts
Below are four more reviews.
What are some beautiful, rationalist artworks? [LW · GW] has many pieces of art that help me resonate with what rationality is about.
Look at this statue.
That's the first piece; there are many more that help me have a visual handle on rationality. I give this post a +4.
Can crimes be discussed literally? [LW · GW] makes a short case that when you straightforwardly describe misbehavior and wrongdoing, people commonly criticize the language you use, reading it as an attempt to attack the parties you're talking about. At the time I didn't think that this was my experience, and thought the post was probably wrong and confused. I don't remember when I changed my mind, but nowadays I'm much more aware of requests on me to not talk about what a person or group has done or is doing. I find myself the subject of such requests quite a lot, and I think past versions of myself would have thought these requests reasonable. Anyway, my point is this post was right about something important, so I give it a +4.
The Skewed and the Screwed: When Mating Meets Politics [LW · GW] is a post that compellingly explains the effects of gender ratios in a social space (a college, a city, etc).
There's lots of simple effects here that I never noticed. For example, if there's a 55/45 split of the two genders (just counting the heterosexual people), then the minority gender gets an edge of selectiveness, which they enjoy (everyone gets to pick someone they like a bit more than they otherwise would have), but for the majority gender, 18% of them do not have a partner. It's really bad for the least liked people in the majority group. Lack of a partner can lead to desperation and all sorts of unpleasant experiences.
This post walks through a bunch of effects like this and explains what's going on in the world. Also it's got lots of diagrams and jokes and is very engagingly written. I learned a lot from it about modern mating dynamics, and I give it a +4.
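The arithmetic behind that 18% figure can be sketched in a few lines of Python (a hedged illustration; the function name and the one-to-one pairing assumption are mine, not the post's):

```python
def unmatched_majority_fraction(majority_share: float) -> float:
    """Fraction of the majority gender left without a partner,
    assuming one-to-one heterosexual pairing and that everyone
    who can pair up does."""
    minority_share = 1.0 - majority_share
    # Every minority member pairs with a majority member; the surplus
    # of the majority has no one left to pair with.
    return (majority_share - minority_share) / majority_share

# A 55/45 split leaves about 18% of the majority unmatched.
print(f"{unmatched_majority_fraction(0.55):.1%}")
```

Note how steeply this grows: a seemingly mild 55/45 imbalance already strands nearly a fifth of the majority group.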
Elephant seal [LW · GW] is a picture of an elephant seal. It has a mysterious Mona Lisa smile that I can't pin down, that shows glee, intent, focus, forward-looking-ness, and satisfaction. It's fat and funny-looking. It looks very relaxed on the sand. I give this post a +4.
Didn't get to
The other low-ish scoring posts I didn't get around to reviewing but think are pretty good are Transportation as a Constraint [LW · GW], Assessing Kurzweil predictions about 2019 [LW · GW], Tools for keeping focused [LW · GW], The First Sample Gives the Most Information [LW · GW], Search versus Design [LW · GW], and The Darwin Game - Conclusion [LW · GW]. I give them all +4.
8 comments
Comments sorted by top scores.
comment by MondSemmel · 2021-12-27T19:54:47.457Z · LW(p) · GW(p)
Assessing Kurzweil predictions about 2019 [LW · GW], Tools for keeping focused [LW · GW]
These both link to the wrong posts.
↑ comment by Ben Pace (Benito) · 2021-12-28T15:07:12.068Z · LW(p) · GW(p)
Fixed.
comment by Sherrinford · 2021-12-28T15:52:50.882Z · LW(p) · GW(p)
Thanks for explicitly pointing out which Mazes posts might be worthwhile reads. Because people seemed to be excited about the quality of the Mazes sequence, I started reading it and stopped somewhere in the Perfect Competition post [? · GW], basically because of this:
"Which, for any compactly defined axis of competition we know about, destroys all value [LW · GW].
This is mathematically true.
Yet value remains.
Thus competition is imperfect."
If there are posts later in the sequence that differ from that style and content, I'd like to give them a try. Maybe I'll just read those you nominated.
↑ comment by Ben Pace (Benito) · 2021-12-28T23:31:59.534Z · LW(p) · GW(p)
Glad to hear that was helpful. I do recommend reading the rest of the sequence.
↑ comment by Zvi · 2021-12-29T00:18:41.853Z · LW(p) · GW(p)
There are a bunch of people who think the competition posts aren't great but who like what happens after that. To me they are important groundwork but in practice many don't agree.
↑ comment by Sherrinford · 2021-12-29T09:48:07.529Z · LW(p) · GW(p)
Thanks for commenting. When reading the part I quoted, I was put off by the claim itself (which is false unless I overlook or misunderstand something) but also by the style of just claiming something to be "mathematically true" without demonstrating it. To me that just seems like an argument from authority. But I will just assume the rest of the sequence is not like that.
↑ comment by Zvi · 2021-12-29T11:29:44.145Z · LW(p) · GW(p)
Yeah it's not like that. Wasn't meant to be authority, was meant to be 'this is math and true by definition' but I see how you got that interpretation.
↑ comment by Sherrinford · 2021-12-29T19:29:39.507Z · LW(p) · GW(p)
Ok, thanks. Maybe it would be helpful if you defined what you mean by "value"? Maybe you meant (surplus) profit instead of "value"?