2020 Review Article

post by Vaniver · 2022-01-14T04:58:02.456Z · LW · GW · 3 comments

Contents

  Rationality
  Gears
  Economics
  History
  Current Events
  Communication
  Alignment
  Conclusion
3 comments

A common thing in academia is to write ‘review articles’ that attempt to summarize a whole field quickly, allowing researchers to see what’s out there (while referring them to the actual articles for all of the details). This is my attempt to do something similar for the 2020 Review [LW · GW], focusing on posts that had sufficiently many votes (as covering all nominated posts would have been a few too many).

I ended up clustering the posts into seven categories: rationality, gears, economics, history, current events, communication, and alignment.

Rationality

The site doesn't have a tagline anymore, but interest in rationality remains [? · GW] Less Wrong's defining feature.

There were a handful of posts on rationality 'directly'. Anna Salamon looked at two sorts of puzzles: reality-masking and reality-revealing [LW · GW], i.e. those about controlling yourself (and others) versus those about understanding non-agentic reality. Listing out examples [LW · GW] (both internal and external) helped explain cognitive biases more simply. Kaj Sotala elaborated on the Felt Sense [LW · GW], a core component of Gendlin’s Focusing. CFAR released its participant handbook [LW · GW]. Jacob Falkovich wrote about the treacherous path to rationality [LW · GW], focusing on the various obstacles in the way of developing more rationality.

Personal productivity is a perennial topic on LW. alkjash identified a belief that ‘pain is the unit of effort’, where caring is measured by suffering, and identified an alternative, superior view [LW · GW]. Lynette Bye gave five specific high-variance tips for productivity [LW · GW], and later argued that prioritization is a huge driver of productivity, explaining five ways to prioritize better [LW · GW]. AllAmericanBreakfast elaborated on what it means to give something a Good Try [LW · GW]. adamShimi wrote about how habits shape identity [LW · GW]. Ben Kuhn repeated Byrne Hobart’s claim that focus drives productivity, argued that attention is your scarcest resource [LW · GW], and then talked about tools for keeping focused [LW · GW]. alkjash pointed out some ways success can have downsides [LW · GW], and how to mitigate those downsides.

orthonormal discussed the impact of zero points [LW · GW], and thus the importance of choosing yours. Jacob Falkovich argued against victim mentality [LW · GW].

There was some progress on the project of 'slowly digest some maybe-woo things'. Kaj Sotala gave a non-mystical explanation of “no-self” [LW · GW], detailing some 'early insights' into what it means, as part of his sequence on multiagent models of mind. Ouroboros grappled with Valentine’s Kensho [LW · GW]. I wrote a post about how Circling (the social practice) focuses on updating based on experience [LW · GW] in a way that makes it deeply empirical.

Gears

John Wentworth wrote a sequence, Gears Which Turn the World [? · GW], which had six nominated posts. The first post discussed constraints, and how technology primarily acts by changing the constraints [? · GW] on behavior. Later posts then looked at different types of constraints, and examples where that constraint is the tight constraint / scarce resource: coordination [LW · GW], interfaces [LW · GW], and transportation [LW · GW]. He argued that money cannot substitute for expertise [LW · GW] on how to use money, twice [LW · GW].

Other posts contained thoughts on how to develop better models and gearsy intuitions. While our intuitive sense of space is low-dimensional, much of our decision-making and planning happens in high-dimensional spaces, where we benefit from applying heuristics trained on high-dimensional optimization [LW · GW] and geometry. Ideas from statistical mechanics [LW · GW] apply in many situations of uncertainty. Oliver Habryka and Eli Tyre described how to Fermi Model [LW · GW]. Maxwell Peterson used animations to demonstrate how quickly the central limit theorem applies [LW · GW] for some distributions. Mark Xu talked about why the first sample is the most informative [LW · GW] when estimating an uncertain quantity.
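
To give a concrete flavor of the central limit theorem point, here's a minimal sketch of my own (not a reproduction of the post's animations): it averages draws from a skewed distribution and checks how quickly the skewness of the sample means falls toward zero, the value a normal distribution would have.

```python
import numpy as np

rng = np.random.default_rng(0)

for n in (1, 2, 5, 30):
    # 10,000 sample means, each averaging n draws from an exponential
    # distribution (skewness 2); the CLT says these approach a normal.
    means = rng.exponential(scale=1.0, size=(10_000, n)).mean(axis=1)
    centered = means - means.mean()
    skew = (centered ** 3).mean() / centered.std() ** 3
    print(f"n={n:>2}: skewness of sample means = {skew:.2f}")
```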

Scott Alexander wrote Studies on Slack [LW · GW], which goes through examples to see the impact of different amounts of slack, and what dynamics lead to more or less of it.

landfish evaluated the evidence and theories and suggested nuclear war is unlikely to be an x-risk [LW · GW].

John Wentworth summarized Working With Contracts [LW · GW]. A babble challenge on generating (within an hour) 50 ways to send something to the moon [LW · GW] drew 29 responses. dynomight wondered: what happens if you drink acetone? [LW · GW], an example, to paraphrase a comment, of boggling at simple things and posting the results to the internet.

Economics

LessWrong has lots of systems thinkers; economics has been a perennial interest since the Sequences' inception on an economist's blog. Comparative advantage is not about trade [LW · GW], but about production. Talking about comparative advantage also sometimes involves talking about negotiation [LW · GW]. Credit-allocation is often imperfect, and so it’s useful to think about incentive design that takes that into account [LW · GW]. Paul Christiano thought about moral public goods [LW · GW].
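
As a toy illustration of the production framing (the numbers below are hypothetical, not from the post): even with no trade at all, a team produces more by assigning tasks according to comparative advantage, i.e. lowest opportunity cost, rather than absolute skill.

```python
HOURS = 10          # hours available to each worker
CHEESE_NEEDED = 20  # cheese the team must produce

# Units produced per hour: Alice is absolutely better at both tasks.
rates = {"alice": {"bread": 6, "cheese": 3},
         "bob":   {"bread": 1, "cheese": 2}}

def bread_output(cheese_maker: str) -> float:
    """Max bread produced when `cheese_maker` covers the cheese quota first."""
    other = "bob" if cheese_maker == "alice" else "alice"
    cheese_hours = CHEESE_NEEDED / rates[cheese_maker]["cheese"]
    return ((HOURS - cheese_hours) * rates[cheese_maker]["bread"]
            + HOURS * rates[other]["bread"])

# Bob gives up only 0.5 bread per cheese (Alice gives up 2), so Bob makes
# the cheese even though Alice is absolutely faster at it.
print(bread_output("bob"))    # 60.0 bread, quota met
print(bread_output("alice"))  # 30.0 bread, quota met
```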

Buck talked about six economics misconceptions [LW · GW] of his that he recently resolved. Richard Meadows defended the Efficient Market Hypothesis [LW · GW] in the wake of COVID, and Wei Dai responded with some specific inefficiencies [LW · GW], and asked for help timing the SPAC bubble. aphyer looked into the limits of PredictIt’s ability to track truth [LW · GW].

philh looked at all possible two-player simultaneous symmetric games in normal form, and classified them [LW · GW]. Abram Demski argued that most analysis of game theory misidentifies the relevant games [LW · GW].

History

Things happened in the past; we talk about them sometimes. Often historical examples can help ground our modeling efforts, as when AI Impacts searched for examples of discontinuous progress in history [LW · GW].

Mostly written about by Jason Crawford [LW · GW], Progress Studies seeks to understand what causes progress and thus better understand what interventions would make the world better. It’s grown more nuanced after contact with thinking on x-risk, and seeks a ‘new theory of progress’ that might care more about things like differential tech development. While he started crossposting on LW in 2019, 2020 saw 8 of his posts in the review, of which the best-liked was Industrial literacy [LW · GW], which argued that understanding the basics of how industrial society works helps reframe the economy as ‘solutions to problems’, which makes the world much more sensible (and perhaps makes people's desired interventions much more sensible).

Gwern wrote about his personal progress (and the progress he saw in the world) from 2010-2019 [LW · GW]. In 2009, DARPA built a digital tutor [LW · GW] that educated much more effectively than traditional classrooms.

Not only can we model the past, we can look at people in the past modeling the past (and future). Jan Bloch learned from the Franco-Prussian war that wars were getting much more damaging and less beneficial [LW · GW], and that the old style of warfare was on the way out; he tried to stop WWI and failed. Wei Dai shared his grandparents’ story of Communist China [LW · GW].

Martin Sustrik wrote about the Swiss Political System [LW · GW]. Anna Salamon asked where stable, cooperative institutions came from [LW · GW]. Zvi wrote about the dynamics and origins of moral mazes [LW · GW]. Julia Wise shared notes on “The Anthropology of Childhood” [LW · GW]. jefftk wrote about growing independence [LW · GW] for his two young children.

Current Events

Things keep happening in the present; we talk about them sometimes too.

Anti-Aging is much further along [LW · GW] than it looked in 2015; over a hundred companies are deliberately targeting it and plausibly there will be evidence of therapeutic success in 2025-2030. John Wentworth speculated about aging’s impact on the thymus [LW · GW] and what could be done about it.

COVID spread to the world. Practical advice [LW · GW] was collected. Smoke was seen [LW · GW]. Points of leverage [LW · GW] were discussed. The CDC was fact-checked [LW · GW]. Authorities and Amateurs [LW · GW] were compared. John Wentworth asked how hard it would be to make a COVID vaccine [LW · GW]. Zvi analyzed the reaction through the lens of simulacra levels [LW · GW]. catherio announced microCOVID.org [LW · GW]. Zvi predicted in December [LW · GW] that there would be a large wave of infections in March-May, which didn’t come to pass; he detailed in an edit how the data he had at the time led to his prediction.

Biden’s prediction market price was too low, according to deluks917 [LW · GW]. reallyeli asked if superforecasters are real [LW · GW]; David Manheim said yes. niplav investigated the impact of time-until-event on forecasting accuracy [LW · GW], finding that long-run questions are easier to predict than short-run questions, but events are easier to predict closer to their time of resolution.

SuspendedReason interviewed a professional philosopher about LessWrong [LW · GW], highlighting adjacent ideas in contemporary philosophy and particularly friendly corners of that space.

Wei Dai asked if epistemic conditions have always been this bad [LW · GW]; responses were mixed (with "no, it's worse now" seeming to have a bit more weight behind it).

Richard Korzekwa talked about fixing indoor lighting [LW · GW].

Katja Grace posted a photo of an elephant seal [LW · GW].

Communication

Sometimes we talk about talking itself.

Ben Hoffman asked if crimes can be discussed literally [LW · GW], as straightforward interpretations of behavior often rely on ‘attack words’, making it difficult to have clear conversations. Elizabeth considered negative feedback [LW · GW] thru the lens of simulacra levels.

Malcolm Ocean crossposted his 2015 writing about Reveal Culture [LW · GW], an amendment to the Tell Culture [LW · GW] model from 2014. Ben Kuhn claimed curiosity is a core component of listening well [LW · GW]. Buck thought about criticism [LW · GW], noticing that he gets a lot of value from being criticized, and thinking about how to make it happen more. MakoYass outlined a way to build parallel webs of trust [LW · GW].

Zvi described Motive Ambiguity [LW · GW], where one might take destructive actions to reduce ambiguity and so signal one’s preferences or trustworthiness. Raemon talked about practicalities of confidentiality [LW · GW], and applying the fundamental question of rationality [? · GW] to it.

Alignment

Things will happen in the future; we talk about that sometimes.

The AI Alignment field has grown significantly over the years, and much of the discussion about it happens on the Alignment Forum, which automatically crossposts to LessWrong.

Some posts collected, reviewed, and categorized previous work. Rohin Shah reviewed work done in 2018-2019 [LW · GW]. Evan Hubinger overviewed 11 proposals for building safe advanced AI [LW · GW]. Andrew Critch laid out some AI research areas and their relevance to existential safety [LW · GW]. Richard Ngo published AGI safety from first principles [LW · GW], which grew from a summary of many people's views into his own detailed view.

Other work attempted to define relevant concepts. Evan Hubinger clarified his definitions of alignment terminology [LW · GW]. Alex Flint attempted to ground optimization [LW · GW] with a clear definition and many examples. John Wentworth wrote about abstraction [LW · GW]. Alex Flint compared search and design [LW · GW].

Paul Christiano wrote precursors to his current research on Eliciting Latent Knowledge [? · GW]: Inaccessible Information [LW · GW], Learning the prior [LW · GW], and Better priors as a safety problem [LW · GW]. Evan Hubinger argued that Zoom In [LW · GW] by Chris Olah gives other researchers a foundation to build off of. nostalgebraist built a lens to interpret GPT [LW · GW]. Beth Barnes et al. summarized Progress on AI Safety via Debate [LW · GW], and then Barnes discussed obfuscated arguments in more detail [LW · GW]. Mark Xu suspected that SGD favors deceptively aligned models [LW · GW]. Scott Garrabrant introduced Cartesian Frames [LW · GW].

A forecasting thread resulted in a collection of AI timelines [LW · GW]. hippke looked at Measuring hardware overhang [LW · GW] thru backdating modern solutions to older hardware. Ajeya Cotra released her draft report on timelines [LW · GW] to get feedback. Daniel Kokotajlo examined conquistadors as precedents for takeover [LW · GW] of human societies (concluding that small advantages can be enough to enable takeover, especially if you can take advantage of pre-existing schisms in your target society), observed that the visible event when AI takes over is preceded by the point of no return at which its takeover is inevitable [LW · GW], and argued against using GDP as a metric for timelines and takeoff [LW · GW]. Lanrian extrapolated GPT-N performance [LW · GW]. Stuart Armstrong assessed Kurzweil’s predictions about 2019 [LW · GW] (half of them turned out false).

Steven Byrnes outlined his computational framework for the brain [LW · GW] (drawing heavily on human neuroscience), wrote about inner alignment in the brain [LW · GW], and gave the specific example of inner alignment in salt-starved rats [LW · GW], where rats are able to identify the situational usefulness of salt in a way current RL algorithms can’t. Alex Zhu investigated cortical uniformity [LW · GW], ultimately thinking it’s plausible.

Chi Nguyen attempted to understand Paul Christiano’s Iterated Amplification [LW · GW]. Rafael Harth explained inner alignment like I’m 12 [LW · GW].

John Wentworth argued that alignability is a bottleneck to generating economic value [LW · GW] for things like GPT-3. He also described what it might look like to get alignment by default [LW · GW]. In the Pointers Problem [LW · GW], he argued that human values are a function of humans’ latent variables.

Jan Kulveit noted that there’s a ‘box inversion [LW · GW]’, or duality, between the alignment problems as seen by Agent Foundations and Comprehensive AI Services. John Wentworth outlined Demons in Imperfect Search [LW · GW], and then DaemonicSigil built a toy model of it [LW · GW]. Diffractor wrote up a sequence on Infra-Bayesianism [LW · GW], with the key post on inframeasure theory [LW · GW] also making it into the review. Joar Skalse discussed research on why neural networks generalize [LW · GW], with lots of discussion in the comments. nostalgebraist thought GPT-3 was disappointing [LW · GW] and later explained an OpenAI insight about scaling [LW · GW] (that data would become the tight constraint instead of compute, moving past GPT-3).

Abram Demski presented two alternative views of ‘utility functions’ [LW · GW]: the ‘view from nowhere’, defined over the base elements of reductionism, and the ‘view from somewhere’, defined over perceptual events; he favored the second. He then discussed Radical Probabilism [LW · GW], in which Richard Jeffrey expands the range of possible updates beyond strict Bayesian updates. In The Bayesian Tyrant [LW · GW], he gave a simple parable of futarchy, further developing this view.

Conclusion

A lot happened on LW over the course of the year! The main thing that seemed noteworthy, reading thru the review, was just how much alignment stuff there was. [This could be an artifact of more people interested in alignment voting in the review, but I think this matches my memory of the year.] 

One of the things that surprised me was how much continuity there was between posts from 2020 and things that people are writing about now. Part of this is because of COVID, but I think part of it is a sign of research interests maturing: rather than a handful of people chasing fads, the community is now a considerably larger set of people working on steady accumulation in narrower subfields.

3 comments

comment by Yoav Ravid · 2022-01-15T16:51:48.793Z · LW(p) · GW(p)

This was very fun to read, thanks! I like this post format cause it helps to understand what was written about better than just seeing a list of post titles. I expect that this would also be a good article to send someone who wonders what sort of articles are posted on LW.

comment by philh · 2022-01-18T21:30:14.019Z · LW(p) · GW(p)

philh looked at all possible two-player simultaneous games in normal form

Note: just the symmetric ones.

Replies from: Vaniver
comment by Vaniver · 2022-01-18T22:09:20.833Z · LW(p) · GW(p)

Fixed.