Curating "The Epistemic Sequences" (list v.0.1)
post by Andrew_Critch · 2022-07-23T22:17:08.544Z · LW · GW · 12 comments
Summary for regular readers: The epistemic content of The Sequences — i.e., the advice on finding true beliefs — has a different epistemic status than the instrumental content — i.e., the advice on how to behave. Specifically, the epistemic content is based upon techniques from logic, probability, statistics, and causal inference that have already been heavily explored and vetted, both formally and empirically, in a wide range of disciplines and application areas. This is not true of the how-to-behave content of the sequences, such as the stances presented on individual heroism and what does or does not constitute humble behavior. Indeed, the analogous underpinning fields — namely, decision theory, game theory, ethics, meta-ethics, and political theory — are not nearly as well explored-and-vetted as the epistemic underpinnings. As such, the epistemic content of the sequences should be made available in a separate compendium, curated for its special epistemic status. This post is one attempt at that, which I'd be happy to have replaced or superseded by a more official attempt from, say, the LessWrong team or Eliezer Yudkowsky.
Followed by: What's next for instrumental rationality? [LW · GW]
Introduction for newcomers
The good
The "Epistemic Sequences" curated below are taken from the writings of Eliezer Yudkowsky. Yudkowsky's writings cover some of the best introductions I've seen for helping individual people to start forming true beliefs about the world, or at least, beliefs that become progressively less wrong over time. The Epistemic Sequences describe processes to follow to form true beliefs, rather than merely conclusions to reach, and the prescribed processes are well backed by logic, statistics, and causal inference. They draw inspiration from well-written, tried-and-true textbooks like these:
- (2000) H. B. Enderton: A Mathematical Introduction to Logic
- (2003) E.T. Jaynes: Probability Theory: The Logic of Science
- (2000/2009) J. Pearl: Causality: Models, Reasoning, and Inference*
(* J. Pearl: Causal Inference in Statistics: A Primer is a more recent and friendlier intro.)
The questionable
Epistemic status: broadly verifiable claims about fields of theory and practice and their relation to the LessWrong Sequences.
For better or worse, the broader LessWrong Sequences [? · GW] and Sequence Highlights [? · GW] also contain a lot of strong stances on how to behave, i.e., what decisions and actions to take given one's beliefs, including considerations of ethics, meta-ethics, and human values. There are strong themes of unilateral heroism, how to be humble, and how to best protect and serve humanity. Compared to the epistemic aspects of the sequences, these how-to-behave aspects are not founded in well-established technical disciplines like logic and statistics, even though they have a tendency to motivate people to form true beliefs and attempt to help the world. The analogous underpinning fields — including decision theory, game theory, ethics, meta-ethics, and political theory — are nowhere near as well explored-and-vetted as logic and statistics.
My take
Epistemic status: my opinion as a researcher professionally dedicated to helping the world.
Feel free to skip this part if you're not particularly interested in my opinion.
That said: I think the how-to-behave themes of the LessWrong Sequences are at best "often wrong but sometimes motivationally helpful because of how they inspire people to think as individuals and try to help the world", and at worst "inspiring of toxic relationships and civilizational disintegration." I'm not going to try to argue this position here, though, because I think it would distract from the goodness of the epistemic content, which I'd like to see promoted in its purest possible form.
Also, I'd like to add that Eliezer Yudkowsky can't be "blamed" for the absence of ideal technical underpinnings for the how-to-behave aspects of the sequences. In fact, he and his colleagues at MIRI have made world-class attempts to improve these foundations through "Agent Foundations" research. E.g., Scott Garrabrant's discovery of Logical Inductors (on which I had the privilege of serving as a co-author) came out of a research direction inspired by Yudkowsky and funded by MIRI.
Importantly, the strength of my dislike for what I consider the 'toxic' aspects of the sequences is also not based on tried-and-true technical underpinnings, any more than Eliezer's original writings are. Like Eliezer, I've tried to advance research on foundations of individual and collective decision-making — e.g., negotiable reinforcement learning, fair division algorithms, Löbian cooperation, expert aggregation criteria, and equilibria of symmetric games — but even the new technical insights I've made along this journey do not have the tried-everywhere-and-definitely-helpful properties that logic, probability, statistics, and causal inference have.
Nonetheless, for the purpose of building a community of people who collectively pursue the truth and sometimes attempt to take collective actions, I think it's important to call out the questionability of the LessWrong Sequences' how-to-behave content, and to promote the epistemic content as valuable despite this critique.
The Epistemic Sequences, list v.0.1
The content below is taken from the Sequence Highlights [? · GW] created by the LessWrong team, with only strikethroughs and italicized interjections from me marked by carets (^).
Thinking Better on Purpose [? · GW]
Part 1 of 6 from the Sequence Highlights [? · GW].
Humans can not only think, but think about our own thinking. This makes it possible for us to recognize the shortcomings of our default reasoning and work to improve it – the project of human rationality.
33 min read
- The Lens That Sees Its Flaws [? · GW]
- What Do We Mean By "Rationality"? [? · GW]
- Humans are not automatically strategic [? · GW]
- ^ Flag: this was almost removed for its how-to-behave content, but kept because it's framed more as observation-about-what-happens than as what-you-should-do.
Use the Try Harder, Luke [? · GW] - ^ Removed for strong how-to-behave content.
- Your Strength as a Rationalist [? · GW]
- The Meditation on Curiosity [? · GW]
- The Importance of Saying "Oops" [? · GW]
- The Martial Art of Rationality [? · GW]
- Twelve Virtues of Rationality [? · GW]
- ^ Flag: this was almost removed because the 'humility' virtue is a particular prescription for how to behave, and because the framing of the post sounds like a treatise on how to live, but actually it's mostly about beliefs.
Pitfalls of Human Cognition [? · GW]
Part 2 of 6 from the Sequence Highlights [? · GW].
A major theme of the Sequences is the ways in which human reasoning goes astray. This sample of essays describes a number of failure modes and exhorts us to do better.
34 min read
- The Bottom Line [? · GW]
- Rationalization [? · GW]
- You Can Face Reality [? · GW]
- Is That Your True Rejection? [? · GW]
- Avoiding Your Belief's Real Weak Points [? · GW]
- Belief as Attire [? · GW]
- ^ Flag: this was almost removed for strong overtones about whether people should or should not engage in certain kinds of signaling, but kept because these overtones are not super-explicit, and because much of the post is observational.
- Cached Thoughts [? · GW]
- The Fallacy of Gray [? · GW]
- Lonely Dissent [? · GW]
- Positive Bias: Look Into the Dark [? · GW]
- ^ Flag: this was almost removed for having a strong overtone of encouraging pessimism (a bias), but kept because its explicit content is more about confirmation bias than social positivity (presumptions of good faith).
- Knowing About Biases Can Hurt People [? · GW]
- Politics is the Mind-Killer [? · GW]
- ^ Flag: this was almost removed for having a strong overtone encouraging people not to get involved in politics, but kept because the explicit content is mostly observational, and most of the advice is still about how to practice true-belief-formation.
The Laws Governing Belief [? · GW]
Part 3 of 6 from the Sequence Highlights [? · GW].
While beliefs are subjective, that doesn't mean that one gets to choose one's beliefs willy-nilly. There are laws that theoretically determine the correct belief given the evidence, and it's toward such beliefs that we should aspire.
82 min read
- Making Beliefs Pay Rent (in Anticipated Experiences) [? · GW]
- What is Evidence? [? · GW]
- Scientific Evidence, Legal Evidence, Rational Evidence [? · GW]
- How Much Evidence Does It Take? [? · GW]
- Absence of Evidence Is Evidence of Absence [? · GW]
- Conservation of Expected Evidence [? · GW]
- ^ Flag: this was almost removed because it's often misapplied, i.e., around a dozen readers I've met have cited this post as saying "you can't predict the direction of your belief updates", which is both wrong and not what the post actually says. Conclusion: this post needs to be read carefully to avoid confusion (see the short numerical sketch after this list for what the law does say).
- Argument Screens Off Authority [? · GW]
- An Intuitive Explanation of Bayes's Theorem [? · GW]
- The Second Law of Thermodynamics, and Engines of Cognition [? · GW]
- Toolbox-thinking and Law-thinking [? · GW]
Local Validity as a Key to Sanity and Civilization [? · GW] - ^ Removed for making a lot of claims about multi-agent coordination that are not well founded in logic, statistics, and causality (rather, in game theory, which is not as well vetted for real-world use).
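To make the flag on Conservation of Expected Evidence concrete, here is a minimal numerical sketch of the law referenced in this part's introduction. It is my illustration, not anything stated in the posts above, and the hypothesis and probabilities are made-up assumptions: posteriors are computed with Bayes' theorem, and while the direction of an update can be quite predictable, its probability-weighted average must equal the prior.

```python
# Minimal sketch (illustrative numbers only) of Bayesian updating and
# Conservation of Expected Evidence.

prior = 0.90            # P(H): prior credence in some hypothesis H
p_e_given_h = 1.00      # P(E | H): chance of seeing evidence E if H is true
p_e_given_not_h = 0.90  # P(E | not H)

# Marginal probability of observing E.
p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)   # 0.99

# Posterior for each possible observation, via Bayes' theorem.
post_if_e = p_e_given_h * prior / p_e                        # ~0.909
post_if_not_e = (1 - p_e_given_h) * prior / (1 - p_e)        # 0.0

# The *direction* of the update is predictable here: with 99% probability the
# credence rises slightly. What is conserved is the expectation: the rare
# downward update is large enough that the expected posterior equals the prior.
expected_posterior = p_e * post_if_e + (1 - p_e) * post_if_not_e

print(round(expected_posterior, 6), prior)  # prints: 0.9 0.9
```

In this toy setup you can be 99% sure your credence will go up, yet the expected posterior still equals the prior exactly, which is the claim the post actually makes.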
Science Isn't Enough [? · GW]
Part 4 of 6 from the Sequence Highlights [? · GW].
While far better than what came before, "science" and the "scientific method" are still crude, inefficient, and inadequate to prevent you from wasting years of effort on doomed research directions.
18 min read
- When Science Can't Help [? · GW]
- ^ Flag: almost removed for a strong overtone of encouraging cryonics (a behavior), but kept because the explicit content is about whether it works or not (a belief).
- Faster Than Science [? · GW]
Science Doesn't Trust Your Rationality [? · GW] - ^ Removed for not drawing actionable belief-forming advice from logic / statistics / causality, while arguably being a pro-libertarian how-to-behave piece.
No Safe Defense, Not Even Science [? · GW] - ^ Removed for not drawing actionable belief-forming advice from logic / statistics / causality, while encouraging emotional distrust in the sanity of other people.
- ^ Worth reading as a contextualization of Yudkowsky's other writings.
Connecting Words to Reality [? · GW]
Part 5 of 6 from the Sequence Highlights [? · GW].
To understand reality, especially on confusing topics, it's important to understand the mental processes involved in forming concepts and using words to speak about them.
33 min read
- Taboo Your Words [? · GW]
- Dissolving the Question [? · GW]
- Diseased thinking: dissolving questions about disease [? · GW]
- Hug the Query [? · GW]
- Say Not "Complexity" [? · GW]
- Mind Projection Fallacy [? · GW]
- How An Algorithm Feels From Inside [? · GW]
- Expecting Short Inferential Distances [? · GW]
- Illusion of Transparency: Why No One Understands You [? · GW]
Why We Fight [? · GW]
^ Flag: This (Part 6) was almost removed entirely due to not carrying much in the way of belief-formation advice. Perhaps it should be removed from a more curated "Epistemic Sequences" compendium.
Part 6 of 6 from the Sequence Highlights [? · GW].
The pursuit of rationality, and of doing better on purpose, can in fact be rather hard. You have to get the motivation for that from somewhere.
31 min read
- Something to Protect [? · GW]
- ^ Flag: almost removed for encouraging unilateral heroism (a behavior), and for not carrying much belief-formation advice, but kept because many people find it motivating. Perhaps this should still be removed from a more curated "Epistemic Sequences" compendium.
- The Gift We Give To Tomorrow [? · GW]
- ^ Flag: almost removed for not carrying much belief-formation advice, but kept because many people find it motivating. Perhaps this should still be removed from a more curated "Epistemic Sequences" compendium.
On Caring [? · GW] - ^ Removed for not carrying much belief-formation advice, and for explicitly advising on how to feel about and relate to other people (emotions are not quite behavior, but intermediate between belief and behavior).
Tsuyoku Naritai! (I Want To Become Stronger) [? · GW] - ^ Removed for not carrying much belief-formation advice, and for explicitly advising on how to relate to others (behavior).
- A Sense That More Is Possible [? · GW]
- ^ Flag: almost removed for not carrying much belief-formation advice, but kept because many people find it motivating.
What's next?
I'm not sure! Perhaps an official "Epistemic Sequences" compendium could someday be produced that focuses entirely on epistemics, with the potential upsides of
- more strongly emphasizing the most-well-founded aspects of the Sequences;
- avoiding alienating readers who find the how-to-behave aspects of the Sequences off-putting (either because the Sequences are wrong and those readers can sense it, or just because the arguments aren't strong enough, or both); and
- yielding a larger and broader community of people who can agree on beliefs and good belief formation practices, even if they don't (yet) agree on how to treat each other or the rest of the world.
For now, I'll just content myself with linking interested readers to this post if they ask me which parts of the LessWrong sequences are most worth reading and why.
Followed by: What's next for instrumental rationality? [LW · GW]
12 comments
comment by gjm · 2022-07-24T11:28:05.861Z · LW(p) · GW(p)
I like this, but would also like to register that I would be very interested to read more about your opinion that the how-to-behave bits of the Sequences are
at best "often wrong but sometimes motivationally helpful because of how they inspire people to think as individuals and try to help the world", and at worst "inspiring of toxic relationships and civilizational disintegration."
comment by iceman · 2022-07-24T15:14:00.945Z · LW(p) · GW(p)
I think the how-to-behave themes of the LessWrong Sequences are at best "often wrong but sometimes motivationally helpful because of how they inspire people to think as individuals and try to help the world", and at worst "inspiring of toxic relationships and civilizational disintegration."
I broadly agree with this. I stopped referring people to the Sequences because of it.
One other possible lens to filter a better Sequences: is it a piece relying on Yudkowsky citing current psychology at the time? He was way too credulous, when the correct amount to update on most social science research of that era was: lol.
Concretely to your project above though: I think you should remove all of the Why We Fight series: Something to Protect is Yudkowsky typical-minding about where your motivation comes from (and is wrong; lots of people are selfishly motivated, as if Tomorrow is The Gift I Give Myself), and I've seen A Sense That More is Possible invoked as Deep Wisdom to justify anything that isn't the current status quo. Likewise, I think Politics is the Mind Killer should also be removed for similar reasons. Whatever its actual content, the phrase has taken on a life of its own and that interpretation is not helpful.
comment by Raemon · 2022-07-23T23:23:01.312Z · LW(p) · GW(p)
Thanks for doing this! I do think it'd be good to have an Epistemic Rationality textbook that focuses on the well-vetted stuff.
I think it'd probably make sense to start with all of R:A-Z rather than the Sequences Highlights for generating it – the Sequence Highlights are deliberately meant to be an overview of epistemic, instrumental, and motivational content, and for the sake of fitting it into 50 posts we probably skipped over stuff that might have made sense to incorporate into an Epistemics 101 textbook.
(That said I think someone interested in epistemics and just getting started would do well to start with the stuff here)
↑ comment by Rob Bensinger (RobbBB) · 2022-07-26T03:35:56.985Z · LW(p) · GW(p)
I'd be more interested in a project to review the Sequences, have savvy people weigh in on which parts they think are more probable vs. less probable (indexed to a certain time, since views can change), and display their assessments in a centralized, easy-to-navigate way.
I want to say that trying to purify the Sequences feels... boring? Compared to engaging with the content, debating it, vetting it, updating it, improving it, etc.
And I worry that attempts to cut out the less-certain parts will also require cutting out the less-legible parts, even where these contain important content to interact with. The Sequences IMO are a pretty cohesive thing; many insights can be extracted without swallowing the thing wholesale, but a large portion of it will be harder to understand if you don't read the full thing at all. (Or don't read enough of the core 'kinda-illegible' pieces.)
Maybe what I want is more like a new 'Critiques of the Sequences' sequence, or a 'Sequences + (critiques and responses)' alt-sequence. Since responding to stuff and pointing at problems seems obviously productive to me, whereas I'm more wary of purification.
↑ comment by Raemon · 2022-07-26T05:54:21.855Z · LW(p) · GW(p)
To be clear, my current epistemic state is not at all that a curated "epistemic sequences" should replace the existing things. The thing I see as potentially valuable here is to carve them into different subsections focusing on different things that can serve people with different backgrounds and learning goals. (I wasn't at all thinking of this as "purifying" anything, or at least that's not my interest in it.)
Ruby and I listened to some of Critch's feedback earlier, and still decided to have the Sequence Highlights cover a swath of motivational/instrumental posts, because they seemed like an important part of the Sequences experience.
I think my own take (not necessarily matching Ruby's or Habryka's) is that I disagree with Critch's overall assessment that the "how to behave" parts of the sequences are "toxic." But I do think there is something about the motivation-orientation of the sequences that is... high variance, at least. I see it getting at something that feels important to engage with. (My guess is, if I reflected a bunch and double cruxed with Critch about it, the disagreement here would be less about concrete claims the sequences make and more about a vibe, sorta like how I think disagreements with Post Rationalists are not actually about claims and are more about vibe [LW · GW]).
I haven't gotten into that in this comment section since Critch went out of his way to avoid making that the topic here. (Also somewhat because it feels like a big discussion and I'm kinda busy atm)
comment by trevor (TrevorWiesinger) · 2022-07-23T23:46:54.082Z · LW(p) · GW(p)
I think it also makes sense to reorder the entire sequences (all 333 of them) from most valuable to least valuable, and perhaps make multiple different lists according to different values. That way, when someone feels like the last 20 or 40 have not been very helpful, they'll know the time is right to move on to other things, then and only then.
↑ comment by MSRayne · 2022-07-24T01:57:18.962Z · LW(p) · GW(p)
I think - and I've considered trying to do this partly in order to teach myself and get all the insights to sink in - that it would also be desirable to rewrite some or all of the sequences / write entirely new stuff inspired by them in simpler language, with a more neutral tone (Eliezer has a certain style that not everyone would appreciate) and mathematical parts made as visual and easy to follow as possible, for normal audiences who aren't... well, nerds like the rest of us. I think improving the rationality of ordinary people would be worth it.
↑ comment by trevor (TrevorWiesinger) · 2022-07-24T02:27:16.688Z · LW(p) · GW(p)
Seems like it could be a core LW book, just like how The Precipice was the big book for EA. I definitely think that, one way or another, the CFAR handbook [? · GW] should be taken into account (since it's explicitly optimized to train a wider variety of people from various backgrounds).
↑ comment by Raemon · 2022-07-24T03:52:22.801Z · LW(p) · GW(p)
the CFAR handbook should be taken into account (since it's explicitly optimized to train executives and other clients).
what leads you to think it’s optimized in this way?
↑ comment by trevor (TrevorWiesinger) · 2022-07-24T04:12:21.484Z · LW(p) · GW(p)
I made a ton of assumptions based off of this though, and I never checked to see whether the CFAR handbook was stated to help those particular people. So I retracted parts of my comment that were based on assumptions that I should have checked before stating that it was clearly for executives.
↑ comment by Zach Stein-Perlman · 2022-07-24T04:23:04.392Z · LW(p) · GW(p)
↑ comment by Raemon · 2022-07-24T05:45:27.235Z · LW(p) · GW(p)
Ah wow, yeah as Zach notes that's a totally different CFAR.
An important thing about the CFAR handbook is that it was mostly optimized as a companion to workshops. For a long time, the first chapter in the CFAR handbook warned you "this was not actually designed to give you any particular experience, we have no idea what reading this book will do if not accompanied by a workshop."
The current CFAR Handbook publishing that Duncan is doing has some additional thought put into it as a standalone series of essays, but I don't think it's optimized the way you're imagining.