What should experienced rationalists know?
post by sapphire (deluks917) · 2020-10-13T17:32:32.388Z · LW · GW · 7 comments
This is a question post.
The obvious answer is 'the Sequences', but imo that is neither necessary nor sufficient. The Sequences are valuable but quite old at this point. They also run to over a million words (though Rationality: From AI to Zombies is only ~600k). Here is a list of core skills and ideas:
1 - Rationality Techniques
Ideally, an experienced rationalist would have experience with most of the CFAR manual. Anyone trying to learn this material needs to actually try the techniques; theoretical knowledge is not enough. If I had to make a shorter list of techniques I would include:
- Double Crux / Internal DC
- Five-minute Timers
- Trigger Action Plans
- Bucket Errors
- Goal/Aversion Factoring
- Gears Level Understanding
- Negative Visualisation / Murphy-Jitsu
- Focusing
2 - AI Risk: Superintelligence
The rationality community was founded to help solve AI risk. Superintelligence gives an updated and complete version of the 'classic' argument for AI risk. Superintelligence does not make as many strong claims about takeoff as Eliezer's early writings. This seems useful given that positions closer to Paul Christiano's seem to be gaining prominence. I think the 'classic' arguments are still very much worth understanding. On the other hand, Superintelligence is ~125k words and not easy reading.
I think many readers can skip the first few chapters. The core argument is in chapters five through fourteen.
5. Decisive strategic advantage
6. Cognitive superpowers
7. The superintelligent will
8. Is the default outcome doom?
9. The control problem
10. Oracles, genies, sovereigns, tools
11. Multipolar scenarios
12. Acquiring values
13. Choosing the criteria for choosing
14. The strategic picture
3 - Cognitive Biases: Thinking Fast and Slow
Priming is the first research area discussed in depth in TF&S. Priming seems to be almost entirely BS. I would suggest skipping the chapter on priming and remembering that the discussion of the 'hot hand fallacy' seems incorrect. Another potential downside is the length (~175K words). However, I don't think there is a better source overall. Many of the concepts in TF&S remain fundamental. The writing is also quite good and the historical value is extremely high. Here is a quick review [LW · GW] from 2016.
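On the hot hand point: the case against the original 'fallacy' studies rests on a subtle selection bias (identified by Miller and Sanjurjo) that appears when you condition on the previous outcome being a hit within short sequences. A minimal simulation sketch in Python; the fair coin and sequence length of four are illustrative assumptions:

```python
import random

def prop_heads_after_heads(flips):
    # Within one sequence: of the flips that immediately follow a heads, what fraction are heads?
    follows = [flips[i + 1] for i in range(len(flips) - 1) if flips[i] == 1]
    return sum(follows) / len(follows) if follows else None

samples = []
for _ in range(200_000):
    p = prop_heads_after_heads([random.randint(0, 1) for _ in range(4)])
    if p is not None:  # skip sequences with no heads among the first three flips
        samples.append(p)

print(sum(samples) / len(samples))  # ~0.40, not the naive 0.50
```

Because the per-sequence proportion averages well below 0.5 even for a memoryless coin, using 0.5 as the null benchmark makes genuinely streaky shooters look like they have no hot hand at all.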
4 - Statistics
It is hard to be an informed rationalist without a basic understanding of Bayesian thinking. You need to understand frequentist statistics to evaluate a lot of relevant research. Some of the most important concepts/goals are listed below.
Bayesian Statistics:
- Illustrate the use of odds-ratio calculations in practical situations (see the sketch after this list)
- Derive Laplace's rule of succession
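A minimal Python sketch of both items; the 90% sensitivity, 80% specificity, and 1% base rate in the example are made-up numbers:

```python
def posterior_odds(prior_odds, likelihood_ratio):
    # Bayes' rule in odds form: posterior odds = prior odds * likelihood ratio
    return prior_odds * likelihood_ratio

# Hypothetical test: 90% sensitivity, 80% specificity => likelihood ratio = 0.9 / 0.2 = 4.5
prior = 0.01 / 0.99                      # 1% base rate, expressed as odds
post = posterior_odds(prior, 0.9 / 0.2)
print(post / (1 + post))                 # ~0.043: still unlikely after one positive test

def rule_of_succession(successes, trials):
    # Laplace: with a uniform prior on the unknown rate, P(next success) = (s + 1) / (n + 2)
    return (successes + 1) / (trials + 2)

print(rule_of_succession(9, 10))         # 10/12 ~ 0.83, not the naive 9/10
```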
Frequentist Stats - Understand the following concepts:
- Law of large numbers
- Power, p-values, t-tests, z-tests (see the simulation sketch after this list)
- Linear Regression
- Limitations of the above concepts
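A small simulation sketch of power and p-values, using only the standard library; the unit-variance Gaussian data, the 0.5 effect size, and the z-test simplification (treating the sample SD as known sigma) are all illustrative assumptions:

```python
import random
import statistics
from math import erf, sqrt

def z_test_p(sample, mu0=0.0):
    # Two-sided p-value for H0: mean == mu0, via a normal approximation
    n = len(sample)
    z = (statistics.mean(sample) - mu0) / (statistics.stdev(sample) / sqrt(n))
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

def power(effect, n, alpha=0.05, sims=2000):
    # Power: the fraction of simulated experiments where a real effect reaches p < alpha
    hits = sum(z_test_p([random.gauss(effect, 1.0) for _ in range(n)]) < alpha
               for _ in range(sims))
    return hits / sims

print(power(0.5, 20))  # ~0.6: a real medium-sized effect is missed ~40% of the time
print(power(0.5, 50))  # ~0.94: more data, far fewer missed effects
```

This also makes the 'limitations' item concrete: an underpowered study that fails to find an effect tells you very little about whether the effect exists.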
5 - Signalling / The Elephant in the Brain
The Elephant in the Brain is a clear and authoritative source. The ideas discussed have certainly been influential in the rationalist community. But I am not sure what epistemic status the community assigns to the Hanson/Simler theories around signalling. Any opinions? For reference, here are the topics.
PART I Why We Hide Our Motives
- 1 Animal Behavior
- 2 Competition
- 3 Norms
- 4 Cheating
- 5 Self-Deception
- 6 Counterfeit Reasons
PART II Hidden Motives in Everyday Life
- 7 Body Language
- 8 Laughter
- 9 Conversation
- 10 Consumption
- 11 Art
- 12 Charity
- 13 Education
- 14 Medicine
- 15 Religion
- 16 Politics
- 17 Conclusion
What am I missing? Try to be as specific as possible about what exactly should be learned. Some possible topics discussed in the community include:
- Economics
- The basics of the other EA cause areas and general theory? (at least the stuff in 'Doing Good Better')
- Eliezer says to study evolutionary psychology in the eleventh virtue (scholarship), but I have not been impressed with evo-psych.
- Something about mental tech? Maybe mindfulness, Internal Family Systems, or circling? I am not confident anything in this space fits.
Answers
answer by Chris_Leong
This is an excellent question. Here are some of the things I consider personally important.
Regarding probability, I recently asked the question: Why is Bayesianism Important? [LW · GW] I found this Slatestarcodex post to provide an excellent overview of thinking probabilistically, which seems way more important than almost any of the specific theorems.
I would include basic game theory - the prisoner's dilemma, the tragedy of the commons, multipolar traps (see Meditations on Moloch for this latter idea).
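To make the first of these concrete, here is a minimal one-shot prisoner's dilemma sketch in Python; the payoff numbers are conventional textbook values, not anything canonical:

```python
# Payoffs as (row player, column player); any numbers with T > R > P > S give a dilemma.
PAYOFFS = {
    ("C", "C"): (3, 3),  # mutual cooperation (reward R)
    ("C", "D"): (0, 5),  # sucker's payoff S vs. temptation T
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # mutual defection (punishment P)
}

def best_response(opponent_move):
    # Whichever move the opponent makes, defecting pays strictly more
    return max("CD", key=lambda m: PAYOFFS[(m, opponent_move)][0])

assert best_response("C") == "D" and best_response("D") == "D"
# Both players reason this way, land on (D, D), and get 1 each instead of 3 each.
```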
In terms of decision theory, there's the basic concept of expected utility, decreasing marginal utility, then the Inside/Outside views [? · GW].
I think it's also important to understand the limits of rationality. I've written a post on this (pseudo-rationality [LW · GW]), there's Barbarians vs. Bayesians [LW · GW], and there's these two posts by Scott Alexander - Seeing Like a State and The Secret of Our Success. Thinking, Fast and Slow has already been mentioned.
The Map is Not the Territory [LW · GW] revolutionised my understanding of philosophy and prevented me from ending up in stupid linguistic arguments. I'd suggest supplementing this by understanding how Conceptual Engineering [LW · GW] avoids the plague of counterexample philosophy prevalent in conceptual analysis (Wittgenstein's conception of meanings as Family Resemblances is useful too - Eliezer talks about the cluster structure of thingspace [LW · GW]).
Most normal people are far too ready to dismiss hypothetical situations [? · GW]. While Making Beliefs Pay Rent [LW · GW] can lead to a naïve kind of logical positivism if taken too far, it is in general a good heuristic. Where Recursive Justification Hits Bottom [LW · GW] argues for a kind of circular epistemology.
In terms of morality Torture vs. Dust Specks [LW · GW] is a classic.
Pragmatically, there's the Pareto Principle (or 80/20 rule) and I'll also throw in my posts on Making Exceptions to General Rules [LW · GW] and Emotions are not Beliefs [LW · GW].
In terms of understanding people better there's Inferential Distance, Mistake Theory vs. Conflict Theory, Contextualising vs. Decoupling Norms [LW · GW], The Least Convenient Possible World [LW · GW], Intellectual Turing Tests and Steelmanning [? · GW]/the Principle of Charity.
There seems to be increasingly broad agreement that meditation is really important and complements rationality beautifully, insofar as irrationality is more often a result of lack of control over our emotions than lack of knowledge. Beyond this, it can provide extra introspective capacity, and meditative practices like circling can help us relate better to other humans.
One of my main philosophical disagreements with people here is that they often lean towards verificationism, while I don't believe that the universe has to play nice [LW · GW], so often things will be true that we can't actually verify.
answer by gilch
I'll answer with some of the "rationalist" ideas I've personally learned and consider important. (I expect that I still have important knowledge gaps compared to other experienced rationalists, though. I'm working on it.)
Intelligence is not the same thing as rationality. [? · GW] So much of "Epistemic Rationality" comes down to not lying to yourself. Clever people tell themselves clever lies. The Litany of Tarski is the correct attitude for a rationalist.
Raising the Sanity Waterline [LW · GW] is a mini-sequence in its own right, if you read its links. The worldviews taught by all major religions are extremely confused at best—including (especially) yours, if you still have one—and if this isn't already blatantly obvious, your epistemology is very broken! People are crazy, the world is mad.
There are No Guardrails. [LW · GW] The Universe is crushingly indifferent to human well-being. Everything is allowed to go horribly wrong.
Rationalists Should Win. Or "Instrumental Rationality". This does not mean "Go team!" It means that if you're not winning, then you're doing rationality wrong, and you should correct your errors, regardless of what the "rationalists" or the "great teacher" think. This can make "rationality" hard to pin down, but the principle of correcting one's errors is very important because it catches a lot of failure modes on the path. Then why not just say, "Correct your errors"? Because there are a lot of ways of misidentifying errors, and "not winning" cuts through all that and gets to the heart of what errors are.
You have to be willing to be correct even when that makes you weird. [LW · GW] Doing better than normal is, by definition, abnormal. But the goal is correctness, not weirdness. Reversed stupidity is not intelligence.
Understand Social Status. [? · GW]
Value your Slack. [LW · GW] Guard it. Spend it only when Worth It. The world is out to get your Slack! If you lose it, fight to get it back!
You Need More Money [LW · GW]. I wrote this one, but it's a synthesis of earlier rationalist thought.
The mountains of philosophy are the foothills of AI. Even if you're not planning to become an AI researcher yourself, understanding how to see the world through the lens of AI clarifies many important things.
Read textbooks. [LW · GW] (And you can download almost all of them if you know where to look.)
answer by Keller
Econ in general, no. The specific model of rational actors optimizing for outcomes, the intuition for why markets often succeed at delivering on desires (at least for those with money), and the practice of making multi-stage models and following an impact in one area through to others, yes. Nobody needs macro, but lots of people need to reflexively think of how other actors in a system respond to a change, and econ is one of the more effective ways of teaching this. It is critical if you want to actually have a good understanding of multipolar scenarios. What I'm talking about is rarely adequately understood just by studying game theory, which usually analyzes games too simple to stick in students' minds, but it matters.
In addition to understanding what statistics are on a theoretical level, an intuitive understanding that getting the right numbers yields real benefits, and that the right numbers are very precisely defined, seems important.
These are both softer skills, but I think that they're important.
↑ comment by cousin_it · 2020-10-13T22:22:39.311Z · LW(p) · GW(p)
A note of caution here. Econ is one of those disciplines where many people think they grasp the fundamentals, but actually don't. I think if someone can't give worked examples (with numbers or graphs) for concepts like deadweight loss, comparative advantage, or tax incidence, their intuition probably points in subtly wrong directions, and would benefit from learning this stuff systematically.
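As one instance of the kind of worked example meant here, a comparative-advantage calculation in Python; the producers, goods, and hours per unit are all hypothetical numbers:

```python
# Hours of labor needed per unit of output; Alice is absolutely better at both goods.
HOURS = {"Alice": {"bread": 1, "cloth": 2}, "Bob": {"bread": 3, "cloth": 4}}
BUDGET = 12  # hours available to each producer

def output(person, hours_on_bread):
    h = HOURS[person]
    return (hours_on_bread / h["bread"], (BUDGET - hours_on_bread) / h["cloth"])

# No trade: each splits time evenly. Trade: each shifts toward their lower opportunity
# cost (Alice gives up only 0.5 cloth per bread; Bob gives up 0.75).
for label, allocs in [("no trade", [("Alice", 6), ("Bob", 6)]),
                      ("trade",    [("Alice", 9), ("Bob", 0)])]:
    goods = [output(p, hb) for p, hb in allocs]
    print(label, sum(b for b, _ in goods), "bread,", sum(c for _, c in goods), "cloth")
# no trade: 8.0 bread, 4.5 cloth
# trade:    9.0 bread, 4.5 cloth  (more bread, no less cloth, though Bob is worse at everything)
```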
answer by namespace
I still think that this problem is intractable so long as people refuse to define 'rationality' beyond 'winning'.
https://www.thelastrationalist.com/rationality-is-not-systematized-winning.html
I, in general, try to avoid using the frame of 'rationality' as much as possible precisely because of this intractability. If you talk about things like existential risk, it's clearer what you should know to work on that.
↑ comment by NunoSempere (Radamantis) · 2020-10-15T21:05:14.108Z · LW(p) · GW(p)
See: The Rationality Quotient: Toward a Test of Rational Thinking, by Keith E. Stanovich et al.
answer by G Gordon Worley III
See my caveats in the comments section; those being said, I'd say the most useful thing I know/read on my path to being something of an "experienced" rationalist was Gödel, Escher, Bach. The older I get, the more I realize how much was in that book that set me down my path. Yes, there were lots of opportunities to get various things wrong and confused along the way, but in the end I think it might be the single best source I've seen for turning someone to the rationalist mindset, if they really grok what it has to say.
↑ comment by Dagon · 2020-10-13T21:33:46.385Z · LW(p) · GW(p)
+1 on the recommendation. I read it in high school <mumble> years before the word "rationalist" was used by anyone, and it's shaped my worldview and approach since. It absolutely made me more able to understand and use the writing and discoveries on this site. Very hard to believe it's over 40 years old.
And the reason it's so formative points to a missing topic (or maybe two) on this list. That topic is something like "modeling and reductionism". I'm unsure if it's one or two distinct topics, but they're related by how they fit into an epistemic framework. One could argue that it should be the first topic: how do you understand things based on knowledge?
answer by blhayk
I have a problem with the near-term existence of real AI. I am new to this site and likely will discuss topics many are familiar with. I don't know if self-consciousness can be created artificially through code influenced by even the smallest amount of human emotion and unconscious irrationality.

Humans evolved through conditions that, when it comes down to it, were based in objective reality. Even if an "irrational" event threatening existence occurred, it is safe to say that those influenced the most by objective reason survived, even if the example of "rationality" sounds insane to us today. It makes sense naturally that an organism rejecting the innumerable physical laws guiding our universe (most of which are unknown to our "higher level of consciousness") is the least likely to propagate.

I almost think there is an unwritten physical law, too hard to prove at the moment through any known scientific means, that states one consciousness cannot create another species/mind with a consciousness grounded in intelligence vastly superior to its creator's. I don't think it is possible to teach a computer 2 + 2 = 4, then have it learn from an algorithm that isn't written, whether consciously or unconsciously, to answer questions 100% coinciding with objective physical/mathematical reality, and expect it to develop a superior and even threatening consciousness. If a consciousness does develop, it will be through drawing from info in a corrupted/unrepresentative data set. It will be extremely flawed and easy to combat with any objective human reasoning.

Again, I am new to this site and likely wrong. I'm hoping to hear a perspective that uses either empirical evidence or basic logic to reject my hypothesis.
↑ comment by blhayk · 2020-10-17T02:50:23.583Z · LW(p) · GW(p)
Also, I'm not sure rationality is as complex as you are suggesting. It's as simple as asking yourself, "Do I have evidence for what I feel inclined to believe?" A person's ability to consciously ask, along with their ability to objectively answer, will lead to a rationality-based existence (to the extent that it is possible).
answer by Miguel Solano
Computational complexity theory (if only the rudiments), I think.
7 comments
comment by Gordon Seidoh Worley (gworley) · 2020-10-13T20:59:30.509Z · LW(p) · GW(p)
I think it's a bit hard to answer this question, because we have to unpack what the "should" is doing in this question.
If the question were something like "what do most experienced rationalists know?" it would be easier to answer, because it would just be descriptive: we could go down the list of things known by most people who have been in the rationality community for at least some period of time and are acknowledged by the community in some way to be "experienced" (say, 5 years and >2500 LW karma, just to propose something concrete if you were to ask that question). As it stands, though, asking "should" means this is asking for a norm, prescription, or plan that rationalists must satisfy to qualify as experienced.
And I think therein lies the problem. I can think of no necessary and sufficient knowledge a rationalist should have, since so much of what makes someone "good" and "experienced" at rationality is in the practice of it, and you can find folks who are good rationalists who have never heard of LessWrong and might be quite skilled at practicing the art of rationality but not know the particular things that would let them communicate those skills in ways recognizable to rationalists on LessWrong.
But I guess if I really step back and take this question to be something like "what are some things that appear to have been helpful for experienced rationalists to have learned in order to become so experienced" then I think I can answer (look for my answer in the comments section).
comment by romeostevensit · 2020-10-13T21:55:22.865Z · LW(p) · GW(p)
Should but often don't: other big insights from statistics that aren't just probabilistic reasoning. E.g. factor analysis, statistical power, multiple types of uncertainty, sampling biases.
comment by habryka (habryka4) · 2020-10-15T01:57:04.854Z · LW(p) · GW(p)
Note: I cleaned up the formatting a bit and made the spacing a bit more consistent. Feel free to ask me to revert.
↑ comment by sapphire (deluks917) · 2020-10-15T05:06:24.497Z · LW(p) · GW(p)
Thanks. Fine by me.
comment by DPiepgrass · 2020-10-15T18:56:10.085Z · LW(p) · GW(p)
Thank you for this valuable overview, it's worth bookmarking.
The link in section 3 does not support the idea that humans don't suffer from a priming effect (this may not have been what you meant, but that's how it sounds). Rather, the studies are underpowered and there is evidence of positive-result publication bias. This doesn't mean the published results are wrong, it means 'grain of salt' and replication is needed. LWers often reasonably believe things on less evidence than 12 studies.