Rationality: An Introduction

post by Rob Bensinger (RobbBB) · 2015-03-11

Contents

  The Mathematics of Rationality
  Rationality Applied

In the autumn of 1951, a football game between Dartmouth and Princeton turned unusually rough. A pair of psychologists, Dartmouth’s Albert Hastorf and Princeton’s Hadley Cantril, decided to ask students from both schools which team had initiated the rough play. Nearly everyone agreed that Princeton hadn’t started it; but 86% of Princeton students believed that Dartmouth had started it, whereas only 36% of Dartmouth students blamed Dartmouth. (Most Dartmouth students believed “both started it.”)

When shown a film of the game later and asked to count the infractions they saw, Dartmouth students claimed to see a mean of 4.3 infractions by the Dartmouth team (and identified half as “mild”), whereas Princeton students claimed to see a mean of 9.8 Dartmouth infractions (and identified a third as “mild”).1

When something we value is threatened—our world-view, our in-group, our social standing, or something else we care about—our thoughts and perceptions rally to their defense.2,3 Some psychologists go so far as to hypothesize that the human ability to come up with explicit justifications for our conclusions evolved specifically to help us win arguments.4

One of the basic insights of 20th-century psychology is that human behavior is often driven by sophisticated unconscious processes, and the stories we tell ourselves about our motives and reasons are much more biased and confabulated than we realize. We often fail, in fact, to realize that we’re doing any story-telling. When we seem to “directly perceive” things about ourselves in introspection, it often turns out to rest on tenuous implicit causal models.5,6 When we try to argue for our beliefs, we can come up with shaky reasoning bearing no relation to how we first arrived at the belief.7 Rather than trusting explanations in proportion to their predictive power, we tend to trust stories in proportion to their psychological appeal.

How can we do better? How can we arrive at a realistic view of the world, when we’re so prone to rationalization? How can we come to a realistic view of our mental lives, when our thoughts about thinking are also suspect?

What’s the least shaky place we could put our weight down?

 

The Mathematics of Rationality

At the turn of the 20th century, coming up with simple (e.g., set-theoretic) axioms for arithmetic gave mathematicians a clearer standard by which to judge the correctness of their conclusions. If a human or calculator outputs “2 + 2 = 4,” we can now do more than just say “that seems intuitively right.” We can explain why it’s right, and we can prove that its rightness is tied in systematic ways to the rightness of the rest of arithmetic.

But mathematics lets us model the behaviors of physical systems that are a lot more interesting than a pocket calculator. We can also formalize rational belief in general, using probability theory to pick out features held in common by all successful forms of inference. We can even formalize rational behavior in general by drawing upon decision theory.

Probability theory defines how we would ideally reason in the face of uncertainty, if we had the requisite time, computing power, and mental control. Given some background knowledge (priors) and a new piece of evidence, probability theory uniquely and precisely defines the best set of new beliefs (posterior) I could adopt. Likewise, decision theory defines what action I should take based on my beliefs. For any consistent set of beliefs and preferences I could have, there is a decision-theoretic answer to how I should then act in order to satisfy my preferences.
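To make the decision-theoretic half of this concrete, here is a minimal sketch in Python (my own illustration with made-up probabilities and utilities, not part of the original essay): pick whichever action has the highest expected utility under your current beliefs.

    # Toy decision under uncertainty: expected-utility maximization.
    # The probability and utility numbers below are illustrative assumptions.
    p_rain = 0.3  # belief: probability that it rains today

    expected_utility = {
        "take umbrella": p_rain * 1.0 + (1 - p_rain) * 0.8,   # dry either way, minor hassle
        "leave umbrella": p_rain * 0.0 + (1 - p_rain) * 1.0,  # soaked if it rains
    }

    best_action = max(expected_utility, key=expected_utility.get)
    print(best_action, expected_utility[best_action])  # "take umbrella" wins at these numbers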

Suppose you find out that one of your six classmates has a crush on you—perhaps you get a letter from a secret admirer, and you’re sure it’s from one of those six—but you have no idea which of the six it is. Bob happens to be one of those six classmates. If you have no special reason to think Bob’s any likelier (or any less likely) than the other five candidates, then what are the odds that Bob is the one with the crush?

Answer: The odds are 1:5. There are six possibilities, so a wild guess would result in you getting it right once for every five times you got it wrong, on average.

We can’t say, “Well, I have no idea who has a crush on me; maybe it’s Bob, or maybe it’s not. So I’ll just say the odds are fifty-fifty.” Even if we would rather say “I don’t know” or “Maybe” and stop there, the right answer is still 1:5. This follows from the assumption that there are six possibilities and you have no reason to favor one of them over any of the others.8
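The arithmetic behind that answer is just a conversion from odds to probability. As a minimal sketch in Python (mine, not part of the original essay):

    # 1:5 odds: one way to be right for every five ways to be wrong.
    favorable, unfavorable = 1, 5
    p_bob = favorable / (favorable + unfavorable)
    print(p_bob)  # 1/6 ≈ 0.167, the probability that Bob is the admirer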

Suppose that you’ve also noticed you get winked at by people ten times as often when they have a crush on you. If Bob then winks at you, that’s a new piece of evidence. In that case, it would be a mistake to stay skeptical about whether Bob is your secret admirer; the 10:1 odds in favor of “a random person who winks at me has a crush on me” outweigh the 1:5 odds against “Bob has a crush on me.”

It would also be a mistake to say, “That evidence is so strong, it’s a sure bet that he’s the one who has the crush on me! I’ll just assume from now on that Bob is into me.” Overconfidence is just as bad as underconfidence.

In fact, there’s only one viable answer to this question too. To change our mind from the 1:5 prior odds in response to the evidence’s 10:1 likelihood ratio, we multiply the left sides together and the right sides together, getting 10:5 posterior odds, or 2:1 odds in favor of “Bob has a crush on me.” Given our assumptions and the available evidence, guessing that Bob has a crush on you will turn out to be correct 2 times for every 1 time it turns out to be wrong. Equivalently: the probability that he’s attracted to you is 2/3. Any other confidence level would be inconsistent.
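For concreteness, the same update can be written out using the odds form of Bayes' rule. This is a sketch of my own (not part of the original essay), with the numbers taken from the example above:

    # Odds-form Bayes update: posterior odds = prior odds × likelihood ratio.
    prior = (1, 5)        # prior odds that Bob is the admirer
    likelihood = (10, 1)  # P(wink | crush) : P(wink | no crush)

    posterior = (prior[0] * likelihood[0], prior[1] * likelihood[1])  # (10, 5), i.e. 2:1
    p_crush = posterior[0] / (posterior[0] + posterior[1])

    print(posterior)  # (10, 5)
    print(p_crush)    # 2/3 ≈ 0.667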

It turns out that given very modest constraints, the question “What should I believe?” has an objectively right answer. It has a right answer when you’re wracked with uncertainty, not just when you have a conclusive proof. There is always a correct amount of confidence to have in a statement, even when it looks more like a “personal belief” instead of an expert-verified “fact.”

Yet we often talk as though the existence of uncertainty and disagreement makes beliefs a mere matter of taste. We say “that’s just my opinion” or “you’re entitled to your opinion,” as though the assertions of science and math existed on a different and higher plane than beliefs that are merely “private” or “subjective.” To which economist Robin Hanson has responded:9

You are never entitled to your opinion. Ever! You are not even entitled to “I don’t know.” You are entitled to your desires, and sometimes to your choices. You might own a choice, and if you can choose your preferences, you may have the right to do so. But your beliefs are not about you; beliefs are about the world. Your beliefs should be your best available estimate of the way things are; anything else is a lie. [ . . . ]

It is true that some topics give experts stronger mechanisms for resolving disputes. On other topics our biases and the complexity of the world make it harder to draw strong conclusions. [ . . . ]

But never forget that on any question about the way things are (or should be), and in any information situation, there is always a best estimate. You are only entitled to your best honest effort to find that best estimate; anything else is a lie.

Our culture hasn’t internalized the lessons of probability theory—that the correct answer to questions like “How sure can I be that Bob has a crush on me?” is just as logically constrained as the correct answer to a question on an algebra quiz or in a geology textbook.

Our brains are kludges slapped together by natural selection. Humans aren’t perfect reasoners or perfect decision-makers, any more than we’re perfect calculators. Even at our best, we don’t compute the exact right answer to “what should I think?” and “what should I do?”10

And yet, knowing we can’t become fully consistent, we can certainly still get better. Knowing that there’s an ideal standard we can compare ourselves to—what researchers call Bayesian rationality—can guide us as we improve our thoughts and actions. Though we’ll never be perfect Bayesians, the mathematics of rationality can help us understand why a certain answer is correct, and help us spot exactly where we messed up.

Imagine trying to learn math through rote memorization alone. You might be told that “10 + 3 = 13,” “31 + 108 = 139,” and so on, but it won’t do you a lot of good unless you understand the pattern behind the squiggles. It can be a lot harder to seek out methods for improving your rationality when you don’t have a general framework for judging a method’s success. The purpose of this book is to help people build for themselves such frameworks.

 

Rationality Applied

The tightly linked essays in How to Actually Change Your Mind were originally written by Eliezer Yudkowsky for the blog Overcoming Bias. Published in the late 2000s, these posts helped inspire the growth of a vibrant community interested in rationality and self-improvement.

Map and Territory was the first such collection. How to Actually Change Your Mind is the second. The full six-book set, titled Rationality: From AI to Zombies, can be found on Less Wrong at http://lesswrong.com/rationality.

One of the rationality community’s most popular writers, Scott Alexander, has previously observed:11

[O]bviously it’s useful to have as much evidence as possible, in the same way it’s useful to have as much money as possible. But equally obviously it’s useful to be able to use a limited amount of evidence wisely, in the same way it’s useful to be able to use a limited amount of money wisely.

Rationality techniques help us get more mileage out of the evidence we have, in cases where the evidence is inconclusive or our biases are distorting how we interpret the evidence.

This applies to our personal lives, as in the tale of Bob. It applies to disagreements between political factions and sports fans. And it applies to philosophical puzzles and debates about the future trajectory of technology and society. Recognizing that the same mathematical rules apply to each of these domains (and that in many cases the same cognitive biases crop up), How to Actually Change Your Mind freely moves between a wide range of topics.

The first sequence of essays in this book, Overly Convenient Excuses, focuses on probabilistically “easy” questions—ones where the odds are extreme, and systematic errors seem like they should be particularly easy to spot. …

From there, we move into murkier waters with Politics and Rationality. Politics—or rather, mainstream national politics of the sort debated by TV pundits—is famous for its angry, unproductive discussions. On the face of it, there’s something surprising about that. Why do we take political disagreements so personally, even though the machinery and effects of national politics are often so distant from us in space or in time? For that matter, why do we not become more careful and rigorous with the evidence when we’re dealing with issues we deem important?

The Dartmouth-Princeton game hints at an answer. Much of our reasoning process is really rationalization—story-telling that makes our current beliefs feel more coherent and justified, without necessarily improving their accuracy. Against Rationalization speaks to this problem, followed by Seeing with Fresh Eyes, on the challenge of recognizing evidence that doesn’t fit our expectations and assumptions.

In practice, leveling up in rationality often means encountering interesting and powerful new ideas and colliding more with the in-person rationality community. Death Spirals discusses some important hazards that can afflict groups united around common interests and amazing shiny ideas, which rationalists will need to overcome if they’re to translate their high-minded ideas into real-world effectiveness. How to Actually Change Your Mind then concludes with a sequence on Letting Go.

Our natural state isn’t to change our minds like a Bayesian would. Getting the Dartmouth and Princeton students to notice what they’re actually seeing won’t be as easy as reciting the axioms of probability theory to them. As philanthropic research analyst Luke Muehlhauser writes in “The Power of Agency”:12

You are not a Bayesian homunculus whose reasoning is “corrupted” by cognitive biases.

You just are cognitive biases.

Confirmation bias, status quo bias, correspondence bias, and the like are not tacked on to our reasoning; they are its very substance.

That doesn’t mean that debiasing is impossible. We aren’t perfect calculators underneath all our arithmetic errors, either. Many of our mathematical limitations result from very deep facts about how the human brain works. Yet we can train our mathematical abilities; we can learn when to trust and distrust our mathematical intuitions; we can shape our environments to make things easier on us. And if we’re wrong today, we can be less so tomorrow.


1 Albert Hastorf and Hadley Cantril, “They Saw a Game: A Case Study,” Journal of Abnormal and Social Psychology 49 (1954): 129–134, http://www2.psych.ubc.ca/~schaller/Psyc590Readings/Hastorf1954.pdf.

2 Emily Pronin, “How We See Ourselves and How We See Others,” Science 320 (2008): 1177–1180.

3 Robert P. Vallone, Lee Ross, and Mark R. Lepper, “The Hostile Media Phenomenon: Biased Perception and Perceptions of Media Bias in Coverage of the Beirut Massacre,” Journal of Personality and Social Psychology 49 (1985): 577–585, http://ssc.wisc.edu/~jpiliavi/965/hwang.pdf.

4 Hugo Mercier and Dan Sperber, “Why Do Humans Reason? Arguments for an Argumentative Theory,” Behavioral and Brain Sciences 34 (2011): 57–74, http://hal.archives-ouvertes.fr/file/index/docid/904097/filename/MercierSperberWhydohumansreason.pdf.

5 Richard E. Nisbett and Timothy D. Wilson, “Telling More than We Can Know: Verbal Reports on Mental Processes,” Psychological Review 84 (1977): 231–259, http://people.virginia.edu/~tdw/nisbett&wilson.pdf.

6 Eric Schwitzgebel, Perplexities of Consciousness (MIT Press, 2011).

7 Jonathan Haidt, “The Emotional Dog and Its Rational Tail: A Social Intuitionist Approach to Moral Judgment,” Psychological Review 108, no. 4 (2001): 814–834, doi:10.1037/0033-295X.108.4.814.

8 We’re also assuming, unrealistically, that you can really be certain the admirer is one of those six people, and that you aren’t neglecting other possibilities. (What if more than one of your classmates has a crush on you?)

9 Robin Hanson, “You Are Never Entitled to Your Opinion,” Overcoming Bias (Blog), 2006, http://www.overcomingbias.com/2006/12/you_are_never_e.html.

10 We lack the computational resources (and evolution lacked the engineering expertise and foresight) to iron out all our bugs. Indeed, even a maximally efficient reasoner in the real world would still need to rely on heuristics and approximations. The best possible computationally tractable algorithms for changing beliefs would still fall short of probability theory’s consistency.

11 Scott Alexander, “Why I Am Not Rene Descartes,” Slate Star Codex (Blog), 2014, http://slatestarcodex.com/2014/11/27/why-i-am-not-rene-descartes/.

12 Luke Muehlhauser, “The Power of Agency,” Less Wrong (Blog), 2011, http://lesswrong.com/lw/5i8/the_power_of_agency/.

Comments

comment by cjkatfish · 2019-05-15

Tremendous article. Thank you. I feel like you threw my mental football 30 yards downfield. I use a model that this dovetails perfectly with. Can’t wait to read the rest.

comment by Ian McKenzie (naimenz) · 2019-07-20

In the example with Bob, surely the odds of Bob having a crush on you after winking (2:1) should be higher than the odds for a random person who winks at you (given as 10:1), since we already have reason to suspect that Bob is more likely to have a crush on you than some random person who isn't part of the six.

comment by Liliet B (liliet-b) · 2019-12-04

Not if you consider that the 1:5 figure constrains that ONLY one person among the six has a crush on you. If you learn for a fact one does, you'll also immediately know the others all don't. Which is not true for a random selection of students - you could randomly pick six that all have a crush on you. Bob belongs to a group in which you know for a fact five people DON'T have a crush on you. So you have evidence lowering Bob's odds relative to a random winker.

Either that, or it doesn't matter how many actually have a crush on you; you're looking for the specific one you have definite evidence about. In that case, a random winker is not qualified to enter the comparison at all - if they're not one of the six, they're not the person you're looking for. So Bob might have a crush on you AND not be the person you're looking for, although his odds are higher than those of the other five you don't have any evidence about.

Those are the interpretations that make the math not wrong, anyway. If you only know that "at least one of them has a crush on me" and more than one could potentially satisfy your search criteria, the 1:5 figure is not the right odds.

comment by tmercer · 2022-07-06

"you'll also immediately know the others all don't"

No. Receiving an anonymous love note from among the 6 in NO WAY informs you that 5 of the 6 DON'T have a crush on you. All it does is take the unspecified prior (rate of these 6 humans having a crush on you), and INCREASE it for all 6 of them.

@irmckenzie is right. There's no way you get < 10:1 with MORE positive (confirmatory) evidence for Bob than a random stranger. All positive evidence HAS TO make a rational mind MORE certain the thing is true. Weak evidence, like the letter, which informs that AT LEAST 1 in 6 has a crush, should move a rational mind LESS than strong evidence, like the wink, but it must move it all the same, and in the affirmative direction.

comment by tmercer · 2022-07-06

This is obviously correct. The error was that Rob interpreted the evidence incorrectly. Getting an anonymous letter DOES NOT inform a rational mind that Bob has 1:5 odds of crushing. It informs the rational mind that AT LEAST ONE of the 6 classmates has a crush on you. It DOES NOT inform a rational mind that 5 of the 6 classmates DO NOT have a crush on you. I also hated this. Obviously, two pieces of evidence should make Bob MORE LIKELY to have a crush on you than one. There's no baseline rate of humans having a crush on us, so the real prior isn't in the problem.

comment by jeronimo196 · 2020-02-23

What Liliet B said. Low priors will screw with you even after a "definitive" experiment. You might also want to take a look at this: https://www.lesswrong.com/posts/XTXWPQSEgoMkAupKt/an-intuitive-explanation-of-bayes-s-theorem

comment by Alexander Budkov (aleksandr-budkov) · 2024-03-14

You are right. It seems like there is an error in this example, and the main problem is not with the 1:5 prior odds; the problem is bad phrasing and confusion between the "crush when winked" odds and the "wink likelihood ratio".

you get winked at by people ten times as often when they have a crush on you

is a statement about a likelihood ratio (or at least can be interpreted that way): P(wink|crush) : P(wink|!crush) = 10:1.

And in the final calculation the likelihood ratio is used, which is correct according to Bayes' rule:

To change our mind from the 1:5 prior odds in response to the evidence’s 10:1 likelihood ratio, we multiply the left sides together and the right sides together

While a statement

the 10:1 odds in favor of “a random person who winks at me has a crush on me”

is a statement about posterior odds, P(crush|wink) : P(!crush|wink) = 10:1, and applying Bayes' rule to it as if it were a likelihood ratio would be a mistake. But I'm guessing it's just an error in the author's phrasing of this statement.

comment by dhruv agarwal (dhruv-agarwal) · 2024-04-20

I don't understand how Bob having a crush was assigned 2:1 odds. A random person who winked at you is given 10:1 odds of having a crush, so shouldn't Bob be the same? Shouldn't the 1:5 odds assigned earlier be discarded in the face of new evidence? By this reasoning, a random person who winked at you is given higher odds of having a crush than Bob, who also winked at you. I am confused 😕

comment by David James (david-james) · 2024-04-05

Regarding the cost of making an incorrect probability estimate, “Overconfidence is just as bad as underconfidence” is not generally true. In binary classification contexts, one leads to more false positives and the other to more false negatives. The costs of each are not equal in general for real-world situations.

The author may simply mean that both are incorrect; this I accept.

My point is more than pedantic; there are too many examples of machine learning systems failing to recognize different misclassification costs.