A review of Steven Pinker's new book on rationality

post by Matthew Barnett (matthew-barnett) · 2021-09-29T01:29:58.151Z · LW · GW · 43 comments

Contents

  What I liked
  What I didn't like as much

Steven Pinker's new book on rationality came out today. I figured someone on LessWrong would write a review of it, so I might as well be the one to do it.

Unlike Pinker's prior books, such as The Blank Slate and The Better Angels of Our Nature, this book lacks a straightforward empirical thesis. Instead, he mirrors the sequences [? · GW] by building a science of rationality and then trying to convince the reader that rationality is important, both personally and socially.

Unfortunately, long-time readers of LessWrong are unlikely to learn much from Pinker's new book; his content is too similar to the content in the sequences. An upside is that Pinker's treatment is more concise, and his style more closely resembles mainstream thought. Consequently, I'm tempted to recommend this book to people who might otherwise be turned away by Rationality: From A to Z [? · GW].

He starts by asking a simple question: how come it seems like everyone is so irrational? Pointing to religion, conspiracy theorists, ghost-believers, anti-vaxxers, alternative medicine adherents, and postmodernists, Pinker makes a good case that there's a lot of irrationality in the world. On the other hand, he continues, shouldn't humans have evolved to be more rational? How could irrationality be so persistent and widespread in humans, if our survival hinges on our ability to reason?

Pinker provides a simple answer: humans are very rational animals, just not in every domain. In those domains on which our survival depended, such as finding and eating food, humans are much less clueless than you might have been led to believe. Pinker provides the example of the San people of the Kalahari Desert in southern Africa, who, despite their mythological beliefs, are stunningly successful at hunting prey. He cites Louis Liebenberg, who documented how the San people use Bayesian reasoning to hunt, applying it to footprints and animal droppings in order to build an accurate picture of their natural world: a dry desert on which they have subsisted for over a hundred thousand years.
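(For readers who want the formal statement: the kind of updating Pinker describes is just Bayes' rule. A minimal sketch, with purely illustrative notation of my own rather than anything from Liebenberg or Pinker: a tracker weighing whether a print was left by a kudu, given evidence $E$ from the print's shape and depth, updates via

$$P(\text{kudu} \mid E) = \frac{P(E \mid \text{kudu})\, P(\text{kudu})}{P(E \mid \text{kudu})\, P(\text{kudu}) + P(E \mid \text{other})\, P(\text{other})},$$

so a print twice as likely under "kudu" as under the alternatives doubles the odds on that hypothesis, and each further track or dropping compounds the update.)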

It's not hard to see this dual phenomenon of rationality and irrationality reflected in the modern day: many young Earth creationists believe that the moon's craters were literally planted by God to give the appearance of old age, yet these same people rarely carry that style of reasoning into matters of their ordinary lives.

Yet, as Pinker observes, sometimes even when our lives and our money do depend on our rationality, we still fail. For instance, most people consistently fail to save for retirement. Why? The answer here is simple: life today is a lot different from the lives of our ancestors. What might have been a threat 10,000 years ago—such as a tiger in the bushes—is no longer a major threat; conversely, some threats—like car crashes—are entirely new, and thus, the human brain is ill-equipped to evaluate them rationally.

Pinker's book proceeds by presenting a textbook view of the science of rationality, including cognitive biases, formal logic, Bayesian inference, correlation and causation, statistical decision theory, and game theory. There isn't much to complain about here: Pinker is a great writer, and presents these ideas with impressive clarity. However, the content in these chapters rarely departs from the mainstream exposition of these subjects. Given that I already knew most of the details, I was left a tad bored.

To prevent you from being bored as well, I won't summarize the book's main contents. (You can go and read his book if you want to know all the details.) Instead, I'll turn my attention to some parts I liked, and some parts I didn't like as much.

What I liked

First off, Pinker cited the rationalist community as an example of a group of good reasoners,

A heartening exception to the disdain for reason in so much of our online discourse is the rise of a “Rationality Community,” whose members strive to be “less wrong” by compensating for their cognitive biases and embracing standards of critical thinking and epistemic humility.

I was pleasantly surprised to see such a positive description of this community, given his previous assessment of AI risk arguments from LessWrong, which he bizarrely conflated with unfounded fears of the "Robopocalypse". Perhaps this statement, along with his semi-recent interaction with Stuart Russell, is evidence that he is changing his mind.

In this book, Pinker demonstrates that he can be a good summarizer of other people's ideas. While the book contains almost no novel research, he skillfully ties together a ton of experiments in behavioral economics, theoretical models of rationality, and even signal detection theory. At the same time, it also seemed like the right length for a book trying to explain the basics of rationality, striking a nice balance between detail and overall length. Gone are the days of needing to send someone a 1000+ page book to get them started on the whole "rationality" thing.

At no point did I feel that he was simply pushing an agenda. To be sure, at points, he drew from politics, religion, and personal experience to illustrate some aspect of irrationality, but these examples were secondary, as a way of making the ideas concrete; they were not his main focus.

Compared to his previous work, this book isn't likely to get him into hot water. Nearly everything, besides a few of his examples of irrationality, is part of the standard consensus in cognitive science and statistics. The most controversial chapter is probably chapter 10, in which he explains myside bias, building on the recent work of Keith E. Stanovich. That said, given that his examples are broad and varied—criticizing dogmas on both the left and right—it's not hard to see how some people might feel "the book isn't for me."

His book was not a mere recitation of biases or fallacies either: it emphasized what I view as a core principle of rationality, namely actually taking your beliefs seriously and acting on them. He refers to taking your beliefs seriously as "the reality mindset" and contrasts it with the "mythology mindset." Many of us on LessWrong will know that this psychological dichotomy sometimes goes by other names, such as "near and far view [? · GW]" and "non-overlapping magisteria". Pinker explains the essence of the mythology mindset,

[It] is the world beyond immediate experience: the distant past, the unknowable future, faraway peoples and places, remote corridors of power, the microscopic, the cosmic, the counterfactual, the metaphysical. People may entertain notions about what happens in these zones, but they have no way of finding out, and anyway it makes no discernible difference to their lives. Beliefs in these zones are narratives, which may be entertaining or inspiring or morally edifying. Whether they are literally “true” or “false” is the wrong question. The function of these beliefs is to construct a social reality that binds the tribe or sect and gives it a moral purpose.

Pinker acknowledges that rationality is not merely a matter of divorcing yourself from mythology. Of course, doing so is necessary if we want to seek truth, but we must also keep in mind the social function that the mythology mindset plays, and the psychological needs it satisfies. He does not commit the classic blunder of assuming that rationality dictates one's goals, and that all truly rational agents will therefore pursue the same set of actions. In fact, he embraces the fact that rationality is independent of one's goals, what we might call here the orthogonality thesis [? · GW].

The book might be called "a version of the sequences that cites more primary sources." Pinker quotes Hume heavily, which gives you the sense—accurately, I might add—that much of what we call "rationality" was invented centuries ago, not recently. I particularly like the sentiment he expresses towards the Enlightenment in this passage,

Bertrand Russell famously said, “It is undesirable to believe a proposition when there is no ground whatsoever for supposing it is true.” The key to understanding rampant irrationality is to recognize that Russell’s statement is not a truism but a revolutionary manifesto. For most of human history and prehistory, there were no grounds for supposing that propositions about remote worlds were true. But beliefs about them could be empowering or inspirational, and that made them desirable enough.

Russell’s maxim is the luxury of a technologically advanced society with science, history, journalism, and their infrastructure of truth-seeking, including archival records, digital datasets, high-tech instruments, and communities of editing, fact-checking, and peer review. We children of the Enlightenment embrace the radical creed of universal realism: we hold that all our beliefs should fall within the reality mindset.

What I didn't like as much

Those expecting everything from the sequences to be represented will be let down. For example, he says little more about quantum mechanics than that "most physicists believe there is irreducible randomness in the subatomic realm of quantum mechanics". Compare that to the sequence on quantum mechanics here [LW · GW] which forcefully argued for the deterministic many worlds interpretation.

His section on fallacies is, in my opinion, an outdated approach to rationality. He presents fallacies as common errors of reasoning which we can point out in other people. As an example, he analyzes a passage from Andrew Yang's presidential campaign, which claimed, "The smartest people in the world now predict that ⅓ of Americans will lose their job to automation in 12 years." Pinker labels such reasoning a "mild example of the argument from authority".

The problem with this analysis is that, from a Bayesian point of view, Yang's statement is perfectly valid evidence for his thesis. In another section Pinker refers to the sunk cost fallacy, but without mentioning how honoring sunk costs might sometimes be perfectly rational too. More generally, common formal and informal fallacies can be seen as instances of Bayesian reasoning, a thesis explained in Kaj Sotala's essay, "Fallacies as weak Bayesian evidence [LW · GW]". Pinker missed some opportunities to make this point clear.
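To make the Bayesian point concrete, here is a minimal sketch in odds form (the numbers are purely illustrative and mine, not Pinker's or Yang's). Write $H$ for the automation claim and $E$ for the fact that smart people endorse it; the endorsement shifts the odds by the likelihood ratio:

$$\frac{P(H \mid E)}{P(\neg H \mid E)} = \frac{P(E \mid H)}{P(E \mid \neg H)} \cdot \frac{P(H)}{P(\neg H)}.$$

If smart people are only somewhat more likely to endorse the claim when it is true, say $P(E \mid H) = 0.6$ versus $P(E \mid \neg H) = 0.4$, the prior odds get multiplied by 1.5: weak evidence, but evidence all the same, which is the sense in which many "fallacies" are better described as weak arguments than as no arguments at all.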

For a book whose title is "Rationality: What It Is, Why It Seems Scarce, Why It Matters", Pinker spends relatively little time on the last part: why it matters. Only the final chapter takes up that question, and in my opinion, it was the weakest part of the book.

In regard to our personal lives, Pinker documents a dizzying array of errors afflicting us on a daily basis, from hyperbolic discounting and taboo trade-offs to the availability heuristic. Yet rather than show how an understanding of these errors could help us succeed at our goals, he retreats into the abstract, and speaks about how rationality could help us all build a better democracy—an ironic defense, given what he had just told us about how the mythological mindset can interact with our politics.

Just about the only thing Pinker had to say about how rationality could help us personally was a single study by Wändi Bruine de Bruin and her colleagues, who found that after holding a few factors constant, such as intelligence and socioeconomic status, competence in reasoning and decision making was correlated with positive life outcomes. Pinker concedes that "all this still falls short of proving causation" but concludes that this research "entitles us to vest some credence in the causal conclusion that competence in reasoning can protect a person from the misfortunes of life."

Overall, I was not really persuaded that rationality self-help matters, but Pinker did show that rationality as a general cognitive trait is powerful; after all, humans are doing quite well compared to other animals. Whether this makes you less interested in rationality depends on why you're interested in the first place. If you are interested in truth, especially in those abstract, mythology-adjacent realms, then rationality is the subject for you. For everyone else, I'm not yet convinced.

43 comments

Comments sorted by top scores.

comment by Jacob Falkovich (Jacobian) · 2021-09-29T18:19:41.086Z · LW(p) · GW(p)

rationality is not merely a matter of divorcing yourself from mythology. Of course, doing so is necessary if we want to seek truth...

I think there's a deep error here, one that's also present in the sequences. Namely, the idea that "mythology mindset" is something one should or can just get rid of, a vestige of silly stories told by pre-enlightenment tribes in a mysterious world.

I think the human brain does "mythological thinking" all the time, and it serves an important individual function of infusing the world with value and meaning alongside the social function of binding a tribe together. Thinking that you can excise mythological thinking from your brain only blinds you to it. The paperclip maximizer is a mythos, and the work it does in your mind of giving shape and color to complex ideas about AGI is no different from the work Biblical stories do for religious people. "Let us for the purpose of thought experiment assume that in the land of Uz lived a man whose name was Job and he was righteous and upright..."

The key to rationality is recognizing this type of thinking in yourself and others as distinct from Bayesian thinking. It's the latter that's a rare skill that can be learned by some people in specialized dojos like LessWrong. When you really need to get the right answer to a reality-based question you can keep the mythological thinking from polluting the Bayesian calculation — if you're trained at recognizing it and haven't told yourself "I don't believe in myths".

Replies from: RationalRomantic
comment by RationalRomantic · 2021-10-01T21:20:08.578Z · LW(p) · GW(p)

I agree. Myths are a function of how the mind stores (some types of) knowledge, rather than just silly stories. I would be interested to hear a "rational" account of poetry and art, as I think myth has more in common with these than with scientific knowledge.

The development of applied rationality was a historical phenomenon, which mostly originated in Greece (with some proto-rationalists in other cultures). One aspect of rationality is differentiating things from each other, and then judging between them. In order to employ judgement, one must have different options to judge between. This is why proto-rationality often arises in hermeneutic traditions, where individuals attempt to judge between possible interpretations of religious texts (see India, for example).

In pre-rational societies, myth often operates as an undifferentiated amalgam of various types of knowledge. It acts as a moral system, an educational system, a political system, a military system, and more. In Islam -- which traditionally did not have a separation of church and state -- politics, culture, and religion are still almost completely undifferentiated; this was also largely the case in Rabbinic Judaism (minus the politics, for obvious reasons).

I think that in the future, myths will continue to serve this purpose: integrating various domains of knowledge and culture together. Arguably the rationalist community, the enlightenment tradition, the philosophical tradition, each of these is engaged in a myth. Nietzsche would call this optimistic Socratism: the optimism that increased knowledge and consciousness will always lead to a better world, and more primordially that the world is ultimately intelligible to the human mind in some deep sense.

comment by Liron · 2021-09-29T13:08:06.655Z · LW(p) · GW(p)

I listened to the audiobook and fully endorse this review. It’s much better than what I would have written.

I really love Pinker’s other books so I was looking forward to this but unfortunately I already had all the fun spoiled by reading LW sequences, like literally all. The Sequences are a superset of this book, longer and quirkier, deeper and more insightful. But I agree that Pinker’s book is a good fit for someone who wants a more compact and mainstream-sounding intro to rationality.

Replies from: Yoav Ravid
comment by Yoav Ravid · 2021-09-29T13:56:05.046Z · LW(p) · GW(p)

And perhaps the biggest advantage of the sequences is that, though they're long, they're made of mostly stand-alone essays that can be shared on their own. So if you want to let someone read about a specific topic, an essay from the sequences will probably be a better fit than "Page/Chapter X in this book".

comment by Ben Pace (Benito) · 2021-09-29T01:40:43.971Z · LW(p) · GW(p)

Did he say much more about LessWrong than that paragraph?

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2021-09-29T01:54:37.747Z · LW(p) · GW(p)

Yes, he says a bit more. Let me provide some quotes,

Many Rationalistas believe that Bayes’s rule is among the normative models that are most frequently flouted in everyday reasoning and which, if better appreciated, could add the biggest kick to public rationality. In recent decades Bayesian thinking has skyrocketed in prominence in every scientific field. Though few laypeople can name or explain it, they have felt its influence in the trendy term “priors,” which refers to one of the variables in the theorem.

Elsewhere,

It would be nice to see people earn brownie points for acknowledging uncertainty in their beliefs, questioning the dogmas of their political sect, and changing their minds when the facts change, rather than for being steadfast warriors for the dogmas of their clique. Conversely, it could be a mortifying faux pas to overinterpret anecdotes, confuse correlation with causation, or commit an informal fallacy like guilt by association or the argument from authority. The “Rationality Community” identifies itself by these norms, but they should be the mores of the whole society rather than the hobby of a club of enthusiasts.

He also cites Raemon's essay "What exactly is the 'Rationality Community?' [LW · GW]" in the notes, as well as this essay on Arbital about Bayes' rule, Overcoming Bias, Slate Star Codex, Scott Aaronson, Julia Galef's new book, Bryan Caplan's description of the rationalist community, and Tom Chivers's book.

Replies from: Benito
comment by Ben Pace (Benito) · 2021-09-29T01:56:13.721Z · LW(p) · GW(p)

Man, that essay by Ray really gets around.

Overall sounds like he portrays us pleasantly, which is nice.

comment by Vaniver · 2021-10-01T04:41:56.674Z · LW(p) · GW(p)

Compare that to the sequence on quantum mechanics here [LW · GW] which forcefully argued for the deterministic many worlds interpretation.

Ok, but why does this matter?

IMO, the point of the QM sequence is "no, really, nowhere is safe; the call is coming from inside the house." There's a big difference between 'rationalists' who see irrationality as something to fight 'out there', and 'rationalists' who see irrationality as something to fight 'in yourself'. Seeing irrationality in a field considered by many to be the peak of human intellect is a sobering observation.

But... you have to go through the details, and you have to be right. I don't know that I would put it in my intro rationality book (for example, it's not in Thinking and Deciding, which was my old go-to recommendation [LW · GW] for a textbook on rationality).

comment by swarriner · 2021-09-30T13:17:43.718Z · LW(p) · GW(p)

I find myself concerned. Steven Pinker's past work has been infamously vulnerable to spot-checks of citations, leading me to heavily discount any given factual claim he makes. Is there reason to think he has made an effort here that will be any better constructed?

comment by Vaniver · 2021-10-01T04:53:53.024Z · LW(p) · GW(p)

speaks about how rationality could help us all build a better democracy—an ironic defense, given what he had just told us about how the mythological mindset can interact with our politics.

Tho this feels to me like it needs to grapple with The Myth of the Rational Voter. That is, Caplan claims voters are 'rationally irrational', where they correctly determine that voting calls for the mythological mindset instead of the reality mindset. 

In order for people to vote in reality mindset, something needs to be structurally different, because if you just get people to drop the mythological mindset, they'll probably rationally decide not to vote (because the expected benefit of their vote, under most reality-based analyses, will be less than the cost of voting).

[I am optimistic about some ways to make voting more conducive to reality mindset, but I think it doesn't look very much like "more informed voters". Also, I think most "well, educate people more" approaches look like "replace mythology A with mythology B", which I'm in favor of!]

comment by Robbo · 2021-09-29T15:33:18.247Z · LW(p) · GW(p)

"I'm tempted to recommend this book to people who might otherwise be turned away by Rationality: From A to Z [? · GW]."

Within the category of "recent accessible introduction to rationality", would you recommend this Pinker book, or Julia Galef's "Scout Mindset"? Any thoughts on the pros and cons of each, or who would benefit more from each?

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2021-09-29T15:47:53.910Z · LW(p) · GW(p)

I only skimmed Julia Galef's book, which is why I didn't compare the two. I suspect her book would be a better fit for newcomers, but I'm not sure.

comment by Alexander (alexander-1) · 2021-10-06T02:57:58.969Z · LW(p) · GW(p)

Funny how the top-rated review of this book on Goodreads ignores everything Pinker says about cognitive biases and probabilistic reasoning and claims that "There are no objective facts; such things are self-contradictory" as some strawman rebuttal. If true, then that statement itself is a contradiction.

I find it astonishing that people continue to conflate "rationality" with "objective facts" when the modern meaning of rationality acknowledges that the map is not the territory.

comment by wolflow · 2021-09-29T15:14:43.823Z · LW(p) · GW(p)

"For instance, most people consistently fail to save for retirement."

Strange example, since there might almost be evolutionary reasons not to care about later life, i.e. it being rational from the pov of our genes.

comment by dedz · 2021-09-30T03:29:11.677Z · LW(p) · GW(p)

There's a nice paper on this "informal fallacies as Bayesian reasoning" idea: https://ojs.uwindsor.ca/index.php/informal_logic/article/view/2132

(But that doesn't mean informal fallacies are always good arguments. It just means they can't be dismissed a priori; you have to analyze each argument individually.)

comment by Rafael Harth (sil-ver) · 2021-09-29T17:09:18.386Z · LW(p) · GW(p)

As an example, he analyzes a passage from Andrew Yang's presidential campaign, which claimed, "The smartest people in the world now predict that ⅓ of Americans will lose their job to automation in 12 years." Pinker labels such reasoning a "mild example of the argument from authority".

Isn't this pretty damning? (As in, damning for Steven's abilities as a rationalist.)

Appeals to authority are not at all categorically weak. A lot of Inadequate Equilibria was about figuring out when it is and is not plausible to out-compete authority. Saying "Professionals say it's hard to predict Microsoft's stock price, so you won't be able to" is also an argument from authority; it's also an extremely strong argument.

There has to be a name for the fallacy of thinking that a common term with negative association like "argument from authority" automatically means something even when it doesn't... ? Someone help me out here.

Also -- choosing Yang as an example of irrationality in politics? Really? I guess I'm supposed to think that any example is okay as long as it's legit (this one arguably isn't, anyway), but in fact, being named as an example is Bayesian evidence that the author thinks you are altogether irrational, and people are probably going to understand it that way.

Replies from: Measure, AllAmericanBreakfast, gjm, TAG, TAG
comment by Measure · 2021-09-29T17:38:15.137Z · LW(p) · GW(p)

The smartest people in the world find it hard to predict Microsoft's stock price, so you won't be able to.

Wouldn't the argument-from-authority version of this instead be "The smartest people in the world say it's hard for anyone to predict Microsoft's stock price, so you won't be able to"?

"Smart people can't do X, therefore average people can't do X either" seems less fallacious than "Smart people say average people can't do X, and they must be right because they're smart."

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-09-29T18:15:04.303Z · LW(p) · GW(p)

Actually, I think the example I wanted to choose was 'professionals say it's hard to predict the stock price'. Like, the appeal to authority is most commonly made with respect to supposed experts in the field, not with generally smart people. What Yang said isn't even a central example of appeal to authority.

I think 'smart people say average people can't do X, they must be right because they're smart' is probably also not central? Also not sure it's all that fallacious, probably depends on how you define 'smart'.

comment by DirectedEvolution (AllAmericanBreakfast) · 2021-09-29T19:55:13.555Z · LW(p) · GW(p)

We could substitute any nonsense assertion we like into that Yang quote.

The smartest people in the world now predict that horse-doses of ivermectin are the best treatment for COVID-19.

The smartest people in the world now predict that Kanye West will win the next presidential election.

The smartest people in the world now predict that we should return to the gold standard.

Appending "the smartest people in the world now predict [X}" shouldn't actually lend any weight to the subsequent claim. It's just a thing you can say to make that claim sound more substantial. This is because Yang's not telling us who these people are, exactly, how he knows they're so smart, or why their intelligence bears very much on the credence we should assign to this remarkably specific prediction.

If instead, Yang had said "Paul Krugman now predicts that 1/3 of Americans will lose their job to automation in 12 years" (and note - I used Krugman's name arbitrarily and have no reason to believe this is what he really thinks), then we could judge for ourselves the credence we assign to Krugman's predictive abilities. It would become a meaningful claim. As it is, though, I think this Yang quote is a nice, central example of an argument from authority.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-09-29T21:47:25.308Z · LW(p) · GW(p)

We could substitute any nonsense assertion we like into that Yang quote.

Isn't that trivially false? If Yang said the Kanye thing, that would be a lie, so if you think he's trying to be honest, he can't say that. I agree that he doesn't give you any way to verify his claim, but that's not the standard I use to decide whether something is an appeal to authority. If you say, 'some of the smartest people in the world are religious', that's an appeal to authority and probably a weak argument even though it's true.

Yang often uses the phrase 'my friends in Silicon Valley'; he probably was talking about important people in tech in that quote. I wouldn't trust those people, but I certainly think their opinions are evidence.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-09-30T00:15:33.083Z · LW(p) · GW(p)

OK, I think there are two ways to look at this question.

One is to separate the quote from the speaker, and ask if we'd still consider the context-free quote to be a piece of evidence. This is what I am advocating.

The other is to consider who's speaking as key to interpreting the meaning of the quote, which is what you're doing.

I think both are valid. For example, in the TV show "Firefly," one of the characters, Simon Tam, receives letters from his highly intelligent sister. They read as "perfectly normal" to his parents, but their trivial content and occasional misspellings make him suspect - correctly - that they contain a code saying that she's being harmed at the boarding school she's been sent to. Here, considering the message in light of the speaker (or sender, in this case), is crucial to understanding it as a piece of evidence that his sister is in danger.

Another example is when my mentor in my MS program tells me that it's best to automatically accept the predictions our PI makes about research ideas. If he likes them, they're good. If he doesn't like them, they're bad. He's a credible authority figure to whom we can and should appeal as a strong form of evidence.

Alternatively, there are many cases in which we might find it very difficult to predict how the identity of the speaker, or the context, should influence our interpretation of their quote. In the case of Yang, I have very little insight into whether or not his reference to "the smartest people in the world" is evidence that "this job-loss prediction is believed by more smart and well-qualified analysts than I, AllAmericanBreakfast, had thought prior to reading this Yang quote."

If it is evidence of this, then yes, I agree that an ideal reasoning process would take it as some evidence that the prediction is true. But realistically, politicians often play fast-and-loose with their evidence. I am often wise to actively choose to not read anything into the quote beyond its context-free content. When I read this quote, I willfully shut off my imagination from trying to conjure up images of the supposed "smartest people in the world" that Yang's ventriloquizing, to prevent my brain from being tricked into thinking this quote ought to update my belief.

Perhaps a more precise description of the fallacy here is "argument from an illusory authority." When we say that something is a fallacious argument from authority, we're implicitly saying that "proper epistemics in this context is to disregard references to the opinions of nonspecific 'authorities,' because the rhetoric is designed to trick you into accepting the statement, rather than to convey credible opinions to your mind."

In response to your question, "isn't this trivially false," well - no, it's trivially true. We can substitute anything we like into that statement. Watch me!

The smartest people in the world say that raisin bran is the best cereal.

The smartest people in the world say that I, AllAmericanBreakfast, am right about all this argument-from-authority stuff.

The smartest people in the world say that it's trivially true that you can append any nonsense statement to "the smartest people in the world."

Context-free, you shouldn't (and, I'm sure, are not) taking "the smartest people in the world" as any evidence at all of the truth-value of my claims. In context, you also aren't doing that, because you recognize my intent in uttering these quotes is not to convince you of their literal truth.

Note that it's only in very special circumstances - particular combinations of speaker, referenced authority figure, and object-level claim - in which you would consider the referenced authority figure to lend a greater weight of evidence to the claim. Perhaps most references to authority figures in the context of argument are selected for actually being relevant as evidence. By this, I mean that during actual debate, people may strive to avoid statements like "my uncle Bob says that Apple stock is going to double in price by next year," because referencing uncle Bob as an authority figure lends no support to their claim, but undermines their general credibility as a debater. So when an authority figure does get referenced, we ought to take it seriously.

But I don't actually buy that argument. I think vague or non-credible authority figures get referenced all the time, and it's only in select circumstances when we should actually respect this form of evidence. Generally, I think we should "tag" references to authority figures as fallacious and to be ignored, unless we have made a considered judgment to afford a specific speaker, on a specific topic, referencing a specific authority figure, to be actually useful evidence. A conservative guardedness with occasional permissions, rather than a liberal acceptance with occasional rejections.

In an ideal reasoning process with unlimited compute, we might wish to consider fully the credibility of each referenced authority figure, no matter how vague. But in a real process with our puny minds, I think it's best to generally choose to ignore and actively (epistemically) punish arguments by authority, unless they're done right.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-09-30T19:22:51.036Z · LW(p) · GW(p)

I strongly endorse drawing a distinction, but I think I want to draw it a bit differently. The reason is that I feel like I would still defend the smart-people quote as non-fallacy-like if someone else had said it, and if that's true, it can't be because I have some respect for Yang as a thinker.

How about this (which I give you total credit for because I only came up with it after reading your comment):

  • (1) A statement is, by itself, not evidence to a rational observer who doesn't know the speaker, but it's possible that adding more information could turn it into evidence
  • (2) A statement contains enough information for a rational observer to conclude that the argument is fallacious

I would agree that #1 applies to Yang's quote, and if that's sufficient for being a fallacy, then Yang's quote is a fallacy. This has some logic to it because people could mistakenly believe that the quote is evidence by itself, and that would be a mistake. However, I myself often say things for which #1 applies, and I believe that a lot of pretty rational people do as well. Then again, perhaps some don't. I probably do it much less on LW than on other sites where I put much less effort into my posts.

I think the explanation for my intuition that the inclusion into the book is stupid is that avoiding #1 is a relatively high standard, and in fact lots of politicians routinely fail #2. I bet you could even find clear-cut examples of politicians failing #2 with regard to arguments from authority. There just is a difference between "this is stupid" and "this is incomplete and may or may not be stupid if I hear the rest", and that difference seems to capture my reaction even on reflection.

Conversely, I feel like the personality of the speaker should not be an input to the fallacy-deciding function. I agree with everything in your last four paragraphs, but I think #1 remains less bad than #2 even if you think it's unlikely that additional information would make the argument non-stupid.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-09-30T22:10:07.065Z · LW(p) · GW(p)

I think we're getting closer! Here's even another alternative.

Let's first admit that arguments can gain information-content both from their text and from their context. Through text and context, they can define, describe, and make claims and predictions.

  1. An appropriate argument from authority sufficiently defines (via text and context) a meaningful authority figure, describes with sufficient accuracy their level of credibility on the subject, and makes a sufficiently specific claim. We predict that the accuracy of our own prediction will improve if we put in the effort to update our predictions based on this claim. Furthermore, once the prediction resolves, we will use that result to increase or decease the credibility we ascribe to that authority figure.
  2. A fallacious argument from authority fails one or more of these tests of sufficiency, even when taking context into account. The authority figure may be too vaguely referenced, their credibility may be exaggerated, or the claim may be too imprecise. The accuracy of our predictions will be worsened if we update based on this claim, and the resolution of the claim does not allow us to update the credibility we ascribe to the referenced authority figure.

Another possible division is:

  1. A non-fallacious argument from authority creates an expectation that more context about the referenced authority figure would help us better assess the truth-content of the claim.
  2. A fallacious argument from authority creates an expectation that further context would make the quote feel just as stupid as it seemed before.

I think it's only worth worrying about these divisions and fearing arguments from authority in a relatively serious context. If you're having dinner with your friends and happen to vaguely reference an authority figure to back your claim, that's fine. It ain't that deep.

But if a serious statesman does it on TV in the context of a debate or speech, then we have every right to complain that their claims contain fallacious arguments from authority.

I think that Yang's quote has a serious-enough context, is stupid-sounding enough on its own to make me uninterested in more context, and fails all these tests of sufficiency. For that reason, I consider it fallacious. It hits all three check boxes on my "fallacy check-list."

I suspect that when you make potentially-fallacious arguments from authority, they're usually in a relatively non-serious context and that your audience believes that if they took the time to interrogate you about the authority figures you reference, that they'd feel persuaded that they are plausibly meritorious authority figures even though you were vague in your initial presentation. Hence, you would probably not be committing a fallacy, in the way I am defining it here.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-10-01T18:28:38.993Z · LW(p) · GW(p)

Taking your second division:

  1. [...] creates an expectation that more context about the referenced authority figure would help us better assess the truth-content of the claim.
  2. [...] creates an expectation that further context would make the quote feel just as stupid as it seemed before.

I want to add 3

  3. [...] contains enough information that you already know the quote is stupid

You implied Yang's quote is #2; I would say it's #1. But (more importantly?) I would draw the bigger distinction between #2 and #3, and I don't see any reason to choose something in #2 as an example of irrationality when examples in #3 are available. This, I think, is still my main argument.

I agree with everything else in your post; in particular, with the importance of context which I hadn't considered. I concede that my behavior isn't analogous to that of Yang in this example.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-10-01T20:09:36.090Z · LW(p) · GW(p)

Glad we are converging on some sort of agreement! I find this helpful to talk out with you.

I'm not clear on the distinction between #2 and #3. What's the difference between predicting the quote will still seem stupid after further research, and finding the quote to be stupid now? By conservation of expected evidence, aren't they the same thing?

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-10-01T22:03:42.830Z · LW(p) · GW(p)

I'm not clear on the distinction between #2 and #3. What's the difference between predicting the quote will still seem stupid after further research, and finding the quote to be stupid now? By conservation of expected evidence, aren't they the same thing?

One difference is just uncertainty. Conservation of expected evidence only works if you're certain. I think I was assuming that your probability on Yang being unable to expand on his claim meaningfully is like ~67%, not like 98%.

What I suspect happened is that he talked to various big names in tech, including the CEOs of companies who make decisions about automation, and they were bullish on the timelines. Would that kind of scenario qualify as being non-stupid?

The other difference is that you can't demonstrate it. The leading question wasn't about whether one should update from the quote; it was whether it's a good idea to choose the quote as a negative example in a book about rationality. Even if Steven Pinker were 100% sure that there is no reason to update here, it's still not a good example if he can't prove it. I mean, if you accept that the quote could plausibly be non-stupid, then the honest way to talk about it would have been "Here is Yang saying {quote}. If it turns out that 'some of the smartest people in the world' is made-up or refers to people who aren't actually impressive, this will have been an example of a vapid appeal-to-authority. I can't prove that this is so, but here are my reasons for suspecting it." And again, I can't imagine it's hard to find examples of appeals-to-authority that are clear-cut fallacies.

Replies from: AllAmericanBreakfast
comment by DirectedEvolution (AllAmericanBreakfast) · 2021-10-02T00:20:38.242Z · LW(p) · GW(p)

What I suspect happened is that he talked to various big names in tech, including the CEOs of companies who make decisions about automation, and they were bullish on the timelines. Would that kind of scenario qualify as being non-stupid?

No. That's a good point. It seems like "fallacious argument from authority" lends itself to black-and-white thinking that's just not appropriate in many cases. Reading the tea leaves has its value. If I had to guess, Pinker was looking for a timely quote by a politician his readers are likely to be sympathetic to, and this one was convenient.

I still think that there are many times when it's best as a rule to just dismiss statements with the form of "arguments from authority." This fits the criteria, and it might be that sometimes you throw out the baby with the bathwater this way. Then again, there could be equal value in becoming sensitive to when it's appropriate to "tune in" to this sort of evidence. That probably depends on the individual and their goals.

comment by gjm · 2021-09-29T17:30:08.744Z · LW(p) · GW(p)

It would be interesting to know exactly what Pinker wrote. For instance, imagine that he wrote something like this:

What Yang does here is a mild example of the argument from authority. It may be true that 1/3 of Americans will lose their jobs to automation in 12 years, and if it's true that the smartest people in the world expect that then that should make us think it more likely than we did before. But it's a long way short of proof -- the smartest people in the world may still not have much actual ability to predict what will happen 12 years out. And Yang never says which smartest people in the world, or how he knows that they're the smartest, or what sort of smart, all of which could make a big difference to how much weight we give to their opinion. (Maybe the smartest people in the world are all theoretical physicists and don't actually know anything about the economy. Maybe the people Yang thinks are the smartest in the world are the people who seem smart to him because they say a lot of things he agrees with. Maybe he has no idea who the smartest people in the world are and he's just bullshitting. Or maybe he means that a selection of top economists, technologists and the like sat down and made a serious attempt to predict likely futures for technological development and their impact on the economy, and they estimate that 1/3 of Americans will lose their jobs to automation. These scenarios are not all alike.) Some arguments from authority are stronger than this, some are weaker, but the weaknesses here are typical: it's not clear exactly what authority is being cited, it's not clear exactly how expert that authority is, and even the most expert it could plausibly be isn't enough to justify very much confidence in what they say.

I wouldn't find that damning evidence against Pinker's expertise in rationality.

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2021-09-29T18:03:46.048Z · LW(p) · GW(p)

FWIW, this was the full paragraph that I pulled the quote from,

The “smartest people in the world” claim from the Yang Gang is a mild example of the argument from authority. The authority being deferred to is often religious, as in the gospel song and bumper sticker “God said it, I believe it, that settles it.” But it can also be political or academic. Intellectual cliques often revolve around a guru whose pronouncements become secular gospel. Many academic disquisitions begin, “As Derrida has taught us . . .”—or Foucault, or Butler, or Marx, or Freud, or Chomsky. Good scientists disavow this way of talking, but they are sometimes raised up as authorities by others. I often get letters taking me to task for worrying about human-caused climate change because, they note, this brilliant physicist or that Nobel laureate denies it. But Einstein was not the only scientific authority whose opinions outside his area of expertise were less than authoritative. In their article “The Nobel Disease: When Intelligence Fails to Protect against Irrationality,” Scott Lilienfeld and his colleagues list the flaky beliefs of a dozen science laureates, including eugenics, megavitamins, telepathy, homeopathy, astrology, herbalism, synchronicity, race pseudoscience, cold fusion, crank autism treatments, and denying that AIDS is caused by HIV.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2021-09-29T18:25:27.292Z · LW(p) · GW(p)

I maintain that the point is stupid. Putting aside that 'smart' could entail rationality, the bigger issue is that Steven argues [it's non-conclusive] => [it's a fallacy]. Certainly intelligent people can believe false things, but as long as intelligence correlates with being accurate, what Yang said is still Bayesian evidence, just as you said in the review.

And it's not hard to find better examples for irrationality. Julia Galef's book contains a bunch of those, and they're all clear-cut.

Replies from: TAG
comment by TAG · 2021-09-29T18:55:45.416Z · LW(p) · GW(p)

If all the information you had about a person was some very generic information about their IQ or rationality quotient, your best option would be to believe the person who scores highest.

But that is almost never the case. Experts have certificates indicating their domain-specific knowledge.

Would you want a random person with an IQ of 180 performing a surgical operation on you?

what Yang said is still Bayesian evidence

But very weak Bayesian evidence. The human brain can't physically deal with very small or very large quantities. You're much better off disregarding very weak evidence.

Replies from: Vladimir_Nesov, sil-ver
comment by Vladimir_Nesov · 2021-09-29T19:58:18.450Z · LW(p) · GW(p)

You're much better off disregarding very weak evidence.

Yes. This shouldn't be confused with regarding it as a lack of evidence. Occasionally you are better off making use of it after all, if the margins for a decision are slim. Evidence is never itself fallacious; the error is its misrepresentation as something that it isn't, and an act of labeling something as a fallacy can itself easily fall into such a fallacy, for example by implying that weak evidence of a certain form should be seen as lack of evidence or as counterevidence.

comment by Rafael Harth (sil-ver) · 2021-09-29T19:44:54.981Z · LW(p) · GW(p)

Would you want a random person with an IQ of 180 performing a surgical operation on you?

If we didn't have professional surgeons (and I need the surgery), then yes, and we don't have something analogous to professional surgeons for predicting the future. (Maybe superforecasters, but that standard is definitely not relevant if we're comparing Yang to the average politician.)

Replies from: gjm
comment by gjm · 2021-09-29T20:54:31.062Z · LW(p) · GW(p)

We do have people with expertise relevant to making the sort of prediction Yang's talking about, though. For instance:

  • AI researchers probably have a better idea than randomly chosen very smart people of what the state of AI is likely to be a decade from now.
  • Economists probably have a better idea than randomly chosen very smart people of whether the likely outcome of a given level of AI progress looks more like "oh noes 1/3 of all Americans have lost their jobs" or "1/3 of all Americans find that the nature of their work has changed" or "no one actually needs a job any more".

Replies from: ChristianKl
comment by ChristianKl · 2021-09-30T09:22:30.860Z · LW(p) · GW(p)

As a group, AI researchers are in AI research because they believe in its potential. While they do have expertise, there's also a bias.

If you had listened to AI researchers five years ago, we would have driverless cars by now.

Replies from: gjm
comment by gjm · 2021-09-30T22:58:20.071Z · LW(p) · GW(p)

Damn! If only I'd listened to AI researchers five years ago.

(I know what you meant :-).)

Yes, it's true that AI researchers' greater expertise is to some extent counterbalanced by possible biases. I still think it's likely that a typical eminent AI researcher has a better idea of the likely level of technological obsolescence over the next ~decade than a typical randomly chosen person with (say) an IQ over 160.

(I don't think corresponding things are always true. For instance, I am not at all convinced that a randomly chosen eminent philosopher-of-religion has a better idea on average of whether there are any gods and what they're like if so than a randomly chosen very clever person. I think it depends on how much real expertise is possible in a given field. In AI there's quite a lot.)

Replies from: ChristianKl
comment by ChristianKl · 2021-10-01T13:18:15.662Z · LW(p) · GW(p)

Knowing whether AI will make a field obsolete takes both expertise in AI and expertise in the given field.

There's an xkcd for that https://xkcd.com/793/

Replies from: gjm
comment by gjm · 2021-10-01T14:53:50.579Z · LW(p) · GW(p)

I agree that people who are both AI experts and truck drivers (or executives at truck-driving companies) will have a better idea of how many Americans will lose their truck-driving jobs because they get automated away, and likewise for other jobs.

Relatively few people are expert both in AI and in other fields at risk of getting automated away. I think having just expertise in AI gives you a better chance than having neither. I don't know who Yang's "smartest people" actually were, but if they aren't people with either specific AI expertise, or specific expertise in areas likely to fall victim to automation, or maybe expertise in labor economics, then I think their pronouncements about how many Americans are going to lose their jobs to automation in the near future are themselves examples of the phenomenon that xkcd comic is pointing at.

(Also, e.g., truck drivers may well have characteristic biases when they talk about the likely state of the truck driving industry a decade from now, just as AI researchers may.)

comment by TAG · 2021-09-29T23:44:57.800Z · LW(p) · GW(p)

There has to be a name for the fallacy of thinking that a common term with negative association like “argument from authority” automatically means something even when it doesn’t… ? Someone help me out here.

Fallacy fallacy

comment by TAG · 2021-09-29T18:15:42.275Z · LW(p) · GW(p)

The fallacy is inappropriate appeal to authority. A geologist is not an appropriate authority on what is truly a fruit or berry. And you should not disregard your doctor's advice just because they're an authority on medicine.

The word "smart" is nowadays used to mean something like "expert on everything"...but how likely is that? It takes half a lifetime to get serious domain-specific knowledge of just one thing.

comment by TAG · 2021-09-29T11:07:21.972Z · LW(p) · GW(p)

Louis Liebenberg, who documented how the San people use Bayesian reasoning

It would be quite impressive if they were applying Bayes's rule, a feat most of my fellow countrymen are incapable of. But I can't find Liebenberg using the term "Bayes". Did you interject it, or did Pinker?

Replies from: matthew-barnett
comment by Matthew Barnett (matthew-barnett) · 2021-09-29T14:36:03.948Z · LW(p) · GW(p)

I think the Bayesian interpretation might have been Pinker's own take. I did not interject it myself.

comment by TAG · 2021-09-29T10:46:17.454Z · LW(p) · GW(p)

Those expecting everything from the sequences to be represented will be let down. For example, he says little more about quantum mechanics than that “most physicists believe there is irreducible randomness in the subatomic realm of quantum mechanics”. Compare that to the sequence on quantum mechanics here which forcefully argued for the deterministic many worlds interpretation.

OK, I've compared them.

If a layperson tries to out-think an expert, and ends up disagreeing with the consensus, they are almost certainly wrong -- whether it's over climate change, vaccines, or QM. So, out of the two, Pinker is giving correct rationality advice.