Some highlights from Nate Silver's "The Signal and the Noise"

post by JonahS (JonahSinick) · 2013-07-13T15:21:14.903Z · LW · GW · Legacy · 11 comments


As a part of my work for MIRI on the "Can we know what to do about AI?" project, I read Nate Silver's book The Signal and the Noise: Why So Many Predictions Fail — but Some Don't. I compiled a list of the takeaway points that I found most relevant to the project. I think that they might be of independent interest to the Less Wrong community, and so am posting them here.

Because I've paraphrased Silver rather than quoting him, and because the summary is long, there may be places where I've inadvertently misrepresented him. A reader who's especially interested in a point should check the original text.

Main Points

Chapter Summaries

Introduction

Increased access to information can do more harm than good. This is because the more information is available, the easier it is for people to cherry-pick information that supports their pre-existing positions, or to perceive patterns where there are none.
The invention of the printing press may have given rise to religious wars on account of facilitating the development of ideological agendas.

Chapter 1: The failure to predict the 2008 housing bubble and recession

Chapter 2: Political predictions

Chapter 3: Baseball predictions

Chapter 4: Weather predictions

Chapter 5: Earthquake predictions

Chapter 6: Economic predictions

Chapter 7: Disease outbreaks

Chapter 8: Bayes' Theorem

Chapter 9: Chess computers

Chapter 10: Poker

Chapter 11: The stock market

Chapter 12: Climate change

Chapter 13: Terrorism

11 comments


comment by lukeprog · 2013-07-13T16:37:38.602Z · LW(p) · GW(p)

Political pundits and political experts usually don't do much better than chance when forecasting political events, and usually do worse than crude statistical models.

An important caveat to this, though I can't recall whether Silver mentions it: Tetlock (2010) reminds people that he never said experts are "as good as dart-throwing chimps." E.g. experts do way better than chance (or a linear prediction rule) when considering the space of all physically possible hypotheses rather than when considering a pre-selected list of already-plausible answers. E.g. the layperson doesn't even know who is up for election in Myanmar. The expert knows who the plausible candidates are, but once you narrow the list to just the plausible candidates, then it is hard for experts to out-perform a coin flip or an LPR by very much.

comment by Matt_Simpson · 2013-07-13T20:44:23.984Z · LW(p) · GW(p)

One interesting fact from Chapter 4 (on weather predictions) that seems worth mentioning: Weather forecasters are also very good at manually and intuitively (i.e. without a rigorous mathematical method) correcting the predictions of their models. E.g. they might know that model A always predicts rain a hundred miles or so too far west of the Rocky Mountains. To fix this, they take the computer output and manually redraw the lines (demarcating level sets of precipitation) about a hundred miles east, and this significantly improves their forecasts.
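
As a rough illustration of that kind of manual bias correction (my own sketch, not from the book or the comment): assuming a gridded precipitation forecast, a grid spacing of 25 miles, and a known 100-mile westward bias, the forecaster's fix amounts to shifting the field east before issuing the forecast.

```python
import numpy as np

# Illustrative sketch only. Assumed numbers: 25-mile grid spacing and a
# known 100-mile westward bias in the model's precipitation placement.
GRID_SPACING_MILES = 25
KNOWN_WESTWARD_BIAS_MILES = 100

def correct_westward_bias(precip: np.ndarray) -> np.ndarray:
    """Shift a (lat, lon) precipitation grid east by the known bias."""
    shift_cells = KNOWN_WESTWARD_BIAS_MILES // GRID_SPACING_MILES
    corrected = np.zeros_like(precip)
    corrected[:, shift_cells:] = precip[:, :-shift_cells]  # move values east
    return corrected

# Example: a toy 4x8 grid with rain concentrated in a western column.
model_output = np.zeros((4, 8))
model_output[:, 1] = 1.0
print(correct_westward_bias(model_output))  # rain now appears 4 cells east
```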

Also: the National Weather Service gives the most accurate weather predictions. Everyone else exaggerates to a greater or lesser degree in order to avoid getting flak from consumers about, e.g., rain on their wedding day (forecasting rain that never materializes is far less of a problem than failing to forecast rain that does).

comment by Said Achmiz (SaidAchmiz) · 2013-07-13T16:41:03.564Z · LW(p) · GW(p)

A major reason that predictions fail is model uncertainty into account.

This does not seem to be a proper sentence. Some words missing, yes?

Replies from: JonahSinick
comment by JonahS (JonahSinick) · 2013-07-14T05:22:22.451Z · LW(p) · GW(p)

Thanks, fixed.

comment by gwern · 2013-07-14T15:28:36.922Z · LW(p) · GW(p)

The excerpts I posted from the book may be of interest:

comment by drc500free · 2013-07-17T23:35:31.565Z · LW(p) · GW(p)

When gauging the strength of a prediction, it's important to view the inside view in the context of the outside view. For example, most medical studies that claim 95% confidence aren't replicable, so one shouldn't take the 95% confidence figures at face value.

This implies that the average prior for a medical study is below 5%. Does he make that point in the book? Obviously you shouldn't use a 95% test when your prior is that low, but I don't think most experimenters actually know why a 95% confidence level is used.
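
For what it's worth, a quick Bayesian calculation (my own sketch, not from the book) supports that reading: assuming a 5% false-positive rate and, say, 80% statistical power, the share of "95% confident" findings that are actually true drops below half once the prior falls to roughly 6%.

```python
# Sketch of the base-rate argument via Bayes' theorem.
# Assumed numbers: alpha = 0.05 (false-positive rate), power = 0.8.

def fraction_of_positive_findings_that_are_true(prior, alpha=0.05, power=0.8):
    """P(hypothesis true | significant result)."""
    true_positives = prior * power
    false_positives = (1 - prior) * alpha
    return true_positives / (true_positives + false_positives)

for prior in (0.50, 0.10, 0.05, 0.01):
    ppv = fraction_of_positive_findings_that_are_true(prior)
    print(f"prior = {prior:.2f} -> {ppv:.0%} of significant results are true")
```

With a 50% prior, about 94% of significant results are true; at a 5% prior it is under half, which is the regime of widespread non-replication being discussed.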

comment by teageegeepea · 2013-07-16T22:54:46.199Z · LW(p) · GW(p)

Does the bit on Gorbachev contain any references to Timur Kuran's work on preference falsification & cascades?

Replies from: JayDee
comment by JayDee · 2013-07-19T10:40:44.968Z · LW(p) · GW(p)

Not that I recall, and searching the text for Kuran nets nothing.

comment by Said Achmiz (SaidAchmiz) · 2013-07-13T16:55:44.033Z · LW(p) · GW(p)

The invention of the printing press may have given rise to religious wars on account of facilitating the development of ideological agendas.

Uh, what? Is this meant to suggest that there were no religious wars before the printing press?

Replies from: None
comment by [deleted] · 2013-07-13T18:23:50.155Z · LW(p) · GW(p)

Nope, just ambiguity in English. You read "may have given rise to [all/most] religious wars" when he meant "may have given rise to [some] religious wars."

In general, though, it's exhausting to constantly attempt to write things in a way that minimizes uncharitable interpretations, so readers have an obligation not to jump to conclusions.

Replies from: SaidAchmiz
comment by Said Achmiz (SaidAchmiz) · 2013-07-13T19:33:48.886Z · LW(p) · GW(p)

Ah, I see the ambiguity now. My mistake; thank you for the explanation!

EDIT: You know, I keep getting tripped up by this sort of thing. I don't know if it's because English isn't my first language (although I've known it for over two decades), or if it's just a general failing. Anyway, correction accepted.