[Link] Tetlock on the power of precise predictions to counter political polarization

post by Stefan_Schubert · 2015-10-04T15:19:32.558Z · LW · GW · Legacy · 7 comments

The prediction expert Philip Tetlock writes in The New York Times on the power of precise predictions to counter political polarization. Note the similarity to Robin Hanson's futarchy idea.

Is there a solution to this country’s polarized politics?

Consider the debate over the nuclear deal with Iran, which was one of the nastiest foreign policy fights in recent memory. There was apocalyptic rhetoric, multimillion-dollar lobbying on both sides and a near-party-line Senate vote. But in another respect, the dispute was hardly unique: Like all policy debates, it was, at its core, a contest between competing predictions.

Opponents of the deal predicted that the agreement would not prevent Iran from getting the bomb, would put Israel at greater risk and would further destabilize the region. The deal’s supporters forecast that it would stop (or at least delay) Iran from fielding a nuclear weapon, would increase security for the United States and Israel and would underscore American leadership.

The problem with such predictions is that it is difficult to square them with objective reality. Why? Because few of them are specific enough to be testable. Key terms are left vague and undefined. (What exactly does “underscore leadership” mean?) Hedge words like “might” or “could” are deployed freely. And forecasts frequently fail to include precise dates or time frames. Even the most emphatic declarations — like former Vice President Dick Cheney’s prediction that the deal “will lead to a nuclear-armed Iran” — can be too open-ended to disconfirm.

[...]

Non-falsifiable predictions thus undermine the quality of our discourse. They also impede our ability to improve policy, for if we can never judge whether a prediction is good or bad, we can never discern which ways of thinking about a problem are best.

The solution is straightforward: Replace vague forecasts with testable predictions. Will the International Atomic Energy Agency report in December that Iran has adequately resolved concerns about the potential military dimensions of its nuclear program? Will Iran export or dilute its quantities of low-enriched uranium in excess of 300 kilograms by the deal’s “implementation day” early next year? Within the next six months, will any disputes over I.A.E.A. access to Iranian sites be referred to the Joint Commission for resolution?

Such questions don’t precisely get at what we want to know — namely, will the deal make the United States and its allies safer? — but they are testable and relevant to the question of the Iranian threat. Most important, they introduce accountability into forecasting. And that, it turns out, can depolarize debate.
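To see what "testable" buys you: each of these questions can be written down as a structured record with an unambiguous resolution criterion, a probability, and a deadline, which is exactly what a hedged, open-ended forecast lacks. A minimal sketch in Python (the `Prediction` type, its field names, and the 0.7 credence are my own illustration, not anything from the article):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Prediction:
    """A testable forecast: an unambiguous question, a probability, a deadline."""
    question: str       # must be resolvable yes/no by a neutral third party
    probability: float  # the forecaster's credence that the answer is "yes"
    resolves_by: date   # date by which the question must have an answer

# One of the article's example questions, made explicit
# (the 0.7 credence is hypothetical):
p = Prediction(
    question=("Will the IAEA report in December that Iran has adequately "
              "resolved concerns about the potential military dimensions "
              "of its nuclear program?"),
    probability=0.7,
    resolves_by=date(2015, 12, 31),
)
```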

In recent years, Professor Tetlock and collaborators have observed this depolarizing effect when conducting forecasting “tournaments” designed to identify what separates good forecasters from the rest of us. In these tournaments, run at the behest of the Intelligence Advanced Research Projects Activity (which supports research relevant to intelligence agencies), thousands of forecasters competed to answer roughly 500 questions on various national security topics, from the movement of Syrian refugees to the stability of the eurozone.

The tournaments identified a small group of people, the top 2 percent, who generated forecasts that, when averaged, beat the average of the crowd by well over 50 percent in each of the tournament’s four years. How did they do it? Like the rest of us, these “superforecasters” have political views, often strong ones. But they learned to seriously consider the possibility that they might be wrong.
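For context on how such tournaments keep score: Tetlock's Good Judgment Project judged probability forecasts with the Brier score, the squared error between the stated probability and the 0/1 outcome, where lower is better. The snippet below is a minimal sketch of comparing an averaged "superforecaster" group against an averaged crowd on a single question; all the probabilities and the outcome are invented for illustration.

```python
# Minimal sketch: comparing averaged forecasts with the Brier score.
# All probabilities and the outcome below are invented for illustration.

def brier(p: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome (lower is better)."""
    return (p - outcome) ** 2

def mean(xs: list[float]) -> float:
    return sum(xs) / len(xs)

# One binary question, e.g. "Will X happen by date Y?"; suppose it resolved "yes".
outcome = 1
crowd = [0.40, 0.55, 0.35, 0.60, 0.45]  # hypothetical ordinary forecasters
supers = [0.80, 0.75, 0.85]             # hypothetical top-2% forecasters

crowd_score = brier(mean(crowd), outcome)
super_score = brier(mean(supers), outcome)

print(f"crowd Brier score:  {crowd_score:.3f}")
print(f"supers Brier score: {super_score:.3f}")
print(f"relative improvement: {1 - super_score / crowd_score:.0%}")
```

In the real tournaments, scores were averaged over the roughly 500 questions and four years mentioned above, which is what makes a "beat the crowd by well over 50 percent" comparison meaningful.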

What made such learning possible was the presence of accountability in the tournament: Forecasters were able to see their competitors’ predictions, and that transparency reduced overconfidence and the instinct to make bold, ideologically driven predictions. If you can’t hide behind weasel words like “could” or “might,” you start constructing your predictions carefully. This makes sense: Modest forecasts are more likely to be correct than bold ones — and no one wants to look stupid.

This suggests a way to improve real-world discussion. Suppose, during the next ideologically charged policy debate, that we held a public forecasting tournament in which representatives from both sides had to make concrete predictions. (We are currently sponsoring such a tournament on the Iran deal.) Based on what we have seen in previous tournaments, this exercise would decrease the distance between the two camps. And because it would be possible to determine a “winner,” it would help us learn whether the conservative or liberal assessment of the issue was more accurate.

Either way, we would begin to emerge from our dark age of political polarization.

7 comments

comment by Riothamus · 2015-10-08T13:19:15.731Z · LW(p) · GW(p)

How does this idea square with elections in the United States? Consider pollsters: their job is to make specific predictions using well-understood methods, on data gathered by methods that are equally well understood.

Despite what was either fraud or tremendous incompetence on the part of ideological pollsters in the last presidential election cycle, and the high degree of public attention paid to it, polarization has not meaningfully decreased in any way I can observe.

I therefore expect that making the candidates generate specific predictions would have little overall effect on polarization.

comment by V_V · 2015-10-06T11:13:40.907Z · LW(p) · GW(p)

The deal’s supporters forecast that it would stop (or at least delay) Iran from fielding a nuclear weapon, would increase security for the United States and Israel and would underscore American leadership.

Or maybe they don't particularly care about Iran getting nuclear weapons, or they are actually in favor of Iran getting nuclear weapons, but they can't say so out loud, since it is politically unacceptable in the US to hold such beliefs.

comment by gjm · 2015-10-05T00:33:34.708Z · LW(p) · GW(p)

The title seems so deliberately alliterative it's hard to see why it doesn't have "prevent" (or perhaps "postpone" or "palliate") in place of "counter".

comment by Shmi (shminux) · 2015-10-04T16:54:28.407Z · LW(p) · GW(p)

There are plenty of issues where "precise predictions" are available, yet the polarization is as bad as ever, such as drugs, birth control, gun control and taxes. So no, facts are no match for ideology.

comment by ChristianKl · 2015-10-04T19:45:44.696Z · LW(p) · GW(p)

There are plenty of issues where "precise predictions" are available, yet the polarization is as bad as ever, such as drugs, birth control, gun control and taxes.

The fact that statistics exist doesn't mean that "precise predictions" exist. It especially doesn't mean that stakeholders in the debate actually engage in making predictions.

comment by Lumifer · 2015-10-05T15:07:33.311Z · LW(p) · GW(p)

What kind of contextually-relevant "precise predictions" are there for e.g. gun control? Or taxes?

comment by Gunslinger (LessWrong1) · 2015-10-04T16:54:33.633Z · LW(p) · GW(p)

Sometimes, instead of playing by the rules and telling everyone to do the same, we can acknowledge that the players are rotten to the core and the rules mean nothing to them... or that they play by entirely different rules. Or whatever. But the main idea is that there's some incompatibility there, and a person who respects his time and effort would most likely decide to just leave and be done with it.

Paradoxically, despite how much our ship is burning and how loudly everyone is yelling to abandon it, we're probably just dumping ourselves in the water... good luck getting to land. And given the grasp politicians have, I don't need to describe how devastating that might be.

Now for the fun part: how do we make them play by the rules?