Predicting a global catastrophe: the Ukrainian model

post by RomanS · 2022-04-07T12:06:32.804Z · LW · GW · 11 comments

Contents

  Prediction markets
  Pundits
  Governments
  The general public
  Some lessons 
11 comments

The large-scale Russian invasion of Ukraine provides an interesting model of how governments, pundits, prediction markets, and the general public would predict an imminent global catastrophe.

Prediction markets

The Metaculus community successfully predicted that Russia would invade Ukraine.

If I remember correctly, GJP superforecasters were similarly successful, although they were a bit slower.

Pundits

I have the impression that the vast majority of pundits failed to predict it, including Matthew Yglesias and Scott Alexander.

A somewhat funny example is an acquaintance of mine, whom I won't name for privacy reasons. He was highly motivated to make the right prediction: he has an excellent track record on Metaculus, he has a deep understanding of Russian politics, and he lives in Russia. Nevertheless, he was confidently wrong until the day before the invasion.

On the other hand, there is Oleksiy Arestovych, who predicted the war in 2019, including detailed and surprisingly accurate predictions of the Russian military movements. Before that, he successfully predicted the Russian invasion of 2014.

Governments

According to Arestovych (who is a military adviser to the president of Ukraine), the Ukrainian leadership knew that Russia would invade, and knew the date of the invasion. But the leadership decided to withhold this from the general public for military reasons (to prevent a mass panic that would paralyze preparations for the war).

NATO and the US leadership issued many public warnings about the imminent war.

The general public

The warnings were mostly dismissed or even ridiculed by ordinary people, in both Russia and Ukraine.

According to a poll conducted 3 days before the invasion, only 35% of Ukrainians believed that Russia would invade.

On the other hand, some top Ukrainian politicians and billionaires (and their families) left Ukraine well in advance, some of them as early as January, more than 1.5 months before the invasion. I think the most likely explanation is leaked insider information.

Some lessons 

We could try to extrapolate from this data to the case of an imminent global catastrophe: for example, a large asteroid impact or a nuclear war.

  1. If prediction markets say that a global catastrophe will occur with high probability, they're most likely right.
  2. If there is news that top politicians and billionaires are making survivalist preparations (e.g. buying nuclear bunkers), take it very seriously.
  3. If government officials claim that a global catastrophe is imminent, they are most likely right. If they claim that there is nothing to worry about (in spite of #1 and #2), then they're lying (e.g. to prevent panic).
  4. Prediction markets are usually slow. If the probability has surpassed 50%, you might have only a few weeks to prepare; if it's above 75%, you might have only a few days.
  5. Pundits can be safely ignored, with the exception of the rare experts who have a verifiably good track record. Even the most qualified experts can be confidently wrong.

11 comments

Comments sorted by top scores.

comment by ChristianKl · 2022-04-07T13:24:51.050Z · LW(p) · GW(p)

Metaculus successfully predicted that Russia would invade Ukraine.

It seems strange to me to say that when the "Metaculus prediction", according to Metaculus, was lower than 50%. It was the community prediction that was over 50%. The Metaculus prediction weighs people who are good at forecasting more heavily, and it's interesting that forecasting skill here meant, on average, judging the likelihood to be lower.

I have the impression that the vast majority of pundits failed to predict it, including Matthew Yglesias and Scott Alexander.

Talking about the success and failure of predictions in a binary way is bad. Saying that X will happen with 40% likelihood doesn't mean that you failed to predict X if it does happen.

If I were a billionaire, a 40% chance of war would likely be enough for me to leave the potential warzone.

If government officials claim that a catastrophe is imminent, they are most likely right. If they claim that there is nothing to worry about (in spite of #1 and #2), then they're lying (e.g. to prevent panic).

I see no reason why you should draw such a conclusion from looking at a single example. 

Replies from: RomanS
comment by RomanS · 2022-04-07T13:41:14.820Z · LW(p) · GW(p)

It seems strange to me to say that when the "Metaculus prediction", according to Metaculus, was lower than 50%. It was the community prediction that was over 50%.

You're right, I meant the community prediction. I'll fix the phrasing to avoid ambiguity.

Talking about the success and failure of predictions in a binary way is bad. Saying that X will happen with 40% likelihood doesn't mean that you failed to predict X if it does happen.

For simplicity, I assume that if Pundit1 said 60% and Pundit2 said 40%, and the event actually happens, then Pundit1 was right about the future, and Pundit2 was wrong. And Pundit3, who said 10%, was even more wrong. For an event with a binary outcome, I think this is simplistic but good-enough language.

But I agree, it would be better to use a quantitative measure (e.g. a Brier score). I'm not sure how to calculate it in this case.
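For the hypothetical pundits above, at least, the calculation is straightforward. Here is a minimal sketch in Python; the names and probabilities are the illustrative ones from this thread, not real forecasts:

    def brier_score(forecast, outcome):
        # Brier score for a single binary event: (forecast - outcome) ** 2.
        # Lower is better: 0 is a perfect forecast, 1 is maximally wrong.
        return (forecast - outcome) ** 2

    # The event actually happened, so the outcome is coded as 1.
    forecasts = {"Pundit1": 0.60, "Pundit2": 0.40, "Pundit3": 0.10}
    for name, p in forecasts.items():
        print(f"{name}: forecast {p:.2f} -> Brier score {brier_score(p, 1):.2f}")
    # Pundit1: forecast 0.60 -> Brier score 0.16
    # Pundit2: forecast 0.40 -> Brier score 0.36
    # Pundit3: forecast 0.10 -> Brier score 0.81

This recovers the intuitive ordering (Pundit3 was "more wrong" than Pundit2) without forcing a binary right/wrong label.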

If I were a billionaire, a 40% chance of war would likely be enough for me to leave the potential warzone.

I agree, it is reasonable. 

I see no reason why you should draw such a conclusion from looking at a single example. 

The conclusions I've listed are not based solely on the single example of Ukraine. It's rather a somewhat formalized intuition, inspired by everything I know about similar situations, from Chernobyl to COVID to global warming.

Replies from: ChristianKl
comment by ChristianKl · 2022-04-07T13:56:33.758Z · LW(p) · GW(p)

For simplicity, I assume that if Pundit1 said 60% and Pundit2 said 40%, and the event actually happens, then Pundit1 was right about the future, and Pundit2 was wrong. 

No, neither of them was right or wrong. That's just not how probabilities work and simplifying in that way confuses what's going on.

It's rather a somewhat formalized intuition, inspired by everything I know about similar situations, from Chernobyl to COVID to global warming.

If you want to draw more general conclusions, you would also have to look at events where government officials made forecasts and then nothing happened, like Iraqi WMDs.

Replies from: RomanS
comment by RomanS · 2022-04-07T14:13:12.323Z · LW(p) · GW(p)

No, neither of them was right or wrong. That's just not how probabilities work and simplifying in that way confuses what's going on.

By "wrong" here I mean "incorrectly predicted the future". If there is a binary event, and I predicted the outcome A, but the reality delivered the outcome B, then I incorrectly predicted the future. Perhaps the source of confusion here is my inability to precisely express ideas in English (I'm a non-native English speaker), and I apologize for that. 

If you want to draw more general conclusions, you would also have to look at events where government officials made forecasts and then nothing happened, like Iraqi WMDs.

I agree, it's an excellent idea. In general, it's quite possible that some politicians would use a high risk of catastrophe (real or fake) to achieve political goals.

Replies from: michael-grosse
comment by Celenduin (michael-grosse) · 2022-04-10T11:15:02.442Z · LW(p) · GW(p)

No, neither of them was right or wrong. That's just not how probabilities work and simplifying in that way confuses what's going on.

By "wrong" here I mean "incorrectly predicted the future". If there is a binary event, and I predicted the outcome A, but the reality delivered the outcome B, then I incorrectly predicted the future.

Maybe an intuition pump for what I think Christian is pointing at:

  1. Assume you have a six-faced die, and you predict that the probability that your next roll will be a 6, and not one of the other faces, is about 16.67%.
  2. Then you roll the die, and the face with the 6 comes up on top.

Was your prediction wrong?

Replies from: RomanS
comment by RomanS · 2022-04-10T17:44:50.170Z · LW(p) · GW(p)

Thanks! I think I now see the root of the confusion. These are two closely related but different tasks:

  • predicting the outcome of an event
  • estimating the probability of the outcome

In your example, the tasks could be completed as follows:

  • "the next roll will be a 6" (i.e I know it because the die is unfair)
  • "the probability of 6 is about 16.67%" (i.e I can correctly calculate it because the die is fair)

If one is trying to predict the future, one could fail at either (or both) of these tasks.

In the situation where people were trying to predict whether Russia would invade Ukraine, some of them got the probability right but failed to predict the actual outcome. And the aforementioned pundits failed both tasks (in my opinion), because to a well-informed person it was already clear that Russia would invade with a probability much higher than 40%.

comment by Alaric · 2022-04-08T11:30:09.859Z · LW(p) · GW(p)

I think a success story of a correct prediction is a good point. But it is not sufficient in the long run: to decide to trust some source of predictions, we need more information than one success.

As far as I can see, the Metaculus track record on geopolitics looks good, but it contains relatively few data points.

And for governments, we don't see their track records. But I think it would not be hard to find wrong predictions made by governments in the past.

comment by Mateusz Bagiński (mateusz-baginski) · 2022-04-07T13:12:46.653Z · LW(p) · GW(p)

If I remember correctly, GJP superforecasters were similarly successful, although they were a bit slower.

Actually, GJP forecasters updated a bit more quickly than Metaculus. [EDIT: probably not, see the reply below]

Replies from: RomanS
comment by RomanS · 2022-04-07T13:30:39.667Z · LW(p) · GW(p)

Thank you! Somehow I missed the linked article. I'll read it.

I'm not sure about GJP being faster. Judging by the plot, Metaculus was at almost all times higher (and thus closer to the truth) than both GJI and GJO. And the movements look as if GJI/GJO were tracking Metaculus (for example, the upward movement on 15 Jan).

Replies from: mateusz-baginski
comment by Mateusz Bagiński (mateusz-baginski) · 2022-04-08T04:33:44.876Z · LW(p) · GW(p)

You're probably right. I was myopically looking only at the rightmost portion, where GJP updated to ~99% a bit more quickly. It also seems like GJP had a more erratic trajectory than Metaculus.

comment by TLW · 2022-04-11T00:36:13.053Z · LW(p) · GW(p)

Prediction markets are usually slow.

This could be them being slow, or it could be them being more accurate about random elements where others were overconfident. Consider the following market: "I roll a d10 once per day. Will I roll a 0 within the first 10 days from when this market starts?"

Now consider what happens if I don't actually roll a 0:

Day 0, this market's value is ~65%
Day 1, this market's value is ~61%
Day 2, this market's value is ~57%
Day 3, this market's value is ~52%
Day 4, this market's value is ~47%
Day 5, this market's value is ~41%
Day 6, this market's value is ~34%
Day 7, this market's value is ~27%
Day 8, this market's value is ~19%
Day 9, this market's value is ~10%
Day 10, this market's value is ~0%

This looks very much like the prediction market was slow to update, when in fact the prediction market was being purely rational.
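For the record, a minimal sketch in Python reproducing these numbers, assuming a fair d10: after d days without a 0, the rational market value is the probability of at least one 0 in the remaining 10 - d rolls, i.e. 1 - 0.9^(10 - d).

    # Market: "Will a 0 be rolled within the first 10 days?" (one fair d10 roll per day).
    # After d days with no 0 rolled, the rational value is 1 - 0.9 ** (10 - d).
    for d in range(11):
        p = 1 - 0.9 ** (10 - d)
        print(f"Day {d}, this market's value is ~{p:.0%}")
    # Prints the same trajectory as the list above: ~65%, ~61%, ~57%, ..., ~10%, ~0%.

The smooth daily decline is exactly the trajectory listed above: a perfectly calibrated market can look "slow" from the outside.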