Did anybody calculate the Brier score for per-state election forecasts?

post by ChristianKl · 2020-11-10T17:51:20.275Z · LW · GW

This is a question post.

Contents

  Answers
    7 JohnSteidley
    4 steven0461
    1 LukeStebbing

There's a lot of debate about how good the polls and 538 have been in this election compared to the betting markets. While it's hard to compare them by looking only at the headline percentage for Biden winning, one could calculate a Brier score over all the per-state forecasts. Did anybody do the math?
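For anyone who wants to run the numbers themselves, here is a minimal sketch of the calculation (the probabilities and outcomes below are made-up placeholders, not any forecaster's actual numbers):

```python
# Minimal sketch of a per-state Brier score. The probabilities and
# outcomes below are invented placeholders, not real 2020 forecasts.

forecasts = {
    # state: (P(Biden wins), did Biden actually win?)
    "PA": (0.84, True),
    "FL": (0.69, False),
    "GA": (0.58, True),
}

def brier_score(forecasts):
    """Mean squared error between probabilities and 0/1 outcomes.
    0 is perfect; always guessing 50% scores 0.25."""
    return sum((p - float(won)) ** 2 for p, won in forecasts.values()) / len(forecasts)

print(f"Brier score: {brier_score(forecasts):.4f}")
```

Run this once per forecaster over the same set of states; the lower score wins.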

Answers

answer by John Steidley (JohnSteidley) · 2020-11-10T18:37:18.310Z · LW(p) · GW(p)

https://www.lesswrong.com/posts/muEjyyYbSMx23e2ga/scoring-2020-u-s-presidential-election-predictions [LW · GW]

comment by Oscar_Cunningham · 2020-11-11T07:19:45.261Z · LW(p) · GW(p)

Does it make sense to calculate the score like this for events that aren't independent? You no longer have the cool property that it doesn't matter how you chop up your observations.

I think the correct thing to do would be to score the single probability that each model gave to this exact outcome. Equivalently you could add the scores for each state, but for each use the probabilities conditional on the states you've already scored. For 538 these probabilities are available via their interactive forecast.

Otherwise you're counting the correlated part of the outcomes multiple times. So it's not surprising that The Economist does best overall, because they had the highest probability for a Biden win and that did in fact occur.

EDIT: My suggested method has the nice property that if you score two perfectly correlated events then the second one always gives exactly 0 points.
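A toy illustration of the difference (my own sketch, not Oscar's code): take two perfectly correlated "states" that both go for Biden. Summing independent Brier scores counts the same information twice, while chaining conditional probabilities makes the second state contribute exactly 0, as described above.

```python
def brier(p, outcome):
    """Brier score for a single binary event."""
    return (p - float(outcome)) ** 2

p_a = 0.7          # P(Biden wins state A)
p_b = 0.7          # P(Biden wins state B); identical because B copies A
p_b_given_a = 1.0  # P(B | A) = 1 for perfectly correlated events

# Suppose Biden wins both states.
naive = brier(p_a, True) + brier(p_b, True)            # 0.09 + 0.09 = 0.18
chained = brier(p_a, True) + brier(p_b_given_a, True)  # 0.09 + 0.00 = 0.09

print(naive, chained)  # the naive sum double-counts the shared surprise
```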

Replies from: JohnSteidley
comment by John Steidley (JohnSteidley) · 2020-11-11T09:10:31.108Z · LW(p) · GW(p)

I think this comment would be better placed as a reply to the post that I'm linking. Perhaps you should put it there?

Replies from: Oscar_Cunningham
answer by steven0461 · 2020-11-10T19:33:58.347Z · LW(p) · GW(p)

Looking at states still throws away information. Trump lost by slightly over a 0.6% margin in the states he'd have needed to win; the polls were off by slightly under a 6% margin. If those numbers are correct, I don't see how your conclusion about the relative predictive power of 538 and betting markets can be very different from what it would have been if Trump had narrowly won. Obviously, if something almost happens, that's normally going to favor a model that assigned 35% to it happening over a model that assigned 10%. Both Nate Silver and Metaculus users seem to me to be in denial about this.
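One way to make this concrete (my own toy numbers, assuming roughly normal errors on the margin): a model scored on the margin itself assigns almost the same likelihood ratio to a 0.6-point loss as to a 0.6-point win, whereas a binary win/loss score flips its verdict between the two.

```python
import math

def normal_pdf(x, mu, sigma):
    """Normal density; a crude stand-in for a forecast of the margin."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

# Hypothetical forecasts of Trump's margin (in points) in the tipping-point
# states: one centered on the polls, one expecting a sizable polling miss.
mu_polls, mu_miss, sigma = -5.0, -1.0, 4.0

for margin in (-0.6, +0.6):  # narrow Trump loss vs narrow Trump win
    ratio = normal_pdf(margin, mu_miss, sigma) / normal_pdf(margin, mu_polls, sigma)
    print(f"margin {margin:+.1f}: likelihood ratio favoring 'polling miss' = {ratio:.2f}")

# The ratio moves only modestly between -0.6 and +0.6, so scoring margins
# gives nearly the same verdict whether Trump narrowly lost or narrowly won;
# a binary score treats those two worlds as opposites.
```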

comment by Rafael Harth (sil-ver) · 2020-11-10T20:07:59.961Z · LW(p) · GW(p)

Both Nate Silver and Metaculus users seem to me to be in denial about this.

I think this is a strawman. Nate Silver says that his model has good calibration across its lifetime, and is in fact slightly too conservative. I agree that, if the only two things you consider are (a) the probabilities for a Biden win in 2020, 65% from the markets and 89% from 538, and (b) the margin of the win in 2020, then betting markets are a clear winner. But how much does that matter? (And the article you linked doesn't mention markets at all.)

Replies from: steven0461, hleumas
comment by steven0461 · 2020-11-10T20:57:21.070Z · LW(p) · GW(p)

I agree that, if the only two things you consider are (a) the probabilities for a Biden win in 2020, 65% and 89%, and (b) the margin of the win in 2020, then betting markets are a clear winner.

My impression from Silver's internet writings is that he hasn't admitted this, but maybe I'm wrong; his claim that "we did a good job" suggests he's unwilling to. Betting markets are also the clear winner if you look at Silver's predictions about how wrong the polls would be, which was always the main point of contention. The line he's taking is "we said the polls might be this wrong and that Biden could still win", but it's obviously worse to say that the polls might be that wrong than to say that they probably would be that wrong (in that direction), as the markets implicitly did.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2020-11-10T21:19:01.440Z · LW(p) · GW(p)

If it's true that the model has been slightly conservative, historically speaking, then it isn't clear why there is anything to admit. You expect some number of unlikely events to come true; looking at 538's history, it was about the right number, if anything a bit too few. Now we have one more unlikely event, and the overall calibration probably improved.

So if people just ask him how good a job he did, it seems completely reasonable to evaluate the model in terms of all past elections and conclude that they did a good job. There's no reason to think anything went wrong this time, which explains the way he's been talking about it.

As far as I know, he hasn't admitted this particular point. But I strongly suspect no-one has asked him about it. It doesn't seem like a question that makes a lot of sense -- why would you ever ignore all of the past history when you're trying to compute calibration? It's like taking one of Scott Alexander's 90% bets that went wrong and asking, "do you admit that, if we only consider this particular bet, you would have done better assigning 60% instead?" The answer is yes, but asking the question is weird.
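For concreteness, the calibration check being appealed to looks something like this (the record below is invented for illustration, not 538's actual history):

```python
from collections import defaultdict

# Invented (predicted probability, outcome) history for illustration only.
history = [(0.9, True), (0.9, True), (0.9, True), (0.9, False),
           (0.7, True), (0.7, True), (0.7, False),
           (0.3, False), (0.3, False), (0.3, True)]

buckets = defaultdict(list)
for p, outcome in history:
    buckets[p].append(outcome)

for p, outcomes in sorted(buckets.items()):
    freq = sum(outcomes) / len(outcomes)
    print(f"predicted {p:.0%}: observed {freq:.0%} over {len(outcomes)} events")

# A single miss among many 90% predictions is expected, which is why
# calibration is judged on the whole record rather than on one event.
```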

Replies from: steven0461
comment by steven0461 · 2020-11-10T21:31:58.943Z · LW(p) · GW(p)

Data points come in one by one, so it's only natural to ask how each data point affects our estimates of how well different models are doing, separately from how much we trust different models in advance. A lot of the arguments that were made by people who disagreed with Silver were Trump-specific, anyway, making the long-term record less relevant.

It's like taking one of Scott Alexander's 90% bets that went wrong and asking, "do you admit that, if we only consider this particular bet, you would have done better assigning 60% instead?"

If we were observing the results of his bets one by one, and Scott said it was 90% likely and a lot of other people said it was 60% likely, and then it didn't happen, I would totally be happy to say that Scott's model took a hit.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2020-11-10T21:43:39.858Z · LW(p) · GW(p)

I would totally be happy to say that Scott's model took a hit.

I think that's the root of our disagreement. In this situation, I would not concede that Scott's model took a hit. Instead, I would claim that 90% was a better estimate than 60%, despite the prediction coming out false. (This is assuming that I already know Scott's overall calibration, which is the case for 538.)

I think this point bottoms out at a pretty deep philosophical problem that we can't resolve in this comment thread. (I super want to write a post about it though.)

Replies from: steven0461
comment by steven0461 · 2020-11-11T19:19:35.400Z · LW(p) · GW(p)

Yes, that looks like a crux. I guess I don't see the need to reason about calibration instead of directly about expected log score.

Replies from: sil-ver
comment by Rafael Harth (sil-ver) · 2020-11-14T07:43:57.086Z · LW(p) · GW(p)

(I feel a bit weird referencing my post [LW · GW] since it did much more poorly than I expected, but I'll just do it anyway since I know you've read it.)

The way my post contradicts your argument is that it frames the questions

  1. Did 538 make a good prediction?; and
  2. Was the market's prediction better than 538's?

as entirely separate. For the first question, we care about how much information 538's prediction was based on and how well calibrated it was. We know what kind of information it was based on (the same as in every election), and the evidence shows that its calibration is excellent. In fact, this election made 538's calibration look better than before, since the model had historically been too conservative. (I think -- I've heard Nate say this.) In the two pictures in my post, 538 sits at the same place on the chart in both; they differ only in how well the market did. In other words, Nate did a good job regardless of what happened with the market. (And in the article you linked, he wasn't asked about the market.)

The second question is where we compare the hypothesis that the market was being stupid (1) to the hypothesis that it was smarter than 538/had information about the polling error that 538 didn't (2). This is where I'll grant you the update you mentioned in your comment. (2) predicts a narrow margin in the real result, whereas (1) has significant probability mass on a Biden landslide. Since we got the narrow margin, that has to be a significant update toward the market being smart, maybe (1:4) or something. (But I made an even greater update toward the market being stupid based on its behavior on election night [LW(p) · GW(p)], so I come out updating toward the market being stupid in total, which was also my prior (that's why I bet against it in the first place).)
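The arithmetic of combining those updates, for what it's worth (only the 1:4 factor comes from the comment above; the prior and the election-night factor are invented stand-ins):

```python
# Odds-ratio bookkeeping for "market was stupid" vs "market was smart".
# Only the 1:4 factor is from the comment above; the rest is invented.
prior_odds = 4.0              # e.g. 4:1 in favor of "stupid" before the election
narrow_margin_factor = 1 / 4  # narrow result favors "smart"
election_night_factor = 6.0   # invented factor favoring "stupid"

posterior_odds = prior_odds * narrow_margin_factor * election_night_factor
print(posterior_odds)  # 6.0 -- a net update toward "stupid" despite the margin
```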

comment by Samuel Hapák (hleumas) · 2020-11-10T23:57:05.641Z · LW(p) · GW(p)

Nate Silver's predictions changed too much over time. If those probabilities were legit, you'd be able to sell binary options based on them, and if Nate did that he'd have gone bankrupt, because he created a lot of arbitrage opportunities.

 

https://arxiv.org/pdf/1703.06351.pdf

Replies from: SimonM
comment by SimonM · 2020-11-14T10:06:52.808Z · LW(p) · GW(p)

That paper doesn't actually show that 538's probabilities fail to form a martingale. (In fact it's plausible that they do; to demonstrate they don't, I'd want to see someone exhibit a strategy that successfully arbitrages the probabilities.) Since 538's model isn't open source, it's pretty difficult to say whether or not it is a true martingale, but that paper definitely doesn't settle it.

If we take a similar model that is open source (specifically The Economist's model), we can see that it is not far from being a martingale; it would be one if they added forecasting for their [fundamentals model](http://www.stat.columbia.edu/~gelman/research/published/jdm200907b.pdf) (not difficult, just painful). I don't think the difference made by the fundamentals model is that significant, so it would have been fairly difficult for anyone to arbitrage those odds. (Not that they were correct, just that they were broadly time-consistent.)
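As a rough sanity check, one could at least test whether a published forecast series behaves like a martingale on average (this heuristic can refute the property but never prove it):

```python
# Heuristic martingale check on a daily forecast series: increments should
# average near zero and show little autocorrelation. The series below is
# invented; a real check would use the published forecast history.
series = [0.70, 0.72, 0.69, 0.71, 0.74, 0.73, 0.76]  # placeholder P(Biden) by day
diffs = [b - a for a, b in zip(series, series[1:])]

mean_step = sum(diffs) / len(diffs)
lag1_autocov = sum(x * y for x, y in zip(diffs, diffs[1:])) / (len(diffs) - 1)

print(f"mean increment: {mean_step:+.4f}")     # large drift suggests exploitable trends
print(f"lag-1 autocov:  {lag1_autocov:+.5f}")  # strong correlation suggests momentum
```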
