Why election models didn't predict Trump's victory — A primer on how polls and election models work

post by phl43 · 2017-01-28T19:51:32.151Z · LW · GW · Legacy · 16 comments

This is a link post for http://necpluribusimpar.net/election-models-not-predict-trumps-victory/

16 comments

Comments sorted by top scores.

comment by Qiaochu_Yuan · 2017-02-01T06:38:15.082Z · LW(p) · GW(p)

If the probabilities that Wang’s model computes for each state were right, you could have used the resulting probability distribution of the outcomes in the electoral college to straightforwardly derive the probability that Clinton was going to win, which is just the probability that she gets at least 270 votes in the electoral college.

No. Even if Wang had reasonable probabilities of Clinton individually winning in each state, the aggregation procedure described in the post (I haven't checked if this is what Wang actually did) for using these probabilities to get a probability that Clinton will win the election assumes that winning each state is independent, which is a completely ridiculous assumption. Most sources of uncertainty about elections are correlated between states; for example, widely publicized news stories that make Clinton or Trump look bad a certain number of days before the election. The independence assumption horrendously exaggerates the probability of Clinton winning given that she has a slight edge.
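A quick way to see how much the independence assumption matters is to simulate a toy election twice with the same total uncertainty: once with purely state-level errors, once with most of the error shared nationally. This is a sketch with invented numbers (the state margins, electoral-vote counts, and error sizes are all illustrative, not Wang's actual inputs):

```python
import random

random.seed(0)

# Toy battleground map: (electoral votes, Clinton's expected margin in points).
# All numbers are invented for illustration, not real 2016 inputs.
states = [(29, 1.0), (29, 1.5), (20, 2.0), (18, 1.0),
          (16, 2.5), (15, 1.5), (11, 2.0)]  # 138 battleground EV
SAFE_CLINTON_EV = 200  # electoral votes assumed safe for Clinton
TO_WIN = 270

def win_probability(shared_error_sd, state_error_sd, trials=100_000):
    """Monte Carlo estimate of P(Clinton >= 270 EV).

    shared_error_sd controls a national polling error applied to every
    state at once; state_error_sd controls independent per-state errors.
    """
    wins = 0
    for _ in range(trials):
        national_shift = random.gauss(0, shared_error_sd)  # correlated part
        ev = SAFE_CLINTON_EV
        for votes, margin in states:
            if margin + national_shift + random.gauss(0, state_error_sd) > 0:
                ev += votes
        if ev >= TO_WIN:
            wins += 1
    return wins / trials

# Independent errors only: each state's polling error is its own draw.
p_independent = win_probability(shared_error_sd=0.0, state_error_sd=3.0)
# Similar total uncertainty per state, but most of it shared nationally.
p_correlated = win_probability(shared_error_sd=2.5, state_error_sd=1.7)

print(f"P(win), independent errors: {p_independent:.2f}")
print(f"P(win), correlated errors:  {p_correlated:.2f}")
```

With independent errors, state-level bad luck largely cancels out across states, so a candidate with a small edge everywhere wins with high probability; with a shared error term, one national polling miss can flip every close state at once, so the same edge yields a noticeably lower win probability.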

Replies from: phl43, phl43
comment by phl43 · 2017-02-01T23:04:52.678Z · LW(p) · GW(p)

Also, to be clear, in order to compute his prediction, Wang did assume that non-sampling errors were somewhat correlated, just not nearly enough. As I say in the post, he is a very smart guy, so it's not as if he didn't know the things I explain.

comment by phl43 · 2017-02-01T19:47:25.929Z · LW(p) · GW(p)

I agree with you that the probabilities of Clinton winning individual states are correlated, but I'm not sure this makes what I wrote false, although you're probably right that it's a bit misleading. The correlation between states is only relevant for calculating the probabilities of each possible outcome in the electoral college. It means that, as I explain later in my post, you have to take into account the fact that non-sampling polling errors in different states are correlated in order to calculate those probabilities.

One of the sources of non-sampling error I describe in my post is measurement error, which, if you read my post carefully, I define in such a way that if someone doesn't vote for the candidate they claimed they would vote for when they participated in a survey, for whatever reason (e.g. because they heard a news story that made Clinton or Trump look bad), it counts as measurement error. I agree that it's probably an unusual definition of this concept, which is typically construed more narrowly. But I defined measurement error in that unusually broad way precisely because I didn't want to introduce the complication that someone who tells a pollster n days before the election that he's going to vote for X, and who really would have voted for X had the election taken place on the day he participated in that survey, might still not vote for X on election day. (Wang takes that, among other things, into account in order to calculate his prediction, but I was only describing the way in which he calculates a snapshot of where the race stands at any given time, since I think that's where the most interesting mistakes were made. I may be wrong about that, but judging by what he said after the election, I think Wang would agree with me.)

Now, if the probabilities you calculated for each possible outcome in the electoral college are correct, then you can just use the aggregation method I describe above the passage you quoted in my post. What is misleading in my post is that I say the assumption required for that method to be reliable is that the probabilities of Clinton winning individual states are correct (rather than the probabilities of each possible outcome in the electoral college), because that suggests the states can be treated as probabilistically independent, which of course they are not (although I never said that, and the rest of my post makes clear that I wasn't making that assumption). Do you agree with that, or do you think that there is a more serious problem here?

Replies from: phl43
comment by phl43 · 2017-02-02T08:42:08.395Z · LW(p) · GW(p)

I was just reading my post again, and I guess this passage is also misleading, for exactly the same reason: "if you had calculated a probability that Clinton was going to win in each state using the method I explained above (which you then use to compute a probability that Clinton is going to win the electoral college)".

comment by maybefbi · 2017-01-30T06:45:10.265Z · LW(p) · GW(p)

According to the emails leaked by Wikileaks, the pre-election polls presented in the media used a technique called oversampling to misrepresent the results.

Sources:

Relevant Quotes:

  • “I also want to get your Atlas folks to recommend oversamples for our polling before we start in February.”
  • “so we can maximize what we get out of our media polling.”
  • [For Arizona] “Research, microtargeting & polling projects - Over-sample Hispanics… - Over-sample the Native American population”
  • [For Florida] “On Independents: Tampa and Orlando are better persuasion targets than north or south Florida (check your polls before concluding this). If there are budget questions or oversamples, make sure that Tampa and Orlando are included first.”
  • [For National] “General election benchmark, 800 sample, with potential over samples in key districts/regions - Benchmark polling in targeted races, with ethnic over samples as needed - Targeting tracking polls in key races, with ethnic over samples as needed”
  • “The plan includes a possible focus on women, might be something we want to do is over sample if we are worried about a certain group later in the summer.”

Interpretation:

BTW shameless plug for my fake news aggregator: https://quibbler.press/#/about

Replies from: gjm
comment by gjm · 2017-01-30T12:02:57.253Z · LW(p) · GW(p)

I think this is a misunderstanding of what "oversampling" means in polling. See e.g. this.

comment by satt · 2017-01-29T18:09:36.921Z · LW(p) · GW(p)

Agree with the post proper. I think the headline is technically accurate but potentially misleading, because poll-dominated models aren't the only kind of election models. Political scientists build models that rely more on fundamentals like economic statistics and military activity, and when Vox averaged 6 of those models together, they predicted that Trump would win the popular vote. The headline remains technically correct because predicting that Trump would win the popular vote isn't the same as predicting Trump would win the election, but it'd be a shame if people walked away with the idea that election models in toto said Clinton would win.

Replies from: phl43
comment by phl43 · 2017-01-29T19:34:03.642Z · LW(p) · GW(p)

I think models that rely on fundamentals are worthless. I don't have time to explain why in detail, though perhaps I will post something on that at some point, but if you want the gist of my argument, it's that models of that kind are massively underdetermined by the evidence.

Replies from: satt
comment by satt · 2017-01-29T23:43:23.807Z · LW(p) · GW(p)

OK. That's interesting. I disagree but I can see why you'd think that, and in a way I'm kind of sympathetic: I think overfitting definitely happens with some of the poli. sci. models. My go-to model is my go-to exactly because its author really seems to appreciate the overfitting issue, and is very insistent on aiming for proper explanation, not just prediction.

comment by tukabel · 2017-01-28T21:34:19.879Z · LW(p) · GW(p)

Well, there were "mainstream" polls (used as propaganda in the pro-Clinton media), with samples of a bit over 1000, sometimes less, often massively oversampling registered Democratic voters... what do you expect?

And there was the biggest poll, of 50,000 (1,000 per state), showing a completely different picture (and of course used as propaganda in the anti-Clinton, usually non-mainstream media).

google "election poll 50000"

Replies from: tgb, phl43
comment by tgb · 2017-01-28T23:35:33.742Z · LW(p) · GW(p)

A cursory glance through FiveThirtyEight's collected poll data shows a survey with over 84,000 respondents (CCES/YouGov) giving Clinton a 4-percentage-point lead, with 538 adjusting that to +2. Google and SurveyMonkey routinely ran surveys of 20,000+ individuals, with one SurveyMonkey survey of 70,000 showing Clinton +5 (+4 adjusted). There was no clear reason to prefer your poll (whichever that one was) over these. https://projects.fivethirtyeight.com/2016-election-forecast/national-polls/

And it should go without saying that Clinton did end up at +2 nationally.

Replies from: phl43
comment by phl43 · 2017-01-29T17:57:20.711Z · LW(p) · GW(p)

I'm not sure you have read my post. Nowhere in it do I say that we should have focused on one poll rather than another. So I'm not sure what relevance your comment has.

Replies from: satt
comment by satt · 2017-01-29T18:13:40.436Z · LW(p) · GW(p)

Its relevance is that it rebuts tukabel's suggestion that "the biggest poll" was of "50000" people and showed a "completely different picture" to the mainstream polls indicating a Clinton lead.

Replies from: phl43
comment by phl43 · 2017-01-29T19:32:15.711Z · LW(p) · GW(p)

Oh I see. I had totally missed the fact that it was a reply to another comment. Apologies to tgb.

Replies from: tgb
comment by tgb · 2017-02-03T00:10:48.506Z · LW(p) · GW(p)

No problem!

comment by phl43 · 2017-01-28T22:13:43.112Z · LW(p) · GW(p)

I'm sure pollsters sometimes "cheat" by constructing biased samples, but this can happen even when you're honest because, as I explain in my post, polling is really difficult to do. To my mind, the problem had more to do with commentators making mistaken inferences based on the polls than with the polls themselves, although evidently some of them got things badly wrong.