New election forecast

A colleague pointed me to Nate Silver’s election forecast; see here and here:

The headline number

The FiveThirtyEight forecast gives Biden a 72% chance of winning the electoral vote, a bit less than the 89% coming from our model at the Economist.

The first thing to say is that 72% and 89% can correspond to vote forecasts and associated uncertainties that are a lot closer than you might think.

Let me demonstrate with a quick calculation. We’re currently predicting Biden at 54.0% of the popular vote with a 95% interval of (50.0%, 58.0%), thus roughly a forecast standard deviation of 2.0%:

As most of our readers know, the electoral college is currently tilted in the Republicans’ favor. That’s not an inherent feature of the system; it’s just one of those things, a result of the current configuration of state votes. In some other years, the electoral college has favored the Democrats.

Anyway, back to the current election . . . suppose for simplicity that the Democratic candidate needs 51% of the national vote to win the electoral college. Then, using an approximately normally-distributed forecast distribution, the probability that Biden wins the electoral college is, in R terms, pnorm(0.54, 0.51, 0.02), or 93%. That’s pretty close to what we have. Move the threshold to 51.5% of the vote and we get pnorm(0.54, 0.515, 0.02) = 89%. So let’s go with that.

Then what does it take to get to Nate’s 72%? We want pnorm(x, 0.51, y) to come out to 72%. We know from reading Nate’s documentation that his forecast is both less Biden-favoring and more uncertain than ours. So we’ll set x to something a bit less than 0.54 and y to something a bit more than 0.02. How about pnorm(0.53, 0.515, 0.025)? That comes to 73%. Close enough.
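If you want to reproduce these back-of-the-envelope numbers yourself, here’s a quick sketch in Python’s standard library; R’s pnorm(x, mu, sigma) corresponds to NormalDist(mu, sigma).cdf(x):

```python
# Reproducing the pnorm() calculations above.
# NormalDist(mu, sigma).cdf(x) is the Python equivalent of R's pnorm(x, mu, sigma).
from statistics import NormalDist

# Our forecast: Biden at 54% with sd 2%, winning threshold 51% of the national vote
print(round(NormalDist(0.51, 0.02).cdf(0.54), 2))    # 0.93

# Move the threshold to 51.5%
print(round(NormalDist(0.515, 0.02).cdf(0.54), 2))   # 0.89

# A slightly lower, slightly more uncertain forecast lands near Nate's 72%
print(round(NormalDist(0.515, 0.025).cdf(0.53), 2))  # 0.73
```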

So, here’s the point. Nate’s forecast and ours are pretty close. Take our point forecast of 0.54 +/- 0.02 of the two-party vote and tweak it slightly to 0.53 +/- 0.025 and the approximate probability of a Biden electoral-college win drops from 89% to 72%.

This is not to say that a forecast of 53% is the same as a forecast of 54%—they’re different!—nor is it to say that a forecast standard deviation of 2.5% is the same as 2%—those numbers are different too! But they’re not a lot different. Also, it’s gonna be hard if not impossible to untangle things enough to say that 53% (or 54%) is the “right” number. The point is that small differences in the forecast map to big differences in the betting odds. There’s no way around this. If two different people are making two different forecasts using two different (overlapping, but different) pools of information, then you’d expect this sort of discrepancy; indeed it would be hard to avoid.

Going beyond the headline number

To understand the forecast, we can’t just look at the electoral college odds or even the national vote. We should go to individual states.

The easiest way for me to do this is to compare to our forecast. I’m not saying our model is some sort of ideal comparison; it’s just something I’m already familiar with, given the many hours that Elliott, Merlin, and I have been staring at it (see for example here and here).

I decided to look at Florida, as that’s one of the pivotal states.

We give Biden a 78% chance of winning the state, and here’s our forecast of the two candidates’ vote shares in Florida:

What’s relevant here is the bit on the right side of the graph, the election day prediction.

And here’s what FiveThirtyEight gives for Florida:

From this image, we can extract an approximate 95% predictive interval for the vote margin of between +14 for Trump and +18 for Biden. Mapping that to the two-party vote share, that’s somewhere from 43% to 59% for Biden. It’s hard for me to picture Biden getting only 43% of the vote in Florida . . . what does our model say? Let me check: our interval is much narrower, with a 95% predictive interval for Biden’s share of the two-party vote in Florida of approximately [46.5%, 58.5%] (I’m reading this off the graph).
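The margin-to-share conversion here is just arithmetic, assuming the margin is expressed in points of the two-party vote:

```python
# Convert a vote margin (Biden minus Trump, in points of the two-party vote)
# to Biden's share of the two-party vote: share = (100 + margin) / 2.
def two_party_share(margin):
    return (100 + margin) / 2

print(two_party_share(-14))  # 43.0  (Trump +14)
print(two_party_share(18))   # 59.0  (Biden +18)
```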

I can’t give you a firm reason why I think 46.5% for Biden is a better lower bound than 43%—after all, anything can happen—but it’s helping me understand what’s going on. One way that the FiveThirtyEight forecast gets such a wide national interval is by allowing Biden to get as low as 43% of the two-party vote in Florida. Or, conversely, one reason our model gives Trump such a low probability of winning is that it effectively rules out the possibility of Biden getting as low as 43% of the Florida vote (and all the correlated things that go along with that hypothetical outcome).

OK, let’s check another: FiveThirtyEight gives Trump a 2% chance of winning Connecticut! In contrast, our probability is less than 1%.

How the forecasts work

In some sense, there’s only one way to forecast a presidential election in the modern era. You combine the following sources of information:

– National polls
– State polls
– State-by-state outcomes from previous elections
– A forecast of the national election based on political and economic “fundamentals”
– A model for how the polls can be wrong (nonsampling errors and time trends in public opinion)
– Whatever specific factors you want to add for this election alone

That’s it. Different forecasts pretty much only differ in how they weigh these six pieces of information.

We’ve discussed our forecast already; see here, here, and here, and our code is here on GitHub.

On his site and in a recent magazine interview, Nate discusses some differences between his model and ours.

Nate writes that his model “projects what the economy will look like by November rather than relying on current data.” That makes sense to me, and I think it’s a weakness of our model that we don’t do that. In this particular case, it’s hard to see that it matters much, and I don’t think it explains the difference between our forecasts, but I take his general point.

He also writes, “Trump is an elected incumbent, and elected incumbents are usually favored for reelection,” but that seems to miss the point that Trump is an unpopular incumbent. And he says he analyzed elections going back to 1880. That doesn’t seem so relevant to me: too much has changed since then. For one thing, in 1880, much more of politics really was local.

And he writes, “COVID-19 is a big reason to avoid feeling overly confident about the outcome.” I guess that explains some of the wide uncertainty, including that possibility of Biden getting only 43% in Florida.

And he writes that his model “is just saying that, in a highly polarized environment, the race is more likely than not to tighten in the stretch run.” That doesn’t explain that Florida prediction, though. But I guess if you start with our forecast of Biden getting 52.4% in Florida and pull that toward 50/50, it will pull down the low end of the interval too, resulting in that 43% lower bound. Also, as of when the model was released, there was still a chance that Biden would pick Fidel Castro as his running mate, so the prediction had to include that possibility.

One thing that Nate doesn’t mention is that they model the individual congressional districts in Maine and Nebraska, and we never bothered to do this. Not a big deal, but credit where due.

Graphical displays

The FiveThirtyEight page has some innovative graphical displays. There’s an image showing a random forecast that updates at occasional intervals. I guess that’s a good way to get repeated clicks—people will go back to see how the forecast has changed?

I don’t really have strong feelings about the display, as compared to ours. I mean, sure, I like ours better, but I don’t really know that ours is better. I guess it depends on the audience. Some people will find ours easier to read; some will prefer theirs. I’m glad that different websites use different formats. I’m assuming the New York Times will again use its famous needle display?

The FiveThirtyEight page has time series graphs that go back to 1 June. Ours go back to the beginning of March. Both are anachronistic in going back to before the forecasts were actually released, but they’re still helpful in giving a sense of how the methods work: you can see how the forecasts are moved by the polls. I’m not sure why theirs don’t go back to March. Maybe not enough state polls? Or maybe they felt that it didn’t mean much to try to make a forecast that early.

I particularly like how our graphs of vote share show the forecast and the individual polls over time:

I recommend that FiveThirtyEight try this too. Right now, they have time series graphs of the forecast vote shares and then a table listing state polls, but the polls haven’t made their way to the graph yet.

The FiveThirtyEight site gives letter grades of the A/B/C variety to polling organizations. We don’t assign letters, but we estimate systematic biases (“house effects”) and their uncertainties from the data, so I guess the overall effect will be similar. I’m not quite sure how FiveThirtyEight combines information from national and state polls, but I’m sure they do something reasonable.
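As a toy illustration of the house-effect idea (all poll numbers made up; in a real model the effects are estimated jointly with the time trend and shrunk toward zero, not computed by simple averaging):

```python
# Toy house-effect calculation: each pollster's systematic deviation from
# the consensus, here just the mean deviation from the overall average.
from statistics import mean

# Hypothetical polls: (pollster, Biden's two-party share). Numbers are made up.
polls = [("A", 0.54), ("A", 0.55), ("B", 0.51), ("B", 0.52), ("C", 0.53)]

overall = mean(share for _, share in polls)
house_effects = {
    p: mean(s for q, s in polls if q == p) - overall
    for p in {q for q, _ in polls}
}
print(house_effects)  # e.g. pollster A leans about +1.5 points toward Biden
```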

They smooth their electoral college forecast (“smoothed rolling average”). That makes no sense to me at all! The electoral college really is discrete; that’s just the way it is. No big deal, it just seems like an odd choice to me.

And they’ve got a logo with a fox wearing glasses and headphones. I don’t know whassup with that, but as the coordinator of a blog with the epically boring title of Statistical Modeling, Causal Inference, and Social Science, I’d certainly claim no expertise in brand management!

P.S. Nate also has an online interview where he’s pretty complimentary about our model (as he should be; as noted above, we’re all using similar information and getting similar conclusions) but at one point he unleashes his inner Carmelo and knocks our process:

[Elliott] Morris also says, well, this poll isn’t doing education weighting, or this is a robo poll, so we’re not going to include it. I want to make as few subjective decisions as possible. I want a good set of rules that generalize well, and we’re pretty strict about following those rules. Sometimes a pollster will write us and say, hey, we intended our poll to be interpreted this way, and you’re doing it that way. Well, I’m sorry, but we actually have a set of rules and they’re here on the site. And we have to be consistent.

That’s just silly. We have rules too; they’re just different from Nate’s rules! Also, the rules we use are not what Nate said to that reporter. I’m not sure why he thought this, but actually I think the only polls we’ve excluded so far are HarrisX and Emerson College polls, because of biased questionnaires and bad data quality. For better or worse, data quality matters. Otherwise, where do you draw the line? If my cousin who went to prison for fraud were to start promoting an election poll, would I have to include it just out of a concern about avoiding subjective decisions? Of course not.

I’m not sure how important each of these rules is to the final prediction—there are enough solid national polls that the final results are pretty stable—but from a conceptual standpoint, and with 2012 and 2016 in mind, our most important polling adjustment is to treat polls that adjust for party ID as baseline and allow for a time-varying bias of polls that don’t adjust for party ID. That’s there to allow for partisan nonresponse bias. The importance of adjusting for education also comes from what happened in midwestern states in 2016.
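Here’s a cartoon version of that adjustment, with made-up numbers: treat the party-ID-adjusted polls as the baseline and estimate a shared offset for the rest. In the actual model the offset is time-varying and estimated jointly with everything else, not computed from a single window like this.

```python
# Cartoon of the partisan-nonresponse adjustment: polls that weight by party ID
# are the baseline; the others get a shared estimated bias subtracted off.
from statistics import mean

# Hypothetical polls from the same window (Biden's two-party share). Made-up numbers.
adjusts_party_id = [0.535, 0.545, 0.540]  # polls that weight by party ID (baseline)
no_adjustment = [0.555, 0.565, 0.560]     # polls that don't

# Estimated bias of the unadjusted polls in this window.
bias = mean(no_adjustment) - mean(adjusts_party_id)
corrected = [round(x - bias, 3) for x in no_adjustment]
print(round(bias, 3), corrected)  # 0.02 [0.535, 0.545, 0.54]
```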

Speaking more generally, it’s hard for me to know what to make of a statement such as, “I want to make as few subjective decisions as possible,” in this case. Here are some further thoughts on subjectivity and objectivity in statistics.