OK, enough about coronavirus. Time to talk about the election.

Dhruv Madeka starts things off with this email:

Someone just forwarded me your election model (with Elliott Morris and Merlin Heidemanns) for the Economist. I noticed Biden was already at 84%. I wrote a few years ago about how the time to election factors a lot into the uncertainty – basically, more clock time until the election means there’s more clock time for really random things to happen (Biden gets Covid, something crazy is released about him like the second Comey letter, etc.).

For the 2016 Bloomberg model – we used a linear increase in variance for the time to election, though something more elaborate might help!

Nassim Taleb and I also followed up on this later on to explain that FiveThirtyEight’s forecast violated certain probabilistic constraints (for being a forecast of the future).

Morris replies:

Our model has linearly increasing variance in the random walk, but the maximum distance from time T to T+n is constrained by our prior forecast. That is to say, while the temporal error on the polls 300 days before Election Day has a 95% uncertainty interval close to 10 percentage points, our state priors at that distance are normally dispersed in a range closer to +/- 6, constraining the total error to a more reasonable range.

I know about your argument with Taleb against Nate. The whole thing frankly just seems pedantic to me. Lots of this stuff is just really unpredictable (especially pollster error) and black swan events don’t fit neatly into our models, even those with fat tails!

Madeka responds:

I agree that it might be pedantic – but the more uncertainty you have (black swan or clock time), the wider the distribution for the terminal point gets. So if you really did have a simple Brownian model (sigma * random_walk – not to say it’s not way more complex in reality), then the farther out you are and the more uncertainty you have, the more the terminal normal distribution, with its large variance on the real line, starts looking like a “uniform on the real line.” So the probability of being greater than any value approaches 0.5.

I guess our point, beyond the martingale technicalities, is that when there is a lot of uncertainty, the probabilities won’t move around a lot – typically they’ll be flat at some level (anything but 0.5 seems suspect to me personally, but the level could be, say, 84%). So if your model were to freeze, or to move slowly around 84%, that would make sense and be perfectly consistent.

But if more and more news came out, and you updated it – say there was a “scandal” for Biden and that dropped the probability to 52% for Biden and then back to 84% close to the election – I think Nassim and I would say, probabilities don’t behave like that, that it’s more likely a failure to capture the intrinsic time uncertainty of the problem. That was closer to our criticism of Nate: if there really was that much uncertainty (black swan or pollster or news or time) – the probability would have frozen. If you didn’t know, you’d say “I don’t know, it’s all really variable – I’m not going to bet on it”.

In finance terms, Biden winning is a binary option on the election date (more realistically, a basket of binary options) – so the more volatility there is, the closer to 0.5 the price gets.
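Madeka’s point can be sketched numerically. Under a simple Brownian-motion model of the polling lead (illustrative numbers only – a 4-point lead and 2 points of volatility per period are made up, not from the exchange), the “price” of the binary option is the probability the terminal value stays positive, and it drifts toward 0.5 as the remaining clock time (or volatility) grows:

```python
from math import erf, sqrt

def win_prob(lead, sigma, t):
    """P(final margin > 0) when the current lead evolves as a Brownian
    motion with per-period volatility sigma and t periods of clock time left."""
    return 0.5 * (1 + erf(lead / (sigma * sqrt(2 * t))))

# Hypothetical numbers: a 4-point lead, 2 points of volatility per period.
probs = {t: win_prob(4.0, 2.0, t) for t in (1, 25, 100, 10_000)}
# The farther out (or the more volatile), the closer the probability is to 0.5.
```

With one period left the probability is near certainty; with enormous time (or volatility) remaining it hugs 0.5, which is exactly the binary-option intuition.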

The implied betting odds from Betfair are 53% for Biden, 40% for Trump, 2% for Mike Pence, 2% for Hillary Clinton (!), and another few percent for some other possible longshot replacements for Biden or Trump.

Right away, you can see that our model does not account for all possibilities, as we frame it as Biden vs. Trump, with the implicit understanding that it would be the Democrat vs. the Republican if either or both candidates are replaced.

But, setting that aside, these implied betting probabilities are much closer than our model, which is based on polls and forecasts.

Just to be clear: The Betfair odds don’t correspond to Biden getting 53% of the vote, they correspond to him having a 53% chance of winning, which in turn basically corresponds to the national election being a tossup.

So the prediction markets really do disagree with our forecast. And there’s nothing so special about our forecast; given the data we have now, I expect that just about every poll-based model will give similar predictions.

Madeka then elaborates on the betting odds:

The idea in a sequence of steps is this:

– If you’re publishing your forecasts, we assume they’re “proper” in the sense that you’d be willing to bet on them. So if I gave you $1 for every $4 you put in if Biden won (let’s say you have Biden at 80%), you’d be willing to take that. And the converse for Trump.

– If your forecast was too volatile, then when you increased the probability for Trump in the future, we’d assume again that your posting was honest and that you’d be happy to sell us Trump at those odds (say, if you moved Trump to 40%). So the trade would go like this:

Today, Trump at 20%:

– bet $1, win $4 – Trump wins

– take $1, pay $0.25 – Biden wins

Two months from now, Trump at 40%:

– take $1, pay $1.50 – Trump wins

– bet $1, get $0.67 – Biden wins

If you didn’t have volatility, or you absorbed at 100%, you’d be free from this. But in the scenario where you behaved like Silver in 2016, mean reverting up and down, we’d make money from you whether Trump or Biden won.

I guess what may be pedantic is that the only sensible interpretation I can see for a single-event forecast like an election is betting. You’re asking people to make decisions (say, in the extreme case, to move to Canada or Europe) based on these numbers – so they should be numbers you’re willing to bet on. That’s where the interpretation of the binary option and betting comes from – so a super volatile forecast isn’t a great forecast, because once people identify that, they’ll trade against you.

My reply: I agree that betting is *a* model for probability, but it’s not the *only* model for probability. To put it another way: Yes, if I were planning to bet money on the election, I would bet using the odds that our model provided. And if I were planning to bet a lot of money on it, I guess I’d put more effort into the forecasting model and try to use more information in some way. But, even if I don’t plan to bet, I can still help to create the model as a public service, to allow other people to make sensible decisions. It’s like if I were a chef: I would want to make delicious food, but that doesn’t mean that I’m always hungry myself.

On the technical matter. I agree that it should be rare (but not impossible) for the election probabilities to swing wildly during the campaign.

Finally, regarding the statement, “If I gave you $1 for every $4 you put in if Biden won (let’s say you have Biden at 80%), you’d be willing to take that. And the converse for Trump”: Not quite. Don’t forget the vig. I can’t go around offering both sides of every bet without (a) getting takers on both sides and (b) having some sort of vig. Without (a) and (b), I’m vulnerable to getting taken out by someone with inside information.

The point is that, yes, if you have betting odds, these do translate into probabilities. But the reverse mapping is not so clear, as it involves actual economics.

Anyway, yes, I do believe our probabilities. They’re conditional on our model, and it seems like a reasonable model.

It’s possible that the model will have Trump at 40% chance of winning in 2 months, but I doubt it. My best guess of the probability we’ll have of a Trump win in 2 months is . . . the probability of a Trump win we have right now!

We had a couple more go-arounds on this, and a few more points came up.

Merlin writes:

The forecast partially pools the fundamentals based forecast with the polling based forecast. Essentially, the probability estimate walks toward the fundamentals based prediction. Increasing the diffusion term would allow it to get closer to it (higher uncertainty leading to lower weight in the polls based forecast relative to the fundamentals based forecast in the partial pooling for the prediction for Election Day).
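The precision-weighting Merlin describes can be sketched as follows (the numbers are illustrative assumptions, not the model’s actual values): when the diffusion term inflates the polls’ uncertainty far from the election, the pooled forecast leans on the fundamentals prior; as Election Day approaches and the polls tighten, the polls dominate.

```python
def pooled_forecast(poll_mean, poll_sd, prior_mean, prior_sd):
    """Precision-weighted average of the polls and the fundamentals prior."""
    w_poll, w_prior = 1 / poll_sd**2, 1 / prior_sd**2
    return (w_poll * poll_mean + w_prior * prior_mean) / (w_poll + w_prior)

# Far from the election: wide poll uncertainty -> forecast near the prior (54).
far = pooled_forecast(poll_mean=50.0, poll_sd=5.0, prior_mean=54.0, prior_sd=2.0)

# Close to the election: tight polls -> forecast near the polls (50).
near = pooled_forecast(poll_mean=50.0, poll_sd=1.0, prior_mean=54.0, prior_sd=2.0)
```

With these made-up numbers, `far` lands most of the way toward the prior while `near` stays close to the polls, which is exactly the dynamic being described.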

I’ll add that it might be confusing to think of the forecast as a “prior.” This is not a prior; it’s a prediction from a fitted regression model. It’s only a prior in the sense that it is the posterior from a previous analysis.

You might want a noninformative prior centered on 50/50. That’s fine. But we have some information. The president is unpopular and the economy is doing poorly. Historically this predicts a poor result for the incumbent in the election:

We could decrease the slope in our fitted regressions (thus making the election prediction less sensitive to presidential popularity and the state of the economy). Actually, we already did reduce the slope to account for polarization. I still don’t see it getting you to 45% chance of Trump winning.

Don’t forget, if you want a baseline, it’s that more people have voted for Democrats than Republicans in most of the past several national elections. To make a prediction of close to 50/50, you have to be really influenced by what happened in 2016. Which I think is happening with these bettors.

Madeka replies:

I guess the part that surprises me is that the model is so confident this far out – it’s only June. And that the uncertainty bands (for the ones you show – popular vote) are basically constant-width until Election Day. That’s the part that makes me think that the forecast will jump as things happen.

We can dispute the market (trade if you like) – but I think the point goes back to my first email: if you think the probability won’t move too much, that’s pretty consistent/good, and we can disagree on the value.

I guess the question isn’t so much the point today as the dynamic going forward – if Biden drops or moves in the polls, how much does the probability move through time? I always liked Nassim’s picture in his paper:

To which Morris responds:

Again, this is because our election-day outcomes are being constrained by the popular vote prediction. The “prices” won’t evolve over time like a traditional financial market because we have a really good way of telling what’s going to happen in the future—approval ratings, the economy, and polarization are good predictors of election outcomes even this far out, so the resulting process in our forecast is not like a traditional random walk with linearly increasing error as we move away from Election Day.

To put it another way, the polls do provide information, and they’ll provide more information going forward, but the baseline from the model is a prediction of something like 54% +/- 3% of the vote for Biden, which translates into something like an 85% chance of him winning the electoral vote with the current lineup of states.
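As a rough sanity check on that translation (illustrative numbers only): a 95% interval of +/- 3 points around 54% implies a standard deviation of about 1.5 points, and if we assume, hypothetically, that Biden needs roughly 52% of the popular vote to break even in the electoral college, the implied win probability lands in the same general ballpark as the model’s 85%:

```python
from math import erf, sqrt

mean = 54.0          # popular-vote prediction for Biden (%), from the post
sd = 3.0 / 1.96      # +/- 3 points read as a 95% interval
tipping = 52.0       # HYPOTHETICAL popular-vote share needed to break even
                     # in the electoral college (an assumption, not from the post)

z = (mean - tipping) / sd
p_win = 0.5 * (1 + erf(z / sqrt(2)))   # normal upper-tail probability
```

This is only a back-of-the-envelope check; the actual model simulates state-level outcomes rather than applying a single tipping-point cutoff.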

It’s good to be transparent here. If you want to go with the market odds rather than our probabilities, that’s fair enough; now you can figure out exactly what part of our model you think is wrong. The only reasonable way I can see you getting anywhere close to 50/50 odds is to center your popular-vote forecast around 51% rather than 53%. You can get there by saying that 2020 is kinda like a rerun of 2016, except the Republicans have the disadvantage of a bad economy and the advantage of incumbency. Say that these kinda balance out and there you go. But I don’t think they really balance out; see the graphs immediately above.

**The bet**

Suppose I were to lay $1000 on Biden right now. According to Betfair it seems that, if I win, I make a profit of $840:

And our model gives Biden an 88% chance of winning. But we’re modeling Biden vs. Trump, whereas Betfair considers other possibilities, including replacements for the Democratic or Republican nominee. So let’s take our 88% down to 80% to account for those unmodeled outcomes.

My expected return from this $1000 bet is then 0.80 * $840 + 0.20 * (-$1000) = $472.

That’s pretty good: a 47% expected rate of return is a pretty juicy investment.
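The arithmetic, spelled out:

```python
stake = 1000.0
profit_if_win = 840.0   # Betfair payout quoted above
p_win = 0.80            # model's 88%, discounted for unmodeled outcomes

expected_value = p_win * profit_if_win + (1 - p_win) * (-stake)
rate_of_return = expected_value / stake   # 0.472, i.e. about 47%
```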

I discussed this with Josh Miller and he pointed out that if you wanted to hedge this bet, you could just wait until Biden’s price goes up enough and then cover the bet in the opposite direction.
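Miller’s hedging idea can be sketched with the numbers from the post plus one assumption: suppose Biden’s implied probability later rises to 80% (a made-up figure, i.e. decimal odds of 1.25). Then there is a lay stake that equalizes the two outcomes and locks in the profit:

```python
stake = 1000.0
back_profit = 840.0   # profit on the original $1000 Biden bet if he wins

# Hypothetical later price: Biden at an implied 80% (decimal odds 1.25).
lay_odds = 1.25

# Choose the lay stake b so both outcomes pay the same:
#   back_profit - b*(lay_odds - 1) = -stake + b
b = (back_profit + stake) / lay_odds
locked_profit = -stake + b   # identical whichever candidate wins
```

With these numbers the locked-in profit comes out to $472 either way, which (not coincidentally) matches the expected value computed at the assumed 80% probability.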

I’m not recommending you make this bet, or that you make any bet. Indeed, you could argue that this spread just shows that the bettors know more than we do. I don’t think so—I think the bettors are too influenced in their thinking by the 2016 outcome—but it’s hard to say more than that.

**P.S.** See here for a detailed explanation by Morris of the different components of our model.