Summary
Polling aggregator FiveThirtyEight has named Vice President Kamala Harris as the narrow favorite to win the presidential race on Election Day, shifting from former President Donald Trump for the first time since October 17.
Harris’s lead is razor-thin, with FiveThirtyEight’s model showing her winning 50 out of 100 simulations compared to Trump’s 49. Similarly, Nate Silver’s model in The Silver Bulletin also slightly favors Harris, giving her a win in 50.015% of cases.
Both forecasts emphasize the unprecedented closeness of this race, with Pennsylvania as a key battleground.
Pollsters sucked in the election. It’s like forecasting a 50% chance of rain. “One candidate may win, but the other may win too!” I know that.
That’s pretty much always what the polls say for the presidential election. I don’t know why people expect pollsters to have crystal balls. The election is mostly decided on who is going to actually go vote, and a lot of people don’t know the answer to that until election day.
And even if they did predict anything convincingly, it would probably end up a self-defeating prophecy, as people wouldn’t bother to show up. Or self-fulfilling, if people want to vote for the winning team. In either case it’s just very limited what polls can achieve.
Ideally, your vote shouldn’t depend on what you’re told by pollsters.
Well, if anything was ideal, this whole situation would look very different.
This is why people keep complaining about the polls being wrong. The polls are often pretty good these days, but the people reporting and talking about them do not understand basic statistics.
If I had a coin with a small booger weighting one side, so that it lands booger side down 51% of the time, would I be surprised if it landed booger side up? No.
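To put a number on the booger coin: even with a 51/49 bias, the “unlikely” side still comes up nearly half the time. A quick simulation (the 51% weighting is just the figure from the analogy above):

```python
import random

random.seed(42)

# Coin weighted to land booger side down 51% of the time.
trials = 100_000
booger_up = sum(random.random() >= 0.51 for _ in range(trials))

# The "surprising" outcome happens roughly 49% of the time,
# which is to say: not surprising at all.
print(f"booger side up: {booger_up / trials:.3f}")
```

A 51/49 edge tells you which side is favored; it tells you almost nothing about what any single flip will do.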
Also, these models are extremely rough. They are forced to make a bunch of very rough estimations and guesses, which are then aggregated to a stupidly precise number making it look scientific.
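A toy illustration of that false precision (all the numbers and ranges below are made up for the sketch, not taken from any real forecast): nudge the rough inputs within plausible ranges and the “precise” headline probability swings by whole percentage points, dwarfing a claimed 50.015% vs. 49.985% edge.

```python
import math
import random

random.seed(0)

def win_probability(poll_margin, assumed_error, turnout_shift):
    # Crude model: candidate wins if the true margin is positive,
    # with the margin treated as Normal(mu, sigma).
    mu = poll_margin + turnout_shift
    sigma = assumed_error
    return 0.5 * (1 + math.erf(mu / (sigma * math.sqrt(2))))

# Vary each rough input within a plausible band.
probs = [
    win_probability(
        poll_margin=random.uniform(-0.5, 0.5),    # +/- 0.5 pt poll average
        assumed_error=random.uniform(3.0, 5.0),   # 3-5 pt polling error
        turnout_shift=random.uniform(-1.0, 1.0),  # +/- 1 pt turnout guess
    )
    for _ in range(1000)
]

# The headline number swings by tens of percentage points.
print(f"min {min(probs):.3f}, max {max(probs):.3f}")
```

Reporting five significant digits out of inputs this soft is theater, not statistics.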
It’s a fun enough exercise, but it’s really just repeated endlessly because it’s so goddamn easy to report on.
There’s also the problem that if the polls are crap, the results of the model will be crap too, regardless of how accurate the model is. It’s similar to how publication bias affects meta-analyses. Several analysts have already argued that pollsters are unlikely to underestimate Trump again, and may in fact over-correct and underestimate Harris, much as they underestimated Democrats in 2022:
- https://www.nytimes.com/2024/10/23/opinion/election-polls-results-trump-harris.html#link-647a30f1
- https://www.newsweek.com/kamala-harris-underestimate-polls-wrong-election-donald-trump-1979080
- https://nypost.com/2024/10/30/us-news/election-polling-could-be-underestimating-kamala-harris-democrats-in-key-states-cnn-data-reporter-warns/
The Nate Silver model (at least) puts in a bunch of “corrections” for poll quality and historical bias from individual pollsters.
So you’re really playing a second or third level game of “Did Nate (or your other poll aggregator) correct for all the effects and biases, or did they miss something important?”
And we will never be able to validate if these odds are accurate or not, because this specific election will never be replayed again.
It’s Newsweek, and Newsweek is a bit ratch, as publications go.
Nate said today that a coin actually has a 50.5% chance of heads, so this is technically closer than a coin flip!
If the early voter demographics + recent polls still only have it at a “coin flip” as the polls open on the last day:
we’re screwed.
(please go vote and prove me wrong)
I’m not sure how well early voter demographics correlate with actual voting patterns anymore. I work for a municipality, and my office has a clear view of the voting lines. They were PACKED for the first week of early voting. They have been empty today. Like, people are still coming in to vote, but it’s onesie-twosies, not the 50+ person lines it was. Allegedly we had over 50% of our eligible voters cast their ballots during early voting. And my area is pretty solidly red. I’m having trouble making any sort of prediction based on it.