Another Election, Another Round of Poll Bashing. Is That Fair?

Visual: Base image via The Image Bank/Getty; Illustration by Undark

Forecasts once again substantially underestimated the extent of support for Trump. But does that mean they failed?


FOUR YEARS AGO, polls indicated that then-Democratic presidential nominee Hillary Clinton would handily beat her Republican opponent, Donald J. Trump. Based on those polls, one prominent election forecaster, Princeton University neuroscience professor Sam Wang, even called the race for Clinton several weeks before Election Day, promising to eat an insect if he was wrong.

Wang ate a cricket on CNN, and in May of 2017, a committee at the American Association for Public Opinion Research, or AAPOR, released a post-mortem of the polls’ performance. The report acknowledged shortcomings and suggested reforms to “reduce the likelihood of another black eye for the profession.”

Many pollsters and forecasters did make changes before the 2020 election. Once again, though, polls pointed toward big Democratic wins in key states, fueling optimism among progressives. And on Tuesday night, as it became clear that polls and forecasts had once again substantially underestimated the extent of support for Trump, the backlash was quick.

“We should never again put as much stock in public opinion polls, and those who interpret them, as we’ve grown accustomed to doing,” Washington Post media critic Margaret Sullivan wrote on the morning after Election Day. The discipline, she continued, “seems to be irrevocably broken, or at least our understanding of how seriously to take it is.” A headline in The Atlantic’s Ideas section declared a “polling crisis.” A disgruntled columnist for The Daily Beast joked that it was time “to kill all the pollsters.”

Election forecasters did predict that Biden would win the Electoral College, which — as of this writing — appears to be correct, though disputes over that outcome may well take weeks or months to settle. But as in 2016, the odds of such a close race were again considered small, and polling errors were particularly pronounced in Florida and parts of the Midwest. For example, polling averages at The New York Times had indicated that Biden was up by 10 percentage points in Wisconsin and 8 in Michigan, and that Ohio would narrowly go to Trump. Complex models at The Economist and FiveThirtyEight produced similar projections. Instead, the Trump campaign has vowed to ask for a recount in Wisconsin, Biden appears to have only narrowly won Michigan, and Trump won Ohio by around 8 points. (All of these tallies are still provisional.)

Does this mean that something is fundamentally broken about political polling? Some experts say that’s not entirely fair. “I think it’s clear that there were some problems with polling this year, but a lot of this reaction strikes me as very premature,” said Courtney Kennedy, the director of survey research at the Pew Research Center and the chair of AAPOR’s 2016 post-mortem committee, in a Thursday afternoon interview.

“It’s like, my goodness, let’s pump the brakes,” she added. “The election is not even over, there are still millions of votes to be counted.”

KENNEDY AND OTHER experts acknowledge that the 2020 election polls raise questions for pollsters about some of their methods. But many pollsters also argue that the backlash reflects misguided conceptions about what polls actually do — and that some blame lies with the wider ecosystem of pollsters, poll aggregators, and forecasters that has blossomed in the past 15 years.

At the base of this informational food chain are the pollsters, who use a range of information-gathering methods — including interviewer phone calls, online questionnaires, automated calls, and sometimes text messages — to reach samples of voters and to gauge their feelings on a variety of issues. Poll aggregators then take large numbers of those surveys and average them together, in the hopes of getting more reliable figures than any single poll. Many aggregators are also forecasters, feeding their figures into complex computer models that attempt to actually predict election outcomes. This is the methodology that would eventually make Nate Silver’s FiveThirtyEight operation famous, and it is what many large media companies, from The New York Times to CNN, now emulate as a matter of election-year routine.

Andreas Graefe, an election forecasting researcher, said that forecasting models have become more sophisticated, and have improved to account for a wide array of potential errors. But, he added, “I wouldn’t say that that really helped accuracy.” Graefe has helped run PollyVote, an academic election forecasting project, since 2007. Over that time, he said, he has seen election forecasting become a big business. “What has changed, definitely, is forecasts as a media product,” he said…
