If there is anything to take away from polling in recent elections, it is that everyone should be more careful about making predictions based on a single indicator of who is ahead, particularly when there is so much other data we can use to tell the story. This means not only polling data, but other indicators that might tell us what’s happening. For example, leading up to the 2020 election, we made it a point not only to discuss our polling indicators, but also to present data on unemployment rates, campaign spending, COVID deaths, early voting, and changes in voter registration as supplemental indicators worth thinking about.
It also means pollsters must do a better job of reporting on the uncertainty of their estimates and discussing other potential sources of error beyond sampling, including the tendency for polls to share similar biases in any given election cycle.
This need for pollsters and reporters to work together so people more clearly understand the limitations of polls is an often-overlooked part of the 2020 AAPOR post-mortem. References to it are sprinkled throughout the report, but the conclusions make the point plainly:
“Polls are often misinterpreted as precise predictions. It is important in pre-election polling to emphasize the uncertainty by contextualizing poll results relative to their precision. Considering that the average margin of error among the state-level presidential polls in 2020 was 3.9 points, that means candidate margins smaller than 7.8 points would be difficult to statistically distinguish from zero…putting poll results in their proper context is essential; whether or not the margins are large enough to distinguish between different outcomes, they should be reported along with the poll results. Most pre-election polls lack the precision necessary to predict the outcome of semi-close contests” (p. 71).
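To make the arithmetic in that passage concrete, here is a minimal sketch of why a 3.9-point margin of error on a single candidate’s share translates into roughly a 7.8-point threshold on the lead. This is an illustration, not the report’s own calculation; the head-to-head assumption and the function names are mine.

```python
# Illustrative sketch (not the AAPOR methodology): in a two-candidate race where
# shares sum to roughly 100%, the lead equals 2*p - 100, so the margin of error
# on the lead is about twice the margin of error on a single candidate's share.

def lead_moe_from_share_moe(share_moe: float) -> float:
    """Approximate margin of error on the lead, given the MOE on one share."""
    return 2 * share_moe

def is_lead_distinguishable(lead: float, share_moe: float) -> bool:
    """True if the reported lead exceeds the approximate MOE on the lead itself."""
    return abs(lead) > lead_moe_from_share_moe(share_moe)

if __name__ == "__main__":
    avg_share_moe = 3.9  # average state-level poll MOE cited in the AAPOR report
    for lead in (2.0, 5.0, 8.0):
        verdict = ("distinguishable from a tie"
                   if is_lead_distinguishable(lead, avg_share_moe)
                   else "too close to call")
        print(f"A {lead:.1f}-point lead with a +/-{avg_share_moe} point MOE is {verdict}")
```

Run against the 3.9-point average, only the 8-point lead clears the roughly 7.8-point bar, which is exactly the report’s point about semi-close contests.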
Trying to assess the polls’ performance by looking at a single indicator is like judging the quality of a car by its paint color: easy to see, but meaningless until you understand the rest of the vehicle’s components and how the owner plans to use it. It is baffling that so many people focus on a single indicator to assess accuracy, in this instance the horse race question that measures candidate preference, when a good poll can provide essential context for understanding an election. The truth is, no one trying to forecast a future event is wise to rely on a single indicator, so why should polls be treated any differently? Would we feel differently about the polls if we allowed ourselves to broaden our perspective and think about more than just the horse race?
In the end, those who look at polls will make their own judgments about their performance. Undoubtedly, many of these assessments will be motivated more by partisanship and ideology than by methodological criteria. In the long run, polling has more often advanced our understanding than misled us, including in 2016 and 2020. The task for everyone who understands polling and its limitations is to do more to make sure that others understand them, too.
Everyone needs to remember that polling is a helpful tool for reducing uncertainty, not eliminating it. In my mind, polls haven’t outlived their usefulness; it is just that their primary users have tended not to use them in the ways in which they are most informative.