Franklin & Marshall College Poll

By Center for Opinion Research

Franklin & Marshall College Poll: Assessing the 2020 Pennsylvania Election Polls


Issue #1
You are receiving this email because you were signed up to receive news and information about our Franklin & Marshall College Polls through a list managed by Terry Madonna. Much has happened since our final Poll of the 2020 election last October, including the retirement of our friend and colleague, Terry. With Terry’s retirement, we are changing how we communicate with those who are interested in the Poll by creating this monthly newsletter to provide results from and analyses related to our F&M Polls. Options for managing your interest in this list appear at the end of this issue.
This first issue of our newsletter reviews the performance of the state polls during the 2020 election and offers some recommendations about how those interested in politics can best use polling information. One clear takeaway from recent elections is that everyone should be more circumspect about making predictions based on a single indicator of who is ahead, particularly when there is so much other data we can use to tell the story.
I hope you find this information about polling in the 2020 election both interesting and useful.
Sincerely,
Berwood Yost, Director

Pennsylvania's 2020 Election Polls
Pennsylvania’s 2020 election, which included millions of mail-in ballots, vast partisan differences in mail-in voting, and a delayed vote count, produced an outpouring of snap judgments that the Pennsylvania polls were off the mark. This Philadelphia Inquirer headline the day after the election, “An embarrassing failure for election pollsters,” expressed what many were thinking and feeling in that moment. These initial judgments about the 2020 polling “failure” remain mostly uncorrected despite a final outcome that now seems to align with the expectations created by the pre-election polls, at least in Pennsylvania. With the vote counting complete, what can we learn about the performance of the state polls in the 2020 election cycle?
The Polls in Pennsylvania Performed Reasonably Well
Comparing the final Pennsylvania poll estimates to the election results for polls conducted within three weeks of election day shows some positive results for the state’s polling accuracy and some areas of concern. Nearly half the polls conducted in the final three weeks of the campaign produced biased estimates, meaning the poll systematically over- or under-estimated one party’s share of the vote beyond its margin of error. Despite these biases, which for technical reasons may be overstated, four out of five polls correctly predicted a Biden victory in the state, and the average candidate error was only about 1.5 points.
The figure includes the estimated bias along with error bars that display the variability of the bias estimate. Biased estimates are presented in red. The error bars of a biased poll do not overlap with zero. Some pollsters produced more than one poll during the final three weeks of the election and the bias estimates and standard errors for all polls appear on a single row. As noted earlier, 14 of these 29 polls were biased.
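As a rough illustration of the overlap-with-zero test described above (a simplified sketch, not the exact method used in the full paper), a poll's bias on the two-party margin and its sampling variability can be computed like this; the poll numbers and sample size below are hypothetical:

```python
import math

def margin_bias(poll_dem, poll_rep, result_dem, result_rep, n):
    """Estimate a poll's bias on the Dem-minus-Rep margin and test it
    against zero.

    Simplified sketch: the bias is the poll's margin minus the election's
    margin, and its standard error uses the normal approximation for the
    difference of two multinomial proportions from one sample.
    """
    bias = (poll_dem - poll_rep) - (result_dem - result_rep)
    # Var(p_dem - p_rep) = [p1(1-p1) + p2(1-p2) + 2*p1*p2] / n
    var = (poll_dem * (1 - poll_dem) + poll_rep * (1 - poll_rep)
           + 2 * poll_dem * poll_rep) / n
    se = math.sqrt(var)
    lo, hi = bias - 1.96 * se, bias + 1.96 * se
    biased = not (lo <= 0 <= hi)  # True if the 95% interval excludes zero
    return bias, se, biased

# Hypothetical poll: Biden 50%, Trump 45%, n = 600 respondents;
# the actual 2020 PA result was roughly Biden 50.0%, Trump 48.8%.
bias, se, biased = margin_bias(0.50, 0.45, 0.50, 0.488, 600)
```

With these made-up numbers the poll overstates Biden's margin by about 3.8 points, but the error bars still overlap zero, so by this test it would not be flagged as biased; a larger sample with the same estimates would be.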
Comparatively, the 2020 Pennsylvania polls performed better than in the 2016 election, but worse than in the 2008 and 2012 elections. Every state poll in 2008 correctly predicted an Obama win, as did almost every poll in 2012. Every other measure of bias and error was lower in 2008 and 2012 than in 2016 or 2020. Perhaps the most interesting pattern evident in these polls is the consistent underestimate of the winners’ shares of the vote.
This table presents several different measures of poll accuracy. Please refer to the full paper for an explanation of how each measure is calculated.
If the polls are viewed in terms of their aggregated predicted margin and the accompanying aggregated margin of error, the historical performance of the polls in the state is reasonable: in no race was the difference between the predicted poll margins and the actual election margins outside the average margin of sample error reported by the polls.
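The aggregate comparison described above can be sketched in a few lines; all of the poll margins and margins of error below are hypothetical, and the actual margin is approximately Biden's 2020 Pennsylvania margin:

```python
# Average the final polls' predicted margins, then ask whether the actual
# election margin falls within the polls' average reported margin of error.
poll_margins = [4.0, 5.5, 3.0, 7.0, 4.5]  # hypothetical Dem-minus-Rep margins (points)
poll_moes = [4.4, 3.5, 4.0, 3.8, 4.2]     # each poll's reported margin of error (points)

avg_margin = sum(poll_margins) / len(poll_margins)  # aggregated predicted margin
avg_moe = sum(poll_moes) / len(poll_moes)           # average reported margin of error
actual_margin = 1.2  # Biden's 2020 PA margin, roughly 1.2 points

within = abs(avg_margin - actual_margin) <= avg_moe
```

With these illustrative numbers the polls collectively overstate the margin, yet the miss still falls inside the average reported margin of error, which is the sense in which the text calls the state's aggregate polling performance reasonable.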
A visual presentation of the polling errors by election captures the essential feature of the polling errors in the state over the past four election cycles: the 2008 and 2012 election polls had a slight bias in favor of the Republican candidates while the 2016 and 2020 polls had a larger bias in favor of the Democratic candidates.
A curve centered over the vertical dashed line at zero in the figure indicates no bias in the polling, while positive values suggest the polls favored Republicans and negative values suggest the polls favored Democrats.
Getting More from Pre-Election Polling
Polling firms have an obligation to do good work, but measuring the “accuracy” of election polls is a more complicated notion than conversations about whether “the polls got it right” or whether “the polls can be trusted” allow. The complications are many: What’s the right way of measuring accuracy? How close to the final margin does a poll need to be to be considered accurate; is being within the poll’s margin of error close enough? What’s the lifespan of a polling estimate; how close to election day should a poll be conducted for it to count as a prediction?
At the most basic level, reliable and accurate survey results come from a pollster’s ability to find a good list from which to draw a representative sample, to encourage those sampled to participate, and to ask fair questions that are understood in the same way by each participant. These technically challenging tasks are each within a pollster’s control.
What is beyond a pollster’s control, and what many people overlook when talking about poll accuracy, is the role of shifting voter attitudes. A question that asks how someone intends to vote is measuring a current belief and not an actual behavior. How wise is the assumption of stable voter attitudes when the major-party candidates are bombarding voters with messages? In the 2020 election, the presidential candidates spent nearly $100 million on advertising in Pennsylvania between July and November; is it reasonable to expect that campaign spending, media coverage, and the public conversations these encourage produce unchanging, immobile attitudes about turnout and preference?
This assumption is further strained in a state like Pennsylvania that is hotly contested and evenly divided. In Pennsylvania, only a small share of voters need to change their minds about turning out to vote or about who they intend to support to make a poll that was accurate when conducted look inaccurate on election day. There is ample evidence that attitudes about voting and about vote preference change throughout the course of the campaign, even if those preferences become a bit more stable in the closing weeks of a race.
Thinking Beyond the Horse Race
If there is anything to take away from polling in recent elections, it is that everyone should be more careful about making predictions based on a single indicator of who is ahead, particularly when there is so much other data we can use to tell the story. This means not only polling data, but other indicators that might tell us what’s happening.
It also means pollsters must do a better job of reporting on the uncertainty of their estimates and discussing other potential sources of error beyond sampling, including the tendency for polls to share similar biases in any given election cycle.
This also means we should rely a bit less on poll aggregators. Poll aggregators have become popular because they make it easy for those interested in the campaigns to keep score, but these sites also over-simplify our understanding of the race and of the methods used to understand it. Poll aggregators’ projections in 2016 allowed for too little uncertainty about the election outcomes by producing lopsided projections that were inconsistent with any individual poll. These sites also encourage the reporting of poll data that is consistent with the polling averages. This “herding” means that some pollsters self-censor their results so they are in line with the poll averages.
Trying to assess the polls’ performance by looking at a single indicator is like judging the quality of a car by its paint color: easy to judge but meaningless until you understand the rest of the vehicle’s components and how the owner plans to use it. It is a bit baffling that so many people focus on a single indicator to assess accuracy, in this instance the horse race question that measures candidate preference, when a good poll can provide essential context for understanding an election. The truth is, no one trying to forecast a future event is wise to rely on a single indicator to make their judgments, so why should polls be treated any differently? Would we feel differently about the polls if we allowed ourselves to broaden our perspective and think about more than just the horse race?
In the end, those who look at polls will make their own judgments about the polls’ performance. Undoubtedly, many of these assessments will be motivated more by partisan and ideological considerations than by methodological or performance criteria. But the science of polling has proven itself time and again. In the long run, polling has more often advanced our understanding than misled us, including in 2016 and 2020. The task for everyone who understands those facts is to do more to make sure that people understand them.
Polling is a helpful tool for reducing uncertainty, not eliminating it.
Future Polls
Findings of the first 2021 Franklin & Marshall College Poll will be released on March 11. Interviewing will take place March 1 - 7.
Please reach out with questions or comments by replying to this newsletter or by emailing cor@fandm.edu. Let us know what you think of the analyses and/or the new platform.
We encourage you to share our new newsletter with others and follow us on Twitter for additional content.