
Franklin & Marshall Poll - Ballot Initiatives: How Wording Matters

Issue #3
Franklin & Marshall College Poll
Ballot initiatives should be among the most direct exercises in democracy we have. But for this exercise to work, the ballot questions themselves should provide voters with the information they need to make an informed decision.

The state’s Republican legislative leaders and Governor Wolf are locked in a dispute about the Governor’s authority to establish emergency disaster declarations, such as the one the Governor relied on during the COVID-19 pandemic to take actions he believed protected public health and safety. A change to the state constitution that places some limits on these disaster declarations is scheduled to be on the May 18 primary ballot. The Pennsylvania Department of State has written the text of two ballot questions, but Republican officials believe the proposed ballot wording is unfairly biased.
Political wrangling over the form of ballot questions is not new; in fact, a dispute about the wording of a ballot initiative in 2016 made it all the way to the Pennsylvania Supreme Court.  The 2016 legal fight centered on the wording of a ballot question that sought to raise the retirement age for state judges. In that case, the plaintiffs claimed the ballot wording offered by the Republican-controlled legislature was “manifestly deceptive” because, according to them, it did not provide enough information to voters. 
The Center conducted research on the 2016 initiative that showed how different wording on ballot initiatives produced different levels of support for the initiative. Survey researchers commonly use a tool known as a “split-ballot experiment” to understand the effect that wording has on how people respond to a question. 
The 2016 Ballot Experiment
A split-ballot experiment uses a random procedure to assign different forms of a question to survey participants.  Properly designed, these experiments create groups of people who are identical in all ways except for the form of the question they receive. When differences appear in a split-ballot experiment, they are the result of the question’s wording and not the result of who was asked the question.
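The random-assignment step described above can be sketched in a few lines of code. This is a minimal illustration, not the Poll's actual procedure: the form names are taken from this article, but the respondent IDs, sample size, and seeding are hypothetical.

```python
import random

# The three question forms used in the September 2016 experiment
# (names from the article; everything else here is illustrative).
FORMS = ["November 2016 Ballot", "May 2016 Primary", "Common Language"]

def assign_forms(respondent_ids, seed=0):
    """Randomly assign each respondent exactly one question form.

    Random assignment makes the groups comparable in expectation,
    so differences in "yes" rates can be attributed to the wording
    of the question rather than to who happened to be asked.
    """
    rng = random.Random(seed)  # seeded for reproducibility
    return {rid: rng.choice(FORMS) for rid in respondent_ids}

# Hypothetical sample of 900 respondents.
assignments = assign_forms(range(900))

# Tally how many respondents received each form; with random
# assignment the three groups should be roughly equal in size.
counts = {form: 0 for form in FORMS}
for form in assignments.values():
    counts[form] += 1
```

Because each respondent is assigned independently and at random, no characteristic of the respondents (age, party, knowledge of the issue) can systematically differ across the three groups.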
Participants in the September 2016 Franklin & Marshall College Poll were randomly assigned to receive one of three different question forms about the ballot initiative (shown in the figure below): the ballot initiative as it appeared on the ballot (November 2016 Ballot), the original ballot initiative that was scheduled to appear on the primary ballot (May 2016 Primary), and a common language alternative that we created (Common Language).
This experiment showed that each form of the question generated a different response.  Nearly two in three (64%) registered voters in the state said they would vote “yes” to the question as it appeared on the November ballot; less than half (45%) said they would vote “yes” to the question as it would have appeared on the May primary ballot; and fewer than two in five (37%) said they would vote “yes” when asked the common language alternative.  
If voters had commonly interpreted and understood the ballot question, all three forms should have produced similar results; clearly, they did not.
The most likely reason that responses to the May 2016 Primary and Common Language forms differed so much from the November 2016 Ballot form is that the actual judicial retirement age was left out of the question. People who already knew the current retirement age for judges likely understood and answered the question differently from those who did not, and when it comes to feelings about the retirement age for judges, that knowledge appeared to make a huge difference.
Our experiment did not tell us whether the ballot question was written to intentionally mislead voters, but it did tell us that the form of the question likely produced a different result than a differently worded question would have. In the end, the initiative narrowly passed, 51 percent to 49 percent. The margin was likely narrower than our survey experiment suggested because extensive coverage of the ballot wording and the Supreme Court case in the weeks leading up to the election helped voters learn more about what they were being asked to vote for.
The 2020 Ballot Question
Which brings us back to the current ballot questions: Do the Republicans’ complaints about the ballot wording have any merit? Are these bad questions?
Survey researchers generally describe a bad question as one that produces answers resulting from how the question is written, rather than resulting from how people think or feel. 
Survey researchers actually know a great deal about how people respond to questions and about the best ways to write them so that people can accurately answer them. The most basic rules of question construction are commonsensical: form a question that is easy to understand, uses everyday language, uses words with clear and specific meanings, and is as short as possible.
Unclear questions that use terms we don’t understand, that lack specific meaning, and that otherwise cause confusion will influence our answers. People cannot link the ideas in a question to the ideas they understand or consider important when the question is badly written.
Pennsylvania’s proposed ballot initiatives on disaster declarations fail to follow these simple rules. Phrases such as “unilaterally terminate” and “severity pursuant to that declaration—through passing a concurrent resolution” that appear in one of the ballot questions are not universally understood, inherently meaningful terms for most people. Loaded terminology that could influence responses is also present, including phrases like “increase the power of the General Assembly” and “removing the existing check and balance.” As with the 2016 questions, these questions also lack important context about existing procedures.
Failing to write a good ballot question is about more than poor craftsmanship. It also drives cynicism and encourages apathy among voters. At best, citizens will ask, “What political reason did they have to ask such a bad question?” At worst, this failure gives citizens yet another reason to doubt the credibility and competence of our elected officials.
Ballot initiatives should be among the most direct exercises in democracy we have. But for this exercise to work, the ballot questions themselves should provide voters with the information they need to make an informed decision. The ballot initiative in November 2016 did not seem to meet that standard, and it is difficult to argue that the 2020 questions do either. 
References and Resources
Joint Resolution 2021-1
Shall the Pennsylvania Constitution be amended to change existing law and increase the power of the General Assembly to unilaterally terminate or extend a disaster emergency declaration—and the powers of Commonwealth agencies to address the disaster regardless of its severity pursuant to that declaration—through passing a concurrent resolution by simple majority, thereby removing the existing check and balance of presenting a resolution to the Governor for approval or disapproval?
Shall the Pennsylvania Constitution be amended to change existing law so that: a disaster emergency declaration will expire automatically after 21 days, regardless of the severity of the emergency, unless the General Assembly takes action to extend the disaster emergency; the Governor may not declare a new disaster emergency to respond to the dangers facing the Commonwealth unless the General Assembly passes a concurrent resolution; the General Assembly enacts new laws for disaster management?
Commentary: Wording matters on Pa.'s judicial ballot question
Commentary: Pa. ballot question is designed to confuse