What you need to know
- How does modern political polling work?
- How good are polls at predicting election outcomes?
- What is the best way to interpret campaign polls?
Every election season, Americans are inundated with pre-election polls. Polls often disagree about which candidates are ahead and who will win. And in recent elections, polls have significantly underestimated support for some presidential candidates. Do these outcomes indicate that polls are contaminated by partisan bias? Which polls should we pay attention to – if any?
What is polling?
Polls measure public opinion by interviewing a random sample of a community. The value of polling is that if samples are truly random and people respond truthfully, the responses from small samples are likely to be within a few percentage points of the broader population values.
For example, suppose a poll of 1,000 people in a state reports 47% support for a candidate. This result indicates that the actual support is likely to be between 44% and 50% (formally, the probability is .95 that actual support falls in this range). If the sample size were increased to 5,000, the range would narrow to 45.5% - 48.5%, as the larger sample is a more accurate representation of the population.
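The ranges above follow from the standard formula for a 95% margin of error on a sample proportion. A minimal sketch, using the hypothetical figures from the example (47% support, samples of 1,000 and 5,000):

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

moe_1000 = margin_of_error(0.47, 1000)  # roughly 0.031, i.e. about +/- 3 points
moe_5000 = margin_of_error(0.47, 5000)  # roughly 0.014, i.e. about +/- 1.5 points

print(f"n=1000: 47% +/- {moe_1000:.1%}")
print(f"n=5000: 47% +/- {moe_5000:.1%}")
```

Note that quadrupling or quintupling the sample only cuts the margin of error roughly in half, which is why pollsters rarely field very large samples.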
Why do polls disagree?
Some polls are subject to partisan bias – those conducted by candidates, parties, or ideological organizations. Most sources that collect poll data record whether a poll is partisan or not and discount the results when they build polling averages.
However, even among reputable polling organizations, polls can yield different results because they use different random samples. One poll might reveal 47% support for a candidate, while a similarly-sized poll taken at the same time might reveal 45% support just because the polls interviewed different people.
Disagreements also stem from how polls are conducted. Pollsters use different methods to determine likely voters. Questionnaires are often phrased slightly differently. Some pollsters only ask about major party candidates, while others include third-party candidates and independents. Some pollsters force respondents to express a preference, while others allow an undecided option.
Americans are also increasingly reluctant to be polled. Of every 100 people a pollster contacts, only around five agree to participate. The standard solution is to weight the responses of some respondents more than others so that the sample resembles the population, but pollsters have different strategies for implementing this weighting.
How should Americans interpret polls?
Most scholars and political professionals argue that we should focus on polling averages rather than individual polls. So, if one poll reports 47% support for a candidate, a second reports 46%, and a third reports 42%, these results are combined into an average. This average gives us a better picture of what’s happening in a political contest because it is based on a much larger sample and controls for differences in poll administration.
Each aggregator has its own strategy for combining polls. For example, Real Clear Politics averages all available poll results for each contest, while 538 weights individual polls based on factors including the sample size and the pollster’s accuracy in past elections. 538 also excludes some pollsters who do not release enough details about their polling methods.
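The difference between these two strategies can be sketched with the hypothetical poll numbers from above (47%, 46%, and 42%); the sample sizes here are illustrative assumptions, and real aggregators such as 538 also weight by factors like pollster track record, which this sketch omits:

```python
# Hypothetical poll results as (support %, sample size) pairs.
polls = [(47.0, 800), (46.0, 1200), (42.0, 500)]

# Simple average: every poll counts equally, as in the
# Real Clear Politics approach described above.
simple = sum(pct for pct, _ in polls) / len(polls)

# Weighted average: larger samples count for more (one of several
# weighting factors an aggregator might use).
weighted = sum(pct * n for pct, n in polls) / sum(n for _, n in polls)

print(f"simple average:   {simple:.2f}%")
print(f"weighted average: {weighted:.2f}%")
```

Because the smallest poll here is also the outlier, the weighted average lands slightly higher than the simple one; with a different mix of polls the two methods can diverge in either direction.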
Which polling averages are best?
To compare polling aggregators, Everything Policy selected three sites with a track record across multiple elections and compared their predictions for close Senate elections in 2018, 2020, and 2022. These contests are a good test of the aggregators, as the outcomes of most of these elections were uncertain even in the final days of the campaigns. (We look at Senate elections because there have not been enough presidential elections with good polling data to analyze predictions made about those contests.)
We asked two questions: did each aggregator’s polling average correctly predict the election outcome, and how far off was their final prediction about the vote received by the winning candidate? The chart below shows the results of this comparison.
Two results stand out. First, despite their different approaches, the three aggregators have roughly the same accuracy, regardless of whether we compare their predictions about the outcome or the winning candidate’s vote share. This result reflects the fact that the aggregators are all working with the same set of publicly available polls. While each aggregator uses a slightly different averaging technique, there is no sign that these differences translate into an advantage for one over the others.
Second, the results highlight the difficulty of predicting election outcomes. By combining polls, the aggregators have very large samples to develop predictions. Yet their final vote share estimates are on average several points away from the actual results. This difficulty reflects a drawback with polling itself: it is difficult to get people to respond truthfully to questions about which candidate they support, whether that support will change, or whether they are likely to vote. No known aggregation strategy can address these limitations.
The Take-Away
Measuring public opinion is not easy. There are many reasons why individual polls produce different findings about public opinion – reasons that have nothing to do with partisan bias by pollsters.
The best way to interpret political polls is to focus on polling averages rather than individual polls.
Of the three polling aggregators examined here, none stand out for generating more accurate predictions than the others.
Further reading
Kennedy, C., Mercer, A., Hatley, N., and Lau, A. 2022. “Does Public Opinion Polling About Issues Still Work?” https://tinyurl.com/3zrpn7k5, accessed 9/25/24.
Morris, G. E. (2022). Strength in Numbers: How Polls Work and Why We Need Them. W. W. Norton & Company.
Sources
What is polling?
Mehta, Dhrumi, et al. “Polls Policy and Faqs.” FiveThirtyEight, FiveThirtyEight, 9 Jan. 2022, https://tinyurl.com/5n6k6pah.
Why do polls disagree?
Leeper, T. J. (2019). Where have the respondents gone? Perhaps we ate them all. Public Opinion Quarterly, 83(S1), 280-288.
Kennedy, C., Mercer, A., Hatley, N., and Lau, A. 2022. “Does Public Opinion Polling About Issues Still Work?” https://tinyurl.com/48su6c26, accessed 9/25/24.
Schaffner, B. & C. Soler. 2024. “Pollsters Are Weighing Surveys Differently in 2024. Does It Matter?” Good Authority, October 2, 2024, https://tinyurl.com/bde2zd4d, accessed 10/2/24.
How should Americans interpret polls?
Grimmer, J. 2024. “Don’t Trust the Election Forecasts.” Politico, https://tinyurl.com/2p8uhn9c, accessed 9/25/24.
Trende, S. 2024. “How to Read and Understand Political Polling Data.” https://tinyurl.com/5n6w8xwt, accessed 9/25/24.
Morris, G. E. (2022). Strength in Numbers: How Polls Work and Why We Need Them. W. W. Norton & Company.
Which polling averages are best?
270 to Win. 2018. “2018 Senate Polling.” https://tinyurl.com/yhvjnf7y, accessed 9/25/24.
270 to Win. 2020. “2020 Senate Polling.” https://tinyurl.com/39czf64v, accessed 9/25/24.
270 to Win. 2022. “2022 Senate Polling.” https://tinyurl.com/2d6bej42, accessed 9/25/24.
538. 2018. “2018 Senate Election Forecast.” https://tinyurl.com/yahfaptz, accessed 9/25/24.
538. 2020. “2020 Senate Election Forecast.” https://tinyurl.com/2my3h862, accessed 9/25/24.
538. 2022. “2022 Senate Election Forecast.” https://tinyurl.com/3865kxvc, accessed 9/25/24.
Real Clear Politics. 2018. “2018 Election Maps - Battle for the Senate 2018.” https://tinyurl.com/mma7kdp9, accessed 9/25/24.
Real Clear Politics. 2020. “2020 Election Maps - Battle for the Senate 2020.” https://tinyurl.com/unsztwtp, accessed 9/25/24.
Real Clear Politics. 2022. “2022 Election Maps - Battle for the Senate 2022.” https://tinyurl.com/4szxp237, accessed 9/25/24.
Contributors
John Arnold (Intern) is a sophomore at Binghamton University majoring in Political Science and Economics.
Dr. Robert Holahan (Content Lead) is Associate Professor of Political Science and Faculty-in-Residence of the Dickinson Research Team (DiRT) at Binghamton University (SUNY). He received his PhD in Political Science in 2011 from Indiana University-Bloomington, where his advisor was Nobel Laureate Elinor Ostrom.
Dr. William Bianco (Research Director) received his PhD in Political Science from the University of Rochester. He is Professor of Political Science and Director of the Indiana Political Analytics Workshop at Indiana University. His current research is on representation, political identities, and the politics of scientific research.