Has Polling Lost Its Reputation?

A Q&A with PORES Director John Lapinski

Thursday, June 1, 2017

By Susan Ahlborn


John Lapinski, Associate Professor of Political Science and Director of the Penn Program on Opinion Research and Election Studies (PORES), at the decision desk of NBC News, where he is Elections Unit Director. Photo credit: Alex Schein



The morning of November 9, 2016, dawned on a lot of surprised people. Almost all of the polling data reported ahead of the November 8 presidential election, including several state polls in critical battlegrounds, had shown Hillary Clinton leading rival Donald Trump.

Associate Professor of Political Science John Lapinski is the director of the Penn Program on Opinion Research and Election Studies (PORES). His position as elections unit director at NBC News gave him and his students a close-up view of the successes and failures of polling in 2016. Now they’re trying to understand some of the most important problems of polling, and how it can get better.

Were the polls really that bad?

John Lapinski: There's been a lot of criticism, obviously, that the polling didn't get it right in the 2016 election. After the election, the first thing I wanted to do was to evaluate empirically whether the polls really were much worse in 2016 than in past elections. I’ve been working with colleagues to analyze all public opinion polls at the national and state level from the 2016, 2012, and 2008 elections, including the partisan primary elections. I think it’s about 2,900 polls in total, all done within two weeks of the primary and general elections.

What we found was that the polls were not systematically worse in 2016 than in 2012 or 2008. Once all the votes were counted, Hillary Clinton ended up with a two percent popular vote margin over Donald Trump. Most national election polls were not far off, and their results fell within the margin of error.

The 2016 state general election polls were slightly less accurate than state polls in 2008 and 2012, but they were a little bit better in the 2016 primaries compared to other presidential years. So there are some areas pollsters need to focus on, but ultimately, there wasn’t a systematic failure in 2016. That doesn’t mean polling is free of problems, but the polls were not markedly worse in 2016 than in previous presidential elections.

Some are calling into question whether or not polling will work at all. That's very troubling for me because a world without public opinion polls is a world where government isn't really accountable or responsive to its citizens. They’re how you know what people want. It's my opinion that public opinion polls are pretty important for American democracy. We should always look for ways to improve them and make them better.



Why did people think the polls were so far off?

Lapinski: There’s room for growth in conveying what an estimate is and how much uncertainty surrounds it. There’s been an increase in the number of polls themselves in recent years, as well as in poll aggregators and prediction-based forecasting. And there is a lot of work for us as pollsters to do in talking correctly about what an estimate or a probability means. There is some misunderstanding about what a poll result is actually saying.

For example, one thing many people don’t understand is what the margin of error really covers. Each candidate’s estimate has its own margin of error, but to estimate a likely winner, you need the margin of error for the difference between the candidates. The number usually reported by the media is not the margin of error on that difference, which is considerably larger than the figure people see on TV or in print. This matters because the difference, who is ahead and by how much, is usually exactly what we care about when we look at polling numbers. This is a tricky concept for some people, and it is really hard to convey to viewers in a simple way. The bottom line is that the plus-or-minus we usually see attached to public opinion polls is really too small.
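
To make the arithmetic concrete, here is a minimal sketch, assuming a simple random sample and the usual normal approximation; the 48-to-46, 1,000-person poll is purely illustrative and not drawn from any specific survey.

    import math

    def moe_single(p, n, z=1.96):
        # Margin of error for a single candidate's share in a simple random sample.
        return z * math.sqrt(p * (1 - p) / n)

    def moe_difference(p1, p2, n, z=1.96):
        # Margin of error for the lead (p1 - p2) when both shares come from the
        # same sample: Var(p1 - p2) = (p1 + p2 - (p1 - p2)**2) / n for a multinomial.
        return z * math.sqrt((p1 + p2 - (p1 - p2) ** 2) / n)

    # Illustrative example: a 1,000-person poll showing 48 percent to 46 percent.
    p1, p2, n = 0.48, 0.46, 1000
    print(f"MOE for one candidate: +/- {moe_single(p1, n):.1%}")          # about +/- 3.1%
    print(f"MOE for the lead:      +/- {moe_difference(p1, p2, n):.1%}")  # about +/- 6.0%

Under these assumptions, the margin of error on the lead is roughly twice the plus-or-minus reported for a single candidate, so a two-point lead in a 1,000-person poll is well within the noise.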

“When I started my academic career, response rates were well over 50 percent. Now, they’re in the single digits.”

We have to be careful when we report on these things to not give people a false sense of confidence. They’re estimates. In 2008 and 2012, it didn't matter because the polls got the direction right both times. This is an issue we are grappling with at PORES. 

I don't want to give the sense that the polls are useless. It's just that they're not as precise as people think they are. We have to be careful about this when we're interpreting them.



Some people talk about how many Democrats versus Republicans are polled, and blame that.


Lapinski: When looking at polling aggregator sites, it’s easy to see that there was a lot of variability in how many Republicans and Democrats were taking polls, even after controlling for demographics. That's troubling because political science research suggests that partisanship is very stable. One hypothesis we are testing after this election is that when things were going well for Democrats, Democrats were more likely to take the polls, and the same for Republicans.
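
To illustrate the mechanism behind that hypothesis, here is a minimal back-of-the-envelope sketch; the electorate mix and response rates are invented assumptions, not estimates from any real poll.

    # Illustrative assumption: the true electorate is fixed at 48% D, 46% R, 6% other,
    # and partisanship does not change. Only the willingness to answer a poll changes.
    true_mix = {"D": 0.48, "R": 0.46, "I": 0.06}

    def expected_lead(response_rate):
        # Expected D-minus-R lead among respondents when the response
        # probability varies by party (ignoring sampling noise).
        weights = {party: true_mix[party] * response_rate[party] for party in true_mix}
        return (weights["D"] - weights["R"]) / sum(weights.values())

    # Equal willingness to respond: the poll recovers the true two-point Democratic lead.
    print(f"{expected_lead({'D': 0.09, 'R': 0.09, 'I': 0.09}):+.1%}")  # +2.0%

    # Republican response dips by one point (9% to 8%) after a bad news cycle:
    # the measured lead roughly triples, even though no one changed their vote.
    print(f"{expected_lead({'D': 0.09, 'R': 0.08, 'I': 0.09}):+.1%}")  # about +7.5%

Even a small, temporary gap in who picks up the phone can move the topline by several points, which is why the stability of underlying partisanship matters for interpreting those swings.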

The question then is if, for example, Republicans aren’t taking polls, does that also mean they’re not going to vote, or are they just reacting to something temporary?

The other thing that we're looking at is potential nonresponse bias, which basically means that the types of people who don’t answer polls are systematically different from those who do. Pollsters are now digging deeply into how different polling modes affect responses among Republicans and Democrats.



Are there other issues polls are facing today?

Lapinski: It's just becoming more difficult to do polling. When I started my academic career, response rates were well over 50 percent. Now, they're in the single digits. When you get rates that low, the question becomes whether the people who are taking your polls are fundamentally different from the people who aren’t taking them. If they're not, then you're fine. But as you start getting into smaller and smaller numbers of people, it’s more likely that we are going to have problems.



How do you think polling can be improved?


Lapinski: We are very interested in using what's called registration-based sampling (RBS) techniques, where you draw your sample from the voter file instead of using RDD, which is random digit dialing. Voter files are updated lists of all registered voters. When you draw a sample from the file, you know exactly who is taking your surveys, but you also know who is not taking them. This helps you understand whether you might have problems with nonresponse. You can also see who voted in the primary but not the general election, or who voted in 2008 and 2012 but didn’t vote this time. And we can explore the question of who the likely voters are and whether 2016 was different.
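
As a hypothetical illustration of the kind of check registration-based sampling makes possible, here is a short sketch; the field names (voter_id, party) are stand-ins for whatever a real voter file provides, not the actual PORES or NBC pipeline.

    from collections import Counter

    def composition(records, field):
        # Share of records in each category of `field` (for example, party registration).
        counts = Counter(record[field] for record in records)
        total = sum(counts.values())
        return {key: count / total for key, count in counts.items()}

    def nonresponse_report(sampled, respondent_ids, field="party"):
        # Because the sample was drawn from the voter file, we also know exactly
        # who did NOT respond, so the two groups can be compared directly.
        respondents = [r for r in sampled if r["voter_id"] in respondent_ids]
        nonrespondents = [r for r in sampled if r["voter_id"] not in respondent_ids]
        return {
            "respondents": composition(respondents, field),
            "nonrespondents": composition(nonrespondents, field),
        }

    # Toy example: four sampled voters, two of whom answered the survey.
    sample = [
        {"voter_id": 1, "party": "R"},
        {"voter_id": 2, "party": "D"},
        {"voter_id": 3, "party": "R"},
        {"voter_id": 4, "party": "D"},
    ]
    print(nonresponse_report(sample, respondent_ids={2, 4}))

If one party or group is consistently overrepresented among the non-respondents, that is the warning sign of nonresponse bias, and a starting point for correcting it.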

We could begin to examine whether there were certain types of people—Republicans or rural residents, for example—who are just not taking our poll and how they may differ from the people who are. We think that we can get some traction there to correct for potential problems. 

Obviously, the credibility of polling, and whether or not the 2016 results really were worse, is in the conversation right now. If it's in the public conversation, then I think we need to address it. We’re putting out articles through NBC and PORES, and trying to engage people. We’re working to educate journalists on best practices for reporting poll results. As scholars, if we can help do something better, and it's important for the public conversation, we should do that.



What have your students thought about all of this?

Lapinski: The students are highly engaged in it. You would think the interest and engagement in American politics would taper off after an election. In fact, I think it's increased. They have varying opinions on whether they like what's happening right now or don't like what's happening, but everybody seems to be highly engaged.