On a nine percent response rate


Back in 2003, when I returned to school and took my first research methods class, the professor (Valerie Sue) expressed considerable concern that response rates on telephone surveys had dropped as far as they had. I don't recall the exact rate she quoted, but I'm pretty sure she said it should be ninety percent or higher. A textbook from my second methods class notes, however, that from 1979 to 2003, phone survey response rates deteriorated from below 80 percent to under 60 percent, and that some populations are harder to reach than others,1 raising the specter of some populations being better represented in general surveys than others. Now,

Political pollsters got some good news on Monday [May 15, 2017]: The decades-long decline in Americans’ ability and willingness to participate in telephone surveys has stabilized. . . . [A] new Pew report suggests that trend line has plateaued at about 9 percent.2

This order-of-magnitude difference is supposed to be okay because "[t]here is little difference in party identification between the universe of people that participate in phone polls and those who won’t."3 But as the Politico story from which I'm drawing all these quotations explains,

Public-opinion polling is based on a simple premise: that pollsters are able to talk to a sample of Americans who represent the entire population. But what happens when the percentage who are reachable and willing to participate gets so low that those left to be interviewed differ meaningfully from the overall population?4

Fig. 1. 2016 U.S. presidential candidates on the Political Compass. Pace News, fair use.

The assumption here is that partisan affiliation is the only significant difference in the U.S. population. Which is, of course, absurd. Even the Political Compass recognizes four broad areas of political affiliation: economic left/authoritarian, economic right/authoritarian, economic left/libertarian, and economic right/libertarian (but notice where the two major party candidates in 2016 fell; see figure 1).5 My dissertation was all about ideological variations, or tendencies, among conservatives,6 and it is far from unreasonable to assume that the left is not monolithic either.

Overlay this with demographic factors such as class, race, gender identity, sexual preference, and age (I'm not even beginning to claim this is a complete list). My professor pointed out, by the way, that younger people were less likely to have landlines, and apparently "[i]t’s more expensive for pollsters to dial cellphones," so "it’s likely that only news organizations and political groups with large budgets can afford this traditional form of survey research."7 With all that layered on, a nine percent response rate isn't merely perilous but catastrophic. And pollsters can't even know reliably who they're missing, because non-respondents aren't telling them.
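The point about differential nonresponse can be made concrete with a toy simulation. This is a minimal sketch, not real polling data: the population split and the group-specific response rates below are assumptions chosen purely for illustration. If one group responds at twice the rate of another, the responding sample overstates that group's share even when the overall response rate lands near nine percent.

```python
import random

random.seed(42)

# Hypothetical population: 60% hold opinion A, 40% hold opinion B.
# Assume, purely for illustration, that B-holders respond at half
# the rate of A-holders; the blended rate works out to roughly 9-10%.
RESPONSE_RATE_A = 0.12  # assumed
RESPONSE_RATE_B = 0.06  # assumed

population = ["A"] * 60_000 + ["B"] * 40_000

# Each person independently decides whether to answer the survey,
# with a probability that depends on which opinion they hold.
responses = [
    person for person in population
    if random.random() < (RESPONSE_RATE_A if person == "A" else RESPONSE_RATE_B)
]

true_share_a = population.count("A") / len(population)
observed_share_a = responses.count("A") / len(responses)
overall_response_rate = len(responses) / len(population)

print(f"true share holding opinion A:     {true_share_a:.3f}")
print(f"observed share holding opinion A: {observed_share_a:.3f}")
print(f"overall response rate:            {overall_response_rate:.3f}")
```

Under these assumed rates, the expected observed share of A is (0.60 × 0.12) / (0.60 × 0.12 + 0.40 × 0.06) ≈ 0.75, against a true share of 0.60: a fifteen-point error that no amount of within-sample weighting on party identification alone would detect, since the skew here isn't partisan at all.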

Could this have something to do with recent polling failures, such as the Scottish independence referendum, the Brexit vote, and the 2016 U.S. presidential election? Until pollsters can show that they're getting representative samples, I have to assume that they aren't. Which is to say their results are unreliable. Which could well help to explain those failures.

But because of this allegedly good news, "it’s likely the existing polling landscape will endure for the time being."8 I'm predicting more failures.