On May 8, the day after the British elections, everyone saw that the pre-election polls had gone horribly wrong. What was supposed to be a close election, with the possibility of a hung Parliament, had turned into a clear Conservative victory and Parliamentary majority. Election polls have become so much a part of political life that they are expected to be accurate, which made these results quite a shock. A serious investigation is about to take place, and some are beginning to question polling itself.
Should they? And should the British polling problem make us doubt the recent use of opinion polls to measure human rights needs and attitudes?
No.
There are a number of key factors to consider when debating the utility and accuracy of polls.
Methodology
Today, face-to-face polling is essentially non-existent in Britain and in most other Western democracies, and telephone polling (which replaced it) suffers from low response rates and limited population coverage as people migrate to mobile phones. Most polls in the UK, and many elsewhere, are now conducted online, using opt-in panels that many argue cannot represent the public. Polling in less developed countries, by contrast, is likely to be conducted mostly face-to-face, making it easier (though more expensive) to ensure good sampling. That methodology has been validated over time, and it is not subject to the concerns about representativeness now facing telephone and online polls.
The coverage issues in developing countries are different: in some areas security concerns might prevent interviews, and researchers need to be sure respondents have the privacy to speak freely. They must also keep a close eye on possible interviewer effects.
Whose opinion counts?
It is often as important in the developed world to discover who is going to vote as to learn how they will vote (in the US, for example, the percentage of adults who vote, especially in Congressional elections, is often under 50%). That complexity, whether in respondent selection or in post-interview adjustments, is not necessary for human rights polling, where any post-interview weighting is likely to be limited to demographic and geographic variables.
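To make that contrast concrete, here is a minimal sketch of the kind of demographic weighting a human rights survey might rely on. The age groups, sample counts, and population shares are all hypothetical.

```python
# Minimal post-stratification sketch: weight respondents so the sample's
# demographic mix matches known population shares (hypothetical data).

from collections import Counter

# Hypothetical respondents, each tagged with an age group.
respondents = ["18-34"] * 200 + ["35-54"] * 450 + ["55+"] * 350

# Hypothetical census shares for the same age groups.
population_shares = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

sample_counts = Counter(respondents)
n = len(respondents)

# Weight = population share / sample share, per category.
weights = {
    group: population_shares[group] / (count / n)
    for group, count in sample_counts.items()
}

for group, w in sorted(weights.items()):
    print(f"{group}: weight {w:.2f}")
# Under-sampled groups (18-34 here) get weights above 1; over-sampled
# groups get weights below 1. No turnout model is needed anywhere.
```

Note that nothing here requires guessing who will turn out to vote; the weights come straight from census-style population figures.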
What answers are useful?
In most polling, “Don’t know” answers are legitimate. Sometimes they tell us more than the actual answers do. But the British election pollsters mostly attempted to allocate every respondent to a party, a complication that may have added to their errors. British polls often do not publish any “undecided” or “don’t know” percentages. Parliamentary election polls also attempt to project seats from national percentages, a difficult exercise that cannot be handled simply by translating the percentage of supporters directly into a percentage of seats. And in a multi-party system, there can be far more strategic voting than in a two-party system. What polls say becomes a factor in the voter’s decision (should one continue to support a party that will lose, or switch to a second-choice party that might win?).
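As a toy illustration of why seat projection is hard, the sketch below applies a uniform national swing, one common projection heuristic, to a handful of invented constituencies. Every party name and vote share here is hypothetical.

```python
# Toy first-past-the-post seat projection via uniform national swing
# (all constituencies and vote shares are hypothetical).

# Previous-election vote shares per constituency: {party: share}.
constituencies = [
    {"A": 0.48, "B": 0.42, "C": 0.10},
    {"A": 0.44, "B": 0.46, "C": 0.10},
    {"A": 0.36, "B": 0.42, "C": 0.22},
    {"A": 0.52, "B": 0.35, "C": 0.13},
    {"A": 0.41, "B": 0.43, "C": 0.16},
]

# National swing implied by a new poll: party A up 2 points, B down 2.
swing = {"A": +0.02, "B": -0.02, "C": 0.00}

def project_winner(shares: dict[str, float]) -> str:
    """Apply the uniform swing and return the party with the top share."""
    swung = {party: share + swing[party] for party, share in shares.items()}
    return max(swung, key=swung.get)

seats = [project_winner(c) for c in constituencies]
for party in "ABC":
    print(f"Party {party}: {seats.count(party)} of {len(seats)} seats")
# A two-point national shift flips only the seats that were already close
# (two of them here, doubling party A's total), so seat counts move in
# jumps rather than in proportion to the national vote.
```

Real projection models add constituency-level polling, regional swings, and tactical-voting assumptions, which is exactly why national percentages alone are a poor guide to seats.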
The level of precision—and judgment
In the last few years, pre-election polls have been held to a much higher standard than surveys can usually meet. Sampling error is ignored, and the expectation is that polls will match the election outcome precisely. Aggregator websites have propagated this notion by combining all polls to create what they hope will be an estimate less subject to error, with one poll’s mistakes cancelling out another’s. As seen in the UK, that doesn’t always work. But polling on human rights issues is not subject to the same unrealistic expectations.
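For a sense of scale, here is a quick sketch of the 95% sampling margin of error on a single poll and on a pooled average of several polls. The poll results are hypothetical, and the calculation assumes simple random sampling.

```python
# Margin of error (95%) for one poll and for a simple poll average
# (hypothetical poll results; assumes simple random sampling).

import math

def margin_of_error(share: float, n: int) -> float:
    """95% margin of error for a proportion under simple random sampling."""
    return 1.96 * math.sqrt(share * (1 - share) / n)

# One poll: 34% support, 1,000 respondents -> roughly +/- 3 points.
print(f"Single poll: +/- {margin_of_error(0.34, 1000):.1%}")

# Pooling five such polls shrinks the random sampling error...
polls = [(0.34, 1000), (0.33, 1000), (0.36, 1000), (0.35, 1000), (0.32, 1000)]
avg = sum(share for share, _ in polls) / len(polls)
total_n = sum(n for _, n in polls)
print(f"Average of five: {avg:.1%} +/- {margin_of_error(avg, total_n):.1%}")
# ...but a bias shared by every poll (e.g., a skewed opt-in panel)
# survives the averaging untouched, as the 2015 UK polls showed.
```

Averaging can only cancel random error; a systematic error common to all the polls is reproduced, not reduced, by the aggregate.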
[Image: Flickr/David Erickson (Some rights reserved). Caption: Many of the variables that obscure political polling are not applicable when measuring public opinion on human rights.]
Whenever a pre-election poll is in error, there is much hand-wringing. That happened in 1948, a year immortalized by the image of US president Harry Truman, who had just won the election, holding up a copy of the Chicago Tribune declaring, “Dewey Defeats Truman.” That image is revived nearly every time a pre-election poll is in error, including after this month’s British Parliamentary election. Usually an investigation follows. In 1948, the Social Science Research Council took on that poll investigation. The British Polling Council will investigate this year’s polls, as the Market Research Society did after a polling mistake more than 20 years ago.
Of course, the polling landscape in Britain is quite different today from what it was in 1992, and even more different from the polling landscape in developing countries. Twenty years ago, polling methods in the UK had been unchanged since the 1970s: pollsters used both in-person and telephone interviews, and some of those in-person polls were conducted on the street. In addition, the Market Research Society study found a differential response rate between supporters of the competing parties, and the effort to categorize all possible voters included some who didn’t end up voting, as well as others who voted differently from what had been predicted. The impact was to underestimate Conservative support, giving birth to the notion of the “Shy Tory”: a Conservative voter reluctant to admit their intention.
The differences between then and now, and between pre-election polls and human rights polling, are enormous, and they are both methodological and structural. They include the way the polls are conducted, the purpose of the polls, and the inclusion of everyone, not just those who might cast a vote in the next election. The investigation of the British polls will take some time (the full 1992 investigation wasn’t released until 1994), but preliminary results should be available much sooner. Whatever the findings, they shouldn’t stop efforts to use polls to discover and understand human rights issues, especially in terms of who is affected and how they can be helped. If the UK results stifle such efforts, we will all lose.