Could we survey a nationally or sub-nationally representative sample of the population simply by using mobile phones? Mobile polling would revolutionize how civil society and human rights organizations operate, allowing researchers to ask people what they think on any given issue at a fraction of the cost of traditional face-to-face surveys, and then track changes in public opinion in near real-time.
Of course, this technique also has drawbacks—understanding both its strengths and its weaknesses will help determine when it’s appropriate and when it’s not.
Traditionally, the only way to reach a nationally representative sample of the population was to conduct a household survey: enumerators are sent around the country to carry out face-to-face interviews with a diverse cross-section of the population, selected using official census tracts. This produces accurate data (though fraud can be a concern), but it is both expensive and time-consuming. Collecting phone numbers from program beneficiaries and then conducting phone interviews—or crowdsourcing using a platform like Ushahidi—is less expensive. However, these approaches do not accurately gauge overall public opinion because they do not generate a nationally representative sample.
Accurate statistics require representative data. The key with mobile polling, then, is to harness the increased prevalence of mobile phones to reach a nationally representative sample. According to the International Telecommunication Union, there are 69 mobile cellular subscriptions per 100 inhabitants in Africa, and even some of the continent’s least-developed countries have over 50 subscriptions per 100 inhabitants. According to the World Bank, as of 2015 the Democratic Republic of Congo had 53 subscriptions per 100 people, up from 23 in 2011. This rapid growth means that, in many developing countries, mobile phone ownership is spreading into traditionally marginalized demographic groups. Mobile polling also does not require that respondents have smartphones—any mobile phone that can receive a call will suffice.
The Center for Global Development (CGD) piloted mobile polls in Afghanistan, Ethiopia, Mozambique and Zimbabwe in 2015. They used random digit dialing—i.e., calling a large number of randomly generated phone numbers—to reach a random sample of the phone-owning population. The surveys were conducted using interactive voice response (IVR) technology, which made them accessible to both literate and illiterate respondents while avoiding the costs associated with a call center.
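To illustrate what random digit dialing involves, here is a minimal sketch in Python. The country code and number length are hypothetical placeholders, not details of CGD’s actual dialing system.

```python
import random

# Hypothetical dialing frame: the country code and national number length
# below are placeholders, not the parameters CGD actually used.
COUNTRY_CODE = "+251"   # e.g., Ethiopia
NUMBER_LENGTH = 9       # digits after the country code

def random_numbers(k: int) -> list[str]:
    """Generate k random candidate phone numbers to dial."""
    return [
        COUNTRY_CODE + "".join(random.choices("0123456789", k=NUMBER_LENGTH))
        for _ in range(k)
    ]

# Many randomly generated numbers are unassigned or go unanswered, so a
# dialer generates far more candidates than the target sample size.
print(random_numbers(5))
```

Dialing randomly generated numbers, rather than a list of known contacts, is what lets the survey reach phone owners the organization has no prior relationship with.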
CGD used the first part of the survey to stratify their sample, asking questions to determine the respondents’ gender, age, education level, and socio-economic status (using asset ownership as a proxy), as well as whether the respondent lived in a rural or urban area. They then re-weighted the sample so that it reflected the target population, using the most recent census or Demographic and Health Survey as the benchmark.
This means that, when specific population groups were over-represented in the sample compared to their prevalence in the population as a whole (e.g., urban males), their answers were given less weight, while responses from under-represented populations (e.g., rural women) were given greater weight. This also allowed CGD to estimate their sample error—that is, the extent to which the re-weighted sample still differed from a truly nationally representative sample.
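To make the re-weighting step concrete, here is a minimal sketch of post-stratification weighting in Python. The group labels, sample counts and population shares are all hypothetical, and CGD’s actual weighting algorithm may differ.

```python
from collections import Counter

# Hypothetical respondents, each tagged with a stratification group
# (in practice: gender x age x education x assets x urban/rural).
sample = (["urban_male"] * 600 + ["urban_female"] * 500 +
          ["rural_male"] * 550 + ["rural_female"] * 350)   # 2,000 responses

# Hypothetical population shares from the latest census or DHS benchmark.
population_share = {
    "urban_male": 0.20, "urban_female": 0.20,
    "rural_male": 0.29, "rural_female": 0.31,
}

n = len(sample)
sample_share = {group: count / n for group, count in Counter(sample).items()}

# Post-stratification weight: population share divided by sample share.
# Over-represented groups (urban males) get weights below 1;
# under-represented groups (rural women) get weights above 1.
weights = {g: population_share[g] / sample_share[g] for g in population_share}

for group, w in sorted(weights.items()):
    print(f"{group:13s} weight = {w:.2f}")
```

Each respondent’s answers are then multiplied by their group’s weight when results are tabulated, so the weighted totals mirror the population benchmark.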
How, then, can civil society and human rights groups use mobile polling effectively?
First, mobile polls can be used to conduct issue-specific baseline assessments. One of the main challenges with advocacy campaigns, for instance, is that there is often no baseline, so there is no way to assess whether a campaign or intervention has been effective in shifting public opinion. To establish a baseline, you need to poll a representative sample of the population.
Second, it’s important that organizations not only establish a baseline, but also continue to track changes in behavior and attitudes over time. Unfortunately, this has traditionally been too expensive for most civil society organizations. With mobile polls, we can create longitudinal public opinion data tailored to the needs of local civil society. Organizations can track changes in public opinion on key issues in real time, to see if their advocacy campaigns and other interventions are successful, and if so, with which segments of the population.
Third, civil society groups can also use this polling to test different advocacy messages, to understand how to frame advocacy campaigns most effectively. For instance, an organization can run one poll using one message and a second poll using another, then measure what kind of response each generated (as sketched below). This is crucial in determining effective actions, messaging, campaigns and interventions.
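As a sketch of how the two polls might be compared, assuming purely hypothetical response counts and a standard two-proportion z-test (a common choice, not a method prescribed by CGD):

```python
import math

# Hypothetical results: respondents agreeing with the campaign's position
# after hearing message A versus message B.
agree_a, n_a = 430, 1000   # message A: 43% agreement
agree_b, n_b = 480, 1000   # message B: 48% agreement

p_a, p_b = agree_a / n_a, agree_b / n_b
p_pool = (agree_a + agree_b) / (n_a + n_b)

# Two-proportion z-test: is the observed gap larger than sampling noise?
se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
z = (p_b - p_a) / se

# |z| > 1.96 corresponds to a difference significant at the 5% level.
print(f"Message A: {p_a:.1%}, Message B: {p_b:.1%}, z = {z:.2f}")
```

If the difference clears that threshold, the better-performing message is the stronger candidate for the wider campaign.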
There are four main advantages to mobile polling. The first is that mobile polling costs considerably less than traditional methods. CGD was able to reach an effective sample size of 2,000 people in Afghanistan (which gives a margin of error of +/- 3.1%) for only $23,783; in Zimbabwe, where mobile phone penetration is higher, the cost to reach an effective sample size of 2,000 people was only $14,343. Second, researchers can deploy this method much faster than face-to-face surveys: total time from launching the survey to analyzing the results is measured in days and weeks, as opposed to months. Third, it can be used in hard-to-access environments where it’s dangerous for enumerators to operate. Finally, because of the first three advantages, it’s possible to run surveys regularly, asking the same question again and again and tracking changes in public sentiment in near real time.
However, there are also four main disadvantages. First, mobile polling obviously hinges on mobile phone penetration, so it is of limited use in countries where only a small fraction of the population has access to a mobile phone (e.g., the Central African Republic or South Sudan). Second, even in countries with relatively high mobile phone penetration, this approach is not as accurate (in terms of reaching a representative sample of the population) as traditional household surveys. In the CGD pilot surveys, sample error ranged from 2.8% in Zimbabwe to 7% in Ethiopia. Third, the number and type of questions you can ask are restricted—there is only so long that people will stay on the phone. Further, bounded questions (where respondents choose from a set menu of options) work best, as opposed to open-ended questions, which are harder to analyze. Finally, more research needs to be done on bias—do people answer questions differently over the phone than in person?
Mobile polling could revolutionize the way we collect public opinion, and in turn improve the way we are able to design and implement human rights programs—but only if we understand both the strengths and weaknesses of this approach.