Survey Research

"Anchoring Vignette" methods for when different respondents (perhaps from different cultures, countries, or ethnic groups) understand survey questions in different ways; an approach to developing theoretical definitions of complicated concepts apparently definable only by example (i.e., "you know it when you see it"); how surveys work.

Anchoring Vignettes

Methods for when different respondents (perhaps from different cultures, countries, or ethnic groups), or respondents and investigators, understand survey questions in different ways. Also includes an approach to developing theoretical definitions of complicated concepts apparently definable only by example (i.e., "you know it when you see it").
Improving Anchoring Vignettes: Designing Surveys to Correct Interpersonal Incomparability
Hopkins, Daniel, and Gary King. 2010. Improving Anchoring Vignettes: Designing Surveys to Correct Interpersonal Incomparability. Public Opinion Quarterly: 1–22. Abstract
We report the results of several randomized survey experiments designed to evaluate two intended improvements to anchoring vignettes, an increasingly common technique used to achieve interpersonal comparability in survey research.  This technique asks for respondent self-assessments followed by assessments of hypothetical people described in vignettes. Variation in assessments of the vignettes across respondents reveals interpersonal incomparability and allows researchers to make responses more comparable by rescaling them. Our experiments show, first, that switching the question order so that self-assessments follow the vignettes primes respondents to define the response scale in a common way.  In this case, priming is not a bias to avoid but a means of better communicating the question’s meaning.  We then demonstrate that combining vignettes and self-assessments in a single direct comparison induces inconsistent and less informative responses.  Since similar combined strategies are widely employed for related purposes, our results indicate that anchoring vignettes could reduce measurement error in many applications where they are not currently used.  Data for our experiments come from a national telephone survey and a separate on-line survey.
Enhancing the Validity and Cross-cultural Comparability of Measurement in Survey Research
The original article that lays out the idea, develops the basic models, and gives examples. King, Gary, Christopher JL Murray, Joshua A Salomon, and Ajay Tandon. 2004. Enhancing the Validity and Cross-cultural Comparability of Measurement in Survey Research. American Political Science Review 98: 191–207. Abstract
We address two long-standing survey research problems: measuring complicated concepts, such as political freedom or efficacy, that researchers define best with reference to examples, and determining what to do when respondents interpret identical questions in different ways. Scholars have long addressed these problems with approaches intended to reduce incomparability, such as writing more concrete questions – with uneven success. Our alternative is to measure response category incomparability directly and to correct for it. We measure incomparability via respondents’ assessments, on the same scale as the self-assessments to be corrected, of hypothetical individuals described in short vignettes. Since the actual levels of the vignettes are invariant over respondents, variability in vignette answers reveals incomparability. Our corrections require either simple recodes or a statistical model designed to save survey administration costs. With analysis, simulations, and cross-national surveys, we show how response incomparability can drastically mislead survey researchers and how our approach can fix it.
Comparing Incomparable Survey Responses: New Tools for Anchoring Vignettes
Develops methods for selecting vignettes and new, simpler, nonparametric methods requiring fewer assumptions for analyzing anchoring vignettes data. King, Gary, and Jonathan Wand. 2007. Comparing Incomparable Survey Responses: New Tools for Anchoring Vignettes. Political Analysis 15: 46–66. Abstract
When respondents use the ordinal response categories of standard survey questions in different ways, analyses based on the resulting data can be biased. Anchoring vignettes is a survey design technique, introduced by King, Murray, Salomon, and Tandon (2004), intended to correct for some of these problems. We develop new methods both for evaluating and choosing anchoring vignettes, and for analyzing the resulting data. With surveys on a diverse range of topics in a range of countries, we illustrate how our proposed methods can improve the ability of anchoring vignettes to extract information from survey data, while also saving survey administration costs.
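The core of the nonparametric approach described above is a simple recode: each respondent's self-assessment is located relative to that same respondent's answers to the vignette questions, placing everyone on a common scale defined by the vignettes. A minimal sketch in Python (the function name and coding are illustrative, not the published implementation; it assumes each respondent answers all vignettes and gives strictly ordered vignette responses, so the ties and order violations that the published method handles with interval-valued answers are not treated here):

```python
def vignette_recode(self_assessment, vignette_responses):
    """Recode a self-assessment relative to a respondent's own vignette
    answers, in the spirit of King and Wand (2007).

    self_assessment: the respondent's ordinal answer about themselves.
    vignette_responses: that respondent's ordinal answers to the vignettes,
        listed from the lowest to the highest actual level, and assumed
        strictly increasing (no ties or order violations in this sketch).

    Returns a position from 1 (below the lowest vignette) to 2*J + 1
    (above the highest of J vignettes) on the vignette-defined scale.
    """
    position = 1
    for z in vignette_responses:
        if self_assessment > z:
            position += 2      # clearly above this vignette
        elif self_assessment == z:
            position += 1      # tied with this vignette
            break
        else:
            break              # below this vignette; stop scanning
    return position
```

With J vignettes this yields 2J + 1 response categories: below, tied with, or between each pair of adjacent vignettes, which is what makes answers comparable across respondents who use the raw scale differently.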

Many more details, examples, videos, software, etc. can be found at The Anchoring Vignettes Website: HTML

Software

Anchors: Software for Anchoring Vignettes Data
Wand, Jonathan, Gary King, and Olivia Lau. 2011. Anchors: Software for Anchoring Vignettes Data. Journal of Statistical Software 42, no. 3: 1–25. Website. Abstract
When respondents use the ordinal response categories of standard survey questions in different ways, analyses based on the resulting data can be biased. Anchoring vignettes is a survey design technique intended to correct for some of these problems. The anchors package in R includes methods for evaluating and choosing anchoring vignettes, and for analyzing the resulting data.

How Surveys Work

Why are American Presidential Election Campaign Polls so Variable when Votes are so Predictable?
Resolution of a paradox in the study of American voting behavior. Gelman, Andrew, and Gary King. 1993. Why are American Presidential Election Campaign Polls so Variable when Votes are so Predictable? British Journal of Political Science 23: 409–451. Abstract
As most political scientists know, the outcome of the U.S. Presidential election can be predicted within a few percentage points (in the popular vote), based on information available months before the election. Thus, the general election campaign for president seems irrelevant to the outcome (except in very close elections), despite all the media coverage of campaign strategy. However, it is also well known that the pre-election opinion polls can vary wildly over the campaign, and this variation is generally attributed to events in the campaign. How can campaign events affect people’s opinions on whom they plan to vote for, and yet not affect the outcome of the election? For that matter, why do voters consistently increase their support for a candidate during his nominating convention, even though the conventions are almost entirely predictable events whose effects can be rationally forecast? In this exploratory study, we consider several intuitively appealing, but ultimately wrong, resolutions to this puzzle, and discuss our current understanding of what causes opinion polls to fluctuate and yet reach a predictable outcome. Our evidence is based on graphical presentation and analysis of over 67,000 individual-level responses from forty-nine commercial polls during the 1988 campaign and many other aggregate poll results from the 1952–1992 campaigns. We show that responses to pollsters during the campaign are not generally informed or even, in a sense we describe, "rational." In contrast, voters decide which candidate to eventually support based on their enlightened preferences, as formed by the information they have learned during the campaign, as well as basic political cues such as ideology and party identification. We cannot prove this conclusion, but we do show that it is consistent with the aggregate forecasts and individual-level opinion poll responses.
Based on the enlightened preferences hypothesis, we conclude that the news media have an important effect on the outcome of Presidential elections–-not due to misleading advertisements, sound bites, or spin doctors, but rather by conveying candidates’ positions on important issues.
Pre-Election Survey Methodology: Details From Nine Polling Organizations, 1988 and 1992
Voss, Steven D, Andrew Gelman, and Gary King. 1995. Pre-Election Survey Methodology: Details From Nine Polling Organizations, 1988 and 1992. Public Opinion Quarterly 59: 98–132. Abstract
Before every presidential election, journalists, pollsters, and politicians commission dozens of public opinion polls. Although the primary function of these surveys is to forecast the election winners, they also generate a wealth of political data valuable even after the election. These preelection polls are useful because they are conducted with such frequency that they allow researchers to study change in estimates of voter opinion within very narrow time increments (Gelman and King 1993). Additionally, so many are conducted that the cumulative sample size of these polls is large enough to construct aggregate measures of public opinion within small demographic or geographical groupings (Wright, Erikson, and McIver 1985).

Related Research

Imputing Missing Data due to survey nonresponse: Website

Analyzing Rare Events, including rare survey outcomes and alternative methods of sampling for rare events: Website

Estimating Mortality by Survey using surveys of siblings or other groups, as well as methods designed for estimating cause-specific mortality that apply more generally to extrapolating from one population to another: Website