Publications by Type: Journal Article

2008
The Future of Partisan Symmetry as a Judicial Test for Partisan Gerrymandering after LULAC v. Perry
Bernard Grofman and Gary King. 2008. “The Future of Partisan Symmetry as a Judicial Test for Partisan Gerrymandering after LULAC v. Perry.” Election Law Journal, 6, 1, Pp. 2-35.

While the Supreme Court in Davis v. Bandemer found partisan gerrymandering to be justiciable, no challenged redistricting plan in the subsequent 20 years has been held unconstitutional on partisan grounds. Then, in Vieth v. Jubelirer, five justices concluded that some standard might be adopted in a future case, if a manageable rule could be found. When gerrymandering next came before the Court, in LULAC v. Perry, we, along with our colleagues, filed an Amicus Brief (King et al., 2005), proposing the test be based in part on the partisan symmetry standard. Although the issue was not resolved, our proposal was discussed and positively evaluated in three of the opinions, including the plurality judgment, and for the first time for any proposal the Court gave a clear indication that a future legal test for partisan gerrymandering will likely include partisan symmetry. A majority of Justices now appear to endorse the view that the measurement of partisan symmetry may be used in partisan gerrymandering claims as “a helpful (though certainly not talismanic) tool” (Justice Stevens, joined by Justice Breyer), provided one recognizes that “asymmetry alone is not a reliable measure of unconstitutional partisanship” and possibly that the standard would be applied only after at least one election has been held under the redistricting plan at issue (Justice Kennedy, joined by Justices Souter and Ginsburg). We use this essay to respond to the request of Justices Souter and Ginsburg that “further attention … be devoted to the administrability of such a criterion at all levels of redistricting and its review.” Building on our previous scholarly work, our Amicus Brief, the observations of these five Justices, and a supporting consensus in the academic literature, we offer here a social science perspective on the conceptualization and measurement of partisan gerrymandering and the development of relevant legal rules based on what is effectively the Supreme Court’s open invitation to lower courts to revisit these issues in the light of LULAC v. Perry.

Misunderstandings Among Experimentalists and Observationalists about Causal Inference
Kosuke Imai, Gary King, and Elizabeth Stuart. 2008. “Misunderstandings Among Experimentalists and Observationalists about Causal Inference.” Journal of the Royal Statistical Society, Series A, 171, part 2, Pp. 481–502.

We attempt to clarify, and suggest how to avoid, several serious misunderstandings about and fallacies of causal inference in experimental and observational research. These issues concern some of the most fundamental advantages and disadvantages of each research design. Problems include improper use of hypothesis tests for covariate balance between the treated and control groups, and the consequences of using randomization, blocking before randomization, and matching after treatment assignment to achieve covariate balance. Applied researchers in a wide range of scientific disciplines seem to fall prey to one or more of these fallacies, and as a result make suboptimal design or analysis choices. To clarify these points, we derive a new four-part decomposition of the key estimation errors in making causal inferences. We then show how this decomposition can help scholars from different experimental and observational research traditions better understand each other’s inferential problems and attempted solutions.
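In outline, the decomposition separates estimation error into components from sample selection (S) and treatment imbalance (T), each split into parts due to observed (X) and unobserved (u) covariates; the notation below is our shorthand for the paper's terms:

    \Delta = \Delta_{S_X} + \Delta_{S_u} + \Delta_{T_X} + \Delta_{T_u}

Random sampling addresses the selection terms in expectation, randomization of treatment does the same for the imbalance terms, blocking eliminates \Delta_{T_X} exactly, and matching reduces it, which is why experimentalists and observationalists often talk past each other about which errors matter.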

Ordinary Economic Voting Behavior in the Extraordinary Election of Adolf Hitler
Gary King, Ori Rosen, Martin Tanner, and Alexander Wagner. 2008. “Ordinary Economic Voting Behavior in the Extraordinary Election of Adolf Hitler.” Journal of Economic History, 68, 4, Pp. 951–996.

The enormous Nazi voting literature rarely builds on modern statistical or economic research. By adding these approaches, we find that the most widely accepted existing theories of this era cannot distinguish the Weimar elections from almost any others in any country. Via a retrospective voting account, we show that voters most hurt by the depression, and most likely to oppose the government, fall into separate groups with divergent interests. This explains why some turned to the Nazis and others turned away. The consequences of Hitler's election were extraordinary, but the voting behavior that led to it was not.

Toward A Common Framework for Statistical Analysis and Development
Kosuke Imai, Gary King, and Olivia Lau. 2008. “Toward A Common Framework for Statistical Analysis and Development.” Journal of Computational and Graphical Statistics, 17, Pp. 1–22.
We describe some progress toward a common framework for statistical analysis and software development built on and within the R language, including R’s numerous existing packages. The framework we have developed offers a simple unified structure and syntax that can encompass a large fraction of statistical procedures already implemented in R, without requiring any changes in existing approaches. We conjecture that it can be used to encompass and present simply a vast majority of existing statistical methods, regardless of the theory of inference on which they are based, notation with which they were developed, and programming syntax with which they have been implemented. This development enabled us, and should enable others, to design statistical software with a single, simple, and unified user interface that helps overcome the conflicting notation, syntax, jargon, and statistical methods existing across the methods subfields of numerous academic disciplines. The approach also enables one to build a graphical user interface that automatically includes any method encompassed within the framework. We hope that the result of this line of research will greatly reduce the time from the creation of a new statistical innovation to its widespread use by applied researchers whether or not they use or program in R.
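The unified structure and syntax described above is implemented in the authors' Zelig package for R; below is a minimal sketch of the estimate/set/simulate workflow, assuming Zelig's standard interface (the dataset and variable names come from an example shipped with the package):

    library(Zelig)
    data(turnout)                                    # example data included with Zelig
    z.out <- zelig(vote ~ age + educate, model = "logit", data = turnout)
    x.out <- setx(z.out, educate = 12)               # choose covariate values of interest
    s.out <- sim(z.out, x = x.out)                   # simulate quantities of interest
    summary(s.out)

Because every estimator is wrapped in the same three calls, swapping "logit" for another model name changes the method without changing the rest of the script, which is the single unified user interface the abstract describes.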
Verbal Autopsy Methods with Multiple Causes of Death
Gary King and Ying Lu. 2008. “Verbal Autopsy Methods with Multiple Causes of Death.” Statistical Science, 23, Pp. 78–91.
Verbal autopsy procedures are widely used for estimating cause-specific mortality in areas without medical death certification. Data on symptoms reported by caregivers along with the cause of death are collected from a medical facility, and the cause-of-death distribution is estimated in the population where only symptom data are available. Current approaches analyze only one cause at a time, involve assumptions judged difficult or impossible to satisfy, and require expensive, time-consuming, or unreliable physician reviews, expert algorithms, or parametric statistical models. By generalizing current approaches to analyze multiple causes, we show how most of the difficult assumptions underlying existing methods can be dropped. These generalizations also make physician review, expert algorithms, and parametric statistical assumptions unnecessary. With theoretical results and empirical analyses of data from China and Tanzania, we illustrate the accuracy of this approach. While no method of analyzing verbal autopsy data, including the more computationally intensive approach offered here, can give accurate estimates in all circumstances, the procedure offered is conceptually simpler, less expensive, more general, as or more replicable, and easier to use in practice than existing approaches. We also show how our focus on estimating aggregate proportions, which are the quantities of primary interest in verbal autopsy studies, may also greatly reduce the assumptions necessary for, and thus improve the performance of, many individual classifiers in this and other areas. As a companion to this paper, we also offer easy-to-use software that implements the methods discussed herein.
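The aggregate estimation idea can be sketched as a constrained linear inverse problem: the population distribution of symptom profiles equals the matrix of profile probabilities given each cause (estimated from hospital data) times the unknown vector of cause-of-death fractions. A toy R illustration with simulated data follows; this is a sketch of the idea only, not the authors' companion software, which adds subsampling of symptom profiles and other refinements:

    library(quadprog)
    set.seed(1)
    J <- 50                                   # number of symptom profiles
    D <- 5                                    # number of causes of death

    # P(S | D): symptom-profile distribution under each cause, from hospital data
    PS_given_D <- matrix(rexp(J * D), J, D)
    PS_given_D <- sweep(PS_given_D, 2, colSums(PS_given_D), "/")

    pi_true <- c(.40, .25, .20, .10, .05)     # true cause-of-death fractions
    PS <- PS_given_D %*% pi_true              # population symptom distribution

    # Solve: min || PS - PS_given_D %*% pi ||^2  s.t.  sum(pi) = 1, pi >= 0
    Dmat <- crossprod(PS_given_D)
    dvec <- drop(crossprod(PS_given_D, PS))
    Amat <- cbind(rep(1, D), diag(D))         # equality constraint first, then pi >= 0
    bvec <- c(1, rep(0, D))
    fit <- solve.QP(Dmat, dvec, Amat, bvec, meq = 1)
    round(fit$solution, 3)                    # recovers pi_true

Note that the estimand is the vector of aggregate proportions itself; no individual death is classified, which is how the method drops the assumptions individual classifiers require.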
2007
Comparing Incomparable Survey Responses: New Tools for Anchoring Vignettes
Gary King and Jonathan Wand. 2007. “Comparing Incomparable Survey Responses: New Tools for Anchoring Vignettes.” Political Analysis, 15, Pp. 46-66.

When respondents use the ordinal response categories of standard survey questions in different ways, analyses based on the resulting data can be biased. Anchoring vignettes is a survey design technique, introduced by King, Murray, Salomon, and Tandon (2004), intended to correct for some of these problems. We develop new methods both for evaluating and choosing anchoring vignettes, and for analyzing the resulting data. With surveys on a diverse range of topics in a range of countries, we illustrate how our proposed methods can improve the ability of anchoring vignettes to extract information from survey data, as well as reduce survey administration costs.

Gary King and Langche Zeng. 2007. “Detecting Model Dependence in Statistical Inference: A Response.” International Studies Quarterly, 51, Pp. 231-241.

Inferences about counterfactuals are essential for prediction, answering "what if" questions, and estimating causal effects. However, when the counterfactuals posed are too far from the data at hand, conclusions drawn from well-specified statistical analyses become based on speculation and convenient but indefensible model assumptions rather than empirical evidence. Unfortunately, standard statistical approaches assume the veracity of the model rather than revealing the degree of model-dependence, and so this problem can be hard to detect. We develop easy-to-apply methods to evaluate counterfactuals that do not require sensitivity testing over specified classes of models. If an analysis fails the tests we offer, then we know that substantive results are sensitive to at least some modeling choices that are not based on empirical evidence. We use these methods to evaluate the extensive scholarly literatures on the effects of changes in the degree of democracy in a country (on any dependent variable) and separate analyses of the effects of UN peacebuilding efforts. We find evidence that many scholars are inadvertently drawing conclusions based more on modeling hypotheses than on their data. For some research questions, history contains insufficient information to be our guide.

An Introduction to the Dataverse Network as an Infrastructure for Data Sharing
Gary King. 2007. “An Introduction to the Dataverse Network as an Infrastructure for Data Sharing.” Sociological Methods and Research, 36, Pp. 173–199.

We introduce a set of integrated developments in web application software, networking, data citation standards, and statistical methods designed to put some of the universe of data and data sharing practices on somewhat firmer ground. We have focused on social science data, but aspects of what we have developed may apply more widely. The idea is to facilitate the public distribution of persistent, authorized, and verifiable data, with powerful but easy-to-use technology, even when the data are confidential or proprietary. We intend to solve some of the sociological problems of data sharing via technological means, with the result intended to benefit both the scientific community and the sometimes apparently contradictory goals of individual researchers.

Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference
Daniel Ho, Kosuke Imai, Gary King, and Elizabeth Stuart. 2007. “Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference.” Political Analysis, 15, Pp. 199–236.

Although published works rarely include causal estimates from more than a few model specifications, authors usually choose the presented estimates from numerous trial runs readers never see. Given the often large variation in estimates across choices of control variables, functional forms, and other modeling assumptions, how can researchers ensure that the few estimates presented are accurate or representative? How do readers know that publications are not merely demonstrations that it is possible to find a specification that fits the author’s favorite hypothesis? And how do we evaluate or even define statistical properties like unbiasedness or mean squared error when no unique model or estimator even exists? Matching methods, which offer the promise of causal inference with fewer assumptions, constitute one possible way forward, but crucial results in this fast-growing methodological literature are often grossly misinterpreted. We explain how to avoid these misinterpretations and propose a unified approach that makes it possible for researchers to preprocess data with matching (such as with the easy-to-use software we offer) and then to apply the best parametric techniques they would have used anyway. This procedure makes parametric models produce more accurate and considerably less model-dependent causal inferences.
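The easy-to-use software mentioned above is the authors' MatchIt package; a minimal sketch of the preprocess-then-analyze workflow on simulated data (the variable names and the toy outcome model are invented for illustration):

    library(MatchIt)
    set.seed(2)
    n <- 500
    x1 <- rnorm(n); x2 <- rnorm(n)
    treat <- rbinom(n, 1, plogis(x1 - x2))            # treatment depends on covariates
    y <- 2 * treat + x1 + x2 + rnorm(n)
    d <- data.frame(y, treat, x1, x2)

    # Step 1: preprocess with matching (here, nearest neighbor on the propensity score)
    m.out <- matchit(treat ~ x1 + x2, data = d, method = "nearest")

    # Step 2: run the parametric model you would have used anyway, on the matched data
    m.data <- match.data(m.out)
    fit <- lm(y ~ treat + x1 + x2, data = m.data, weights = weights)
    summary(fit)$coefficients["treat", ]

The division of labor is the point: matching prunes and balances the data nonparametrically, and the familiar parametric model then adjusts for whatever imbalance remains, so conclusions depend far less on the model specification.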

A Proposed Standard for the Scholarly Citation of Quantitative Data
Micah Altman and Gary King. 2007. “A Proposed Standard for the Scholarly Citation of Quantitative Data.” D-Lib Magazine, 13.

An essential aspect of science is a community of scholars cooperating and competing in the pursuit of common goals. A critical component of this community is the common language of and the universal standards for scholarly citation, credit attribution, and the location and retrieval of articles and books. We propose a similar universal standard for citing quantitative data that retains the advantages of print citations, adds other components made possible by, and needed due to, the digital form and systematic nature of quantitative data sets, and is consistent with most existing subfield-specific approaches. Although the digital library field includes numerous creative ideas, we limit ourselves to only those elements that appear ready for easy practical use by scientists, journal editors, publishers, librarians, and archivists.
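For concreteness, here is a hypothetical citation with the elements the article proposes: a conventional author-date-title reference extended with a persistent global identifier and a universal numeric fingerprint (UNF); the author, handle, and fingerprint below are invented for illustration:

    Jane Doe, 2006, "Replication Data for: An Example Study",
    hdl:1902.1/99999, UNF:3:AbCdEfGh1234567890ijKL==

The persistent identifier resolves to the data's current location even if the data move, and the UNF lets a reader verify that the dataset retrieved is numerically identical to the one the author cited.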

A "Politically Robust" Experimental Design for Public Policy Evaluation, with Application to the Mexican Universal Health Insurance Program
Gary King, Emmanuela Gakidou, Nirmala Ravishankar, Ryan T Moore, Jason Lakin, Manett Vargas, Martha María Téllez-Rojo, Juan Eugenio Hernández Ávila, Mauricio Hernández Ávila, and Héctor Hernández Llamas. 2007. “A "Politically Robust" Experimental Design for Public Policy Evaluation, with Application to the Mexican Universal Health Insurance Program.” Journal of Policy Analysis and Management, 26, Pp. 479-506.

We develop an approach to conducting large-scale randomized public policy experiments intended to be more robust to the political interventions that have ruined some or all parts of many similar previous efforts. Our proposed design is insulated from selection bias in some circumstances even if we lose observations, our inferences can still be unbiased even if politics disrupts any two of the three steps in our analytical procedures, and other empirical checks are available to validate the overall design. We illustrate with a design and empirical validation of an evaluation of the Mexican Seguro Popular de Salud (Universal Health Insurance) program we are conducting. Seguro Popular, which is intended to grow to provide medical care, drugs, preventative services, and financial health protection to the 50 million Mexicans without health insurance, is one of the largest health reforms of any country in the last two decades. The evaluation is also large scale, constituting one of the largest policy experiments to date and what may be the largest randomized health policy experiment ever.
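One way to see the robustness claim: if treatment is randomized within matched pairs of similar units, a unit lost to political interference costs only its pair-mate, leaving the remaining comparisons intact. A toy R sketch of matched-pair cluster randomization, the design we understand the evaluation to build on (the clusters and the pairing covariate are invented):

    set.seed(3)
    clusters <- data.frame(id = 1:100, poverty = runif(100))
    clusters <- clusters[order(clusters$poverty), ]      # sort by a background covariate
    clusters$pair <- rep(1:50, each = 2)                 # adjacent clusters form matched pairs
    coin <- rbinom(50, 1, 0.5)                           # one coin flip per pair
    within <- unlist(lapply(coin, function(z) if (z == 1) c(1, 0) else c(0, 1)))
    clusters$treated <- within == 1
    table(tapply(clusters$treated, clusters$pair, sum))  # exactly one treated unit per pair

If a pair drops out entirely, the randomization within the surviving pairs is untouched, which is the sense in which the design is insulated from some forms of politically induced attrition.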

When Can History Be Our Guide? The Pitfalls of Counterfactual Inference
Gary King and Langche Zeng. 2007. “When Can History Be Our Guide? The Pitfalls of Counterfactual Inference.” International Studies Quarterly, 51, Pp. 183-210.
Inferences about counterfactuals are essential for prediction, answering "what if" questions, and estimating causal effects. However, when the counterfactuals posed are too far from the data at hand, conclusions drawn from well-specified statistical analyses become based on speculation and convenient but indefensible model assumptions rather than empirical evidence. Unfortunately, standard statistical approaches assume the veracity of the model rather than revealing the degree of model-dependence, and so this problem can be hard to detect. We develop easy-to-apply methods to evaluate counterfactuals that do not require sensitivity testing over specified classes of models. If an analysis fails the tests we offer, then we know that substantive results are sensitive to at least some modeling choices that are not based on empirical evidence. We use these methods to evaluate the extensive scholarly literatures on the effects of changes in the degree of democracy in a country (on any dependent variable) and separate analyses of the effects of UN peacebuilding efforts. We find evidence that many scholars are inadvertently drawing conclusions based more on modeling hypotheses than on their data. For some research questions, history contains insufficient information to be our guide.
2006
The Dangers of Extreme Counterfactuals
Gary King and Langche Zeng. 2006. “The Dangers of Extreme Counterfactuals.” Political Analysis, 14, Pp. 131–159.
We address the problem that occurs when inferences about counterfactuals – predictions, "what if" questions, and causal effects – are attempted far from the available data. The danger of these extreme counterfactuals is that substantive conclusions drawn from statistical models that fit the data well turn out to be based largely on speculation hidden in convenient modeling assumptions that few would be willing to defend. Yet existing statistical strategies provide few reliable means of identifying extreme counterfactuals. We offer a proof that inferences farther from the data are more model-dependent, and then develop easy-to-apply methods to evaluate how model-dependent our answers would be to specified counterfactuals. These methods require neither sensitivity testing over specified classes of models nor evaluating any specific modeling assumptions. If an analysis fails the simple tests we offer, then we know that substantive results are sensitive to at least some modeling choices that are not based on empirical evidence.
Death by Survey: Estimating Adult Mortality without Selection Bias from Sibling Survival Data
Emmanuela Gakidou and Gary King. 2006. “Death by Survey: Estimating Adult Mortality without Selection Bias from Sibling Survival Data.” Demography, 43, Pp. 569–585.
The widely used methods for estimating adult mortality rates from sample survey responses about the survival of siblings, parents, spouses, and others depend crucially on an assumption that we demonstrate does not hold in real data. We show that when this assumption is violated – so that the mortality rate varies with sibship size – mortality estimates can be massively biased. By using insights from work on the statistical analysis of selection bias, survey weighting, and extrapolation problems, we propose a new and relatively simple method of recovering the mortality rate with both greatly reduced potential for bias and increased clarity about the source of necessary assumptions.
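The core selection problem has a one-line statement (a sketch of the logic, not the authors' full estimator): a sibship with s surviving members can be reported by any of its s survivors, so it enters the sample with probability proportional to s. Reweighting each reported sibship by 1/s undoes the overrepresentation of high-survival sibships, except for sibships with s = 0, which never appear in the sample at all and whose mortality must therefore be extrapolated, for example from the pattern across sibships with s = 1, 2, 3, and so on.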
Publication, Publication
Gary King. 2006. “Publication, Publication.” PS: Political Science and Politics, 39, Pp. 119–125.

I show herein how to write a publishable paper by beginning with the replication of a published article. This strategy seems to work well for class projects in producing papers that ultimately get published, helping to professionalize students into the discipline, and teaching them the scientific norms of the free exchange of academic information. I begin by briefly revisiting the prominent debate on replication our discipline had a decade ago and some of the progress made in data sharing since.

2005
The Supreme Court During Crisis: How War Affects only Non-War Cases
Lee Epstein, Daniel E Ho, Gary King, and Jeffrey A Segal. 2005. “The Supreme Court During Crisis: How War Affects only Non-War Cases.” New York University Law Review, 80, Pp. 1–116.
Does the U.S. Supreme Court curtail rights and liberties when the nation’s security is under threat? In hundreds of articles and books, and with renewed fervor since September 11, 2001, members of the legal community have warred over this question. Yet, not a single large-scale, quantitative study exists on the subject. Using the best data available on the causes and outcomes of every civil rights and liberties case decided by the Supreme Court over the past six decades and employing methods chosen and tuned especially for this problem, our analyses demonstrate that when crises threaten the nation’s security, the justices are substantially more likely to curtail rights and liberties than when peace prevails. Yet paradoxically, and in contradiction to virtually every theory of crisis jurisprudence, war appears to affect only cases that are unrelated to the war. For these cases, the effect of war and other international crises is so substantial, persistent, and consistent that it may surprise even those commentators who long have argued that the Court rallies around the flag in times of crisis. On the other hand, we find no evidence that cases most directly related to the war are affected. We attempt to explain this seemingly paradoxical evidence with one unifying conjecture: Instead of balancing rights and security in high stakes cases directly related to the war, the Justices retreat to ensuring the institutional checks of the democratic branches. Since rights-oriented and process-oriented dimensions seem to operate in different domains and at different times, and often suggest different outcomes, the predictive factors that work for cases unrelated to the war fail for cases related to the war. If this conjecture is correct, federal judges should consider giving less weight to legal principles outside of wartime but established during wartime, and attorneys should see it as their responsibility to distinguish cases along these lines.
Heather Stoll, Gary King, and Langche Zeng. 2005. “WhatIf: Software for Evaluating Counterfactuals.” Journal of Statistical Software, 15, 4, Pp. 1–18.

This article describes WhatIf: Software for Evaluating Counterfactuals, an R package that implements the methods for evaluating counterfactuals introduced in King and Zeng (2006a) and King and Zeng (2006b). It offers easy-to-use techniques for assessing a counterfactual’s model dependence without having to conduct sensitivity testing over specified classes of models. These same methods can be used to approximate the common support of the treatment and control groups in causal inference.
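A minimal usage sketch, assuming the package's documented interface: whatif() takes the observed covariate data and the counterfactual points to evaluate (the data below are invented, with one interpolation and one extreme extrapolation):

    library(WhatIf)
    set.seed(4)
    X  <- data.frame(x1 = rnorm(100), x2 = rnorm(100), x3 = rnorm(100))
    cf <- data.frame(x1 = c(0, 10), x2 = c(0, 10), x3 = c(0, 10))
    wi <- whatif(data = X, cfact = cf)
    summary(wi)     # reports, e.g., whether each counterfactual is in the convex hull
    wi$in.hull      # TRUE for the nearby point, FALSE for the far-away one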

2004
Did Illegal Overseas Absentee Ballots Decide the 2000 U.S. Presidential Election?
Kosuke Imai and Gary King. 2004. “Did Illegal Overseas Absentee Ballots Decide the 2000 U.S. Presidential Election?” Perspectives on Politics, 2, Pp. 537–549.

Although not widely known until much later, Al Gore received 202 more votes than George W. Bush on election day in Florida. George W. Bush is president because he overcame his election day deficit with overseas absentee ballots that arrived and were counted after election day. In the final official tally, Bush received 537 more votes than Gore. These numbers are taken from the official results released by the Florida Secretary of State's office and so do not reflect overvotes, undervotes, unsuccessful litigation, butterfly ballot problems, recounts that might have been allowed but were not, or any other hypothetical divergence between voter preferences and counted votes. After the election, the New York Times conducted a six-month-long investigation and found that 680 of the overseas absentee ballots were illegally counted, and no partisan, pundit, or academic has publicly disagreed with their assessment. In this paper, we describe the statistical procedures we developed and implemented for the Times to ascertain whether disqualifying these 680 ballots would have changed the outcome of the election. The methods involve adding formal Bayesian model averaging procedures to King's (1997) ecological inference model. Formal Bayesian model averaging has not been used in political science but is especially useful when substantive conclusions depend heavily on apparently minor but indefensible model choices, when model generalization is not feasible, and when potential critics are more partisan than academic. We show how we derived the results for the Times so that other scholars can use these methods to make ecological inferences for other purposes. We also present a variety of new empirical results that delineate the precise conditions under which Al Gore would have been elected president, and offer new evidence of the striking effectiveness of the Republican effort to convince local election officials to count invalid ballots in Bush counties and not count them in Gore counties.
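The model-averaging logic invoked above is standard Bayesian model averaging: rather than selecting one ecological inference specification, each candidate model M_k is weighted by its posterior probability, so for a quantity of interest Q,

    P(Q \mid data) = \sum_k P(Q \mid data, M_k) \, P(M_k \mid data),
    where P(M_k \mid data) \propto P(data \mid M_k) \, P(M_k).

Conclusions then no longer hinge on any single, apparently minor, specification choice, which is precisely the property needed when potential critics are more partisan than academic.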

Gary King. 2004. “EI: A Program for Ecological Inference.” Journal of Statistical Software, 11.
Enhancing the Validity and Cross-cultural Comparability of Measurement in Survey Research
Gary King, Christopher J.L. Murray, Joshua A. Salomon, and Ajay Tandon. 2004. “Enhancing the Validity and Cross-cultural Comparability of Measurement in Survey Research.” American Political Science Review, 98, Pp. 191–207.

We address two long-standing survey research problems: measuring complicated concepts, such as political freedom or efficacy, that researchers define best with reference to examples; and what to do when respondents interpret identical questions in different ways. Scholars have long addressed these problems with approaches to reduce incomparability, such as writing more concrete questions – with uneven success. Our alternative is to measure directly response category incomparability and to correct for it. We measure incomparability via respondents’ assessments, on the same scale as the self-assessments to be corrected, of hypothetical individuals described in short vignettes. Since actual levels of the vignettes are invariant over respondents, variability in vignette answers reveals incomparability. Our corrections require either simple recodes or a statistical model designed to save survey administration costs. With analysis, simulations, and cross-national surveys, we show how response incomparability can drastically mislead survey researchers and how our approach can fix these problems.
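The simple recodes mentioned above can be sketched directly: a respondent's self-assessment y is recoded relative to that same respondent's answers z to the J ordered vignettes, yielding a corrected scale C from 1 to 2J+1 that is comparable across respondents because the vignette levels are fixed. A toy R version, assuming the respondent's vignette answers respect the vignettes' known ordering and contain no ties (the statistical model in the paper handles violations, and saves costs by requiring vignettes from only a subsample of respondents):

    # y: self-assessment; z: the same respondent's ratings of J ordered vignettes
    # C = 1 if y < z[1], 2 if y == z[1], 3 if z[1] < y < z[2], ..., 2J+1 if y > z[J]
    recode_vignette <- function(y, z) {
      C <- 1
      for (j in seq_along(z)) {
        if (y > z[j]) {
          C <- 2 * j + 1
        } else if (y == z[j]) {
          C <- 2 * j
          break
        } else {
          break
        }
      }
      C
    }
    recode_vignette(y = 3, z = c(2, 4))   # rated between the two vignettes: C = 3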

