Writings

2001
Proper Nouns and Methodological Propriety: Pooling Dyads in International Relations Data
Gary King. 2001. “Proper Nouns and Methodological Propriety: Pooling Dyads in International Relations Data.” International Organization, 55, Pp. 497–507.Abstract
The intellectual stakes at issue in this symposium are very high: Green, Kim, and Yoon (2000 and hereinafter GKY) apply their proposed methodological prescriptions and conclude that the key finding in the field is wrong and democracy "has no effect on militarized disputes." GKY are mainly interested in convincing scholars about their methodological points and see themselves as having no stake in the resulting substantive conclusions. However, their methodological points are also high-stakes claims: if correct, the vast majority of statistical analyses of military conflict ever conducted would be invalidated. GKY say they "make no attempt to break new ground statistically," but, as we will see, this both understates their methodological contribution to the field and misses some unique features of their application and data in international relations. On the latter, GKY’s critics are united: Oneal and Russett (2000) conclude that GKY’s method "produces distorted results," and show even in GKY’s framework how democracy’s effect can be reinstated. Beck and Katz (2000) are even more emphatic: "GKY’s conclusion, in table 3, that variables such as democracy have no pacific impact, is simply nonsense...GKY’s (methodological) proposal...is NEVER a good idea." My task here is to sort out and clarify these conflicting claims and counterclaims. The procedure I followed was to engage in extensive discussions with the participants that included joint reanalyses provoked by our discussions and passing computer program code (mostly with Monte Carlo simulations) back and forth to ensure we were all talking about the same methods and agreed with the factual results. I learned a great deal from this process and believe that the positions of the participants are now a lot closer than it may seem from their written statements. Indeed, I believe that all the participants now agree with what I have written here, even though they would each have different emphases (and although my believing there is agreement is not the same as there actually being agreement!).
Article
2000
Gary King. 2000. “Geography, Statistics, and Ecological Inference.” Annals of the Association of American Geographers, 90, Pp. 601–606.Abstract
I am grateful for such thoughtful review from these three distinguished geographers. Fotheringham provides an excellent summary of the approach offered, including how it combines the two methods that have dominated applications (and methodological analysis) for nearly half a century: the method of bounds (Duncan and Davis, 1953) and Goodman’s (1953) least squares regression. Since Goodman’s regression is the only method of ecological inference "widely used in Geography" (O’Loughlin), adding information that is known to be true from the method of bounds (for each observation) would seem to have the potential to improve a great deal of research in this field. The other addition that EI provides is estimates at the lowest level of geography available, making it possible to map results, instead of giving only single summary numbers for the entire geographic region. Whether one considers the combined method offered "the" solution (as some reviewers and commentators have portrayed it), "a" solution (as I tried to describe it), or, perhaps better and more simply, as an improved method of ecological inference, is not important. The point is that more data are better, and this method incorporates more. I am gratified that all three reviewers seem to support these basic points. In this response, I clarify a few points, correct some misunderstandings, and present additional evidence. I conclude with some possible directions for future research.
Article
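The method of bounds mentioned in the abstract above is simple enough to sketch. The following minimal Python example (illustrative only, and not the EI software) computes the deterministic Duncan-Davis bounds for one cell of a 2x2 ecological table, assuming the accounting identity T = X*beta_b + (1 - X)*beta_w; all variable names are ours.

```python
# Minimal sketch of the Duncan-Davis method of bounds for a 2x2
# ecological table: T = X*beta_b + (1 - X)*beta_w, where X is the
# fraction of a precinct in group b and T is the observed outcome rate.
# Variable names are illustrative, not taken from the EI software.

def bounds_beta_b(X, T):
    """Deterministic bounds on beta_b (group b's rate) in one precinct."""
    if X == 0:
        return (0.0, 1.0)  # precinct contains no one from group b: no information
    lower = max(0.0, (T - (1 - X)) / X)
    upper = min(1.0, T / X)
    return (lower, upper)

# Example: a precinct that is 30% group b with a 40% overall outcome rate,
# versus one that is 80% group b with a 60% overall rate.
print(bounds_beta_b(0.3, 0.4))  # (0.0, 1.0)  -> uninformative bounds
print(bounds_beta_b(0.8, 0.6))  # (0.5, 0.75) -> informative bounds
```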
Improving Quantitative Studies of International Conflict: A Conjecture
Nathaniel Beck, Gary King, and Langche Zeng. 2000. “Improving Quantitative Studies of International Conflict: A Conjecture.” American Political Science Review, 94, Pp. 21–36.Abstract
We address a well-known but infrequently discussed problem in the quantitative study of international conflict: Despite immense data collections, prestigious journals, and sophisticated analyses, empirical findings in the literature on international conflict are often unsatisfying. Many statistical results change from article to article and specification to specification. Accurate forecasts are nonexistent. In this article we offer a conjecture about one source of this problem: The causes of conflict, theorized to be important but often found to be small or ephemeral, are indeed tiny for the vast majority of dyads, but they are large, stable, and replicable wherever the ex ante probability of conflict is large. This simple idea has an unexpectedly rich array of observable implications, all consistent with the literature. We directly test our conjecture by formulating a statistical model that includes critical features. Our approach, a version of a "neural network" model, uncovers some interesting structural features of international conflict, and as one evaluative measure, forecasts substantially better than any previous effort. Moreover, this improvement comes at little cost, and it is easy to evaluate whether the model is a statistical improvement over the simpler models commonly used.
Article
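The "neural network" approach referred to above can be illustrated in miniature. This is only a hedged sketch, not the authors' specification: a small feed-forward classifier fit to synthetic dyad-year-style data with an invented, relatively rare "dispute" outcome, using scikit-learn.

```python
# A hedged, minimal sketch (not the authors' model): a small feed-forward
# neural network classifier of the general kind the article describes,
# fit to synthetic dyad-year data. Feature names and effects are invented.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 5000
X = rng.normal(size=(n, 3))          # e.g., democracy, trade, contiguity scores
latent = 0.2 * X[:, 0] - 1.5 * X[:, 1] + X[:, 0] * X[:, 2]
y = (latent + rng.normal(size=n) > 2.5).astype(int)   # uncommon "dispute" outcome

model = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=0)
model.fit(X, y)

print("in-sample dispute rate:", y.mean())
print("predicted dispute probabilities for three dyads:",
      model.predict_proba(X[:3])[:, 1])
```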
Making the Most of Statistical Analyses: Improving Interpretation and Presentation
Gary King, Michael Tomz, and Jason Wittenberg. 2000. “Making the Most of Statistical Analyses: Improving Interpretation and Presentation.” American Journal of Political Science, 44, Pp. 341–355. Publisher's VersionAbstract
Social scientists rarely take full advantage of the information available in their statistical results. As a consequence, they miss opportunities to present quantities that are of greatest substantive interest for their research and express the appropriate degree of certainty about these quantities. In this article, we offer an approach, built on the technique of statistical simulation, to extract the currently overlooked information from any statistical method and to interpret and present it in a reader-friendly manner. Using this technique requires some expertise, which we try to provide herein, but its application should make the results of quantitative articles more informative and transparent. To illustrate our recommendations, we replicate the results of several published works, showing in each case how the authors’ own conclusions can be expressed more sharply and informatively, and, without changing any data or statistical assumptions, how our approach reveals important new information about the research questions at hand. We also offer easy-to-use Clarify software that implements our suggestions.
Article
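The simulation idea behind these recommendations can be sketched in a few lines. The example below is a minimal illustration using generic Python tools rather than the authors' Clarify software: fit a model, draw coefficients from their estimated sampling distribution, and convert each draw into a substantive quantity of interest. The data and the scenario are invented.

```python
# A minimal sketch of the simulation approach the article describes,
# using generic tools rather than the authors' Clarify software: draw
# parameters from their asymptotic sampling distribution, then convert
# each draw into a quantity of interest.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)
y = rng.binomial(1, 1 / (1 + np.exp(-(0.5 + 1.0 * x))))   # synthetic data

X = sm.add_constant(x)
fit = sm.Logit(y, X).fit(disp=0)

# 1. Simulate coefficients from a multivariate normal centered at the
#    point estimates with the estimated covariance matrix.
sims = rng.multivariate_normal(fit.params, fit.cov_params(), size=1000)

# 2. Convert each simulated coefficient vector into a quantity of interest:
#    here, the predicted probability when x = 1.
scenario = np.array([1.0, 1.0])
probs = 1 / (1 + np.exp(-sims @ scenario))

print("point estimate:", probs.mean())
print("95% interval:", np.percentile(probs, [2.5, 97.5]))
```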
1999
Binomial-Beta Hierarchical Models for Ecological Inference
Gary King, Ori Rosen, and Martin A Tanner. 1999. “Binomial-Beta Hierarchical Models for Ecological Inference.” Sociological Methods and Research, 28, Pp. 61–90.Abstract
The authors develop binomial-beta hierarchical models for ecological inference using insights from the literature on hierarchical models based on Markov chain Monte Carlo algorithms and King’s ecological inference model. The new approach reveals some features of the data that King’s approach does not, can easily be generalized to more complicated problems such as general R x C tables, allows the data analyst to adjust for covariates, and provides a formal evaluation of the significance of the covariates. It may also be better suited to cases in which the observed aggregate cells are estimated from very few observations or have some forms of measurement error. This article also provides an example of a hierarchical model in which the statistical idea of "borrowing strength" is used not merely to increase the efficiency of the estimates but to enable the data analyst to obtain estimates.
Article
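The "borrowing strength" idea mentioned above can be illustrated without the article's full MCMC machinery. The sketch below is an empirical-Bayes stand-in offered only as an assumption-labeled illustration: each unit's binomial rate is shrunk toward a shared beta prior, so sparsely observed units borrow most from the rest of the data. All numbers are invented.

```python
# Not the authors' MCMC model: a minimal empirical-Bayes sketch of the
# "borrowing strength" idea behind a binomial-beta hierarchy. Each unit i
# has y_i successes out of n_i trials; a shared Beta(a, b) prior pulls
# noisy unit-level rates toward the overall mean.
import numpy as np

y = np.array([2, 5, 1, 40, 3])       # successes per unit (illustrative)
n = np.array([10, 12, 4, 100, 9])    # trials per unit
raw = y / n

# Rough method-of-moments fit of the shared Beta(a, b) prior.
m, v = raw.mean(), raw.var()
ab = m * (1 - m) / v - 1
a, b = m * ab, (1 - m) * ab

# Posterior mean for each unit under that prior: Beta(a + y, b + n - y).
shrunk = (a + y) / (a + b + n)
for r, s, ni in zip(raw, shrunk, n):
    print(f"raw {r:.2f} -> shrunk {s:.2f}  (n = {ni})")
# Units with few trials move most toward the shared mean.
```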
The Future of Ecological Inference Research: A Reply to Freedman et al.
Gary King. 1999. “The Future of Ecological Inference Research: A Reply to Freedman et al.” Journal of the American Statistical Association, 94, Pp. 352-355.Abstract
I appreciate the editor’s invitation to reply to Freedman et al.’s (1998) review of "A Solution to the Ecological Inference Problem: Reconstructing Individual Behavior from Aggregate Data" (Princeton University Press). I welcome this scholarly critique and JASA’s decision to publish in this field. Ecological inference is a large and very important area for applications that is especially rich with open statistical questions. I hope this discussion stimulates much new scholarship. Freedman et al. raise several interesting issues, but also misrepresent or misunderstand the prior literature, my approach, and their own empirical analyses, and compound the problem by refusing requests from me and the editor to make their data and software available for this note. Some clarification is thus in order.
Article
Gary King and Michael Laver. 1999. “Many Publications, but Still No Evidence.” Electoral Studies, 18, Pp. 597–598.Abstract
In 1990, Budge and Hofferbert (B&H) claimed that they had found solid evidence that party platforms cause U.S. budgetary priorities, and thus concluded that mandate theory applies in the United States as strongly as it does elsewhere. This stunning conclusion would imply that virtually every observer of the American party system in this century has been wrong. King and Laver (1993) reanalyzed B&H’s data and demonstrated in two ways that there exists no evidence for a causal relationship. First, accepting their entire statistical model, and correcting only an algebraic error (a mistake in how they computed their standard errors), we showed that their hypothesized relationship holds up in fewer than half the tests they reported. Second, we showed that their statistical model includes a slightly hidden but politically implausible assumption that a new party achieves every budgetary desire immediately upon taking office. We then specified a model without this unrealistic assumption, found that the assumption was not supported by the data, and showed that all evidence in the data for platforms causing government budgets evaporated. In their published response to our article, B&H withdrew their key claim and said they were now (in 1993) merely interested in an association and not causation. That is how it was left in 1993—a perfectly amicable resolution as far as we were concerned—since we have no objection to the claim that there is a non-causal or chance association between any two variables. Of course, we see little reason to be interested in non-causal associations in this area any more than in the chance correlation that exists between the winner of the baseball World Series and the party winning the U.S. presidency. Since party mandate theory only makes sense as a causal theory, the conventional wisdom about America’s porous, non-mandate party system stands.
Article
No Evidence on Directional vs. Proximity Voting
Jeffrey Lewis and Gary King. 1999. “No Evidence on Directional vs. Proximity Voting.” Political Analysis, 8, Pp. 21–33.Abstract
The directional and proximity models offer dramatically different theories for how voters make decisions and fundamentally divergent views of the supposed microfoundations on which vast bodies of literature in theoretical rational choice and empirical political behavior have been built. We demonstrate here that the empirical tests in the large and growing body of literature on this subject amount to theoretical debates about which statistical assumption is right. The key statistical assumptions have not been empirically tested and, indeed, turn out to be effectively untestable with existing methods and data. Unfortunately, these assumptions are also crucial since changing them leads to different conclusions about voter processes.
Article
Not Asked and Not Answered: Multiple Imputation for Multiple Surveys
Andrew Gelman, Gary King, and Chuanhai Liu. 1999. “Not Asked and Not Answered: Multiple Imputation for Multiple Surveys.” Journal of the American Statistical Association, 93, Pp. 846–857.Abstract
We present a method of analyzing a series of independent cross-sectional surveys in which some questions are not answered in some surveys and some respondents do not answer some of the questions posed. The method is also applicable to a single survey in which different questions are asked or different sampling methods are used in different strata or clusters. Our method involves multiply imputing the missing items and questions by adding to existing methods of imputation designed for single surveys a hierarchical regression model that allows covariates at the individual and survey levels. Information from survey weights is exploited by including in the analysis the variables on which the weights are based, and then reweighting individual responses (observed and imputed) to estimate population quantities. We also develop diagnostics for checking the fit of the imputation model based on comparing imputed data to nonimputed data. We illustrate with the example that motivated this project: a study of pre-election public opinion polls in which not all the questions of interest are asked in all the surveys, so that it is infeasible to impute within each survey separately.
Article
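One small piece of the multiple-imputation workflow the abstract describes is easy to illustrate: combining estimates across imputed datasets. The sketch below applies the standard combining rules (Rubin's rules), not the article's hierarchical imputation model, to invented numbers.

```python
# A minimal sketch of the combining step used with multiple imputation
# (Rubin's rules), not the article's hierarchical imputation model:
# each imputed dataset yields an estimate and its variance, and these are
# pooled so the final uncertainty reflects the missing data.
import numpy as np

# Illustrative estimates and variances from M = 5 imputed datasets.
q = np.array([0.42, 0.45, 0.40, 0.47, 0.43])        # point estimates
u = np.array([0.004, 0.005, 0.004, 0.006, 0.005])   # within-imputation variances

M = len(q)
q_bar = q.mean()              # pooled point estimate
W = u.mean()                  # average within-imputation variance
B = q.var(ddof=1)             # between-imputation variance
T = W + (1 + 1 / M) * B       # total variance

print(f"pooled estimate {q_bar:.3f}, std. error {np.sqrt(T):.3f}")
```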
A Statistical Model for Multiparty Electoral Data
Jonathan Katz and Gary King. 1999. “A Statistical Model for Multiparty Electoral Data.” American Political Science Review, 93, Pp. 15–32.Abstract
We propose a comprehensive statistical model for analyzing multiparty, district-level elections. This model, which provides a tool for comparative politics research analogous to that which regression analysis provides in the American two-party context, can be used to explain or predict how geographic distributions of electoral results depend upon economic conditions, neighborhood ethnic compositions, campaign spending, and other features of the election campaign or aggregate areas. We also provide new graphical representations for data exploration, model evaluation, and substantive interpretation. We illustrate the use of this model by attempting to resolve a controversy over the size of, and trend in, the electoral advantage of incumbency in Britain. Contrary to previous analyses, all based on measures now known to be biased, we demonstrate that the advantage is small but meaningful, varies substantially across the parties, and is not growing. Finally, we show how to estimate the party from which each party’s advantage is predominantly drawn.
Article
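The abstract above does not give formulas, so the following is only an assumed illustration of one common ingredient in models of multiparty, district-level data: an additive log-ratio transformation that maps each district's vote shares onto unbounded quantities suitable for regression-style modeling.

```python
# An assumed illustration (the abstract gives no formulas): an additive
# log-ratio transformation often used for compositional multiparty vote
# shares, mapping J shares per district to J-1 unbounded values.
import numpy as np

shares = np.array([
    [0.45, 0.35, 0.20],   # district 1: three-party vote shares
    [0.50, 0.30, 0.20],   # district 2
    [0.30, 0.45, 0.25],   # district 3
])

# Log-ratios relative to the last party as the reference category.
log_ratios = np.log(shares[:, :-1] / shares[:, [-1]])
print(log_ratios)

# Back-transform: recover vote shares that sum to one in each district.
expanded = np.column_stack([np.exp(log_ratios), np.ones(len(log_ratios))])
print(expanded / expanded.sum(axis=1, keepdims=True))
```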
1998
AMELIA: A Program for Missing Data
James Honaker, Anne Joseph, Gary King, Kenneth Scheve, and Naunihal Singh. 1998. “AMELIA: A Program for Missing Data”. Publisher's Version
Estimating the Probability of Events that Have Never Occurred: When Is Your Vote Decisive?
Andrew Gelman, Gary King, and John Boscardin. 1998. “Estimating the Probability of Events that Have Never Occurred: When Is Your Vote Decisive?” Journal of the American Statistical Association, 93, Pp. 1–9.Abstract
Researchers sometimes argue that statisticians have little to contribute when few realizations of the process being estimated are observed. We show that this argument is incorrect even in the extreme situation of estimating the probabilities of events so rare that they have never occurred. We show how statistical forecasting models allow us to use empirical data to improve inferences about the probabilities of these events. Our application is estimating the probability that your vote will be decisive in a U.S. presidential election, a problem that has been studied by political scientists for more than two decades. The exact value of this probability is of only minor interest, but the number has important implications for understanding the optimal allocation of campaign resources, whether states and voter groups receive their fair share of attention from prospective presidents, and how formal "rational choice" models of voter behavior might be able to explain why people vote at all. We show how the probability of a decisive vote can be estimated empirically from state-level forecasts of the presidential election and illustrate with the example of 1992. Based on generalizations of standard political science forecasting models, we estimate the (prospective) probability of a single vote being decisive as about 1 in 10 million for close national elections such as 1992, varying by about a factor of 10 among states. Our results support the argument that subjective probabilities of many types are best obtained through empirically based statistical prediction models rather than solely through mathematical reasoning. We discuss the implications of our findings for the types of decision analyses used in public choice studies.
Article
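The logic of the calculation can be sketched with heavily simplified, invented numbers; this is not the authors' forecasting model. The chance a single vote is decisive is roughly the chance the state's electoral votes are pivotal times the chance the state is exactly tied, with the latter approximated from a forecast of the state's two-party vote share.

```python
# A hedged, simplified sketch of the logic the abstract describes (not the
# authors' model). All numbers below are hypothetical.
from scipy.stats import norm

n_voters = 2_000_000          # turnout in your state (hypothetical)
mu, sigma = 0.50, 0.03        # forecast mean and sd of the two-party vote share

# P(exact tie) is roughly the forecast density at 0.5 times the width of one vote.
p_tie = norm.pdf(0.5, mu, sigma) / n_voters

p_state_pivotal = 0.05        # hypothetical chance the state decides the election
print(f"P(your vote is decisive) ~ {p_tie * p_state_pivotal:.2e}")
```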
Gary King. 1998. “MAXLIK”.Abstract

A set of Gauss programs and datasets (annotated for pedagogical purposes) to implement many of the maximum likelihood-based models I discuss in Unifying Political Methodology: The Likelihood Theory of Statistical Inference, Ann Arbor: University of Michigan Press, 1998, and use in my class. All datasets are real, not simulated.
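The Gauss programs themselves are not reproduced here, but the basic maximum likelihood recipe they implement can be sketched in Python: write down the log-likelihood and maximize it numerically, here for a simple logit model with synthetic data.

```python
# Not the Gauss programs: a minimal Python sketch of the maximum likelihood
# recipe they implement, here for a logit model, by numerically maximizing
# the log-likelihood.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
y = rng.binomial(1, 1 / (1 + np.exp(-(0.3 + 0.8 * x))))   # synthetic data

def neg_log_lik(beta):
    """Negative log-likelihood of the logit model."""
    p = 1 / (1 + np.exp(-X @ beta))
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

result = minimize(neg_log_lik, x0=np.zeros(2), method="BFGS")
print("MLE:", result.x)   # should be near the true values (0.3, 0.8)
```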

Unifying Political Methodology: The Likelihood Theory of Statistical Inference
Gary King. 1998. Unifying Political Methodology: The Likelihood Theory of Statistical Inference. Ann Arbor: University of Michigan Press. Publisher's Version Errata
Gary King and Bradley Palmquist. 1998. “The Record of American Democracy, 1984-1990.” Sociological Methods and Research, 26, Pp. 424–427. Publisher's Version
1996
Advantages of Conflictual Redistricting
Andrew Gelman and Gary King. 1996. “Advantages of Conflictual Redistricting.” In Fixing the Boundaries: Defining and Redefining Single-Member Electoral Districts, edited by Iain McLean and David Butler, Pp. 207–218. Aldershot, England: Dartmouth Publishing Company.Abstract
This article describes the results of an analysis we did of state legislative elections in the United States, where each state is required to redraw the boundaries of its state legislative districts every ten years. In the United States, redistrictings are sometimes controlled by the Democrats, sometimes by the Republicans, and sometimes by bipartisan committees, but never by neutral boundary commissions. Our goal was to study the consequences of redistricting; at the conclusion of this article, we discuss how our findings might be relevant to British elections.
Chapter PDF
The Generalization in the Generalized Event Count Model, With Comments on Achen, Amato, and Londregan
Gary King and Curtis S Signorino. 1996. “The Generalization in the Generalized Event Count Model, With Comments on Achen, Amato, and Londregan.” Political Analysis, 6, Pp. 225–252.Abstract
We use an analogy with the normal distribution and linear regression to demonstrate the need for the Generalized Event Count (GEC) model. We then show how the GEC provides a unified framework within which to understand a diversity of distributions used to model event counts, and how to express the model in one simple equation. Finally, we address the points made by Christopher Achen, Timothy Amato, and John Londregan. Amato's and Londregan's arguments are consistent with ours and provide additional interesting information and explanations. Unfortunately, the foundation on which Achen built his paper turns out to be incorrect, rendering all his novel claims about the GEC false (or in some cases irrelevant).
Article
Kenneth Benoit and Gary King. 1996. “A Preview of EI and EzI: Programs for Ecological Inference.” Social Science Computer Review, 14, Pp. 433–438.Abstract
Ecological inference, as traditionally defined, is the process of using aggregate (i.e., "ecological") data to infer discrete individual-level relationships of interest when individual-level data are not available. Existing methods of ecological inference generate very inaccurate conclusions about the empirical world, which thus gives rise to the ecological inference problem. Most scholars who analyze aggregate data routinely encounter some form of this problem. EI (by Gary King) and EzI (by Kenneth Benoit and Gary King) are freely available software that implement the statistical and graphical methods detailed in Gary King’s book A Solution to the Ecological Inference Problem. These methods make it possible to infer the attributes of individual behavior from aggregate data. EI works within the statistics program Gauss and will run on any computer hardware and operating system that runs Gauss (the Gauss module CML, for constrained maximum likelihood, by Ronald J. Schoenberg, is also required). EzI is a menu-oriented stand-alone version of the program that runs under MS-DOS (and soon Windows 95, OS/2, and HP-UNIX). EI allows users to make ecological inferences as part of the powerful and open Gauss statistical environment. In contrast, EzI requires no additional software, and provides an attractive menu-based user interface for non-Gauss users, although it lacks the flexibility afforded by the Gauss version. Both programs presume that the user has read or is familiar with A Solution to the Ecological Inference Problem.
Racial Fairness in Legislative Redistricting
Gary King, John Bruce, and Andrew Gelman. 1996. “Racial Fairness in Legislative Redistricting.” In Classifying by Race, edited by Paul E Peterson, Pp. 85-110. Princeton: Princeton University Press.Abstract
In this chapter, we study standards of racial fairness in legislative redistricting, a field that has been the subject of considerable legislation, jurisprudence, and advocacy, but very little serious academic scholarship. We attempt to elucidate how basic concepts about "color-blind" societies, and similar normative preferences, can generate specific practical standards for racial fairness in representation and redistricting. We also provide the normative and theoretical foundations on which concepts such as proportional representation rest, in order to give existing preferences of many in the literature a firmer analytical foundation.
Chapter PDF
