Writings

2003
Michael Tomz, Jason Wittenberg, and Gary King. 2003. “CLARIFY: Software for Interpreting and Presenting Statistical Results.” Journal of Statistical Software. Abstract
This is a set of easy-to-use Stata macros that implement the techniques described in Gary King, Michael Tomz, and Jason Wittenberg's "Making the Most of Statistical Analyses: Improving Interpretation and Presentation". To install Clarify, type "net from http://gking.harvard.edu/clarify" at the Stata command line. The documentation [ HTML | PDF ] explains how to do this. We also provide a zip archive for users who want to install Clarify on a computer that is not connected to the internet. Winner of the Okidata Best Research Software Award. Also try -ssc install qsim- to install a wrapper, donated by Fred Wolfe, to automate Clarify's simulation of dummy variables.
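For readers who want a sense of the workflow, here is a minimal sketch of a typical Clarify session in Stata. The variable names (y, x1, x2) are hypothetical, and the package name in the install step is assumed from the address given above; see the documentation linked above for the authoritative syntax.
* Install Clarify from the address above (package name assumed to be "clarify")
net from http://gking.harvard.edu/clarify
net install clarify
* Estimate a logit model and simulate its parameters (hypothetical variables)
estsimp logit y x1 x2
* Set all explanatory variables to their means, then simulate
* quantities of interest such as predicted probabilities
setx mean
simqi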
Christopher Adolph, Gary King, Kenneth W Shotts, and Michael C Herron. 2003. “A Consensus on Second Stage Analyses in Ecological Inference Models.” Political Analysis, 11: 86–94. Abstract
Since Herron and Shotts (2003a and hereinafter HS), Adolph and King (2003 and hereinafter AK), and Herron and Shotts (2003b and hereinafter HS2), the four of us have iterated many more times, learned a great deal, and arrived at a consensus on this issue. This paper describes our joint recommendations for how to run second-stage ecological regressions, and provides detailed analyses to back up our claims.
Article
Emmanuela Gakidou and Gary King. 2003. “Determinants of Inequality in Child Survival: Results from 39 Countries.” In Health Systems Performance Assessment: Debates, Methods and Empiricism, edited by Christopher Murray and David B Evans, 497-502. Geneva: World Health Organization.
EI: A Program for Ecological Inference
Gary King and Kenneth Benoit. 2003. “EzI: A(n Easy) Program for Ecological Inference”. Publisher's Version
Gary King. 2003. “The Future of Replication.” International Studies Perspectives, 4: 443–499. Abstract

Since the replication standard was proposed for political science research, more journals have required or encouraged authors to make data available, and more authors have shared their data. The calls for continuing this trend are more persistent than ever, and the agreement among journal editors in this Symposium continues this trend. In this article, I offer a vision of a possible future of the replication movement. The plan is to implement this vision via the Virtual Data Center project, which – by automating the process of finding, sharing, archiving, subsetting, converting, analyzing, and distributing data – may greatly facilitate adherence to the replication standard.

Article
Numerical Issues Involved in Inverting Hessian Matrices
Jeff Gill and Gary King. 2003. “Numerical Issues Involved in Inverting Hessian Matrices.” In Numerical Issues in Statistical Computing for the Social Scientist, edited by Micah Altman and Michael P. McDonald, 143-176. Hoboken, NJ: John Wiley and Sons, Inc.
Chapter PDF
Michael Tomz, Gary King, and Langche Zeng. 2003. “ReLogit: Rare Events Logistic Regression.” Journal of Statistical Software, 8. Publisher's Version
ReLogit: Rare Events Logistic Regression
Gary King, Michael Tomz, and Langche Zeng. 2003. “ReLogit: Rare Events Logistic Regression”. Publisher's Version
Some Statistical Methods for Evaluating Information Extraction Systems
Will Lowe and Gary King. 2003. “Some Statistical Methods for Evaluating Information Extraction Systems.” Proceedings of the 10th Conference of the European Chapter of the Association for Computational Linguistics, 19-26. Abstract

We present new statistical methods for evaluating information extraction systems. The methods were developed to evaluate a system used by political scientists to extract event information from news leads about international politics. The nature of these data presents two problems for evaluators: (1) the frequency distribution of event types in international event data is strongly skewed, so a random sample of news leads will typically fail to contain any low-frequency events; and (2) the manual information extraction necessary to create evaluation sets is costly, and most of that effort is wasted coding high-frequency categories. We present an evaluation scheme that overcomes these problems with considerably less manual effort than traditional methods, and that also allows us to interpret an information extraction system as an estimator (in the statistical sense) and to estimate its bias.

Article
2002
Armed Conflict as a Public Health Problem
Christopher JL Murray, Gary King, Alan D Lopez, Niels Tomijima, and Etienne Krug. 2002. “Armed Conflict as a Public Health Problem.” BMJ (British Medical Journal), 324: 346–349. Abstract
Armed conflict is a major cause of injury and death worldwide, but we need much better methods of quantification before we can accurately assess its effect. Armed conflict between warring states and groups within states has been a major cause of ill health and mortality for most of human history. Conflict obviously causes deaths and injuries on the battlefield, but it also has health consequences that stem from the displacement of populations, the breakdown of health and social services, and the heightened risk of disease transmission. Despite the size of the health consequences, military conflict has not received the same attention from public health research and policy as many other causes of illness and death. In contrast, political scientists have long studied the causes of war but have been interested primarily in the decision of elite groups to go to war, not in human death and misery. We review the limited knowledge on the health consequences of conflict, suggest ways to improve measurement, and discuss the potential for risk assessment and for preventing and ameliorating the consequences of conflict.
Article
A stand-alone, easy-to-use program for running event count and duration regression models, developed by and/or discussed in a series of journal articles by me. (Event count models have a dependent variable measured as the number of times something happens, such as the number of uncontested seats per state or the number of wars per year. Duration models explain dependent variables measured as the time until some event, such as the number of months a parliamentary cabinet endures.) Winner of the APSA Research Software Award.
Empirical Research and The Goals of Legal Scholarship: A Response
Lee Epstein and Gary King. 2002. “Empirical Research and The Goals of Legal Scholarship: A Response.” University of Chicago Law Review, 69: 1–209. Abstract
Although the term "empirical research" has become commonplace in legal scholarship over the past two decades, law professors have, in fact, been conducting research that is empirical – that is, learning about the world using quantitative data or qualitative information – for almost as long as they have been conducting research. For just as long, however, they have been proceeding with little awareness of, much less compliance with, the rules of inference, and without paying heed to the key lessons of the revolution in empirical analysis that has been taking place over the last century in other disciplines. The tradition of including some articles devoted exclusively to the methodology of empirical analysis – so well represented in journals in traditional academic fields – is virtually nonexistent in the nation’s law reviews. As a result, readers learn considerably less accurate information about the empirical world than the studies’ stridently stated, but overconfident, conclusions suggest. To remedy this situation for both the producers and consumers of empirical work, this Article adapts the rules of inference used in the natural and social sciences to the special needs, theories, and data in legal scholarship, and explicates them with extensive illustrations from existing research. The Article also offers suggestions for how the infrastructure of teaching and research at law schools might be reorganized so that it can better support the creation of first-rate empirical research without compromising other important objectives.
Article
Estimating Risk and Rate Levels, Ratios, and Differences in Case-Control Studies
Gary King and Langche Zeng. 2002. “Estimating Risk and Rate Levels, Ratios, and Differences in Case-Control Studies.” Statistics in Medicine, 21: 1409–1427. Abstract
Classic (or "cumulative") case-control sampling designs do not admit inferences about quantities of interest other than risk ratios, and then only by making the rare events assumption. Probabilities, risk differences, and other quantities cannot be computed without knowledge of the population incidence fraction. Similarly, density (or "risk set") case-control sampling designs do not allow inferences about quantities other than the rate ratio. Rates, rate differences, cumulative rates, risks, and other quantities cannot be estimated unless auxiliary information about the underlying cohort such as the number of controls in each full risk set is available. Most scholars who have considered the issue recommend reporting more than just the relative risks and rates, but auxiliary population information needed to do this is not usually available. We address this problem by developing methods that allow valid inferences about all relevant quantities of interest from either type of case-control study when completely ignorant of or only partially knowledgeable about relevant auxiliary population information.
Article
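To fix ideas, with a generic binary exposure X and outcome Y (our notation for illustration, not necessarily the article's), the quantities at issue can be written as
\[
\mathrm{RR} = \frac{\Pr(Y=1 \mid X=1)}{\Pr(Y=1 \mid X=0)}, \qquad
\mathrm{RD} = \Pr(Y=1 \mid X=1) - \Pr(Y=1 \mid X=0).
\]
Cumulative case-control sampling directly identifies only the odds ratio \(\mathrm{OR} = \dfrac{\Pr(Y=1\mid X=1)/\Pr(Y=0\mid X=1)}{\Pr(Y=1\mid X=0)/\Pr(Y=0\mid X=0)}\); recovering \(\mathrm{RR}\), \(\mathrm{RD}\), or the risks themselves requires either the rare events approximation \(\mathrm{OR}\approx\mathrm{RR}\) or outside knowledge of the population incidence fraction \(\Pr(Y=1)\), which is the gap the methods in this article address.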
James Honaker, Gary King, and Jonathan N Katz. 2002. “A Fast, Easy, and Efficient Estimator for Multiparty Electoral Data.” Political Analysis, 10: 84–100. Abstract
Katz and King (1999) develop a model for predicting or explaining aggregate electoral results in multiparty democracies. This model is, in principle, analogous to what least squares regression provides researchers of American politics in their two-party system. Katz and King applied this model to three-party elections in England and revealed a variety of new features of incumbency advantage and of where each party draws its support. Although the mathematics of their statistical model covers any number of political parties, it is computationally very demanding, and hence slow and numerically imprecise, with more than three parties. The original goal of our work was to produce an approximate method that works faster in practice with many parties without making too many theoretical compromises. As it turns out, the method we offer here improves on Katz and King’s (in bias, variance, numerical stability, and computational speed) even when the latter is computationally feasible. We also offer easy-to-use software that implements our suggestions.
Article
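As background, a rough sketch of the setup in Katz and King (1999), in our notation rather than the authors': vote shares are transformed into log ratios against a reference party,
\[
y_{ij} = \log\!\left(\frac{V_{ij}}{V_{iJ}}\right), \qquad j = 1, \ldots, J-1,
\]
where \(V_{ij}\) is party \(j\)'s share in district \(i\) and party \(J\) is the reference party, and the vector \(y_i\) is then given a heavy-tailed multivariate regression distribution with mean depending on covariates \(X_i\). The computational burden of that full model grows quickly in the number of parties \(J\), which is what motivates the faster approximate estimator described above.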
This is an invited response to an article by Anselin and Cho. I make two main points: The numerical results in this article violate no conclusions from prior literature, and the absence of the deterministic information from the bounds in the article’s analyses invalidates its theoretical discussion of spatial autocorrelation and all of its actual simulation results. An appendix shows how to draw simulations correctly.
Article
Emmanuela Gakidou and Gary King. 2002. “Measuring Total Health Inequality: Adding Individual Variation to Group-Level Differences.” BioMed Central: International Journal for Equity in Health, 1. Abstract
Background: Studies have revealed large variations in average health status across social, economic, and other groups. No study exists on the distribution of the risk of ill-health across individuals, either within groups or across all people in a society, and as such a crucial piece of total health inequality has been overlooked. Some of the reason for this neglect has been that the risk of death, which forms the basis for most measures, is impossible to observe directly and difficult to estimate.
Methods: We develop a measure of total health inequality – encompassing all inequalities among people in a society, including variation between and within groups – by adapting a beta-binomial regression model. We apply it to children under age two in 50 low- and middle-income countries. Our method has been adopted by the World Health Organization and is being implemented in surveys around the world, and preliminary estimates have appeared in the World Health Report (2000).
Results: Countries with similar average child mortality differ considerably in total health inequality. Liberia and Mozambique have the largest inequalities in child survival, while Colombia, the Philippines, and Kazakhstan have the lowest levels among the countries measured.
Conclusions: Total health inequality estimates should be routinely reported alongside average levels of health in populations and groups, as they reveal important policy-related information not otherwise knowable. This approach enables meaningful comparisons of inequality across countries and future analyses of the determinants of inequality.
Article
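As a rough sketch of what a beta-binomial setup looks like (illustrative notation only, not necessarily the exact model in the paper):
\[
d_i \mid \pi_i \sim \mathrm{Binomial}(n_i, \pi_i), \qquad \pi_i \sim \mathrm{Beta}(\alpha, \beta),
\]
where \(d_i\) is the number of deaths among the \(n_i\) children of unit \(i\) and \(\pi_i\) is that unit's underlying mortality risk. The mean \(\alpha/(\alpha+\beta)\) corresponds to average mortality, while the spread of the Beta distribution captures how unequally risk is distributed across individuals; allowing the Beta parameters to depend on covariates gives the "regression" part, and the resulting distribution of \(\pi_i\) is what a total health inequality measure summarizes.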
Rethinking Human Security
Gary King and Christopher J.L. Murray. 2002. “Rethinking Human Security.” Political Science Quarterly, 116: 585–610. Abstract

In the last two decades, the international community has begun to conclude that attempts to ensure the territorial security of nation-states through military power have failed to improve the human condition. Despite astronomical levels of military spending, deaths due to military conflict have not declined. Moreover, even when the borders of some states are secure from foreign threats, the people within those states do not necessarily have freedom from crime, enough food, proper health care, education, or political freedom. In response to these developments, the international community has gradually moved to combine economic development with military security and other basic human rights to form a new concept of "human security". Unfortunately, by common assent the concept lacks both a clear definition, consistent with the aims of the international community, and any agreed upon measure of it. In this paper, we propose a simple, rigorous, and measurable definition of human security: the expected number of years of future life spent outside the state of "generalized poverty". Generalized poverty occurs when an individual falls below the threshold in any key domain of human well-being. We consider improvements in data collection and methods of forecasting that are necessary to measure human security and then introduce an agenda for research and action to enhance human security that follows logically in the areas of risk assessment, prevention, protection, and compensation.

Article
The Rules of Inference
Lee Epstein and Gary King. 2002. “The Rules of Inference.” University of Chicago Law Review, 69: 1–209. Abstract

Although the term "empirical research" has become commonplace in legal scholarship over the past two decades, law professors have, in fact, been conducting research that is empirical – that is, learning about the world using quantitative data or qualitative information – for almost as long as they have been conducting research. For just as long, however, they have been proceeding with little awareness of, much less compliance with, the rules of inference, and without paying heed to the key lessons of the revolution in empirical analysis that has been taking place over the last century in other disciplines. The tradition of including some articles devoted exclusively to the methodology of empirical analysis – so well represented in journals in traditional academic fields – is virtually nonexistent in the nation’s law reviews. As a result, readers learn considerably less accurate information about the empirical world than the studies’ stridently stated, but overconfident, conclusions suggest. To remedy this situation for both the producers and consumers of empirical work, this Article adapts the rules of inference used in the natural and social sciences to the special needs, theories, and data in legal scholarship, and explicates them with extensive illustrations from existing research. The Article also offers suggestions for how the infrastructure of teaching and research at law schools might be reorganized so that it can better support the creation of first-rate empirical research without compromising other important objectives.

Article
2001
Aggregation Among Binary, Count, and Duration Models: Estimating the Same Quantities from Different Levels of Data
James E Alt, Gary King, and Curtis Signorino. 2001. “Aggregation Among Binary, Count, and Duration Models: Estimating the Same Quantities from Different Levels of Data.” Political Analysis, 9: 21–44. Abstract
Binary, count, and duration data all code discrete events occurring at points in time. Although a single data generation process can produce all three of these data types, the statistical literature is not very helpful in providing methods to estimate parameters of the same process from each. In fact, only a single theoretical process exists for which known statistical methods can estimate the same parameters – and it is generally used only for count and duration data. The result is that seemingly trivial decisions about which level of data to use can have important consequences for substantive interpretations. We describe the theoretical event process for which results exist, based on time independence. We also derive a set of models for a time-dependent process and compare their predictions to those of a commonly used model. Any hope of understanding and avoiding the more serious problems of aggregation bias in events data is contingent on first deriving a much wider arsenal of statistical models and theoretical processes that are not constrained by the particular forms of data that happen to be available. We discuss these issues and suggest an agenda for political methodologists interested in this very large class of aggregation problems.
Article
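To illustrate the time-independent case referenced above (the standard example, in our notation): if events arrive according to a Poisson process with rate \(\lambda\), the same parameter governs all three data types,
\[
N(t) \sim \mathrm{Poisson}(\lambda t), \qquad
T \sim \mathrm{Exponential}(\lambda), \qquad
\Pr\{N(t) \ge 1\} = 1 - e^{-\lambda t},
\]
for the count of events in an interval of length \(t\), the duration between events, and a binary indicator of whether any event occurred, respectively. Because \(\lambda\) is common to all three representations, it can in principle be estimated from data recorded at any of these levels – the kind of cross-level link the article seeks to extend beyond the time-independent case.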
