Publications by Year: 2002

Barry C. Burden, David C. Kimball, and Gary King. 3/4/2002. Archive of the controversy involving Wendy K. Tam Cho, Brian J. Gaines, and the American Political Science Review. Abstract

An article by Barry C. Burden and David C. Kimball entitled “A New Approach to the Study of Ticket Splitting” was published in the September 1998 issue of the American Political Science Review. The empirical part of the article made use of an ecological inference technique developed by Gary King in his book, A Solution to the Ecological Inference Problem (Princeton University Press, 1997). As the Burden-Kimball paper was going to press, Wendy K. Tam Cho and Brian J. Gaines submitted a critique of it to the APSR using data publicly archived by Burden and Kimball at the ICPSR. The Cho-Gaines paper criticized many aspects of the Burden-Kimball article, but focused primarily on the use of King's ecological inference method. The Cho-Gaines paper survived the review process and was accepted for publication, at which point the APSR Editor, Ada Finifter, permitted Burden-Kimball and King to submit responses. These responses made use of replication datasets provided by Cho and Gaines (but not available to their reviewers) and went through the review process as well. Both papers discredited the Cho-Gaines critique, but the Burden-Kimball paper also revealed that Cho and Gaines had failed to replicate Burden and Kimball’s analysis as they had claimed. This led Finifter to pull the Cho-Gaines paper from the publication pipeline and publish none of the papers. The following statement was offered to Review readers:

“Because of inaccuracies discovered during the prepublication process, ‘Reassessing the Study of Split-Ticket Voting,’ by Wendy K. Tam Cho and Brian J. Gaines, previously listed as forthcoming, has been withdrawn from publication” (December 2001 APSR).

This archive contains the material necessary for those who wish to review the entire case. The 56 files provided here include the Cho-Gaines paper and the rebuttals by Burden-Kimball and King, replication datasets provided by Cho and Gaines, and a decision letter from Finifter.

See the README for an overview, my concluding comment, and the entire archive here and in the ICPSR Replication Archive.

Armed Conflict as a Public Health Problem
Christopher JL Murray, Gary King, Alan D Lopez, Niels Tomijima, and Etienne Krug. 2002. “Armed Conflict as a Public Health Problem.” BMJ (British Medical Journal), 324, Pp. 346–349. Abstract
Armed conflict is a major cause of injury and death worldwide, but we need much better methods of quantification before we can accurately assess its effect. Armed conflict between warring states and groups within states has been a major cause of ill health and mortality for most of human history. Conflict obviously causes deaths and injuries on the battlefield, but it also harms health through the displacement of populations, the breakdown of health and social services, and the heightened risk of disease transmission. Despite the size of the health consequences, military conflict has not received the same attention from public health research and policy as many other causes of illness and death. In contrast, political scientists have long studied the causes of war but have primarily been interested in the decision of elite groups to go to war, not in human death and misery. We review the limited knowledge on the health consequences of conflict, suggest ways to improve measurement, and discuss the potential for risk assessment and for preventing and ameliorating the consequences of conflict.
Article
COUNT: A Program for Estimating Event Count and Duration Regressions
Gary King. 2002. “COUNT: A Program for Estimating Event Count and Duration Regressions”. Abstract

This software is no longer being actively updated. Previous versions and information about the software are archived here.

A stand-alone, easy-to-use program for running event count and duration regression models, developed by and/or discussed in a series of journal articles by me. (Event count models have a dependent variable measured as the number of times something happens, such as the number of uncontested seats per state or the number of wars per year. Duration models explain dependent variables measured as the time until some event, such as the number of months a parliamentary cabinet endures.) Winner of the APSA Research Software Award.
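COUNT itself is archived, but the class of models it fit is standard. As a generic illustration of an event count regression (not COUNT's own code), here is the basic Poisson model, where the expected count is exp(Xβ), estimated by Newton's method in plain NumPy; all variable names are illustrative:

```python
import numpy as np

def poisson_regression(X, y, iters=25):
    """Fit E[y] = exp(X @ beta) by Newton's method (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(iters):
        lam = np.exp(X @ beta)            # predicted event rates
        score = X.T @ (y - lam)           # gradient of the log-likelihood
        hess = X.T @ (X * lam[:, None])   # negative Hessian (X' diag(lam) X)
        beta += np.linalg.solve(hess, score)
    return beta

# Synthetic example: counts (e.g., wars per year) drawn from a known model
rng = np.random.default_rng(0)
X = np.column_stack([np.ones(2000), rng.normal(size=2000)])
y = rng.poisson(np.exp(X @ np.array([0.5, 0.3])))
print(poisson_regression(X, y))  # approximately [0.5, 0.3]
```

The Poisson model is only the simplest member of the family; much of the work discussed above concerns generalizations (e.g., for overdispersed counts).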

Empirical Research and The Goals of Legal Scholarship: A Response
Lee Epstein and Gary King. 2002. “Empirical Research and The Goals of Legal Scholarship: A Response.” University of Chicago Law Review, 69, Pp. 1–209. Abstract
Although the term "empirical research" has become commonplace in legal scholarship over the past two decades, law professors have, in fact, been conducting research that is empirical – that is, learning about the world using quantitative data or qualitative information – for almost as long as they have been conducting research. For just as long, however, they have been proceeding with little awareness of, much less compliance with, the rules of inference, and without paying heed to the key lessons of the revolution in empirical analysis that has been taking place over the last century in other disciplines. The tradition of including some articles devoted exclusively to the methodology of empirical analysis – so well represented in journals in traditional academic fields – is virtually nonexistent in the nation’s law reviews. As a result, readers learn considerably less accurate information about the empirical world than the studies’ stridently stated, but overconfident, conclusions suggest. To remedy this situation both for the producers and consumers of empirical work, this Article adapts the rules of inference used in the natural and social sciences to the special needs, theories, and data in legal scholarship, and explicates them with extensive illustrations from existing research. The Article also offers suggestions for how the infrastructure of teaching and research at law schools might be reorganized so that it can better support the creation of first-rate empirical research without compromising other important objectives.
Article
Estimating Risk and Rate Levels, Ratios, and Differences in Case-Control Studies
Gary King and Langche Zeng. 2002. “Estimating Risk and Rate Levels, Ratios, and Differences in Case-Control Studies.” Statistics in Medicine, 21, Pp. 1409–1427. Abstract
Classic (or "cumulative") case-control sampling designs do not admit inferences about quantities of interest other than risk ratios, and then only by making the rare events assumption. Probabilities, risk differences, and other quantities cannot be computed without knowledge of the population incidence fraction. Similarly, density (or "risk set") case-control sampling designs do not allow inferences about quantities other than the rate ratio. Rates, rate differences, cumulative rates, risks, and other quantities cannot be estimated unless auxiliary information about the underlying cohort such as the number of controls in each full risk set is available. Most scholars who have considered the issue recommend reporting more than just the relative risks and rates, but auxiliary population information needed to do this is not usually available. We address this problem by developing methods that allow valid inferences about all relevant quantities of interest from either type of case-control study when completely ignorant of or only partially knowledgeable about relevant auxiliary population information.
Article
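To see why the population incidence fraction matters, consider the standard odds-to-risk arithmetic (this is background for the abstract's point, not the paper's estimator): once the baseline risk is known, an odds ratio from a cumulative case-control design can be converted into a risk, a risk ratio, and a risk difference.

```python
def risks_from_odds_ratio(odds_ratio, baseline_risk):
    """Recover the exposed-group risk, risk ratio, and risk difference
    from a case-control odds ratio plus the baseline (unexposed) risk."""
    baseline_odds = baseline_risk / (1 - baseline_risk)
    exposed_odds = odds_ratio * baseline_odds
    exposed_risk = exposed_odds / (1 + exposed_odds)
    return {
        "risk": exposed_risk,
        "risk_ratio": exposed_risk / baseline_risk,
        "risk_difference": exposed_risk - baseline_risk,
    }

# With a rare outcome the risk ratio is close to the odds ratio
# (the "rare events assumption")...
print(risks_from_odds_ratio(2.0, 0.01)["risk_ratio"])
# ...but with a common outcome the two diverge sharply.
print(risks_from_odds_ratio(2.0, 0.30)["risk_ratio"])
```

The contribution of the paper is precisely to handle the realistic case where this baseline information is unknown or only partially known.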
A Fast, Easy, and Efficient Estimator for Multiparty Electoral Data
James Honaker, Gary King, and Jonathan N. Katz. 2002. “A Fast, Easy, and Efficient Estimator for Multiparty Electoral Data.” Political Analysis, 10, Pp. 84–100. Abstract
Katz and King (1999) develop a model for predicting or explaining aggregate electoral results in multiparty democracies. This model is, in principle, analogous to what least squares regression provides American politics researchers in that two-party system. Katz and King applied this model to three-party elections in England and revealed a variety of new features of incumbency advantage and of where each party draws its support. Although the mathematics of their statistical model covers any number of political parties, it is computationally very demanding, and hence slow and numerically imprecise, with more than three. The original goal of our work was to produce an approximate method that works more quickly in practice with many parties without making too many theoretical compromises. As it turns out, the method we offer here improves on Katz and King’s original (in bias, variance, numerical stability, and computational speed) even when the latter is computationally feasible. We also offer easy-to-use software that implements our suggestions.
Article
Gary King. 2002. “Isolating Spatial Autocorrelation, Aggregation Bias, and Distributional Violations in Ecological Inference.” Political Analysis, 10, Pp. 298–300. Abstract
This is an invited response to an article by Anselin and Cho. I make two main points: The numerical results in this article violate no conclusions from prior literature, and the absence of the deterministic information from the bounds in the article’s analyses invalidates its theoretical discussion of spatial autocorrelation and all of its actual simulation results. An appendix shows how to draw simulations correctly.
Article
Emmanuela Gakidou and Gary King. 2002. “Measuring Total Health Inequality: Adding Individual Variation to Group-Level Differences.” BioMed Central: International Journal for Equity in Health, 1. Abstract
Background: Studies have revealed large variations in average health status across social, economic, and other groups. No study exists on the distribution of the risk of ill-health across individuals, either within groups or across all people in a society, and as such a crucial piece of total health inequality has been overlooked. Some of the reason for this neglect has been that the risk of death, which forms the basis for most measures, is impossible to observe directly and difficult to estimate.

Methods: We develop a measure of total health inequality – encompassing all inequalities among people in a society, including variation between and within groups – by adapting a beta-binomial regression model. We apply it to children under age two in 50 low- and middle-income countries. Our method has been adopted by the World Health Organization and is being implemented in surveys around the world, and preliminary estimates have appeared in the World Health Report (2000).

Results: Countries with similar average child mortality differ considerably in total health inequality. Liberia and Mozambique have the largest inequalities in child survival, while Colombia, the Philippines and Kazakhstan have the lowest levels among the countries measured.

Conclusions: Total health inequality estimates should be routinely reported alongside average levels of health in populations and groups, as they reveal important policy-related information not otherwise knowable. This approach enables meaningful comparisons of inequality across countries and future analyses of the determinants of inequality.
Article
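The key intuition above – that variation in individual risk adds dispersion beyond what group averages imply – can be seen in the beta-binomial's moments (a generic sketch of the distribution family named in the abstract, not the paper's estimation code):

```python
def beta_binomial_moments(n, alpha, beta):
    """Mean and variance of a beta-binomial: n trials where each
    individual's risk is itself drawn from Beta(alpha, beta)."""
    p = alpha / (alpha + beta)                    # average risk
    rho = 1.0 / (alpha + beta + 1.0)              # dispersion of individual risks
    mean = n * p
    var = n * p * (1 - p) * (1 + (n - 1) * rho)   # inflated relative to binomial
    return mean, var

# Two populations with the same average risk (0.2) but different
# amounts of individual variation around that average:
print(beta_binomial_moments(10, 20, 80))  # risks tightly clustered
print(beta_binomial_moments(10, 2, 8))    # risks widely spread
```

Both populations have the same mean, so group-level comparisons cannot distinguish them; the variance inflation is what a total-inequality measure picks up.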
Rethinking Human Security
Gary King and Christopher J.L. Murray. 2002. “Rethinking Human Security.” Political Science Quarterly, 116, Pp. 585–610. Abstract

In the last two decades, the international community has begun to conclude that attempts to ensure the territorial security of nation-states through military power have failed to improve the human condition. Despite astronomical levels of military spending, deaths due to military conflict have not declined. Moreover, even when the borders of some states are secure from foreign threats, the people within those states do not necessarily have freedom from crime, enough food, proper health care, education, or political freedom. In response to these developments, the international community has gradually moved to combine economic development with military security and other basic human rights to form a new concept of "human security". Unfortunately, by common assent the concept lacks both a clear definition, consistent with the aims of the international community, and any agreed-upon measure of it. In this paper, we propose a simple, rigorous, and measurable definition of human security: the expected number of years of future life spent outside the state of "generalized poverty". Generalized poverty occurs when an individual falls below the threshold in any key domain of human well-being. We consider improvements in data collection and methods of forecasting that are necessary to measure human security, and then introduce an agenda for research and action to enhance human security that follows logically in the areas of risk assessment, prevention, protection, and compensation.

Article
The Rules of Inference
Lee Epstein and Gary King. 2002. “The Rules of Inference.” University of Chicago Law Review, 69, Pp. 1–209. Abstract

Although the term "empirical research" has become commonplace in legal scholarship over the past two decades, law professors have, in fact, been conducting research that is empirical – that is, learning about the world using quantitative data or qualitative information – for almost as long as they have been conducting research. For just as long, however, they have been proceeding with little awareness of, much less compliance with, the rules of inference, and without paying heed to the key lessons of the revolution in empirical analysis that has been taking place over the last century in other disciplines. The tradition of including some articles devoted exclusively to the methodology of empirical analysis – so well represented in journals in traditional academic fields – is virtually nonexistent in the nation’s law reviews. As a result, readers learn considerably less accurate information about the empirical world than the studies’ stridently stated, but overconfident, conclusions suggest. To remedy this situation both for the producers and consumers of empirical work, this Article adapts the rules of inference used in the natural and social sciences to the special needs, theories, and data in legal scholarship, and explicates them with extensive illustrations from existing research. The Article also offers suggestions for how the infrastructure of teaching and research at law schools might be reorganized so that it can better support the creation of first-rate empirical research without compromising other important objectives.

Article