Applications

Automated Text Analysis

Automated and computer-assisted methods of extracting, organizing, and consuming knowledge from unstructured text.

Content Analysis

General Purpose Computer-Assisted Clustering and Conceptualization
Grimmer, Justin, and Gary King. 2011. General Purpose Computer-Assisted Clustering and Conceptualization, Proceedings of the National Academy of Sciences.
We develop a computer-assisted method for the discovery of insightful conceptualizations, in the form of clusterings (i.e., partitions) of input objects. Each of the numerous fully automated methods of cluster analysis proposed in statistics, computer science, and biology optimize a different objective function. Almost all are well defined, but how to determine before the fact which one, if any, will partition a given set of objects in an "insightful" or "useful" way for a given user is unknown and difficult, if not logically impossible. We develop a metric space of partitions from all existing cluster analysis methods applied to a given data set (along with millions of other solutions we add based on combinations of existing clusterings), and enable a user to explore and interact with it, and quickly reveal or prompt useful or insightful conceptualizations. In addition, although uncommon in unsupervised learning problems, we offer and implement evaluation designs that make our computer-assisted approach vulnerable to being proven suboptimal in specific data types. We demonstrate that our approach facilitates more efficient and insightful discovery of useful information than either expert human coders or many existing fully automated methods.
An Automated Information Extraction Tool For International Conflict Data with Performance as Good as Human Coders: A Rare Events Evaluation Design
Methods to evaluate automated information extraction systems when coding rare events, the success of one such system, along with considerable data. King, Gary, and Will Lowe. 2003. An Automated Information Extraction Tool For International Conflict Data with Performance as Good as Human Coders: A Rare Events Evaluation Design, International Organization 57: 617-642.
Despite widespread recognition that aggregated summary statistics on international conflict and cooperation miss most of the complex interactions among nations, the vast majority of scholars continue to employ annual, quarterly, or occasionally monthly observations. Daily events data, coded from some of the huge volume of news stories produced by journalists, have not been used much for the last two decades. We offer some reason to change this practice, which we feel should lead to considerably increased use of these data. We address advances in event categorization schemes and software programs that automatically produce data by "reading" news stories without human coders. We design a method that makes it feasible for the first time to evaluate these programs when they are applied in areas with the particular characteristics of international conflict and cooperation data, namely event categories with highly unequal prevalences, and where rare events (such as highly conflictual actions) are of special interest. We use this rare events design to evaluate one existing program, and find it to be as good as trained human coders, but obviously far less expensive to use. For large scale data collections, the program dominates human coding. Our new evaluative method should be of use in international relations, as well as more generally in the field of computational linguistics, for evaluating other automated information extraction tools. We believe that the data created by programs similar to the one we evaluated should see dramatically increased use in international relations research. To facilitate this process, we are releasing with this article data on 4.3 million international events, covering the entire world for the last decade.
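The heart of the evaluation design is a category-by-category comparison of the automated system against trained human coders, so that performance on rare but substantively important categories is not swamped by overall percent agreement. A minimal sketch of that kind of per-category comparison follows; the category labels and codings are hypothetical, and this is an illustration of the general idea rather than the article's actual procedure.

```python
# Minimal sketch of a per-category comparison between an automated coder and
# a human coder.  All data below are hypothetical; this illustrates the
# general idea only, not the evaluation design used in the article.
from collections import Counter

reference = ["cooperate", "comment", "comment", "threaten", "comment", "cooperate"]
machine   = ["cooperate", "comment", "comment", "comment",  "comment", "cooperate"]
human     = ["cooperate", "comment", "threaten", "threaten", "comment", "comment"]

def per_category_agreement(coder, reference):
    """Fraction of reference items in each category that the coder also
    placed in that category (per-category recall)."""
    totals, hits = Counter(reference), Counter()
    for c, r in zip(coder, reference):
        if c == r:
            hits[r] += 1
    return {cat: hits[cat] / totals[cat] for cat in totals}

print("machine:", per_category_agreement(machine, reference))
print("human:  ", per_category_agreement(human, reference))
# Overall percent agreement hides performance on rare categories such as
# "threaten"; category-by-category recall does not.
```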
How Censorship in China Allows Government Criticism but Silences Collective Expression
King, Gary, Jennifer Pan, and Margaret E Roberts. 2013. How Censorship in China Allows Government Criticism but Silences Collective Expression, American Political Science Review 107, no. 2 (May): 1-18.
We offer the first large scale, multiple source analysis of the outcome of what may be the most extensive effort to selectively censor human expression ever implemented. To do this, we have devised a system to locate, download, and analyze the content of millions of social media posts originating from nearly 1,400 different social media services all over China before the Chinese government is able to find, evaluate, and censor (i.e., remove from the Internet) the large subset they deem objectionable. Using modern computer-assisted text analytic methods that we adapt to and validate in the Chinese language, we compare the substantive content of posts censored to those not censored over time in each of 85 topic areas. Contrary to previous understandings, posts with negative, even vitriolic, criticism of the state, its leaders, and its policies are not more likely to be censored. Instead, we show that the censorship program is aimed at curtailing collective action by silencing comments that represent, reinforce, or spur social mobilization, regardless of content. Censorship is oriented toward attempting to forestall collective activities that are occurring now or may occur in the future --- and, as such, seem to clearly expose government intent.
Hopkins, Daniel, Gary King, and Ying Lu. System for Estimating a Distribution of Message Content Categories in Source Data. U.S. Patent and Trademark Office. US Patent 8,180,717, filed 2012.
A method of computerized content analysis that gives “approximately unbiased and statistically consistent estimates” of a distribution of elements of structured, unstructured, and partially structured source data among a set of categories. In one embodiment, this is done by analyzing a distribution of a small set of individually-classified elements in a plurality of categories and then using the information determined from the analysis to extrapolate a distribution in a larger population set. This extrapolation is performed without constraining the distribution of the unlabeled elements to be equal to the distribution of labeled elements, nor constraining a content distribution of content of elements in the labeled set (e.g., a distribution of words used by elements in the labeled set) to be equal to a content distribution of elements in the unlabeled set. Not being constrained in these ways allows the estimation techniques described herein to provide distinct advantages over conventional aggregation techniques.
Computer-Assisted Keyword and Document Set Discovery from Unstructured Text
King, Gary, Patrick Lam, and Margaret Roberts. 2014. Computer-Assisted Keyword and Document Set Discovery from Unstructured Text.
The (unheralded) first step in many applications of automated text analysis involves selecting keywords to choose documents from a large text corpus for further study. Although all substantive results depend crucially on this choice, researchers typically pick keywords in ad hoc ways, given the lack of formal statistical methods to help. Paradoxically, this often means that the validity of the most sophisticated text analysis methods depends in practice on the inadequate keyword counting or matching methods they are designed to replace. The same ad hoc keyword selection process is also used in many other areas, such as following conversations that rapidly innovate language to evade authorities, seek political advantage, or express creativity; generic web searching; eDiscovery; look-alike modeling; intelligence analysis; and sentiment and topic analysis. We develop a computer-assisted (as opposed to fully automated) statistical approach that suggests keywords from available text, without needing any structured data as inputs. This framing poses the statistical problem in a new way, which leads to a widely applicable algorithm. Our specific approach is based on training classifiers, extracting information from (rather than correcting) their mistakes, and then summarizing results with Boolean search strings. We illustrate how the technique works with examples in English and Chinese.
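A rough sketch of the computer-assisted idea described above: train a classifier to separate a small reference set from a large search set, use its imperfect predictions to split the search set, and rank words by how strongly they discriminate the resulting groups. This is an illustrative simplification rather than the authors' algorithm, and the documents below are hypothetical.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Hypothetical reference set (documents known to be of interest) and search set.
reference = ["protest in the square", "crowd gathers downtown to protest"]
search = ["new traffic rules downtown", "crowd fills the square for the rally",
          "weather forecast for the weekend", "rally against the new rules"]

vec = CountVectorizer()
X = vec.fit_transform(reference + search)
y = np.array([1] * len(reference) + [0] * len(search))

# Train a classifier to separate the reference set from the search set, then
# use its scores to split the search set into a "target" group that looks
# like the reference set and everything else.
clf = MultinomialNB().fit(X, y)
search_X = vec.transform(search)
prob = clf.predict_proba(search_X)[:, 1]
order = np.argsort(prob)[::-1]
target = np.zeros(len(search), dtype=bool)
target[order[: len(search) // 2]] = True

# Rank keywords by how much more often they appear in the target group.
vocab = np.array(vec.get_feature_names_out())
present = (search_X > 0).toarray()
score = present[target].mean(axis=0) - present[~target].mean(axis=0)
print(vocab[np.argsort(score)[::-1][:5]])
```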
King, Gary, and Justin Grimmer. Method and Apparatus for Selecting Clusterings to Classify A Predetermined Data Set. U.S. Patent and Trademark Office. US Patent 8,438,162, filed 2013.
A method for selecting clusterings to classify a predetermined data set of numerical data comprises five steps. First, a plurality of known clustering methods are applied, one at a time, to the data set to generate clusterings for each method. Second, a metric space of clusterings is generated using a metric that measures the similarity between two clusterings. Third, the metric space is projected to a lower dimensional representation useful for visualization. Fourth, a “local cluster ensemble” method generates a clustering for each point in the lower dimensional space. Fifth, an animated visualization method uses the output of the local cluster ensemble method to display the lower dimensional space and to allow a user to move around and explore the space of clustering.
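A minimal sketch of the first three steps (generate many clusterings, measure the similarity between each pair of clusterings, project the resulting space of clusterings to two dimensions) is below. The specific choices made here, k-means, agglomerative clustering, one minus the adjusted Rand index as a dissimilarity, and classical multidimensional scaling, are illustrative stand-ins rather than the components specified in the patent.

```python
import numpy as np
from sklearn.cluster import KMeans, AgglomerativeClustering
from sklearn.manifold import MDS
from sklearn.metrics import adjusted_rand_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 5))                      # hypothetical data set

# Step 1: apply a collection of clustering methods and settings.
clusterings = []
for k in (2, 3, 4, 5):
    clusterings.append(KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(X))
    clusterings.append(AgglomerativeClustering(n_clusters=k).fit_predict(X))

# Step 2: build a dissimilarity matrix between clusterings.
n = len(clusterings)
D = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        D[i, j] = 1.0 - adjusted_rand_score(clusterings[i], clusterings[j])

# Step 3: project the space of clusterings down to two dimensions.
coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
print(coords.round(2))
```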
Reverse-engineering censorship in China: Randomized experimentation and participant observation
King, Gary, Jennifer Pan, and Margaret E Roberts. 2014. Reverse-engineering censorship in China: Randomized experimentation and participant observation, Science 345, no. 6199: 1-10.
Existing research on the extensive Chinese censorship organization uses observational methods with well-known limitations. We conducted the first large-scale experimental study of censorship by creating accounts on numerous social media sites, randomly submitting different texts, and observing from a worldwide network of computers which texts were censored and which were not. We also supplemented interviews with confidential sources by creating our own social media site, contracting with Chinese firms to install the same censoring technologies as existing sites, and—with their software, documentation, and even customer support—reverse-engineering how it all works. Our results offer rigorous support for the recent hypothesis that criticisms of the state, its leaders, and their policies are published, whereas posts about real-world events with collective action potential are censored.
A Method of Automated Nonparametric Content Analysis for Social Science
A method that gives approximately unbiased estimates of the proportion of text documents in investigator-chosen categories, given only a small subset of hand-coded documents. Also includes the first correction for the far less-than-perfect levels of inter-coder reliability that typically characterize hand coding. Applications to sentiment detection about politicians in blog posts. Hopkins, Daniel, and Gary King. 2010. A Method of Automated Nonparametric Content Analysis for Social Science, American Journal of Political Science 54, no. 1: 229–247.
The increasing availability of digitized text presents enormous opportunities for social scientists. Yet hand coding many blogs, speeches, government records, newspapers, or other sources of unstructured text is infeasible. Although computer scientists have methods for automated content analysis, most are optimized to classify individual documents, whereas social scientists instead want generalizations about the population of documents, such as the proportion in a given category. Unfortunately, even a method with a high percent of individual documents correctly classified can be hugely biased when estimating category proportions. By directly optimizing for this social science goal, we develop a method that gives approximately unbiased estimates of category proportions even when the optimal classifier performs poorly. We illustrate with diverse data sets, including the daily expressed opinions of thousands of people about the U.S. presidency. We also make available software that implements our methods and large corpora of text for further analysis. This article led to the formation of Crimson Hexagon.
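The central identity behind this approach can be sketched in a few lines: the distribution of word-stem profiles in the whole corpus equals the within-category profile distributions weighted by the unknown category proportions, P(S) = sum over D of P(S|D)P(D); estimate P(S|D) from the hand-coded subset and P(S) from the full corpus, then solve for P(D). The toy corpus, categories, and two-word vocabulary below are hypothetical, and real applications average over many random subsets of word stems rather than using one full cross-tabulation.

```python
import itertools
import numpy as np
from scipy.optimize import nnls

categories = ["support", "oppose"]
vocab = ["good", "bad"]

def profile(doc):
    """Presence/absence profile of the small vocabulary in a document."""
    return tuple(int(w in doc.split()) for w in vocab)

labeled = [("good good plan", "support"), ("very good idea", "support"),
           ("bad plan", "oppose"), ("bad bad idea", "oppose"),
           ("good start but bad finish", "oppose")]
unlabeled = ["good plan", "good idea", "bad idea", "good news", "bad day", "good good"]

profiles = list(itertools.product([0, 1], repeat=len(vocab)))

# P(S | D) from the hand-coded (labeled) set.
PSD = np.zeros((len(profiles), len(categories)))
for j, cat in enumerate(categories):
    docs = [d for d, c in labeled if c == cat]
    for d in docs:
        PSD[profiles.index(profile(d)), j] += 1.0 / len(docs)

# P(S) from the unlabeled corpus.
PS = np.zeros(len(profiles))
for d in unlabeled:
    PS[profiles.index(profile(d))] += 1.0 / len(unlabeled)

# Solve P(S) = PSD @ P(D) for the category proportions, then renormalize.
PD, _ = nnls(PSD, PS)
PD = PD / PD.sum()
print(dict(zip(categories, PD.round(3))))
```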
You Lie! Patterns of Partisan Taunting in the U.S. Senate (Poster)
Grimmer, Justin, Gary King, and Chiara Superti. 2014. You Lie! Patterns of Partisan Taunting in the U.S. Senate (Poster), in Society for Political Methodology. Athens, GA.
This is a poster that describes our analysis of "partisan taunting," the explicit, public, and negative attacks on another political party or its members, usually using vitriolic and derogatory language. We first demonstrate that most projects that hand code text in the social sciences optimize with respect to the wrong criterion, resulting in large, unnecessary biases. We show how to fix this problem and then apply it to taunting. We find empirically that, unlike most claims in the press and the literature, taunting is not inexorably increasing; it appears instead to be a rational political strategy, most often used by those least likely to win by traditional means -- ideological extremists, out-party members when the president is unpopular, and minority party members. However, although taunting appears to be individually rational, it is collectively irrational: Constituents may resonate with one cutting taunt by their Senator, but they might not approve if he or she were devoting large amounts of time to this behavior rather than say trying to solve important national problems. We hope to partially rectify this situation by posting public rankings of Senatorial taunting behavior.


Incumbency Advantage

Proof that previously used estimators of electoral incumbency advantage were biased, and a new unbiased estimator. Also, the first systematic demonstration that constituency service by legislators increases the incumbency advantage.

How to Estimate the Electoral Advantage of Incumbency

A Statistical Model for Multiparty Electoral Data
A general purpose method for analyzing multiparty electoral data, including estimating the incumbency advantage. Katz, Jonathan, and Gary King. 1999. A Statistical Model for Multiparty Electoral Data, American Political Science Review 93: 15–32.
We propose a comprehensive statistical model for analyzing multiparty, district-level elections. This model, which provides a tool for comparative politics research analogous to that which regression analysis provides in the American two-party context, can be used to explain or predict how geographic distributions of electoral results depend upon economic conditions, neighborhood ethnic compositions, campaign spending, and other features of the election campaign or aggregate areas. We also provide new graphical representations for data exploration, model evaluation, and substantive interpretation. We illustrate the use of this model by attempting to resolve a controversy over the size of and trend in electoral advantage of incumbency in Britain. Contrary to previous analyses, all based on measures now known to be biased, we demonstrate that the advantage is small but meaningful, varies substantially across the parties, and is not growing. Finally, we show how to estimate the party from which each party’s advantage is predominantly drawn.
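A compact sketch of the modeling strategy: map each district's J vote shares into J-1 log-ratios relative to a reference party, model those (approximately unbounded) quantities with a regression, and map fitted values back to vote shares. The article models the log-ratios with a multivariate t distribution and a full set of covariates; per-equation least squares on simulated data is shown here only to illustrate the transformation.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
incumbent = rng.integers(0, 2, size=n)                # hypothetical covariate
shares = rng.dirichlet([6, 5, 3], size=n)             # vote shares for 3 parties

# Log-ratio transform, with party 3 as the reference.
Y = np.log(shares[:, :2] / shares[:, [2]])            # n x 2

# Fit each log-ratio equation by least squares on the covariate.
X = np.column_stack([np.ones(n), incumbent])
B = np.linalg.lstsq(X, Y, rcond=None)[0]              # 2 x 2 coefficient matrix

# Map fitted log-ratios back to predicted vote shares (inverse transform).
fitted = X @ B
expY = np.exp(np.column_stack([fitted, np.zeros(n)]))
pred_shares = expY / expY.sum(axis=1, keepdims=True)
print(pred_shares[:3].round(3))
```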
Estimating Incumbency Advantage Without Bias
Proves that all previous measures of incumbency advantage in the congressional elections literature were biased or inconsistent, and develops an unbiased estimator based on a simple linear regression model. Gelman, Andrew, and Gary King. 1990. Estimating Incumbency Advantage Without Bias, American Journal of Political Science 34: 1142–1164.
In this paper we prove theoretically and demonstrate empirically that all existing measures of incumbency advantage in the congressional elections literature are biased or inconsistent. We then provide an unbiased estimator based on a very simple linear regression model. We apply this new method to congressional elections since 1900, providing the first evidence of a positive incumbency advantage in the first half of the century.
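A sketch of the style of estimator involved: regress the current district vote share on the lagged share, the party holding the seat, and an incumbency indicator, and read the incumbency advantage off the coefficient on that indicator. The coding conventions and simulated data below are illustrative assumptions, not the article's exact specification.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 300
v_prev = rng.uniform(0.3, 0.7, n)                     # lagged Democratic vote share
party = np.where(v_prev > 0.5, 1, -1)                 # party holding the seat
incumbency = party * rng.integers(0, 2, n)            # -1, 0, or +1: is an incumbent running?
true_advantage = 0.08
v_now = 0.1 + 0.75 * v_prev + 0.02 * party + true_advantage * incumbency \
        + rng.normal(0, 0.04, n)

# The incumbency advantage is the coefficient on the incumbency indicator.
X = np.column_stack([np.ones(n), v_prev, party, incumbency])
coef, *_ = np.linalg.lstsq(X, v_now, rcond=None)
print("estimated incumbency advantage:", round(coef[3], 3))
```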
A generalization of the previous article, more practical for larger numbers of parties (part of a symposium in the same issue of Political Analysis). Honaker, James, Gary King, and Jonathan N Katz. 2002. A Fast, Easy, and Efficient Estimator for Multiparty Electoral Data, Political Analysis 10: 84–100.
Katz and King (1999) develop a model for predicting or explaining aggregate electoral results in multiparty democracies. This model is, in principle, analogous to what least squares regression provides American politics researchers in that two-party system. Katz and King applied this model to three-party elections in England and revealed a variety of new features of incumbency advantage and where each party pulls support from. Although the mathematics of their statistical model covers any number of political parties, it is computationally very demanding, and hence slow and numerically imprecise, with more than three. The original goal of our work was to produce an approximate method that works quicker in practice with many parties without making too many theoretical compromises. As it turns out, the method we offer here improves on Katz and King’s (in bias, variance, numerical stability, and computational speed) even when the latter is computationally feasible. We also offer easy-to-use software that implements our suggestions.

Causes and Consequences

Systemic Consequences of Incumbency Advantage in the U.S. House
King, Gary, and Andrew Gelman. 1991. Systemic Consequences of Incumbency Advantage in the U.S. House, American Journal of Political Science 35: 110–138.
The dramatic increase in the electoral advantage of incumbency has sparked widespread interest among congressional researchers over the last 15 years. Although many scholars have studied the advantages of incumbency for incumbents, few have analyzed its effects on the underlying electoral system. We examine the influence of the incumbency advantage on two features of the electoral system in the U.S. House elections: electoral responsiveness and partisan bias. Using a district-level seats-votes model of House elections, we are able to distinguish systematic changes from unique, election-specific variations. Our results confirm the significant drop in responsiveness, and even steeper decline outside the South, over the past 40 years. Contrary to expectations, we find that increased incumbency advantage explains less than a third of this trend, indicating that some other unknown factor is responsible. Moreover, our analysis also reveals another dramatic pattern, largely overlooked in the congressional literature: in the 1940’s and 1950’s the electoral system was severely biased in favor of the Republican party. The system shifted incrementally from this severe Republican bias over the next several decades to a moderate Democratic bias by the mid-1980’s. Interestingly, changes in incumbency advantage explain virtually all of this trend in partisan bias since the 1940’s. By removing incumbency advantage and the existing configuration of incumbents and challengers analytically, our analysis reveals an underlying electoral system that remains consistently biased in favor of the Republican party. Thus, our results indicate that incumbency advantage affects the underlying electoral system, but contrary to conventional wisdom, this changes the trend in partisan bias more than electoral responsiveness.
Constituency Service and Incumbency Advantage
The first systematic evidence that constituency service increases the electoral advantage of incumbency. King, Gary. 1991. Constituency Service and Incumbency Advantage, British Journal of Political Science 21: 119–128.
This Note addresses the long-standing discrepancy between scholarly support for the effect of constituency service on incumbency advantage and a large body of contradictory empirical evidence. I show first that many of the methodological problems noticed in past research reduce to a single methodological problem that is readily resolved. The core of this Note then provides among the first systematic empirical evidence for the constituency service hypothesis. Specifically, an extra $10,000 added to the budget of the average state legislator gives this incumbent an additional 1.54 percentage points in the next election (with a 95% confidence interval of 1.14 to 1.94 percentage points).


Mexican Health Care Evaluation

New designs and statistical methods for large scale policy evaluations; robustness to implementation errors and political interventions, with very high levels of statistical efficiency. Application to the Mexican Seguro Popular De Salud (Universal Health Insurance) Program.


An evaluation of the Mexican Seguro Popular program (designed to extend health insurance and regular and preventive medical care, pharmaceuticals, and health facilities to 50 million uninsured Mexicans), one of the world's largest health policy reforms of the last two decades. Our evaluation features a new design for field experiments that is more robust to the political interventions and implementation errors that have ruined many similar previous efforts; new statistical methods that produce more reliable and efficient results using fewer resources, assumptions, and data; and an implementation of these methods in the largest randomized health policy experiment to date. (See the Harvard Gazette story on this project.)
1. The evaluation design: King, Gary, Emmanuela Gakidou, Nirmala Ravishankar, Ryan T Moore, Jason Lakin, Manett Vargas, Martha María Téllez-Rojo, Juan Eugenio Hernández Ávila, Mauricio Hernández Ávila, and Héctor Hernández Llamas. 2007. A "Politically Robust" Experimental Design for Public Policy Evaluation, with Application to the Mexican Universal Health Insurance Program, Journal of Policy Analysis and Management 26: 479-506.
We develop an approach to conducting large scale randomized public policy experiments intended to be more robust to the political interventions that have ruined some or all parts of many similar previous efforts. Our proposed design is insulated from selection bias in some circumstances even if we lose observations, our inferences can still be unbiased even if politics disrupts any two of the three steps in our analytical procedures, and other empirical checks are available to validate the overall design. We illustrate with a design and empirical validation of an evaluation of the Mexican Seguro Popular de Salud (Universal Health Insurance) program we are conducting. Seguro Popular, which is intended to grow to provide medical care, drugs, preventative services, and financial health protection to the 50 million Mexicans without health insurance, is one of the largest health reforms of any country in the last two decades. The evaluation is also large scale, constituting one of the largest policy experiments to date and what may be the largest randomized health policy experiment ever.
The Essential Role of Pair Matching in Cluster-Randomized Experiments, with Application to the Mexican Universal Health Insurance Evaluation
2. The statistical analysis methods: Imai, Kosuke, Gary King, and Clayton Nall. 2009. The Essential Role of Pair Matching in Cluster-Randomized Experiments, with Application to the Mexican Universal Health Insurance Evaluation, Statistical Science 24: 29–53.
A basic feature of many field experiments is that investigators are only able to randomize clusters of individuals – such as households, communities, firms, medical practices, schools, or classrooms – even when the individual is the unit of interest. To recoup the resulting efficiency loss, some studies pair similar clusters and randomize treatment within pairs. However, many other studies avoid pairing, in part because of claims in the literature, echoed by clinical trials standards organizations, that this matched-pair, cluster-randomization design has serious problems. We argue that all such claims are unfounded. We also prove that the estimator recommended for this design in the literature is unbiased only in situations when matching is unnecessary, and that its standard error is also invalid. To overcome this problem without modeling assumptions, we develop a simple design-based estimator with much improved statistical properties. We also propose a model-based approach that includes some of the benefits of our design-based estimator as well as the estimator in the literature. Our methods also address individual-level noncompliance, which is common in applications but not allowed for in most existing methods. We show that from the perspective of bias, efficiency, power, robustness, or research costs, and in large or small samples, pairing should be used in cluster-randomized experiments whenever feasible and failing to do so is equivalent to discarding a considerable fraction of one’s data. We develop these techniques in the context of a randomized evaluation we are conducting of the Mexican Universal Health Insurance Program.
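A minimal design-based sketch in the spirit of the estimator described above: take the treated-minus-control difference in mean outcomes within each pair of clusters, average those differences with weights proportional to pair size, and estimate sampling variability from the between-pair variation. This follows the general logic rather than the paper's exact formulas, and the data are simulated.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pairs = 50
pairs = []
for _ in range(n_pairs):
    size_t, size_c = rng.integers(80, 120, size=2)
    treated = rng.normal(0.5, 1.0, size_t)            # cluster assigned to treatment
    control = rng.normal(0.0, 1.0, size_c)            # its matched control cluster
    pairs.append((treated, control))

# Pair-level differences in means and normalized size weights.
weights = np.array([len(t) + len(c) for t, c in pairs], dtype=float)
weights /= weights.sum()
diffs = np.array([t.mean() - c.mean() for t, c in pairs])

wd = weights * diffs
ate = wd.sum()                                         # weighted average of within-pair differences
var = n_pairs / (n_pairs - 1) * np.sum((wd - wd.mean()) ** 2)
print(f"estimate {ate:.3f}, std. error {np.sqrt(var):.3f}")
```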
Matched Pairs and the Future of Cluster-Randomized Experiments: A Rejoinder
2a. Comments from four scholars on the previous article, and our rejoinder: Imai, Kosuke, Gary King, and Clayton Nall. 2009. Matched Pairs and the Future of Cluster-Randomized Experiments: A Rejoinder, Statistical Science 24: 64–72.
3. The results of the evaluation: King, Gary, Emmanuela Gakidou, Kosuke Imai, Jason Lakin, Ryan T Moore, Clayton Nall, Nirmala Ravishankar, et al. 2009. Public Policy for the Poor? A Randomised Assessment of the Mexican Universal Health Insurance Programme, The Lancet 373: 1447-1454.
Background: We assessed aspects of Seguro Popular, a programme aimed to deliver health insurance, regular and preventive medical care, medicines, and health facilities to 50 million uninsured Mexicans. Methods: We randomly assigned treatment within 74 matched pairs of health clusters – i.e., health facility catchment areas – representing 118,569 households in seven Mexican states, and measured outcomes in a 2005 baseline survey (August 2005 to September 2005) and follow-up survey 10 months later (July 2006 to August 2006) in 50 pairs (n=32 515). The treatment consisted of encouragement to enrol in a health-insurance programme and upgraded medical facilities. Participant states also received funds to improve health facilities and to provide medications for services in treated clusters. We estimated intention to treat and complier average causal effects non-parametrically. Findings: Intention-to-treat estimates indicated a 23% reduction from baseline in catastrophic expenditures (1·9 percentage points; 95% CI 0·14-3·66). The effect in poor households was 3·0 percentage points (0·46-5·54) and in experimental compliers was 6·5 percentage points (1·65-11·28), 30% and 59% reductions, respectively. The intention-to-treat effect on health spending in poor households was 426 pesos (39-812), and the complier average causal effect was 915 pesos (147-1684). Contrary to expectations and previous observational research, we found no effects on medication spending, health outcomes, or utilisation. Interpretation: Programme resources reached the poor. However, the programme did not show some other effects, possibly due to the short duration of treatment (10 months). Although Seguro Popular seems to be successful at this early stage, further experiments and follow-up studies, with longer assessment periods, are needed to ascertain the long-term effects of the programme.


Presidency Research; Voting Behavior

Resolution of the paradox of why polls are so variable over time during presidential campaigns even though the vote outcome is easily predictable before it starts. Also, a resolution of a key controversy over absentee ballots during the 2000 presidential election; and the methodology of small-n research on executives.

Voting Behavior

Estimating Partisan Bias of the Electoral College Under Proposed Changes in Elector Apportionment
Thomas, AC, Andrew Gelman, Gary King, and Jonathan N Katz. 2012. Estimating Partisan Bias of the Electoral College Under Proposed Changes in Elector Apportionment, Statistics, Politics, and Policy: 1-13.
In the election for President of the United States, the Electoral College is the body whose members vote to elect the President directly. Each state sends a number of electors equal to its total number of representatives and senators in Congress; all but two states (Nebraska and Maine) assign electors pledged to the candidate that wins the state's plurality vote. We investigate the effect on presidential elections if states were to assign their electoral votes according to results in each congressional district, and conclude that the direct popular vote and the current electoral college are both substantially fairer than those alternatives in which states would divide their electoral votes by congressional district.
Ordinary Economic Voting Behavior in the Extraordinary Election of Adolf Hitler
King, Gary, Ori Rosen, Martin Tanner, and Alexander Wagner. 2008. Ordinary Economic Voting Behavior in the Extraordinary Election of Adolf Hitler, Journal of Economic History 68, no. 4: 996.
The enormous Nazi voting literature rarely builds on modern statistical or economic research. By adding these approaches, we find that the most widely accepted existing theories of this era cannot distinguish the Weimar elections from almost any others in any country. Via a retrospective voting account, we show that voters most hurt by the depression, and most likely to oppose the government, fall into separate groups with divergent interests. This explains why some turned to the Nazis and others turned away. The consequences of Hitler's election were extraordinary, but the voting behavior that led to it was not.
Did Illegal Overseas Absentee Ballots Decide the 2000 U.S. Presidential Election?
Resolved for the New York Times a key controversy over the 2000 presidential election. Imai, Kosuke, and Gary King. 2004. Did Illegal Overseas Absentee Ballots Decide the 2000 U.S. Presidential Election?, Perspectives on Politics 2: 537–549.
Although not widely known until much later, Al Gore received 202 more votes than George W. Bush on election day in Florida. George W. Bush is president because he overcame his election day deficit with overseas absentee ballots that arrived and were counted after election day. In the final official tally, Bush received 537 more votes than Gore. These numbers are taken from the official results released by the Florida Secretary of State's office and so do not reflect overvotes, undervotes, unsuccessful litigation, butterfly ballot problems, recounts that might have been allowed but were not, or any other hypothetical divergence between voter preferences and counted votes. After the election, the New York Times conducted a six month long investigation and found that 680 of the overseas absentee ballots were illegally counted, and no partisan, pundit, or academic has publicly disagreed with their assessment. In this paper, we describe the statistical procedures we developed and implemented for the Times to ascertain whether disqualifying these 680 ballots would have changed the outcome of the election. The methods involve adding formal Bayesian model averaging procedures to King's (1997) ecological inference model. Formal Bayesian model averaging has not been used in political science but is especially useful when substantive conclusions depend heavily on apparently minor but indefensible model choices, when model generalization is not feasible, and when potential critics are more partisan than academic. We show how we derived the results for the Times so that other scholars can use these methods to make ecological inferences for other purposes. We also present a variety of new empirical results that delineate the precise conditions under which Al Gore would have been elected president, and offer new evidence of the striking effectiveness of the Republican effort to convince local election officials to count invalid ballots in Bush counties and not count them in Gore counties.
Why are American Presidential Election Campaign Polls so Variable when Votes are so Predictable?
Resolution of a paradox in the study of American voting behavior. Gelman, Andrew, and Gary King. 1993. Why are American Presidential Election Campaign Polls so Variable when Votes are so Predictable?, British Journal of Political Science 23: 409–451.
As most political scientists know, the outcome of the U.S. Presidential election can be predicted within a few percentage points (in the popular vote), based on information available months before the election. Thus, the general election campaign for president seems irrelevant to the outcome (except in very close elections), despite all the media coverage of campaign strategy. However, it is also well known that the pre-election opinion polls can vary wildly over the campaign, and this variation is generally attributed to events in the campaign. How can campaign events affect people’s opinions on whom they plan to vote for, and yet not affect the outcome of the election? For that matter, why do voters consistently increase their support for a candidate during his nominating convention, even though the conventions are almost entirely predictable events whose effects can be rationally forecast? In this exploratory study, we consider several intuitively appealing, but ultimately wrong, resolutions to this puzzle, and discuss our current understanding of what causes opinion polls to fluctuate and yet reach a predictable outcome. Our evidence is based on graphical presentation and analysis of over 67,000 individual-level responses from forty-nine commercial polls during the 1988 campaign and many other aggregate poll results from the 1952–1992 campaigns. We show that responses to pollsters during the campaign are not generally informed or even, in a sense we describe, "rational." In contrast, voters decide which candidate to eventually support based on their enlightened preferences, as formed by the information they have learned during the campaign, as well as basic political cues such as ideology and party identification. We cannot prove this conclusion, but we do show that it is consistent with the aggregate forecasts and individual-level opinion poll responses. Based on the enlightened preferences hypothesis, we conclude that the news media have an important effect on the outcome of Presidential elections–-not due to misleading advertisements, sound bites, or spin doctors, but rather by conveying candidates’ positions on important issues.
On Party Platforms, Mandates, and Government Spending
King, Gary, and Michael Laver. 1993. On Party Platforms, Mandates, and Government Spending, American Political Science Review 87: 744–750.
In their 1990 Review article, Ian Budge and Richard Hofferbert analyzed the relationship between party platform emphases, control of the White House, and national government spending priorities, reporting strong evidence of a "party mandate" connection between them. Gary King and Michael Laver successfully replicate the original analysis, critique the interpretation of the causal effects, and present a reanalysis showing that platforms have small or nonexistent effects on spending. In response, Budge, Hofferbert, and Michael McDonald agree that their language was somewhat inconsistent on both interactions and causality but defend their conceptualization of "mandates" as involving only an association, not necessarily a causal connection, between party commitments and government policy. Hence, while the causes of government policy are of interest, noncausal associations are sufficient as evidence of party mandates in American politics.
Party Competition and Media Messages in U.S. Presidential Election Campaigns
A popular version of the previous article. Gelman, Andrew, Gary King, and Sandy L Maisel. 1994. Party Competition and Media Messages in U.S. Presidential Election Campaigns, in The Parties Respond: Changes in the American Party System, 255-295. Boulder, Colorado: Westview Press.
At one point during the 1988 campaign, Michael Dukakis was ahead in the public opinion polls by 17 percentage points, but he eventually lost the election by 8 percent. Walter Mondale was ahead in the polls by 4 percent during the 1984 campaign but lost the election in a landslide. During June and July of 1992, Clinton, Bush, and Perot each had turns in the public opinion poll lead. What explains all this poll variation? Why do so many citizens change their minds so quickly about presidential choices?
Estimating the Probability of Events that Have Never Occurred: When Is Your Vote Decisive?
The first extensive empirical study of the probability of your vote changing the outcome of a U.S. presidential election. Most previous studies of the probability of a tied vote have involved theoretical calculation without data. Gelman, Andrew, Gary King, and John Boscardin. 1998. Estimating the Probability of Events that Have Never Occurred: When Is Your Vote Decisive?, Journal of the American Statistical Association 93: 1–9.
Researchers sometimes argue that statisticians have little to contribute when few realizations of the process being estimated are observed. We show that this argument is incorrect even in the extreme situation of estimating the probabilities of events so rare that they have never occurred. We show how statistical forecasting models allow us to use empirical data to improve inferences about the probabilities of these events. Our application is estimating the probability that your vote will be decisive in a U.S. presidential election, a problem that has been studied by political scientists for more than two decades. The exact value of this probability is of only minor interest, but the number has important implications for understanding the optimal allocation of campaign resources, whether states and voter groups receive their fair share of attention from prospective presidents, and how formal "rational choice" models of voter behavior might be able to explain why people vote at all. We show how the probability of a decisive vote can be estimated empirically from state-level forecasts of the presidential election and illustrate with the example of 1992. Based on generalizations of standard political science forecasting models, we estimate the (prospective) probability of a single vote being decisive as about 1 in 10 million for close national elections such as 1992, varying by about a factor of 10 among states. Our results support the argument that subjective probabilities of many types are best obtained through empirically based statistical prediction models rather than solely through mathematical reasoning. We discuss the implications of our findings for the types of decision analyses used in public choice studies.
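A schematic version of that kind of calculation: use a state-level forecast distribution to estimate the probability that a state is pivotal in the electoral college and the probability that its popular vote is effectively tied, and multiply the two. The forecast means, uncertainty, turnout figures, and the tiny three-state electoral college below are hypothetical, chosen only to keep the sketch readable.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
# state: (forecast Democratic share, electoral votes, expected turnout)
states = {"A": (0.50, 10, 2_000_000), "B": (0.53, 6, 1_000_000), "C": (0.47, 8, 1_500_000)}
sd = 0.03                                   # forecast uncertainty of each state's vote share
total_ev = sum(ev for _, ev, _ in states.values())
sims = 100_000

draws = {s: rng.normal(mu, sd, sims) for s, (mu, _, _) in states.items()}
for s, (mu, ev, turnout) in states.items():
    # Electoral votes won elsewhere in each simulated election.
    other_ev = sum(np.where(draws[t] > 0.5, e, 0)
                   for t, (_, e, _) in states.items() if t != s)
    pivotal = np.mean((other_ev <= total_ev / 2) & (total_ev / 2 < other_ev + ev))
    # Approximate probability the state's own popular vote is within one vote of a tie.
    tie = norm.pdf(0.5, loc=mu, scale=sd) / turnout
    print(f"state {s}: P(your vote is decisive) ~ {pivotal * tie:.2e}")
```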
No Evidence on Directional vs. Proximity Voting
Proves that the extensive debate between supporters of the directional and proximity models of voting has been based on theoretical specification rather than empirical evidence. Lewis, Jeffrey, and Gary King. 1999. No Evidence on Directional vs. Proximity Voting, Political Analysis 8: 21–33.
The directional and proximity models offer dramatically different theories for how voters make decisions and fundamentally divergent views of the supposed microfoundations on which vast bodies of literature in theoretical rational choice and empirical political behavior have been built. We demonstrate here that the empirical tests in the large and growing body of literature on this subject amount to theoretical debates about which statistical assumption is right. The key statistical assumptions have not been empirically tested and, indeed, turn out to be effectively untestable with existing methods and data. Unfortunately, these assumptions are also crucial since changing them leads to different conclusions about voter processes.

Presidency Research

King, Gary, and Michael Laver. 1999. Many Publications, but Still No Evidence, Electoral Studies 18: 597–598.
In 1990, Budge and Hofferbert (B&H) claimed that they had found solid evidence that party platforms cause U.S. budgetary priorities, and thus concluded that mandate theory applies in the United States as strongly as it does elsewhere. The implications of this stunning conclusion would mean that virtually every observer of the American party system in this century has been wrong. King and Laver (1993) reanalyzed B&H’s data and demonstrated in two ways that there exists no evidence for a causal relationship. First, accepting their entire statistical model, and correcting only an algebraic error (a mistake in how they computed their standard errors), we showed that their hypothesized relationship holds up in fewer than half the tests they reported. Second, we showed that their statistical model includes a slightly hidden but politically implausible assumption that a new party achieves every budgetary desire immediately upon taking office. We then specified a model without this unrealistic assumption and we found that the assumption was not supported, and that all evidence in the data for platforms causing government budgets evaporated. In their published response to our article, B&H withdrew their key claim and said they were now (in 1993) merely interested in an association and not causation. That is how it was left in 1993—a perfectly amicable resolution as far as we were concerned—since we have no objection to the claim that there is a non-causal or chance association between any two variables. Of course, we see little reason to be interested in non-causal associations in this area any more than in the chance correlation that exists between the winner of the baseball World Series and the party winning the U.S. presidency. Since party mandate theory only makes sense as a causal theory, the conventional wisdom about America’s porous, non-mandate party system stands.
The Methodology of Presidential Research
Small-n issues in presidency research, and how to resolve the problem. King, Gary, George Edwards III, Bert A Rockman, and John H Kessel. 1993. The Methodology of Presidential Research, in Researching the Presidency: Vital Questions, New Approaches, 387–412. Pittsburgh: University of Pittsburgh.
The original purpose of the paper this chapter was based on was to use the Presidency Research Conference’s first-round papers– by John H. Aldrich, Erwin C. Hargrove, Karen M. Hult, Paul Light, and Richard Rose– as my "data." My given task was to analyze the literature ably reviewed by these authors and report what political methodology might have to say about presidency research. I focus in this chapter on the traditional presidency literature, emphasizing research on the president and the office. For the most part, I do not consider research on presidential selection, election, and voting behavior, which has been much more similar to other fields in American politics.

Informatics and Data Sharing

Replication Standards

New standards, protocols, and software for citing, sharing, analyzing, archiving, preserving, distributing, cataloging, translating, disseminating, naming, verifying, and replicating scholarly research data and analyses. Also includes proposals to improve the norms of data sharing and replication in science.
Replication, Replication
"The replication standard holds that sufficient information exists with which to understand, evaluate, and build upon a prior work if a third party can replicate the results without any additional information from the author." This standard, and the data sharing needed to support it, were proposed for political science, along with policy suggestions, in King, Gary. 1995. Replication, Replication, PS: Political Science and Politics 28: 443–499.
Political science is a community enterprise and the community of empirical political scientists needs access to the body of data necessary to replicate existing studies to understand, evaluate, and especially build on this work. Unfortunately, the norms we have in place now do not encourage, or in some cases even permit, this aim. Following are suggestions that would facilitate replication and are easy to implement – by teachers, students, dissertation writers, graduate programs, authors, reviewers, funding agencies, and journal and book editors.
Publication, Publication
King, Gary. 2006. Publication, Publication, PS: Political Science and Politics 39: 119–125.
I show herein how to write a publishable paper by beginning with the replication of a published article. This strategy seems to work well for class projects in producing papers that ultimately get published, helping to professionalize students into the discipline, and teaching them the scientific norms of the free exchange of academic information. I begin by briefly revisiting the prominent debate on replication our discipline had a decade ago and some of the progress made in data sharing since.

The Dataverse Network Project

The Dataverse Network Project: a major ongoing project to write web applications, standards, protocols, and software for automating the process of citing, archiving, preserving, distributing, cataloging, translating, disseminating, naming, verifying, and replicating data and associated analyses (Website: TheData.Org). See also:
An Introduction to the Dataverse Network as an Infrastructure for Data Sharing
King, Gary. 2007. An Introduction to the Dataverse Network as an Infrastructure for Data Sharing, Sociological Methods and Research 36: 173–199.
We introduce a set of integrated developments in web application software, networking, data citation standards, and statistical methods designed to put some of the universe of data and data sharing practices on somewhat firmer ground. We have focused on social science data, but aspects of what we have developed may apply more widely. The idea is to facilitate the public distribution of persistent, authorized, and verifiable data, with powerful but easy-to-use technology, even when the data are confidential or proprietary. We intend to solve some of the sociological problems of data sharing via technological means, with the result intended to benefit both the scientific community and the sometimes apparently contradictory goals of individual researchers.
From Preserving the Past to Preserving the Future: The Data-PASS Project and the Challenges of Preserving Digital Social Science Data
Gutmann, Myron P, Mark Abrahamson, Margaret O Adams, Micah Altman, Caroline Arms, Kenneth Bollen, Michael Carlson, et al. 2009. From Preserving the Past to Preserving the Future: The Data-PASS Project and the Challenges of Preserving Digital Social Science Data, Library Trends 57: 315–337.
Social science data are an unusual part of the past, present, and future of digital preservation. They are both an unqualified success, due to long-lived and sustainable archival organizations, and in need of further development because not all digital content is being preserved. This article is about the Data Preservation Alliance for Social Sciences (Data-PASS), a project supported by the National Digital Information Infrastructure and Preservation Program (NDIIPP), which is a partnership of five major U.S. social science data archives. Broadly speaking, Data-PASS has the goal of ensuring that at-risk social science data are identified, acquired, and preserved, and that we have a future-oriented organization that could collaborate on those preservation tasks for the future. Throughout the life of the Data-PASS project we have worked to identify digital materials that have never been systematically archived, and to appraise and acquire them. As the project has progressed, however, it has increasingly turned its attention from identifying and acquiring legacy and at-risk social science data to identifying ongoing and future research projects that will produce data. This article is about the project's history, with an emphasis on the issues that underlay the transition from looking backward to looking forward.


A symposium on replication, edited by Nils Petter Gleditsch and Claire Metelits, with several articles including mine: King, Gary. 2003. The Future of Replication, International Studies Perspectives 4: 443–499.
Since the replication standard was proposed for political science research, more journals have required or encouraged authors to make data available, and more authors have shared their data. The calls for continuing this trend are more persistent than ever, and the agreement among journal editors in this Symposium continues this trend. In this article, I offer a vision of a possible future of the replication movement. The plan is to implement this vision via the Virtual Data Center project, which – by automating the process of finding, sharing, archiving, subsetting, converting, analyzing, and distributing data – may greatly facilitate adherence to the replication standard.

The Virtual Data Center

The Virtual Data Center, the predecessor to the Dataverse Network. See:
A Digital Library for the Dissemination and Replication of Quantitative Social Science Research
Altman, Micah, Leonid Andreev, Mark Diggory, Gary King, Daniel L Kiskis, Elizabeth Kolster, Michael Krot, and Sidney Verba. 2001. A Digital Library for the Dissemination and Replication of Quantitative Social Science Research, Social Science Computer Review 19: 458–470.
The Virtual Data Center (VDC) software is an open-source, digital library system for quantitative data. We discuss what the software does, and how it provides an infrastructure for the management and dissemination of distributed collections of quantitative data, and the replication of results derived from these data.

See Also

A Proposed Standard for the Scholarly Citation of Quantitative Data
Altman, Micah, and Gary King. 2007. A Proposed Standard for the Scholarly Citation of Quantitative Data, D-Lib Magazine 13.
An essential aspect of science is a community of scholars cooperating and competing in the pursuit of common goals. A critical component of this community is the common language of and the universal standards for scholarly citation, credit attribution, and the location and retrieval of articles and books. We propose a similar universal standard for citing quantitative data that retains the advantages of print citations, adds other components made possible by, and needed due to, the digital form and systematic nature of quantitative data sets, and is consistent with most existing subfield-specific approaches. Although the digital library field includes numerous creative ideas, we limit ourselves to only those elements that appear ready for easy practical use by scientists, journal editors, publishers, librarians, and archivists.

Related Papers on New Forms of Data

Preserving Quantitative Research-Elicited Data for Longitudinal Analysis.  New Developments in Archiving Survey Data in the U.S.
Abrahamson, Mark, Kenneth A Bollen, Myron P Gutmann, Gary King, and Amy Pienta. 2009. Preserving Quantitative Research-Elicited Data for Longitudinal Analysis. New Developments in Archiving Survey Data in the U.S., Historical Social Research 34, no. 3: 51-59.
Social science data collected in the United States, both historically and at present, have often not been placed in any public archive -- even when the data collection was supported by government grants. The availability of the data for future use is, therefore, in jeopardy. Enforcing archiving norms may be the only way to increase data preservation and availability in the future.
Computational Social Science
Lazer, David, Alex Pentland, Lada Adamic, Sinan Aral, Albert-Laszlo Barabasi, Devon Brewer, Nicholas Christakis, et al. 2009. Computational Social Science, Science 323: 721-723.
A field is emerging that leverages the capacity to collect and analyze data at a scale that may reveal patterns of individual and group behaviors.
Ensuring the Data Rich Future of the Social Sciences
King, Gary. 2011. Ensuring the Data Rich Future of the Social Sciences, Science 331 (11 February): 719-721.
Massive increases in the availability of informative social science data are making dramatic progress possible in analyzing, understanding, and addressing many major societal problems. Yet the same forces pose severe challenges to the scientific infrastructure supporting data sharing, data management, informatics, statistical methodology, and research ethics and policy, and these are collectively holding back progress. I address these changes and challenges and suggest what can be done.
The Changing Evidence Base of Social Science Research
King, Gary. 2009. The Changing Evidence Base of Social Science Research, in The Future of Political Science: 100 Perspectives, edited by Gary King, Kay Schlozman, and Norman Nie. New York: Routledge Press.
This (two-page) article argues that the evidence base of political science and the related social sciences is beginning an underappreciated but historic change.

International Conflict

Methods for coding, analyzing, and forecasting international conflict and state failure. Evidence that the causes of conflict, theorized to be important but often found to be small or ephemeral, are indeed tiny for the vast majority of dyads, but are large, stable, and replicable wherever the ex ante probability of conflict is large.
An Automated Information Extraction Tool For International Conflict Data with Performance as Good as Human Coders: A Rare Events Evaluation Design
King, Gary, and Will Lowe. 2003. An Automated Information Extraction Tool For International Conflict Data with Performance as Good as Human Coders: A Rare Events Evaluation Design, International Organization 57: 617-642.
Despite widespread recognition that aggregated summary statistics on international conflict and cooperation miss most of the complex interactions among nations, the vast majority of scholars continue to employ annual, quarterly, or occasionally monthly observations. Daily events data, coded from some of the huge volume of news stories produced by journalists, have not been used much for the last two decades. We offer some reason to change this practice, which we feel should lead to considerably increased use of these data. We address advances in event categorization schemes and software programs that automatically produce data by "reading" news stories without human coders. We design a method that makes it feasible for the first time to evaluate these programs when they are applied in areas with the particular characteristics of international conflict and cooperation data, namely event categories with highly unequal prevalences, and where rare events (such as highly conflictual actions) are of special interest. We use this rare events design to evaluate one existing program, and find it to be as good as trained human coders, but obviously far less expensive to use. For large scale data collections, the program dominates human coding. Our new evaluative method should be of use in international relations, as well as more generally in the field of computational linguistics, for evaluating other automated information extraction tools. We believe that the data created by programs similar to the one we evaluated should see dramatically increased use in international relations research. To facilitate this process, we are releasing with this article data on 4.3 million international events, covering the entire world for the last decade.
Event Count Models for International Relations: Generalizations and Applications
King, Gary. 1989. Event Count Models for International Relations: Generalizations and Applications, International Studies Quarterly 33: 123–147.Abstract
International relations theorists tend to think in terms of continuous processes. Yet we observe only discrete events, such as wars or alliances, and summarize them in terms of the frequency of occurrence. As such, most empirical analyses in international relations are based on event count variables. Unfortunately, analysts have generally relied on statistical techniques that were designed for continuous data. This mismatch between theory and method has caused bias, inefficiency, and numerous inconsistencies in both theoretical arguments and empirical findings throughout the literature. This article develops a much more powerful approach to modeling and statistical analysis based explicitly on estimating continuous processes from observed event counts. To demonstrate this class of models, I present several new statistical techniques developed for and applied to different areas of international relations. These include the influence of international alliances on the outbreak of war, the contagious process of multilateral economic sanctions, and reciprocity in superpower conflict. I also show how one can extract considerably more information from existing data and relate substantive theory to empirical analyses more explicitly with this approach.
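For readers who want to see the basic idea in code, here is a minimal sketch of an event count regression, a Poisson generalized linear model fit to simulated dispute counts. The variable names (alliances, trade, disputes) are hypothetical, and the sketch uses an off-the-shelf Poisson GLM rather than the generalized event count models developed in the article.

```python
# A minimal sketch of an event count regression (Poisson GLM), the kind of
# model this literature recommends over linear regression for counts such as
# yearly numbers of militarized disputes. Variable names are hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
alliances = rng.normal(size=n)          # hypothetical covariate
trade = rng.normal(size=n)              # hypothetical covariate
lam = np.exp(0.2 + 0.5 * alliances - 0.3 * trade)
disputes = rng.poisson(lam)             # simulated event counts

X = sm.add_constant(np.column_stack([alliances, trade]))
poisson_fit = sm.GLM(disputes, X, family=sm.families.Poisson()).fit()
print(poisson_fit.summary())
```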
Improving Quantitative Studies of International Conflict: A Conjecture
Beck, Nathaniel, Gary King, and Langche Zeng. 2000. Improving Quantitative Studies of International Conflict: A Conjecture, American Political Science Review 94: 21–36.Abstract
We address a well-known but infrequently discussed problem in the quantitative study of international conflict: Despite immense data collections, prestigious journals, and sophisticated analyses, empirical findings in the literature on international conflict are often unsatisfying. Many statistical results change from article to article and specification to specification. Accurate forecasts are nonexistent. In this article we offer a conjecture about one source of this problem: The causes of conflict, theorized to be important but often found to be small or ephemeral, are indeed tiny for the vast majority of dyads, but they are large, stable, and replicable wherever the ex ante probability of conflict is large. This simple idea has an unexpectedly rich array of observable implications, all consistent with the literature. We directly test our conjecture by formulating a statistical model that includes critical features. Our approach, a version of a "neural network" model, uncovers some interesting structural features of international conflict, and as one evaluative measure, forecasts substantially better than any previous effort. Moreover, this improvement comes at little cost, and it is easy to evaluate whether the model is a statistical improvement over the simpler models commonly used.
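The forecasting comparison at the heart of this conjecture can be illustrated with a small, hypothetical simulation: a flexible classifier (here, a small neural network from scikit-learn) against logistic regression, judged by out-of-sample discrimination. This is only a sketch of the evaluation logic under made-up data, not the article's specification or results.

```python
# Sketch: compare out-of-sample forecasts of a flexible model (a small neural
# network) against logistic regression, in the spirit of the conjecture that
# conflict is predictable mainly where interactions push risk high.
# Data and settings are simulated/hypothetical, not the article's.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(1)
n = 5000
X = rng.normal(size=(n, 4))
# Nonlinear, interactive data-generating process: risk is large only in a
# small region of the covariate space, tiny elsewhere.
logit_index = -4 + 3 * (X[:, 0] > 1) * (X[:, 1] > 1) + 0.2 * X[:, 2]
y = rng.binomial(1, 1 / (1 + np.exp(-logit_index)))

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
logit = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000,
                    random_state=0).fit(X_tr, y_tr)

print("logit  AUC:", roc_auc_score(y_te, logit.predict_proba(X_te)[:, 1]))
print("neural AUC:", roc_auc_score(y_te, net.predict_proba(X_te)[:, 1]))
```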
Proper Nouns and Methodological Propriety: Pooling Dyads in International Relations Data
King, Gary. 2001. Proper Nouns and Methodological Propriety: Pooling Dyads in International Relations Data, International Organization 55: 497–507.Abstract
The intellectual stakes at issue in this symposium are very high: Green, Kim, and Yoon (2000 and hereinafter GKY) apply their proposed methodological prescriptions and conclude that the key finding in the field is wrong and democracy "has no effect on militarized disputes." GKY are mainly interested in convincing scholars about their methodological points and see themselves as having no stake in the resulting substantive conclusions. However, their methodological points are also high stakes claims: if correct, the vast majority of statistical analyses of military conflict ever conducted would be invalidated. GKY say they "make no attempt to break new ground statistically," but, as we will see, this both understates their methodological contribution to the field and misses some unique features of their application and data in international relations. On the latter, GKY’s critics are united: Oneal and Russett (2000) conclude that GKY’s method "produces distorted results," and show even in GKY’s framework how democracy’s effect can be reinstated. Beck and Katz (2000) are even more unambiguous: "GKY’s conclusion, in table 3, that variables such as democracy have no pacific impact, is simply nonsense...GKY’s (methodological) proposal...is NEVER a good idea." My given task is to sort out and clarify these conflicting claims and counterclaims. The procedure I followed was to engage in extensive discussions with the participants that included joint reanalyses provoked by our discussions and passing computer program code (mostly with Monte Carlo simulations) back and forth to ensure we were all talking about the same methods and agreed with the factual results. I learned a great deal from this process and believe that the positions of the participants are now a lot closer than it may seem from their written statements. Indeed, I believe that all the participants now agree with what I have written here, even though they would each have different emphases (and although my believing there is agreement is not the same as there actually being agreement!).
Explaining Rare Events in International Relations
King, Gary, and Langche Zeng. 2001. Explaining Rare Events in International Relations, International Organization 55: 693–715.Abstract
Some of the most important phenomena in international conflict are coded as "rare events data," binary dependent variables with dozens to thousands of times fewer events, such as wars, coups, etc., than "nonevents". Unfortunately, rare events data are difficult to explain and predict, a problem that seems to have at least two sources. First, and most importantly, the data collection strategies used in international conflict are grossly inefficient. The fear of collecting data with too few events has led to data collections with huge numbers of observations but relatively few, and poorly measured, explanatory variables. As it turns out, more efficient sampling designs exist for making valid inferences, such as sampling all available events (e.g., wars) and a tiny fraction of non-events (peace). This enables scholars to save as much as 99% of their (non-fixed) data collection costs, or to collect much more meaningful explanatory variables. Second, logistic regression, and other commonly used statistical procedures, can underestimate the probability of rare events. We introduce some corrections that outperform existing methods and change the estimates of absolute and relative risks by as much as some estimated effects reported in the literature. We also provide easy-to-use methods and software that link these two results, enabling both types of corrections to work simultaneously.
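A rough sketch of the first idea, the case-control style sampling design with a corrected intercept, is below. It uses simulated data, the standard case-control prior-correction formula for the intercept, and hypothetical names; it is not the article's full set of corrections.

```python
# Sketch of the "prior correction" idea for case-control (all events, a small
# sample of nonevents) designs: fit an ordinary logit on the subsample, then
# correct the intercept using the known population event fraction tau.
# The formula is the standard case-control intercept correction; the data are
# simulated and the names hypothetical.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 200_000
x = rng.normal(size=n)
p = 1 / (1 + np.exp(-(-6 + 1.0 * x)))          # rare events: well under 1%
y = rng.binomial(1, p)
tau = y.mean()                                  # population event fraction (known here)

# Keep all events and a 5% sample of nonevents.
keep = (y == 1) | (rng.random(n) < 0.05)
ys, xs = y[keep], x[keep]

fit = sm.Logit(ys, sm.add_constant(xs)).fit(disp=0)
b0, b1 = fit.params
ybar = ys.mean()                                # sample event fraction
b0_corrected = b0 - np.log(((1 - tau) / tau) * (ybar / (1 - ybar)))
print("uncorrected intercept:", b0, " corrected:", b0_corrected)
```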
Improving Forecasts of State Failure
King, Gary, and Langche Zeng. 2001. Improving Forecasts of State Failure, World Politics 53: 623–658.Abstract
We offer the first independent scholarly evaluation of the claims, forecasts, and causal inferences of the State Failure Task Force and their efforts to forecast when states will fail. State failure refers to the collapse of the authority of the central government to impose order, as in civil wars, revolutionary wars, genocides, politicides, and adverse or disruptive regime transitions. This task force, set up at the behest of Vice President Gore in 1994, has been led by a group of distinguished academics working as consultants to the U.S. Central Intelligence Agency. State Failure Task Force reports and publications have received attention in the media, in academia, and from public policy decision-makers. In this article, we identify several methodological errors in the task force work that cause their reported forecast probabilities of conflict to be too large, their causal inferences to be biased in unpredictable directions, and their claims of forecasting performance to be exaggerated. However, we also find that the task force has amassed the best and most carefully collected data on state failure in existence, and the required corrections which we provide, although very large in effect, are easy to implement. We also reanalyze their data with better statistical procedures and demonstrate how to improve forecasting performance to levels significantly greater than even corrected versions of their models. Although still a highly uncertain endeavor, we are as a consequence able to offer the first accurate forecasts of state failure, along with procedures and results that may be of practical use in informing foreign policy decision making. We also describe a number of strong empirical regularities that may help in ascertaining the causes of state failure.
Armed Conflict as a Public Health Problem
Murray, Christopher JL, Gary King, Alan D Lopez, Niels Tomijima, and Etienne Krug. 2002. Armed Conflict as a Public Health Problem, BMJ (British Medical Journal) 324: 346–349.Abstract
Armed conflict is a major cause of injury and death worldwide, but we need much better methods of quantification before we can accurately assess its effect. Armed conflict between warring states and groups within states has been a major cause of ill health and mortality for most of human history. Conflict obviously causes deaths and injuries on the battlefield, but it also causes health consequences through the displacement of populations, the breakdown of health and social services, and the heightened risk of disease transmission. Despite the size of the health consequences, military conflict has not received the same attention from public health research and policy as many other causes of illness and death. In contrast, political scientists have long studied the causes of war but have primarily been interested in the decision of elite groups to go to war, not in human death and misery. We review the limited knowledge on the health consequences of conflict, suggest ways to improve measurement, and discuss the potential for risk assessment and for preventing and ameliorating the consequences of conflict.
Theory and Evidence in International Conflict: A Response to de Marchi, Gelpi, and Grynaviski
Beck, Nathaniel, Gary King, and Langche Zeng. 2004. Theory and Evidence in International Conflict: A Response to de Marchi, Gelpi, and Grynaviski, American Political Science Review 98: 379-389.Abstract
We thank Scott de Marchi, Christopher Gelpi, and Jeffrey Grynaviski (2003 and hereinafter dGG) for their careful attention to our work (Beck, King, and Zeng, 2000 and hereinafter BKZ) and for raising some important methodological issues that we agree deserve readers’ attention. We are pleased that dGG’s analyses are consistent with the theoretical conjecture about international conflict put forward in BKZ – "The causes of conflict, theorized to be important but often found to be small or ephemeral, are indeed tiny for the vast majority of dyads, but they are large, stable, and replicable whenever the ex ante probability of conflict is large" (BKZ, p.21) – and that dGG agree with our main methodological point that out-of-sample forecasting performance should always be one of the standards used to judge studies of international conflict, and indeed most other areas of political science. However, dGG frequently err when they draw methodological conclusions. Their central claim involves the superiority of logit over neural network models for international conflict data, as judged by forecasting performance and other properties such as ease of use and interpretation ("neural networks hold few unambiguous advantages... and carry significant costs" relative to logit; dGG, p.14). We show here that this claim, which would be regarded as stunning in any of the diverse fields in which both methods are more commonly used, is false. We also show that dGG’s methodological errors and the restrictive model they favor cause them to miss and mischaracterize crucial patterns in the causes of international conflict. We begin in the next section by summarizing the growing support for our conjecture about international conflict. The second section discusses the theoretical reasons why neural networks dominate logistic regression, correcting a number of methodological errors. The third section then demonstrates empirically, in the same data as used in BKZ and dGG, that neural networks substantially outperform dGG’s logit model. We show that neural networks improve on the forecasts from logit as much as logit improves on a model with no theoretical variables. We also show how dGG’s logit analysis assumed, rather than estimated, the answer to the central question about the literature’s most important finding, the effect of democracy on war. Since this and other substantive assumptions underlying their logit model are wrong, their substantive conclusion about the democratic peace is also wrong. The neural network models we used in BKZ not only avoid these difficulties, but they, or one of the other methods available that do not make highly restrictive assumptions about the exact functional form, are just what is called for to study the observable implications of our conjecture.
The Supreme Court During Crisis: How War Affects only Non-War Cases
Epstein, Lee, Daniel E Ho, Gary King, and Jeffrey A Segal. 2005. The Supreme Court During Crisis: How War Affects only Non-War Cases, New York University Law Review 80: 1–116.Abstract
Does the U.S. Supreme Court curtail rights and liberties when the nation’s security is under threat? In hundreds of articles and books, and with renewed fervor since September 11, 2001, members of the legal community have warred over this question. Yet, not a single large-scale, quantitative study exists on the subject. Using the best data available on the causes and outcomes of every civil rights and liberties case decided by the Supreme Court over the past six decades and employing methods chosen and tuned especially for this problem, our analyses demonstrate that when crises threaten the nation’s security, the justices are substantially more likely to curtail rights and liberties than when peace prevails. Yet paradoxically, and in contradiction to virtually every theory of crisis jurisprudence, war appears to affect only cases that are unrelated to the war. For these cases, the effect of war and other international crises is so substantial, persistent, and consistent that it may surprise even those commentators who long have argued that the Court rallies around the flag in times of crisis. On the other hand, we find no evidence that cases most directly related to the war are affected. We attempt to explain this seemingly paradoxical evidence with one unifying conjecture: Instead of balancing rights and security in high stakes cases directly related to the war, the Justices retreat to ensuring the institutional checks of the democratic branches. Since rights-oriented and process-oriented dimensions seem to operate in different domains and at different times, and often suggest different outcomes, the predictive factors that work for cases unrelated to the war fail for cases related to the war. If this conjecture is correct, federal judges should consider giving less weight to legal principles outside of wartime but established during wartime, and attorneys should see it as their responsibility to distinguish cases along these lines.
When Can History Be Our Guide? The Pitfalls of Counterfactual Inference
King, Gary, and Langche Zeng. 2007. When Can History Be Our Guide? The Pitfalls of Counterfactual Inference, International Studies Quarterly: 183-210.Abstract
Inferences about counterfactuals are essential for prediction, answering "what if" questions, and estimating causal effects. However, when the counterfactuals posed are too far from the data at hand, conclusions drawn from well-specified statistical analyses become based on speculation and convenient but indefensible model assumptions rather than empirical evidence. Unfortunately, standard statistical approaches assume the veracity of the model rather than revealing the degree of model-dependence, and so this problem can be hard to detect. We develop easy-to-apply methods to evaluate counterfactuals that do not require sensitivity testing over specified classes of models. If an analysis fails the tests we offer, then we know that substantive results are sensitive to at least some modeling choices that are not based on empirical evidence. We use these methods to evaluate the extensive scholarly literatures on the effects of changes in the degree of democracy in a country (on any dependent variable) and separate analyses of the effects of UN peacebuilding efforts. We find evidence that many scholars are inadvertently drawing conclusions based more on modeling hypotheses than on their data. For some research questions, history contains insufficient information to be our guide.
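One simple diagnostic in this spirit, checking whether a counterfactual covariate profile lies inside the convex hull of the observed data, can be sketched as a linear-programming feasibility problem. The data below are simulated and the function is illustrative only; it is not a substitute for the full set of methods the article develops.

```python
# Sketch of one diagnostic in this spirit: check whether a counterfactual
# covariate vector lies inside the convex hull of the observed data (if it
# does not, any answer is an extrapolation and therefore heavily
# model-dependent). Implemented as a linear-programming feasibility check;
# data are simulated and purely illustrative.
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(X, x0):
    """True if x0 can be written as a convex combination of the rows of X."""
    n, d = X.shape
    A_eq = np.vstack([X.T, np.ones((1, n))])    # d+1 equality constraints
    b_eq = np.append(x0, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return res.success

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 3))                   # observed covariates
print(in_convex_hull(X, np.array([0.1, -0.2, 0.0])))   # interpolation: likely True
print(in_convex_hull(X, np.array([10.0, 10.0, 10.0]))) # extrapolation: False
```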

Legislative Redistricting

The definition of partisan symmetry as a standard for fairness in redistricting; methods and software for measuring partisan bias and electoral responsiveness; discussion of U.S. Supreme Court rulings about this work. Evidence that U.S. redistricting reduces bias and increases responsiveness, and that the electoral college is fair; applications to legislatures, primaries, and multiparty systems.

U.S. Legislatures

The Future of Partisan Symmetry as a Judicial Test for Partisan Gerrymandering after LULAC v. Perry
The U.S. Supreme Court responds favorably to the nonpartisan Amicus Curiae Brief on partisan gerrymandering filed by Gary King, Bernard Grofman, Andrew Gelman, and Jonathan Katz (see brief) and requests additional information. This information is provided in the context of a brief history of the scholarly literature, a summary of the state of the art in conceptualization and measurement of partisan symmetry, and the state of current jurisprudence, in: Grofman, Bernard, and Gary King. 2008. The Future of Partisan Symmetry as a Judicial Test for Partisan Gerrymandering after LULAC v. Perry, Election Law Journal 6, no. 1: 2-35.Abstract
While the Supreme Court in Davis v. Bandemer found partisan gerrymandering to be justiciable, no challenged redistricting plan in the subsequent 20 years has been held unconstitutional on partisan grounds. Then, in Vieth v. Jubelirer, five justices concluded that some standard might be adopted in a future case, if a manageable rule could be found. When gerrymandering next came before the Court, in LULAC v. Perry, we along with our colleagues filed an Amicus Brief (King et al., 2005), proposing the test be based in part on the partisan symmetry standard. Although the issue was not resolved, our proposal was discussed and positively evaluated in three of the opinions, including the plurality judgment, and for the first time for any proposal the Court gave a clear indication that a future legal test for partisan gerrymandering will likely include partisan symmetry. A majority of Justices now appear to endorse the view that the measurement of partisan symmetry may be used in partisan gerrymandering claims as “a helpful (though certainly not talismanic) tool” (Justice Stevens, joined by Justice Breyer), provided one recognizes that “asymmetry alone is not a reliable measure of unconstitutional partisanship” and possibly that the standard would be applied only after at least one election has been held under the redistricting plan at issue (Justice Kennedy, joined by Justices Souter and Ginsburg). We use this essay to respond to the request of Justices Souter and Ginsburg that “further attention … be devoted to the administrability of such a criterion at all levels of redistricting and its review.” Building on our previous scholarly work, our Amicus Brief, the observations of these five Justices, and a supporting consensus in the academic literature, we offer here a social science perspective on the conceptualization and measurement of partisan gerrymandering and the development of relevant legal rules based on what is effectively the Supreme Court’s open invitation to lower courts to revisit these issues in the light of LULAC v. Perry.

The concept of partisan symmetry

The concept of partisan symmetry as a standard for assessing partisan gerrymandering:
Seats, Votes, and Gerrymandering: Measuring Bias and Representation in Legislative Redistricting
Browning, Robert X, and Gary King. 1987. Seats, Votes, and Gerrymandering: Measuring Bias and Representation in Legislative Redistricting, Law and Policy 9: 305–322.Abstract
The Davis v. Bandemer case focused much attention on the problem of using statistical evidence to demonstrate the existence of political gerrymandering. In this paper, we evaluate the uses and limitations of measures of the seats-votes relationship in the Bandemer case. We outline a statistical method we have developed that can be used to estimate bias and the form of representation in legislative redistricting. We apply this method to Indiana State House and Senate elections for the period 1972 to 1984 and demonstrate a maximum bias of 6.2% toward the Republicans in the House and a 2.8% bias in the Senate.
Democratic Representation and Partisan Bias in Congressional Elections
Defines, distinguishes, and measures "partisan bias" and "electoral responsiveness" (or "representation"), key concepts that had been conflated in much previous academic literature, and "partisan symmetry" as the definition of fairness to parties in districting. A consensus in the academic literature on partisan symmetry as the definition of partisan fairness has held since this article. King, Gary, and Robert X Browning. 1987. Democratic Representation and Partisan Bias in Congressional Elections, American Political Science Review 81: 1252–1273.Abstract
The translation of citizen votes into legislative seats is of central importance in democratic electoral systems. It has been a longstanding concern among scholars in political science and in numerous other disciplines. Through this literature, two fundamental tenets of democratic theory, partisan bias and democratic representation, have often been confused. We develop a general statistical model of the relationship between votes and seats and separate these two important concepts theoretically and empirically. In so doing, we also solve several methodological problems with the study of seats, votes and the cube law. An application to U.S. congressional districts provides estimates of bias and representation for each state and demonstrates the model’s utility. Results of this application show distinct types of representation coexisting in U.S. states. Although most states have small partisan biases, there are some with a substantial degree of bias.
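To make the definitions concrete, the sketch below computes partisan bias and responsiveness under the simple uniform partisan swing approximation: shift all district vote shares so each party in turn averages 50% statewide, and compare the seat shares. District vote shares are simulated; the article's statistical model is considerably more general than this illustration.

```python
# A deliberately simple illustration of the partisan-symmetry idea: shift all
# district vote shares by a uniform swing so the statewide average vote is 50%,
# and see how far the resulting seat share departs from 50%. The article's
# statistical model is far more general; this uniform-swing version only
# illustrates the definitions of bias and responsiveness.
import numpy as np

rng = np.random.default_rng(4)
v = np.clip(rng.normal(0.55, 0.12, size=100), 0.02, 0.98)  # Democratic district vote shares

def seat_share_at(v, target_mean):
    swing = target_mean - v.mean()           # uniform partisan swing
    return np.mean((v + swing) > 0.5)

dem_seats_at_50 = seat_share_at(v, 0.50)
partisan_bias = dem_seats_at_50 - 0.5        # >0 favors Democrats at 50% of the vote
responsiveness = (seat_share_at(v, 0.51) - seat_share_at(v, 0.49)) / 0.02
print(f"bias: {partisan_bias:+.3f}, responsiveness: {responsiveness:.2f}")
```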
Racial Fairness in Legislative Redistricting
Related work that clarifies the normative assumptions underlying proposed standards for fairness to different ethnic groups and formalizes several absolute standards. King, Gary, John Bruce, Andrew Gelman, and Paul E Peterson. 1996. Racial Fairness in Legislative Redistricting, in Classifying by Race. Princeton University Press.Abstract
In this chapter, we study standards of racial fairness in legislative redistricting, a field that has been the subject of considerable legislation, jurisprudence, and advocacy, but very little serious academic scholarship. We attempt to elucidate how basic concepts about "color-blind" societies, and similar normative preferences, can generate specific practical standards for racial fairness in representation and redistricting. We also provide the normative and theoretical foundations on which concepts such as proportional representation rest, in order to give existing preferences of many in the literature a firmer analytical foundation.

Methods for measuring partisan bias and electoral responsiveness

The methods for measuring partisan bias and electoral responsiveness, and related quantities, that first relaxed the assumptions of exact uniform partisan swing and the exact correspondence between statewide electoral results and legislative electoral results, among other improvements:
Representation Through Legislative Redistricting: A Stochastic Model
The first attempt to eliminate the exact uniform partisan swing assumption, using data from a single election. King, Gary. 1989. Representation Through Legislative Redistricting: A Stochastic Model, American Journal of Political Science 33: 787–824.Abstract
This paper builds a stochastic model of the processes that give rise to observed patterns of representation and bias in congressional and state legislative elections. The analysis demonstrates that partisan swing and incumbency voting, concepts from the congressional elections literature, have determinate effects on representation and bias, concepts from the redistricting literature. The model shows precisely how incumbency and increased variability of partisan swing reduce the responsiveness of the electoral system and how partisan swing affects whether the system is biased toward one party or the other. Incumbency, and other causes of unresponsive representation, also reduce the effect of partisan swing on current levels of partisan bias. By relaxing the restrictive portions of the widely applied "uniform partisan swing" assumption, the theoretical analysis leads directly to an empirical model enabling one more reliably to estimate responsiveness and bias from a single year of electoral data. Applying this to data from seven elections in each of six states, the paper demonstrates that redistricting has effects in predicted directions in the short run: partisan gerrymandering biases the system in favor of the party in control and, by freeing up seats held by opposition party incumbents, increases the system’s responsiveness. Bipartisan-controlled redistricting appears to reduce bias somewhat and to reduce responsiveness dramatically. Nonpartisan redistricting processes substantially increase responsiveness but do not have as clear an effect on bias. However, after only two elections, prima facie evidence for redistricting effects evaporates in most states. Finally, across every state and type of redistricting process, responsiveness declined significantly over the course of the decade. This is clear evidence that the phenomenon of "vanishing marginals," recognized first in the U.S. Congress literature, also applies to these different types of state legislative assemblies. It also strongly suggests that redistricting could not account for this pattern.
Estimating the Electoral Consequences of Legislative Redistricting
The most technically sophisticated method, many aspects of which were simplified in the above paper. Gelman, Andrew, and Gary King. 1990. Estimating the Electoral Consequences of Legislative Redistricting, Journal of the American Statistical Association 85: 274–282.Abstract
We analyze the effects of redistricting as revealed in the votes received by the Democratic and Republican candidates for state legislature. We develop measures of partisan bias and the responsiveness of the composition of the legislature to changes in statewide votes. Our statistical model incorporates a mixed hierarchical Bayesian and non-Bayesian estimation, requiring simulation along the lines of Tanner and Wong (1987). This model provides reliable estimates of partisan bias and responsiveness along with measures of their variabilities from only a single year of electoral data. This allows one to distinguish systematic changes in the underlying electoral system from typical election-to-election variability.
A Unified Method of Evaluating Electoral Systems and Redistricting Plans
A now widely used set of methods for estimating bias and responsiveness, including applications to redistricting in the states and the U.S. Congress. Gelman, Andrew, and Gary King. 1994. A Unified Method of Evaluating Electoral Systems and Redistricting Plans, American Journal of Political Science 38: 514–554.Abstract
We derive a unified statistical method with which one can produce substantially improved definitions and estimates of almost any feature of two-party electoral systems that can be defined based on district vote shares. Our single method enables one to calculate more efficient estimates, with more trustworthy assessments of their uncertainty, than each of the separate multifarious existing measures of partisan bias, electoral responsiveness, seats-votes curves, expected or predicted vote in each district in a legislature, the probability that a given party will win the seat in each district, the proportion of incumbents or others who will lose their seats, the proportion of women or minority candidates to be elected, the incumbency advantage and other causal effects, the likely effects on the electoral system and district votes of proposed electoral reforms, such as term limitations, campaign spending limits, and drawing majority-minority districts, and numerous others. To illustrate, we estimate the partisan bias and electoral responsiveness of the U.S. House of Representatives since 1900 and evaluate the fairness of competing redistricting plans for the 1992 Ohio state legislature.
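A stylized version of the simulation logic behind such evaluations is sketched below: hypothetical elections are generated by adding both a statewide shift and district-level disturbances to observed vote shares, and the resulting seats-votes curve is summarized. The vote shares and the variance parameter are invented for illustration and do not reproduce the authors' estimator.

```python
# Sketch of the simulation logic behind such evaluations: rather than assuming
# exact uniform swing, generate many hypothetical elections by adding both a
# statewide shift and district-level disturbances to observed vote shares, then
# summarize the resulting seats-votes curve. Parameter values are illustrative,
# not estimates from any real redistricting plan.
import numpy as np

rng = np.random.default_rng(5)
v = np.clip(rng.normal(0.53, 0.10, size=120), 0.02, 0.98)  # observed district vote shares
sigma = 0.04                                               # district-level variability

def expected_seat_share(v, statewide_mean, sims=2000):
    shift = statewide_mean - v.mean()
    sims_v = v + shift + rng.normal(0, sigma, size=(sims, v.size))
    return (sims_v > 0.5).mean()

votes = np.linspace(0.40, 0.60, 21)
for m in votes:
    print(f"average vote {m:.2f} -> expected seat share {expected_seat_share(v, m):.3f}")
```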

Paradoxical benefits of redistricting

Demonstrates the paradoxical benefits of redistricting, even partisan gerrymandering, to American democracy (as compared to no redistricting) in reducing partisan bias and increasing electoral responsiveness. (Of course, if the symmetry standard were imposed, redistricting by any means would produce less bias than any other arrangement.)
Enhancing Democracy Through Legislative Redistricting
Gelman, Andrew, and Gary King. 1994. Enhancing Democracy Through Legislative Redistricting, American Political Science Review 88: 541–559.Abstract
We demonstrate the surprising benefits of legislative redistricting (including partisan gerrymandering) for American representative democracy. In so doing, our analysis resolves two long-standing controversies in American politics. First, whereas some scholars believe that redistricting reduces electoral responsiveness by protecting incumbents, others, that the relationship is spurious, we demonstrate that both sides are wrong: redistricting increases responsiveness. Second, while some researchers believe that gerrymandering dramatically increases partisan bias and others deny this effect, we show both sides are in a sense correct. Gerrymandering biases electoral systems in favor of the party that controls the redistricting as compared to what would have happened if the other party controlled it, but any type of redistricting reduces partisan bias as compared to an electoral system without redistricting. Incorrect conclusions in both literatures resulted from misjudging the enormous uncertainties present during redistricting periods, making simplified assumptions about the redistricters’ goals, and using inferior statistical methods.
Advantages of Conflictual Redistricting
A shortened, popular version of the previous article. Gelman, Andrew, Gary King, Iain McLean, and David Butler. 1996. Advantages of Conflictual Redistricting, in Fixing the Boundary: Defining and Redefining Single-Member Electoral Districts, 207–218. Aldershot, England: Dartmouth Publishing Company.Abstract
This article describes the results of an analysis we did of state legislative elections in the United States, where each state is required to redraw the boundaries of its state legislative districts every ten years. In the United States, redistrictings are sometimes controlled by the Democrats, sometimes by the Republicans, and sometimes by bipartisan committees, but never by neutral boundary commissions. Our goal was to study the consequences of redistricting and at the conclusion of this article, we discuss how our findings might be relevant to British elections.

Other Districting Systems

Electoral Responsiveness and Partisan Bias in Multiparty Democracies
Unifies existing multi-year seats-votes models as special cases of a new general model, and was the first formalization of, and method for estimating, electoral responsiveness and partisan bias in electoral systems with any number of political parties. King, Gary. 1990. Electoral Responsiveness and Partisan Bias in Multiparty Democracies, Legislative Studies Quarterly XV: 159–181.Abstract
Because the goals of local and national representation are inherently incompatible, there is an uncertain relationship between aggregates of citizen votes and the national allocation of legislative seats in almost all democracies. In particular electoral systems, this uncertainty leads to diverse configurations of electoral responsiveness and partisan bias, two fundamental concepts in empirical democratic theory. This paper unifies virtually all existing multiyear seats-votes models as special cases of a new general model. It also permits the first formalization of, and reliable method for empirically estimating, electoral responsiveness and partisan bias in electoral systems with any number of political parties. I apply this model to data from nine democratic countries, revealing clear patterns in responsiveness and bias across different types of electoral rules.
Measuring the Consequences of Delegate Selection Rules in Presidential Nominations
Formalizes normative criteria used to judge presidential selection contests by modeling the translation of citizen votes in primaries and caucuses into delegates to the national party conventions and reveals the patterns of biases and responsiveness in the Democratic and Republican nomination systems. Ansolabehere, Stephen, and Gary King. 1990. Measuring the Consequences of Delegate Selection Rules in Presidential Nominations, Journal of Politics 52: 609–621.Abstract
In this paper, we formalize existing normative criteria used to judge presidential selection contests by modeling the translation of citizen votes in primaries and caucuses into delegates to the national party conventions. We use a statistical model that enables us to separate the form of electoral responsiveness in presidential selection systems, as well as the degree of bias toward each of the candidates. We find that (1) the Republican nomination system is more responsive to changes in citizen votes than the Democratic system; (2) non-PR primaries are always more responsive than PR primaries; (3) surprisingly, caucuses are more proportional than even primaries held under PR rules; and (4) significant bias in favor of a candidate was a good prediction of the winner of the nomination contest. We also (5) evaluate the claims of Ronald Reagan in 1976 and Jesse Jackson in 1988 that the selection systems were substantially biased against their candidates. We find no evidence to support Reagan’s claim, but substantial evidence that Jackson was correct.
Empirically Evaluating the Electoral College
Evaluates the partisan bias of the electoral college and shows that there is little basis for reform of the system. Changing to a popular vote for the president would not even increase individual voting power. Gelman, Andrew, Jonathan Katz, Gary King, Ann N Crigler, Marion R Just, and Edward J McCaffery. 2004. Empirically Evaluating the Electoral College, in Rethinking the Vote: The Politics and Prospects of American Electoral Reform, 75-88. New York: Oxford University Press.Abstract
The 2000 U.S. presidential election rekindled interest in possible electoral reform. While most of the popular and academic accounts focused on balloting irregularities in Florida, such as the now infamous "butterfly" ballot and mishandled absentee ballots, some also noted that this election marked only the fourth time in history that the candidate with a plurality of the popular vote did not also win the Electoral College. This "anti-democratic" outcome has fueled desire for reform or even outright elimination of the electoral college. We show that after appropriate statistical analysis of the available historical electoral data, there is little basis to argue for reforming the Electoral College. We first show that while the Electoral College may once have been biased against the Democrats, the current distribution of voters advantages neither party. Further, the electoral vote will differ from the popular vote only when the average vote shares of the two major candidates are extremely close to 50 percent. As for individual voting power, we show that while there has been much temporal variation in relative voting power over the last several decades, the voting power of individual citizens would not likely increase under a popular vote system of electing the president.

Software

JudgeIt II: A Program for Evaluating Electoral Systems and Redistricting Plans
Gelman, Andrew, Gary King, and Andrew Thomas. 2010. JudgeIt II: A Program for Evaluating Electoral Systems and Redistricting Plans. Publisher's VersionAbstract
A program for analyzing most any feature of district-level legislative elections data, including prediction, evaluating redistricting plans, estimating counterfactual hypotheses (such as what would happen if a term-limitation amendment were imposed). This implements statistical procedures described in a series of journal articles and has been used during redistricting in many states by judges, partisans, governments, private citizens, and many others. The earlier version was winner of the APSA Research Software Award.

Data

Mortality Studies

Methods for forecasting mortality rates (overall or for time series data cross-classified by age, sex, country, and cause); estimating mortality rates in areas without vital registration; measuring inequality in risk of death; applications to US mortality, the future of Social Security, armed conflict, heart failure, and human security.

Forecasting Mortality

Statistical Security for Social Security
Soneji, Samir, and Gary King. 2012. Statistical Security for Social Security, Demography 49, no. 3: 1037-1060 . Publisher's versionAbstract
The financial viability of Social Security, the single largest U.S. Government program, depends on accurate forecasts of the solvency of its intergenerational trust fund. We begin by detailing information necessary for replicating the Social Security Administration’s (SSA’s) forecasting procedures, which until now has been unavailable in the public domain. We then offer a way to improve the quality of these procedures via age- and sex-specific mortality forecasts. The most recent SSA mortality forecasts were based on the best available technology at the time, which was a combination of linear extrapolation and qualitative judgments. Unfortunately, linear extrapolation excludes known risk factors and is inconsistent with long-standing demographic patterns such as the smoothness of age profiles. Modern statistical methods typically outperform even the best qualitative judgments in these contexts. We show how to use such methods here, enabling researchers to forecast using far more information, such as the known risk factors of smoking and obesity and known demographic patterns. Including this extra information makes a substantial difference: For example, by only improving mortality forecasting methods, we predict three fewer years of net surplus, $730 billion less in Social Security trust funds, and program costs that are 0.66% greater of projected taxable payroll compared to SSA projections by 2031. More important than specific numerical estimates are the advantages of transparency, replicability, reduction of uncertainty, and what may be the resulting lower vulnerability to the politicization of program forecasts. In addition, by offering with this paper software and detailed replication information, we hope to marshal the efforts of the research community to include ever more informative inputs and to continue to reduce the uncertainties in Social Security forecasts. This work builds on our article that provides forecasts of US Mortality rates (see King and Soneji, The Future of Death in America), a book developing improved methods for forecasting mortality (Girosi and King, Demographic Forecasting), all data we used (King and Soneji, replication data sets), and open source software that implements the methods (Girosi and King, YourCast). Also available is a New York Times Op-Ed based on this work (King and Soneji, Social Security: It’s Worse Than You Think), and a replication data set for the Op-Ed (King and Soneji, replication data set).
Understanding the Lee-Carter Mortality Forecasting Method
Girosi, Federico, and Gary King. 2007. Understanding the Lee-Carter Mortality Forecasting Method.Abstract
We demonstrate here several previously unrecognized or insufficiently appreciated properties of the Lee-Carter mortality forecasting approach, the dominant method used in both the academic literature and practical applications. We show that this model is a special case of a considerably simpler, and less often biased, random walk with drift model, and prove that the age profile forecast from both approaches will always become less smooth and unrealistic after a point (when forecasting forward or backwards in time) and will eventually deviate from any given baseline. We use these and other properties we demonstrate to suggest when the model would be most applicable in practice.
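The decomposition being analyzed can be written in a few lines: log mortality is an age profile plus an age pattern times a time index, with the time index forecast as a random walk with drift. The sketch below estimates the pieces by the usual singular value decomposition on a simulated mortality matrix; it illustrates the structure of the method rather than reproducing the paper's analysis.

```python
# Sketch of the Lee-Carter decomposition the paper analyzes: log mortality is
# modeled as an age profile a(x) plus an age pattern b(x) times a time index
# k(t), with k(t) forecast as a random walk with drift. The mortality matrix
# here is simulated only to make the code self-contained.
import numpy as np

rng = np.random.default_rng(6)
ages, years = 20, 40
true_a = np.linspace(-8, -2, ages)
true_b = np.linspace(0.02, 0.08, ages)
true_k = -np.arange(years) + rng.normal(0, 0.5, years)
log_m = true_a[:, None] + true_b[:, None] * true_k[None, :] + rng.normal(0, 0.05, (ages, years))

# Lee-Carter estimation via SVD of the centered log-mortality matrix
a_x = log_m.mean(axis=1)
U, s, Vt = np.linalg.svd(log_m - a_x[:, None], full_matrices=False)
b_x = U[:, 0] / U[:, 0].sum()
k_t = s[0] * Vt[0, :] * U[:, 0].sum()

# Random walk with drift forecast of k(t)
drift = (k_t[-1] - k_t[0]) / (years - 1)
k_future = k_t[-1] + drift * np.arange(1, 21)
log_m_forecast = a_x[:, None] + b_x[:, None] * k_future[None, :]
print(log_m_forecast.shape)   # (ages, 20 forecast years)
```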
Demographic Forecasting
Girosi, Federico, and Gary King. 2008. Demographic Forecasting. Princeton: Princeton University Press.Abstract
We introduce a new framework for forecasting age-sex-country-cause-specific mortality rates that incorporates considerably more information, and thus has the potential to forecast much better, than any existing approach. Mortality forecasts are used in a wide variety of academic fields, and for global and national health policy making, medical and pharmaceutical research, and social security and retirement planning. As it turns out, the tools we developed in pursuit of this goal also have broader statistical implications, in addition to their use for forecasting mortality or other variables with similar statistical properties. First, our methods make it possible to include different explanatory variables in a time series regression for each cross-section, while still borrowing strength from one regression to improve the estimation of all. Second, we show that many existing Bayesian (hierarchical and spatial) models with explanatory variables use prior densities that incorrectly formalize prior knowledge. Many demographers and public health researchers have fortuitously avoided this problem so prevalent in other fields by using prior knowledge only as an ex post check on empirical results, but this approach excludes considerable information from their models. We show how to incorporate this demographic knowledge into a model in a statistically appropriate way. Finally, we develop a set of tools useful for developing models with Bayesian priors in the presence of partial prior ignorance. This approach also provides many of the attractive features claimed by the empirical Bayes approach, but fully within the standard Bayesian theory of inference.
The Future of Death in America
King, Gary, and Samir Soneji. 2011. The Future of Death in America, Demographic Research 25, no. 1: 1-38. WebsiteAbstract
Population mortality forecasts are widely used for allocating public health expenditures, setting research priorities, and evaluating the viability of public pensions, private pensions, and health care financing systems. In part because existing methods seem to forecast worse when based on more information, most forecasts are still based on simple linear extrapolations that ignore known biological risk factors and other prior information. We adapt a Bayesian hierarchical forecasting model capable of including more known health and demographic information than has previously been possible. This leads to the first age- and sex-specific forecasts of American mortality that simultaneously incorporate, in a formal statistical model, the effects of the recent rapid increase in obesity, the steady decline in tobacco consumption, and the well known patterns of smooth mortality age profiles and time trends. Formally including new information in forecasts can matter a great deal. For example, we estimate an increase in male life expectancy at birth from 76.2 years in 2010 to 79.9 years in 2030, which is 1.8 years greater than the U.S. Social Security Administration projection and 1.5 years more than U.S. Census projection. For females, we estimate more modest gains in life expectancy at birth over the next twenty years from 80.5 years to 81.9 years, which is virtually identical to the Social Security Administration projection and 2.0 years less than U.S. Census projections. We show that these patterns are also likely to greatly affect the aging American population structure. We offer an easy-to-use approach so that researchers can include other sources of information and potentially improve on our forecasts too.

Estimating Overall and Cause-Specific Mortality Rates

Inexpensive methods of estimating the overall and cause-specific mortality rates from surveys when vital registration (death certificates) or other monitoring is unavailable or inadequate.

A method for estimating cause-specific mortality from "verbal autopsy" data that is less expensive and more reliable, requires fewer assumptions, and will normally be more accurate.
Estimating Incidence Curves of Several Infections Using Symptom Surveillance Data
Goldstein, Edward, Benjamin J Cowling, Allison E Aiello, Saki Takahashi, Gary King, Ying Lu, and Marc Lipsitch. 2011. Estimating Incidence Curves of Several Infections Using Symptom Surveillance Data, PLoS ONE 6, no. 8: e23380.Abstract
We introduce a method for estimating incidence curves of several co-circulating infectious pathogens, where each infection has its own probabilities of particular symptom profiles. Our deconvolution method utilizes weekly surveillance data on symptoms from a defined population as well as additional data on symptoms from a sample of virologically confirmed infectious episodes. We illustrate this method by numerical simulations and by using data from a survey conducted on the University of Michigan campus. Last, we describe the data needs to make such estimates accurate.
Designing Verbal Autopsy Studies
King, Gary, Ying Lu, and Kenji Shibuya. 2010. Designing Verbal Autopsy Studies, Population Health Metrics 8, no. 19.Abstract
Background: Verbal autopsy analyses are widely used for estimating cause-specific mortality rates (CSMR) in the vast majority of the world without high quality medical death registration. Verbal autopsies -- survey interviews with the caretakers of imminent decedents -- stand in for medical examinations or physical autopsies, which are infeasible or culturally prohibited. Methods and Findings: We introduce methods, simulations, and interpretations that can improve the design of automated, data-derived estimates of CSMRs, building on a new approach by King and Lu (2008). Our results generate advice for choosing symptom questions and sample sizes that is easier to satisfy than existing practices. For example, most prior effort has been devoted to searching for symptoms with high sensitivity and specificity, which has rarely if ever succeeded with multiple causes of death. In contrast, our approach makes this search irrelevant because it can produce unbiased estimates even with symptoms that have very low sensitivity and specificity. In addition, the new method is optimized for survey questions caretakers can easily answer rather than questions physicians would ask themselves. We also offer an automated method of weeding out biased symptom questions and advice on how to choose the number of causes of death, symptom questions to ask, and observations to collect, among others. Conclusions: With the advice offered here, researchers should be able to design verbal autopsy surveys and conduct analyses with greatly reduced statistical biases and research costs.
Deaths From Heart Failure: Using Coarsened Exact Matching to Correct Cause of Death Statistics
Stevens, Gretchen, Gary King, and Kenji Shibuya. 2010. Deaths From Heart Failure: Using Coarsened Exact Matching to Correct Cause of Death Statistics, Population Health Metrics 8, no. 6.Abstract
Background: Incomplete information on death certificates makes recorded cause of death data less useful for public health monitoring and planning. Certifying physicians sometimes list only the mode of death (and in particular, list heart failure) without indicating the underlying disease(s) that gave rise to the death. This can prevent valid epidemiologic comparisons across countries and over time. Methods and Results: We propose that coarsened exact matching be used to infer the underlying causes of death where only the mode of death is known; we focus on the case of heart failure in U.S., Mexican and Brazilian death records. Redistribution algorithms derived using this method assign the largest proportion of heart failure deaths to ischemic heart disease in all three countries (53%, 26% and 22%), with larger proportions assigned to hypertensive heart disease and diabetes in Mexico and Brazil (16% and 23% vs. 7% for hypertensive heart disease and 13% and 9% vs. 6% for diabetes). Reassigning these heart failure deaths increases US ischemic heart disease mortality rates by 6%. Conclusions: The frequency with which physicians list heart failure in the causal chain for various underlying causes of death allows for inference about how physicians use heart failure on the death certificate in different settings. This easy-to-use method has the potential to reduce bias and increase comparability in cause-of-death data, thereby improving the public health utility of death records. Key Words: vital statistics, heart failure, population health, mortality, epidemiology
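For readers unfamiliar with coarsened exact matching, the sketch below shows the basic mechanic: coarsen continuous covariates into substantively chosen bins and treat records falling in the same stratum of coarsened values as matched. The variables, cutpoints, and data are hypothetical and much simpler than those used in the paper.

```python
# A minimal sketch of coarsened exact matching (CEM): coarsen continuous
# covariates into substantively meaningful bins, then treat records as matched
# when they fall in the same stratum of coarsened values. The variables and
# cutpoints here are hypothetical, not those used for the death-certificate
# application.
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
df = pd.DataFrame({
    "age": rng.integers(30, 95, size=1000),
    "sex": rng.choice(["F", "M"], size=1000),
    "treated": rng.integers(0, 2, size=1000),   # e.g., records needing redistribution
})

# Coarsen: age into 10-year bands; sex is already discrete.
df["age_band"] = pd.cut(df["age"], bins=range(20, 101, 10))
strata = df.groupby(["age_band", "sex"], observed=True)

# Keep only strata containing both treated and untreated records.
matched = strata.filter(lambda g: g["treated"].nunique() == 2)
print(f"matched {len(matched)} of {len(df)} records across "
      f"{matched.groupby(['age_band', 'sex'], observed=True).ngroups} strata")
```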
Verbal Autopsy Methods with Multiple Causes of Death
King, Gary, and Ying Lu. 2008. Verbal Autopsy Methods with Multiple Causes of Death, Statistical Science 23: 78–91.Abstract
Verbal autopsy procedures are widely used for estimating cause-specific mortality in areas without medical death certification. Data on symptoms reported by caregivers along with the cause of death are collected from a medical facility, and the cause-of-death distribution is estimated in the population where only symptom data are available. Current approaches analyze only one cause at a time, involve assumptions judged difficult or impossible to satisfy, and require expensive, time consuming, or unreliable physician reviews, expert algorithms, or parametric statistical models. By generalizing current approaches to analyze multiple causes, we show how most of the difficult assumptions underlying existing methods can be dropped. These generalizations also make physician review, expert algorithms, and parametric statistical assumptions unnecessary. With theoretical results, and empirical analyses in data from China and Tanzania, we illustrate the accuracy of this approach. While no method of analyzing verbal autopsy data, including the more computationally intensive approach offered here, can give accurate estimates in all circumstances, the procedure offered is conceptually simpler, less expensive, more general, as or more replicable, and easier to use in practice than existing approaches. We also show how our focus on estimating aggregate proportions, which are the quantities of primary interest in verbal autopsy studies, may also greatly reduce the assumptions necessary, and thus improve the performance of, many individual classifiers in this and other areas. As a companion to this paper, we also offer easy-to-use software that implements the methods discussed herein.
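The core estimation idea can be sketched briefly: the population distribution of symptom profiles equals a hospital-estimated matrix of symptom-given-cause probabilities times the unknown cause-of-death composition, so the latter can be recovered by constrained least squares without classifying any individual death. The code below uses simulated inputs and a plain nonnegative least squares solver; it is a conceptual sketch, not the authors' implementation.

```python
# Sketch of the core estimation idea: the population distribution of symptom
# profiles P(S) equals the hospital-estimated matrix P(S|D) times the unknown
# cause-of-death composition P(D), so P(D) can be recovered by constrained
# least squares without classifying any individual death. Data are simulated;
# real analyses work with subsets of many symptom profiles.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(8)
n_profiles, n_causes = 50, 5
P_S_given_D = rng.dirichlet(np.ones(n_profiles), size=n_causes).T  # columns sum to 1
true_csmf = np.array([0.40, 0.25, 0.15, 0.12, 0.08])
P_S = P_S_given_D @ true_csmf + rng.normal(0, 0.002, n_profiles)   # noisy population profile

csmf_hat, _ = nnls(P_S_given_D, P_S)        # nonnegativity constraint
csmf_hat /= csmf_hat.sum()                  # normalize to a proper distribution
print(np.round(csmf_hat, 3), "vs true", true_csmf)
```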

Armed Conflict as a Public Health Problem
Evidence of the massive selection bias in all data on mortality from war (vital registration systems rarely continue to operate when war begins). Uncertainty in mortality estimates from major wars is as large as the estimates. Murray, Christopher JL, Gary King, Alan D Lopez, Niels Tomijima, and Etienne Krug. 2002. Armed Conflict as a Public Health Problem, BMJ (British Medical Journal) 324: 346–349.Abstract
Armed conflict is a major cause of injury and death worldwide, but we need much better methods of quantification before we can accurately assess its effect. Armed conflict between warring states and groups within states has been a major cause of ill health and mortality for most of human history. Conflict obviously causes deaths and injuries on the battlefield, but it also causes health consequences through the displacement of populations, the breakdown of health and social services, and the heightened risk of disease transmission. Despite the size of the health consequences, military conflict has not received the same attention from public health research and policy as many other causes of illness and death. In contrast, political scientists have long studied the causes of war but have primarily been interested in the decision of elite groups to go to war, not in human death and misery. We review the limited knowledge on the health consequences of conflict, suggest ways to improve measurement, and discuss the potential for risk assessment and for preventing and ameliorating the consequences of conflict.
Death by Survey: Estimating Adult Mortality without Selection Bias from Sibling Survival Data
Unbiased estimates of mortality rates from surveys about sibling and others' survival; explains and reduces biases in existing methods. Gakidou, Emmanuela, and Gary King. 2006. Death by Survey: Estimating Adult Mortality without Selection Bias from Sibling Survival Data, Demography 43: 569–585.Abstract
The widely used methods for estimating adult mortality rates from sample survey responses about the survival of siblings, parents, spouses, and others depend crucially on an assumption that we demonstrate does not hold in real data. We show that when this assumption is violated – so that the mortality rate varies with sibship size – mortality estimates can be massively biased. By using insights from work on the statistical analysis of selection bias, survey weighting, and extrapolation problems, we propose a new and relatively simple method of recovering the mortality rate with both greatly reduced potential for bias and increased clarity about the source of necessary assumptions.
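The selection problem itself is easy to see in a simulation: sibships with no survivors can never appear in a survey of survivors, and if mortality varies with sibship size the naive survey tally is biased. The sketch below demonstrates only the mechanism, with made-up numbers; it does not implement the correction proposed in the article.

```python
# A simulation sketch of the selection problem described here: sibships with
# no survivors can never be reported in a survey of survivors, and if mortality
# risk varies with sibship size the naive survey-based death rate is biased.
# All numbers are made up purely to show the mechanism.
import numpy as np

rng = np.random.default_rng(9)
n_sibships = 50_000
size = rng.integers(1, 7, size=n_sibships)            # sibship sizes 1..6
death_prob = 0.05 + 0.03 * (6 - size)                 # mortality varies with sibship size
deaths = rng.binomial(size, death_prob)
survivors = size - deaths

true_rate = deaths.sum() / size.sum()

# A survey of survivors only observes sibships with at least one survivor,
# and each observed sibship enters the sample (roughly) once per surviving member.
observed = survivors > 0
weight = survivors[observed]
naive_rate = (weight * deaths[observed]).sum() / (weight * size[observed]).sum()

print(f"true death rate {true_rate:.4f} vs naive survey estimate {naive_rate:.4f}")
```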

Uses of Mortality Rates

Rethinking Human Security
Provides a rigorous and measurable definition of human security; discusses the improvements in data collection and methods of forecasting necessary to measure human security; and introduces an agenda to enhance human security that follows logically in the areas of risk assessment, prevention, protection, and compensation. King, Gary, and Christopher JL Murray. 2002. Rethinking Human Security, Political Science Quarterly 116: 585–610.Abstract
In the last two decades, the international community has begun to conclude that attempts to ensure the territorial security of nation-states through military power have failed to improve the human condition. Despite astronomical levels of military spending, deaths due to military conflict have not declined. Moreover, even when the borders of some states are secure from foreign threats, the people within those states do not necessarily have freedom from crime, enough food, proper health care, education, or political freedom. In response to these developments, the international community has gradually moved to combine economic development with military security and other basic human rights to form a new concept of "human security". Unfortunately, by common assent the concept lacks both a clear definition, consistent with the aims of the international community, and any agreed upon measure of it. In this paper, we propose a simple, rigorous, and measurable definition of human security: the expected number of years of future life spent outside the state of "generalized poverty". Generalized poverty occurs when an individual falls below the threshold in any key domain of human well-being. We consider improvements in data collection and methods of forecasting that are necessary to measure human security and then introduce an agenda for research and action to enhance human security that follows logically in the areas of risk assessment, prevention, protection, and compensation.
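The proposed measure can be computed directly once survival probabilities and the age-specific risk of generalized poverty are specified. The sketch below does so for a toy life table with hypothetical poverty risks, purely to show how the definition translates into a number.

```python
# Sketch of the proposed measure: individual human security as the expected
# number of future life-years spent outside "generalized poverty," computed
# here from a toy life table and hypothetical age-specific probabilities of
# being in generalized poverty.
import numpy as np

ages = np.arange(0, 101)
mortality = np.clip(0.001 * np.exp(0.07 * ages), 0, 1)   # toy age-specific death rates
survival = np.concatenate([[1.0], np.cumprod(1 - mortality)[:-1]])
p_poverty = np.clip(0.10 + 0.002 * ages, 0, 1)            # hypothetical poverty risk by age

def years_of_human_security(current_age):
    s = survival[current_age:] / survival[current_age]    # survival from current age
    return np.sum(s * (1 - p_poverty[current_age:]))

print(f"expected secure years at age 20: {years_of_human_security(20):.1f}")
```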
The Effects of International Monetary Fund Loans on Health Outcomes
A Perspective article on the effect of the IMF on increasing tuberculosis mortality rates: Murray, Megan, and Gary King. 2008. The Effects of International Monetary Fund Loans on Health Outcomes, PLoS Medicine 5.Abstract
A "Perspective" article that discusses an article by David Stuckler and colleagues showing that, in Eastern European and former Soviet countries, participation in International Monetary Fund economic programs has been associated with higher mortality rates from tuberculosis.
Statistical Security for Social Security
Soneji, Samir, and Gary King. 2012. Statistical Security for Social Security, Demography 49, no. 3: 1037-1060. Publisher's VersionAbstract
The financial viability of Social Security, the single largest U.S. Government program, depends on accurate forecasts of the solvency of its intergenerational trust fund. We begin by detailing information necessary for replicating the Social Security Administration’s (SSA’s) forecasting procedures, which until now has been unavailable in the public domain. We then offer a way to improve the quality of these procedures via age- and sex-specific mortality forecasts. The most recent SSA mortality forecasts were based on the best available technology at the time, which was a combination of linear extrapolation and qualitative judgments. Unfortunately, linear extrapolation excludes known risk factors and is inconsistent with long-standing demographic patterns such as the smoothness of age profiles. Modern statistical methods typically outperform even the best qualitative judgments in these contexts. We show how to use such methods here, enabling researchers to forecast using far more information, such as the known risk factors of smoking and obesity and known demographic patterns. Including this extra information makes a substantial difference: for example, by improving only the mortality forecasting methods, we predict three fewer years of net surplus, $730 billion less in Social Security trust funds, and program costs that are 0.66% of projected taxable payroll greater than SSA projections by 2031. More important than specific numerical estimates are the advantages of transparency, replicability, reduction of uncertainty, and what may be the resulting lower vulnerability to the politicization of program forecasts. In addition, by offering with this paper software and detailed replication information, we hope to marshal the efforts of the research community to include ever more informative inputs and to continue to reduce the uncertainties in Social Security forecasts. This work builds on our article that provides forecasts of US mortality rates (see King and Soneji, The Future of Death in America), a book developing improved methods for forecasting mortality (Girosi and King, Demographic Forecasting), all data we used (King and Soneji, replication data sets), and open source software that implements the methods (Girosi and King, YourCast). Also available is a New York Times Op-Ed based on this work (King and Soneji, Social Security: It’s Worse Than You Think), and a replication data set for the Op-Ed (King and Soneji, replication data set).
A method to estimate total and within-group inequality in health (all prior research is about mean differences between groups). Gakidou, Emmanuela, and Gary King. 2002. Measuring Total Health Inequality: Adding Individual Variation to Group-Level Differences, BioMed Central: International Journal for Equity in Health 1.Abstract
Background: Studies have revealed large variations in average health status across social, economic, and other groups. No study exists on the distribution of the risk of ill-health across individuals, either within groups or across all people in a society, and as such a crucial piece of total health inequality has been overlooked. Some of the reason for this neglect has been that the risk of death, which forms the basis for most measures, is impossible to observe directly and difficult to estimate. Methods: We develop a measure of total health inequality – encompassing all inequalities among people in a society, including variation between and within groups – by adapting a beta-binomial regression model. We apply it to children under age two in 50 low- and middle-income countries. Our method has been adopted by the World Health Organization and is being implemented in surveys around the world, and preliminary estimates have appeared in the World Health Report (2000). Results: Countries with similar average child mortality differ considerably in total health inequality. Liberia and Mozambique have the largest inequalities in child survival, while Colombia, the Philippines and Kazakhstan have the lowest levels among the countries measured. Conclusions: Total health inequality estimates should be routinely reported alongside average levels of health in populations and groups, as they reveal important policy-related information not otherwise knowable. This approach enables meaningful comparisons of inequality across countries and future analyses of the determinants of inequality.
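As a sketch of the modeling idea (standard beta-binomial regression notation; the article's exact parameterization may differ), suppose unit $i$ (for example, a mother surveyed about her $n_i$ births) reports $d_i$ child deaths. Then

    \[
      d_i \mid \pi_i \sim \mathrm{Binomial}(n_i, \pi_i), \qquad
      \pi_i \sim \mathrm{Beta}\big(\mu_i \gamma,\, (1-\mu_i)\gamma\big), \qquad
      \mathrm{logit}(\mu_i) = x_i^{\top}\beta ,
    \]

so that marginally $d_i$ is beta-binomial with mean $n_i\mu_i$ and extra-binomial variation governed by $\gamma$. Estimating the full distribution of the individual risks $\pi_i$, rather than only the group-level means $\mu_i$, is what makes it possible to measure inequality across all children, within groups as well as between them.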

Teaching and Administration

Publications and other projects designed to improve teaching, learning, and university administration, as well as broader writings on the future of the social sciences.
Restructuring the Social Sciences: Reflections from Harvard’s Institute for Quantitative Social Science
King, Gary. 2014. Restructuring the Social Sciences: Reflections from Harvard’s Institute for Quantitative Social Science, PS: Political Science and Politics 47, no. 1: 165-172. Cambridge University Press versionAbstract
The social sciences are undergoing a dramatic transformation from studying problems to solving them; from making do with a small number of sparse data sets to analyzing increasing quantities of diverse, highly informative data; from isolated scholars toiling away on their own to larger scale, collaborative, interdisciplinary, lab-style research teams; and from a purely academic pursuit to having a major impact on the world. To facilitate these important developments, universities, funding agencies, and governments need to shore up and adapt the infrastructure that supports social science research. We discuss some of these developments here, as well as a new type of organization we created at Harvard to help encourage them: the Institute for Quantitative Social Science. An increasing number of universities are beginning efforts to respond with similar institutions. This paper provides some suggestions for how individual universities might respond and how we might work together to advance social science more generally.
The Troubled Future of Colleges and Universities (with comments from five scholar-administrators)
King, Gary, and Maya Sen. 2013. The Troubled Future of Colleges and Universities (with comments from five scholar-administrators), PS: Political Science and Politics 46, no. 1: 81-113.Abstract
The American system of higher education is under attack by political, economic, and educational forces that threaten to undermine its business model, governmental support, and operating mission. The potential changes are considerably more dramatic and disruptive than what we've already experienced. Traditional colleges and universities urgently need a coherent, thought-out response. Their central role in ensuring the creation, preservation, and distribution of knowledge may be at risk and, as a consequence, so too may be the spectacular progress across fields we have come to expect as a result. Symposium contributors include Henry E. Brady, John Mark Hansen, Gary King, Nannerl O. Keohane, Michael Laver, Virginia Sapiro, and Maya Sen.
How Social Science Research Can Improve Teaching
King, Gary, and Maya Sen. 2013. How Social Science Research Can Improve Teaching, PS: Political Science and Politics 46, no. 3: 621-629.Abstract
We marshal discoveries about human behavior and learning from social science research and show how they can be used to improve teaching and learning. The discoveries are easily stated as three social science generalizations: (1) social connections motivate, (2) teaching teaches the teacher, and (3) instant feedback improves learning. We show how to apply these generalizations via innovations in modern information technology inside, outside, and across university classrooms. We also give concrete examples of these ideas from innovations we have experimented with in our own teaching. See also a video presentation of this talk before the Harvard Board of Overseers.
Ensuring the Data Rich Future of the Social Sciences
King, Gary. 2011. Ensuring the Data Rich Future of the Social Sciences, Science 331 (11 February): 719-721.Abstract
Massive increases in the availability of informative social science data are making dramatic progress possible in analyzing, understanding, and addressing many major societal problems. Yet the same forces pose severe challenges to the scientific infrastructure supporting data sharing, data management, informatics, statistical methodology, and research ethics and policy, and these are collectively holding back progress. I address these changes and challenges and suggest what can be done.
The Changing Evidence Base of Social Science Research
King, Gary. 2009. The Changing Evidence Base of Social Science Research, in The Future of Political Science: 100 Perspectives, ed. Gary King, Kay Schlozman, and Norman Nie. New York: Routledge Press.Abstract
This (two-page) article argues that the evidence base of political science and the related social sciences is beginning an underappreciated but historic change.
The Science of Political Science Graduate Admissions
King, Gary, John M Bruce, and Michael Gilligan. 1993. The Science of Political Science Graduate Admissions, PS: Political Science and Politics XXVI: 772–778.Abstract
As political scientists, we spend much time teaching and doing scholarly research, and more time than we may wish to remember on university committees. However, just as many of us believe that teaching and research are not fundamentally different activities, we also need not use fundamentally different standards of inference when studying government, policy, and politics than when participating in the governance of departments and universities. In this article, we describe our attempts to bring somewhat more systematic methods to the process and policies of graduate admissions.
Publication, Publication
King, Gary. 2006. Publication, Publication, PS: Political Science and Politics 39: 119–125. Publisher's VersionAbstract
I show herein how to write a publishable paper by beginning with the replication of a published article. This strategy seems to work well for class projects in producing papers that ultimately get published, helping to professionalize students into the discipline, and teaching them the scientific norms of the free exchange of academic information. I begin by briefly revisiting the prominent debate on replication our discipline had a decade ago and some of the progress made in data sharing since.
A Proposed Standard for the Scholarly Citation of Quantitative Data
Altman, Micah, and Gary King. 2007. A Proposed Standard for the Scholarly Citation of Quantitative Data, D-Lib Magazine 13. Publisher's VersionAbstract
An essential aspect of science is a community of scholars cooperating and competing in the pursuit of common goals. A critical component of this community is the common language of and the universal standards for scholarly citation, credit attribution, and the location and retrieval of articles and books. We propose a similar universal standard for citing quantitative data that retains the advantages of print citations, adds other components made possible by, and needed due to, the digital form and systematic nature of quantitative data sets, and is consistent with most existing subfield-specific approaches. Although the digital library field includes numerous creative ideas, we limit ourselves to only those elements that appear ready for easy practical use by scientists, journal editors, publishers, librarians, and archivists.