Gary King is the Weatherhead University Professor at Harvard University. He also serves as Director of the Institute for Quantitative Social Science. He and his research group develop and apply empirical methods in many areas of social science research.
Full bio and CV

Research Areas

    • Evaluating Social Security Forecasts
      The accuracy of U.S. Social Security Administration (SSA) demographic and financial forecasts is crucial for the solvency of its Trust Funds, for government programs comprising more than 50% of all federal government expenditures, for industry decision making, and for the evidence base of many scholarly articles. Forecasts are also essential for scoring policy proposals put forward by both political parties. Because SSA makes public little replication information and uses ad hoc, qualitative, and antiquated statistical forecasting methods, no one in or out of government has been able to produce fully independent alternative forecasts or policy scorings. Yet no systematic evaluation of SSA forecasts has ever been published by SSA or anyone else. We show that SSA's forecasting errors were approximately unbiased until about 2000 but then began to grow quickly, with increasingly overconfident uncertainty intervals. Moreover, the errors all turn out to be in the same potentially dangerous direction, each making the Social Security Trust Funds look healthier than they actually are. We also discover the cause of these findings, with evidence from a large number of interviews we conducted with participants at every level of the forecasting and policy processes. We show that SSA's forecasting procedures meet all the conditions that the modern social psychology and statistical literatures demonstrate make bias likely. When those conditions mixed with potent new political forces trying to change Social Security and influence the forecasts, SSA's actuaries hunkered down, trying hard to insulate themselves from the intense political pressure. Unfortunately, this otherwise laudable resistance to undue influence, along with their ad hoc qualitative forecasting models, also led them to miss important changes in the input data, such as retirees living longer, and drawing more benefits, than simple extrapolations predicted. We explain that solving this problem involves (a) removing human judgment where possible by using formal statistical methods -- via the revolution in data science and big data; (b) instituting formal structural procedures when human judgment is required -- via the revolution in social psychological research; and (c) requiring transparency and data sharing to catch errors that slip through -- via the revolution in data sharing and replication. See also an article at Barron's about our work.
    • Incumbency Advantage
      Proof that previously used estimators of electoral incumbency advantage were biased, and a new unbiased estimator. Also, the first systematic demonstration that constituency service by legislators increases the incumbency advantage.
    • Information Control by Authoritarian Governments
      Reverse engineering Chinese information controls -- the most extensive effort to selectively control human expression in the history of the world. We show that this massive effort to slow the flow of information paradoxically also conveys a great deal about the intentions, goals, and actions of the leaders. We downloaded all Chinese social media posts before the government could read and censor them; wrote and posted comments randomly assigned to our categories on hundreds of websites across the country to see what would be censored; set up our own social media website in China; and discovered that the Chinese government fabricates and posts 450 million social media comments a year in the names of ordinary people, and we convinced those posting (and inadvertently even the government) to admit to their activities. We found that the government does not engage on controversial issues (they do not censor criticism or fabricate posts that argue with those who disagree with the government), but they respond on an emergency basis to stop collective action (with censorship, fabricating posts with giant bursts of cheerleading-type distractions, responding to citizen grievances, etc.). They don't care what you think of them or say about them; they only care what you can do.
    • Mexican Health Care Evaluation
      An evaluation of the Mexican Seguro Popular program (designed to extend health insurance and regular and preventive medical care, pharmaceuticals, and health facilities to 50 million uninsured Mexicans), one of the world's largest health policy reforms of the last two decades. Our evaluation features a new design for field experiments that is more robust to the political interventions and implementation errors that have ruined many similar previous efforts; new statistical methods that produce more reliable and efficient results using fewer resources, assumptions, and data; and an implementation of these methods in the largest randomized health policy experiment to date. (See the Harvard Gazette story on this project.)
    • Presidency Research; Voting Behavior
      Resolution of the paradox of why polls are so variable over time during presidential campaigns even though the vote outcome is easily predictable before it starts. Also, a resolution of a key controversy over absentee ballots during the 2000 presidential election; and the methodology of small-n research on executives.
    • Informatics and Data Sharing
      Replication Standards: New standards, protocols, and software for citing, sharing, analyzing, archiving, preserving, distributing, cataloging, translating, disseminating, naming, verifying, and replicating scholarly research data and analyses. Also includes proposals to improve the norms of data sharing and replication in science.
    • International Conflict
      Methods for coding, analyzing, and forecasting international conflict and state failure. Evidence that the causes of conflict, theorized to be important but often found to be small or ephemeral, are indeed tiny for the vast majority of dyads, but are large, stable, and replicable wherever the ex ante probability of conflict is large.
    • Legislative Redistricting
      The definition of partisan symmetry as a standard for fairness in redistricting; methods and software for measuring partisan bias and electoral responsiveness; discussion of U.S. Supreme Court rulings about this work. Evidence that U.S. redistricting reduces bias and increases responsiveness, and that the electoral college is fair; applications to legislatures, primaries, and multiparty systems.
    • Mortality Studies
      Methods for forecasting mortality rates (overall or for time series data cross-classified by age, sex, country, and cause); estimating mortality rates in areas without vital registration; measuring inequality in risk of death; applications to US mortality, the future of Social Security, armed conflict, heart failure, and human security.
    • Teaching and Administration
      Publications and other projects designed to improve teaching, learning, and university administration, as well as broader writings on the future of the social sciences.
    • Automated Text Analysis
      Automated and computer-assisted methods of extracting, organizing, understanding, conceptualizing, and consuming knowledge from massive quantities of unstructured text.
    • Anchoring Vignettes (for interpersonal incomparability)
      Methods for interpersonal incomparability, when respondents (from different cultures, genders, countries, or ethnic groups) understand survey questions in different ways; for developing theoretical definitions of complicated concepts apparently definable only by example (i.e., "you know it when you see it").
    • Causal Inference
      Methods for detecting and reducing model dependence (i.e., when minor model changes produce substantively different inferences) in inferring causal effects and other counterfactuals. Matching methods; "politically robust" and cluster-randomized experimental designs; causal bias decompositions.
    • Event Counts and Durations
      Statistical models to explain or predict how many events occur for each fixed time period, or the time between events. An application to cabinet dissolution in parliamentary democracies that united two previously warring scholarly literatures. Other applications to international relations and U.S. Supreme Court appointments.
    • Ecological Inference
      Inferring individual behavior from group-level data: The first approach to incorporate both unit-level deterministic bounds and cross-unit statistical information, methods for 2x2 and larger tables, Bayesian model averaging, applications to elections, software.
    • Missing Data & Measurement Error
      Statistical methods to accommodate missing information in data sets due to scattered unit nonresponse, missing variables or values, or variables measured with error. Easy-to-use algorithms and software for multiple imputation and multiple overimputation for surveys, time series, and time series cross-sectional data. Applications to electoral, and other compositional, data.
    • Qualitative Research
      How the same unified theory of inference underlies quantitative and qualitative research alike; scientific inference when quantification is difficult or impossible; research design; empirical research in legal scholarship.
    • Rare Events
      How to save 99% of your data collection costs; bias corrections for logistic regression in estimating probabilities and causal effects in rare events data; estimating base probabilities or any quantity from case-control data; automated coding of events.
    • Survey Research
      How surveys work and a variety of methods to use with surveys. Surveys for estimating death rates, why election polls are so variable when the vote is so predictable, and health inequality.
    • Unifying Statistical Analysis
      Development of a unified approach to statistical modeling, inference, interpretation, presentation, analysis, and software; integrated with most of the other projects listed here.

Recent Papers

A Unified Model of Cabinet Dissolution in Parliamentary Democracies

Gary King, James Alt, Nancy Burns, and Michael Laver. 1990. “A Unified Model of Cabinet Dissolution in Parliamentary Democracies.” American Journal of Political Science, 34, Pp. 846–871.
The literature on cabinet duration is split between two apparently irreconcilable positions. The attributes theorists seek to explain cabinet duration as a fixed function of measured explanatory variables, while the events process theorists model cabinet durations as a product of purely stochastic processes. In this paper we build a unified statistical model that combines the insights of these previously distinct approaches. We also generalize this unified model, and all previous models, by including (1) a stochastic component that takes into account the censoring that occurs as a result of governments lasting to the vicinity of the maximum constitutional interelection period, (2) a systematic component that precludes the possibility of negative duration predictions, and (3) a much more objective and parsimonious list of explanatory variables, the explanatory power of which would not be improved by including a list of indicator variables for individual countries.
Read more
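
The class of model described in the abstract can be made concrete with a small example. The sketch below is illustrative only, not the paper's exact specification: it fits a censored exponential duration model in which the expected duration is exp(x'b), which can never be negative, and cabinets that survive to the constitutional maximum are treated as censored observations. The covariate, coefficients, and 48-month ceiling are all simulated assumptions.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(5)
n = 300
X = np.column_stack([np.ones(n), rng.normal(size=n)])        # attributes of each cabinet
beta_true = np.array([3.0, 0.4])
durations = rng.exponential(np.exp(X @ beta_true))            # latent durations in months
ceiling = 48.0                                                # constitutional maximum (illustrative)
observed = np.minimum(durations, ceiling)
failed = (durations < ceiling).astype(float)                  # 0 = censored at the ceiling

def neg_loglik(beta):
    mu = np.exp(X @ beta)                                     # E[duration | x], strictly positive
    # log density for observed dissolutions, log survival for censored cabinets
    return -(failed * (-np.log(mu) - observed / mu) + (1 - failed) * (-observed / mu)).sum()

fit = minimize(neg_loglik, x0=np.array([1.0, 0.0]), method="BFGS")
print(fit.x)                                                  # estimates of the systematic component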

Variance Specification in Event Count Models: From Restrictive Assumptions to a Generalized Estimator

Gary King. 1989. “Variance Specification in Event Count Models: From Restrictive Assumptions to a Generalized Estimator.” American Journal of Political Science, 33, Pp. 762–784.
This paper discusses the problem of variance specification in models for event count data. Event counts are dependent variables that can take on only nonnegative integer values, such as the number of wars or coups d’etat in a year. I discuss several generalizations of the Poisson regression model, presented in King (1988), to allow for substantively interesting stochastic processes that do not fit into the Poisson framework. Individual models that cope with, and help analyze, heterogeneity, contagion, and negative contagion are each shown to lead to specific statistical models for event count data. In addition, I derive a new generalized event count (GEC) model that enables researchers to extract significant amounts of new information from existing data by estimating features of these unobserved substantive processes. Applications of this model to congressional challenges of presidential vetoes and superpower conflict demonstrate the dramatic advantages of this approach.
Read more
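
The paper's generalized event count (GEC) estimator is not reproduced here, but the baseline it generalizes is easy to sketch: a Poisson regression fit by maximum likelihood, followed by a crude check of the equidispersion assumption (variance equal to the mean) that heterogeneity and contagion violate. The data below are simulated.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])         # intercept + one covariate
beta_true = np.array([0.2, 0.5])
y = rng.poisson(np.exp(X @ beta_true))                        # simulated event counts

def neg_loglik(beta):
    mu = np.exp(X @ beta)                                     # E[y_i | x_i]
    return -(y * np.log(mu) - mu).sum()                       # Poisson log-likelihood (up to a constant)

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
mu_hat = np.exp(X @ fit.x)

# Crude overdispersion check: Pearson statistic divided by residual degrees of
# freedom. Values well above 1 suggest heterogeneity or contagion, which
# motivates models beyond the Poisson, such as the GEC model in the paper.
dispersion = ((y - mu_hat) ** 2 / mu_hat).sum() / (n - X.shape[1])
print(fit.x, dispersion)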

A Seemingly Unrelated Poisson Regression Model

Gary King. 1989. “A Seemingly Unrelated Poisson Regression Model.” Sociological Methods and Research, 17, Pp. 235–255.
This article introduces a new estimator for the analysis of two contemporaneously correlated endogenous event count variables. This seemingly unrelated Poisson regression model (SUPREME) estimator combines the efficiencies created by single equation Poisson regression model estimators and insights from "seemingly unrelated" linear regression models.
Read more

Estimating Incumbency Advantage Without Bias

Andrew Gelman and Gary King. 1990. “Estimating Incumbency Advantage Without Bias.” American Journal of Political Science, 34, Pp. 1142–1164.
In this paper we prove theoretically and demonstrate empirically that all existing measures of incumbency advantage in the congressional elections literature are biased or inconsistent. We then provide an unbiased estimator based on a very simple linear regression model. We apply this new method to congressional elections since 1900, providing the first evidence of a positive incumbency advantage in the first half of the century.
Read more
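
The "very simple linear regression model" mentioned in the abstract can be illustrated with a hedged sketch (consult the paper for the exact specification and coding conventions): regress the district vote share on its lagged value, the party holding the seat, and an incumbency indicator, and read the incumbency coefficient as the estimated advantage. All variable names and data below are illustrative simulations.

import numpy as np

rng = np.random.default_rng(1)
n = 400
v_prev = rng.uniform(0.3, 0.7, size=n)                        # vote share in the previous election
party = np.where(v_prev > 0.5, 1.0, -1.0)                     # +1 Democratic seat, -1 Republican seat
incumbent = party * rng.binomial(1, 0.8, size=n)              # +1/-1 if an incumbent runs, 0 for open seats
v_now = 0.1 + 0.8 * v_prev + 0.02 * party + 0.06 * incumbent + rng.normal(0, 0.05, n)

X = np.column_stack([np.ones(n), v_prev, party, incumbent])
coef, *_ = np.linalg.lstsq(X, v_now, rcond=None)
print("estimated incumbency advantage:", coef[3])             # ~0.06 in this simulation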

Stochastic Variation: A Comment on Lewis-Beck and Skalaban’s ’The R-Square’

Gary King. 1991. “Stochastic Variation: A Comment on Lewis-Beck and Skalaban’s ’The R-Square’.” Political Analysis, 2, Pp. 185–200.
In an interesting and provocative article, Michael Lewis-Beck and Andrew Skalaban make an important contribution by emphasizing several philosophical issues in political methodology that have received too little attention from methodologists and quantitative researchers. These issues involve the role of systematic, and especially stochastic, variation in statistical models. After briefly discussing a few points of disagreement, hoping to reduce them to points of clarification, I turn to the philosophical issues. Examples with real data follow.
Read more

Estimating the Electoral Consequences of Legislative Redistricting

Andrew Gelman and Gary King. 1990. “Estimating the Electoral Consequences of Legislative Redistricting.” Journal of the American Statistical Association, 85, Pp. 274–282.
We analyze the effects of redistricting as revealed in the votes received by the Democratic and Republican candidates for state legislature. We develop measures of partisan bias and the responsiveness of the composition of the legislature to changes in statewide votes. Our statistical model incorporates a mixed hierarchical Bayesian and non-Bayesian estimation, requiring simulation along the lines of Tanner and Wong (1987). This model provides reliable estimates of partisan bias and responsiveness along with measures of their variabilities from only a single year of electoral data. This allows one to distinguish systematic changes in the underlying electoral system from typical election-to-election variability.
Read more
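
A much simpler illustration of the two quantities, and not the hierarchical Bayesian model used in the paper, is the classic bilogit seats-votes setup: the logit of the expected seat share is treated as roughly linear in the logit of the statewide vote share, the slope is read as electoral responsiveness, and the predicted seat share at 50% of the vote reveals partisan bias. The sketch below uses simulated election years.

import numpy as np

def logit(p):
    return np.log(p / (1 - p))

rng = np.random.default_rng(2)
votes = rng.uniform(0.42, 0.58, size=30)                       # Democratic share of the statewide vote
seats = 1 / (1 + np.exp(-(0.15 + 2.0 * logit(votes))))         # simulated seat shares: bias 0.15, responsiveness 2
seats = np.clip(seats + rng.normal(0, 0.02, size=30), 0.01, 0.99)

X = np.column_stack([np.ones(30), logit(votes)])
intercept, slope = np.linalg.lstsq(X, logit(seats), rcond=None)[0]
print("responsiveness (slope):", slope)
print("seat share at 50% of the vote:", 1 / (1 + np.exp(-intercept)))   # above 0.5 indicates pro-Democratic bias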

Measuring the Consequences of Delegate Selection Rules in Presidential Nominations

Stephen Ansolabehere and Gary King. 1990. “Measuring the Consequences of Delegate Selection Rules in Presidential Nominations.” Journal of Politics, 52, Pp. 609–621.
In this paper, we formalize existing normative criteria used to judge presidential selection contests by modeling the translation of citizen votes in primaries and caucuses into delegates to the national party conventions. We use a statistical model that enables us to separate the form of electoral responsiveness in presidential selection systems, as well as the degree of bias toward each of the candidates. We find that (1) the Republican nomination system is more responsive to changes in citizen votes than the Democratic system, (2) non-PR primaries are always more responsive than PR primaries, (3) surprisingly, caucuses are more proportional than even primaries held under PR rules, and (4) significant bias in favor of a candidate was a good prediction of the winner of the nomination contest. We also (5) evaluate the claims of Ronald Reagan in 1976 and Jesse Jackson in 1988 that the selection systems were substantially biased against their candidacies. We find no evidence to support Reagan’s claim, but substantial evidence that Jackson was correct.
Read more
All writings

Presentations

Simplifying Matching Methods for Causal Inference, at University of Pennsylvania, APPC, Friday, April 1, 2016:

In this talk, Gary King introduces methods of matching for causal inference that are simpler, more powerful, and easier to understand than prior approaches. Software is available to implement everything discussed. Copies of some of his papers on the subject are available at his web site GaryKing.org.
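
The talk description does not spell out the estimators, but one of the simpler matching approaches associated with King and coauthors, coarsened exact matching, can be sketched in a few lines: coarsen each covariate into bins, match exactly on the bins, and drop strata that lack both treated and control units. The sketch below is a toy illustration with simulated data and arbitrary bin widths, not the software mentioned in the talk; in practice, within-stratum weights are then used when estimating the treatment effect.

import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
n = 1000
age = rng.uniform(18, 80, size=n)
income = rng.lognormal(10, 0.5, size=n)
treated = rng.binomial(1, 0.3, size=n)

# Coarsen: assign each unit to a stratum defined by covariate bins.
strata = defaultdict(list)
for i in range(n):
    key = (int(age[i] // 10), int(np.log(income[i]) // 0.5))
    strata[key].append(i)

# Prune: keep only strata containing at least one treated and one control unit.
matched_strata = 0
keep = []
for units in strata.values():
    has_t = any(treated[i] == 1 for i in units)
    has_c = any(treated[i] == 0 for i in units)
    if has_t and has_c:
        matched_strata += 1
        keep.extend(units)

print(f"kept {len(keep)} of {n} units across {matched_strata} matched strata")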

Discovering and Explaining Systematic Bias and Nontransparency in US Social Security Administration Forecasts, at University of Florida, Department of Political Science, Friday, March 18, 2016:

The accuracy of U.S. Social Security Administration (SSA) demographic and financial forecasts is crucial for the solvency of its Trust Funds, government programs comprising greater than 50% of all federal government expenditures, industry decision making, and the evidence base of many scholarly articles. Forecasts are also essential for scoring policy proposals put forward by both political parties or anyone else. Because SSA makes public little replication information, and uses ad hoc, qualitative, and antiquated statistical forecasting methods, no one in or out of government has...

Read more about Discovering and Explaining Systematic Bias and Nontransparency in US Social Security Administration Forecasts
Big Data is Not About the Data!, at University of Florida, Informatics Symposium, Thursday, March 17, 2016:

In this talk, Gary King explains that the spectacular progress the media describes as "big data" has little to do with the data.  Data, after all, is becoming commoditized, less expensive, and an automatic byproduct of other changes in organizations and society. More data alone doesn't generate insights; it often just makes data analysis harder. The real revolution isn't about the data, it is about the stunning progress in the statistical methods of extracting insights from the data. He will illustrate these points...

Read more about Big Data is Not About the Data!
Why Propensity Scores Should Not Be Used For Matching, at Yale University, MacMillan-CSAP Workshop on Quantitative Research Methods, Thursday, March 10, 2016:

This talk summarizes a paper -- Gary King and Richard Nielsen. 2016. “Why Propensity Scores Should Not Be Used for Matching” -- with this abstract:  Researchers use propensity score matching (PSM) as a data preprocessing step to selectively prune units prior to applying a model to estimate a causal effect. The goal of PSM is to reduce imbalance in the chosen pre-treatment covariates between the treated and control groups, thereby reducing the...

Read more about Why Propensity Scores Should Not Be Used For Matching
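
For reference, the propensity score matching pipeline the talk critiques can be sketched as follows, assuming scikit-learn is available: estimate propensity scores with a logistic regression, prune to one-to-one nearest-neighbor matched pairs within a caliper, and estimate the effect on the matched sample. This is the procedure under criticism, not the authors' recommended alternative, and the data are simulated.

import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)
n = 1000
X = rng.normal(size=(n, 3))                                   # pre-treatment covariates
treated = rng.binomial(1, 1 / (1 + np.exp(-X[:, 0])))         # treatment depends on covariates
y = X @ np.array([1.0, 0.5, -0.5]) + 2.0 * treated + rng.normal(size=n)

# Step 1: estimate propensity scores.
ps = LogisticRegression().fit(X, treated).predict_proba(X)[:, 1]

# Step 2: greedy 1-to-1 nearest-neighbor matching on the propensity score,
# without replacement, within a caliper.
t_idx = np.where(treated == 1)[0]
c_idx = list(np.where(treated == 0)[0])
pairs = []
for i in t_idx:
    if not c_idx:
        break
    j = min(c_idx, key=lambda k: abs(ps[i] - ps[k]))
    if abs(ps[i] - ps[j]) < 0.05:                             # caliper
        pairs.append((i, j))
        c_idx.remove(j)

# Step 3: estimate the effect on the pruned sample (difference in means here).
ti = [i for i, _ in pairs]
ci = [j for _, j in pairs]
print("matched pairs:", len(pairs), " estimated effect:", y[ti].mean() - y[ci].mean())
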
All presentations

Books

All writings
