Gary King is the Weatherhead University Professor at Harvard University. He also serves as Director of the Institute for Quantitative Social Science. He and his research group develop and apply empirical methods in many areas of social science research.

Research Areas

    • Evaluating Social Security Forecasts
      The accuracy of U.S. Social Security Administration (SSA) demographic and financial forecasts is crucial for the solvency of its Trust Funds, for government programs comprising more than 50% of all federal expenditures, for industry decision making, and for the evidence base of many scholarly articles. Forecasts are also essential for scoring policy proposals put forward by both political parties. Because SSA makes little replication information public, and uses ad hoc, qualitative, and antiquated statistical forecasting methods, no one in or out of government has been able to produce fully independent alternative forecasts or policy scorings. Yet no systematic evaluation of SSA forecasts has ever been published, by SSA or anyone else. We show that SSA's forecasting errors were approximately unbiased until about 2000 but then began to grow quickly, with increasingly overconfident uncertainty intervals. Moreover, the errors all turn out to be in the same potentially dangerous direction, each making the Social Security Trust Funds look healthier than they actually are. We also uncover the cause of these findings, with evidence from a large number of interviews we conducted with participants at every level of the forecasting and policy processes. We show that SSA's forecasting procedures meet all the conditions that the modern social-psychology and statistical literatures show make bias likely. When those conditions combined with potent new political forces trying to change Social Security and influence the forecasts, SSA's actuaries hunkered down, trying hard to insulate themselves from the intense political pressures. Unfortunately, this otherwise laudable resistance to undue influence, along with their ad hoc qualitative forecasting models, also led them to miss important changes in the input data, such as retirees living longer, and drawing more benefits, than simple extrapolations predicted.
      We explain that solving this problem involves (a) removing human judgment where possible, by using formal statistical methods -- via the revolution in data science and big data; (b) instituting formal structural procedures when human judgment is required -- via the revolution in social-psychological research; and (c) requiring transparency and data sharing to catch errors that slip through -- via the revolution in data sharing and replication. An article at Barron's discusses our work.
    • Incumbency Advantage
      Proof that previously used estimators of electoral incumbency advantage were biased, and a new unbiased estimator. Also, the first systematic demonstration that constituency service by legislators increases the incumbency advantage.
    • Information Control by Authoritarian Governments
      Reverse engineering Chinese information controls -- the most extensive effort to selectively control human expression in the history of the world. We show that this massive effort to slow the flow of information paradoxically also conveys a great deal about the intentions, goals, and actions of the leaders. We downloaded all Chinese social media posts before the government could read and censor them; wrote and posted comments, randomly assigned to our categories, on hundreds of websites across the country to see what would be censored; set up our own social media website in China; discovered that the Chinese government fabricates and posts 450 million social media comments a year in the names of ordinary people; and convinced those posting (and inadvertently even the government) to admit to their activities. We found that the government does not engage on controversial issues (it does not censor criticism or fabricate posts that argue with those who disagree with the government), but it responds on an emergency basis to stop collective action (with censorship, fabricated posts delivering giant bursts of cheerleading-type distractions, responses to citizen grievances, etc.). They don't care what you think of them or say about them; they only care what you can do.
    • Mexican Health Care Evaluation
      An evaluation of the Mexican Seguro Popular program (designed to extend health insurance and regular and preventive medical care, pharmaceuticals, and health facilities to 50 million uninsured Mexicans), one of the world's largest health policy reforms of the last two decades. Our evaluation features a new design for field experiments that is more robust to the political interventions and implementation errors that have ruined many similar previous efforts; new statistical methods that produce more reliable and efficient results using fewer resources, assumptions, and data; and an implementation of these methods in the largest randomized health policy experiment to date. (See the Harvard Gazette story on this project.)
    • Presidency Research; Voting Behavior
      Resolution of the paradox of why polls are so variable over time during presidential campaigns even though the vote outcome is easily predictable before it starts. Also, a resolution of a key controversy over absentee ballots during the 2000 presidential election; and the methodology of small-n research on executives.
    • Informatics and Data Sharing
      Replication standards: new standards, protocols, and software for citing, sharing, analyzing, archiving, preserving, distributing, cataloging, translating, disseminating, naming, verifying, and replicating scholarly research data and analyses. Also includes proposals to improve the norms of data sharing and replication in science.
    • International Conflict
      Methods for coding, analyzing, and forecasting international conflict and state failure. Evidence that the causes of conflict, theorized to be important but often found to be small or ephemeral, are indeed tiny for the vast majority of dyads, but are large, stable, and replicable wherever the ex ante probability of conflict is large.
    • Legislative Redistricting
      The definition of partisan symmetry as a standard for fairness in redistricting; methods and software for measuring partisan bias and electoral responsiveness; discussion of U.S. Supreme Court rulings about this work. Evidence that U.S. redistricting reduces bias and increases responsiveness, and that the electoral college is fair; applications to legislatures, primaries, and multiparty systems.
    • Mortality Studies
      Methods for forecasting mortality rates (overall or for time series data cross-classified by age, sex, country, and cause); estimating mortality rates in areas without vital registration; measuring inequality in risk of death; applications to U.S. mortality, the future of Social Security, armed conflict, heart failure, and human security.
    • Teaching and Administration
      Publications and other projects designed to improve teaching, learning, and university administration, as well as broader writings on the future of the social sciences.
    • Anchoring Vignettes (for interpersonal incomparability)
      Methods for addressing interpersonal incomparability, which arises when respondents (from different cultures, genders, countries, or ethnic groups) understand survey questions in different ways; also methods for developing theoretical definitions of complicated concepts apparently definable only by example (i.e., "you know it when you see it").
    • Automated Text Analysis
      Automated and computer-assisted methods of extracting, organizing, understanding, conceptualizing, and consuming knowledge from massive quantities of unstructured text.
    • Causal Inference
      Methods for detecting and reducing model dependence (i.e., when minor model changes produce substantively different inferences) in inferring causal effects and other counterfactuals. Matching methods; "politically robust" and cluster-randomized experimental designs; causal bias decompositions.
    • Event Counts and Durations
      Statistical models to explain or predict how many events occur in each fixed time period, or the time between events. An application to cabinet dissolution in parliamentary democracies that united two previously warring scholarly literatures. Other applications to international relations and U.S. Supreme Court appointments.
    • Ecological Inference
      Inferring individual behavior from group-level data: The first approach to incorporate both unit-level deterministic bounds and cross-unit statistical information, methods for 2x2 and larger tables, Bayesian model averaging, applications to elections, software.
    • Missing Data & Measurement Error
      Statistical methods to accommodate missing information in data sets due to scattered unit nonresponse, missing variables, or values or variables measured with error. Easy-to-use algorithms and software for multiple imputation and multiple overimputation for surveys, time series, and time series cross-sectional data. Applications to electoral, and other compositional, data.
    • Qualitative Research
      How the same unified theory of inference underlies quantitative and qualitative research alike; scientific inference when quantification is difficult or impossible; research design; empirical research in legal scholarship.
    • Rare Events
      How to save 99% of your data collection costs; bias corrections for logistic regression in estimating probabilities and causal effects in rare events data; estimating base probabilities or any quantity from case-control data; automated coding of events.
    • Survey Research
      How surveys work and a variety of methods to use with surveys. Surveys for estimating death rates, why election polls are so variable when the vote is so predictable, and health inequality.
    • Unifying Statistical Analysis
      Development of a unified approach to statistical modeling, inference, interpretation, presentation, analysis, and software; integrated with most of the other projects listed here.
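The unit-level deterministic bounds mentioned under Ecological Inference are easy to make concrete. In a 2x2 ecological table where x is a group's population share in a district and t is the district's overall turnout, the accounting identity t = beta*x + gamma*(1 - x), with both group-specific rates confined to [0, 1], bounds beta before any statistical model is applied. A minimal sketch (function and variable names are ours, for illustration only, not from any published software):

```python
def turnout_bounds(x, t):
    """Deterministic (Duncan-Davis-style) bounds on beta, the turnout
    rate of a group with population share x, given overall turnout t.

    Accounting identity: t = beta * x + gamma * (1 - x), where beta and
    gamma (the other group's rate) are both constrained to [0, 1].
    """
    if not (0 < x <= 1 and 0 <= t <= 1):
        raise ValueError("x must be in (0, 1], t in [0, 1]")
    lower = max(0.0, (t - (1.0 - x)) / x)  # gamma at its maximum, 1
    upper = min(1.0, t / x)                # gamma at its minimum, 0
    return lower, upper

# Example: a district that is 50% Black with 60% overall turnout
lo, hi = turnout_bounds(0.5, 0.6)  # beta is bounded to [0.2, 1.0]
```

Statistical ecological-inference methods then combine these certain, unit-level bounds with cross-unit information to narrow the estimate further.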

Recent Papers

Constituency Service and Incumbency Advantage

Gary King. 1991. “Constituency Service and Incumbency Advantage.” British Journal of Political Science, 21, Pp. 119–128. Abstract:
This Note addresses the long-standing discrepancy between scholarly support for the effect of constituency service on incumbency advantage and a large body of contradictory empirical evidence. I show first that many of the methodological problems noticed in past research reduce to a single methodological problem that is readily resolved. The core of this Note then provides among the first systematic empirical evidence for the constituency service hypothesis. Specifically, an extra $10,000 added to the budget of the average state legislator gives this incumbent an additional 1.54 percentage points in the next election (with a 95% confidence interval of 1.14 to 1.94 percentage points).
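The interval reported in the abstract is a standard normal-approximation 95% confidence interval, which implies a standard error of about 0.204 (our back-calculation, not a figure from the article). A quick check of the arithmetic:

```python
est = 1.54                   # effect of +$10,000 (percentage points)
lo, hi = 1.14, 1.94          # reported 95% confidence interval
se = (hi - lo) / (2 * 1.96)  # implied standard error, about 0.204
ci = (est - 1.96 * se, est + 1.96 * se)
# ci recovers (1.14, 1.94) up to floating-point rounding
```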

The Methodology of Presidential Research

Gary King, George Edwards III, Bert A Rockman, and John H Kessel. 1993. “The Methodology of Presidential Research.” In Researching the Presidency: Vital Questions, New Approaches, Pp. 387–412. Pittsburgh: University of Pittsburgh Press. Abstract:
The original purpose of the paper this chapter was based on was to use the Presidency Research Conference’s first-round papers -- by John H. Aldrich, Erwin C. Hargrove, Karen M. Hult, Paul Light, and Richard Rose -- as my "data." My given task was to analyze the literature ably reviewed by these authors and report what political methodology might have to say about presidency research. I focus in this chapter on the traditional presidency literature, emphasizing research on the president and the office. For the most part, I do not consider research on presidential selection, election, and voting behavior, which has been much more similar to other fields in American politics.

Why are American Presidential Election Campaign Polls so Variable when Votes are so Predictable?

Andrew Gelman and Gary King. 1993. “Why are American Presidential Election Campaign Polls so Variable when Votes are so Predictable?” British Journal of Political Science, 23, Pp. 409–451. Abstract:

As most political scientists know, the outcome of the U.S. Presidential election can be predicted within a few percentage points (in the popular vote), based on information available months before the election. Thus, the general election campaign for president seems irrelevant to the outcome (except in very close elections), despite all the media coverage of campaign strategy. However, it is also well known that the pre-election opinion polls can vary wildly over the campaign, and this variation is generally attributed to events in the campaign. How can campaign events affect people’s opinions on whom they plan to vote for, and yet not affect the outcome of the election? For that matter, why do voters consistently increase their support for a candidate during his nominating convention, even though the conventions are almost entirely predictable events whose effects can be rationally forecast? In this exploratory study, we consider several intuitively appealing, but ultimately wrong, resolutions to this puzzle, and discuss our current understanding of what causes opinion polls to fluctuate and yet reach a predictable outcome. Our evidence is based on graphical presentation and analysis of over 67,000 individual-level responses from forty-nine commercial polls during the 1988 campaign and many other aggregate poll results from the 1952–1992 campaigns. We show that responses to pollsters during the campaign are not generally informed or even, in a sense we describe, "rational." In contrast, voters decide which candidate to eventually support based on their enlightened preferences, as formed by the information they have learned during the campaign, as well as basic political cues such as ideology and party identification. We cannot prove this conclusion, but we do show that it is consistent with the aggregate forecasts and individual-level opinion poll responses. 
Based on the enlightened preferences hypothesis, we conclude that the news media have an important effect on the outcome of Presidential elections -- not due to misleading advertisements, sound bites, or spin doctors, but rather by conveying candidates’ positions on important issues.


The Science of Political Science Graduate Admissions

Gary King, John M Bruce, and Michael Gilligan. 1993. “The Science of Political Science Graduate Admissions.” PS: Political Science and Politics, XXVI, Pp. 772–778. Abstract:

As political scientists, we spend much time teaching and doing scholarly research, and more time than we may wish to remember on university committees. However, just as many of us believe that teaching and research are not fundamentally different activities, we also need not use fundamentally different standards of inference when studying government, policy, and politics than when participating in the governance of departments and universities. In this article, we describe our attempts to bring somewhat more systematic methods to the process and policies of graduate admissions.


On Party Platforms, Mandates, and Government Spending

Gary King and Michael Laver. 1993. “On Party Platforms, Mandates, and Government Spending.” American Political Science Review, 87, Pp. 744–750. Abstract:

In their 1990 Review article, Ian Budge and Richard Hofferbert analyzed the relationship between party platform emphases, control of the White House, and national government spending priorities, reporting strong evidence of a "party mandate" connection between them. Gary King and Michael Laver successfully replicate the original analysis, critique the interpretation of the causal effects, and present a reanalysis showing that platforms have small or nonexistent effects on spending. In response, Budge, Hofferbert, and Michael McDonald agree that their language was somewhat inconsistent on both interactions and causality but defend their conceptualization of "mandates" as involving only an association, not necessarily a causal connection, between party commitments and government policy. Hence, while the causes of government policy are of interest, noncausal associations are sufficient as evidence of party mandates in American politics.


Transfers of Governmental Power: The Meaning of Time Dependence

James E Alt and Gary King. 1994. “Transfers of Governmental Power: The Meaning of Time Dependence.” Comparative Political Studies, 27, Pp. 190–210. Abstract:
King, Alt, Burns, and Laver (1990) proposed and estimated a unified model in which cabinet durations depended on seven explanatory variables reflecting features of the cabinets and the bargaining environments in which they formed, along with a stochastic component in which the risk of a cabinet falling was treated as a constant across its tenure. Two recent research reports take issue with one aspect of this model. Warwick and Easton replicate the earlier findings for explanatory variables but claim that the stochastic risk should be seen as rising, and at a rate which varies, across the life of the cabinet. Bienen and van de Walle, using data on the duration of leaders, allege that random risk is falling. We continue in our goal of unifying this literature by providing further estimates with both cabinet and leader duration data that confirm the original explanatory variables’ effects, showing that leaders’ durations are affected by many of the same factors that affect the durability of the cabinets they lead, demonstrating that cabinets have stochastic risk of ending that is indeed constant across the theoretically most interesting range of durations, and suggesting that stochastic risk for leaders in countries with cabinet government is, if not constant, more likely to rise than fall.
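The "constant risk" assumption debated in this exchange is the defining property of the exponential duration model: a constant hazard lambda implies durations with mean 1/lambda, and the maximum-likelihood estimate of lambda is the number of completed spells divided by total observed duration. A minimal sketch of that estimate (the example durations are made up for illustration, not data from the article):

```python
def constant_hazard_mle(durations):
    """MLE of the hazard rate under a constant-hazard (exponential)
    duration model: lambda_hat = n / sum(t_i) = 1 / mean(t_i)."""
    if not durations:
        raise ValueError("need at least one completed duration")
    return len(durations) / sum(durations)

# Hypothetical cabinet durations in years
lam = constant_hazard_mle([2.0, 4.0, 6.0])  # 3 / 12 = 0.25 per year
```

The rival claims (rising or falling risk) correspond to Weibull-type models in which the hazard varies over the spell rather than staying at this single constant rate.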


How the news media activate public expression and influence national agendas, at St. Louis Area Methods Meeting (SLAMM), Iowa State University, Friday, April 20, 2018:
This talk reports the results of the first large-scale randomized news media experiment. We demonstrate that even small news media outlets can cause large numbers of Americans to take public stands on specific issues, join national policy conversations, and express themselves publicly—all key components of democratic politics—more often than they would otherwise. After recruiting 48 mostly small media outlets, and working with them over 5 years, we chose groups of these outlets to write and publish articles on subjects we...
How the news media activate public expression and influence national agendas, Friday, February 16, 2018:
This talk reports the results of the first large-scale randomized news media experiment. We demonstrate that even small news media outlets can cause large numbers of Americans to take public stands on specific issues, join national policy conversations, and express themselves publicly—all key components of democratic politics—more often than they would otherwise. After recruiting 48 mostly small media outlets, and working with them over 5 years, we chose groups of these outlets to write and publish articles on subjects we approved, on dates we randomly assigned. We estimate the causal effect...
How to Measure Legislative District Compactness If You Only Know it When You See it, at Stony Brook University, Institute for Advanced Computational Science, Thursday, February 15, 2018:
To prevent gerrymandering and to encourage a form of democratic representation, many state constitutions and judicial opinions require US legislative districts be "compact." Yet, few precise definitions are offered other than "you know it when you see it," effectively assuming the existence of a common understanding of the concept. In contrast, academics have concluded that the concept has multiple theoretical dimensions requiring large numbers of conflicting empirical measures. This has proved extremely challenging for courts tasked with adjudicating compactness. We hypothesize that both are...
Matching Methods for Causal Inference, at Microsoft, Cambridge, Friday, January 19, 2018:
This presentation shows how to use matching in causal inference to ameliorate model dependence -- where small, indefensible changes in model specification have large impacts on our conclusions. We introduce matching methods that are simpler, more powerful, and easier to understand. We also show that the most commonly used existing method, propensity score matching, should rarely be used. Easy-to-use software is available to implement all methods discussed.
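As a toy illustration of the kind of matching the talk describes (a generic nearest-neighbor Mahalanobis sketch of our own, not the software or estimators from the presentation), each treated unit is paired with the control unit closest in covariate space, and the effect on the treated is estimated from the matched outcome differences:

```python
import numpy as np

def mahalanobis_match_att(X, treat, y):
    """Match each treated unit to its nearest control by Mahalanobis
    distance on covariates X, then average the matched outcome
    differences to estimate the average effect on the treated."""
    X = np.asarray(X, dtype=float)
    y = np.asarray(y, dtype=float)
    treat = np.asarray(treat, dtype=bool)
    Vinv = np.linalg.inv(np.cov(X, rowvar=False))  # covariate scaling
    Xc, yc = X[~treat], y[~treat]
    diffs = []
    for xi, yi in zip(X[treat], y[treat]):
        d = Xc - xi
        dist = np.einsum('ij,jk,ik->i', d, Vinv, d)  # squared distances
        diffs.append(yi - yc[int(np.argmin(dist))])  # nearest control
    return float(np.mean(diffs))
```

Matching in covariate space this way avoids collapsing the covariates into a one-dimensional propensity score, the practice the talk argues should rarely be used.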


Demographic Forecasting

Federico Girosi and Gary King. 2008. Demographic Forecasting. Princeton: Princeton University Press. Abstract:

We introduce a new framework for forecasting age-sex-country-cause-specific mortality rates that incorporates considerably more information, and thus has the potential to forecast much better, than any existing approach. Mortality forecasts are used in a wide variety of academic fields, and for global and national health policy making, medical and pharmaceutical research, and social security and retirement planning.

As it turns out, the tools we developed in pursuit of this goal also have broader statistical implications, in addition to their use for forecasting mortality or other variables with similar statistical properties. First, our methods make it possible to include different explanatory variables in a time series regression for each cross-section, while still borrowing strength from one regression to improve the estimation of all. Second, we show that many existing Bayesian (hierarchical and spatial) models with explanatory variables use prior densities that incorrectly formalize prior knowledge. Many demographers and public health researchers have fortuitously avoided this problem so prevalent in other fields by using prior knowledge only as an ex post check on empirical results, but this approach excludes considerable information from their models. We show how to incorporate this demographic knowledge into a model in a statistically appropriate way. Finally, we develop a set of tools useful for developing models with Bayesian priors in the presence of partial prior ignorance. This approach also provides many of the attractive features claimed by the empirical Bayes approach, but fully within the standard Bayesian theory of inference.
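The idea of "borrowing strength" across cross-sections can be illustrated with the textbook partial-pooling shrinkage estimator (a generic sketch of the principle, not the book's model; all names and the known-variance assumption are ours): each group's estimate is pulled toward the overall mean, with a weight set by the relative precision of the group's own data versus the cross-group prior.

```python
def partial_pool(group_means, group_ns, sigma2, tau2):
    """Shrink each group mean toward the grand mean using standard
    precision weights: w_j = (n_j/sigma2) / (n_j/sigma2 + 1/tau2),
    where sigma2 is the within-group variance and tau2 the
    between-group (prior) variance -- both assumed known here."""
    grand = sum(group_means) / len(group_means)
    pooled = []
    for ybar, n in zip(group_means, group_ns):
        w = (n / sigma2) / (n / sigma2 + 1.0 / tau2)
        pooled.append(w * ybar + (1.0 - w) * grand)
    return pooled

# Two noisy groups shrink halfway toward their grand mean of 5
est = partial_pool([0.0, 10.0], [1, 1], sigma2=1.0, tau2=1.0)
```

Groups with more data (larger n_j) or a more diffuse prior (larger tau2) are shrunk less, which is how sparse cross-sections gain precision from the others.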


Ecological Inference: New Methodological Strategies

Gary King, Ori Rosen, and Martin A Tanner. 2004. Ecological Inference: New Methodological Strategies. New York: Cambridge University Press. Abstract:
Ecological Inference: New Methodological Strategies brings together a diverse group of scholars to survey the latest strategies for solving ecological inference problems in various fields. The last half decade has witnessed an explosion of research in ecological inference – the attempt to infer individual behavior from aggregate data. The uncertainties and the information lost in aggregation make ecological inference one of the most difficult areas of statistical inference, but such inferences are required in many academic fields, as well as by legislatures and the courts in redistricting, by businesses in marketing research, and by governments in policy analysis.
