In Press
Preface: Big Data is Not About the Data!
King, Gary. In Press, 2015. “Preface: Big Data Is Not About the Data!” In Computational Social Science: Discovery and Prediction, edited by R. Michael Alvarez. Cambridge: Cambridge University Press.
A few years ago, explaining what you did for a living to Dad, Aunt Rose, or your friend from high school was pretty complicated. Answering that you develop statistical estimators, work on numerical optimization, or, even better, are working on a great new Markov Chain Monte Carlo implementation of a Bayesian model with heteroskedastic errors for automated text analysis is pretty much the definition of a conversation stopper. Then the media noticed the revolution we’re all a part of, and they glued a label to it. Now “Big Data” is what you and I do. As trivial as this change sounds, we should be grateful for it, as the name seems to resonate with the public and so it helps convey the importance of our field to others better than we had managed to do ourselves. Yet, now that we have everyone’s attention, we need to start clarifying for others -- and ourselves -- what the revolution means. This is much of what this book is about. Throughout, we need to remember that for the most part, Big Data is not about the data....
Chapter
A Unified Approach to Measurement Error and Missing Data: Overview
Blackwell, Matthew, James Honaker, and Gary King. In Press. “A Unified Approach to Measurement Error and Missing Data: Overview.” Sociological Methods and Research.
Although social scientists devote considerable effort to mitigating measurement error during data collection, they often ignore the issue during data analysis. And although many statistical methods have been proposed for reducing measurement error-induced biases, few have been widely used because of implausible assumptions, high levels of model dependence, difficult computation, or inapplicability with multiple mismeasured variables. We develop an easy-to-use alternative without these problems; it generalizes the popular multiple imputation (MI) framework by treating missing data problems as a limiting special case of extreme measurement error, and corrects for both. Like MI, the proposed framework is a simple two-step procedure, so that in the second step researchers can use whatever statistical method they would have if there had been no problem in the first place. We also offer empirical illustrations, open source software that implements all the methods described herein, and a companion paper with technical details and extensions (Blackwell, Honaker, and King, 2014b).
Article
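The two-step logic described in the abstract above mirrors standard multiple imputation: complete the data several times, analyze each completed data set with the usual model, and then combine the results. As a point of reference, here is a minimal sketch of the combining step (Rubin's rules) only; the imputation or overimputation step is not shown, and the coefficient estimates and variances below are made-up placeholders, not output from the authors' software.

```python
import numpy as np

# Point estimates and variances of the same coefficient from m analyses,
# each run on a different imputed/overimputed data set (illustrative numbers).
estimates = np.array([0.52, 0.47, 0.55, 0.49, 0.51])
variances = np.array([0.010, 0.012, 0.009, 0.011, 0.010])
m = len(estimates)

q_bar = estimates.mean()              # combined point estimate
w_bar = variances.mean()              # within-imputation variance
b = estimates.var(ddof=1)             # between-imputation variance
total_var = w_bar + (1 + 1 / m) * b   # Rubin's total variance

print(f"combined estimate: {q_bar:.3f}")
print(f"combined std. error: {np.sqrt(total_var):.3f}")
```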
Automating Open Science for Big Data
Crosas, Merce, James Honaker, Gary King, and Latanya Sweeney. In Press. “Automating Open Science for Big Data.” Annals of the American Academy of Political and Social Science 659 (1): 260-273. Publisher's Version
The vast majority of social science research presently uses small (MB or GB scale) data sets. These fixed-scale data sets are commonly downloaded to the researcher's computer, where the analysis is performed locally, and are often shared and cited with well-established technologies, such as the Dataverse Project (see Dataverse.org), to support the published results. The trend towards Big Data -- including large-scale streaming data -- is starting to transform research and has the potential to impact policy-making and our understanding of the social, economic, and political problems that affect human societies. However, this research poses new challenges in execution, accountability, preservation, reuse, and reproducibility. Downloading these data sets to a researcher's computer is infeasible or impractical; hence, analyses take place in the cloud, require unusual expertise, and benefit from collaborative teamwork and novel tool development. The very richness that makes these data sets so informative also means that they are much more likely to contain highly sensitive personally identifiable information. In this paper, we discuss solutions to these new challenges so that the social sciences can realize the potential of Big Data.
Paper
A Unified Approach to Measurement Error and Missing Data: Details and Extensions
Blackwell, Matthew, James Honaker, and Gary King. In Press. “A Unified Approach to Measurement Error and Missing Data: Details and Extensions.” Sociological Methods and Research.
We extend a unified and easy-to-use approach to measurement error and missing data. Blackwell, Honaker, and King (2014a) gives an intuitive overview of the new technique, along with practical suggestions and empirical applications. Here, we offer more precise technical details; more sophisticated measurement error model specifications and estimation procedures; and analyses to assess the approach's robustness to correlated measurement errors and to errors in categorical variables. These results support using the technique to reduce bias and increase efficiency in a wide variety of empirical research.
Uncorrected proofs
2015
Kashin, Konstantin, Gary King, and Samir Soneji. 2015. “Replication Data for: Explaining Systematic Bias and Nontransparency in U.S. Social Security Administration Forecasts.” Harvard Dataverse, v1. Published on Harvard Dataverse
Kashin, Konstantin, Gary King, and Samir Soneji. 2015. “Replication Data for: Systematic Bias and Nontransparency in U.S. Social Security Administration Forecasts.” Harvard Dataverse, v1. Published on Harvard Dataverse
Systematic Bias and Nontransparency in US Social Security Administration Forecasts
Kashin, Konstantin, Gary King, and Samir Soneji. 2015. “Systematic Bias and Nontransparency in US Social Security Administration Forecasts.” Journal of Economic Perspectives 29 (2). Publisher's Version
The financial stability of four of the five largest U.S. federal entitlement programs, strategic decision making in several industries, and many academic publications all depend on the accuracy of demographic and financial forecasts made by the Social Security Administration (SSA). Although the SSA has performed these forecasts since 1942, no systematic and comprehensive evaluation of their accuracy has ever been published by SSA or anyone else. The absence of a systematic evaluation of forecasts is a concern because the SSA relies on informal procedures that are potentially subject to inadvertent biases and does not share with the public, the scientific community, or other parts of SSA sufficient data or information necessary to replicate or improve its forecasts. These issues result in SSA holding a monopoly position in policy debates as the sole supplier of fully independent forecasts and evaluations of proposals to change Social Security. To assist with the forecasting evaluation problem, we collect all SSA forecasts for years that have passed and discover error patterns that could have been---and could now be---used to improve future forecasts. Specifically, we find that after 2000, SSA forecasting errors grew considerably larger and most of these errors made the Social Security Trust Funds look more financially secure than they actually were. In addition, SSA's reported uncertainty intervals are overconfident and increasingly so after 2000. We discuss the implications of these systematic forecasting biases for public policy.
Article
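Two of the diagnostics reported in the abstract above, systematic (signed) forecast error and the calibration of stated uncertainty intervals, are generic checks whose form can be sketched in a few lines. The arrays below are hypothetical placeholders, not SSA data; the sketch only shows the shape of the calculation.

```python
import numpy as np

# Hypothetical forecasts, stated 95% intervals, and realized outcomes.
forecast = np.array([2.1, 2.3, 2.0, 2.4, 2.2])
lower    = np.array([1.8, 2.0, 1.7, 2.1, 1.9])
upper    = np.array([2.4, 2.6, 2.3, 2.7, 2.5])
actual   = np.array([2.0, 2.0, 1.9, 2.1, 2.0])

# Bias: errors consistently in one direction indicate systematic over- or
# under-forecasting rather than random noise.
signed_error = forecast - actual
print("mean signed error:", signed_error.mean().round(3))

# Calibration: 95% intervals should cover the truth about 95% of the time;
# much lower coverage indicates overconfident uncertainty intervals.
coverage = np.mean((actual >= lower) & (actual <= upper))
print("interval coverage:", coverage)
```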
Explaining Systematic Bias and Nontransparency in US Social Security Administration Forecasts
Kashin, Konstantin, Gary King, and Samir Soneji. 2015. “Explaining Systematic Bias and Nontransparency in US Social Security Administration Forecasts.” Political Analysis 23 (4). Publisher's Version
The accuracy of U.S. Social Security Administration (SSA) demographic and financial forecasts is crucial for the solvency of its Trust Funds, other government programs, industry decision making, and the evidence base of many scholarly articles. Because SSA makes public little replication information and uses qualitative and antiquated statistical forecasting methods, fully independent alternative forecasts (and the ability to score policy proposals to change the system) are nonexistent. Yet, no systematic evaluation of SSA forecasts has ever been published by SSA or anyone else --- until a companion paper to this one (King, Kashin, and Soneji, 2015a). We show that SSA's forecasting errors were approximately unbiased until about 2000, but then began to grow quickly, with increasingly overconfident uncertainty intervals. Moreover, the errors are all in the same potentially dangerous direction, making the Social Security Trust Funds look healthier than they actually are. We extend and then attempt to explain these findings with evidence from a large number of interviews we conducted with participants at every level of the forecasting and policy processes. We show that SSA's forecasting procedures meet all the conditions the modern social-psychology and statistical literatures demonstrate make bias likely. When those conditions mixed with potent new political forces trying to change Social Security, SSA's actuaries hunkered down trying hard to insulate their forecasts from strong political pressures. Unfortunately, this otherwise laudable resistance to undue influence, along with their ad hoc qualitative forecasting models, led the actuaries to miss important changes in the input data. Retirees began living longer lives and drawing benefits longer than predicted by simple extrapolations. We also show that the solution to this problem involves SSA or Congress implementing in government two of the central projects of political science over the last quarter century: [1] promoting transparency in data and methods and [2] replacing with formal statistical models large numbers of qualitative decisions too complex for unaided humans to make optimally.
Article
Why Propensity Scores Should Not Be Used for Matching
King, Gary, and Richard Nielsen. 2015. “Why Propensity Scores Should Not Be Used for Matching”.
Researchers use propensity score matching (PSM) as a data preprocessing step to selectively prune observations prior to applying a model to estimate a causal effect. The goal of PSM is to reduce imbalance in pre-treatment covariates between the treatment and control groups, thereby reducing the degree of model dependence and potential for bias. Although some applied researchers have combined PSM with various ad hoc procedures and checks to produce useful analyses, we show that the core PSM procedure itself often accomplishes the opposite of what is intended -- increasing imbalance, model dependence, and bias. The weakness of PSM is that it approximates a completely randomized experiment, rather than, as with other matching methods, a more powerful fully blocked randomized experiment. PSM is therefore blind to the portion of imbalance that would have been eliminated by also approximating full blocking. Moreover, in data balanced enough to approximate complete randomization, either to begin with or after pruning some observations, PSM approximates random matching, which turns out to increase imbalance. We show that these problems occur even in data designed for PSM with as few as two covariates, and are exacerbated in data with better balance, higher dimensionality, and (in our experience) real applications. Although these results suggest that propensity scores not be used for matching, propensity scores have many other productive uses. In addition, although we show that matching by most other methods, when used under unusual or extreme circumstances, can exhibit some of the same damaging characteristics as PSM does regularly, other matching methods remain a strongly recommended way to make causal inferences.
Paper
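For readers unfamiliar with the procedure being critiqued, here is a minimal sketch of propensity score matching on simulated data: fit a logistic regression for treatment, prune to one-to-one nearest-neighbor matches on the estimated score, and compare covariate mean differences before and after. The simulated data and the greedy matching loop are illustrative assumptions, not the paper's experiments, and the sketch makes no claim about which direction the imbalance moves.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
X = rng.normal(size=(n, 2))                                   # two pre-treatment covariates
p_treat = 1 / (1 + np.exp(-(0.5 * X[:, 0] + 0.5 * X[:, 1] - 0.5)))
T = rng.binomial(1, p_treat)                                  # treatment assignment

# Step 1: estimate propensity scores.
ps = LogisticRegression().fit(X, T).predict_proba(X)[:, 1]

# Step 2: 1:1 nearest-neighbor matching on the score, without replacement.
treated = np.where(T == 1)[0]
controls = list(np.where(T == 0)[0])
matched_controls = []
for i in treated:
    j = min(controls, key=lambda c: abs(ps[c] - ps[i]))
    matched_controls.append(j)
    controls.remove(j)

def mean_diff(rows_t, rows_c):
    """Absolute difference in covariate means between two groups."""
    return np.abs(X[rows_t].mean(axis=0) - X[rows_c].mean(axis=0))

print("imbalance before matching:", mean_diff(T == 1, T == 0).round(3))
print("imbalance after matching: ", mean_diff(treated, np.array(matched_controls)).round(3))
```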
The Balance-Sample Size Frontier in Matching Methods for Causal Inference
King, Gary, Christopher Lucas, and Richard Nielsen. 2015. “The Balance-Sample Size Frontier in Matching Methods for Causal Inference”.
We propose a simplified approach to matching for causal inference that simultaneously optimizes both balance (between the treated and control groups) and matched sample size. This procedure resolves two widespread tensions in the use of this popular methodology. First, current practice is to run a matching method that maximizes one balance metric (such as a propensity score or average Mahalanobis distance), but then to check whether it succeeds with respect to a different balance metric for which it was not designed (such as differences in means or L1). Second, current matching methods either fix the sample size and maximize balance (e.g., Mahalanobis or propensity score matching), fix balance and maximize the sample size (such as coarsened exact matching), or are arbitrary compromises between the two (such as calipers with ad hoc thresholds applied to other methods). These tensions lead researchers to either try to optimize manually, by iteratively tweaking their matching method and rechecking balance, or settle for suboptimal solutions. We address these tensions by first defining and showing how to calculate the matching frontier as the set of matching solutions with maximum balance for each possible sample size. Researchers can then choose one, several, or all matching solutions from the frontier for analysis in one step without iteration. The main difficulty in this strategy is that checking all possible solutions is exponentially difficult. We solve this problem with new algorithms that finish fast, optimally, and without iteration or manual tweaking. We also offer easy-to-use software that implements these ideas, along with analyses of the effect of sex on judging and job training programs that show how the methods we introduce enable us to extract new knowledge from existing data sets.
Paper
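The frontier idea can be illustrated with a deliberately simplified greedy sketch: repeatedly drop the control unit whose removal most reduces a chosen imbalance metric (here, the mean absolute difference in covariate means), recording imbalance at every sample size. This is not the paper's algorithm, which computes the frontier exactly and efficiently; the sketch only shows what a balance-sample size trade-off curve looks like on simulated data.

```python
import numpy as np

rng = np.random.default_rng(1)
treated = rng.normal(loc=1.0, size=(100, 2))      # treated covariates
control = rng.normal(loc=0.0, size=(300, 2))      # control covariates

def imbalance(ctrl):
    # Mean absolute difference in covariate means between the groups.
    return np.abs(treated.mean(axis=0) - ctrl.mean(axis=0)).mean()

frontier = [(len(control), imbalance(control))]
ctrl = control.copy()
while len(ctrl) > len(treated):
    # Greedily drop the control unit whose removal most improves balance.
    scores = [imbalance(np.delete(ctrl, i, axis=0)) for i in range(len(ctrl))]
    best = int(np.argmin(scores))
    ctrl = np.delete(ctrl, best, axis=0)
    frontier.append((len(ctrl), scores[best]))

for n, imb in frontier[::50]:
    print(f"controls kept: {n:4d}   imbalance: {imb:.3f}")
```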
A Theory of Statistical Inference for Matching Methods in Applied Causal Research
Iacus, Stefano M., Gary King, and Giuseppe Porro. 2015. “A Theory of Statistical Inference for Matching Methods in Applied Causal Research”.
Matching methods for causal inference have become a popular way of reducing model dependence and bias, in large part because of their convenience and conceptual simplicity. Researchers most commonly use matching as a data preprocessing step, after which they apply whatever statistical model and uncertainty estimators they would have without matching. Unfortunately, for a given sample of any finite size, this approach is theoretically appropriate only under exact matching, which is usually infeasible; approximate matching can be justified under asymptotic theory, if large enough sample sizes are available, but then specialized point and variance estimators are required, which sacrifices some of matching's simplicity and convenience. Researchers also violate statistical theory with ad hoc iterations between formal matching methods and informal balance checks. Instead of asking researchers to change their widely used practices, we develop a comprehensive theory of statistical inference able to justify them. The theory we propose is substantively plausible, requires no asymptotic theory, and is simple to understand. Its core conceptualizes continuous variables as having natural breakpoints, which are common in applications (e.g., high school or college degrees in years of education, a governmental poverty level in income, or phase transitions in temperature). The theory allows binary, multicategory, and continuous treatment variables from the outset and straightforward extensions for imperfect treatment assignment and different versions of treatments. Although this theory provides a valid foundation for most commonly used methods of matching, researchers must still satisfy the assumptions in any real application.
Paper
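The notion of natural breakpoints can be made concrete by coarsening a continuous covariate at substantively meaningful cut points and keeping only strata that contain both treated and control units, the same intuition behind coarsened exact matching. The breakpoints and data below are illustrative assumptions, not the paper's theory or examples.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(2)
df = pd.DataFrame({
    "treated": rng.binomial(1, 0.4, size=500),
    "educ_years": rng.integers(8, 21, size=500),   # years of education
})

# Coarsen education at "natural" breakpoints: no HS degree, HS, some college, BA+.
bins = [0, 12, 13, 16, 25]
labels = ["<HS", "HS", "some college", "BA+"]
df["educ_stratum"] = pd.cut(df["educ_years"], bins=bins, labels=labels, right=False)

# Keep only strata containing both treated and control units (exact match on strata).
counts = df.groupby(["educ_stratum", "treated"], observed=True).size().unstack(fill_value=0)
ok = counts[(counts[0] > 0) & (counts[1] > 0)].index
matched = df[df["educ_stratum"].isin(ok)]
print("strata retained:", list(ok))
print("observations retained:", len(matched), "of", len(df))
```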
How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It
King, Gary, and Margaret E. Roberts. 2015. “How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It.” Political Analysis 23 (2): 159–179. Publisher's Version
"Robust standard errors" are used in a vast array of scholarship to correct standard errors for model misspecification. However, when misspecification is bad enough to make classical and robust standard errors diverge, assuming that it is nevertheless not so bad as to bias everything else requires considerable optimism. And even if the optimism is warranted, settling for a misspecified model, with or without robust standard errors, will still bias estimators of all but a few quantities of interest. The resulting cavernous gap between theory and practice suggests that considerable gains in applied statistics may be possible. We seek to help researchers realize these gains via a more productive way to understand and use robust standard errors; a new general and easier-to-use "generalized information matrix test" statistic that can formally assess misspecification (based on differences between robust and classical variance estimates); and practical illustrations via simulations and real examples from published research. How robust standard errors are used needs to change, but instead of jettisoning this popular tool we show how to use it to provide effective clues about model misspecification, likely biases, and a guide to considerably more reliable, and defensible, inferences. Accompanying this article [soon!] is software that implements the methods we describe. 
Article
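The diagnostic use of robust standard errors described above, treating a gap between classical and robust variance estimates as a clue about misspecification, can be illustrated with an ordinary least squares fit whose mean function is deliberately misspecified. The sketch below uses statsmodels on simulated data; it is not the article's generalized information matrix test, only the informal comparison that motivates it.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 2000
x = rng.uniform(-2, 2, size=n)
y = 1 + x + 0.8 * x**2 + rng.normal(size=n)   # true model is quadratic

X = sm.add_constant(x)                        # fitted model omits the x**2 term
fit = sm.OLS(y, X).fit()
robust = fit.get_robustcov_results(cov_type="HC1")

print("classical SEs:", np.round(fit.bse, 4))
print("robust SEs:   ", np.round(robust.bse, 4))
# A large gap between the two is a clue that the model itself, not just the
# standard errors, deserves attention.
```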
2014
You Lie! Patterns of Partisan Taunting in the U.S. Senate (Poster)
Grimmer, Justin, Gary King, and Chiara Superti. 2014. “You Lie! Patterns of Partisan Taunting in the U.S. Senate (Poster).” In Society for Political Methodology. Athens, GA.
This is a poster that describes our analysis of "partisan taunting," the explicit, public, and negative attacks on another political party or its members, usually using vitriolic and derogatory language. We first demonstrate that most projects that hand-code text in the social sciences optimize with respect to the wrong criterion, resulting in large, unnecessary biases. We show how to fix this problem and then apply it to taunting. We find empirically that, contrary to most claims in the press and the literature, taunting is not inexorably increasing; it appears instead to be a rational political strategy, most often used by those least likely to win by traditional means -- ideological extremists, out-party members when the president is unpopular, and minority party members. However, although taunting appears to be individually rational, it is collectively irrational: Constituents may resonate with one cutting taunt by their Senator, but they might not approve if he or she were devoting large amounts of time to this behavior rather than, say, trying to solve important national problems. We hope to partially rectify this situation by posting public rankings of Senatorial taunting behavior.
Poster
Methods for Extremely Large Scale Media Experiments and Observational Studies (Poster)
King, Gary, Benjamin Schneer, and Ariel White. 2014. “Methods for Extremely Large Scale Media Experiments and Observational Studies (Poster).” In Society for Political Methodology. Athens, GA.
This is a poster presentation describing (1) the largest ever experimental study of media effects, with more than 50 cooperating traditional media sites, normally unavailable web site analytics, the text of hundreds of thousands of news articles, and tens of millions of social media posts, and (2) a design we used in preparation that attempts to anticipate experimental outcomes.
Poster
Participant Grouping for Enhanced Interactive Experience
King, Gary, Brian Lukoff, and Eric Mazur. 2014. “Participant Grouping for Enhanced Interactive Experience”.
Representative embodiments of a method for grouping participants in an activity include the steps of: (i) defining a grouping policy; (ii) storing, in a database, participant records that include a participant identifier, a characteristic associated with the participant, and/or an identifier for a participant's handheld device; (iii) defining groupings based on the policy and characteristics of the participants relating to the policy and to the activity; and (iv) communicating the groupings to the handheld devices to establish the groups.
Patent
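A generic sketch of steps (ii)-(iv): store participant records, apply a grouping policy over a stored characteristic, and return the groupings (here simply printed rather than communicated to handheld devices). The "heterogeneous groups" policy below, pairing participants whose recorded answers differ, is one plausible example of a grouping policy, not the policy specified in the patent.

```python
from dataclasses import dataclass

@dataclass
class Participant:
    participant_id: str
    device_id: str
    characteristic: str   # e.g., the answer chosen on the last question

def group_by_policy(participants, group_size=2):
    """Group participants so that, where possible, members differ on the
    stored characteristic (a simple 'heterogeneous groups' policy)."""
    by_answer = {}
    for p in participants:
        by_answer.setdefault(p.characteristic, []).append(p)
    pools = sorted(by_answer.values(), key=len, reverse=True)
    groups, current = [], []
    while any(pools):
        for pool in pools:            # draw one member from each answer pool in turn
            if pool:
                current.append(pool.pop())
            if len(current) == group_size:
                groups.append(current)
                current = []
    if current:                        # leftover participants form a final, smaller group
        groups.append(current)
    return groups

roster = [Participant(f"p{i}", f"dev{i}", answer)
          for i, answer in enumerate("AABABBBA")]
for g in group_by_policy(roster):
    print([(p.participant_id, p.characteristic) for p in g])
```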
MatchingFrontier: R Package for Calculating the Balance-Sample Size Frontier
King, Gary, Christopher Lucas, and Richard Nielsen. 2014. “MatchingFrontier: R Package for Calculating the Balance-Sample Size Frontier”.
MatchingFrontier is an easy-to-use R package for making optimal causal inferences from observational data. Despite their popularity, existing matching approaches leave researchers with two fundamental tensions. First, they are designed to maximize one metric (such as propensity score or Mahalanobis distance) but are judged against another for which they were not designed (such as L1 or differences in means). Second, they lack a principled solution to revealing the implicit bias-variance trade-off: matching methods need to optimize with respect to both imbalance (between the treated and control groups) and the number of observations pruned, but existing approaches optimize with respect to only one; users then either ignore the other or tweak it, usually suboptimally, by hand. MatchingFrontier resolves both tensions by consolidating previous techniques into a single, optimal, and flexible approach. It calculates the matching solution with maximum balance for each possible sample size (N, N-1, N-2, ...). It thus directly calculates the entire balance-sample size frontier, from which the user can easily choose one, several, or all subsamples from which to conduct their final analysis, given their own choice of imbalance metric and quantity of interest. MatchingFrontier solves the joint optimization problem in one run, automatically, without manual tweaking, and without iteration. Although for each subset size k there exists a huge number of unique subsets (N choose k), MatchingFrontier includes specially designed fast algorithms that give the optimal answer, usually in a few minutes. MatchingFrontier implements the methods in this paper: King, Gary, Christopher Lucas, and Richard Nielsen. 2014. "The Balance-Sample Size Frontier in Matching Methods for Causal Inference" (copy at http://j.mp/1dRDMrE). See http://projects.iq.harvard.edu/frontier/
Reverse-engineering censorship in China: Randomized experimentation and participant observation
King, Gary, Jennifer Pan, and Margaret E. Roberts. 2014. “Reverse-Engineering Censorship in China: Randomized Experimentation and Participant Observation.” Science 345 (6199): 1-10. Publisher's Version
Existing research on the extensive Chinese censorship organization uses observational methods with well-known limitations. We conducted the first large-scale experimental study of censorship by creating accounts on numerous social media sites, randomly submitting different texts, and observing from a worldwide network of computers which texts were censored and which were not. We also supplemented interviews with confidential sources by creating our own social media site, contracting with Chinese firms to install the same censoring technologies as existing sites, and—with their software, documentation, and even customer support—reverse-engineering how it all works. Our results offer rigorous support for the recent hypothesis that criticisms of the state, its leaders, and their policies are published, whereas posts about real-world events with collective action potential are censored.
Article | Supplementary Materials | Article Summary
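The core experimental comparison in the abstract above, censorship rates for posts with collective action potential versus posts criticizing the state, comes down to comparing two proportions across randomized conditions. The counts below are invented for illustration; only the form of the calculation is meant, not the study's results.

```python
import numpy as np

# Illustrative counts: posts censored and posts submitted, by randomized condition.
censored = np.array([120, 40])     # [collective-action posts, criticism-of-the-state posts]
submitted = np.array([500, 500])

rates = censored / submitted
diff = rates[0] - rates[1]
se = np.sqrt((rates * (1 - rates) / submitted).sum())   # SE of a difference in proportions
ci = (diff - 1.96 * se, diff + 1.96 * se)

print("censorship rates:", rates)
print(f"difference: {diff:.3f}, 95% CI: ({ci[0]:.3f}, {ci[1]:.3f})")
```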
Computer-Assisted Keyword and Document Set Discovery from Unstructured Text
King, Gary, Patrick Lam, and Margaret Roberts. 2014. “Computer-Assisted Keyword and Document Set Discovery from Unstructured Text”.
The (unheralded) first step in many applications of automated text analysis involves selecting keywords to choose documents from a large text corpus for further study. Although all substantive results depend crucially on this choice, researchers typically pick keywords in ad hoc ways, given the lack of formal statistical methods to help. Paradoxically, this often means that the validity of the most sophisticated text analysis methods depends in practice on the inadequate keyword counting or matching methods they are designed to replace. The same ad hoc keyword selection process is also used in many other areas, such as following conversations that rapidly innovate language to evade authorities, seek political advantage, or express creativity; generic web searching; eDiscovery; look-alike modeling; intelligence analysis; and sentiment and topic analysis. We develop a computer-assisted (as opposed to fully automated) statistical approach that suggests keywords from available text, without needing any structured data as inputs. This framing poses the statistical problem in a new way, which leads to a widely applicable algorithm. Our specific approach is based on training classifiers, extracting information from (rather than correcting) their mistakes, and then summarizing results with Boolean search strings. We illustrate how the technique works with examples in English and Chinese.
Paper
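A loose sketch of the general strategy the abstract describes: label a small reference set of documents, train a classifier to separate it from the rest of the corpus, and mine the search-set documents the classifier scores as most reference-like for candidate keywords. The toy corpus, the naive Bayes classifier, and the simple term-count scoring are stand-ins chosen for brevity, not the authors' algorithm or its Boolean-search-string output.

```python
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Tiny illustrative corpus: a hand-labeled reference set (1) and a search set (0).
docs = [
    "flu vaccine clinic opens downtown",           # reference set
    "flu season hits hospitals hard",              # reference set
    "city council debates the new budget",         # search set
    "hospitals brace for a hard fever season",     # search set
    "budget vote delayed by the council",          # search set
]
labels = np.array([1, 1, 0, 0, 0])

vec = CountVectorizer()
X = vec.fit_transform(docs)
clf = MultinomialNB().fit(X, labels)
ref_prob = clf.predict_proba(X)[:, 1]

# Search-set documents the classifier scores as most reference-like are the
# informative "mistakes"; mine their vocabulary for candidate keywords.
search = np.where(labels == 0)[0]
best = search[np.argsort(ref_prob[search])[::-1][:1]]   # top-scoring search doc(s)
terms = np.asarray(vec.get_feature_names_out())
counts = np.asarray(X[best].sum(axis=0)).ravel()
print("candidate keywords:", list(terms[counts > 0]))
```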
Google Flu Trends Still Appears Sick: An Evaluation of the 2013‐2014 Flu Season
Lazer, David, Ryan Kennedy, Gary King, and Alessandro Vespignani. 2014. “Google Flu Trends Still Appears Sick: An Evaluation of the 2013-2014 Flu Season”.
Last year was difficult for Google Flu Trends (GFT). In early 2013, Nature reported that GFT was estimating more than double the percentage of doctor visits for influenza-like illness than the Centers for Disease Control and Prevention's (CDC) sentinel reports showed during the 2012-2013 flu season (1). Given that GFT was designed to forecast upcoming CDC reports, this was a problematic finding. In March 2014, our report in Science found that the overestimation problem in GFT was also present in the 2011-2012 flu season (2). The report also found strong evidence of autocorrelation and seasonality in the GFT errors, and presented evidence that the issues were likely, at least in part, due to modifications made by Google's search algorithm and the decision by GFT engineers not to use previous CDC reports or seasonality estimates in their models -- what the article labeled "algorithm dynamics" and "big data hubris," respectively. Moreover, the report and the supporting online materials detailed how difficult or impossible it is to replicate the GFT results, undermining independent efforts to explore the source of GFT errors and formulate improvements.
Paper
The Parable of Google Flu: Traps in Big Data Analysis
Lazer, David, Ryan Kennedy, Gary King, and Alessandro Vespignani. 2014. “The Parable of Google Flu: Traps in Big Data Analysis.” Science 343 (14 March): 1203-1205.
Large errors in flu prediction were largely avoidable, which offers lessons for the use of big data. In February 2013, Google Flu Trends (GFT) made headlines, but not for a reason that Google executives or the creators of the flu-tracking system would have hoped. Nature reported that GFT was predicting more than double the proportion of doctor visits for influenza-like illness (ILI) than the Centers for Disease Control and Prevention (CDC), which bases its estimates on surveillance reports from laboratories across the United States (1, 2). This happened despite the fact that GFT was built to predict CDC reports. Given that GFT is often held up as an exemplary use of big data (3, 4), what lessons can we draw from this error?
Article