Journal Article

Computer-Assisted Keyword and Document Set Discovery from Unstructured Text
Gary King, Patrick Lam, and Margaret Roberts. In Press. “Computer-Assisted Keyword and Document Set Discovery from Unstructured Text.” American Journal of Political Science, 2017.

The (unheralded) first step in many applications of automated text analysis involves selecting keywords to choose documents from a large text corpus for further study. Although all substantive results depend on this choice, researchers usually pick keywords in ad hoc ways that are far from optimal and usually biased. Paradoxically, this often means that the validity of the most sophisticated text analysis methods depends in practice on the inadequate keyword counting or matching methods they are designed to replace. Improved methods of keyword selection would also be valuable in many other areas, such as following conversations that rapidly innovate language to evade authorities, seek political advantage, or express creativity; generic web searching; eDiscovery; look-alike modeling; intelligence analysis; and sentiment and topic analysis. We develop a computer-assisted (as opposed to fully automated) statistical approach that suggests keywords from available text without needing structured data as inputs. This framing poses the statistical problem in a new way, which leads to a widely applicable algorithm. Our specific approach is based on training classifiers, extracting information from (rather than correcting) their mistakes, and summarizing results with Boolean search strings. We illustrate how the technique works with analyses of English texts about the Boston Marathon Bombings and of Chinese social media posts designed to evade censorship, among other examples.
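To make the strategy concrete, here is a minimal Python sketch of the general idea (my simplification, not the authors' algorithm; all function and variable names are invented for illustration). It trains a classifier on a small hand-labeled reference set, applies it to the unlabeled search set, and ranks candidate keywords by how sharply they separate the predicted groups:

```python
# Hypothetical sketch: suggest candidate keywords for a Boolean search
# string, given a small labeled reference set (label 1 = relevant).
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

def suggest_keywords(reference_docs, reference_labels, search_docs, top_k=20):
    vec = CountVectorizer(binary=True, stop_words="english")
    X_ref = vec.fit_transform(reference_docs)
    clf = MultinomialNB().fit(X_ref, reference_labels)

    X_search = vec.transform(search_docs)
    pred = clf.predict(X_search)  # a noisy partition of the search set

    # Score each word by the gap in document frequency between the
    # predicted-relevant and predicted-irrelevant documents.
    freq_target = X_search[pred == 1].mean(axis=0).A1
    freq_rest = X_search[pred == 0].mean(axis=0).A1
    vocab = np.asarray(vec.get_feature_names_out())
    return vocab[np.argsort(freq_target - freq_rest)[::-1][:top_k]]
```

In the paper's computer-assisted spirit, such a ranked list is raw material for a human to assemble a Boolean search string, not a final answer.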

The Balance-Sample Size Frontier in Matching Methods for Causal Inference
Gary King, Christopher Lucas, and Richard Nielsen. In Press. “The Balance-Sample Size Frontier in Matching Methods for Causal Inference.” American Journal of Political Science, 2016.

We propose a simplified approach to matching for causal inference that simultaneously optimizes balance (similarity between the treated and control groups) and matched sample size. Existing approaches either fix the matched sample size and maximize balance or fix balance and maximize sample size, leaving analysts to settle for suboptimal solutions or attempt manual optimization by iteratively tweaking their matching method and rechecking balance. To jointly maximize balance and sample size, we introduce the matching frontier, the set of matching solutions with maximum possible balance for each sample size. Rather than iterating, researchers can choose matching solutions from the frontier for analysis in one step. We derive fast algorithms that calculate the matching frontier for several commonly used balance metrics. We demonstrate with analyses of the effect of sex on judging and of job training programs, showing how the methods we introduce can extract new knowledge from existing data sets.

Easy-to-use, open-source software that implements all methods in the paper is available here.
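As a toy illustration of the frontier idea (my simplification; the paper derives optimized algorithms with optimality guarantees, and the names below are invented), one can greedily prune the control unit farthest from the treated group and record imbalance at each resulting sample size:

```python
# Hypothetical sketch: trace a balance / sample-size curve by pruning
# control units, for one simple imbalance metric (mean absolute gap in
# covariate means). The paper's algorithms instead guarantee the
# optimal balance at every sample size.
import numpy as np

def greedy_frontier(X_treated, X_control):
    centroid = X_treated.mean(axis=0)
    dist = np.linalg.norm(X_control - centroid, axis=1)
    order = np.argsort(dist)  # keep the closest controls longest
    curve = []
    for n in range(len(order), 0, -1):
        kept = X_control[order[:n]]
        imbalance = np.abs(kept.mean(axis=0) - centroid).mean()
        curve.append((n, imbalance))
    return curve  # list of (matched sample size, imbalance) pairs
```

An analyst would then inspect the whole curve and choose a point trading off balance against sample size, rather than iterating by hand.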

Effectiveness of the WHO Safe Childbirth Checklist Program in Reducing Severe Maternal, Fetal, and Newborn Harm: Study Protocol for a Matched-Pair, Cluster Randomized Controlled Trial in Uttar Pradesh, India
Katherine Semrau, Lisa R. Hirschhorn, Bhala Kodkany, Jonathan Spector, Danielle E. Tuller, Gary King, Stuart Lipsitz, Narender Sharma, Vinay P. Singh, Bharath Kumar, Neelam Dhingra-Kumar, Rebecca Firestone, Vishwajeet Kumar, and Atul Gawande. Forthcoming. “Effectiveness of the WHO Safe Childbirth Checklist Program in Reducing Severe Maternal, Fetal, and Newborn Harm: Study Protocol for a Matched-Pair, Cluster Randomized Controlled Trial in Uttar Pradesh, India.” Trials.

Background: Although access to facility-based delivery and skilled birth attendance has improved in resource-limited settings, poor quality of care has limited the expected benefits in maternal and neonatal mortality and morbidity. Programs supporting the use of clinical management tools, such as the World Health Organization Safe Childbirth Checklist, may improve implementation of recommended, lifesaving childbirth practices.

Methods: Data collectors observed childbirth care in a cluster randomized controlled trial of a checklist-based implementation program in Uttar Pradesh, India. The BetterBirth program implements the WHO Checklist through leadership and facility engagement that involves training, together with visits from a trained peer “coach” who provides on-site supportive supervision and real-time data feedback with decreasing frequency over an 8-month period. No clinical supplies or skills training are provided. Independent observers measured birth attendants’ adherence to essential childbirth practices 8 weeks after the intervention began in 15 of the 60 pairs of sites in the trial. Observations were made at three key points: at admission, before delivery, and up to 1 hour after childbirth. A composite score of 18 essential practices was compared between study arms using the Rao-Scott chi-square test.

Findings: Observers assessed adherence to essential practices in 2,606 deliveries (1,282 intervention, 1,324 control) from 8 to 14 weeks after intervention initiation. Out of 18 essential practices, intervention sites delivered an average of 13.0 (95% CI: 12.0, 13.9), compared with 7.5 (95% CI: 6.9, 8.1) practices delivered in the control sites (RR: 1.76, 95% CI: 1.57, 1.97). In intervention sites, attendants used the checklist in 74% of cases; use was associated with improved adherence to essential practices [13.5 (95% CI: 12.5, 14.4) versus 11.2 (95% CI: 10.2, 12.1), p < 0.0001].

Interpretation: Implementation of the WHO checklist through peer coaching, measurement, and data feedback successfully improves the quality of care provided to women and their newborns in Uttar Pradesh, India.
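The primary comparison uses a Rao-Scott chi-square test precisely because deliveries within the same facility are correlated. As a simplified stand-in for that design-based test (my illustration, not the trial's analysis plan; names are invented), one can aggregate to the cluster level before comparing arms:

```python
# Hypothetical sketch: compare arms on facility-level mean adherence so
# that within-facility correlation cannot masquerade as extra sample
# size. The trial itself uses a Rao-Scott chi-square suited to the
# matched-pair, cluster randomized design.
import numpy as np
from scipy import stats

def cluster_level_test(intervention_scores, control_scores):
    """Each argument: one array of per-delivery composite scores (0-18)
    per facility, e.g. [np.array([13, 14, 12]), np.array([15, 11]), ...]."""
    means_i = [np.mean(f) for f in intervention_scores]  # facility means
    means_c = [np.mean(f) for f in control_scores]
    return stats.ttest_ind(means_i, means_c)
```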
booc.io: An Education System with Hierarchical Concept Maps
Michail Schwab, Hendrik Strobelt, James Tompkin, Colin Fredericks, Connor Huff, Dana Higgins, Anton Strezhnev, Mayya Komisarchik, Gary King, and Hanspeter Pfister. Forthcoming. “booc.io: An Education System with Hierarchical Concept Maps.” IEEE Transactions on Visualization and Computer Graphics.

Information hierarchies are difficult to express when real-world space or time constraints force traversing the hierarchy in linear presentations, such as in educational books and classroom courses. We present booc.io, which allows linear and non-linear presentation and navigation of educational concepts and material. To support a breadth of material for each concept, booc.io is Web based, which allows adding material such as lecture slides, book chapters, videos, and LTIs. A visual interface assists the creation of the needed hierarchical structures. The goals of our system were formed in expert interviews, and we explain how our design meets these goals. We adapt a real-world course into booc.io, and perform an introductory qualitative evaluation with students.

Comment on 'Estimating the Reproducibility of Psychological Science'
Daniel Gilbert, Gary King, Stephen Pettigrew, and Timothy Wilson. 2016. “Comment on 'Estimating the Reproducibility of Psychological Science'.” Science, 6277, 351: 1037a-1038a.

A recent article by the Open Science Collaboration (OSC; a group of 270 coauthors) gained considerable academic and public attention due to its sensational conclusion that the replicability of psychological science is surprisingly low. Science magazine lauded this article as one of the top 10 scientific breakthroughs of the year across all fields of science, and reports of it appeared on the front pages of newspapers worldwide. We show that the OSC's article contains three major statistical errors and, when corrected, provides no evidence of a replication crisis. Indeed, the evidence is consistent with the opposite conclusion -- that the reproducibility of psychological science is quite high and, in fact, statistically indistinguishable from 100%. (Of course, that doesn't mean that replicability is 100%, only that the evidence is insufficient to reliably estimate it.) The moral of the story is that meta-science must follow the rules of science.
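One of the errors concerns sampling error in the replications themselves. A toy simulation (my illustration with invented parameter values, not the authors' analysis) shows that even under 100% true replicability, a naive significance criterion labels many replications as failures:

```python
# Hypothetical sketch: every effect below is real and identical across
# studies, yet sampling error alone makes many "replications" fail a
# naive p < .05, same-sign criterion.
import numpy as np

rng = np.random.default_rng(0)
true_effect, se, sims = 0.3, 0.14, 100_000

orig = rng.normal(true_effect, se, sims)  # original study estimates
rep = rng.normal(true_effect, se, sims)   # independent replication estimates

published = np.abs(orig / se) > 1.96                       # originals reaching p < .05
replicated = (np.abs(rep / se) > 1.96) & (rep * orig > 0)  # same criterion, same sign
rate = replicated[published].mean()
print(f"Apparent replication rate despite 100% true replicability: {rate:.2f}")
```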

Replication data are available in this Dataverse archive. See also the full web site for this article and related materials, and one of the news articles written about it.

Scoring Social Security Proposals: Response from Kashin, King, and Soneji
Konstantin Kashin, Gary King, and Samir Soneji. 2016. “Scoring Social Security Proposals: Response from Kashin, King, and Soneji.” Journal of Economic Perspectives, 2, 30: 245-248, Spring 2016.

This is a response to Peter Diamond's comment on a two-paragraph passage in our article: Konstantin Kashin, Gary King, and Samir Soneji. 2015. “Systematic Bias and Nontransparency in US Social Security Administration Forecasts.” Journal of Economic Perspectives, 2, 29: 239-258.

A Unified Approach to Measurement Error and Missing Data: Overview and Applications
Matthew Blackwell, James Honaker, and Gary King. 2015. “A Unified Approach to Measurement Error and Missing Data: Overview and Applications.” Sociological Methods and Research, 1-39.

Although social scientists devote considerable effort to mitigating measurement error during data collection, they often ignore the issue during data analysis. And although many statistical methods have been proposed for reducing measurement error-induced biases, few have been widely used because of implausible assumptions, high levels of model dependence, difficult computation, or inapplicability with multiple mismeasured variables. We develop an easy-to-use alternative without these problems; it generalizes the popular multiple imputation (MI) framework by treating missing data problems as a limiting special case of extreme measurement error, and corrects for both. Like MI, the proposed framework is a simple two-step procedure, so that in the second step researchers can use whatever statistical method they would have if there had been no problem in the first place. We also offer empirical illustrations, open source software that implements all the methods described herein, and a companion paper with technical details and extensions (Blackwell, Honaker, and King, 2014b).
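To convey the two-step logic, here is a minimal Python sketch of the overimputation intuition (my simplification of the framework, not the authors' software; all names are invented): draw several plausible corrected values for each mismeasured cell, run the analysis on each completed data set, and pool the results, so that a fully missing cell is just the large-error limit:

```python
# Hypothetical sketch of the two-step logic. Step 1: create m completed
# data sets by drawing plausible values for a mismeasured variable.
# Step 2: run the usual analysis on each and pool with Rubin's rules.
import numpy as np

rng = np.random.default_rng(1)

def overimpute(x_observed, error_sd, m=20):
    """Draws around the observed value; larger error_sd approaches the
    fully-missing case, where the observation carries no information."""
    return [rng.normal(x_observed, error_sd) for _ in range(m)]

def rubin_combine(estimates, variances):
    """Pool m point estimates and their variances across imputations."""
    q = np.mean(estimates)
    within = np.mean(variances)
    between = np.var(estimates, ddof=1)
    return q, within + (1 + 1 / len(estimates)) * between
```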

How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It
Gary King and Margaret E. Roberts. 2015. “How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It.” Political Analysis, 2, 23: 159–179.

"Robust standard errors" are used in a vast array of scholarship to correct standard errors for model misspecification. However, when misspecification is bad enough to make classical and robust standard errors diverge, assuming that it is nevertheless not so bad as to bias everything else requires considerable optimism. And even if the optimism is warranted, settling for a misspecified model, with or without robust standard errors, will still bias estimators of all but a few quantities of interest. The resulting cavernous gap between theory and practice suggests that considerable gains in applied statistics may be possible. We seek to help researchers realize these gains via a more productive way to understand and use robust standard errors; a new general and easier-to-use "generalized information matrix test" statistic that can formally assess misspecification (based on differences between robust and classical variance estimates); and practical illustrations via simulations and real examples from published research. How robust standard errors are used needs to change, but instead of jettisoning this popular tool we show how to use it to provide effective clues about model misspecification, likely biases, and a guide to considerably more reliable, and defensible, inferences. Accompanying this article [soon!] is software that implements the methods we describe. 

A Unified Approach to Measurement Error and Missing Data: Details and Extensions
Matthew Blackwell, James Honaker, and Gary King. 2015. “A Unified Approach to Measurement Error and Missing Data: Details and Extensions.” Sociological Methods and Research, 1-28.

We extend a unified and easy-to-use approach to measurement error and missing data. In our companion article, Blackwell, Honaker, and King give an intuitive overview of the new technique, along with practical suggestions and empirical applications. Here, we offer more precise technical details, more sophisticated measurement error model specifications and estimation procedures, and analyses to assess the approach’s robustness to correlated measurement errors and to errors in categorical variables. These results support using the technique to reduce bias and increase efficiency in a wide variety of empirical research.

Automating Open Science for Big Data
Merce Crosas, James Honaker, Gary King, and Latanya Sweeney. 2015. “Automating Open Science for Big Data.” ANNALS of the American Academy of Political and Social Science, 1, 659: 260-273.

The vast majority of social science research presently uses small (megabyte- or gigabyte-scale) data sets. These fixed-scale data sets are commonly downloaded to the researcher's computer, where the analysis is performed locally, and are often shared and cited with well-established technologies, such as the Dataverse Project, to support the published results. The trend toward Big Data -- including large-scale streaming data -- is starting to transform research and has the potential to impact policy-making and our understanding of the social, economic, and political problems that affect human societies. However, this research poses new challenges in execution, accountability, preservation, reuse, and reproducibility. Downloading these data sets to a researcher's computer is impractical or infeasible; hence, analyses take place in the cloud, require unusual expertise, and benefit from collaborative teamwork and novel tool development. The very informativeness that makes these data sets valuable also means that they are much more likely to contain highly sensitive personally identifiable information. In this paper, we discuss solutions to these new challenges so that the social sciences can realize the potential of Big Data.
