Writings

2018
Compactness: An R Package for Measuring Legislative District Compactness If You Only Know it When You See It
Aaron Kaufman, Gary King, and Mayya Komisarchik. 2018. “Compactness: An R Package for Measuring Legislative District Compactness If You Only Know it When You See It”.

This software implements the method described in Aaron Kaufman, Gary King, and Mayya Komisarchik. Forthcoming. “How to Measure Legislative District Compactness If You Only Know it When You See It.” American Journal of Political Science. Copy at http://j.mp/2u9OWrG 

Our paper abstract:  To deter gerrymandering, many state constitutions require legislative districts to be "compact." Yet, the law offers few precise definitions other than "you know it when you see it," which effectively implies a common understanding of the concept. In contrast, academics have shown that compactness has multiple dimensions and have generated many conflicting measures. We hypothesize that both are correct -- that compactness is complex and multidimensional, but a common understanding exists across people. We develop a survey to elicit this understanding, with high reliability (in data where the standard paired comparisons approach fails). We create a statistical model that predicts, with high accuracy, solely from the geometric features of the district, compactness evaluations by judges and public officials responsible for redistricting, among others. We also offer compactness data from our validated measure for 20,160 state legislative and congressional districts, as well as software to compute this measure from any district.
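The geometric features the model starts from are conventional compactness scores. As a purely illustrative sketch (not the paper's survey-based measure), here is one such standard feature, the Polsby-Popper score 4πA/P², computed for a hypothetical polygon given as coordinate vertices:

```python
import math

def polygon_area_perimeter(vertices):
    """Shoelace area and perimeter of a closed polygon given as (x, y) vertices."""
    area = 0.0
    perimeter = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
        perimeter += math.hypot(x2 - x1, y2 - y1)
    return abs(area) / 2.0, perimeter

def polsby_popper(vertices):
    """4*pi*area / perimeter^2; equals 1 for a circle, smaller for contorted shapes."""
    area, perimeter = polygon_area_perimeter(vertices)
    return 4.0 * math.pi * area / perimeter ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
print(round(polsby_popper(square), 3))  # pi/4 ≈ 0.785
```

A circle scores 1 and a contorted, tentacled district scores near 0; the paper's contribution is learning how people weight many such features, not any single score.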

Edited transcript of a talk on Partisan Symmetry at the 'Redistricting and Representation Forum'
Gary King. 2018. “Edited transcript of a talk on Partisan Symmetry at the 'Redistricting and Representation Forum'.” Bulletin of the American Academy of Arts and Sciences, Winter, Pp. 55-58.

The origin, meaning, estimation, and application of the concept of partisan symmetry in legislative redistricting, and the justiciability of partisan gerrymandering. An edited transcript of a talk at the “Redistricting and Representation Forum,” American Academy of Arts & Sciences, Cambridge, MA 11/8/2017.

Here also is a video of the original talk.

Article
PSI (Ψ): a Private data Sharing Interface
Marco Gaboardi, James Honaker, Gary King, Kobbi Nissim, Jonathan Ullman, and Salil Vadhan. 2018. “PSI (Ψ): a Private data Sharing Interface”. Publisher's Version

We provide an overview of PSI ("a Private data Sharing Interface"), a system we are developing to enable researchers in the social sciences and other fields to share and explore privacy-sensitive datasets with the strong privacy protections of differential privacy.  (See software here and our OpenDP.org project which builds on this paper.)
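PSI itself manages privacy budgets across many statistics; the basic differential-privacy primitive it builds on can be sketched in a few lines. The Laplace mechanism below is illustrative only, with hypothetical epsilon and data, and is not PSI's implementation:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon, rng=random):
    """Release true_value plus Laplace(sensitivity/epsilon) noise.

    Guarantees epsilon-differential privacy for a query whose output
    changes by at most `sensitivity` when one record changes."""
    scale = sensitivity / epsilon
    # Sample Laplace noise by inverse-CDF transform of a uniform draw.
    u = rng.random() - 0.5
    noise = -scale * (1 if u >= 0 else -1) * math.log(1 - 2 * abs(u))
    return true_value + noise

# Hypothetical example: privately release a count of 120 records.
random.seed(0)
private_count = laplace_mechanism(120, sensitivity=1, epsilon=0.5)
print(private_count)
```

Smaller epsilon means stronger privacy and noisier releases; PSI's job is helping researchers split a total epsilon budget across the statistics they want to share.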

Paper
Readme2: An R Package for Improved Automated Nonparametric Content Analysis for Social Science
Connor T. Jerzak, Gary King, and Anton Strezhnev. 2018. “Readme2: An R Package for Improved Automated Nonparametric Content Analysis for Social Science”.

An R package for estimating category proportions in an unlabeled set of documents given a labeled set, by implementing the method described in Jerzak, King, and Strezhnev (2019). This method is meant to improve on the ideas in Hopkins and King (2010), which introduced a quantification algorithm to estimate category proportions without directly classifying individual observations. This version of the software refines the original method by implementing a technique for selecting optimal textual features in order to minimize the error of the estimated category proportions. Automatic differentiation, stochastic gradient descent, and batch re-normalization are used to carry out the optimization. Other pre-processing functions are available, as well as an interface to the earlier version of the algorithm for comparison. The package also provides users with the ability to extract the generated features for use in other tasks.

(Here's the abstract from our paper: Computer scientists and statisticians are often interested in classifying textual documents into chosen categories. Social scientists and others are often less interested in any one document and instead try to estimate the proportion falling in each category. The two existing types of techniques for estimating these category proportions are parametric "classify and count" methods and "direct" nonparametric estimation of category proportions without an individual classification step. Unfortunately, classify and count methods can sometimes be highly model dependent or generate more bias in the proportions even as the percent correctly classified increases. Direct estimation avoids these problems, but can suffer when the meaning and usage of language is too similar across categories or too different between training and test sets. We develop an improved direct estimation approach without these problems by introducing continuously valued text features optimized for this problem, along with a form of matching adapted from the causal inference literature. We evaluate our approach in analyses of a diverse collection of 73 data sets, showing that it substantially improves performance compared to existing approaches. As a companion to this paper, we offer easy-to-use software that implements all ideas discussed herein.)
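The direct-estimation idea from Hopkins and King (2010) that readme2 refines can be sketched independently of the package: find the category proportions that best reconcile the unlabeled set's mean feature profile with the labeled set's per-category profiles. Everything below (the toy features, data, and crude simplex projection) is hypothetical and omits readme2's feature optimization:

```python
import numpy as np

def estimate_proportions(X_labeled, y_labeled, X_unlabeled, n_categories):
    """Direct quantification: regress the unlabeled feature-mean vector on
    per-category feature means from the labeled set, then project the
    coefficients (category proportions) onto the simplex."""
    # Columns: mean feature profile of each category in the labeled set.
    A = np.column_stack([X_labeled[y_labeled == c].mean(axis=0)
                         for c in range(n_categories)])
    b = X_unlabeled.mean(axis=0)
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)
    pi = np.clip(pi, 0, None)            # crude simplex projection:
    return pi / pi.sum()                 # nonnegative, sums to one

# Hypothetical toy data: 2 categories, 3 binary features.
rng = np.random.default_rng(0)
means = np.array([[0.8, 0.2, 0.5], [0.1, 0.9, 0.5]])
y_lab = rng.integers(0, 2, 500)
X_lab = (rng.random((500, 3)) < means[y_lab]).astype(float)
true_pi = np.array([0.7, 0.3])
y_unl = (rng.random(4000) < true_pi[1]).astype(int)
X_unl = (rng.random((4000, 3)) < means[y_unl]).astype(float)
print(estimate_proportions(X_lab, y_lab, X_unl, 2))  # ≈ [0.7, 0.3]
```

No individual document is ever classified; only the aggregate proportions are estimated, which is exactly why the approach can work even when per-document classification is unreliable.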

2017
How to conquer partisan gerrymandering
Gary King and Robert X Browning. 12/26/2017. “How to conquer partisan gerrymandering.” Boston Globe (Op-Ed), 292, 179, Pp. A10. Publisher's Version
PARTISAN GERRYMANDERING has long been reviled for thwarting the will of the voters. Yet while voters are acting disgusted, the US Supreme Court has only discussed acting — declaring they have the constitutional right to fix the problem, but doing nothing. But as better data and computer algorithms are now making gerrymandering increasingly effective, continuing to sidestep the issue could do permanent damage to American democracy. In Gill v. Whitford, the soon-to-be-decided challenge to Wisconsin’s 2011 state Assembly redistricting plan, the court could finally fix the problem for the whole country. Judging from the oral arguments, the key to the case is whether the court endorses the concept of “partisan symmetry,” a specific standard for treating political parties equally in allocating legislative seats based on voting.
Article
How the news media activate public expression and influence national agendas
Gary King, Benjamin Schneer, and Ariel White. 11/10/2017. “How the news media activate public expression and influence national agendas.” Science, 358, Pp. 776-780. Publisher's Version

We demonstrate that exposure to the news media causes Americans to take public stands on specific issues, join national policy conversations, and express themselves publicly—all key components of democratic politics—more often than they would otherwise. After recruiting 48 mostly small media outlets, we chose groups of these outlets to write and publish articles on subjects we approved, on dates we randomly assigned. We estimated the causal effect on proximal measures, such as website pageviews and Twitter discussion of the articles’ specific subjects, and distal ones, such as national Twitter conversation in broad policy areas. Our intervention increased discussion in each broad policy area by approximately 62.7% (relative to a day’s volume), accounting for 13,166 additional posts over the treatment week, with similar effects across population subgroups. 

On the Science website: Abstract, Reprint, Full text, and a comment (by Matthew Gentzkow), "Small media, big impact".

Article | Supplementary Appendix
2017. “OpenScholar”.
2017. “Thresher”.
The Balance-Sample Size Frontier in Matching Methods for Causal Inference
Gary King, Christopher Lucas, and Richard Nielsen. 2017. “The Balance-Sample Size Frontier in Matching Methods for Causal Inference.” American Journal of Political Science, 61, 2, Pp. 473-489.

We propose a simplified approach to matching for causal inference that simultaneously optimizes balance (similarity between the treated and control groups) and matched sample size. Existing approaches either fix the matched sample size and maximize balance or fix balance and maximize sample size, leaving analysts to settle for suboptimal solutions or attempt manual optimization by iteratively tweaking their matching method and rechecking balance. To jointly maximize balance and sample size, we introduce the matching frontier, the set of matching solutions with maximum possible balance for each sample size. Rather than iterating, researchers can choose matching solutions from the frontier for analysis in one step. We derive fast algorithms that calculate the matching frontier for several commonly used balance metrics. We demonstrate with analyses of the effect of sex on judging and of job training programs how the methods we introduce can extract new knowledge from existing data sets.

Easy-to-use, open-source software is available here to implement all methods in the paper.
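The intuition behind the frontier can be conveyed with a naive greedy pass that is not the paper's algorithm: repeatedly drop the control unit whose removal most reduces a simple mean-difference imbalance metric, recording imbalance at each sample size. All data below are hypothetical:

```python
import numpy as np

def imbalance(X, treat):
    """Sum of absolute differences in covariate means, treated vs. control."""
    return float(np.abs(X[treat == 1].mean(axis=0) - X[treat == 0].mean(axis=0)).sum())

def greedy_frontier(X, treat):
    """Trace (sample size, imbalance) pairs by repeatedly dropping the single
    control unit whose removal most reduces imbalance (illustrative only)."""
    keep = np.ones(len(X), dtype=bool)
    frontier = [(int(keep.sum()), imbalance(X[keep], treat[keep]))]
    while (treat[keep] == 0).sum() > 1:
        candidates = np.where(keep & (treat == 0))[0]
        scores = []
        for i in candidates:
            keep[i] = False
            scores.append(imbalance(X[keep], treat[keep]))
            keep[i] = True
        best = candidates[int(np.argmin(scores))]
        keep[best] = False
        frontier.append((int(keep.sum()), min(scores)))
    return frontier

rng = np.random.default_rng(0)
treat = np.r_[np.ones(20, int), np.zeros(80, int)]
X = rng.normal(treat[:, None] * 0.5, 1.0, (100, 2))  # controls differ in mean
frontier = greedy_frontier(X, treat)
print(frontier[0], frontier[-1])  # imbalance shrinks as sample size falls
```

Plotting imbalance against sample size makes the balance-sample size trade-off explicit, which is the frontier the paper computes exactly and quickly for several balance metrics.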

Proofs | Supplementary Appendix
booc.io: An Education System with Hierarchical Concept Maps
Michail Schwab, Hendrik Strobelt, James Tompkin, Colin Fredericks, Connor Huff, Dana Higgins, Anton Strezhnev, Mayya Komisarchik, Gary King, and Hanspeter Pfister. 2017. “booc.io: An Education System with Hierarchical Concept Maps.” IEEE Transactions on Visualization and Computer Graphics, 23, 1, Pp. 571-580. Publisher's Version

Information hierarchies are difficult to express when real-world space or time constraints force traversing the hierarchy in linear presentations, such as in educational books and classroom courses. We present booc.io, which allows linear and non-linear presentation and navigation of educational concepts and material. To support a breadth of material for each concept, booc.io is Web based, which allows adding material such as lecture slides, book chapters, videos, and LTIs. A visual interface assists the creation of the needed hierarchical structures. The goals of our system were formed in expert interviews, and we explain how our design meets these goals. We adapt a real-world course into booc.io and perform an introductory qualitative evaluation with students.

booc.io: Software for an Education System with Hierarchical Concept Maps
Michail Schwab, Hendrik Strobelt, James Tompkin, Colin Fredericks, Connor Huff, Dana Higgins, Anton Strezhnev, Mayya Komisarchik, Gary King, and Hanspeter Pfister. 2017. “booc.io: Software for an Education System with Hierarchical Concept Maps”.
Heather K. Gerken, Jonathan N. Katz, Gary King, Larry J. Sabato, and Samuel S.-H. Wang. 2017. “Brief of Heather K. Gerken, Jonathan N. Katz, Gary King, Larry J. Sabato, and Samuel S.-H. Wang as Amici Curiae in Support of Appellees.” Filed with the Supreme Court of the United States in Beverly R. Gill et al. v. William Whitford et al. 16-1161.
SUMMARY OF ARGUMENT
Plaintiffs ask this Court to do what it has done many times before. For generations, it has resolved cases involving elections and cases on which elections ride. It has adjudicated controversies that divide the American people and those, like this one, where Americans are largely in agreement. In doing so, the Court has sensibly adhered to its long-standing and circumspect approach: it has announced a workable principle, one that lends itself to a manageable test, while allowing the lower courts to work out the precise contours of that test with time and experience.

Partisan symmetry, the principle put forward by the plaintiffs, is just such a workable principle. The standard is highly intuitive, deeply rooted in history, and accepted by virtually all social scientists. Tests for partisan symmetry are reliable, transparent, and easy to calculate without undue reliance on experts or unnecessary judicial intrusion on state redistricting judgments. Under any of these tests, Wisconsin’s districts cannot withstand constitutional scrutiny.
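As an illustration of why such tests are easy to calculate, here is a sketch of one standard symmetry diagnostic, partisan bias under uniform swing, computed on hypothetical district vote shares (this is not the amici's code):

```python
def seat_share(district_votes, statewide_target):
    """Seat share a party would win if its statewide vote shifted uniformly
    to `statewide_target`, applying the same swing to every district."""
    statewide = sum(district_votes) / len(district_votes)
    swing = statewide_target - statewide
    return sum(v + swing > 0.5 for v in district_votes) / len(district_votes)

def partisan_bias(district_votes):
    """Deviation from symmetry at 50% of the vote: a symmetric plan gives
    each party half the seats when the vote splits evenly."""
    return seat_share(district_votes, 0.5) - 0.5

# Hypothetical plan: party A is "packed" into two districts, losing the rest narrowly.
votes_for_a = [0.85, 0.85, 0.45, 0.45, 0.45, 0.45, 0.45]
print(partisan_bias(votes_for_a))  # ≈ -0.214: far fewer than half the seats at half the votes
```

A plan with bias near zero treats the parties symmetrically; a large negative (or positive) value quantifies how far a plan departs from that standard.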
Amici Brief
Computer-Assisted Keyword and Document Set Discovery from Unstructured Text
Gary King, Patrick Lam, and Margaret Roberts. 2017. “Computer-Assisted Keyword and Document Set Discovery from Unstructured Text.” American Journal of Political Science, 61, 4, Pp. 971-988. Publisher's Version

The (unheralded) first step in many applications of automated text analysis involves selecting keywords to choose documents from a large text corpus for further study. Although all substantive results depend on this choice, researchers usually pick keywords in ad hoc ways that are far from optimal and usually biased. Paradoxically, this often means that the validity of the most sophisticated text analysis methods depends in practice on the inadequate keyword counting or matching methods they are designed to replace. Improved methods of keyword selection would also be valuable in many other areas, such as following conversations that rapidly innovate language to evade authorities, seek political advantage, or express creativity; generic web searching; eDiscovery; look-alike modeling; intelligence analysis; and sentiment and topic analysis. We develop a computer-assisted (as opposed to fully automated) statistical approach that suggests keywords from available text without needing structured data as inputs. This framing poses the statistical problem in a new way, which leads to a widely applicable algorithm. Our specific approach is based on training classifiers, extracting information from (rather than correcting) their mistakes, and summarizing results with Boolean search strings. We illustrate how the technique works with analyses of English texts about the Boston Marathon Bombings and Chinese social media posts designed to evade censorship, among others.
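The flavor of computer-assisted keyword suggestion can be sketched with a much cruder device than the paper's classifier-mining approach: rank words by smoothed log-odds of appearing in a target document set versus a reference set. The corpora and scoring rule below are hypothetical stand-ins:

```python
import math
from collections import Counter

def suggest_keywords(target_docs, reference_docs, top_k=5, smoothing=1.0):
    """Rank words by smoothed log-odds of appearing in target vs. reference
    documents -- a crude stand-in for the paper's classifier-based mining."""
    t = Counter(w for doc in target_docs for w in set(doc.lower().split()))
    r = Counter(w for doc in reference_docs for w in set(doc.lower().split()))
    vocab = set(t) | set(r)

    def score(w):
        pt = (t[w] + smoothing) / (len(target_docs) + 2 * smoothing)
        pr = (r[w] + smoothing) / (len(reference_docs) + 2 * smoothing)
        return math.log(pt / (1 - pt)) - math.log(pr / (1 - pr))

    return sorted(vocab, key=score, reverse=True)[:top_k]

# Hypothetical corpora: posts about an event vs. general news.
target = ["marathon bombing suspect seen", "police chase bombing suspect"]
reference = ["election results announced", "weather delays city marathon"]
print(suggest_keywords(target, reference, top_k=3))
```

The computer-assisted part of the paper's method comes from presenting ranked suggestions like these to a human, who accepts or rejects them, rather than trusting any fully automated ranking.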

Article
How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, not Engaged Argument
Gary King, Jennifer Pan, and Margaret E. Roberts. 2017. “How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, not Engaged Argument.” American Political Science Review, 111, 3, Pp. 484-501. Publisher's Version

The Chinese government has long been suspected of hiring as many as 2,000,000 people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called “50c party” posts vociferously argue for the government's side in political and policy debates. As we show, this is also true of the vast majority of posts openly accused on social media of being 50c. Yet, almost no systematic empirical evidence exists for this claim, or, more importantly, for the Chinese regime's strategic objective in pursuing this activity. In the first large scale empirical analysis of this operation, we show how to identify the secretive authors of these posts, the posts written by them, and their content. We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime's strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We show that the goal of this massive secretive operation is instead to distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. We discuss how these results fit with what is known about the Chinese censorship program, and suggest how they may change our broader theoretical understanding of “common knowledge” and information control in authoritarian regimes.

This paper is related to our articles in Science, “Reverse-Engineering Censorship In China: Randomized Experimentation And Participant Observation”, and the American Political Science Review, “How Censorship In China Allows Government Criticism But Silences Collective Expression”.

Article | Supplementary Appendix
A Unified Approach to Measurement Error and Missing Data: Details and Extensions
Matthew Blackwell, James Honaker, and Gary King. 2017. “A Unified Approach to Measurement Error and Missing Data: Details and Extensions.” Sociological Methods and Research, 46, 3, Pp. 342-369. Publisher's Version

We extend a unified and easy-to-use approach to measurement error and missing data. In our companion article, Blackwell, Honaker, and King give an intuitive overview of the new technique, along with practical suggestions and empirical applications. Here, we offer more precise technical details, more sophisticated measurement error model specifications and estimation procedures, and analyses to assess the approach’s robustness to correlated measurement errors and to errors in categorical variables. These results support using the technique to reduce bias and increase efficiency in a wide variety of empirical research.

Advance access version
A Unified Approach to Measurement Error and Missing Data: Overview and Applications
Matthew Blackwell, James Honaker, and Gary King. 2017. “A Unified Approach to Measurement Error and Missing Data: Overview and Applications.” Sociological Methods and Research, 46, 3, Pp. 303-341. Publisher's Version

Although social scientists devote considerable effort to mitigating measurement error during data collection, they often ignore the issue during data analysis. And although many statistical methods have been proposed for reducing measurement error-induced biases, few have been widely used because of implausible assumptions, high levels of model dependence, difficult computation, or inapplicability with multiple mismeasured variables. We develop an easy-to-use alternative without these problems; it generalizes the popular multiple imputation (MI) framework by treating missing data problems as a limiting special case of extreme measurement error, and corrects for both. Like MI, the proposed framework is a simple two-step procedure, so that in the second step researchers can use whatever statistical method they would have if there had been no problem in the first place. We also offer empirical illustrations, open source software that implements all the methods described herein, and a companion paper with technical details and extensions (Blackwell, Honaker, and King, 2017b).
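The attenuation that measurement error induces, and its correction, can be seen in a few lines. The sketch below uses deterministic regression calibration with a known error variance, which is only an intuition-building special case; the full framework instead draws multiple imputations conditioning on all observed data and combines the analyses. All data are simulated and hypothetical:

```python
import numpy as np

def calibrated_slope(y, w, sigma_u):
    """Correct OLS attenuation from measurement error w = x + u.

    Replaces w by the best guess of the true x, E[x | w] = mu + lam*(w - mu),
    where lam is the reliability ratio. The full multiple-overimputation
    framework instead draws many imputations conditioning on all observed
    data (including y); this deterministic step only conveys the intuition."""
    var_w = w.var()
    lam = max(var_w - sigma_u ** 2, 1e-8) / var_w   # reliability ratio
    x_hat = w.mean() + lam * (w - w.mean())
    return float(np.cov(x_hat, y)[0, 1] / x_hat.var())

rng = np.random.default_rng(1)
x = rng.normal(0, 1, 5000)
y = 2 * x + rng.normal(0, 1, 5000)       # true slope is 2
w = x + rng.normal(0, 1, 5000)           # observed with error, sigma_u = 1
naive = float(np.cov(w, y)[0, 1] / w.var())
print(round(naive, 2), round(calibrated_slope(y, w, sigma_u=1.0), 2))
```

With this much error the naive slope is attenuated toward roughly half its true value, while the calibrated estimate recovers it; treating fully missing values as the limiting case of infinite error variance is what unifies this with multiple imputation.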

Article
2016
Method and Apparatus for Selecting Clusterings to Classify a Data Set
Gary King and Justin Grimmer. 12/13/2016. “Method and Apparatus for Selecting Clusterings to Classify a Data Set.” United States of America 9,519,705 B2 (Patent and Trademark Office).

In a computer-assisted clustering method, a clustering space is generated from fixed basis partitions that embed the entire space of all possible clusterings. A lower dimensional clustering space is created from the space of all possible clusterings by isometrically embedding the space of all possible clusterings in a lower dimensional Euclidean space. This lower dimensional space is then sampled based on the number of documents in the corpus. Partitions are then developed based on the samples that tessellate the space. Finally, using clusterings representative of these tessellations, a two-dimensional representation for users to explore is created.
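The embedding step can be sketched (not the patented method itself) by computing a metric distance between clusterings and applying classical multidimensional scaling; the clusterings below are hypothetical:

```python
import numpy as np

def variation_of_information(a, b):
    """Metric distance between two clusterings (label lists) of the same items."""
    n = len(a)
    vi = 0.0
    clusters_a = {x: np.flatnonzero(np.array(a) == x) for x in set(a)}
    clusters_b = {y: np.flatnonzero(np.array(b) == y) for y in set(b)}
    for ia in clusters_a.values():
        for ib in clusters_b.values():
            nij = len(np.intersect1d(ia, ib))
            if nij:
                vi -= (nij / n) * (np.log(nij / len(ia)) + np.log(nij / len(ib)))
    return vi

def embed_2d(clusterings):
    """Classical MDS: embed clusterings in the plane from pairwise VI distances."""
    m = len(clusterings)
    D2 = np.array([[variation_of_information(a, b) ** 2 for b in clusterings]
                   for a in clusterings])
    J = np.eye(m) - np.ones((m, m)) / m
    B = -0.5 * J @ D2 @ J                 # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)
    idx = np.argsort(vals)[::-1][:2]
    return vecs[:, idx] * np.sqrt(np.clip(vals[idx], 0, None))

# Hypothetical clusterings of 6 documents.
parts = [[0, 0, 0, 1, 1, 1], [0, 0, 1, 1, 2, 2], [0, 1, 0, 1, 0, 1]]
print(embed_2d(parts).shape)  # (3, 2)
```

In such a map, nearby points are clusterings that organize the documents similarly, which is what makes a two-dimensional space of clusterings explorable.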

Patent
Cross-Classroom and Cross-Institution Item Validation
Gary King, Brian Lukoff, and Eric Mazur. 11/29/2016. “Cross-Classroom and Cross-Institution Item Validation.” United States of America 9,508,266 (US Patent and Trademark Office).

Anonymously pretesting items for subsequent presentation to participants in a group enables an instructor to validate responses and revise the items accordingly. ... The present invention facilitates anonymous pretesting of items in classrooms (and/or other similar settings) to which the item author has no direct access or knowledge. In some embodiments, pretesting is performed by software used by the instructor/author in his or her own classroom for other tasks. In various implementations, the software shares information with a central clearinghouse anonymously. The central clearinghouse then automatically matches students in the instructor's class with "relevant" students from other classes -- e.g., students that a statistical algorithm predicts will have approximately the same understanding, and will give approximately the same answers, as the instructor's class. ...

Patent
Systems and methods for calculating category proportions
Aykut Firat, Mitchell Brooks, Christopher Bingham, Amac Herdagdelen, and Gary King. 11/1/2016. “Systems and methods for calculating category proportions.” United States of America 9,483,544 (U.S. Patent and Trademark Office).

Systems and methods are provided for classifying text based on language using one or more computer servers and storage devices. A computer-implemented method includes receiving a training set of elements, each element in the training set being assigned to one of a plurality of categories and having one of a plurality of content profiles associated therewith; receiving a population set of elements, each element in the population set having one of the plurality of content profiles associated therewith; and calculating, using at least one of a stacked regression algorithm, a bias formula algorithm, a noise elimination algorithm, and an ensemble method consisting of a plurality of algorithmic methods the results of which are averaged, based on the content profiles associated with and the categories assigned to elements in the training set and the content profiles associated with the elements of the population set, a distribution of elements of the population set over the categories.
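The claim's ensemble step, averaging the category-proportion estimates of several algorithmic methods, can be sketched directly; the three methods' outputs below are hypothetical numbers:

```python
import numpy as np

def ensemble_proportions(estimates):
    """Average several methods' category-proportion estimates and renormalize,
    as in the claim's ensemble step (inputs here are hypothetical)."""
    avg = np.mean(np.asarray(estimates, dtype=float), axis=0)
    return avg / avg.sum()

# Hypothetical outputs of three estimators over four categories.
method_outputs = [
    [0.40, 0.30, 0.20, 0.10],
    [0.35, 0.35, 0.20, 0.10],
    [0.45, 0.25, 0.15, 0.15],
]
print(ensemble_proportions(method_outputs))
```

Averaging across estimators with different error profiles tends to cancel method-specific biases, which is the rationale for the ensemble alternative the claim lists alongside the single-algorithm options.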

Patent
Comment on 'Estimating the Reproducibility of Psychological Science'
Daniel Gilbert, Gary King, Stephen Pettigrew, and Timothy Wilson. 2016. “Comment on 'Estimating the Reproducibility of Psychological Science'.” Science, 351, 6277, Pp. 1037a-1038a. Publisher's Version

A recent article by the Open Science Collaboration (a group of 270 coauthors) gained considerable academic and public attention due to its sensational conclusion that the replicability of psychological science is surprisingly low. Science magazine lauded this article as one of the top 10 scientific breakthroughs of the year across all fields of science, reports of which appeared on the front pages of newspapers worldwide. We show that OSC's article contains three major statistical errors and, when corrected, provides no evidence of a replication crisis. Indeed, the evidence is consistent with the opposite conclusion -- that the reproducibility of psychological science is quite high and, in fact, statistically indistinguishable from 100%. (Of course, that doesn't mean that the replicability is 100%, only that the evidence is insufficient to reliably estimate replicability.) The moral of the story is that meta-science must follow the rules of science.

Replication data is available in this dataverse archive. See also the full web site for this article and related materials, and one of the news articles written about it.

Article, with Supplementary Appendix | Our Response to OSC's Reply | Reply to post-publication discussion
