Writings

Working Paper
Do Nonpartisan Programmatic Policies Have Partisan Electoral Effects? Evidence from Two Large Scale Randomized Experiments
Kosuke Imai, Gary King, and Carlos Velasco Rivera. Working Paper. “Do Nonpartisan Programmatic Policies Have Partisan Electoral Effects? Evidence from Two Large Scale Randomized Experiments”.

A vast literature demonstrates that voters around the world who benefit from their governments' discretionary spending cast more ballots for the incumbent party than those who do not benefit. But contrary to most theories of political accountability, some evidence suggests that voters also reward incumbent parties for implementing "programmatic" spending legislation, over which incumbents have no discretion, and even when passed with support from all major parties. Why voters would attribute responsibility when none exists is unclear, as is why minority party legislators would approve of legislation that will cost them votes. We study the electoral effects of two prominent programmatic policies that fit the ideal type unusually well. For the first, we implement one of the largest randomized social experiments ever, and find that its programmatic policies do not increase voter support for incumbents. For the second, we reanalyze the study cited as claiming the strongest support for the electoral effects of programmatic policies, which is also a very large randomized experiment. We show that its key results vanish after correcting either a simple coding error affecting only two observations or highly unconventional data analysis procedures (or both). Our results may differ from those of prior research because we were able to marshal large scale experiments rather than observational studies or because we analyze relatively pure forms of programmatic policies rather than mixtures of programmatic and clientelistic policies. However, we conjecture that the primary explanation is the differing nature of the politics for which these policies are passed and implemented.

Paper Supplementary Appendix
How Human Subjects Research Rules Mislead You and Your University, and What to Do About it
Gary King and Melissa Sands. Working Paper. “How Human Subjects Research Rules Mislead You and Your University, and What to Do About it”.

Universities require faculty and students planning research involving human subjects to pass formal certification tests and then submit research plans for prior approval. Those who diligently take the tests may better understand certain important legal requirements but, at the same time, are often misled into thinking they can apply these rules to their own work, which, in fact, they are not permitted to do. They will also be missing many other legal requirements not mentioned in their training but which govern their behaviors. Finally, the training leaves them likely to completely misunderstand the essentially political situation they find themselves in. The resulting risks to their universities, collaborators, and careers may be catastrophic, in addition to contributing to the more common, ordinary frustrations of researchers with the system. To avoid these problems, faculty and students conducting research about and for the public need to understand that they are public figures, to whom different rules apply, ones that political scientists have long studied. University administrators (and faculty in their part-time roles as administrators) need to reorient their perspectives as well. University research compliance bureaucracies have grown in well-meaning but sometimes unproductive ways that are not required by federal laws or guidelines. We offer advice to faculty and students for how to deal with the system as it exists now, and suggestions for changes in university research compliance bureaucracies that should benefit faculty, students, staff, university budgets, and our research subjects.

Paper
How to Measure Legislative District Compactness If You Only Know it When You See It
Aaron Kaufman, Gary King, and Mayya Komisarchik. Working Paper. “How to Measure Legislative District Compactness If You Only Know it When You See It”.
To prevent gerrymandering, and to impose a specific form of democratic representation, many state constitutions and judicial opinions require US legislative districts to be "compact." Yet, the law offers few precise definitions other than "you know it when you see it," which effectively implies a common understanding of the concept. In contrast, academics have shown that the concept has multiple theoretical dimensions and have generated large numbers of conflicting empirical measures. This has proved extremely challenging for courts tasked with adjudicating compactness. We hypothesize that both are correct --- that compactness is complex and multidimensional, but a common understanding exists across people. We develop a survey design to elicit this understanding, without bias in favor of one's own political views, and with high levels of reliability (in data where the standard paired comparisons approach fails). We then create a statistical model that predicts, with high accuracy and solely from the geometric features of the district, compactness evaluations by 96 judges, justices, and public officials responsible for redistricting (and 102 redistricting consultants, expert witnesses, law professors, law students, graduate students, undergraduates, and Mechanical Turk workers). We also offer data on compactness from our validated measure for 18,215 state legislative and congressional districts, as well as software to compute this measure from any district. We then discuss what may be the wider applicability of our general methodological approach to measuring important concepts that you only know when you see them.
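For intuition about how a purely geometric model of this kind can work, here is a short, hypothetical Python sketch: it computes two standard compactness features (Polsby-Popper and the convex-hull ratio) for toy districts and fits a simple regression to made-up human ratings. The features, data, and model are illustrative assumptions, not the paper's specification or its validated measure.

    import math
    import numpy as np
    from shapely.geometry import Polygon
    from sklearn.linear_model import Ridge

    def geometric_features(coords):
        poly = Polygon(coords)
        area, perimeter = poly.area, poly.length
        polsby_popper = 4 * math.pi * area / perimeter ** 2   # 1.0 for a circle
        convex_hull_ratio = area / poly.convex_hull.area      # 1.0 if the shape is convex
        return [polsby_popper, convex_hull_ratio]

    # Hypothetical training data: district boundaries and average human ratings
    districts = [[(0, 0), (1, 0), (1, 1), (0, 1)],            # a square
                 [(0, 0), (4, 0), (4, 0.2), (0, 0.2)]]        # a long, thin strip
    ratings = [0.9, 0.2]                                      # higher = more compact

    X = np.array([geometric_features(d) for d in districts])
    model = Ridge(alpha=1.0).fit(X, ratings)
    print(model.predict(X))                                   # predicted compactness scores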
Paper
An Improved Method of Automated Nonparametric Content Analysis for Social Science
Connor T. Jerzak, Gary King, and Anton Strezhnev. Working Paper. “An Improved Method of Automated Nonparametric Content Analysis for Social Science”.
Computer scientists and statisticians are often interested in classifying textual documents into chosen categories. Social scientists and others are often less interested in any one document and instead try to estimate the proportion falling in each category. The two existing types of techniques for estimating these category proportions are parametric "classify and count" methods and "direct" nonparametric estimation of category proportions without an individual classification step. Unfortunately, classify and count methods can sometimes be highly model dependent or generate more bias in the proportions even as the percent correctly classified increases. Direct estimation avoids these problems, but can suffer when the meaning and usage of language is too similar across categories or too different between training and test sets. We develop an improved direct estimation approach without these problems by introducing continuously valued text features optimized for this problem, along with a form of matching adapted from the causal inference literature. We evaluate our approach in analyses of a diverse collection of 72 data sets, showing that it achieves substantially improved performance compared to existing approaches. As a companion to this paper, we offer easy-to-use software that implements all ideas discussed herein.
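To convey the general flavor of direct estimation (not this paper's estimator, its text features, or its matching step), here is a small Python sketch under assumptions of our own: it estimates feature-profile probabilities within each category from a labeled set, then solves for the category proportions that best reproduce the feature probabilities observed in the unlabeled set.

    import numpy as np
    from scipy.optimize import nnls

    def direct_proportions(X_train, y_train, X_test):
        """X_* are binary document-feature matrices; y_train holds category labels."""
        cats = np.unique(y_train)
        # A[f, c] = P(feature f present | category c), estimated from the labeled set
        A = np.column_stack([X_train[y_train == c].mean(axis=0) for c in cats])
        b = X_test.mean(axis=0)                 # P(feature f present) in the target set
        pi, _ = nnls(A, b)                      # solve A @ pi ~= b with pi >= 0
        return dict(zip(cats, pi / pi.sum()))   # normalize to category proportions

    rng = np.random.default_rng(0)
    X_train = rng.integers(0, 2, size=(200, 50))
    y_train = rng.choice(["pos", "neg"], size=200)
    X_test = rng.integers(0, 2, size=(500, 50))
    print(direct_proportions(X_train, y_train, X_test))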
Paper
PSI (Ψ): a Private data Sharing Interface
Marco Gaboardi, James Honaker, Gary King, Kobbi Nissim, Jonathan Ullman, and Salil Vadhan. Working Paper. “PSI (Ψ): a Private data Sharing Interface”. Publisher's Version

We provide an overview of PSI ("a Private data Sharing Interface"), a system we are developing to enable researchers in the social sciences and other fields to share and explore privacy-sensitive datasets with the strong privacy protections of differential privacy.
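For readers unfamiliar with differential privacy, a tiny Python sketch of its most basic building block, the Laplace mechanism, is below; it is offered for intuition only and is not PSI's interface or design.

    import numpy as np

    def dp_mean(values, lower, upper, epsilon, rng):
        """Release the mean of values clipped to [lower, upper] with epsilon-differential privacy."""
        clipped = np.clip(values, lower, upper)
        sensitivity = (upper - lower) / len(clipped)    # max change from any one person's record
        return clipped.mean() + rng.laplace(scale=sensitivity / epsilon)

    rng = np.random.default_rng(1)
    incomes = rng.normal(50_000, 10_000, size=1_000)    # hypothetical sensitive data
    print(dp_mean(incomes, 0, 200_000, epsilon=0.5, rng=rng))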

Paper
Why Propensity Scores Should Not Be Used for Matching
Gary King and Richard Nielsen. Working Paper. “Why Propensity Scores Should Not Be Used for Matching”.

We show that propensity score matching (PSM), an enormously popular method of preprocessing data for causal inference, often accomplishes the opposite of its intended goal -- increasing imbalance, inefficiency, model dependence, and bias. PSM supposedly makes it easier to find matches by projecting a large number of covariates to a scalar propensity score and applying a single model to produce an unbiased estimate. However, in observational analysis the data generation process is rarely known and so users typically try many models before choosing one to present. The weakness of PSM comes from its attempts to approximate a completely randomized experiment, rather than, as with other matching methods, a more efficient fully blocked randomized experiment. PSM is thus uniquely blind to the often large portion of imbalance that can be eliminated by approximating full blocking with other matching methods. Moreover, in data balanced enough to approximate complete randomization, either to begin with or after pruning some observations, PSM approximates random matching which, we show, increases imbalance even relative to the original data. Although these results suggest that researchers replace PSM with one of the other available methods when performing matching, propensity scores have many other productive uses.
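The contrast can be seen in a toy simulation; the following Python sketch (our own illustration, not the paper's analysis) compares covariate imbalance after one-to-one matching on an estimated propensity score versus one-to-one matching directly on standardized covariates.

    import numpy as np
    from scipy.spatial.distance import cdist
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 4))
    treat = rng.binomial(1, 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1]))))
    Xt, Xc, sd = X[treat == 1], X[treat == 0], X.std(axis=0)

    def imbalance(a, b):
        """Mean absolute standardized difference in covariate means."""
        return np.abs((a.mean(axis=0) - b.mean(axis=0)) / sd).mean()

    def matched_controls(dist):
        """Greedy 1:1 nearest-neighbor matching (with replacement, for simplicity)."""
        return Xc[dist.argmin(axis=1)]

    pscore = LogisticRegression().fit(X, treat).predict_proba(X)[:, 1]
    ps_dist = cdist(pscore[treat == 1].reshape(-1, 1), pscore[treat == 0].reshape(-1, 1))
    cov_dist = cdist(Xt / sd, Xc / sd)

    print("raw imbalance:                ", imbalance(Xt, Xc))
    print("after propensity matching:    ", imbalance(Xt, matched_controls(ps_dist)))
    print("after matching on covariates: ", imbalance(Xt, matched_controls(cov_dist)))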

Paper Supplementary Appendix
In Press
The Balance-Sample Size Frontier in Matching Methods for Causal Inference
Gary King, Christopher Lucas, and Richard Nielsen. In Press. “The Balance-Sample Size Frontier in Matching Methods for Causal Inference.” American Journal of Political Science.

We propose a simplified approach to matching for causal inference that simultaneously optimizes balance (similarity between the treated and control groups) and matched sample size. Existing approaches either fix the matched sample size and maximize balance or fix balance and maximize sample size, leaving analysts to settle for suboptimal solutions or attempt manual optimization by iteratively tweaking their matching method and rechecking balance. To jointly maximize balance and sample size, we introduce the matching frontier, the set of matching solutions with maximum possible balance for each sample size. Rather than iterating, researchers can choose matching solutions from the frontier for analysis in one step. We derive fast algorithms that calculate the matching frontier for several commonly used balance metrics. We demonstrate with analyses of the effect of sex on judging and of job training programs, which show how the methods we introduce can extract new knowledge from existing data sets.

Easy-to-use, open source software is available here to implement all methods in the paper.
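For intuition about what a balance-sample size frontier looks like, here is a brute-force greedy sketch in Python on toy data; the paper instead derives fast algorithms that compute the exact frontier for specific balance metrics, so treat this only as an illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    Xt = rng.normal(0.3, 1, size=(50, 2))      # treated covariates
    Xc = rng.normal(0.0, 1, size=(150, 2))     # control covariates

    def imbalance(Xt, Xc):
        return np.abs(Xt.mean(axis=0) - Xc.mean(axis=0)).sum()   # L1 mean difference

    frontier = [(len(Xc), imbalance(Xt, Xc))]
    keep = list(range(len(Xc)))
    while len(keep) > 1:
        # greedily drop the control unit whose removal most improves balance
        scores = [imbalance(Xt, Xc[[j for j in keep if j != i]]) for i in keep]
        keep.remove(keep[int(np.argmin(scores))])
        frontier.append((len(keep), min(scores)))

    for n, b in frontier[::25]:
        print(f"controls kept = {n:3d}   imbalance = {b:.3f}")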

Proofs Supplementary Appendix
Computer-Assisted Keyword and Document Set Discovery from Unstructured Text
Gary King, Patrick Lam, and Margaret Roberts. In Press. “Computer-Assisted Keyword and Document Set Discovery from Unstructured Text.” American Journal of Political Science.

The (unheralded) first step in many applications of automated text analysis involves selecting keywords to choose documents from a large text corpus for further study. Although all substantive results depend on this choice, researchers usually pick keywords in ad hoc ways that are far from optimal and usually biased. Paradoxically, this often means that the validity of the most sophisticated text analysis methods depends in practice on the inadequate keyword counting or matching methods they are designed to replace. Improved methods of keyword selection would also be valuable in many other areas, such as following conversations that rapidly innovate language to evade authorities, seek political advantage, or express creativity; generic web searching; eDiscovery; look-alike modeling; intelligence analysis; and sentiment and topic analysis. We develop a computer-assisted (as opposed to fully automated) statistical approach that suggests keywords from available text without needing structured data as inputs. This framing poses the statistical problem in a new way, which leads to a widely applicable algorithm. Our specific approach is based on training classifiers, extracting information from (rather than correcting) their mistakes, and summarizing results with Boolean search strings. We illustrate how the technique works with analyses of English texts about the Boston Marathon Bombings and Chinese social media posts designed to evade censorship, among others.
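To make the computer-assisted workflow concrete, here is a much-simplified Python sketch: it trains a classifier on a tiny reference set, applies it to an unlabeled search set, and ranks words that separate the predicted-relevant documents. The actual algorithm mines information from classifier mistakes and returns Boolean search strings; everything below (documents, classifier, ranking rule) is an illustrative assumption.

    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    reference_docs = ["marathon bombing suspect arrested",
                      "explosion near the marathon finish line"]
    background_docs = ["stock market rallies on earnings", "local team wins big game"]
    search_docs = ["police investigate marathon blast", "city council votes on budget"]

    vec = CountVectorizer()
    X = vec.fit_transform(reference_docs + background_docs)
    clf = MultinomialNB().fit(X, [1, 1, 0, 0])          # 1 = on-topic reference documents

    S = vec.transform(search_docs).toarray()
    pred = clf.predict(S)
    on_topic = S[pred == 1].sum(axis=0) + 1             # word counts with smoothing
    off_topic = S[pred == 0].sum(axis=0) + 1
    ratio = on_topic / off_topic                        # words that separate the two sets
    words = np.array(vec.get_feature_names_out())
    print(words[np.argsort(ratio)[::-1][:5]])           # candidate keywords to suggest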

Article
Forthcoming
booc.io: An Education System with Hierarchical Concept Maps
Michail Schwab, Hendrik Strobelt, James Tompkin, Colin Fredericks, Connor Huff, Dana Higgins, Anton Strezhnev, Mayya Komisarchik, Gary King, and Hanspeter Pfister. Forthcoming. “booc.io: An Education System with Hierarchical Concept Maps.” IEEE Transactions on Visualization and Computer Graphics.

Information hierarchies are difficult to express when real-world space or time constraints force traversing the hierarchy in linear presentations, such as in educational books and classroom courses. We present booc.io, which allows linear and non-linear presentation and navigation of educational concepts and material. To support a breadth of material for each concept, booc.io is Web-based, which allows adding material such as lecture slides, book chapters, videos, and LTIs. A visual interface assists the creation of the needed hierarchical structures. The goals of our system were formed in expert interviews, and we explain how our design meets these goals. We adapt a real-world course into booc.io, and perform an introductory qualitative evaluation with students.

Edited transcript of a talk on Partisan Symmetry at the 'Redistricting and Representation Forum'
Gary King. Forthcoming. “Edited transcript of a talk on Partisan Symmetry at the 'Redistricting and Representation Forum'.” Bulletin of the American Academy of Arts and Sciences, Winter, Pp. 55-58.

The origin, meaning, estimation, and application of the concept of partisan symmetry in legislative redistricting, and the justiciability of partisan gerrymandering. An edited transcript of a talk at the “Redistricting and Representation Forum,” American Academy of Arts & Sciences, Cambridge, MA 11/8/2017.

Here also is a video of the original talk.

Article
2017
How to conquer partisan gerrymandering
Gary King and Robert X. Browning. 12/26/2017. “How to conquer partisan gerrymandering.” Boston Globe (Op-Ed), 292, 179, Pp. A10. Publisher's Version
PARTISAN GERRYMANDERING has long been reviled for thwarting the will of the voters. Yet while voters are acting disgusted, the US Supreme Court has only discussed acting — declaring they have the constitutional right to fix the problem, but doing nothing. But as better data and computer algorithms are now making gerrymandering increasingly effective, continuing to sidestep the issue could do permanent damage to American democracy. In Gill v. Whitford, the soon-to-be-decided challenge to Wisconsin’s 2011 state Assembly redistricting plan, the court could finally fix the problem for the whole country. Judging from the oral arguments, the key to the case is whether the court endorses the concept of “partisan symmetry,” a specific standard for treating political parties equally in allocating legislative seats based on voting.
Article
How the news media activate public expression and influence national agendas
Gary King, Benjamin Schneer, and Ariel White. 11/10/2017. “How the news media activate public expression and influence national agendas.” Science, 358, Pp. 776-780. Publisher's Version

We demonstrate that exposure to the news media causes Americans to take public stands on specific issues, join national policy conversations, and express themselves publicly—all key components of democratic politics—more often than they would otherwise. After recruiting 48 mostly small media outlets, we chose groups of these outlets to write and publish articles on subjects we approved, on dates we randomly assigned. We estimated the causal effect on proximal measures, such as website pageviews and Twitter discussion of the articles’ specific subjects, and distal ones, such as national Twitter conversation in broad policy areas. Our intervention increased discussion in each broad policy area by approximately 62.7% (relative to a day’s volume), accounting for 13,166 additional posts over the treatment week, with similar effects across population subgroups.

On the Science website: Abstract, Reprint, Full text, and a comment (by Matthew Gentzkow), “Small media, big impact”.

Article Supplementary Appendix
Heather K. Gerken, Jonathan N. Katz, Gary King, Larry J. Sabato, and Samuel S.-H. Wang. 2017. “Brief of Heather K. Gerken, Jonathan N. Katz, Gary King, Larry J. Sabato, and Samuel S.-H. Wang as Amici Curiae in Support of Appellees.” Filed with the Supreme Court of the United States in Beverly R. Gill et al. v. William Whitford et al., 16-1161.
SUMMARY OF ARGUMENT
Plaintiffs ask this Court to do what it has done many times before. For generations, it has resolved cases involving elections and cases on which elections ride. It has adjudicated controversies that divide the American people and those, like this one, where Americans are largely in agreement. In doing so, the Court has sensibly adhered to its long-standing and circumspect approach: it has announced a workable principle, one that lends itself to a manageable test, while allowing the lower courts to work out the precise contours of that test with time and experience.

Partisan symmetry, the principle put forward by the plaintiffs, is just such a workable principle. The standard is highly intuitive, deeply rooted in history, and accepted by virtually all social scientists. Tests for partisan symmetry are reliable, transparent, and easy to calculate without undue reliance on experts or unnecessary judicial intrusion on state redistricting judgments. Under any of these tests, Wisconsin’s districts cannot withstand constitutional scrutiny.
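As one concrete example of how simple such calculations can be, the following Python sketch computes a basic partisan-bias measure using uniform partisan swing from district-level vote shares; it is offered for intuition only, not as the brief's proposed test or any expert's implementation.

    import numpy as np

    def partisan_bias(vote_share):
        """vote_share: one party's share of the two-party vote in each district."""
        v = np.asarray(vote_share, dtype=float)
        swing = 0.5 - v.mean()                   # shift every district uniformly to a tied statewide vote
        seats_at_50 = np.mean(v + swing > 0.5)   # that party's seat share at 50% of the vote
        return seats_at_50 - 0.5                 # 0 under perfect partisan symmetry

    # Hypothetical plan: one party packed into a few overwhelming districts
    print(partisan_bias([0.85, 0.84, 0.45, 0.44, 0.43, 0.42, 0.41, 0.40]))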
Amici Brief
How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, not Engaged Argument
Gary King, Jennifer Pan, and Margaret E. Roberts. 2017. “How the Chinese Government Fabricates Social Media Posts for Strategic Distraction, not Engaged Argument.” American Political Science Review, 111, 3, Pp. 484-501. Publisher's Version

The Chinese government has long been suspected of hiring as many as 2,000,000 people to surreptitiously insert huge numbers of pseudonymous and other deceptive writings into the stream of real social media posts, as if they were the genuine opinions of ordinary people. Many academics, and most journalists and activists, claim that these so-called “50c party” posts vociferously argue for the government's side in political and policy debates. As we show, this is also true of the vast majority of posts openly accused on social media of being 50c. Yet, almost no systematic empirical evidence exists for this claim, or, more importantly, for the Chinese regime's strategic objective in pursuing this activity. In the first large scale empirical analysis of this operation, we show how to identify the secretive authors of these posts, the posts written by them, and their content. We estimate that the government fabricates and posts about 448 million social media comments a year. In contrast to prior claims, we show that the Chinese regime's strategy is to avoid arguing with skeptics of the party and the government, and to not even discuss controversial issues. We show that the goal of this massive secretive operation is instead to distract the public and change the subject, as most of these posts involve cheerleading for China, the revolutionary history of the Communist Party, or other symbols of the regime. We discuss how these results fit with what is known about the Chinese censorship program, and suggest how they may change our broader theoretical understanding of “common knowledge” and information control in authoritarian regimes.

This paper is related to our articles in Science, “Reverse-Engineering Censorship In China: Randomized Experimentation And Participant Observation”, and the American Political Science Review, “How Censorship In China Allows Government Criticism But Silences Collective Expression”.

Article Supplementary Appendix
A Theory of Statistical Inference for Matching Methods in Causal Research
Stefano M. Iacus, Gary King, and Giuseppe Porro. 2017. “A Theory of Statistical Inference for Matching Methods in Causal Research”.

Researchers who generate data often optimize efficiency and robustness by choosing stratified over simple random sampling designs. Yet, all theories of inference proposed to justify matching methods are based on simple random sampling. This is all the more troubling because, although these theories require exact matching, most matching applications resort to some form of ex post stratification (on a propensity score, distance metric, or the covariates) to find approximate matches, thus nullifying the statistical properties the theories are designed to ensure. Fortunately, the type of sampling used in a theory of inference is an axiom, rather than an assumption vulnerable to being proven wrong, and so we can replace simple with stratified sampling, so long as we can show, as we do here, that the implications of the theory are coherent and remain true. We also show, under our resulting stratified sampling-based theory of inference, that matching in observational studies becomes intuitive and easy to understand. Properties of estimators based on this theory can be satisfied without asymptotics, assumptions hidden in data analysis rather than stated up front, or unfamiliar estimators. This theory also allows binary, multicategory, and continuous treatment variables from the outset and straightforward extensions for imperfect treatment assignment and different versions of treatments.
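The opening claim can be checked directly; the Python sketch below (a toy simulation of our own, unrelated to the paper's theory) shows that the mean from a stratified sample of a heterogeneous population is less variable than the mean from a simple random sample of the same total size.

    import numpy as np

    rng = np.random.default_rng(0)
    strata = [rng.normal(m, 1, size=10_000) for m in (0, 5, 10)]   # three equal-size strata
    population = np.concatenate(strata)

    def srs_mean():
        return rng.choice(population, size=300, replace=False).mean()

    def stratified_mean():
        return np.mean([rng.choice(s, size=100, replace=False).mean() for s in strata])

    print("SRS std. dev.:       ", np.std([srs_mean() for _ in range(2_000)]))
    print("Stratified std. dev.:", np.std([stratified_mean() for _ in range(2_000)]))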

Paper
A Unified Approach to Measurement Error and Missing Data: Details and Extensions
Matthew Blackwell, James Honaker, and Gary King. 2017. “A Unified Approach to Measurement Error and Missing Data: Details and Extensions.” Sociological Methods and Research, 46, 3, Pp. 342-369. Publisher's Version

We extend a unified and easy-to-use approach to measurement error and missing data. In our companion article, Blackwell, Honaker, and King give an intuitive overview of the new technique, along with practical suggestions and empirical applications. Here, we offer more precise technical details, more sophisticated measurement error model specifications and estimation procedures, and analyses to assess the approach’s robustness to correlated measurement errors and to errors in categorical variables. These results support using the technique to reduce bias and increase efficiency in a wide variety of empirical research.

Advanced access version
A Unified Approach to Measurement Error and Missing Data: Overview and Applications
Matthew Blackwell, James Honaker, and Gary King. 2017. “A Unified Approach to Measurement Error and Missing Data: Overview and Applications.” Sociological Methods and Research, 46, 3, Pp. 303-341. Publisher's Version

Although social scientists devote considerable effort to mitigating measurement error during data collection, they often ignore the issue during data analysis. And although many statistical methods have been proposed for reducing measurement error-induced biases, few have been widely used because of implausible assumptions, high levels of model dependence, difficult computation, or inapplicability with multiple mismeasured variables. We develop an easy-to-use alternative without these problems; it generalizes the popular multiple imputation (MI) framework by treating missing data problems as a limiting special case of extreme measurement error, and corrects for both. Like MI, the proposed framework is a simple two-step procedure, so that in the second step researchers can use whatever statistical method they would have if there had been no problem in the first place. We also offer empirical illustrations, open source software that implements all the methods described herein, and a companion paper with technical details and extensions (Blackwell, Honaker, and King, 2017b).
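To fix ideas about the two-step structure, here is a deliberately minimal Python sketch of ordinary multiple imputation for a missing covariate, the special case the framework generalizes: impute each incomplete cell from the other observed variables, run the intended analysis on every completed dataset, and combine the results. It is an illustration under strong assumptions (and it skips the parameter-uncertainty draws of a proper implementation), not the authors' estimator or software.

    import numpy as np

    rng = np.random.default_rng(0)
    n, m = 1000, 20
    x = rng.normal(size=n)
    y = 2.0 * x + rng.normal(size=n)
    missing = rng.random(n) < 0.3              # 30% of x is missing at random
    x_obs = np.where(missing, np.nan, x)

    # Step 1: imputation model for x given y, fit on the complete cases
    cc = ~missing
    b1, b0 = np.polyfit(y[cc], x_obs[cc], 1)
    resid_sd = np.std(x_obs[cc] - (b0 + b1 * y[cc]))

    slopes = []
    for _ in range(m):
        x_imp = x_obs.copy()
        x_imp[missing] = b0 + b1 * y[missing] + rng.normal(scale=resid_sd, size=missing.sum())
        # Step 2: run the intended analysis on each completed dataset
        slopes.append(np.polyfit(x_imp, y, 1)[0])

    print("listwise-deletion slope:", np.polyfit(x_obs[cc], y[cc], 1)[0])
    print("combined MI slope:      ", np.mean(slopes))   # Rubin's-rules point estimate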

Article
2016
Method and Apparatus for Selecting Clusterings to Classify a Data Set
Gary King and Justin Grimmer. 12/13/2016. “Method and Apparatus for Selecting Clusterings to Classify a Data Set.” United States of America 9,519,705 B2 (Patent and Trademark Office).

In a computer assisted clustering method, a clustering space is generated from fixed basis partitions that embed the entire space of all possible clusterings. A lower dimensional clustering space is created from the space of all possible clusterings by isometrically embedding the space of all possible clusterings in a lower dimensional Euclidean space. This lower dimensional space is then sampled based on the number of documents in the corpus. Partitions are then developed based on the samples that tessellate the space. Finally, using clusterings representative of these tessellations, a two-dimensional representation for users to explore is created.
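The flavor of the underlying idea, though not the patented method itself (no fixed basis partitions, isometric embedding, or tessellation here), can be conveyed by a short Python sketch: cluster the same documents many different ways, measure how far apart each pair of clusterings is, and embed the clusterings in two dimensions for a user to explore.

    import numpy as np
    from sklearn.cluster import KMeans, AgglomerativeClustering
    from sklearn.metrics import adjusted_rand_score
    from sklearn.manifold import MDS

    rng = np.random.default_rng(0)
    docs = rng.normal(size=(100, 20))            # stand-in for document feature vectors

    clusterings = []
    for k in range(2, 8):
        clusterings.append(KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(docs))
        clusterings.append(AgglomerativeClustering(n_clusters=k).fit_predict(docs))

    # distance between two clusterings: 1 minus the adjusted Rand index
    n = len(clusterings)
    D = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            D[i, j] = 1 - adjusted_rand_score(clusterings[i], clusterings[j])

    coords = MDS(n_components=2, dissimilarity="precomputed", random_state=0).fit_transform(D)
    print(coords[:5])                            # 2-D positions a user could browse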

Patent
Cross-Classroom and Cross-Institution Item Validation
Gary King, Brian Lukoff, and Eric Mazur. 11/29/2016. “Cross-Classroom and Cross-Institution Item Validation.” United States of America 9,508,266 (US Patent and Trademark Office).

Anonymous pretesting items for subsequent presentation to participants in a group enable an instructor to validate responses and revise the items accordingly. ... The present invention facilitates anonymous pretesting of items in classrooms (and/or other similar settings) to which the item author has no direct access or knowledge. In some embodiments, pretesting is performed by software used by the instructor/author in his or her own classroom for other tasks. In various implementations, the software shares information with a central clearinghouse anonymously. The central clearinghouse then automatically matches students in the instructor's class with "relevant" students from other classes -- e.g., students that a statistical algorithm predicts will have approximately the same understanding, and will give approximately the same answers, as the instructor's class. ...
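A stripped-down Python sketch of the matching step, under assumptions of our own (students summarized by right/wrong records on shared past items, similarity measured by Hamming distance), is below; the system's actual statistical algorithm and anonymous data sharing are more involved.

    import numpy as np

    rng = np.random.default_rng(0)
    other_students = rng.integers(0, 2, size=(500, 12))   # anonymized records on 12 shared items
    my_students = rng.integers(0, 2, size=(30, 12))       # this instructor's class

    def nearest_matches(target, pool, k=5):
        """Indices of the k pool students with the most similar response records."""
        dist = np.abs(pool - target).sum(axis=1)           # Hamming distance on shared items
        return np.argsort(dist)[:k]

    matches = [nearest_matches(s, other_students) for s in my_students]
    print(matches[0])    # other-class students predicted to answer most like student 0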

Patent
Systems and methods for calculating category proportions
Aykut Firat, Mitchell Brooks, Christopher Bingham, Amac Herdagdelen, and Gary King. 11/1/2016. “Systems and methods for calculating category proportions.” United States of America 9,483,544 (U.S. Patent and Trademark Office).

Systems and methods are provided for classifying text based on language using one or more computer servers and storage devices. A computer-implemented method includes receiving a training set of elements, each element in the training set being assigned to one of a plurality of categories and having one of a plurality of content profiles associated therewith; receiving a population set of elements, each element in the population set having one of the plurality of content profiles associated therewith; and calculating using at least one of a stacked regression algorithm, a bias formula algorithm, a noise elimination algorithm, and an ensemble method consisting of a plurality of algorithmic methods the results of which are averaged, based on the content profiles associated with and the categories assigned to elements in the training set and the content profiles associated with the elements of the population set, a distribution of elements of the population set over the categories.
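The ensemble step in the final clause can be sketched in a few lines of Python; the individual methods below are stand-ins for the patented algorithms, included only to show how their estimated category distributions would be averaged and renormalized.

    import numpy as np

    def method_a(train, population): return np.array([0.62, 0.38])   # stand-in estimator
    def method_b(train, population): return np.array([0.58, 0.42])   # stand-in estimator
    def method_c(train, population): return np.array([0.66, 0.34])   # stand-in estimator

    def ensemble_proportions(train, population, methods):
        estimates = np.vstack([m(train, population) for m in methods])
        avg = estimates.mean(axis=0)
        return avg / avg.sum()               # keep the result a proper distribution

    print(ensemble_proportions(None, None, [method_a, method_b, method_c]))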

Patent
