
Statistically Valid Inferences from Differentially Private Data Releases, with Application to the Facebook URLs Dataset
Georgina Evans and Gary King. 2023. “Statistically Valid Inferences from Differentially Private Data Releases, with Application to the Facebook URLs Dataset.” Political Analysis, 31, 1, Pp. 1-21. Publisher's Version.

We offer methods to analyze the "differentially private" Facebook URLs Dataset, which, at over 40 trillion cell values, is one of the largest social science research datasets ever constructed. The version of differential privacy used in the URLs dataset has specially calibrated random noise added, which provides mathematical guarantees for the privacy of individual research subjects while still making it possible to learn about aggregate patterns of interest to social scientists. Unfortunately, random noise creates measurement error, which induces statistical bias -- including attenuation, exaggeration, switched signs, or incorrect uncertainty estimates. We adapt methods developed to correct for naturally occurring measurement error, with special attention to computational efficiency for large datasets. The result is statistically valid linear regression estimates and descriptive statistics that can be interpreted as ordinary analyses of non-confidential data but with appropriately larger standard errors.

We have implemented these methods in open source software for R called PrivacyUnbiased.  Facebook has ported PrivacyUnbiased to open source Python code called svinfer.  We have extended these results in Evans and King (2021).
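
To illustrate the correction described above, here is a minimal Python sketch of moment-based measurement-error correction for regression on a noise-infused data release. It is illustrative only: it does not use the PrivacyUnbiased or svinfer APIs, and the simulated data and noise scale are assumptions.

```python
# A minimal sketch (not the PrivacyUnbiased/svinfer API): moment-based
# measurement-error correction for regression on a noise-infused release,
# assuming mean-zero noise with known variance was added to each column of X.
import numpy as np

rng = np.random.default_rng(0)

# Simulate confidential covariates X and a privacy-protected release Z = X + noise.
n, true_beta = 100_000, np.array([2.0, -1.0])
X = rng.normal(size=(n, 2))
y = X @ true_beta + rng.normal(size=n)
noise_var = np.array([0.5, 0.5])            # known from the privacy mechanism
Z = X + rng.normal(scale=np.sqrt(noise_var), size=(n, 2))

# Naive OLS on the noisy release is attenuated toward zero.
beta_naive = np.linalg.solve(Z.T @ Z, Z.T @ y)

# Correction: E[Z'Z] = X'X + n * diag(noise_var), so subtracting the known
# noise contribution gives a consistent estimate of the regression coefficients.
beta_corrected = np.linalg.solve(Z.T @ Z - n * np.diag(noise_var), Z.T @ y)

print("naive:", beta_naive)
print("corrected:", beta_corrected)
```

This sketch shows only the point-estimate correction; the paper also derives the appropriately larger standard errors.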

An Improved Method of Automated Nonparametric Content Analysis for Social Science
Connor T. Jerzak, Gary King, and Anton Strezhnev. 2022. “An Improved Method of Automated Nonparametric Content Analysis for Social Science.” Political Analysis, 31, Pp. 42-58.

Some scholars build models to classify documents into chosen categories. Others, especially social scientists who tend to focus on population characteristics, instead usually estimate the proportion of documents in each category -- using either parametric "classify-and-count" methods or "direct" nonparametric estimation of proportions without individual classification. Unfortunately, classify-and-count methods can be highly model dependent or generate more bias in the proportions even as the percent of documents correctly classified increases. Direct estimation avoids these problems, but can suffer when the meaning of language changes between training and test sets or is too similar across categories. We develop an improved direct estimation approach without these issues by including and optimizing continuous text features, along with a form of matching adapted from the causal inference literature. Our approach substantially improves performance in a diverse collection of 73 data sets. We also offer easy-to-use software that implements all ideas discussed herein.
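
As a rough illustration of the "direct" estimation idea (the baseline the paper improves on, not the improved method itself), the Python sketch below recovers category proportions in an unlabeled corpus from labeled-set feature profiles, with no individual classification; the binary features and simulated data are illustrative assumptions.

```python
# Toy sketch of "direct" proportion estimation: solve for category proportions
# that make labeled-set feature profiles reproduce the unlabeled-set profile.
import numpy as np
from scipy.optimize import nnls

def estimate_proportions(X_labeled, y_labeled, X_unlabeled):
    """Solve mean(X_unlabeled) ~= F @ pi, where column c of F is the mean
    feature profile of labeled documents in category c."""
    categories = np.unique(y_labeled)
    F = np.column_stack([X_labeled[y_labeled == c].mean(axis=0) for c in categories])
    pi, _ = nnls(F, X_unlabeled.mean(axis=0))    # non-negative least squares
    return dict(zip(categories, pi / pi.sum()))  # renormalize to sum to one

rng = np.random.default_rng(1)
X_lab = rng.binomial(1, [[0.8, 0.2, 0.5]] * 200 + [[0.2, 0.7, 0.5]] * 200)
y_lab = np.array([0] * 200 + [1] * 200)
X_unl = rng.binomial(1, [[0.8, 0.2, 0.5]] * 300 + [[0.2, 0.7, 0.5]] * 700)
print(estimate_proportions(X_lab, y_lab, X_unl))   # roughly {0: 0.3, 1: 0.7}
```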

Brief of Empirical Scholars as Amici Curiae in Support of Respondents
Ian Ayres, Richard A. Berk, Richard R.W. Brooks, Daniel E. Ho, Gary King, Kevin Quinn, Donald B. Rubin, and Sherod Thaxton. 2022. “Brief of Empirical Scholars as Amici Curiae in Support of Respondents.” Filed with the Supreme Court of the United States in Students for Fair Admissions v. President and Fellows of Harvard College.
Amici curiae are leaders in the field of quantitative social science and statistical methodology. Amici submit this brief to point out the substantial methodological flaws in the “mismatch” research discussed in the Brief for Richard Sander as Amicus Curiae in Support of Petitioner. Professor Sander’s mismatch hypothesis is unsupported and based on work that fails to adhere to basic tenets of research design.
Rejoinder: Concluding Remarks on Scholarly Communications
Jonathan Katz, Gary King, and Elizabeth Rosenblatt. 2022. “Rejoinder: Concluding Remarks on Scholarly Communications.” Political Analysis.

We are grateful to DeFord et al. for the continued attention to our work and the crucial issues of fair representation in democratic electoral systems. Our response (Katz, King, and Rosenblatt, forthcoming) was designed to help readers avoid being misled by mistaken claims in DeFord et al. (forthcoming-a), and does not address other literature or uses of our prior work. As it happens, none of our corrections were addressed (or contradicted) in the most recent submission (DeFord et al., forthcoming-b).

We also offer a recommendation regarding DeFord et al.’s (forthcoming-b) concern with how expert witnesses, consultants, and commentators should present academic scholarship to academic novices, such as judges, public officials, the media, and the general public. In these public service roles, scholars attempt to translate academic understanding of sophisticated scholarly literatures, technical methodologies, and complex theories for those without sufficient background in social science or statistics.
 

A Theory of Statistical Inference for Ensuring the Robustness of Scientific Results
Beau Coker, Cynthia Rudin, and Gary King. 2021. “A Theory of Statistical Inference for Ensuring the Robustness of Scientific Results.” Management Science, Pp. 1-24. Publisher's Version.
Inference is the process of using facts we know to learn about facts we do not know. A theory of inference gives assumptions necessary to get from the former to the latter, along with a definition for and summary of the resulting uncertainty. Any one theory of inference is neither right nor wrong, but merely an axiom that may or may not be useful. Each of the many diverse theories of inference can be valuable for certain applications. However, no existing theory of inference addresses the tendency to choose, from the range of plausible data analysis specifications consistent with prior evidence, those that inadvertently favor one's own hypotheses. Since the biases from these choices are a growing concern across scientific fields, and in a sense the reason the scientific community was invented in the first place, we introduce a new theory of inference designed to address this critical problem. We derive "hacking intervals," which are the range of a summary statistic one may obtain given a class of possible endogenous manipulations of the data. Hacking intervals require no appeal to hypothetical data sets drawn from imaginary superpopulations. A scientific result with a small hacking interval is more robust to researcher manipulation than one with a larger interval, and is often easier to interpret than a classical confidence interval. Some versions of hacking intervals turn out to be equivalent to classical confidence intervals, which means they may also provide a more intuitive and potentially more useful interpretation of classical confidence intervals. 
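
One deliberately narrow instance of a hacking interval is the range a regression coefficient can take across all choices of which optional control variables to include. The Python sketch below computes that range by brute force; the data and the manipulation class are illustrative assumptions, and the paper treats far more general classes and their efficient computation.

```python
# Narrow sketch of a hacking interval: the range of a treatment coefficient
# over one class of analysis choices (which optional controls to include).
import itertools
import numpy as np

def treatment_coef(y, treatment, control_cols):
    """OLS coefficient on the treatment, with an intercept and the given controls."""
    X = np.column_stack([np.ones(len(y)), treatment] + control_cols)
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def hacking_interval(y, treatment, controls):
    """Min and max treatment coefficient over every subset of optional controls."""
    estimates = [treatment_coef(y, treatment, list(subset))
                 for k in range(len(controls) + 1)
                 for subset in itertools.combinations(controls, k)]
    return min(estimates), max(estimates)

# Toy data in which one control is a confounder, so the specification choice matters.
rng = np.random.default_rng(2)
n = 500
controls = [rng.normal(size=n) for _ in range(4)]
treatment = rng.normal(size=n) + 0.5 * controls[0]
y = 1.0 * treatment + 0.8 * controls[0] + rng.normal(size=n)
print(hacking_interval(y, treatment, controls))   # roughly (1.0, 1.3)
```
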
Cluster Analysis of Participant Responses for Test Generation or Teaching (2nd)
Gary King, Brian Lukoff, and Eric Mazur. 2/16/2021. “Cluster Analysis of Participant Responses for Test Generation or Teaching (2nd).” United States of America US 10,922,991 B2 (U.S. Patent and Trademark Office).
Textual responses to open-ended (i.e., free-response) items provided by participants (e.g., by means of mobile wireless devices) are automatically classified, enabling an instructor to assess the responses in a convenient, organized fashion and adjust instruction accordingly.
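
As a generic stand-in for the kind of grouping the patent describes (not the patented method itself), the sketch below clusters free-text responses with TF-IDF features and k-means in Python; the example responses and cluster count are illustrative assumptions.

```python
# Generic illustration: group free-text responses by theme so an instructor can
# review them in an organized way. TF-IDF + k-means is a stand-in technique.
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

responses = [
    "the ball falls because gravity pulls it down",
    "gravity accelerates the ball toward the earth",
    "air resistance slows the ball as it falls",
    "friction with the air reduces the speed",
]

features = TfidfVectorizer(stop_words="english").fit_transform(responses)
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)

for label, text in sorted(zip(labels, responses)):
    print(label, text)
```
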
Designing Social Inquiry: Scientific Inference in Qualitative Research, New Edition
Gary King, Robert O. Keohane, and Sidney Verba. 2021. Designing Social Inquiry: Scientific Inference in Qualitative Research, New Edition. 2nd ed. Princeton: Princeton University Press. Publisher's Version.
"The classic work on qualitative methods in political science"

Designing Social Inquiry presents a unified approach to qualitative and quantitative research in political science, showing how the same logic of inference underlies both. This stimulating book discusses issues related to framing research questions, measuring the accuracy of data and the uncertainty of empirical inferences, discovering causal effects, and getting the most out of qualitative research. It addresses topics such as interpretation and inference, comparative case studies, constructing causal theories, dependent and explanatory variables, the limits of random selection, selection bias, and errors in measurement. The book uses mathematical notation only to clarify concepts and assumes no prior knowledge of mathematics or statistics.

Featuring a new preface by Robert O. Keohane and Gary King, this edition makes an influential work available to new generations of qualitative researchers in the social sciences.
Education and Scholarship by Video
Gary King. 2021. “Education and Scholarship by Video.” [Direct link to paper]

When word processors were first introduced into the workplace, they turned scholars into typists. But they also improved our work: Turnaround time for new drafts dropped from days to seconds. Rewriting became easier and more common, and our papers, educational efforts, and research output improved. I discuss the advantages of and mechanisms for doing the same with do-it-yourself video recordings of research talks and class lectures, so that they may become a fully respected channel for scholarly output and education, alongside books and articles. I consider innovations in video design to optimize education and communication, along with technology to make this change possible.

Excerpts of this paper appeared in Political Science Today (Vol. 1, No. 3, August 2021: Pp. 5-6, copy here) and in APSA Educate. See also my recorded videos here.

How to Measure Legislative District Compactness If You Only Know it When You See It
Aaron Kaufman, Gary King, and Mayya Komisarchik. 2021. “How to Measure Legislative District Compactness If You Only Know it When You See It.” American Journal of Political Science, 65, 3, Pp. 533-550. Publisher's Version.

To deter gerrymandering, many state constitutions require legislative districts to be "compact." Yet, the law offers few precise definitions other than "you know it when you see it," which effectively implies a common understanding of the concept. In contrast, academics have shown that compactness has multiple dimensions and have generated many conflicting measures. We hypothesize that both are correct -- that compactness is complex and multidimensional, but a common understanding exists across people. We develop a survey to elicit this understanding, with high reliability (in data where the standard paired comparisons approach fails). We create a statistical model that predicts, with high accuracy and solely from the geometric features of a district, the compactness evaluations of judges, public officials responsible for redistricting, and others. We also offer compactness data from our validated measure for 20,160 state legislative and congressional districts, as well as open source software to compute this measure from any district.
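
For a sense of what "geometric features of the district" means in practice, here is a minimal Python sketch that computes one classic feature, the Polsby-Popper isoperimetric ratio, from a district's polygon coordinates. The paper's validated measure combines many such features with survey data; the toy polygons below are assumptions.

```python
# One classic compactness feature computed from polygon coordinates.
import math
import numpy as np

def area_and_perimeter(coords):
    """Shoelace area and perimeter of a closed polygon given as (x, y) pairs."""
    x, y = np.asarray(coords, dtype=float).T
    area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
    perimeter = np.hypot(np.diff(np.append(x, x[0])), np.diff(np.append(y, y[0]))).sum()
    return area, perimeter

def polsby_popper(coords):
    """Isoperimetric compactness in (0, 1]; a circle scores 1."""
    area, perimeter = area_and_perimeter(coords)
    return 4 * math.pi * area / perimeter ** 2

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sliver = [(0, 0), (10, 0), (10, 0.1), (0, 0.1)]
print(polsby_popper(square), polsby_popper(sliver))   # ~0.79 vs ~0.03
```

A matrix of such features per district, paired with survey-based evaluations, can then be fed to any standard supervised model.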

Winner of the 2018 Robert H. Durr Award from the MPSA.

Letter to US Census Bureau: "Request for release of “noisy measurements file” by September 30 along with redistricting data products"
Cynthia Dwork, Ruth Greenwood, and Gary King. 8/12/2021. “Letter to US Census Bureau: 'Request for release of “noisy measurements file” by September 30 along with redistricting data products'.”
A letter, submitted on behalf of a large group of expert signatories, to request the release of the “noisy measurements file” and other redistricting data by September 30, 2021. This includes the data created by the Bureau in preparing its differentially private data release, without the Bureau's unnecessary (and, in many important situations, information-destroying) post-processing.
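
The statistical point behind the request can be seen in a tiny Python sketch: raw noisy measurements carry known, mean-zero error that analysts can model, whereas even a simple form of post-processing (clamping negative counts to zero, used here only as a stand-in for the Bureau's actual, more complex procedure) introduces bias that cannot be removed afterward. All numbers below are illustrative assumptions.

```python
# Illustrative contrast between raw noisy measurements and post-processed counts.
import numpy as np

rng = np.random.default_rng(3)
true_count = 2                                         # a small block-level count
noisy = true_count + rng.normal(0, 5, size=100_000)    # DP-style additive noise
post_processed = np.clip(noisy, 0, None)               # stand-in post-processing

print("mean of noisy measurements:", noisy.mean())          # ~2, unbiased
print("mean after post-processing:", post_processed.mean()) # biased upward, ~3.2
```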