Publications by Year: Working Paper

Working Paper
Correcting Measurement Error Bias in Conjoint Survey Experiments
Katherine Clayton, Yusaku Horiuchi, Aaron R. Kaufman, Gary King, and Mayya Komisarchik. Working Paper. “Correcting Measurement Error Bias in Conjoint Survey Experiments”.
Abstract

Conjoint survey designs are spreading across the social sciences due to their unusual capacity to identify many causal effects from a single randomized experiment. Unfortunately, because the nature of conjoint designs violates aspects of best practices in questionnaire construction, they generate substantial measurement-error-induced bias, which can exaggerate, attenuate, or flip the signs of causal and descriptive estimates. By replicating both the data collection and analysis of eight prominent conjoint studies, all of which closely reproduce published results, we show that about half of all observed variation in this most common type of conjoint experiment is effectively random noise. We then discover a common empirical pattern in how measurement error appears in conjoint studies and use it to derive an easy-to-use statistical method that corrects the bias.
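As a hedged illustration of the attenuation the abstract describes (not the paper's correction method; the effect size, noise share, and sample size are invented for the example), a small simulation shows how purely random responses shrink an average marginal component effect (AMCE):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000          # hypothetical number of conjoint responses
p_noise = 0.5        # assumed share of answers that are pure noise

# True preference: the attribute raises the choice probability
# from 0.5 to 0.7, so the true AMCE is 0.2.
attr = rng.integers(0, 2, n)
true_choice = rng.random(n) < (0.5 + 0.2 * attr)

# Measurement error: with probability p_noise the recorded answer
# is an unrelated coin flip rather than the respondent's true choice.
coin = rng.random(n) < 0.5
observed = np.where(rng.random(n) < p_noise, coin, true_choice)

amce_true = true_choice[attr == 1].mean() - true_choice[attr == 0].mean()
amce_obs = observed[attr == 1].mean() - observed[attr == 0].mean()
print(amce_true, amce_obs)  # observed AMCE shrinks toward roughly half its true value
```

Purely random response error only attenuates in this sketch; as the abstract notes, in real designs measurement error can also exaggerate effects or flip their signs.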

Paper
Supplementary Appendix
If a Statistical Model Predicts That Common Events Should Occur Only Once in 10,000 Elections, Maybe it’s the Wrong Model
Danny Ebanks, Jonathan N. Katz, and Gary King. Working Paper. “If a Statistical Model Predicts That Common Events Should Occur Only Once in 10,000 Elections, Maybe it’s the Wrong Model”.
Abstract
Election surprises are hardly surprising. Unexpected challengers, deaths, retirements, scandals, campaign strategies, real-world events, and heresthetical maneuvers all conspire to confuse the best models. Quantitative researchers usually model district-level elections with linear functions of measured covariates, to account for systematic variation, and normal error terms, to account for surprises. However, although these models work well in many situations, they can be embarrassingly overconfident: Events that commonly used models indicate should occur once in 10,000 elections occur almost every year, and even those which the models indicate should occur once in a trillion-trillion elections are sometimes observed. We develop a new general-purpose statistical model of district-level legislative elections, validated with extensive out-of-sample (and distribution-free) tests. As an illustration, we use this model to generate the first ever correctly calibrated probabilities of incumbent losses in US Congressional elections, one of the most important quantities for evaluating the functioning of a representative democracy. Analyses lead to an optimistic conclusion about American democracy: Even when marginals vanish, incumbency advantage grows, and dramatic changes occur, the risk of an incumbent losing an election has been high and essentially constant from the 1950s until the present day.
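To see why normal error terms can be wildly overconfident about tail events, compare a four-standard-deviation vote swing under a normal model with the same swing under a heavier-tailed alternative; Student's t with 3 degrees of freedom is used here purely for illustration and is not the paper's model:

```python
from math import atan, erf, pi, sqrt

def normal_tail(z):
    """Two-sided tail P(|Z| > z) for a standard normal."""
    return 2.0 * (1.0 - 0.5 * (1.0 + erf(z / sqrt(2.0))))

def t3_tail(t):
    """Two-sided tail P(|T| > t) for Student's t with 3 df (closed-form CDF)."""
    x = t / sqrt(3.0)
    cdf = 0.5 + (x / (1.0 + x * x) + atan(x)) / pi
    return 2.0 * (1.0 - cdf)

# A 4-sigma surprise: roughly "once in 16,000" under the normal model,
# but closer to "once in 36" under the heavy-tailed alternative.
print(1 / normal_tail(4), 1 / t3_tail(4))
```

The same observed surprise that a normal model calls a once-in-many-millennia event is unremarkable under modestly heavier tails, which is the calibration failure the abstract documents.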
Paper
Statistically Valid Inferences from Differentially Private Data Releases, II: Extensions to Nonlinear Transformations
Georgina Evans and Gary King. Working Paper. “Statistically Valid Inferences from Differentially Private Data Releases, II: Extensions to Nonlinear Transformations”.
Abstract

We extend Evans and King (Forthcoming, 2021) to nonlinear transformations, using proportions and weighted averages as our running examples.
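A minimal sketch of the underlying problem, with all numbers invented (this is not the paper's estimator): when noise is added to the counts behind a proportion, the naive plug-in ratio is biased, because the expectation of a nonlinear transformation is not the transformation of the expectation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical privatized release: true counts plus Gaussian noise.
y_true, n_true = 50.0, 100.0   # true numerator and denominator
sigma = 10.0                    # assumed noise scale
reps = 200_000                  # Monte Carlo replications

y_noisy = y_true + rng.normal(0.0, sigma, reps)
n_noisy = n_true + rng.normal(0.0, sigma, reps)

# Naive plug-in: average the noisy proportions directly.
naive = np.mean(y_noisy / n_noisy)
print(naive)  # systematically above the true proportion of 0.5
```

The sketch only demonstrates why naive plug-in analysis of noisy releases fails for nonlinear quantities like proportions; the paper's contribution is the statistically valid correction.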

Paper