# Causal Inference

Methods for detecting and reducing model dependence (i.e., when minor model changes produce substantively different inferences) in inferring causal effects and other counterfactuals. Topics include matching methods; "politically robust" and cluster-randomized experimental designs; and causal bias decompositions.

## Methods for Observational Data

### Evaluating Model Dependence

Evaluating whether counterfactual questions (predictions, what-if questions, and causal effects) can reasonably be answered from the given data, or whether inferences will instead be highly model-dependent; also, a new decomposition of bias in causal inference. These articles overlap (and each has been the subject of a journal symposium):

For complete mathematical proofs, general notation, and other technical material, see: King, Gary, and Langche Zeng. 2006. “The Dangers of Extreme Counterfactuals.” Political Analysis, 14, Pp. 131–159.

For more intuitive (though less general) notation, additional examples, and more pedagogically oriented material, see: King, Gary, and Langche Zeng. 2007. “When Can History Be Our Guide? The Pitfalls of Counterfactual Inference.” International Studies Quarterly, Pp. 183–210.
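The core problem is easy to demonstrate: two models that fit the observed data almost identically can give sharply different answers to a counterfactual question far from the data. A minimal sketch with simulated data (illustrative only; not the method developed in the papers above):

```python
# Model dependence in miniature: a linear and a quadratic fit agree
# closely wherever data exist, but diverge at a distant counterfactual.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 50)               # covariate observed only on [0, 1]
y = 2 * x + rng.normal(0, 0.1, 50)      # outcome (truly linear here)

lin = np.polyfit(x, y, 1)               # degree-1 fit
quad = np.polyfit(x, y, 2)              # degree-2 fit

gap_near = abs(np.polyval(lin, 0.5) - np.polyval(quad, 0.5))  # inside the data
gap_far = abs(np.polyval(lin, 3.0) - np.polyval(quad, 3.0))   # extrapolation

# Inside the observed range the two fits are nearly indistinguishable;
# at the counterfactual x = 3, the choice of model drives the answer.
print(gap_near, gap_far)
```

Matching (next section) attacks the same problem from the data side, by discarding observations that would force this kind of extrapolation.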

### Matching Methods

Iacus, Stefano M., Gary King, and Giuseppe Porro. 2009. “CEM: Software for Coarsened Exact Matching.” Journal of Statistical Software, 30.

A technical paper describing a new class of matching methods, of which coarsened exact matching is an example: Iacus, Stefano M., Gary King, and Giuseppe Porro. 2011. “Multivariate Matching Methods That Are Monotonic Imbalance Bounding.” Journal of the American Statistical Association, 106, 493, Pp. 345–361.

A unified approach to matching methods as a way to reduce model dependence: preprocess the data via matching, then use whatever model you would have used without it: Ho, Daniel, Kosuke Imai, Gary King, and Elizabeth Stuart. 2007. “Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference.” Political Analysis, 15, Pp. 199–236.

A simple and powerful method of matching: Iacus, Stefano M., Gary King, and Giuseppe Porro. 2012. “Causal Inference Without Balance Checking: Coarsened Exact Matching.” Political Analysis, 20, 1, Pp. 1–24.
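For intuition, the core of coarsened exact matching can be sketched in a few lines: temporarily coarsen each covariate into analyst-chosen bins, exact-match treated and control units on the coarsened signature, and prune strata lacking both groups. A toy sketch with hypothetical data and function names (not the `cem` or `MatchIt` implementations):

```python
# Toy sketch of the idea behind coarsened exact matching (CEM):
# coarsen covariates into bins, exact-match on the bin signature,
# and prune strata that lack both treated and control units.
from collections import defaultdict

def cem_strata(units, bins):
    """units: dicts with 'treated' plus covariates; bins: {name: cut points}."""
    def coarsen(value, cuts):
        return sum(value >= c for c in cuts)   # index of the bin value falls in

    strata = defaultdict(list)
    for u in units:
        sig = tuple(coarsen(u[k], cuts) for k, cuts in bins.items())
        strata[sig].append(u)
    # keep only strata with at least one treated and one control unit
    return {s: us for s, us in strata.items()
            if any(u["treated"] for u in us)
            and any(not u["treated"] for u in us)}

units = [
    {"treated": True,  "age": 25, "income": 30},
    {"treated": False, "age": 27, "income": 32},   # matches the unit above
    {"treated": True,  "age": 60, "income": 90},   # no control match: pruned
]
matched = cem_strata(units, {"age": [40], "income": [50]})
print(len(matched))  # → 1
```

Estimation then proceeds on the retained units with whatever model one would have used anyway.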

King, Gary, Christopher Lucas, and Richard Nielsen. In Press. “The Balance-Sample Size Frontier in Matching Methods for Causal Inference.” American Journal of Political Science.

Ho, Daniel, Kosuke Imai, Gary King, and Elizabeth Stuart. 2011. “MatchIt: Nonparametric Preprocessing for Parametric Causal Inference.” Journal of Statistical Software, 42, 8, Pp. 1–28.

Iacus, Stefano M., Gary King, and Giuseppe Porro. 2018. “A Theory of Statistical Inference for Matching Methods in Causal Research.” Political Analysis, Pp. 1–23.

### Additional Approaches

A method to estimate base probabilities, or any quantity of interest, from case-control data, even with no (or only partial) auxiliary information; also discusses problems with odds ratios. King, Gary, and Langche Zeng. 2002. “Estimating Risk and Rate Levels, Ratios, and Differences in Case-Control Studies.” Statistics in Medicine, 21, Pp. 1409–1427.
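One problem with odds ratios is easy to see numerically: when the outcome is common, the odds ratio can badly overstate the risk ratio. A toy 2x2 example with made-up counts (not from the paper):

```python
# Toy 2x2 table with hypothetical counts: a common outcome makes the
# odds ratio exaggerate the (more interpretable) risk ratio.
cases_exposed, noncases_exposed = 40, 60      # risk among exposed = 0.40
cases_unexposed, noncases_unexposed = 20, 80  # risk among unexposed = 0.20

risk_exposed = cases_exposed / (cases_exposed + noncases_exposed)
risk_unexposed = cases_unexposed / (cases_unexposed + noncases_unexposed)
risk_ratio = risk_exposed / risk_unexposed            # 0.40 / 0.20 = 2.0

odds_ratio = ((cases_exposed / noncases_exposed)
              / (cases_unexposed / noncases_unexposed))  # (40/60)/(20/80)

# Exposure doubles the risk, but the odds ratio reports roughly 2.67.
print(risk_ratio, odds_ratio)
```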

King, Gary. 1991. “'Truth' is Stranger than Prediction, More Questionable Than Causal Inference.” American Journal of Political Science, 35, Pp. 1047–1053.

Causal inference in qualitative research (Chapter 4): King, Gary, Robert O. Keohane, and Sidney Verba. 1994. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton: Princeton University Press.

## Experimental Design

King, Gary, Benjamin Schneer, and Ariel White. 2017. “How the news media activate public expression and influence national agendas.” Science, 358, Pp. 776–780.

2014. “Methods for Extremely Large Scale Media Experiments and Observational Studies (Poster).” In Society for Political Methodology. Athens, GA.

2012. “Letter to the Editor on the ‘Medicare Health Support Pilot Program’ (by McCall and Cromwell).” New England Journal of Medicine, 366, 7, Pp. 667.

An evaluation of the Mexican Seguro Popular program (designed to extend health insurance and regular and preventive medical care, pharmaceuticals, and health facilities to 50 million uninsured Mexicans), one of the world's largest health policy reforms of the last two decades. The evaluation features the largest randomized health policy experiment in history, a new design for field experiments that is more robust to the political interventions that have ruined many similar previous efforts, and new statistical methods that produce more reliable and efficient results using substantially fewer resources, assumptions, and data. 2011. “Avoiding Randomization Failure in Program Evaluation.” Population Health Management, 14, 1, Pp. S11–S22.

**(Additional articles on the Seguro Popular evaluation are collected on the project website.)**

Clarifying serious misunderstandings about the advantages and uses of the most common research designs for making causal inferences: Imai, Kosuke, Gary King, and Elizabeth Stuart. 2008. “Misunderstandings Among Experimentalists and Observationalists about Causal Inference.” Journal of the Royal Statistical Society, Series A, 171, part 2, Pp. 481–502.

## Software

Stoll, Heather, Gary King, and Langche Zeng. 2005. “WhatIf: Software for Evaluating Counterfactuals.” Journal of Statistical Software, 15, 4, Pp. 1–18.

Tomz, Michael, Jason Wittenberg, and Gary King. 2003. “CLARIFY: Software for Interpreting and Presenting Statistical Results.” Journal of Statistical Software.

## Applications

Epstein, Lee, Daniel E. Ho, Gary King, and Jeffrey A. Segal. 2005. “The Supreme Court During Crisis: How War Affects Only Non-War Cases.” New York University Law Review, 80, Pp. 1–116.

A brief summary of the above article for an undergraduate audience: 2006. “The Effect of War on the Supreme Court.” In Principles and Practice in American Politics: Classic and Contemporary Readings, 3rd ed. Washington, D.C.: Congressional Quarterly Press.