A vast literature demonstrates that voters around the world who benefit from their governments' discretionary spending cast ballots for the incumbent party in larger proportions than those not receiving funds. But surprisingly, and contrary to most theories of political accountability, the evidence seems to indicate that voters also reward incumbent parties for implementing “programmatic” spending legislation, passed with support from all major parties and over which incumbents have no discretion. Why voters would attribute responsibility when none exists is unclear, as is why minority-party legislators would approve legislation that will cost them votes. We address this puzzle with one of the largest randomized social experiments ever conducted, which clearly rejects the claim that programmatic policies greatly increase voter support for incumbents. We also reanalyze the study cited as offering the strongest support for the electoral effects of programmatic policies, itself a very large-scale randomized experiment. We show that its key results vanish after correcting either a simple coding error affecting only two observations or highly unconventional data-analysis procedures (or both). Finally, we discuss how these consistent empirical results from the only two probative experiments on this question may be reconciled with several observational and theoretical studies touching on similar questions in other contexts.

# Causal Inference

Methods for detecting and reducing model dependence (i.e., situations where minor changes to the model produce substantively different inferences) when inferring causal effects and other counterfactual quantities. Topics include matching methods; "politically robust" and cluster-randomized experimental designs; and causal bias decompositions.

## Methods for Observational Data

### Evaluating Model Dependence

Evaluating whether counterfactual questions (predictions, what-if questions, and causal effects) can be reasonably answered from given data, or whether inferences will instead be highly model-dependent; also, a new decomposition of bias in causal inference. These articles overlap (and each has been the subject of a journal symposium):

For complete mathematical proofs, general notation, and other technical material, see: . 2006. “The Dangers of Extreme Counterfactuals.” Political Analysis, 14: 131–159.

For more intuitive (but less general) notation, with additional examples and more pedagogically oriented material, see: . 2007. “When Can History Be Our Guide? The Pitfalls of Counterfactual Inference.” International Studies Quarterly, 183–210, March.
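The core diagnostic in these articles asks whether a counterfactual point lies inside the convex hull of the observed covariates (interpolation, usually safe) or outside it (extrapolation, highly model-dependent). A minimal sketch of such a membership test, posed as a linear-programming feasibility problem; the function name and toy data here are illustrative assumptions, not taken from the authors' software:

```python
import numpy as np
from scipy.optimize import linprog

def in_convex_hull(point, data):
    """Check whether `point` lies in the convex hull of the rows of `data`.

    Solves the LP feasibility problem: find weights w >= 0 with
    sum(w) == 1 and data.T @ w == point. Such weights exist iff the
    point is an interpolation of the observed data.
    """
    n = data.shape[0]
    # Equality constraints stack data.T @ w = point with sum(w) = 1.
    A_eq = np.vstack([data.T, np.ones(n)])
    b_eq = np.append(point, 1.0)
    res = linprog(c=np.zeros(n), A_eq=A_eq, b_eq=b_eq,
                  bounds=[(0, None)] * n, method="highs")
    return bool(res.success)

# Observed covariates: the four corners of the unit square.
X = np.array([[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]])
print(in_convex_hull(np.array([0.5, 0.5]), X))  # interpolation -> True
print(in_convex_hull(np.array([2.0, 2.0]), X))  # extrapolation -> False
```

A counterfactual failing this test can still be analyzed, but the answer will lean heavily on the model's functional-form assumptions rather than on the data.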

### Matching Methods

A simple and powerful method of matching: . 2011. “Causal Inference Without Balance Checking: Coarsened Exact Matching.” Political Analysis.

. 2009. “CEM: Software for Coarsened Exact Matching.” Journal of Statistical Software, 30.
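The idea behind coarsened exact matching is to coarsen each covariate into analyst-chosen bins, match exactly on the joint coarsened values, and discard strata that lack both treated and control units. A minimal sketch of that logic; the function, cut points, and toy data are hypothetical illustrations, and the published cem software does considerably more (weights, automated coarsening, diagnostics):

```python
from collections import defaultdict
import numpy as np

def cem(X, treatment, bins):
    """Minimal sketch of coarsened exact matching.

    Coarsen each covariate using the supplied cut points, form strata
    from the joint coarsened values, and keep only strata containing at
    least one treated and one control unit. Returns matched indices.
    """
    strata = defaultdict(list)
    for i, row in enumerate(X):
        # Stratum key: the bin index of each covariate value.
        key = tuple(int(np.digitize(v, b)) for v, b in zip(row, bins))
        strata[key].append(i)
    matched = []
    for units in strata.values():
        treated = [i for i in units if treatment[i] == 1]
        control = [i for i in units if treatment[i] == 0]
        if treated and control:   # keep strata with common support only
            matched.extend(units)
    return sorted(matched)

# Toy data: covariates (age, income) and a binary treatment indicator.
X = np.array([[25, 30000], [27, 32000], [60, 90000], [62, 31000]])
treat = [1, 0, 1, 0]
bins = [np.array([40]), np.array([50000])]  # one cut point per covariate
print(cem(X, treat, bins))  # -> [0, 1]: only the first stratum has both groups
```

Because matching is exact on the coarsened strata, balance on the coarsened covariates holds by construction, which is why no post-matching balance checking is needed.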

A technical paper that describes a new class of matching methods, of which coarsened exact matching is an example: . 2011. “Multivariate Matching Methods That Are Monotonic Imbalance Bounding.” Journal of the American Statistical Association, 106 (493): 345–361.

A unified approach to matching methods as a way to reduce model dependence by preprocessing data and then using any model you would have used without matching: . 2007. “Matching as Nonparametric Preprocessing for Reducing Model Dependence in Parametric Causal Inference.” Political Analysis, 15: 199–236.

. 2011. “MatchIt: Nonparametric Preprocessing for Parametric Causal Inference.” Journal of Statistical Software, 42 (8).
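The preprocessing workflow is: first prune the data with a matching method, then run whatever parametric model you would have used anyway, but only on the matched subset, so the estimate leans less on functional-form assumptions. A hedged sketch on simulated data; the pruning rule below is a stand-in for the indices a real matcher would return, not MatchIt's output:

```python
import numpy as np

# Simulated data: outcome depends on treatment (true effect = 2.0) and age.
rng = np.random.default_rng(0)
n = 200
age = rng.uniform(20, 70, n)
treat = (rng.random(n) < 0.5).astype(float)
y = 2.0 * treat + 0.1 * age + rng.normal(0.0, 1.0, n)

# Step 1: prune to a matched subset (stand-in for a matching method's output).
matched = age < 45

# Step 2: fit the same parametric model you would have used without matching,
# restricted to the matched units.
Xm = np.column_stack([np.ones(int(matched.sum())), treat[matched], age[matched]])
beta, *_ = np.linalg.lstsq(Xm, y[matched], rcond=None)
print(f"treatment effect estimate on matched data: {beta[1]:.2f}")
```

The payoff is not the point estimate itself (here the model is correctly specified either way) but robustness: on matched data, misspecifying the age term would move the estimate far less than it would on the full, imbalanced sample.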

. In Press. “The Balance-Sample Size Frontier in Matching Methods for Causal Inference.” American Journal of Political Science, 2016.

### Additional Approaches

A method to estimate base probabilities or any quantity of interest from case-control data, even with no (or only partial) auxiliary information; also discusses problems with odds ratios. . 2002. “Estimating Risk and Rate Levels, Ratios, and Differences in Case-Control Studies.” Statistics in Medicine, 21: 1409–1427.
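One problem with odds ratios that motivates work like this: they are routinely read as if they were risk ratios, yet the two diverge badly whenever the outcome is common. A small arithmetic illustration with an invented 2×2 table:

```python
def risk_and_odds_ratio(a, b, c, d):
    """Risk ratio and odds ratio from a 2x2 table:
    exposed:   a events, b non-events
    unexposed: c events, d non-events
    """
    risk_exposed = a / (a + b)
    risk_unexposed = c / (c + d)
    rr = risk_exposed / risk_unexposed   # ratio of probabilities
    orr = (a * d) / (b * c)              # cross-product odds ratio
    return rr, orr

# Common outcome (80% vs 50%): the odds ratio badly overstates the risk ratio.
rr, orr = risk_and_odds_ratio(80, 20, 50, 50)
print(round(rr, 2), round(orr, 2))  # -> 1.6 4.0
```

With a rare outcome the two quantities nearly coincide, which is why the rare-disease approximation is so often (and so often inappropriately) invoked.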

. 1991. “'Truth' Is Stranger Than Prediction, More Questionable Than Causal Inference.” American Journal of Political Science, 35: 1047–1053, November.

Causal inference in qualitative research (Chapter 4): . 1994. Designing Social Inquiry: Scientific Inference in Qualitative Research. Princeton: Princeton University Press.

## Experimental Design

. 2014. “Methods for Extremely Large Scale Media Experiments and Observational Studies (Poster).” In Society for Political Methodology. Athens, GA, 24 July.

. 2012. “Letter to the Editor on the "Medicare Health Support Pilot Program" (By McCall and Cromwell).” New England Journal of Medicine, 366 (7): 667.

. 2011. “Avoiding Randomization Failure in Program Evaluation.” Population Health Management, 14 (1): S11–S22. An evaluation of the Mexican Seguro Popular program (designed to extend health insurance, regular and preventive medical care, pharmaceuticals, and health facilities to 50 million uninsured Mexicans), one of the world's largest health policy reforms of the last two decades. The evaluation features the largest randomized health policy experiment in history, a new design for field experiments that is more robust to the political interventions that have ruined many similar previous efforts, and new statistical methods that produce more reliable and efficient results using substantially fewer resources, assumptions, and data.

**(Articles on the Seguro Popular Evaluation)**

Clarifying serious misunderstandings about the advantages and uses of the most common research designs for making causal inferences: . 2008. “Misunderstandings Among Experimentalists and Observationalists About Causal Inference.” Journal of the Royal Statistical Society, Series A, 171, part 2: 481–502.

## Software

. 2005. “WhatIf: Software for Evaluating Counterfactuals.” Journal of Statistical Software, 15.

. 2003. “CLARIFY: Software for Interpreting and Presenting Statistical Results.” Journal of Statistical Software, 8.

## Applications

. 2005. “The Supreme Court During Crisis: How War Affects Only Non-War Cases.” New York University Law Review, 80: 1–116, April.

A brief summary of the above article for an undergraduate audience: . 2006. “The Effect of War on the Supreme Court.” In Principles and Practice in American Politics: Classic and Contemporary Readings, 3rd ed. Washington, D.C.: Congressional Quarterly Press.