"Robust standard errors" are used in a vast array of scholarship to correct standard errors for model misspecification. However, when misspecification is bad enough to make classical and robust standard errors diverge, assuming that it is nevertheless not so bad as to bias everything else requires considerable optimism. And even if the optimism is warranted, settling for a misspecified model, with or without robust standard errors, will still bias estimators of all but a few quantities of interest. The resulting cavernous gap between theory and practice suggests that considerable gains in applied statistics may be possible. We seek to help researchers realize these gains via a more productive way to understand and use robust standard errors; a new general and easier-to-use "generalized information matrix test" statistic that can formally assess misspecification (based on differences between robust and classical variance estimates); and practical illustrations via simulations and real examples from published research. How robust standard errors are used needs to change, but instead of jettisoning this popular tool we show how to use it to provide effective clues about model misspecification, likely biases, and a guide to considerably more reliable, and defensible, inferences. Accompanying this article [soon!] is software that implements the methods we describe.

# Unifying Statistical Analysis

Development of a unified approach to statistical modeling, inference, interpretation, presentation, analysis, and software, integrated with most of the other projects listed here.

## Unifying Approaches to Statistical Analysis

2015. “How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It.” Political Analysis, 23 (2): 159–179.
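The paper's core diagnostic is the gap between classical and robust ("sandwich") variance estimates: when they diverge, the model is misspecified in some way. A minimal sketch of that comparison for OLS, using simulated heteroskedastic data (all names and the data-generating process here are hypothetical illustrations, not the paper's own code):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000
x = rng.uniform(0, 2, n)
X = np.column_stack([np.ones(n), x])
# Deliberately heteroskedastic errors (variance grows with x), so
# classical and robust standard errors should diverge.
y = 1.0 + 2.0 * x + rng.normal(0, 0.5 + x, n)

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
e = y - X @ beta

# Classical variance estimate: s^2 (X'X)^{-1}
s2 = e @ e / (n - X.shape[1])
V_classical = s2 * XtX_inv

# HC0 "sandwich" robust variance: (X'X)^{-1} X' diag(e^2) X (X'X)^{-1}
meat = (X * (e**2)[:, None]).T @ X
V_robust = XtX_inv @ meat @ XtX_inv

se_classical = np.sqrt(np.diag(V_classical))
se_robust = np.sqrt(np.diag(V_robust))
print("classical SEs:", se_classical)
print("robust SEs:   ", se_robust)
```

In the paper's reading, a large divergence between the two columns is not a license to report the robust numbers; it is evidence that the model itself needs rethinking.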

A paper that describes the advances underlying Zelig software: 2008. “Toward A Common Framework for Statistical Analysis and Development.” Journal of Computational and Graphical Statistics, 17: 1–22.

Sets out the general framework. 1998. Unifying Political Methodology: The Likelihood Theory of Statistical Inference. Ann Arbor: University of Michigan Press.

A generalization of Clarify, and much other software, implemented in R. The extensive manual encompasses most of the above works and can be read independently as an introduction to a wide range of models. Under active development. 2006. “Zelig: Everyone's Statistical Software”.

Generalizes the unification in the book (replacing its Section 5.2 with simulation to compute quantities of interest). This paper, which was originally titled "Enough with the Logit Coefficients, Already!", explains how to compute any quantity of interest from almost any statistical model; and shows, with replications of several published works, how to extract considerably more information than standard practices, without changing any data or statistical assumptions. 2000. “Making the Most of Statistical Analyses: Improving Interpretation and Presentation.” American Journal of Political Science, 44: 341–355, April.
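The simulation approach described above can be sketched as: fit the model, draw coefficient vectors from their estimated asymptotic distribution, and compute the quantity of interest (here, a predicted probability from a logit) for each draw. This is a minimal illustration under assumed, simulated data; the model, covariate values, and draw counts are hypothetical choices, not prescriptions from the article:

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated data for a logit model (hypothetical example)
n = 2000
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])
p_true = 1 / (1 + np.exp(-(-0.5 + 1.2 * x)))
y = rng.binomial(1, p_true)

# Fit the logit by Newton-Raphson
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    H = (X * W[:, None]).T @ X          # observed information
    beta += np.linalg.solve(H, X.T @ (y - p))
V = np.linalg.inv(H)                     # estimated var-cov of beta-hat

# Quantities of interest by simulation: draw coefficients from their
# asymptotic distribution, then compute Pr(y=1 | x=1) for each draw.
draws = rng.multivariate_normal(beta, V, size=5000)
x_of_interest = np.array([1.0, 1.0])
qi = 1 / (1 + np.exp(-draws @ x_of_interest))
lo, hi = np.percentile(qi, [2.5, 97.5])
print(f"Pr(y=1 | x=1): {qi.mean():.3f} [{lo:.3f}, {hi:.3f}]")
```

The point of the procedure is that the same recipe works for nearly any model and any quantity of interest: only the last few lines (the function of the simulated coefficients) change.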

Software that accompanies the above article and implements its key ideas in easy-to-use Stata macros. 2003. “CLARIFY: Software for Interpreting and Presenting Statistical Results.” Journal of Statistical Software 8.

## Related Materials

1986. “How Not to Lie With Statistics: Avoiding Common Mistakes in Quantitative Political Science.” American Journal of Political Science, 30: 666–687, August.

2004. “What to do When Your Hessian is Not Invertible: Alternatives to Model Respecification in Nonlinear Estimation.” Sociological Methods and Research, 32: 54–87, August.

2009. “The Changing Evidence Base of Social Science Research.” In The Future of Political Science: 100 Perspectives. New York: Routledge Press.

2003. “Numerical Issues Involved in Inverting Hessian Matrices.” In Numerical Issues in Statistical Computing for the Social Scientist, 143–176. Hoboken, NJ: John Wiley and Sons, Inc.

1991. “Calculating Standard Errors of Predicted Values based on Nonlinear Functional Forms.” The Political Methodologist, 4, Fall.