How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It

Citation:

King, Gary, and Margaret E. Roberts. 2014. "How Robust Standard Errors Expose Methodological Problems They Do Not Fix, and What to Do About It." Political Analysis: 1-21. Copy at http://j.mp/InK5jU

Abstract:

"Robust standard errors" are used in a vast array of scholarship to correct standard errors for model misspecification. However, when misspecification is bad enough to make classical and robust standard errors diverge, assuming that it is nevertheless not so bad as to bias everything else requires considerable optimism. And even if the optimism is warranted, settling for a misspecified model, with or without robust standard errors, will still bias estimators of all but a few quantities of interest. The resulting cavernous gap between theory and practice suggests that considerable gains in applied statistics may be possible. We seek to help researchers realize these gains via a more productive way to understand and use robust standard errors; a new general and easier-to-use "generalized information matrix test" statistic that can formally assess misspecification (based on differences between robust and classical variance estimates); and practical illustrations via simulations and real examples from published research. How robust standard errors are used needs to change, but instead of jettisoning this popular tool we show how to use it to provide effective clues about model misspecification, likely biases, and a guide to considerably more reliable, and defensible, inferences. Accompanying this article [soon!] is software that implements the methods we describe. 
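The abstract's core diagnostic idea, that divergence between classical and robust variance estimates signals model misspecification, can be sketched with a minimal NumPy example. This is a generic White-style sandwich estimator for OLS, not the paper's generalized information matrix test; the data-generating process and all variable names here are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])  # design matrix with intercept
# Misspecify the constant-variance assumption: error spread grows with |x|.
y = 1.0 + 2.0 * x + rng.normal(scale=1.0 + np.abs(x), size=n)

XtX_inv = np.linalg.inv(X.T @ X)
beta = XtX_inv @ X.T @ y
resid = y - X @ beta
k = X.shape[1]

# Classical variance estimate: s^2 (X'X)^{-1}
s2 = resid @ resid / (n - k)
var_classical = s2 * XtX_inv

# Robust (sandwich) variance estimate: (X'X)^{-1} X' diag(e^2) X (X'X)^{-1}
meat = X.T @ (X * resid[:, None] ** 2)
var_robust = XtX_inv @ meat @ XtX_inv

se_classical = np.sqrt(np.diag(var_classical))
se_robust = np.sqrt(np.diag(var_robust))
print("classical SEs:", se_classical)
print("robust SEs:   ", se_robust)
```

Under the heteroskedastic errors above, the robust standard error for the slope noticeably exceeds the classical one; in the paper's terms, that gap is a clue to investigate and fix the misspecification rather than simply report the robust numbers.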


DOI: 10.1093/pan/mpu015