Amelia II is a complete R package for multiple imputation of missing data. The package implements a new expectation-maximization with bootstrapping algorithm that works faster, handles larger numbers of variables, and is far easier to use than various Markov chain Monte Carlo approaches, but gives essentially the same answers. The program also improves imputation models by allowing researchers to put Bayesian priors on individual cell values, thereby incorporating a great deal of potentially valuable information. It also includes features to accurately impute cross-sectional datasets, individual time series, or sets of time series for different cross-sections. A full set of graphical diagnostics is also available. The program is easy to use, and the simplicity of the algorithm makes it far more robust; both a simple command line and an extensive graphical user interface are included.
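The EMB idea in the abstract (bootstrap the rows, estimate a multivariate-normal model on each bootstrap sample, then draw the missing cells from their conditional distribution) can be sketched as follows. This is a loose illustration, not Amelia's implementation: `impute_emb` is an invented name, and a crude available-case estimate stands in for the real EM step.

```python
import numpy as np

def impute_emb(X, m=5, seed=0):
    """Sketch of EMB-style multiple imputation: bootstrap, fit a
    multivariate normal, draw missing cells from the conditional normal.
    (The EM step is replaced here by a crude available-case estimate.)"""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    imputations = []
    for _ in range(m):
        boot = X[rng.integers(0, n, n)]            # bootstrap the rows
        mu = np.nanmean(boot, axis=0)              # stand-in for EM estimates
        mu = np.where(np.isnan(mu), np.nanmean(X, axis=0), mu)  # fallback
        Xc = boot - mu
        mask = ~np.isnan(Xc)
        Xc0 = np.where(mask, Xc, 0.0)
        counts = mask.astype(float).T @ mask.astype(float)  # pairwise complete counts
        sigma = (Xc0.T @ Xc0) / np.maximum(counts - 1.0, 1.0)
        sigma += 1e-6 * np.eye(p)                  # keep it invertible
        filled = X.copy()
        for i in range(n):
            miss = np.isnan(X[i])
            if not miss.any():
                continue
            obs = ~miss
            # conditional normal of the missing cells given the observed ones
            S_oo = sigma[np.ix_(obs, obs)]
            S_mo = sigma[np.ix_(miss, obs)]
            w = np.linalg.solve(S_oo, X[i, obs] - mu[obs])
            cond_mean = mu[miss] + S_mo @ w
            cond_cov = sigma[np.ix_(miss, miss)] - S_mo @ np.linalg.solve(S_oo, S_mo.T)
            filled[i, miss] = rng.multivariate_normal(cond_mean, cond_cov)
        imputations.append(filled)
    return imputations
```

Each of the `m` completed datasets would then be analyzed separately and the results combined, as in any multiple-imputation workflow.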

%B Journal of Statistical Software %V 45 %P 1-47 %G eng %N 7 %0 Journal Article %J Journal of Statistical Software %D 2011 %T Anchors: Software for Anchoring Vignettes Data %A Jonathan Wand %A Gary King %A Olivia Lau %X When respondents use the ordinal response categories of standard survey questions in different ways, analyses based on the resulting data can be biased. Anchoring vignettes is a survey design technique intended to correct for some of these problems. The anchors package in R includes methods for evaluating and choosing anchoring vignettes, and for analyzing the resulting data.

%B Journal of Statistical Software %V 42 %P 1--25 %G eng %U http://www.jstatsoft.org/v42/i03/ %N 3 %0 Generic %D 2011 %T AutoCast: Automated Bayesian Forecasting with YourCast %A Jonathan Bischof %A Gary King %A Samir Soneji %G eng %U http://gking.harvard.edu/software/autocast-automated-bayesian-forecasting-yourcast %0 Journal Article %J Population Health Management %D 2011 %T Avoiding Randomization Failure in Program Evaluation %A Gary King %A Richard Nielsen %A Carter Coberley %A James E. Pope %A Aaron Wells %X We highlight common problems in the application of random treatment assignment in large-scale program evaluation. Random assignment is the defining feature of modern experimental design. Yet, errors in design, implementation, and analysis often result in real-world applications not benefiting from the advantages of randomization. The errors we highlight cover the control of variability, levels of randomization, size of treatment arms, and power to detect causal effects, as well as the many problems that commonly lead to post-treatment bias. We illustrate with an application to the Medicare Health Support evaluation, including recommendations for improving the design and analysis of this and other large-scale randomized experiments.

%B Population Health Management %V 14 %P S11-S22 %8 2011 %G eng %N 1 %0 Generic %D 2011 %T Comparative Effectiveness of Matching Methods for Causal Inference %A Gary King %A Richard Nielsen %A Carter Coberley %A James E. Pope %A Aaron Wells %X Matching is an increasingly popular method of causal inference in observational data, but following methodological best practices has proven difficult for applied researchers. We address this problem by providing a simple graphical approach for choosing among the numerous possible matching solutions generated by three methods: the venerable "Mahalanobis Distance Matching" (MDM), the commonly used "Propensity Score Matching" (PSM), and a newer approach called "Coarsened Exact Matching" (CEM). In the process of using our approach, we also discover that PSM often approximates random matching, both in many real applications and in data simulated by the processes that fit PSM theory. Moreover, contrary to conventional wisdom, random matching is not benign: it (and thus PSM) can often degrade inferences relative to not matching at all. We find that MDM and CEM do not have this problem, and in practice CEM usually outperforms the other two approaches. However, with our comparative graphical approach and easy-to-follow procedures, focus can be on choosing a matching solution for a particular application, which is what may improve inferences, rather than the particular method used to generate it.
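For concreteness, one of the three methods compared above, Mahalanobis Distance Matching, reduces in its one-to-one form to a simple greedy pairing. `mahalanobis_match` below is a hypothetical sketch, not the authors' software:

```python
import numpy as np

def mahalanobis_match(X, treated):
    """Greedy one-to-one Mahalanobis Distance Matching (MDM) sketch:
    each treated unit is paired with the nearest not-yet-used control,
    with distance measured in the metric of the sample covariance."""
    X = np.asarray(X, float)
    t = np.asarray(treated, bool)
    t_idx = np.flatnonzero(t)
    c_idx = np.flatnonzero(~t)
    # small ridge keeps the covariance invertible in degenerate cases
    S_inv = np.linalg.inv(np.cov(X, rowvar=False) + 1e-9 * np.eye(X.shape[1]))
    pairs, used = [], set()
    for i in t_idx:
        d = X[c_idx] - X[i]
        dist = np.einsum('ij,jk,ik->i', d, S_inv, d)   # squared Mahalanobis
        for j in np.argsort(dist):
            if c_idx[j] not in used:
                used.add(c_idx[j])
                pairs.append((int(i), int(c_idx[j])))
                break
    return pairs
```

After matching, treated-control imbalance on the covariates would be inspected (the graphical comparison the abstract describes) before any outcome model is fit.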

%G eng %0 Journal Article %J Science %D 2011 %T Ensuring the Data-Rich Future of the Social Sciences %A Gary King %X Massive increases in the availability of informative social science data are making dramatic progress possible in analyzing, understanding, and addressing many major societal problems. Yet the same forces pose severe challenges to the scientific infrastructure supporting data sharing, data management, informatics, statistical methodology, and research ethics and policy, and these are collectively holding back progress. I address these changes and challenges and suggest what can be done.

%B Science %V 331 %P 719-721 %8 2011 %G eng %N 11 February %0 Journal Article %J PLoS ONE %D 2011 %T Estimating Incidence Curves of Several Infections Using Symptom Surveillance Data %A Edward Goldstein %A Benjamin J. Cowling %A Allison E. Aiello %A Saki Takahashi %A Gary King %A Ying Lu %A Marc Lipsitch %X We introduce a method for estimating incidence curves of several co-circulating infectious pathogens, where each infection has its own probabilities of particular symptom profiles. Our deconvolution method utilizes weekly surveillance data on symptoms from a defined population as well as additional data on symptoms from a sample of virologically confirmed infectious episodes. We illustrate this method by numerical simulations and by using data from a survey conducted on the University of Michigan campus. Finally, we describe the data needed to make such estimates accurate.
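The deconvolution setup can be illustrated with a toy version: each week's vector of symptom-profile counts is modeled as a known profile-probability matrix (estimated from the virologically confirmed episodes) times the unknown pathogen incidences, which a per-week least-squares solve then recovers. `estimate_incidence` is an invented simplification, not the paper's estimator:

```python
import numpy as np

def estimate_incidence(Y, M):
    """Toy deconvolution: Y[s, t] are counts of symptom profile s in week t,
    M[s, p] is the probability that an episode of pathogen p shows profile s.
    Solve Y ~= M @ incidence by least squares, clipping at zero since
    incidence cannot be negative. (The published method also handles
    sampling uncertainty and other complications.)"""
    inc, *_ = np.linalg.lstsq(M, Y, rcond=None)
    return np.clip(inc, 0.0, None)
```

With more symptom profiles than pathogens (M taller than wide and of full column rank), the weekly incidences are identified from the symptom mixture alone.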

%B PLoS ONE %V 6 %P e23380 %G eng %N 8 %0 Journal Article %J Demographic Research %D 2011 %T The Future of Death in America %A Gary King %A Samir Soneji %X Population mortality forecasts are widely used for allocating public health expenditures, setting research priorities, and evaluating the viability of public pensions, private pensions, and health care financing systems. In part because existing methods seem to forecast worse when based on more information, most forecasts are still based on simple linear extrapolations that ignore known biological risk factors and other prior information. We adapt a Bayesian hierarchical forecasting model capable of including more known health and demographic information than has previously been possible. This leads to the first age- and sex-specific forecasts of American mortality that simultaneously incorporate, in a formal statistical model, the effects of the recent rapid increase in obesity, the steady decline in tobacco consumption, and the well-known patterns of smooth mortality age profiles and time trends. Formally including new information in forecasts can matter a great deal. For example, we estimate an increase in male life expectancy at birth from 76.2 years in 2010 to 79.9 years in 2030, which is 1.8 years greater than the U.S. Social Security Administration projection and 1.5 years more than the U.S. Census projection. For females, we estimate more modest gains in life expectancy at birth over the next twenty years, from 80.5 years to 81.9 years, which is virtually identical to the Social Security Administration projection and 2.0 years less than the U.S. Census projection. We show that these patterns are also likely to greatly affect the aging American population structure. We offer an easy-to-use approach so that researchers can include other sources of information and potentially improve on our forecasts too.

%B Demographic Research %V 25 %P 1--38 %G eng %U http://www.demographic-research.org/volumes/vol25/1/ %N 1 %0 Journal Article %J Proceedings of the National Academy of Sciences %D 2011 %T General Purpose Computer-Assisted Clustering and Conceptualization %A Justin Grimmer %A Gary King %X We develop a computer-assisted method for the discovery of insightful conceptualizations, in the form of clusterings (i.e., partitions) of input objects. Each of the numerous fully automated methods of cluster analysis proposed in statistics, computer science, and biology optimizes a different objective function. Almost all are well defined, but how to determine before the fact which one, if any, will partition a given set of objects in an "insightful" or "useful" way for a given user is unknown and difficult, if not logically impossible. We develop a metric space of partitions from all existing cluster analysis methods applied to a given data set (along with millions of other solutions we add based on combinations of existing clusterings), and enable a user to explore and interact with it, and quickly reveal or prompt useful or insightful conceptualizations. In addition, although uncommon in unsupervised learning problems, we offer and implement evaluation designs that make our computer-assisted approach vulnerable to being proven suboptimal in specific data types. We demonstrate that our approach facilitates more efficient and insightful discovery of useful information than either expert human coders or many existing fully automated methods.
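Building a metric space of partitions requires a distance between clusterings; one standard choice (not necessarily the one used in the paper) is the variation of information, sketched here:

```python
import numpy as np
from collections import Counter

def variation_of_information(a, b):
    """Variation of information between two partitions of the same objects,
    given as label sequences a and b. It is a true metric on partitions:
    zero iff the partitions coincide, symmetric, triangle inequality."""
    n = len(a)
    assert len(b) == n
    pa, pb, pab = Counter(a), Counter(b), Counter(zip(a, b))
    h = lambda counts: -sum(c / n * np.log(c / n) for c in counts.values())
    # VI = H(A|B) + H(B|A) = 2 H(A,B) - H(A) - H(B)
    return 2 * h(pab) - h(pa) - h(pb)
```

Pairwise distances like this between all candidate clusterings are what make it possible to embed them in a space a user can explore.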

%B Proceedings of the National Academy of Sciences %G eng %U http://www.pnas.org/content/early/2011/01/31/1018067108.abstract %0 Journal Article %J Journal of Statistical Software %D 2011 %T MatchIt: Nonparametric Preprocessing for Parametric Causal Inference %A Daniel E. Ho %A Kosuke Imai %A Gary King %A Elizabeth A. Stuart %X MatchIt implements the suggestions of Ho, Imai, King, and Stuart (2007) for improving parametric statistical models by preprocessing data with nonparametric matching methods. MatchIt implements a wide range of sophisticated matching methods, making it possible to greatly reduce the dependence of causal inferences on hard-to-justify, but commonly made, statistical modeling assumptions. The software also easily fits into existing research practices since, after preprocessing data with MatchIt, researchers can use whatever parametric model they would have used without MatchIt, but produce inferences with substantially more robustness and less sensitivity to modeling assumptions. MatchIt is an R program, and also works seamlessly with Zelig. %B Journal of Statistical Software %V 42 %P 1--28 %G eng %U https://www.jstatsoft.org/article/view/v042i08 %N 8 %0 Journal Article %J Journal of the American Statistical Association %D 2011 %T Multivariate Matching Methods That are Monotonic Imbalance Bounding %A Stefano M. Iacus %A Gary King %A Giuseppe Porro %X We introduce a new "Monotonic Imbalance Bounding" (MIB) class of matching methods for causal inference with a surprisingly large number of attractive statistical properties. MIB generalizes and extends in several new directions the only existing class, "Equal Percent Bias Reducing" (EPBR), which is designed to satisfy weaker properties and only in expectation. We also offer strategies to obtain specific members of the MIB class, and analyze in more detail a member of this class, called Coarsened Exact Matching, whose properties we examine from this new perspective.
We offer a variety of analytical results and numerical simulations that demonstrate how members of the MIB class can dramatically improve inferences relative to EPBR-based matching methods.
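Coarsened Exact Matching, the MIB-class member analyzed above, is simple enough to sketch: coarsen each covariate into a few bins, exact-match on the vector of bin labels, and prune strata lacking both treated and control units. `cem` and its equal-width coarsening are illustrative choices here, not the authors' software:

```python
import numpy as np
from collections import defaultdict

def cem(X, treated, bins=4):
    """Coarsened Exact Matching sketch: group units by their vector of
    coarsened covariate values and keep only strata containing at least
    one treated and one control unit."""
    X = np.asarray(X, float)
    t = np.asarray(treated, bool)
    # equal-width coarsening per covariate (the method allows any coarsening)
    labels = np.stack(
        [np.digitize(X[:, j],
                     np.linspace(X[:, j].min(), X[:, j].max(), bins + 1)[1:-1])
         for j in range(X.shape[1])], axis=1)
    strata = defaultdict(list)
    for i, key in enumerate(map(tuple, labels)):
        strata[key].append(i)
    keep = []
    for members in strata.values():
        flags = t[members]
        if flags.any() and not flags.all():   # both groups present
            keep.extend(members)
    return sorted(keep)
```

Coarser bins retain more units at the cost of more within-stratum imbalance; the monotonic imbalance bound comes from the fact that the chosen coarsening caps that imbalance ex ante.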

%B Journal of the American Statistical Association %V 106 %P 345-361 %8 2011 %G eng %N 493