Publications by Author:

2020
Population-scale Longitudinal Mapping of COVID-19 Symptoms, Behaviour and Testing
William E. Allen, Han Altae-Tran, James Briggs, Xin Jin, Glen McGee, Andy Shi, Rumya Raghavan, Mireille Kamariza, Nicole Nova, Albert Pereta, Chris Danford, Amine Kamel, Patrik Gothe, Evrhet Milam, Jean Aurambault, Thorben Primke, Weijie Li, Josh Inkenbrandt, Tuan Huynh, Evan Chen, Christina Lee, Michael Croatto, Helen Bentley, Wendy Lu, Robert Murray, Mark Travassos, Brent A. Coull, John Openshaw, Casey S. Greene, Ophir Shalem, Gary King, Ryan Probasco, David R. Cheng, Ben Silbermann, Feng Zhang, and Xihong Lin. 8/26/2020. “Population-scale Longitudinal Mapping of COVID-19 Symptoms, Behaviour and Testing.” Nature Human Behaviour. Publisher's Version
Despite the widespread implementation of public health measures, coronavirus disease 2019 (COVID-19) continues to spread in the United States. To facilitate an agile response to the pandemic, we developed How We Feel, a web and mobile application that collects longitudinal self-reported survey responses on health, behaviour and demographics. Here, we report results from over 500,000 users in the United States from 2 April 2020 to 12 May 2020. We show that self-reported surveys can be used to build predictive models to identify likely COVID-19-positive individuals. We find evidence among our users for asymptomatic or presymptomatic presentation; show a variety of exposure, occupational and demographic risk factors for COVID-19 beyond symptoms; reveal factors for which users have been SARS-CoV-2 PCR tested; and highlight the temporal dynamics of symptoms and self-isolation behaviour. These results highlight the utility of collecting a diverse set of symptomatic, demographic, exposure and behavioural self-reported data to fight the COVID-19 pandemic.
Article
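The abstract notes that self-reported survey responses can be used to build predictive models of likely COVID-19 positivity. As a purely illustrative sketch, not the paper's actual model, data, or feature set, a minimal logistic-regression workflow on synthetic survey-style features might look like this (all feature names and effect sizes are invented):

```python
# Hypothetical sketch only: a logistic-regression classifier on synthetic,
# survey-style binary features.  Not the paper's model or data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 5000
# invented features: fever, loss_of_smell, cough, known_exposure
X = rng.integers(0, 2, size=(n, 4))
logit = -3.0 + X @ np.array([1.2, 2.0, 0.6, 1.5])    # invented effect sizes
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))    # simulated test result

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = LogisticRegression().fit(X_tr, y_tr)
print("held-out AUC:", roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
```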
2019
Ecological Regression with Partial Identification
Wenxin Jiang, Gary King, Allen Schmaltz, and Martin A. Tanner. 2019. “Ecological Regression with Partial Identification.” Political Analysis, 28, 1, Pp. 1–22.

Ecological inference (EI) is the process of learning about individual behavior from aggregate data. We relax assumptions by allowing for “linear contextual effects,” which previous works have regarded as plausible but avoided due to non-identification, a problem we sidestep by deriving bounds instead of point estimates. In this way, we offer a conceptual framework to improve on the Duncan-Davis bound, derived more than sixty-five years ago. To study the effectiveness of our approach, we collect and analyze 8,430 2x2 EI datasets with known ground truth from several sources, bringing considerably more data to bear on the problem than the existing dozen or so datasets available in the literature for evaluating EI estimators. For the 88% of real datasets in our collection that fit a proposed rule, our approach reduces the width of the Duncan-Davis bound, on average, by about 44%, while still capturing the true district-level parameter about 99% of the time. The remaining 12% revert to the Duncan-Davis bound.

Easy-to-use software is available that implements all the methods described in the paper. 

Article
Online Supplementary Appendix
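For readers unfamiliar with the bound the paper starts from, here is a minimal sketch of the classical Duncan-Davis (method-of-bounds) interval for a single 2x2 ecological unit; variable names are hypothetical, and this is not the paper's software or its tightened bound:

```python
# Classical Duncan-Davis bounds for one 2x2 ecological inference unit.
# Illustrative only; the paper's method further narrows these bounds.
def duncan_davis_bounds(x, t):
    """Bounds on beta_b, the outcome rate in group 1, where x is the group-1
    share of the unit and t is the overall outcome rate, implied by the
    accounting identity t = x * beta_b + (1 - x) * beta_w."""
    if x == 0:
        return (0.0, 1.0)  # group absent: its rate is completely unidentified
    lower = max(0.0, (t - (1.0 - x)) / x)
    upper = min(1.0, t / x)
    return (lower, upper)

# Example: a unit that is 70% group 1 with 60% overall turnout.
print(duncan_davis_bounds(0.7, 0.6))  # roughly (0.43, 0.86)
```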
2009
From Preserving the Past to Preserving the Future: The Data-PASS Project and the Challenges of Preserving Digital Social Science Data
Myron P Gutmann, Mark Abrahamson, Margaret O Adams, Micah Altman, Caroline Arms, Kenneth Bollen, Michael Carlson, Jonathan Crabtree, Darrell Donakowski, Gary King, Jared Lyle, Marc Maynard, Amy Pienta, Richard Rockwell, Lois Timms-Ferrara, and Copeland H Young. 2009. “From Preserving the Past to Preserving the Future: The Data-PASS Project and the Challenges of Preserving Digital Social Science Data.” Library Trends, 57, Pp. 315–337.

Social science data are an unusual part of the past, present, and future of digital preservation. They are both an unqualified success, due to long-lived and sustainable archival organizations, and in need of further development because not all digital content is being preserved. This article is about the Data Preservation Alliance for Social Sciences (Data-PASS), a partnership of five major U.S. social science data archives supported by the National Digital Information Infrastructure and Preservation Program (NDIIPP). Broadly speaking, Data-PASS has the goal of ensuring that at-risk social science data are identified, acquired, and preserved, and of building a future-oriented organization that can collaborate on those preservation tasks. Throughout the life of the Data-PASS project we have worked to identify digital materials that have never been systematically archived, and to appraise and acquire them. As the project has progressed, however, it has increasingly turned its attention from identifying and acquiring legacy and at-risk social science data to identifying ongoing and future research projects that will produce data. This article is about the project's history, with an emphasis on the issues that underlay the transition from looking backward to looking forward.

Article
2007
A Proposed Standard for the Scholarly Citation of Quantitative Data
Micah Altman and Gary King. 2007. “A Proposed Standard for the Scholarly Citation of Quantitative Data.” D-Lib Magazine, 13. Publisher's Version

An essential aspect of science is a community of scholars cooperating and competing in the pursuit of common goals. A critical component of this community is the common language of and the universal standards for scholarly citation, credit attribution, and the location and retrieval of articles and books. We propose a similar universal standard for citing quantitative data that retains the advantages of print citations, adds other components made possible by, and needed due to, the digital form and systematic nature of quantitative data sets, and is consistent with most existing subfield-specific approaches. Although the digital library field includes numerous creative ideas, we limit ourselves to only those elements that appear ready for easy practical use by scientists, journal editors, publishers, librarians, and archivists.

Article
2003
Numerical Issues Involved in Inverting Hessian Matrices
Jeff Gill and Gary King. 2003. “Numerical Issues Involved in Inverting Hessian Matrices.” In Numerical Issues in Statistical Computing for the Social Scientist, edited by Micah Altman and Michael P. McDonald, Pp. 143-176. Hoboken, NJ: John Wiley and Sons, Inc. Chapter PDF
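This chapter concerns what can go wrong when the Hessian from a maximum-likelihood fit is near-singular and so cannot be inverted reliably to obtain a variance matrix. As a generic illustration of the problem, not the chapter's specific generalized-inverse procedure, a common fallback is the Moore-Penrose pseudo-inverse:

```python
# Generic sketch: falling back to a pseudo-inverse when a Hessian is singular.
# This illustrates the problem the chapter treats; it is not the chapter's own
# procedure.  In ML estimation the variance matrix is typically the inverse of
# the negative log-likelihood Hessian, which may fail to exist.
import numpy as np

neg_hessian = np.array([[4.0, 2.0, 2.0],
                        [2.0, 1.0, 1.0],
                        [2.0, 1.0, 1.0]])   # last two rows identical, so singular
try:
    vcov = np.linalg.inv(neg_hessian)       # the standard inversion
except np.linalg.LinAlgError:
    vcov = np.linalg.pinv(neg_hessian)      # Moore-Penrose pseudo-inverse fallback
print(np.round(vcov, 3))
```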
2001
Virtual Data Center
Micah Altman, Leonid Andreev, Mark Diggory, Gary King, Daniel Kiskis, Elizabeth Kolster, Michael Krot, and Sidney Verba. 2001. “Virtual Data Center”.
Aggregation Among Binary, Count, and Duration Models: Estimating the Same Quantities from Different Levels of Data
James E Alt, Gary King, and Curtis Signorino. 2001. “Aggregation Among Binary, Count, and Duration Models: Estimating the Same Quantities from Different Levels of Data.” Political Analysis, 9, Pp. 21–44.
Binary, count, and duration data all code discrete events occurring at points in time. Although a single data generation process can produce all three of these data types, the statistical literature is not very helpful in providing methods to estimate parameters of the same process from each. In fact, only a single theoretical process exists for which known statistical methods can estimate the same parameters, and it is generally used only for count and duration data. The result is that seemingly trivial decisions about which level of data to use can have important consequences for substantive interpretations. We describe the theoretical event process for which results exist, based on time independence. We also derive a set of models for a time-dependent process and compare their predictions to those of a commonly used model. Any hope of understanding and avoiding the more serious problems of aggregation bias in events data is contingent on first deriving a much wider arsenal of statistical models and theoretical processes that are not constrained by the particular forms of data that happen to be available. We discuss these issues and suggest an agenda for political methodologists interested in this very large class of aggregation problems.
Article
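The abstract's core observation, that binary, count, and duration data can all arise from a single event process whose parameters are estimable from any of them, is easiest to see in the time-independent (Poisson/exponential) case the paper builds on. A small synthetic-data sketch (illustrative only, not the paper's estimators) recovers the same rate from all three data types:

```python
# Sketch: one constant-rate event process observed as counts, durations, and a
# binary indicator, with the same rate recovered from each.  Synthetic data only.
import numpy as np

rng = np.random.default_rng(1)
lam, T, n = 0.8, 1.0, 20000                       # true rate, window length, units

counts = rng.poisson(lam * T, size=n)             # count data
durations = rng.exponential(1.0 / lam, size=n)    # duration (waiting-time) data
any_event = (counts > 0).astype(int)              # binary data: any event in window?

lam_counts = counts.mean() / T                    # Poisson MLE
lam_durations = 1.0 / durations.mean()            # exponential MLE
lam_binary = -np.log(1.0 - any_event.mean()) / T  # from P(no event) = exp(-lam*T)

print(lam_counts, lam_durations, lam_binary)      # all close to 0.8
```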
A Digital Library for the Dissemination and Replication of Quantitative Social Science Research
Micah Altman, Leonid Andreev, Mark Diggory, Gary King, Daniel L Kiskis, Elizabeth Kolster, Michael Krot, and Sidney Verba. 2001. “A Digital Library for the Dissemination and Replication of Quantitative Social Science Research.” Social Science Computer Review, 19, Pp. 458–470.Abstract
The Virtual Data Center (VDC) software is an open-source, digital library system for quantitative data. We discuss what the software does and how it provides an infrastructure for the management and dissemination of distributed collections of quantitative data, and for the replication of results derived from these data.
Article
An Introduction to the Virtual Data Center Project and Software
Micah Altman, Leonid Andreev, Mark Diggory, Gary King, Elizabeth Kolster, M Krot, Sidney Verba, and Daniel L Kiskis. 2001. “An Introduction to the Virtual Data Center Project and Software.” Proceedings of the First ACM/IEEE Joint Conference on Digital Libraries, Pp. 203–204. Article
An Overview of the Virtual Data Center Project and Software
Micah Altman, Leonid Andreev, Mark Diggory, Gary King, Daniel L. Kiskis, Elizabeth Kolster, Michael Krot, and Sidney Verba. 2001. “An Overview of the Virtual Data Center Project and Software.” JCDL ’01: First Joint Conference on Digital Libraries, Pp. 203–204.

This software is now superseded by Dataverse.

In this paper, we present an overview of the Virtual Data Center (VDC) software, an open-source digital library system for the management and dissemination of distributed collections of quantitative data (see http://TheData.org). The VDC functionality provides everything necessary to maintain and disseminate an individual collection of research studies, including facilities for the storage, archiving, cataloging, translation, and on-line analysis of a particular collection. Moreover, the system provides extensive support for distributed and federated collections, including location-independent naming of objects, distributed authentication and access control, federated metadata harvesting, remote repository caching, and distributed "virtual" collections of remote objects.

1994
Transfers of Governmental Power: The Meaning of Time Dependence
James E Alt and Gary King. 1994. “Transfers of Governmental Power: The Meaning of Time Dependence.” Comparative Political Studies, 27, Pp. 190–210.
King, Alt, Burns, and Laver (1990) proposed and estimated a unified model in which cabinet durations depended on seven explanatory variables reflecting features of the cabinets and the bargaining environments in which they formed, along with a stochastic component in which the risk of a cabinet falling was treated as a constant across its tenure. Two recent research reports take issue with one aspect of this model. Warwick and Easton replicate the earlier findings for explanatory variables but claim that the stochastic risk should be seen as rising, and at a rate which varies, across the life of the cabinet. Bienen and van de Walle, using data on the duration of leaders, allege that random risk is falling. We continue in our goal of unifying this literature by providing further estimates with both cabinet and leader duration data that confirm the original explanatory variables’ effects, showing that leaders’ durations are affected by many of the same factors that affect the durability of the cabinets they lead, demonstrating that cabinets have stochastic risk of ending that is indeed constant across the theoretically most interesting range of durations, and suggesting that stochastic risk for leaders in countries with cabinet government is, if not constant, more likely to rise than fall.
Article
1990
A Unified Model of Cabinet Dissolution in Parliamentary Democracies
Gary King, James Alt, Nancy Burns, and Michael Laver. 1990. “A Unified Model of Cabinet Dissolution in Parliamentary Democracies.” American Journal of Political Science, 34, Pp. 846–871.
The literature on cabinet duration is split between two apparently irreconcilable positions. The attributes theorists seek to explain cabinet duration as a fixed function of measured explanatory variables, while the events process theorists model cabinet durations as a product of purely stochastic processes. In this paper we build a unified statistical model that combines the insights of these previously distinct approaches. We also generalize this unified model, and all previous models, by including (1) a stochastic component that takes into account the censoring that occurs as a result of governments lasting to the vicinity of the maximum constitutional interelection period, (2) a systematic component that precludes the possibility of negative duration predictions, and (3) a much more objective and parsimonious list of explanatory variables, the explanatory power of which would not be improved by including a list of indicator variables for individual countries.
Article
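As a rough illustration of the model class the abstract describes, a constant-hazard (exponential) duration model whose covariates enter through the expected duration, with censoring near a maximum period, here is a minimal synthetic-data sketch fit by maximum likelihood (not the authors' specification, covariates, or data). The exponential-mean link keeps predicted durations positive, and the likelihood accounts for right-censoring:

```python
# Sketch: exponential (constant-hazard) duration model with right-censoring,
# fit by maximum likelihood on synthetic data.  Covariates and values invented.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
n, cap = 2000, 60.0                                      # units and censoring point
X = np.column_stack([np.ones(n), rng.normal(size=n)])    # intercept + one covariate
beta_true = np.array([2.5, 0.5])
mu = np.exp(X @ beta_true)                               # expected duration, always > 0
t = rng.exponential(mu)                                  # latent durations
obs = np.minimum(t, cap)                                 # durations censored at the cap
cens = t >= cap

def neg_loglik(beta):
    m = np.exp(X @ beta)
    ll_event = -np.log(m) - obs / m                      # log density for observed ends
    ll_cens = -obs / m                                   # log survival for censored spells
    return -np.sum(np.where(cens, ll_cens, ll_event))

fit = minimize(neg_loglik, x0=np.zeros(2), method="BFGS")
print(fit.x)                                             # close to beta_true
```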