Areas of Research

    • Evaluating U.S. Social Security Administration Forecasts
      The accuracy of U.S. Social Security Administration (SSA) demographic and financial forecasts is crucial for the solvency of its Trust Funds, government programs comprising more than 50% of all federal government expenditures, industry decision making, and the evidence base of many scholarly articles. Forecasts are also essential for scoring policy proposals put forward by both political parties. Because SSA makes little replication information public and uses ad hoc, qualitative, and antiquated statistical forecasting methods, no one in or out of government has been able to produce fully independent alternative forecasts or policy scorings. Yet no systematic evaluation of SSA forecasts has ever been published by SSA or anyone else. We show that SSA's forecasting errors were approximately unbiased until about 2000, but then began to grow quickly, with increasingly overconfident uncertainty intervals. Moreover, the errors all turn out to be in the same potentially dangerous direction, each making the Social Security Trust Funds look healthier than they actually are. We also discover the cause of these findings with evidence from a large number of interviews we conducted with participants at every level of the forecasting and policy processes. We show that SSA's forecasting procedures meet all the conditions that the modern social-psychology and statistical literatures show make bias likely. When those conditions mixed with potent new political forces trying to change Social Security and influence the forecasts, SSA's actuaries hunkered down, trying hard to insulate themselves from the intense political pressures. Unfortunately, this otherwise laudable resistance to undue influence, along with their ad hoc qualitative forecasting models, led them also to miss important changes in the input data, such as retirees living longer lives, and drawing more benefits, than simple extrapolations predicted. We explain that solving this problem involves (a) removing human judgment where possible by using formal statistical methods -- via the revolution in data science and big data; (b) instituting formal structural procedures when human judgment is required -- via the revolution in social-psychological research; and (c) requiring transparency and data sharing to catch errors that slip through -- via the revolution in data sharing and replication. See also an article at Barron's about our work.
    • Incumbency Advantage
      Proof that previously used estimators of electoral incumbency advantage were biased, and a new unbiased estimator. Also, the first systematic demonstration that constituency service by legislators increases the incumbency advantage.
    • Mexican Health Care Evaluation
      An evaluation of the Mexican Seguro Popular program (designed to extend health insurance and regular and preventive medical care, pharmaceuticals, and health facilities to 50 million uninsured Mexicans), one of the world's largest health policy reforms of the last two decades. Our evaluation features a new design for field experiments that is more robust to the political interventions and implementation errors that have ruined many similar previous efforts; new statistical methods that produce more reliable and efficient results using fewer resources, assumptions, and data; and an implementation of these methods in the largest randomized health policy experiment to date. (See the Harvard Gazette story on this project.)
    • Presidency Research; Voting Behavior
      Resolution of the paradox of why polls are so variable over time during presidential campaigns even though the vote outcome is easily predictable before the campaign starts. Also, a resolution of a key controversy over absentee ballots during the 2000 presidential election; and the methodology of small-n research on executives.
    • Informatics and Data Sharing
      Replication Standards: New standards, protocols, and software for citing, sharing, analyzing, archiving, preserving, distributing, cataloging, translating, disseminating, naming, verifying, and replicating scholarly research data and analyses. Also includes proposals to improve the norms of data sharing and replication in science.
    • International Conflict
      Methods for coding, analyzing, and forecasting international conflict and state failure. Evidence that the causes of conflict, theorized to be important but often found to be small or ephemeral, are indeed tiny for the vast majority of dyads, but are large, stable, and replicable wherever the ex ante probability of conflict is large.
    • Legislative Redistricting
      The definition of partisan symmetry as a standard for fairness in redistricting; methods and software for measuring partisan bias and electoral responsiveness; discussion of U.S. Supreme Court rulings about this work. Evidence that U.S. redistricting reduces bias and increases responsiveness, and that the electoral college is fair; applications to legislatures, primaries, and multiparty systems.
    • Mortality Studies
      Methods for forecasting mortality rates (overall or for time series data cross-classified by age, sex, country, and cause); estimating mortality rates in areas without vital registration; measuring inequality in risk of death; applications to U.S. mortality, the future of Social Security, armed conflict, heart failure, and human security.
    • Teaching and Administration
      Publications and other projects designed to improve teaching, learning, and university administration, as well as broader writings on the future of the social sciences.
    • Automated Text Analysis
      Automated and computer-assisted methods of extracting, organizing, and consuming knowledge from unstructured text.
    • Causal Inference
      Methods for detecting and reducing model dependence (i.e., when minor model changes produce substantively different inferences) in inferring causal effects and other counterfactuals. Matching methods; "politically robust" and cluster-randomized experimental designs; causal bias decompositions.
    • Event Counts and Durations
      Statistical models to explain or predict how many events occur for each fixed time period, or the time between events. An application to cabinet dissolution in parliamentary democracies that united two previously warring scholarly literatures. Other applications to international relations and U.S. Supreme Court appointments.
    • Ecological Inference
      Inferring individual behavior from group-level data: The first approach to incorporate both unit-level deterministic bounds and cross-unit statistical information, methods for 2x2 and larger tables, Bayesian model averaging, applications to elections, software.
    • Missing Data
      Statistical methods to accommodate missing information in data sets due to scattered unit nonresponse, missing variables, or cell values or variables measured with error. Easy-to-use algorithms and software for multiple imputation and multiple overimputation for surveys, time series, and time series cross-sectional data. Applications to electoral, and other compositional, data.
    • Qualitative Research
      How the same unified theory of inference underlies quantitative and qualitative research alike; scientific inference when quantification is difficult or impossible; research design; empirical research in legal scholarship.
    • Rare Events
      How to save 99% of your data collection costs; bias corrections for logistic regression in estimating probabilities and causal effects in rare events data; estimating base probabilities or any quantity from case-control data (see the sketch after this list); automated coding of events.
    • Survey Research
      "Anchoring Vignette" methods for when different respondents (perhaps from different cultures, countries, or ethnic groups) understand survey questions in different ways; an approach to developing theoretical definitions of complicated concepts apparently definable only by example (i.e., "you know it when you see it"); how surveys work.
    • Unifying Statistical Analysis
      Development of a unified approach to statistical modeling, inference, interpretation, presentation, analysis, and software; integrated with most of the other projects listed here.
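
The case-control correction referenced under Rare Events above can be made concrete. A minimal R sketch of the prior-correction idea -- subtract ln[((1 - tau)/tau)(ybar/(1 - ybar))] from the estimated logit intercept when events were over-sampled, where tau is the population event rate and ybar the sample event rate -- on invented data (the variable names, sampling rate, and coefficients are illustrative only, not from any replication file):

    set.seed(1)
    n <- 2000
    x <- rnorm(n)
    y <- rbinom(n, 1, plogis(-4 + 1.5 * x))     # a rare event in the simulated population

    # Case-control style sample: keep all events, but only ~10% of non-events
    keep <- y == 1 | runif(n) < 0.1
    d <- data.frame(y = y[keep], x = x[keep])

    fit  <- glm(y ~ x, family = binomial, data = d)
    tau  <- mean(y)        # population event rate (assumed known in real applications)
    ybar <- mean(d$y)      # event rate in the retained sample

    b0.corrected <- coef(fit)[1] - log(((1 - tau) / tau) * (ybar / (1 - ybar)))

    plogis(b0.corrected)   # probability at x = 0: roughly plogis(-4), the truth here
    plogis(coef(fit)[1])   # the uncorrected intercept badly overstates it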

Recent Work

Methods for Extremely Large Scale Media Experiments and Observational Studies (Poster)
King, Gary, Benjamin Schneer, and Ariel White. 2014. “Methods for Extremely Large Scale Media Experiments and Observational Studies (Poster).” In Society for Political Methodology. Athens, GA.
This is a poster presentation describing (1) the largest-ever experimental study of media effects, with more than 50 cooperating traditional media sites, normally unavailable web site analytics, the text of hundreds of thousands of news articles, and tens of millions of social media posts, and (2) a design we used in preparation that attempts to anticipate experimental outcomes.
You Lie! Patterns of Partisan Taunting in the U.S. Senate (Poster)
Grimmer, Justin, Gary King, and Chiara Superti. 2014. “You Lie! Patterns of Partisan Taunting in the U.S. Senate (Poster).” In Society for Political Methodology. Athens, GA.
This is a poster that describes our analysis of "partisan taunting," the explicit, public, and negative attacks on another political party or its members, usually using vitriolic and derogatory language. We first demonstrate that most projects that hand-code text in the social sciences optimize with respect to the wrong criterion, resulting in large, unnecessary biases. We show how to fix this problem and then apply it to taunting. We find empirically that, contrary to most claims in the press and the literature, taunting is not inexorably increasing; it appears instead to be a rational political strategy, most often used by those least likely to win by traditional means -- ideological extremists, out-party members when the president is unpopular, and minority-party members. However, although taunting appears to be individually rational, it is collectively irrational: constituents may resonate with one cutting taunt by their Senator, but they might not approve if he or she were devoting large amounts of time to this behavior rather than, say, trying to solve important national problems. We hope to partially rectify this situation by posting public rankings of Senatorial taunting behavior.
Reverse-Engineering Censorship in China: Randomized Experimentation and Participant Observation
King, Gary, Jennifer Pan, and Margaret E. Roberts. 2014. “Reverse-Engineering Censorship in China: Randomized Experimentation and Participant Observation.” Science 345 (6199): 1-10. Publisher's Version.
Existing research on the extensive Chinese censorship organization uses observational methods with well-known limitations. We conducted the first large-scale experimental study of censorship by creating accounts on numerous social media sites, randomly submitting different texts, and observing from a worldwide network of computers which texts were censored and which were not. We also supplemented interviews with confidential sources by creating our own social media site, contracting with Chinese firms to install the same censoring technologies as existing sites, and—with their software, documentation, and even customer support—reverse-engineering how it all works. Our results offer rigorous support for the recent hypothesis that criticisms of the state, its leaders, and their policies are published, whereas posts about real-world events with collective action potential are censored.
Computer-Assisted Keyword and Document Set Discovery from Unstructured Text
King, Gary, Patrick Lam, and Margaret Roberts. 2014. “Computer-Assisted Keyword and Document Set Discovery from Unstructured Text.”
The (unheralded) first step in many applications of automated text analysis involves selecting keywords to choose documents from a large text corpus for further study. Although all substantive results depend crucially on this choice, researchers typically pick keywords in ad hoc ways, given the lack of formal statistical methods to help. Paradoxically, this often means that the validity of the most sophisticated text analysis methods depends in practice on the inadequate keyword counting or matching methods they are designed to replace. The same ad hoc keyword selection process is also used in many other areas, such as following conversations that rapidly innovate language to evade authorities, seek political advantage, or express creativity; generic web searching; eDiscovery; look-alike modeling; intelligence analysis; and sentiment and topic analysis. We develop a computer-assisted (as opposed to fully automated) statistical approach that suggests keywords from available text, without needing any structured data as inputs. This framing poses the statistical problem in a new way, which leads to a widely applicable algorithm. Our specific approach is based on training classifiers, extracting information from (rather than correcting) their mistakes, and then summarizing results with Boolean search strings. We illustrate how the technique works with examples in English and Chinese.
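
As a rough illustration of the step this framing addresses -- not the authors' classifier-based algorithm -- the R sketch below partitions a toy corpus by a seed keyword and ranks the remaining words by how sharply they separate the keyword-matched set from the rest, one crude way to suggest candidate terms for a Boolean search string (the documents and the seed term are invented):

    docs <- c("flu season hits early this year",
              "vaccine shortages reported during flu outbreak",
              "senate debates budget bill",
              "new vaccine trial shows promise",
              "budget talks stall in congress")

    seed   <- "flu"
    tokens <- lapply(tolower(docs), function(d) unique(strsplit(d, "\\s+")[[1]]))
    in.set <- sapply(tokens, function(t) seed %in% t)      # keyword-matched "search set" vs. the rest

    vocab <- setdiff(unique(unlist(tokens)), seed)
    score <- sapply(vocab, function(w) {
      p1 <- (sum(sapply(tokens[in.set],  function(t) w %in% t)) + 0.5) / (sum(in.set)  + 1)
      p0 <- (sum(sapply(tokens[!in.set], function(t) w %in% t)) + 0.5) / (sum(!in.set) + 1)
      log(p1 / p0)                                         # smoothed log-ratio of document frequencies
    })
    head(sort(score, decreasing = TRUE))                   # high scores = candidate keywords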
Google Flu Trends Still Appears Sick: An Evaluation of the 2013-2014 Flu Season
Lazer, David, Ryan Kennedy, Gary King, and Alessandro Vespignani. 2014. “Google Flu Trends Still Appears Sick: An Evaluation of the 2013-2014 Flu Season.”
Last year was difficult for Google Flu Trends (GFT). In early 2013, Nature reported that GFT was estimating more than double the percentage of doctor visits for influenza-like illness than the Centers for Disease Control and Prevention's (CDC) sentinel reports during the 2012-2013 flu season (1). Given that GFT was designed to forecast upcoming CDC reports, this was a problematic finding. In March 2014, our report in Science found that the overestimation problem in GFT was also present in the 2011-2012 flu season (2). The report also found strong evidence of autocorrelation and seasonality in the GFT errors, and presented evidence that the issues were likely, at least in part, due to modifications made by Google's search algorithm and the decision by GFT engineers not to use previous CDC reports or seasonality estimates in their models -- what the article labeled "algorithm dynamics" and "big data hubris," respectively. Moreover, the report and the supporting online materials detailed how difficult/impossible it is to replicate the GFT results, undermining independent efforts to explore the source of GFT errors and formulate improvements.
The Parable of Google Flu: Traps in Big Data Analysis
Lazer, David, Ryan Kennedy, Gary King, and Alessandro Vespignani. 2014. “The Parable of Google Flu: Traps in Big Data Analysis.” Science 343 (14 March): 1203-1205.
Large errors in flu prediction were largely avoidable, which offers lessons for the use of big data. In February 2013, Google Flu Trends (GFT) made headlines -- but not for a reason that Google executives or the creators of the flu-tracking system would have hoped. Nature reported that GFT was predicting more than double the proportion of doctor visits for influenza-like illness (ILI) than the Centers for Disease Control and Prevention (CDC), which bases its estimates on surveillance reports from laboratories across the United States (1, 2). This happened despite the fact that GFT was built to predict CDC reports. Given that GFT is often held up as an exemplary use of big data (3, 4), what lessons can we draw from this error?
Restructuring the Social Sciences: Reflections from Harvard’s Institute for Quantitative Social Science
King, Gary. 2014. “Restructuring the Social Sciences: Reflections from Harvard’s Institute for Quantitative Social Science.” PS: Political Science and Politics 47 (1): 165-172. Cambridge University Press version.
The social sciences are undergoing a dramatic transformation from studying problems to solving them; from making do with a small number of sparse data sets to analyzing increasing quantities of diverse, highly informative data; from isolated scholars toiling away on their own to larger scale, collaborative, interdisciplinary, lab-style research teams; and from a purely academic pursuit to having a major impact on the world. To facilitate these important developments, universities, funding agencies, and governments need to shore up and adapt the infrastructure that supports social science research. We discuss some of these developments here, as well as a new type of organization we created at Harvard to help encourage them -- the Institute for Quantitative Social Science.  An increasing number of universities are beginning efforts to respond with similar institutions. This paper provides some suggestions for how individual universities might respond and how we might work together to advance social science more generally.
The Troubled Future of Colleges and Universities (with comments from five scholar-administrators)
King, Gary, and Maya Sen. 2013. “The Troubled Future of Colleges and Universities (with Comments from Five Scholar-Administrators).” PS: Political Science and Politics 46 (1): 81-113.
The American system of higher education is under attack by political, economic, and educational forces that threaten to undermine its business model, governmental support, and operating mission. The potential changes are considerably more dramatic and disruptive than what we've already experienced. Traditional colleges and universities urgently need a coherent, thought-out response. Their central role in ensuring the creation, preservation, and distribution of knowledge may be at risk and, as a consequence, so too may be the spectacular progress across fields we have come to expect as a result. Symposium contributors include Henry E. Brady, John Mark Hansen, Gary King, Nannerl O. Keohane, Michael Laver, Virginia Sapiro, and Maya Sen.
How Social Science Research Can Improve Teaching
King, Gary, and Maya Sen. 2013. “How Social Science Research Can Improve Teaching.” PS: Political Science and Politics 46 (3): 621-629.
We marshal discoveries about human behavior and learning from social science research and show how they can be used to improve teaching and learning. The discoveries are easily stated as three social science generalizations: (1) social connections motivate, (2) teaching teaches the teacher, and (3) instant feedback improves learning. We show how to apply these generalizations via innovations in modern information technology inside, outside, and across university classrooms. We also give concrete examples of these ideas from innovations we have experimented with in our own teaching. See also a video presentation of this talk before the Harvard Board of Overseers.
How Censorship in China Allows Government Criticism but Silences Collective Expression
King, Gary, Jennifer Pan, and Margaret E. Roberts. 2013. “How Censorship in China Allows Government Criticism but Silences Collective Expression.” American Political Science Review 107 (2, May): 1-18.
We offer the first large-scale, multiple-source analysis of the outcome of what may be the most extensive effort to selectively censor human expression ever implemented. To do this, we have devised a system to locate, download, and analyze the content of millions of social media posts originating from nearly 1,400 different social media services all over China before the Chinese government is able to find, evaluate, and censor (i.e., remove from the Internet) the large subset they deem objectionable. Using modern computer-assisted text analytic methods that we adapt to and validate in the Chinese language, we compare the substantive content of posts censored to those not censored over time in each of 85 topic areas. Contrary to previous understandings, posts with negative, even vitriolic, criticism of the state, its leaders, and its policies are not more likely to be censored. Instead, we show that the censorship program is aimed at curtailing collective action by silencing comments that represent, reinforce, or spur social mobilization, regardless of content. Censorship is oriented toward attempting to forestall collective activities that are occurring now or may occur in the future --- and, as such, seems to clearly expose government intent.
Estimating Partisan Bias of the Electoral College Under Proposed Changes in Elector Apportionment
Thomas, A.C., Andrew Gelman, Gary King, and Jonathan N. Katz. 2012. “Estimating Partisan Bias of the Electoral College Under Proposed Changes in Elector Apportionment.” Statistics, Politics, and Policy: 1-13. Publisher's version.
In the election for President of the United States, the Electoral College is the body whose members vote to elect the President directly. Each state sends a number of electors equal to its total number of representatives and senators in Congress; all but two states (Nebraska and Maine) assign electors pledged to the candidate who wins the state's plurality vote. We investigate the effect on presidential elections if states were to assign their electoral votes according to results in each congressional district, and conclude that the direct popular vote and the current Electoral College are both substantially fairer than those alternatives in which states would divide their electoral votes by congressional district.
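
To make the two allocation rules being compared concrete, here is a toy R calculation for a single hypothetical state (the vote shares are invented; this illustrates only the mechanics of allocation, not the paper's method of estimating partisan bias):

    dem.share <- c(0.62, 0.58, 0.55, 0.41, 0.40)   # Democratic two-party share by district (invented)
    statewide <- mean(dem.share)                   # equal-turnout simplification
    ev <- length(dem.share) + 2                    # one elector per district plus two at-large

    wta      <- if (statewide > 0.5) ev else 0                             # winner-take-all
    district <- sum(dem.share > 0.5) + (if (statewide > 0.5) 2 else 0)     # district plan

    c(winner.take.all = wta, by.district = district)   # 7 vs. 5 with these shares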
Demographic Forecasting
Girosi, Federico, and Gary King. 2008. Demographic Forecasting. Princeton: Princeton University Press.Abstract
We introduce a new framework for forecasting age-sex-country-cause-specific mortality rates that incorporates considerably more information, and thus has the potential to forecast much better, than any existing approach. Mortality forecasts are used in a wide variety of academic fields, and for global and national health policy making, medical and pharmaceutical research, and social security and retirement planning. As it turns out, the tools we developed in pursuit of this goal also have broader statistical implications, in addition to their use for forecasting mortality or other variables with similar statistical properties. First, our methods make it possible to include different explanatory variables in a time series regression for each cross-section, while still borrowing strength from one regression to improve the estimation of all. Second, we show that many existing Bayesian (hierarchical and spatial) models with explanatory variables use prior densities that incorrectly formalize prior knowledge. Many demographers and public health researchers have fortuitously avoided this problem so prevalent in other fields by using prior knowledge only as an ex post check on empirical results, but this approach excludes considerable information from their models. We show how to incorporate this demographic knowledge into a model in a statistically appropriate way. Finally, we develop a set of tools useful for developing models with Bayesian priors in the presence of partial prior ignorance. This approach also provides many of the attractive features claimed by the empirical Bayes approach, but fully within the standard Bayesian theory of inference.
Ecological Inference: New Methodological Strategies
King, Gary, Ori Rosen, and Martin A. Tanner. 2004. Ecological Inference: New Methodological Strategies. New York: Cambridge University Press.
Ecological Inference: New Methodological Strategies brings together a diverse group of scholars to survey the latest strategies for solving ecological inference problems in various fields. The last half decade has witnessed an explosion of research in ecological inference – the attempt to infer individual behavior from aggregate data. The uncertainties and the information lost in aggregation make ecological inference one of the most difficult areas of statistical inference, but such inferences are required in many academic fields, as well as by legislatures and the courts in redistricting, by businesses in marketing research, and by governments in policy analysis.
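
One ingredient that unit-level information contributes to these methods is the deterministic bounds implied by the accounting identity T_i = B_i^b X_i + B_i^w (1 - X_i), where X_i is the group share and T_i the aggregate outcome rate in precinct i. A minimal R sketch with made-up precinct data (this shows only the bounds, not the statistical model that combines them with cross-unit information):

    # Deterministic bounds on the group-b rate B_b in a 2x2 ecological inference table
    ei.bounds <- function(x, t.obs) {
      lower <- pmax(0, (t.obs - (1 - x)) / x)
      upper <- pmin(1, t.obs / x)
      cbind(lower = lower, upper = upper)
    }

    x     <- c(0.10, 0.50, 0.90)   # group share in each precinct (invented)
    t.obs <- c(0.55, 0.40, 0.80)   # aggregate outcome rate in each precinct (invented)
    ei.bounds(x, t.obs)            # bounds are wide when x is small, narrow when x is large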
MatchingFrontier: R Package for Calculating the Balance-Sample Size Frontier
King, Gary, Christopher Lucas, and Richard Nielsen. 2014. “MatchingFrontier: R Package for Calculating the Balance-Sample Size Frontier.”
MatchingFrontier is an easy-to-use R package for making optimal causal inferences from observational data. Despite their popularity, existing matching approaches leave researchers with two fundamental tensions. First, they are designed to maximize one metric (such as propensity score or Mahalanobis distance) but are judged against another for which they were not designed (such as L1 or differences in means). Second, they lack a principled solution to revealing the implicit bias-variance trade-off: matching methods need to optimize with respect to both imbalance (between the treated and control groups) and the number of observations pruned, but existing approaches optimize with respect to only one; users then either ignore the other or tweak it, usually suboptimally, by hand. MatchingFrontier resolves both tensions by consolidating previous techniques into a single, optimal, and flexible approach. It calculates the matching solution with maximum balance for each possible sample size (N, N-1, N-2, ...). It thus directly calculates the entire balance-sample size frontier, from which the user can easily choose one, several, or all subsamples from which to conduct the final analysis, given their own choice of imbalance metric and quantity of interest. MatchingFrontier solves the joint optimization problem in one run, automatically, without manual tweaking, and without iteration. Although for each subset size k there exists a huge number (N choose k) of unique subsets, MatchingFrontier includes specially designed fast algorithms that give the optimal answer, usually in a few minutes. MatchingFrontier implements the methods in this paper: King, Gary, Christopher Lucas, and Richard Nielsen. 2014. "The Balance-Sample Size Frontier in Matching Methods for Causal Inference," copy at http://j.mp/1dRDMrE. See also http://projects.iq.harvard.edu/frontier/.
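
A rough R sketch of the object the package traces out -- imbalance as a function of sample size -- using a deliberately simple greedy rule and the absolute difference in covariate means as the imbalance metric. This is only an illustration of the frontier idea on simulated data, not the package's optimal algorithm or interface:

    set.seed(2)
    n  <- 200
    tr <- rbinom(n, 1, 0.3)
    x  <- rnorm(n, mean = ifelse(tr == 1, 1, 0))     # covariate imbalanced across groups

    imbalance <- function(x, tr) abs(mean(x[tr == 1]) - mean(x[tr == 0]))

    keep <- rep(TRUE, n)
    frontier <- data.frame(n.kept = sum(keep), imbalance = imbalance(x, tr))
    while (sum(keep & tr == 0) > 1) {
      cand  <- which(keep & tr == 0)
      # imbalance after dropping each remaining control unit, one at a time
      after <- sapply(cand, function(i) { k <- keep; k[i] <- FALSE; imbalance(x[k], tr[k]) })
      keep[cand[which.min(after)]] <- FALSE
      frontier <- rbind(frontier,
                        data.frame(n.kept = sum(keep), imbalance = min(after)))
    }
    plot(frontier$n.kept, frontier$imbalance, type = "l",
         xlab = "observations kept", ylab = "mean-difference imbalance")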
JudgeIt II: A Program for Evaluating Electoral Systems and Redistricting Plans
Gelman, Andrew, Gary King, and Andrew Thomas. 2010. “JudgeIt II: A Program for Evaluating Electoral Systems and Redistricting Plans.” Publisher's Version.
A program for analyzing almost any feature of district-level legislative election data, including prediction, evaluation of redistricting plans, and estimation of counterfactual hypotheses (such as what would happen if a term-limitation amendment were imposed). It implements statistical procedures described in a series of journal articles and has been used during redistricting in many states by judges, partisans, governments, private citizens, and many others. The earlier version won the APSA Research Software Award.
AMELIA II: A Program for Missing Data
Honaker, James, Gary King, and Matthew Blackwell. 2009. “AMELIA II: A Program for Missing Data.” Publisher's Version.
This program multiply imputes missing data in cross-sectional, time series, and time series cross-sectional data sets. It includes a Windows version (no knowledge of R required), and a version that works with R either from the command line or via a GUI.
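
A minimal R sketch of the command-line workflow, assuming the africa example data set distributed with the package and the variable names used in its documentation (the analysis model and pooling step below are generic Rubin's-rules combining, not a feature of Amelia itself):

    library(Amelia)
    data(africa)                                   # example panel that ships with Amelia

    a.out <- amelia(africa, m = 5, ts = "year", cs = "country")

    # Run the analysis model on each completed data set and pool the results
    fits <- lapply(a.out$imputations, function(d) lm(gdp_pc ~ trade + civlib, data = d))
    b    <- sapply(fits, function(f) coef(f)["trade"])
    se   <- sapply(fits, function(f) summary(f)$coefficients["trade", "Std. Error"])

    pooled.b  <- mean(b)
    pooled.se <- sqrt(mean(se^2) + (1 + 1 / length(b)) * var(b))   # Rubin's rules
    c(estimate = pooled.b, std.error = pooled.se)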
YourCast
Girosi, Federico, and Gary King. 2004. “YourCast.” Publisher's Version.
YourCast is free, open-source software that makes forecasts by running sets of linear regressions together in a variety of sophisticated ways. YourCast avoids the bias that results from stacking datasets from separate cross-sections and assuming constant parameters, as well as the inefficiency that results from running independent regressions in each cross-section.
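
To see the two baselines this description contrasts, here is a small R sketch on invented panel data: a fully pooled regression that wrongly assumes constant parameters, and independent per-cross-section regressions that are noisy when each series is short. (This is illustration only, not YourCast's interface; YourCast's Bayesian setup instead borrows strength across the cross-sections.)

    set.seed(4)
    d <- expand.grid(year = 1:30, country = LETTERS[1:5])
    d$slope <- rep(runif(5, 0.5, 1.5), each = 30)    # slopes truly differ by country
    d$y <- d$slope * d$year + rnorm(nrow(d), sd = 5)

    # (a) Fully pooled: one constant slope, biased for each individual country
    coef(lm(y ~ year, data = d))["year"]

    # (b) Independent regression per country: unbiased but noisy with short series
    sapply(split(d, d$country), function(g) coef(lm(y ~ year, data = g))["year"])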
Tomz, Michael, Jason Wittenberg, and Gary King. 2003. “Clarify: Software for Interpreting and Presenting Statistical Results.” Journal of Statistical Software.
This is a set of easy-to-use Stata macros that implement the techniques described in Gary King, Michael Tomz, and Jason Wittenberg's "Making the Most of Statistical Analyses: Improving Interpretation and Presentation." To install Clarify, type "net from http://gking.harvard.edu/clarify" at the Stata command line; the documentation explains how to do this. We also provide a zip archive for users who want to install Clarify on a computer that is not connected to the internet. Winner of the Okidata Best Research Software Award. Also try -ssc install qsim- to install a wrapper, donated by Fred Wolfe, that automates Clarify's simulation of dummy variables.
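
The simulation logic behind this approach -- draw coefficients from their estimated sampling distribution, compute a quantity of interest for each draw, and summarize -- can be sketched generically. A minimal R version on simulated data (Clarify itself is the Stata implementation; the data and quantity of interest below are invented for illustration):

    library(MASS)   # for mvrnorm

    set.seed(3)
    n <- 500
    x <- rnorm(n)
    y <- rbinom(n, 1, plogis(-0.5 + 0.8 * x))
    fit <- glm(y ~ x, family = binomial)

    # 1. Draw simulated coefficient vectors from the estimated sampling distribution
    sims <- mvrnorm(1000, mu = coef(fit), Sigma = vcov(fit))

    # 2. Compute the quantity of interest for each draw: Pr(y = 1 | x = 1)
    qi <- plogis(sims %*% c(1, 1))

    # 3. Summarize as a point estimate and 95% interval
    c(mean = mean(qi), quantile(qi, c(0.025, 0.975)))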
Simplifying Matching Methods for Causal Inference, at MIT, Political Methodology Series, Monday, March 16, 2015:

This talk explains how to make matching methods for causal inference easier to use and more powerful. Applied researchers commonly use matching methods as a data preprocessing step for reducing model dependence and bias, after which they use whatever statistical procedure they would have used without matching, such as regression. They routinely ignore the requirement that all matches be exact, and also commonly use ad hoc analyses that iterate between formal matching methods and informal balance and sample size checks.
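
For readers unfamiliar with the preprocessing step described above, a minimal R sketch using the MatchIt package and its lalonde example data (MatchIt is not named in this talk summary; it simply implements the match-then-analyze workflow): match first, then run whatever outcome model you would have run anyway on the matched data.

    library(MatchIt)

    data(lalonde, package = "MatchIt")             # example data distributed with MatchIt
    m.out <- matchit(treat ~ age + educ + re74 + re75, data = lalonde,
                     method = "nearest")           # 1:1 nearest-neighbor matching
    m.dat <- match.data(m.out)

    # The outcome model you would have run without matching, now on matched data
    summary(lm(re78 ~ treat + age + educ + re74 + re75, data = m.dat))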

Reverse-Engineering Censorship in China, at Harvard Graduate Commons Program, Wednesday, February 11, 2015:

Chinese government censorship of social media constitutes the largest selective suppression of human communication in recorded history. In three ways, we show, paradoxically, that this large system also leaves large footprints that reveal a great deal about itself and the intentions of the government. First is an observational study where we download all social media posts before the Chinese government can read and censor those they deem objectionable, and then detect from a network of computers all over the world which are censored.

Simplifying Matching Methods for Causal Inference, at Princeton University, Center for Statistics and Machine Learning, Friday, February 6, 2015:

This talk explains how to make matching methods for causal inference easier to use and more powerful. Applied researchers commonly use matching methods as a data preprocessing step for reducing model dependence and bias, after which they use whatever statistical procedure they would have used without matching, such as regression. They routinely ignore the requirement that all matches be exact, and also commonly use ad hoc analyses that iterate between formal matching methods and informal balance and sample size checks.

Reverse-Engineering Censorship in China, at American University, Friday, January 30, 2015:

Chinese government censorship of social media constitutes the largest selective suppression of human communication in recorded history. In three ways, we show, paradoxically, that this large system also leaves large footprints that reveal a great deal about itself and the intentions of the government. First is an observational study where we download all social media posts before the Chinese government can read and censor those they deem objectionable, and then detect from a network of computers all over the world which are censored.

Reverse-Engineering Censorship in China, at Stanford University, Computer Science, Data Science, Friday, January 16, 2015:

Chinese government censorship of social media constitutes the largest selective suppression of human communication in recorded history. In three ways, we show, paradoxically, that this large system also leaves large footprints that reveal a great deal about itself and the intentions of the government. First is an observational study where we download all social media posts before the Chinese government can read and censor those they deem objectionable, and then detect from a network of computers all over the world which are censored.

Simplifying Matching Methods for Causal Inference, at Stanford University, Department of Political Science, Wednesday, January 14, 2015:

This talk explains how to make matching methods for causal inference easier to use and more powerful. Applied researchers commonly use matching methods as a data preprocessing step for reducing model dependence and bias, after which they use whatever statistical procedure they would have used without matching, such as regression. They routinely ignore the requirement that all matches be exact, and also commonly use ad hoc analyses that iterate between formal matching methods and informal balance and sample size checks.

Reverse-Engineering Censorship in China, at Harvard Kennedy School, Inequality Seminar, Monday, October 27, 2014:

Chinese government censorship of social media constitutes the largest selective suppression of human communication in recorded history. In three ways, we show, paradoxically, that this large system also leaves large footprints that reveal a great deal about itself and the intentions of the government. First is an observational study where we download all social media posts before the Chinese government can read and censor those they deem objectionable, and then detect from a network of computers all over the world which are censored.

Reverse-Engineering Censorship in China, at University of Southern California, Marshall School of Business, Friday, October 17, 2014:

Chinese government censorship of social media constitutes the largest selective suppression of human communication in recorded history. In three ways, we show, paradoxically, that this large system also leaves large footprints that reveal a great deal about itself and the intentions of the government. First is an observational study where we download all social media posts before the Chinese government can read and censor those they deem objectionable, and then detect from a network of computers all over the world which are censored.

Reverse-Engineering Censorship in China, at Capital One, Wednesday, October 15, 2014:

Chinese government censorship of social media constitutes the largest selective suppression of human communication in recorded history. In three ways, we show, paradoxically, that this large system also leaves large footprints that reveal a great deal about itself and the intentions of the government. First is an observational study where we download all social media posts before the Chinese government can read and censor those they deem objectionable, and then detect from a network of computers all over the world which are censored.

Reverse Engineering Chinese Censorship, at Harvard Alumni Assembly 2014, Saturday, September 20, 2014:

Chinese government censorship of social media constitutes the largest selective suppression of human communication in recorded history. In three ways, we show, paradoxically, that this large system also leaves large footprints that reveal a great deal about itself and the intentions of the government. First is an observational study where we download all social media posts before the Chinese government can read and censor those they deem objectionable, and then detect from a network of computers all over the world which are censored.