Publications by Year: 2018

Management of Off-Task Time in a Participatory Environment
Gary King, Brian Lukoff, and Eric Mazur. 5/8/2018. “Management of Off-Task Time in a Participatory Environment.” United States of America US 9,965,972 B2 (U.S. Patent and Trademark Office).
Abstract
Participatory activity carried out using electronic devices is enhanced by occupying the attention of participants who complete a task before a set completion time. For example, a request or question having an expected response time less than the remaining answer time may be provided to early-finishing participants. In another of the many embodiments, the post-response tasks are different for each participant, depending upon, for example, the rate at which the participant has successfully provided answers to previous questions. This ensures continuous engagement of all participants.
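The core idea of the abstract can be sketched in a few lines. This is an illustrative sketch only, not the patented implementation; the function name, the question bank structure, and the use of the participant's success rate to pick a harder filler are all assumptions made for the example.

```python
def pick_filler_question(question_bank, remaining_seconds, participant_rate):
    """Choose a filler question for an early-finishing participant.

    question_bank: list of (question, expected_seconds) pairs.
    remaining_seconds: time left before the set completion time.
    participant_rate: fraction of prior questions answered correctly;
    used here (as one possible policy) to give stronger participants
    longer, presumably harder, filler questions.
    """
    # Only questions whose expected response time fits in the remaining time.
    candidates = [(q, t) for q, t in question_bank if t <= remaining_seconds]
    if not candidates:
        return None  # nothing fits; the participant simply waits
    # Sort by expected time; index into the list by the participant's rate.
    candidates.sort(key=lambda qt: qt[1])
    index = int(participant_rate * (len(candidates) - 1))
    return candidates[index][0]

bank = [("warm-up", 20), ("medium", 45), ("challenge", 90)]
print(pick_filler_question(bank, remaining_seconds=60, participant_rate=1.0))
# prints "medium": the 90-second question does not fit in the 60 seconds left
```

Because each participant's rate differs, the post-response tasks differ per participant, matching the embodiment described in the abstract.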
Patent
Use of a Social Annotation Platform for Pre-Class Reading Assignments in a Flipped Introductory Physics Class
Kelly Miller, Brian Lukoff, Gary King, and Eric Mazur. 3/2018. “Use of a Social Annotation Platform for Pre-Class Reading Assignments in a Flipped Introductory Physics Class.” Frontiers in Education, 3, 8, Pp. 1-12. Publisher's Version
Abstract

In this paper, we illustrate the successful implementation of pre-class reading assignments through a social learning platform that allows students to discuss the reading online with their classmates. We show how the platform can be used to understand how students are reading before class. We find that, with this platform, students spend an above-average amount of time reading (compared to that reported in the literature) and that most students complete their reading assignments before class. We identify specific reading behaviors that are predictive of in-class exam performance. We also demonstrate ways that the platform promotes active reading strategies and produces high-quality learning interactions between students outside class. Finally, we compare the exam performance of two cohorts of students, where the only difference between them is the use of the platform; we show that students do significantly better on exams when using the platform.

Reprinted in Cassidy, R., Charles, E. S., Slotta, J. D., Lasry, N., eds. (2019). Active Learning: Theoretical Perspectives, Empirical Studies and Design Profiles. Lausanne: Frontiers Media. doi: 10.3389/978-2-88945-885-1

Article
Aaron Kaufman, Gary King, and Mayya Komisarchik. 2018. “Compactness: An R Package for Measuring Legislative District Compactness If You Only Know it When You See It”. Publisher's Version
Abstract

This software implements the method described in Aaron Kaufman, Gary King, and Mayya Komisarchik. Forthcoming. “How to Measure Legislative District Compactness If You Only Know it When You See It.” American Journal of Political Science. Copy at http://j.mp/2u9OWrG 

Our paper abstract: To deter gerrymandering, many state constitutions require legislative districts to be "compact." Yet the law offers few precise definitions other than "you know it when you see it," which effectively implies a common understanding of the concept. In contrast, academics have shown that compactness has multiple dimensions and have generated many conflicting measures. We hypothesize that both are correct -- that compactness is complex and multidimensional, but that a common understanding exists across people. We develop a survey to elicit this understanding, with high reliability (in data where the standard paired-comparisons approach fails). We create a statistical model that predicts, with high accuracy and solely from a district's geometric features, the compactness evaluations of judges and public officials responsible for redistricting, among others. We also offer compactness data from our validated measure for 20,160 state legislative and congressional districts, as well as software to compute this measure from any district.
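To give a concrete sense of what a "geometric feature of the district" looks like, here is the classic Polsby-Popper ratio, 4πA/P². This is only an illustration of the kind of input such a model could use; it is not the paper's measure, which is learned from human evaluations rather than fixed in advance.

```python
import math

def polsby_popper(area, perimeter):
    """Polsby-Popper compactness: 4*pi*Area / Perimeter^2.

    Equals 1.0 for a circle (the most compact shape) and approaches 0
    for long, thin, or highly irregular district boundaries.
    """
    return 4 * math.pi * area / perimeter ** 2

# A circle of radius 1: area pi, perimeter 2*pi -> score exactly 1.
print(round(polsby_popper(math.pi, 2 * math.pi), 6))
# A 4x1 rectangle: area 4, perimeter 10 -> noticeably less compact.
print(round(polsby_popper(4.0, 10.0), 3))
```

A single ratio like this captures only one dimension of compactness, which is exactly the multidimensionality problem the paper's survey-based measure is designed to address.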

Connor T. Jerzak, Gary King, and Anton Strezhnev. 2018. “Readme2: An R Package for Improved Automated Nonparametric Content Analysis for Social Science”. Publisher's Version
Abstract

An R package for estimating category proportions in an unlabeled set of documents given a labeled set, by implementing the method described in Jerzak, King, and Strezhnev (2019). This method is meant to improve on the ideas in Hopkins and King (2010), which introduced a quantification algorithm to estimate category proportions without directly classifying individual observations. This version of the software refines the original method by implementing a technique for selecting optimal textual features to minimize the error of the estimated category proportions. Automatic differentiation, stochastic gradient descent, and batch renormalization are used to carry out the optimization. Other pre-processing functions are available, as well as an interface to the earlier version of the algorithm for comparison. The package also lets users extract the generated features for use in other tasks.

(Here's the abstract from our paper: Computer scientists and statisticians are often interested in classifying textual documents into chosen categories. Social scientists and others are often less interested in any one document and instead try to estimate the proportion falling in each category. The two existing types of techniques for estimating these category proportions are parametric "classify and count" methods and "direct" nonparametric estimation of category proportions without an individual classification step. Unfortunately, classify and count methods can sometimes be highly model dependent or generate more bias in the proportions even as the percent correctly classified increases. Direct estimation avoids these problems, but can suffer when the meaning and usage of language is too similar across categories or too different between training and test sets. We develop an improved direct estimation approach without these problems by introducing continuously valued text features optimized for this problem, along with a form of matching adapted from the causal inference literature. We evaluate our approach in analyses of a diverse collection of 73 data sets, showing that it substantially improves performance compared to existing approaches. As a companion to this paper, we offer easy-to-use software that implements all ideas discussed herein.)
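The direct-estimation idea from Hopkins and King (2010) that readme2 builds on can be sketched as solving a linear system. This is a toy two-category, two-feature-profile version in pure Python; the actual package uses many optimized, continuously valued text features, not this simple setup. With S a document's feature profile and D its category, aggregate proportions obey P(S=s) = Σ_d P(S=s|D=d) P(D=d); P(S|D) is estimated from the labeled set, P(S) is observed in the unlabeled set, and solving for P(D) yields the category proportions without classifying any individual document.

```python
def solve_2x2(m, b):
    """Solve the 2x2 linear system m @ x = b by Cramer's rule."""
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    x0 = (b[0] * m[1][1] - m[0][1] * b[1]) / det
    x1 = (m[0][0] * b[1] - b[0] * m[1][0]) / det
    return [x0, x1]

# Columns: P(S|D) for the two categories, estimated from labeled documents.
p_s_given_d = [[0.8, 0.3],   # profile s1 under categories d1, d2
               [0.2, 0.7]]   # profile s2 under categories d1, d2
# P(S) observed in the unlabeled set.
p_s = [0.55, 0.45]

# Recovered category proportions P(D), no per-document classification needed.
print([round(p, 3) for p in solve_2x2(p_s_given_d, p_s)])  # prints [0.5, 0.5]
```

The failure modes the abstract mentions show up directly in this sketch: if language usage is too similar across categories, the columns of P(S|D) are nearly identical and the system is ill-conditioned; if training and test sets differ too much, the labeled-set estimate of P(S|D) no longer describes the unlabeled documents.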