<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Blog | Gary King</title><link>http://gking.harvard.edu/blog/</link><atom:link href="http://gking.harvard.edu/blog/index.xml" rel="self" type="application/rss+xml"/><description>Blog</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><copyright>Gary King</copyright><lastBuildDate>Fri, 24 Mar 2017 12:00:00 +0000</lastBuildDate><image><url>http://gking.harvard.edu/media/icon_hu_83e4f705aa477376.png</url><title>Blog</title><link>http://gking.harvard.edu/blog/</link></image><item><title>Suggestions for Changes in Journal Publication Rules</title><link>http://gking.harvard.edu/blog/suggestions-for-changes-in-journal-publication-rules/</link><pubDate>Fri, 24 Mar 2017 12:00:00 +0000</pubDate><guid>http://gking.harvard.edu/blog/suggestions-for-changes-in-journal-publication-rules/</guid><description>&lt;div class="hwp-text-block field field--name-field-hwp-body field--type-text-long field--label-hidden"&gt;&lt;p&gt;(This was originally a post on the now-defunct Perestroika mailing list, on 9/27/10.)&lt;/p&gt;&lt;p&gt;I have two suggestions stemming from the discussion over the last few weeks.&lt;/p&gt;&lt;p&gt;Well before the Perestroika list started, many people had expressed complaints about how the American Political Science Review (APSR) and some other journals poorly represent the work of some; despite some changes, the complaints haven't subsided much. Since the APSR treats journal space as a scarce resource, it should not be a surprise to any of us political scientists that we still see lots of political discussions like these surrounding the allocation of those scarce resources. However, more recently, the world seems to have passed a threshold in publication where online is as good as or better than print. 
If you don't already find it more convenient to look for an article in JSTOR sitting at your desk than reaching 'all the way' behind you to grab the print version, you will soon. Plus it's much easier to search electronic versions, and a vast amount of value-added information is being created with the digital but not print versions – comments, collaborative highlighting &amp;amp; note taking, social media posts right from the publications, etc., etc. Moreover, in many areas of scholarship, if you can't find prior research through Google or Bing, it just doesn't exist. &lt;/p&gt;&lt;p&gt;Whether this change is good or bad is a fair question, but not my point. Instead, I suggest we ask the APSR and APSA to recognize this change and respond to it, since when scarce resources become plentiful, many problems are automatically solved. And acting as if they are still scarce only perpetuates unnecessary division. So instead of pushing the APSR to publish more works like whatever we each do, why not push them to vastly increase the number of articles published? The marginal cost of publishing more articles is now nearly zero. My own view is that the threshold for publication should be something simple like whether the article represents a positive contribution to our knowledge or understanding of the world (or to something!); if yes, then publish. If it's wrong, or misleading, or unclear, or dumb, or fraudulent, then reject. And if reviewers can "revise and resubmit" the author into doing a better job, then great. But we don't need reviewers and editors deciding on assessments of "importance", "area", "quant vs qual balance", or other irrelevant or essentially political matters. &lt;/p&gt;&lt;p&gt;The Internet (and searches that return &amp;gt;2M items, but ranked so that the one you want is first) is ample proof that more information doesn't hurt anyone. 
When the press was actually a physical press, publication was expensive, and the presses, protective of their pocketbooks, became the gatekeepers to the visibility of our work; now publication is almost free. If someone wants to have a series of awards for the best articles, or articles that are above some higher threshold, or which meet some criteria such as area or balance or anything else, then fine. Let the politics continue around these awards, rather than around publication itself, which today essentially ensures that some works, or some types of work, never see the light of day. &lt;/p&gt;&lt;p&gt;I'll go another step. The point of the APSA is the creation, dissemination, and preservation of knowledge about political science (and a vast array of supporting activities). To achieve these goals better, why not make the APSR open access and free? Open access journals have more readers (especially in the developing world) and a bigger impact on the rest of the scholarly literature. The APSA as an association will still do very well financially, and its mission will be achieved at a much higher level.&lt;/p&gt;&lt;p&gt;So how about it? Encourage the APSR to publish more – without discriminating &lt;em&gt;at all&lt;/em&gt; based on area or type of work, only on quality – and to make the products of our work available for free to the world.&lt;/p&gt;&lt;/div&gt;</description></item><item><title>are you making causal inferences?</title><link>http://gking.harvard.edu/blog/are-you-making-causal-inferences/</link><pubDate>Tue, 25 Aug 2009 12:00:00 +0000</pubDate><guid>http://gking.harvard.edu/blog/are-you-making-causal-inferences/</guid><description>&lt;div class="hwp-text-block field field--name-field-hwp-body field--type-text-long field--label-hidden"&gt;&lt;p&gt;Do you have a research project where you're trying to make causal inferences from observational data? Do you think matching might be a useful technique? 
Are you wondering how to get reviewers to stop bothering you?! Would you like some free consulting advice and data analysis help?&lt;/p&gt;&lt;p&gt;We're involved in some methodological research in this area and could use some experience exploring different types of data sets. If you are interested, we would like to help you with your data analyses and inferences (for a limited number of people and a limited time). Our interactions about your data will remain between us; in particular, we promise not to scoop you, criticize you in print, or use your data for any substantive purposes at all. In fact, for most purposes we don't even need to see your dependent variable. We would be interested in reporting a few aggregate statistics in our research to test the methods we are developing, but we would only do that with your permission.&lt;/p&gt;&lt;p&gt;If so, please send us an email.&lt;/p&gt;&lt;p&gt;Many thanks,&lt;/p&gt;&lt;p&gt;Stefano Iacus (&lt;a href="mailto:stefano.iacus@unimi.it"&gt;stefano.iacus@unimi.it&lt;/a&gt;)&lt;br/&gt;Gary King (&lt;a href="mailto:king@harvard.edu"&gt;king@harvard.edu&lt;/a&gt;)&lt;/p&gt; &lt;p&gt;Posted by Gary King at August 25, 2009 11:02 AM&lt;/p&gt;&lt;/div&gt;</description></item><item><title>The Value of Control Groups in Causal Inference (and Breakfast Cereal)</title><link>http://gking.harvard.edu/blog/the-value-of-control-groups-in-causal-inference-and-breakfas/</link><pubDate>Mon, 31 Oct 2005 12:00:00 +0000</pubDate><guid>http://gking.harvard.edu/blog/the-value-of-control-groups-in-causal-inference-and-breakfas/</guid><description>&lt;div class="hwp-text-block field field--name-field-hwp-body field--type-text-long field--label-hidden"&gt;&lt;p&gt;A few years ago, I taught the following lesson in my daughter's kindergarten class and my graduate methods class in the same week. It worked pretty well in both. Anyone who has a kid in kindergarten, some good graduate students, or both, might want to try this. 
It was especially fun for the instructor.&lt;/p&gt;&lt;p&gt;To start, I hold up some nails and ask "does everyone like to eat nails?" The kindergarten kids scream, "Nooooooo." The graduate students say "No," trying to look cool. I say I'm going to convince them otherwise.&lt;/p&gt;&lt;p&gt;I hand out a little magnet to everyone. I ask the class to figure out what it sticks to and what it doesn't stick to. After a few minutes running around the classroom, the kindergartners figure out that magnets stick to stuff with iron in it, and anything without iron in it doesn't stick. The graduate students sit there looking cool.&lt;/p&gt;&lt;p&gt;From behind the table, I pull out a box of Total Cereal (in my experience, teaching is just like doing magic tricks, except that you get paid more as a magician). I show them the list of ingredients; "iron, 100 percent" is on the list. I ask, by a show of hands, whether this is the same iron as in the nails. 3 of 23 kindergarten kids say "yes"; 5 of 44 Harvard graduate students say "yes" (almost the same percent in both classes!).&lt;/p&gt;&lt;p&gt;I show the students that the box is sealed (and I have nothing up my sleeves). Then I open the box, spill some cereal on a table, and smash it up into tiny pieces with a rolling pin. I take the pile of squashed cereal around the room and let the kids put their magnet next to it and see whether the cereal sticks to the magnet. To everyone's amazement, it sticks!&lt;/p&gt;&lt;p&gt;Then I ask, "are we now convinced that the iron in the nails is the same iron as in the cereal?" All the kids in kindergarten and all the graduate students say "yes."&lt;/p&gt;&lt;p&gt;I respond by saying "but how do you know the cereal stuck to the magnet because it had iron in it? Maybe it was just sticky, like gum or tape." Now that I finally have their attention (not a minor matter with kindergartners), I get to explain to them what a control group is (the point of the lesson). 
And from behind the table, I pull out a box of Rice Krispies (which are made of nothing). We examine the side of the box to verify the lack of (much) iron, and then I smash up the Rice Krispies and let them see if their magnet sticks. It doesn't stick!&lt;/p&gt;&lt;p&gt;Everyone gets to take home a cool fact (they love to eat the stuff in nails), I get to convey the point of the lesson in a way they won't forget (the &lt;em&gt;essential&lt;/em&gt; role of control groups in causal inference), and everyone gets a free magnet.&lt;/p&gt;&lt;p&gt;(This post was originally published on 10/31/2005. Since then, Kellogg's has started putting iron in Rice Krispies, so to do this experiment you now need to find some other cereal. I find that cereal marked "organic" often doesn't have added iron.)&lt;/p&gt;&lt;/div&gt;</description></item><item><title>A Social Science of Architecture</title><link>http://gking.harvard.edu/blog/a-social-science-of-architecture/</link><pubDate>Tue, 18 Oct 2005 12:00:00 +0000</pubDate><guid>http://gking.harvard.edu/blog/a-social-science-of-architecture/</guid><description>&lt;div class="hwp-text-block field field--name-field-hwp-body field--type-text-long field--label-hidden"&gt;&lt;p&gt; After eight years of learning something about architecture (from &lt;a href="http://www.pcf-p.com/a/f/fme/hnc/b/b.html"&gt;Harry Cobb&lt;/a&gt; and his team) and extensive programmatic planning, the Institute for Quantitative Social Science this semester moves into the new Center for Government and International Studies buildings. Our official address is the Third Floor of 1737 Cambridge Street (the design is vaguely reminiscent of the bridge of the Starship Enterprise), although we also occupy some of the other floors and some of the building across the street. 
It is not really finished yet, but it is a terrific facility, with floor-to-ceiling windows in most offices, a wonderful seminar room for our Applied Statistics Workshop, and many other useful features. Perhaps even more remarkably, everyone seems to love it (Congratulations, Harry!).&lt;/p&gt;&lt;p&gt; One thing I learned during this long process was that the field of architecture draws on first-rate science, engineering, and art, but very little modern social scientific analysis. Yet, social science, quantitative social science in particular, could greatly help architecture achieve its goals, I think. Ultimately the goal of this particular $100M-plus building, and of most buildings built by universities, is not only to create beautiful surroundings but also to increase the amount of knowledge created, disseminated, and preserved (my summary of the purpose of modern research universities). So do not limit yourself to asking how a building makes you feel, what architectural critics might think, how it fits in with the style of other buildings on campus, or whether your office is to your liking. Ask instead, or in addition, whether the building increases the units of knowledge created, disseminated, and preserved more than some other building or some other potential use for the money. This strikes me as the central question to be answered by those who decide what buildings to build, and yet the systematic scientific basis for this decision is almost nonexistent.&lt;/p&gt;&lt;p&gt; As such, some systematic data collection could have a considerable impact on this field. Do corridors or suites make the faculty and students produce and learn more? Does vertical circulation work as well as horizontal? Should we put faculty in close proximity to others working on the same projects, or should we maximize interdisciplinary adjacencies? Which types of floor plans increase interaction? Which types of interaction produce the most knowledge created, disseminated, and preserved? 
Do we want to build buildings that encourage doors to be kept open, so as to make the faculty seem approachable, or should we try to keep doors closed so that they can get work done? In this field, as in most others, a great deal can be learned by directly measuring the relevant outcome variable; in architecture, quite remarkably, this has only rarely been attempted.&lt;/p&gt;&lt;p&gt; Of course it is done all the time via qualitative judgments, but in almost every field of science where a sufficient fraction of information can be quantified, statistical analysis beats human judgment. There is no reason to think that the same kind of statistical science wouldn't create enormous advances here as well.&lt;/p&gt;&lt;p&gt; I have heard of a couple of isolated academic works on this subject, but we're talking about some of the most important and expensive decisions universities make (and among the biggest decisions businesses and many other institutions make, too). There should be an entire subfield devoted to the subject. All it would take is some data collection and analysis. Outcome measures could include, for example, faculty citation rates, publications, awards, grants, and departmental rankings, along with student recruitment, retention, graduation, and placement rates. The key treatment variables would include various information on the types of buildings and architectural design. Random assignment seems infeasible, but relatively exogenous features might include departmental moves or city and town building restrictions. Universities that allow faculty the choice of buildings could also provide useful revealed-preference measures. 
I would think that a few enterprising scholars on this path could have an enormous impact both in creating a new academic subfield and in improving a vitally important set of university (and societal) decisions.&lt;/p&gt;&lt;p&gt; In the interim, we'll enjoy the new buildings and hope they have a positive impact.&lt;/p&gt;&lt;/div&gt;</description></item></channel></rss>