Survey researchers have long sought to protect respondent privacy through de-identification (removing names, addresses, and other directly identifying information) before analyzing or sharing data. Although these procedures help in important circumstances, recent research demonstrates that they fail to protect survey respondents from intentional re-identification attacks, a problem that threatens to undermine vast survey enterprises in academia, government, and industry. The problem is especially acute for political science because political beliefs are not only the subject of our survey questions and scholarship but also information that respondents seek to keep private and that elected representatives use when writing privacy legislation. In this paper, we build on the concept of "differential privacy" to offer new data sharing procedures for survey research with mathematical guarantees of respondent privacy and statistical validity guarantees for social scientists analyzing differentially private data. The cost of these new procedures is larger standard errors and wider confidence intervals, which can be offset with somewhat larger sample sizes.
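To make the privacy/precision trade-off concrete, the following is a minimal sketch of the Laplace mechanism, the canonical building block of differential privacy. It is illustrative only, not the data sharing procedure developed in this paper: the `epsilon` value, the bounds on responses, and the simulated yes/no survey data are all assumptions chosen for the example. Because the noise scale shrinks in proportion to 1/n, the extra variance the mechanism adds to an estimated mean can be offset by collecting a somewhat larger sample, as the abstract states.

```python
import numpy as np

rng = np.random.default_rng(0)

def dp_mean(x, epsilon, lower=0.0, upper=1.0):
    """Epsilon-differentially-private mean via the Laplace mechanism.

    For n responses bounded in [lower, upper], changing one respondent's
    answer can move the mean by at most (upper - lower) / n, so Laplace
    noise with scale sensitivity / epsilon suffices for epsilon-DP.
    """
    n = len(x)
    sensitivity = (upper - lower) / n
    return np.mean(x) + rng.laplace(0.0, sensitivity / epsilon)

if __name__ == "__main__":
    # Simulated binary survey responses (hypothetical data): the standard
    # deviation of the privatized estimate falls as the sample grows.
    for n in (100, 1000, 10000):
        x = rng.binomial(1, 0.6, size=n)
        estimates = [dp_mean(x, epsilon=1.0) for _ in range(2000)]
        print(n, np.std(estimates))
```

The printed spread of the privatized estimates shrinks roughly tenfold for each tenfold increase in n, which is the sense in which larger samples compensate for the privacy noise.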