Political scientists are defensive these days because in May the House passed an amendment to a bill eliminating National Science Foundation grants for political scientists. Soon the Senate may vote on similar legislation. Colleagues, especially those who have received N.S.F. grants, will loathe me for saying this, but just this once I’m sympathetic with the anti-intellectual Republicans behind this amendment. Why? The bill incited a national conversation about a subject that has troubled me for decades: the government — disproportionately — supports research that is amenable to statistical analyses and models, even though everyone knows the clean equations mask messy realities that contrived data sets and assumptions don’t, and can’t, capture. Science involves quantifiable data and testable hypotheses. Political science has neither.
It’s an open secret in my discipline: in terms of accurate political predictions (the field’s benchmark for what counts as science), my colleagues have failed spectacularly and wasted colossal amounts of time and money. The most obvious example may be political scientists’ insistence, during the cold war, that the Soviet Union would persist as a nuclear threat to the United States. In 1993, in the journal International Security, for example, the cold war historian John Lewis Gaddis wrote that the demise of the Soviet Union was “of such importance that no approach to the study of international relations claiming both foresight and competence should have failed to see it coming.” And yet, he noted, “None actually did so.” Careers were made, prizes awarded and millions of research dollars distributed to international relations experts, even though Nancy Reagan’s astrologer may have had superior forecasting skills.
Political prognosticators fare just as poorly on domestic politics. In a peer-reviewed journal, the political scientist Morris P. Fiorina wrote that “we seem to have settled into a persistent pattern of divided government” — of Republican presidents and Democratic Congresses. Professor Fiorina’s ideas, which synced nicely with the conventional wisdom at the time, appeared in an article in 1992 — just before the Democrat Bill Clinton’s presidential victory and the Republican 1994 takeover of the House.
How do we know that these examples aren’t atypical cherries picked by a political theorist munching sour grapes? Because in the 1980s, the political psychologist Philip E. Tetlock began systematically quizzing 284 political experts — most of whom were political science Ph.D.’s — on dozens of basic questions, like whether a country would go to war, leave NATO, or change its boundaries, or whether a political leader would remain in office. His book “Expert Political Judgment: How Good Is It? How Can We Know?” won the American Political Science Association’s prize for the best book published on government, politics or international affairs. It’s also better when your predictions work slightly better than random guesses.
Professor Tetlock’s main finding? Chimps randomly throwing darts at the possible outcomes would have done almost as well as the experts.
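To make that benchmark concrete, here is a minimal simulation of the comparison. The three-outcome setup mirrors Tetlock's question format, but the expert hit rate below is a hypothetical number chosen only to illustrate what "almost as well as the experts" means; it is not a figure from the book.

```python
import random

# A minimal sketch of the chimp-versus-expert comparison: each question has
# three possible outcomes (e.g., status quo, change up, change down). The
# "chimp" picks uniformly at random; the expert accuracy is a hypothetical
# number used only for illustration.

random.seed(0)

N_QUESTIONS = 10_000
OUTCOMES = 3                      # three-way questions
CHIMP_ACCURACY = 1 / OUTCOMES     # uniform dart-throwing: one in three
EXPERT_ACCURACY = 0.36            # hypothetical: barely above chance

def hit_rate(p_correct: float, n: int) -> float:
    """Fraction of n questions answered correctly at per-question accuracy p."""
    return sum(random.random() < p_correct for _ in range(n)) / n

print(f"chimp:  {hit_rate(CHIMP_ACCURACY, N_QUESTIONS):.3f}")
print(f"expert: {hit_rate(EXPERT_ACCURACY, N_QUESTIONS):.3f}")
```

On three-way questions the dart-throwing baseline is one in three, so an expert at 0.36 is barely distinguishable from the chimp.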
The author, a political scientist himself, thinks that because political scientists cannot accurately predict anything, they should be handed their grant money at random, for nothing.
Government can — and should — assist political scientists, especially those who use history and theory to explain shifting political contexts, challenge our intuitions and help us see beyond daily newspaper headlines. Research aimed at political prediction is doomed to fail, at least if the idea is to predict more accurately than a dart-throwing chimp. That’s not obvious to me.
To shield research from disciplinary biases of the moment, the government should finance scholars through a lottery: anyone with a political science Ph.D. and a defensible budget could apply for grants at different financing levels. And of course government needs to finance graduate student studies and thorough demographic, political and economic data collection. I look forward to seeing what happens to my discipline and politics more generally once we stop mistaking probability studies and statistical significance for knowledge.
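As a sketch of how such a lottery might work in practice, the procedure reduces to a random draw constrained by a fixed budget. The applicant pool, financing tiers, and budget figure below are all hypothetical; the op-ed specifies no such details.

```python
import random

# A minimal sketch of the proposed funding lottery: every applicant with a
# political science Ph.D. and a defensible budget enters the pool, and
# winners are drawn at random until the budget runs out. All numbers are
# hypothetical.

random.seed(0)

TIERS = [50_000, 150_000, 500_000]   # hypothetical financing levels
TOTAL_BUDGET = 2_000_000             # hypothetical annual pool

applicants = [
    {"name": f"applicant_{i}", "tier": random.choice(TIERS)}
    for i in range(200)
]

random.shuffle(applicants)           # the lottery: order, not merit, decides

funded, remaining = [], TOTAL_BUDGET
for a in applicants:
    if a["tier"] <= remaining:
        funded.append(a)
        remaining -= a["tier"]

print(f"funded {len(funded)} of {len(applicants)}; ${remaining:,} unspent")
```

The point of the design is exactly what the randomness guarantees: no reviewer's disciplinary bias can influence who gets funded, only the draw.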