It’s an open secret in my discipline: in terms of accurate political predictions (the field’s benchmark for what counts as science), my colleagues have failed spectacularly and wasted colossal amounts of time and money. Many of today’s peer-reviewed studies offer trivial confirmations of the obvious, alongside policy documents filled with egregious, dangerous errors.
Part of the problem, she says, may be that the government tends to fund statistical research disproportionately. Now Republicans want to defund political science grants from the National Science Foundation.
The bill incited a national conversation about a subject that has troubled me for decades: the government disproportionately supports research amenable to statistical analysis and modeling, even though everyone knows that the clean equations mask messy realities that contrived data sets and assumptions don’t, and can’t, capture.
The two problems are linked. Forecasting has inherent limits, but you only compound the errors if you insist on throwing out most of the relevant qualitative evidence before you start. Any serious analysis needs some historical awareness, for a start.