One other area that has experienced consistent failure is economic forecasting. I cite what Silver says here with a certain amount of glee. Of course, I know this as background information, but seeing the hard facts marshalled together is striking. Take the survey of economic forecasts for 2008, for instance.
Nor was this a one-off occurrence caused by a freak, once-in-a-lifetime crisis.
As I mentioned, the economists in this survey thought that GDP would grow by about 2.4 percent in 2008, slightly below its long-term trend. This was a very bad forecast: GDP actually shrank by 3.3 percent once the financial crisis hit. What may be worse is that the economists were extremely confident in their bad prediction. They assigned only a 3 percent chance to the economy’s shrinking by any margin over the whole of 2008. And they gave it only about a 1-in-500 chance of shrinking by at least 2 percent, as it did.
Aggregate forecasts tend to be more reliable than individual forecasts, however. This has been bad news for in-house corporate economists, who were mostly eliminated in the 1990s; aggregated surveys such as Blue Chip Economic Indicators or Consensus Forecasts do better.
In fact, the actual value for GDP fell outside the economists’ prediction interval six times in eighteen years, or fully one-third of the time. Another study, which ran these numbers back to the beginnings of the Survey of Professional Forecasters in 1968, found even worse results: the actual figure for GDP fell outside the prediction interval almost half the time. There is almost no chance that the economists have simply been unlucky; they fundamentally overstate the reliability of their predictions.
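The "almost no chance" claim can be checked with a quick binomial calculation. This is my own sketch, not Silver's arithmetic, and it assumes (as Silver's discussion suggests) that the survey's intervals were 90 percent prediction intervals, so a well-calibrated forecaster should miss roughly 10 percent of the time:

```python
from math import comb

# Assumption: the forecasters published 90% prediction intervals,
# so a calibrated forecast should fall outside about 10% of the time.
n, misses, p_miss = 18, 6, 0.10

# P(6 or more misses in 18 years) under a Binomial(18, 0.10) model,
# treating each year's hit/miss as independent.
p_six_or_more = sum(
    comb(n, k) * p_miss**k * (1 - p_miss) ** (n - k)
    for k in range(misses, n + 1)
)
print(f"P(>= 6 misses out of 18): {p_six_or_more:.4f}")  # about 0.006
```

Under those assumptions, six or more misses in eighteen years has less than a 1 percent probability, which is why bad luck is not a plausible explanation.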
Perhaps the new availability of computers made forecasters particularly overconfident in the 1960s and 1970s, he says; that was the age of the massive economic forecasting model. But ultimately, he adds, you have to have some theoretical understanding or you will sink into mere data mining.
My research into the Survey of Professional Forecasters suggests that these aggregate forecasts are about 20 percent more accurate than the typical individual’s forecast at predicting GDP, 10 percent better at predicting unemployment, and 30 percent better at predicting inflation. This property—group forecasts beat individual ones—has been found to be true in almost every field in which it has been studied.
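The mechanism behind group forecasts beating individual ones can be illustrated with a toy simulation (my own sketch, not Silver's data): if each forecaster's error were independent noise, averaging many forecasts would shrink the error dramatically.

```python
import random

random.seed(0)
true_gdp = 2.0        # hypothetical true growth rate, in percent
n_forecasters = 20
n_trials = 2000

indiv_sq_err = 0.0
group_sq_err = 0.0
for _ in range(n_trials):
    # Each forecaster sees the truth plus independent noise (sd = 1.0).
    forecasts = [random.gauss(true_gdp, 1.0) for _ in range(n_forecasters)]
    consensus = sum(forecasts) / n_forecasters
    indiv_sq_err += (forecasts[0] - true_gdp) ** 2   # one typical individual
    group_sq_err += (consensus - true_gdp) ** 2

rmse_indiv = (indiv_sq_err / n_trials) ** 0.5
rmse_group = (group_sq_err / n_trials) ** 0.5
print(f"individual RMSE ~ {rmse_indiv:.2f}, consensus RMSE ~ {rmse_group:.2f}")
```

With fully independent errors the consensus error falls by a factor of about the square root of the group size. Real forecasters' errors are highly correlated, which is presumably why the observed gains Silver reports are a more modest 10 to 30 percent.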
Economics has inherent limitations on theory, however. One of the decisive intellectual impacts on me in college was learning about the Lucas Critique, which says that relationships estimated from historical data can break down when policy changes, because people adjust their behavior in response to the new policy; so you cannot rely on large-scale econometric models. After that, I lost interest in econometrics and forecasting.
The idea that a statistical model would be able to “solve” the problem of economic forecasting was somewhat in vogue during the 1970s and 1980s when computers came into wider use. But as was the case in other fields, like earthquake forecasting during that time period, improved technology did not cover for the lack of theoretical understanding about the economy; it only gave economists faster and more elaborate ways to mistake noise for a signal. Promising-seeming models failed badly at some point or another and were consigned to the dustbin.
The economics profession has mostly responded to this problem by searching for policy-invariant microfoundations: it tries to model individual choice, far below the level of economic aggregates. In practice, this has mostly entrenched naive rational-choice mathematical optimization even further.
A better answer is deeper knowledge of history. At least some people in the central banks note that we were fortunate that Ben Bernanke was an acknowledged expert in the history of the Great Depression, rather than in, say, real business cycle models.