All forecasts are wrong. That’s the nature of trying to peer through the fog of uncertainty for guidance on the future. Some forecasts are less wrong than others, of course, but it’s always sensible to assume that the prediction du jour contains noise. That doesn’t mean forecasting is worthless, but it does make predictions dangerous if you don’t view the estimates in probabilistic terms. That’s hardly standard procedure in the wider world, however. Thinking through the finer points of forecasting is too often neglected, and sometimes ignored entirely.
Perceptions about forecasts generally fall into three broad categories. One is blindly accepting the numbers. At the opposite extreme is rejecting forecasts out of hand. Somewhere between these poles lies the greatest opportunity. But it’s no free lunch. The primary challenge is managing the risk that bedevils all forecasts—a crucial subject that’s at the heart of Nate Silver’s new book The Signal and the Noise: Why So Many Predictions Fail – but Some Don’t.
Silver doesn’t offer hot tips or shortcuts to better forecasting, for the simple reason that such things don’t exist. Instead, he takes readers on a tour of forecasting as a process across several disciplines—economics, finance, politics, baseball, earthquakes, weather, and a few other areas. The idea is that by reviewing what’s worked, what hasn’t, and why, we can improve our capacity for generating better forecasts and for interpreting the predictions of others more objectively. The book largely succeeds in its basic mission, and not a moment too soon, given the precarious state of the economy and the explosion of fast-and-loose forecasts about what’s coming.
The main takeaway from Silver’s book, for me, is that the forecast du jour is worthless without knowing something about the process behind the guesstimate. That’s a reminder that most of the daily deluge of predictions is about as useful as a glass of saltwater in the middle of the Atlantic. “A probabilistic consideration of outcomes is an essential part of a scientific forecast,” Silver writes.
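To make that concrete, here’s a minimal sketch of the difference between a point forecast and a probabilistic one, written in Python with invented numbers (the 2.0% estimate and 1.2-point standard error are assumptions for illustration, not anyone’s actual forecast):

```python
import numpy as np

# Hypothetical point forecast for next quarter's GDP growth (annualized %)
# and an assumed standard error -- both invented for illustration.
point_forecast = 2.0
std_error = 1.2  # stand-in for the historical volatility of forecast errors

rng = np.random.default_rng(42)
# Treat the forecast as a distribution of possible outcomes,
# not a single number.
simulated = rng.normal(point_forecast, std_error, size=100_000)

print(f"point estimate: {point_forecast:.1f}%")
print(f"80% interval:   {np.percentile(simulated, 10):.1f}% "
      f"to {np.percentile(simulated, 90):.1f}%")
print(f"P(growth < 0):  {np.mean(simulated < 0):.1%}")
```

The single number says “growth of about 2%”; the distribution also says how likely a contraction is, which is usually the information that matters.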
The dirty little secret that’s really not much of a secret is that good forecasting requires hard work. In economics, that means combining theory with empirical fact, blended carefully and intelligently with an econometric toolkit. As part of this process it’s important to track the forecasts and compare the results with the actual data as it arrives. It’s also crucial to design a forecasting system that’s dynamic, which is to say one whose forecasts evolve as new data arrives. I do a bit of this on these pages with the GDP nowcasts and the ongoing analysis and forecasting of the economic trend.
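As a toy illustration of what “dynamic” means here, the following Python sketch (fabricated data; not the methodology behind the nowcasts mentioned above) refits a naive AR(1) model each quarter on all the data available to date, so the forecast always reflects the latest numbers:

```python
import numpy as np

rng = np.random.default_rng(0)
# Fabricated quarterly GDP growth series -- a stand-in for real data.
growth = 2.0 + rng.normal(0, 1.0, size=40)

forecasts = {}
# Expanding window: each quarter, refit a naive AR(1) on all data to date
# and record the one-step-ahead forecast before the next number arrives.
for t in range(8, len(growth)):
    history = growth[:t]
    x, y = history[:-1], history[1:]
    beta, alpha = np.polyfit(x, y, 1)   # AR(1) slope and intercept
    forecasts[t] = alpha + beta * history[-1]

# Each stored forecast can later be scored against the realized value.
for t in list(forecasts)[:3]:
    print(f"q{t}: forecast {forecasts[t]:+.2f}, actual {growth[t]:+.2f}")
```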
Tracking and analyzing the errors, which are inevitable, is essential in the forecasting game. This information provides valuable context for improving the process, and for keeping forecasters humble. In the grand scheme of economic prognostication, there’s a sea of guesses ahead of the actual data; what you see very little of is after-the-fact analysis. That’s partly because writing about the full forecasting process and the post-mortem analysis is time-consuming. I spend a fair amount of time tracking my predictions in a bid to learn from the errors, and a bit of that shows up on these pages (see the links above, for instance). Most serious economists do the same, and at a fairly sophisticated level, even if the details aren’t available gratis to the public.
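A bare-bones version of that post-mortem might look like this (Python again, with hypothetical numbers): log each forecast next to the value eventually reported, then summarize the bias and the typical size of the misses.

```python
import numpy as np

# Hypothetical log of forecasts vs. the values eventually reported.
forecast = np.array([2.1, 1.8, 2.5, 0.9, 3.0, 2.2])
actual   = np.array([1.7, 2.0, 2.3, 1.4, 2.6, 1.9])

errors = forecast - actual
print(f"mean error (bias): {errors.mean():+.2f}")  # persistent over/under-shoot?
print(f"mean abs error:    {np.abs(errors).mean():.2f}")
print(f"RMSE:              {np.sqrt((errors ** 2).mean()):.2f}")
```

A persistently nonzero mean error flags a systematic bias, which is exactly the kind of thing the post-mortem is meant to catch.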
Some media hounds are particularly good at making dramatic claims about what’s coming—sometimes far into the future. Just the other day a “respected” hedge fund manager was on one of the talk shows and vaguely warned about a recession at some point next year and/or in 2014. He offered no data, no insight beyond attacking the Fed and other branches of government. This plays well as entertainment and resonates emotionally as the election draws near. But it’s the forecasting equivalent of driving drunk.
Granted, this particular hedge fund guy may have spent hours crunching the numbers before sounding off, but it’s impossible to know, at least from the video clip I saw. He certainly mentioned no serious research.
The reality is that if you do your homework, and routinely analyze the data as it’s updated, you can develop reasonably reliable intuition about the current state of the business cycle. As always, major turning points are a serious threat to even the best forecasts. Otherwise, looking ahead over the short term with a fair amount of confidence is possible, though hardly guaranteed. That said, the further out in time we gaze, the lower the odds that the forecast will be accurate.
It doesn’t help that economic data is subject to revision, sometimes dramatically so. “The economy, like the atmosphere, is a dynamic system: everything affects everything else and the systems are perpetually in motion,” Silver reminds us. That means any one forecast in macro should be taken with a grain of salt. Meantime, it’s important to study the vintage data along with the revised numbers. That’s the basis for arguing that careful forecasting through time, and tracking how the forecasts evolve, is a much stronger methodology for looking ahead. Better still is a set of forecasts that draws on different data sets and crunches the numbers with different methodologies. Why? Because no one really knows which indicators work best for predicting the business cycle, or which models will consistently deliver the optimal outputs.
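One way to take the revision problem seriously, sketched below in Python with invented figures, is to keep the first-release (vintage) prints alongside the latest revised values and measure how large revisions tend to be; that gap is a rough floor on how much precision a real-time forecast can claim.

```python
import numpy as np

# Hypothetical GDP growth data: first-release prints vs. latest revisions.
first_release = np.array([1.9, 2.4, 0.5, 3.1, 1.2, 2.0])
latest        = np.array([2.3, 2.1, 1.1, 2.8, 1.7, 2.4])

revision = latest - first_release
print(f"mean revision:     {revision.mean():+.2f}")
print(f"mean abs revision: {np.abs(revision).mean():.2f}")
# If typical revisions run around half a percentage point, treating a
# real-time print as precise to the decimal overstates what the data
# can support.
```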
Good information, in other words, doesn’t come easy in macro, even in a world rife with raw data. That doesn’t stop anyone from making bold forecasts based on little more than emotion and a shallow review of a few indicators. Then again, there’s a method to that madness if you’re trying to drum up publicity or promote your hedge fund.
No wonder that simple techniques do a decent job as a general rule in macro. “If you’re looking for an economic forecast, the best place to turn is the average or aggregate prediction rather than that of any one economist,” Silver writes. Fair enough. Numerous studies through the years support this notion, and so it’s a reasonable benchmark for judging the wider world of predictions. “And yet while the notion that aggregate forecasts beat individual ones is an important empirical regularity,” he adds, “it is sometimes used as a cop-out when forecasts might be improved.”
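The averaging itself is trivial, which is part of its appeal. A minimal sketch, with made-up calls from five hypothetical forecasters:

```python
import numpy as np

# Hypothetical next-quarter GDP growth calls from five forecasters.
predictions = {"A": 1.5, "B": 2.8, "C": 2.1, "D": 1.9, "E": 2.4}

consensus = np.mean(list(predictions.values()))
print(f"consensus forecast: {consensus:.2f}%")

# Suppose the quarter comes in at 2.2%: with these numbers the consensus
# lands closer to the mark than any individual call.
actual = 2.2
for name, p in predictions.items():
    print(f"{name}: error {abs(p - actual):.2f}")
print(f"consensus error: {abs(consensus - actual):.2f}")
```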
Improvement comes slowly in economic forecasting, if at all, and only with hard work. Quite a lot of that work is directly related to minimizing error rather than improving the estimates proper. On that note, one sure-fire way to enhance our strategic intelligence is to ignore the economic porn that passes for informed judgment. Fortunately, there’s an ample supply of good forecasters to follow. The bad news is that they’re usually drowned out by pundits who specialize in drama and quantity over careful analysis.