Model Failure: Election Forecasting vs. Recession Nowcasting

Donald Trump’s election as President of the United States on Tuesday coincides with a colossal failure of the data models that predicted the opposite. Some commentators have been quick to see parallels between the crash of quantitative political forecasting and efforts to estimate recession risk in real time. But the two efforts are very different animals, or at least they can be, depending on the model design. The devil’s always in the details, of course, although a well-designed macro model that focuses on estimating the probability of economic contraction can avoid many of the pitfalls that bedevil election forecasting.

The one thing that everyone can agree on is the now-obvious fact that many widely respected modeling efforts in the political realm turned out to be dead wrong. The Princeton Election Consortium, for example, advised, just two days before the election, that Hillary Clinton’s win probability was in the 98%-99% range. In late October, Nate Silver’s FiveThirtyEight estimated Clinton’s odds of success at more than 80%. And on Oct. 30, The New York Times’ Upshot model gave Clinton a 91% probability of becoming the next occupant of the White House.

What went wrong? A key factor appears to be the lack of breadth and depth in the polling data that the models relied on. Most polls gave Clinton a decisive edge in the election—an erroneous estimate that spilled over into the various models that crunched this data. As Politico explains,

Geoff Garin, a veteran Democratic pollster who worked for the pro-Clinton super PAC Priorities USA, said many surveys had under-sampled non-college-educated whites, a group that Trump appealed to. He also argued there had been an over-emphasis on the belief that the country’s rising demographic diversity would put Clinton over the top.

Data quality aside, all the polling and modeling in the world can’t change the fact that voters can lie to pollsters and/or change their minds at the last minute. Economic data, by contrast, is far less prone to sudden and radical shifts. Although most macro time series are revised, dramatic changes are rare. Meanwhile, aggregating a carefully selected mix of economic and financial numbers also limits the surprise factor. Any one indicator can lead us astray, but the odds are far lower for suffering false signals when monitoring the trend across a broad set of figures.
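The aggregation idea can be sketched in a few lines of code. The snippet below computes a simple diffusion index, the share of indicators currently trending up, so that no single misbehaving series dominates the signal. The indicator names and readings are hypothetical, chosen only to illustrate the mechanic:

```python
# A minimal sketch of breadth-based aggregation: rather than trusting any
# one indicator, measure the share of indicators whose trend is positive.
# The indicator names and True/False readings below are hypothetical.

def diffusion_index(signals):
    """Return the fraction of indicators signaling expansion (True = trending up)."""
    return sum(signals.values()) / len(signals)

# Hypothetical snapshot of monthly trend readings
october = {
    "payrolls": True,
    "industrial_production": False,
    "real_retail_sales": True,
    "real_personal_income": True,
    "stock_market": False,
    "yield_curve_spread": True,
}

print(f"Diffusion index: {diffusion_index(october):.2f}")  # 4 of 6 positive -> 0.67
```

One indicator flipping from up to down moves this index by only a sixth, which is the point: breadth dampens the false signals that any single series can generate.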

There’s also another key difference between estimating recession risk and modeling the outcome of elections. Whereas political modeling is only concerned with predicting the future, macro modeling has obvious value if the analytics can provide insight into the recent past (backcasting), the present (nowcasting), as well as the future (forecasting). This is an important distinction and it explains why it’s a mistake to conflate election predictions with macro modeling.

To be fair, some economic analysts go too far by trying to predict recession risk well into the future. But focusing on current conditions and the recent past is a recipe for developing relatively reliable estimates. In fact, the biggest risk for a prudently designed recession-risk model is less about accuracy than timeliness.

A broadly defined measure of the economy’s trend can be counted on to identify the start of a new recession. The challenge is doing so as close as possible to the actual turning point with a high degree of accuracy.

Consider the equivalent in political modeling. It’s obvious that Trump was elected on Tuesday—we don’t need a model to confirm that. But when it comes to the business cycle, it’s not clear from casual observation if a recession started, or didn’t, on Tuesday. Why? Two main reasons.

One, economic data is published with a lag. No one knows (yet) if US employment was rising or falling on Nov. 8. Two, there are no hard-and-fast rules about what constitutes a major turning point for the business cycle. That’s not much of a problem if we’re looking back over the last six months and evaluating the broad trend. But taking a single snapshot in time, October’s macro profile, for instance, and trying to decide what it means in isolation for recession risk is devilishly difficult if not impossible.

The solution? A model to aggregate and evaluate the numbers in search of context, using history as a guide. It’s an imperfect system, but it can perform quite well if you build the model carefully and avoid the obvious traps—relying on a narrowly defined set of data points, for instance.
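One simple way to put "history as a guide" into practice is to map an aggregate reading onto a probability scale. The sketch below runs a diffusion-style reading (here, the share of indicators signaling contraction) through a logistic curve; the midpoint and steepness parameters are purely illustrative assumptions, not estimates. A real model would calibrate such parameters against historical, NBER-dated recessions:

```python
import math

def recession_probability(contracting_share, midpoint=0.5, steepness=10.0):
    """Map the share of indicators signaling contraction (0 to 1) onto a
    recession probability via a logistic curve.

    The midpoint and steepness values are hypothetical placeholders; in
    practice they would be fitted to historical recession dates."""
    return 1.0 / (1.0 + math.exp(-steepness * (contracting_share - midpoint)))

# With only 20% of indicators contracting, estimated risk stays low;
# past the halfway mark it rises quickly.
print(round(recession_probability(0.2), 3))  # -> 0.047
print(round(recession_probability(0.8), 3))  # -> 0.953
```

The appeal of this shape is that it mirrors the argument in the text: scattered weakness in a few indicators barely moves the estimate, while broad-based deterioration pushes the probability up sharply toward a tipping point.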

Consider The Capital Spectator’s business-cycle model, which has been running (and evolving) on these pages for the past four years. (For details, see my book, Nowcasting The Business Cycle: A Practical Guide For Spotting Business Cycle Peaks.) Earlier this year the model endured a stress test when the US stock market tumbled sharply, suggesting to some that a US recession was inevitable. This site’s macro model sagged but never reached the tipping point. Even when the outlook appeared darkest via the equity market, the US Business Cycle Risk Report in February and March continued to show that a new recession was still a low-probability event. (For perspective, here’s last month’s update.)

Relying on one model, even one with an encouraging track record, is an unnecessary risk, which is why the weekly updates of The US Business Cycle Risk Report draw on several business cycle indexes, including two benchmarks published by Federal Reserve banks.

Meantime, some analysts are warning that recession risk is rising now that Trump has won the election. Maybe, but there’s no hard data to support that claim at the moment.

The good news is that deciding in the weeks and months ahead whether there’s any economic basis for arguing that recession risk is elevated rests on modeling that’s more reliable than election forecasting. But nothing’s perfect.

The biggest threat to nowcasting recession risk is the infamous exogenous shock: a change in the state of macro that’s not captured in the data. The obvious example is the 1973 oil crisis, which is widely regarded as the precipitating event for the 1973-75 recession. From a modeling perspective, the critical factor that unleashed the energy crisis, Saudi Arabia’s political decision to invoke the oil weapon, was a bolt from the blue.

Fortunately, exogenous shocks that push the economy over the edge are rare. Meantime, modeling the business cycle enjoys several benefits that don’t apply to election forecasting. Failure is still a possibility, of course. But macro’s prospects for success for nowcasting look quite a bit brighter compared with trying to predict the next election.
