The ISM Manufacturing Index is expected to decline slightly to 55.0 in tomorrow’s update (Feb. 2) for January, down from the previous month, based on The Capital Spectator’s median point forecast of several econometric estimates. The prediction is still well above the neutral 50.0 mark, and so the current outlook remains firmly in growth territory for this benchmark of economic activity in the US manufacturing sector.

Compared with three consensus estimates based on recent surveys of economists, The Capital Spectator’s median point forecast for January is at the upper end of the range of projections.

Here’s a closer look at the numbers, followed by brief summaries of the methodologies behind the forecasts that are used to calculate The Capital Spectator’s median prediction:

VAR-1: A vector autoregression model that analyzes the history of industrial production in context with the ISM Manufacturing Index. The forecasts are run in R with the “vars” package.
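For readers unfamiliar with VARs, here’s a minimal sketch of how a first-order, two-variable VAR produces a one-step-ahead forecast once its coefficients are estimated: next period’s values equal an intercept vector plus a coefficient matrix applied to the current values. The intercepts, coefficients, and data below are hypothetical; the actual models are estimated in R with the “vars” package.

```python
# Sketch of a VAR(1) forecast: y_{t+1} = c + A @ y_t
# (coefficients here are made up for illustration, not estimated).
def var1_forecast(c, A, y):
    """One-step-ahead forecast of a first-order vector autoregression."""
    return [c[i] + sum(A[i][j] * y[j] for j in range(len(y)))
            for i in range(len(c))]

c = [6.0, 0.2]                 # hypothetical intercepts
A = [[0.70, 0.10],             # ISM depends on its own lag...
     [0.02, 0.90]]             # ...and on lagged industrial production
y = [55.0, 103.0]              # current [ISM index, industrial production]
print(var1_forecast(c, A, y))
```

The point is simply that each variable’s forecast draws on the lagged values of every variable in the system, which is what distinguishes a VAR from a univariate model.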

VAR-6: A vector autoregression model that analyzes six economic time series in context with the ISM Manufacturing Index. The six additional series: industrial production, private non-farm payrolls, the index of weekly hours worked, the US stock market (Wilshire 5000), spot oil prices, and the Treasury yield spread (10-year Note less 3-month T-bill). The forecasts are run in R with the “vars” package.

ARIMA: An autoregressive integrated moving average model that analyzes the historical record of the ISM Manufacturing Index in R via the “forecast” package.
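To illustrate the autoregressive core of the ARIMA family, here’s a minimal AR(1) model fit by ordinary least squares. R’s “forecast” package automates the order selection and adds the differencing and moving-average components; the sample ISM readings below are hypothetical.

```python
# Minimal AR(1) sketch: estimate y_t = a + b*y_{t-1} by OLS,
# then forecast one step ahead from the last observation.
def fit_ar1(series):
    """Return (a, b) for the regression y_t = a + b*y_{t-1}."""
    x, y = series[:-1], series[1:]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx
    return a, b

ism = [57.3, 56.5, 55.8, 56.1, 55.5, 54.9]  # hypothetical ISM readings
a, b = fit_ar1(ism)
print(round(a + b * ism[-1], 2))  # one-step-ahead forecast
```

A full ARIMA adds differencing (the “I”) to handle trends and moving-average terms to model serially correlated errors, but the one-step forecast logic is the same.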

ES: An exponential smoothing model that analyzes the historical record of the ISM Manufacturing Index in R via the “forecast” package.
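As a rough illustration of the idea, here’s simple exponential smoothing in a few lines: each new observation updates a weighted “level” that serves as the forecast. The smoothing parameter and sample data are hypothetical; R’s “forecast” package selects the parameters and model form automatically.

```python
# Sketch of simple exponential smoothing: the forecast is a weighted
# average that gives geometrically declining weight to older data.
def exp_smooth_forecast(series, alpha=0.3):
    """One-step-ahead forecast via simple exponential smoothing
    (alpha is illustrative, not an estimated value)."""
    level = series[0]
    for x in series[1:]:
        level = alpha * x + (1 - alpha) * level
    return level

ism = [57.3, 56.5, 55.8, 56.1, 55.5, 54.9]  # hypothetical ISM readings
print(round(exp_smooth_forecast(ism), 2))
```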

TRI: A model based on combining point forecasts, along with their upper and lower prediction intervals (at the 95% confidence level), via a technique that uses triangular distributions. The basic procedure: 1) run a Monte Carlo simulation on the combined forecasts, generating 1 million data points for each forecast series to estimate a triangular distribution; 2) take random samples from each of the simulated data sets and use the value with the highest frequency as the prediction. The forecast combinations are drawn from the following projections: Econoday.com’s consensus forecast data and the predictions generated by the models above. The forecasts are run in R with the “triangle” package.
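The procedure above can be sketched as follows. Each forecast’s lower bound, point estimate, and upper bound define a triangular distribution; the sketch pools simulated draws from every distribution and reports the most frequent (binned) value. The sample size, bin width, and forecast inputs here are illustrative, not the published settings.

```python
import random
from collections import Counter

# Hedged sketch of combining forecasts via triangular distributions:
# simulate each (lower, point, upper) triple, pool the draws, and take
# the midpoint of the most frequent bin as the combined prediction.
def combine_triangular(forecasts, n=100_000, bin_width=0.1, seed=1):
    """forecasts: list of (lower, point, upper) tuples."""
    rng = random.Random(seed)
    counts = Counter()
    for low, mode, high in forecasts:
        for _ in range(n):
            x = rng.triangular(low, high, mode)  # stdlib triangular draw
            counts[round(x / bin_width)] += 1
    return max(counts, key=counts.get) * bin_width

# Two hypothetical forecasts: (lower 95% bound, point, upper 95% bound)
print(combine_triangular([(53.0, 55.0, 57.0), (52.5, 54.5, 56.5)]))
```

The triangular distribution is a natural fit here because it is fully specified by exactly the three numbers each forecast supplies: a minimum, a most likely value, and a maximum.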

Jekah: It was 53.5.

Why not mention it? Seems relevant.

James Picerno (Post author): Jekah,

Well, sure. But the 53.5 wasn’t known when this “Preview” was published. The idea here is to look at a range of forecasts before the actual number is reported. Obviously, the reported data was unexpectedly weak, a topic that will be discussed in a subsequent post.

Jekah: Understood. I look forward to the post.

James Picerno (Post author): Jekah,

Your comment also inspires a fresh look at why one might bother to forecast at all. On that note, I’ll revisit why I publish forecasts and how they can be used. Clearly, there’s plenty of risk in predicting. But as I’ll explain, or at least try to, this isn’t the quixotic effort it often appears to be.