Research Review | 12 June 2020 | Forecasting

Breaking Bad Trends
Ashish Garg (Research Affiliates), et al.
May 7, 2020
We document and quantify the negative impact of trend breaks (i.e., turning points in the trajectory of asset prices) on the performance of standard trend-following strategies across several assets and asset classes. The frequency of trend breaks has increased in recent years, which can help explain the lower performance of monthly trend following in the last decade. We illustrate how to repair trend-following strategies by exploiting the return forecasting properties of the different types of trend breaks: market corrections and rebounds. We construct dynamic multi-asset trend-following portfolios, which harvest more than double the average returns of standard trend-following investing strategies over the last decade.
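For readers who want to tinker, here is a minimal sketch (synthetic returns, not the paper's data or its break-repair method) of the standard monthly trend-following rule the authors take as their starting point: go long when the trailing 12-month return is positive, short otherwise. A "trend break" in the paper's sense is a turning point where this signal ends up on the wrong side of the realized move.

```python
# Standard 12-month time-series momentum rule on synthetic monthly returns.
import numpy as np

rng = np.random.default_rng(42)
monthly_ret = rng.normal(0.005, 0.04, size=240)  # 20 years of synthetic returns

positions = np.zeros_like(monthly_ret)
for t in range(12, len(monthly_ret)):
    # Long if the trailing 12-month compound return is positive, short otherwise.
    trailing = np.prod(1 + monthly_ret[t - 12:t]) - 1
    positions[t] = 1.0 if trailing > 0 else -1.0

strategy_ret = positions * monthly_ret
print("annualized mean return:", 12 * strategy_ret[12:].mean())
```

Months where `positions[t]` disagrees with the sign of `monthly_ret[t]` are the kind of corrections and rebounds the paper exploits.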

Machine Learning, the Treasury Yield Curve and Recession Forecasting
Michael Puglia and Adam Tucker (Federal Reserve)
May 2020
We use machine learning methods to examine the power of Treasury term spreads and other financial market and macroeconomic variables to forecast US recessions, vis-à-vis probit regression. In particular we propose a novel strategy for conducting cross-validation on classifiers trained with macro/financial panel data of low frequency and compare the results to those obtained from standard k-folds cross-validation. Consistent with the existing literature, we find that, in the time series setting, forecast accuracy estimates derived from k-folds are biased optimistically, and cross-validation strategies which eliminate data “peeking” produce lower, and perhaps more realistic, estimates of forecast accuracy. More strikingly, we also document rank reversal of probit, Random Forest, XGBoost, LightGBM, neural network and support-vector machine classifier forecast performance over the two cross-validation methodologies. That is, while k-folds cross-validation indicates that the forecast accuracy of tree methods dominates that of neural networks, which in turn dominates that of probit regression, the more conservative cross-validation strategy we propose indicates the exact opposite: probit regression should be preferred over machine learning methods, at least in the context of the present problem. This latter result stands in contrast to a growing body of literature demonstrating that machine learning methods outperform many alternative classification algorithms, and we discuss some possible reasons for our result. We also discuss techniques for conducting statistical inference on machine learning classifiers using Cochran’s Q and McNemar’s tests, and use the SHapley Additive exPlanations (SHAP) framework to decompose US recession forecasts and analyze feature importance across business cycles.
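The "peeking" problem is easy to illustrate. The sketch below (synthetic data; the authors' proposed strategy is more elaborate than scikit-learn's `TimeSeriesSplit`, which stands in here purely to show the no-peeking idea) compares shuffled k-folds with time-ordered splits on an autocorrelated predictor, the kind of persistence that inflates k-folds accuracy:

```python
# Shuffled k-folds vs. time-ordered cross-validation on persistent data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import KFold, TimeSeriesSplit, cross_val_score

rng = np.random.default_rng(0)
n = 400
# AR(1) predictor, e.g. a slow-moving term spread.
x = np.zeros(n)
eps = rng.normal(size=n)
for t in range(1, n):
    x[t] = 0.9 * x[t - 1] + eps[t]
y = (x + rng.normal(scale=2.0, size=n) > 0).astype(int)  # recession flag
x = x.reshape(-1, 1)

clf = LogisticRegression()
acc_kfold = cross_val_score(
    clf, x, y, cv=KFold(n_splits=5, shuffle=True, random_state=0)).mean()
acc_ts = cross_val_score(clf, x, y, cv=TimeSeriesSplit(n_splits=5)).mean()

print(f"shuffled k-folds accuracy: {acc_kfold:.3f}")
print(f"time-ordered CV accuracy:  {acc_ts:.3f}")
```

With shuffled folds, training observations sit immediately before and after each test observation, so serial correlation leaks information into the accuracy estimate; time-ordered splits only ever train on the past.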

Forecasting Consumption Spending Using Credit Bureau Data
Dean Croushore (U. of Richmond) and Stephanie Wilshusen (Philadelphia Fed)
June 2020
This paper considers whether the inclusion of information contained in consumer credit reports might improve the predictive accuracy of forecasting models for consumption spending. To investigate the usefulness of aggregate consumer credit information in forecasting consumption spending, this paper sets up a baseline forecasting model. Based on this model, a simulated real-time, out-of-sample exercise is conducted to forecast one-quarter ahead consumption spending. The exercise is run again after the addition of credit bureau variables to the model. Finally, a comparison is made to test whether the model using credit bureau data produces lower or higher root-mean-squared-forecast errors than the baseline model. Key features of the analysis include the use of real-time data, out-of-sample forecast tests, a strong parsimonious benchmark model, and data that span more than two business cycles. Our analysis reveals evidence that some credit bureau variables may be useful in improving forecasts of consumption spending in certain subperiods and for some categories of consumption spending, especially for services. Also, the use of credit bureau variables sometimes makes the forecasts significantly worse by adding noise into the forecasting models.
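The paper's testing design (recursive one-step-ahead forecasts, then a comparison of root-mean-squared forecast errors between a baseline model and a credit-augmented one) can be sketched as follows. Everything here is illustrative: synthetic data, a simple AR(1) baseline, and a made-up "credit" variable, not the authors' specification.

```python
# Recursive one-step-ahead RMSFE comparison: baseline AR(1) vs. the same
# model augmented with a synthetic credit-bureau-style predictor.
import numpy as np

rng = np.random.default_rng(1)
n = 200
credit = rng.normal(size=n)
growth = np.zeros(n)
for t in range(1, n):
    growth[t] = 0.3 * growth[t - 1] + 0.4 * credit[t - 1] + rng.normal(scale=0.5)

def one_step_rmsfe(X, y, start=100):
    """Re-estimate by OLS each period on data through t-1, forecast t."""
    errs = []
    for t in range(start, len(y)):
        beta, *_ = np.linalg.lstsq(X[:t], y[:t], rcond=None)
        errs.append(y[t] - X[t] @ beta)
    return float(np.sqrt(np.mean(np.square(errs))))

ones = np.ones(n - 1)
X_base = np.column_stack([ones, growth[:-1]])               # baseline AR(1)
X_aug = np.column_stack([ones, growth[:-1], credit[:-1]])   # + credit variable
y = growth[1:]

rmsfe_base = one_step_rmsfe(X_base, y)
rmsfe_aug = one_step_rmsfe(X_aug, y)
print("baseline RMSFE: ", rmsfe_base)
print("augmented RMSFE:", rmsfe_aug)
```

Here the credit variable genuinely enters the data-generating process, so the augmented model wins; the paper's point is that with real data the extra variables can just as easily add noise and worsen the comparison.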

How Well Does Economic Uncertainty Forecast Economic Activity?
John Rogers (Federal Reserve) and Jiawen Xu (Shanghai University)
December 2019
Despite the enormous reach and influence of the literature on economic and economic policy uncertainty, one surprisingly under-researched topic has been the forecasting performance of economic uncertainty measures. We evaluate the ability of seven popular measures of uncertainty to forecast, in sample and out of sample, a range of real and financial outcome variables. We also evaluate predictive content over different quantiles of the GDP growth distribution. Real-time data and estimation considerations are highly consequential, and we devote considerable attention to them. Four main findings emerge. First, there is some explanatory power in all uncertainty measures, with relatively good performance by macroeconomic uncertainty (Jurado, Ludvigson, and Ng, 2015). Second, macro uncertainty has additional predictive content over the widely used excess bond premium of Gilchrist and Zakrajsek (2012) and the National Financial Conditions Index. Third, quantile regressions for GDP growth indicate strong predictive power, especially at the lower end of the distribution, for all uncertainty measures except the VIX. Finally, we construct new real-time versions of both macroeconomic and financial uncertainty and compare them to their ex-post counterparts used in the literature. Real-time uncertainty measures have comparatively poor forecasting performance, even to the point of overturning some of the conclusions that emerge from using ex-post uncertainty measures.

Making Text Count: Economic Forecasting Using Newspaper Text
Eleni Kalamara (King’s College London), et al.
22 May 2020
We consider the best way to extract timely signals from newspaper text and use them to forecast macroeconomic variables, using three popular UK newspapers that collectively represent UK newspaper readership in terms of political perspective and editorial style. We find that newspaper text can improve economic forecasts in both absolute and marginal terms. We introduce a powerful new method of incorporating text information in forecasts that combines counts of terms with supervised machine learning techniques. This method improves forecasts of macroeconomic variables including GDP, inflation, and unemployment, even relative to existing text-based methods. Forecast improvements occur when it matters most, during stressed periods.
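The core recipe, counting terms and then handing the counts to a supervised learner, is simple to sketch. The toy corpus and target below are invented for illustration; the paper works with decades of articles from three UK newspapers and a range of learners.

```python
# Term counts + a supervised learner: the basic text-to-forecast pipeline.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import Ridge

headlines = [
    "economy grows as unemployment falls",
    "recession fears rise amid uncertainty",
    "strong growth lifts markets",
    "uncertainty and job losses deepen downturn",
]
gdp_growth = [0.6, -0.4, 0.8, -0.7]  # toy target, aligned with headlines

counts = CountVectorizer().fit_transform(headlines)  # document-term matrix
model = Ridge(alpha=1.0).fit(counts, gdp_growth)
preds = model.predict(counts)
print(preds)
```

In practice the counts would be aggregated to the forecasting frequency (e.g., monthly) and the learner evaluated out of sample, but the division of labor is the same: the vectorizer supplies features, the regression supplies the forecast.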

Stock Price Forecasting and Hypothesis Testing Using Neural Networks
Kerda Varaku (Rice University)
May 10, 2020
In this work we use Recurrent Neural Networks and Multilayer Perceptrons to predict NYSE, NASDAQ and AMEX stock prices from historical data. We experiment with different architectures and compare data normalization techniques. Then, we leverage those findings to question the efficient-market hypothesis through a formal statistical test.
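A stripped-down version of the prediction step looks like this. The sketch uses a synthetic price series and scikit-learn's `MLPRegressor` as a stand-in for the paper's architectures; the lag count, network size, and normalization choice are all illustrative assumptions.

```python
# Predict the next price from lagged prices with a small MLP.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
prices = 100 + np.cumsum(rng.normal(0.1, 1.0, size=500))  # synthetic series

lags = 5
X = np.column_stack([prices[i:len(prices) - lags + i] for i in range(lags)])
y = prices[lags:]

# Normalization choice is one of the comparisons the paper runs.
scaler = StandardScaler().fit(X[:400])
model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
model.fit(scaler.transform(X[:400]), y[:400])

pred = model.predict(scaler.transform(X[400:]))
rmse = float(np.sqrt(np.mean((pred - y[400:]) ** 2)))
print("out-of-sample RMSE:", rmse)
```

The hypothesis-testing step then asks whether such out-of-sample predictability is statistically distinguishable from what the efficient-market hypothesis would allow.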


