The problem of fat tails is everywhere in risk analysis. It’s a big issue with no easy solution, though there are plenty of partial ones. Each comes with its own pros and cons, which is why the practical strategy for dealing with the messy but essential work of measuring and managing risk starts with an iron rule: never, ever rely on one risk metric alone.
The good news is that there’s a small library of risk measures to choose from. (A useful reference can be found in a number of finance books, including Carl Bacon’s Practical Portfolio Performance Measurement and Attribution.) The wide selection is also the bad news. How does one sort through the possibilities? That could take a while. It seems reasonable to start with the usual suspects, such as volatility, the Sharpe ratio, the Treynor ratio and beta, and carefully expand the list from there. All of the standard metrics have well-known flaws. That doesn’t make them worthless, but it’s a reminder that we must understand where any one risk measure stumbles, where it can provide insight, and what the possible fixes are, if any.
One of the challenges with adding a new risk metric to your analytical toolbox is deciding how much reality to embrace with the methodology while keeping applications practical. Modeling fat tails and known unknowns in detail is wickedly complex (the unknown unknowns are beyond the ken of mortal minds), so one has to be wary of falling down a black hole of time consumption. Compromise inevitably rolls into the picture, and so one question keeps popping up: How do you balance the need for parsimony in risk measurement with the recognition that market returns aren’t normally distributed? A number of intriguing risk gauges straddle these two competing interests, and one worth considering is the modified Sharpe ratio (MSR).
The standard Sharpe ratio (SR) is simply the risk premium of an asset or asset class (total return less the risk-free rate) divided by its volatility (standard deviation). It is the original risk-adjusted performance metric in financial economics, dating to the 1960s, when it was proposed by William Sharpe. Its chief flaw, as many analysts have noted over the years, is that it uses standard deviation as a proxy for risk. That’s a problem because standard deviation works best with normal distributions. The assumption has some validity over long periods of time, but in the short run normality can fly out the window pretty quickly. The bottom line: extreme losses occur more frequently than you’d expect if you assume that price changes are always and forever normally distributed.
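In symbols, with R for the asset’s return, Rf for the risk-free rate and σ for the standard deviation of returns, the standard ratio is simply:

SR = (R − Rf) / σ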
Enter MSR, which is one of several attempts to ameliorate standard deviation’s limitations. MSR is far from a complete solution, but it’s intriguing nonetheless because it factors in two aspects of non-normal distributions: skewness and kurtosis. MSR’s nod to skewness and kurtosis comes through the use of what’s known as modified Value at Risk (MVaR) as the denominator. Laurent Favre and Andreas Signer outlined the process in a 2002 paper, explaining:
If returns are not distributed normally, a simple VaR model can no longer be used, so another method is required to calculate the VaR. One option is the so-called Cornish-Fisher expansion, which can adjust the VaR in terms of asymmetric distribution (skewness) and above-average frequency of earnings at both ends of the distribution (kurtosis). This method of calculating the VaR is hereinafter referred to as modified VaR.
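For readers who want to see the mechanics outside of Excel, here is a minimal sketch of the Cornish-Fisher calculation described above, written in Python. The 5% tail probability and the use of a simple return series are illustrative assumptions for the example, not part of Favre and Signer’s specification.

```python
import numpy as np
from scipy import stats

def modified_var(returns, alpha=0.05):
    """Cornish-Fisher (modified) VaR at tail probability alpha, as a positive loss."""
    r = np.asarray(returns)
    mu, sigma = r.mean(), r.std(ddof=1)
    s = stats.skew(r)                      # sample skewness
    k = stats.kurtosis(r)                  # excess kurtosis (zero for a normal distribution)
    z = stats.norm.ppf(alpha)              # e.g., roughly -1.645 for alpha = 0.05
    z_cf = (z
            + (z**2 - 1) * s / 6
            + (z**3 - 3 * z) * k / 24
            - (2 * z**3 - 5 * z) * s**2 / 36)   # Cornish-Fisher adjusted quantile
    return -(mu + z_cf * sigma)

def modified_sharpe(returns, rf=0.0, alpha=0.05):
    """Modified Sharpe ratio: excess return divided by modified VaR."""
    r = np.asarray(returns)
    return (r.mean() - rf) / modified_var(r, alpha)
```

Note that with zero skewness and zero excess kurtosis the adjustment terms vanish, so modified VaR collapses back to ordinary parametric (normal) VaR.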
MVaR makes some compromises relative to other adjustment methodologies for VaR, of which there are several. Even so, it’s a step in the right direction, and since it’s relatively easy to compute in Excel, it’s certainly worthwhile as one of several ways to quantify risk beyond the standard metrics. For some perspective, consider how the modified Sharpe ratio compares with simple volatility and the standard Sharpe ratio for four asset classes via proxy indices over the past 10 years:
Note that in all cases the modified Sharpe ratio is lower than its traditional Sharpe ratio counterpart. The message: for the past decade, risk-adjusted returns look lower once skewness and kurtosis are factored in. Keep in mind that when we look at shorter-term rolling measures of MSR vs. the conventional SR (rolling 3-year windows, for instance) there tends to be more fluctuation in MSR. Depending on the time period, MSR may be higher or lower than its standard counterpart by more than trivial amounts. Why? Because MSR is sensitive to changes in skewness and kurtosis, whereas the standard SR is immune to those influences.
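For a rough illustration of the rolling comparison, here is a sketch that reuses the modified_sharpe function from the code above; the 36-month window, monthly data and per-period risk-free rate are assumptions for the example.

```python
import pandas as pd

def rolling_sr_msr(monthly_returns, rf=0.0, window=36, alpha=0.05):
    """Rolling standard and modified Sharpe ratios for a pandas Series of monthly returns."""
    sr = monthly_returns.rolling(window).apply(
        lambda r: (r.mean() - rf) / r.std(ddof=1), raw=False)
    msr = monthly_returns.rolling(window).apply(
        lambda r: modified_sharpe(r.values, rf=rf, alpha=alpha), raw=False)
    return pd.DataFrame({"SR": sr, "MSR": msr})
```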
The modified Sharpe ratio is hardly a perfect solution to the fat-tails measurement challenge. But given its relatively easy calculation, combined with its greater sensitivity to non-normal distributions, it’s a compelling addition to the risk-analytics toolkit. It won’t solve all our risk measurement problems, but no single risk metric is up to that task. MSR can, and arguably should, be part of the solution.
The modified VaR is really Cornish-Fisher VaR. The biggest problem with these types of measures is calculating them for a portfolio, rather than as a univariate measure (which is straightforward). Doing it at the portfolio level requires co-skewness and co-kurtosis matrices, analogous to a covariance matrix, except that covariance is n x n while these are on the order of n^3 and n^4. If you think there are difficulties in estimating covariance matrices, these are far worse. So no, I don’t think this is a good alternative.
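To make that dimensionality concrete, here is a minimal sketch of estimating the sample third central co-moment (co-skewness) tensor from a T x n matrix of asset returns; for n assets it has n^3 entries, and the analogous co-kurtosis tensor has n^4.

```python
import numpy as np

def coskewness_tensor(returns):
    """Sample third central co-moment (co-skewness) tensor, shape (n, n, n),
    estimated from a T x n matrix of asset returns."""
    x = returns - returns.mean(axis=0)               # demean each asset's return series
    T = x.shape[0]
    return np.einsum("ti,tj,tk->ijk", x, x, x) / T   # n^3 elements to estimate
```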
That being said, I prefer mean to mixed ES (aka conditional VaR, or CVaR) or mixed ES deviation (aka CVaR deviation). VaR is basically a quantile, whereas CVaR is the average of the losses beyond that quantile. Surprisingly, it is easier to calculate for a portfolio than the measure above, and it is a much better measure of risk.
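For reference, a minimal non-parametric sketch of the distinction between the two measures (the 5% tail is an illustrative choice):

```python
import numpy as np

def hist_var_cvar(returns, alpha=0.05):
    """Historical VaR (the loss quantile) and CVaR (the average loss beyond it),
    both returned as positive loss figures."""
    losses = -np.asarray(returns)
    var = np.quantile(losses, 1 - alpha)      # loss exceeded with probability alpha
    cvar = losses[losses >= var].mean()       # average of the losses beyond VaR
    return var, cvar
```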
You make a common mistake when you state that the standard deviation assumes the normal distribution. The standard deviation is simply a measure of dispersion around the mean, and it can be used to calculate probabilities from distributions that are very skewed and have very “fat” tails, like the “extreme value distribution”. The formula for the probability in the tail is different, but can be related to the standard deviation.
BillP makes a good point. Standard deviation can be calculated for any distribution curve. Nonetheless, using standard deviation to analyze non-normal distributions can produce misleading estimates of the true risk. I should have written that standard deviation works best with normal distributions. For clarity, I made the change above.
John,
Yes, indeed: conditional VaR is yet another possibility and one that should be considered. At some future date, I’ll post something on CVaR. But it too has its own set of problems. Estimating expectations in the 5% or 1% end of the tail can get quite murky. Also, the basic calculation of CVaR uses a normal distribution, and moving beyond that gets tricky. Nonetheless, CVaR is worth a look.
That’s a good point about the 1% vs. 5%. However, that’s the beauty of the mixed versions: they are basically weighted averages across whatever confidence levels you want. So you could take 50% of the 1% CVaR and 50% of the 5% CVaR, or some other combination.
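A minimal sketch of that weighting scheme, with the 50/50 mix of the 1% and 5% levels as the assumed example:

```python
import numpy as np

def hist_cvar(returns, alpha):
    """Historical CVaR at tail probability alpha, as a positive loss."""
    losses = -np.asarray(returns)
    return losses[losses >= np.quantile(losses, 1 - alpha)].mean()

def mixed_cvar(returns, levels=((0.01, 0.5), (0.05, 0.5))):
    """'Mixed' CVaR: a weighted average of CVaRs at several tail probabilities."""
    return sum(w * hist_cvar(returns, a) for a, w in levels)
```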
Neither CVaR nor VaR makes explicit assumptions about the distribution. Parametric versions use the normal distribution simply to make things easy, but it is by no means required.