The volatility in yen markets, both stocks and credit, continued again with wild swings seemingly everywhere. The Bank of Japan also continued to promise measures aimed at restoring calm, including outright purchases of ¥600 billion (~$6 billion) of JGBs in the 5-10 year maturity range and an additional ¥300 billion (~$3 billion) in maturities beyond 10 years.

The focus remains on nominal yields, but as I mentioned yesterday, the real concern is volatility. Markets, credit markets in particular, abhor volatility of such magnitude. There seems to be a growing recognition that the Bank of Japan may have miscalculated market acceptance of, and expectations for, its QE-steroids program, a development that is not at all surprising given recent history in global finance.

First and foremost, modern monetary economics is a creature of mathematical modeling. In the US, the Federal Reserve makes no decisions without endlessly running simulations on its FRB/US model of the US economy (pronounced "ferbus"). When Chairman Bernanke proclaimed in June 2008 that, "The risk that the economy has entered a substantial downturn appears to have diminished over the past month or so," it was because ferbus indicated as much. His confidence was in his math, but math and modeling are not objective science.

Where did the Bank of Japan come up with the ¥7 trillion QE-steroid figure? It was not some random guess or gut feeling; it was the result of mathematical simulations run on the Bank of Japan's computers. The statistical probabilities of outcomes favored that amount as the one most likely to lead to the desired result (2% inflation in two years) with the least amount of disruption.
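To make the shape of that exercise concrete, here is a deliberately toy sketch of a policy-sizing simulation. Everything in it is invented for illustration, the "traction" parameter and shock sizes especially; it is not the Bank of Japan's model, only the flavor of one: simulate many inflation paths under each purchase size and compare the odds of hitting the target.

```python
import numpy as np

rng = np.random.default_rng(0)

def p_hit_target(purchase_trillions, n_paths=100_000, months=24):
    """Toy policy model: cumulative inflation over two years equals a
    'traction' term that scales with the purchase size, plus the sum of
    random monthly shocks. All parameters are invented for illustration."""
    traction = 0.02 * purchase_trillions / 7.0       # calibrated so ¥7tn/month -> 2%
    shocks = rng.normal(0.0, 0.004, size=(n_paths, months))
    cumulative = traction + shocks.sum(axis=1)
    return (cumulative >= 0.02).mean()

# Compare purchase sizes by the simulated probability of 2% inflation in 2 years.
for size in [3, 5, 7, 9]:                            # ¥ trillions per month
    print(f"¥{size}tn/month -> P(2% in 2y) ≈ {p_hit_target(size):.2f}")
```

In a framework like this, the chosen figure is simply the smallest size whose simulated probability clears some internal comfort threshold; the answer is only as good as the invented traction and shock assumptions.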

The problem with statistical analysis in economic and monetary settings is its decided lack of objectivity, general perceptions to the contrary. To make statistical models tractable, given that economic and financial systems are complex (in the technical sense), practitioners must make assumptions, shortcuts in variables and parameters that they assume preserve meaningful results. One of the largest assumptions surrounds conditional heteroskedasticity (a long, tangential topic for this note, so best left for some other day).
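Without wading into that deferred topic, the general danger of such shortcuts can be sketched quickly: a model that assumes constant variance can match a market's overall volatility while badly underestimating its tail risk once volatility clusters. The GARCH-style parameters below are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Returns with volatility clustering via a simple GARCH(1,1)-style
# recursion: today's variance depends on yesterday's shock and variance.
# Parameters invented for illustration.
n = 200_000
omega, alpha, beta = 1e-6, 0.10, 0.88
z = rng.standard_normal(n)
returns = np.empty(n)
var = omega / (1 - alpha - beta)                  # unconditional variance
for t in range(n):
    returns[t] = np.sqrt(var) * z[t]
    var = omega + alpha * returns[t] ** 2 + beta * var

# A constant-variance model matched to the same overall standard
# deviation misses the fat tails that the clustering creates.
sigma = returns.std()
iid = rng.normal(0.0, sigma, size=n)
for label, x in [("clustered vol", returns), ("constant vol", iid)]:
    print(f"{label:13s}: P(|move| > 3 sigma) ≈ {(np.abs(x) > 3 * sigma).mean():.4f}")
```

Both series have the same measured standard deviation, yet the clustered series produces far more extreme moves; a model that shortcuts the clustering will be "right" on average and wrong exactly when it matters.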

One of the most poignant and illustrative examples of this mismatch between modeling expectations and financial reality comes from November 2007. Morgan Stanley had taken a position in its proprietary book (in other words, with the bank's own money) shorting subprime mortgages – hard. The trade ended up losing $3.7 billion.

How could this happen? Reconstructing from what we do know (which is admittedly only a partial picture), my own conjecture is that Morgan Stanley was absolutely correct about the direction of the subprime mortgage market. After all, as a major CDO sponsor it knew all about what was taking place. The bank's CFO, Colm Kelleher, explained it this way:

“We began with a short position in the subprime asset class, which went right through to the first quarter; as the structure of this book had big negative convexity and the markets continued to decline, our risk exposure swung from short to flat to long.”

The total exposure on the trade was about $8 billion, so not at all insignificant. The clue about what went wrong comes from the invocation of that deeply financial term, negative convexity. In structured finance, negative convexity can manifest in several ways, but most importantly through correlation.

However, due to the overuse of the Gaussian copula as a means to price CDO and MBS tranches, correlation was always implied rather than directly observed. That meant it was derived from other "market-based" indications, in this case yield spreads of liquid mortgage indices and credits. In 2007, yield spreads and risk spreads reacted in much the same manner due to growing illiquidity and concern over subprime – the very movement Morgan Stanley was counting on to cash in on its short position.
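For the unfamiliar, the workhorse behind tranche pricing is the one-factor Gaussian copula: each borrower's default is driven by one common factor plus idiosyncratic noise, with a single correlation parameter controlling how much defaults cluster. A minimal sketch, with the default probability and portfolio size invented for illustration:

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(2)

def portfolio_losses(rho, p_default=0.05, n_names=100, n_sims=50_000):
    """One-factor Gaussian copula: name i defaults when
    sqrt(rho)*M + sqrt(1-rho)*Z_i falls below the threshold implied by
    its default probability. The single parameter rho controls how much
    defaults cluster around the common factor M."""
    threshold = norm.ppf(p_default)
    M = rng.standard_normal((n_sims, 1))               # common factor
    Z = rng.standard_normal((n_sims, n_names))         # idiosyncratic risk
    defaults = (np.sqrt(rho) * M + np.sqrt(1 - rho) * Z) < threshold
    return defaults.mean(axis=1)                       # pool loss fraction

for rho in [0.1, 0.5, 0.9]:
    L = portfolio_losses(rho)
    print(f"rho={rho}: mean loss {L.mean():.3f}, P(loss > 20%) ≈ {(L > 0.20).mean():.3f}")
```

Note that the average loss barely moves with rho; what changes is the shape of the distribution, which is exactly why correlation, not default expectation, drives tranche prices.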

But in tranches, correlation pricing often exhibits a "smile", a quirk of trying to fit modeled assumptions onto real-world situations. The reason for the smile follows straightforwardly from position in the tranche structure: high correlation impacts the ends (the equity piece and the super senior piece) at greater rates than the middle, hence the "smile". At 100% correlation, for example, either everyone defaults or no one does. These extreme outcomes increase pricing volatility at the ends as implied correlation rises.

In an equity tranche, therefore, rising correlation past a certain point produces wild swings in price because outcomes become all-or-nothing: really good or really bad. The same is true of the super senior tranche.
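A rough simulation makes the asymmetry visible: the same copula, run at low and high correlation, with invented attachment points (equity 0-3%, mezzanine 3-10%, senior 10-100%), shows expected loss migrating from the equity end toward the senior end while the middle moves comparatively less.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(3)

def portfolio_losses(rho, p=0.05, names=100, sims=50_000):
    # Same one-factor Gaussian copula as in the previous sketch.
    threshold = norm.ppf(p)
    M = rng.standard_normal((sims, 1))                 # common factor
    Z = rng.standard_normal((sims, names))             # idiosyncratic risk
    return ((np.sqrt(rho) * M + np.sqrt(1 - rho) * Z) < threshold).mean(axis=1)

def tranche_el(losses, attach, detach):
    # Expected fraction of the [attach, detach] tranche consumed by pool losses.
    return np.clip(losses - attach, 0.0, detach - attach).mean() / (detach - attach)

# Invented attachment points, purely for illustration.
tranches = {"equity": (0.00, 0.03), "mezzanine": (0.03, 0.10), "senior": (0.10, 1.00)}
for rho in [0.10, 0.90]:
    L = portfolio_losses(rho)
    row = ", ".join(f"{name} EL={tranche_el(L, a, d):.3f}" for name, (a, d) in tranches.items())
    print(f"rho={rho:.2f}: {row}")
```

At low correlation the equity piece absorbs nearly everything and the senior piece almost nothing; at high correlation the all-or-nothing effect pulls expected loss out of the equity tranche and pushes it into the senior tranche, which is the smile expressed in loss terms.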

In Morgan Stanley’s trade, the short position was likely paired with a long hedge position in super seniors of greater size (estimates put it around $10 billion). Where negative convexity plays a role is in its impact across tranches. While prices of the less-senior tranches were declining due to default fears and cash flow deterioration, the price of the super senior would exhibit very low volatility – until implied correlations hit that magical point where the correlation smile produces larger price volatility. That makes the trade seem like a perfect expression of a bearish outlook: make money on the middle tranche losing value while the super senior provides hedge protection.
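Putting the pieces together, here is a back-of-the-envelope reconstruction of how such a pairing marks out as implied correlation climbs. To be clear, this is not Morgan Stanley's actual book: the notionals, attachment points, and base default probability are all invented for illustration.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)

def tranche_el(rho, attach, detach, p=0.07, names=100, sims=50_000):
    # Expected tranche loss under the one-factor Gaussian copula.
    threshold = norm.ppf(p)
    M = rng.standard_normal((sims, 1))
    Z = rng.standard_normal((sims, names))
    L = ((np.sqrt(rho) * M + np.sqrt(1 - rho) * Z) < threshold).mean(axis=1)
    return np.clip(L - attach, 0.0, detach - attach).mean() / (detach - attach)

# Hypothetical book, loosely shaped like the trade described above:
# short $2bn of a 3-10% mezzanine tranche, long $10bn of a 30-100%
# super senior. All numbers invented for illustration.
short_mezz, long_senior = 2.0, 10.0                # $bn notionals
base_rho = 0.20                                    # starting implied correlation
mezz0 = tranche_el(base_rho, 0.03, 0.10)
senior0 = tranche_el(base_rho, 0.30, 1.00)

for rho in [0.20, 0.40, 0.60, 0.80, 0.95]:
    # Short mezz gains if mezz expected loss rises (loses if it falls);
    # long senior loses as senior expected loss rises.
    pnl = (short_mezz * (tranche_el(rho, 0.03, 0.10) - mezz0)
           - long_senior * (tranche_el(rho, 0.30, 1.00) - senior0))
    print(f"implied corr {rho:.2f}: mark-to-model P&L ≈ {pnl:+.2f} $bn")
```

Under these toy numbers the book bleeds on both legs as implied correlation climbs: expected loss migrates away from the mezzanine (so the short stops paying) and into the super senior (so the "hedge" turns into a long subprime position), the swing from short to flat to long that Kelleher described.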

The problem was in the modeling of not only implied correlation and negative convexity, but the degree of reliance on derivative measures of financial parameters. As illiquidity pushed out credit spreads and Gaussian copula-based models interpreted those moves as rising correlation, the negative convexity feedback loop became a blind spot: Morgan Stanley likely never modeled "irrational" fear driving illiquidity as an input. As implied correlation levels rose, the super senior began to take on the characteristics of a "long" trade in subprime, a truly bad place to be in the latter half of 2007.

Indeed, by March 2008, correlation pricing across the entire market was yielding nonsensical outputs, including dealers quoting correlations greater than 100%. Such an outcome is mathematically impossible and meaningless, but models are a very imperfect means of interpreting and predicting real market conditions. Modeled assumptions sometimes leave out too much information in the vanity of statistical confidence (pun intended).

The econometric models that central banks use are insanely complex and stunningly elegant, some of the best theoretical work in any discipline, but there is only so far the math can take them. Complex systems do not suffer assumptions, particularly when pushed into criticality. What some models see as random error terms are human markets reacting in simply unpredictable fashion. The Bank of Japan thinks it can measure how to get from A to B, but markets often have different ideas because they are never limited, despite central planners’ best efforts, by assumptions and shortcuts.

 
