Philip M Halperin - Scribblings

  Thoughts on the Usefulness of VAR

  The Question We Should be Asking

One of the many imponderables surrounding the appearance of VAR is the strength of the opinions, both pro and con, that it has elicited. This is partly a reaction to the perceived establishment of a paradigm for risk measurement.

After all, it has all happened so fast:

Five years ago, it was a rare institution that had a sense of quantitative global risk measurement, let alone one utilising a common stochastic metric. Today, Risk Management departments abound, calibrating, validating, correlating, optimising, allocating, and generally limiting traders' freedom in ways unfathomable to the layman. With this emergence has come a scientific-sounding management-science/financial-engineering/multivariate-statistical jargon and a common set of apparently arcane techniques, including "historical simulation", "correlation matrices", "goodness-of-fit", "Monte-Carlo simulation", "back-testing", "stress-testing", and "confidence-intervals", all of which find their way into the grand paradigmatic, regulator-mandated, acronymic and oh-so-au-courant "Value at Risk" or VAR hyper-technique.

Now, it is natural that so rapid an ascension of technique, together with the establishment of a new and powerful department of technocrats, will raise fears, along with the economic power struggles that ensue. And so there is a proper backlash, led by Nassim Taleb (intellectual gadfly extraordinaire), with a counter-backlash from Philippe Jorion (defender of the VAR proto-paradigm) to grace the literature.

 

Where do I come out in this debate? I have sat on both sides of the trader/risk manager divide, and have some sympathy for the extreme points of view of both. Clearly VAR qua VAR has its limitations. These have been enumerated far more eloquently than I could manage in a short article, and they become ever more apparent with both the increase in exaggerated claims made for the technique and its spread (owing to the availability of cheap computing firepower) beyond the domain of practitioners (both former traders and theoreticians) who truly understand its limits. Just as clearly, having a technique that utilises a statistical metric, however flawed it may be in the minutiae of implementation, is a marked improvement over the previous graspings for a solution, which included lot limits, margin-based limits, dollar metrics, and such abominations as absolute percent-move limits applied across markets that varied widely in volatility.

At the same time, I feel that the debate is somewhat myopic, for the real question is, "How should we be measuring and controlling risk?" In the context of that wider question, VAR plays a significant but very limited role. It is a useful tool, but by no means a complete solution.

Marc Lore (co-founder and president of GARP) recently expressed the opinion that the art of risk management is at roughly the same stage of development as accountancy before the publication of accepted standards (presumably such as those of FASB). I think the accounting analogy is an excellent one, and, at the risk of beating a metaphor to death, I will develop my argument with reference to it.

When one looks at an accounting report, there are generally two or three major items: the balance sheet, the income statement, and a sources and uses of funds report. Of these, probably the most central summary is the balance sheet, which can be pored over and analysed, and in turn can produce no end of insightful ratios and measures that help our understanding of the financial position of the concern. But the balance sheet does not appear alone; a good set of financial reports will generally give a prior balance sheet for comparison. Moreover, the income statement (or statement of profit and loss) will summarise in a standard fashion the *process* that transformed the prior balance sheet into the current one. Finally, the sources and uses of funds statement will attempt to show the cash process that parallels the accrual of income. These supplementary statements, the income statement and the sources and uses statement, taken together with the two periods' balance sheets, effectively try to communicate not just the current financial situation of the company, but also the process of making money in which the company is engaged; they communicate not just a snapshot in time, but movement over time.

Extending the analogy, the VAR approach is equivalent to the creation of a consolidated "risk balance sheet". One can argue whether the principles of consolidation are adequate (they probably are not, given the inherent problems with correlational assumptions, among other things), but subsidiary "risk accounts" can be produced in sufficient detail for the inquiring mind to overcome that limitation. The problem is that there is no general risk accounting for the process that generates the risk snapshot being measured. Just as any serious financial analyst would need to see the income and sources and uses statements to measure a company's prospects, one would think that a serious risk management team would need to analyse the post-hoc realisation of risk through risk/return analysis of the various risk units that compose the consolidated and subsidiary VAR statements. Accordingly, methodologies need to be established that can produce, analogously, something like a risk-income statement, and perhaps a sources-and-uses-of-risk statement.
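To make the idea concrete, here is a minimal sketch (in Python; the names and the decomposition itself are entirely my own hypothetical illustration, not an established standard) of what one line of such a risk-income statement might report: the change in a parametric VAR between two balance-sheet dates, attributed first to changes in positions and then to changes in market parameters.

    import numpy as np

    def risk_income_statement(w0, w1, cov0, cov1, z=2.33):
        """A toy 'risk income statement': decompose the period's change in
        parametric VAR into a position effect (positions moved, market held
        at opening values) and a market effect (market moved, positions held
        at closing values). Hypothetical decomposition, for illustration."""
        def var(w, cov):
            return z * np.sqrt(w @ cov @ w)  # one-day parametric VAR
        opening, closing = var(w0, cov0), var(w1, cov1)
        position_effect = var(w1, cov0) - opening
        market_effect = closing - var(w1, cov0)
        return {"opening VAR": opening, "position effect": position_effect,
                "market effect": market_effect, "closing VAR": closing}

Even so crude a statement begins to speak of process rather than snapshot: it says where the period's change in risk came from, much as an income statement says where the period's change in equity came from.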

 

There is another limitation to the VAR methodology --- the implicit assumption that, in order to measure risk adequately, it is somehow sufficient to measure positions and markets. This measurement, together with adequate statistical inference, is generally implicitly assumed to be necessary and sufficient to gauge the risk taken on by the operating unit, or the company. Yet the very real information generated by the historical experience of human beings originating or calling loans, buying and selling spreads, or dynamically hedging their exotics book does not generally factor into risk analysis as commonly practiced. Any experienced trading manager would recognise this omission as sheer folly. One could easily demonstrate vastly different post-hoc results generated by two different individuals managing identical ex-ante portfolios.
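A toy simulation (mine alone; the numbers model nothing in particular) makes the point: give two traders an identical position and identical price paths, differing only in whether they cut losses, and the realised distributions diverge sharply.

    import numpy as np

    rng = np.random.default_rng(0)
    paths = rng.normal(0.0, 1.0, size=(10_000, 250))  # a year of daily P/L on one identical position

    def realised_pnl(path, stop):
        """Hold the identical position, but step aside for the year the
        first time cumulative P/L breaches the trader's stop level."""
        pnl = np.cumsum(path)
        breach = np.flatnonzero(pnl < -stop)
        return pnl[breach[0]] if breach.size else pnl[-1]

    for label, stop in [("disciplined (stop at -5)", 5.0), ("no stop at all", np.inf)]:
        results = np.array([realised_pnl(p, stop) for p in paths])
        print(f"{label}: mean {results.mean():+.2f}, 5th percentile {np.percentile(results, 5):+.2f}")

Identical ex-ante portfolios, identical markets; only the human differs, and so do the post-hoc results.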

Analogously, any experienced credit officer will tell you that the first principle of his work is to know the counterparty. The financial accounting measurements (including income and sources and uses statements) will provide an indication of creditworthiness, but cannot suffice for the credit decision. One could easily demonstrate vastly different results generated by two different concerns manifesting identical ex-ante balance sheets.

It cannot be denied that the individual, desk, or operating unit is as vital a factor in shaping the profile of risk and reward as the instruments traded, the loans made, or the capital invested. Yet the range of the risk-measurement function is determined without including this vital variable in its domain. And, curiously, discussion of this kind of grounded process control is essentially absent from the ongoing debate about VAR.

To return for a moment to the accounting analogy (already stretched to the limit of the reader's patience above), another use for the income statement, besides the important informational benefit it provides on its own, is the implicit calibration it gives to a pair of successive balance sheets. This allows us to reconcile snapshots of a position at a point in time with what actually took place over a period of time.

The process of reconciling P/L to the balance sheet imposes (enforces?) a certain discipline in the kingdom of accounting, a certain tie-back to the real world of moving time, which as yet has no parallel in the realm of risk. That VAR, currently on trial in the high court of the realm of risk, is essentially a castle in the air is implicitly recognised in the increasing regulatory requirements for testing under CAD2, including a mandate that ultimately contemplates "dirty" results testing. This technique, essentially comparing predicted variance with subsequently realised P/L variance, is intended to bring a certain discipline to the generation of VAR numbers, to help place a foundation in the ground under the castle in the air. This groundedness should be welcomed by practitioner and quant alike, as it at last lends a modicum of scientific method to the proceedings.
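A minimal sketch of what such results testing looks like in practice, assuming only a daily series of realised P/L and the corresponding one-day-ahead VAR forecasts (the function and its names are mine, for illustration; regulatory tests prescribe their own forms):

    import numpy as np
    from scipy.stats import binom

    def backtest_var(pnl, var_forecasts, coverage=0.99):
        """Count the days on which realised losses breached the previous
        day's VAR forecast, and ask (via a two-sided binomial test) whether
        that count is plausible under the claimed coverage level."""
        pnl = np.asarray(pnl)
        var_forecasts = np.asarray(var_forecasts)  # VAR reported as a positive number
        exceptions = int(np.sum(pnl < -var_forecasts))
        n, p = len(pnl), 1.0 - coverage
        p_value = 2 * min(binom.cdf(exceptions, n, p), binom.sf(exceptions - 1, n, p))
        return exceptions, min(p_value, 1.0)

Too many exceptions and the castle is exposed; suspiciously few, and the VAR numbers are likely padded. Either way, the model meets the ground.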

Now the most avid proponents of VAR as an ideal might object to this line of argument, and claim that historical backtesting of price distributions is an adequate empirical control for the theory. My reply to this assertion is that it begs the question posed earlier.

For the essence of the concern expressed about VAR is not whether it is intellectually elegant, internally consistent with its assumptions, or even whether the ultimate VAR technique, once achieved, will prove to be a uniformly most powerful, robust, and unbiased estimator of the population of market price effects upon a portfolio. These are valid concerns for the academic alone (along with whether the development of such a refinement will gain the developer sufficient minimally publishable units to achieve tenure). No, the essence of the concern is: Is this an effective tool for the measurement and control of real risk? In the real world of trading, this risk has at least three elements: the market, the portfolio, and the trader. VAR as currently constituted does a better job of measuring the first two than anything previous. This would hold a fortiori if we had a perfect method for forecasting price, rate, volatility, and correlation distributions, and perfect models to measure those distributions' effect upon the instruments that compose a portfolio. And we are getting there. But the third element, the trader, is missing.

Looking upon the question from another angle, the process of risk control can be viewed as a process of statistical inference, with the VAR calculation as an element in that process. I contend that this process is essentially Bayesian in nature, and as such is dependent on conditional probabilities. An element in the process of Bayesian estimation is the incorporation of realised results to systematically modify our prior estimates. This is not being done at present for the risk-generating function --- a compound of market behaviour, trader behaviour, and their effects upon a portfolio, which is determinate only at a moment in time, but is in itself a vector of random variables conditional upon trader behaviour.
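As a minimal illustration of the kind of updating I have in mind (a conjugate Beta-Binomial sketch of my own devising, not a prescription), one can treat the true exceedance rate of a VAR model (itself shaped by trader behaviour) as an unknown quantity, assign it a prior, and let each period's realised exceptions revise that belief:

    def update_exceedance_belief(alpha, beta, exceptions, days):
        """Conjugate Beta-Binomial update: a Beta(alpha, beta) prior on the
        model's true exceedance rate, revised by the realised count of
        exceptions over a period of trading days."""
        alpha_post = alpha + exceptions
        beta_post = beta + (days - exceptions)
        return alpha_post, beta_post, alpha_post / (alpha_post + beta_post)

    # A prior centred on the nominal 1% rate, then 6 exceptions in 250 days:
    # the posterior mean rises to 7/350 = 2%, and the next period starts from there.
    print(update_exceedance_belief(1.0, 99.0, 6, 250))

The mechanics are trivial; it is the discipline of actually feeding realised results back into the prior that is missing.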

So we actually have a long way to go. VAR is a good first step, but it is only a first step. To return to the accounting analogy, I would contend that we are not yet even in the world of Lombard accounting before the development of Generally Accepted Standards. Rather, ours is a world in which we have a concept of what the risk balance sheet should look like, but it is divorced from the daily transaction journal, and we have yet but a dim inkling that a risk income statement could exist, far less how to derive it or how to balance the books once we have done so.

In some of my published work, I demonstrate one such set of process controls: the discipline of measurement of post-hoc, or realised, risk. The work I have done at REFCO as trader, trading manager, and risk manager has led to the development of an (admittedly limited) methodology of graphical post-hoc returns analysis and trading-signature analysis. Some of this work appears on my website, and could lead, with further development from those more capable than I, to a more robust method of risk-process reporting and the actual strategic management of risk.

 

©Copyright 1998

 
