Quants -- Alive and Kicking

Discussion in 'Wall St. News' started by marketsurfer, Dec 23, 2008.

  1. at least in gambling you know the distribution ex-ante!
     
    #71     Jan 12, 2009
  2. The battle continues -- anyone attending??

    http://www.battleofthequants.com/
    To VaR is Human

    "We would like society to lock up quantitative risk managers before they cause more damage"
    -Nassim Taleb

    "If we want to describe how human beings actually make their choices in the presence of uncertainty we had better make sure that we truly understand how probabilities are actually used in decision making -- rather than how stylized hyperrational agents endowed with perfect God-given statistical information would reach these decisions."
    -Riccardo Rebonato, "Plight of the Fortune Tellers"

    Risk management is relatively simple in essence: what is the probability of an event occurring, and if it happens, how badly will it hurt? This simple statement becomes complicated very quickly: how do we estimate the probability, and how do we estimate the loss? For the probabilities, man has turned to statistics to make estimates. However, the human brain is notoriously poor at distinguishing among probabilities of less than 1%. For the size of the loss, man has turned to valuation models, some of which are well entrenched and are taught as gospel at many MBA and CFA programs.*
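
    (To make the two-question framing concrete, here is a minimal sketch in Python; the scenario names, probabilities, and loss figures are all made up for illustration:)

        # Hypothetical scenarios: (name, estimated probability, estimated loss in $).
        scenarios = [
            ("mild drawdown",   0.200,   1_000_000),
            ("severe drawdown", 0.050,  10_000_000),
            ("crisis",          0.005, 100_000_000),  # below the 1% our brains handle poorly
        ]

        # Expected loss is the probability-weighted sum of the individual losses.
        expected_loss = sum(p * loss for _, p, loss in scenarios)
        print(f"Expected loss: ${expected_loss:,.0f}")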

    For both items, understanding the limits of the theory is an important part of its usage. Engineers, who often use the same techniques, usually do understand those limits, and they still build the highways that fulfill our utility functions, with the full understanding that these may collapse under stress (as some did in the Northridge earthquake). Do the engineers get blamed for lack of insight when a structure collapses? No, because it was understood that structures can fail. One cannot help but think of the storied bridge (though I can't recall the name of the bridge) whose engineer had to stand underneath it at its opening, so as to give the first folks to cross confidence. But what guarantee did they have that it would not fail? Engineers hold the social responsibility to design structures that are well thought out and safe, and yet those structures do fail.**

    In the end, financial engineering is in its infancy (roughly 50 years, compared with some 3,000 years of traditional engineering) and has much to learn from its mistakes. In traditional engineering, most structures are overdesigned by a factor of 3, a number reached through many years of experience and failure. Why is the factor of safety set at three (even though the actual overdesign number is a multiple of that)? No one knows; it is assumed that the rule of thumb developed as expertise accumulated over the years. In financial engineering, concepts are often not well thought out, perhaps because the end user is some unknown financial buyer half a world away, and not a human being who could get hurt if the structure collapsed. If nothing else, the current markets bring these creations full circle: the financial engineer may be forced to accept social responsibility, since a neighbor or friend can be one of the individuals who are harmed.

    VaR has become popular because it elegantly helps predict loss within a probabilistic tolerance. The moving parts (data and models) are thought to be well understood, yet much room exists for abuse and unwarranted confidence. In his article,*** Chris Finger aptly points to the complacency of the banks that did not become suspicious when their models failed to fail (i.e., went several years without a single exceedance of expectations). This complacency illustrates part of the problem: an unwillingness to ask questions when no limit is breached and profits are rolling in. And when expectations are indeed exceeded, i.e., the 1% in question, it is often forgotten that the outcome could be 2 standard deviations away, or perhaps 15. The reality is that these dizzyingly rare events happen only too often.
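
    (Finger's "failed to fail" point can be made concrete with a toy backtest; everything below is simulated and illustrative, not any real bank's P&L:)

        import numpy as np

        rng = np.random.default_rng(seed=1)
        pnl = rng.normal(loc=0, scale=1_000_000, size=1_000)  # ~4 years of fake daily P&L

        # Estimate 99% one-day historical VaR from the first year of data...
        estimation, live = pnl[:250], pnl[250:]
        var_99 = -np.percentile(estimation, 1)

        # ...then count how often later losses actually exceed it.
        breaches = int(np.sum(live < -var_99))
        expected = 0.01 * len(live)
        print(f"99% VaR: {var_99:,.0f}; breaches: {breaches} (about {expected:.0f} expected)")
        # Far fewer breaches than expected, for years on end, should arouse
        # suspicion rather than comfort -- the model is failing to fail.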

    Should this make us question the data and the models? Absolutely. In fact, one should question the judgment of the users of these models (most often risk managers, since the original designers of these structures have long since walked off into some more esoteric sunset). The critics of the models often fail to recognize that humans, not systems, make decisions, and humans are prone to error. One of the problems with financial engineering is that there are no physical limits on new creations; a credit default swap can be replicated over and over (to $50 trillion, apparently). In real life, a design is bound by gravity, cost, and many other practical constraints.

    Risk managers who are aware of the realities of life constantly strive to improve their arsenal of tools. For instance, they try to improve their understanding of reality, so they can better grasp the behavior of their models. In this context, we hope that Dr. Taleb sheds some light (sooner rather than later) on how to use Mandelbrot-like observations for practical risk management (e.g., better prediction of rare events).****
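
    (One Mandelbrot-flavored observation is easy to demonstrate: under fat tails, "impossible" moves are merely rare. The sketch below compares tail probabilities under a normal distribution and under a Student-t with 3 degrees of freedom; the choice of 3 is arbitrary and purely illustrative:)

        import numpy as np
        from scipy.stats import norm, t

        df = 3                            # heavy-tailed Student-t, illustrative choice
        t_scale = np.sqrt(df / (df - 2))  # std. dev. of a t(3) variable, for a fair comparison

        for k in (2, 5, 10, 15):
            p_normal = norm.sf(k)             # P(move > k std. devs.) under a normal
            p_fat    = t.sf(k * t_scale, df)  # same question under the fat-tailed t(3)
            print(f"{k:>2} std. devs.: normal {p_normal:.1e}, fat-tailed {p_fat:.1e}")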

    Or they try to improve their modeling ability (sometimes too exclusively). The reality is that many of these instruments, including the credit default swap, were created to meet a need in the markets. The models evolve over time and become more robust. However, our understanding is bound by a natural limit: the closest abstraction of reality is reality itself, which means that engineers, whether of the physical or the financial type, must make assumptions in order to proceed (and then proceed to forget or ignore these Achilles' heels, which usually end up causing the problem).

    Or they try to protect themselves by complementing their arsenal of tools with other, less stochastic and more practical methods: stress tests, sensitivity analysis, exposure monitoring, and hedging.
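
    (For the non-stochastic tools, a bump-and-revalue sketch shows the basic mechanics; the pricing function and shock sizes below are toy stand-ins, not a real book:)

        def portfolio_value(spot: float, vol: float) -> float:
            """Toy stand-in; a real book would revalue every position."""
            return 1_000 * spot - 50_000 * vol

        base = portfolio_value(spot=100.0, vol=0.20)

        # Sensitivity analysis: bump one input at a time and revalue.
        delta = portfolio_value(101.0, 0.20) - base   # +1 point of spot
        vega  = portfolio_value(100.0, 0.21) - base   # +1 vol point

        # Stress test: a large joint shock instead of a small bump.
        crash = portfolio_value(spot=80.0, vol=0.60) - base
        print(f"delta: {delta:,.0f}  vega: {vega:,.0f}  crash P&L: {crash:,.0f}")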

    The practitioner can use all of the above tools to measure and manage risk, but needs judgment to understand when each one is appropriate. In the current environment, for instance, historical stresses are not helpful, sensitivities are incomplete, limits are breached too often and across too many assets, and hedges (where in place) often break down.

    Where does this leave us? A few suggestions: 1) research is important and must go on, to improve our understanding of models, tail behavior, and new domains of risk (like liquidity); 2) while knowledge is cumulative, wisdom is not; since humans make the decisions, they should be better trained (borrowing, for instance, some of the physical engineers' awareness and sense of responsibility); 3) as an upshot, risk management must seize the moment to establish itself as a real voice, and not a toothless monitor of events.

    * There is a strong movement that questions the validity of the models and even suggests stripping the Nobel Prizes that were awarded for them. See "Professor Wants Pair's 1997 Nobel Revoked," NPR, Mon, 13 Oct 2008, 5:00 AM PDT.
    ** A great treatise on the weaknesses inherent in engineering is "To Engineer Is Human: The Role of Failure in Successful Design" by Henry Petroski.
    *** Financial Times article entitled "Liquidity forecasting is the next challenge."
    **** For a basic review of these advances, please refer to Nassim Taleb's essay "The Fourth Quadrant: A Map of the Limits of Statistics."
     
    #72     Jan 12, 2009
  3. Correction: the number for November is not wrong; it is actually heavily influenced by Madoff funds... Once again, proof that HF indices are anything but reliable!
     
    #73     Jan 13, 2009