Lie Groups and underlying trading

Discussion in 'Options' started by nitro, Jan 6, 2010.

  1. nitro

    A Lie group is a mathematical object of tremendous importance to modern mathematics.

    http://en.wikipedia.org/wiki/Lie_group

    I have not seen many applications of them to finance, though. Perhaps that is because Lie groups deal with continuous processes, as opposed to, e.g., stock prices, which are discrete and contain jumps.

    But there is a way to salvage this. If instead we deal with probabilities in the same way that Quantum Mechanics treats position and momentum as probability [wave] functions, continuity is reintroduced and Lie theory can be brought to bear. Therefore, it seems that even for the underlying trader, a possibly more coherent place to do analysis is in the option domain.

    I have always thought that the notion of distance in the price domain using the standard Euclidean distance was flawed when it comes to stock prices (correlation, etc.). Instead, it seems that a more natural object is a complex Lie group, i.e., the p-adic [Lie] group of probabilities. Distance is then not the standard notion of price nearness, but something else; in the options domain, it could be a constructed object completely unrelated to price distance. This would have obvious implications for portfolio theory, as one could then better delta-gamma-vega hedge a portfolio of many different instruments, imo. "Rotating" a position in one instrument into another would be trivial if you had the right Lie group, and hence you would know the risk of one in terms of the other (see the sketch below).
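    To make that concrete, here is a minimal Python sketch. It is illustrative only: the Hellinger distance stands in for the p-adic construction above, as an off-the-shelf non-Euclidean metric on probability distributions, and an SO(3) rotation acting on a made-up (delta, gamma, vega) vector stands in for the "right" Lie group, which is exactly the unknown here. All numbers are hypothetical.

    ```python
    import numpy as np

    # Illustrative stand-in metric: Hellinger distance between discrete
    # probability distributions (NOT the proposed p-adic construction).
    def hellinger(p, q):
        p, q = np.asarray(p, float), np.asarray(q, float)
        return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

    # Two hypothetical risk-neutral densities implied by option chains.
    pdf_a = np.array([0.05, 0.20, 0.50, 0.20, 0.05])
    pdf_b = np.array([0.10, 0.25, 0.30, 0.25, 0.10])
    print("probability-space distance:", hellinger(pdf_a, pdf_b))

    # "Rotating" a (delta, gamma, vega) exposure with an SO(3) element,
    # a toy stand-in for re-expressing one instrument's risk in another's terms.
    theta = np.pi / 6                        # hypothetical rotation angle
    R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                  [np.sin(theta),  np.cos(theta), 0.0],
                  [0.0,            0.0,           1.0]])
    risk_a = np.array([0.45, 0.02, 0.15])    # hypothetical book exposures
    print("rotated exposures:", R @ risk_a)
    # Rotations preserve the norm: the "size" of the risk is invariant.
    print("norm preserved:", np.isclose(np.linalg.norm(risk_a),
                                        np.linalg.norm(R @ risk_a)))
    ```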
     
  2. LTCM revisited. :cool:
     
  3. It seems to me that the real issue is finding a good metric, not necessarily invoking Lie groups (or other abstract constructs).

    Personally, I think the metric is different depending on the type of system (time frame, style, etc.) you're looking at, and "finding" the metric is one of the key aspects of system development.

    Thank you for sharing a thought-provoking idea!
     
  4. nitro

    On the contrary, this aims at higher-resolution accuracy for cross-instrument risk. And it has been said a million times: at LTCM it was not the models that went bad, it was the immense leverage that did not allow them time to recover from a massively unexpected standard-deviation event. The people that bailed them out made out like bandits when their positions mean-reverted. The leverage used was not the fault of the model, but of the managers' emotion when they assumed one-hundred-year floods come, on average, every one hundred years.

    You guys continue to harp on the one instance where a quantitative approach went bad, but ignore the hundreds if not thousands of cases where money is made, literally on a daily basis, with scalpel-like precision using these quantitative tools.

    It gets really monotonous and tiring actually.
     
  5. Okay. I like "rocket science" as much as the next guy. Can they consistently outperform the S&P-500? Do they only make money during a bull market? Do they rely on being able to fleece an ignorant group of institutional investors? :cool:
     
  6. nitro

    Dude,

    This is probably the most theoretical post I have ever made in my life regarding markets. I don't know.

    But you are missing a main point. I am thinking this should be used not to price and seek under- or over-valued derivatives, but to manage their risk relative to each other once they are already in your book. Those are not the same thing. For example, many (most?) option practitioners use one model to price options and another to manage book risk. This would allow even more complex instruments in a book to be priced, assuming it works.

    The current method is to use simulation, e.g., Monte Carlo, to manage complex books composed of many different derivatives. Crude at best, because there is no real underlying theory.
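    For reference, the kind of Monte Carlo book-risk calculation meant here can be sketched in a few lines of Python. The two-asset book, its return covariance, and the confidence level are all made-up numbers for illustration, not anyone's actual method:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical two-asset book: position values and an assumed
    # daily return covariance. Every number here is made up.
    positions = np.array([1_000_000.0, -500_000.0])   # long A, short B
    cov = np.array([[0.0004, 0.0002],
                    [0.0002, 0.0009]])                # daily return covariance

    # Simulate 100,000 joint daily returns and revalue the book on each path.
    returns = rng.multivariate_normal(mean=[0.0, 0.0], cov=cov, size=100_000)
    pnl = returns @ positions

    # 99% one-day value-at-risk: the loss exceeded on 1% of paths.
    var_99 = -np.percentile(pnl, 1)
    print(f"99% one-day VaR: ${var_99:,.0f}")
    ```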
     
    Not quite sure about their proprietary models. But I think Merton and Scholes could have done better, even with massive leverage, had they not used the standard normal. Nothing in the real world is quite "normal".
     
  8. nitro

    Look at it this way: if you are using 25:1 leverage, a 4% adverse move against you will blow you out.

    When LTCM was "ok", they had ~$2.1B of equity leveraged to $100B in their book. That is an astonishing ~50:1 leverage. Even the smallest hiccup would devastate such a book. The precision and confidence they had in their models must have come from doing these trades so many times, so often, that it gave them a false sense of security. When the currency crisis came, their "cash" went to about $400M in assets, at which point their leverage hit an incredible 250:1. At that point they got a margin call they could not meet.

    You could argue, as you do, that the currency crisis was a non-normal event whose probability would have been much higher under a fatter-tailed distribution than under the normal. I claim that simply keeping your leverage to no more than 20:1 would have allowed them to survive all but the most extreme events. That is the defensive way to look at it: driving with a foot on the brake. Using a different probability distribution is a psychologically different view. It says: hey, as long as we know the risk is there, give us the chance to press the accelerator hard when we want, and to take our foot off when we want to scale back. But we don't use brakes.
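    The arithmetic behind this fits in a few lines. The asset and equity figures below are just the approximate ones quoted above, and the point is that the move that wipes you out is roughly 1/leverage:

    ```python
    # The adverse move that wipes out equity is roughly 1/leverage:
    # at 25:1, a ~4% move erases 100% of capital, matching the figure above.
    for assets, equity in [(100e9, 2.1e9),   # LTCM "ok": ~48:1
                           (100e9, 0.4e9),   # post-crisis: 250:1
                           (100e9, 5.0e9)]:  # a defensive 20:1 stance
        leverage = assets / equity
        wipeout = 1.0 / leverage
        print(f"{leverage:6.1f}:1 leverage -> wiped out by a "
              f"{wipeout:.2%} adverse move")
    ```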
     
  9. sjfan

    As a practitioner of quant magic, I can say this is absolutely untrue. There's a deep theoretical and empirical foundation for how Monte Carlo methods are used (correctly; there are always people who don't know what they are doing). The volume of research in the last 30 years is VAST. Naturally, there are limitations due to simplified (and unrealistic) assumptions, but we are aware of them. They are sometimes ignored (to the detriment of the users), but that's another matter altogether.

    As for your main post about the Lie group - I don't know enough about it to form an opinion, but I think it'll be tough to formulate a usable general or partial equilibrium theory around it - or even a useful and workable stochastic process as a starting point for one.

    I'm going to completely ignore the "can it beat spx" comment. 99% of economics and theoretical finance isn't concerned with beating the market (except on the topic of whether markets can be beaten in the long run at all).

     
  10. nitro

    Sorry Vikana, I must be going blind - I missed your response.

    We are in agreement that the metric is the key. But think on this, and allow me an analogy. Einstein's General Theory of Relativity says that matter affects the shape of space, and space in turn affects matter by telling it what a "straight line" is. The metric tensor describes this. Later it was discovered by Emmy Noether (imo the greatest woman mathematician in history by a mile; Noetherian rings are incredibly important in modern algebra)

    http://en.wikipedia.org/wiki/Emmy_Noether

    that at the heart of all these arguments are arguments for invariance. So Special Relativity holds because flat spacetime has translational and Lorentz symmetry, i.e., the Poincaré group. General Relativity has the de Sitter group as its invariance group in some formulations. The strong nuclear force has gauge invariance under SU(3), the electroweak force under SU(2) × U(1), and electromagnetism under U(1). These are all Lie groups! In unification physics, if we could find the right Lie group describing how all the symmetries of the four forces fit together, we would achieve a complete understanding of all fundamental physics.

    So if we take this analogy over to managing book risk with many different types of underlyings (different "forces" affecting our risk), we see that in order for them to fit together under one big symmetry group, we need the equivalent "book risk" Lie group that unifies the forces affecting risk in that book. That is the basic motivation for this idea.
     