Efficient Use of Capital, Position Sizing, Model Allocations

Discussion in 'Risk Management' started by jtrader33, Feb 19, 2010.

  1. Top, top thread guys.
    I am following with much interest.
     
    #11     Feb 21, 2010
  2. Good info - thanks for sharing that!

    I'm still reading through Vince's article about the Leverage Space Model.

    Is anyone here using Optimal f? If so, can we go through a real-world portfolio example? Has anyone done, or can anyone do, a real-world example using Vince's LS Model? Do you guys think it's worth it to fully explore this concept?

    I need to do some more work to figure out the details of what he's getting at, but it seems the correlation issue is a big one (to me at least)... I don't yet understand how he has "eliminated the fallacy and danger of correlation" - note I haven't read this content carefully enough yet - maybe someone can explain this elimination succinctly?

    Mike
     
    #12     Feb 22, 2010
  3. He gave this example in the other thread:

    I KNOW that a matrix of probabilities of cross scenarios has less information loss than the simple, single metric of correlation -- particularly in the tails. I have written about this at length, performed ample studies on it, and have experienced the benefits and consequences of both, firsthand.

    Suppose I have 2 components I am looking to allocate among. Say, for 98 periods one component loses 1 unit while the other gains 2 units (the subsequent period, the reverse occurs: the former now gains 2 units, the latter loses 1 unit. They keep flipping like this, with a net gain of 1 unit, for 98 periods. If this were the only data, our correlation coefficient would be -1.0). Then there is the one period where they both lose 10, simultaneously, and the 100th period where they both gain 10. My correlation coefficient in this case, over the 100 periods, is -.04753. That single parameter would be used to describe the relationship of these two streams -- yet there is a lot of information going on in there -- some really BAD stuff too, on that solitary period of -10,-10.

    Contrast using this single metric with the notion of using a matrix of joint probabilities:
    p      A    B
    .01   -10  -10
    0     -10   -1
    0     -10    2
    0     -10   10
    0      -1  -10
    0      -1   -1
    .49    -1    2
    0      -1   10
    0       2  -10
    .49     2   -1
    0       2    2
    0       2   10
    0      10  -10
    0      10   -1
    0      10    2
    .01    10   10

    Which has more information? Which is more valuable on the disaster days? This is only 100 days. The outliers in real life tend to occur with probability far lower than .01, so your correlation coefficient, r, would typically be far more negative than shown here. (Incidentally, this matrix is the only thing one needs to gather to employ a leverage-space type model.)

    ------------------------
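
    To see the difference he's describing in concrete terms, here's a quick sketch I put together (my own code, nothing from the book): it rebuilds that 100-period example, computes the single correlation number, and tabulates the joint outcomes.

    from collections import Counter
    import numpy as np

    # Rebuild the 100-period example: 98 alternating periods plus the two extremes.
    a, b = [], []
    for i in range(98):
        if i % 2 == 0:
            a.append(-1); b.append(2)
        else:
            a.append(2); b.append(-1)
    a += [-10, 10]   # the one period where both lose 10...
    b += [-10, 10]   # ...and the 100th period where both gain 10

    a, b = np.array(a, float), np.array(b, float)
    print("correlation r =", round(np.corrcoef(a, b)[0, 1], 5))   # ~ -0.0475

    # Joint probability table: one row per (A, B) outcome pair actually observed.
    joint = Counter(zip(a, b))
    for (ai, bi), n in sorted(joint.items()):
        print(f"p={n / len(a):.2f}  A={ai:+.0f}  B={bi:+.0f}")

    The single r number comes out around -0.048, while the table keeps the two +/-10 days explicit.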

    I ordered the book last night...should be through it by the end of the week. If someone doesn't come in with answers before then, I should be able to give you a better description, so you can decide for yourself if it's worth investing the time. If there's considerable interest and we work up an example, who knows, maybe we can convince him to comment in the thread (long shot though; appearances like his last one seem pretty uncommon).
     
    #13     Feb 22, 2010
  4. Hugin

    We’re currently using LSP so I might have some input. First, some clarifications on what LSP does. The goal of LSP is to find the allocation among assets that gives maximum growth of capital given a drawdown constraint. It is not a mean-variance method (Ralph Vince seems to hate those). It uses drawdown as the risk measure and it does not use correlation/covariance. Instead, it looks at the historical co-movements of the assets. This is somewhat similar, at least conceptually, to how you calculate value at risk for a portfolio using historical prices.

    One problem with correlation/covariance is that it looks at how assets move “on average”, whereas co-movement takes into account how assets have moved together, including when conditions are extreme. Obviously we still have the problem of how well history describes the future, but this will probably always be the case.

    The drawdown constraint is stated as “less than P% probability of reaching a drawdown larger than DD% during N periods”. Given a certain allocation, this is tested by running a scenario simulation for N periods into the future. If the number of assets is large and you simulate more than a few periods, this requires a lot of calculations. It is somewhat similar to an option pricing lattice, with the difference that the branching factor in each node is larger (>10 in many cases).
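
    A very stripped-down sketch of that test (not our production code - the scenario table is just the toy two-asset example quoted earlier, and the percentage scaling is an assumption for illustration):

    import numpy as np

    # Joint scenario table: per-period outcomes for components A and B, in percent,
    # with their joint probabilities.
    outcomes = np.array([[-10., -10.], [-1., 2.], [2., -1.], [10., 10.]])
    probs = np.array([0.01, 0.49, 0.49, 0.01])

    def prob_drawdown(alloc, dd_limit=0.25, n_periods=12, n_sims=20000, seed=0):
        """Estimate P(max drawdown > dd_limit) over n_periods for a given allocation
        (fraction of equity exposed to each component)."""
        rng = np.random.default_rng(seed)
        breaches = 0
        for _ in range(n_sims):
            equity, peak = 1.0, 1.0
            for i in rng.choice(len(probs), size=n_periods, p=probs):
                equity *= 1.0 + np.dot(alloc, outcomes[i]) / 100.0
                peak = max(peak, equity)
                if 1.0 - equity / peak > dd_limit:
                    breaches += 1
                    break
        return breaches / n_sims

    # "Less than 10% probability of a 25% drawdown in 12 periods" is then just:
    # prob_drawdown(candidate_allocation) < 0.10
    print(prob_drawdown(np.array([1.0, 1.0])))

    The real thing has to search over allocations and handle many more assets, which is where the lattice-style computation gets heavy.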

    We’ve been using LSP since October last year to:
    1. Handle monthly allocation between systems.
    2. Calculate hedges for some systems.
    3. Manage weekly allocation for a system based on current drawdown.
    4. Size positions within a system (future).

    So far it’s been working well, and when we simulate what it would have done for us in the past it seems to work very well (this is obviously in hindsight, since we know the systems have worked up until now).

    The main benefit of LSP is that it addresses both how to get optimal growth and how to handle risk within the same method. In LSP you set a drawdown level at which you stop trading your system, working as a “stop-loss” on the system level. And IMO using drawdown as a barrier is more natural than working with variance.

    /Hugin
     
    #14     Feb 24, 2010
    Thanks for sharing your thoughts, Hugin - your posts are always clearly expressed and of high quality. I should have been more careful with my mean-variance comment...I didn't intend to imply that the objective function/constraints of LSP were the same as M-V...but simply that they're distant cousins (imo) because of the way they fit to historical data.

    Certainly drawdown is a more appropriate risk consideration. However, I'm still a little uneasy about setting up a risk management framework with the implicit assumption that the historical joint probability distributions will be representative of the future...most especially during extreme environments. Among other things, there are relatively few extreme historical data points to analyze, raising questions about the statistical validity of the approach.

    Perhaps the best of both worlds would be to set up a framework that first addresses the four points I listed in caps in a previous post above, then within those constraints use LSP to maximize growth within a DD threshold. This would allow one to consider what they deem to be realistic 'what if' scenarios which may not be present in the historical probability matrix (e.g. an overnight market shock of -40%, a single short position being +300%, etc.), then address those risks by determining a set of constraints which would not violate a chosen DD tolerance should those unprecedented events occur. Those limits would likely be in the form of maximum net long/short exposure, maximum model allocation, and max position size. Then LSP could do its thing within these additional constraints (as well as the usual max DD%).
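
    The arithmetic I'm picturing for turning those 'what if' shocks into hard caps is nothing fancy - something like this (the shock sizes are just the hypotheticals above, not recommendations):

    # Turn hypothetical shock scenarios into exposure caps so that a breach of the
    # DD tolerance can't happen even if the shock isn't in the historical matrix.
    dd_tolerance = 0.25   # max drawdown I'm willing to accept

    shocks = {
        "overnight market shock vs. net long exposure": 0.40,   # -40% gap
        "single short position squeezed":               3.00,   # +300% against me
    }

    for name, size in shocks.items():
        cap = dd_tolerance / size
        print(f"{name}: cap at {cap:.1%} of equity")
    # -> net long exposure capped near 62.5%, any single short near 8.3%

    LSP would then optimize growth subject to both these caps and the usual max DD% constraint.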

    Having some hands-on experience with LSP, do you think such an approach is possible/practical/sensible?
     
    #15     Feb 24, 2010
  6. Hugin

    As you say above, building the scenarios from historical data is the most crucial part of the LSP method. I guess it will always be hard to create a good, statistically valid method if you have little data. In that case you probably will have to work with some parametric method with the problems that entails.

    One thing you can do, suggested by Ralph Vince, is to add some additional ”tail risk” to the scenarios. An idea is to find the optimal f* and then, as you suggest, do some tail risk tests by manipulating the tails (magnitude and/or probability), run the drawdown simulation and see what happens. In our implementation we don’t have any special tail risk analysis yet, but maybe we will.
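
    If we do add it, I imagine it would look something like this (a sketch of my reading of his suggestion, not anything from the book): fatten the probability of the worst joint scenario, renormalize, and re-run the same drawdown simulation on the stressed table.

    import numpy as np

    # Toy joint scenario table (outcomes per component, with joint probabilities).
    outcomes = np.array([[-10., -10.], [-1., 2.], [2., -1.], [10., 10.]])
    probs = np.array([0.01, 0.49, 0.49, 0.01])

    def stress_tail(probs, worst_idx=0, factor=3.0):
        """Multiply the probability of the worst joint scenario and renormalize."""
        stressed = probs.copy()
        stressed[worst_idx] *= factor
        return stressed / stressed.sum()

    stressed_probs = stress_tail(probs)
    print(stressed_probs)   # feed this into the drawdown simulation instead of probs

    If the chosen allocation still satisfies the drawdown constraint under the stressed table, that is some comfort; if not, you size down.
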
    Regarding additional constraints, we use them in our implementation. For each trading system we have a number of constraints on the system and its trades, e.g. max net/gross exposure, max (and min) position size, etc. So the additional constraints you describe can definitely be incorporated. How to do it depends on the characteristics of the trading system (e.g. whether it is long/short, type of instrument, frequency of trades).

    Earlier someone wondered whether it is worth the effort to implement LSP. If you already have a working system and feel uncertain about using historical data for position sizing, then the answer is probably ”no”. Implementing it requires quite a lot of programming and some knowledge about optimization. Moreover, you probably have to adapt the method as described in the book to work with your trading system. In our case we have modified it by making it possible to use different data for the optimization of growth and for the drawdown calculations, thereby separating optimal positioning from risk management.

    In our case the existing risk system was rather trivial (based on average exposures, VaR relative to the market, and scenario analysis) and we lacked a systematic approach to continuously control leverage and to shut down a system that loses its edge. So for us it made sense to put in the effort.

    /Hugin
     
    #16     Feb 25, 2010
  7. Thanks guys for your time and your contributions.

    I've gotten through chapter 1 and in the process I've found the following resource:

    http://r-forge.r-project.org/projects/lspm/

    Has anyone played around with this R project? I've created some Excel spreadsheets to calculate HPRs, but it's slightly cumbersome and I don't want to reinvent the wheel here.
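
    For reference, this is the single-system HPR/TWR calculation I've been reproducing in the spreadsheets (standard Vince formulas as I understand them; the trade list is made up):

    import numpy as np

    trades = np.array([250., -120., 400., -300., 150., -80., 500., -220.])  # per-trade P/L
    biggest_loss = abs(trades.min())

    def twr(f, trades, biggest_loss):
        """Terminal Wealth Relative at fraction f: product of the holding period returns,
        where HPR_i = 1 + f * trade_i / |biggest losing trade|."""
        return np.prod(1.0 + f * trades / biggest_loss)

    # Brute-force optimal f: the f in (0, 1] that maximizes TWR.
    fs = np.linspace(0.01, 1.0, 100)
    twrs = [twr(f, trades, biggest_loss) for f in fs]
    print(f"optimal f ~ {fs[int(np.argmax(twrs))]:.2f}, TWR = {max(twrs):.3f}")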

    Anyone care to hand out some pointers, or maybe we can create/share some of the techniques?

    The attached spreadsheet shows two systems, A & B. A is a stock system and B is a futures system. Both start out at 10k and equity is reinvested using the HPR. The futures system is not realistic, as it assumes reinvestment at fractional contract sizes, so it ends up significantly outpacing the stock system.

    Let's assume I'm looking to keep drawdowns <25% and I have $20k to allocate as necessary between the two systems. The systems exploit opposite market effects (some element of diversification, so they should work well together) and I also want a relatively smooth equity curve, but not at a drastic reduction in P/L. Let's try to find out what allocations make sense.

    These are both intraday models.
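
    Roughly the kind of search I have in mind is below - the daily returns are made up here, not the numbers from the attached spreadsheet, so treat it purely as a template:

    import numpy as np

    rng = np.random.default_rng(1)
    ret_a = rng.normal(0.001, 0.01, 500)    # placeholder stock-system daily returns
    ret_b = rng.normal(0.0015, 0.02, 500)   # placeholder futures-system daily returns

    def max_drawdown(equity):
        peak = np.maximum.accumulate(equity)
        return np.max(1.0 - equity / peak)

    best = None
    for w in np.linspace(0.0, 1.0, 21):       # fraction of the $20k given to system A
        equity = 20000 * np.cumprod(1.0 + w * ret_a + (1.0 - w) * ret_b)
        dd, growth = max_drawdown(equity), equity[-1]
        if dd < 0.25 and (best is None or growth > best[2]):
            best = (w, dd, growth)

    if best:
        print(f"A weight {best[0]:.0%}, max DD {best[1]:.1%}, final equity {best[2]:,.0f}")
    else:
        print("no allocation kept the drawdown under 25%")

    Swapping in the actual per-period HPRs from the spreadsheet (and a drawdown probability rather than a single historical DD) is where LSPM would come in.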

    Mike
     
    #17     Feb 26, 2010
  8. Roscoe

    I am currently using it and I find it to be very good. The thought of coding up the whole thing from scratch was daunting in the extreme.

    You will need to create a Joint Probability Table from your historical n-period equity changes data (without position sizing applied) - well, two JPTs actually, a "Trades" table and a "Probs" table - for input to LSPM. The LSP book describes the process fairly well. Then simply run LSPM with your desired parameters.
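
    Not the LSPM calls themselves, but roughly how the two tables get built from a pair of n-period equity change series (an illustrative sketch, with made-up changes and simple rounding as the binning rule):

    from collections import Counter
    import numpy as np

    chg_a = np.array([-1., 2., -1., 2., -10., 10., 2., -1.])   # n-period equity changes, system A
    chg_b = np.array([ 2., -1., 2., -1., -10., 10., -1., 2.])  # same periods, system B

    def joint_prob_table(a, b, decimals=0):
        """Round each outcome to a bin, then count how often each (A, B) pair occurs."""
        pairs = Counter(zip(np.round(a, decimals), np.round(b, decimals)))
        trades = np.array(list(pairs.keys()))               # "Trades" table: one row per joint outcome
        probs = np.array(list(pairs.values())) / len(a)     # "Probs" table: matching frequencies
        return trades, probs

    trades, probs = joint_prob_table(chg_a, chg_b)
    for (oa, ob), p in zip(trades, probs):
        print(f"p={p:.3f}  A={oa:+.0f}  B={ob:+.0f}")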
     
    #18     Feb 26, 2010
  9. Hey guys I have been reading the thread, but doing some traveling over the past week so have been a bit out of touch.

    I have seen this approach, and the problem is the same: it relies on information from historical system performance. (Whether you're using correlations or some other measure of historical performance, you're basically accomplishing the same thing.) I think we all understand that historical measures of performance are unstable, but there is another bigger problem.

    The easiest way to think about this is to imagine you have 4 systems that trade the Dow 30 stocks. You have done the work correctly: they trade different types of setups and have return series that are not strongly correlated. You run whatever position sizing algo you want and come up with a solution.

    But... what happens if the market sells off sharply, say 30% in a week? What do you think happens to the correlation of these systems?

    The same problem applies to different markets -- global markets, commodities, etc... they are all more than capable of moving together and invalidating the math.
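
    A quick made-up illustration of the point - two streams that look uncorrelated in normal times stop looking that way once you include a week where everything sells off together:

    import numpy as np

    rng = np.random.default_rng(42)
    normal_a = rng.normal(0.001, 0.01, 250)   # system A, normal times
    normal_b = rng.normal(0.001, 0.01, 250)   # system B, normal times
    print("normal-times correlation:", round(np.corrcoef(normal_a, normal_b)[0, 1], 2))

    crash = np.full(5, -0.06)                 # both systems down ~6% a day for a week
    a = np.concatenate([normal_a, crash])
    b = np.concatenate([normal_b, crash])
    print("including the crash week:", round(np.corrcoef(a, b)[0, 1], 2))

    Any allocation sized off the first number is going to be too big for the world described by the second.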

    The danger of all these methods is that they will tell you to trade larger than is prudent.

    Just my two cents... I've seen this happen through several cycles already. If you want to use a method like this, great... maybe it's better than no answer at all, but maybe it's more dangerous because it leads to overconfidence.

    Just a warning... carry on!

     
    #19     Feb 26, 2010
  10. bc9hz

    talontrading,

    In a situation like that, when markets are down substantially in a week and correlations are spiking, couldn't you have an exception rule thrown in all along that puts your exit strategies into a hypersensitive mode if major negative market swings are probable? When the VIX is hitting a certain pattern or other indicators are throwing flags, then tighten your exit assumptions. By all means use the standard mathematical methods to allocate capital and manage risk within your systems, but throw in a check every hour to look for hints of a major market drawdown. Be unusually cautious and exit positions rather than have four systems take their maximum drawdowns, even if that would not wipe you out of the game.
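
    In code terms, the kind of hourly check I mean is trivial (the thresholds here are placeholders, not recommendations):

    def risk_override(vix_level, index_change_today, vix_alert=30.0, crash_alert=-0.03):
        """Return True if exits should switch to 'hypersensitive' mode this hour."""
        return vix_level >= vix_alert or index_change_today <= crash_alert

    # run once an hour by whatever executes the systems
    if risk_override(vix_level=34.0, index_change_today=-0.041):
        print("tighten exits / start flattening positions ahead of schedule")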
     
    #20     Feb 27, 2010