Efficient Use of Capital, Position Sizing, Model Allocations

Discussion in 'Risk Management' started by jtrader33, Feb 19, 2010.

  1. Hugin

    Hugin

    As always, when you rely on historical relationships the warnings above are valid. Common sense should always be used: if your position sizing system comes up with a leverage of 800%, then maybe you should think again before using it. Even if Ralph Vince, among others (I think I have a research paper somewhere), claims that historical co-movements are better than covariance at describing what happens in extreme market conditions, your decisions will still be based on historical data.

    It might be obvious, but here it is just in case: never, ever use backtest trading results for your leverage calculations. You run a significant risk of overestimating your system's actual performance, and thus of taking on large positions that could easily lead to big losses.
    This is definitely a problem with this kind of method. You think you can live with a "5% chance of losing 25% of your account over the next N periods," so you set the parameters accordingly. But the method is actually using the constraint "5% chance of losing 25% OR MORE..." There is no limit to the potential loss. And all of this holds only as long as your historical data remains valid in the future. Act accordingly.
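
    To make that concrete, here is a tiny simulation (the numbers are pure illustration) showing that the constraint says nothing about how bad the tail beyond the threshold can get:

    Code:
    # Illustrative only: simulate one-year equity paths from fat-tailed monthly
    # returns, then look past the "5% chance of losing 25%" threshold.
    set.seed(1)
    n.paths  <- 10000
    n.months <- 12
    # Student-t monthly returns (df = 3), floored at -100% -- made-up inputs
    r    <- pmax(0.01 + 0.05 * rt(n.paths * n.months, df = 3), -1)
    rets <- matrix(r, n.paths, n.months)
    loss <- 1 - apply(1 + rets, 1, prod)          # fractional loss per path
    cat(sprintf("P(loss >= 25%%):     %.1f%%\n", 100 * mean(loss >= 0.25)))
    cat(sprintf("mean loss beyond it: %.1f%%\n", 100 * mean(loss[loss >= 0.25])))
    cat(sprintf("worst path:          %.1f%%\n", 100 * max(loss)))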


    There are other alternatives for risk management, but most of them rely on historical data, at least indirectly. Apart from classical mean-variance methods, what do we have?

    Personally, I have looked at other risk measures, like Conditional Drawdown (CDD) and Conditional VaR (CVaR) by Uryasev, which are somewhat related to how LSP works. One benefit of some of these methods is that, under certain assumptions, the optimization problem can be solved using Linear Programming. I have also looked at tracking the distribution of drawdowns, but I'm a bit uncertain how to use it.
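
    For reference, both measures are easy to estimate empirically from a return series before you bother with any LP formulation. A minimal sketch with made-up data:

    Code:
    # Empirical CVaR and Conditional Drawdown (CDD) in the spirit of
    # Rockafellar/Uryasev -- illustrative estimators, not the LP versions.
    set.seed(2)
    r <- rnorm(240, mean = 0.008, sd = 0.04)  # 20 years of fake monthly returns
    alpha <- 0.95

    # CVaR: mean loss across the worst (1 - alpha) fraction of months
    losses <- -r
    var.a  <- quantile(losses, alpha)
    cvar.a <- mean(losses[losses >= var.a])

    # CDD: mean of the worst (1 - alpha) fraction of drawdown observations
    equity <- cumprod(1 + r)
    dd     <- 1 - equity / cummax(equity)     # drawdown at each point in time
    cdd.a  <- mean(dd[dd >= quantile(dd, alpha)])

    cat(sprintf("VaR %.1f%%, CVaR %.1f%%, CDD %.1f%%\n",
                100 * var.a, 100 * cvar.a, 100 * cdd.a))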

    Anyone with other ideas?

    /Hugin
     
    #21     Feb 27, 2010
  2. Mike - I took your sheet and set up the basic framework for what's been discussed. It generates the joint probability distributions based on the HPRs you posted (I worked with monthly intervals, but that can easily be changed). Then you can define allocation ranges, step size, DD confidence level, etc. With those inputs it will run however many trials you specify for each model allocation scenario. For each model allocation it then outputs the Max DD at your confidence level along with the expected Final Equity value. So in your example, you would just look for the largest Final Equity value where the listed Max DD is <25% to get the 'optimal' weightings.

    It's slow and a little crude at this point because I just slapped it together, but I don't think it would be impossible to accomplish full-scale in Excel if one was patient and so desired (I usually like to start there so I can see what is happening and then take things over to R). Anyway, it should be pretty easy to follow (the only code used is for running the sim loops) and I believe it demonstrates the approach (perhaps Hugin or Roscoe could confirm).
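
    For anyone who wants to reproduce the loop outside Excel, it looks roughly like this in R (all inputs and names below are illustrative, not the actual sheet):

    Code:
    # Scan model weightings: for each one, simulate equity paths by jointly
    # resampling historical monthly HPRs (preserving co-movement), then report
    # Max DD at a confidence level and expected final equity.
    set.seed(3)
    hpr <- cbind(modelA = rnorm(60, 0.015, 0.05),   # fake monthly HPRs
                 modelB = rnorm(60, 0.010, 0.03))

    scan.alloc <- function(hpr, step = 0.1, horizon = 36,
                           trials = 2000, conf = 0.95) {
      w.grid <- seq(0, 1, by = step)       # weight on model A; B gets the rest
      out <- data.frame(wA = w.grid, maxDD = NA, finalEq = NA)
      for (i in seq_along(w.grid)) {
        w  <- c(w.grid[i], 1 - w.grid[i])
        dd <- eq <- numeric(trials)
        for (t in 1:trials) {
          rows  <- sample(nrow(hpr), horizon, replace = TRUE)  # joint resample
          path  <- cumprod(1 + hpr[rows, ] %*% w)
          eq[t] <- path[horizon]
          dd[t] <- max(1 - path / cummax(path))
        }
        out$maxDD[i]   <- quantile(dd, conf)   # DD at the confidence level
        out$finalEq[i] <- mean(eq)
      }
      out
    }

    res  <- scan.alloc(hpr)
    keep <- res$maxDD < 0.25                      # e.g. the <25% DD constraint
    res[keep, ][which.max(res$finalEq[keep]), ]   # 'optimal' weighting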

    Unfortunately, I can't share it here because some of the functions don't work with older Excel versions. I'm attaching a values-only .xls version which will show the layout; just following it left to right will give a pretty good overview of the process. If anyone wants the proper file I can email it... or better yet, maybe someone can suggest a way for me to share it online?

    Personally, I've decided to go in a different direction which I will detail at some point, but hopefully this will be useful/informative for someone.
     
    #22     Feb 27, 2010
  3. Someone asked me a question via PM and I asked him if I could answer it here:

    >This might be a stupid question, but with my swing positions I usually have 10 positions open with a max hard stop loss of .5% on each trade. This includes the capital to outright purchase the stock, but I didn't see the cost of, say, 100 shares for $2000 as the risk. I saw it as $200 being .5%. Am I looking at that wrong? I assumed that because it was a .5% max hard stop, the risk was OK with regard to one large position. My concern is that in a black swan event I could see a bigger drawdown because of liquidity in such a situation. Did I miss something?

    ==========

    This might seem like a basic point, but it's worth making over and over and over, so excuse the repetition. If you are long 10 positions, each with a stop loss of .5% (of the total account size), you would expect your maximum loss to be 10 * .5% = 5% if all the trades are stopped out. This is a naive assumption, because we know bad things sometimes happen to companies. What if one of them goes bankrupt, or has a major disruption to operations and opens down, say, 50%? (Yes, it certainly does happen. It has happened to me more times than I care to count.)

    Maybe we should adjust our back-of-the-envelope loss calculation and now say there is a small probability of a loss ranging up to 20% of the account, right? (I just made that number up.) But what if a cataclysmic event hits the market and ALL of these stocks open down 50% or more? If you used margin, your loss could now be 100%. Uh oh... what do we do about this?
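
    Putting rough numbers on those three cases (position size, stop distance and gap sizes are purely illustrative):

    Code:
    # Back-of-the-envelope account loss under the three scenarios above
    n.pos    <- 10      # open positions
    pos.size <- 0.10    # each position 10% of the account (fully invested)
    stop.pct <- 0.05    # stop 5% away -> 0.5% of account risked per trade

    all.stopped <- n.pos * pos.size * stop.pct         # every trade stopped out
    one.gap     <- pos.size * 0.50 +                   # one name opens down 50%,
                   (n.pos - 1) * pos.size * stop.pct   # the rest stop normally
    all.gap     <- n.pos * pos.size * 0.50             # market-wide 50% gap
    cat(sprintf("stops: %.1f%%, one gap: %.1f%%, all gap: %.0f%% of account\n",
                100 * all.stopped, 100 * one.gap, 100 * all.gap))
    # -> 5%, 9.5%, 50% unlevered; on 2:1 margin the last one is 100%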

    Well, you see the problem. And again, if you trade small enough to fully respect that risk, your returns will be well under the risk-free rate, so there is no point in trading. For me, the key is being mentally prepared and knowing how to react in that end-of-the-world scenario. What would you do? What is the right thing? In my opinion, more people should spend more time preparing for these kinds of scenarios than figuring out how to apply possibly correct math (or not) to potentially misleading historical relationships.

    For what it's worth, in those kinds of scenarios the right thing is usually to buy more... a catastrophic situation like that is usually the only time it makes sense to double down on a losing trade. However, what if you are short the little $5 biotech that discovers the cure for cancer, and the next tick is $500? Yikes... that's a bad day!

    These are the kinds of questions that should be occupying most of your mental energy, not how to create conditional probability distributions for backtested systems... Just my usual cranky two cents!
     
    #23     Feb 27, 2010
  4. xburbx

    xburbx

    interesting. thanks for responding. so now i'm lost on how to trade my system considering that scenario :). i hadn't run into that yet and didn't really think about it until it was mentioned here. it gets pretty ballsy to add into a drop of that size, and i'm not sure i have the risk tolerance for it. any other prep thoughts to research?
     
    #24     Feb 27, 2010
  5. xburbx

    xburbx

    i guess part of it would be to see how often an event like that occurs and how large my risk tolerance per trade happens to be. the point where i would cut a trade loose in a total blowout is a 5-10% move on the stock against me, which works out to .5% of my account at a 5-10% move.
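
    working that out explicitly (just sanity checking my own numbers):

    Code:
    # implied position size from risk per trade and stop distance
    risk.per.trade <- 0.005                 # .5% of account
    stop.move      <- c(0.05, 0.10)         # 5-10% adverse move on the stock
    pos.size       <- risk.per.trade / stop.move  # -> 10% / 5% of account
    gap.loss       <- pos.size * 0.50       # loss per name on a 50% gap
    rbind(pos.size, gap.loss)               # -> 5% / 2.5% of account per name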
     
    #25     Feb 27, 2010
  6. Thanks bs2167,

    I appreciate your explanations and your introducing me to the literature. It's always nice to learn another tool. At this point I've managed to perform much of the LSP model analysis on a variety of my systems - the R project is a great tool, BTW - and if you want to pursue this method further at some point, I believe it would be worth all our time to dig into this thing deeper. This is one of the best systematic analysis methods I've personally encountered along the way, but, like Hugin said, it needs to be applied with a solid dose of common sense.

    The Excel sheet you provided uses monthly returns, which are good for longer-term systems. I ran the same process on a few intraday systems, primarily because I do intraday stuff almost exclusively, but also because I wanted to see the net effect.

    The joint probability table is the heart of the idea IMO, and it is also its most flexible part. By this I mean that one can use "tricks" and build hypothetical worst-case risk scenarios into the model - not only by manually editing some of the joint probabilities, but also by adding other non-linear relationships into the mix of joint probabilities.
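
    As a concrete example of the kind of "trick" I mean (all numbers below are made up):

    Code:
    # Build a joint scenario table from two systems' monthly returns, then
    # splice in a hypothetical worst-case scenario by hand.
    set.seed(4)
    rA <- rnorm(60, 0.015, 0.05)   # fake monthly HPRs, system A
    rB <- rnorm(60, 0.010, 0.03)   # fake monthly HPRs, system B

    # Each historical month is one joint scenario with equal probability,
    # which preserves the observed co-movement between the systems.
    scen <- data.frame(rA = rA, rB = rB, p = 1 / length(rA))

    # Manual stress edit: add a "both crash" scenario history never produced,
    # give it 1% probability, and scale the historical scenarios down to match.
    stress <- data.frame(rA = -0.30, rB = -0.20, p = 0.01)
    scen$p <- scen$p * (1 - stress$p)
    scen   <- rbind(scen, stress)
    stopifnot(abs(sum(scen$p) - 1) < 1e-12)  # probabilities still sum to one

    # Expected joint outcome under the stressed table for a 50/50 weighting
    w <- c(0.5, 0.5)
    sum(scen$p * as.matrix(scen[, c("rA", "rB")]) %*% w)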

    At this point I believe Vince's framework is one of the best out there for robust definition, optimization and implementation of N-system allocations. For me the greatest benefit is in the process and the hard quantitative structure: where I was previously using qualitative means (i.e. a common-sense approach), this framework now allows me to quantitatively test certain theories I've heard about and have been developing on and off over the years. One of the more applicable ideas I've come across is that a system with negative expectation can add significant alpha to a portfolio of systems. I now have a method to test this kind of dynamic rigorously, i.e. not just through qualitative thought experiments.
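
    For instance, a first mechanical pass at that negative-expectation question might look like this (the return streams are fabricated, and whether the blend wins depends entirely on the inputs):

    Code:
    # Can a negative-expectation system improve geometric growth? Compare the
    # hedged blend against a de-levered position with the same net exposure.
    set.seed(5)
    n  <- 600
    rA <- rnorm(n, 0.02, 0.15)                  # positive-expectation system
    rH <- -0.6 * rA + rnorm(n, 0.001, 0.02)     # negatively correlated hedge
    mean(rH)                                    # negative expectation on its own

    geo <- function(r) prod(1 + r)^(1 / length(r)) - 1  # geometric mean return
    geo(rA)                   # system A alone
    geo(0.52 * rA)            # de-levered A (same net A exposure as the blend)
    geo(0.7 * rA + 0.3 * rH)  # 70/30 blend with the money-losing hedge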

    Thanks bs2167, Hugin, Roscoe, and thanks R. Vince for this resource. I'm already planning to incorporate this method into my day-to-day.

    Mike
     
    #26     Feb 27, 2010
  7. Roscoe

    Roscoe

    #27     Mar 3, 2010
  8. Saw this in another thread... didn't take long for a real-life example:

    Zanett, Inc. (Public, NASDAQ:ZANE)

    Today: $2.12 +1.81 (581.67%)
     
    #28     Mar 4, 2010
  9. DanMitroi

    DanMitroi

    I'm hoping I'm not off topic here, but I think the following kind of falls under risk management, and this is a very good audience, so here goes:

    I have noticed one person around here (TSGannGalt, or whatever name he was using at the time) claiming that he has had spectacular results by managing the exposure of his systems across different trades. As in, analyzing the specific P&L figures generated by a system's trades and figuring out a tailored money management scheme that significantly improves the end result of that system.

    At first, his claims made sense. Since each system behaves in a certain way and generates results with certain characteristics (the P&L distribution talon seems to be fond of :) ), it seems possible to figure out a money management scheme that optimizes the allocation among those trades. However, after spending some time down this line of thought, I can say that my results have been disappointing. To me it seems that all trades are completely independent and unpredictable, so the best I can do is allocate the same risk% to each trade.
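
    One way to check that independence conclusion directly is a Ljung-Box test on the trade P&L series (sketch with fake data; Box.test is in base R):

    Code:
    # Are consecutive trade P&Ls actually independent?
    set.seed(6)
    pnl <- rnorm(500, 5, 100)                    # 500 fake per-trade P&Ls
    Box.test(pnl, lag = 10, type = "Ljung-Box")  # high p-value -> no evidence
                                                 # of serial dependence
    acf(pnl, lag.max = 10, plot = FALSE)         # inspect the autocorrelations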

    The question is, have any of you guys worked on a similar approach? If so, were your results encouraging?
     
    #29     Mar 9, 2010
  10. This may be the issue: your study may not have an adequate system (or portfolio of systems), or a large enough sample set, from which to draw these types of conclusions.

    Almost all models have some element of correlation, whether high or low. Ideally, we want to trade models with low correlation together, such that when one is in a DD, the other is making new equity highs.

    Once you see enough trades over time from a variety of models, you begin to notice that the correlation will vary slightly but remains fairly constant, and exploitable, over time.

    There is also the issue of serial correlation. Does anyone have good techniques for exploiting serial correlation (assuming it exists, of course)? I've read that if serial correlation exists, it must be exploited - what are everyone's thoughts on this?
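
    To put one naive version on the table (the trade stream below is fabricated to contain serial correlation, and the sizing rule is purely illustrative):

    Code:
    # Exploit serial correlation in trade P&L by sizing the next trade up
    # after a winner and down after a loser, versus flat sizing.
    set.seed(7)
    n   <- 1000
    pnl <- numeric(n)                            # AR(1)-style trade stream,
                                                 # built to be autocorrelated
    for (i in 2:n) pnl[i] <- 0.3 * pnl[i - 1] + rnorm(1, 0.1, 1)  # R-multiples

    flat <- rep(1, n)                            # constant 1R per trade
    cond <- c(1, ifelse(pnl[-n] > 0, 1.5, 0.5))  # size keyed to last result
    cat(sprintf("flat sizing:        %.0fR\n", sum(flat * pnl)))
    cat(sprintf("conditional sizing: %.0fR\n", sum(cond * pnl)))
    # With positive autocorrelation the conditional line should usually win;
    # with none it just adds variance -- test on real trades before trusting it.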

    Mike
     
    #30     Mar 9, 2010