Efficient Use of Capital, Position Sizing, Model Allocations

Discussion in 'Risk Management' started by jtrader33, Feb 19, 2010.

  1. For those of you who followed Talon's thread, toward the end there was some interest in risk management. I thought I might try to start a thread to walk through the process of getting multiple models working together - with the hopes that guys like Talon, Mike, etc. will continue their generosity and weigh in if they have time.

    Before one can start comparing returns between models (say on a monthly basis), some decisions need to be made per model on:
    1. Maximum # of open positions allowed
    2. Position sizing as a result of step 1

    Factors that need to be considered when determining the above are:
    1. % Exposure - assuming risk management is in order, want to have as much capital working as possible
    2. Effect of above decisions on system performance (profit factor, expectancy, cost of commissions, etc.)

    For example, I have a system that trades much more often during high volatility periods. Not only does it trade more frequently during those times, but the average trade pnl in % terms is higher, profit factor higher, return/DD higher. Also, during high volatility periods, the trades that signal after several trades are already open do even better. Having said all that, the system still performs acceptably during normal volatility periods. Here's what I think are a few options for dealing with this:

    A. Use smaller position sizing so that there's room to take all or most of the signals during high volatility when the system is performing at its best (i.e. better risk adjusted returns). The drawback to this approach is that a majority of the time when only a small number of signals are generated, the amount of exposure will be very low - not an efficient use of capital.

    B. Significantly reduce the number of open positions allowed to improve exposure levels (and thereby increase per trade position size). Risk adjusted performance will suffer but the level of income will increase.

    C. Get creative and adjust position sizes according to the VIX or something like that (higher vol -> smaller positions and vice versa). Best of both worlds?

    [I should also mention that this system does not use stops and I couldn't see putting more than 4-5% of the account into any one position regardless of the methodology. Curious to hear if anyone thinks that's too aggressive? ]

    Anyway, I think with a few properly developed models together this is the first in a series of questions that needs to be addressed when looking to combine them. I am clearly looking for help here, but I would imagine that hearing opinions on the issues that will (hopefully) be discussed in this thread will be helpful to others as well.

    Anyone have thoughts on A,B, or C above? Or a different approach entirely?

    Thanks for reading and appreciate any input.

    ----------------------------------------------------------------

    I decided to attach some numbers to demonstrate the effect. To generate the results, I set the initial equity to a ballpark figure that I think will get allocated to the model (wanted to put a halfway realistic dollar amount here so that the effects of commissions will be accounted for as position sizes get smaller). Position size for each trade is simply = Initial Equity / Max # of Positions.
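    To illustrate the commission effect mentioned above: as the max-position cap rises, per-trade size shrinks and a fixed commission eats a growing share of the per-trade edge. This is my own sketch; the $2 round-trip commission and the 0.30% gross edge are assumed figures, not from the post.

```python
# Sketch: position size = initial equity / max positions, and the net
# per-trade edge after a fixed round-trip commission (assumed $2 here).
def position_size(initial_equity, max_positions):
    return initial_equity / max_positions

def net_expectancy_pct(gross_edge_pct, position_dollars, commission_round_trip):
    # Subtract the commission, expressed as a % of the position, from the edge.
    return gross_edge_pct - 100.0 * commission_round_trip / position_dollars

equity = 100_000
for max_pos in (10, 20, 50):
    size = position_size(equity, max_pos)
    net = net_expectancy_pct(0.30, size, commission_round_trip=2.0)
    print(f"max positions {max_pos:>3}: size ${size:,.0f}, net edge {net:.3f}%")
```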
     
  2. Hi bs2167,

    Thanks for starting this thread - I like where you're going with this.

    Many of these topics are slightly more advanced than just getting to a single tradeable model and maybe we can come up with some best practices.

    Just a few thoughts:

    I've found varying position size to be slightly misleading when trading a model that has a varying number of signals day-to-day across a portfolio.

    Oftentimes, when the market is displaying a certain type of behavior, 75% or more of all stocks will be displaying a similar behavior - and in rare cases, 95% of the market will be doing the same thing. This is why allocating more/less capital during these highly/less active times requires that one forecast broad market behavior (which is a circular argument).

    If you assume that 75% of your portfolio is going to do the same thing given the model you're trading (say it's mean reversion), then how do you account for days when 95% of the trades taken end in losses? You end up with larger up days and steeper down days...

    I don't think I explained the above very well; here's a more literal example with 2 portfolio allocations:

    1. 100k for 20 positions max @ 5k per position.
    2. 100k for 50 positions max @ 2k per position.

    Normal day = 75% of your portfolio follows the index

    Case 1 index reverts:
    - 15 positions profitable @ 1.5% each = $1125 profit
    - 5 positions lose @ -1.5% each = -$375 loss
    Net = $750 profit (-750 when index does not revert).

    Case 2 index reverts:
    - 37.5 positions profitable @ 1.5% each = $1125 profit
    - 12.5 positions lose @ -1.5% each = -$375 loss
    Net = $750 profit (-750 when index does not revert).

    Fat tail day = 95% of your portfolio follows the index.

    Case 1 index reverts:
    - 19 positions profitable @ 1.5% each = $1425 profit
    - 1 positions lose @ -1.5% each = -$75 loss
    Net = $1350 profit (-1350 when index does not revert).

    Case 2 index reverts:
    - 47.5 positions profitable @ 1.5% each = $1425 profit
    - 2.5 positions lose @ -1.5% each = -$75 loss
    Net = $1350 profit (-1350 when index does not revert).

    So what do these trivial calculations mean? That unless you can forecast a fat-tail day versus a normal day, all position sizing will get you is a smoother versus a sharper equity curve (no free lunch)...
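    The four cases above can be reproduced in a few lines. This sketch (my own code, using the post's figures) shows why the dollar outcome is identical across the two allocations: per-position size is capital divided by max positions, so the size and the position count cancel and only the fraction of positions that follow the index matters.

```python
# Sketch: daily P&L for a portfolio where every position moves +/-1.5% and
# per-position size = capital / max_positions. The allocation granularity
# (20 vs 50 positions) drops out of the arithmetic entirely.
def day_pnl(capital, max_positions, frac_with_index, move_pct=0.015):
    size = capital / max_positions
    winners = frac_with_index * max_positions   # expected count, may be fractional
    losers = max_positions - winners
    return size * move_pct * (winners - losers)

for max_pos in (20, 50):
    for frac in (0.75, 0.95):
        print(f"{max_pos} positions, {frac:.0%} follow index: "
              f"${day_pnl(100_000, max_pos, frac):,.0f}")
```

    Both allocations print $750 on a normal (75%) day and $1,350 on a fat-tail (95%) day, matching the cases above.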

    What's my conclusion to this? Well, here's the heart of the issue. *One has to understand the achilles heel of the strategy* in order to properly incorporate it into a basket of models.

    For a mean reversion strat the Achilles heel is the strong trending move (it's basically a volatility breakout move). So the best thing one can do is create strategies that oppose (or hedge) each other when appropriate, i.e. create a complementary strategy that trades the opposite effect of mean reversion and weight that strategy accordingly. You can do this with an index. In terms of volatility trading, volatility breakout != mean reversion, and the two can be traded together to capture all the 75% days and "buffer" the 95% days. Note that there are days where everything goes to shit :) and you're gonna get f--ked on both ends; those days are rare, but they can sting.

    The below is basically what I do day-to-day:

    MR Short 0.209
    MR Long 0.293
    Vola. BO A (L+S) 0.057
    Vola. BO B (L+S) 0.267
    Misc. 0.172

    Misc. is basically a vola-based strat, so you can see they're roughly equally weighted (note this assumes 1x leverage, which is not the case in real life for these particular strats).

    The position sizing remains fixed at 6-12% of the available capital for the particular strat. and leverage tends to stay under 3x (usually).

    So for a 100k account, MR Short would have about 21k allocated at 1x, giving a position size of roughly $1,250-$2,500.
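    The allocation arithmetic above can be sketched directly. The weights are taken from the post; the function names and the 6-12% band defaults are mine, and the band is a fraction of the strategy's capital, not the whole account.

```python
# Sketch of the per-strategy allocation and sizing band described above.
weights = {
    "MR Short": 0.209,
    "MR Long": 0.293,
    "Vola BO A": 0.057,
    "Vola BO B": 0.267,
    "Misc": 0.172,
}

def strat_allocation(account, weight, leverage=1.0):
    # Dollars allocated to one strategy at a given leverage.
    return account * weight * leverage

def sizing_band(allocation, lo=0.06, hi=0.12):
    # Per-position size range: 6-12% of the strategy's own capital.
    return allocation * lo, allocation * hi

alloc = strat_allocation(100_000, weights["MR Short"])  # about 21k at 1x
lo, hi = sizing_band(alloc)
print(f"MR Short: ${alloc:,.0f} allocated, positions ${lo:,.0f}-${hi:,.0f}")
```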

    Note the above is for intraday models; overnight models change a lot of these assumptions.

    Mike

    Edit: The point of the above case examples is to vary the win/loss percentages and the strategy portfolio correlation to the index. +/- 1.5% and 75%,95% are just figures to start off with, the level of correlation and win percentages will make a substantial difference in your risk calcs.
     
    Hey Mike- was hoping you'd see this. Great post... I knew from reading some of your prior comments in the 'coin flip' thread that you had conquered a lot of these issues a long time ago... and on a considerable scale with the number of models and markets you trade. What you demonstrated in your example is exactly where I was ultimately going with this - intelligently combining models that are structurally different (MR, trend, etc.).

    I've read up on various approaches to doing optimal allocations (between strategies, models, etc.), but everything I've come across is rooted in some form of curve fitting. I was hoping that Ralph Vince's LSP was going to be the answer, but in my opinion it still suffers from too much reliance on historical relationships. In my view, risk management should be about 'what if' scenarios, regardless of whether they've happened in the past, rather than going through historical data to find the combination that gets you the best risk-adjusted score(s). We already take a bit of a chance using historical data to build our models... building a risk management strategy for those models based on the same data seems counterproductive. Having roughly equivalent exposure to the major model themes makes sense to me. From there, tweaks can be made after looking at some scenarios with price shocks, volatility and/or correlation extremes, etc. This is getting a little off topic, but I would like to share some ideas on the higher-level risk mgmt topics at some point.

    Backing up a bit...I believe in the past you've mentioned that the portfolios you look at are quite large, so there must be a great deal of variance in the number of open positions in a given model over time. Is there an objective process you follow when deciding where to cap the maximum open # of positions for each model? As I tried to demonstrate in my first post, this can have a material impact on the type of performance to expect from the model.

    So say your 3x leverage was a hard limit on a 100k account. MR Short gets 20% allocated, or 60k at 3x, and each position on average is 10%, or 6k. Did you decide on 10% because that will fully utilize the 20% bucket allocated to MR Short >x% of the time? Or do you consider some other performance tradeoffs when coming to this decision?

    Thanks again for your contribution.
     
  4. good thread.
     
  5. Hey guys,

    Thanks for starting this thread. After closing the other thread I thought long and hard about starting one like this, but honestly decided I wasn't qualified. I mean, I know all the industry-standard risk management models and a lot of non-standard ones, and I have a pretty good understanding of the math behind them. Furthermore, we have hired some top-notch risk managers at different points in my business's life cycle, so I've had the good fortune to work with some of the best people in the industry.

    The problem comes down to this, from an existential sense: Your true risk, in any position, is always unknown. In the limit, you might assume that your risk on any long position is 100% and on a short is a loss so large you can't really comprehend it. (FWIW, I use 2000% loss on shorts in my back-of-the-envelope mental calculus.) If you accept that risk, you will not trade or you will trade so small that you will return far under the risk free rate, so there's no point.

    At the complete other end of the spectrum is pure stupidity: thinking that your risk is defined by how much you plan to lose on any trade. One shouldn't have to trade very long to see the ignorance of that assumption.

    At the more reasonable other end is the guy who accepts the standard risk management models, but the problem is that the math is wrong. Correlations become much closer to 1 when things get bad than anyone expects, returns are far more variable than anyone imagined, and 20+ standard deviation events occur in a large sample size with shocking regularity.

    From a practical standpoint, if I have a system that has technical trades in 20 stocks, I have to do the mental trick of realizing those are not 20 positions... it's actually potentially 1 big position. Even positions in seemingly unrelated markets have become rather tightly correlated in recent years (though that has lessened over the most recent months).

    So... I don't have an answer. My risk management is basically to trade small enough that if I take the giant loss I won't be knocked out of the game. I am not advocating trading from a position of fear, but of healthy respect for risks you know you cannot quantify or even fully comprehend.

    This is why I did not start this thread... I just feel my contribution would be minimal, but I will add whatever I can to the discussion.
     
    Talon- really appreciate you taking time out to share some thoughts - as always your insights are highly valued. As you've stated, it's a tricky topic to discuss since no right answers or easily quantifiable solutions exist for most of the risk mgmt issues. Recognizing that, my hopes for this thread were to highlight a few major risk categories, get a feel for some ballpark tolerance levels that those with experience have chosen, and perhaps some tradeoffs, nuances, etc. to be aware of.

    I've come up with four main risks when trading multiple models:

    1. PRICE SHOCK WITH PERFECT CORRELATION: some unprecedented event causes a trading halt and everything in the portfolio drops 10,20,30, or 40%. (stops are no help)
    -Defense: Limit aggregate net directional exposure. So if I think a worst-case scenario is 40% and I'm only willing to lose 20% of the account in such a scenario, then the absolute value of (Long Exposure - Short Exposure)/Account Equity needs to be less than 50%.

    Do any of you employ this type of constraint? Where do you set it?

    2. FAILURE OF AN INDIVIDUAL MODEL(S): the edge for a given model disappears.
    -Defense: 1. Limit the % of account equity that any one model can employ. Mike has already shared how he does this. 2. Diversify models as in 3 below.

    3. CORRELATION BETWEEN MULTIPLE MODEL DRAWDOWNS:
    -Defense: Also as Mike has shown, intentionally devise strategies that work opposite of one another.

    4. OUTLIER EVENT IN SINGLE POSITION:
    -Defense: keep position sizes small
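    The constraint in risk 1 is simple arithmetic and can be checked mechanically. This is my own sketch (function names assumed): the exposure cap is the acceptable account loss divided by the worst-case shock, e.g. 20% / 40% = 50%.

```python
# Sketch of the net directional exposure check from risk 1 above.
def max_net_exposure_frac(worst_case_shock, max_acceptable_loss):
    # E.g. willing to lose 20% in a 40% shock -> net exposure capped at 50%.
    return max_acceptable_loss / worst_case_shock

def passes_exposure_check(long_exposure, short_exposure, equity,
                          worst_case_shock=0.40, max_acceptable_loss=0.20):
    net = abs(long_exposure - short_exposure) / equity
    return net <= max_net_exposure_frac(worst_case_shock, max_acceptable_loss)

print(passes_exposure_check(60_000, 20_000, 100_000))  # net 40% <= 50% -> True
print(passes_exposure_check(80_000, 10_000, 100_000))  # net 70% > 50% -> False
```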

    Talon, if I understood correctly, it seems like you're prepared to live with a 20x loss on a short. Does that translate into smaller positions in shorts vs. longs for you? Puts instead of shorting? Or just keeping all position sizes under x% (1%)?

    Anyone have additions to those four risks?

    **Of course all 4 of these points come at the cost of having less capital working in the market. Is there a minimum level of capital at work that you all try to maintain? I realize that's highly dependent on the expectancy of the systems and whatnot, but to the extent that anyone is willing to share, it would be great to get a feel for what these limits/exposures look like for those running a profitable operation.
     
  7. Can you tell me more about Ralph Vince and LSP? I've never heard of him or his work - I'm always interested in reading something new.

    Fortunately or unfortunately (depending on how one uses the information), quantitatively speaking all we have is historical relationships. That said, I think the VIX is a good historical measure to incorporate in any risk management scheme since it is an unbiased measure of implied volatility. The recent events of 2008 have provided us with a fairly good framework to study "worst" case scenarios from both a common sense standpoint and a quantitative standpoint.

    Personally I incorporate the VIX value into a very simple algo that adjusts some of my model parameters. It's basically a linear-fit model that takes the VIX as an input and adjusts macro-level entry, exit, sizing and exposure levels to account for unusual volatility. Higher vola gets more emphasis (less correlated portfolio exposure) than lower vola... Conceptually, this makes sense to me, and I tested the concept without fitting any values and found it did add benefit during crazy periods. My motivation for creating this was my "black swan" day - 9/29/2008. I had my largest losing day ever that day and made a concerted effort to reduce the impact of such days going forward. I still ended that month in OK shape due to great vola., but that one day still serves as a harsh reminder of how bad the worst case can really get - that day is embedded in my psyche and I am actually thankful for that; it was a very good lesson. I guess we can call that my "common sense slap in the face"; every trader needs a few of these IMO.
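    The actual fit isn't given in the post, so the following is only a hypothetical sketch of the general idea: map the VIX linearly to an exposure multiplier so that correlated exposure shrinks as implied volatility rises. All thresholds and names here are assumptions.

```python
# Hypothetical sketch (not the poster's actual model): clamp the VIX to a
# band, then linearly interpolate an exposure multiplier, so calm markets
# get full exposure and panicked markets get a floor.
def exposure_multiplier(vix, low_vix=15.0, high_vix=45.0,
                        max_mult=1.0, min_mult=0.4):
    vix = min(max(vix, low_vix), high_vix)          # clamp to the band
    frac = (vix - low_vix) / (high_vix - low_vix)   # 0 at calm, 1 at panic
    return max_mult - frac * (max_mult - min_mult)

print(exposure_multiplier(15))   # 1.0
print(exposure_multiplier(30))   # 0.7
print(exposure_multiplier(45))   # 0.4
```

    In practice one would fit the endpoints to historical data rather than hand-picking them, which is presumably what the linear-fit step in the post does.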

    In terms of capping open positions, I just set max limits on the amount of capital a strategy can use. I will occasionally interfere with this max allocation (lowering it more often than not) when I have a good read on the market. Days with solid news or a solid direction will often cause me to stop further allocation to an MR method, for example. This is somewhat quantifiable, as I'll often have profitable positions doing the opposite of MR. That said, I was a fairly good discretionary trader before I went fully automated, so my read on my strategies and how they work in a given situation has proven to save me some cash over the years. It comes back to understanding the flaw in your strat and watching that flaw play out a few times. Once you see it happen you can start to interfere, and then start to figure out ways to quantify it. Note that years of experience is an issue here - I am in no way saying that one should actively be interfering with their automated strategy execution, but interfering with global allocation and maximums is often necessary IMO.

    Something along those lines - but I have not fully quantified this specific conditional expectancy. I am in the process of looking at ways to better quantify this; however, here is my approach, and maybe you or Talon or anyone else can provide some insight. On any given day, I am scanning the entire stock universe - I have no way of knowing in advance how many positions I'll have during the day; but, from historical testing, I know that if I set a max cap of say 100 positions open for that particular strat, the difference in net profit over time of going to 200 positions max is negligible given the drawdown (even with allocating 2x the capital), but the worst-case days will be much, much worse in the 200-position case... so given one strategy, it's best to choose open position limits based on worst-case DD rather than net profit, i.e. choosing the best risk-adjusted return will almost always result in lowering the # of open positions and raising the per-position allocation.

    Did that make sense? It's strategy-specific, but I think it illustrates the issue.
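    The cap-selection rule described above can be sketched as choosing the cap that maximizes profit over worst-case drawdown instead of raw profit. The backtest numbers below are made-up placeholders for illustration only.

```python
# Sketch: pick the open-position cap by risk-adjusted return (net profit /
# worst-case drawdown) rather than by net profit alone. Stats are invented.
backtests = {
    # cap: (net_profit, worst_case_drawdown) - placeholder figures
    100: (80_000, 20_000),
    200: (85_000, 55_000),
}

def best_cap_by_risk_adjusted(results):
    return max(results, key=lambda cap: results[cap][0] / results[cap][1])

def best_cap_by_profit(results):
    return max(results, key=lambda cap: results[cap][0])

print(best_cap_by_risk_adjusted(backtests))  # 100
print(best_cap_by_profit(backtests))         # 200
```

    With these placeholder numbers the two criteria disagree, which is exactly the trade-off the post describes: the raw-profit criterion picks the larger cap, while the risk-adjusted criterion picks the smaller one.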

    In the case above 60k is the most I'll be net short for MR Short on any given day. If the strat takes one position at 6k, I may add some if it's working, knowing that I will likely not see many more positions for that strat that day.
     
  8. xburbx

    Just checking that I understand this. So if you have 10 open positions for a total of 5% of your account, your worst case scenario is a total blowout (2000%)? I must be missing something?

     
    He's an author that has put out a handful of books on money management over the last 10-15 years. I respect his work because, unlike most in his field, he IMO makes an honest effort to bring something new to the table and openly acknowledges that many unresolved questions remain even if you adopt his method whole hog. I've done a decent amount of homework in consideration of his method, but in the spirit of full disclosure - I have never read the Leverage Space Model book in its entirety (yet). Rather than me giving you a butchered version of his method, here's an intro article he put out:

    http://parametricplanet.com/rvince/article.pdf

    Also, he (rvince99) posted on here a few times discussing a few points:

    http://www.elitetrader.com/vb/showthread.php?s=&threadid=164157

    Finally, on his homepage, there's a spreadsheet which shows how he sets up the joint probabilities of periodic returns between systems... pretty straightforward. Major overgeneralization, but once that's set up, you basically run through a bunch of sims with various reweightings to maximize your custom-defined target within custom-defined constraint(s) (max DD or whatever) at a desired confidence interval.

    http://parametricplanet.com/rvince/
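    The simulate-and-reweight procedure described above can be sketched in miniature. This is a toy grid search, not Vince's actual LSP implementation; the joint returns and the 5% drawdown limit are invented for illustration.

```python
# Toy sketch of the described procedure: search strategy weights over joint
# historical period returns, maximizing terminal wealth subject to a max
# drawdown constraint. Data and parameters are made up.
import itertools

# Joint monthly returns for two hypothetical systems.
joint_returns = [(0.04, -0.01), (-0.02, 0.03), (0.05, 0.02), (-0.06, 0.01)]

def equity_curve(weights, periods):
    eq, curve = 1.0, [1.0]
    for rets in periods:
        eq *= 1.0 + sum(w * r for w, r in zip(weights, rets))
        curve.append(eq)
    return curve

def max_drawdown(curve):
    peak, worst = curve[0], 0.0
    for v in curve:
        peak = max(peak, v)
        worst = max(worst, (peak - v) / peak)
    return worst

def best_weights(periods, max_dd=0.05, step=0.1):
    # Grid-search weight pairs; keep the highest terminal equity whose
    # historical drawdown stays under the constraint.
    grid = [i * step for i in range(int(1 / step) + 1)]
    best, best_eq = None, -1.0
    for w in itertools.product(grid, repeat=2):
        curve = equity_curve(w, periods)
        if max_drawdown(curve) <= max_dd and curve[-1] > best_eq:
            best, best_eq = w, curve[-1]
    return best, best_eq

print(best_weights(joint_returns))
```

    Note this inherits exactly the weakness discussed earlier in the thread: the "optimal" weights are fit to the historical joint distribution, so they say nothing about scenarios that haven't happened yet.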

    This may not be fair to say given that I haven't read the entire book, but it seems like a souped-up version of mean-variance optimization - so not necessarily revolutionary... but I believe where he goes out on a limb more and kicks around some experimental stuff is towards the end of the book - but I'm not at all familiar enough with it to comment.

    If anyone has a more complete understanding, and I'm way off base here, please say so... it's definitely possible.

    [going to grab some food - will hit the other topics in another post]
     
    Yes, was considering incorporating the VIX as well... has to be one of the best 'tells' out there. Unfortunately, using it won't help in another 9/11-type situation in a low-vol period, but nothing aside from a crude monitoring of net long/short exposure will (or maybe holding some OTM SPY puts).

    Yep, in my mind, once you can successfully implement that sort of intervention, you've reached a level where you can not only be consistently profitable, but consistently produce sizeable returns regardless of market environment. Talon gave a perfect example in the other thread of stepping in and intermittently turning off an SP intraday system during the credit crisis... I believe the description was something like 'as a result went from nearly unprofitable to wildly profitable' during the period. I'd guess it's those kinds of things that differentiate the hobbyist from the full-time pro. And, separately, point taken on adjusting global allocation/maximums.

    Makes sense... and most of my systems fall into line with what you've said. However, in the spreadsheet example above, the exact opposite is true... it really pigs out during high volatility periods, and the 20th position open at any given time performs better than the 10th... the 30th opened performs better than the 20th, etc. But I think what I've learned here is that there's a serious flaw in my development process. I see 6,614 trades over a 10-yr period with solid returns, limited DDs, etc. and think everything looks great. However, during testing I didn't set a realistic limit on the number of open positions while looking at these stats. So there are times in the testing period when ~300 trades are open at once, it performs great, and then these few periods distort the aggregate results, showing unrealistic performance numbers since I wouldn't realistically be opening that many simultaneous positions. Stupid. So I guess I'm looking for ideas on the following:

    Max # of positions, % position size, and % allocation of account to a model are all interrelated and circular. At what stage of the testing process and in what order do you determine each of these?
     
    #10     Feb 21, 2010