Backtesting and Optimization Discussion

Discussion in 'Automated Trading' started by flipflopper, Feb 4, 2010.

  1. Topic #1: Position Sizing and Optimization

    1. Is it always best to optimize and backtest using 1 share?

    Using this method, you will find the best pure input settings without regard to position sizing. My thinking is that this will give you the most accurate results when you bring your system live.

    2. You can backtest and optimize by calculating position size based on the percentage of total equity you are willing to risk.

    Example: Initial Capital = $100,000, Risk = 2% per trade

    Assumption: you do not recalculate capital based on net profit after each trade. If you do recompound while backtesting, then as your account equity grows, later trades get weighted much more heavily because their position sizes are much larger. This is definitely NOT what we want.

    Here is the problem I've found backtesting with the above method. The program LOVES extremely low-risk, high-reward ratios. It figures that if you risk less in terms of a tight stop, you can put on a huge position size and eventually catch something that will make a lot of money. There is some truth to this theory, but in the real world, slippage and the effect these large orders have on the market make it implausible.

    Question: How can you balance the reality that tighter stops let you trade larger positions against the fantasy that you can trade HUGE positions with tight stops, without slippage and stop-hunting MMs tearing your system to shreds?
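    A minimal sketch of the percent-risk sizing described above (the function name and figures beyond the $100,000 / 2% example are illustrative). It shows why the optimizer falls in love with tight stops: the same dollar risk implies wildly different share counts.

    ```python
    def shares_for_risk(capital, risk_pct, stop_distance):
        """Number of shares so that hitting the stop loses risk_pct of capital."""
        dollar_risk = capital * risk_pct
        return int(dollar_risk / stop_distance)

    capital = 100_000
    risk = 0.02  # 2% per trade -> $2,000 at risk

    # The same $2,000 of risk implies very different position sizes:
    shares_for_risk(capital, risk, 2.00)   # $2.00 stop -> 1,000 shares
    shares_for_risk(capital, risk, 0.02)   # $0.02 stop -> 100,000 shares
    ```

    The second line is exactly the degenerate case the optimizer gravitates toward: a 2-cent stop "justifies" a 100,000-share position that no real fill would survive.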

    3. You can calculate position size based on use of buying power.

    The issue I have with this is that it doesn't take volatility into account. It just looks at share price. If you have two stocks that are both trading at $100 and one has twice the range (which is used to calculate stops), then the program will buy the same position size for both, even though you are risking twice as much on one as on the other.

    Anyone who has gone through this or has any thoughts I would be interested in some discussion.
  2. Code7


    Use dollar ATR to calculate how many shares you trade based on the same amount of capital. Amount of capital is fixed $ when optimizing inputs. You can make it a function of buying power later. That way, you use volatility to account for the risk. Making position size dependent on stop loss screws a lot as you've already noticed. Always make conservative estimations for trading costs, i.e. overestimate.
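    Code7's dollar-ATR sizing could be sketched as follows; the idea is that a one-ATR move is worth the same dollar amount in every position, independent of the stop. The `vol_budget` name and the $1,000 figure are assumptions for illustration.

    ```python
    def shares_by_dollar_atr(vol_budget, atr_dollars):
        """Size so a one-ATR move equals the same dollar amount per position."""
        return int(vol_budget / atr_dollars)

    # $1,000 of daily volatility per position:
    shares_by_dollar_atr(1_000, 2.50)   # ATR $2.50 -> 400 shares
    shares_by_dollar_atr(1_000, 5.00)   # ATR $5.00 -> 200 shares (twice as volatile)
    ```

    Note that the stop loss does not appear anywhere in the sizing, which is the point: you can tighten or loosen the stop during optimization without the position size exploding.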
How is this different from what I have in example 2?

    Seems to me that when you optimize this way, the program loves very tight stops (as a function of ATR) combined with increased share size.

    The risk will always be the same, let's say $2,000. But you can risk that by trading 1,000 shares with a two-dollar stop or 100,000 shares with a 2-cent stop.

    I have problems with this methodology. Which is why I want to discuss it and share ideas.
  4. Baywolf


    I rarely use stop orders anymore precisely for the reason you stated in the OP; what you call 'stop hunters' and algos that breach key levels, be it arbitrary ATR/BB bands or support and resistance levels.

    Instead what I've done to manage risk is to place a basket of limit orders premarket (~75% long usually) and check back on the positions around 10-11am. I will close out the positions that are heading the wrong direction or have just flattened out. Many times you can see a big buyer or seller just accumulating all the shares. At the market close I check again and more often than not, the still open positions continue in the correct direction.

    I read your other post today about portfolio backtesting, and that sounds like something I'd like to do also. Maybe there is a way I can optimize exits on all my open positions without babysitting.
  5. Baywolf


    One more thing regarding your odd-lot position sizing: because I'm using limit orders, I round up or down to the nearest 100 shares, since odd-lot orders get filled last.
  6. Code7


    My suggestion was to NOT make it dependent on SL but only on dollar ATR. SL is only your maximum risk. You can exit positions before. Typical risk is your average loss. You should be able to change SL without changing position size or anything else. That's the only way to properly test what SL does to your system.

    As I see it, making position size dependent on volatility is the way to go. Some years like 2008 are more volatile than other years like 2005. You might want to account for that in your backtest and trade smaller size in 2008. I think that makes sense.
  7. So this would be the same as what I said in example 1.

    Only use one share in backtesting. This would get me results that would optimize total net wins. But it doesn't take into account that when you have a tighter stop you can use more leverage and make more on winners.

    It seems that any way you backtest and optimize, you are not going to account for the totality of trading realities. I have read a couple of books on auto trading and none seems to address these issues, which is shocking.

    Unless I'm completely missing something.
This is a fallacy. If you don't see why, you should study the nature of volatility, serial correlation and stationarity.

    The OP is right, when designing any system, i.e. during backtesting and optimization, do not muck with position size at all, ever.

    What you can do is incorporate a set of volatility based parameters that adjust system variables (such as entry points) based on recent volatility.

    Edit: By constant position size, I mean keep it a fixed dollar amount for all positions across the entire sample space. If the stock is $100, you're trading 100 shares; if the stock is $10, you're trading 1000 shares, etc.
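    The fixed-dollar scheme in the edit above is one line of arithmetic (the $10,000 figure is an assumption to match the 100-shares-at-$100 example):

    ```python
    def shares_fixed_dollars(position_dollars, price):
        """Constant dollar exposure per position, regardless of volatility."""
        return int(position_dollars / price)

    shares_fixed_dollars(10_000, 100)   # $100 stock -> 100 shares
    shares_fixed_dollars(10_000, 10)    # $10 stock  -> 1,000 shares
    ```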
  9. Chabah


    The discussion of position size depends largely on what exactly you are testing. If you are testing % winners and losers, or average % gain/loss, then there is no need to screw with position size. If you are testing a specific position sizing algorithm, then clearly you want to use that algorithm in order to test it.

    The position sizing of a system is just as important (if not more important) than the entries and exits, and deserves just as much testing.

    An interim point I haven't seen yet is adjusting the risk amount itself (before you divide risk by the distance between price and exit). Some systems may alter the risk amount based on the system parameters. For example:

    If P < 50 MA, risk $0 (sell)
    If P > 50 MA and P < 50 MA * 1.05, risk $500 (stop of 50 MA)
    If P > 50 MA * 1.05 and P < 50 MA * 1.1, risk $300 (stop of 50MA * 1.05)
    If P > 50 MA * 1.1, risk $0 (sell)

    That's essentially an entire system right there, and it involves different position sizes that are critical to the performance of the system. While it's obvious one might optimize the multipliers (1.05 and 1.1), one could also optimize the risk amounts ($500 and $300). (not that I condone much optimizing)
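    Chabah's band rules can be sketched as code. The thresholds (1.05, 1.1) and risk amounts ($500, $300) are the ones from the post; the function names and the final risk-to-shares step are illustrative assumptions.

    ```python
    def risk_amount(price, ma50):
        """Dollar risk per the 50 MA band rules above."""
        if price < ma50:
            return 0            # below the 50 MA: no position
        if price < ma50 * 1.05:
            return 500          # stop at the 50 MA
        if price < ma50 * 1.10:
            return 300          # stop at 50 MA * 1.05
        return 0                # too extended: no position

    def position_size(price, ma50):
        """Convert the band's dollar risk into shares against its stop."""
        risk = risk_amount(price, ma50)
        if risk == 0:
            return 0
        stop = ma50 if price < ma50 * 1.05 else ma50 * 1.05
        if price <= stop:
            return 0            # guard: no room to the stop
        return int(risk / (price - stop))

    position_size(102, 100)   # $500 risk against a $100 stop -> 250 shares
    position_size(108, 100)   # $300 risk against a $105 stop -> 100 shares
    ```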

    So back to the question, I think it really depends on what type of system you are looking at. Does it have a high win rate? Is it based on lots of small losses and the occasional home run? Does it scale into and out of positions?

    Good luck,

  10. Code7


    Just a few comments.

    @ flipflopper

    It's not the same. I suggested using dollar ATR to calculate the number of shares without making it dependent on the SL. That's different from all of your examples 1, 2 and 3.

    @ Mike805

    You can do that, but you will double your position size after a stock has lost 50% of its value. Volatility is usually higher in downtrends, yet you will be increasing size, giving that period a much greater weighting.

    I first want to equally weight all stocks all the time, on a dollar risk adjusted basis. I can still implement volatility filters but want to backtest them without bias.
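    The downtrend problem with fixed-dollar sizing comes down to simple arithmetic (the $10,000 position amount is an assumed figure):

    ```python
    # Fixed-dollar sizing doubles the share count after a 50% decline,
    # exactly when volatility is typically at its highest:
    shares_before = int(10_000 / 100)   # 100 shares at $100
    shares_after = int(10_000 / 50)     # 200 shares after the stock halves to $50
    ```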
    #10     Feb 5, 2010