Stability/Curve-Fit Analysis For Low Trade Count Strategies

Discussion in 'Strategy Development' started by sle, Jun 17, 2012.

  1. sle


    Given the kind of trading that I do, I keep bumping into the same issue when analyzing and improving my strategies. There are theoretical or heuristic models that prompt these trades, but I still like to look at the historical performance.

    For example, I have a strategy that produces from 10 to 20 trades a year, with data going back 10 years. The trades are prompted by a combination of volatility and actuarial models. However, proper tests for curve-fitting are not possible, simply because of the small number of trades. My general MO has been to do the following:
    - bootstrap the results to assess the statistical significance of the performance (usually vs. some ex-condition tests)
    - if the strategy uses multiple filters/models, assess the performance of the strategy with all possible combinations of the filters
    - use random bootstrapping of a subset (e.g. 50%) of the trades and produce 50th percentile and lowest 5th percentile performance statistics (simple stuff, like median trade, percent winners, etc.)
    - apply random bumps to model parameters to assess the stability of the strategy with respect to the environment
    - if the strategy uses stop-losses, only overlay stop loss performance from ex-condition testing
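
    The subset-bootstrap step above could be sketched roughly like this in Python; the trade PnL series, the 50% sample fraction, and the percentile cutoffs are all placeholders for whatever the actual strategy produces:

    ```python
    import numpy as np

    def subset_bootstrap_stats(trade_pnl, frac=0.5, n_boot=10_000, seed=0):
        """Resample a fraction of the trades with replacement and collect
        simple performance statistics for each resample."""
        rng = np.random.default_rng(seed)
        pnl = np.asarray(trade_pnl, dtype=float)
        k = max(1, int(frac * len(pnl)))
        medians, win_rates = [], []
        for _ in range(n_boot):
            sample = rng.choice(pnl, size=k, replace=True)
            medians.append(np.median(sample))
            win_rates.append(np.mean(sample > 0))
        medians, win_rates = np.array(medians), np.array(win_rates)
        # 50th percentile = typical outcome, 5th percentile = pessimistic outcome
        return {
            "median_pnl_p50": np.percentile(medians, 50),
            "median_pnl_p5": np.percentile(medians, 5),
            "win_rate_p50": np.percentile(win_rates, 50),
            "win_rate_p5": np.percentile(win_rates, 5),
        }

    # Hypothetical list of 15 trade results, just for illustration
    stats = subset_bootstrap_stats([1.2, -0.4, 0.8, 2.1, -1.0, 0.3, 0.9,
                                    -0.2, 1.5, -0.7, 0.6, 0.4, -0.3, 1.1, 0.2])
    print(stats)
    ```

    The gap between the 50th and 5th percentile figures gives a rough feel for how fragile the headline statistics are at this trade count.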

    How do you handle these issues? I am pretty sure there are other people working with low-frequency strategies who must have come up with some smart ways of checking themselves.
  2. Statistically assess the validity and robustness of the underlying vol and heuristic models, preferably in combination, rather than the validity and robustness of the strat (trade history) itself.

    Bootstrapping will be useful in that assessment, but be careful with the naive bootstrap when the underlying data is autocorrelated and conditionally heteroskedastic.
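
    One standard workaround for that caveat is a moving-block bootstrap, which resamples contiguous blocks rather than individual observations so that short-range dependence survives. A rough sketch, where the block length is an assumption you would tune to the data's autocorrelation:

    ```python
    import numpy as np

    def moving_block_bootstrap(x, block_len=10, n_boot=1000, seed=0):
        """Resample contiguous blocks so short-range dependence
        (autocorrelation, volatility clustering) is preserved."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x, dtype=float)
        n = len(x)
        n_blocks = int(np.ceil(n / block_len))
        starts_max = n - block_len  # last valid block start index
        out = np.empty((n_boot, n))
        for b in range(n_boot):
            starts = rng.integers(0, starts_max + 1, size=n_blocks)
            resampled = np.concatenate([x[s:s + block_len] for s in starts])
            out[b] = resampled[:n]  # trim the last partial block
        return out

    # Synthetic return series standing in for real strategy/underlying data
    returns = np.random.default_rng(1).normal(0, 0.01, size=200)
    samples = moving_block_bootstrap(returns, block_len=10)
    print(samples.shape)
    ```

    Each row of `samples` is one bootstrap replicate of the full series, on which you can recompute whatever statistic you are testing.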
  3. Quit agonizing over it and just trade your systems that pop more freakwantly.
  4. sle


    I already do that; the models are validated on their own (or they are already built on some sort of statistical relationship, e.g. based on regressions). I want to get a better handle on the actual strategy testing (though it's secondary in my process), simply because this is yet another layer that helps you sleep better.
  5. jcl


    Increase the number of trades by oversampling. Re-sample the price data several times with different bar offsets, and run the simulation over every sample. Check the average and the variance of the sample results.
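
    jcl's offset idea could be sketched roughly as follows: aggregate fine-grained closes into coarser bars at every possible phase offset and run the same backtest on each series. The aggregation factor and the toy backtest callable are placeholders, not anyone's actual setup:

    ```python
    import numpy as np

    def offset_samples(closes, factor=4):
        """Yield one coarser close series per bar offset 0..factor-1,
        taking every `factor`-th close starting at that offset."""
        closes = np.asarray(closes, dtype=float)
        for offset in range(factor):
            yield offset, closes[offset::factor]

    def run_over_offsets(closes, backtest, factor=4):
        """Run the same backtest on every offset sample and report
        the mean and variance of the results across offsets."""
        results = np.array([backtest(s) for _, s in offset_samples(closes, factor)])
        return results.mean(), results.var()

    # Toy price path and toy "backtest" (total log return of the series),
    # standing in for a real strategy simulation
    closes = np.cumprod(1 + np.random.default_rng(2).normal(0, 0.005, 1000))
    mean_r, var_r = run_over_offsets(closes, lambda s: np.log(s[-1] / s[0]))
    print(mean_r, var_r)
    ```

    A large variance across offsets would suggest the results depend on bar phasing rather than on the signal itself.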
  6. I don't think that curve-fitting tests are impossible just because of a small number of trades.

    First of all, is this a trend-following system? If it is (and the low number of trades suggests so), have you tested it across several correlated markets? Does it perform as well? This is one curve-fit test you can do right away.
  7. sle


    I meant "proper" curve-fitting tests, like out-of-sample re-tests.

    I do test across the asset space when there are enough assets to test across. For some of these, there might be a similar asset, but for a large set of systems there is nothing "comparable" (like the stuff that I do on VIX).
  8. sle


    I'll try that, interesting idea. Would you use a random bar length x=R(1,2,3), or simply generate all possible combinations and use a random sample of trades as "out-of-sample"?
  9. Out-of-sample tests are not necessarily curve-fitting tests. Think about it. Also see this post.
  10. Oh my lord, it's the first thing I agree with you on.
    Correct: good out-of-sample results are NOT an indication that the system will perform in the future.
    To be honest, nothing can be such an indication, as the future is always unknown.

    However, the more frequently a system trades and the less it is protected when losing, the more likely it is to fail.
    #10     Jun 18, 2012