Evaluation of Backtested System Results

Discussion in 'Strategy Building' started by trend456, Apr 23, 2003.

  1. Trend,


    Suggest you read Trading Systems That Work by Thomas Stridsman. Link:

    http://www.amazon.com/exec/obidos/ASIN/007135980X/tradestationw-20/002-5185722-1417664

    While the title suggests it is about trading systems, I found it to be mostly about testing them, and it includes Monte Carlo as well as other methods to verify the validity of a system before you risk money. It also covers MM, but mostly draws on Vince's work.

    I've read over 50 books on trading and this is far and away the best one I've come across. I'm just starting to put some of his ideas to work, i.e. Monte Carlo.

    Doug S
     
    #21     Apr 25, 2003
  2. man

    man

    1. stability within parameter environment
    2. out of sample testing
    3. paper trading
    4. stress testing by increasing trading cost

    I personally do not put much faith in Monte Carlo for this purpose, since it is self-fulfilling that an optimised sample will beat random; that is the very assumption of optimisation. Nevertheless, it might make sense to run it over the trades and check for streaks.
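    The streak check mentioned above could be sketched as follows: shuffle the trade order many times and look at the distribution of the longest losing streak. A minimal sketch; the trade P&L numbers are hypothetical, not from any system in this thread.

    ```python
    import random

    def longest_losing_streak(trades):
        """Length of the longest run of consecutive losses (P&L <= 0)."""
        longest = run = 0
        for pnl in trades:
            run = run + 1 if pnl <= 0 else 0
            longest = max(longest, run)
        return longest

    def streak_distribution(trades, n_shuffles=10_000, seed=42):
        """Shuffle the trade order repeatedly and collect the longest
        losing streak of each permutation."""
        rng = random.Random(seed)
        trades = list(trades)
        streaks = []
        for _ in range(n_shuffles):
            rng.shuffle(trades)
            streaks.append(longest_losing_streak(trades))
        return streaks

    # Hypothetical trade P&L sequence (wins positive, losses negative):
    trades = [120, -80, -60, 95, -70, 150, -40, -55, 200, -90]
    streaks = streak_distribution(trades)
    worst = max(streaks)                         # worst streak seen in any ordering
    typical = sorted(streaks)[len(streaks) // 2] # median streak
    ```

    Comparing the system's actual longest losing streak against this shuffled distribution shows whether the losses cluster more than random ordering would suggest.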


    peace
     
    #22     Apr 25, 2003
  3. acrary

    acrary

    No matter what test you do, the trades are going to only be a sample of the ultimate distribution. If it shows 60% winners for the past 10 years, that may be the mean or only a skewed result from your tests. Here's something to stress test the sample.

    To estimate the error in a system's test sample:

    (Can be used for win % because the frequency of wins/losses is approximately normally distributed.)

    Error estimate

    E = (z * std. dev. of sample) / sqrt of number of samples in test

    E = Error estimate
    z = number of std. dev. of normal distribution for the confidence level needed.
    z = 3.08 = 99.8% confidence level
    z = 2.58 = 99.0% confidence level
    z = 1.96 = 95.0% confidence level
    z = 1.645 = 90.0% confidence level

    Ex. 50 trades in test (1 = win 0 = loss)
    sample mean = 40% winners or .40
    sample std. dev. = .25

    If we want to know the estimate of the mean to the 99% level then:

    E = (2.58 * .25) / sqrt(50)

    E = .0912

    so with 99% certainty, we know the mean winning % range is .40 +- .0912 (you can expect to see wins between 30.88% and 49.12% in the future). If that's not acceptable, either do more tests or work on a system with a tighter standard deviation of wins versus losses.

    So how many samples do we need to be 99% certain of the mean?

    n = ((z**2) * (std. dev. of sample**2)) / (( 1 - confidence level required)**2)

    n = number of tests we need to run
    z = same as above
    std. dev. of sample = std. dev. from sample size we have seen
    1 - confidence level required = how exact do we want it:
    .90 confidence = 1 - .9 or .1 for the formula
    .95 confidence = 1 - .95 or .05 for the formula
    .99 confidence = 1 - .99 or .01 for the formula
    .998 confidence = 1 - .998 or .002 for the formula

    in this case we want 99% confidence

    n = ((2.58**2)* (.25**2)) / (.01**2)

    n = (6.6564 * .0625) / .0001

    n = 4,160 trades needed to establish, at the 99% confidence level, that the mean really is 40% winners.
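    The sample-size formula above is a one-liner; this sketch follows the post's convention of using (1 - confidence level) as the tolerated error term:

    ```python
    def required_samples(sample_std, tolerated_error, z=2.58):
        """n = z^2 * s^2 / E^2, with E the tolerated error
        (the post uses 1 - confidence level, e.g. .01 for 99%)."""
        return (z ** 2) * (sample_std ** 2) / (tolerated_error ** 2)

    n = required_samples(0.25, 0.01, z=2.58)  # the thread's example
    ```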

    After you've done the test for win % you can also do it for win size and loss size (independently). Usually the win size will not correspond to a normal distribution. If you're cutting losses short and letting profits run, then you should have some outlier trades in the win size distribution. For the test to be valid you need to eliminate the outliers. I've found that removing the top 5% of winning trades (the best 5 out of each 100) has been enough to move the distribution to a more normal bell curve.
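    The outlier-trimming step could be sketched like this; the 5% cutoff follows the post, and the win sizes are hypothetical:

    ```python
    def trim_top_winners(wins, fraction=0.05):
        """Drop the largest `fraction` of winning trades so the remaining
        win-size distribution is closer to a normal bell curve."""
        wins = sorted(wins)
        n_drop = int(len(wins) * fraction)
        return wins[:len(wins) - n_drop] if n_drop else wins

    # Hypothetical win sizes with one large outlier:
    wins = [120, 95, 150, 200, 180, 110, 3500, 90, 130, 105,
            140, 160, 115, 125, 175, 95, 210, 100, 135, 145]
    trimmed = trim_top_winners(wins)  # drops the single 3500 outlier
    ```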

    When you've done the tests on the win size and loss size, you'll end up with something like:

    win size mean $500 +- $100 at the 99% confidence level

    loss size mean $250 +- $50 at the 99% confidence level

    Then you compute a pessimistic expectation using the low end of win % and win size and the high end of the loss size. If it shows any profit, then you've probably got a winner (as long as it wasn't curve fit).

    Ex.
    E = (400 * .5) - (300 * .5)
    E = 50
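    The pessimistic expectation above combines the worst ends of the confidence intervals; a minimal sketch using the thread's example numbers ($400 low-end win, $300 high-end loss, 50% win rate):

    ```python
    def pessimistic_expectation(win_rate_low, avg_win_low, avg_loss_high):
        """Expectation per trade from the worst ends of the intervals:
        E = p * win - (1 - p) * loss."""
        return win_rate_low * avg_win_low - (1 - win_rate_low) * avg_loss_high

    e = pessimistic_expectation(0.5, 400, 300)  # the thread's example
    ```

    A positive result here means the system stays profitable even under its pessimistic interval bounds.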
     
    #23     Apr 25, 2003
  4. Acrary
    Thanks.
     
    #24     Apr 25, 2003
  5. acrary

    acrary

    Thanks for pointing out the error. I edited the post so it should show the correct calculations now.
     
    #25     Apr 25, 2003
  6. maxpi

    maxpi

    It is not rocket science to me, maybe to others. I just play with the data and throw out the "outliers" (data where a small change in a parameter makes a big change in outcome). I try to find the sweet spot: the parameters for the system that seem best and suit my trading requirements. Then I put it on a chart, look at a number of issues, and see if it holds up across the market. That might lead me to go back for another optimization, maybe not.

    Actually, I just searched on 3D graphing and learned that Excel will do it! Here is a link that illustrates this:

    http://www.tradefutures.com/cqg.htm


    Max
     
    #26     Apr 25, 2003
  7. #27     Apr 25, 2003
  8. acrary,

    Thanks for the info. Your comments are very helpful.

    Good Trade

    Trend
     
    #28     Apr 26, 2003
  9. Max,

    Cool!

    Thanks .

    Trend
     
    #29     Apr 26, 2003
    The error estimate suggested by Acrary gives a good
    statistical estimate of the error when the events are
    normally distributed.
    http://www.uic.edu/classes/upp/upp503/sanders7.pdf

    How would you estimate the error if the events are skewed to the right?

    This is the case when the average win > the average loss.
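    One common non-parametric approach to the skew problem posed above is a percentile bootstrap: resample the trades with replacement and read the interval off the empirical distribution of resampled means, with no normality assumption. A minimal sketch; the right-skewed win sizes are hypothetical:

    ```python
    import random

    def bootstrap_mean_interval(samples, n_resamples=10_000,
                                confidence=0.99, seed=7):
        """Percentile bootstrap confidence interval for the mean.
        Works for skewed distributions because it uses the empirical
        distribution of resampled means rather than a normal model."""
        rng = random.Random(seed)
        n = len(samples)
        means = sorted(
            sum(rng.choice(samples) for _ in range(n)) / n
            for _ in range(n_resamples)
        )
        alpha = (1 - confidence) / 2
        lo = means[int(alpha * n_resamples)]
        hi = means[int((1 - alpha) * n_resamples) - 1]
        return lo, hi

    # Hypothetical right-skewed win sizes (a few large outlier wins):
    wins = [50, 60, 55, 70, 65, 80, 75, 90, 400, 650]
    lo, hi = bootstrap_mean_interval(wins)
    ```

    Note that for heavily skewed data the interval is asymmetric around the sample mean, which is exactly the behavior the normal-theory formula cannot capture.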

    Thanks
     
    #30     Apr 26, 2003