Valid de-optimization?

Discussion in 'Strategy Building' started by dom993, Aug 22, 2013.

  1. dom993

    My system CLAlwaysIn uses 24 patterns for its trading decisions ... those decisions are taken only at trend changes (signaled by my PA-based trend indicator).

    The current version of the system (and all prior versions) uses a fixed-priority scheme among the 24 patterns and selects the matching pattern with the highest priority.
    This gave a very high performance level in-sample, with out-of-sample performance on the 1st half of 2013 at about 65% of that level.
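
    For illustration, here is a minimal sketch of what such a fixed-priority selection could look like (Python, chosen only for readability; the function and variable names are hypothetical and not taken from CLAlwaysIn):

    # Hypothetical sketch: pick the highest-priority pattern among those that match.
    def select_fixed_priority(matching_patterns, priority):
        """matching_patterns: ids of the patterns that match at this trend change.
        priority: dict mapping pattern id -> priority rank (higher wins)."""
        if not matching_patterns:
            return None  # nothing matches, no trade decision
        return max(matching_patterns, key=lambda p: priority[p])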

    I have tested a weighted approach, where the scores of all applicable patterns are added together (*) and the trading decision is based on the cumulative score. The in-sample performance is about 12% lower, but the out-of-sample performance on the 1st half of 2013 is virtually identical to its in-sample level (using avg/trade as the performance metric).

    (In-sample covers about 5,400 trades; out-of-sample on the 1st half of 2013 is roughly 300 trades.)

    Interestingly, the out-of-sample performance of the weighted approach is much better than that of the fixed-priority one (about +35% on avg/trade).

    I am tempted to believe that the weighted approach is much less "over-fitted" than the fixed-priority one ... comments welcome.

    Thanks in advance


    (*) A positive score for a pattern indicates statistical follow-through (an opportunity to go with the signal); a negative score indicates statistical chop (an opportunity to fade the signal).
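
    By contrast, a minimal sketch of the weighted approach (again illustrative Python; the names are hypothetical) would sum the signed scores of every matching pattern and let the sign of the total decide whether to go with the trend-change signal or fade it:

    # Hypothetical sketch: the cumulative signed score drives the decision.
    def decide_weighted(matching_patterns, score):
        """matching_patterns: ids of the patterns that match at this trend change.
        score: dict mapping pattern id -> signed score
        (positive = statistical follow-through, negative = statistical chop)."""
        total = sum(score[p] for p in matching_patterns)
        if total > 0:
            return "go_with_signal"   # follow-through dominates
        if total < 0:
            return "fade_signal"      # chop dominates
        return None                   # no edge either way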
     

  2. Your thoughts are consistent with statistical learning and ensemble methods.
    Nice observation and work.