Here's a couple of podcasts that go over some interesting ways to tackle optimization:
http://bettersystemtrader.com/system-parameter-permutation-a-better-alternative/
http://bettersystemtrader.com/046-perry-kaufman/
Yes, I do have parameters that work well out of sample. What else can I do to ensure that I have not curve fitted? This is also why I want to run the parameters again from 2007 to 2009 (another batch of out-of-sample data).
Oh, I see. If you're just running the same parameter values through another set of out-of-sample data for additional confirmation, that's fine. Just to make sure we're on the same page: you have some parameters, p. You optimize them over some time period, t2, and get optimal values for p, p*. Then you look at your results over some out-of-sample (OOS) set, t3, with p* and you like what you see. Then you want to run p* over t1 for additional confirmation. No problem. You've just enlarged your OOS data. Works for me.

One thing to think about is whether your strategy is time dependent in some way, so that the in-sample (IS) set needs to be prior to the OOS set for some reason. Sometimes this happens.

If you're thinking things are looking good and you want more evidence against curve fitting (I actually prefer to call it "over-optimization"), then you should do some basic statistics to see if the OOS results are statistically significantly better than the risk-free rate or, for simplicity, zero. Depending on sample sizes and variance, you could easily have what appears to be a good result in your OOS data that is actually just due to random chance.
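As a rough sketch of that last point, here's how I'd check whether OOS per-trade returns are significantly greater than zero. The returns array and names below are just placeholders for your own results, and this is only one simple way to do the test:

import numpy as np
from scipy import stats

# Placeholder: one return per OOS trade; substitute your own numbers.
oos_trade_returns = np.array([0.012, -0.004, 0.020, 0.003, -0.007, 0.015, 0.001, 0.009])

# One-sided t-test: H0 mean return <= 0, H1 mean return > 0 (the "alternative" argument needs scipy >= 1.6).
t_stat, p_value = stats.ttest_1samp(oos_trade_returns, popmean=0.0, alternative="greater")

print(f"mean OOS return: {oos_trade_returns.mean():.4f}")
print(f"t-statistic: {t_stat:.3f}, one-sided p-value: {p_value:.3f}")

# A small p-value (say < 0.05) is evidence the OOS edge isn't just random chance;
# with few trades or high variance you'll often find it isn't significant.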
Great audio, lots of useful info. And "Loose pants fit anyone" appears to be a good philosophy for statistical analysis of algos.
Gents, Have you guys listened to the latest interview - 060 – Strategy Optimization with Robert Pardo? It is really awesome. BTW, Robert is not a fan of Anchored WFO!
In addition to the out-of-sample testing, I would suggest some bias-correction measures, which are rooted in information theory. For example, you might consider something like this:

BCP = (P * SQRT(T)) / (D * D)

where
BCP is the bias-corrected performance,
P is the raw in-sample performance, such as the Sharpe ratio,
T is the number of trades,
D is the dimension of the model (6 in your case).

So, in-sample, you want to find the set of parameters that maximizes BCP. For the motivation behind this formula, take a look at the Akaike Information Criterion.
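To make that concrete, a minimal sketch of scoring candidate parameter sets by BCP might look like the following. The candidate names and numbers are made up for illustration; plug in your own in-sample Sharpe ratios and trade counts:

import math

# Each candidate: (raw in-sample performance P, e.g. Sharpe ratio; number of trades T).
# These values are hypothetical.
candidates = {
    "params_A": (1.8, 40),
    "params_B": (1.5, 120),
    "params_C": (2.1, 25),
}

D = 6  # dimension of the model (number of free parameters)

def bcp(P, T, D):
    # Bias-corrected performance: BCP = P * sqrt(T) / D^2
    return P * math.sqrt(T) / (D * D)

scores = {name: bcp(P, T, D) for name, (P, T) in candidates.items()}
best = max(scores, key=scores.get)
print(scores)
print("best by BCP:", best)

The idea is simply that a high raw Sharpe on few trades with many free parameters gets penalized, while a decent Sharpe supported by many trades scores better.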