I've heard that the most rigorous backtesting mechanism is "walk-forward" optimization, where you optimize your parameters dynamically. This closely emulates the way strategy developers backtest and deploy trading strategies: at any point in time, one optimizes the parameters on past data, applies the "optimized" model to some future period, and after a while re-calibrates by refitting the model to updated data.

After the walk-forward optimization, only two parameters remain global and floating: the lookback period (i.e., on each optimization, how much past data one fits the model to) and the forward step size (i.e., how often one re-calibrates). Any thoughts on how to choose these two global meta-parameters?

If I run another, bigger optimization on top of these walk-forward optimizations, I find that:

1. The overall Sharpe ratio is 2.09 if I re-calibrate the model every 12 months, using the past 6 months of data in each optimization.
2. The overall Sharpe ratio is 1.98 if I re-calibrate the model every 1 day, using the past 6 months of data in each optimization.

If, based on these results, I choose to re-calibrate every 12 months instead of every day, I feel I am just data-mining... What do you think? Are there better approaches? Thanks a lot!
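
For concreteness, here is a minimal, self-contained sketch of the loop I mean (Python, synthetic prices, a made-up one-parameter momentum strategy; every name in it is hypothetical, not anyone's production code). `lookback` and `step` are the two global meta-parameters, and the loop at the bottom that scores `(lookback, step)` pairs by realized out-of-sample Sharpe is exactly the second-level optimization I'm worried about:

```python
# Toy walk-forward sketch (hypothetical names throughout): a one-parameter
# momentum strategy whose signal window is re-fitted by grid search on each
# lookback window, then applied out-of-sample for one forward step.
import numpy as np

rng = np.random.default_rng(0)
# ~10 years of synthetic daily closes, 1% daily vol
prices = 100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 2520)))

def signal_returns(prices, window):
    """Daily strategy returns: long when price closes above its `window`-day mean."""
    rets = np.diff(prices) / prices[:-1]
    means = np.convolve(prices, np.ones(window) / window, mode="valid")
    pos = (prices[window - 1:-1] > means[:-1]).astype(float)
    return pos * rets[window - 1:]

def sharpe(rets):
    return np.sqrt(252) * rets.mean() / rets.std() if rets.std() > 0 else 0.0

def walk_forward(prices, lookback, step, grid=(5, 10, 20, 50)):
    oos = []
    for t in range(lookback, len(prices) - step, step):
        # In-sample fit: pick the grid value with the best lookback Sharpe.
        fit = prices[t - lookback:t]
        best = max(grid, key=lambda w: sharpe(signal_returns(fit, w)))
        # Out-of-sample: apply the fitted parameter to the next `step` days
        # (include `best` days of history so the signal is defined from day t).
        seg = prices[t - best:t + step + 1]
        oos.append(signal_returns(seg, best)[-step:])
    return np.concatenate(oos)

# The meta-level question: scoring (lookback, step) pairs by realized
# out-of-sample Sharpe is itself an optimization over the same history.
for lookback, step in [(126, 252), (126, 1)]:
    print(lookback, step, round(sharpe(walk_forward(prices, lookback, step)), 2))
```

(Obviously the real strategy and objective would replace `signal_returns` and `sharpe`; the sketch only shows where `lookback` and `step` sit in the loop, and why choosing them by the final Sharpe feels like fitting one more layer of parameters.)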