Ideally, the set of optimal parameters and decisions for the first part of the backtest period is also the optimal set for the second period, but things are rarely this perfect. The performance on the second part of the data should at least be reasonable. Otherwise, the model has data-snooping bias built into it, and one way to cure it is to simplify the model and eliminate some parameters.

A more rigorous (albeit more computationally intensive) method of out-of-sample testing is to use moving optimization of the parameters. In this case, the parameters themselves are constantly adapting to the changing historical data, and data-snooping bias with respect to parameters is eliminated. In general, you should eliminate as many conditions, constraints, and parameters as possible as long as there is no significant decrease in performance on the test set. Whatever changes you make to the strategy to improve its performance on the training set must also improve its performance on the test set.

As I argued in the risk management section, there are rational reasons to hold on to a losing position (e.g., when you expect mean-reverting behavior); however, behavioral biases cause traders to hold on to losing positions even when there is no rational reason (e.g., when you expect trending behavior, and the trend is such that your positions will lose even more).

Trading strategies can be profitable only if securities prices are either mean-reverting or trending. Otherwise, they are random-walking, and trading will be futile.

I said before that one of the ways momentum is generated is the slow diffusion of information. In this case, the process has a finite lifetime. The average value of this finite lifetime determines the optimal holding period, which can usually be discovered in a backtest.
However, rather than imposing an arbitrary stop-loss price and thus introducing an extra adjustable parameter, which invites data-snooping bias, exiting based on the most recent entry signal is clearly justified by the rationale of the momentum model. The reason why these strategies have high Sharpe ratios is simple: based on the "law of large numbers," the more bets you can place, the smaller the percentage deviation from the mean return you will experience.

Mean-reverting regimes are more prevalent than trending regimes. Trending regimes are usually triggered by the diffusion of new information, the execution of a large institutional order, or "herding" behavior. Exit signals should be created differently for mean-reversion versus momentum strategies.
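The "law of large numbers" point above is just square-root scaling: with independent bets of identical edge, the mean return grows linearly with the number of bets while its standard deviation grows only with the square root, so the Sharpe ratio scales as sqrt(N). A minimal sketch (the numbers are illustrative, not from the text):

```python
import math

def annualized_sharpe(per_bet_sharpe: float, bets_per_year: int) -> float:
    # With independent, identically distributed bets, mean PnL grows
    # linearly in N while its standard deviation grows as sqrt(N),
    # so the Sharpe ratio scales with sqrt(N).
    return per_bet_sharpe * math.sqrt(bets_per_year)

# The same small per-bet edge gives a very different annual Sharpe
# depending on how many independent bets the strategy can place:
print(round(annualized_sharpe(0.1, 252), 2))  # daily bets  -> 1.59
print(round(annualized_sharpe(0.1, 12), 2))   # monthly bets -> 0.35
```

This is why high-frequency mean-reversion strategies, which place many small bets, tend to show higher Sharpe ratios than low-frequency trend strategies with the same per-bet edge.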

"A more rigorous (albeit more computationally intensive) method of out-of-sample testing is to use moving optimization of the parameters. In this case, the parameters themselves are constantly adapting to the changing historical data, and data-snooping bias with respect to parameters is eliminated." Could anyone explain the above concept vs. curve fitting, please?
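One way to see the distinction is to sketch the moving ("walk-forward") optimization loop. Everything below is illustrative (synthetic prices, a toy mean-reversion rule, a made-up lookback grid); the point is the structure: the parameter is re-fit on a rolling training window and then applied only to the data that follows it, so no parameter ever "saw" the prices it trades on.

```python
import numpy as np

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 1000)) + 100  # synthetic price series

def backtest_pnl(prices: np.ndarray, lookback: int) -> float:
    # Toy mean-reversion rule: short above the moving average, long below.
    pnl = 0.0
    for t in range(lookback, len(prices) - 1):
        ma = prices[t - lookback:t].mean()
        position = -1 if prices[t] > ma else 1
        pnl += position * (prices[t + 1] - prices[t])
    return pnl

train_len, test_len = 250, 50
grid = [5, 10, 20, 40]  # hypothetical lookback candidates
oos_pnl = 0.0
for start in range(0, len(prices) - train_len - test_len, test_len):
    train = prices[start:start + train_len]
    # Curve fitting happens here too, but only on data BEFORE the test slice:
    best = max(grid, key=lambda lb: backtest_pnl(train, lb))
    # Include a warm-up of `best` bars so trading starts exactly at the
    # test boundary, then evaluate genuinely out of sample:
    test = prices[start + train_len - best:start + train_len + test_len]
    oos_pnl += backtest_pnl(test, best)

print(f"walk-forward out-of-sample PnL: {oos_pnl:.1f}")
```

Optimizing once over the whole dataset and reporting that same dataset's PnL is curve fitting; here, every PnL contribution comes from prices the optimizer never touched, so the out-of-sample number is an honest estimate (at the cost of running the optimization many times).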

I hope someone more knowledgeable than me can give you a definitive answer, but here's how I understood it: let's say you use ATR to fade extended moves. A) Use a backtest to come up with the best ATR value for the dataset => curve fit. B) Use a dynamic ATR value based on recent volatility => more adaptive.
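The A) vs. B) contrast above can be sketched in a few lines. Everything here is illustrative (synthetic returns with rising volatility, a crude rolling mean-absolute-return as a stand-in for true ATR, a made-up multiplier grid): A) fits one fixed threshold by scanning the full history, while B) compares each move against a threshold computed from only the most recent window, so it adapts as volatility changes without any fitting.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic returns whose volatility quadruples over the sample
returns = rng.normal(0, 1, 500) * np.linspace(0.5, 2.0, 500)

def rolling_abs_mean(window: int) -> np.ndarray:
    # Crude stand-in for ATR: trailing mean absolute return.
    return np.array([np.abs(returns[max(0, t - window):t]).mean() if t else 0.0
                     for t in range(len(returns))])

def fade_pnl(threshold: float) -> float:
    # Fade rule: short after an up move beyond the threshold,
    # long after a down move beyond it, flat otherwise.
    sig = np.where(returns[:-1] > threshold, -1,
          np.where(returns[:-1] < -threshold, 1, 0))
    return float((sig * returns[1:]).sum())

# A) curve fit: pick the multiplier that maximized PnL over the WHOLE dataset
full_atr = float(np.abs(returns).mean())
best_mult = max([0.5, 1.0, 1.5, 2.0], key=lambda m: fade_pnl(m * full_atr))

# B) adaptive: compare each move to its own trailing 20-bar threshold
roll = rolling_abs_mean(20)
sig = np.where(returns[:-1] > roll[:-1], -1,
      np.where(returns[:-1] < -roll[:-1], 1, 0))
adaptive_pnl = float((sig * returns[1:]).sum())
```

In A), `best_mult` encodes the average volatility of this particular dataset; when volatility shifts, the fixed threshold is wrong. In B), the threshold re-centers itself every bar, which is the "more adaptive" behavior the poster describes.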

Those are fine suggestions, but I will go with Michael Harris's method of portfolio tests on similar securities, eliminating out-of-sample testing completely. If the strategy does not perform at least marginally on similar securities, then it is probably overfitted. The complete method for strategy analysis is in his book Fooled by Technical Analysis. I saved multiples of what I paid for the book by following the methodology.

Forward-adapting to the most recent volatility is the only way. Backtested and optimized parameters are only good for R&D; they can't be trusted live.