I have sufficient data to backtest a simple intraday strategy and end up with approx. 1500+ trades in each test case. With this data, I can (METHOD A) optimize the three strategy variables by running all possible variable combinations over the whole data set (i.e. 1500+ trades), then look for stable value ranges for each variable that give me the best results (in terms of PF, Sharpe, cumulative profit, win/loss ratio, average trade, or whatever my criteria are). Alternatively, I could (METHOD B) divide the data set into "training" (in-sample) and "testing" (out-of-sample) subsets, optimize over the first group, and then test my results on the second.

My questions are:

1) What benefits will METHOD B bring me that I don't get with (the much simpler) METHOD A? After all, I can run METHOD A over 1500+ theoretical historical trades ... which seems like a lot to me, and enough to give confidence (or at least as much as I would get from any method) that the strategy may have an edge ...

2) Do the benefits of METHOD B outweigh the extra time and complication of going through all its steps (and the additional hazard of selection bias in deciding which data goes in-sample and which out-of-sample)? A forward test will need to be the next step in both cases anyway ...

Unless the benefits of METHOD B are material, keep-it-simple-stupid still seems the best mantra to follow.

I await the flaming ... Please feel free simply to point me in the direction of other ET threads if you feel this matter has already been hashed out thoroughly and effectively elsewhere. As always, thanks in advance.
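For what it's worth, here is a minimal sketch of what I mean by METHOD B. Everything in it is hypothetical: the `backtest` function, the single `threshold` parameter, and the synthetic trades are stand-ins. With real data you would re-run the strategy for every combination of the three variables, but only on the in-sample portion, and then score the chosen combination once on the untouched out-of-sample portion.

```python
import random

random.seed(42)

def backtest(trades, threshold):
    """Toy profit function: total P&L of trades whose signal passes the filter.
    A real backtest would re-simulate the strategy per parameter combination."""
    return sum(pnl for signal, pnl in trades if signal > threshold)

# ~1500 synthetic trades: (signal strength, P&L). Stand-in for real history.
trades = [(random.random(), random.gauss(0, 1)) for _ in range(1500)]

# METHOD B: split chronologically (earlier trades in-sample, later out-of-sample)
# so the "test" data is genuinely unseen during optimization.
split = int(len(trades) * 0.7)
in_sample, out_of_sample = trades[:split], trades[split:]

# Optimize the parameter on the in-sample data only...
candidates = [i / 10 for i in range(10)]
best = max(candidates, key=lambda t: backtest(in_sample, t))

# ...then evaluate that single, frozen choice on the held-out data.
print("best in-sample threshold:", best)
print("out-of-sample P&L:", round(backtest(out_of_sample, best), 2))
```

The split is chronological rather than random precisely to mimic what a forward test would do: the parameters are chosen without any knowledge of the later data.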