Given the kind of trading I do, I keep running into the same issue when analyzing and improving my strategies. There are theoretical or heuristic models that prompt these trades, but I still like to look at historical performance. For example, I have a strategy that produces 10 to 20 trades a year, with data going back 10 years. The trades are prompted by a combination of volatility and actuarial models. However, proper tests for curve-fitting are not possible, simply because of the small number of trades.

My general MO has been the following:

- bootstrap the results to assess the statistical significance of the performance (usually vs. some ex-condition tests)
- if the strategy uses multiple filters/models, assess the performance of the strategy under all possible combinations of the filters
- bootstrap random subsets (e.g. 50%) of the trades and produce 50th-percentile and lowest-5th-percentile performance statistics (simple stuff, like median trade, percent winners, etc.)
- apply random bumps to the model parameters to assess the stability of the strategy with respect to its environment
- if the strategy uses stop-losses, only overlay stop-loss performance from ex-condition testing

How do you handle these issues? I am pretty sure there are other people working with low-frequency strategies who must have come up with some smart ways of checking themselves.
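To make the subsample-percentile step concrete, here is a minimal sketch of how it can be done. Everything here is an assumption for illustration: the function name `subsample_stats`, the per-trade P&L input, and the choice of statistics (median trade, percent winners) are just hypothetical stand-ins, not a definitive implementation.

```python
import numpy as np

def subsample_stats(trade_pnls, frac=0.5, n_boot=10_000, seed=0):
    """Resample a fraction of the trades many times and report the
    50th and 5th percentiles of simple performance statistics
    across the resamples.

    trade_pnls : 1-D array-like of per-trade P&L (hypothetical input)
    frac       : fraction of trades drawn in each resample (e.g. 50%)
    n_boot     : number of bootstrap resamples
    """
    rng = np.random.default_rng(seed)
    pnls = np.asarray(trade_pnls, dtype=float)
    k = max(1, int(frac * len(pnls)))  # trades per resample

    med_trades, pct_winners = [], []
    for _ in range(n_boot):
        # draw a random subset of trades (with replacement)
        sample = rng.choice(pnls, size=k, replace=True)
        med_trades.append(np.median(sample))
        pct_winners.append(np.mean(sample > 0))

    # (50th percentile, lowest-5th percentile) per statistic
    return {
        "median_trade": (np.percentile(med_trades, 50),
                         np.percentile(med_trades, 5)),
        "pct_winners": (np.percentile(pct_winners, 50),
                        np.percentile(pct_winners, 5)),
    }
```

The 5th-percentile numbers then serve as a crude pessimistic bound: if the strategy still looks acceptable when only half the trades are kept and you read off the bottom tail, the result is less likely to hinge on a handful of lucky trades.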