Curious, how do you guys go about backtesting? For example, if you have 10 yrs worth of data, will you simply decide on the strategies/commodities you are willing to trade, find the most stable primary parameters, backtest, do some forward testing, and then be satisfied that your system is reliable (given it satisfies the goals you're trying to achieve and the risks you're willing to take)? Or do you use alternative methodologies, like backtesting across the entire 10 yr timeframe and also across the last 3 yrs (so like 1/1/91-1/1/01 and 1/1/98-1/1/01), picking the most stable parameters from the later part that are also strong across the 10 yr period (so basically giving more weight to the last 3 yrs vs. evenly across the last 10)? Etc..... <~ Just an old dog looking to pick up new tricks... I generally do the 1st one (straight backtesting across 10 yrs, some forward testing, then I'm satisfied), but always looking for new ideas.
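The second methodology (weighting the recent window more heavily while requiring strength across the full history) can be sketched roughly as below. The `backtest()` function and the candidate grid are purely hypothetical stand-ins for a real backtesting engine, just so the selection logic is concrete and runnable:

```python
# Sketch: pick the parameter that scores best on a weighted blend of the
# recent window and the full history, requiring it to be positive on both.
# backtest() here is a TOY placeholder, not a real strategy evaluation.
import random

random.seed(42)

def backtest(param, data):
    """Toy score: segment's mean return, penalized by distance from an
    (arbitrary) sweet-spot parameter value of 5. Stand-in only."""
    return sum(data) / len(data) - abs(param - 5) * 0.1

full_history = [random.gauss(0.05, 1.0) for _ in range(2520)]  # ~10 yrs daily
recent = full_history[-756:]                                   # ~last 3 yrs

candidates = range(1, 11)
scores = {}
for p in candidates:
    s_full = backtest(p, full_history)
    s_recent = backtest(p, recent)
    # Weight the last 3 yrs more heavily (70/30 here, an arbitrary choice),
    # but only consider parameters that hold up on BOTH windows.
    if min(s_full, s_recent) > 0:
        scores[p] = 0.7 * s_recent + 0.3 * s_full
    else:
        scores[p] = float("-inf")

best = max(scores, key=scores.get)
print("most stable parameter under weighted scoring:", best)
```

The 70/30 weighting and the "positive on both windows" filter are just one way to encode "strong recently and not broken historically"; the weights themselves are another parameter you can stress-test.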

If you let a system create random entries, your chances are that about 3 out of 100 tries will have a Sharpe ratio above 1.0, which is about the figure the CTAs on this planet are trading. That means you can expect most of your backtesting to yield results that are statistical flukes. We try to address this in three different ways. The first approach is to start with some research on the time series as such and try to find beyond-random anomalies. The second backtesting approach is to let a system choose between different parameter sets, trade it for some time, and do it again; that is a classic walk-forward approach, and we do it on up to 15 years of intraday data. The third thing is to have ideas run in paper trading on the side when we would not think they are a fit from an academic perspective, yet feel that there could be something behind it.
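The random-entries point is easy to check yourself. The sketch below generates many strategies that go long at random on a driftless random walk and counts how many clear an annualized Sharpe of 1.0 by luck alone; the exact fraction depends heavily on the sample length and strategy universe (over a full 10 years of daily data it is typically far below 3%), so the numbers here are illustrative, not a reproduction of the poster's figure:

```python
# Illustration: how many purely random strategies clear Sharpe 1.0 by luck?
# The market series has NO edge by construction, so any "winner" is a fluke.
import math
import random

random.seed(0)

DAYS = 2520           # ~10 years of daily returns
N_STRATEGIES = 1000

market = [random.gauss(0.0, 0.01) for _ in range(DAYS)]  # driftless noise

def annualized_sharpe(returns):
    mean = sum(returns) / len(returns)
    var = sum((r - mean) ** 2 for r in returns) / (len(returns) - 1)
    sd = math.sqrt(var)
    return (mean / sd) * math.sqrt(252) if sd > 0 else 0.0

lucky = 0
for _ in range(N_STRATEGIES):
    position = [random.choice([0, 1]) for _ in range(DAYS)]   # random long/flat
    strat = [p * r for p, r in zip(position, market)]
    if annualized_sharpe(strat) > 1.0:
        lucky += 1

print(f"{lucky}/{N_STRATEGIES} random strategies cleared Sharpe 1.0")
```

Shortening DAYS (i.e. backtesting on less history) raises the lucky fraction sharply, which is exactly why short backtests are the most fertile ground for flukes.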

Yes, very profitable when I worked/traded them with futures and ETFs. My main point was to see if there were any new tricks an old dog could pick up. ^_^

Actually I'd love this straightforward approach. Looks like every robot, after plugging in a data feed and history, can make money easily, without any further human intervention.

Just as there can't be a "theory" of trading, there is NO "theory" of backtesting. You've got to find something by yourself. Only charlatans have theories, often without backtesting: look around at ET; if you can't spot them, you're in big trouble.

As a general principle... Backtesting should be used only to verify market inefficiencies that ** can be rationally explained **. Otherwise... You are just practicing "data mining"... Meaning applying brute force scanning to random data... Which will, inevitably, result in you finding endless seemingly non-random patterns... That you will think are significant... But are just the worthless product of a classic statistical pitfall called "data mining".
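The data-mining pitfall above is easy to demonstrate: brute-force scanning enough parameter combinations on pure noise will always "find" something that looks great in-sample and then evaporates out-of-sample. The moving-average crossover below is just an arbitrary illustrative strategy, not anyone's actual system:

```python
# Data-mining demo: scan 144 moving-average crossover parameter pairs on a
# PURE-NOISE price series, keep the best in-sample performer, then check it
# out-of-sample. Any in-sample "edge" is a statistical artifact by design.
import math
import random

random.seed(1)

returns = [random.gauss(0.0, 0.01) for _ in range(2000)]   # no edge exists
prices = [100.0]
for r in returns:
    prices.append(prices[-1] * (1 + r))

def sharpe(rs):
    m = sum(rs) / len(rs)
    sd = math.sqrt(sum((x - m) ** 2 for x in rs) / (len(rs) - 1))
    return m / sd * math.sqrt(252) if sd > 0 else 0.0

def strat_returns(fast, slow, start, end):
    """Long the next day when the fast MA is above the slow MA, else flat."""
    out = []
    for t in range(max(start, slow), end):
        f = sum(prices[t - fast + 1:t + 1]) / fast
        s = sum(prices[t - slow + 1:t + 1]) / slow
        out.append(returns[t] if f > s else 0.0)
    return out

SPLIT = 1500
grid = [(f, s) for f in range(2, 20) for s in range(20, 100, 10)]
best = max(grid, key=lambda p: sharpe(strat_returns(p[0], p[1], 0, SPLIT)))
ins = sharpe(strat_returns(best[0], best[1], 0, SPLIT))
oos = sharpe(strat_returns(best[0], best[1], SPLIT, len(returns)))
print("best (fast, slow):", best)
print("in-sample Sharpe:", round(ins, 2), "| out-of-sample Sharpe:", round(oos, 2))
```

Since the series contains no exploitable structure, whatever the scan selects is by construction a fluke, which is why a rational explanation for an inefficiency has to come before the parameter search, not after it.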

Often true, but not always true. Data mining is susceptible to curve-fitting, yet valid systems can be found through such searches.

Here's my 2 cents: I use a walk-forward approach with two parameters: number of in-sample days (IS) and number of out-of-sample days (OS). Any models/algorithms I build are calibrated on the IS data. I evaluate the system by looking at OS performance, and the ratio of OS/IS performance. Before I consider trading the system, I make sure that it performs well under a wide range of IS and OS parameter values (time periods). I usually get ideas for the models/algorithms from financial/econometric literature. So, suppose I have 2000 days of end-of-day data. I choose IS=1000 and OS=50. Walking my model forward requires (2000-1000)/50 = 20 model calibrations, and generates an out-of-sample time series of 20 x 50 = 1000 days. I evaluate this time series with a variety of measures (e.g. Sharpe ratio, Sortino ratio, VaR, skewness of the return distribution, alpha, beta, etc.). Then I repeat for different combinations of IS and OS values.
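The stepping logic above can be sketched as follows. The `calibrate()` and `apply_model()` functions are hypothetical placeholders for the poster's actual models; only the windowing arithmetic (IS=1000, OS=50 over 2000 days giving 20 calibrations and a 1000-day out-of-sample series) is taken from the text:

```python
# Minimal walk-forward sketch: calibrate on IS days, apply to the next OS
# days, step forward by OS, stitch the out-of-sample returns together.
# calibrate()/apply_model() are TOY placeholders, not a real model.
import math
import random

random.seed(7)

data = [random.gauss(0.0003, 0.01) for _ in range(2000)]  # 2000 days of returns

def calibrate(window):
    """Toy calibration: go long if the in-sample mean return is positive."""
    return 1 if sum(window) > 0 else 0

def apply_model(position, future):
    return [position * r for r in future]

def walk_forward(data, is_days, os_days):
    oos = []
    for start in range(0, len(data) - is_days, os_days):
        window = data[start:start + is_days]            # in-sample slice
        future = data[start + is_days:start + is_days + os_days]  # OS slice
        if not future:
            break
        oos.extend(apply_model(calibrate(window), future))
    return oos

oos = walk_forward(data, is_days=1000, os_days=50)
# (2000 - 1000) / 50 = 20 calibrations, each producing 50 OS days
print("out-of-sample days:", len(oos))

mean = sum(oos) / len(oos)
sd = math.sqrt(sum((r - mean) ** 2 for r in oos) / (len(oos) - 1))
print("out-of-sample annualized Sharpe:", round(mean / sd * math.sqrt(252), 2))
```

Repeating the final call over a grid of `is_days`/`os_days` combinations, as the post describes, then lets you check that the out-of-sample statistics are not an artifact of one particular window choice.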