One common story I've heard about trading systems is this: a system backtests successfully on 3 years of data, yet blows up in 3 days of live trading. What causes this? Thanks.

Implementation errors, like "peeking" into future data, for one. But assuming you've done the job right... one thing that's always on my mind (as in, I pray I'm not doing this) is curve-fitting. If you optimize your results on the exact sequence of the last 3 years, that doesn't at all suggest you have a strategy that is optimal in general. To avoid curve-fitting, build your strategy by optimizing on, say, half your historical data. Once you think you're done optimizing, run it on the remaining "unseen" half of the historical data. That's the only way to really know you have something that works.
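The holdout idea above can be sketched in a few lines. This is a minimal illustration, not a real backtester: `optimize` and `evaluate` are hypothetical callables you would supply, and the demo strategy/metric at the bottom are dummies.

```python
import numpy as np

def holdout_backtest(prices, optimize, evaluate):
    """Fit parameters on the first half of the history only, then score
    once on the untouched second half. `optimize` and `evaluate` are
    placeholders for your own calibration and performance routines."""
    mid = len(prices) // 2
    in_sample, out_of_sample = prices[:mid], prices[mid:]
    params = optimize(in_sample)            # fit on seen data only
    return evaluate(out_of_sample, params)  # single honest pass on unseen data

# Toy illustration with synthetic prices and dummy callables.
rng = np.random.default_rng(42)
prices = 100 + rng.normal(0, 1, 500).cumsum()
score = holdout_backtest(
    prices,
    optimize=lambda p: {"window": 20},                    # pretend calibration
    evaluate=lambda p, params: float(np.std(np.diff(p)))  # pretend metric
)
```

The key discipline is that the second half is touched exactly once; if you go back and re-tune after seeing the out-of-sample score, it is no longer out of sample.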

I backtested my system, and every time in the past year that my RSI went below 29.431 and my 23.6-bar moving average turned higher, if I had bought I would have made money. I am ready to trade the S&P 500.

You have to get to the right side of random with some theory before you start backtesting. It is entirely possible, and it seems to happen, that people backtest what is really randomness but don't believe it can be randomness. They test it in-sample and out, and it looks fine (pure coincidence that both the in-sample and out-of-sample data produce good results)... then they run it live and it proceeds to lose money or underperform. If the only measure of "good results" is making money, then a random strategy that makes money in-sample has a one-in-two chance of also making money out of sample... one in four of making money both out of sample and in a real market... and if it makes money in a real market, at some point it's going to give it all back.
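The one-in-two claim is easy to check with a quick Monte Carlo sketch: for strategies that are pure coin flips, making money in-sample carries no information, so about half of the in-sample "winners" win again out of sample. The simulation below is an illustration under that pure-noise assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
n_strategies, n_days = 10_000, 250

# Each "strategy" earns independent noise in both periods.
in_sample = rng.normal(0, 1, (n_strategies, n_days)).sum(axis=1)
out_sample = rng.normal(0, 1, (n_strategies, n_days)).sum(axis=1)

winners = in_sample > 0                           # "the backtest looks good"
still_winning = float((out_sample[winners] > 0).mean())
# still_winning lands near 0.5: passing the in-sample test was no evidence at all.
```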

IMO, the most robust way to backtest a systematic strategy is the following; a simple example to clarify comes after.

1) Use a historical data set long enough to cover multiple market regimes, including bullish, bearish and ranging.
2) Select a "Training Period (TP)": a rolling subset of the data over which the market exhibited regime changes, used to optimize your system parameters.
3) Select an "Out-of-Sample Period (OOSP)": also a rolling subset of data, used to backtest with the optimized parameters from step 2.
4) To find the parameters, start with the first TP and optimize. The objective function to maximize can be cumulative return, the Sharpe ratio, etc. I personally use the product of the two.
5) To backtest, use the optimized parameters from step 4 to trade the OOSP, which should start right after the last data point used in the TP. Log the trading results in the OOSP.
6) To complete the backtest, repeat steps 4 and 5 after rolling the TP forward by one OOSP.

Here is a simple example. Assume you have ~10 years of daily historical data starting in Jan 2000, and that the TP and OOSP are 3 months and 1 month respectively. First take the initial 3 months of data, i.e. Jan-Mar 2000, and optimize/calibrate the parameters to maximize the performance of your model. Then use these parameters to trade the month of April (without any further calibration!!!). Log your trading performance for April. Next, roll your TP forward by one OOSP, i.e. use Feb-Apr as the new TP to optimize/calibrate parameters, then use the new parameters to trade and log the month of May. This procedure requires you to optimize/calibrate 12 times per year of data while trading only out-of-sample, without calibration. The final backtest is the log of all your out-of-sample trades.

By following this, you avoid curve fitting. One final check is to repeat the entire procedure with a different start date, for example choosing your first TP from Jan 15 to Apr 15 instead. This is essentially like rolling your historical data set forward or backward, to make sure your trading strategy is robust irrespective of when you start trading. I hope this helps.
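The rolling procedure above can be sketched as a short loop. This is an outline, not a full backtester: `optimize` and `trade` are placeholders for your own calibration and execution logic, and the window lengths are expressed in trading days (~63 per 3-month TP, ~21 per 1-month OOSP).

```python
import numpy as np

def walk_forward(data, optimize, trade, tp=63, oosp=21):
    """Roll a training window (TP) through the data; after each
    calibration, trade the next out-of-sample window (OOSP) with the
    frozen parameters and log the result."""
    log = []
    start = 0
    while start + tp + oosp <= len(data):
        train = data[start:start + tp]
        test = data[start + tp:start + tp + oosp]
        params = optimize(train)         # calibrate on TP only
        log.append(trade(test, params))  # trade OOSP, no recalibration
        start += oosp                    # roll TP forward by one OOSP
    return log

# Toy illustration with synthetic prices and dummy callables.
rng = np.random.default_rng(1)
prices = 100 + rng.normal(0, 1, 500).cumsum()
results = walk_forward(
    prices,
    optimize=lambda train: {"window": 10},             # pretend calibration
    trade=lambda test, params: float(test[-1] - test[0]),  # pretend P&L
)
```

Note that each out-of-sample window is traded exactly once with frozen parameters; the concatenated `log` is the only performance record that counts.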

That and $2.00 still won't get you a good latte. Backtesting is a way of automating things that don't work, even though they look like they do. You run a bunch of simulations, then concentrate on the cases that look profitable. But it's a fluke: statistics tells you that in any random batch of 1,000 tests, 10 of them will land in the top 1% by construction. And being in that top 1% is absolutely meaningless...
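The selection effect being described is easy to demonstrate: run 1,000 purely random "strategies", pick the winners after the fact, and they look impressive even though every one of them is noise. A minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
# 1,000 coin-flip strategies, each with a year (252 days) of random daily P&L.
daily_pnl = rng.normal(0, 1, (1000, 252))

annual = daily_pnl.sum(axis=1)
best = float(annual.max())
top_1pct = np.sort(annual)[-10:]  # the 10 "top 1%" survivors

# The population averages roughly zero, yet `best` and every member of
# `top_1pct` show solidly positive P&L -- a pure artifact of picking
# winners after the fact.
```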

Clearly, logic and reason are not your strong side. If that were true, then every possible trading system would produce no profits in the long run, which is definitely not supported by the evidence or the facts. If you think backtesting is always flawed, then you are automatically assuming that no system can be profitable in the long run, since every system can obviously be backtested. You are probably unable to come up with any profitable rule-based method yourself - that's why you promote such views. I feel sorry for you...

On data-mining bias: Aronson ("Evidence-Based Technical Analysis", or some similar title). Prado, on system design and optimization - what a gem. And then maybe something on behavioral finance and psychology - was it Kahneman? Aronson will show you why your great method is full of statistical flaws, Kahneman will explain why, and Prado will show you how to do it right.