dangers of curve fitting?

Discussion in 'Trading' started by jedwards, Dec 9, 2009.

  1. I've been reading up on trading systems and optimization, and the one thing I've been confused about was the apparent dangers of curve fitting.

    I understand the concept: you can over-optimize a system to the point where it works great across an entire range of historical data but then fails in real life, because the parameters were tuned to the specific data you ran it against.

    I'm trying to gain a bit more insight into why this is a problem though. At the heart of it, all optimization is some type of curve-fitting, so isn't that risk always inherent?

    I'm guessing the worst-case scenario for curve fitting is where you take all your backtesting data and optimize the system against it to produce the best results, leaving you with no further data to test on. The problem with testing against all your data is that you could have a couple of months of tremendous returns and really bad returns the rest of the time, but the bad performance would be hidden by the fact that you are optimizing on the end result, i.e. the total amount you've made, instead of breaking up the data or optimizing on smaller timeframes.

    If you optimize it against a subset of data, and then test it against the rest of your backtest data, is this what is known as walking forward?

    The way I test my systems is to run them against all my backtest data, but break down the results on a per-month or per-week basis, then throw out the particularly good months and evaluate the performance of what remains. Is what I'm doing in essence the same as "walking forward"? Or are there other inherent problems with my methodology?
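    For what it's worth, the in-sample/out-of-sample split being asked about can be sketched in a few lines of Python. Everything here is hypothetical: backtest() is a toy "long above the moving average" rule on a made-up price series, standing in for whatever the real system is.

```python
import math

def backtest(prices, n):
    """Toy P&L: go long for one bar whenever price closes above
    its n-bar simple moving average. Stands in for a real system."""
    pnl = 0.0
    for i in range(n, len(prices) - 1):
        if prices[i] > sum(prices[i - n:i]) / n:
            pnl += prices[i + 1] - prices[i]
    return pnl

# Made-up price series with some wiggle and drift.
prices = [100 + 10 * math.sin(i / 7) + 0.05 * i for i in range(300)]

split = int(len(prices) * 0.7)               # 70% in-sample
in_sample, out_sample = prices[:split], prices[split:]

# 1) optimize the lookback ONLY on the in-sample portion...
best_n = max(range(5, 40), key=lambda n: backtest(in_sample, n))

# 2) ...then judge that single choice on data it never saw.
print("chosen lookback:", best_n)
print("out-of-sample P&L:", round(backtest(out_sample, best_n), 2))
```

    If the out-of-sample number collapses relative to the in-sample one, that gap is exactly the curve-fitting the thread is talking about.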
     
  2. The simple answer is that all results for simple (read: testable) systems are random. So no matter what method of optimization you use, you are optimizing randomness. The only thing not random about it is your loss of money trading the optimized system. The corollary to my theory is that no profitable system CAN be simple, or traders would get rich, market makers would lose their grubstakes, and markets would close down. We can't have that. Unthinkable. Like there being no Vegas.
     
  3. I disagree that the markets are completely random. However, our different viewpoints are what makes the market a tradeable market!
     
  4. ssb11

    to best protect vs. curve fitting, one must use some walk-forward testing: create the system on one set of data; test it on another.

    one must also test that small changes in input do not create large changes in performance. if a 20-period XYZ is +$1000, then periods 13-19 should each show a profit rising up toward 20, and 21-27 should likewise show profits falling off as you move away from 20. one should not pick the parameter that produced the highest backtested profit (that is luck), but rather the center of the largest "plateau". example:

    14=+329
    15=+489
    16=+592
    17=+528
    18=+650
    19=+450
    20=+250

    i would argue that 17 should be selected: it sits at the center of the plateau, even though 18 shows the single highest profit.
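    ssb11's plateau rule can be sketched numerically: instead of taking the raw argmax, average each parameter's profit with its neighbors (the 3-point smoothing window here is my own assumption) and take the peak of the smoothed curve. Using the numbers from this post:

```python
# Profit by parameter value, copied from the example above.
profits = {14: 329, 15: 489, 16: 592, 17: 528, 18: 650, 19: 450, 20: 250}

def smoothed(p, w=1):
    """Average profit over the window [p-w, p+w] (3-point smoothing)."""
    window = [profits[q] for q in range(p - w, p + w + 1) if q in profits]
    return sum(window) / len(window)

# The raw argmax rewards a possibly lucky spike...
peak = max(profits, key=profits.get)
print(peak)    # 18 (+650, the single best number)

# ...while the smoothed argmax lands on the plateau's center.
robust = max(profits, key=smoothed)
print(robust)  # 17
```

    The smoothed values peak at 17 (about 590 averaged over 16-18), which matches the post's reasoning: the lucky spike at 18 is pulled down by its weak neighbor at 19.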

    remember, you are interested in future, not past, profits. robustness, a limited number of parameters, and broad strokes are indicative of a future that will look more like the past. the more "if, then" statements you use, the less likely the future will look like the past. i would not use anything without at least 500 instances.
     
  5. I did not say that markets are random. I said that no simple systems work. And that testing simple systems yields random results.
     
  6. Some would argue I haven't been profitable long enough to answer this. Ok, so take my opinion with a grain of salt. I don't know if this will directly answer your question, but I think it is the info you need:

    Markets change. So if you curve fit, I think it makes sense to fit the curve back to when the current market first changed...or perhaps other similar markets at earlier periods. Otherwise, you're not comparing apples to apples. But the danger of this is that if you fit your curve nicely, at some point, the market will change again, and you could (and likely will) suffer a loss. It has happened to me.

    So I think the trick is to do the curve fitting for the current market, but then play to minimize loss...play as if it could stop working at any time.

    As a counter-point, some will argue that a method must work in all time frames to be a good method. There is merit in that too: then you can never lose. But I think it depends on how optimized you can make your system, and how big a loss you'll take if things change. I believe the thing to do is to have different models for different types of markets to optimize gain, with the realization that at some point you'll have the entirely wrong type of model and will get burned a little bit. At least that is what seems to be working for me.

    SM
     
    Thanks everyone. I guess based on what was said, my methodology isn't horribly incorrect. The key is to not get blinded by a single factor or a single set of datapoints, and to understand how the system behaves, as opposed to just tweaking the values of a black box.

    I read an article about how you could optimize against a certain set of data, but then when testing against out-of-sample data, the results are terrible. That, I guess, echoes my earlier thoughts.

    ssb11, that's an interesting point you bring up, that results from similar inputs should show some sort of gradual movement toward a key value. If similar inputs produce wildly varying results, I guess that points to an optimization that only works on the noise in the data, as opposed to any inherent feature of the behavior. That makes sense and is definitely something I will keep in mind.
     
  8. Interesting thought there Deco; I guess it depends on what you refer to as 'simple'. Personally, that's exactly the way I like to keep my systems. Simple..... The simpler the better, actually....
     
  9. I'm sure some will disagree, but I am of the mindset that walk forward testing is a far more reliable way to test/optimize your system than backtesting:
    • It requires your system to adapt just as it would when it is trading. Markets aren't static, so your system can't be either.
    • Fills, spreads, etc. are as accurate as they can be. Yes, your backtest may attempt to take this all into account, but nothing models it more realistically than actual market conditions. Besides, many brokers offer paper accounts (not just demos), so you can see how things work as they would if it were live.
    • "Real" backtesting is complicated: you need in-sample and out-of-sample data, but it needs to be from similar market conditions. You ran your system with your parameters set to "X" and now you want to test them with setting "Y". You can re-use your in-sample data, but not your out-of-sample data, so you've got to find equivalent new stuff. Hmm, 98-00 was a bull market...was it the same as 09? Etc, etc.
    Nothing says you can't run 20 systems in parallel to see which one proves the strongest.
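    The rolling version of this idea can be sketched as below. The rule and the price series are toys of my own invention; the point is only the loop structure: re-optimize on a trailing window, then apply the chosen parameter to the next slice of data the optimizer never saw.

```python
import math

def backtest(prices, n):
    """Toy P&L: long one bar when price is above its n-bar average."""
    pnl = 0.0
    for i in range(n, len(prices) - 1):
        if prices[i] > sum(prices[i - n:i]) / n:
            pnl += prices[i + 1] - prices[i]
    return pnl

# Made-up price series standing in for real market data.
prices = [100 + 8 * math.sin(i / 9) + 0.03 * i for i in range(600)]

TRAIN, STEP = 200, 50      # fit on 200 bars, then trade the next 50
total = 0.0
for start in range(0, len(prices) - TRAIN - STEP + 1, STEP):
    window = prices[start:start + TRAIN]
    # re-optimize ONLY on the training window...
    n = max(range(5, 40), key=lambda k: backtest(window, k))
    # ...then trade that parameter on the next, unseen slice.
    total += backtest(prices[start + TRAIN:start + TRAIN + STEP], n)

print("walk-forward P&L:", round(total, 2))
```

    The summed out-of-sample P&L is the honest number: every trade in it was taken with a parameter chosen before that data was seen, which is what distinguishes this from fitting one curve to the whole history.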

    My systems tend to be extremely simple things that I determine from looking at a chart for a bit and then test them in real-time with either a paper account or very small numbers of shares. I consider this to be "R&D" costs and just an expense of doing business.
    My $0.02