Simulation over “unseen” data

Discussion in 'Data Sets and Feeds' started by abattia, Jan 30, 2012.

  1. After setting strategy parameters on “training” data, a frequently repeated recommendation is to then run the strategy, with those same parameters, over “unseen” data.

    When doing so, what’s the result one is looking for? What outcome provides the best possible confirmation of the validity of the earlier run’s parameters?
  2. jcl


    The best possible confirmation is a repeated cycle of parameter optimization and performance testing, each cycle with different price data. This is called a walk-forward test.

    If it gives positive results over several cycles, you can trade the strategy with some confidence for a time period that corresponds to the test period of one cycle. After that time, the strategy's parameters must be optimized again; parameters are usually valid only for a limited time.

    You can find a tutorial about this on the Zorro website.
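    The walk-forward cycle described above can be sketched as follows. The `optimize`/`backtest` callables and the toy momentum rule are placeholders for illustration, not Zorro's actual API:

```python
import numpy as np

def walk_forward(prices, train_len, test_len, optimize, backtest):
    """Rolling walk-forward: optimize on each training window, then
    evaluate with those parameters on the unseen window that follows."""
    oos_results = []
    start = 0
    while start + train_len + test_len <= len(prices):
        train = prices[start:start + train_len]
        test = prices[start + train_len:start + train_len + test_len]
        params = optimize(train)                    # in-sample optimization
        oos_results.append(backtest(test, params))  # out-of-sample result
        start += test_len                           # roll forward by one test span
    return oos_results

def momentum_return(p, k):
    """Toy 'strategy': hold the sign of the trailing k-bar price change."""
    r = np.diff(p)
    pos = np.sign(p[k:] - p[:-k])[:-1]  # signal known one bar before the return
    return float(np.sum(pos * r[k:]))

def optimize(train):
    # grid-search the lookback in sample (the grid is arbitrary)
    return max((1, 5, 20), key=lambda k: momentum_return(train, k))

rng = np.random.default_rng(0)
prices = np.cumsum(rng.normal(0, 1, 1000)) + 100.0
oos = walk_forward(prices, train_len=300, test_len=100,
                   optimize=optimize, backtest=momentum_return)
print(len(oos))  # 7 walk-forward cycles, each with an out-of-sample result
```

    The point of the structure is that every number in `oos` comes from data the optimizer never saw, which is exactly the confirmation the walk-forward test is meant to provide.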
  3. Thanks, but this sounds more like a description of the process than an answer to the question of what counts as confirmation from the out-of-sample test.

    Perhaps you meant the answer to be in your reference to "positive results"? What do you mean by "positive results"?
  4. jcl


    If you're asking at what results you should put real money into a strategy: average annual return > 100%, annual Sharpe ratio > 1.5, and Ulcer Index < 10%. All of this from unseen data, of course.
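    For reference, the three statistics above could be computed from a daily equity curve like this. The 252-day trading year and expressing the Ulcer Index as a fraction (so that 10% is 0.10) are assumptions; the formulas themselves are the standard ones:

```python
import numpy as np

def evaluate(equity, periods_per_year=252):
    """Annualized return, Sharpe ratio, and Ulcer Index from an equity curve."""
    equity = np.asarray(equity, dtype=float)
    r = np.diff(equity) / equity[:-1]          # simple per-period returns
    years = len(r) / periods_per_year
    annual_return = (equity[-1] / equity[0]) ** (1 / years) - 1
    sharpe = np.sqrt(periods_per_year) * r.mean() / r.std(ddof=1)
    peak = np.maximum.accumulate(equity)
    drawdown = (equity - peak) / peak          # fraction below the running peak
    ulcer = np.sqrt(np.mean(drawdown ** 2))    # root-mean-square drawdown
    return annual_return, sharpe, ulcer

def passes(equity):
    """jcl's thresholds: return > 100%/yr, Sharpe > 1.5, Ulcer Index < 10%."""
    ar, sr, ui = evaluate(equity)
    return ar > 1.0 and sr > 1.5 and ui < 0.10
```

    Applied to an out-of-sample equity curve, `passes` is a direct yes/no rendering of the acceptance criteria quoted above.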
  5. No, that's not what I meant... but the fault is mine for not being clear!

    Let me try again ...

    From a run over training data, let's suppose you identify a possible set of strategy parameters that perform satisfactorily against whatever are your chosen "hurdle" levels for your chosen performance statistics.

    Now you run the "out-of-sample" test...
    ... what are you looking for?

    The same values?

    Nearly the same values?

    Values within a confidence interval given by something like a Monte Carlo simulation?

    Or, are you using the out-of-sample data for another round of optimization ... which I think is what you're suggesting above?
  6. jcl


    I now understand what you mean, but more experienced strategy developers can probably answer this better.

    In my limited experience, the results from unseen data are always worse than from in-sample data, and often even outside the range of a Monte Carlo simulation run on in-sample data. When your strategy has more than one or two parameters to optimize, the results from an in-sample simulation are so optimistic that they are almost worthless. But even if you don't optimize parameters at all, the mere fact of selecting a strategy and selecting an asset makes the price data you used for that selection unsuited for testing.
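    One way to make the Monte Carlo comparison mentioned above concrete: bootstrap the in-sample trade returns into a distribution of simulated total returns, then check whether the out-of-sample total falls inside a chosen percentile band. The 5th/95th percentiles here are an arbitrary choice, not something prescribed in the thread:

```python
import numpy as np

def mc_band(in_sample_trades, n_sims=10_000, lo=5, hi=95, seed=1):
    """Bootstrap in-sample trade returns into a Monte Carlo percentile
    band for total return (a rough plausibility range, not a guarantee)."""
    rng = np.random.default_rng(seed)
    trades = np.asarray(in_sample_trades, dtype=float)
    sims = rng.choice(trades, size=(n_sims, len(trades)), replace=True)
    totals = sims.sum(axis=1)              # one simulated total per resample
    return np.percentile(totals, [lo, hi])

def within_band(oos_total, band):
    """Does the out-of-sample total fall inside the in-sample MC band?"""
    return band[0] <= oos_total <= band[1]
```

    An out-of-sample result landing below the lower percentile is exactly the situation described above: the in-sample simulation was too optimistic even after allowing for resampling variation.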