Forward Optimization: How to optimize multiple parameters

Discussion in 'Trading' started by SebR, Jan 25, 2022.

  1. SebR

    SebR

    Hi RedDuke!

    Thanks for the recommendation; that is in fact the content I have studied the most, and I agree it is the best YouTube content on trading I have ever found. But I did not find an answer to this particular question: although Martyn also recommends keeping the number of optimization parameters low in each optimization and stacking the optimizations instead, he does not really explain how to do that within a walk-forward optimization. Or did you find something that I missed?
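
    To make clear what I mean by stacking, here is a rough sketch: optimize one small group of parameters at a time on the in-sample window while holding the others fixed. The parameter names and the backtest_score function are just placeholders, not anyone's actual system.

    Code:
    from itertools import product

    def backtest_score(params, data):
        # Stand-in for a real backtest: replace with your own fitness metric
        # (e.g. net profit or Sharpe ratio on the in-sample window).
        return -abs(params["ma_fast"] - 15) - abs(params["stop_atr"] - 1.5)

    def optimize_group(base_params, group_grid, data):
        """Optimize one small group of parameters, keeping the rest fixed."""
        best_params, best_score = None, float("-inf")
        names = list(group_grid)
        for values in product(*group_grid.values()):
            candidate = {**base_params, **dict(zip(names, values))}
            score = backtest_score(candidate, data)
            if score > best_score:
                best_params, best_score = candidate, score
        return best_params

    # Stack the optimizations: each stage searches only a few parameters.
    params = {"ma_fast": 10, "ma_slow": 50, "stop_atr": 2.0, "target_atr": 4.0}
    stages = [
        {"ma_fast": range(5, 30, 5), "ma_slow": range(40, 120, 20)},
        {"stop_atr": [1.0, 1.5, 2.0, 2.5], "target_atr": [2.0, 3.0, 4.0, 6.0]},
    ]
    in_sample_data = None  # the in-sample window of the current walk-forward step
    for grid in stages:
        params = optimize_group(params, grid, in_sample_data)
    print(params)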

    Thanks,

    Seb
     
    #11     Jan 25, 2022
    Steven+49 likes this.
  2. SebR

    SebR

    Thanks GaryBtrader!

    Well, actually that is what I am starting from, and it is exactly what the walk-forward optimization is supposed to improve on: for each window, the in-sample part is what I called BWD and the out-of-sample part is what I called FWD.
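
    Just so we mean the same thing, a minimal sketch of how the windows could be generated (the window lengths are arbitrary placeholders, not a recommendation):

    Code:
    def walk_forward_windows(n_bars, in_sample_len, out_of_sample_len):
        """Yield (in-sample, out-of-sample) index ranges for each walk-forward step."""
        start = 0
        while start + in_sample_len + out_of_sample_len <= n_bars:
            is_range = (start, start + in_sample_len)                   # BWD: optimize here
            oos_range = (is_range[1], is_range[1] + out_of_sample_len)  # FWD: validate here
            yield is_range, oos_range
            start += out_of_sample_len                                  # roll the window forward

    # Example: ~5 years of daily bars, 12 months in-sample, 3 months out-of-sample
    for is_rng, oos_rng in walk_forward_windows(1260, 252, 63):
        print("optimize on", is_rng, "-> test on", oos_rng)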

    BR

    Seb
     
    #12     Jan 25, 2022
  3. It helps with understanding and making suggestions if we share common terminology: training data, validation data, test data.
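
     In those terms, a bare-bones chronological split might look like this (the 60/20/20 proportions are only an illustration):

     Code:
     def chronological_split(bars, train_frac=0.6, val_frac=0.2):
         """Split time-ordered data into training, validation, and test sets."""
         n = len(bars)
         train_end = int(n * train_frac)
         val_end = int(n * (train_frac + val_frac))
         return bars[:train_end], bars[train_end:val_end], bars[val_end:]

     train, validation, test = chronological_split(list(range(1000)))
     print(len(train), len(validation), len(test))  # 600 200 200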

     
    #13     Jan 25, 2022
  4. Q.E.D.

    Q.E.D.

    I know this will not be well received, but testing on "out-of-sample data" is a waste of time. It will neither identify, confirm, nor disprove that a particular system has been data-fitted. It sounds plausible that it might, but there is no evidence whatsoever. Why would arbitrarily declaring some data range make it more important than other data? The data one trader reserves as out-of-sample, starting say with January, is simply part of the in-sample history for a trader who does not start testing until a few months later.

    The plausible-sounding argument is that testing a system on "new" data is somehow unique. However, if you have already tested on, say, 5 years of daily / intraday data, the one month of out-of-sample data is no more unique than any other period. More significant is that those who use out-of-sample data don't just stop there: they go back, reconfigure their system, and then repeat the process. So it can simply be data-fitting in two segments rather than one.

    The best out-of-sample data is real-time trading. And the truism is that your largest drawdown period is always ahead of you. Regards,
     
    #14     Jan 25, 2022
  5. ph1l

    ph1l

    This contradicts the post above.
    Using out-of-sample validation data is useful because when the system fails on that data, it is unlikely to be successful in the future. When the system succeeds on the out-of-sample data, it at least has a chance of working in the future.
     
    #15     Jan 25, 2022
  6. Q.E.D.

    Q.E.D.

    Why? What makes 1 or 2 months of out-of-sample data so important? The system presumably worked on 5 years of data, but not on 2 months? Systems are typically profitable on about 1/3 of their trades, which means any 1-2 month period is more likely to be losing than profitable, and it makes no difference whether that period is in- or out-of-sample. Sorry, this is my last comment on the subject; I just wanted to provide some contra to the out-of-sample testing methodology that is popular with most newbies. Regards,
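
    To put rough numbers on that point, a quick simulation with purely made-up inputs (1/3 win rate, a thin 2.2:1 payoff, about 20 trades a month) shows that even with a real edge, a large share of single months end up in the red:

    Code:
    import random

    random.seed(1)
    WIN_RATE = 1 / 3          # assumed hit rate
    WIN, LOSS = 2.2, -1.0     # assumed payoff per trade (positive expectancy overall)
    TRADES_PER_MONTH = 20

    n_sims = 100_000
    losing_months = sum(
        sum(WIN if random.random() < WIN_RATE else LOSS for _ in range(TRADES_PER_MONTH)) < 0
        for _ in range(n_sims)
    )
    print(f"share of losing one-month windows: {losing_months / n_sims:.1%}")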
     
    #16     Jan 25, 2022
  7. It is not a method chosen by some or most newbies but by the world's leading data scientists and the wider scientific community. You may want to read up on the concept to understand it better. There is plenty of content on the internet that explains it.

     
    Last edited: Jan 25, 2022
    #17     Jan 25, 2022
  8. userque

    userque

    None of the above.

    I recommend boosting.

    https://en.wikipedia.org/wiki/Boosting_(machine_learning)
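
    A minimal illustration of the idea, assuming scikit-learn and toy made-up features (not a trading system): boosting fits many weak learners in sequence, each one concentrating on what the ensemble so far gets wrong.

    Code:
    import numpy as np
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.model_selection import train_test_split

    # Toy data: random "features" and a label they weakly predict.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(2000, 5))
    y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=1.0, size=2000) > 0).astype(int)

    # Hold out a validation set to judge the boosted ensemble.
    X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.3, random_state=0)
    model = GradientBoostingClassifier(n_estimators=200, max_depth=2, learning_rate=0.05)
    model.fit(X_train, y_train)
    print("validation accuracy:", model.score(X_val, y_val))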
     
    #18     Jan 25, 2022
  9. ph1l

    ph1l

    What you write is true for a small out-of-sample data set. But if you make the validation set larger, say 30% of all the data, failure of a trained system on that larger validation set means the system probably won't work in the future.
     
    #19     Jan 25, 2022
    Clark Bruno likes this.
  10. Code all your parameters into one strategy, then use the optimized parameter set that gives the best returns.
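
      A bare-bones illustration of that one-pass approach, with a placeholder backtest_returns function and made-up parameter names (all parameters searched jointly rather than stacked):

      Code:
      from itertools import product

      def backtest_returns(ma_fast, ma_slow, stop_atr):
          # Placeholder fitness: replace with a real backtest of the combined strategy.
          return -((ma_fast - 12) ** 2 + (ma_slow - 60) ** 2 + (stop_atr - 2) ** 2)

      grid = {
          "ma_fast": range(5, 30, 5),
          "ma_slow": range(40, 120, 20),
          "stop_atr": [1.0, 1.5, 2.0, 2.5],
      }
      best = max(product(*grid.values()), key=lambda combo: backtest_returns(*combo))
      print(dict(zip(grid, best)))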
     
    #20     Jan 26, 2022