Is backtesting necessary before Walk Forward Analysis?

Discussion in 'Strategy Building' started by kojinakata, Mar 25, 2015.

Do you backtest before Walk Forward Analysis?

  1. YES

    73.3%
  2. NO

    26.7%
  1. I am always skeptical of what I read. That is why I asked you for verification of your educational background and hedge fund career, but all you did was refer to some threads in this forum. You said to verify everything, even what my "comrades" write in this thread, so here I am, trying to verify whether you lied. You wanted skepticism; here you go.

    Coming to the topic of parametrization and optimization: parametrization is deciding which variables to use; optimization is finding the values of those variables that maximize some result (most traders use, for example, Net Profit) or minimize another (for example, Maximum Drawdown).
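
    As a concrete illustration, here is a minimal sketch (the moving-average variables, the grid values, and the toy run_backtest stand-in are all assumptions for illustration, not anyone's actual system):

    Code:
    import itertools

    # Parametrization: deciding WHICH variables the strategy exposes.
    # Here a hypothetical strategy exposes two: a fast and a slow MA length.
    PARAM_GRID = {"fast_ma": [5, 10, 20], "slow_ma": [50, 100, 200]}

    def run_backtest(fast_ma, slow_ma):
        # Toy stand-in for a real backtest engine; returns
        # (net_profit, max_drawdown) so the sketch runs end to end.
        net_profit = -(fast_ma - 10) ** 2 - (slow_ma - 100) ** 2
        max_drawdown = abs(net_profit) / 10.0
        return net_profit, max_drawdown

    # Optimization: finding the VALUES of those variables that maximize
    # one result (e.g. Net Profit) or minimize another (e.g. Max Drawdown).
    best = max(
        itertools.product(PARAM_GRID["fast_ma"], PARAM_GRID["slow_ma"]),
        key=lambda p: run_backtest(*p)[0],  # maximize net profit
    )
    print("best (fast_ma, slow_ma):", best)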

    And no, I am not focusing on the way you talk; I am focusing on the content and information you provide. Until you prove what you wrote about your background, I am going to assume that everything you have said or will say is a lie.
     
    #51     Mar 27, 2015
  2. I never promised I would provide verification of my educational background. Take it or leave it; I honestly take no issue whatsoever with whether you believe me or not. All I asked was that you be critical about everything you hear or read. And if you remain critical after my posting several Bloomberg screenshots in different threads, comments (with proof) about my exotic-derivatives background and knowledge, and my designing and programming a complete systematized trading platform, among the many other technical discussions I participated in, then that is your choice.

    From the definition I posted above it should be apparent that parameterization is both the choice of variables and the values attached to them. Optimization aims only at deriving values for the chosen variables. If that is any consolation to you, then I am happy to oblige.

    lol, so unless certain system vendors in this thread send you a complete set of educational records and work background, and pass your programming and project-design tests, everything they say and claim is a lie? I would certainly not believe so... and you are saying you are not selective in your judgement? I hope you apply a different logic to developing algorithmic strategies.

    P.S.: But I will make you an offer: read through the technical threads I participated in, and if you are still in doubt, PM me and I will be happy to send you a copy of my advanced degree from CMU.


     
    #52     Mar 27, 2015
  3. None of the other posters here backed up their claims with their background; that was why I asked for it. And for some reason I cannot view your posts and threads, because "This member limits who may view their full profile." I will take that offer after reading your posts.
     
    #53     Mar 27, 2015
  4. So, someone's claims become more credible when they are posted in isolation than someone else's claims that come with an explanation of where and how they originated (via specific education and/or professional work experience)? I am afraid I do not follow you here...

    You can do a full post search by member.

    But what interests me much more is why you still believe that any of the terminology used for out-of-sample testing might have a different implication for strategy testing.

     
    #54     Mar 27, 2015
  5. No, they did not post in isolation; they provided reasons, logic, or some other kind of proof besides their background.

    Ok thanks, I will do that.

    Because there are many ways to do out-of-sample testing. WFA (anchored or rolling), optimizing on historical data and then testing live, and optimizing on part of the historical data while testing on the remaining out-of-sample historical data are all different ways to do out-of-sample testing. Their logic is different, and their forecasting ability is different. Even though at the core they are all out-of-sample testing, they have different characteristics, so calling them all by the same name is wrong. It is like calling every sedan model just "Sedan", or every point guard just "Point Guard".
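
    To make the WFA part concrete, here is a minimal sketch of how anchored and rolling window schedules differ (the window lengths and the index-based slicing are illustrative assumptions):

    Code:
    def walk_forward_windows(n_bars, in_sample=500, out_sample=100, anchored=False):
        # Yield (in-sample, out-of-sample) index slices.
        # Rolling: the in-sample window slides forward with a fixed length.
        # Anchored: the in-sample window always starts at bar 0 and grows.
        start = 0
        while start + in_sample + out_sample <= n_bars:
            is_start = 0 if anchored else start
            is_end = start + in_sample           # in-sample ends here
            oos_end = is_end + out_sample        # out-of-sample ends here
            yield slice(is_start, is_end), slice(is_end, oos_end)
            start += out_sample                  # step forward by one OOS block

    # Example: 1,000 bars of history, anchored schedule
    for is_idx, oos_idx in walk_forward_windows(1000, anchored=True):
        print(f"optimize on bars {is_idx.start}-{is_idx.stop}, "
              f"test on bars {oos_idx.start}-{oos_idx.stop}")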
     
    #55     Mar 27, 2015
  6. The strategies could not care less whether you feed them a live incoming data feed at a rate of x ticks per second or historical out-of-sample time-series data at a rate of x million ticks per second. If you do not comprehend that, then we do not need to engage in further discussion.

     
    #56     Mar 27, 2015
  7. You are right, but I am not comparing how strategies perform on historical versus live data. I am trying to compare which data windows are best to optimize on and to test on.
     
    #57     Mar 27, 2015
  8. ...well, then maybe you should start a different thread, because that was surely not your original question.

     
    #58     Mar 27, 2015
  9. No, it was my original question, because backtesting and WFA use different time intervals of the historical data. Backtesting uses the whole sample; WFA separates it into smaller pieces and does rolling out-of-sample testing. So the question still is: is optimizing your strategy on the whole historical data set (what I meant by backtesting, clarified in an earlier post) necessary before WFA (clarified in post #29)?
     
    Last edited: Mar 27, 2015
    #59     Mar 27, 2015
  10. You still seem to be confused about the basics of strategy testing: if none of your variables/parameters depend on historical data (meaning you don't fit your strategy properties to historical data), then you can use the full set of historical data to test the performance of your strategy. You could also wait for each incoming tick in real time, simulate executions, and sit around for a year until you have gathered a sufficient amount of data. Both come down to exactly the same thing.

    Now, if you fit your strategy parameters to historical data, then you should not use those same data sets to evaluate the performance of your strategy. You should instead use out-of-sample historical data sets. Whether those data technically lie in the past, or whether you are collecting future data over which you run your strategy to test its performance, again comes down to exactly one and the same procedure.
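
    A minimal sketch of that procedure (the 70/30 split and the toy scoring rule are illustrative assumptions, not a recommendation):

    Code:
    def in_sample_out_of_sample_test(prices, split=0.7):
        # Fit on the in-sample slice, score ONLY on the out-of-sample slice.
        cut = int(len(prices) * split)
        in_sample, out_of_sample = prices[:cut], prices[cut:]

        def score(data, lookback):
            # Toy stand-in for net profit from a real simulated strategy.
            return sum(data[i] - data[i - lookback]
                       for i in range(lookback, len(data)))

        # "Optimization": the fit touches ONLY in-sample data.
        best_lookback = max(range(1, 21), key=lambda lb: score(in_sample, lb))

        # Performance is then measured ONLY on data the fit never saw.
        return best_lookback, score(out_of_sample, best_lookback)

    # Example on a toy price series
    lb, oos_score = in_sample_out_of_sample_test([float(i % 7) for i in range(300)])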

    How you slice and dice (split up in-sample and out-of-sample data) is an entirely different topic and has nothing to do with "walk-forward", robust vs. non-robust, and all the other catchphrases.

    Does that make it clearer now?


     
    #60     Mar 27, 2015