Here is my LinkedIn profile, volpunter; please send us a link to yours. If you combine all the out-of-sample windows and analyze them as a composite, and the results are robust, then you have a system you can trade. In addition, when using walk forward analysis, yes, you continue to repeat the process every N bars as you trade live, where N is the out-of-sample period used to create the model. You keep walking it forward; my program does that for you and generates trades for the next bar. When you are just trying to impress people you argue over nuance, which is what volpunter is doing. If people were saying they optimize on all the data and then trade it, that is not even out-of-sample testing and is not scientifically sound. Arguing over whether walk forward analysis (or "forward testing") belongs to one specialization's vocabulary, versus simply being a form of out-of-sample testing, is nuance. Look at the Wikipedia link; it opens by saying "Walk forward optimization is a method used in finance for determining the best parameters to use in a trading strategy." That shows my point: this is what the trading community calls this type of testing, even if another field calls it something different or folds it into a larger subclass. You see the same thing in psychological testing, where "criterion-referenced" scoring is really a form of normalization.
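To make the walk forward mechanics concrete, here is a minimal sketch of the rolling in-sample/out-of-sample loop described above. The window lengths and the `optimize` and `backtest` functions are hypothetical placeholders, not any particular vendor's tool:

```python
# Minimal walk-forward sketch: optimize on an in-sample window, apply the
# chosen parameters to the next N out-of-sample bars, then roll forward by N
# and repeat.  Every out-of-sample slice is stitched into one composite result.
# `optimize`, `backtest`, and the window sizes are illustrative only.

def walk_forward(prices, in_sample_len=500, out_sample_len=100):
    composite = []                      # all out-of-sample results combined
    start = 0
    while start + in_sample_len + out_sample_len <= len(prices):
        in_sample = prices[start:start + in_sample_len]
        out_sample = prices[start + in_sample_len:
                            start + in_sample_len + out_sample_len]

        params = optimize(in_sample)                    # fit on past data only
        composite.extend(backtest(out_sample, params))  # trade it on "unseen" bars

        start += out_sample_len                         # walk forward by N bars

    return composite   # judge robustness on this composite, not on any single window
```

The point of returning the composite is exactly the one made above: robustness is judged on the combined out-of-sample windows, not on any individual slice.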
I'm not really qualified to answer this from personal experience, as I don't use automated testing myself (either back or forward), but given that caveat I understand the process to be:
1. Look at your charts to get some ideas of relationships (ones that seem to repeat often enough to be tradeable for your trading time horizon).
2. Segment your historical data, keeping one or more segments in reserve.
3. Test (backtest) your initial set of rules.
3a. Modify your rules if required, or discard the strategy.
3b. Re-run your backtest with the modified rules; repeat as required.
4. Using the reserved time segments and your finalized rules, run a test on those segments. If the strategy holds up, move to 5 (a rough sketch of steps 2-4 follows this list).
5. Taking random time clips (mini segments), run them through a simulator and look at the conditions of the trades.
6. With the information from step 5, determine when the trade should work, when it should fail, and the expected profit and loss.
7. Paper trade the system, noting whether market conditions are similar to the time segments in 4. If they are and your profits are not, stop and "do not pass go".
8. If profits are OK, trade cash, bank your profits, and send me a chunk!! (hah-hah)
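As promised in step 4, here is a rough sketch of steps 2-4: split the history into development segments plus a reserved hold-out segment, iterate the rules on the development data only, then run the finalized rules once on the reserve. The `backtest` function, the rule candidates, and the 80/20 split are hypothetical placeholders:

```python
# Steps 2-4 as a sketch: reserve part of the history, develop rules on the
# rest, validate once on the reserve.  All names here are illustrative.

def split_history(bars, reserve_fraction=0.2):
    cut = int(len(bars) * (1 - reserve_fraction))
    return bars[:cut], bars[cut:]        # (development data, reserved data)

def develop_and_validate(bars, rule_candidates):
    develop, reserve = split_history(bars)

    # Steps 3-3b: iterate rule candidates against the development segments only.
    best_rules = max(rule_candidates, key=lambda rules: backtest(develop, rules))

    # Step 4: a single pass over the reserved segments with the finalized rules.
    holdout_result = backtest(reserve, best_rules)
    return best_rules, holdout_result
```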
Hi guys, The biggest problem I have with this idea of a "rolling window" where the old data gets dropped out is that I can't see a cost-effective way to do it. If I'm training a NN, SVM, etc., then when a new slice of data arrives (after using it as the "quiz"), I only need to run training on the new data; but if I also want to push old data off the left side of the window, I have to restart and retrain from scratch on every slice of data. Volpunter, you seem to be much more concerned with overfitting (high variance) than with underfitting (high bias); is there a particular reason for this? P.S. CS degree from CMU? Nice! No wonder you can make Windows run like a decent system.
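To spell out the cost concern, here is a rough sketch of the two retraining schemes under discussion. The model choice (an SGD-based classifier as a stand-in for the NN/SVM), the feature/label arrays, and the window size are all assumptions for illustration:

```python
# Expanding window vs. rolling window: incremental learners can absorb just the
# new slice, but dropping old bars from the left usually forces a full refit.
import numpy as np
from sklearn.linear_model import SGDClassifier

def expanding_window_update(model, X_new, y_new):
    # Cheap update: an already-initialized incremental model learns from the
    # newest slice only; old data never needs to be revisited.
    model.partial_fit(X_new, y_new)
    return model

def rolling_window_refit(X_all, y_all, window):
    # Expensive update: once old data is dropped from the left side of the
    # window, most models must be retrained from scratch on the whole window.
    model = SGDClassifier()
    model.fit(X_all[-window:], y_all[-window:])
    return model
```

This is only a sketch of the trade-off; whether the cheap incremental path is available at all depends on the specific model family being trained.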
Yes, I worry about overfitting parameters to past data because it will degrade strategy performance with a higher probability than underfitting. Think of it like this: say you have a strategy idea that works on average and produces returns that are neither mediocre nor superior. It performs better in certain market cycles than in others, but averaged over a long enough period it produces stable returns.

If you now overfit the strategy, you pretty much destroy the original strategy's properties and instead force it to behave one way under a very specific market condition. That may produce superior returns, but as soon as market dynamics change your performance will drastically decay. What you are left with is not only much higher return variance, or possibly a blowout, but your confidence will also be shattered. After all, there is still a human with human emotions sitting with a hand on the switch. When does one conclude a strategy no longer works? How much drawdown can one accept before bailing? That is why I concern myself much more with retaining the strategy concept than with trying to bend the strategy to the data within a narrow window.

BTW, I do not have a CS degree. My degree is in computational finance, a combination of statistics, math (mostly stochastic calculus), and computer science, applied in combination to price and value financial assets.
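A toy way to see the point about performance decay: the parameter set that maximizes in-sample return is often not the one whose results hold up out of sample. The `strategy_return` function and the parameter grid below are hypothetical, purely to illustrate checking the in-sample/out-of-sample gap rather than the in-sample peak:

```python
# Compare parameter choices by in-sample return and by how much that return
# decays out of sample.  All names and inputs are placeholders.

def pick_parameters(param_grid, in_sample, out_sample):
    results = []
    for params in param_grid:
        ins = strategy_return(in_sample, params)
        oos = strategy_return(out_sample, params)
        results.append((params, ins, oos, ins - oos))   # last term = decay

    overfit_pick = max(results, key=lambda r: r[1])     # best in-sample only
    robust_pick = min(results, key=lambda r: r[3])      # smallest decay
    return overfit_pick, robust_pick
```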
I see what you mean: with underfitting the strategy looks bad in training and it also looks bad when generalizing to out-of-sample data, but overfitting can be deceptive and show its ugly face only when generalizing against new data...
Simple: use TradersStudio; all my tools work in a walk forward way. My CycleStudio, which I just released, and Neural Studio, which will be out soon, include examples that work in the walk forward way.