Here is an interesting link to my development lab, where you can watch live my new algo strategy for volatility: https://forums.collective2.com/t/direct-link-to-watch-my-strategies-development-lab/10508, or go directly to the lab at www.screenleap.com/tradingmachine. I will be happy to answer any questions.
This is how I approach my backtests and their validation. Let's say I want to backtest the performance of an ATM fly purchase on ETFs that crossed their 50-day moving average. I would create the filters for the 50-day MA, create the option payoff that simulates an ATM fly purchase, run the formula on years of data, pick out 30 trades randomly, and eyeball 20 of them to make sure there is nothing funky with the results (e.g., entry price of SPY is $200; three weeks later SPY settles at $203, which implies that my ATM fly made money). I then take the top 5 losses and top 5 gains, plug the ATM fly into thinkorswim's OnDemand feature, and watch tick by tick how the position made or lost money. There is drift between TOS OnDemand and the backtest, but not enough to invalidate the backtest.
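For anyone who wants to try this, here is a minimal sketch of that workflow in Python, assuming daily closes in a pandas Series. The fly is marked at its theoretical expiry value with the entry price standing in for the ATM strike, and the wing width, holding period, and entry cost are illustrative placeholders, not the actual parameters used above.

```python
import pandas as pd

def atm_fly_value_at_expiry(strike: float, settle: float, wing: float) -> float:
    """Expiry value of a long call butterfly centered at `strike`:
    long 1 call at strike - wing, short 2 at strike, long 1 at strike + wing."""
    return (max(settle - (strike - wing), 0.0)
            - 2.0 * max(settle - strike, 0.0)
            + max(settle - (strike + wing), 0.0))

def backtest_ma_cross_fly(closes: pd.Series, wing=5.0, hold_days=15, entry_cost=1.50):
    """Buy an ATM fly each day the close crosses above its 50-day MA,
    and mark the trade at its expiry-style value `hold_days` later."""
    ma50 = closes.rolling(50).mean()
    crossed = (closes > ma50) & (closes.shift(1) <= ma50.shift(1))
    trades = []
    for i, hit in enumerate(crossed.to_numpy()):
        if not hit or i + hold_days >= len(closes):
            continue
        entry, settle = closes.iloc[i], closes.iloc[i + hold_days]
        pnl = atm_fly_value_at_expiry(entry, settle, wing) - entry_cost
        trades.append({"date": closes.index[i], "entry": entry,
                       "settle": settle, "pnl": pnl})
    return pd.DataFrame(trades)

# Validation step as described: a random sample to eyeball, plus the
# extremes to replay tick by tick in thinkorswim OnDemand.
# results = backtest_ma_cross_fly(spy_closes)
# eyeball = results.sample(min(30, len(results)), random_state=1)
# extremes = pd.concat([results.nlargest(5, "pnl"), results.nsmallest(5, "pnl")])
```

The random-sample step matters as much as the extremes: sampling catches systematic errors (bad fills, misaligned dates) that only looking at the biggest winners and losers would miss.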
Sure, I'll try. Decay, and the higher probability of the market changing as time passes. Then there is the problem of data-snooping bias: if you forward test a few strategies, one may match your backtests purely by luck. An expert in this area recently wrote an article on Medium; in its Figure 1, replace "out-of-sample test" with "forward test" and you get the same data-snooping-bias effect.
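To see the effect concretely, here is a small simulation with hypothetical numbers. Every "strategy" below is pure noise with zero edge by construction, yet the one you select for having the best backtest will usually look impressive, and its forward test will, on average, regress to nothing.

```python
import numpy as np

rng = np.random.default_rng(0)
n_strategies, backtest_days, forward_days = 50, 500, 250

# Each strategy's daily P&L is random noise: no real edge exists.
backtest_pnl = rng.normal(0.0, 1.0, size=(n_strategies, backtest_days))
forward_pnl = rng.normal(0.0, 1.0, size=(n_strategies, forward_days))

# Select the strategy with the best backtest, then inspect its forward test.
best = backtest_pnl.sum(axis=1).argmax()
print(f"best backtest total: {backtest_pnl[best].sum():+.1f}")   # large, by selection
print(f"its forward total:   {forward_pnl[best].sum():+.1f}")    # ~0 on average

# Because the winner was chosen after the fact, its strong backtest says
# nothing about a real edge; the forward test reveals the true (zero) edge.
```

The more strategies you sift through, the stronger this selection effect gets, which is why a handful of forward tests cannot by themselves validate a backtest.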
Very well designed. I want to know your opinion: is it wise and possible to create self-adjusting modules within a specific market condition? And is it wise and possible to have a dynamic monitoring system capable of recognizing market character rather than using static measurements, with minimum statistical bias and minimum operator involvement?