Have you ever been on a ship going from Cape Horn to the tip of Antarctica, or vice versa? You would sail through the "convergence". There are many more things at play on that route. A person who looks at these processes the opposite way to the one you mention will make more forward progress than looking at it the way you did. The wheat races of long ago were a tribute to those who knew what they were doing. Applying science to the markets is not very difficult. My posts are intended to make very clear how it turns out when that is done, in one of the many ways that it is possible. Elsewhere there is a post where a person reviews what he thinks the majority do; he thinks they can guide him.
Strongly disagree. There is no grand master plan (i.e., "curve") being fitted by evolution. We'd be descended from velociraptors rather than from monkeys if not for the black swan of a big asteroid slamming into this planet. Shit happens, especially random shit, and evolution adapts to it.
In many practical cases it's discovering a trading strategy and choosing its parameters (if any) that is the difficult task. Once this has been accomplished, a lot of strategies are computationally simple and can be coded with moderate effort directly into the chosen charting/autotrading platform (or even Excel). Those few strategies that actually have to run complex computations in real time can use a third-party library or indeed call directly into R.
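To illustrate the "computationally simple" point above, here is a minimal sketch of the kind of rule that needs no heavy machinery: a moving-average crossover. The windows `FAST` and `SLOW` and the price series are made-up illustrations, not anything from the thread.

```python
# Hypothetical example of a computationally trivial trading rule:
# a fast/slow simple-moving-average crossover. All names and
# parameters here are illustrative assumptions.

FAST, SLOW = 3, 5  # lookback windows, chosen arbitrarily

def sma(series, window):
    """Simple moving average; None until enough data has accumulated."""
    return [
        sum(series[i - window + 1 : i + 1]) / window if i >= window - 1 else None
        for i in range(len(series))
    ]

def crossover_signals(prices):
    """+1 while the fast SMA is above the slow SMA, -1 below, 0 otherwise."""
    fast, slow = sma(prices, FAST), sma(prices, SLOW)
    signals = []
    for f, s in zip(fast, slow):
        if f is None or s is None:
            signals.append(0)  # not enough history yet
        else:
            signals.append(1 if f > s else (-1 if f < s else 0))
    return signals

prices = [10, 11, 12, 13, 14, 13, 12, 11, 10, 9]
print(crossover_signals(prices))  # → [0, 0, 0, 0, 1, 1, 1, -1, -1, -1]
```

A few dozen lines like this run comfortably inside any platform's scripting language, which is the point: the hard part is deciding on the rule, not executing it.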
Out-of-sample testing is an invaluable tool, but it doesn't (and can't) eliminate curve-fitting entirely. As soon as you start picking, from among your backtested strategies, those with satisfactory out-of-sample performance, you introduce data-mining bias and end up with the same spectrum of problems you tried to eliminate by introducing forward-testing.
It is not the same spectrum of problems. Data mining bias can be a good bias. It can work for you rather than against you. Do you actually think that data mining bias is always bad?
The point of my post was simply that, if not used very carefully, forward-testing introduces problems very similar to those it is used to solve, i.e. selection bias. By choosing trading systems that perform well on a forward test, we indirectly choose systems that are fitted to the forward-test period. I haven't seen a practical case where data-mined systems would consistently perform better out-of-sample than on the in-sample tests. If your perception is that I am missing this big part of the data-mining goodness, please enlighten me.
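The selection-bias argument above can be made concrete with a small simulation: generate many "strategies" that have no real edge at all (pure coin flips), keep the one with the best forward-test result, and then look at it on genuinely fresh data. The strategy count, period length, and seed are arbitrary assumptions for the demonstration.

```python
# Minimal sketch of selection bias in forward testing: among many
# zero-edge strategies, the forward-test winner looks good purely by
# chance, and the apparent edge evaporates on fresh data.
import random

random.seed(0)  # fixed seed so the demonstration is reproducible

N_STRATEGIES, N_DAYS = 1000, 250  # illustrative assumptions

def mean_daily_return(n_days):
    """Average of +1/-1 coin-flip 'daily returns' -- zero true edge."""
    return sum(random.choice((1, -1)) for _ in range(n_days)) / n_days

# "Forward test": evaluate every zero-edge strategy, keep the best.
forward = [mean_daily_return(N_DAYS) for _ in range(N_STRATEGIES)]
best = max(forward)

# The selected strategy on never-before-seen data.
fresh = mean_daily_return(N_DAYS)

print(f"best forward-test mean: {best:+.4f}")      # looks like an edge...
print(f"same strategy, fresh data: {fresh:+.4f}")  # ...but there was none
```

With a thousand candidates, the best forward-test result is comfortably positive even though every strategy is a coin flip, which is exactly the bias introduced by choosing systems on forward-test performance.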
I missed your reply, and I think it is too important to ignore. In principle I agree with you. I have been able to get data-mined systems that are as good in the OOS as they were in the IS, though not better. See also this post: http://www.elitetrader.com/vb/showthread.php?s=&postid=3546776#post3546776 If I can get data-mined systems with a profit factor of at least 50% of that calculated for the IS, then I am satisfied, because in most cases it simply means that the favorable conditions in the IS that gave a boost to the profit factor were not present in the OOS.
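The acceptance rule described above can be sketched in a few lines: compute the profit factor (gross profit divided by gross loss) for the in-sample and out-of-sample trade lists, and keep the system only if the OOS figure is at least 50% of the IS one. The trade results below are made-up numbers for illustration.

```python
# Sketch of the 50%-of-IS profit-factor acceptance rule described in
# the post. Trade P&L lists are hypothetical.

def profit_factor(trade_pnls):
    """Gross profit divided by the absolute value of gross loss."""
    gross_profit = sum(p for p in trade_pnls if p > 0)
    gross_loss = -sum(p for p in trade_pnls if p < 0)
    return float("inf") if gross_loss == 0 else gross_profit / gross_loss

is_trades  = [30, -10, 25, -15, 40, -20]   # in-sample results (made up)
oos_trades = [20, -15, 10, -10, 15, -12]   # out-of-sample results (made up)

pf_is  = profit_factor(is_trades)    # 95 / 45 ≈ 2.11
pf_oos = profit_factor(oos_trades)   # 45 / 37 ≈ 1.22
keep = pf_oos >= 0.5 * pf_is         # the 50% threshold from the post

print(f"IS PF = {pf_is:.2f}, OOS PF = {pf_oos:.2f}, keep = {keep}")
```

Here the OOS profit factor (~1.22) clears half the IS value (~1.06), so this hypothetical system would pass the filter.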