If Frosty were to post a daily chart (a screen capture, or entry and exit points in a short table) showing his entries and exits, we could really come up with lots of ideas. He would not be revealing his actual approach to entries and exits, but a discretionary trader could look at the trades on his/her own charts and point out that "the ADX reversed, and two out of three of those trades would have been winners". System designers might note that all losing trades were entered during a certain time of day and suggest a filter, and so on. Everyone might learn something about trading, and it would probably reduce our personal frustration level. It is hard to assist someone with very minimal information. Of course, none of this really explains why his back-testing is consistently good and his real-time is consistently bad. Jack
His back-testing is consistently good and his real-time is consistently bad because the bot shows trades that happened after the fact, so everything looks just hunky-dory: the backtests take trades through the rear-view mirror, so to speak. austinp spoke to this; he also explained why, in this case at least, he could do all the back-testing in the world and it still wouldn't work, as well as the type of testing he should be doing, not to mention a total system redesign (one that includes protective stops, hopefully). The fact of the matter is, modeling a longer-term position trader would make much more sense, in an index with nowhere near as much leverage as the ER ... but that was already mentioned as well. *** At this point, the direction of the thread should go into system design from scratch, so that at least someone can come up with some good trading ideas and implement them. Jimmy Jam
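For anyone who wants to see what that rear-view-mirror problem looks like concretely, here is a minimal sketch (toy Python, invented bar data, nothing to do with Frosty's actual code) comparing a backtest that peeks at the bar it is trading with one that only acts on information available at the time:

```python
# Toy illustration of look-ahead bias; bars are (open, high, low, close) tuples.

def backtest_with_hindsight(bars):
    """Looks great because it only 'enters' bars it already knows closed higher."""
    pnl = 0.0
    for o, h, l, c in bars:
        if c > o:            # decision uses the bar's own close -- future information
            pnl += c - o
    return pnl

def backtest_honest(bars):
    """Signal comes from the completed previous bar and is filled on the next bar."""
    pnl = 0.0
    for prev, cur in zip(bars, bars[1:]):
        if prev[3] > prev[0]:        # previous bar closed up
            pnl += cur[3] - cur[0]   # enter at current open, flat at current close
    return pnl

bars = [(100, 101, 99, 100.5), (100.5, 101, 99.5, 99.8), (99.8, 100.7, 99.6, 100.6)]
print(backtest_with_hindsight(bars), backtest_honest(bars))  # 1.3 vs -0.7
```

Same data, same "system": the hindsight version prints a nice positive number, the honest one loses money. That is the whole backtest-good / real-time-bad story in a few lines.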
Now this kid has talent. If he can ever develop the ability to see the obvious and program a bot to execute his trades while he's hanging out playing video games, he's going to be one scary dude. JJ
This will never work; I've done a lot of study with neural networks. A system that is trained to recognise apples/oranges will attempt to class everything else as one of those. Throw in a pear and it will class it as 75% apple. Train it on green apples and then throw in a red apple, and it could get confused by the colour and class it the same way: 75% apple, 25% orange.
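To make that forced-choice problem concrete for the non-NN folks, here's a toy sketch (made-up numbers, a nearest-centroid toy rather than a real neural net): a model that only knows two classes will shove everything into one of them.

```python
# Toy illustration of forced classification: only two known classes exist,
# so every input, including a "pear", gets mapped to apple or orange.
import math

centroids = {
    "apple":  (0.8, 0.2),   # (redness, some other invented feature)
    "orange": (0.1, 0.9),
}

def classify(x):
    dists = {label: math.dist(x, c) for label, c in centroids.items()}
    return min(dists, key=dists.get)   # no option to say "neither"

print(classify((0.5, 0.5)))  # a "pear" still comes out as apple or orange
```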
Did frost ever say if his bot trades intra-bar or at bar close only? Did he mention the bar timeframe?
Booking -- it is possible to define a k-nearest neighbor NN with a threshold and simply have three outputs -- apple, orange, neither.
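Something like this, in rough Python (invented training points and threshold, just to show the idea of a rejection option): if even the nearest neighbour is too far away, the answer is "neither" instead of a forced apple/orange.

```python
# k-nearest-neighbour with a distance threshold and a third output: "neither".
import math

train = [((0.9, 0.1), "apple"), ((0.8, 0.3), "apple"),
         ((0.1, 0.9), "orange"), ((0.2, 0.8), "orange")]

def knn_with_reject(x, k=3, threshold=0.35):
    neighbours = sorted(train, key=lambda p: math.dist(x, p[0]))[:k]
    if math.dist(x, neighbours[0][0]) > threshold:
        return "neither"                      # too far from anything seen in training
    votes = [label for _, label in neighbours]
    return max(set(votes), key=votes.count)   # majority vote among the k nearest

print(knn_with_reject((0.85, 0.2)))  # apple
print(knn_with_reject((0.5, 0.5)))   # neither -- the pear/banana case
```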
Well, not quite what I meant. It can classify apples and oranges; it doesn't classify every non-orange as an apple. It knows what an apple looks like and it knows what an orange looks like, and it doesn't attempt to classify anything that doesn't look like one of the two. So when bananas come down the tape it just goes and flies a kite. Which is what I was trying to get at in my post: a regular classification system is judged on how it performs on every element of the test data, where "I don't know" isn't usually a valid option. But with trading you can use "I don't know" as much as you want, and make your profits on the subsets of the problem your system handles very well. Thus curve-fitting to the apple/orange curves turns out to be a beneficial thing.

Take the case of colour. Say the system finds in the training set that apples are red and oranges are ... orange, and that is its whole hypothesis. Sure, it might attempt to classify a strawberry or a dragon fruit as an apple and get it wrong. But if over a year of back data you saw 100% profitable days with small drawdowns, maybe misclassifying strawberries and dragon fruits is just the cost of doing business, and you base your assumption about how many strawberries and dragon fruits will show up on what you saw in your large sample set. Of course that could be wrong, but the only hope you have of predicting the future is looking at the past; if you are going to assume a huge change in the number of dragon fruits, there is no point trying to learn the market at all, with either ATSs or discretion.

Ah, no, that's not what I was trying to say. What I meant was that it is possible to over-optimize on data of any size, depending on the granularity of your AI's grammar. But even if you do over-optimize on past data, it is not necessarily a bad thing with trading: as long as the optimization defines some correct subsets of profitable trades (100% profitable days seen) and not too many incorrect subsets (relatively small drawdowns), it will be a good system regardless of the trades you are missing or how over-fitted you are to very specific elements.

I guess the nutmeat of the idea is that curve-fitting to past data when building a regular classification system can often be very bad, but with trading the same kind of over-fitting is acceptable as long as you a) have a large sample set, b) make enough trades during the back test, and c) keep a small grammar and VC-dimension. Basically, allow "I don't know" as a valid classification option, as long as the system does attempt to classify sometimes (and turns a profit on those classifications). We can accept someone as a good boxer solely because he fights weak opponents, regardless of his actual boxing ability -- otherwise known as my baby-punching lemma.
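The "I don't know" option translates into trading terms pretty directly. A minimal sketch, assuming a hypothetical model that spits out a probability of an up move (the cutoff value is made up): act only when it is confident, stay flat the rest of the time.

```python
# Abstention as a first-class output: trade only the confident subset, else stay flat.

def decide(prob_up, cutoff=0.75):
    """Map a model's probability estimate to an action, with 'no trade' allowed."""
    if prob_up >= cutoff:
        return "long"
    if prob_up <= 1.0 - cutoff:
        return "short"
    return "no trade"        # the banana / dragon-fruit case: go fly a kite

for p in (0.92, 0.55, 0.10):
    print(p, "->", decide(p))
```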
According to my inside information, the bot trades based on 1-minute ER2 bars. However, this may be outdated or even plain wrong; even the insiders are kept in the dark regarding the very basic elements of the bot. The active participation in this thread, despite the obvious lack of information from the originator, tells me that a lot of people are struggling with their bots and are looking for the right path.
I wouldn't be so sure about that. Maybe it's the people with successful bots who are so bored waiting for them to trade, and so busy finding reasons to postpone further R&D, that they cling to any kind of interesting conversation available?