The system looks good... the only problem I see is that slippage will most definitely be a major issue in real-time trading of ES.
Finished doing the tests. For the tests I used back-adjusted data from Tickdata.com from 1/2/2003 - 10/31/2006. I used 100 for "Max number of bars strategy will reference" and optimized by total profit. I ran optimization tests for each year separately over parameter values 10-99, so a total of 90 tests were run on each year for each timeframe. I used no costs in the optimization.

A note about the output: Net profit is the total profit for the year; the sum of all years is under the 2006 number. Best PF is the profit factor of the best total-profit run (not the best overall profit factor). Ave. trade is the average trade of the best total-profit run. Best PER is the best parameter I found doing the optimization. % Ave. Trade > $80 is the percentage of all the runs that had an average profit per trade of $80 or more. (A rough sketch of how these stats can be computed follows below.)

Analysis: I noticed several things doing these tests.

1) The best parameters varied wildly from year to year with no grouping. If I had to select a parameter for next year, I believe it would be no better than a random guess.

2) The % > $80 for average trade is important to me because it tells me how wide a target we have to make money. I use $80 because this strategy relies on market orders, so a lost tick on entry and exit plus $5 for commission and you've lost $30 of potential profit. That leaves us with $50, or 1 point, which is my minimum for looking at a strategy. You can see that using 15 min. data there was a 0% chance of hitting the $80 average-per-trade mark even with the best parameters. The other timeframes are also pretty dismal. It looks like 2004 was the best year for this strategy: if you used 120 min. bars, you'd have had a 71% chance of hitting an $80 per-trade profit just picking a random parameter from 10-99. Otherwise, there is no wide target area where we could make money with any reliability.

3) I want the lowest timeframe I can get with the best chance for success. In this case I think 60 min. data is the minimum that can be used for continued development. The profit dropoff isn't too bad and it has the most years with the widest parameter fit. Notice how wide the best parameter is for the 60 min. data: it ranges from 20-95. Still, with such a tight decent-profit area within each year, it's unlikely we'd make much money with this plain-vanilla strategy.
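For anyone who wants to reproduce the summary stats, here is a minimal Python sketch of the per-year sweep. run_backtest() is a hypothetical stand-in for whatever backtester you use (it should return the list of per-trade P&L in dollars); it is not the actual platform call.

```python
# Sketch of the per-year parameter sweep described above.
# run_backtest() is hypothetical: it runs one backtest for a given
# parameter on one year of data and returns per-trade P&L in dollars.

def summarize_year(year_data, params=range(10, 100)):   # 90 runs, 10-99
    runs = []
    for p in params:
        trades = run_backtest(p, year_data)              # hypothetical call
        net = sum(trades)
        gross_win = sum(t for t in trades if t > 0)
        gross_loss = -sum(t for t in trades if t < 0)
        pf = gross_win / gross_loss if gross_loss else float("inf")
        avg = net / len(trades) if trades else 0.0
        runs.append({"param": p, "net": net, "pf": pf, "avg_trade": avg})

    best = max(runs, key=lambda r: r["net"])             # optimized by total profit
    return {
        "net_profit": best["net"],
        "best_pf": best["pf"],           # PF of the best total-profit run
        "ave_trade": best["avg_trade"],  # avg trade of that same run
        "best_per": best["param"],
        "pct_ave_trade_over_80": 100 * sum(r["avg_trade"] >= 80 for r in runs) / len(runs),
    }
```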
For the next step I optimized on all the data from 2003 - 10/31/2006 using 60 min. bars. For this test I added a cost of $30 per round turn to reflect the loss of a tick on entry/exit plus $5 commission. The optimal parameter was 77. Here is the annual performance for those years. Notice what a good year 2004 was; this is consistent with the earlier work showing that this strategy did well across a wide range of parameters that year. Notice how poorly all the other years did; this is also consistent with the narrow profitable parameter ranges found in those years.
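Applying the cost is straightforward; here is a minimal sketch, assuming trades_by_year is a hypothetical dict mapping year to a list of gross per-trade P&L:

```python
# Flat per-round-turn cost deducted from each trade before summing the
# year; $30 matches the tick-each-way-plus-$5-commission assumption above.
COST_PER_ROUND_TURN = 30.0

def annual_net_profit(trades_by_year):
    return {
        year: sum(t - COST_PER_ROUND_TURN for t in trades)
        for year, trades in trades_by_year.items()
    }
```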
Here's the optimization graph of parameter vs. net profit. The parameters from 10-99 are on the x axis and total net profit is on the y axis. Notice how the lower parameter numbers are all underwater. If we avoided the lower numbers, we'd have a pretty wide area where we could have made a profit (in hindsight). Notice the second dip in profits around the middle of the parameter range; this indicates the parameters are not really robust. The upper parameter area looks pretty stable, with some profits.
Sometimes you can find hidden relationships in the graphs by switching the x and y axes. Here I swapped net profit and parameter setting and plotted the graph again. We already saw that the higher parameter settings had some profit, but here you can see them trending higher (with plenty of noise). So if we had to pick a parameter setting in general, the higher the number, the better chance we'd probably have.
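If you want to recreate both views, here is a minimal matplotlib sketch; results is assumed to be a list of (parameter, net profit) pairs from the 10-99 sweep:

```python
# Sketch of the two optimization plots: parameter vs. net profit,
# then the same data with the axes swapped, as described above.
import matplotlib.pyplot as plt

def plot_both(results):
    params, profits = zip(*results)
    fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))

    ax1.plot(params, profits)
    ax1.axhline(0, linestyle="--")        # the "underwater" line
    ax1.set_xlabel("Parameter (10-99)")
    ax1.set_ylabel("Net profit ($)")

    ax2.scatter(profits, params)          # axes swapped
    ax2.set_xlabel("Net profit ($)")
    ax2.set_ylabel("Parameter (10-99)")

    plt.tight_layout()
    plt.show()
```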
So where am I at? So far we've found there are some trends, though the parameters aren't stable. Up until now all we've been doing is checking general trendiness in the data. 2004 looks like it was a pretty good year for this method; all the other years have unacceptable results. The next step is to see if we can improve by using filters. This will result in fewer trades and hopefully a more stable relationship. The kinds of filters I'm thinking of are: checking the relationship between the current trend condition and whether it persists, and whether an enter-on-a-dip strategy while in a trend has a better chance of success (a rough sketch of the idea is below). When I have more results I'll post the methodology as well as the results.
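Just to make the second filter idea concrete, here is a rough sketch of what "enter on a dip while in a trend" might look like. The trend test (close above a moving average) and the dip threshold are placeholder assumptions, not the method that will actually be tested:

```python
# Illustrative "enter on a dip while in a trend" filter. The moving-average
# trend condition and the 0.5% pullback threshold are placeholders only.

def dip_entry_signal(closes, ma_len=50, dip_pct=0.005):
    if len(closes) < ma_len:
        return False
    ma = sum(closes[-ma_len:]) / ma_len
    in_uptrend = closes[-1] > ma                          # crude trend test
    recent_high = max(closes[-ma_len:])
    dipped = closes[-1] <= recent_high * (1 - dip_pct)    # pullback off the high
    return in_uptrend and dipped
```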
"use $80 because this strategy relies on market orders so a lost tick on entry and exit plus $5 for comm. and you've lost $30 of potential profit. " um, no. classic mistake but no look at a contract that has a spread. if you buy the ask, sell the bid, you lose the spread. you do NOT lose the "spread twice" as many people incorrectly assume otoh, if you buy the bid, sell the ask (play market maker) you MAKE the spread (not applicable here). if you buy the ask, and sell the ask, you neither make nor lose the spread. that's a breakeven (absent commissions). so, assuming market orders on a liquid contract (you buy the ask, sell the bid), you lose the spread. that's $12.50 not $25 . if it is a losing trade, and you are using a stop (not stop-limit) and it's moving quickly, you may lose MORE than the spread of course. but, contrary to common belief, you do not pay the spread twice when you buy ask and sell bid. you pay the spread
These results are based on parameter values from 1-50; my results using 1-99 do not match no_pm's results. Let's just assume he's correct, since Tickdata seems to be the standard.