Thoughts on system optimization

Discussion in 'Automated Trading' started by frostengine, Dec 10, 2007.

  1. I have recently been wondering how sound an approach the following is:

    1. Take a series of data and a basic strategy, working one direction at a time; for instance, start with a long-only strategy. Now GREATLY optimize it on the data until it has a good return and only a handful of trades.

    2. Now you do the same thing with the same strategy or a different strategy, as long as the parameter values it outputs are different from the first one's....

    3. Continue doing this until you have SEVERAL long-only strategies that each produce only a few, but really good, trades; basically you are producing a bunch of overly curve-fitted parameter combinations for a particular strategy.

    4. Now you do this same procedure with short trades.

    Lastly, you take all of these optimized parameter combinations for the long and short strategies and put them into one single strategy. Now for each incoming bar you test it like this (a fuller sketch follows at the end of this post):

    if optimized_long_strategy_1 signal == true
        buy
    else if optimized_long_strategy_2 signal == true
        buy
    else if ...

    Now you have all of these optimized parameters forming one big strategy that actually makes thousands of trades on the test data with REALLY good results....

    Now, generally, curve-fitted strategies are bad because it's unlikely the events will replicate exactly in the future.... BUT in this scenario each curve-fitted strategy is set to find a small niche that rarely happens..

    I guess the overall question after all this rambling is: would you trust such a system? Could this actually be a good way to develop a strategy?

    Open to all thoughts regarding this.... basically just a big rambling idea I've had lately...
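    [A minimal Python sketch of the combination logic described in this post. Everything here is a hypothetical placeholder; the NicheStrategy class, the parameter sets, and the signal() stub are illustrative, not working entry rules.]

        from dataclasses import dataclass

        @dataclass
        class NicheStrategy:
            """One heavily optimized parameter combination."""
            name: str
            params: dict

            def signal(self, bar) -> bool:
                # Placeholder: each curve-fitted parameter set would implement
                # its own rarely-triggered entry condition here.
                return False

        long_strategies  = [NicheStrategy("long_1",  {"lookback": 20}),
                            NicheStrategy("long_2",  {"lookback": 55})]
        short_strategies = [NicheStrategy("short_1", {"lookback": 10})]

        def on_bar(bar):
            # First sub-strategy to fire wins, exactly like the if/else chain above.
            for s in long_strategies:
                if s.signal(bar):
                    return "buy", s.name
            for s in short_strategies:
                if s.signal(bar):
                    return "sell", s.name
            return None, None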
     
  2. I would trust these results if and only if you can get convergence across other similar markets.

    If the parameters you're trying give good results on one instrument but not on another similar instrument, then forget it.

    If they do, however, you might have found something good.


    Example: a strategy that generates $1,000 on 10 trades on the e-mini S&P but loses $800 on 9 trades on the e-mini Russell; that's a red flag right there.

    Convergence of data is really important.

    Every filter that improves results on one instrument should ideally improve them on other similar instruments. Otherwise it's exactly that, curve fitting, and it will be a very expensive lesson. (A sketch of this convergence check follows below.)
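    [A short sketch of the convergence check described above, assuming a hypothetical backtest(params, data) function that returns net profit; none of these names come from the post.]

        def converges(params, datasets, backtest):
            """datasets: dict mapping instrument name -> price data."""
            results = {name: backtest(params, data)
                       for name, data in datasets.items()}
            # "Convergence": the edge should hold up on every similar market,
            # not just the one it was fitted on.
            return all(profit > 0 for profit in results.values()), results

        # Echoing the post's example: results of {"ES": +1000, "ER2": -800}
        # fail the check, and that is the red flag.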
     
  3. cd23

    trust?.. yes

    develop?... this would just be a beginning.

    Unfortunately, the reduction part of modelling/development has a rather tricky set of requirements. Once you foresee these (by becoming knowledgeable), you will decide not to begin formalizing any of the work you cited above.

    Instead, you do something that is orthogonal to what you did above. By reading about how software is done, you can find out how models are "developed". Models are not developed using just any sequence that is possible; they are deeply constrained by the requirement to not make mistakes in coding.

    What you are talking about as the beginning of a development is a giant landmine, roughly speaking.
     

  4. Yes, another lesson that was expensive for me.
     
  5. BJL

    How about... you try it yourself and see if it works.

    Develop a system using 1990-2000 data, out-of-sample test it for 2001-2007.

    If you want to get really thorough, walk-forward test it: use 1990-2000 to make a 2001 model, use 1991-2001 to make a 2002 model, etc. (sketched below).

    You'll quickly see how well your method works.
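    [A sketch of this rolling walk-forward loop, assuming hypothetical fit() and evaluate() functions and data keyed by year; all names here are illustrative.]

        def walk_forward(data_by_year, fit, evaluate,
                         first_test=2001, last_test=2007, window=11):
            # 1990-2000 inclusive is an 11-year training window.
            scores = {}
            for test_year in range(first_test, last_test + 1):
                train = [data_by_year[y]
                         for y in range(test_year - window, test_year)]
                model = fit(train)                 # fit on the rolling window only
                scores[test_year] = evaluate(model, data_by_year[test_year])
            return scores                          # one out-of-sample score per year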
     
  6. I do not understand what you are referring to here. Or how it relates to what I proposed. Can you elaborate? Thanks
     
  7. MGJ

    Find out how well the idea would have worked if you had thought of it and implemented it on 12/31/2006.

    Do your (sub)system optimizations on data through 12/31/2006 and then run the final combo-shebang on 2007 data. See how well it would have worked this year, when optimized on data prior to this year. ("out of sample testing"). If you like what you see then it may embolden you to do the optimization over again on 12/31/2007 and trade it with real money in the year of our lord 2008.
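    [A minimal sketch of the single out-of-sample split MGJ describes, assuming hypothetical optimize() and evaluate() functions and bars carrying an ISO "YYYY-MM-DD" date string; these names are placeholders, not the poster's code.]

        def out_of_sample_test(bars, optimize, evaluate, cutoff="2006-12-31"):
            in_sample  = [b for b in bars if b.date <= cutoff]  # optimize on this
            out_sample = [b for b in bars if b.date >  cutoff]  # never seen by the optimizer
            params = optimize(in_sample)         # all (sub)system optimization stops at the cutoff
            return evaluate(params, out_sample)  # 2007 performance, fitted only on pre-2007 data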
     
  8. This will probably take you 10 years to understand...
    If you bang your head against the pole that long...
    But purely technical strategies of this type...
    Are quite WEAK in the long run...
    Certainly not strong enough to build a business on.

    There is a fundamental reason why this is:

    Experts make better decisions than rigid systems...
    Therefore... experts will play the game better than your rigid system...
    And they will TAKE YOUR MONEY.

    To understand this better... here is a question:

    Is it possible to develop a backtested strategy of this type...
    To beat sports betting?

    If yes why... if no why not?
     
  9. cd23

    My response to you is in the light of constructive criticism.

    "I do not understand what you are referring to here."

    A lot of coding originated, in the language sense, from places like BTL (Murray Hill) and UCSC (Philadelphia). There is literature on how to go about building applications of these products. If a person does not follow the recommended prescriptions, they wind up in troubleshooting loops where solutions or fixes spawn subsequent glitches and bugs. This can eat up schedules and money.


    "Or how it relates to what I proposed."

    You look at the market as a synthesis of separate opportunities. As an aside, a learning model of how the mind is built during the learning process would find this least effective.

    What is required instead, to be in concert with ordinary development, is to take the parts you have and put them in order. Alexander's Method (Berkeley) becomes your primary resource for this. You want to pile things together as nodes and show their links to each other. Having done that, you can then slice the aggregation into islands that are least coupled.

    In this manner the major parts have achieved an autonomy of sorts. These parts can be worked on in parallel without any inter-part interference of significant magnitude. Later, you can manage the interconnection of the parts.

    For piling, you want to pick up each piece and match the left sides (first occurring in time) of very similar pieces. Put the short pieces on top and later turn them upside down; code the longest piece first, then pull the outputs of the shorter pieces out as you work your way down the inverted pile.
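    [A loose illustration of the piling-and-slicing idea above, not cd23's actual procedure: treat the pieces as graph nodes with weighted coupling links, drop the weakest links, and take the connected components as the least-coupled islands. All names and weights here are made up.]

        from collections import defaultdict

        def least_coupled_islands(links, weak_threshold):
            """links: dict mapping (node_a, node_b) -> coupling strength."""
            graph = defaultdict(set)
            nodes = set()
            for (a, b), strength in links.items():
                nodes.update((a, b))
                if strength > weak_threshold:   # keep only the strong couplings
                    graph[a].add(b)
                    graph[b].add(a)
            islands, seen = [], set()
            for start in nodes:                 # connected components = islands
                if start in seen:
                    continue
                stack, island = [start], set()
                while stack:
                    n = stack.pop()
                    if n in island:
                        continue
                    island.add(n)
                    stack.extend(graph[n])
                seen |= island
                islands.append(island)
            return islands

        links = {("entry", "filter"): 0.9, ("filter", "sizing"): 0.8,
                 ("sizing", "exit"): 0.1, ("exit", "stops"): 0.7}
        print(least_coupled_islands(links, weak_threshold=0.5))
        # e.g. [{'entry', 'filter', 'sizing'}, {'exit', 'stops'}] (order may vary)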

    "Can you elaborate?"

    What I mentioned first is the real route for you to take. You have to inform yourself to do the work of mechanizing anything. The approach that you are using is not prudent, basically.

    It has been pointed out once in a while that making money in the markets is a function of knowing the markets. Try to reason through what it means to know the markets.

    If you did know the markets, it would change how you presently view what is possible and how to approach the modelling and how, subsequently, to approach the development to achieve mechanization.

    It is hard to explain, softly, how and why a person may want to change direction. The tradeoff of gaining practice before undertaking a significant work of science is profound. The cost of embarking on a project (something that begins and ends) that will not bear fruit can be discouraging to some types of modelers and developers. The book I referenced recently, "The Predictors", is such an example. Those people never figured out that they did not know what they were doing. The people who left the project did, however.

    There are many tradeoffs. NSF supports projects in the science of the financial industry. They are anything but noteworthy so far in the process. There, in that major effort, you can find numerous examples of what not to be doing.

    My comments to you are intended to be constructive; they are in the context of what you are doing, and my orientation in making them is not from a vacuum. I have worked in many fields. As a consequence I have been an MTS at BTL, a division manager at UCSC, and have been designated a program of the year in grants from NSF. Therefore, my comments to you are in the vein of commending you for doing a warm-up drill, and I am nudging you to get organized. Further, I am suggesting that you upgrade yourself with another level of knowledge that relates to your infinite scheme of things, and I am asking you to consider the value of being pragmatic in an imperfect situation where the dominos do not line up conveniently.

    The particular venue you have chosen has no public results to draw upon. What you are presently doing is well trodden and not too productive because of the entanglements that inevitably ensue.
     
  10. cd23

    You may be mistakenly thinking that he would be designing a rigid system. There are no contemporary "rigid" systems that have been modeled or developed in recent times. Rigidity left the scene of the accident long ago.

    Do some catch-up reading and searches if you can.

    Your sports analogy is priceless.
     