One week journal

Discussion in 'Journals' started by TSGannGalt, Dec 18, 2008.

Thread Status:
Not open for further replies.
  1. I'll be posting about system development for a week or less. I want to start and end this in a quiet manner.

    So you've got two systems, both trading under a different tendency. You've done all the tests and analyzed all the key figures required to run and manage the system solo. The obvious next step is to find out how you manage them within a portfolio of other models.

    Obviously, you run the multiple systems on the dataset you developed both models under. Literally, you repeat the tests you've done for a single system, but in a portfolio. You get the correlation factors and equity curve volatility (Std. Dev.) figures, and then test out a bunch of allocation models with them.

    So now, you take each of the new models and create a new market dataset that excludes the tendencies each of the models uses. In other words, you tweak the original market data so that the potential system fails. Then you take the slip-forward dataset for each of the systems (including the live models) and iterate through all the combinations.

    At this point you get a good picture of "what happens" when specific models, and combinations of models, fail, and you get post-slip market correlation factors for each of the models.

    Obviously, you re-factor the "fitted" results and the "slipped" results together to get the optimal allocation scheme for your portfolio. And again, you get a better picture of whether the new systems are significant enough to add to your live server.
     
  2. This actually came to me after having a drink with nitro 5-6 years back in Chicago at Dublins. I took the concept from that discussion forward and pursued it to where it is now. Mad props to nitro...


    Now... a bit more in detail.

    There are a few prerequisites to run this test. The first requirement is for the developer to isolate the specific condition that the system would be profitable under. If the system is simple, the required condition of the test data will be simple. So, using an example:

    I have a trend-following system. I'm going to use a simple Turtle entry/exit which would be:

    Entry - 20-bar breakout.
    Exit - 10-bar breakout.

    In this case, the condition required for the model to be profitable is quite simple. You want to code up a time-series data generator that frequently breaks out of the 20-bar range, and you want the price to still be above the entry price when the reverse 10-bar breakout triggers the exit. You generate the data and you have a 100% profitable dataset for your trend-following system.

    But then you would also want to add a few more things to the dataset to make it more concrete. For the trend-following system to be profitable, you want the balance between the entry and the exit to reflect key statistical measures (the tough part of this is that it's very model-centered; each model has its own set of "fitness" criteria). Keeping things simple, for the system to be profitable, you want the ratio between the % profitability (or the frequency of an upside move) and the risk/reward to be in balance. You can (and should) generate both kinds of datasets for future tests.
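
    Just to make the idea concrete, here's a minimal sketch in Python of that kind of biased generator: a random-walk (Brownian-style) base with an upward drift leg injected whenever a new 20-bar high prints, so breakouts tend to keep running through the 10-bar exit. The function name and the knobs (trend_drift, trend_len) are made up for illustration; a real generator would control far more measures, including the win-rate / risk-reward balance mentioned above.

    Code:
    import random

    def generate_trend_friendly_series(n_bars=1000, base_vol=1.0,
                                       trend_drift=0.6, trend_len=40, seed=42):
        """Random-walk closes with drift injected after each new 20-bar high,
        so 20-bar breakouts tend to keep running past the 10-bar exit."""
        random.seed(seed)
        closes = [100.0]
        drift_left = 0                        # bars of injected drift remaining
        for _ in range(1, n_bars):
            window = closes[-20:]             # last 20 closes (fewer early on)
            if closes[-1] >= max(window) and drift_left == 0:
                drift_left = trend_len        # new 20-bar high -> start a trend leg
            drift = trend_drift if drift_left > 0 else 0.0
            drift_left = max(drift_left - 1, 0)
            closes.append(closes[-1] + drift + random.gauss(0.0, base_vol))
        return closes

    if __name__ == "__main__":
        print(generate_trend_friendly_series()[:5])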

    If the system is simple, the "data filter logic" is relatively simple. But the more complicated the system, the more coding you have to do to make the dataset. This process is not only for running slip-forwards; it actually becomes an important part of the management models when you are running the system live.

    The more you know about the system, the better... Plus, you're still only dealing with the required condition. This has to be taken further, into exposing the tendency the model requires to work. If you can expose the models down to the tendency level, then you've got more control and clarity over how you should be trading the model. (Most models can't be exposed down to the core tendency, because most systems work under multiple tendencies, but you can get close, and the process of drilling down is what's important.)
     
  3. Continuing... (There are plenty of smart people in the world... I don't claim to be the first person to develop models this way.)

    So up to now... you have a "Data Filter" and enough competency with the system to allow you to:

    1. Generate data that is compatible with your system.
    2. Adjust the data based on what type of statistics you would like the system to have. (I use Brownian as a base)

    So you continue the typical quant routine of running models on random data, but in this case on biased data. This time around, you put aside what you have done so far and start tweaking the dataset from the generator side rather than the system side. Starting with volatility, tick directions and others... you basically add all the "characters" of the market you can think of and start collecting statistics on how those measures affect system performance.

    I have a long list of measures, and I click through my app to run the tests (long and tedious). I get the results and move on to the next step, which is to combine the two resulting pieces of logic.
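
    For what it's worth, the sweep itself can be as dumb as a nested loop over generator settings with a batch of sample paths per setting, something like the toy below. gen() and backtest() are throwaway stand-ins for your own generator and backtester, and the settings lists are obviously made up.

    Code:
    import itertools, random, statistics

    def gen(n, vol, drift, seed):
        """Toy random walk with constant drift -- stand-in for the real generator."""
        random.seed(seed)
        p = [100.0]
        for _ in range(n - 1):
            p.append(p[-1] + drift + random.gauss(0.0, vol))
        return p

    def backtest(closes):
        """20-bar breakout entry, 10-bar breakout exit, long only, one unit.
        An open position at the end of the data is simply ignored."""
        pnl, entry = 0.0, None
        for i in range(20, len(closes)):
            if entry is None and closes[i] >= max(closes[i - 20:i]):
                entry = closes[i]
            elif entry is not None and closes[i] <= min(closes[i - 10:i]):
                pnl += closes[i] - entry
                entry = None
        return pnl

    results = {}
    for vol, drift in itertools.product([0.5, 1.0, 2.0], [0.0, 0.2, 0.5]):
        pnls = [backtest(gen(1000, vol, drift, seed=s)) for s in range(30)]
        results[(vol, drift)] = statistics.mean(pnls)

    for (vol, drift), mean_pnl in sorted(results.items()):
        print("vol=%.1f drift=%.1f -> mean PnL %.1f" % (vol, drift, mean_pnl))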

    On one side you have a rather static "data filter" that works for the model; on the other side you have a set of rather statistical market measures that work for the model.

    First, you do the usual rational thinking process.
    Second, you again iterate through tests using the results of the 2 sides.
    Third, you tweak a bunch of the code to fit, based on the combined testing.

    In the end... you want a reasonable "TrendFollowFilter" class that you can add to your market generator.
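
    To give a rough idea of the shape I have in mind (a sketch only; the class name is from the example, but the threshold and the "trendiness" proxy are made up and deliberately crude):

    Code:
    class TrendFollowFilter:
        """One possible shape for a system-specific data filter. The market
        generator asks it whether a candidate path still fits the tendency."""

        def __init__(self, min_trendiness=0.6):
            self.min_trendiness = min_trendiness   # made-up target level

        def trendiness(self, closes):
            # Crude proxy: fraction of bars continuing the prior bar's direction.
            same = sum(1 for i in range(2, len(closes))
                       if (closes[i] - closes[i - 1]) * (closes[i - 1] - closes[i - 2]) > 0)
            return same / max(len(closes) - 2, 1)

        def accepts(self, closes):
            # Rejecting a path means "regenerate it or adjust it".
            return self.trendiness(closes) >= self.min_trendiness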

    ... I skipped through quite a bit because the purpose is to write about Slip-Forward, not about how to generate system-specific data. It's quite easy once you get the hang of it. And I'm actually writing this for the mid+ level developer crowd, so programming in a custom language and random market data generation shouldn't be much of a hassle.
     
  4. A few more things about data generation I'll add for now:

    1. This is part of the single model development process. So typically, the models get revised after each test.

    2. Single model development is like unit testing in programming. Also, it's a test-driven development process, "somewhat" similar to Agile programming. By working on the data filter side-by-side with the models themselves, you save time and confirm the reliability of the models.

    3. Market-driven contingencies are much faster and more reliable than a typical equity curve/performance based contingency. Basically, you get signals about a changed market from the market itself rather than from models slipping, which is still hard to figure out... (a toy contrast is sketched right after this list)

    4. There is always a risk of some sort in any single process, and Slip-Forward is no exception. So you have to have multiple processes in which you develop and assess models. In other words, this is not the holy grail, just one part of a whole.
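
    As mentioned in point 3, here is a toy contrast between the two contingency styles. The equity-curve trigger only reacts after the model has already slipped; the market-driven trigger reacts the moment the measure the model needs goes away. Both thresholds are made up.

    Code:
    def equity_curve_contingency(equity, max_drawdown=0.10):
        """Stop trading once the equity curve draws down more than max_drawdown."""
        peak = equity[0]
        for x in equity:
            peak = max(peak, x)
            if peak > 0 and (peak - x) / peak > max_drawdown:
                return True
        return False

    def market_driven_contingency(trendiness_now, min_trendiness=0.6):
        """Stop (or cut size) as soon as the measure the model needs disappears."""
        return trendiness_now < min_trendiness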
     
  5. One note.

    I thought I read somewhere that there are more editing capabilities (like no 30 min. limit for the OP) in threads in the Journals forum. That was primarily the reason why I started the thread in here...

    But I guess I misunderstood... No biggie...
     
  6. Up to now, you have a single system with signals and a filter.

    The next step is the integration testing part, or what I generally think of as a refining step. You run the models with other filters and make sure the filters you have developed can co-exist with them. So the first thing you do is take the results from the real market and try to recreate them with the dataset you've coded.

    So, taking the other half of the example, a counter-trend system. To keep things simple, assume the core stats for the system are the average ticks per trade on both the winning and the losing side, plus the % profitability. In the real market, both systems co-exist with:

    Counter-trend:
    Avg. Win - 10 ticks
    Avg. Loss - 5 ticks
    % Profitability - 70%

    Trend-following:
    Risk/Reward Ratio: 3
    % Profitability - 40%

    You set up your data generator and your data filter so that you get the same results if you run both systems. So you have:

    Data Generator creating data with the:

    CounterTrendFilter
    AND TrendFollowFilter.

    You test the results and check to see if you are able to re-create the actual result with the data set.
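
    A crude way to express that check in code: the target numbers are the ones from the example above, the tolerance is an arbitrary choice, and measured_stats is whatever your own harness reports for the generated dataset.

    Code:
    REAL_TARGETS = {
        "counter_trend": {"avg_win": 10.0, "avg_loss": 5.0, "pct_profitable": 0.70},
        "trend_follow":  {"risk_reward": 3.0, "pct_profitable": 0.40},
    }

    def close_enough(measured, target, tol=0.15):
        """Relative tolerance; how tight to make this is a judgment call."""
        return abs(measured - target) <= tol * abs(target)

    def dataset_reproduces_reality(measured_stats):
        """measured_stats mirrors REAL_TARGETS but comes from the generated data."""
        return all(close_enough(measured_stats[system][stat], target)
                   for system, targets in REAL_TARGETS.items()
                   for stat, target in targets.items())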

    The first few systems are going to be a breeze. But once you get to your 3rd or 4th system, the story suddenly changes. You will have models that can't co-exist with each other. There are obvious reasons for this:

    1. There's a flaw in your code.

    2. The DataFilter isn't well thought out.

    So you go back to step one of your system development process and think, refine, unit test, and integration test. In other words, you're simply refining your dataset and testing/researching to isolate the character and tendency as much as possible.

    As you start refining, you'll tend to realize how certain models are connected. For example:

    Let's say you have a short-term momentum system on top of the trend and counter-trend systems. Changing the duration of the momentum can affect the counter-trend model; changing the amplitude of the swing can affect the trend-following. You divide and conquer by first making the momentum work with the trend-following. Then you work on making the counter-trend and the momo work together. Then you check to see whether the trend and counter-trend affected each other.

    This happens quite frequently when you first get exposed to this type of development. But this is just part of the process of testing and research. You're drilling down... thinking, analyzing, testing and integrating...

    And... once you get to around 7 or 8 models... it's practically impossible to make them co-exist on a single/static test. So you start iterating through batches of results and check whether the mean performance is within an acceptable level... aka... you get into a statistical process.
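
    Roughly, the statistical version of the check looks like the sketch below. run_all_systems() is a stand-in for your own "generate one dataset, run everything on it" harness (here it just returns random numbers), and the acceptance floor is an arbitrary number.

    Code:
    import random, statistics

    def run_all_systems(seed):
        """Stand-in: one {system: PnL} result for one generated dataset."""
        random.seed(seed)
        return {name: random.gauss(100, 40)
                for name in ("trend", "counter_trend", "momo", "breakout")}

    def mean_performance_acceptable(n_paths=200, floor=50.0):
        runs = [run_all_systems(seed) for seed in range(n_paths)]
        for name in runs[0]:
            mean_pnl = statistics.mean(r[name] for r in runs)
            if mean_pnl < floor:             # the acceptance level is a design choice
                return False, name, mean_pnl
        return True, None, None

    print(mean_performance_acceptable())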

    This is still the testing phase. Nothing gets changed in the LIVE environment, regardless of how well you tweaked a model that is running LIVE. Models that go LIVE have their own set of criteria separate from this process. Again, this is the testing phase of development. You have other processes and criteria to develop and trade models.

    System Management and System Development are always separate. That's one reason I have doubts about software packages where you can do both in the same software.

    On a side note, once I decide on developing a new model to add to my portfolio, it takes me about 2 days to a week to finish integrating it with the other models. On an average day, my actual coding time is around 3-4 hours. Experience is one important aspect, and keeping a healthy library of code and automating a lot of the process helps out a lot.

    Moving on... with my next post.
     
  7. Once the integration is done, my development process branches out to a few other processes. I won't get into them in detail, but here's a brief overview of some other things I do:

    1. For whatever reason, I may want to develop a new model. I re-generate the dataset with all the models failing, and try to develop a model using that dataset. We all run out of ideas... every once in a while, a new one can start here.

    2. I refine my market selection scanner. Knowing which markets to trade is just as important as the actual systems you trade.

    3. The front-end application is a pure reflection of how anyone trades and manages their models. A good system without a good tool to manage it is close to a useless system. In other words, I start researching, testing and thinking about system management.

    Side note: I'm always using Real Market Data side-by-side with the Generated data. The last thing you want is to be blindsided by FAKE data, or, equally, to be curve-fitting to something FAKE. FAKE is still FAKE, but it's a useful tool if you understand that it's FAKE.

    Anyways... I can finally get into the main topic. Next post, about Slip-Forward.
     
  8. Before the Chit-Chat thread and this one, I took some time to plan out what I would write about and when I'd be posting it. But once I started, I unintentionally left out some stuff, and because I intend to cover the topics at a reasonable level, here's another post. This thread WILL end within this week. (Of course, I've gotten a few comments from people outside of ET regarding this thread, and other unexpected things, so far.)

    1. First and foremost: the material in this thread is not for a newbie. I'm writing this for middle+ level system traders. It came to my attention that I need to emphasize this, so I'll write a few things regarding it for the aspiring ones.

    The whole point of coding the filter is to understand the underlying tendency of the system you plan on trading. In most newbie cases, they realize the market has changed only after the system fails, and it ends there. Out of all the developers I have been in contact with, only a few take a step forward and spend some time figuring out "what actually changed" to make the system slip. Just like a professional race driver taking a step beyond the millions of regular drivers.

    You're not trying to find the holy grail of market models. The filters you develop have nothing to do with predicting what the market will do. They're only a list of measures of what the market is doing. The basic philosophy behind ANY trading is, "If a certain condition continues to be sustained... then XXXXX." What the filters provide is the set of conditions you expect to be sustained (for the models to be profitable).

    Replace "condition" with "however you analyze the market". Predictive models go into a whole different category/genre of research.

    2. Second: quants and system traders are two different animals. We're both cats, but different the way lions and tigers are. Both need claws and fangs, but they're two different species.

    3. To be a bit more technical: I have a long list of market character components that I use to generate the data. These are not static parameters.

    Staying with a simple example... System A requires the "trendiness" and the "serial correlation" to be within a reasonable level. Let's just use HIGH and LOW as the values.

    So... in the SystemAFilter, I have the filter tell the generator to keep the "trendiness" at or above "HIGH" and keep the "serial correlation" at "HIGH". The higher they are, the better the system performs.

    Equally, when I tweak the real market data, the filters will push the dataset back to a "HIGH" value whenever the "trendiness" and "serial correlation" drop below it. So what I'm doing is keeping the rest of the measures within what the market is doing and altering only the system-specific part.

    In a real-life test, the logic and code are a lot more complicated: simple "trendiness" and "serial correlation" are too broad to be used as base measures, since they affect the other aspects too much. But that's a simplified example of tweaking the market data. There is never a single component, only combinations of them, which is what makes the market measures into a system-specific filter and also makes the list of components extremely long.
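
    For what it's worth, the simplified version of that HIGH/LOW setup might look something like the sketch below. The numeric mapping behind HIGH/LOW and the measure implementation are placeholders ("trendiness" would get the same treatment as serial correlation).

    Code:
    class SystemAFilter:
        """The filter publishes the level each measure must hold; the generator
        (or the real-data tweaker) pushes the series back whenever it drops."""

        LEVELS = {"HIGH": 0.6, "LOW": 0.3}               # made-up numeric mapping
        REQUIRED = {"serial_correlation": "HIGH"}

        def serial_correlation(self, returns):
            # Lag-1 autocorrelation, the crude way.
            n = len(returns)
            if n < 3:
                return 0.0
            mean = sum(returns) / n
            num = sum((returns[i] - mean) * (returns[i + 1] - mean)
                      for i in range(n - 1))
            den = sum((r - mean) ** 2 for r in returns)
            return num / den if den else 0.0

        def needs_adjustment(self, returns):
            target = self.LEVELS[self.REQUIRED["serial_correlation"]]
            return self.serial_correlation(returns) < target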

    (It's the toughest part of setting this up, and it's still tough even now... You are required to DRILL down into the market to bits and pieces, which is a never-ending pursuit, so you need to draw the line somewhere... which largely depends on the developer's brains and choices. Though it is extremely rewarding. You will view the market very differently in real time, and more of it will make sense to you. You're constantly removing the greyness of the markets by researching.)

    4. Finally, I'm only providing a road map. It's up to the individual to gain the skills and do the actual work. It comes down to the individual's choices and effort.

    ... I'm done writing a comment and the disclaimer for the newbies... and filling in a few gaps for the others.

    I'll be back on topic with tomorrow's post.
     
  9. Slip-Forward. Step 1.

    The first step is a simple one: you find out how the system will perform within the current portfolio.

    For the previous set of tests, it was about refining the filter and the system you had. Let's just assume that it's refined to a reasonable level.

    First, you get confidence levels for how the system may perform. Something like, "Under the current conditions... the system has a 70% confidence level of breaking even." Etc., etc. It's close to what you've done in the previous tests, but this phase isn't about refining your code and logic; all you'll be doing is re-arranging the test results.
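
    If it helps, the confidence figure itself is nothing exotic: it's just the fraction of generated paths, under the current portfolio's conditions, on which the new system finishes at or above break-even. run_new_system() below is a stand-in for your own harness.

    Code:
    import random

    def run_new_system(seed):
        """Stand-in: net PnL of the new system on one generated dataset."""
        random.seed(seed)
        return random.gauss(20, 80)

    def breakeven_confidence(n_paths=1000):
        wins = sum(1 for seed in range(n_paths) if run_new_system(seed) >= 0.0)
        return wins / n_paths

    print("P(break even or better) ~ %.0f%%" % (100 * breakeven_confidence()))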

    Second, you iterate through your system based on the system performance figures. Starting from the lowest possible to the highest possible, you want to check how the new system will affect the other systems. As mentioned, the tendencies are intertwined, and one model's filter affects another. So you're actually testing the impact of one system's performance on the others.

    Let's say that System A performs better when System B is failing. You wouldn't want to blindly run both with the same capital allocation. Or, even worse, implement a model that can kill a system's performance either way. With a "pure" negatively correlated pair of systems, the allocation becomes very important. The assumption that the two systems reduce volatility and smooth the equity curve toward its regression line rests on the idea that the systems will continue to work. If one system fades, the relationship between the two can turn net negative.

    As mentioned previously, it has to be said that all risk management models build their concepts on the assumption that the current "relationship" AND (&&) "condition" will sustain themselves. The models never reflect the very real possibility that they will change. Managers dealing with CDOs and sub-prime assessed their risk with that typical flaw baked into their models. There are plenty of auto-adjusting systems, but they also assume that the adjustment criteria embedded in them will work. It's a matter of degree.

    The current market is a great example. Anything can happen; there's always a risk of failure or of change.

    Slip-Forward assumes that the models will fail. (Even though there are risks, like the real possibility that the FAKE data won't relate to the real market.) If you know it, then you can deal with it accordingly.

    Getting back on topic. The last thing you want is to have one system kill another system's performance. The measurement of performance may vary, but you want to make sure that it's actually beneficial to add the system. What you are doing is measuring the potential impact of what may happen if you add a system. Let's say:

    System A returns $100 a month.
    System B returns $50 a month.

    You find out that during months when System B returns $100, System A only breaks even. Relative to the $150 you'd expect from the two combined, you're actually $50 short. So then what happens when System B wins $200?

    How should you be allocating capital here? For this example, you could increase the allocation to System B. But what about the frequency of System B making $100+? I'm keeping my examples very simple, but I hope you get the point.
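
    To make the arithmetic explicit, here's a tiny scenario-table version of that example. The probabilities are invented; the point is only that the joint behaviour, not the stand-alone averages, should drive the allocation.

    Code:
    SCENARIOS = [
        # (probability, System A monthly PnL, System B monthly PnL)
        (0.70, 100.0,  50.0),   # normal month
        (0.25,   0.0, 100.0),   # B doubles, A goes break-even (the case above)
        (0.05,   0.0, 200.0),   # B wins big; what A does here is the open question
    ]

    def expected_portfolio(alloc_a, alloc_b):
        """Probability-weighted monthly PnL for a given pair of allocations."""
        return sum(p * (alloc_a * a + alloc_b * b) for p, a, b in SCENARIOS)

    for alloc_a, alloc_b in [(1.0, 1.0), (0.5, 1.5), (1.5, 0.5)]:
        print(alloc_a, alloc_b, round(expected_portfolio(alloc_a, alloc_b), 1))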

    nuff written.
     
  10. First file I tried to upload was the wrong one. Forget about it...

    :(
     