One week journal

Discussion in 'Journals' started by TSGannGalt, Dec 18, 2008.

Thread Status:
Not open for further replies.
  1. OK...

    I'm going to try again with a single txt file. The top mapping is the system mapping of some of the systems I have. I picked out the low correlation ones to make things interesting.

    The bottom mapping is the impact of one system on another, quantified.

    Interestingly, whether or not the systems are correlated, the impact of one system on another shows no obvious relationship to the correlation. Intuitively, it may seem that the higher the correlation factor, the higher the impact one system would have on the other. The truth (considering, and not forgetting, that this is fake data) is that the impact a system can make on another and the correlation between systems are 2 different things.

    Even within the same set of systems, the 3rd system may not take much impact even though it may have a high impact on the other 2.

    So far... it shows:

    1. How dynamic the market is.
    2. How correlation doesn't reflect the relationship between models.
    3. How measures in Slip-Forward are very much system-to-system oriented.
    4. The risks of weighing your decision using any measure or process.
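    To make the correlation-vs-impact distinction concrete, here is a minimal Python sketch with made-up return series (the thread's actual data is in a txt file I don't have, and the "impact" definition here — how a system performs on the days another system is in its worst decile — is one plausible assumption, not necessarily the author's):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_days, n_sys = 500, 3
    returns = rng.normal(0.0005, 0.01, (n_days, n_sys))  # fake daily returns

    # Plain correlation matrix between the systems.
    corr = np.corrcoef(returns, rowvar=False)

    # Naive "impact" map: how system j's average return changes on the days
    # when system i is in its worst decile, vs. system j's overall average.
    impact = np.zeros((n_sys, n_sys))
    for i in range(n_sys):
        bad_days = returns[:, i] <= np.quantile(returns[:, i], 0.10)
        for j in range(n_sys):
            impact[i, j] = returns[bad_days, j].mean() - returns[:, j].mean()

    print(np.round(corr, 2))
    print(np.round(impact, 4))
    ```

    Even on this toy data, the off-diagonal correlation entries and the off-diagonal impact entries need not line up — which is the point above.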

    Anyways, that's what you do in the 1st step.

    Explanation of the next step coming... when it comes.
     
    #11     Dec 21, 2008
  2. So with the first step, you find how your models are inter-related.

    I usually move into a different process by categorizing each of the affecting components and creating sub-groups of models. That's a different process, so I'll end that discussion here.

    Moving on... I finally get into slipping the models.

    Obviously, I always want to make sure how accurate the tests are going to be, so I take the period in which a system in the portfolio has actually slipped. I have a DataGrid where I set a bunch of DateTimes telling the performance to slip to a certain point, then return to normal. I iterate it a few times to see how well the performance holds up relatively.
    (
    *Meaning all my systems do not have a smooth curve. By MY definition, a good system is not a system with a low Sharpe. A good system is a system that I can manage easily. Because the better the system is, the better I can shut it off when it starts slipping... meaning I can literally cut a 50% drawdown on a static test to 5% or less by managing them...

    *And... I test with a reasonable amount of REAL historical data. I want to see a system slip. It gives me a better idea what to do when it does.
    )
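    The DateTime-windowed slip described above could be sketched like this in Python — the dates, the daily-drag mechanism, and the `slip` helper are all illustrative assumptions, not the author's actual code:

    ```python
    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(1)
    dates = pd.bdate_range("2007-01-01", periods=400)
    returns = pd.Series(rng.normal(0.0006, 0.01, len(dates)), index=dates)

    def slip(returns, start, end, drag=0.002):
        """Force the model to 'slip' between start and end by subtracting a
        daily drag from its returns, then let it return to normal."""
        out = returns.copy()
        window = (out.index >= start) & (out.index <= end)
        out[window] -= drag
        return out

    slipped = slip(returns, "2007-06-01", "2007-09-01")
    # Compare equity curves with and without the forced slip.
    print((1 + returns).prod(), (1 + slipped).prod())
    ```

    Iterating this with different windows and drags is one way to see how the performance "holds up relatively", as the post puts it.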

    So finally, I start my Slip-Forward. I basically iterate through each of the models and tell one model to suddenly slip. On the txt file, I have 20 models, so if I run a test on it I'll be iterating through them 1,048,576 (2^20) times. Logically speaking, I should be taking the System2System Impact Map, but I like to get the trade logs for each combination when I start seeing discrepancies, which always leads to new things, like new systems or refinements to parts of the code and logic. I'm a bit lazy to run the report and get the stats for this thread because it's obviously a tedious routine.
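    The 1,048,576 figure is just every on/off slip combination of 20 models, i.e. 2^20. A minimal enumeration sketch (5 models here to keep it printable; the backtest call is a placeholder):

    ```python
    from itertools import product

    models = [f"sys{i}" for i in range(1, 6)]  # 5 models here; 20 in the thread

    runs = 0
    for flags in product([False, True], repeat=len(models)):
        slipped = [m for m, f in zip(models, flags) if f]
        # ... run the portfolio backtest with `slipped` forced to slip ...
        runs += 1

    print(runs)  # 2**5 = 32; with 20 models this is 2**20 = 1,048,576
    ```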

    ---------------------------------------------------------------------
    Side note:

    Usually, it takes me... about... 1-2 weeks to come to this point. But most of the tests are pretty much done here. All I have left is to put together the results, code a new set of rules, and adjust my portfolio models.

    Halfway into writing this... I actually had to cut it out because further steps go into the actual portfolio models and how I integrate them together. As of this writing, I've got 5 portfolio models (3 main and 2 minor) running separately from each other so that I can diversify my performance at all levels of my trading...

    The toughest (and never-ending) part of any portfolio is deciding what list of fitness measures you want to base your decisions on. Each model has its own set of fitness measures that needs to co-exist with the fitness measures of the portfolio. In terms of Slip-Forward, the primary fitness measure you're getting is "vulnerability towards changing markets". This measure, in itself, doesn't serve as a sole fitness measure but as a supplementary one, added to the list of portfolio measures.

    The overall usage is pretty simple and self-explanatory; it's very much as-is. You can optimize based on the fitness and get a static amount. Or you can use an agent program with a NN, AIS, or some other machine learning to keep it "dynamic". The hard-coding shouldn't be much of a problem if you've got enough skill to come this far.


    ------------------------------------------------------------------

    So I've got the mapping. I know how the systems may react when one fails. The obvious next step is to take the results and put them on top of your portfolio model.

    First step is to get the fitness of the Slip-Forward results so that you can compare them. If you want to keep it simple, you can just take the average (mean) of the combined distribution and use that figure as the setup.
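    As a sketch of that "keep it simple" option — pooling every combination's results into one distribution and taking its mean — with hypothetical numbers standing in for real run results:

    ```python
    import statistics

    # Hypothetical portfolio returns per slip combination (made-up figures).
    run_results = {
        ("sys1",): [0.012, -0.004, 0.007],
        ("sys2",): [0.003, 0.001, -0.002],
        ("sys1", "sys2"): [-0.006, 0.002, 0.000],
    }

    # Pool every run into one combined distribution; its mean is the fitness figure.
    combined = [r for results in run_results.values() for r in results]
    fitness = statistics.mean(combined)
    ```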

    Second step is to code up the mapped results and have the figures adjust automatically. You can have the impact coefficients hard-coded, or, as I do, simply keep a config/schema file with the coefficients. (I would love to have this automated on a real-time basis, but I don't have the computing power, YET, and you really don't need to run this frequently, bar-to-bar... you can just do it manually in a 3rd party app and over-write the required files... and give your monitoring tool the ability to over-ride it on handled events.)

    The basic feature of the code is to monitor the current models' performance and adjust the allocation of the models related to them. So if a performance measure hits a certain level, it tries to adjust the allocation of another system.

    So let's say a trend-following system slipping usually ends up with the counter-trend's performance increasing; the code "can" then increase the risk allocation of the counter-trend accordingly.
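    The monitoring-and-reallocation loop above could look something like this — the JSON config shape, the threshold, the coefficient values, and the `adjust` helper are all my assumptions for illustration:

    ```python
    import json

    # Hypothetical impact coefficients, as they might sit in a config/schema file:
    # how much of system A's allocation to shift to system B when A is slipping.
    config = json.loads("""
    {
      "impact": {"trend": {"counter_trend": 0.5}},
      "slip_threshold": -0.05
    }
    """)

    allocations = {"trend": 0.6, "counter_trend": 0.4}
    rolling_perf = {"trend": -0.08, "counter_trend": 0.03}  # e.g. rolling returns

    def adjust(allocations, rolling_perf, config):
        """If a system's rolling performance breaches the slip threshold,
        shift part of its allocation to the systems mapped to it."""
        out = dict(allocations)
        for sys, perf in rolling_perf.items():
            if perf <= config["slip_threshold"]:
                for target, coeff in config["impact"].get(sys, {}).items():
                    moved = out[sys] * coeff
                    out[sys] -= moved
                    out[target] += moved
        return out

    new_alloc = adjust(allocations, rolling_perf, config)
    ```

    Here the trend system has breached the threshold, so half its allocation moves to the counter-trend system — the "can increase the risk allocation accordingly" behavior, in the simplest possible form.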

    ... That's pretty much it.

    Simple process.

    The holiday season is making me all lazy and abstract. So I'll now fill in the gaps for the next few posts and end the thread.
     
    #12     Dec 23, 2008
  3. I read the whole thread back and I don't see much I can add.

    Everything is pretty much on the people doing the actual job so...

    MAGNA, please close this thread.

    Merry Christmas!!!!
     
    #13     Dec 24, 2008