Random F*ckin' Rants And The Rest Of The Sh*t

Discussion in 'Chit Chat' started by TSGannGalt, Dec 18, 2008.

Do you have me on the ignore list?

  1. Yes

    1 vote(s)
    25.0%
  2. No

    3 vote(s)
    75.0%
Thread Status:
Not open for further replies.
  1. I need a place where I can curse and post whatever I want.
     
  2. If you don't like me, put me on ignore. None of you fuckers need to post shit on here... (I wanted this on Journals but I can understand the reason behind this being moved.)

    Portfolio management stuff hasn't been discussed on ET that much. There was one thread, but that got trashed because of some moron posting idiotic comments (and me replying), so I'm going to pick up where I left off. Last time around, the example used a counter-trend system and a trend-following system, so I'm going to keep using that as the example, but it really doesn't matter...

    So you've got two systems, each trading a different tendency. You've done all the tests and analyzed all the key figures required to run and manage each system solo. The obvious next step is figuring out how to manage them within a portfolio of other models.

    Obviously, you run the multiple systems over the dataset you developed both models on. Obviously, you repeat the tests you've done for a single system at the portfolio level. You get the correlation factors and the equity-curve volatility (std. dev.) figures, and then test out a bunch of allocation models with them.
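    A minimal sketch of that portfolio-level test, assuming NumPy and two invented daily-return streams (the numbers are made up, not from the actual systems): correlation, equity-curve volatility, and a comparison of two allocation schemes.

```python
import numpy as np

rng = np.random.default_rng(0)
trend = rng.normal(0.0004, 0.010, 1000)    # hypothetical trend-following daily returns
counter = rng.normal(0.0003, 0.007, 1000)  # hypothetical counter-trend daily returns

# Correlation factor between the two systems' return streams.
corr = np.corrcoef(trend, counter)[0, 1]

def portfolio_stats(w_trend, w_counter):
    port = w_trend * trend + w_counter * counter
    equity = np.cumsum(port)               # simple (non-compounded) equity curve
    return port.std(), equity[-1]          # (equity-curve volatility, final equity)

# Two allocation schemes to compare: equal weight vs inverse volatility.
equal = portfolio_stats(0.5, 0.5)
iv = 1 / trend.std(), 1 / counter.std()
w = iv[0] / sum(iv), iv[1] / sum(iv)
inv_vol = portfolio_stats(*w)

print(f"correlation: {corr:.3f}")
print(f"equal weight -> vol {equal[0]:.5f}, final equity {equal[1]:.3f}")
print(f"inverse vol  -> vol {inv_vol[0]:.5f}, final equity {inv_vol[1]:.3f}")
```

    Inverse-volatility is just one of the "bunch of allocation models" you'd try here; the point is the harness, not these particular weights.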

    So now, you take each of the new models and create a new market dataset that excludes the tendency each model exploits. In other words, you tweak the original market data so that the candidate system fails. Then you take the slip-forward dataset for each of the systems (including the live models) and iterate through all the combinations.

    At this point you get a good picture of what happens when any specific model, or combination of models, fails, and you get post-slip market correlation factors for each of the models.
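    A hedged sketch of the iterate-through-all-combinations step. The per-model expectancies under the "fitted" vs "slipped" datasets are invented for illustration; a real run would backtest each combination instead of plugging in numbers.

```python
from itertools import product

# Invented daily expectancies: what each model earns on the fitted dataset
# vs what it bleeds on its slip-forward (tendency-removed) dataset.
fitted = {"trend": 0.0005, "counter": 0.0004}
slipped = {"trend": -0.0003, "counter": -0.0002}
weights = {"trend": 0.5, "counter": 0.5}

results = {}
for combo in product([False, True], repeat=len(fitted)):
    # combo[i] == True means model i's tendency has been slipped out of the data
    daily = sum(
        weights[m] * (slipped[m] if is_slipped else fitted[m])
        for m, is_slipped in zip(fitted, combo)
    )
    results[combo] = daily * 252  # rough annualized expectancy

for combo, annual in sorted(results.items()):
    names = [m for m, s in zip(fitted, combo) if s] or ["none"]
    print(f"slipped={names}: annualized expectancy {annual:+.2%}")
```

    The all-slipped row is the "everything fails at once" picture the test is after.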

    Obviously, you re-factor the "fitted" results and the "slipped" results together to get the optimal allocation scheme for your portfolio. And again, you get a better picture of whether the new systems are significant enough to add to your live server.
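    One way to read that re-factoring step, sketched with invented numbers: maximize the fitted expectancy subject to a floor on the worst slip scenario. The expectancies and the floor are assumptions for illustration, not anything from the actual models.

```python
from itertools import product
import numpy as np

# Invented daily expectancies on the fitted vs slipped datasets.
fitted = {"trend": 0.0007, "counter": 0.0004}
slipped = {"trend": -0.0005, "counter": -0.0001}

def fitted_expectancy(w_trend):
    return w_trend * fitted["trend"] + (1 - w_trend) * fitted["counter"]

def worst_case(w_trend):
    # Worst expectancy across every combination of models failing.
    w = {"trend": w_trend, "counter": 1 - w_trend}
    return min(
        sum(w[m] * (slipped[m] if s else fitted[m]) for m, s in zip(w, combo))
        for combo in product([False, True], repeat=2)
    )

grid = np.round(np.linspace(0, 1, 101), 2)
floor = -0.00025  # assumed tolerance for the all-models-fail scenario
feasible = [w for w in grid if worst_case(w) >= floor]
best = max(feasible, key=fitted_expectancy)
print(f"allocation: trend={best:.2f}, counter={1 - best:.2f}")
```

    The slip results cap how much weight the higher-expectancy model can carry, which is exactly the "fitted plus slipped" trade-off the step describes.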

    I'll post more on this shit when I feel like it. (I have the 2 system example for a reason...)
     
  3. Oh... don't bother googling and looking for "slip-forward" techniques. You won't find shit....

    This actually dawned on me after having a drink with nitro 5-6 years back in Chicago at Dublins. I took the concept from that discussion and pursued it to where it is now. Mad props to nitro...

    I've got a brain that lets me think things up and work out development processes to reduce risk on my own, rather than following all the fuckin' sheeple.
     
  4. Now... a bit more in detail.

    There are a few prerequisites to running this test. The first requirement is for the developer to isolate a specific condition under which the system would be profitable. If the system is simple, the required condition for the test data will be simple. So, using the example:

    I have a trend-following system. I'm going to use a simple Turtle entry/exit which would be:

    Entry - 20-bar breakout.
    Exit - 10-bar breakout against the position.

    In this case, the condition required for the model to be profitable is quite simple. You code up a time-series data generator that breaks out frequently after 20 bars, with the price still above the entry price when the opposite 10-bar breakout triggers the exit. You generate the data and you have a 100% profitable dataset for your trend-following system.
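    A minimal sketch of that generator plus the Turtle-style rules, assuming NumPy. The up-leg/shallow-pullback pattern is an invented way to satisfy the condition: every 20-bar breakout entry gets stopped out by a 10-bar breakout while the price is still above the entry.

```python
import numpy as np

def make_trendy_series(cycles=5, up_bars=25, down_bars=11, start=100.0):
    """Synthetic dataset built so the trend-following condition always holds."""
    prices = [start]
    for _ in range(cycles):
        for _ in range(up_bars):
            prices.append(prices[-1] + 1.0)  # strong leg up: fuels the breakout
        for _ in range(down_bars):
            prices.append(prices[-1] - 0.1)  # shallow pullback: triggers the exit
    return np.array(prices)

def turtle_trades(close, entry_n=20, exit_n=10):
    """Long-only: enter on a new entry_n-bar high, exit on a new exit_n-bar low."""
    trades, entry = [], None
    for i in range(entry_n, len(close)):
        if entry is None and close[i] > close[i - entry_n:i].max():
            entry = close[i]                 # 20-bar breakout entry
        elif entry is not None and close[i] < close[i - exit_n:i].min():
            trades.append(close[i] - entry)  # 10-bar breakout exit
            entry = None
    return trades

close = make_trendy_series()
trades = turtle_trades(close)
print(f"{len(trades)} trades, all profitable: {all(t > 0 for t in trades)}")
```

    Because every pullback is shallower than the preceding leg up, the exit always lands above the entry, which is the 100% profitable dataset described above.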

    But then you also want to add a few more things to the dataset to make it more concrete. For the trend-following system to be profitable, you want the balance between the entry and the exit to reflect key statistical measures (the tough part is that this is very model-centered; each model has its own set of "fitness" measures that it matches). Keeping things simple: for the system to be profitable, you want the % profitability (or the frequency of an upside move) and the risk/reward ratio to be in balance. You can (and should) generate both kinds of datasets for future tests.
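    The win-rate / risk-reward balance can be written down as a per-trade expectancy, E = p*W - (1-p)*L, which gives the break-even win rate a generated dataset has to clear. The example numbers below are illustrative, not from any real system.

```python
def expectancy(win_rate, avg_win, avg_loss):
    """Per-trade expectancy: E = p*W - (1-p)*L, in R-multiples."""
    return win_rate * avg_win - (1 - win_rate) * avg_loss

def breakeven_win_rate(avg_win, avg_loss):
    """Win rate at which expectancy is exactly zero: L / (W + L)."""
    return avg_loss / (avg_win + avg_loss)

# e.g. a trend follower with 2.5R winners and 1R losers only needs ~28.6% wins
print(f"break-even win rate: {breakeven_win_rate(2.5, 1.0):.1%}")
print(f"expectancy at 40% wins: {expectancy(0.40, 2.5, 1.0):+.2f}R")
```

    "In balance" then just means the generated dataset's upside-move frequency sits comfortably above that break-even line for the model's typical win/loss sizes.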

    If the system is simple, the "data filter logic" is relatively simple. But the more complicated the system, the more coding you have to do to build the dataset. This process isn't only for running slip-forwards; it actually becomes an important part of the management models when you're running the system live.
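    One hedged reading of that "data filter logic" for the live side: score how often a window of data actually shows the tendency the model needs (here, fresh 20-bar highs), so you can flag regimes where the required condition is absent. Both test series are invented.

```python
import numpy as np

def breakout_frequency(close, n=20):
    """Fraction of bars (after warm-up) that set a new n-bar high."""
    close = np.asarray(close, dtype=float)
    hits = sum(close[i] > close[i - n:i].max() for i in range(n, len(close)))
    return hits / max(1, len(close) - n)

trending = np.arange(100.0, 160.0)       # invented: steadily rising series
choppy = 100.0 + np.sin(np.arange(60))   # invented: oscillating series
print(f"trending score: {breakout_frequency(trending):.2f}")
print(f"choppy score:   {breakout_frequency(choppy):.2f}")
```

    Run live, a score like this tells you whether the market is currently serving up the condition the model was built for, which is the management angle mentioned above.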

    The more you know about the system, the better... Plus, you're still only dealing with the required condition. This has to be taken further, into exposing the tendency the model requires to work. If you can expose the model at the tendency level, you've got more control and clarity over how you should be trading it. (Most models can't be reduced to a single core tendency, since most systems work under multiple tendencies, but you can get close, and the process of drilling down is what's important.)

    A man has to sleep so I'm fuckin' out for the night. Will continue when I feel like writing again.
     
  5. Thread closed as per request of op.
     