Robotrading: CT + Trending Strategy on folios of futures

Discussion in 'Journals' started by fullautotrading, Oct 11, 2010.

  1. Well, stops (at instrument level) never had an enemy worse than me, and nobody knows that better than you ;-) So I fully agree with you.

    But let's recall what the original motivation was for us to look into a <b>trending component</b> to possibly overlay on CT.
    We already know that CT is a beautiful strategy, which has some "deterministic" guarantee. The problem is that it is only suitable for funds, banks or millionaires, who can overlay several folios and tolerate the additional investments. As always, money makes more money! We were trying to make it playable with a smaller capital.

    The problem with stops is there with <b>any</b> strategy which uses them. Clearly, if you enlarge your timeframe you can slow down your "walk toward the coffin", but if a strategy is unprofitable it's not changing the timeframe that will make a big difference.

    The original idea here was actually to use a directional strategy as a hedging tool. We already know well, from thousands of simulations, that <b>any algorithmic strategy which uses stops at instrument level is a zero-profit game</b> in the long run, but in this case that fact would be a good thing, as we are overlaying on a deterministically profitable strategy.

    So all we need is that the "hedging" strategy remains as "neutral" as possible, <b>oscillating around a zero profit</b> but providing the necessary hedge.

    This might mean that we need to give up the idea of quick and sharp inversions, because, as you remark, they can quickly produce a sharp drawdown if a sequence of stops is hit.

    Well, we know the limits of this procedure, and probably using a strangle of options, or a separate trending strategy, is still the best way to go. We will see. I just wish to explore this path for a while. Worst case, we will give up and return to CT alone.


    Tom
     
    #11     Oct 29, 2010
  2. unco


    Hi Tom,

    Looking at your different CT+T charts, it seems that some of them don't experience a drawdown at the same time (there seem to be 2 or 3 different behaviours). Maybe by merging all of them you may end up with a more regular equity curve.

    CT seems to be a good strategy, finding interesting entry points... However, it would be interesting to know how much the strategy can lose during strong trends and, most important, whether the PL made during ranging/small-trend periods can compensate for the losses of the strong-trend periods.
    Even big funds have been wiped out by accumulating positions against a strong trend. With this kind of strategy, most of the time it's just a matter of time before you find THE trend that will erase more than 1 year of PL... or everything.

    I keep thinking that pursuing your research with T is a good idea and, as most systematic fund managers have said, it's impossible to be consistently profitable using 1 market or 1 strategy. ;)
     
    #12     Oct 30, 2010
  3. Thank you unco. Following your advice as a fund manager, I have been doing more research, which seems quite fruitful...

    <b>** A Random Walk Down Wall Street ? **</b>

    Hi friends.

    Sorry for some delay. I have been working hard to generalize my framework and also to provide other objective assessments of strategy performance, also because it looks like very soon we will finally pass from paper to real money ... ;-)

    Although I have recorded a large collection of tickdata over the years, I felt that testing with real data alone was not enough and, above all, did not help much in the process of strategy selection, especially in determining the best hedging actions. This is because, once some price curve is observed, we have only a single realization of the "universe" of possibilities, and the performance assessment could easily be misled by various forms of overfitting, or in any case tend to favor strategies that are not "statistically" the best, but which do better with the specific observed price determination.

    So I felt it could be useful, for the purpose of strategy performance assessment, to measure performance against "random walks" too.
    Some traders may argue that this is not very useful since, they state, nobody can be systematically "profitable" against a "random walk", just because it's "random". Other people are of the opinion that it is possible to be slightly profitable even trading against a random walk. For instance, this site: http://www.isigmasystems.com/implications.html
    claims to have a proof. A friend of mine, instead, just says that "proof" is wrong. Is it ? :))

    Other people say that real prices are far from being "random". But, on the other hand, when shown a randomly generated series, they are generally not able to tell whether it is a random walk or actual prices.

    My justification for using - also - random walks for testing is that since, in any case, I am using a pretty "mechanical" approach, which "neglects" any possibility (which is however not denied) to "forecast" price action, for my purposes it does not change much whether prices are "real" or the outcome of a random walk. Further, the hedging action is pretty much "mechanical" and, if it has some value, it is also expected to mitigate losses on simulated data. Isn't it ?

    Clearly, the nature of the random walk can affect the performance of the strategy. For instance, a random walk could easily "reach very far", taking prices that are economically "unreasonable". That actually depends on the model one uses for the random generation. For instance, using the famous geometric Brownian motion (GBM) to model a commodity could be pretty far from reality, and that's why "mean reversion", "jump diffusion", etc., variants have been proposed.

    We have actually seen in our previous trading a big deal of "reversion" (much more than any GBM), and in fact the so-called CT strategy, which worked on a "total reversion" concept, did quite well most of the time. Until, clearly, we felt the necessity to add a hedging action (T).

    In any case, if we have 2 strategies and one is doing much better than the other on several random walks, that is certainly something we want to know. And if one strategy is more catastrophic than another on any set of random walks, we probably do not want to use it for real trading. Do we ?

    In my "mechanical view", where a strategy is not meant to "predict" but just to perform the "best" actions to grab a profit and, at the same time, hedge, I believe that looking at the performance against random walks can be of interest, ** especially for the purpose of selecting the best hedging mechanism **.

    Clearly, we expect that, depending on the generation process, it can be pretty difficult, if not impossible, to make money against pure "chaos". For instance, using a "coin toss model" like that discussed on this site: http://www.tvmcalcs.com/blog/comments/coin_tosses_and_stock_price_charts/ or using a GBM like that discussed on this other site: http://www.tvmcalcs.com/blog/comments/coin_tosses_and_stock_price_charts/ <b>can lead to enormous drawdowns and extremely resilient "fat tails", where really huge drawdowns keep appearing (even if with small probability)</b>, and, consequently, huge return variances, which keep the avg PNL oscillating between positive and negative values.
    This is because such random walks can actually "go, freely, almost wherever they like", while in the real world there are actually "forces" and "manipulations" which contribute to determining prices and keep them within "supports", "resistances" and other bounds.
    Nevertheless, for the mere purpose of comparing 2 hedging mechanisms or 2 strategies, I think that looking at the results (even if strongly negative) on a large number of random walks can be of interest.
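    As a minimal illustration (my own sketch, not the exact model from that site), a "coin toss" price generator is just a symmetric tick-by-tick random walk:

```python
import random

def coin_toss_walk(start, tick, n_steps, rng=random):
    """Coin-toss price model: at each step the price moves up or down
    exactly one tick with equal probability."""
    prices = [start]
    for _ in range(n_steps):
        move = tick if rng.random() < 0.5 else -tick
        prices.append(round(prices[-1] + move, 10))
    return prices

random.seed(1)
path = coin_toss_walk(80.0, 0.01, 10_000)
```

    Notice that nothing keeps such a walk near its starting point: over enough steps it wanders arbitrarily far (even below zero), which is exactly the "fat tail" effect mentioned above.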

    But i'd really like to hear your diverse opinions about that.

    I have also been spending some time fully integrating a random walk generator (GBM for now, but I will expand to other models) within my application, in order to have just another test mechanism (I hope of some usefulness), beyond backtesting and live testing.

    In the next posts, I will show some results obtained with some strategies on a large number of GBM price generations.
    Then i will resume trading also the best performers suggested by those simulations and we will see how they actually do in the real world.


    Tom
     
    #13     Dec 16, 2010
  4. Ok, after generalizing my framework, I am finally able to begin some simulations with the <b>"random walk"</b>.
    Based on our previous trading experiences, I have tried to identify the most useful <b>"rules"</b> to create "good" "trading mechanisms".

    I have come up with a set of settings ("rules") which are flexible enough to yield different <b>trading mechanisms</b>, which can be carefully tested. If we have some more ideas, I will be adding further rules to adjust the trading mechanism.

    I have programmed both a <b>GBM ("Geometric Brownian Motion")</b> and also a GBM <b>with mean reversion</b>. Let's try the plain GBM first, which is, probably, the "nastier" one to trade. Isn't it?
    As said, we do not really expect, clearly, to be able to trade a random walk profitably over a long timespan and a big number of iterations; however, it's a kind of analysis which can surely provide some insight.

    Programming the GBM, it turns out, is not difficult at all. Essentially from a starting price, you can generate the next price with an instruction like:

    NextPrice = CurrentPrice * Math.Exp((Drift - 0.5 * VolatilitySquared) * Elapsed + Volatility * Math.Sqrt(Elapsed) * Norm1);

    where "Volatility" and "Drift" are the 2 well-known parameters of the random motion, and Norm1 indicates a draw from the standard normal distribution. I have set drift = 0 and volatility = 0.3.
    (The command for the GBM with mean reversion is slightly more complicated, and has a third parameter, which expresses the "speed" of the "reversion".)
    (As an implementation detail, notice that the above command will clearly generate prices which may differ from the previous price by more than 1 tick. To avoid that, I interpolate each generation, adding linearly, in a loop, the ticks necessary to arrive at "NextPrice".)
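    For reference, here is the same generation step as a small self-contained Python sketch (variable names are mine; the mean-reverting variant shown is one common choice, reversion on the log price, and may differ in detail from my implementation):

```python
import math
import random

def gbm_next_price(current, drift, volatility, elapsed, rng=random):
    """One exact GBM step: S' = S * exp((mu - sigma^2/2)*dt + sigma*sqrt(dt)*Z)."""
    z = rng.gauss(0.0, 1.0)  # draw from the standard normal
    return current * math.exp((drift - 0.5 * volatility ** 2) * elapsed
                              + volatility * math.sqrt(elapsed) * z)

def mean_reverting_next_price(current, long_run, speed, volatility, elapsed, rng=random):
    """Mean-reverting variant: the third parameter 'speed' pulls the log price
    back toward log(long_run) (an exponential Ornstein-Uhlenbeck scheme)."""
    z = rng.gauss(0.0, 1.0)
    drift = speed * (math.log(long_run) - math.log(current))
    return current * math.exp((drift - 0.5 * volatility ** 2) * elapsed
                              + volatility * math.sqrt(elapsed) * z)

def interpolate_ticks(current, next_price, tick_size):
    """Linearly fill the path from current to next_price so that consecutive
    generated prices never differ by more than one tick."""
    n_ticks = max(1, math.ceil(abs(next_price - current) / tick_size))
    step = (next_price - current) / n_ticks
    return [current + step * i for i in range(1, n_ticks + 1)]

# Example: drift = 0, volatility = 0.3, starting at 80$, one short step
random.seed(42)
price = 80.0
nxt = gbm_next_price(price, drift=0.0, volatility=0.3, elapsed=1.0 / 8760)
path = interpolate_ticks(price, nxt, tick_size=0.01)
mr = mean_reverting_next_price(price, long_run=80.0, speed=2.0,
                               volatility=0.3, elapsed=1.0 / 8760)
```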

    There are many ways of organizing simulation experiments.
    To start with, let's consider the following simple scheme. We may consider different plans later. Assume I trade a futures contract continuously for 6 days. Then I quit, taking whatever PNL I have at the end of the week.
    I repeat this process at least 100 times and look at the following:

    - Statistical Distribution of the Avg Daily PNL
    - Distribution of the Maximum Drawdown
    - Distribution of the ratio Avg Daily PNL / Maximum Drawdown x 100K

    (I am taking only a week because I have my machines occupied by other simulations, and limited computing power at the moment for these tests)
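    In skeleton form, the experiment above looks like this (the "strategy" here is just a hypothetical placeholder returning a per-step PNL; my actual rules are of course more involved):

```python
import math
import random

def max_drawdown(equity):
    """Largest peak-to-trough drop of an equity curve."""
    peak, worst = equity[0], 0.0
    for x in equity:
        peak = max(peak, x)
        worst = max(worst, peak - x)
    return worst

def run_experiment(strategy, n_iterations=100, days=6, steps_per_day=390,
                   volatility=0.3, start=80.0, rng=random):
    """Trade a fresh driftless GBM path for 'days' days, record that session's
    avg daily PNL, max drawdown and their ratio, then repeat."""
    avg_pnls, drawdowns, ratios = [], [], []
    dt = 1.0 / (252 * steps_per_day)
    for _ in range(n_iterations):
        price, equity = start, [0.0]
        for _ in range(days * steps_per_day):
            price *= math.exp(-0.5 * volatility ** 2 * dt
                              + volatility * math.sqrt(dt) * rng.gauss(0, 1))
            equity.append(equity[-1] + strategy(price))
        avg_daily = equity[-1] / days
        dd = max_drawdown(equity)
        avg_pnls.append(avg_daily)
        drawdowns.append(dd)
        ratios.append(100_000 * avg_daily / dd if dd > 0 else float("inf"))
    return avg_pnls, drawdowns, ratios

random.seed(0)
pnls, dds, ratios = run_experiment(lambda p: 0.0, n_iterations=10)
```

    Each of the three returned lists is one of the distributions listed above, over the repeated 6-day sessions.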

    This first analysis can help give a quick preliminary idea of the performance of the different rules, and of the strategies created
    by combining the various rules. At least we could immediately throw away the most "disastrous" strategies (combinations of "mechanical" rules).

    In total elapsed time, this experiment is equivalent to several years of trading with clean tickdata. However, the final PNL would not be the same as if we traded "continuously", because we quit every 6 days (which adds "artificial" stops) in order to repeat the experiment and create a "distribution" of results. The resulting distributions will nevertheless be useful to get an idea of performance, drawdown, exposure, etc.

    To start with, and to have a reference point, let's first look at the result we get by using a set of rules equivalent to the so-called CT strategy we have been trading in the previous months. That strategy resulted (as its final effect) in a countertrending game without directional "hedging".

    This is the automatic <b>report</b> of my application about that strategy:<a href="http://www.datatime.eu/public/gbot/CT_Strat/CT_bound_HighLow.htm"> Strategy Report (automatic)</a> Let me know if you see errors. Our purpose is still that of identifying the most "convenient" mechanical hedging "game". So we will move from here searching for improvements.

    Below, 2 screenshots follow (sorry for the high resolution). The first one is a capture of the screen while the Random Motion was being generated. The other one is a view of the simulation console with some rules (still changing them) and some performance figures.

    <b>GB Random Motion at work:</b>

    <img src="http://www.datatime.eu/public/gbot/CT_Strat/ScreenRandomWalk2.jpg">

    Note that the GBM "moves much more", and more wildly, than a "real" instrument would, often with very large drawdowns. In fact, I reset the price to an initial level (80$) at each simulation, or else, with the current params, after a few simulations we might soon arrive close to 0, or easily double the initial price. We have seen instead that instruments like CL have a lot of reversion.

    The beauty and usefulness of this kind of testing, I believe, is that once you find some definitive "improvement", then, since the approach is essentially "mechanical" in nature and a random walk is, clearly, impossible to "overfit" due to its chaotic nature, and absolutely free of any kind of <b>curve fitting</b> issue, one can be pretty confident that the improvement will have real effects and reflect itself in real trading.

    <img src="http://www.datatime.eu/public/gbot/CT_Strat/ScreenRandomWalk1.jpg">

    If you are interested in playing with this simulator (pretty fun, actually), I have also separated it out from my application, and I can send it (send me a PM). I will be actively adding trading rules until I find the most satisfying setup (so be prepared for frequent updates!)

    Happy holidays!

    Tom
     
    #14     Dec 23, 2010
  5. Hi, how was your Christmas ? ;-)

    I have decided to <b>raise the number of iterations to 1000</b>, in order to have resampling distribution estimates which can be considered definitely "reliable". This is equivalent to almost <b>24 years of clean millisecond tickdata</b> (1000 iterations x 6 days = 6000 trading days, or about 24 years at ~252 trading days per year; it takes a while, but I can still afford that, since I am considering just a week of tickdata for each iteration). For now I will leave the experimental timespan equal to a week. For relative strategy comparison it - probably - does not matter much (we may try larger timespans later, when we have a first "selection" of good "performers").

    Before attempting strategies (combinations of rules) with "hedging rules" in them, my first attempt is to explore whether the original CT can be improved with some additional rules. In fact, I must say, I did have a few rules in the back of my mind which "intuitively", in my expectations, should have raised performance. I have also tried them in the past with real data. The problem was that there was too little "diversification" in the "realized" real data, and I could not get a real "feel" for whether they were really mechanically "beneficial" or not.

    The "random walk" can provide just enough "chaos" and enough data for the answer I was looking for. So I <b>have added 3 more rules</b>, and have actually found a slightly <b>better mechanical strategy</b>, still within the realm of the "pure countertrenders". These new rules have mostly a <b>"balancing effect" on long/short</b> "traders":

    <img src="http://www.datatime.eu/public/gbot/StratView1.jpg">

    Next step will be exploring the best strategies with the <b>"hedging rules"</b> (say the "T game") enabled.

    Here is the report for the slight "mechanical" improvement I have found. This could become our next "reference result" to beat as we look for new improvements
    (note that this one raises the bar of the <b>average Sharpe ratio up to 2.43</b> on our "controlled" trading experiments with random prices):

    <a href="http://www.datatime.eu/public/gbot/CT_Strat - Improv1/CT_Strat - Improv1.htm"> Strategy Report </a>


    Tom
     
    #15     Dec 26, 2010
  6. I have found another improvement, still without the T component (avg Sharpe on random walk up to 2.66). This has a larger take profit and a tighter entry scheme:
    <a href="http://www.datatime.eu/public/gbot/CT_Strat - Improv2/CT_Strat - Improv2.htm" target="_blank"> Strategy Report </a>

    Playing with these simulations on random data, I have observed that, sometimes, a strategy keeps appearing consistently and largely profitable for several years (10-15 years), even though, eventually, it takes serious hits which make it lose much of the accumulated profits. We know that here there are extreme hits due to "impossible walks", but, still, this has made me think ... maybe it's the late hour suggesting "philosophical" considerations ;-)

    If these data were <b>"real"</b>, looking at this phenomenon we would have said: "market conditions have changed!"
    Actually it's a common remark I have often heard, that some systems work until <b>"market conditions" change</b>.

    But, here, there is really no such thing as "market conditions", as the data is random, and we may still observe the same phenomenon (clearly, depending on the sequence of outcomes)!

    So I was thinking that what we sometimes perceive as "mutating market conditions" can be just the <b>result of the variation of a strategy's outcomes</b>. A strategy with a larger variance will be more subject to this phenomenon. And this may be viewed not as a consequence of the "market conditions", but just as an <b>intrinsic property of the strategy itself</b>!

    It's a curious thought, which anyway shows how some insight may lead to a different way of looking at some common conceptions ...


    Tom
     
    #16     Dec 26, 2010
  7. I have completed a first round of experiments with the GBM. This is a first set of strategies with corresponding performance assessment on GBM:

    Long Short with "Hedging" (T game):
    <a href="http://www.datatime.eu/public/gbot/GBM/Strategy CT_T_LS_1/Strategy_CT_T_LS_1.htm" target="_blank"> Strategy_CT_T_LS_1 </a>
    <a href="http://www.datatime.eu/public/gbot/GBM/Strategy CT_T_LS_2/Strategy_CT_T_LS_2.htm" target="_blank"> Strategy_CT_T_LS_2 </a>

    Long Short without T game:
    <a href="http://www.datatime.eu/public/gbot/GBM/Strategy CT_LS_1/Strategy_CT_LS_1.htm" target="_blank"> Strategy_CT_LS_1 </a>
    <a href="http://www.datatime.eu/public/gbot/GBM/Strategy CT_LS_2/Strategy_CT_LS_2.htm" target="_blank"> Strategy_CT_LS_2 </a>

    Long (or short) only without T game:
    <a href="http://www.datatime.eu/public/gbot/GBM/Strategy CT_L_1/Strategy_CT_L_1.htm" target="_blank"> Strategy_CT_L_1 </a>
    <a href="http://www.datatime.eu/public/gbot/GBM/Strategy CT_L_2/Strategy_CT_L_2.htm" target="_blank"> Strategy_CT_L_2 </a>

    There are some interesting things and concepts which emerge from the simulations, which can probably be applied to <b>real trading</b> too.
    (The next thing I will do is compare these same strategies under different generation models (it does not matter if they are not the very best: I am interested in a comparison), such as GBM with mean reversion and a simple coin toss.)

    1.
    One interesting thing, at the distribution level, is that, if we consider the statistical distribution of the Sharpe Ratio, every time a "one side only" strategy with extreme entries is used (long only or short only), a <b>"big peak"</b> always emerges in the middle of the positive side of the distribution. This <b>"Sharpe peak" phenomenon</b> for one-sided extreme-entry strategies is very persistent, and I have still to think about what the deep reason for it is. Maybe you may have some ideas.

    For instance, if you look at "Strategy_CT_L_1" you will notice that, out of 1000 runs, we had 262 Sharpe Ratios falling in the same class 6.26 - 6.96 (an enormous value, by the way). Similarly, strategy "Strategy_CT_L_2" has 269 runs, out of 1000, all concentrated in the class 6.10 - 6.86.

    <img src="http://www.datatime.eu/public/gbot/ExampleOfStrategyAnalysisAgainstChaos.jpg">

    This is a characteristic phenomenon, and I actually do not know if it has been noted before by anyone else.

    2.
    Studying and following the simulations as they develop, it becomes readily apparent that if one had to actually trade a <b>random motion</b>, the only way to survive (and even be profitable for several years, if enough funding is available) would be to trade one side only and to make "extreme" entries only. In the sense that, for instance, if you begin with a buy (sell), all the following orders have to be buys (sells), and each one must be entered below (above) any previous one: clearly, the strategy must keep track of all previous entries. Equidistant entries also seem to do better. This could actually have application for a real-world strategy too. As we could start trading one side and, when we reach the extreme of the price range, invert the trading side.

    This is quite intuitive as a result, as clearly the best strategies against random data are much concerned with letting the horizontal scalping prevail over the "vertical component", which could wipe out all the profits. By making "extreme" entries only, the vertical component is "compressed" as much as possible. A very prudential attitude. Clearly, in real trading, where ranges are much smaller, there are also other considerations which would recommend against this approach, such as the presence of correlations in a folio. So the use of hedging entries and inversion can help against correlations and drawdowns.
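    The "extreme entries only" rule, reduced to its essence, is just a filter on the entry price (a toy sketch of the idea, not my actual rule set):

```python
def accept_extreme_entry(side, price, previous_entries):
    """Extreme-entries-only rule: a new buy must sit below every previous buy,
    a new sell above every previous sell (side: +1 = long, -1 = short)."""
    if not previous_entries:
        return True
    if side > 0:
        return price < min(previous_entries)   # each buy below all prior buys
    return price > max(previous_entries)       # each sell above all prior sells

# Long-only example: entries are accepted only at new lows
entries = []
for p in [50.0, 49.0, 49.5, 48.0, 48.5, 47.0]:
    if accept_extreme_entry(+1, p, entries):
        entries.append(p)
# entries is now [50.0, 49.0, 48.0, 47.0]
```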

    Happy New Year and tons of money to all in the next year!

    Tom
     
    #17     Dec 31, 2010
  8. Hi friends, thanks a lot for your support and much interest.

    I think that also exploring the results obtained with random walks has been useful to clarify some ideas and better understand some features of trading mechanics. In some articles on the web I have read an <b>aprioristic denial</b> of the possibility of random data being useful for strategy assessment, but I think that "shutting a door" is not always the best thing to do, especially when the pursuit of a deeper knowledge is concerned.
    In a next thread, I will start trading the strategies and ideas we have identified so far. I must say that I have tried to apply the same strategies to different random walk generators, namely pure coin toss and Geometric Brownian Motion with mean reversion, but I could not see any significant difference in relative performances.
    The fact that all generators tend to suggest the same performance order on a given set of strategies is something that must make one reflect on the importance of this kind of assessment.

    As you know well (at this point), I believe mostly in a "mechanistic" approach to trading (I don't believe in a "mechanistic universe" though! ;-)), and mostly reject "indicator"/"prediction" based approaches.
    My justification for this standpoint, in short, is as follows. Once you open a position, if the price goes in the right direction, at a certain point you take a profit. And nobody has a problem with that. The problem is what to do when the price moves against you and you need to take countermeasures (the "hedging" strategy). Now, the process of protecting profits has a pretty mechanical nature, and some sets of actions clearly "dominate" other sets of hedging actions, as any simulation can show. There is no doubt about that. This should justify why a mechanical approach can be meaningful. Frankly, I would never use a strategy that shows comparatively very poor performance on a RW. Would you? From a probabilistic point of view this can be viewed as follows.
    From a starting price p0, consider the set PS(p0) of all the possible price patterns stemming from that point. Given 2 strategies S1 and S2, and a performance indicator P, we might consider S1 better than S2 ("uniform dominance") if: prob( P > a | S1 ) > prob( P > a | S2 ) for all a and p0.

    As we noticed experimentally that the main different generators tend to maintain this preorder (with respect to the chosen performance measure), we might, not unreasonably, assume that the hierarchy obtained with a random generator might be meaningful for use with real data too. Or, at least, we might assume that taking this information into account is better than not using it at all, or in any case "better than nothing" (many trading strategies are based on just "nothing", except untested (or poorly tested) trader "beliefs"). In other words, and more intuitively, <b>a good strategy is one which relegates the possibility "to lose" to an unlikely subset of price patterns</b>. Clearly, this <b>has nothing to do with assuming that real prices form a random walk</b>!
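    The "uniform dominance" criterion can be checked empirically on two samples of a performance indicator (say, the Sharpe ratios of the repeated runs) via the empirical survival functions; a minimal sketch with made-up numbers:

```python
def dominates(sample1, sample2, thresholds):
    """Empirical check of prob(P > a | S1) >= prob(P > a | S2) for every
    threshold a, i.e. first-order stochastic dominance over the sampled runs."""
    def survival(sample, a):
        return sum(1 for x in sample if x > a) / len(sample)
    return all(survival(sample1, a) >= survival(sample2, a) for a in thresholds)

# Toy example: S1's Sharpe ratios sit uniformly above S2's
s1 = [1.2, 1.5, 2.0, 2.4, 3.1]
s2 = [0.2, 0.5, 1.0, 1.4, 2.1]
grid = [x / 10 for x in range(0, 40)]   # thresholds 0.0 .. 3.9
result = dominates(s1, s2, grid)        # S1 dominates S2 here
```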

    Further, our experiments with random data have made apparent (to me) the appropriateness of categorizing, for now, the strategies into the following categories:

    <b>1. Extreme Entries Only</b>
    1.1 Use "hedgers"
    1.2 Don't use "hedgers"
    1.2.1 Bidirectional scalping
    1.2.2 One-directional scalping (long only)
    1.2.3 One-directional scalping (short only)
    <b>2. Allow "Intermediate" Entries</b>
    2.1 Use "hedgers"
    2.2 Don't use "hedgers"
    2.2.1 Bidirectional scalping
    2.2.2 One-directional scalping (long only)
    2.2.3 One-directional scalping (short only)

    In summary, we have a few categories. If we denote by CT the countertrending game, T the hedgers (or trending game), E the "extreme" entries, L and S long and short:
    1.
    CT T LS E
    CT LS E
    CT L E and CT S E

    2.
    CT T LS
    CT LS
    CT L and CT S

    I think it may be meaningful to keep them separated because their dynamics are quite different. Pretty much the same way you don't let a heavyweight fight a lightweight. (In the future we might also consider the possibility of <b>"mixing" the strategies</b>, depending on market conditions.)

    In a next post, I will show the indicative performance of one instance of each of these 6 (or 8, if you prefer) strategy types.
    Then, we will give them a try with real data.

    In the meantime, let me know if you have more ideas.

    Tom
     
    #18     Jan 3, 2011
  9. 2 or more strategies that lose in the long run can only, if combined, produce a bigger losing strategy in the long run. This is a mathematical truth. In the meantime, you are just wasting time.
     
    #19     Jan 3, 2011
  10. My friend, those are not "combinations" of strategies. Perhaps my English is not very refined, but I think I have made clear that each one represents a different class of strategies.
    Nor are they "losing". You need to watch the performances within each timeframe.
    The effect of the <b>"exogenous stops"</b> (due to truncation) induced by the resampling scheme must, clearly, not be taken into account in the assessment. I think I have stated that: you may have missed it.

    But it's a good chance, anyway, to repeat it (I could perhaps enhance the performance report layout, to make it clearer, by leaving out the averages across sessions, which may mislead naive readers).

    What I can suggest is to test your own strategies against random generators too, and, by simple comparison, you will realize the meaning of the performances shown here. So I invite you to show your results under the same controlled conditions. Without any form of precise assessment, we remain in the realm of pure opinions.

    Tom
     
    #20     Jan 3, 2011