ROBOT autotrading futures (through IB Gateway)

Discussion in 'Journals' started by fullautotrading, Jun 22, 2010.

  1. Hi ZMiniTrader, thanks for your question.

    The process of searching for consistent profitability involves a lot of struggling which, in time, may completely change the way one approaches trading, and makes one realize that most of the concepts about trading found on some websites (not ET, clearly! ;-) or books are often misleading, as they actually lead to systematic unprofitability. So during this process there is often a radical change of perspective and the development of new ones.

    Why am I saying that? Well, simply because, as I see it, I do not have a concept of "risk per trade", because a single "trade" itself is not actually defined. I see a "strategy" as a continuous flow of orders issued with one objective: the maximization of the overall ratio AvgProfit / MaxDrawdown. Anyway, I see where you are leading, and the real question probably becomes: how does this particular algo (strategy) manage to hedge? The idea, as anticipated, is that, at any time, it tries to "embrace" the price curve with (low) buys and (high) sells:

    SELL.....................x x x
    price curve
    BUY......................x x x x

    Now, it is evident that, especially at the beginning of the trading session, the price will "escape" those (dynamic) "bounds" several times, breaking up or down. And, in the case of futures, the break up/down can sometimes be sudden and deep. When there is a break up/down, clearly, while one side is "profiting", there is always one side (the opposite one) causing drawdown, either because there will be buys above the new current price (break down) or because there will be sells below the new current price (break up).
    The algorithm discussed here manages to recover from these (temporary) drawdowns by "wrapping" the price again on the new "range". Clearly, the size of the drawdown depends on the nature and magnitude of the breakouts. If they were always of relatively small or moderate size, you might actually never see any negative P&L, because the trades inside the buy/sell bounds (sideways movement) usually create a protective, hedging, profit "cushion" quite readily. If, instead, several instruments break up or down simultaneously and violently, as not uncommonly happens with correlated futures, then the drawdown can be more significant and last longer, while the algo manages to "take control" of the new price range.
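    The "wrapping" mechanics can be sketched in a few lines of code. This is only an illustrative toy (the function names and the `grid_step` / `n_levels` parameters are mine, not the robot's actual ones): place a sell grid above and a buy grid below the current price, and rebuild the grid around the new price after a breakout.

```python
def build_grid(price, grid_step, n_levels):
    """Return (buy_levels, sell_levels) bracketing the current price."""
    buys = [price - grid_step * i for i in range(1, n_levels + 1)]
    sells = [price + grid_step * i for i in range(1, n_levels + 1)]
    return buys, sells

def rewrap_if_broken(price, buys, sells, grid_step, n_levels):
    """If the price escapes the grid (break up/down), rebuild it around the new price."""
    if price > sells[-1] or price < buys[-1]:
        return build_grid(price, grid_step, n_levels)
    return buys, sells

buys, sells = build_grid(1200.0, 5.0, 3)
# buys  -> [1195.0, 1190.0, 1185.0]
# sells -> [1205.0, 1210.0, 1215.0]
```

    A real implementation would of course widen the step with volatility and track filled orders; the point here is only the re-centering after a break.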
    To determine the capital necessary to manage these events, depending on the algorithm's parameters, backtesting (and/or experience) is very useful. The actual values could also be worked out theoretically as a function of the minimum distance between orders (what I have improperly called "trade size", just to simplify) and the rate of increase of the order distance as a function of instrument volatility.

    I'd like to point out that another way to look at this algorithm is as 2 traders sharing one account. Assume that periodically - say, when the price moves M ticks - they both place an order simultaneously:

    TRADER L (always playing long) buys 1 contract
    TRADER S (always playing short) sells 1 contract

    They are both attempting to catch a so-called "trend" (TRADER L hopes the price goes up, while TRADER S hopes the opposite). On the other hand, while both these traders will have a "position", an external observer looking at the shared account will actually see a 0 position. If TRADER L, for instance, gets lucky, catches a trend and then, at a certain point, takes profit, our external observer will see that the account now has position -1. This is just to say that, depending on how we like to look at it, the game of trying to "wrap" an expanding range could be seen, at the same time, as "trending" or "countertrending", depending on the perspective. Personally, I don't embrace either of these 2 points of view and prefer to look at it as a dynamic process where orders at any time attempt to wrap the price, expanding with volatility.
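    The shared-account arithmetic is trivial, but worth spelling out; a minimal sketch (the names are mine, for illustration only):

```python
# Two traders, one account: the external observer only sees the net position.
positions = {"L": 0, "S": 0}  # long trader, short trader

def net(positions):
    return positions["L"] + positions["S"]

# both enter simultaneously on an M-tick move:
positions["L"] += 1   # TRADER L buys 1 contract
positions["S"] -= 1   # TRADER S sells 1 contract
assert net(positions) == 0   # observer sees a flat account

# TRADER L catches a trend and takes profit (sells his contract):
positions["L"] -= 1
assert net(positions) == -1  # observer now sees a short position
```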

    Using a folio usually means that while some instruments break out, several others will have their price wrapped inside the buy/sell bounds (a profitable situation), which, in time, provides a sort of "cushion" hedging against the periodic break ups/downs, which are anyway necessary to continue profiting.
    Clearly, very large or simultaneous break ups/downs (as currently happening) will take longer to recover from (and, eventually, to turn into profit).

    The June 28 gold vertical plunge ($27 lower in an hour and a half) and the simultaneous breakdown of several folio instruments (AUD, CAD, ES, NQ, YM) is providing a textbook example of drawdown in a nasty scenario. CL (over 400 ticks) and EUR (almost 250) also dropped. Now, while some talk about market manipulation prior to option expiration:

    [see for instance: http://www.marketoracle.co.uk/Article20677.html
    <i>"The extreme concentration of paper short positions by 4 or fewer banks is certainly fishy ... As a trader, you can utilize the trend documented above in order to seek short-term trading opportunities. As a long-term investor, you should realize that the sharp sell-offs in precious metals just prior to expiration dates are likely manufactured and almost always short-lived. Therefore, don’t be a panic seller and play into their game. If you believe in the fundamentals and long-term prospects for gold, clutch your precious metals with strong hands and don’t let your emotions force you to sell at the wrong times. You will invariably have to buy back at higher prices, incur additional trading fees and create high levels of undue stress in the process"</i>]

    whether manipulation theories are paranoid or actually accurate, the robot simply could not care less, and certainly does not get scared by these (speculative or not) movements. It simply, systematically, adjusts to buy/sell at lower/higher levels, to "wrap" the price again according to its predefined logic.

    This is clearly not a particularly favourable configuration just at the beginning of our forward test, but it will be a good occasion to see how the algorithm behaves in very unfavourable situations and high volatility, and whether it is actually able to recover from such events and survive profitably in the market (and perhaps it will suggest ideas for additional protection mechanisms at the folio level). It also provides an order of magnitude for drawdowns.

    Currently, most of the drawdown is due to CL and ES (and AUD, EUR): later I will show drawdown charts, also comparing the drawdown obtained with a larger "trade size" parameter.


    Tom
     
    #11     Jun 29, 2010
  2. Ok let's take a look at the first days of trading.

    Well, it looks like I have chosen one of the best weeks of the year to start.
    If I was looking for a volatile and tough scenario, I have been well served ;-)
    (I should not complain too much, however, as we already know that since banks have been allowed to play with deposits it's quite difficult to see in price curves any resemblance to a "random walk" ;-))

    While this will provide some good data for future strategy backtesting, the drawdown seen during these notable events still did not reach the levels I have seen in my backtesting.
    As far as I know, these values should still be considered "normal", considering we are in the game of seeking
    "consistent profitability" (over a relatively long timespan), mainly alongside big banks and other manipulators.

    Note that even at the deepest drawdown, the algorithm was still increasing the "realized" (see the "realized" curve).

    Since, by design, the "realized" cannot decrease, in time there is no drawdown that will be able to "pull" the P&L below zero. Greater volatility just makes the process slower, but cannot stop it (provided, clearly, that there is enough cash reserve to stay in the market).
    The "realize force" (as I call it) of the current test still seems to exceed 1K per day.
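    To make the decomposition concrete, here is a minimal sketch of the accounting implied above (the class and method names are mine, purely illustrative): the realized component only ever grows, because losing trades are never closed, while the unrealized component is free to swing with price.

```python
class PnL:
    """Toy P&L book: realized is monotone non-decreasing by construction."""
    def __init__(self):
        self.realized = 0.0
        self.unrealized = 0.0

    def book_scalp(self, profit):
        # by design, only profitable round-trips are ever closed
        assert profit >= 0
        self.realized += profit

    def mark_to_market(self, open_pnl):
        self.unrealized = open_pnl

    @property
    def total(self):
        return self.realized + self.unrealized

acct = PnL()
acct.book_scalp(300.0)
acct.mark_to_market(-1200.0)   # drawdown from an adverse move
acct.book_scalp(250.0)         # the "realize force" keeps working
assert acct.realized == 550.0  # monotone non-decreasing
assert acct.total == -650.0    # total P&L can still be negative for a while
```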

    The key to the game of <b>consistent</b> profitability boils down to being able to calibrate the algorithm parameters in such a way that it is possible to stay in the market and continue trading with the given capital (based on historical volatility) and cash reserve.
    [ And it's certainly not by placing a stop on each trade that any consistent profitability can be attained (in algorithmic procedures, that would lead, in the very best and luckiest of cases, to a zero-profit game). ]

    A common problem with HF scalpers is that while they also have (by definition) a large "realize force", the "unrealized" can grow at a rate much higher than the "realized" (in volatile markets), which causes larger and often unbearable drawdowns. Here this problem is not present because, from an intuitive point of view, it's practically as if we worked within an "ever expanding" scalping range which, in time, contains and bounds the growth of the "unrealized" component of the P&L.

    Within the current algorithm, the drawdown can intuitively be seen as the investment necessary to "bound" the unrealized component (and it is therefore proportional to volatility and price-range expansion).

    In time, since the realized is monotonically increasing, it will inexorably pull up the P&L.

    As a rough estimate, a folio like this one should be tradable with a cash reserve of 500K, and should hopefully ensure, in the relatively long run, an average daily profit above 1K.


    <a href="http://www.datatime.eu/public/gbot/Sun%2027%20Jun%202010_port4002_Cli4/GBotReport_2010-06-27_port4002_Cli4.htm"> Chart Update </a>
     
    #12     Jul 9, 2010
  3. <b>ROUGH CASH RESERVE ESTIMATE </b>

    In any case, even with the most (consistently) profitable strategy I have personally ever seen (tested), the ratio R = AvgDailyProfit / MaxDrawdown does have an "upper limit" beyond which it cannot be pushed, due to the natural volatility of markets.

    In the case of futures, to be safe, as far as I know, one should assume that this ratio AvgDailyProfit / MaxDrawdown might be less than 2% (if the data were more like a "random walk", the ratio could get near 8%-10%, and beyond).

    That is, for instance, if you are calibrating your strategy to yield an average daily profit of 1K, you must consider perfectly "normal" the possibility of seeing a max drawdown of 50K. (Make it 100K to be more conservative.)

    As to margins, we must consider the max number of contracts (say, max 50-70 contracts), based on the max position historically seen in backtesting. Multiply this figure by a max value for the margin of 1 contract (say, for instance, 10K) and there you have a preliminary estimate of the cash reserve necessary to trade relatively safely, with a systematic avg daily profit. (Clearly we are talking about average profit. Usually, especially at the beginning, there will be drawdowns, according to the algorithm logic.)
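    Putting the two estimates together (the ratio R above and the margin figure), the back-of-the-envelope calculation can be written down explicitly; the function name and the default 2% ratio are only the figures from this post, not a general rule:

```python
def cash_reserve(max_contracts, margin_per_contract, avg_daily_profit, ratio=0.02):
    """Margin for the largest position seen in backtesting, plus a drawdown
    buffer derived from the target daily profit and the ratio R (~2%)."""
    margin_reserve = max_contracts * margin_per_contract
    drawdown_reserve = avg_daily_profit / ratio
    return margin_reserve + drawdown_reserve

# figures from the post: up to ~50 contracts, ~10K margin each, 1K/day target
print(cash_reserve(50, 10_000, 1_000))  # 550000.0
```

    This lands in the same ballpark as the 500K reserve mentioned in the previous post.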


    <img src="http://www.datatime.eu/public/gbot/HighVolatSession.jpg">


    <b>WHEN TO QUIT</b>

    The algo can run continuously, but clearly we can quit any time we see a profit we like.

    It does not make much sense (according to the algorithm logic) to quit when the Unrealized component is large (visually, when the realized green dotted line is much above the blue PNL line), even if a nice profit is showing, because we would "waste" a potential profit.

    In fact, the "Unrealized" component can intuitively be interpreted as what we would profit on a price reversal.
    So, for instance, in this specific case I should not close the trading session.

    I will make an exception because I realized I had a small bug in the code (you can see on the EUR an unwanted sell order of -7, which should instead have been distributed way up; this has caused a larger drawdown, too).
    So I am going to (hopefully) fix the bug and restart our test with a fresh session.

    Tom
     
    #13     Jul 9, 2010
  4. <b>CHOICE OF THE FOLIO</b>

    As we proceed expounding and testing the general idea and algorithm, it becomes evident that the ideal portfolio would be one containing instruments which are moderately volatile.
    So, for instance, one should be careful with commodities.

    Correlation does not really help either. So another good idea is to choose instruments keeping that in mind.

    In general, the ideal performance would be with a folio of independent, "randomly walking" instruments (in which case running the algorithm is actually like printing money!).

    It would also be nice to test a folio of ETFs.
    I welcome suggestions on this and other topics.

    Any question is welcome, as usual.


    Tom
     
    #14     Jul 9, 2010
    Ok, I have restarted. The main parameter (minimum order distance) is $260.

    <a href="http://www.datatime.eu/public/gbot/Sun%2011%20Jul%202010_port4001_Cli1/GBotReport_2010-07-11_port4001_Cli1.htm"> Chart update </a>
    (have added a couple of vertical lines to charts, to visually delimit RTH)

    The overall return so far has been very good.

    Here is what happened in the first 3 days, up to now (details of the trades are in the link above):

    <img src="http://www.datatime.eu/public/gbot/res_sun11_1.png">
     
    #15     Jul 14, 2010
  6. Nice results...

    A few questions:

    What is the purpose of the folio? If there is no correlation then do they not all trade independently?

    How do you scale size from 1 to 50 contracts? Is this a martingale or anti-martingale strategy?

    Can you determine a price range for the algo to operate in with a fixed drawdown? i.e. an ES trading robot covers +/- 100 points = max contracts 25, max drawdown 100K, etc... What about assigning a set range / risk / contracts / drawdown per instrument? Similar to mutual fund allocations in a 401K.

    What time frame are you holding contracts open?

    What is the criteria for taking profits?

    How are you hedging the runaway risk? i.e. strong and long directional moves?

    Would this longer-term strategy be better suited to trading futures options? It appears you are waiting for the position to get "in the money". Options would appear to significantly lower the required capital.
     
    #16     Jul 15, 2010
  7. Tough questions PocketChange ! ;-)
    Will try to answer one at a time.
    Feel free to break in any time ...

    The most obvious purpose of a folio is investment diversification.

    As this approach never "realizes losses", a diversified and uncorrelated set of instruments mitigates drawdowns.
    This specific algorithm in fact has drawdown (it "invests") in the presence of directional moves.

    A well diversified folio should distribute the "investment" across multiple instruments, catching directional moves at different times. This way, the less volatile instruments can mitigate the drawdowns due to the "investment" in one or more instruments and continue to "realize".

    From the testing so far, it is quite evident that the "realize force" (as I call it) of this algo and parameter set is good.
    Such continuous "scalping action" pulls the PNL up, and a folio helps keep this increase constant in time, so that after a period (weeks) there should be no price moves (= investment) capable of decreasing the PNL to negative values.

    In other words, having a diversified folio, we will be "investing" in some instruments and realizing on others, and this would hopefully happen at different times, contributing to the hedging.
    The above is the most obvious reason. There are other, technical, reasons though. Another, more "algorithm-specific", reason for using a folio is the following.

    The ideal situation would be a folio of independent "random walkers". (This would generate very small drawdowns and large returns on the capital actually used.) In the real world, however, instruments are correlated. By trading a folio appropriately we can neutralize some correlation or use it in our favor.

    First of all, normalizing the scalp size helps "break" some correlations between obviously correlated instruments.
    Second, if correlations are computed dynamically, this information can be used for hedging purposes. In fact, if we detect that a given instrument is being volatile and is "absorbing" much of the capital, we can temporarily reduce new investment in correlated instruments.

    (This is pretty much the rule I anticipated when answering kxvid's questions.)
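    A hypothetical sketch of that correlation rule: when an instrument is absorbing capital during a volatile move, temporarily scale down new investment on instruments strongly correlated with it. The threshold and the linear scaling are my own illustrative assumptions, not the robot's actual rule.

```python
def throttle_factor(corr, threshold=0.7):
    """Scaling factor for new orders on an instrument, given its correlation
    with the instrument currently 'absorbing' capital."""
    if abs(corr) < threshold:
        return 1.0           # weakly correlated: trade normally
    return 1.0 - abs(corr)   # strongly correlated: shrink new investment

def adjusted_size(base_size, corr_with_stressed):
    return base_size * throttle_factor(corr_with_stressed)

print(adjusted_size(1.0, 0.3))  # 1.0  (independent instrument, full size)
print(adjusted_size(1.0, 0.9))  # ~0.1 (highly correlated with the stressed one)
```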
     
    #17     Jul 15, 2010
  8. Still doing ok: 12K realized, positive PNL
    and max drawdown -3.8K, almost 5 days running.

    As to the second question from PocketChange:

    Let's, first of all, say that "martingale" is one of the words I have removed from my vocabulary :))
    I have removed a few other words, but will not tell you which ones... I bet you can guess ;-)

    Investopedia defines an "Anti-martingale" as: "A system of position sizing that correlates the levels of investment with the risk and portfolio size"

    I think this definition is probably closest to what this algorithm does most of the time. In fact, as volatility increases, the distance between the orders increases too (because we are attempting, at any time, to "embrace" the price curve). Besides, the correlation rule mentioned above temporarily reduces new investments when volatility increases.

    (The algo does have, among its parameters, a factor to slightly increase order size, but it requires moves so deep that it is practically negligible in most situations.)
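    The volatility-driven spacing can be sketched as follows; the linear scaling rule and the reference-volatility parameter are my own assumptions for illustration (the $260 figure is only the minimum order distance used in this test):

```python
def order_distance(base_distance, volatility, ref_volatility):
    """Widen the order grid proportionally to volatility above a reference
    level; never go below the minimum (base) distance."""
    scale = max(1.0, volatility / ref_volatility)
    return base_distance * scale

print(order_distance(260.0, 1.0, 1.0))  # 260.0  (quiet market: base $260 spacing)
print(order_distance(260.0, 2.5, 1.0))  # 650.0  (volatile: orders spaced wider)
```

    This is the "anti-martingale" flavor described above: the grid expands with volatility instead of the position size growing against the move.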
     
    #18     Jul 16, 2010
    The robot has just closed all positions at a 16K P&L, averaging more than 2K a day (July 20, 9:16 EDT).

    <a href="http://www.datatime.eu/public/gbot/Sun%2011%20Jul%202010_port4001_Cli1_1/GBotReport_2010-07-11_port4001_Cli1.htm"> Chart update </a>

    I have a rule allowing the robot to go flat when the P&L exceeds the realized. Algorithm calibration shows that it would be profitable even without this rule (actually, more profitable), but the maximum position is much higher. So I prefer to play it safe, to avoid challenging the margins.

    <img src="http://www.datatime.eu/public/gbot/res_sun11_2.png">

    I will let it run (without restarting a new trading session). It will reopen automatically in a few minutes.
     
    #19     Jul 20, 2010
    While the robot patiently and inexorably continues its work (it is at 19K realized right now, with a positive PNL, btw), let's see the next questions from PocketChange.

    Within this approach, and based on calibration results, it appears that drawdown is not a big issue for the present algorithm, because the cash reserve needed for the margins of the maximum position ever seen is more than 2 times the maximum drawdown ever seen.
    What is important is, of course, to have enough money not to be liquidated. This can be determined through calibration on worst-case scenarios and historical volatility.

    The timeframe, as evident from the charts posted, is variable, as the robot adjusts to volatility. It can vary from a few minutes (low volatility) to days.

    Taking profit occurs in such a way that the realized component of the P&L will not be permanently surpassed by the unrealized.
    This is a common problem of high-frequency scalpers.

    I have a couple of high-frequency scalping strategies which are also "consistently profitable"; in that case, the problem of the horizontal scalping not being able to compensate for vertical moves becomes even more evident.

    In fact, this particular algorithm has been derived directly from a high-frequency scalper, appropriately modified to "bound" the unrealized component more effectively. This makes the strategy more "playable": latencies become less important and, in general, problems on the infrastructure side cannot hurt.

    The robot takes care of deep moves by taking longer steps to "embrace" the price curve. The maximum exposure depends on the strategy parameters and can be "calibrated" exactly by using worst-case scenarios.

    I am not sure about the options argument. This could actually be an area of research. In theory, the robot does not care about what instrument it is fed with. (Spreads may be an issue, though.) I have noted, for instance, that trading OTC FX does not give the same results as trading the corresponding futures: the futures yield consistently better performance. (Possibly due to spreads.)

    Tom
     
    #20     Jul 21, 2010