Fully automated futures trading

Discussion in 'Journals' started by globalarbtrader, Feb 11, 2015.

  1. Kernfusion

    No problem, my pleasure, thanks! :)
    I don't have many unit tests myself (my stack is C# + SQL + Matlab, with all the real-time trading written in C#, which is at least compiled and strongly/statically typed, so it gives some error-checking for free; Matlab is mostly for reports and graphs).
    In fact I noticed that I only write tests for the parts of the code I don't fully understand :) i.e. the more complicated the logic, the more tests I'll write. For example, my automatic futures rolls have the highest number of tests, covering every real-life scenario I could think of (some probably multiple times), which tells me I don't fully understand that code and have low confidence in it :), so at the very least I wanted to check that it does what needs to be done in the actual scenarios.
    On the other hand, for code I can read and understand (which is how all code should ideally be) I don't write tests at all, because they simply add to the code-baggage I need to support, take time, and give me a false sense of security ("my tests are passing, so all is good"). I find it much more important to intrinsically know how your code works, so it should be as small and as simple as possible; the ideal code is no code at all :). Tests are also good if you can get them "for free", but once you factor in the time you have to spend on them, I think it's better to invest that time in improving the actual code and architecture (unless you're a huge company with unlimited resources, in which case sure, just hire 10 more people... and then your code-base will quadruple in size, no one will know what the system is actually doing any more, i.e. it'll turn into the usual corporate mess :) ). So no: a big code-base is a cost, complexity is a cost, time spent on tests is a cost, and all of these should be minimised.
    I also unit-test complicated mathematical logic (e.g. forecast functions) where I have concrete expected results to verify against.

    In my own risk-overlay code I'm actually doing it slightly differently (hopefully it's also valid and I'm not doing something idiotic :) ): I first element-wise multiply the weights by the standard deviations, and then take risk = sqrt(wStd * corr * transpose(wStd)), which seems to give the same result:

    wstd = w * std                                     # element-wise: weight_i * stdev_i
    risk = wstd.dot(cor).dot(wstd.transpose()) ** 0.5  # 0.11276790323
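    For anyone who wants to check the equivalence, here is a minimal self-contained numpy sketch (the weights, vols and correlations below are made-up illustrative numbers, not anyone's actual portfolio). Mathematically (w*std)' * Corr * (w*std) equals w' * Cov * w, since Cov = diag(std) * Corr * diag(std), so both formulations give the same risk:

    import numpy as np

    # illustrative inputs for 3 instruments
    w = np.array([0.2, 0.3, 0.5])            # portfolio weights
    std = np.array([0.010, 0.015, 0.020])    # daily return standard deviations
    cor = np.array([[1.0, 0.5, 0.2],
                    [0.5, 1.0, 0.3],
                    [0.2, 0.3, 1.0]])        # correlation matrix

    # formulation 1: element-wise multiply weights and vols, then the quadratic form
    wstd = w * std
    risk1 = (wstd @ cor @ wstd) ** 0.5

    # formulation 2: build the covariance matrix first, then sqrt(w' Sigma w)
    sigma = np.diag(std) @ cor @ np.diag(std)
    risk2 = (w @ sigma @ w) ** 0.5

    assert np.isclose(risk1, risk2)          # identical up to floating point
    print(risk1, risk2)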

    Also, instead of using the system's base capital I use the total dollar value of the current positions, and the current weights instead of the configured ones.
    So if currently I only hold 2 GE contracts (1 point = $2,500) and 1 NG contract (1 point = $10,000), then my full capital will be $15,000 and the weights are:
    GE = 5,000 / 15,000 = 0.33
    NG = 10,000 / 15,000 = 0.67
    So these weights add up to 1.
    I then compute the daily portfolio standard deviation (say it's 0.01) and multiply it by the current capital-in ($15k in this case), which gives me the current daily cash risk (15k * 0.01 = $150). That can then be compared to my target daily cash risk (which is, say, $100k base capital * 0.25 / 16 = ~$1.56k).
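    In code terms, a rough sketch of that arithmetic using the example numbers above (the names are illustrative, and the portfolio daily standard deviation is simply assumed to be 0.01):

    # (contracts held, $ block value per contract), illustrative numbers from the example
    positions = {"GE": (2, 2_500), "NG": (1, 10_000)}

    notional = {k: n * bv for k, (n, bv) in positions.items()}
    capital_in = sum(notional.values())                          # 2*2500 + 1*10000 = $15,000
    weights = {k: v / capital_in for k, v in notional.items()}   # GE = 0.33, NG = 0.67

    portfolio_daily_std = 0.01                                   # from the vol/correlation estimates
    current_daily_cash_risk = capital_in * portfolio_daily_std   # $150

    base_capital, annual_vol_target = 100_000, 0.25
    target_daily_cash_risk = base_capital * annual_vol_target / 16   # 16 ~ sqrt(256 days), ~$1,562

    print(current_daily_cash_risk, target_daily_cash_risk)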
    I also use the multiples you proposed originally (2x normal risk, 2.5x correlation-breakdown risk, 3x vol-spike risk). Your last 2 numbers in the post are quite a bit higher; not sure if that's because of that bug or something else.

    I also compute correlations differently (and here I have no idea whether it's "correct", "incorrect", or "doesn't really matter"):
    for each instrument pair I take 260 daily prices, downsample them to weekly by taking every 5th point, calculate weekly returns, compute the regular "raw" Pearson correlation, and then take the average of that result with the 9 previous "raw" correlations.
    I then check that the result is not more than 5x lower or 5x higher than the default correlations you provided in your first book, and cap it to those bounds (e.g. if the default correlation is 0.1 and I got 0.6, I'll cap it to 0.5; if I got 0.01, I'll cap it to 0.02).
    So in short: I use 1 year of weekly returns, average the last 10 raw correlations, and cap the result to be within 5x of the default correlations.
    All of this probably makes it too slow to react to (or makes it ignore) extreme recent events when correlations spike... not sure.
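    As a hedged sketch of the smoothing and capping step (the function and variable names are mine, and default_corr stands in for the handcrafted correlations from the book), given already-aligned weekly returns it might look like this:

    import numpy as np

    def smoothed_capped_corr(weekly_ret_a, weekly_ret_b, prev_raw_corrs, default_corr):
        """Raw Pearson correlation of weekly returns, averaged with the 9 previous
        raw estimates, then capped to within 5x either side of a default correlation."""
        raw = float(np.corrcoef(weekly_ret_a, weekly_ret_b)[0, 1])
        smoothed = float(np.mean(prev_raw_corrs[-9:] + [raw]))
        lo, hi = default_corr / 5.0, default_corr * 5.0
        return float(np.clip(smoothed, min(lo, hi), max(lo, hi)))

    # with default_corr = 0.1: an estimate of 0.6 caps to 0.5, and 0.01 caps to 0.02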

    Sorry for too many words :)
     
    Last edited: May 13, 2020
    #2131     May 13, 2020
  2. Hi GAT and Others,
    Would you consider using pysystemtrade with a low volatility target of 5% or 10% as a place to store cash for a couple of years? At my bank I get 1.5% interest. If I used a flavor of your system with 8 instruments, which might get me a Sharpe of 0.60, at 10% volatility that would be 6% per year on average. Is this crazy talk? Thanks.
     
    #2132     May 13, 2020
    How do you handle holidays in this process? Suppose you were tracking the Friday close prices for this. Now a week comes along with a holiday on Thursday, so you have no Thursday close price in your database. Your sampling code would then switch from using the Friday close price to the following Monday close price, and from that point on it would keep using Monday close prices, because it always skips exactly 4 data points. Until another holiday comes along and you shift to yet another weekday.
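    To make the concern concrete, here is a small pandas illustration (the dates are made up; it just shows how one missing Thursday silently shifts the every-5th-point sample from Fridays to Mondays):

    import pandas as pd

    # weekdays only, starting on a Friday; one Thursday (2021-03-18) is a holiday with no row at all
    dates = pd.bdate_range("2021-03-05", "2021-04-01").drop(pd.Timestamp("2021-03-18"))
    prices = pd.Series(range(len(dates)), index=dates, dtype=float)

    # sampling every 5th stored row: the sampled weekday drifts after the holiday
    print(prices.iloc[::5].index.day_name().tolist())
    # ['Friday', 'Friday', 'Monday', 'Monday']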
     
    #2133     May 13, 2020
    Crazy talk. You should never speculate with money you'll need in a couple of years' time.

    GAT
     
    #2134     May 14, 2020
    Napoleon@Dynamite likes this.
  5. I'm assuming @Kernfusion is already working with data that has been resampled to business days, so on holidays it will have NAs rather than missing rows.
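    For reference, a minimal pandas sketch of that kind of business-day reindexing (illustrative only, not pysystemtrade's actual code):

    import pandas as pd

    # daily closes where the holiday (2020-05-08, a Friday) has no row at all
    prices = pd.Series([100.0, 101.0, 103.0],
                       index=pd.to_datetime(["2020-05-06", "2020-05-07", "2020-05-11"]))

    # reindex onto a business-day calendar: the holiday becomes a NaN row instead of a gap
    bdays = pd.date_range(prices.index[0], prices.index[-1], freq="B")
    print(prices.reindex(bdays))   # 2020-05-08 shows up as NaN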

    GAT
     
    #2135     May 14, 2020
    No, it's a bit more subtle than that. The figures have to be calibrated for the trading system you are using. The one in the blog post is a bit smaller than my normal system in terms of the number of instruments, but it has sufficient diversification that it just hits an IDM of 2.5. The realised risk, before the risk overlay, comes in at 23.7%, pretty much spot on target (25%). The maximum risk levels in the post are appropriate for a system which hits its target risk on average.

    However my actual system has more instruments, and should actually have an IDM that is a fair bit higher than 2.5; however I cap the IDM at 2.5. As a result the realised risk comes in lower, at around 19%. This also means that the extreme measures of risk are hit less often, and hence lower values for the maximum levels are appropriate if I am targeting them to kick in a particular % of the time. Alternatively I could take the view that the IDM cap is doing much of the risk management effort and raise the maximum risk levels (which would mean the risk overlay becomes close to irrelevant, as it barely ever kicks in), or I could raise the IDM and let the maximum risk levels cope with extreme scenarios (which would mean higher realised risk... which I'm not super happy with).
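    To make the mechanics concrete, a back-of-the-envelope sketch: positions, and hence realised vol, scale roughly linearly with the IDM, so capping it scales realised risk by roughly capped/uncapped. The uncapped IDM value below is a hypothetical number backed out from the quoted risk figures, not something stated in the thread:

    target_vol = 0.25
    idm_cap = 2.5
    idm_uncapped = 3.3                         # hypothetical "fair bit higher than 2.5"

    approx_realised_vol = target_vol * idm_cap / idm_uncapped
    print(f"{approx_realised_vol:.1%}")        # ~18.9%, in the ballpark of the ~19% quoted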

    GAT
     
    #2136     May 14, 2020
  7. Fixed

    thanks
    GAT
     
    #2137     May 14, 2020
  8. Kernfusion


    Yeah, something like that, although I actually wasn't even thinking about it this way. I just have a price series with closing prices and dates (I don't even know which weekday each point falls on). I then take two of these series (for the two instruments I need to calculate the correlation for), and first exclude from each series any price for which there's no corresponding price in the other series on the same date (in case it was missed / not collected for one instrument but exists for the other). After that I have two series with an identical number of points, taken on the same dates. Then I simply loop through each of them, taking only every 5th point, to create two downsampled series, and compute the correlation on those. This happens every day in C# code when the system starts, so it's not for backtesting, it's for the actual live system (I restart it daily for maintenance and things like that). So every day I get a new correlation (but that correlation is averaged with the 9 before it and capped as I described).
    It does mean that I might be using Mondays for half of the prices and then switch to Fridays in the middle, yes. But does it matter? A price is a price, unless I assume that "Monday prices" are somehow different from "Friday prices"... which might be the case, I don't know, but I doubt the effect would be strong enough to affect anything.
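    In pandas terms, a sketch of that alignment-and-downsampling step (a translation of the described C# logic, not the actual code):

    import pandas as pd

    def weekly_returns_pair(prices_a: pd.Series, prices_b: pd.Series) -> pd.DataFrame:
        """Keep only the dates present in both series, then take every 5th close
        to get a roughly weekly series regardless of which weekday it lands on."""
        both = pd.concat([prices_a, prices_b], axis=1, join="inner").tail(260)
        weekly = both.iloc[::5]
        return weekly.pct_change().dropna()    # weekly returns for the two instruments

    # the correlation of the two columns then feeds the averaging/capping described earlier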
     
    Last edited: May 14, 2020
    #2138     May 14, 2020
    HobbyTrading likes this.
    That's a good explanation. I like your approach of comparing the involved time series so that, before downsampling, you only use dates for which all of them have a price point. This is especially relevant if you were to use time series from multiple countries, with different holidays and/or trading sessions.
     
    #2139     May 14, 2020
  10. Kernfusion


    I see, so to simplify: the "smaller" the system, the higher the overlay thresholds should be (when the IDM is capped).
    If we use low threshold numbers on a small system, it will hit the overlay too often.

    Currently I'm running 31 instruments @ $350k in paper and 14 instruments @ $100k live, and for both of them I use the same thresholds of 2 / 2.5 / 3.
    I reran backtests for both configurations (this takes a long time in my case, as I'm effectively pushing historical prices through a system written for real-time execution).
    And indeed, I see the expected results (another reassurance for myself that my system isn't doing something crazy):
    14Instrument@100k:
    median realised volatility=22.5%;
    % of time when the "risk reduction coefficient is above 1" = 15% (I implemented it the other way around: instead of a 0-to-1 filter it's a reduction coefficient; I actually like the 0-1 filter better now that I've seen it :) )

    31Instrument@350k:
    median realised volatility=18%;
    % of time when "risk reduction coefficient is above 1" = 7.5%

    So the realised volatility of the more diversified system is less than the target vol and less than the "smaller" system's volatility, and the proportion of time I'm hitting the risk overlay is greater in the less-diversified one (with the same threshold numbers).
    I wonder if 15% of the time is really too much and I should relax the thresholds (especially as that's what I'm running in the live account...).
    But on the other hand, the actual values of the "reduction coefficient" aren't that high, so if for 15% of the time the average coefficient is 1.15, I'll have about 87% of my normal positions/risk, which doesn't seem very significant...
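    For clarity, a tiny sketch of the relationship between the two conventions (numbers from the example above):

    reduction_coefficient = 1.15                    # divide target risk / positions by this
    multiplier = 1.0 / reduction_coefficient        # the equivalent 0-to-1 style multiplier
    print(f"{multiplier:.0%} of normal positions")  # ~87%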

    14Instrument@100k:
    [attached chart: upload_2020-5-17_21-18-48.png]

    31Instrument@350k:
    [attached chart: upload_2020-5-17_21-42-51.png]
     
    Last edited: May 17, 2020
    #2140     May 17, 2020