Fully automated futures trading

Discussion in 'Journals' started by globalarbtrader, Feb 11, 2015.

  1. Intuitively I didn't think it would work but I have a mate who works for a US CTA who wanted me to check it.

    Plus it's more honest to report on the failures as well as the successes. Most things don't work.

    GAT
     
    #2311     Oct 11, 2020
  2. sef88

    Hi @globalarbtrader

    From your Systematic Trading book, I understand that a correlation matrix is required to calculate the forecast diversification multiplier. I have two questions:

    - In deriving the correlation matrix in Table 19 for trading rules, how did you come up with it? I would have thought it was done through some form of bootstrapping (or block bootstrapping) across a pool of instruments. If so, what duration of data was used to construct the matrix (e.g. one year)?
    - How do we account for the dynamic nature of the correlation matrix in the backtests?

    Thanks.

    Jirong
     
    #2312     Oct 11, 2020
  3. I agree on this one and appreciate your honesty.
     
    #2313     Oct 11, 2020
  4. No, much simpler than that. It's literally the correlation matrix of all the data I have, across all the instruments I trade.

    Correlations between trading rules are relatively stable and I see no reason to use any kind of moving window / more recent history to estimate them.
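    For illustration, a minimal sketch of what "the correlation matrix of all the data" could look like in pandas. The rule names and the random returns below are placeholders, not actual backtest output:

```python
# Hypothetical sketch: one static correlation matrix estimated over the
# full history of (volatility-normalised) trading rule returns.
# In practice the inputs would be backtested rule returns stacked across
# all instruments traded; here they are random placeholders.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n_days = 2500  # roughly ten years of daily data

rule_returns = pd.DataFrame(
    rng.standard_normal((n_days, 3)),
    columns=["ewmac_fast", "ewmac_slow", "carry"],  # made-up rule names
)

# All available data, no rolling window or recency weighting
corr = rule_returns.corr()
print(corr.round(2))
```

    The point of using the whole sample is that rule-to-rule correlations are stable enough that a moving window would only add noise.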

    GAT
     
    #2314     Oct 12, 2020
  5. sef88

    Ok thanks!
     
    #2315     Oct 12, 2020
  6. Hello Robert,

    I have a question related to the handcrafting method.

    I see that the main input for handcrafting is just the correlation matrix (assuming all items are volatility normalised). Apart from the slight Sharpe ratio adjustment, applied where we know the differences with great certainty (i.e. costs), the general assumption is that performance is equal across the items (assets or rules).

    I've been looking at EWMAC rule variations and their performance difference sticks out.

    For instance, compare the fastest variation, EWMAC 2x8, with the slowest, EWMAC 64x256 (giving each instrument equal weight and looking at pre-cost performance).
    For the period between 1960 and 1989 I'm getting Sharpe ratios of 1.25 and 1.11 respectively, but in more recent years, from 1990 to 2020, the Sharpe ratios are 0.23 and 0.70.

    The performance difference between the two rule variations is quite large: 0.47.
    The correlation between them is almost zero and the period tested is 30 years, so the difference seems statistically significant; however, the handcrafting method would not take this into account. In another scenario, one could add a losing trading rule to the portfolio and handcrafting would weight it on par with the good ones (whereas bootstrapping would not). What is your take on this issue?
     
    #2316     Oct 12, 2020
  7. You're correct that the difference between fast EWMAC and the rest after 1990 is quite striking, and an obvious counterexample to the idea that we use all data to estimate performance figures.

    In this blog post (https://qoppac.blogspot.com/2019/12/new-and-improved-sharpe-ratio.html) I outline a way to properly use Sharpe Ratios to reduce forecast weights accordingly. I would imagine that if you used a rolling 30 year window then by now the weight on fast EWMAC would be extremely small using this methodology.

    To a degree this is moot, because the fastest EWMAC rules would already be removed before weights were allocated, as they are too expensive for virtually all instruments.
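    The blog post's idea of shrinking a weight according to Sharpe ratio uncertainty can be sketched roughly as follows. This is not GAT's actual code: the normal approximation, the standard-error formula applied directly to the SR difference, and the function name are all simplifying assumptions of mine:

```python
# Hedged sketch: shrink a forecast weight by the probability that the
# rule's true Sharpe ratio beats the average, given sampling error.
import math

def sr_weight_multiplier(sr_diff: float, years_of_data: float) -> float:
    """Multiplier in (0, 1): P(true relative SR > 0) under a normal
    approximation. sr_diff is the rule's annualised SR minus the average."""
    # Rough standard error of an annualised SR estimate over N years
    # (applied here to the SR difference, ignoring cross-correlation)
    std_err = math.sqrt((1.0 + 0.5 * sr_diff ** 2) / years_of_data)
    z = sr_diff / std_err
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Fast EWMAC underperformed by roughly 0.47 SR over 30 years of data
mult = sr_weight_multiplier(-0.47, 30)
print(round(mult, 3))
```

    With 30 years of data a 0.47 SR shortfall produces a multiplier near zero, which matches the intuition that the weight on fast EWMAC "would be extremely small" under this methodology.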

    GAT
     
    #2317     Oct 12, 2020
  8. Thanks for the link. I think that blog post addresses the exact problem I had in mind.

    I understand the fastest EWMAC variations are mostly untradeable (due to the cost threshold). But even with slower ones the Sharpe difference is quite large, e.g. EWMAC 4x16 vs EWMAC 64x256 is 0.32 vs 0.70.

    My main worry was that you could come up with some really crappy rule and then you'd end up with a portfolio of 0.33 for momentum, 0.33 for carry and 0.33 for [insert whatever crappy rule] since you can't simply discard it without causing implicit fitting.

    I'll give the post a read now.
     
    #2318     Oct 12, 2020
  9. Hello Robert,

    I have read the post on the new and improved Sharpe ratio adjustment method, gave it some thought and came up with a couple of questions regarding it. Likely nothing major, but out of sheer intellectual interest I'd like to hear your opinion.

    1) The Sharpe ratio difference estimation depends on how much data you have. When looking for instrument weights this is straightforward. But when looking for forecast weights, we can (and probably should) pool our data across all instruments. Suppose we take 30 years of data but pool together 40 markets. Is the "years_of_data" parameter 30, or 1200 (30 times 40)? Or some value in between, since markets are correlated?

    2) The new method is based on estimating the differences between uncertain returns. That's fine for the performance data. But what about the costs, where we know the return differences with much greater certainty? It seems to me that the cost adjustment could be incorporated into the method (although the adjustment factor for costs is not that important, given that the maximum theoretical difference is less than 0.13 SR if we cut out expensive rules completely).
     
    #2319     Oct 13, 2020
  10. 1) That's actually quite a hard question to answer. I'd probably use 30 years as it's the easiest and most conservative.

    2) Use post-cost returns for the input. That assumes costs are known with 100% certainty (assuming costs are known with x% certainty is another wicked problem), which is close enough to the truth, leaving the method to worry only about the uncertainty of the pre-cost returns.
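    A quick sketch of why 30 years is the conservative choice for answer 1): the sampling error of a Sharpe ratio estimate scales as 1/sqrt(years), so the fewer years you assume, the wider the uncertainty and the stronger the shrinkage towards equal weights. The formula below is a standard approximation, not code from the blog post:

```python
# Why assuming 30 years rather than the pooled 1200 is conservative:
# fewer years -> larger standard error -> less confidence in SR
# differences -> weights shrunk harder towards equality.
import math

def sr_std_err(sr: float, years: float) -> float:
    # Standard approximation for the sampling error of an annualised SR
    return math.sqrt((1.0 + 0.5 * sr ** 2) / years)

conservative = sr_std_err(0.5, 30)    # treat the pooled data as 30 years
optimistic = sr_std_err(0.5, 1200)    # naive 30 years x 40 markets
print(round(conservative, 3), round(optimistic, 3))
```

    The optimistic figure is smaller by a factor of sqrt(40), which would make modest SR differences look far more significant than the correlated market data really justifies.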

    GAT
     
    #2320     Oct 14, 2020