Nooby McNoob becomes a quant

Discussion in 'Journals' started by nooby_mcnoob, Mar 24, 2017.

  1. I admit it's an industry full of BS artists, but this site is full of very credible material, otherwise I wouldn't recommend it (and I get nothing from doing so).

    At some point I'll probably write an ebook myself (technically I already have, since my physical book is also available as an ebook, but obviously I mean a pure ebook). It's the ideal product if your total sales are likely to be less than 1,000 units, in which case no publisher will touch you unless the cover price is more than $150. So it's ideal for a niche market.

    GAT
     
    #61     Mar 27, 2017
    algofy likes this.
  2. +1

    1) No matter how careful you are with testing, there is the problem of tacit overfitting, where to a degree you will only test things you already know will work from prior knowledge (see the sketch below this list)
    2) For slower models, long-only asset returns going forward will be lower than over the backtest period (say since 1980), primarily because you won't be benefiting from a secular fall in inflation, a fall in yields, and a repricing of equities.
    3) For quicker models, you have the usual problem of alpha decay: if something is that good, it will eventually be crowded out.
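
    To make point 1 concrete, here is a minimal sketch in Python (purely illustrative, synthetic data; every "rule" is pure noise and all the numbers are made up): if you only backtest the rules you already remember doing well, the backtested Sharpe Ratios you see are flattering by construction.

    Code:
    import numpy as np

    rng = np.random.default_rng(0)
    n_rules, n_days = 1000, 2500                  # 1,000 candidate rules, ~10 years of daily returns
    returns = rng.normal(0.0, 0.01, size=(n_rules, n_days))   # every rule has a true edge of zero

    # annualised Sharpe Ratio of each candidate rule over the backtest
    sharpe = returns.mean(axis=1) / returns.std(axis=1) * np.sqrt(252)

    # "prior knowledge": we only bother backtesting the rules we remember doing well
    tested = sharpe[sharpe > 0.5]

    print(f"mean backtested SR, all candidate rules:   {sharpe.mean():.2f}")   # roughly zero
    print(f"mean backtested SR, rules we chose to test: {tested.mean():.2f}")  # above 0.5 by construction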

    GAT
     
    #62     Mar 27, 2017
  3. Zzzz1

    What do you mean by "it's full of very credible material"? I was under the impression that you are the author yourself, no?

     
    #63     Mar 27, 2017
  4. Zzzz1

    It is not overfitting to only test things you know might work from prior experience. Overfitting is designing algorithms that only work in certain environments but not others, or that are not robust over time, and even then it may not be overfitting. A strategy, for example, is not necessarily overfit just because it works with spot EM currencies but not currency futures. One has to be more rigorous to figure out why there is a difference in performance and risk metrics.

    But to get back to the original point, applying what one knows works is imho the smartest and most efficient way to get going. It has nothing to do with overfitting. Maybe with being biased, but there is nothing wrong with being biased.

     
    #64     Mar 27, 2017
  5. No, quantstart isn't my site.
     
    #65     Mar 27, 2017
  6. Zzzz1

    Thanks for clarifying

     
    #66     Mar 27, 2017
  7. sle

    Overfitting is the process of introducing extra parameters into your alpha function, or changing existing ones, with the sole purpose of improving your back-test performance. It can be hard to avoid unless you are able to apply theoretical models.
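
    To illustrate with a rough sketch (not anyone's actual system; the prices are a simulated random walk with no edge at all): sweep a single lookback parameter, keep whichever value gives the best backtested Sharpe, and the in-sample number is flattering by selection, while out of sample there is no reason to expect anything but noise.

    Code:
    import numpy as np

    rng = np.random.default_rng(1)
    prices = 100 * np.exp(np.cumsum(rng.normal(0, 0.01, 5000)))   # driftless random walk, no edge
    rets = np.diff(prices) / prices[:-1]
    split = 2500                                                  # in-sample / out-of-sample boundary

    def strategy_returns(lookback):
        """Daily P&L of a crude rule: long when price is above its moving average, short below."""
        ma = np.convolve(prices, np.ones(lookback) / lookback, mode="valid")
        signal = np.sign(prices[lookback - 1:-1] - ma[:-1])       # position held over the next day
        strat = np.zeros_like(rets)
        strat[lookback - 1:] = signal * rets[lookback - 1:]
        return strat

    def ann_sharpe(x):
        return x.mean() / x.std() * np.sqrt(252)

    # the "overfitting" step: tune the lookback purely on in-sample back-test performance
    in_sample = {lb: ann_sharpe(strategy_returns(lb)[:split]) for lb in range(10, 200, 10)}
    best = max(in_sample, key=in_sample.get)

    print(f"best lookback in sample: {best}, SR = {in_sample[best]:.2f}")
    print(f"same lookback out of sample: SR = {ann_sharpe(strategy_returns(best)[split:]):.2f}")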
     
    #67     Mar 27, 2017
  8. Zzzz1

    You provided a pretty good textbook definition. I believe it is, however, a term and concept overused by many STEM researchers. Biotech and medical researchers working on cancer cures focus on experience and observations of what they already know; they don't start by testing unknown variables, but base their core effort on approaches that look promising to them given empirical evidence and prior knowledge. In the same way, someone developing new HFT algorithms and order book semantics will base his research and testing on phenomena observed and garnered from years of risk management and trading experience. That is why, in my own experience, the most successful individuals who develop trading algorithms actually come from the trading and risk management field, not from engineering or computer science. The academic field often postulates approaches that are somewhat stifled by a fear of overfitting, and while it is a valid concern, it often leads nowhere, because meaningless metrics and parameters are introduced in the hope of avoiding overfitting and bias.

    I have worked, and still work, with some brilliant network and computer engineers, but they could not for the life of them come up with a single profitable trading algorithm, neither in low-latency environments nor, even less so, on longer time frames and holding periods. My point is that the notion by @globalarbtrader that testing what is known to work, and focusing on that, leads to overfitting is invalid; rather, the opposite holds true: it actually leads to a higher success rate and should not be termed "overfitting".

     
    Last edited: Mar 27, 2017
    #68     Mar 27, 2017
    MoneyMatthew and FX xtc like this.
  9. Perhaps 'overfitting' is the wrong term. But testing stuff you already know will work will push up your backtested Sharpe Ratio, by construction, because you're accessing information that wouldn't have been available in the past. So the argument that one should revise down backtested SR is still valid, IMHO.

    GAT
    Definitely agree with that.

    It is a valid approach - I didn't say it wasn't - and indeed you can't really avoid doing it, but I think one should recognise that it's likely to push up the backtested SR which is what the original discussion was about.

    GAT
     
    #69     Mar 27, 2017
  10. Zzzz1

    Isn't it desirable to push up your Sharpes? The thing to avoid is optimizing so that you only perform better in the past but not in the future, because the optimization itself did not add stability or robustness. I guess we agree and just used different terminology?

     
    #70     Mar 27, 2017