Discussion in 'Strategy Building' started by Sergio77, Jul 24, 2016.
Interesting paper by Mike Harris
It seems like this article is suggesting that backtesting itself is an exercise in futility. It looks at an example where a system developer has discovered a market dynamic that has persisted for decades and appears to be robust. Isn't this the entire purpose of backtesting a strategy? According to this paper, the only way to validate a strategy would be to forward test the system, either by paper trading or with real money. I think anyone conducting a backtest is doing so under the assumption that patterns in the past will continue in the future; if this assumption isn't valid, neither is any test on historical data.
Before one backtests anything, one should have solid reasons why the system should work consistently, and then test that hypothesis. Otherwise, you are heavily prone to selection bias and random statistical spikes.
In real development, your system often has holes that the market slips through, so it's really a process of continuous improvement rather than finding the one robust system that lasts for decades. For improvements, too, it matters what solid reasons you have for making them; without them, you risk actually making things worse, or gaining no benefit. Statistically, this happens for 2 out of 3 "improvement" activities in typical organizations.
Really, the myopic feedback loop of most backtests can in many cases fool someone into thinking they've struck gold and becoming overconfident in their system's performance. For purely technical traders, this is hard news, because there is very little concrete to build on as a foundation in the first place.
One should be wary of systems that heavily require parameter optimization, as anything can work in hindsight. If a system can't be specified before events unfold, its logic is lacking. You never know whether an edge will last 1 day, 5 years or 100 years, when it will come back, or why, but we do know market dynamics change over time.
Testing out of sample, forward testing, real trading: all of these work toward the same goal, but none will help if the system is fundamentally flawed in the first place. Often (hopefully not too often) only a full makeover will do, then with the benefit of greater experience and a clean slate.
If a system were perfect, no backtest or any other test would be necessary, in a dream world of course. So the trading plan, and the process behind it, is the key. Of course, the perfect system would include all necessary contingency plans, unless it's a time machine.
I totally agree.
I was just saying that the example in the paper is about as simple, robust and legitimate a trading system as one could come up with. If one were to come up with the hypothesis that "when markets go up, they tend to continue to go up", they would take the last 4 decades of market data to test that initial premise. They would find that, regardless of lookback window, a basic trend-following system outperforms buy and hold to a significant degree.
This would confirm their initial hypothesis (or rather, reject the null hypothesis with some level of confidence). Assuming that you believe this behavior will continue (which you should if you're going to do any type of testing on historical data), you trade it going forward.
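The test described above can be sketched in a few lines. This is a hedged illustration only: the random-walk data, drift, lookback and starting price are all made up for demonstration, and real market data would be needed to actually test the premise.

```python
# Minimal sketch of a trend-following vs buy-and-hold comparison.
# The synthetic return series is a stand-in for real market data.
import random

random.seed(42)

# Synthetic daily returns with a slight upward drift (illustrative numbers).
returns = [random.gauss(0.0003, 0.01) for _ in range(5000)]

# Build a price series from the returns.
prices = [100.0]
for r in returns:
    prices.append(prices[-1] * (1 + r))

def sma(series, n, i):
    """Simple moving average of the n values ending at index i."""
    return sum(series[i - n + 1:i + 1]) / n

lookback = 200  # arbitrary choice for the demonstration
strategy_equity = 1.0
buy_hold_equity = 1.0
for i in range(lookback, len(prices) - 1):
    r = prices[i + 1] / prices[i] - 1
    buy_hold_equity *= 1 + r
    # Long only when price is above its moving average, otherwise flat.
    if prices[i] > sma(prices, lookback, i):
        strategy_equity *= 1 + r

print(f"trend-following equity: {strategy_equity:.2f}")
print(f"buy-and-hold equity:    {buy_hold_equity:.2f}")
```

On real data one would also vary the lookback window, as described above, to check that the result is not an artifact of one parameter choice.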
If the system is unprofitable going forward, there is nothing one could have done during the system development process to prevent this. If you believed a shift in market behavior was coming in the near future, you shouldn't be using past data for testing in the first place.
I guess I just don't understand the point the paper is trying to make. Obviously, if the market behavior that's driving your profits ceases, your system will fail.
It should be in the Abstract, but the paper's full text focuses a lot on the limitations of backtest overfitting. I don't see them backing up this claim in the Abstract, but I don't disagree with it either:
In this paper we present two examples that demonstrate the limitation of quantitative evaluation of trading strategies and we claim that the most effective way of guarding against overfitting and selection bias is by limiting the applications of backtesting to a class of strategies that employ similar but simple predictors of price. We claim that determining when market conditions change is in many cases fundamentally more important than any quantitative claims about trading strategy evaluation.
The Conclusion has some more explanation, mostly common-sense material, although not really backed up by the paper either. E.g.:
“There’s a creative moment when you think of a hypothesis, maybe it’s that interest rate data drives currency rates. So we think about that first before mining the data. We don’t mine the data to come up with ideas.”
Only naive practitioners feed data to a machine learning model in hope that it will generate a significant result. Quantitative analysis shows that results from multiple trials can be misleading.
At least for me, such research may at times be interesting, but spending so much time writing and studying papers about such simple systems, built on weak trading premises that ignore risk management, seems like something of a waste of time if one really wants to make something worthwhile. The Conclusion, in a convoluted but perhaps more "correct" way, says very much the same things I stated in my previous post. Ironically, the paper's Conclusion seems to agree with this too:
Although the academic community has contributed significantly in raising awareness about certain issues, it cannot provide a framework for generating those “creative moments” Leda Braga referred to above but only investigate whether a moment was not as creative as was expected. Although this is partly progress, it is far from a solution to the problem, if such a solution exists at all.
In my mind, it's the quality of the feedback loop during development, testing and real trading that determines whether one optimizes on potentially false premises or not, and with purely technical data you never really know anyway. That higher earnings may drive price higher in the future is a much more concrete hypothesis, but it has less to do with trading and more with being part of an investment strategy.
Can you be more specific? Especially about risk management?
I'll try. Most academic papers and training material consider only pure, idealized technical trading signals over a known and static sample period: signals like crossovers, breakouts, candle patterns and the like, without any understanding of what the signals actually signify. Price is just price, but an indicator or candle pattern has lost some of the information contained in price, so it is less than price itself. Often such research is driven by too much greed and hope: trying to maximize profits on obsolete historical data, without adequate research into preventing the potential big losses and fatal failures to be faced in the uncertain future.
There are more important issues in trading, such as what percentage of the account to risk per trade (both per-trade risk and total position-wise), position sizing, how to avoid big losses, where to put stops, how and where to take profit, and so on, than relying on simple signals to do some twisted magic that is somehow supposed to make you come out in the green. If one is serious about trading, I'd recommend researching the topic of risk management extensively. A start might be to explore what Van Tharp has to say:
Searching for "position sizing" reveals many more sources of information, as does searching for "trading risk management" directly.
Hard to be more specific, since risk management is such a broad topic that it really covers every nook and cranny of trading.
To put it very simply, and maybe a bit motivationally: if you learn to prevent losses, especially the bigger ones, over time you should be winning.
I thought Van Tharp has admitted he has never traded? Can anyone else confirm this? Do you want to take advice from someone who has never actually used his methods in trading?
Position sizing is an overrated conundrum. It is actually really simple; you do not need fanciness. You can ask Ernest Chan. He manages a real fund with real money, using a really simple position sizing method. He always responds to questions.
Many ways to skin the cat, or look at a cat in a box
Tharp doesn't trade himself as can be confirmed in this interview: http://www.donttalkaboutyourstocks.com/market-wizard-dr-van-k-tharp/
It was fascinating to learn that Tharp doesn’t actually trade himself, despite his vast knowledge of what makes a successful trader.
I had forgotten that fact, but I didn't mean one should study Van Tharp only. Besides, who said practitioners make good teachers? And why would a successful trader reveal anything beyond general directions to others? I study for myself, but reading what others think provides new clues to investigate on my own. Often I've rediscovered methods invented by others, but that takes too much time, so it's much better to read widely and gather impulses to work on.
My words: Position size is everything in trading!
Realistically, without a position size, there's no trade.
Do you trade yourself?
Chan's a good guy, but I haven't managed to actually use such methods myself. Funds operate in a completely different space than someone like me, who, because I scaled down to learn more, has to pay 7% of the position per trade to the broker! I'm planning to scale up, but I don't have the scale such funds have to exploit fleeting edges anyway, and I'm not planning to go there either. Some fund-trading methods have even become public knowledge: since you have to be a fund to implement them, they're unavailable to retail traders in practice, so there's little damage in revealing the rules.
You can use a simple position sizing rule (ATR x N is statistical and fine in many cases). I don't plan to overcomplicate it myself. However, position size is key to trading in general, and what you omit in position sizing needs to be compensated for in other ways. For me, rebalancing just costs too much anyway, but funds do it all the time. It's an ideal to strive for, though unreachable for typical retail traders, even if some of it may become usable as one scales up.
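The ATR x N idea above can be sketched as a fixed-fraction rule: size the position so that a stop placed N ATRs away risks a fixed fraction of equity. The function names, the 1% risk figure and the 2-ATR stop below are illustrative assumptions, not recommendations.

```python
# Hedged sketch of ATR-based position sizing: risk a fixed fraction of
# equity per trade, with the stop placed a multiple of ATR from entry.

def true_range(high, low, prev_close):
    """Wilder's true range for one bar."""
    return max(high - low, abs(high - prev_close), abs(low - prev_close))

def atr(bars, period=14):
    """bars: list of (high, low, close) tuples; simple average of true ranges."""
    trs = [true_range(h, l, bars[i - 1][2])
           for i, (h, l, c) in enumerate(bars) if i > 0]
    return sum(trs[-period:]) / min(period, len(trs))

def position_size(equity, risk_fraction, atr_value, atr_multiple=2.0):
    """Shares such that a stop atr_multiple * ATR away risks risk_fraction of equity."""
    risk_per_share = atr_multiple * atr_value
    return int((equity * risk_fraction) / risk_per_share)

# Example: $100,000 account, risk 1% per trade, ATR = $2.50, stop 2 ATRs away.
# $1,000 at risk / $5 risk per share = 200 shares.
print(position_size(100_000, 0.01, 2.50))  # prints 200
```

The point is only that the sizing follows mechanically from the risk budget and the stop distance; the budget itself is the real decision.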
The way I view it is this: position sizing IS trading. The only question to ask in trading is "what should my position be given everything I know about this market?". To me, there is no such thing as an entry or an exit as they are traditionally defined. There is only an ideal position to have on given the current risk and reward characteristics of the market. How you define those characteristics is within the context of your system/methodology. But there are times when the ideal position is to be long, or short, or flat, and the amount or size is determined by the current outlook on the market.
That being said, you need an edge. That is first and foremost. You can't position size your way out of a system with no edge. In fact, you can turn a profitable system into an unprofitable one through position sizing, but not vice versa. Imagine the case where you win half your trades, you gain 55% on your wins, and you lose 50% on your losses. Profitable if you trade at constant size, ruinous if you scale up.
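The arithmetic behind that last example can be checked directly. A small sketch (the alternating win/loss sequence is a simplification; the order of trades doesn't change either result):

```python
# A 50/50 system with +55% wins and -50% losses: profitable at constant
# size, ruinous when the whole equity is staked on every trade.

n_pairs = 100  # alternate one win, one loss, for illustration

# Constant size: bet 1 unit of the starting stake each trade.
constant = 0.0
for _ in range(n_pairs):
    constant += 0.55   # winning trade gains 55% of one unit
    constant -= 0.50   # losing trade loses 50% of one unit
# +0.05 units per win/loss pair -> +5 units after 100 pairs.

# Full compounding: stake the entire equity every trade.
compounded = 1.0
for _ in range(n_pairs):
    compounded *= 1.55
    compounded *= 0.50
# Each pair multiplies equity by 1.55 * 0.50 = 0.775, so equity decays to ~0.

print(f"constant-size P&L:  {constant:+.2f} units")
print(f"compounded equity:  {compounded:.2e}")
```

Same trades, opposite outcomes: the sign of the edge survives only if the sizing scheme keeps the geometric growth factor above 1.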
This thread got derailed fast.