I now put both alternatives (start from zero, start from optimal rounded) through a broad range of tests with different portfolios, time frames and capital. Some qualitative observations:

Zero has more leeway to put on large instruments that get selected in the first few iterations and reduce the tracking error by a big chunk. Because of that it looks a little more diversified than rounded in the individual positions, but overall there is not much difference in performance. I would consider them equals on a long backtest.

Rounded has about 10% higher costs but makes that up with slightly lower volatility. In general it undershoots the risk target more than zero. In position selection, rounded tends to track the smaller sized instruments more exactly and therefore looks a little less diversified.

I also tested different shrinkage methods on the correlation and covariance matrices. On the full backtest the differences are negligible.

Now to the ugly part: all these differences can make a huge difference in the short term. You may end up with completely different performance for something like 2 years. I did not expect so much sensitivity to shrinkage methods, shadow costs, the cost buffer or the search algorithms. It could be quite a frustrating experience if you have an unlucky combination running. But I guess a trend follower is kind of used to that.

Average tracking error itself is a pretty useless measure in absolute terms: it does not reflect closeness to the whole portfolio.

When in doubt, do everything? As a little gimmick version I added up the positions of both search methods and divided by 2 (rounding up). It actually is the best version by a small margin and hits the risk target best.
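The "gimmick" blend at the end can be sketched in a few lines. This is a minimal illustration, not code from the post: the instrument names and position values are made up, and since the post just says "rounding up" without saying how shorts are handled, I use a plain ceiling (which for negative positions rounds toward zero).

```python
import math

def blend_positions(pos_zero: dict, pos_rounded: dict) -> dict:
    """Average the integer positions from the two search variants
    ('start from zero' and 'start from optimal rounded'), rounding up.
    Instruments missing from one dict are treated as a zero position."""
    blended = {}
    for key in set(pos_zero) | set(pos_rounded):
        avg = (pos_zero.get(key, 0) + pos_rounded.get(key, 0)) / 2
        blended[key] = math.ceil(avg)  # 'rounding up' as stated in the post
    return blended

# Illustrative only: 3.5 rounds up to 4; -1.5 ceils to -1 (toward zero for a short)
print(blend_positions({"SP500": 3, "GOLD": -2}, {"SP500": 4, "GOLD": -1}))
```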
Power supply with network redundancy (cellular connection, for example), or cloud based with AWS or another provider
Shrug my shoulders, noting that if all my trades were delayed by 2 weeks in my backtest it would lead to a loss of 3bp a year in SR, or basically nothing over 2 weeks. Rob
I've just started the effort of being a bit smarter about my forecast weights, so I went back and re-read the handcrafting series and a few other related blog posts. Regarding the above: if I understood correctly, the differences in forecast weights for instruments within the same asset class are entirely due to the SR adjustment and the re-weighting to 1 after expensive rules are removed? Is the SR adjustment done on a per-instrument basis, or on an entire group (asset class in this case)?
Hm, in that case I don't really see where the (small) differences in forecast weights for instruments in the same asset class come from. E.g. SP500 and NASDAQ: same asset class, both really cheap to trade, yet they have somewhat different weights.
Rob, I've been working through the pysystemtrade code for calculating net returns for optimization, and stumbled upon the gross returns adjustment for SR costs, here https://github.com/robcarver17/pysy...136b501d6d514549e2cf/sysquant/returns.py#L174 Shouldn't that line Code: net_returns = daily_gross_returns_for_column + daily_returns_cost_as_ts be subtracting instead of adding? I tried to trace back the call stack thinking that you might be passing in costs as negative, but I don't think that's the case. It would make sense to subtract costs from gross returns to get the net returns. Thanks!
No, you are right. I do have costs as negative returns by convention elsewhere, but the SR costs are positive. I don't use SR costs any more, hence why I didn't spot this. Fixed in the develop branch. Rob
Is that fix on a local branch somewhere? Looking at the code on both the master and develop branches, I see that when estimating weights, ultimately pandl_for_instrument_forecast() is called, this one https://github.com/robcarver17/pysy...2cf/systems/accounts/account_forecast.py#L150, which does use SR costs for calculating the net returns later used in returnsPreProcessor.