Thanks so much Rob for the detailed answer. Another follow-up question I would like to ask is about risk management in trend following. Do you or any big firms look at risk contribution as a risk measure? For example, something I look at in my portfolio is the risk contribution of an asset group (e.g. currencies) and the risk contribution of a single futures market (e.g. AD). Do you find potential value in limiting them? For example, the first image is the hypothetical % total portfolio risk contribution of the 3 energies given their positions in 2015. The last image is the sum of the total contribution from the energies. Doing historical simulation, I found that risk contributions for interest rates are also very high historically (~50%). Not sure if these measures are something the pros look at too, and how they use this type of information. Thanks
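In case it's useful to others, here is a minimal sketch of how percentage risk contributions like these can be computed. The function name and all the numbers are hypothetical illustrations, not the actual calculation behind the pictures:

```python
import numpy as np

def risk_contributions_pct(positions, cov):
    """Percentage risk contribution of each market.

    positions: 1-D array of (signed) position weights
    cov: covariance matrix of market returns
    The contributions sum to 100% of portfolio variance.
    """
    w = np.asarray(positions, dtype=float)
    port_var = w @ cov @ w
    # each market's marginal contribution to variance, times its weight
    contrib = w * (cov @ w)
    return contrib / port_var * 100.0

# hypothetical numbers for three energy markets
cov = np.array([[0.040, 0.030, 0.020],
                [0.030, 0.050, 0.025],
                [0.020, 0.025, 0.060]])
positions = np.array([0.5, 0.3, 0.2])
print(risk_contributions_pct(positions, cov))
```

Note that these variance-based contributions always sum to 100%; contributions can exceed 100% in total under other decompositions (e.g. standalone risk over portfolio risk).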

That's a very interesting couple of pictures, thank you for posting them. In theory the risk contribution from an asset class will be proportional to the strength of the forecast, so if interest rates are contributing a lot it suggests they've had strong forecasts. 50% seems high, but of course risk contributions can add up to more than 100% depending on your methodology, because of diversification effects. I don't actually monitor this statistic, although it would be useful to do so. The pros do look at it. It might be worth setting a limit on the amount of risk per asset, and then reducing the positions pro-rata if this is exceeded. GAT
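The per-group limit with pro-rata reduction could be sketched like this. The cap level and the linear scaling are illustrative assumptions; scaling positions changes the variance contribution nonlinearly, so in practice you'd recompute risk afterwards and possibly iterate:

```python
def cap_group_risk(positions, group, group_risk_share, cap=0.5):
    """Scale a group's positions down pro-rata if its share of total
    portfolio risk exceeds `cap`.

    positions: dict of market -> position
    group: set of markets in the asset group (e.g. the energies)
    group_risk_share: the group's current fraction of total risk
    cap: maximum allowed fraction (0.5 here is arbitrary)
    """
    if group_risk_share <= cap:
        return dict(positions)
    # crude first-order scaling; recompute risk afterwards in practice
    scale = cap / group_risk_share
    return {m: p * scale if m in group else p
            for m, p in positions.items()}
```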

I thought I'd provide some more pictures with their underlying forecasts. I agree that a forecast should be proportional to its risk contribution, but only after a certain point. Before that point it should be risk reducing, if the underlying market's correlation with the rest of the portfolio is negative. But from analysing a few more assets in the portfolio, I agree that in general they are proportional. The following are for the crude oil markets and rate markets. The second picture details the risk contribution of rates, which is rather high in my opinion. But the correlations within my rates portfolio are also rather high. Correlation of forecasts:

Cross post http://www.elitetrader.com/et/index.php?threads/oh-no-not-another-python-backtester.296589/

Hello GAT, First, my wife assures me that your book will appear under our Xmas tree this year. I have a question re the mechanics of building a portfolio of stocks. Let's say I have a means to rank N stocks (or in reality any financial time series) from 1 to N in terms of relative strength, and that each day/week/month I'm going to allocate my capital equally to the top M stocks. For example, hypothetically, I may take all the Dow stocks, rank them 1-30 using my method, and then allocate my capital evenly among the top 3. I'll repeat this process every day/week/month (TBD), re-ranking and reallocating funds. Assume long only with a $200K portfolio, fully automated. The obvious question is how to approach selecting the universe of stocks I should rank, as this will have a very direct impact on the portfolio's performance. I could probably include all S&P 500 + Nasdaq stocks with maybe some volume filter. My concern is that ranking too large a universe of stocks will result in one-hit wonders rising to the top, while too small a list is my own form of curve fitting. In reality, defining the universe of possible stocks is already a bit of a curve fit. Any thoughts?

Hope you enjoy the book. Picking stocks in this way isn't curve fitting; it's a perfectly reasonable thing to do (equity market neutral hedge funds do exactly this, though obviously on the short side as well, and they also use a bunch of other filters apart from relative strength). Clearly a combination of a loose filter and a small number of stocks (3) will result in a portfolio with a lot of company-specific risk. If I was implementing this kind of strategy I'd probably look to hold at least 20 positions, probably more, depending on any fixed costs you have to pay to trade (fixed costs per ticket make trading small positions uneconomic). If you're not leveraging then you could invest all of your 200K, so that's 10K a position, and you could probably go lower. And I'd limit the universe, as you suggested, to more liquid stocks. The other concern I have is trading costs. If you're doing this every day, and you're only holding a very small number of stocks, your turnover would be insane. Again, if I was doing this I'd use (a) a slow measure of relative strength (maybe 3 months) and (b) a relatively large number of stocks. Together these should reduce the number of stocks drifting in and out each day to the point where the costs aren't killing you. GAT
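A sketch of the rank-and-allocate step with both suggestions baked in (a roughly 3-month relative-strength window and a larger number of holdings); the function name and parameters are illustrative assumptions:

```python
import pandas as pd

def top_m_weights(prices, m=20, lookback=63):
    """Equal-weight the top `m` stocks by trailing relative strength.

    prices: DataFrame of daily closes, one column per ticker
    lookback: ~63 trading days (about 3 months); a slow measure
        keeps turnover, and therefore costs, down
    Returns a Series of portfolio weights summing to 1.
    """
    # simple relative strength: total return over the lookback window
    strength = prices.iloc[-1] / prices.iloc[-lookback] - 1.0
    top = strength.nlargest(m).index
    weights = pd.Series(0.0, index=prices.columns)
    weights[top] = 1.0 / m
    return weights
```

Run at each rebalance date on the trailing price window; with a slow lookback and a reasonably large `m`, only a few names should rotate in and out each period.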

Question on the carry rule: why are you using 30 for the forecast scalar? Wouldn't it be better to determine this via a rolling window?

The same goes for all forecast scalars, and in my code I'll include out-of-sample estimation of scalars. (I wouldn't use a rolling window though; I'd use an expanding window, pooled across all the assets I'm trading, otherwise you'd lose the fact that carry is systematically higher in some asset classes than in others.) GAT
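An expanding-window, pooled estimate of the forecast scalar might look roughly like this. The target of 10 is the book's average absolute forecast convention; everything else here is an illustrative assumption:

```python
import pandas as pd

TARGET_ABS_FORECAST = 10.0  # book convention: average absolute forecast of 10

def expanding_forecast_scalar(raw_forecasts):
    """Out-of-sample forecast scalar on an expanding window,
    pooled across instruments.

    raw_forecasts: DataFrame of raw forecasts, one column per instrument.
    Pooling cross-sectionally preserves the fact that carry is
    systematically higher in some asset classes than in others.
    Returns a Series of scalars, each using only data up to that date.
    """
    # cross-sectional mean of absolute raw forecasts, then expanding mean
    pooled_abs = raw_forecasts.abs().mean(axis=1)
    return TARGET_ABS_FORECAST / pooled_abs.expanding().mean()
```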

In your book, you propose to calculate the Instrument Diversification Multiplier (let's call it IDM to prevent RSI (https://en.wikipedia.org/wiki/Repetitive_strain_injury)) by measuring or estimating the correlations and calculating 1 / sqrt(w × H × wᵀ), where w is the vector of weights and H is the correlation matrix. Don't you assume a Gaussian return distribution here? Could we calculate the IDM also like this: backtest the system with IDM = 1 and a given volatility target, like 20%, then calculate the realised volatility of the system and set IDM = vol_desired / vol_realised. The calculation could be done at the end, which would be the same as your calculation using correlations measured over the whole backtest, or on an expanding/rolling-window basis, using only data available at the time (out of sample). I think this should work, since the IDM is linear in your position calculation and stddev(a · x) = |a| · stddev(x) for a constant a. The calculation/backtest for the IDM should be done with otherwise unrealistic fractional positions (without rounding to whole contracts), I think. You could do the same for the Forecast Diversification Multiplier (FDM): backtest with FDM = 1, calculate combined forecasts, and set the FDM so that the mean absolute combined forecast over all instruments is 10 on average.
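For reference, the correlation-based formula from the question written out in code (w is the weight vector, H the correlation matrix; the floor at 1 is a common convention, not necessarily the book's exact treatment):

```python
import numpy as np

def instrument_div_multiplier(weights, corr):
    """IDM = 1 / sqrt(w H w'), floored at 1.

    weights: instrument weights summing to 1
    corr: correlation matrix of instrument trading subsystem returns
    """
    w = np.asarray(weights, dtype=float)
    portfolio_variance = w @ corr @ w  # w H w'
    return max(1.0, 1.0 / np.sqrt(portfolio_variance))

# two instruments, 50/50 weights, correlation 0.5
corr = np.array([[1.0, 0.5],
                 [0.5, 1.0]])
print(instrument_div_multiplier([0.5, 0.5], corr))
```

With perfectly correlated instruments the multiplier is 1 (no diversification benefit); it rises as correlations fall.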

Yes, you can also calculate things using realised vol. However it has some disadvantages, mainly that you need a much longer window to get the right answer. Correlations move around, of course, but you can get a good enough correlation estimate with just a couple of months of daily returns. The realised vol, however, will depend on (a) the correlations, (b) how well we're hitting our vol target, and (c) how much our forecasts deviate from their long-run average. Mainly because of (c), our realised vol will vary quite a lot; it wouldn't be unlikely for us to have lower vol than average for quite a long time. This means you need much more data to get the estimate right. I would say at least 5 years to be on the safe side, rather than just a couple of months, because I've seen periods of at least 3 years when forecasts have been much lower than average.

The main time this causes problems is when you add instruments to your system (as new price data comes in). Assuming they add diversification, initially you'd be undershooting vol. After a couple of months you have enough information to use their correlations (and I use a short cut: initially assuming that new instruments have the same average correlations as existing instruments in the same asset class). However, if you're using a rolling vol measure you have a problem. Suppose you use a short window. You adjust quickly to the presence of the new instruments, but what if right now your forecasts are much lower than normal? You'll end up with a much higher IDM than you should. Okay, let's use a long window of at least 5 years. Your IDM will end up at the right level, but it will take a long time to get there. Correlations are faster and more accurate.

I note in passing that the daily returns of trading systems are indeed not Gaussian, but we're basically assuming Gaussian returns (or at least symmetric ones) in a lot of places, so at least we're consistent (the exception being when working out the risk target, where we penalise negative-skew systems more heavily). The same argument applies to the FDM. GAT
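The realised-vol alternative, with the long-window caveat baked in, might be sketched as follows. The five-year minimum and the floor at 1 are assumptions based on the discussion above, not a tested recipe:

```python
import numpy as np
import pandas as pd

MIN_OBS = 252 * 5  # ~5 years of daily returns before trusting the estimate

def vol_based_idm(idm1_returns, vol_target=0.20):
    """IDM estimated as target vol / realised vol of an IDM=1 backtest,
    on an expanding window, refusing to produce a number until
    enough data has accumulated.

    idm1_returns: Series of daily returns from a backtest run with IDM=1
        (fractional positions, no rounding to whole contracts)
    """
    annualisation = np.sqrt(252)
    realised = idm1_returns.expanding(min_periods=MIN_OBS).std() * annualisation
    idm = vol_target / realised  # NaN until MIN_OBS observations exist
    return idm.clip(lower=1.0)
```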