I will try to simplify the problem statement for the OP. Consider two trading strategies, S1 and S2. They trade on different time frames, so the frequency and duration of the trades produced by S1 and S2 differ, but are possibly correlated (positively or negatively). Given a finite amount of capital C and available leverage L, what is the optimal (in the sense of maximum risk-adjusted return) allocation of capital between S1 and S2? Here are some "naive" solutions which illustrate the issues:
1. Allocate all capital to whichever strategy triggers first. The drawback is that capital is denied to the second strategy if it triggers while the first strategy is in a position.
2. Allocate half of the capital to each of S1 and S2. The drawback is that half of the capital may sit unused most of the time.
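A toy simulation can make the trade-off between the two naive schemes concrete. This is only a sketch: the trigger signals below are random placeholders, not real strategy output, and the 30% activity rate is an arbitrary assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
T = 10_000
# Placeholder signals: True while a strategy wants capital, else False.
active = rng.random((2, T)) < 0.3
s1, s2 = active

# Scheme 2 (50/50 split): average fraction of total capital deployed.
utilization_split = 0.5 * s1.mean() + 0.5 * s2.mean()

# Scheme 1 (winner takes all): walk forward, granting all capital to the
# first strategy that fires; count periods the other strategy is locked out.
holder = None
denied = 0
for t in range(T):
    if holder is not None and not active[holder][t]:
        holder = None                       # current trade exited
    for i in (0, 1):
        if active[i][t]:
            if holder is None:
                holder = i                  # capital granted
            elif holder != i:
                denied += 1                 # opportunity denied

print(utilization_split, denied / T)
```

With uncorrelated signals, the 50/50 split leaves most capital idle most of the time, while the winner-takes-all scheme regularly locks out the second strategy; correlating the signals shifts that balance, which is exactly the tension in the question.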
Yes, I got that, but I'm trying to understand why there is something special about this portfolio allocation problem compared to the classic asset related one. GAT
In the classic case (Merton-Markowitz), you allocate your capital to N assets simultaneously and hold those assets until the next optimization period. This is different from allocating capital to N trading strategies, because it doesn't make sense to allocate capital to a strategy before it actually fires a trade. Similarly, it makes no sense to tie up capital in a strategy which has just exited a trade. The optimality is in having enough capital to allocate when a new opportunity comes along, while still utilizing maximum capital for the current opportunities. In other words, you want to maximize both capital utilization and opportunity utilization, and these two trade off against each other. The question is: what's the optimal balance?
Oh sorry, I wasn't clear. What I meant was that I don't understand the difference between (a) allocating between trading strategies trading different instruments, and (b) allocating between different strategies on the same instrument. Both have the problem of non-constant risk, which means either hitting risk-budgeting constraints or, on average, coming in significantly under the total risk budget (if you read my earlier comments on the thread, this is the problem I was addressing). Even in the simplest (a) case of risk parity targeting constant risk with two instruments, if you're fully invested (max leverage or zero cash) and the risk of one instrument drops, you will have a problem. What I don't get is that the OP seems to think there is something special about case (b). Except for position limits in (b), I think the problems are common to both. And to reiterate, there is no easy answer. GAT
I agree there is no easy answer. What's special about case (b) is that some strategies may want to be long instrument I while other strategies want to be short the same instrument I at the same time. This makes it unclear how to approach the problem from the capital allocation angle. If strategy S1 wants to be 10 contracts long, and S2 wants to be 10 contracts short the same instrument, it appears wasteful to allocate (or reserve) any capital to either S1 or S2.
@nonlinear5 I think you have a pretty strong understanding of the first half of my question. Thanks for doing a better job expressing my concerns than I did. To clarify my babbling, I'll break my question into two separate concerns.
1. As @nonlinear5 did a great job summing up, the first issue is indeed that traditionally you allocate and reoptimize weights discretely over N assets. There is always sufficient capital, and capital utilization starts out roughly the same after each discrete reoptimization. If we have trading strategies that demand capital at different times, that might not be optimal. I'll let him take the explanation from here.
2. My second issue, and the one that appears more divisive, is my intuition about the risk management implications of doing this. Let's use VaR to set up a toy example. VaR, of course, is a function of portfolio volatility * portfolio value.
- In the multi-asset discrete case, we get a clear diversification benefit from adding assets to the portfolio, because portfolioVariance = weight1^2*sigma1^2 + weight2^2*sigma2^2 + 2*rho12*weight1*weight2*sigma1*sigma2, and the cross term is damped whenever rho12 < 1. If we were just holding some discretely optimized portfolio of multiple assets, VaR wouldn't jump around any more than the normal fluctuations of a portfolio of stocks or whatever.
- In the multi-strategy, single-asset case, we get no such benefit from a value-at-risk standpoint, because at the end of the day all the strategies take their risk exposure from the same asset: portfolioVariance = sigmaOfCommonAssetTraded^2 scaled by the squared net position. Now, also consider that if these are trading strategies, and not assets we allocate a fixed amount to discretely, then as @nonlinear5 mentions, strategy1weight and strategy2weight bounce around all the time. So the VaR looks like sigmaOfCommonAsset * (whatever number of contracts the strategies want to hold cumulatively). My concern is that VaR will jump around wildly and cause me to concentrate too much risk.
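To make the contrast concrete, here is a minimal numeric sketch of the two cases. All the volatilities, weights, and the correlation are made-up placeholder values, not estimates from any real data.

```python
import math

w1, w2 = 0.5, 0.5
sigma1, sigma2, rho = 0.20, 0.30, 0.2

# Two-asset case: the cross term is damped by the correlation rho < 1,
# so portfolio volatility is below the exposure-weighted asset volatility.
var_two_assets = (w1**2 * sigma1**2 + w2**2 * sigma2**2
                  + 2 * rho * w1 * w2 * sigma1 * sigma2)

# Two strategies on the same asset: the exposures simply add, and the only
# volatility in play is the asset's own, so variance scales with the
# squared net position -- no diversification at all.
sigma_asset = 0.25
net_exposure = w1 + w2          # both strategies long
var_one_asset = (net_exposure * sigma_asset) ** 2

print(math.sqrt(var_two_assets), math.sqrt(var_one_asset))
```

The single-asset case ends up at exactly the asset's volatility times the net position, so any VaR limit is driven entirely by how many contracts the strategies cumulatively want, which is the instability described above.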
I'd much prefer a method that keeps the risk a lot more static.
Here is a recent paper on the subject: http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2259133 The author uses the Kelly criterion to figure out the asset weights numerically. That is, he looks for the maximization of log(R), where R is the portfolio return. Interestingly enough, the solution falls on the Markowitz mean-variance efficient frontier if the return distribution is not too tail-heavy. Unlike the asset return distribution, which is likely to be approximately log-normal, strategy returns may be decidedly non-normal. However, the concept offered in the paper can still be used. Specifically, instead of attempting an analytical solution, figure it out numerically:
1. Choose a portfolio performance metric, such as log(R), or R/stdev(R), or something else.
2. Perform a brute-force search that maximizes the chosen metric over the weights applied to all strategies in the portfolio. If you have N strategies and M allocation levels (from 0% to L*100%, where L is leverage), you'd have approximately M^N combinations to try. Effectively, you are looking for the arguments w1, w2, ..., wN which maximize U(s1*w1 + s2*w2 + ... + sN*wN), where U is your utility (i.e. the chosen portfolio performance metric), s1, s2, ..., sN are the trading strategies' return streams, and w1, w2, ..., wN are the capital allocation weights.
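The brute-force search in step 2 can be sketched in a few lines. This is only an illustration: the strategy return streams are random placeholders, the Sharpe-like utility and the 21-level grid are arbitrary choices, and the leverage cap on the weight sum is one possible constraint among many.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy per-period returns for N = 2 strategies (placeholder data).
strategy_returns = rng.normal(0.001, 0.02, size=(2, 250))

L = 2.0          # available leverage
M = 21           # allocation levels per strategy, from 0 to L

def utility(r):
    """Sharpe-like metric: mean over std of the portfolio return stream."""
    s = r.std()
    return r.mean() / s if s > 0 else -np.inf

levels = np.linspace(0.0, L, M)
best_u, best_w = -np.inf, None
for w in itertools.product(levels, repeat=len(strategy_returns)):
    if sum(w) > L:                  # total allocation capped by leverage
        continue
    port = np.dot(w, strategy_returns)   # portfolio return stream
    u = utility(port)
    if u > best_u:
        best_u, best_w = u, w

print(best_w, best_u)
```

Note how quickly M^N grows: at 21 levels, 2 strategies mean 441 candidate weight vectors, but 10 strategies mean ~1.7e13, so past a handful of strategies you'd swap the grid for a proper optimizer.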
My only point here is that targeting a particular constant risk (however measured) isn't a great move, except for certain kinds of strategy. Generally I think it's better to let risk vary with the strength of the forecast. This is particularly the case when you have "sparse" forecasts (either at the combined-forecast or individual-forecast level); basically a very non-normal distribution of forecasts over time. Measuring the standard deviation or 5% VaR point of a non-stationary, non-normal distribution is always going to be ugly. Sure, your expected risk will jump around a lot if you have uncorrelated, sparse signals; sure, it will sometimes be too high, and we're back where we started. GAT
I still don't really see the issue and maybe some of what I say would echo what the other folks have already stated... Ultimately, it seems to me that with strategies, just like with assets, what you have is still an arbitrarily complicated linear programming problem that you have to solve at every sample point. Specifically, at every interval where you choose to re-evaluate, your task is to allocate risk to your strategies such that you maximize some measure of expected return, be it ex-ante Sharpe, absolute return, or whatever, while minimizing expected resource consumption. You also probably want to impose some hard constraints, such as max drawdown. That, to me, is still the basic structure of the problem, regardless of the specifics. How sophisticated you want the framework to be is entirely up to you. I don't think that particular rabbit hole has a bottom. For instance, if you want to impose additional rules that smooth your allocations, there's nothing to prevent that.
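The re-evaluation step described above can be phrased as a small linear program. A minimal sketch with `scipy.optimize.linprog`, where the expected returns, the per-strategy risk loads, and the budgets are all made-up placeholder numbers (a real setup would use ex-ante estimates and likely a richer risk model):

```python
import numpy as np
from scipy.optimize import linprog

mu = np.array([0.08, 0.05, 0.12])      # expected return per unit of capital
risk = np.array([0.20, 0.10, 0.35])    # risk consumed per unit of capital
risk_budget = 0.25                     # hard cap on total risk
max_total = 1.0                        # fully-invested capital cap

# linprog minimizes, so negate the expected returns to maximize mu @ w,
# subject to: risk @ w <= risk_budget, sum(w) <= max_total, w >= 0.
res = linprog(
    c=-mu,
    A_ub=np.vstack([risk, np.ones_like(mu)]),
    b_ub=[risk_budget, max_total],
    bounds=[(0, None)] * len(mu),
)
weights = res.x
print(weights, mu @ weights)
```

Smoothing rules (e.g. a penalty or bound on the change from last period's weights) slot in as extra constraints on the same program, which is the "arbitrarily complicated" part: the structure stays the same, only the constraint list grows.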
I actually stumbled on this paper myself while doing my pre-ET-post diligence. This dood at stackoverflow appears to be tackling a very similar problem. I'll read the paper and get an understanding of what it's talking about, even though it might not add much past what we already know about the efficient frontier. It might be a while with finals still very much in swing, but this is an interesting paper. Thanks a lot.
You know, GAT, you may have a point about the constant VaR being a handicap. I'm still scared--very scared--of just letting the VaR run wild, but I like the idea of letting the max tolerable VaR (or whatever risk metric) depend on the confidence we place in the forecast. Devil's advocate: although I quite like this idea and intuitively it makes a ton of sense, it has a similar ring to trading one's equity curve, in that trade sizing depends on the confidence we place on a trade being profitable. The difference, of course, is that the equity-curve approach establishes the strength of a forecast purely from recent trade history, which might not be the best idea. I've seen a lot of naysaying about trading the equity curve, but what you propose is reasonably different and worth looking into, and I intend to. What I really, really need, though, if I go with something like approach 2, is a way to estimate the strength of a forecast. If you have any suggestions, I'm all ears; I think I can come up with some decent ideas myself too. Once I get this "strength of forecast" parameter, it might even be a good idea to use one of those ML aggregation approaches I clearly like (or regression, or what have you) to come up with an aggregated parameter of the overall strength of the signals from all strategies combined, which could dictate my max VaR.
@Martinghoul (and all): let me give you a bit more information about what I'm interested in doing, as I know you're an options guy.
My research project this semester was a volatility trading strategy that I constructed using daily data. Additionally, I have a pretty deep interest in intraday options and futures trading and am working on building out the necessary "infrastructure" there. I have a lot of options and general trading education ahead of me, but once I'm ready, I would like to run a long-term implied volatility strategy alongside short-term strategies such as market making and time-series stuff, to enhance profits, reduce liquidity costs, and avoid losses where possible. I don't think full-blown options market making is the game for me, but for someone interested in adding liquidity and scalping in options and futures markets while maintaining a prevailing view on vol, a lot of the ideas about keeping Greeks neutral that come up in option market-making texts are relevant. Given this information, do you think it makes more sense to break all of these different signals into separate strategies and try to optimize those as a portfolio, or should I instead create a bunch of signals, aggregate them into forecasts with "strength of forecast" parameters, use those to define my desired Greeks for a given asset, and take positions based on that?