I guess that's the point I'm making. It's fine to have Sharpe as a metric, but using Sharpe^2 is then the identical metric; it contains no more information. Example:

Strategy 1 - Sharpe = 1.5
Strategy 2 - Sharpe = 2

Clearly Strategy 2 dominates. Now if we compare them by Sharpe^2, the ordering remains the same, and this is true for arbitrarily many strategies, because ranking by Sharpe *is exactly the same as* ranking by Sharpe^2 (at least for non-negative Sharpes, where squaring is strictly increasing):

Strategy 1 - Sharpe^2 = 2.25
Strategy 2 - Sharpe^2 = 4

Edit: Essentially, this is true for any monotone transformation. If we have a base metric of Sharpe, then any function f(Sharpe) such that f' > 0 everywhere will give an equivalent ranking (the function doesn't even have to be smooth, just strictly increasing).
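A quick way to see this numerically (a minimal Python sketch; the third strategy and all the Sharpe values are just illustrative numbers, not anything from this thread):

```python
# Ranking by Sharpe and by any strictly increasing transform of Sharpe
# (e.g. squaring non-negative Sharpes) produces the identical ordering.
sharpes = {"Strategy 1": 1.5, "Strategy 2": 2.0, "Strategy 3": 0.7}

rank_by_sharpe = sorted(sharpes, key=lambda s: sharpes[s], reverse=True)
rank_by_sharpe_sq = sorted(sharpes, key=lambda s: sharpes[s] ** 2, reverse=True)

print(rank_by_sharpe)     # ['Strategy 2', 'Strategy 1', 'Strategy 3']
print(rank_by_sharpe_sq)  # same ordering
assert rank_by_sharpe == rank_by_sharpe_sq
```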
Understood and agreed. My preference for using Sharpe^2 instead of Sharpe is motivated by the more straightforward interpretation of the former. Since Sharpe^2 measures the "leveraged expectancy" (that's the best way I can describe it), it scales more intuitively for me. For example, if strategy A has a Sharpe^2 of 2 and strategy B has a Sharpe^2 of 4, then I can say that strategy B is exactly 2 times better than strategy A, because it has twice the leveraged expectancy.
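For what it's worth, one way to make "leveraged expectancy" concrete (this is my reading, not necessarily the poster's): under a normal-returns approximation, levering a strategy at the continuous-Kelly fraction f = mu / sigma^2 gives a per-period log growth of roughly Sharpe^2 / 2, so Sharpe^2 is proportional to the maximum attainable growth rate. A small Python sketch with made-up mu/sigma numbers:

```python
# Rough check: at leverage f = mu / sigma**2, log growth per period is about
# f*mu - 0.5*f**2*sigma**2 = (mu/sigma)**2 / 2 = Sharpe^2 / 2,
# so doubling Sharpe^2 doubles the attainable growth rate.
def kelly_growth(mu: float, sigma: float) -> float:
    """Approximate log growth per period at continuous-Kelly leverage."""
    f = mu / sigma**2
    return f * mu - 0.5 * f**2 * sigma**2

g_a = kelly_growth(mu=0.0005, sigma=0.01)    # Sharpe^2 per period = 0.0025
g_b = kelly_growth(mu=0.000707, sigma=0.01)  # Sharpe^2 per period ~ 0.0050

print(g_a, g_b, g_b / g_a)                   # ratio comes out ~2, matching the Sharpe^2 ratio
```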
http://www.elitetrader.com/vb/showpost.php?p=1577681&postcount=1 http://www.elem.com/~btilly/kelly-criterion/#complex_kelly
I think we're assuming A and B are independent, or very close to it. That said, I conceptually disagree with your statement. The bets on A and B should be sized the same, but B is MUCH more valuable to dedicate a slice of your portfolio to, to the point that you might never take an A bet for fear of not having capital available for 10 B bets.

I guess that really gets at not only how many opportunities there are, but how long each bet lasts before you get your results. If the bets last only seconds and happen a few times a week, you might happily bet A, figuring it won't get in the way of B. But if A bets keep you in the market, say, 95% of the time, A is probably worthless in light of B.

The more you've backed off from full Kelly, the smaller this effect is (i.e., the more "bet A and bet B at the same time" behaves like A followed by B at a later time) and the less it matters. But when betting full, or even half, Kelly you do NOT want to ignore the fact that you might have multiple bets on at the same time.
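To put a rough number on the "multiple bets on at the same time" point, here is a small Python sketch. It is not the A/B setup from earlier in the thread: the win probability, even-money payoff, and number of overlapping bets are made-up assumptions, and the bets are treated as identical and independent. It just shows that staking the single-bet full-Kelly fraction on many overlapping bets can over-bet catastrophically, while half Kelly is far more forgiving:

```python
from math import comb, log

def growth_per_round(p, f, n):
    """Expected log growth of wealth per round with n simultaneous
    independent even-money bets, staking fraction f on each."""
    g = 0.0
    for k in range(n + 1):                    # k = number of winning bets
        wealth_mult = 1 + f * (2 * k - n)     # each win pays +f, each loss costs -f
        if wealth_mult <= 0:
            return float("-inf")              # a possible outcome wipes you out
        g += comb(n, k) * p**k * (1 - p)**(n - k) * log(wealth_mult)
    return g

p = 0.55
kelly_single = 2 * p - 1                      # full Kelly for one even-money bet = 0.10

print(growth_per_round(p, kelly_single, n=1))       # positive: fine one bet at a time
print(growth_per_round(p, kelly_single, n=10))      # -inf: ten overlapping full-Kelly bets risk ruin
print(growth_per_round(p, kelly_single / 2, n=10))  # half Kelly survives and still grows
```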
By definition, the return on a single bet on A is about 10 times the return on a single bet on B. So let's say that A and B do indeed collide, in the sense that if you bet on one, there is not enough capital left to bet on the other. If you bet only on A, you make a return R with a standard deviation S. If you bet only on B, you make the same return R with the same standard deviation S. So why is it so much more important not to miss the chance to bet on B than not to miss the chance to bet on A?
I think we need the covariance matrix. What is the covariance matrix of A and B (three entries: two variances and one covariance)? I did not go back to see what A and B are, but I assume they are the two leading strategies in the whole set.
I think I hijacked this thread, sort of. Sorry about that, kut2k2. The original problem seems to have been solved, so we have proceeded to discuss other (related) problems. tradingjournals: the A and B strategies have nothing to do with the roulette game. Rather, it's a totally different problem of optimal capital allocation to a portfolio of multiple strategies.
One thing that I'd note is that your approximation of Kelly:

k ~ sum[Ri]_n / sum[Ri²]_n

resembles the continuous Kelly:

CK = R / s²

Note that:
-- your numerator sum[Ri] is proportional to R
-- your denominator sum[Ri²] is a component of s²

I think that if you had started with ln(r) instead of r, you would have arrived at the true continuous Kelly, CK = R / s². The other thing is that the units of continuous Kelly are more convenient, as they indicate leverage, whereas discrete Kelly indicates a fraction of the bankroll.
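A quick numerical check of that resemblance (Python sketch; the daily return series is simulated with made-up parameters): sum[Ri]/sum[Ri²] equals mean(r)/mean(r²) ≈ mean/(variance + mean²), which is close to the continuous-Kelly mean/variance whenever the mean is small relative to the volatility.

```python
import numpy as np

rng = np.random.default_rng(1)
r = rng.normal(0.001, 0.02, 250)       # one year of made-up daily returns

k_approx = r.sum() / (r**2).sum()      # the approximation quoted above: sum[Ri] / sum[Ri²]
ck = r.mean() / r.var()                # continuous Kelly: mean return over variance

print(k_approx, ck)                    # the two come out close when mean² << variance
```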
I need the same type of rigorous mathematical proof that I supplied to you, not just speculation. Three things:

1. Ln(r) makes no sense to me. Why do it, if we already have r? More to the point, returns are often expressed not as a percent price change but as the logarithm of the price change, so you're proposing taking the log of a log.

2. The notion of leverage implies a Kelly fraction greater than one. What are the odds of that happening? Perhaps that's the best reason of all to reject R/s^2.

3. Finally, the Kelly formula I posted is just an approximation. We can't seriously substitute a sometimes far-off approximation for the exact equation. Sometimes only a rigorous solution of the exact equation will do.