Hi. A mean-variance optimizer (MVO) returns an "optimal" portfolio that has the highest return for the lowest volatility. Its inputs are expected returns, volatilities, and correlations. How does an Expected Utility Optimizer differ? The one I have gives a risk-adjusted return based on my "comfort level" of risk, according to the help file. It uses covariance instead of correlation. So I guess my first question is: why does it use covariance? And what is this concept of portfolio utility (PU)? I really can't find a good explanation. The optimizer maximizes PU. If I have a monthly return of 0.67 and monthly volatility of 0.78, and I enter a risk-aversion parameter of 50, my utility comes out to 0.36. How do I interpret that? I know it's the return less (50 x volatility^2), but what is this PU telling me - that my return is 0.36? If so, then what's the 50 - how should it be scaled (0-100)? Sorry if this is all pretty basic to you guys, but I just can't get my head around this. Matt
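To show my arithmetic, here is what I think the optimizer is computing. I'm assuming the formula is U = E[r] - A * sigma^2 with everything in decimals (the help file isn't explicit about the scaling, so this is a guess):

```python
# Assumed quadratic utility: U = E[r] - A * sigma^2
# (some tools use a 1/2 factor, or percent units, which rescales A)

def portfolio_utility(exp_return, volatility, risk_aversion):
    """Risk-adjusted return under quadratic utility, decimal inputs."""
    return exp_return - risk_aversion * volatility ** 2

r = 0.0067      # my 0.67% monthly return
sigma = 0.0078  # my 0.78% monthly volatility
A = 50          # the risk-aversion parameter I entered

u = portfolio_utility(r, sigma, A)
print(f"{u * 100:.2f}%")  # ~0.37%, close to the 0.36 the optimizer reports
```

That roughly reproduces the 0.36, but I still don't know what the number means or how the 50 is supposed to be scaled.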
Have you read the Wikipedia explanations of Utility? In essence it is a concept that economists invented a couple of centuries ago to fix a hole in their theories. Economic theory predicted that in situation S, people would exhibit behavior B. Except, in that real situation, people didn't behave that way! Daniel Bernoulli invented Utility, and economic theory was modified. Now, under the new theory, in the same situation S, people were expected to exhibit behavior X. And they did! Son of a bitch, the fix-up fixed it up. Thus was born Utility. As you will read, Utility tries to encapsulate the observed fact that some people are risk-seeking, some are risk-neutral, and some are risk-averse. Skydivers and lion tamers tend not to be municipal bond buyers. And vice versa.
For those who want a more in-depth (mathematical) discussion of Utility than you find on Wikipedia, may I suggest Y.K. Kwok's home page at the Hong Kong University of Science and Technology: http://www.math.ust.hk/~maykwok/ Specifically the course notes for Fall 2005. I've also attached the lecture notes for his talk on Utility Theory, which I personally find quite beneficial. Beware, they use both algebra and calculus. Ooogah, boogah.
You can calculate the covariance from the correlations and volatilities: Correlation(x, y) = Covariance(x, y) / (Volatility(x) * Volatility(y)), so Covariance(x, y) = Correlation(x, y) * Volatility(x) * Volatility(y). The fact that the optimizer asks for covariance rather than correlation is not really an issue. I am curious how you calculate expected returns, though. I guess for a well-defined instrument you can have an expected return, but I always thought assigning an expected return to a stock was just making up a number.
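For instance, a quick sketch in code (plain NumPy, the function names are my own):

```python
import numpy as np

def corr_to_cov(corr, vols):
    """Covariance matrix from a correlation matrix and per-asset volatilities."""
    vols = np.asarray(vols)
    return corr * np.outer(vols, vols)

def cov_to_corr(cov):
    """Correlation matrix recovered from a covariance matrix."""
    vols = np.sqrt(np.diag(cov))
    return cov / np.outer(vols, vols)

# Two assets: 20% and 30% volatility, correlation 0.5
corr = np.array([[1.0, 0.5],
                 [0.5, 1.0]])
vols = [0.20, 0.30]

cov = corr_to_cov(corr, vols)
print(cov)               # off-diagonal: 0.5 * 0.20 * 0.30 = 0.03
print(cov_to_corr(cov))  # recovers the original correlation matrix
```

So either input carries the same information once you have the volatilities.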
Expected returns can be empirically derived through backtesting. For example, a daily expected return can be found objectively by simply calculating the mean of n historical returns. You can project it into the future by compounding it n periods forward. Note that it is only the expected return you are projecting (the drift component). It will also have a variance associated with it (which can also be extracted from the empirical history), so that the final expected return will lie somewhere within +/-3 standard deviations of the historical volatility. This assumes you are dealing with a normal distribution. There are variations for non-normal distributions, but then again those are akin to making wild-a@@ guesses at that point. It's highly unlikely for the returns to deviate wildly from normal, but gaps do happen more often than statistically expected.
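To make the arithmetic concrete, a rough sketch (synthetic data, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
returns = rng.normal(0.0005, 0.01, size=1260)  # stand-in for ~5 years of daily history

mu = returns.mean()          # sample mean daily return (the drift, as I'm using the term)
sigma = returns.std(ddof=1)  # sample daily volatility

n = 21  # project one month (~21 trading days) ahead
projection = (1 + mu) ** n - 1  # compounded point projection
band = 3 * sigma * np.sqrt(n)   # +/-3 sigma width, sqrt-of-time scaling under i.i.d. normality

print(f"projection:   {projection:.4%}")
print(f"+/-3sd range: [{projection - band:.4%}, {projection + band:.4%}]")
```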
Sigh... why do people insist on using words they don't really understand?

"Expected returns can be empirically derived through backtesting."

Nope. What you go on to describe isn't backtesting. It's just computing the sample mean.

"A daily expected return can be found objectively by simply calculating the mean of n historical returns. You can project it into the future by compounding it n periods forward. Note that it is only the expected return you are projecting (the drift component)."

Compounding is not the same as projecting. All compounding does is express the same sample return over a different time horizon. Moreover, the sample return is not the drift component (except in the most trivial case). If the structure is AR, the drift component is certainly not the sample mean of the data.

"...so that the final expected return will lie somewhere within +/-3 standard deviations of the historical volatility."

Wrong again. Assuming normality (AND, more importantly, a bunch of other time-series requirements), it is the *TRUE* expected return that lies within +/-3 standard deviations of your estimate; your sample mean is the unbiased estimator of the expected return (again, given a LOT of assumptions).

"This assumes you are dealing with a normal distribution. There are variations for non-normal distributions..."

This pretty much misses the point. The sample mean being an unbiased estimator of the expected return is well established and good enough for almost all purposes EXCEPT portfolio optimization. In portfolio optimization, using sample means as expected returns will almost always produce solutions that perform poorly. Of all the progress made in real-world portfolio construction and optimization over the last decade, estimating expected returns is the area that remains least well developed.
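To see why sample means fail you in optimization: the standard error of a sample-mean return estimate is typically about as large as the mean itself. Back-of-the-envelope, with made-up but representative numbers:

```python
import numpy as np

# Illustrative: a stock with ~8% annualized mean return, ~20% annualized vol,
# estimated from 5 years of daily data (252 trading days/year).
annual_mu, annual_sigma, years = 0.08, 0.20, 5
n_days = 252 * years

daily_sigma = annual_sigma / np.sqrt(252)

# Standard error of the sample mean under i.i.d. returns: sigma / sqrt(n),
# then scaled back to annual terms.
se_annual = (daily_sigma / np.sqrt(n_days)) * 252

print(f"annualized mean:           {annual_mu:.1%}")  # 8.0%
print(f"std error of the estimate: {se_annual:.1%}")  # ~8.9% -- bigger than the mean itself
```

With an estimate that noisy, the optimizer is mostly fitting estimation error, which is why the sample mean is a poor input even though it is unbiased.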
If you don't fully understand what you are talking about, I suggest you don't try to interject your opinions and arrogance. Oddly, I thought you were one of the FEW competent posters in this area -- scratch that.