One year of data may NOT be enough. At least three to five years are needed to draw any statistically significant conclusion.
Is there any big difference between the Std Dev of (R - Rf) and the Std Dev of R alone? Are we supposed to use the former? Also, is there a good source for the S&P 500's current (2017 year-end / YTD) Sharpe ratio?
I never understood the meaning of the Sharpe ratio. If my results get better, the Sharpe ratio gets worse????? The returns are hypothetical, just to illustrate what I don't understand. More profit, no losing periods, and still a worse ratio??? I would expect the opposite.
The Sharpe ratio penalizes "fat tails", i.e. abnormally high as well as abnormally low returns. Under certain conditions this can lead to what's known as a violation of the "first-order stochastic dominance" rule. In layman's terms, it means the Sharpe ratio can produce nonsensical rankings when evaluating performance. Consider this example. Say we have two fund managers, A and B, with the following records of 10 monthly returns:

A: {+1%, -1%, +1%, -1%, +1%, -1%, +1%, -1%, +1%, +6%}
B: {+1%, -1%, +1%, -1%, +1%, -1%, +1%, -1%, +1%, +20%}

Which one has better performance? By all common sense, B is better than A, because in every single month fund B does either the same as or better than fund A. But the Sharpe ratio of fund B is actually lower than that of fund A. This clear violation of common sense is well published and rightfully criticized as an idiosyncrasy of the Sharpe ratio.
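You can verify the counterexample in a few lines. This is a minimal sketch, assuming Rf = 0 and the population standard deviation (using the sample standard deviation gives the same ranking):

```python
from statistics import mean, pstdev

def sharpe(returns, rf=0.0):
    """Per-period Sharpe ratio: mean excess return over its std dev."""
    excess = [r - rf for r in returns]
    return mean(excess) / pstdev(excess)

# The two monthly return records from the example above
A = [0.01, -0.01] * 4 + [0.01, 0.06]
B = [0.01, -0.01] * 4 + [0.01, 0.20]

# B matches or beats A in every single month...
print(all(b >= a for a, b in zip(A, B)))  # True

# ...yet B's Sharpe ratio comes out lower:
print(round(sharpe(A), 4))  # 0.3496
print(round(sharpe(B), 4))  # 0.3476
```

B's single +20% month inflates its standard deviation far more than its mean, which is exactly the "fat tail" penalty described above.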
In this era of ultra-low interest rates, the "risk-free" rate Rf is so low (virtually zero) that you can drop it from the equation. There is another aspect to this: the Sharpe ratio is supposed to be leverage-insensitive, but when Rf is high enough, subtracting it makes the ratio leverage-sensitive. For this reason, some people choose to drop Rf even when it is significantly different from zero.
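A quick sketch of the leverage point, under one simplifying assumption: leverage is modeled as plainly scaling the return series (financing costs ignored). Then μ/σ is unchanged by leverage, while (μ - Rf)/σ is not once Rf is nonzero. The return series and Rf below are made-up illustrative numbers:

```python
from statistics import mean, pstdev

def sharpe(returns, rf=0.0):
    excess = [r - rf for r in returns]
    return mean(excess) / pstdev(excess)

base = [0.02, -0.01, 0.03, 0.01, -0.02, 0.04]     # hypothetical per-period returns
levered = [2 * r for r in base]                    # 2x leverage, financing cost ignored
rf = 0.005                                         # a nonzero per-period risk-free rate

# With Rf dropped, scaling returns leaves the ratio unchanged:
print(sharpe(base) == sharpe(levered) or abs(sharpe(base) - sharpe(levered)) < 1e-12)

# With a nonzero Rf subtracted, the levered ratio differs:
print(sharpe(base, rf), sharpe(levered, rf))  # not equal
```

Note this holds only under the scaling assumption above; if borrowing at Rf is modeled explicitly, the standard (μ - Rf)/σ form is the leverage-invariant one.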