I can imagine hiring people to trade with me but I agree, can't imagine trading for someone else unless they will teach me something @destriero
This is interesting to see, but again these returns need to be put in context with the vol these funds are taking
As a retail like me, no. Though many old timers here were fund/portfolio managers of large hedge funds.
Could someone tell me whether Sharpe is usually reported/calculated based on annual returns or monthly or weekly or daily returns? I think I've seen it reported in several ways, but usually without specifically stating what period/interval was used as its basis.
It's a good question, but it can be compiled from whichever data you wish. The main thing will always be having at least 5 years of data before you begin to judge Sharpe. You can have someone who compiles their data daily, but if it's only two months of data, then it's still only two months of data, and a high or low Sharpe isn't going to mean much, even with daily data. That's why a long time period matters: you can have 5 years and use daily data, or 10 years and use monthly data, or 10 years and use weekly data. Generally, you can compile it from whichever data you have. It's just that the RFR then needs to be segmented appropriately to that time frame, so the data is compiled properly.
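The point above about segmenting the RFR to the sampling frequency can be sketched like this. This is a minimal illustration, not anyone's production code; the function name and the sample returns are made up, and the RFR is divided down arithmetically to the period length for simplicity:

```python
# Sketch: computing a per-period Sharpe from returns sampled at any
# frequency, with the annual risk-free rate (RFR) segmented to match.
# Illustrative numbers only.
import statistics

def period_sharpe(returns, annual_rfr, periods_per_year):
    """Per-period Sharpe: the annual RFR is divided down to the
    sampling frequency before computing excess returns."""
    period_rfr = annual_rfr / periods_per_year
    excess = [r - period_rfr for r in returns]
    return statistics.mean(excess) / statistics.stdev(excess)

# e.g. six monthly returns against a 3% annual RFR
monthly = [0.02, -0.01, 0.015, 0.03, -0.005, 0.01]
print(round(period_sharpe(monthly, 0.03, 12), 2))
```

The same function works for daily or weekly data; only `periods_per_year` changes (roughly 252 or 52).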
Sharpe is a time-period-dependent measure and does not scale cleanly across time frames (because of how volatility behaves). It is not as simple as multiplying by the square root of the number of periods per year. Here is a paper.
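A quick way to see the scaling problem: annualize a monthly Sharpe with the common sqrt-of-12 rule, then compute a Sharpe directly from compounded annual returns, and compare. This is a synthetic-data sketch (random returns, made-up parameters), not a result from any real fund:

```python
# Sketch of why sqrt-of-time annualization is only an approximation:
# the usual rule multiplies a per-period Sharpe by sqrt(periods/year),
# but a Sharpe computed directly on compounded annual returns
# generally gives a different number. Synthetic data for illustration.
import math
import random
import statistics

random.seed(1)
monthly = [random.gauss(0.01, 0.04) for _ in range(120)]  # 10 "years"

monthly_sharpe = statistics.mean(monthly) / statistics.stdev(monthly)
annualized = monthly_sharpe * math.sqrt(12)  # sqrt-of-time rule

# Direct annual Sharpe from compounded 12-month returns
annual = []
for y in range(10):
    growth = 1.0
    for r in monthly[y * 12:(y + 1) * 12]:
        growth *= 1 + r
    annual.append(growth - 1)
direct = statistics.mean(annual) / statistics.stdev(annual)

print(annualized, direct)  # the two numbers generally disagree
```

(The RFR is omitted here purely to keep the comparison short; including it doesn't change the point.)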
Ok, so that's why I was asking what the common time frame is for widely reported Sharpes, whether by hedge funds or in books, etc. There doesn't seem to be a standard where people state what type of Sharpe they're talking about. My own Sharpe on one sample overfit strategy I just checked is 5.x annually but only 1.x monthly (again, it's overfit, so not realistic, just an example of a large discrepancy). I've also read about annual Sharpe vs. annualized Sharpe, so there is a world of Sharpes out there...
Yeah, there is. There are entire research papers out there on what should be used for the RFR. Arguments, even. Myself? On the RFR I tend to go with 3-month LIBOR, although during the various QEs from 2008 forward I switched over to using some bank rates, because they were higher. Using the higher bank rates felt a little more honest, because: A) they were higher than 3-month LIBOR, and thus actually lowered my Sharpe, and B) that was more realistic, since fear was through the roof. Yeah, it meant a couple of digits lower on Sharpe for me? But it was just a more honest way to compute it.

But that's just the RFR. I keep Sharpe simple. Anything under 3 years' worth of data doesn't mean much to me, because I've seen Sharpes at 3.5 on a three-year data set, and then when more data came in, it invariably dropped back down to a reasonable 0.9 to 1.2 on a great program. So regardless of how it's computed, or on what time scale, I need to see over 3 years' worth of data and know the objective of the program.

And of course, the objective of the program matters. I have seen one program that annualizes 22% with less than 10% drawdowns, wildly scalable, but its Sharpe is 0.72. Which is fine, because of the objective of the program.

I have data going back to 2003, so with that much data? I compute it just on the yearly returns: the RFR, how much we've beaten it, the average, and the deviation. Simply because I need to see 3 years anyway, and I have 16, and the yearly returns are really what anyone zeroes in on anyway. Did you beat the RFR multiple years in a row? And since there is enough data, when I compute it in other ways it usually matches up to within a tenth of a decimal point. So it checks out. But if I was just starting a new strategy that I didn't have a lot of data on, I'd probably look at the weekly computation.
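The yearly-return approach described above reduces to a short calculation: excess of each year's return over the RFR, then average over deviation, plus a count of how many years beat the RFR. The return series and the 3% RFR below are made-up numbers for illustration, not anyone's track record:

```python
# Sketch of a Sharpe computed straight from annual returns vs. the RFR,
# alongside the "did you beat the RFR each year?" check. Hypothetical data.
import statistics

annual_returns = [0.22, 0.18, -0.04, 0.25, 0.12, 0.09, 0.20, 0.15]
rfr = 0.03  # assumed flat annual risk-free rate for simplicity

excess = [r - rfr for r in annual_returns]
annual_sharpe = statistics.mean(excess) / statistics.stdev(excess)
years_beating_rfr = sum(r > rfr for r in annual_returns)

print(round(annual_sharpe, 2), years_beating_rfr)
```

In practice the RFR varies year to year, so you would subtract each year's actual rate rather than a flat 3%.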
But then again, we just started using Jack Schwager's service (Fund Seeder). We don't use it for finding seed money, since that's not what we're about. But they state that they allow one to enter data for free and use their service as a way to get analytics. So we've used it as a free way to double-check all of our own statistics, a redundancy check on our own work.