I don't follow the hedge fund industry in detail, but the amount of automated trading has been on the increase, as I'm sure you know. Excerpt from "Rise of the billionaire robots: how algorithms have redefined hedge funds":

"The guys who are still making money hand over fist are doing so using not their own brains, but computer models. To be specific, they are “quants”, or quantitative hedge fund managers: eight of the ten top earners on Alpha’s list fall into that category, and half of the 25 richest of the year are quants. Either they rely exclusively on computer models to tell them when and what to buy and sell, as Jim Simons at Renaissance Technologies – who holds the distinction of being the only person to appear on the list 15 years running – does, or they use them extensively to guide their decision-making, as does Ray Dalio of Bridgewater Associates."
There are plenty. In fact, once you look away from the bigger capital, returns and Sharpe are probably better because of capacity constraints. I know at least 20 smaller managers (under $200M AUM) that are killing it. As for d08's point, there are plenty of things that cannot be easily programmed and will stay that way for a while. I have yet to see any algorithms for distressed debt trading.
You're probably right, but don't you think it's just a question of time (the next 10-20 years)? It makes sense that the ones killing it are relatively small; capacity constraints in a declining-liquidity environment are obvious.
I think what's going on right now is that the lines between systematic and discretionary strategies are getting blurred. We are going to see that more and more. For example, I am primarily systematic, but for some things I make manual decisions, simply because it's easier and there is a learning benefit in doing things by hand for a while before automating them.
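Roughly what I mean, as a toy sketch (all names here are illustrative, not my actual setup): the signal runs systematically, but the orders it generates are held for a manual yes/no until I trust the logic enough to switch on the automatic path.

    # Toy sketch of a hybrid pipeline: systematic signal, optional manual gate.
    # generate_order/send_order are hypothetical placeholders, not a real API.

    def generate_order(price, moving_avg):
        """Systematic part: a trivial rule standing in for real signal logic."""
        if price > moving_avg:
            return {"side": "buy", "qty": 100}
        if price < moving_avg:
            return {"side": "sell", "qty": 100}
        return None

    def send_order(order):
        """Execution placeholder; a real version would talk to a broker API."""
        print(f"executed: {order}")

    def run(price, moving_avg, auto_execute=False):
        order = generate_order(price, moving_avg)
        if order is None:
            return
        if auto_execute:
            send_order(order)  # fully systematic path
        else:
            # Discretionary gate: inspect the order by hand before release.
            if input(f"approve {order}? [y/n] ").strip().lower() == "y":
                send_order(order)

    run(101.5, 100.0, auto_execute=False)

Once the manual approvals stop catching anything, flipping auto_execute to True is the automation step.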
My take on “elite” was that they don’t try to reinvent the wheel. It’s more efficient to accomplish the same thing in 600 lines of a procedural language using pre-built tools than to build it from scratch in 20,000 lines of C++.
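To make that concrete, a minimal sketch assuming pandas and a made-up prices.csv with date/close columns: a moving-average crossover backtest is a handful of lines on top of pre-built tools, where a from-scratch version would need its own CSV parsing, date handling and statistics code.

    # Minimal sketch: moving-average crossover backtest using pre-built tools.
    # Assumes a CSV with 'date' and 'close' columns; the filename is made up.
    import pandas as pd

    px = pd.read_csv("prices.csv", parse_dates=["date"], index_col="date")["close"]

    fast = px.rolling(20).mean()
    slow = px.rolling(50).mean()
    position = (fast > slow).astype(int).shift(1)  # long when fast MA is above slow, lagged one day

    returns = px.pct_change() * position           # strategy daily returns
    print(f"total return: {(1 + returns.dropna()).prod() - 1:.2%}")

The point isn't the strategy; it's how much of the plumbing the library absorbs.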
Of course. But that's assuming the solution doesn't require building from scratch. I use as many modules as I can when they suit my needs, even if they might be slower; the time saved compared to developing from nothing is immense.
One can always argue either way, depending on individual preferences as well. If you love something, why do it differently? I'd argue you shouldn't! On fewer bugs: beginners create them just because they're new to the tool, however logically they think. An experienced programmer can pick up a new language she likes and program in it almost flawlessly after compiling a few times and learning the basic syntax and libraries. She won't be using the language's greatest strengths, but she may be productive enough. Software with only trivial bugs is easier and faster to test and trust. Fewer lines, although a bad measure of complexity, is a simple, rough path to fewer bugs. Clarity is superior, but it requires long experience and skill, and is hard to measure and prove. Reusing others' code means fewer lines of your own, but it's a dependency-nightmare tradeoff, so it depends on future maintenance needs, lifecycle and payoff. Of course, it doesn't make sense to sacrifice readability for too few lines, or for overly long ones!

Someone elite knows many tools and, whenever possible, avoids having to program much at all. A crude shell script or Excel sheet may be enough for the use case at hand, and can be evolved further later. A prototype requires much less than a fully functional solution; a public service requires much more than a local process. Generally, you want to do less while still meeting the requirements. The elite avoid what doesn't need to be done, and aren't shy about walking away from silly endeavours (prioritization) or walking down a different path that leads to better places.

The automation part is easy; any decent programmer can do it and quickly weed out the technical bugs (a few months). A novice could find it hard and get stuck. Whatever is left can be done when it's needed, so it doesn't make sense to schedule it all up front. Think evolution and T&M/shares rather than fixed time and price. The really hard part, however, is finding an edge and weeding out inconsistencies in the trading rules (logical bugs?). That requires properly defined processes for backtesting and strategy validation, and a way to integrate the final logic into a cohesive system. Execution may be a completely separate part, or loosely coupled. This is the same type of problem whether you're discretionary or systematic, but with automation you can't always rely on manual intervention to save your account should the system make the wrong decisions or execute poorly. With discretion, there's more manual intervention and less work on hard logical rules, so you rely more on wetware than hardware. So I think everyone is right to some degree; we just have different approaches, and then whatever someone else does looks like madness!
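To illustrate what I mean by a defined backtesting and validation process, a rough sketch on synthetic data (the rule, the numbers and the split are all just for illustration): tune a parameter on the first half of the history, then judge it on the unseen second half before trusting it, and keep execution out of it entirely.

    # Rough sketch of in-sample/out-of-sample validation for one parameter.
    # Synthetic data; the point is the process, not the strategy.
    import numpy as np

    rng = np.random.default_rng(0)
    rets = rng.normal(0.0003, 0.01, 2000)       # fake daily returns
    prices = 100 * np.cumprod(1 + rets)

    def strategy_returns(prices, lookback):
        """Long when price is above its trailing mean; flat otherwise."""
        out = np.zeros(len(prices) - 1)
        for t in range(lookback, len(prices) - 1):
            if prices[t] > prices[t - lookback:t].mean():
                out[t] = prices[t + 1] / prices[t] - 1
        return out

    split = len(prices) // 2
    best = max(range(10, 200, 10),
               key=lambda lb: strategy_returns(prices[:split], lb).sum())

    oos = strategy_returns(prices[split:], best)
    print(f"best in-sample lookback: {best}, out-of-sample total: {oos.sum():.2%}")

If the out-of-sample number collapses, that's a logical bug in the rules, not a technical one.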
I'm no authority on programming matters, but I've learned a few things about it (and, more so, about myself). I gave up on Excel because error detection was cumbersome for anything more complex than straightforward grids, and I've become allergic to GUIs as well. That said, I still use Excel for one-off tasks or things used very rarely. For everything else, I try to program as much as possible, even if it takes more time, because I learn things and become more comfortable with the language; in the end it might be faster than filling in the grid in Excel. I can't agree that eliminating automation-related bugs is easy even for a professional. Many of the bugs I've encountered were one-off instances: changes in software (APIs), changing regulations or network issues. I can't imagine how I could have prepared for all these problems; many won't be obvious until many months after the initial run.
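The only partial defence I've found, as a sketch with made-up names (fetch_quote stands in for any real broker or data API call): wrap every external call in a retry-and-log layer, so the one-off failures at least leave a trail instead of dying silently.

    # Sketch of a defensive wrapper for flaky external calls (APIs, network).
    import logging
    import time

    logging.basicConfig(level=logging.INFO)
    log = logging.getLogger("trader")

    def with_retries(fn, attempts=3, delay=2.0):
        """Retry a callable, logging each failure so post-mortems are possible."""
        for i in range(1, attempts + 1):
            try:
                return fn()
            except Exception as exc:             # in practice, catch narrower types
                log.warning("attempt %d/%d failed: %s", i, attempts, exc)
                time.sleep(delay)
        raise RuntimeError(f"all {attempts} attempts failed")

    def fetch_quote():
        raise ConnectionError("simulated network hiccup")  # stand-in for a real call

    try:
        with_retries(fetch_quote)
    except RuntimeError as e:
        log.error("giving up: %s", e)            # fail loudly, never silently

It doesn't prevent the API or regulation changes, but it turns silent breakage into something I find the same day rather than months later.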
Ok, but you have to admit that 20+ managers killing it is not a lot in the context of some opinions on this thread that systematic is all that. I like your hybrid observation.