Quantitative Investment Management, which has a great track record, is doing exactly the opposite of what you're saying. See my thread: http://www.elitetrader.com/vb/showthread.php?s=&threadid=115513& Also there's a superb article by Nassim Taleb which nails the issue: http://www.elitetrader.com/vb/showthread.php?s=&threadid=124793&highlight=taleb

Quote from the article:

Things, it turns out, are all too often discovered by accident--but we don't see that when we look at history in our rear-view mirrors. The technologies that run the world today (like the Internet, the computer and the laser) are not used in the way intended by those who invented them. Even academics are starting to realize that a considerable component of medical discovery comes from the fringes, where people find what they are not exactly looking for. It is not just that hypertension drugs led to Viagra or that angiogenesis drugs led to the treatment of macular degeneration, but that even discoveries we claim come from research are themselves highly accidental. They are the result of undirected tinkering narrated after the fact, when it is dressed up as controlled research. The high rate of failure in scientific research should be sufficient to convince us of the lack of effectiveness in its design. Though the success rate of directed research is very low, it is true that the more we search, the more likely we are to find things "by accident," outside the original plan.
i do not see too much of a contradiction to what i said. i just differ on the level at which choices are made. we currently backtest such a self-learning system of systems very heavily. what is important here is to define what the "parameters" of such a system are. by that i mean that if we run 50,000 systems and choose 500 of them, then the 50,000, the 500 and the way they are ranked, e.g. by sharpe, are the parameters. and the interesting thing is to determine how sensitive the output is to these (and, in practice, many more).

you seem to believe that systems of systems work in completely un-predefined space, which i find pretty much impossible. we are running such a direct pattern search as well. but if you do not limit the space of how such patterns are constructed, you blow out each and every computer immediately. these things get better constantly, and you can use a bunch of strategies to make more patterns possible, but the space of possibilities is simply too big. plus you run into the problem of finding more and more great patterns, and the better they are the rarer they are, and you fall again into the fitting camp. just increase the number of neurons in a neural net ... same thing.

having said that, i see all these things from a higher perspective. the system of systems is definitely a valid approach. but i can improve pure machine force with intelligent pre-judgement. for the second strategy i mentioned, the pure machine pattern search, that means i throw in pattern ingredients that "make sense", i.e. that are derived from human pre-judgement.
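To make the "parameters of the system of systems" point concrete, here is a minimal sketch of the selection step described above: rank a pool of candidate systems by Sharpe ratio, keep the top k, and run one crude sensitivity check on that choice. All numbers (pool size, horizon, k) and the simulated returns are illustrative assumptions, not the poster's actual setup.

```python
import random
import statistics

random.seed(42)

def sharpe(returns):
    # per-period Sharpe proxy: mean / stdev (no annualization, no risk-free rate)
    mu = statistics.mean(returns)
    sd = statistics.stdev(returns)
    return mu / sd if sd > 0 else 0.0

# simulated daily returns for a pool of candidate systems (50,000 in the
# post; a much smaller pool here so the sketch runs quickly)
n_systems, n_days, top_k = 500, 250, 50
pool = [[random.gauss(0.0002, 0.01) for _ in range(n_days)]
        for _ in range(n_systems)]

# rank by sharpe and keep the top k; the pool size, k, and the ranking
# metric are exactly the "parameters" whose sensitivity needs testing
ranked = sorted(range(n_systems), key=lambda i: sharpe(pool[i]), reverse=True)
selected = set(ranked[:top_k])

# crude sensitivity check: re-rank on a bootstrap resample of the days and
# count how many originally selected systems survive; low overlap means the
# selection mostly reflects noise rather than persistent edge
days = [random.randrange(n_days) for _ in range(n_days)]
reranked = sorted(range(n_systems),
                  key=lambda i: sharpe([pool[i][d] for d in days]),
                  reverse=True)
overlap = len(selected & set(reranked[:top_k]))
print(f"selection overlap under resampling: {overlap}/{top_k}")
```

Since the simulated pool here is pure noise, the overlap comes out low, which is the "fitting camp" warning from the post in miniature.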
TS, i am on your side if we talk long/short equity. portfolio construction there cannot ignore lending cost and leverage cost. when we traded such a strategy that was a major thing to look at. but in futures that is no big concern. my margin utilisation gives me room to manoeuvre, in all directions. very flexible and, compared to single stocks, cheap.
Hi, I code all the stuff myself (using Java). I also use Excel for simpler tasks like viewing data, making diagrams and transforming data for input to my programs. That's all. Alan said a few times that he codes all his software himself too, and uses TradeStation for its real-time tracking features. I don't know of any third-party software that can optimize risk weights to combine systems for maximum modified Sharpe. It's actually not very complicated stuff. If you don't want to go to all the trouble of learning to code, you can at least learn Excel and do this optimization task with it.
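As the post says, this optimization is not complicated. A minimal sketch of the task, assuming made-up return streams and a plain Sharpe ratio as the objective (the post's "modified Sharpe" would just swap in a different objective function) could look like this; it uses simple random search over the weight simplex rather than any particular optimizer:

```python
import random
import statistics

random.seed(7)

# daily returns of three hypothetical trading systems (pure simulation)
systems = [[random.gauss(mu, 0.01) for _ in range(500)]
           for mu in (0.0004, 0.0003, 0.0002)]

def sharpe(returns):
    # mean / stdev of per-period returns; replace with a "modified Sharpe"
    # objective as needed
    sd = statistics.stdev(returns)
    return statistics.mean(returns) / sd if sd > 0 else 0.0

def combine(weights):
    # return stream of the weighted combination of systems
    return [sum(w * r[d] for w, r in zip(weights, systems))
            for d in range(len(systems[0]))]

best_w, best_s = None, float("-inf")
for _ in range(2000):
    # random non-negative weights summing to 1
    raw = [random.random() for _ in systems]
    total = sum(raw)
    w = [x / total for x in raw]
    s = sharpe(combine(w))
    if s > best_s:
        best_w, best_s = w, s

print("best weights:", [round(x, 2) for x in best_w],
      "sharpe:", round(best_s, 3))
```

The same search is easy to reproduce in Excel with Solver over a weights column, which is the point the post makes about not needing third-party software.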
I suppose we're looking for the stress correlation to be as negative as possible? Coming up with structurally uncorrelated strategies is not that hard. If you have strategy A, then strategy B should enter where strategy A is failing. Something like that. My biggest headache is how to make these strategies perform better than random. Everything else is covered.
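One way to read "stress correlation" is the correlation of two strategies' returns measured only on the market's worst days. A small sketch of that measurement, under the assumption of simulated strategy and market returns and an arbitrary worst-decile cutoff for "stress":

```python
import random
import statistics

random.seed(1)

n = 1000
market = [random.gauss(0.0003, 0.012) for _ in range(n)]
# strategy A is loosely long the market; strategy B tends to profit when
# the market falls (the "B enters where A fails" idea from the post)
strat_a = [0.5 * m + random.gauss(0, 0.004) for m in market]
strat_b = [-0.4 * m + random.gauss(0, 0.004) for m in market]

def correlation(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# "stress" days: worst decile of market returns
cutoff = sorted(market)[n // 10]
stress = [i for i, m in enumerate(market) if m < cutoff]
stress_corr = correlation([strat_a[i] for i in stress],
                          [strat_b[i] for i in stress])
print("full-sample corr:", round(correlation(strat_a, strat_b), 2))
print("stress corr:     ", round(stress_corr, 2))
```

The number to drive negative is the second one: full-sample correlation can look fine while the strategies still fail together exactly when it hurts.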
There is a very notable confusion here: correlation between two time series (two or more sets of market data) versus correlation between two or more strategies applied to those sets of data. Essentially this confusion stems from the misunderstanding of the difference between random variables and random processes conducted on the sets of variables. Example: the S&P 500 Index and Dow Jones Index time series are highly correlated, yet one can design two or more trading strategies applied to those indices whose returns would be significantly uncorrelated. The diversification of a trading portfolio is usually associated with the set of securities the portfolio holds, but not with the type of strategies the portfolio implements. The risk/reward ratio of the portfolio, therefore, is highly dependent on the correlation level observed among the securities held, but it does not depend at all on the level of correlation between the strategies implemented in this portfolio. I hope it clears things up a bit. Cheers, MAESTRO
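The series-versus-strategies distinction above is easy to demonstrate numerically. A sketch, assuming two simulated index series built to share most of their daily moves (standing in for SPX vs. Dow), plus two simple illustrative rules (a 1-day trend follower and a 5-day mean reverter) whose return streams come out far less correlated than the underlying series:

```python
import random
import statistics

random.seed(3)

n = 1000
common = [random.gauss(0.0003, 0.01) for _ in range(n)]
# two "indices" sharing a large common factor, so their returns are
# highly correlated by construction
idx1 = [c + random.gauss(0, 0.002) for c in common]
idx2 = [c + random.gauss(0, 0.002) for c in common]

def correlation(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# strategy 1: 1-day trend following on idx1 (hold yesterday's sign)
strat1 = [(1 if idx1[t - 1] > 0 else -1) * idx1[t] for t in range(5, n)]
# strategy 2: fade the last 5 days' move on idx2 (mean reversion)
strat2 = [-(1 if sum(idx2[t - 5:t]) > 0 else -1) * idx2[t]
          for t in range(5, n)]

idx_corr = correlation(idx1[5:], idx2[5:])
strat_corr = correlation(strat1, strat2)
print("index correlation:   ", round(idx_corr, 2))
print("strategy correlation:", round(strat_corr, 2))
```

The index correlation is close to 1 by construction, while the two strategy return streams are much less correlated, which is exactly MAESTRO's point: diversification can come from the process applied to the data, not only from the data itself.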
I think we would have both correlated and non-correlated confusions here, whether positively or negatively associated, seriously!
So Ashby proved it almost 50 years ago? Gosh, I wonder if anything has changed or been discovered since then? I mean, the guys who invented computers in the 1940s expected that the United States would only ever need 10 or 12 of them, since their function was limited to automated math and they weighed 10 tons. Were they wrong! Jerry030