People in the industry are increasingly using the Sortino ratio, which makes more sense. http://en.wikipedia.org/wiki/Sortino_ratio
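For quick reference, the standard definition (as on the linked page) is:

S = (R - T) / DR

where R is the average period return, T the target or minimum acceptable return, and DR the downside deviation (the root-mean-square of only those returns falling below T). Unlike the Sharpe ratio, it penalizes only downside volatility, which is why many consider it more meaningful.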
<a href="http://www.datatime.eu/public/gbot/Mon 06 Sep 2010_Port4001_Cli3/GBotReport_2010-09-06_Port4001_Cli3.htm"> Trades </a>

OK, I have launched a first test just after the holiday: a folio of 10 instruments plus a "clone folio" of the same instruments, so 20 in all. Both the "original" folio and the "clone folio" trade according to the strategy we have seen so far. To keep the entries out of sync, I have added <b>some (fully automatic) "constraints"</b> on entries. This is the really new part and needs some research. Ideally, we wish the twin folios to be <b>hedging</b> each other. You will notice how often the instruments are "neutralized" by opposite short/long positions. (I still need to work on that; this is just a <b>preliminary algo I have improvised to "coordinate" the clone folios</b>.) Note that overlaying can provide additional hedging (the strategy itself is made of "competing trenders" anyway, as discussed previously).

<img src="http://www.datatime.eu/public/gbot/Res_Sept6_1.gif">

You will note that the sum of absolute positions is now incorrect (larger than it should be) due to the missing neutralization: clone instruments must first be "neutralized" before their absolute positions are added up. I did not have time to change that piece of code, but I will do it now: it's just a matter of a few minutes' work ;-) (The same consideration holds for the sum of absolute values of positions.)

Tom
In terms of "pseudocode", the changes needed should be as follows. To obtain the global results I am clearly looping over all the instruments and accumulating positions, PNL, values, realized, unrealized, and so on. So where we had statements like:

    Position_Signed_tmp += Instrument.Position_Signed
    Value_Signed_tmp += Instrument.ValueOfPosition_Multiplied_Signed
    Position_Abs_tmp += Math.Abs(Instrument.Position_Signed)
    Value_Abs_tmp += Math.Abs(Instrument.ValueOfPosition_Multiplied_Signed)
    [...]

I would replace the last two statements by, first of all, creating a new class to store the "neutralized" values of each subset of clones (such as CL, CL_C1, CL_C2, ...):

    Class NeutralizedSignedQuantitiesForClones
        Public Position_Signed_Clones As Integer
        Public Value_Signed_Clones As Decimal
    End Class

and by defining a new dictionary (clearly to be reset before each loop) in which to store all of these objects, one per subset of clones (e.g., the set of CL clones, the set of EUR clones, ...):

    Dim NeutralizedSignedQuantitiesForClones_All As New Dictionary(Of String, NeutralizedSignedQuantitiesForClones)

where the String key is the name of the "original" instrument being cloned (stored in all the instruments belonging to the same subset of clones). Then the two statements above in the accumulation loop, which computed the absolute sums of positions and values, would be replaced with something like:

    Dim Neutralized As NeutralizedSignedQuantitiesForClones = Nothing
    If Not NeutralizedSignedQuantitiesForClones_All.TryGetValue(Instrument.Security.SymbolInProgram_OriginalOrCloned, Neutralized) Then
        Neutralized = New NeutralizedSignedQuantitiesForClones
        NeutralizedSignedQuantitiesForClones_All.Add(Instrument.Security.SymbolInProgram_OriginalOrCloned, Neutralized)
    End If
    Neutralized.Position_Signed_Clones += Instrument.Position_Signed
    Neutralized.Value_Signed_Clones += Instrument.ValueOfPosition_Multiplied_Signed

So we remove the two statements (which would now be incorrect) and, at the end of the accumulation loop, we add up the absolute values of the (partially aggregated) signed quantities by looping over the subsets of clones:

    For Each Neutralized As NeutralizedSignedQuantitiesForClones In NeutralizedSignedQuantitiesForClones_All.Values
        Position_Abs_tmp += Math.Abs(Neutralized.Position_Signed_Clones)
        Value_Abs_tmp += Math.Abs(Neutralized.Value_Signed_Clones)
    Next

I have now also added full support for folio "superimposition" in Backtesting/Robustness Analysis/Calibration, besides trading. So the infrastructure is now "complete". An open and interesting area of research is the set of algorithms to "coordinate" the overlaying clone folios, so as to <b>maximize the reciprocal hedging action</b>. Call them the "engagement rules" of the clone instruments ;-) And that is where we need to dig now...

This discussion should also make evident that there are alternatives to the absurd "single trade stop (or reverse)" for meaningful hedging. But clearly nobody will tell you, as your money is needed ;-)

Tom
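To make the idea concrete, here is a minimal, self-contained sketch of the two-pass accumulation (the Instrument type here is a simplified, hypothetical stand-in for the real Instrument/Security classes):

    Imports System
    Imports System.Collections.Generic

    Module NeutralizationSketch

        ' Simplified stand-in for the real instrument type (hypothetical).
        Class Instrument
            Public SymbolOriginalOrCloned As String  ' name of the "original" instrument of the clone subset
            Public Position_Signed As Integer
            Public Value_Signed As Decimal
        End Class

        Class NeutralizedSignedQuantitiesForClones
            Public Position_Signed_Clones As Integer
            Public Value_Signed_Clones As Decimal
        End Class

        Sub Main()
            ' CL long 2 in the original folio and short 1 in the clone folio:
            ' the neutralized absolute CL position must be |2 - 1| = 1, not 2 + 1 = 3.
            Dim Instruments As New List(Of Instrument) From {
                New Instrument With {.SymbolOriginalOrCloned = "CL", .Position_Signed = 2, .Value_Signed = 150000D},
                New Instrument With {.SymbolOriginalOrCloned = "CL", .Position_Signed = -1, .Value_Signed = -75000D},
                New Instrument With {.SymbolOriginalOrCloned = "EUR", .Position_Signed = 1, .Value_Signed = 160000D}
            }

            ' First pass: accumulate SIGNED quantities per subset of clones.
            Dim BySubset As New Dictionary(Of String, NeutralizedSignedQuantitiesForClones)
            For Each Instr As Instrument In Instruments
                Dim Neutralized As NeutralizedSignedQuantitiesForClones = Nothing
                If Not BySubset.TryGetValue(Instr.SymbolOriginalOrCloned, Neutralized) Then
                    Neutralized = New NeutralizedSignedQuantitiesForClones
                    BySubset.Add(Instr.SymbolOriginalOrCloned, Neutralized)
                End If
                Neutralized.Position_Signed_Clones += Instr.Position_Signed
                Neutralized.Value_Signed_Clones += Instr.Value_Signed
            Next

            ' Second pass: absolute sums are taken only AFTER neutralization.
            Dim Position_Abs_tmp As Integer = 0
            Dim Value_Abs_tmp As Decimal = 0D
            For Each Neutralized As NeutralizedSignedQuantitiesForClones In BySubset.Values
                Position_Abs_tmp += Math.Abs(Neutralized.Position_Signed_Clones)
                Value_Abs_tmp += Math.Abs(Neutralized.Value_Signed_Clones)
            Next

            Console.WriteLine("Abs position: " & Position_Abs_tmp)  ' 2      (CL: 1, EUR: 1)
            Console.WriteLine("Abs value: " & Value_Abs_tmp)        ' 235000 (CL: 75000, EUR: 160000)
        End Sub

    End Module

Without the neutralization, the same folio would have reported an absolute position of 4 and an absolute value of 385000, overstating the real exposure exactly as described above.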
Our test with the "clone folio" is doing OK. (Doing better than I thought, actually, since I haven't yet worked on the rules to coordinate the "clone folios".) Here, for a quick test, I have improvised a simple rule where the close of the competing trenders is skipped when it would increase the absolute position of a subset of clone folios. The "luckiest" performer (as has often happened before) is CL (maybe it's not just "luck" ;-). It seems that overlaying significantly increases the "realizing force" of the algorithm (its ability to convert open PNL into realized PNL). But all this needs to be backtested... Let me know your ideas on coordinating the overlays to achieve the best hedging (or the best AvgProfit / MaxDrawdown ratio).

<img src="http://www.datatime.eu/public/gbot/Res_Mon_2010_09_06_2.png">

(I will need to shut down this run, to restart the version with the adjusted computations of the sums of absolute positions/values.)

Tom
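Just to make the improvised rule concrete, a minimal sketch (the names here are hypothetical; the real code plugs into the strategy's order handling):

    ' Skip the close of a competing trender when it would increase the absolute
    ' neutralized position of that instrument's subset of clones.
    Function SkipClose(SubsetNetPosition As Integer, CloseDeltaSigned As Integer) As Boolean
        ' CloseDeltaSigned is the signed position change the close would produce
        ' (e.g., -1 when closing a long of 1 contract).
        Return Math.Abs(SubsetNetPosition + CloseDeltaSigned) > Math.Abs(SubsetNetPosition)
    End Function

For example, if the CL subset is net short 1 and a trender wants to close a long of 1 (delta -1), the subset would go to net -2, increasing the absolute position from 1 to 2, so that close is skipped.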
OK, here are the trade details: <a href="http://www.datatime.eu/public/gbot/Mon 06 Sep 2010_Port4001_Cli1_1/GBotReport_2010-09-06_Port4001_Cli1.htm"> Chart update </a>

This algorithm already belongs to the past, however ;-) I have restarted a new trading session with a slightly different algorithm to coordinate the folio overlay. "Robustness analysis" seems to indicate a slight improvement with respect to this implementation. In the next posts we will see the "revised" algo (incorporating also the corrections about absolute positions and values discussed before).

<img src="http://www.datatime.eu/public/gbot/Res_Mon_2010_09_06_3.png">

Currently I am researching better overlay algorithms. (It seems I am running out of machines, though, for all these tests ;-)

Tom
Welcome to the roller coaster! I was going to shut down this run to focus on the new revision when volatility suddenly began to rise. All instruments abruptly began <b>tracking together</b> in a strong upward movement. A valuable member of ET, and my dear friend, "skyped" me to keep this run alive, to see how it would behave in this really nasty scenario and whether it would be able to recover. So here we go. These have been days of wild volatility, with EUR reaching up to 1.3153. Futures expirations certainly did not help.

This is the kind of scenario, with all instruments "synchronized", which we most dislike, because all the realized money is used to "invest" in many instruments, and a large "drawdown" is perceived. As we said, I think "drawdown" is an inappropriate word in this context, because this algo does not allow for any "permanent losses" (as would an algo with stop/reverse mechanisms embedded). I prefer to talk of an "investment". An <b>investment in volatility</b>.

One thing I really like in this robot is the <b>PNL decomposition</b> and the corresponding chart. The fact that one can stare at the REALIZED curve is, in my opinion, a tremendous <b>psychological support</b> (in case it were necessary), because one knows how much money has actually been "realized" and how much is instead currently <b>being "invested in volatility"</b>. Imagine we did not have the "Realized" curve. We would be "psychologically" lost and would probably begin to doubt the performance of our strategy when we see the PNL go down because of heavy investments. Imagine watching the PNL go from 27K to -13K, as it did here (a 40K "investment"). If you can't rely on a realized line showing 33K, and even increasing, it's inevitable that you may start wondering. I am not saying that the "Realized" curve alone is what makes us totally confident, but if we already have an algorithm we "understand" and trust, then the PNL decomposition is a good psychological support. We can see that even in the deepest "drawdown" the algo continued growing the realized component. This is a crucially "reassuring" fact, because we can be confident that no matter what the market volatility is going to be, we have the realized component <b>"pulling up" the PNL curve</b> all the time.

<img src="http://www.datatime.eu/public/gbot/Res_Mon_2010_09_06_5.png">

One might wonder: why is this algo capable of growing the "realized" component of the PNL even while massively investing? How come it does not get stuck? Well, this is due to the nature of the algorithm, which is actually a kind of "overlay" of different traders playing independently. I like to call this kind of trading architecture "multithreaded trading", by analogy with multithreaded programming. (Furthermore, when we see the realized component slow down, that would be an "indication" to start <b>clone folios</b> for overlay.)

Creating trading algorithms is an activity where one learns to adapt to difficulties, like any other activity in life. If you ride trials on your bike, every time you lose control or get hurt you know you did something wrong, and that is exactly what enables the learning process. In the same way, in my quest for algorithms (I have tried literally thousands), I have learned by experience, and through countless simulations with tick data, what is likely to work and what is not. Clearly, this is based on my experience and not on "absolute truth", and it could by experience be disproved by others.
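To spell the decomposition out with the figures just quoted (simple arithmetic on the numbers in this post):

    PNL(t) = Realized(t) + Unrealized(t)
    -13K   = 33K         + (-46K)

So about 46K is currently "invested in volatility" as open (unrealized) positions, while the realized component keeps growing and keeps pulling the PNL curve back up.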
Anyway, one of the things I have learned is that if one uses, say, a "single-threaded" approach, dealing with one trade at a time (entry / take profit / stop loss or reverse, etc.), there is no way to end up with <b>"consistent profitability"</b>, algorithmically speaking; at most you can build a zero-profit game (in the long run). Tom
OK, I am looking into the "overlay rules" question. I am doing several experiments (backtesting) with several different scenarios.

Assume you start a folio F, and consider a "clone folio" F_C1. Now, if we simply started automated trading with both folios at a given time t0, with identical strategies, this would be equivalent to trading F alone with a double-sized initial packet size. (So this case is not interesting.) If instead we delay the activation of automatic trading on folio F_C1 to a later time t1 > t0, we create another set of instruments which trade <b>"out of sync"</b> with respect to the set F:

....t0....(F trading)....................................................
.........................................t1....(F_C1 trading)............

Now there are <b>2 fundamental scenarios</b> I can see at the moment (but, please, do suggest more):

<b>1 - INDEPENDENCE</b>
1a. F_C1 runs "independently" of F. In other words, each folio runs according to the predefined strategy rules.
1b. F_C1 runs "independently" of F, but has different strategy parameters (for instance a different minimum scalp size, etc.)
1c. F_C1 runs "independently" of F, but uses a completely different strategy.

<b>2 - CLONE DEPENDENCE</b>
F and F_C1 play the same strategy and can start simultaneously, but the entries and exits of F and F_C1 are not independent: there are "rules" which enforce some "constraints". (For example, if an instrument is currently long, we might constrain the clone instruments to open short, etc.) A minimal sketch of these two modes follows at the end of this post.

So there are several interesting questions which arise here. For instance:
- What is more convenient, letting them run independently or introducing constraints? And, in that case, what constraints?
- If we choose the INDEPENDENCE way, what is the optimal instant t1 to activate the clone? Or is there an advantage in changing parameters?
- add more...

Before attempting a few answers and examining the objective results of backtests, I will let you think about it and formulate more questions to be investigated in our quest for the best profits (relative to risk and "max investment"). I would also be interested in your <b>trading "intuition"</b>: what would you "intuitively expect", and which approach would you "intuitively" see as the most convenient?

Tom
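Here is that sketch: one minimal way to represent the two coordination modes in code (all names here are hypothetical illustrations, not the robot's actual classes):

    ' Hypothetical sketch of the two overlay coordination modes discussed above.
    Enum OverlayMode
        Independent     ' 1a/1b/1c: each clone folio follows its own rules, no cross-constraints
        CloneDependent  ' 2: entries/exits constrained by the state of the clone subset
    End Enum

    Class CloneFolioActivation
        Public Mode As OverlayMode
        Public ActivationTime As DateTime   ' t1 > t0: the delayed start keeps the clone "out of sync"

        ' Example constraint for the CloneDependent mode: allow an entry only if it
        ' does not increase the absolute neutralized position of the clone subset.
        Function EntryAllowed(SubsetNetPosition As Integer, EntryDeltaSigned As Integer) As Boolean
            If Mode = OverlayMode.Independent Then Return True
            Return Math.Abs(SubsetNetPosition + EntryDeltaSigned) <= Math.Abs(SubsetNetPosition)
        End Function
    End Class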
I would definitely choose independence. For 1a/1b you could launch one instance at the opening of the European markets, then a second at the opening of the US index session (thus trading after the usual first US economic data at 13:30 GMT). Personally, I would feel more comfortable with 1c (in that case the starting time of the second folio is not really important). However, how different would the second strategy be? (Still "kind of" contrarian, or trend-following?)
Congrats unco, powerful intuition! All the simulations I have been doing so far seem to confirm you are right. All the "rules" I have tried so far lead to a <b>much less performant combined algorithm</b>. Clearly, this may simply mean I have not been "lucky" so far in the search for these "rules", or it may actually be a <b>general indication</b>. Actually, after some thinking, I feel that the result may be general indeed: independence will always beat any constraints applied to "coordinate" the different folio instances. I invite you to provide an intuitive explanation for this fact (assuming it holds quite generally). This is a tough question ;-) Later we will discuss some simulation results to illustrate this point.

As to strategy diversification, one point that could be discussed is the following. What is more convenient:
- to overlay independent instances of the same (good) strategy, shifted appropriately in time/price, or
- to overlay independent instances of different strategies?

There is a general consideration to make, however. In really nasty scenarios with wild volatility and <b>all instruments tracking together</b> in a <b>monotonic pattern</b> for a long period, with just small retracements (say, just enough to trigger trailing stops), it is conceivable that all meaningful strategies will begin to <b>"converge"</b> in behavior. Let's make an example. Assume that EUR is at 1.25 and starts rising up to 2.00, and assume that almost all the other instruments in the folio track with it. At a certain point, whatever the strategy, it is conceivable that most strategies will grow a short position, as it would be suicidal to grow a long position and, at the end of the upward movement, find ourselves on the wrong side of the market. So a general <b>strategy convergence</b> is expected.

Tom