OK. I have identified one source of misunderstanding. It comes from your post on Nov 2. The bottom of your simulation screenshot contains the following information: [ SIM Process: GBM_MR, Vol: 77.19%, Drift: -3.22%, Rev. Speed: 0.11 ] My interpretation is that you are simulating on a mean reverting GBM rather than a purely martingale one. Is this the result of testing on the instruments you use? Do simulations of the system still show a profit on a pure (non-mean-reverting) GBM? It seems impossible that it would over a large sample, especially if trading costs were included. I seem to remember this mean-reverting feature of your simulation testing in prior threads also. Hence the confusion. Can you help me to understand? Thanks. mj
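For reference, the screenshot parameters (process "GBM_MR", a volatility, a drift, and a reversion speed) suggest something like the following sketch of a mean-reverting GBM. The actual G-bot generator internals are unknown, so the parameterization, the reversion level, and all defaults below are assumptions:

```python
import math
import random

def simulate_gbm_mr(s0, vol, drift, rev_speed, mean_level, n_steps,
                    dt=1 / 252, seed=None):
    """Sketch of a mean-reverting GBM: the log-price gets an
    Ornstein-Uhlenbeck-style pull toward log(mean_level) on top of
    ordinary GBM drift and diffusion. Setting rev_speed=0 recovers
    a pure (non-mean-reverting) GBM."""
    rng = random.Random(seed)           # fixed seed => replayable path
    x = math.log(s0)
    mu = math.log(mean_level)
    path = [s0]
    for _ in range(n_steps):
        # drift term + mean-reversion pull + diffusion term
        dx = ((drift - 0.5 * vol ** 2) * dt
              + rev_speed * (mu - x) * dt
              + vol * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        x += dx
        path.append(math.exp(x))
    return path

# Values loosely mirroring the screenshot labels (hypothetical usage)
path = simulate_gbm_mr(100.0, vol=0.7719, drift=-0.0322, rev_speed=0.11,
                       mean_level=100.0, n_steps=252, seed=42)
```

With `rev_speed > 0` the path keeps getting pulled back toward the reversion level, which is exactly the property the question is about: a scalping grid revisits its old price levels far more often on such a path than on a pure GBM.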
This is another part I have trouble understanding. You say the hedging orders appear like stops, but in fact, they are stops - at least in the sense that your overall account no longer has an open position. So this is part of why I am confused about the use of "unrealized". Once you have opened and closed a position with a stop loss, you now just have a loss; there is no open position and no potential future PNL from this round-trip trade. Only future trades being opened are able to increase your PNL.

Let me try to explain what I mean using your "layers" diagram from above... and also to verify my own understanding, because it's complex!

In this diagram we have an opened trade (I assume a sell order) by Layer 1 at some price level (this trade is the red dot in the L1 part of the diagram). Then, price moves up and, in order to hedge (I assume), Layer 2 places an opposing (buy) order at the higher price (this trade is the blue dot in the L2 part of the diagram). At this point, in terms of the aggregate account, there has been one round-trip trade which was closed for a loss (most people would call this a stop).

Next, the price moves down below the initial sell price of Layer 1, so Layer 1 issues a buy order (this trade is the blue dot in the L1 part of the diagram). At the account level, this is the opening of a new (long) trade, but in your Layer construct, this is Layer 1 closing out his original short sale at a profit by buying low.

Finally, the price changes direction and moves to a higher level, which (critically) is higher than Layer 2's initial buy order. This allows Layer 2 to sell high (the red dot in the L2 part of the diagram) and also to close his trade profitably. At the account level, this is the closing of the long trade at a profit.
Since the second, account-level trade (the long trade started by Layer 1 and ended by Layer 2) gained a larger amount than the loss on the first account-level trade (the short trade started by Layer 1 and ended by Layer 2), the combined effect is a positive PNL. If we look at each Layer individually, we can also see that each posts a small positive PNL. I think it is important to note that we can only have every Layer simultaneously posting a positive PNL in the event that the whole account has a positive PNL.

-- OK. Back to the beginning. Can you please help explain why this process is a source of edge? You give reference to the scalping action "throwing" the unrealized "G-L" into the PNL, but I can't understand how that works in a meaningful way, unless there is some change to entry and exit strategy as a result of the Layers setup. Is there?

For example, suppose price never came back down to the level of Layer 1's short sale (to open the first position). We would see Layer 2 place the buy order to hedge, and the account-level position would be closed for a loss. If price never returns to a low enough level where Layer 1 can book his own profit, does Layer 1 just stay disabled permanently? If not, how is the decision made to shut him down with an open "loss" on his private (Layer-level) books? At the Layer level, in this case of rising price, Layer 2 would eventually "book" his profit, but this would be smaller than Layer 1's unrealized loss by exactly the amount of the account-level loss on the initial short position. This makes sense because the overall account has zero exposure when the net position is closed, so PNL doesn't change until one of the Layers takes action. Also, Layer 2 booking his profit means closing his position - an act which reopens the account's exposure to a short position. I am curious what happens next.
If Layer 1 is still disabled, waiting for markets to make a large reversion, does this mean you are limited to Layer 2, or that now you have to spin up an additional Layer 1 style player who opens up short positions? Thanks for taking the time to read all this. I think the whole thing is fascinating and I'm just trying to understand all the moving parts so I can understand how the edge works. mj
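As a sanity check on the four-fill walkthrough above, the same fills can be tallied both at the account level and at the Layer level. All prices below are hypothetical, chosen only so the numbers match the described scenario:

```python
# Hypothetical fill prices for the four trades in the diagram walkthrough
p_sell_L1 = 100.0   # Layer 1 opens a short (red dot, L1)
p_buy_L2  = 102.0   # Layer 2 hedges higher: account goes flat (blue dot, L2)
p_buy_L1  =  97.0   # Layer 1 buys back low: account opens a long (blue dot, L1)
p_sell_L2 = 103.0   # Layer 2 sells high: account long closes (red dot, L2)

# Account-level view: two round trips, a stopped-out short then a long
short_trip = p_sell_L1 - p_buy_L2      # -2.0  (the "stop")
long_trip  = p_sell_L2 - p_buy_L1      # +6.0
account_pnl = short_trip + long_trip   # +4.0

# Layer-level view: each Layer pairs its own open with its own close
layer1_pnl = p_sell_L1 - p_buy_L1      # +3.0
layer2_pnl = p_sell_L2 - p_buy_L2      # +1.0

# Same four fills, two decompositions: the totals must agree
assert abs(account_pnl - (layer1_pnl + layer2_pnl)) < 1e-9
```

This also illustrates the observation that every Layer can only be simultaneously positive when the account as a whole is positive: the Layer PNLs are just a regrouping of the same fills.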
Hi monkeyjoe, interesting question, as it opens up an area I have not touched so far, to avoid confusing ideas at this stage: the simulation facility to study and analyze the "games" prepared by the user.

First of all, let's say one important thing. The simulation facility is coincident with the actual trading engine, which means that when testing a game, the lines of code which are executed are the same which are executed in a real account. Clearly, when running one instance in game simulation mode, it is not possible to trade for real with the same instance, and vice versa. Simulations also include commissions for each order, which are simulated with a conservative value (usually a little greater than the real one).

<b>What is the purpose of this game device simulation within this approach?</b>

The simulation device is an application like any other. It can be used and interpreted differently by different managers. This of course happens with any application, even with the main trading application. As an example, sometimes (especially in early times) it occurred that some users completely disabled the hedging entries (that is, disabling the so-called T players), thus transforming the games into purely avg up/down ones, with the obvious consequence of creating games which are usually not bearable in the long term without large enough capital, at least with some instruments (especially indexes, etc.). So while I provide an architecture, different persons can of course have different understandings and interpretations of it. Since, so far, I have spent more time than anyone else thinking about it, it is possible that I have already made most of the steps which managers using the same application and concepts will pass through, but it is also likely that, soon, many of them will be able to push further ahead and provide more useful insight and improvements (especially in the game area).
Now, my view on the role of the simulation facility is that this is purely a <b>study environment</b> where the manager can define a "game", that is, <b>a set of rules for trading</b>, and the application will provide some <b>synthetic tickdata</b>, so that <b>he can see the effects of the rules he has defined, and how they act on arbitrary streams of prices</b>. The tickdata generator has a lot of flexibility; for instance, it also allows fixing the <b>initial seed</b>, so that if one sees a stream of tickdata which appears particularly challenging to trade, he can play it again and again. So, as I see it, it's just <b>a tool to study the effects and action of the game rules on a stream of tick data.</b>

How the tickdata is generated is not the main initial concern and focus, as the purpose is to study the game, provided that some basic intuitive requisites for the data used are fulfilled. Clearly, a first intuitive <b>minimum requirement</b> for this study environment to be useful to study the games is that <b>any reasonable trader looking at the tickdata being generated won't be able to say whether that data could have been generated by a real instrument or not</b>. This would probably seem obvious, but it must be said, or else it may seem that any data generator could be used, including for instance white noise, while, instead, we would like to explore the application of our trading rules to vastly <b>diversified scenarios, normally indistinguishable from reality (so that we could not exclude a priori that they could occur)</b>. Clearly, one can obtain that in several ways, whether by randomness or chaos or whatever. I found it useful, <b>to diversify the scenarios studied</b>, to use a continuously changing mixture of stochastic processes with randomized volatilities and drifts (within ranges definable by the user), so that so far nobody has ever been able to tell apart the data generated from data which could belong to a real instrument.
(This kind of flexibility is useful as, for instance, it also allows one to create structural drifts in the tickdata, which can sometimes be useful when tuning games for long-term drifting instruments, such as the ultrashorts.)

Recapping: the simulation environment is a study environment for the games. It, of course, cannot provide any prediction of actual results. It can only <b>provide a precise prediction and representation of the "rules" that will be applied</b> to the real data coming from the real market. The purpose is to allow you to create a game (entry rules, order sizing, hedging, etc.) which you like and which you think automatically performs the actions you would want to take at any moment under the given circumstances, so that you can "sustain" it psychologically (= not shut down the algo) even when underwater.

The trading application, however, will not really care about how you define your games, as its main purpose is, whatever your definitions, to <b>maintain the trading information</b>, for you to have the potential to recover any order which, at any time, had a hedging role (say, a "temporary stop"). (Thus giving you a form of "strategic dominance" with respect to any other trading procedure that does not, although not an absolute guarantee of profits in a given timespan.)
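As an illustration of the kind of generator described (a changing mixture of stochastic regimes with volatilities and drifts randomized within user-defined ranges, replayable via a fixed seed), here is a minimal sketch. It is not the actual G-bot generator; all names, parameters, and defaults are assumptions:

```python
import math
import random

def tick_stream(s0, n_ticks, dt, vol_range, drift_range,
                regime_len=500, seed=None):
    """Sketch of a seedable synthetic tick generator: GBM regimes
    whose volatility and drift are re-drawn every `regime_len` ticks
    from user-defined ranges, so the scenario keeps changing."""
    rng = random.Random(seed)       # fixing the seed replays the stream
    x = math.log(s0)
    for i in range(n_ticks):
        if i % regime_len == 0:     # regime switch: new vol and drift
            vol = rng.uniform(*vol_range)
            drift = rng.uniform(*drift_range)
        x += ((drift - 0.5 * vol ** 2) * dt
              + vol * math.sqrt(dt) * rng.gauss(0.0, 1.0))
        yield math.exp(x)

# Hypothetical usage: a "challenging" stream can be replayed exactly
a = list(tick_stream(100.0, 2000, 1 / 86400, (0.2, 0.9), (-0.1, 0.1), seed=7))
b = list(tick_stream(100.0, 2000, 1 / 86400, (0.2, 0.9), (-0.1, 0.1), seed=7))
assert a == b
```

The regime mixture is what gives the diversification of scenarios: no single (vol, drift) pair describes the whole stream, so game rules get exercised under many conditions within one run.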
That's right. This has been a many-year journey and, as you rightly recall, I have also been testing and "tasting" how far we could get without hedging. Even now some people testing the app are tempted to avoid hedging, lured by what can appear to be quick initial gains. I try to warn against it, as I have seen that most people and capital cannot really sustain, over a decently long term, the dynamics of a purely avg up/down game without hedging. Anyway, it's also useful that people walk the walk and make their own journey.

Games structured with continuous scalping and hedging are invariably "slower" to go "upwater", but they are in most cases much more generally sustainable. The side effect is clearly, in general, a slower growth of the PNL, because the rate of loss is very close to the rate of gains, and it takes a while for the recovery mechanism to unbalance the gains and grow the "G-L" (unless one gets lucky, or chooses the instruments very carefully, and starts under particularly favorable conditions). But that is the way it is. In general, and looking at systematic profitability under all weathers, it may take some time for an investment to work. After all, for most other types of investment we make, it takes some time to have a return (if any at all), and it is not clear why a trading investment must be different (it is not uncommon, I have noticed, that when it comes to trading some people have the strangest and most unreasonable expectations).

(Actually, I'd say that in most cases you are forced into an extensive hedging action, as the natural behavior of many MM algos is to quickly "move against" your orders, especially if they are relatively significant in the current context. And therefore it is crucial to be able to "undo" those hedging orders in the future.)
One problem of this industry is that too many false get-rich-quick myths have been perpetuated which, in virtually all cases where pure luck over a short period was not the reason for the results, were just cover-ups for other activities, often largely illegal, which had nothing to do with genuine and legal trading activities.
The mechanism is actually not difficult at all to get used to, once you see it in action before your eyes and play with it for a while. (Now I would not even be able to trade otherwise.) Clearly, you must <b>abandon the perhaps more natural perspective of trades made one at a time (non-overlapping), in a linear sequence</b>. It's simply a "superposition" of players on a layer (if you prefer, a metaphor for multiple traders using the same account and instrument and coordinating their actions for best results). Don't look at the actual state of the "physical account" (apart from occasional synchronization checks): just reason in terms of players, as they superpose their results and add up algebraically.

I think the best way for you to fully understand the mechanism is to get a (clearly free) copy of the app, so you can play with this mechanism and see what I have tried to explain in those diagrams. Many people are participating in this project and you can also request your copy from my website (someone already asked about that in a previous post). I think once you see it in action on your side I can answer your questions more effectively, and more concretely on practical cases. Intuitively, the crucial point is the <b>"conservation of the trading information"</b>, which actually modifies the very "shape" of what I call the "order cloud", and allows you to recover the old "stops" without "duplicating" them (and then adding losses to losses). This, in the long term and in statistical terms, matters.
Thank you for taking the time to read my (long) posts and reply in so much detail. I have read all your responses, but I won't quote every piece -- it would just take too much space! A couple more things... I wholeheartedly agree. The only reason I was looking at the actual short-term PNL as an indicator is because of the high volume of trades. My understanding is that high frequency (HF) systems generally show you whether they are profitable or not much more rapidly. In a statistical sense, you might think of this as the sample size going to infinity faster, so HF strategies' PNL generally converges to its limiting distribution rather quickly. I guess in the case of G-bot, you are after long term statistical advantage from scalping eventually overcoming drawdowns, so maybe the convergence wouldn't be as rapid as with a more basic HF strategy. OK. I will email you in the next few days to download a copy. Maybe I can test out the effect of mean reversion on the sims myself! The main thing I am trying to figure out is whether you have found (through the implementation of G-bot) some novel tradable feature of markets in how prices are revisited... perhaps a feature that is not adequately explained by a mean reversion parameter in a GBM model because of concepts like "support" and "resistance" used in technical analysis. If it's not too much trouble, could you try to answer my question about the Layer 1 and Layer 2 example? Specifically, what happens to Layer 1 if the price runs away? If Layer 1 must simply wait and endure the drawdown (only to his private Layer account, of course) until price goes back to a level where he can close for a profit, then I am curious about what drove this choice of strategy. It would seem to limit scalping opportunities if price had just moved to a new higher level and then stabilized. 
Alternately, if there is no single answer for what happens to Layer 1, then my guess is that what happens may be up to the manager of the system, and that perhaps this is a part of game rules that you haven't discussed as much in the past? It would be interesting to hear what sort of options/controls you have put in place for these types of scenarios, as any scalping strategy's profitability really depends on how runaways are handled. Thanks again, Tom. Your time and effort in development and in explaining stuff is really much appreciated. mj
Continuing the volatility arbitrage discussion... Never typed this out before, so my thoughts might be a little jumbled. Your basic strategy of layering bids and offers to capture these endless moves up and down is essentially a synthetic short volatility position. That can be concluded fairly easily. When an option MM is long a straddle, they will delta hedge in order to isolate their execution edge. Their delta hedging will basically look like buying as price drops and selling as it rises. They do this because the hedge is synthetically the opposite of their long straddle, allowing them to isolate the favorable price they got on the options. So as I see it, your basic strategy is synthetically the opposite of a long straddle (minus the time component), i.e. short volatility. Say a MM is short a straddle. He is now going to delta hedge by getting synthetically long volatility, which will look like buying as price rises and selling as price falls, similar to your stops. So you've basically got synthetic short volatility positions which are also being hedged by synthetic long volatility positions. I'd argue that this is just a vol trading strategy. I know this is how people trade volatility, and they make money by having a vol forecasting edge. How they forecast volatility, I don't know.
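The hedge directions described above fall out of the Black-Scholes straddle delta: a long straddle's delta is positive above the strike and negative below it, so the hedge (its opposite) sells as price rises and buys as it drops. A minimal sketch, where the strike, rate, volatility, and expiry are hypothetical:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def straddle_delta(s, k, r, sigma, t):
    """Black-Scholes delta of a long straddle = delta(call) + delta(put)
    = 2*N(d1) - 1: positive when spot is above the strike, negative below."""
    d1 = (math.log(s / k) + (r + 0.5 * sigma ** 2) * t) / (sigma * math.sqrt(t))
    return 2.0 * norm_cdf(d1) - 1.0

# Hypothetical parameters: strike 100, zero rate, 30% vol, 3 months
# The delta hedge takes the OPPOSITE position of the straddle delta:
hedge_above = -straddle_delta(105, 100, 0.0, 0.30, 0.25)  # sell as price rises
hedge_below = -straddle_delta(95, 100, 0.0, 0.30, 0.25)   # buy as price drops
assert hedge_above < 0 < hedge_below
```

The same signs flipped give the short-straddle hedge (buy as price rises, sell as it falls), which is the pattern being compared to the stops.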
@monkeyjoe

<b>> The only reason I was looking at the actual short-term PNL as an indicator is because of the high volume of trades</b>

"High volume" is a relative concept. "High volume" with respect to what? It does not seem "high volume" to me. Actually, the trading pace in this current session feels quite calm. Anyway, the <b>trading frequency</b> could be increased by reducing the entry spacing and making the entry conditions less stringent. My personal belief about this is that there is a limit to how "fine" and frequent you can get (well, in any case you certainly can't get too close to the spread), because there seems anyway to be a lower bound beyond which the MM algos will easily massacre you, if you try. What seems to happen, most often, is that if you try, some MM algorithms will start to rapidly "fluctuate" against all your frequent orders and make your local loss grow beyond any limit (this will be hard to "undo", as you will soon have sizes too big, standing on the wrong side of the mkt, to be recovered). I have seen some instances of that with some instruments, even if not in the same measure. [The most "aggressive" case I have seen I believe to be UGAZ, which seems equipped with a highly aggressive logic (just my practical, personal experience however; I have no other "proof").] In this sense, somehow the mkt seems to be organized in such a way as not to allow any <b>"market taker"</b> to easily get away with quick and easy money (at least not on a systematic basis).

<b>> My understanding is that high frequency (HF) systems generally show you whether they are profitable or not much more rapidly</b>

Well, they obviously do in relative terms. It depends on the logic by which you judge whether you are on the way to "profitability" or not. In our case we just want to see the "G-L" grow steadily and at the same time bound the "unrealized" within a finite range.
From my point of view, if I don't see a "sound reason" (that is, one considered so by a reasonable number of reputable, experienced people) why a trading methodology must be profitable, there is no positive PNL equity curve long enough that will ever change my opinion that it is a pure <b>work of chance</b>. And, on the other hand, you <b>will never have any other way to "prove" that it is not</b>, if you do not have a sound reason to justify it. This is similar to when people talk about "correlation" between phenomena ("spurious correlation"). If 2 facts have no "logical dependence", however high the absolute value of r (the correlation coefficient) you get, it will have no meaning. It's not the fact that you can compute something that creates a phenomenon. The correlation <b>must exist in the first place</b> (at an agreeable logical level), and then its measure can start making sense.

<b>> some novel tradable feature of markets in how prices are revisited... perhaps a feature that is not adequately explained by a mean reversion parameter in a GBM model because of concepts like "support" and "resistance" used in technical analysis.</b>

Nope, I have not. And I don't believe they exist at all. I look at all the above concepts as irrelevant to our purpose, attributing to them a purely "descriptive" meaning (which is a worthy role, anyway).

<b>> Specifically, what happens to Layer 1 if the price runs away?</b>

That depends on what you instruct the game to do. In my current example games, if there is one side running away, players from the opposite side will kick in (the "intensity" of the hedging action, order sizes, etc., is specified in the game rules). One important thing to note is that in that post I mentioned 2 layers just to make the illustration clearer. In reality, as I have also specified, <b>all the work is done on 1 (unique) layer and the players are, as we say, "superposed"</b>.
(The "layer overlay" can also be used, but it represents an higher level of logic overlying, say at a "macro" level.) <b>> If Layer 1 must simply wait and endure the drawdown</b> That is a matter of how you define the game. I currently tend to like to hedge promptly but gradually, even if that slows down the growth. The intensity of the hedging action may also depend on the nature of the instrument and on the side (buy/sell) (eg, ultrashort can be treated asymmetrically: that's why I am calling the game I am using "bias"). <b>> It would be interesting to hear what sort of options/controls you have put in place for these types of scenarios, as any scalping strategy's profitability really depends on how runaways are handled</b> Well, as we said before, this is really a "tug of war" between gain and loss growth rates. So the goal of creating good games is to realize a good balance, which provides some protection, which be comfortable to trade (and different people may have different "pain thresholds"), but at the same time does not make the growth too small, or even negative. Our edge is the fact we are "recycling losses" when possible. Where "edge" here does not mean clearly we are guaranteed to become rich soon, but more reasonably that our architecture is providing a <b>statistical advantage</b> respect to a similar strategy that would not be using the past trading information and it's actively using that advantage to "unbalance" the natural 50/50 gain/loss ratio (normally obtained with the odinary "memoryless" stops) in our favor. You just establish a positive "drift", then time must develop it and gradually show it in the PNL (clearly, the larger the folio, also the slower could be the process). In this sense I humbly talk of <b>"strategic dominance".</b> Sure, about games (entry rules, sizing, hedging intensity, etc), it's a completely open field of research and engaging exploration. 
And one might also tune them with respect to entire "classes" of instruments, in order to adjust the game rules to the structural features of the instruments, e.g. commodities, indexes, ushort ETFs, etc.
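The "natural 50/50" baseline mentioned above can be checked with a quick Monte Carlo: on a driftless walk, a symmetric profit-target/memoryless-stop exit wins about half the time, so before costs the expected PNL is zero, and any edge has to come from unbalancing that ratio. A sketch (not G-bot code; all parameters are illustrative):

```python
import random

def memoryless_stop_win_rate(n_trials=20000, barrier=10, seed=1):
    """Go long at 0 on a driftless unit random walk; exit at +barrier
    (profit target) or -barrier (memoryless stop). With no edge and
    symmetric exits, wins and losses come out ~50/50."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(n_trials):
        x = 0
        while abs(x) < barrier:
            x += 1 if rng.random() < 0.5 else -1  # fair coin-flip step
        wins += x >= barrier
    return wins / n_trials

win_rate = memoryless_stop_win_rate()
assert 0.45 < win_rate < 0.55   # hovers around 0.5, as expected
```

Since gains and losses are the same size here, a ~50% win rate means zero expected PNL before commissions; the claim in the post is that remembering and later "undoing" the stops is what tilts this ratio.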
Hi jb514, that's a pretty tough discussion. I am not sure I have enough experience with options to really argue in favor of or against the analogy you propose, as I tend to use them mainly for hedging purposes. Certainly it sounds pretty suggestive. At a very superficial level, what I can note here is the absence of many features which strongly characterize option dynamics. You noted "minus the time component", but once you remove the time decay we have already taken out a big chunk of the option logic. Then there are all the other factors, represented by the greeks, which would present a similar challenge: delta, IV, .... So I do like the analogy, but I also imagine that it could probably face some argument. Certainly, I would like to know the opinions of more experts who have familiarity with options trading. From the conceptual point of view, the limit I see is that the analogy would possibly apply only to <b>a subset</b> of the infinite possible games. In fact, one could, in principle, devise games which are, for instance, strongly directional, and your analogy would lose strength. Not only that: even now, for instance, the "Bias" game, which has a strong short bias, would probably escape the analogy. But the most crucial point is perhaps that a possible characterization along those terms would bring focus onto the game structure, which I do not consider here the crucial reason for edge, but simply one possible instance, among infinitely many, of how we can control the rate of growth of the different PNL components. Anyway, I feel your analogy is a large and interesting avenue worth investigation and discussion, as we might incorporate in the games some techniques derived from option trading. And trading futures and ETFs automatically is certainly easier than attempting similar stunts with options. Looking forward to hearing more about this ...
We can get into more depth later, but you are synthetically creating the PNL of volatility trades by adding convexity through layering into and out of positions. Your term "bias" essentially refers to adding a delta component to your volatility bets. I do it, and I've seen others do it, all the time. I could have a delta bet on, thinking something is going to break out, but I could also bet on vol, thinking it's going to be range-bound in the meantime. Sometimes in a winning trade, the vol bet will pay for the delta loss, or the delta bet will pay for the vol loss, or they'll both make money.