Puzzling probability and roulette a la Mark Brown

True story - armed with a new roulette system, my cousin and I raided a casino one day. We stood out in the middle of the floor looking at the roulette light poles that keep track of the most recent winning numbers and their colors. Actually, the colors were all I was interested in. The theory was to wait for three reds or blacks in a row and then bet the opposite color. If that failed, then triple up the bet and play that opposite color again, and again and again if necessary.

Well, after hours of winning at this, we hit a run of 8 in a row against us - but no problem, we were loaded with chips. We were drawing attention by now, so we stopped and planned to return the next day. Like a thief returning to the scene of the crime, we went back bright and early. I could tell immediately that the personality of each wheel that I had memorized yesterday was totally off, but nonetheless, armed with bulging, pleated pockets of chips, we went headlong into the danger zone. To make a long story short, we hit a run of 12 losers in a row and were hitting house limits on the betting by then - that's how the house wins, by limiting unlimited betting. So with our wives both betting as well as ourselves, and much booing from a crowd that shouted "cheat" and "no way" etc., we finished our last-ever effort at owning a casino. I am next going to use a new method, same theory, but when it fails by about the third bet, I quit.

PS: I know red and black have no values, but it was fun. Yes, we won the thirteenth spin of the wheel. Our facial profiles are probably in a data bank for stupid stunts pulled off by now.

Mark Brown

=======================

Perhaps someone who knows could explain this probability question to me. Curious as to whether that simple roulette system Mark spoke about could work consistently, I decided to try it out for myself. I went to one of those "play for fun" casinos and, starting with a bankroll of $2000, decided to be a bit more conservative and wait for four in a row of one color to come up before betting the opposite color. If I lost, I would double up and bet the same color again, and so forth until my color came up.

Well, this worked fabulously for the first while. I was betting in $50 increments and had increased my account all the way up to $2900. This was great! But then I hit a string of 10 blacks in a row, and the betting limit (of which I was previously unaware) kicked in, so I was only able to bet $1000 instead of $1600 (actually I would only have been able to bet the rest of my bankroll, which was $1400). Anyhow, after that, I quit with $2400 in my account.

So the next day I was discussing this with my brother-in-law, who is a programmer and has taken probability and statistics in university, but I was not able to wrap my pea brain around what he told me. He said that my betting system wasn't valid because each bet was an event independent of previous events, and that my next bet would therefore always have a 50/50 chance of going my way. Yet when I asked him what the odds were of one color coming up, say, 5 times in a row, he said that the chance of that happening was about 3% (.5 * .5 * .5 * .5 * .5). Now in my mind, these two probabilities are measuring the *same event*. So how, on the one hand, can my probability of winning be only 50%, and on the other hand, just 3%? I just can't figure this out. Anyone?
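To see how the 50% answer and the 3% answer coexist, it helps to simply play the system out many times. The sketch below (Python) uses the numbers from the story - a $2000 bankroll, $50 base bet, $1000 table limit - plus two assumptions of mine: an American-style wheel with two green slots, and 1000 spins per session. It waits for four of one color, then martingales the opposite color.

    import random

    def play_session(bankroll=2000, base_bet=50, table_limit=1000, spins=1000, seed=None):
        """Play the 'wait for 4 of one color, then double up on the other color' system."""
        rng = random.Random(seed)
        colors = ["red"] * 18 + ["black"] * 18 + ["green"] * 2   # American wheel: 38 slots
        history = []
        bet = 0          # current bet size; 0 means we are waiting for a streak
        target = None    # color we are betting on
        for _ in range(spins):
            spin = rng.choice(colors)
            if bet:                                   # a bet is on the table
                if spin == target:
                    bankroll += bet
                    bet = 0                           # streak over, go back to waiting
                else:
                    bankroll -= bet
                    bet = min(bet * 2, table_limit, bankroll)   # double up, capped
                    if bet <= 0:
                        break                         # busted
            history.append(spin)
            # start a new bet after 4 identical non-green colors in a row
            if bet == 0 and len(history) >= 4:
                last4 = history[-4:]
                if last4[0] != "green" and len(set(last4)) == 1:
                    target = "black" if last4[0] == "red" else "red"
                    bet = min(base_bet, bankroll)
        return bankroll

    results = [play_session(seed=i) for i in range(2000)]
    print("average ending bankroll:", sum(results) / len(results))
    print("sessions that ended below $2000:", sum(r < 2000 for r in results))

Each individual bet is still (at best) a 50/50 proposition; the 3% figure only describes a run of five viewed before any of it has happened. The simulation typically shows many small winning sessions and occasional large losses when a long run meets the table limit, netting out to roughly the house edge.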
Dave

================================

Might I add (since I'm currently working on a volatility model) that mean reversion isn't the only thing: volatility, in addition to mean reversion, goes through increase/decrease cycles governed mainly by an information factoring/fading process. It is actually two overlapping processes, not just one. LTCM blew up because they did not factor in the second process and relied on mean reversion only. The increase cycle is especially dangerous, since volatility can snowball to unexpectedly high values as the information intake snowballs, and ruin your mean reversion model right where you expect it to mean revert.

A coin toss or roulette, on the other hand, has no increase or decrease info cycles with feedback, simply because there is no information intake (except the spin itself), unlike in trading - no memory, thus governed by a random process only. Trading does not equal gambling. However, even in roulette you might get an unexpected random 10 reds in a row while you are "mean reverting" after 3-5 and betting on black, right? The difference is the nature of the process: in trading it's random + information factoring and feedback; in the game it's a pure random event. And since trading is not always random, prediction is possible. In trading, new information begets new information begets new information, i.e. there are information cycles. A turning point in trading, for instance, is where one information cycle is overridden by a new counter cycle. When volatility snowballs, it is the result of the rolling information snowball. Compare that to a common information shock, such as a news event that creates an initial jump in volatility, aka a price shock, which then fades and reverts to the mean. Selling volatility into the price shock will work, but not during the snowball. That means defining the cycle (increase or decrease) you are in is as important as defining whether you are under or over the mean, and by how much.

In roulette there are no snowball cycles and no memory; in blackjack you might have a weak info cycle in there based on the sequence of cards, and thus can have a better edge if you can determine the cycle by counting. In roulette, over about 100 wheel rotations on average you should expect about 3 runs of 5 or more reds or blacks in a row, etc. - and after each of them the probability of black or red is still 50%.

bilo.
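The "about 3 runs of 5 in 100 spins" figure is easy to check by simulation. A minimal sketch (Python), ignoring the green slots so every spin is a clean 50/50:

    import random

    def runs_of_at_least(spins, length=5):
        """Count maximal same-color runs of at least `length` in a list of spins."""
        count, run = 0, 1
        for prev, cur in zip(spins, spins[1:]):
            run = run + 1 if cur == prev else 1
            if run == length:           # count each qualifying run once, when it reaches `length`
                count += 1
        return count

    rng = random.Random(0)
    trials = 20000
    total = sum(
        runs_of_at_least([rng.choice("RB") for _ in range(100)]) for _ in range(trials)
    )
    print("average runs of 5+ per 100 spins:", total / trials)   # comes out near 3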
Well, I guess I could write a few lines here. Suppose you want to predict the volatility of a price time series. The stylized facts that you base your model on are:

- it's mean reverting
- it's Gaussian in log, or near Gaussian, or stable - whatever, but stable
- it's persistent, serially correlated, clustered
- it's usually positively correlated with volume
- it scales in time, usually based on either a square-root-of-time law or an exponential law (a short numerical check of the square-root case appears further down)

That's all you have to go by, and often there is very little explained as far as the fundamental nature of volatility or risk. Why does volatility go up and down, and how do you explain the persistence, mean reversion, scaling laws and volume correlation? If the model does not explain the above but just takes them as facts, then most likely the model won't perform, since you will miss something in there. To explain it all, you have to start with the fundamental nature of financial markets.

The market is an information processing "machine" in which a true value of the underlying traded is found based on incoming information, and profits and losses are allowed to be made as a payoff for performing the information processing function. Look at the market as a box: on the input you have new information, and on the output you have new value; moreover, there is feedback, since the new value itself is part of the new information input. Types of info that you have on the input are:

- fundamental information: news, rumors, facts, you name it
- technical (feedback) information: the price action itself, bandwagon expectations, i.e. momentum, i.e. trend, etc.

Now, information comes in cycles, waves, and there are two basic cycles that correspond to up and down moves in price - for simplicity: the current info cycle and a new contradicting or competing cycle. Example: you have a major Fed announcement that creates an information cycle that pushes prices higher. After the initial factoring of that information, a new counter cycle develops based on a different interpretation of the same report, and prices recede. Next a bandwagon expectation develops where everyone piles onto that second reaction - since we all observe the price action, every new downtick represents new information for us. Next, after the bandwagon pile-up cycle ends (information factored), yet another new information cycle develops where people get a clue that the move might be over, and the intraday profit-taking move sets in, which is itself new information - and on and on and on. So you can see the market outputs price value and creates new information that begets new information, etc.

Once we know the basics, we can link volatility to information, or the lack thereof. Information in general could be measured in 1. content, 2. units, 3. weights; there might be several units of information with different weights, based on different content, that the market factors in at any given moment. So information factoring is a cycle which has duration and magnitude and defines direction, which is basically realized as a price move. Those info cycles overlap, counteract, resonate, etc. Sometimes there is little new information for the market to take in and the market kinda dies down; sometimes there is lots of new info to factor and the market explodes, and once the info is factored it fades down.
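As a quick aside on the scaling item in the list above: for plain i.i.d. Gaussian log returns (used here purely as a stand-in, not as a claim about real prices), the standard deviation of k-bar returns grows like the square root of k. A minimal Python sketch:

    import math
    import random
    import statistics

    rng = random.Random(1)
    one_bar = [rng.gauss(0, 0.01) for _ in range(100_000)]   # stand-in 1-bar log returns

    for k in (4, 16, 64):
        # non-overlapping k-bar log returns are sums of k consecutive 1-bar returns
        k_bar = [sum(one_bar[i:i + k]) for i in range(0, len(one_bar) - k + 1, k)]
        measured = statistics.stdev(k_bar)
        scaled = math.sqrt(k) * statistics.stdev(one_bar)
        print(f"k={k:3d}  measured={measured:.5f}  sqrt(k)*sigma1={scaled:.5f}")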
Look at an example of a new information cycle: the market open. The open presents an ideal example of new information that has piled up from the previous day, overnight and pre-open, and is ready to be factored in straight from the open. Trading interest increases, order sizes increase, and as a result the common observation is that intraday volatility is highest at the open; the same goes for transaction volume. Once the market processes the info - an hour, hour and a half after the open - the factoring is done, the new value is found, and volatility dies down towards lunch, where typically there is little new information to process, and picks up again towards the close, where information content grows again and those who did not participate in the morning factoring do so at the close.

Thus volatility is directly proportional to the current new-information intake. The more information coming into the market, the higher volatility goes; once the information is factored and no new info is coming in, volatility dies down, fades.

The best way to visualize the volatility increase and decrease cycle is by drawing a horizontal figure eight: 8. Half of the eight (one zig-zag, not the circle part) is the increase cycle and the other half is the decrease (see gif). The point of symmetry of the 8, in the middle, corresponds to the mean volatility; the rightmost point corresponds to the highest outlier and the leftmost to the lowest volatility value. Immediately you can see that to predict volatility properly you not only need to know which part of the 8 you are in - below or above the mean - but also what cycle you are in, increase or decrease. You can be below the mean, but if you assume a decrease cycle (no new information, info fading) whereas it's actually an increase cycle (new information), you will be wrong in your forecast.

Math-wise, you need at least two terms in your volatility model: one to take care of the mean reversion and the other to take care of the increase cycle. (Since log volatility is assumed to be normal, there is symmetry between the 4 parts of the 8, i.e. below-mean inc/dec and above-mean inc/dec.) Take a look at the GARCH regression model (I substituted sigma with V and removed the squares for simplicity):

Vn = c + a*Rn-1 + b*Vn-1

Can you identify whether there are mean reversion terms and cycle terms, which ones they are, and most importantly whether they are proper?
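A minimal sketch of that simplified recursion in Python. The coefficients and the toy return series are illustrative guesses, not fitted values, and Rn-1 is read here as the size of the last innovation, so the code uses |R| (real GARCH works with squared terms and fits the parameters by maximum likelihood):

    def volatility_forecast(returns, c=0.0005, a=0.10, b=0.85, v0=0.01):
        """Iterate the simplified recursion Vn = c + a*|Rn-1| + b*Vn-1."""
        v = v0
        forecasts = []
        for r in returns:
            v = c + a * abs(r) + b * v    # |R| used as the innovation-size proxy (squares removed)
            forecasts.append(v)
        return forecasts

    # toy example: a calm stretch, a shock, then calm again
    toy_returns = [0.002, -0.001, 0.003, -0.025, 0.030, -0.020, 0.004, -0.002, 0.001]
    for r, v in zip(toy_returns, volatility_forecast(toy_returns)):
        print(f"return {r:+.3f} -> next-bar volatility forecast {v:.4f}")

With b large relative to a, the forecast decays slowly after the shock, which is the persistence the previous post describes.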
What a crock!!! Here are a few of the many posts in which I pointed out my methodology TO YOU, which was basically to use Spydertrader's coding of your garbage: http://www.elitetrader.com/vb/showthread.php?s=&postid=2496544&#post2496544 http://www.elitetrader.com/vb/showthread.php?s=&postid=2473857&#post2473857 http://www.elitetrader.com/vb/showthread.php?s=&postid=2470470&#post2470470 http://www.elitetrader.com/vb/showthread.php?s=&postid=2461451&#post2461451 http://www.elitetrader.com/vb/showthread.php?s=&postid=2460982&#post2460982 Your Sharpe ratio claim is about as delusional as your claim that you routinely make 3X daily range in ES. Funny how you make outrageous boasts like that, yet come in last in a trading contest with -24%.
Well, c + a*Rn-1 represents the mean reversion term and b*Vn-1 is the attempt at representing the cycle. In the typical GARCH model the mean return is assumed to be 0 and c is very close to zero, right? So c + a*Rn-1 typically comes from c + a*(Rn-1 - m), where m = the mean return, approximately zero. This part tries to take care of the "reversion" or fading cycle. Next is b*Vn-1, which is an attempt to identify the increase/decrease cycle, where b is typically large compared to a, representing the persistence factor; Vn-1 in this case is the volatility estimate and Rn-1 is the innovation.

If we analyze the model more closely, we can see that the parameters are symmetric for the volatility increase and decrease cycles: this is a "dumb" fading model, always anticipating mean reversion, since a + b < 1. Hence the observation that GARCH only predicts well in the fading cycle - GARCH can't predict volatility increases well, but does well on the volatility decay side. Sure, it is hard to predict new information, or to predict the end of the current info cycle and assume fading has started. You kinda understand what I mean?

So, where I am going with this is that a true volatility model should:

- attempt to separate the volatility cycle into either increase or decrease based on whatever technique - MA, cycle id, momentum, etc. - something that tells the model that you are most likely in a volatility increase OR decrease cycle;
- then, on top of that, identify where you are relative to the mean volatility - are you under or are you over? This is important because if you are well over, you are likely to revert back, or are reverting back already; the probability of reversion is greater at the extremes of the distribution (not in the roulette case).

The best model would then be a two (four) factor model, enabled depending on whether you are in the increase or decrease volatility cycle. Then you can regress/estimate the two models separately for mean reversion and increase/decrease. The volatility distribution must then be conditioned into two separate distributions - either under/over the mean, or increase/decrease - for ease of parameter estimation. So you have two or four models with 3 parameters each, and you enable model 1 or 2 (3, 4) based on whether you are in the increase or decrease cycle. (As you might infer, the kink is in the cycle-id technique.) Each model has the same number of terms and parameters and is estimated the same way, but from a different conditional probability density derived from the joint volatility density. The general model should then be (see the code sketch below for one way to wire this up):

1. Vn = a1 + b1*Xn-1 + c1*Yn-1 + d1*et
2. Vn = a2 + b2*Xn-1 + c2*Yn-1 + d2*et

with 3 parameters to estimate, b, c, d (a could be estimated as the mean of the conditional distribution of Vn), where:

Xn-1 - reversion explanatory variable
Yn-1 - cycle explanatory variable
et - noise

Summary:

- GARCH is not so good and only catches half of the action, the fading cycle.
- Current computing power can do the above; I think it's doable.
- As computing power improves, model complexity will increase, and the accuracy should increase too.
- Some papers now confirm that two-factor models (they unknowingly point to increase and decrease cycles) are advantageous, i.e. GARCH(2,2) will do better than (1,1), but not by a whole lot, since the parameters are estimated from the same density and the (2,2) model is not really structured as a two-factor model.

That aside, I am currently finishing up a one-factor model where I just added the cycle term. Since for my log range proxy the cycles are symmetrical, I think I can make do with a one-factor model with 2 explanatory variables, one for the cycle and one for reversion.
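One way the two-model structure above could be wired up in code (Python/numpy). The plain least-squares fit and the choice of explanatory variables are stand-ins for whatever reversion proxy, cycle-id technique and estimator one actually uses:

    import numpy as np

    def fit_two_regime_model(V, X, Y, cycle_up):
        """Fit V[n] = a + b*X[n-1] + c*Y[n-1] separately for 'increase' and 'decrease' cycles.

        V        : volatility proxy series
        X        : reversion explanatory variable (e.g. distance from mean volatility)
        Y        : cycle explanatory variable (e.g. recent volatility momentum)
        cycle_up : booleans, True where the cycle-id technique says 'increase cycle'
        """
        V, X, Y = (np.asarray(s, dtype=float) for s in (V, X, Y))
        regime = np.asarray(cycle_up, dtype=bool)[:-1]        # regime known at bar n-1
        target = V[1:]                                        # V[n]
        design = np.column_stack([np.ones(len(target)), X[:-1], Y[:-1]])
        params = {}
        for name, mask in (("increase", regime), ("decrease", ~regime)):
            coef, *_ = np.linalg.lstsq(design[mask], target[mask], rcond=None)
            params[name] = coef                               # (a, b, c) for this regime
        return params

    def predict_next(params, x_last, y_last, cycle_up_now):
        """One-step-ahead forecast using the model for the regime we believe we are in."""
        a, b, c = params["increase" if cycle_up_now else "decrease"]
        return a + b * x_last + c * y_last

The point of the structure is not the estimator but the conditioning: each regime gets its own parameters, estimated only from the bars the cycle-id technique assigns to it.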
For the cycle-id proxy I took just the 1-bar volatility-proxy momentum (the simplest): if on the last bar the volatility proxy went up, I assume it's an up cycle. Simple, but better cycle-id techniques are guaranteed to improve the accuracy. It is possible to use Kalman filters, ARIMA, kernel regression, FFT, wavelets, MESA, or any technique that will tell you whether current volatility is in an up or down cycle (a couple of minimal examples follow at the end of this post).

So LTCM, not having the cycle-id term in their model, could not identify the volatility increase cycle, and as the crisis was spreading all over the world they were reverting where they should have been waiting for the reversion to start. It's like catching the bottom/top (the falling knife), except catching the top in volatility - and they did not catch it in time. They blew up as the further increase in volatility took them out of the trading business; dynamic hedging fell apart as the model predicted reversion where the reality was the snowball. If they had properly identified even the smallest increase cycle, they would have waited for the top in volatility and then taken the proper positions. Funny that catching tops and bottoms is a typical newbie trader error - we all know it, right? Pretty neat story, and there is math to learn from it for sure. Do not catch tops and bottoms; instead wait for the move to get underway, then jump on it. Likewise, do not anticipate a new volatility cycle; wait for it to start first, then act in continuation. But I don't blame them - prediction is a hard business... measurement is easier.

In roulette there is no volatility, no cycles, no reversion - just probabilities, expectations and runs.

bilo.

PS. The end result of that rocket-science risk model is simply an adaptive entry/exit technique for a trading system, where the risk on a given trade is computed adaptively based on the above considerations. The complexity of the model is on the level of GARCH, but structurally it is much better than GARCH. The dll might be available for a fee, as this model is half the trading system itself. It's funny that the result sounds simple but the road to it is so complex. Math pays. Math's the key (in systematic trading). See gif. And finally, no matter what the thread is, you always end up discussing what YOU are interested in. Ain't it so?
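Two illustrative cycle-id proxies in code (Python): the 1-bar momentum described above, and a slightly smoother EMA-crossover variant (the 3/10 spans are arbitrary assumptions):

    def one_bar_cycle_id(vol):
        """Simplest proxy: last bar's change in the volatility proxy decides the cycle."""
        return [False] + [vol[i] > vol[i - 1] for i in range(1, len(vol))]

    def ema(series, span):
        alpha, out = 2.0 / (span + 1), [series[0]]
        for x in series[1:]:
            out.append(alpha * x + (1 - alpha) * out[-1])
        return out

    def ema_cycle_id(vol, fast=3, slow=10):
        """Smoother proxy: fast EMA of the volatility proxy above its slow EMA => increase cycle."""
        f, s = ema(vol, fast), ema(vol, slow)
        return [fi > si for fi, si in zip(f, s)]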
==========================

Yes. Let's say you bet on black, and you want to know the chances of winning at least once in the next 5 bets. If there is a 50-50 chance of getting either color (no greens on this wheel), then the chances of getting 5 REDS in a row, as we already discussed, are (1/2)^5 or 3.125%. Winning on at least one black means you DIDN'T get 5 reds in a row, so the chances of winning at least once are the complement of 5 reds in a row: 1 - .03125, or 96.875%.

Gary

=========================

The chances of getting 5 blacks (or 5 heads/tails, if flipping a coin) in a row are indeed 0.5^5. However, the chance of getting ONE black "in a row" is 50%, regardless of whether the previous 4 were black or not. In other words, if at any point you asked "what are the chances of the NEXT 5 spins being black?" the answer would be 0.5^5. But if you get 4 blacks in a row and ask "what are the chances of the next 1 spin being black?" the answer is 50%. That's why this betting strategy isn't valid: you're trying to incorporate the probability of the previous 4 spins into the probability of the next spin. The previous spins have NO BEARING on the next spin. Each is a 50/50 chance, regardless of the spins before it. (A quick simulation of both numbers follows at the end of this exchange.)

(BTW, to be exact, it isn't quite a 50% chance. The roulette wheel has some green slots, so the chance of red or black is something like 48%. That's one of the ways the "house" keeps the odds in its favor.)

The problem with applying this logic to trading systems is that you assume each trade is totally independent of the previous ones. Many people say that's a valid assumption - e.g. Monte Carlo testing is based on that assumption - but in my opinion that's not always true. The market can get into a particular state and stay there for a while, and your system's response to the market will act accordingly. E.g. look at the market for the last few weeks: it's been stuck in a narrow range for quite a long time. A system that tries to grab breakouts from a range might work quite well in normal conditions, yet hand you loss after loss when the market stays in a range like this. For another example, simple "buy the dip" systems work great as long as the market is in an uptrend, but fail if the market heads down. You can accurately predict that the system will have a much higher percentage of wins or losses as long as the market stays in the same mode. So I would claim the system's win/loss behavior is NOT independent from trade to trade.

Gary

========================

We know that flipping a coin will create a "tendency" over time to revert to a mean. We know that the stock market can "overshoot" more than flipping a coin before the same reversion over the same time. My question is: what is the "factor", probability or otherwise, that allows one to "bet" on this reversion over time, and how is it measured? Please correct me if wrong, but is this not what Mark's system is about? Not the odds of the "next" flip, but the tendency of the flips, once skewed to one side, to revert?

Best regards,
Gene Pope

===========================

Gene, there is a measure which allows you to bet on reversion of amplitude over time; it is called "volatility". Frequently used by option traders. In fact, the mean reversion tendency of volatility is what makes many spreading strategies feasible in the first place.

Best regards,
Michael Suesserott
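Coming back to Gary's two numbers for a moment: both are easy to check by brute force. A minimal sketch (Python), treating each spin as a clean 50/50 and ignoring the green slots:

    import random

    rng = random.Random(42)
    N = 500_000
    spins = [rng.choice("RB") for _ in range(N)]

    # P(next 5 spins are all black), looked at before any of them have happened
    five_blacks = sum(
        all(c == "B" for c in spins[i:i + 5]) for i in range(N - 4)
    ) / (N - 4)

    # P(next spin is black | the previous 4 were black)
    after_4_blacks = [
        spins[i + 4] for i in range(N - 4) if all(c == "B" for c in spins[i:i + 4])
    ]
    cond = sum(c == "B" for c in after_4_blacks) / len(after_4_blacks)

    print(f"P(5 blacks in a row)            ~ {five_blacks:.4f}  (exact: {0.5**5:.5f})")
    print(f"P(black | 4 blacks just landed) ~ {cond:.3f}  (exact: 0.5)")

The first number comes out near 3%, the second near 50% - they answer two different questions about the same sequence.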
=========================

Agreed. In fact I'd say never true. The market is made up of individuals who have memories; what happens today affects what happens tomorrow. Today's market is not independent of yesterday's market. People are also irrational and inconsistent. Therefore, while the market may remember yesterday, how it reacts today is irrational and inconsistent.

Monte Carlo testing provides some indication of a system's historical consistency. In fact, Monte Carlo simulations are about the only way to take the bias toward success out of testing (a sketch of what such a simulation looks like follows below). But that is all that MC simulation does. When you start trading a system, all bets are off, so to speak. A system with good historical results will not work in reality unless it is based on exploiting a market characteristic. If it doesn't, it's nothing more than a mathematical oddity. That is why Mark's Oddball has such power: it exploits the market characteristic that markets, once in motion, continue in a direction until they overshoot, and then move back the other way.

Although Mark has indicated to me that stops are not necessary, in Oddball's case at least, we have different views of life, and I personally cannot accept that. Stops allow me to reconcile the irrational and inconsistent nature of the market with my need to be rational and consistent.

Regards,
Mike

=============

http://www.purebytes.com/cgi-local/swish/swish-cgi.pl - search for "puzzling probability and roulette a la Mark Brown"
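On the Monte Carlo testing mentioned above: one common form of it is to reshuffle a system's historical trade P&Ls and look at the spread of outcomes, which bakes in exactly the trade-independence assumption being questioned in this exchange. A minimal sketch (Python; the trade list is made up purely for illustration):

    import random

    def max_drawdown(pnl_sequence):
        """Worst peak-to-trough drop of the cumulative P&L curve."""
        equity = peak = dd = 0.0
        for pnl in pnl_sequence:
            equity += pnl
            peak = max(peak, equity)
            dd = max(dd, peak - equity)
        return dd

    def monte_carlo_drawdowns(trades, runs=10_000, seed=0):
        """Reshuffle the historical trade P&Ls many times and collect the drawdowns.
        This treats trades as interchangeable, i.e. independent of their order."""
        rng = random.Random(seed)
        out = []
        for _ in range(runs):
            sample = trades[:]
            rng.shuffle(sample)
            out.append(max_drawdown(sample))
        return sorted(out)

    # illustrative trade list, not real results
    trades = [120, -80, 60, -40, 200, -150, 90, -30, 75, -60, 110, -90, 50, -45, 130]
    dds = monte_carlo_drawdowns(trades)
    print("median simulated drawdown:", dds[len(dds) // 2])
    print("95th percentile drawdown :", dds[int(0.95 * len(dds))])

If the market (and hence the system) really does stay in modes for a while, the shuffled drawdown distribution will understate what a bad stretch can look like - which is the objection raised above.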
For you, only you, and everyone else that is addicted to the lognormal historical returns of the SP500 - which have absolutely nothing to do with the probability of outcomes and returns of this coin-tossing bum's losing, near-breakeven trading system - you can apply it to crude oil or gold. (After all, my coins are not lognormal, they are normal coins, and the bum tossing them is quite normal as well. I might be abnormal, but that's another issue.) Or better yet, trade crude oil on Monday, gold on Tuesday, soybeans on Wednesday, Aussie dollar on Thursday, and take Friday off to rest, since you'll probably be exhausted from watching all those different charts. For everyone else that likes the liquidity of the SP500, let's stick to the ES for now.