In flipping coins, is the probability of obtaining the particular sequence tail tail ... tail 100 times lower than that of any other sequence of 100 flips?<p> Basically, a probability is:<p> number of favorable outcomes<br> ------------------------------------<br> number of possible outcomes<br> <p> Flipping one coin, the probability of obtaining tail is 1/2.<br> Flipping two coins, the probability of obtaining tail tail is (1/2)*(1/2) = 1/(2^2).<br> Flipping 100 coins, the probability of obtaining tail tail ... 100 times is 1/(2^100).<p> By the same reasoning, the probability of obtaining tail head tail head ... tail head 100 times is exactly the same as that of obtaining tail tail ... 100 times; that is to say, probability does not take the order of the sequence into account. <br>So the concept of probability, which is purely <b>mathematical</b>, is not enough to define the concept of <b>randomness</b>. In 1965 - after the concept of <b>Entropy</b> was introduced by <b>Shannon</b> - <b>Kolmogorov</b> defined the concept of <b>algorithmic complexity</b> for <u>finite strings of bits</u>, so that a <b>random string</b> is <u>"incompressible"</u> in the sense that any <u>program</u> generating it would have to <u>grow as the series grows longer</u>. So Kolmogorov, the father of the <b>probability axioms</b> of 1933, recognised their limit himself: the use of these <u>axioms</u> relies only on <b>logical correctness</b> - which is <u>purely mathematical</u> - not on their <u>relevance to physical phenomena</u>. <br><b>Walter Shewhart</b>, the father of <b>Quality Control</b>, also remarked - well before <b>Kolmogorov</b> - that for use in the <u>real world</u> it was necessary to define a concept like <b>randomness</b> not only with <b>probability axioms</b> but also <u>operationally</u>, by comparison with a <b>physical operation</b> such as flipping coins.
<b>Kolmogorov</b> somehow had the same idea as <b>Shewhart</b>, except that he wanted to use a <b>computer</b> or <b>algorithm</b>, which is indeed a <b>physical operation or device</b>. Probability FAQs (under construction): http://www.econometric-wave.com/faqs/probability/home.html.html
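Both points above can be illustrated with a short Python sketch: the equal probability of any two specific flip sequences, and Kolmogorov's compressibility criterion, here approximated with an ordinary compressor (zlib) since true Kolmogorov complexity is uncomputable. The strings and lengths below are illustrative choices, not anything from the original post.

```python
import random
import zlib
from fractions import Fraction

# Any *specific* sequence of 100 fair flips has the same probability:
# each flip contributes a factor of 1/2, whatever its outcome.
p_all_tails = Fraction(1, 2) ** 100    # tail tail ... tail
p_alternating = Fraction(1, 2) ** 100  # tail head tail head ...
assert p_all_tails == p_alternating == Fraction(1, 2 ** 100)

# Kolmogorov's compressibility idea, approximated with zlib:
# a highly regular string compresses well, a random-looking one does not.
regular = b"T" * 100_000  # maximally regular: "tail" 100,000 times
random.seed(0)
random_like = bytes(random.getrandbits(8) for _ in range(100_000))

print(len(zlib.compress(regular)))      # tiny: highly compressible
print(len(zlib.compress(random_like)))  # near 100_000: incompressible
```

The two sequences are exactly equiprobable, yet only one of them is "random" in Kolmogorov's sense: the compressed size is a practical proxy for the length of the shortest program that reproduces the string.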

harrytrader wrote: The first sentence seems non sequitur to me. The flaws of Kolmogorov-Chaitin complexity (where randomness is equated with maximal complexity) can be corrected in a complexity definition in a mathematically defined physical context (where randomness is properly equated with zero complexity): http://www.santafe.edu/sfi/education/csss/files02/adami3.pdf Question: would Markov Chain Models in High Frequency Finance be considered a "tech" (non-EMH) approach or a "quant" (EMH) approach?

I didn't say that Kolmogorov solved the problem; I said that he was conscious of the problem. I referred especially to him since he is the father of the probability axioms - I could refer to others as well. Shewhart said, long before anyone else, that the concept of randomness was physical and that one has to choose the definition suitable to the context of use. As he was not only a statistician but also a practitioner, it is not so astonishing that he devised this long before the pure theoreticians attacked the problem, since he needed the concept for his field of Quality Control in industry. His practical point of view is that the mathematician cannot decide alone in real-world problems, only in purely mathematical ones, so the expert knowledge of the domain stays predominant. That's why, when I entered the world of trading, although I was biased towards EMH, my point of view changed: I trust the practitioners (the TAs) more than the pure theoreticians (whom I now call EMH propagandists). But I had to check for myself that the TAs were not like astrologers. With my model I have done such verification, and at the same time I have improved traditional TA with a more rational approach and more precision (although I still have room for improvement, and still need to make a definitive link with the more esoteric TA like Elliott and Gann).

I have often pondered this and thought that the longer the sample, the more likely 50/50 would prove out... but there are no identifiable cycles... so this conclusion that Harry is commenting on is a no-brainer. I do not know all the formal studies and the doctors involved, so please excuse my contribution here. I view the market like this: take a rubber band and cut it so that you have a stretchable string with a beginning and an end. Start randomly and place the beginning of the band there. Assuming the band is TEMPORARILY fixed at that point, the price rises and falls from that point, stretching the band. The stretch of the band could identify the power and velocity of the volume/movement. You can adjust the initial starting point to find center by using a sort of support-and-resistance study. Now, when there is a strong trend, the initial point of the band must be repositioned to allow for the statistical CYCLE of the movement/stretch of rising and falling. I have just described wave analysis. Michael B.

harrytrader wrote: It seems to me that academics are publishing as many non-EMH models as TA practitioners practice (just browse Quantitative Finance http://www.iop.org/EJ/toc/1469-7688/1/1 e.g. the EMH-violating ABS stochastic process in "Financial markets as nonlinear adaptive evolutionary systems") I have seen it compared to entrail reading. Which is why they say being a TA takes a lot of guts.

Harrytrader is right. Empirical study of the markets quickly discredits all of the EM hypotheses: weak, semi-strong and strong. It is rare, if not impossible, to find a normal distribution of prices. Actually there is one exception. If the entire "strip" of contracts in an energy market such as natural gas is consolidated into a single sample, the resulting distribution does come close to normal, and the sum of all contracts tends to revert to the mean. The EMH assumption that all relevant information is available to all market players is, to put it kindly, a serious mistake. On the other hand, no one who examines the markets statistically can ignore their chaotic Brownian characteristics. So there's plenty of randomness, but it's "pink" noise rather than "white" noise.
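The white-vs-colored distinction can be checked numerically. A minimal pure-Python sketch, using an integrated random walk as the stand-in for correlated "colored" noise (strictly a walk is brown/red noise, 1/f², even more correlated than pink 1/f — the point here is just uncorrelated vs. correlated):

```python
import random

def lag1_autocorr(xs):
    """Sample autocorrelation of a series at lag 1."""
    n = len(xs)
    mean = sum(xs) / n
    var = sum((x - mean) ** 2 for x in xs)
    cov = sum((xs[i] - mean) * (xs[i + 1] - mean) for i in range(n - 1))
    return cov / var

random.seed(42)
# "White" noise: independent draws, no memory between successive values.
white = [random.gauss(0, 1) for _ in range(10_000)]

# Integrating white noise gives a Brownian-style walk: strongly
# autocorrelated "colored" noise.
walk, s = [], 0.0
for w in white:
    s += w
    walk.append(s)

print(round(lag1_autocorr(white), 3))  # near 0: no memory
print(round(lag1_autocorr(walk), 3))   # near 1: strong memory
```

The lag-1 autocorrelation is the crudest possible spectral test, but it already separates the two regimes cleanly: white noise forgets its past immediately, while the Brownian path carries its whole history.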

valleyvintner wrote: As noted before, in terms of random processes, the issue of randomness is: is a market a Martingale (EMH) or a multidimensional Markov (non-EMH, the type a TA approach would exploit) process. Whether EMH is flawed or not has nothing to do with it being based on Probability.

Agreed. And again the most cursory analysis disproves the notion of a Martingale (EMH) process. My point about Brownian noise embodies the same view - i.e. that the market is a Markov process. An important corollary of this is that the markets are also fractal, self-similar in different time frames. On a purely empirical level, the fact that one can reliably forecast the energy markets with 70-90% accuracy makes the EMH a quaint notion irrespective of its theoretical basis.
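The self-similarity claim has a simple quantitative signature: for a Brownian-type walk, the variance of k-step increments grows linearly with k, so the path looks statistically the same at every zoom level once rescaled by sqrt(k). A minimal pure-Python check on a simulated walk (not market data):

```python
import random

random.seed(7)
# Symmetric +/-1 walk with unit-variance steps.
steps = [random.choice((-1.0, 1.0)) for _ in range(100_000)]
walk, s = [], 0.0
for step in steps:
    s += step
    walk.append(s)

def increment_variance(xs, k):
    """Variance of non-overlapping k-step increments x[i+k] - x[i]."""
    incs = [xs[i + k] - xs[i] for i in range(0, len(xs) - k, k)]
    m = sum(incs) / len(incs)
    return sum((d - m) ** 2 for d in incs) / len(incs)

# Brownian scaling: Var(k-step increment) ~ k, so the ratio stays near 1.
for k in (1, 4, 16, 64):
    print(k, round(increment_variance(walk, k) / k, 2))
```

The flat ratio across scales is exactly the "same statistics in different time frames" property; a trending or mean-reverting series would show the ratio growing or shrinking with k instead.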

Answered already: http://www.elitetrader.com/vb/editpost.php?s=&action=editpost&postid=402906 Martingale is not incompatible with the Markov property, the trivial example being that a Brownian motion satisfies both. So it is not a criterion for distinguishing EMH from TA. The very reason why the martingale was introduced into financial theory by Samuelson - who received the Nobel Prize for that - is that it imposes no autocorrelation (independence) condition on the residuals, a condition that was too restrictive in RMH (Random Market Hypothesis).
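The compatibility point has a one-line example: a symmetric ±1 random walk is both Markov (the next state depends only on the current one) and a martingale (E[X_{t+1} | X_t] = X_t). A minimal empirical sketch of the martingale property:

```python
import random
from statistics import mean

random.seed(1)

def next_state(x):
    """One step of a symmetric random walk: X_{t+1} = X_t + e, e = +/-1.
    Markov, since the step depends only on the current state x."""
    return x + random.choice((-1, 1))

# Martingale check: the conditional expectation of the next state,
# estimated by averaging many one-step samples, equals the current state.
x0 = 10
samples = [next_state(x0) for _ in range(100_000)]
print(round(mean(samples), 2))  # near x0 = 10
```

This is the discrete analogue of the Brownian-motion example in the post: neither property implies or excludes the other, so "Markov vs. martingale" is a genuine two-axis classification, not a dichotomy.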