This is from the paper by Sullivan, Timmermann and White which, as you know, provides the statistical framework for an important aspect of the paper cited by rc a few posts ago. Tell me, rc, do you think you could summarize the essence of this statistical nugget so that it will be possible to better appreciate the rigor of your proposal? Here is the link to the website where I secured these data, which were first presented as a discussion paper at UCSD in December 1997: http://data-snooping.martinsewell.com/

Appendix 2: Reality Check Technical Results

For the convenience of the reader, we replicate the main results of White (1997) and briefly interpret these. In what follows, the notation corresponds to that of the text unless otherwise noted. Let P₀ denote the probability measure governing the behavior of the time series {Z_t}. Also, ⇒ denotes convergence in distribution, while p→ denotes convergence in probability.

Proposition 2.1: Suppose that P^{1/2}( f̄ − E(f) ) ⇒ N(0, Ω) for Ω positive definite, and suppose that E(f_1) > E(f_k) for all k = 2, ..., l. Then P₀[ f̄_1 > f̄_k for all k = 2, ..., l ] → 1 as T → ∞. If in addition E(f_1) > 0, then for any 0 ≤ c < E(f_1), P₀[ f̄_1 > c ] → 1 as T → ∞.

The first conclusion guarantees that the best model eventually has the best estimated performance relative to the benchmark, with probability approaching certainty. The second conclusion ensures that if the best model beats the benchmark, then this is eventually revealed by a positive estimated performance relative to the benchmark. The next result provides the basis for hypothesis tests of the null of no predictive superiority over the benchmark, based on the predictive model selection criterion.

Proposition 2.2: Suppose that P^{1/2}( f̄ − E(f) ) ⇒ N(0, Ω) for Ω positive definite. Then

max_{k=1,...,l} P^{1/2}( f̄_k − E(f_k) ) ⇒ V ≡ max_{k=1,...,l} Z_k and
min_{k=1,...,l} P^{1/2}( f̄_k − E(f_k) ) ⇒ W ≡ min_{k=1,...,l} Z_k,

where Z is an l × 1 vector with components Z_k, k = 1, ..., l, distributed as N(0, Ω).

Corollary 2.4: Under the conditions of Theorem 2.3 of White (1997), we have

ρ( L[ V̄* | Z_1, ..., Z_{T+τ} ], L[ max_{k=1,...,l} P^{1/2}( f̄_k − E(f_k) ) ] ) p→ 0 and
ρ( L[ W̄* | Z_1, ..., Z_{T+τ} ], L[ min_{k=1,...,l} P^{1/2}( f̄_k − E(f_k) ) ] ) p→ 0,

where W̄* ≡ min_{k=1,...,l} P^{1/2}( f̄_k* − f̄_k ). L denotes the probability law of the indicated random variable, and ρ is any metric on the space of probability laws.

Thus, by comparing V̄ to the quantiles of a large sample of realizations of V̄*, we can compute a P-value appropriate for testing H₀: max_{k=1,...,l} E(f_k) ≤ 0, that is, that the best model has no predictive superiority relative to the benchmark. White (1997) calls this the "Reality Check P-value." The level of the test can be driven to zero at the same time that the power approaches one according to the next result, as the test statistic diverges to infinity at a rate P^{1/2} under the alternative.

Proposition 2.5: Suppose that conditions A.1(a) or A.1(b) of White's (1997) Appendix hold, and suppose that E(f_1) > 0 and E(f_1) > E(f_k) for all k = 2, ..., l. Then for any 0 < c < E(f_1), P₀[ V̄ > P^{1/2} c ] → 1 as T → ∞.

Corollary 5.1: Let g: U → ℜ (U ⊂ ℜ^m) be continuously differentiable such that the Jacobian of g, Dg, has full row rank 1 at E[h_k] ∈ U, k = 0, ..., l. Suppose that the assumptions of White (1997, Corollary 5.1) hold.
If H = 0 (the Jacobian of h) or (P/R) log log R → 0, then for f̄* computed using P&R's stationary bootstrap,

ρ( L[ P^{1/2}( f̄* − f̄ ) | Z_1, ..., Z_{T+τ} ], L[ P^{1/2}( f̄ − μ ) ] ) p→ 0,

where ρ and L[·] are as previously defined. Maintaining the original definitions of V̄* and W̄* in terms of f̄_k and f̄_k*, we have

Corollary 5.2: Under the conditions of Corollary 5.1, we have

ρ( L[ V̄* | Z_1, ..., Z_{T+τ} ], L[ max_{k=1,...,l} P^{1/2}( f̄_k − μ_k ) ] ) p→ 0 and
ρ( L[ W̄* | Z_1, ..., Z_{T+τ} ], L[ min_{k=1,...,l} P^{1/2}( f̄_k − μ_k ) ] ) p→ 0.

The test is performed by imposing the element of the null least favorable to the alternative, i.e., μ_k = 0, k = 1, ..., l; thus the Reality Check P-value is obtained by comparing V̄ to the Reality Check order statistics, obtained as described in Section II. As before, the test statistic diverges to infinity at the rate P^{1/2} under the alternative.

Proposition 5.3: Suppose the conditions of Corollary 5.1 hold, and suppose that E(f_1) > 0 and E(f_1) > E(f_k) for all k = 2, ..., l. Then for any 0 < c < E(f_1), P₀[ V̄ > P^{1/2} c ] → 1 as T → ∞.

Note that it is reasonable to expect the conditions required for the above results to hold for the data we are examining. As pointed out by BLL, while stock prices do not seem to be drawn from a stationary distribution, the compounded daily returns (log-differenced prices) can plausibly be assumed to satisfy the stationarity and dependence conditions sufficient for the bootstrap to yield valid results. It is possible to imagine time series for returns with highly persistent dependencies in the higher-order moments that might violate the mixing conditions of White (1997), but the standard models for stock returns do not exhibit such persistence.

TIA
lj

PS: When you have enlightened the thread as to exactly what these data mean, then perhaps we can discuss whether or not it was reasonable to apply this statistical test to the set of initial conditions outlined by the authors of your referenced paper - which set of initial conditions is strikingly similar to those of the paper of STW.
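For anyone who wants the mechanics rather than the theorems, here is a minimal sketch of the procedure the appendix describes: generate stationary-bootstrap resamples (Politis and Romano), recompute each rule's mean performance f̄_k* on every resample, and compare V̄ = max_k P^{1/2} f̄_k against the bootstrap order statistics of V̄* = max_k P^{1/2}( f̄_k* − f̄_k ). This is only an illustration, not STW's actual code: the function names, the default mean block length, and the (P × l) matrix f of per-period performance measures relative to the benchmark are all placeholders I have assumed.

    import numpy as np

    def stationary_bootstrap_indices(n, mean_block, rng):
        # One resample of indices 0..n-1 via Politis & Romano's stationary
        # bootstrap: block lengths are geometric with mean `mean_block`,
        # and blocks wrap around the sample circularly.
        q = 1.0 / mean_block              # probability of starting a new block
        idx = np.empty(n, dtype=int)
        idx[0] = rng.integers(n)
        for t in range(1, n):
            if rng.random() < q:
                idx[t] = rng.integers(n)  # start a new block at a random point
            else:
                idx[t] = (idx[t - 1] + 1) % n  # continue the current block
        return idx

    def reality_check_pvalue(f, n_boot=500, mean_block=10, seed=0):
        # f: (P, l) array; f[t, k] is rule k's performance relative to the
        # benchmark in prediction period t. Values here are assumptions
        # for illustration, not the paper's choices.
        rng = np.random.default_rng(seed)
        P, l = f.shape
        f_bar = f.mean(axis=0)                    # f̄_k
        v_bar = np.sqrt(P) * f_bar.max()          # V̄ = max_k P^{1/2} f̄_k
        v_star = np.empty(n_boot)
        for b in range(n_boot):
            idx = stationary_bootstrap_indices(P, mean_block, rng)
            f_star = f[idx].mean(axis=0)          # f̄*_k on this resample
            # V̄*_b = max_k P^{1/2} (f̄*_k − f̄_k); null imposed as μ_k = 0
            v_star[b] = np.sqrt(P) * (f_star - f_bar).max()
        # Reality Check P-value: fraction of bootstrap draws exceeding V̄
        return v_bar, np.mean(v_star > v_bar)

A small P-value rejects H₀: max_{k=1,...,l} E(f_k) ≤ 0 - i.e., the best rule's apparent outperformance survives the correction for having searched over all l rules, which is exactly the data-snooping issue being argued in this thread.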
I'll repeat again... Get off your lazy butt... go find some profitable traders with verifiable proof via brokerage statements. Then do your analysis on the type of TA the profitable traders are using. It's that simple, and it explains why you guys continue excluding these types of traders. Do you want to know what works, or do you want to continue testing what obviously doesn't work??? It sure ain't the stuff canned by the charting (data) vendors that's easily testable.

I think most traders that have been profitable for many years are probably using a rule-based methodology and doing much more with their TA than that junk you guys are complaining about. Can it be computer-coded so that it includes their trade management rules (the stuff after entry that's absolutely critical to the success of the method) along with being rule-based? That may be up to the skills of the programmer. I keep hearing from system designers that anything can be coded for testing. Whether anything really can be coded is something that can be argued among system designers themselves.

By the way, how long does someone need to be profitable for you to not think it was LUCKY, based upon your earlier commentary in this thread about being LUCKY???

Mark
Quote from ljyoung:
This is from the paper by Sullivan, Timmermann and White... PS: When you have enlightened the thread as to exactly what these data mean, then perhaps we can discuss whether or not it was reasonable to apply this statistical test to the set of initial conditions outlined by the authors of your referenced paper - which set of initial conditions is strikingly similar to those of the paper of STW.

It is not my job to "enlighten" the board. If you want to get into the gory details, then do so by all means. Knock yourself out. I see no point in what you are trying to do. If someone wants to defend the pro-TA side, they need to find counter-evidence, not try to impress others by cutting and pasting an appendix out of the study.

Studies like this are put out for serious review by other institutions before acceptance. They are usually quite rigorous. They usually also publish limitations, observational methods, possible future research areas, references to other published studies, citations, etc. If you have problems, then publish a counter-study. Contact the author with your concerns and get him to update the study if you feel there are flaws.
:eek: You really don't get it, do you? TA is not a trading system. It never has been and it never will be. TA is a tool that allegedly will provide a trading system with the necessary edge. In order to test that claim, TA at the very least has to be tied to a position-sizing method in order to have a complete trading system to test. This is Trading 101 stuff: trading system = timing strategy (e.g., TA) plus money management (aka position sizing). No wonder most of those TA studies don't amount to squat. They're destroying whatever edge might be present through overtrading.
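To put numbers on the overtrading point, here's a toy simulation (my own made-up figures, not from any of the cited studies): the identical 55%-win-rate signal compounds nicely when 5% of equity is risked per trade and gets wiped out when 50% is risked, even though the "edge" never changes.

    import numpy as np

    def final_equity(risk_frac, n_trades=2000, p_win=0.55, seed=42):
        # Toy example: identical edge (55% wins, symmetric payoff); the
        # only difference is the fraction of equity risked per trade.
        rng = np.random.default_rng(seed)
        wins = rng.random(n_trades) < p_win
        # Multiply equity by (1 + f) on a win, (1 - f) on a loss.
        growth = np.where(wins, 1 + risk_frac, 1 - risk_frac)
        return growth.prod()

    print(final_equity(0.05))   # modest sizing: equity grows many-fold
    print(final_equity(0.50))   # oversized bets: same edge, near-total loss

The reason is that repeated bets compound through the expected log growth, 0.55·ln(1+f) + 0.45·ln(1−f), which is positive for small f but turns negative once f gets large enough. Same signal, same win rate, opposite outcomes.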
The above is a dumb, dumb, dumb post. Although money management/position sizing can cause a profitable system to fail, it is not required to prove that a profitable system works. There is edge or there isn't. A real trader can determine if an edge exists without the other stuff. TA studies amount to shit because they are shit; it has nothing to do with money management. Money management doesn't create edge.
I believe one can actually make money without an edge. However, an edge IS required to make money with any consistency for any length of time - in fact, an ever-changing edge is actually required. Being flexible and able to alter your approach is the real key.

surf
If it can't be coded, you're admitting it's subjective. I have no issues with subjective approaches and believe they can work, but one needs to understand it's subjective - unlike the proflogic types who claim absolute perfection with objective systems and then fail to produce any proof of claims. I'll admit it's not luck if the system can be TAUGHT to others with the same or similar results.

surf
Nothing in my post indicates that money management creates edge. I explicitly said otherwise, in fact. It is inane to say that (poor) "money management/position sizing can cause a profitable system to fail" (true) but then follow it up with "TA studies amount to shit because they are shit, it has nothing to do with money management." TA studies amount to shit because they have lousy position sizing that destroys whatever edge they're trying to test for.