Anyone else struggling with cointegration vectors that break down quickly in out-of-sample backtests?

I'm backtesting a stat arb strategy: basket trading ETFs and some of their underlyings, based on a model that only trades at the close of each trading day. In every one of my tests, the cointegration vector crumbles quickly out-of-sample, and whatever mean-reverting behavior the spread had in-sample evaporates, making the model useless. It's worth noting that in some tests a slightly distorted version of the in-sample mean-reverting spread does emerge out-of-sample, but it doesn't persist for long, and which tests show this seems completely random.

Could the issue be in my selection of the underlying stocks? I've tried selecting purely on how well each underlying cointegrates with the ETF over, say, a three-year period. Doing just this seems to exacerbate the vector breakdowns out-of-sample. I've also experimented with telescoping time-frames: how well does XYZ cointegrate with the ETF over 1 year? 2 years? 3 years? That DID reduce some of the noise and produce a more stable spread out-of-sample, but not to such a degree that I'd feel comfortable trading the model.

I realize out-of-sample performance is never what it is in-sample, but this can't be right. There doesn't seem to be any edge over just guessing or throwing darts at the wall. Any ideas? Thanks in advance.
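
For concreteness, here's roughly the kind of stability check I mean by "telescoping": estimate the hedge ratio and the spread's AR(1) mean-reversion half-life over 1y/2y/3y windows and compare. This is just a sketch on simulated data (the pair, the window lengths, and helper names like `hedge_ratio` / `half_life` are mine, not from any library), not my actual model:

```python
import numpy as np

def hedge_ratio(y, x):
    """OLS slope of y on x (with intercept): the cointegration weight."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

def half_life(spread):
    """AR(1) half-life: regress d(spread) on the lagged, demeaned spread."""
    lagged = spread[:-1] - spread[:-1].mean()
    lam = np.dot(lagged, np.diff(spread)) / np.dot(lagged, lagged)
    return -np.log(2) / lam if lam < 0 else np.inf  # inf = no mean reversion

# Simulated cointegrated pair: random-walk "ETF" plus an OU-style spread.
rng = np.random.default_rng(0)
n = 756                                      # ~3 years of daily closes
x = np.cumsum(rng.normal(0.0, 1.0, n))       # "ETF" price path
spread_true = np.zeros(n)
for t in range(1, n):
    spread_true[t] = 0.9 * spread_true[t - 1] + rng.normal(0.0, 0.5)
y = 1.5 * x + spread_true                    # "underlying" tied to the ETF

for days in (252, 504, 756):                 # telescoping look-back windows
    b = hedge_ratio(y[-days:], x[-days:])
    hl = half_life(y[-days:] - b * x[-days:])
    print(f"{days}d window: hedge ratio {b:.2f}, half-life {hl:.1f} days")
```

On real data the question is whether those three hedge ratios (and half-lives) roughly agree; in my tests they drift enough across windows that the out-of-sample spread stops reverting.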