degenerate cointegrated VAR(p) cases with no AR coefficients

Discussion in 'Strategy Development' started by lolatency, Apr 23, 2009.

  1. Long post, but kind of a simple question:

    I have two processes, x and y. I put them into a multivariate process, looking for a VAR(p), and find that there are no significant autocorrelations at all. Note, though, that x and y are I(1): differencing once results in stationarity.

    x(t) = u1(t) + WN1(t)
    y(t) = u2(t) + WN2(t)

    Where WN = white noise.
    u1 and u2 are random walks, or "stochastic trends".

    If I look for a cointegrating vector, I am looking for some vector [alpha, beta]^T such that:

    alpha * x(t) + beta * y(t) is I(0) -- or stationary.

    So I go back to the original case and subtract the two processes:

    beta*y(t) + alpha*x(t) = (u2(t) + WN2(t)) + (u1(t) + WN1(t))

    Rearrange terms:

    beta*y(t) + alpha*x(t) = (u2(t) + u1(t)) + (WN2(t) + WN1(t))

    For convenience, I say alpha = 1, so:

    beta*y(t) + x(t) = (u2(t) + u1(t)) + (WN2(t) + WN1(t))


    x(t) = (u2(t) + u1(t)) + (WN2(t) + WN1(t)) - beta * y(t)

    Now we have a sum of two random walks, both functions of time, the sum of two white noise processes, minus beta * y(t), where beta is the second component of some mystery cointegration vector.

    Clearly, we have non-stationarity here because there are two RWs with drifts, two white noise processes, and some beta. The drifted random walks account for the stochastic trend.

    But since we know the random walks step in time in such a manner that each step is sampled from the same distribution, ... then:

    x(t) = u2(t) + u1(t) + (WN2(t) + WN1(t)) - beta * y(t)

    x(t-1) = u2(t-1) + u1(t-1) + (WN2(t-1) + WN1(t-1)) - beta * y(t-1)

    Now if I difference x(t) and x(t-1):

    delta[x(t)] = rwstep1 + rwstep2 + net_errorterm - beta * ( delta[y] )

    where rwstep1 and rwstep2 are both stationary processes (the "differenced" stochastic steps), delta[] is the differencing operator, and net_errorterm is just the difference of the error terms.

    If rwstep1, rwstep2, and the net error term are all I(0) stochastic processes, and they are all independent, ... can I lump them together such that:

    delta[x(t)] = stochastic_lump - beta * delta[y],

    and still run around telling people that the beta I've got in this case is not some spurious relation, if, in fact, the t-test on the regression coefficient shows that it is significant?

    I just need some help here with cointegration theory when I have this degenerate case with no AR(p) coefficients.
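    The setup above can be sketched numerically -- a minimal simulation, assuming numpy, with purely hypothetical drift and noise parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Two I(1) series, each a random walk ("stochastic trend")
# plus an extra white-noise shock on top.
u1 = np.cumsum(rng.normal(0.01, 1.0, n))   # random walk with drift
u2 = np.cumsum(rng.normal(0.02, 1.0, n))
x = u1 + rng.normal(0.0, 0.5, n)           # WN1 over u1
y = u2 + rng.normal(0.0, 0.5, n)           # WN2 over u2

# Differencing once leaves stationary series (the "rwstep" terms plus
# the differenced noise), so regress delta[x] on delta[y] by OLS.
dx, dy = np.diff(x), np.diff(y)
beta_hat = np.dot(dy, dx) / np.dot(dy, dy)  # no-intercept OLS slope
print(round(beta_hat, 3))
```

    With these two walks independent, the slope on delta[y] should hover near zero -- which is the point of the significance question above.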
  2. The other thing I was thinking about was the stochastic_lump in the previous post.

    If the two processes are secretly driven by the same random walk, but only scaled differently, we should still have stationarity guaranteed.

    The reality, though, is that the random walk is not the same in the two processes; rather, the two walks are correlated. Now I have a theoretical problem on my hands: it is a spurious regression if the two random walks just happen to be correlated, and I am back to the original problem of trying to prove that the whole relationship isn't spurious.

    Can anyone out there see through my confusion?
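    The two cases -- one shared walk versus two merely correlated walks -- can be sketched as follows (a minimal numpy simulation with hypothetical scalings; the printed ratios will vary with the seed):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 2000
steps = rng.normal(0.0, 1.0, n)
u = np.cumsum(steps)

# Case 1: both series secretly driven by the SAME random walk, scaled.
x1 = 2.0 * u + rng.normal(0.0, 0.5, n)
y1 = 1.0 * u + rng.normal(0.0, 0.5, n)
combo_same = x1 - 2.0 * y1        # the trend cancels exactly: stationary

# Case 2: two DIFFERENT random walks whose steps are merely correlated.
v = np.cumsum(0.8 * steps + 0.6 * rng.normal(0.0, 1.0, n))
x2 = 2.0 * u + rng.normal(0.0, 0.5, n)
y2 = 1.0 * v + rng.normal(0.0, 0.5, n)
combo_corr = x2 - 2.0 * y2        # a random-walk component survives

# The surviving walk shows up as sample variance that keeps growing:
print(np.var(combo_same), np.var(combo_corr))
```

    In case 1 the combination is just noise minus noise; in case 2 a residual random walk u - v remains, so no fixed beta makes the combination stationary.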
  3. I am kind of in a rush and I will try to look at this again later if you don't get it figured out, but I noticed something quickly that might help.

    if x and y are random walks, then your u1(t) and u2(t) are really x(t-1) and y(t-1)

  4. Entertain my lunacy and suppose WN1 and WN2 are "extra" random shocks on top of the random walks. In other words, they are two different processes altogether, with different variances (but WN1 has zero mean, whereas the random walks have the drift mean).

    Force yourself to consider that it's not the same as the previous step + error, it's a totally different process.
  5. At the end of the analysis when I lump those things into that stochastic lump, I'm trying to figure out whether or not I can combine them based on the idea that they are driven by the same stochastic trend, or random walk.

    In the case I am analyzing specifically, I went and did a PCA on the returns of the two separate assets I'm modeling, and I find that most of the energy is directed along one axis, so cutting the dimensionality and making the assumption that there's one stochastic process running the show shouldn't screw up the assumptions, right?

    > summary( pcares )
    Importance of components:
                              Comp.1     Comp.2
    Standard deviation     1.3498772 0.42170078
    Proportion of Variance 0.9110842 0.08891577
    Cumulative Proportion  0.9110842 1.00000000

    So 91% of the variance is accounted for by a vector composed of an equal weighting of the two assets.

    If I reduce dimensionality and model the returns as simply a scalar multiple of the first vector in the orthonormal basis, 91% of the variance is retained -- meaning that the regression isn't some spurious coincidence that came about because the two random walks happened to be correlated over a period of time.
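    That kind of PCA can be sketched with a plain eigen-decomposition (a hypothetical simulation, assuming numpy; the exact proportions will differ from the R output quoted above):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1000

# Hypothetical returns for two assets driven mostly by one common factor.
common = rng.normal(0.0, 1.0, n)
r1 = common + rng.normal(0.0, 0.3, n)
r2 = common + rng.normal(0.0, 0.3, n)
returns = np.column_stack([r1, r2])

# PCA = eigen-decomposition of the covariance matrix of the returns.
cov = np.cov(returns, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]           # largest component first
prop_var = eigvals[order] / eigvals.sum()   # "Proportion of Variance" row
print(prop_var)
```

    With a strong common factor, the first component carries most of the variance, and its loadings come out roughly equal-weighted across the two assets.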

  6. What does u3 say?
  7. This is okay - WN1 and WN2 have no memory. They are new random shocks each time, not some combination of the last random shock plus a new shock. So, assuming they are stationary, any linear combination of them is stationary.

    Disclaimer- I don't remember all the definitions of stationarity and whether this requires strong stationarity or something weaker, but this is the idea.

    I think the part that is confusing you is that you are making an algebra error or something that keeps the random walks from canceling. If they are cointegrated, when you find the right combination of them they cancel out and you are left with the stationary process. So if they are cointegrated you should have something like:

    alpha * u1(t) = beta * u2(t)
    letting alpha = 1
    u1(t) = beta * u2(t)

    x(t) = beta * u2(t) + WN1(t)
    y(t) = u2(t) + WN2(t)

    then your cointegrating relationship is:
    x(t) - beta * y(t)

    sub in:
    beta * y(t) = beta * u2(t) + beta * WN2(t)

    you get:
    x(t) - beta * y(t) = beta * u2(t) + WN1(t) - beta * u2(t) - beta * WN2(t)

    which simplifies to:
    x(t) - beta * y(t) = WN1(t) - beta * WN2(t)

    which is stationary.
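    This cancellation can be checked numerically -- a minimal sketch, assuming numpy, with a hypothetical beta:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 1000
beta = 1.7   # hypothetical cointegrating coefficient

# Build the series exactly as above: one shared trend u2, with u1 = beta*u2.
u2 = np.cumsum(rng.normal(0.0, 1.0, n))
wn1 = rng.normal(0.0, 1.0, n)
wn2 = rng.normal(0.0, 1.0, n)
x = beta * u2 + wn1
y = u2 + wn2

# x - beta*y should equal WN1 - beta*WN2: the trend u2 cancels term by term.
resid = x - beta * y
print(np.allclose(resid, wn1 - beta * wn2))  # prints True
```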
  8. I think you have some problems with the regression and PCA logic too but I am heading to bed.
  9. Looks like you added, rather than subtracted, the processes. I think if you subtract, you should come to blackdiamond's conclusion.

    Regarding the particular RW components, I don't think that the difference equation guarantees stationarity in the new series. I recall it has to be tested depending on your specific series (CADF?).
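    Such a test can be sketched as a two-step, Engle-Granger-style check -- OLS first, then a hand-rolled Dickey-Fuller regression on the residuals (a minimal numpy sketch; a real test would add lag terms and compare the statistic against Engle-Granger critical values, not standard t tables):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 500

# Hypothetical cointegrated pair: shared random walk plus independent noise.
u = np.cumsum(rng.normal(0.0, 1.0, n))
y = u + rng.normal(0.0, 0.5, n)
x = 1.5 * u + rng.normal(0.0, 0.5, n)

# Step 1: OLS of x on y (with intercept) to estimate the candidate beta.
A = np.column_stack([np.ones(n), y])
(intercept, beta_hat), *_ = np.linalg.lstsq(A, x, rcond=None)
e = x - intercept - beta_hat * y

# Step 2: Dickey-Fuller regression delta[e](t) = rho * e(t-1) + err.
de, lag = np.diff(e), e[:-1]
rho = np.dot(lag, de) / np.dot(lag, lag)
se = np.sqrt(np.sum((de - rho * lag) ** 2) / (len(de) - 1) / np.dot(lag, lag))
t_stat = rho / se
print(round(t_stat, 2))  # strongly negative => residuals mean-revert
```

    If the pair were not cointegrated, the residuals would keep a random-walk component and the statistic would sit near zero instead.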