For frequency domain experts

Discussion in 'Automated Trading' started by nitro, Dec 3, 2009.

  1. nitro

    This is the Maximum Entropy spectrum of a signal that has been made stationary by differencing once. I was wondering if anyone who is a frequency domain expert could help me interpret it.

    [Image: Maximum Entropy spectrum of the once-differenced series]
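    (For anyone who wants to reproduce this kind of plot: below is a minimal sketch of a Burg-method maximum entropy spectrum in Python/numpy. The AR order, the toy price series, and the function names are mine for illustration, not the actual software that produced the image above.)

    Code:
    import numpy as np

    def burg_ar(x, order):
        """Burg's method: fit an AR(order) model (the maximum entropy estimate)."""
        x = np.asarray(x, dtype=float)
        a = np.array([1.0])            # AR polynomial, a[0] = 1
        E = np.dot(x, x) / len(x)      # zero-order prediction error power
        F, B = x.copy(), x.copy()      # forward / backward prediction errors
        for _ in range(order):
            ff, bb = F[1:], B[:-1]
            k = -2.0 * np.dot(ff, bb) / (np.dot(ff, ff) + np.dot(bb, bb))
            a = np.concatenate([a, [0.0]])
            a = a + k * a[::-1]        # Levinson update of the AR coefficients
            E *= (1.0 - k * k)
            F, B = ff + k * bb, bb + k * ff
        return a, E

    def me_spectrum(x, order=30, nfft=1024):
        """Maximum entropy PSD: E / |A(f)|^2 on normalized frequencies 0..0.5."""
        a, E = burg_ar(x, order)
        A = np.fft.rfft(a, nfft)
        freqs = np.fft.rfftfreq(nfft, d=1.0)   # cycles per sample (trading day)
        return freqs, E / np.abs(A) ** 2

    # price series -> difference once to (hopefully) make it stationary
    prices = np.cumsum(np.random.randn(512))   # stand-in for a real price series
    diffed = np.diff(prices)
    f, psd = me_spectrum(diffed)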
     
  2. nitro

    It seems I have not made the signal stationary enough, as there appears to be an upward trend from left to right.

    But there do appear to be two pronounced peaks...
     
  3. Not sure what you are trying to do with it, but...
    1) It is an alternative to the DFT.
    2) The peaks represent a higher concentration of spectral energy at those frequency components. Your frequency axis has been normalized (it runs out only to the Nyquist limit, half the sampling rate). So unless there is a way to de-normalize back to trading days, it does not say much on its own (look into sampling theory if you are interested).

    3) Your example is plotted against normalized frequency relative to the trading-day sample rate, meaning the peaks mark periodicities (in trading days) whose frequency components have higher magnitude. I.e. a 15-trading-day cycle would be more dominant than, say, a 10-day one (see the sketch after this list).
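    To make point 3 concrete, here is a small sketch (mine, not from the software above) of reading a peak location off a normalized-frequency axis and converting it back to a period in trading days. It assumes the axis is in cycles per sample and that one sample is one trading day.

    Code:
    import numpy as np

    # assume f (cycles per trading day, 0..0.5) and psd come from an ME / periodogram estimate;
    # here a toy spectrum with a bump near a 15-day cycle stands in for the real one
    f = np.linspace(0.0, 0.5, 513)
    psd = np.exp(-((f - 1.0 / 15.0) ** 2) / 1e-4)

    peak = f[np.argmax(psd[1:]) + 1]   # skip the DC bin at f = 0
    period_days = 1.0 / peak           # de-normalize: period in trading days
    print(f"dominant cycle ~ {period_days:.1f} trading days")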

    Keep in mind all of the information you are gathering tells you about the past, not so much about the future.

    http://iticsoftware.com/articles/digital-filters-fatl-satl-stlm-ftlm-2.html

    The trend (drift) you are interpreting is not related to stationarity; it is a function of an increasing density of high-frequency information in your detrended data set.
    If the signal were not stationary, you would expect more energy in the lower frequencies (i.e. your plot somewhat corroborates stationarity, with a zero DC component and increasing HF noise); see the quick check below.
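    A quick toy check of that point (my own, not something from the thread's software): compare the periodogram of a random walk with that of its first difference. The non-stationary walk piles its energy near zero frequency; the difference does not.

    Code:
    import numpy as np

    rng = np.random.default_rng(0)
    walk = np.cumsum(rng.standard_normal(2048))   # non-stationary random walk
    diff = np.diff(walk)                          # stationary first difference

    def periodogram(x):
        # simple raw periodogram on normalized frequencies 0..0.5
        X = np.fft.rfft(x - x.mean())
        return np.fft.rfftfreq(len(x)), np.abs(X) ** 2 / len(x)

    f_w, p_w = periodogram(walk)
    f_d, p_d = periodogram(diff)

    # fraction of energy below 0.05 cycles/sample: large for the walk, small for the difference
    low = lambda f, p: p[f < 0.05].sum() / p.sum()
    print(f"random walk:      {low(f_w, p_w):.2f} of energy below 0.05 cycles/sample")
    print(f"first difference: {low(f_d, p_d):.2f}")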

    Unless you are familiar with DSP and sampling theory, these concepts are not simple to grasp. However, if you want a good pseudo-layman intro, I highly recommend the book (which was recommended earlier in this thread) Neural, Novel & Hybrid Algorithms for Time Series Prediction by Masters. It is the companion book to the software you displayed, written by the same author.
     
  4. Baywolf

    Can you please maximize your Firefox window and repost the image?
     
  5. nitro

    It is not a Firefox window; it is the program used to compute these values. You can see from the window control icon that the window is already maximized.
     
  6. nitro

    Thanks,

    What I was mostly wondering is: does the varied number of nonzero peaks (the signal's energy being dispersed among different frequencies) tell us that the signal generating this ME spectrum is "random"?

    In other words, it seems that what you want is an ME spectrum with one or two peaks, with the rest of the energy close to zero.
     
  7. I don't know that I'd draw that conclusion.
    You need to be specific about what your definition of random is.

    A truly random (white noise), time-domain-stationary signal would have a close-to-flat spectral density with no peaks. A pure signal (one large peak) with no noise, as you mentioned, would be perfectly deterministic (over the period you sampled).

    Having peaks shows that certain frequency components making up the signal are dominant, and hence not random by the above definition.

    If you had a perfect sine wave, you would see one peak. Add some noise and you would see 'skirts' in the frequency domain. Multiply by or add another, different sine and you would see more peaks, with differing magnitudes. The only parts that are truly random are the skirts of the noise (the flat regions); a toy illustration is sketched below.
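    A toy illustration of the paragraph above (my own; it assumes plain numpy, a tone at 0.05 cycles per sample, and Gaussian noise):

    Code:
    import numpy as np

    rng = np.random.default_rng(1)
    n = 4096
    t = np.arange(n)

    noise = rng.standard_normal(n)            # white noise: roughly flat spectrum
    sine  = np.sin(2 * np.pi * 0.05 * t)      # pure tone: single peak at 0.05
    mixed = sine + 0.5 * noise                # tone plus noise: peak with "skirts"

    def spectrum(x):
        f = np.fft.rfftfreq(n)
        return f, np.abs(np.fft.rfft(x)) ** 2 / n

    for name, x in [("white noise", noise), ("pure sine", sine), ("sine + noise", mixed)]:
        f, p = spectrum(x)
        print(f"{name:12s} peak at f = {f[np.argmax(p)]:.3f}, "
              f"peak/median power = {p.max() / np.median(p):.1f}")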

    The problem with that reasoning is that it only says the snapshot of time you are sampling has strongly deterministic frequency components surrounded by lower-magnitude noise. Unfortunately, take a sample a few steps later and the dynamics change quickly. Therefore, having dominant peaks (which arguably means better than random, i.e. a high signal-to-noise ratio) does not say much about future out-of-sample behavior, nor about the randomness of the population.

    If it helps any, you could have a thousand-plus peaks and still be perfectly deterministic (that is what Fourier analysis is all about). Yet because it tells you nothing about the total series population, it is about as good an indicator of randomness or determinism as having one or two peaks in past data --
    no better, no worse.
     
  8. nitro

    Thanks.

    Regarding "...tells you nothing about the total series population...": you could localize in both the time domain and the frequency domain using wavelets, and that does tell you something (a rough sketch below)...
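    A rough sketch of what that localization looks like (my own example; it assumes the PyWavelets package and a real Morlet wavelet, but any CWT implementation would do): a cycle present only in the second half of the sample shows up in the time-frequency map only there, which a plain spectrum cannot tell you.

    Code:
    import numpy as np
    import pywt   # PyWavelets -- assumed available

    rng = np.random.default_rng(2)
    n = 1024
    t = np.arange(n)
    # a cycle that exists only in the second half of the sample, buried in mild noise
    x = rng.standard_normal(n) * 0.3
    x[n // 2:] += np.sin(2 * np.pi * 0.05 * t[n // 2:])

    scales = np.arange(2, 64)
    coefs, freqs = pywt.cwt(x, scales, 'morl')   # coefs: (len(scales), n) time-frequency map

    # energy of the ~0.05 cycles/sample band, first half vs second half of the window
    band = np.argmin(np.abs(freqs - 0.05))
    e1 = np.sum(np.abs(coefs[band, : n // 2]) ** 2)
    e2 = np.sum(np.abs(coefs[band, n // 2 :]) ** 2)
    print(f"band energy, first half: {e1:.1f}, second half: {e2:.1f}")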

     
  9. Although it is true that wavelets offer certain advantages, as you describe above, unfortunately, much like Fourier analysis, they are only descriptors of the past. There is no tool that can ever tell you the total series population with certainty.

    Understanding the theory behind these tools is a step above 'Hurst' wave analysis (laugh), but, much like Hurst, they are only tools to describe something. How you choose to extrapolate that information is an entirely different subject in itself. :cool:
     
  10. LOL! He wanted you to post the Daily Hottie.

    This type of misunderstanding might be a clue for you.
     