collaborations

Discussion in 'Strategy Building' started by ssrrkk, Nov 7, 2011.

  1. It sounds to me like you're not using fully simulated ticks from historical bars. What I mean by fully simulated ticks is that for each 1-minute bar you create, e.g., 60 prices - one per second - and feed those into your backtests. For example, for an up bar you would have three stages: open-low, low-high, high-close. Each of these stages (legs) gets 20 ticks, with each tick's price interpolated between the corresponding OHLC values: the price moves down, then up, then down to the close. This way your backtest simulates the bar really being created tick by tick, by recreating ticks from the bar. I find this pretty important in backtesting. Each tick also gets a time stamp - in backtesting you assign it yourself, and in real time you use the system time (or IB server time). This approach lets you make stronger assumptions, because of the increased accuracy. Strategies that are highly reactive should often look at the last price (or bid/ask) to determine whether a level has been broken - not just at the OHLC values of bars. It also lets you compute slippage more easily: make it, for example, 0.05% of the last price delta (last simulated-tick delta), so that when the price moves faster, the slippage increases.
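    A minimal sketch of the idea, in Python (the function name and the 20-ticks-per-leg default are my own; the three-leg path for up/down bars follows the description above):

    ```python
    # Simulate intra-bar ticks from a 1-minute OHLC bar by linearly
    # interpolating prices along a three-leg path. For an up bar the path
    # is open -> low -> high -> close; for a down bar, open -> high -> low -> close.
    import numpy as np

    def simulate_ticks(o, h, l, c, ticks_per_leg=20):
        """Return simulated tick prices for one bar."""
        if c >= o:  # up bar
            legs = [(o, l), (l, h), (h, c)]
        else:       # down bar
            legs = [(o, h), (h, l), (l, c)]
        ticks = []
        for start, end in legs:
            # endpoint=False avoids duplicating the joint price between legs
            ticks.extend(np.linspace(start, end, ticks_per_leg, endpoint=False))
        ticks.append(c)  # always finish exactly at the close
        return np.array(ticks)

    prices = simulate_ticks(o=100.0, h=101.0, l=99.5, c=100.8)
    ```

    In a backtest you would then assign each simulated tick a timestamp spread evenly across the bar's minute and feed them to the strategy one at a time.
    
    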

    This is my model for calculating slippage, it would be interesting to hear about yours (if you're into sharing).
     
    #31     Nov 9, 2011
  2. tim888

    tim888

    It seems that noise is giving you a hard time.
     
    #32     Nov 9, 2011
  3. gmst

    gmst

    Hello, I ran into these issues myself some time back: different bars, real-time vs historical data mismatches, computer clock drift... very minor things, but with potentially huge impact.

    It is clear that you have waded in these waters enough to go solo... leave that friend behind and move on with your trading. You don't need him at this stage of your development (as is evident from the content of your posts).

    A Very Big Good Luck!
     
    #33     Nov 9, 2011
  4. hoppla

    hoppla

    @OP: sounds like we have similar acquaintances. Though not by choice in my case.
     
    #34     Nov 9, 2011
  5. ssrrkk

    ssrrkk

    hi braincell,
    That is an interesting idea. I haven't considered going to that extent, because I deliberately make my IB trading algorithm trigger only on minute boundaries -- though there really isn't any good reason for doing it that way other than that I can match it up well with backtesting. So basically, if my bars are close enough, my trades should match, and they indeed do.

    Like I said, I haven't done anything special for slippage -- just worsening my entries and exits by a fixed fraction, which I have measured through forward testing (and my occasional live runs). In other words, I can see the actual minute-bar close that triggered a trade, check the actual fill from that trade (in either paper or live accounts), save that number, and later look at the statistics of those real-life measured slippage amounts. By doing this, I found that the slippage can be approximated fairly well with the simple fractional adjustment I am using.

    However, as I mentioned, there is still error in the estimate, because the actual slippage varies with the velocity of the price movement at trigger time, and because of my clock difference (bar-boundary timing differences). At this point I can live with the +/- 10-15% difference in profits per day. That is enough precision to know whether I have a profitable algorithm or not.
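    The bookkeeping described above could look something like this sketch (function and variable names, and all the numbers, are invented for illustration):

    ```python
    # Tally measured slippage: for each trade, record the bar close that
    # triggered it and the actual fill, then look at the distribution of
    # fractional slippage to pick a fixed adjustment for backtests.
    import statistics

    def fractional_slippage(trigger_close, fill, side):
        """Signed slippage as a fraction of the trigger price.
        Positive = fill worse than the triggering bar close."""
        if side == "buy":
            return (fill - trigger_close) / trigger_close
        return (trigger_close - fill) / trigger_close  # sell

    trades = [  # (trigger bar close, actual fill, side) -- made-up numbers
        (100.00, 100.03, "buy"),
        (101.50, 101.52, "buy"),
        (99.80,  99.77,  "sell"),
    ]
    slips = [fractional_slippage(t, f, s) for t, f, s in trades]
    mean_slip = statistics.mean(slips)  # the fixed fraction applied in backtests
    ```
    
    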
     
    #35     Nov 9, 2011
  6. ssrrkk

    ssrrkk

    Thanks! Same to you and everyone on this board!
     
    #36     Nov 9, 2011
  7. ssrrkk

    ssrrkk

    Actually, I should back up and say there is a good reason for me to use minute boundaries (bars) for trading as opposed to triggering on the streaming last-tick value: the minute bars act as a crude low-pass filter, or smoother, that prevents triggers on extremely rapid, spurious spikes. That was part of the motivation for using minute bars. I have also played around with very short (~3-minute) moving-average windows, but so far the minute-bar triggers appear to work fine for me.
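    A toy illustration of the effect being described (all numbers and the trigger level are invented): a one-tick spike above a breakout level fires a tick-based rule, but never appears in the bar close, so a close-based rule stays flat.

    ```python
    # Compare a tick-level trigger vs a bar-close trigger on the same
    # minute of (invented) tick data containing one spurious spike.

    LEVEL = 101.0  # hypothetical breakout level

    ticks = [100.2, 100.3, 101.4, 100.1, 100.0, 100.2]  # spike to 101.4 mid-bar
    bar_close = ticks[-1]

    tick_triggered = any(p > LEVEL for p in ticks)  # tick rule fires on the spike
    bar_triggered = bar_close > LEVEL               # close rule does not
    ```
    
    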
     
    #37     Nov 9, 2011
  8. Regarding slippage: well, alright, if you took the time to compare live vs backtest, then that's pretty good. I still haven't done that, but it's a good statistic to have.

    Regarding using 1-min bars as "low-pass" filters, I get the idea but I'm not sure I agree with it, mainly because your low-pass filter now depends on the randomness of when a bar closes and which ticks come in last (or first, depending on what you're looking at). This may give you a filter toward the end of a bar (the 45-60 second stretch of the minute), but not so much before that.

    I suggest you create an internal tick chart if you want some sort of filtering/MA. The best way to do it is with high-precision volume tick data (i.e. CQG, though eSignal has a feed that's good enough and affordable in my experience). Volume ticks really give you the best sense of what the market is doing, because just by measuring the slope of the MA you also get to see the buy/sell pressures: if it's long and flat, it means there's a lot of volume but the price isn't moving, etc. This is very useful information, and I do believe my future systems should and will be built around both 1-minute and X-volume charts, both generating signals. What I'm trying to say is that the tick data coming from IB is very poor (like you said) and can't really be relied upon to construct accurate bars or tick charts for filtering (tick MA).
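    Constant-volume ("X-volume") bars can be built from a tick stream along these lines (a rough sketch; the function name and tick format are my own, and multiple identical bars are emitted if a single print exceeds one bar's volume):

    ```python
    # Build constant-volume bars from (price, size) ticks: emit one OHLC
    # bar every time bar_volume shares have accumulated, regardless of
    # how much wall-clock time that took.

    def volume_bars(ticks, bar_volume):
        """ticks: iterable of (price, size); returns a list of OHLC dicts."""
        bars, accum = [], 0
        o = h = l = None
        for price, size in ticks:
            if o is None:
                o = h = l = price
            h, l = max(h, price), min(l, price)
            accum += size
            while accum >= bar_volume:
                bars.append({"open": o, "high": h, "low": l, "close": price})
                accum -= bar_volume
                o = h = l = price  # next bar starts at the current price
        return bars

    bars = volume_bars([(100, 300), (101, 400), (99, 300), (100, 500)],
                       bar_volume=500)
    ```

    Because each bar represents the same traded volume, a moving average over these bars speeds up and slows down with activity, which is the "buy/sell pressure" read described above.
    
    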

    In the end, there is one very important aspect when building an ATS: you, as a human, need to be able to decipher what the heck is going on. For those needs, 1-minute is perfect.
     
    #38     Nov 9, 2011
  9. ssrrkk

    ssrrkk

    hi braincell,
    yes, I do understand the minute-bar discretization is not a real low-pass filter -- however, 1-minute discretization is mathematically well defined and well understood; in DSP terms it is uniform discrete-time sampling, and the frequency characteristics of such sampled data are predictable and well known. But I do hear what you are saying, since the sampling is, strictly speaking, not a clean low-pass filter, and more importantly there could be (higher-frequency) information within those bars.
     
    #39     Nov 9, 2011
  10. ssrrkk

    ssrrkk

    hi braincell,
    thanks to your probing questions, I have actually realized something: since I am indeed sampling the data, the frequency content of my signal now contains higher frequencies folded down below the sampling frequency -- i.e., aliasing. I am not sure whether this helps or hurts at this point, but I might start thinking about putting in a true anti-aliasing low-pass filter before I create the bars. Or maybe this will be a big waste of time -- I'm guessing it probably is.
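    The anti-aliasing idea could be sketched as follows (a crude moving average stands in for a proper FIR filter; the synthetic signal, window lengths, and function names are all invented for illustration):

    ```python
    # Low-pass the tick series before sampling it down to bar closes,
    # instead of decimating the raw ticks. A fast oscillation (well above
    # the bar-sampling Nyquist rate) would otherwise alias into the bars.
    import math

    def moving_average(xs, window):
        """Trailing moving average (a crude FIR low-pass filter)."""
        out = []
        for i in range(len(xs)):
            lo = max(0, i - window + 1)
            chunk = xs[lo:i + 1]
            out.append(sum(chunk) / len(chunk))
        return out

    def downsample(xs, step):
        """Keep every step-th point (one 'close' per simulated minute)."""
        return xs[step - 1::step]

    # Slow trend plus a period-7 oscillation, sampled every 60 ticks:
    ticks = [0.01 * n + math.sin(2 * math.pi * n / 7) for n in range(600)]
    raw_bars = downsample(ticks, 60)                           # aliased
    filtered_bars = downsample(moving_average(ticks, 21), 60)  # smoothed first
    ```

    The filtered series tracks the underlying trend (with a small lag from the trailing window), while the naively decimated series still carries the folded-down oscillation.
    
    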
     
    #40     Nov 9, 2011