Nooby McNoob becomes a quant

Discussion in 'Journals' started by nooby_mcnoob, Mar 24, 2017.

  1. Nooby

    This is one of the reasons algofy mentioned: if you rely on backtesting results (as I do) to place calculated real-time bets, the underlying data must be accurate.

    The most important reason is execution. Daytrading is all about "getting in faster", and you can't do that without granular data. The low/high of a one-minute bar can occur within 5-10 seconds, often printing over 20% of the instrument's average daily range. Prices move that fast now. You simply cannot overlook this if you want to compete, even outside of HFT.
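    A quick way to see this in your own data - assuming you have raw ticks as (timestamp_in_seconds, price) pairs, which is an illustrative format, not anything the thread specifies - is to measure how many seconds elapsed between a bar's high tick and its low tick:

    ```python
    # Hedged sketch: how fast did a 1-minute bar's full range actually print?
    # Tick format (seconds_into_bar, price) and the sample data are made up.

    def range_span_seconds(ticks):
        """Seconds elapsed between the tick that set the bar's high
        and the tick that set the bar's low."""
        hi = max(ticks, key=lambda t: t[1])  # tick with the highest price
        lo = min(ticks, key=lambda t: t[1])  # tick with the lowest price
        return abs(hi[0] - lo[0])

    # Synthetic ticks for one 60-second bar: the whole high-to-low
    # swing happens between t=13s and t=18s.
    ticks = [(0, 100.0), (12, 100.1), (13, 100.6), (18, 99.7), (45, 100.2)]
    print(range_span_seconds(ticks))  # -> 5
    ```

    A bar-level backtest would treat that 5-second swing as if it were spread across the full minute.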
    #101     Apr 15, 2017
    algofy and nooby_mcnoob like this.
  2. algofy


    Yep exactly this.
    #102     Apr 15, 2017
  3. i960


    Tick data is just another level of precision beyond minute-based data - without the book, it still won't tell you whether you could necessarily have executed at those prices (especially for instruments that are not trading every single millisecond). Sure, if you can get it (and *manage* it), it's preferable to have, but without order book data you're still somewhat in the dark - and good luck getting book data at any reasonable cost.

    I'm of the opinion that, regardless of the level of precision demanded, the implementer should be quite conservative and assume worse execution rather than "good" execution. Otherwise you're extremely beholden to your actual execution abilities, and those can and will change.
    #103     Apr 16, 2017
    terminator8 and digitalnomad like this.
  4. sle


    That's not true - if your execution assumptions are conservative (e.g. you assume that you wait the whole bar period and then cross the spread) you will be OK. The key is to make a conscious effort not to fit to HFT noise in the first place, and to play for longer time frames and bigger opportunities.
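    The conservative assumption described above can be sketched against OHLC bars - the bar dictionary and `half_spread` parameter are illustrative assumptions, not anything from the thread:

    ```python
    # Hedged sketch: a worst-case fill model for bar-level backtests.
    # Assumption per the post above: you wait out the whole bar, then
    # cross the spread - so buys pay the bar's high plus half the spread,
    # sells hit the bar's low minus half the spread.

    def conservative_fill(bar, side, half_spread):
        """Worst price seen during the bar, plus the cost of crossing."""
        if side == "buy":
            return bar["high"] + half_spread   # worst case for a buyer
        return bar["low"] - half_spread        # worst case for a seller

    # Illustrative 1-minute bar and spread.
    bar = {"open": 100.0, "high": 100.5, "low": 99.8, "close": 100.2}
    print(conservative_fill(bar, "buy", 0.01))
    print(conservative_fill(bar, "sell", 0.01))
    ```

    If a strategy still shows an edge under this pessimistic fill model, real-world execution quality becomes upside rather than a hidden dependency.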
    #104     Apr 16, 2017
    nbbo and i960 like this.
  5. I see your point here, but it's better to execute at established bias levels than at minute-bar closes, and you need tick data to get your orders filled at those levels.
    #105     Apr 16, 2017
  6. sle


    It becomes a question of infrastructure fairly quickly - if you are trading a few thousand names, simulating everything at tick level becomes a pain. Plus, if your holding time is days or longer, it's a waste of effort that could be applied elsewhere.
    #106     Apr 16, 2017
    i960 and digitalnomad like this.
  7. quant1


    I think it's largely a matter of trade frequency versus number of products. As frequency increases, you'll likely require higher-resolution data. If you're trading for dollars per trade rather than pennies, the execution assumptions can be relaxed. For pennies, on the other hand, much of the alpha is captured in the fine details of execution.
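    The dollars-versus-pennies point is simple arithmetic - here with purely made-up numbers, since the post gives none:

    ```python
    # Hedged, illustrative arithmetic: the same fixed slippage is noise
    # for a dollars-per-trade edge but eats most of a pennies edge.
    # All figures are invented for illustration.

    def edge_retained(gross_edge, slippage):
        """Fraction of the gross per-trade edge left after slippage."""
        return (gross_edge - slippage) / gross_edge

    print(edge_retained(2.00, 0.02))  # $2.00 edge, $0.02 slip: ~99% retained
    print(edge_retained(0.03, 0.02))  # $0.03 edge, $0.02 slip: ~33% retained
    ```

    That is why coarse execution assumptions are tolerable in the first case and fatal in the second.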
    #107     Apr 16, 2017
    digitalnomad likes this.
  8. PistolPete


    As much as quick fills at accurate prices matter, bad fills should be balanced out by better fills over time, so long-term results should balance out pretty well. Just saying - everyone remembers the shit fills and moans, but no one remembers the better fills that do happen on slow execution. The next tick has two choices.
    #108     Apr 18, 2017
  9. Zzzz1


    Very good point and very true. For anyone testing longer, latency-insensitive strategies, the deltas between actual fills and an assumption on 1-minute bars should average out. One may, for example, assume a fill at the average of the high and low of the 1-minute bar. Another fair assumption would be a fill at the closing price of the bar or the opening price of the next bar.

    Obviously, the more latency-sensitive a strategy becomes, the worse the above assumptions become.
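    The averaging-out claim can be sanity-checked on synthetic bars - everything here (the random-walk bar generator, the uniform "actual fill" stand-in) is an assumption for illustration, not real market data:

    ```python
    # Hedged sketch: do the three fill assumptions above (high/low midpoint,
    # bar close, next bar's open) average out against a notional actual fill?
    # Bars are a synthetic random walk; the "actual" fill is drawn uniformly
    # from the bar's range as a crude stand-in.
    import random

    random.seed(42)

    def make_bars(n, start=100.0):
        """Generate n synthetic 1-minute OHLC bars via a small random walk."""
        bars, px = [], start
        for _ in range(n):
            o = px
            c = o + random.uniform(-0.2, 0.2)
            h = max(o, c) + random.uniform(0.0, 0.1)
            l = min(o, c) - random.uniform(0.0, 0.1)
            bars.append({"open": o, "high": h, "low": l, "close": c})
            px = c
        return bars

    bars = make_bars(1000)
    deltas = {"hl_mid": [], "close": [], "next_open": []}
    for prev, nxt in zip(bars, bars[1:]):
        actual = random.uniform(prev["low"], prev["high"])  # notional fill
        deltas["hl_mid"].append((prev["high"] + prev["low"]) / 2 - actual)
        deltas["close"].append(prev["close"] - actual)
        deltas["next_open"].append(nxt["open"] - actual)

    for name, d in deltas.items():
        print(f"{name}: mean delta {sum(d) / len(d):+.4f}")
    ```

    Under these assumptions each mean delta lands near zero, which is the "averages out" claim - though the per-trade deltas themselves are not small, which is exactly why the assumptions break down for latency-sensitive strategies.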

    #109     Apr 18, 2017
  10. Nooby

    Did you get anywhere near pay dirt with this venture? I was really rooting for you :)
    #110     Jul 15, 2017