Nooby, this is one reason algofy mentioned: if you rely on backtesting results (as I do) to place calculated real-time bets, the information must be accurate. The most important reason, though, is execution. Daytrading is all about "getting in faster", and you can't do that without granular data. The low/high of a one-minute bar can occur within 5-10 seconds, very often printing over 20% of the instrument's average daily range. Prices move that fast now. You simply cannot overlook this if you want to compete, even outside of HFT.
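To put a rough number on that, here's a quick pandas sketch (the column names, ADR window, and 20% threshold are my own assumptions) that flags 1-minute bars whose range prints more than a given fraction of average daily range:

```python
import pandas as pd

def flag_fast_bars(bars: pd.DataFrame, adr_window: int = 20,
                   threshold: float = 0.20) -> pd.DataFrame:
    """Flag 1-minute bars whose range exceeds `threshold` x ADR.

    `bars` is assumed to have a DatetimeIndex and high/low columns.
    """
    daily = bars.resample("1D").agg({"high": "max", "low": "min"}).dropna()
    adr = (daily["high"] - daily["low"]).rolling(adr_window).mean()
    # Use the prior day's ADR for each minute bar to avoid lookahead.
    bar_adr = adr.shift(1).reindex(bars.index, method="ffill")
    out = bars.copy()
    out["range_vs_adr"] = (bars["high"] - bars["low"]) / bar_adr
    out["fast_bar"] = out["range_vs_adr"] > threshold
    return out
```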
Tick data is just another level of precision beyond minute-based data - without the book, it still won't tell you whether you could actually have executed at those prices (especially for instruments that are not trading every single millisecond). Sure, if you can get it (and *manage* it) it's preferable, but without order book data you're still somewhat in the dark - and good luck getting book data at any reasonable cost. I'm of the opinion that, whatever level of precision is demanded, the implementer should be quite conservative and assume worst-case execution far more often than "good" execution. Otherwise you're extremely beholden to your actual execution ability, and that can and will change.
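One concrete way to bake that conservatism in, given bars but no book: only count a resting limit order as filled when price trades *through* the level by at least a tick, never on a mere touch (you have no idea where you sat in the queue). A minimal sketch, with names and the tick-size parameter of my own choosing:

```python
def conservative_limit_filled(limit_px: float, side: str,
                              bar_high: float, bar_low: float,
                              tick_size: float) -> bool:
    """Without book data, queue position is unknown, so require price to
    trade through the limit by at least one tick before counting a fill."""
    if side == "buy":
        return bar_low <= limit_px - tick_size
    if side == "sell":
        return bar_high >= limit_px + tick_size
    raise ValueError(f"unknown side: {side!r}")
```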
That's not true - if your execution assumptions are conservative (e.g. you assume you wait for the whole bar period and then cross the spread) you will be OK. The key is to make a conscious effort not to fit into the HFT noise to begin with, and to play for longer time frames and bigger opportunities.
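The "wait the whole bar, then cross the spread" assumption is trivial to model. A rough sketch (the assumed spread is a parameter you'd have to estimate per instrument):

```python
def bar_end_cross_spread_fill(bar_close: float, side: str,
                              assumed_spread: float) -> float:
    """Conservative fill: the order waits out the full bar, then crosses
    the assumed spread at the bar's closing price."""
    half = assumed_spread / 2.0
    return bar_close + half if side == "buy" else bar_close - half
```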
I see your point here, but it's better to execute at established bias levels than at minute-bar closes, and you need tick data to know whether your orders would have been filled at those levels.
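With tick data, checking whether a level would have been reached is a simple scan - something like this sketch (names are mine; note that filling on a bare touch of the level is already on the optimistic side, per the trade-through point above):

```python
def fill_at_level(ticks, level: float, side: str):
    """Scan ticks (an iterable of (timestamp, price) pairs) and return the
    first tick at which an order resting at `level` could have filled."""
    for ts, px in ticks:
        if (side == "buy" and px <= level) or (side == "sell" and px >= level):
            return ts, level
    return None  # level never traded; order goes unfilled
```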
It becomes a question of infrastructure fairly quickly - if you are trading a few thousand names, simulating everything at tick level becomes a pain. Plus, if your holding time is days or longer, it's wasted effort that could be applied elsewhere.
I think it's largely a matter of trade frequency versus number of products. As frequency increases, you'll likely require higher-resolution data. If you're trading for dollars per trade rather than pennies, the execution assumptions can be relaxed. For pennies, on the other hand, much of the alpha is captured in the fine details of execution.
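To put rough (hypothetical) numbers on that:

```python
def edge_retained(gross_edge: float, slippage: float) -> float:
    """Fraction of gross per-trade edge left after slippage."""
    return (gross_edge - slippage) / gross_edge

# One extra cent of slippage per share, made-up edges:
edge_retained(2.00, 0.01)  # ~0.995 - a dollars-per-trade edge barely notices
edge_retained(0.02, 0.01)  # 0.5    - a pennies-per-trade edge loses half its alpha
```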
As much as quick fills at accurate prices matter, bad fills should be balanced out by better fills over time, so long-term results should even out pretty well. Just saying - everyone remembers the shit fills and moans about them, but no one remembers the better-than-expected fills on slow execution, which do happen. The next tick has two choices.
Very good point, and very true. For anyone testing longer, latency-insensitive strategies, the deltas between actual fills and an assumption on 1-minute bars should average out. One may, for example, assume a fill at the average of the high and low of the 1-minute bar. A fair assumption would also be a fill at the closing price of the bar, or at the open price of the next bar. Obviously, the more latency-sensitive a strategy becomes, the worse those assumptions become.
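Those three assumptions are easy to line up side by side. A minimal pandas sketch (column names are my own assumption):

```python
import pandas as pd

def assumed_fills(bars: pd.DataFrame) -> pd.DataFrame:
    """Candidate fill prices per 1-minute bar under the assumptions above."""
    fills = pd.DataFrame(index=bars.index)
    fills["hl_mid"] = (bars["high"] + bars["low"]) / 2.0  # high/low average
    fills["bar_close"] = bars["close"]                    # close of the bar
    fills["next_open"] = bars["open"].shift(-1)           # open of the next bar (NaN on the last row)
    return fills
```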