Ok, thanks for the reply regarding predictive models. Do they rely mainly on order book data, or more on a universe of instruments? I mean, for example, you study one stock's performance vs. index constituents/futures, PCA-like. And how stable are those relationships? Is some kind of calibration over a certain period required? Cheers,
Hello HFT, thank you for teaching us a lot. Can you please tell me which indicators you think are most reliable for equities trading? EMA? Volume? MA? etc. Do you think a stock taking out the 2-day high on good volume is a good indicator for a long trade, and vice versa for a short? What other indicators are good? I am new to trading and appreciate your insights. THANK YOU
hft - If you don't mind, a question for you on measuring latency. In this instance, I am only speaking of the latency between an event occurring on an exchange and then reaching your box. I suppose there are three 'clocks' that matter here:

1. OFFICIAL_TIME - the "true" correct current time
2. EXCHANGE_TIME - the clock that exchanges/venues use to timestamp trades/quotes
3. YOUR_TIME - the time that you keep on your box

Obviously, in a perfect world, YOUR_TIME = EXCHANGE_TIME = OFFICIAL_TIME, and upon receipt of a message your latency = YOUR_TIME - EXCHANGE_TIMESTAMP. Realistically, you need to know how both YOUR_TIME and EXCHANGE_TIME differ from OFFICIAL_TIME.

a) I would assume that you try to get YOUR_TIME as close to OFFICIAL_TIME as possible and then assume that difference to be zero (or perhaps x units fast). How is this done? How close can you get?

b) I'd imagine the far more difficult undertaking is to try and estimate the (EXCHANGE_TIME - OFFICIAL_TIME) difference. How do you handle this?
Actually we don't care about official time at all. Most times are measured as diffs between our own timestamps only. In other words, we got some market data at YOUR_TIME_A and sent out a response at YOUR_TIME_B; B - A = our response time.

Exchange timestamps are used when they offer multiple timestamps so we can diff them, like the time elapsed between seeing different market data events timestamped by them. Or, some exchanges will tell you when your order was received, then when it hit the matching engine, but in this case we're still measuring diffs, so at no point do we try to sync between their timestamps and ours.

We don't even compare timestamps between our own boxes. They sync up to NTP or other protocols for medium-frequency analysis (good to the nearest milli or whatever), but for anything truly latency sensitive we only consider timestamps from a single source (server or wiretap) as comparable.
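The single-clock-source principle above can be sketched in a few lines. This is only an illustration, not his code: the tick fields and the `decide` helper are invented, and the point is just that both timestamps come from the same monotonic clock on the same box, so wall-clock sync with the exchange never enters into it.

```python
import time

def decide(tick):
    # Hypothetical stand-in for real signal evaluation / strategy logic.
    return {"side": "buy" if tick["bid"] > tick["ref"] else "sell"}

def handle_tick(tick, send_response):
    t_recv = time.monotonic_ns()   # YOUR_TIME_A: market data enters our process
    order = decide(tick)
    send_response(order)
    t_sent = time.monotonic_ns()   # YOUR_TIME_B: response handed off
    # B - A = internal response time, in microseconds. Both stamps come from
    # the same monotonic clock, so clock offset vs. the exchange is irrelevant.
    return (t_sent - t_recv) / 1_000
```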
Yep. Prices, sizes at prices, trades, and time. It bewilders me how many ways you can amalgamate those individual variables into alphas. That varies a lot, both in terms of what's analyzed and its stability. Some settings change on every tick, others haven't changed in years. Most adjustments to models are made somewhere between the tick and daily timeframe, though some parameters admittedly get lost in the shuffle and don't change for years.
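As a toy illustration of amalgamating those variables (my example, not something from the thread), the classic top-of-book size imbalance is one of the simplest combinations of "sizes at prices" into a directional signal:

```python
def book_imbalance(bid_size, ask_size):
    # Toy alpha: top-of-book size imbalance, in [-1, 1].
    # +1 = all resting size on the bid (buying pressure), -1 = all on the ask.
    return (bid_size - ask_size) / (bid_size + ask_size)
```

A balanced book gives 0; a book with 300 resting on the bid against 100 on the ask gives +0.5. Real signals layer many such features across price levels and time, but they are built from the same four raw variables.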
I don't know, try another thread/forum. When you talk about a timeframe much longer than seconds I get lost.
Thanks for the answer. Could you please tell me what you think the difference is between taking and making strategies? For example, why do you, as you wrote, run mostly making strategies? Do they require a different type of price prediction, or do taking strategies require better speed? Why can't you run taking strategies on the same price prediction and do the same volume of taking as making? Thank you for your answers.
So if TOTAL_LATENCY = EXCHANGE_TO_YOU + PROCESSING_TIME + YOU_TO_EXCHANGE, you only bother measuring PROCESSING_TIME (if I'm understanding correctly). I'm sure much of what you do is aimed at reducing the processing figure, but without constant monitoring of the other two (especially EXCHANGE_TO_YOU), how do you know if you're having network problems or the like?

Perhaps a better question: how do you know that a signal is still valid and not stale? For example, say I trade a simple model where, after Event1 on the exchange, I have x micros to get my order on the book; after that point, positive expectancy is gone. If you don't know how long it's been since Event1 occurred, how can you determine whether sending the order makes sense?

I realize that you can get TOTAL_LATENCY figures by receiving a timestamped exchange event, sending an order in response, and then taking the difference in exchange timestamps between your order hitting the book and the original event. However, that is still after the fact and doesn't isolate each element of non-processing time.

Clearly, I'm asking because these are things I contend with in my own trading (albeit on a much slower scale). What's surprising to me is that I would have thought they were much more critical in your business... and yet you seem to ignore them entirely, ha.
You can only compare times from the same source. If you difference your time and exchange time, you have no way of knowing what combination of clock offset (maybe you, or they, are 100 micros off), network latency, and processing latency you're actually looking at. I think he's saying you can only measure:

- your own processing latency
- exchange latency: your time from sending the order to receiving the order ack (which can also give you a clue about network latency)
I'll break this up into the 3 separate issues we're addressing:

1) Am I only concerned with processing time (internal latency) and disregarding external latency (me to exchange and exchange to me)?

No, I measure external latency as well. But instead of breaking it into 2 separate components (meToExchange and exchangeToMe), I lump it into one measurement (basically OrderSent to OrderAckReceived) that's measured at a single source (me). This way, there are measures I know I can take to reduce both internal latency (writing more efficient code) and external latency (optimizing my interaction with the exchange).

2) How do I know/guess when events happened in 'real' time at the exchange? In other words, how do I know the staleness of the data I'm receiving and processing?

One close guess, which might be good enough for you, is to look at the lag between the time you receive a fill and the time it's reflected in the pricefeed. Add half a round-trip time to that and you get a pretty decent approximation of how long ago the events you're just now seeing actually happened. You're making a few assumptions (namely that the lag is consistent, and that the FillNotification-->PublicLastTrade receipt lag is indicative of the lags in other market data as well), but it's certainly a start.

3) How do I decide whether a signal is still good if I can't judge its true age?

Simply put, I take the time I receive the signal as the starting time for my signal analysis. So if I think I have a good signal that tells me to hit the market whenever I receive signal X, and I can effectively model whether my order will get filled, I analyze the PNL/predictivity of signal X that way, without caring about the actual time at which X occurred on the exchange. In your example, if you know you have X micros of predictivity after Event1 happened on the exchange, can't you change your 'happened on the exchange' to 'received the information'?
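The staleness approximation in issue 2 reduces to simple arithmetic. This is only my reading of it, with invented names and all timestamps taken from your own clock: the lag between your private fill notification and the same trade appearing on the public feed, plus half an order round trip.

```python
def staleness_estimate(fill_time_ns, public_trade_time_ns, round_trip_ns):
    # Approximate age of incoming market data (all stamps from OUR clock):
    # lag between receiving our fill privately and seeing that same trade
    # on the public feed, plus half an OrderSent->OrderAck round trip.
    # Assumes the feed lag is roughly consistent across message types.
    feed_lag = public_trade_time_ns - fill_time_ns
    return feed_lag + round_trip_ns // 2
```

E.g. with a 500 ns fill-to-feed lag and a 200 ns round trip, events on the feed are estimated to be about 600 ns old by the time you see them.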
If your premise is that the time from an event happening to market data receipt is inconsistent, and you need a good model of that inconsistency, then I'd understand where you're coming from, and there are exchanges where that applies. In that case, I'd try to model the lags of the signals you're interested in by simulating them yourself. For example, if your signal is based on LastTrades, look at the Fill-->LastTrade lag that I mentioned earlier. If your signal is based on some big size entering the book, look at the lag between the time you send an order and when it's reflected in the pricefeed. If you give me a specific example we can throw some ideas around; it could be a learning experience for me too.
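"Modeling the lag" might look something like the following sketch: collect a batch of observed Fill-->LastTrade lags and summarize the distribution, so you can bound a signal's age instead of assuming it's constant. Function name and percentile choice are mine, purely for illustration.

```python
import statistics

def lag_model(lag_samples_us):
    # Summarize observed Fill->LastTrade lags (microseconds) so a signal's
    # staleness can be bounded: use the median as the typical case and a
    # high percentile as the pessimistic case. Illustrative only.
    lags = sorted(lag_samples_us)
    return {
        "median": statistics.median(lags),
        "p95": lags[int(0.95 * (len(lags) - 1))],
    }
```

If the p95 lag already exceeds the window in which your signal is predictive, you know up front that the strategy can't work on that feed, regardless of how fast your own processing is.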