Hi prophet and nitro, I found your information on precision timing very useful. Would you have any references on precision absolute time under Windows? I just set up NTP under Linux. Did either of you try something like this on Windows? I am interested in this in order to run post-mortem checks of quote data streams against exchange data records. Thank you, nononsense
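For reference, here is a minimal sketch of pulling the current time straight from an NTP server in C#, assuming only a stock .NET install and UDP access to a public pool server (the server name is just an example, not something from this thread). Comparing the reply against DateTime.UtcNow gives a rough idea of the local clock's offset:

```csharp
using System;
using System.Net;
using System.Net.Sockets;

class SntpCheck
{
    static void Main()
    {
        // 48-byte SNTP request: LI = 0, version = 3, mode = 3 (client).
        byte[] packet = new byte[48];
        packet[0] = 0x1B;

        UdpClient udp = new UdpClient("pool.ntp.org", 123);   // example server, pick your own
        udp.Send(packet, packet.Length);

        IPEndPoint remote = new IPEndPoint(IPAddress.Any, 0);
        byte[] reply = udp.Receive(ref remote);
        udp.Close();

        // Server transmit timestamp: 32.32 fixed point, seconds since 1900-01-01 UTC,
        // big-endian, starting at byte 40 of the reply.
        ulong seconds  = ((ulong)reply[40] << 24) | ((ulong)reply[41] << 16)
                       | ((ulong)reply[42] << 8)  |  (ulong)reply[43];
        ulong fraction = ((ulong)reply[44] << 24) | ((ulong)reply[45] << 16)
                       | ((ulong)reply[46] << 8)  |  (ulong)reply[47];

        double ms = seconds * 1000.0 + fraction * 1000.0 / 4294967296.0;
        DateTime ntpUtc = new DateTime(1900, 1, 1, 0, 0, 0, DateTimeKind.Utc).AddMilliseconds(ms);

        Console.WriteLine("NTP server (UTC):  " + ntpUtc.ToString("HH:mm:ss.fff"));
        Console.WriteLine("Local clock (UTC): " + DateTime.UtcNow.ToString("HH:mm:ss.fff"));
    }
}
```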
Very close to getting my software to run under the beta of WS2003Ent 64-bit for Extended Systems. My code will still be 32-bit, but the Sandra tests I did before showed huge performance gains even for a 32-bit program running under a 64-bit OS on the Tyan.

I downloaded the Visual C# 2005 Express Edition Beta, which uses .NET 2.0 and therefore will install on the beta 64-bit Windows. I found out that .NET 2.0 now enforces some things (threads and GUI issues) that .NET 1.x didn't. I knew it was coming, but it worked, so I never bothered fixing it. Otherwise the code compiles and runs fine. Amazing...

Now I have to think of the right way to write it so that I don't end up with a bunch of spaghetti code. For the first time ever I may have a real need for Reflection, or even Generics, which are new in C# 2.0. I am a little afraid of the performance cost of Reflection, so I will have to make sure I am not wasting CPU time just to get elegant code. I would rather have spaghetti code than waste CPU cycles... Generics, on the other hand, would offer elegant code with no performance penalty for sure...

Once I get this ported to 64-bit Windows, I will start weighing the cost of a *nix port in C++, or possibly a C++ port that compiles on several platforms... A lot of work if Windows gives me what I need... I downloaded the Solaris beta that runs Linux software unchanged (in theory). Really not sure which way to go here, but the easy choice for moving to 64 bits now is the Windows 64-bit beta with the C# 2.0 and .NET 2.0 betas. nitro
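A rough sketch of the tradeoff nitro is weighing (a generic illustration, not his actual code): a generic method is bound when the JIT compiles the call and runs directly, while the reflection route does a runtime method lookup and boxes arguments and return values on every call, which is where the cost comes from:

```csharp
using System;
using System.Reflection;

class Program
{
    // Generic version: T is resolved at JIT time, so the call is direct and
    // value types are not boxed.
    static T Echo<T>(T value)
    {
        return value;
    }

    static void Main()
    {
        // Direct generic call -- no runtime lookup, no boxing.
        int a = Echo<int>(42);

        // Reflection-based call -- late-bound: look up the open method, close it
        // over int, then Invoke, which boxes both the argument and the result.
        MethodInfo open = typeof(Program).GetMethod("Echo",
            BindingFlags.Static | BindingFlags.NonPublic);
        MethodInfo closed = open.MakeGenericMethod(typeof(int));
        int b = (int)closed.Invoke(null, new object[] { 42 });

        Console.WriteLine(a + " " + b);
    }
}
```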
right. a system call, which will get executed based on a system time slice boundary. you see the problem, right? you're now dealing with 2 separate discretizations of time. this gets hairy in a hurry, and the problems are grossly magnified when running multiple processors. yes, it has a millisecond field, but that is not the same as having a clock sliced at 1 ms intervals. the system clock under XP runs at 10 ms intervals, which is where the problem really lies. you have to step up to linux to get a real 1 ms clock.
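A quick way to see this for yourself from C# (a minimal sketch, nothing XP-specific in the code itself): spin on DateTime.Now and print how far apart consecutive distinct readings land. On a stock XP/2003 box the steps typically come out around 10-15 ms, even though the value carries a millisecond field:

```csharp
using System;

class ClockGranularity
{
    static void Main()
    {
        long last = DateTime.Now.Ticks;   // 1 tick = 100 ns
        int observed = 0;

        // Record the size of the jump each time the reported time actually changes.
        while (observed < 20)
        {
            long now = DateTime.Now.Ticks;
            if (now != last)
            {
                Console.WriteLine("clock stepped by {0:F3} ms", (now - last) / 10000.0);
                last = now;
                observed++;
            }
        }
    }
}
```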
prophet: re: your PM... you're going to see clustering at those time scales whether or not there really is any. if the market consisted of nothing but automatons synced to the same universal clock issuing synchronous orders, you'd still "see" clustering when collecting data at your end because of the variability in speeds across the asynchronous connections to the marketplace. it's no different than regularly scheduled buses in traffic - no matter how tightly run the schedule, they will cluster. the only way to collect such data meaningfully is at the exchange itself (which is what Oanda was created for). what would be interesting is if they also continuously collected ping-type data, so you could back out the various delays and get a sense of what people were actually seeing on their screens when they sent/changed their own orders.
BTW, my earlier answer to this was at best incomplete. Take a look at http://msdn.microsoft.com/library/d...nosticsprocessclassprocessoraffinitytopic.asp This is the .NET way (Process.ProcessorAffinity). There are plain SDK ways to do it too. nitro
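For concreteness, a minimal sketch of the .NET route (the mask value here is just an example):

```csharp
using System;
using System.Diagnostics;

class AffinityExample
{
    static void Main()
    {
        Process proc = Process.GetCurrentProcess();

        // The affinity mask is a bit field: bit 0 = CPU 0, bit 1 = CPU 1, and so on.
        // 0x1 restricts the whole process to the first processor.
        proc.ProcessorAffinity = (IntPtr)0x1;

        Console.WriteLine("Affinity mask is now 0x{0:X}", proc.ProcessorAffinity.ToInt64());
    }
}
```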
Damir... you are too quick to dismiss this as a phantom or some artifact of network latencies. If that were the case, there would be a lot more randomness, evolution over time and differences between symbols. This is NOT what I observe. These patterns are consistent over time and across different symbols, including ES, NQ, 6E, ER2 and YM. They involve the timing and ordering of bid/ask depth/price changes and of reported trades. Certain clustering assumptions yield net-profitable trading models across all symbols; others don't. So clearly this is more than an artifact of multiple participants with different latencies. I cannot be sure whether IB or Globex is responsible; I simply see the structure and have used it successfully. There is plenty of usefulness in timestamps applied at the client (e.g. IB TWS), despite exchange and broker latencies. I am finding useful patterns at 1/100-second timestamp resolution, and still I am only scratching the surface with this form of analysis.
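To make that kind of check concrete, here is a minimal sketch of one simple test (my own illustration, not prophet's actual analysis; LoadTimestamps is a placeholder, not a real API): bucket the client-side timestamps into 10 ms bins within each second and look at the histogram. Pronounced peaks at particular offsets would be one crude sign of timing structure; a flat histogram would not be:

```csharp
using System;
using System.Collections.Generic;

class TimestampClustering
{
    static void Main()
    {
        // Hypothetical input: client-side timestamps (e.g. as applied by TWS),
        // one per quote/trade event, collected from a recorded feed.
        List<DateTime> stamps = LoadTimestamps();

        // Histogram the sub-second offset of each event in 10 ms buckets.
        int[] buckets = new int[100];
        foreach (DateTime t in stamps)
            buckets[t.Millisecond / 10]++;

        for (int i = 0; i < buckets.Length; i++)
            Console.WriteLine("{0,3} ms: {1}", i * 10, buckets[i]);
    }

    static List<DateTime> LoadTimestamps()
    {
        // Stub so the sketch compiles; real data would come from your own recording.
        return new List<DateTime>();
    }
}
```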