Tick Database Implementations

Discussion in 'Data Sets and Feeds' started by fatrat, Nov 25, 2006.

  1. Maybe he does what I do ;)

    Millisecond resolution, BUT: I use the 100ns Windows-level tick to ORDER timestamps that share the same time. This lets me preserve, on retrieval, the order of the event stream coming from the interface.
     
    #111     Jun 8, 2012
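A minimal sketch of the ordering scheme described above, under assumptions: a millisecond wall-clock timestamp is stored, and a finer 100ns-resolution counter (here `time.monotonic_ns()` stands in for the Windows FILETIME tick, plus a fallback sequence number) is used only to break ties so same-millisecond ticks keep their arrival order.

```python
import time
from itertools import count

# Illustrative sort key: (millisecond timestamp, 100ns tick, sequence).
# The second and third components only break ties among ticks that
# share the same millisecond, preserving event-stream order.
_seq = count()  # fallback monotonic sequence for identical 100ns ticks

def make_key(ms_timestamp: int) -> tuple:
    # time.monotonic_ns() stands in for the Windows 100ns FILETIME tick;
    # dividing by 100 converts nanoseconds to 100ns units.
    return (ms_timestamp, time.monotonic_ns() // 100, next(_seq))

now_ms = int(time.time() * 1000)
keys = [make_key(now_ms) for _ in range(5)]  # five ticks in the same ms
assert keys == sorted(keys)  # sort order matches arrival order
```

Sorting retrieved ticks by this composite key reproduces the original feed order even when many ticks share one millisecond.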
  2. nitro

    nitro

    Storing timestamps as strings would be fine, but it often breaks the rest of the software you are trying to use, at least out of the box. If this were the only project I had to do, I would probably deal with it. But I don't have time to dick around trying to adapt myself to a solution. I need it to be the other way around right now.

    As for high resolution timing, we are using:

    http://www.symmetricom.com/products...TimeProvider-5000-and-TimeProvider-Expansion/

    with the PCIe1000 cards.
     
    #112     Jun 8, 2012
  3. januson

    januson

    I can assure you that RavenDB is terribly slow compared to a normal RDBMS, and that is caused by the nature of tick data.

    I have experimented with MongoDB, which is amazingly faster than RavenDB in both reads and writes.
    MongoDB is 10x faster than an RDBMS for writes and almost the same speed for reads.

    My findings are based on 15,000,000 ticks indexed on timestamp. MongoDB had journaling enabled :)

    Furthermore, RavenDB totally misses the opportunity for aggregation over time, quite contrary to MongoDB, which has just developed a small aggregation framework.

    I would without any doubt choose MongoDB for writes :)
     
    #113     Jun 12, 2012
  4. emg

    emg



    5 1/2 years later. What is the verdict? Did u get sued for violating a non-compete clause? Did u blow up? Or are u now an HFT trader?
     
    #114     Jun 12, 2012
    Thanks for sharing your RavenDB experience. I am looking for a solution with read times more than 10x faster than an RDBMS. I guess that would rule out RavenDB. Redis is still a contender, but I need to run more performance tests. Have you had the chance to run read performance tests on MongoDB as well, beyond the write tests?

    Thanks


     
    #115     Jun 12, 2012
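One way to run such read tests is a small microbenchmark that times indexed range queries over identical data on each candidate store. The sketch below uses sqlite3 in-memory as a stand-in backend (all names and numbers are illustrative); the same query loop can be pointed at a Redis or MongoDB client to compare.

```python
import sqlite3
import time

# Illustrative read-throughput microbenchmark: time range queries over a
# timestamp-indexed tick table. Swap the sqlite3 calls for another
# store's client to compare backends on identical data.
N_TICKS = 100_000

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE ticks (ts INTEGER, price REAL, size INTEGER)")
db.executemany(
    "INSERT INTO ticks VALUES (?, ?, ?)",
    ((t, 100.0 + (t % 50) * 0.01, 1 + t % 10) for t in range(N_TICKS)),
)
db.execute("CREATE INDEX idx_ticks_ts ON ticks (ts)")

start = time.perf_counter()
n_queries = 100
for i in range(n_queries):
    lo = i * 500
    rows = db.execute(
        "SELECT ts, price, size FROM ticks WHERE ts BETWEEN ? AND ?",
        (lo, lo + 999),
    ).fetchall()
elapsed = time.perf_counter() - start
print(f"{n_queries} range reads in {elapsed:.4f}s "
      f"({n_queries / elapsed:.0f} queries/s)")
```

Running the identical workload against each backend, with the same index and data, is what makes the 10x-style comparisons in the earlier posts meaningful.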
    I've got a lot of experience in these matters, and most implementations start from archives of tick tapes, e.g. NxCore or CME Data Mine: essentially a recording of the whole market, message by message.

    For real-time and tick-by-tick market replay you can run message by message.

    If you're able to intelligently process and store these messages, you can develop a tick-accurate consolidated database. You can shrink the 37M daily tick messages sent for a symbol down to a daily 20K-50K price-tick consolidation. Rinse and repeat for each symbol and your library of tick tapes.

    You can further summarize the consolidations into helper bars, e.g. 1-min OHLC bars, 1-hour bars, etc.

    Now a two-step SQL query can locate any tick in your DB, typically in < 100 ms.
    The key is storing pre-processed consolidations, allowing you to quickly drill down to the bar and fetch ticks without having to process the tapes.

    Hadoop, Hive, Hue, etc. allow you to scale up.
     
    #116     Jun 12, 2012
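The pipeline described above can be sketched end-to-end. The consolidation rule here (keep a message only when the price changes, folding sizes of repeats) is an assumption, not necessarily the poster's exact method, and all table/column names are illustrative; the two-step query first locates the helper bar, then fetches only that bar's ticks, so the raw tape is never scanned.

```python
import sqlite3

# Sketch of the pipeline above (all names/rules are illustrative):
# 1) consolidate raw messages into price-change ticks,
# 2) summarize ticks into 1-minute OHLC-style helper bars,
# 3) two-step query: locate the bar, then fetch only that bar's ticks.

def consolidate(messages):
    """Keep a message only when price changes; fold sizes of repeats."""
    out = []
    for ts, price, size in messages:
        if out and out[-1][1] == price:
            out[-1] = (out[-1][0], price, out[-1][2] + size)
        else:
            out.append((ts, price, size))
    return out

raw = [(1, 100.0, 5), (2, 100.0, 3), (61, 100.4, 2), (62, 101.2, 1), (63, 101.2, 4)]
ticks = consolidate(raw)          # 5 raw messages -> 3 price ticks

db = sqlite3.connect(":memory:")
db.executescript("""
    CREATE TABLE ticks (ts INTEGER, price REAL, bar_id INTEGER);
    CREATE TABLE bars  (bar_id INTEGER PRIMARY KEY, high REAL, low REAL);
    CREATE INDEX idx_ticks_bar ON ticks (bar_id, ts);
""")
db.executemany("INSERT INTO ticks VALUES (?, ?, ?)",
               [(ts, p, ts // 60) for ts, p, _ in ticks])  # 1-min bars
db.execute("""INSERT INTO bars
              SELECT bar_id, MAX(price), MIN(price) FROM ticks GROUP BY bar_id""")

target = 101.2
# Step 1: cheap helper-bar lookup instead of scanning the tape.
bar_id, = db.execute(
    "SELECT bar_id FROM bars WHERE ? BETWEEN low AND high LIMIT 1",
    (target,)).fetchone()
# Step 2: fetch the matching tick inside that bar only.
ts, price = db.execute(
    "SELECT ts, price FROM ticks WHERE bar_id = ? AND price = ?",
    (bar_id, target)).fetchone()
assert (ts, price) == (62, 101.2)
```

Because the bars table is tiny relative to the tick data, step 1 is nearly free, and the `(bar_id, ts)` index makes step 2 touch only one bar's worth of rows.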
    Fair points, but what do you use to persist the data, and especially to read persisted data with high throughput AND intelligent query logic? What are your contenders?

     
    #117     Jun 12, 2012
    We load consolidations into in-memory SQLite DBs. Our SQLite tick DBs are processed, indexed, and stored as CEROD (compressed, encrypted, read-only), chunked at 250 MB. The chunking allows efficient integration with Cloudera (Hadoop, HDFS, MapReduce).
     
    #118     Jun 13, 2012
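A minimal sketch of loading one such chunk into an in-memory SQLite DB, using the standard `sqlite3` backup API. The CEROD format requires SQLite's proprietary CEROD extension, so a plain DB file stands in here, and the chunk name is purely illustrative.

```python
import os
import sqlite3
import tempfile

def load_chunk_to_memory(path):
    """Copy an on-disk SQLite chunk entirely into an in-memory DB."""
    disk = sqlite3.connect(path)
    mem = sqlite3.connect(":memory:")
    disk.backup(mem)          # copy the whole chunk into RAM
    disk.close()
    return mem

# Build a throwaway chunk on disk, then load it (hypothetical file name).
chunk = os.path.join(tempfile.mkdtemp(), "chunk_0001.db")
src = sqlite3.connect(chunk)
src.execute("CREATE TABLE ticks (ts INTEGER, price REAL)")
src.executemany("INSERT INTO ticks VALUES (?, ?)", [(1, 100.0), (2, 100.1)])
src.commit()
src.close()

mem = load_chunk_to_memory(chunk)
count, = mem.execute("SELECT COUNT(*) FROM ticks").fetchone()
assert count == 2
```

Once in memory, queries hit RAM only, which is what makes the sub-100ms lookups described earlier plausible for pre-processed consolidations.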
    Would you know whether any of your solutions provide .NET-compliant APIs? Hadoop, to my knowledge, does not, I think...


     
    #119     Jun 13, 2012
  10. #120     Jun 13, 2012