Anyone using ACTIV? (tick level data)

Discussion in 'Data Sets and Feeds' started by WinstonTJ, Mar 31, 2010.

  1. How much data can they push before I start running into issues on their end?

    Sterling is a single-threaded application and started maxing out for me at around 900 data points. ACTIV is better, but I'm looking to expand from about 9,000 real-time data points to 12k or 15k.

    ACTIV is great, but it approaches Bloomberg B-Pipe prices once I start looking at London and the NYSE. Will ACTIV max out at a certain point?

    Anyone with experience please chime in.

    Thx.
     
  2. thstart

    What happens?
     
  3. Long answer: the quote server bogs down and cannot supply quotes to me or to any other individual using that quote server.

    Short answer: quotes are delayed.
     
  4. Anyone?
     
  5. I've had some experience with Activ and other full-market all-message feeds.

    Even if you are equities-only, the peak message rate has increased so much in the last few years that you will need a multi-threaded feed-handling app to keep up. You'll also need a fairly powerful multi-core CPU (or two of them), and lots of RAM helps too. My advice is to use multiple ring buffers and hash maps (write your own; the boost and std::tr1 versions are too slow) and manage thread priorities based on whichever of these starts to fall behind.

    I used to parse feeds in Java, but had to move back to C/C++ because Java just couldn't keep up.

    Which Activ feed are you looking into? Are you on Windows or Linux?
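    To illustrate the ring-buffer idea above: a minimal single-producer/single-consumer ring in C++11, with a size() that a supervisor thread could poll to re-prioritize whichever consumer is falling behind. This is a hypothetical sketch under those assumptions, not Activ's API; all names are made up.

```cpp
#include <atomic>
#include <cstddef>

// Minimal lock-free single-producer / single-consumer ring buffer sketch.
// N must be a power of two so that (index & (N - 1)) wraps correctly.
template <typename T, size_t N>
class SpscRing {
    T buf_[N];
    std::atomic<size_t> head_{0};  // advanced by the consumer
    std::atomic<size_t> tail_{0};  // advanced by the producer
public:
    bool push(const T& v) {        // producer thread only
        size_t t = tail_.load(std::memory_order_relaxed);
        if (t - head_.load(std::memory_order_acquire) == N) return false;  // full
        buf_[t & (N - 1)] = v;
        tail_.store(t + 1, std::memory_order_release);
        return true;
    }
    bool pop(T& out) {             // consumer thread only
        size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire)) return false;      // empty
        out = buf_[h & (N - 1)];
        head_.store(h + 1, std::memory_order_release);
        return true;
    }
    // Backlog depth: what a supervisor would watch to bump thread priorities.
    size_t size() const {
        return tail_.load(std::memory_order_acquire) -
               head_.load(std::memory_order_acquire);
    }
};
```

    A real handler would pin each producer/consumer pair to cores and size N for worst-case message bursts rather than average rates.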
     
  6. I am looking for a Windows-based solution. I have a quad-core machine running Windows 7 with 2 GB of memory.

    Hopefully this is enough.
     
  7. Occam

    I think NxCore is easier to implement than Activ and is cheaper (at least at default pricing). But Activ has a broader range of data and markets covered (e.g., BBO + 10 for the major equities exchanges, as well as more non-North-American exchanges).
     
  8. My experience with really thick UDP data feeds like those used at Genesis (is that Activ?) was to take a multithreaded parsing approach where each port had a separate thread, kept a queue of messages, and updated shared memory for symbols of interest. This was the only way I could get thousands of data points in memory in real time. Parsing the six feeds (SIAC 1-5 + Nasdaq) was itself resource-intensive.

    I think pros would divide and conquer: have separate machines recording data (different feeds, as the message rates can surpass disk write speeds), and other machines just filtering messages and passing on pertinent info to yet other machines that actually make trading decisions.
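    The per-port fan-out described above reduces, inside each port's thread, to something like this hypothetical sketch (names are illustrative; a real handler would drain a raw UDP socket and use lock-free structures rather than a plain map):

```cpp
#include <string>
#include <unordered_map>
#include <vector>

// One parsed message from a feed.
struct Tick {
    std::string symbol;
    double price;
};

// Each port's thread would call this on the batch of messages it drained,
// keeping only the latest print per symbol of interest in its book.
void apply_batch(const std::vector<Tick>& batch,
                 std::unordered_map<std::string, double>& last) {
    for (const auto& t : batch)
        last[t.symbol] = t.price;  // later ticks overwrite earlier ones
}
```

    Sharding the book per thread (one map per port) avoids cross-thread locking on the hot path; a downstream stage can then merge the shards.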
     
  9. nitro

    This is the way to go, imo. I have been trying to see whether, with the new TPL Dataflow in .NET 4.x, you can swap out the scheduler for something like LMAX's Disruptor. Otherwise, doing this (HFT) in C# is not possible, imo. C++ is probably the right way to do it anyway. You even have to understand the subtleties of the OS on which you are running; Linux, however, is a safe bet. Low-latency programming is about keeping things really simple and understanding the machine and the OS on an intimate level.

    I lost interest in this stuff because I thought it was a bottomless pit where I couldn't even step up to the plate, but a recent request from an ET person got me interested again. With 10GigE, and the ability to inject a message into a separate machine over a 10GbE port in micro- (nano-?) seconds, plus the right software and a proper understanding of the CPU, cache, and GC, you may be able to compete against the big houses on HFT again (assuming, of course, you can pair this technology layer with a strategy that has edge).

    I read somewhere that HFT makes about $30 per $100k in margin per minute. That works out to $1,800 an hour, or roughly $12,000 over a 6.5-hour trading day. I am not sure whether commissions are included in those numbers, but I suspect that even the lowest rates available to us retail traders are a world away from making HFT possible. Still, if you can get anywhere near that pay rate, it can pay for the hardware, the expensive data feeds, and the colos...

    Sadly I don't have direct experience with any of this.
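    For what it's worth, the Disruptor idea mentioned above boils down to a pre-allocated ring plus a published sequence cursor that consumers busy-spin on instead of taking a lock. A hypothetical minimal single-writer version in C++ (this is not the LMAX API; all names are made up):

```cpp
#include <atomic>
#include <cstdint>
#include <cstddef>

// Disruptor-style sketch: one writer claims slots in a fixed ring and
// publishes a cursor; readers busy-spin on the cursor rather than locking.
// N must be a power of two.
template <typename T, size_t N>
struct MiniDisruptor {
    T ring[N];
    std::atomic<int64_t> cursor{-1};  // highest sequence published so far

    void publish(const T& v) {        // single-writer only
        int64_t seq = cursor.load(std::memory_order_relaxed) + 1;
        ring[seq & (N - 1)] = v;
        cursor.store(seq, std::memory_order_release);
    }

    // Consumer: spin until sequence `seq` is visible, then read the slot.
    T wait_for(int64_t seq) const {
        while (cursor.load(std::memory_order_acquire) < seq) { /* spin */ }
        return ring[seq & (N - 1)];
    }
};
```

    The busy-spin trades CPU for latency: no syscalls, no context switches, which is exactly the "keep it simple, know the machine" point made above. The real Disruptor adds batching and multiple pluggable wait strategies on the same idea.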
     
    #10     Dec 12, 2011