any IB java api experts? code question

Discussion in 'Automated Trading' started by newguy05, May 6, 2009.

  1. Maybe, but the use of a queue is very simple and more than adequate for handling the data rates from TWS. I do it this way, and any overhead is immeasurably small. I can monitor dozens of instruments and update hundreds of time series and you are still talking tiny CPU utilisation. It is the upstream code that may load the CPU depending on what you are doing.

    The stuff in java.util.concurrent is pretty good.

    No need for thread pools at all.
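
    For what it's worth, a minimal sketch of the queue approach (the Tick class, the names and the single consumer thread are just illustration, not the only way to wire it up):

    Code:
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class TickQueue {

        // Simple immutable holder for one tick; field names are made up.
        static class Tick {
            final int tickerId;
            final int field;
            final double price;
            Tick(int tickerId, int field, double price) {
                this.tickerId = tickerId;
                this.field = field;
                this.price = price;
            }
        }

        private final BlockingQueue<Tick> queue = new LinkedBlockingQueue<Tick>();

        // Called from the API callback thread (e.g. inside EWrapper.tickPrice):
        // just enqueue and return, so the callback never blocks on strategy work.
        public void onTickPrice(int tickerId, int field, double price) {
            queue.offer(new Tick(tickerId, field, price));
        }

        // One consumer thread drains the queue and feeds the rest of the system.
        public void startConsumer() {
            Thread consumer = new Thread(new Runnable() {
                public void run() {
                    try {
                        while (true) {
                            Tick t = queue.take();   // blocks until a tick arrives
                            process(t);
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();  // exit quietly on shutdown
                    }
                }
            }, "tick-consumer");
            consumer.setDaemon(true);
            consumer.start();
        }

        private void process(Tick t) {
            // upstream code: update time series, indicators, order logic, etc.
        }
    }

    The callback thread only does an offer(), so the time spent inside tickPrice() stays tiny no matter what the consumer is doing.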
     
    #21     May 6, 2009
  2. that is cool if it works for your specific application.

    we are supporting a platform that several thousand people have downloaded... some commercial users have 100+ instruments so little things count. if you use tradelink there is no need to understand any of this (although if you want to, the source code is open so you're welcome to).

    anyways, I am sure the original poster is happy that he has many solutions to choose from.
     
    #22     May 6, 2009
  3. Didn't read all the responses, gotta run to work, but I did a few simple tests just now and confirmed tickPrice() is synced. This makes my life a whole lot easier now, but I am kind of surprised IB does not leave this to the developer.

    Just a comment about starting a new thread in tickPrice(): it's usually a bad idea unless it's properly managed with a thread pool (Spring has a good one) or at least a static max thread limit, especially in such a high-volume data stream listener function (rough sketch at the end of this post).

    tradelink, can you explain this a bit more: what do you mean by "the function signature will look different"? tickPrice() just seems like a regular function; I guess it's triggered from some type of data stream listener internally. But what about the function signature gave it away that it's synced, without running tests to confirm? Thanks!
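
    Rough sketch of what I mean by a managed pool, using the plain JDK ExecutorService rather than Spring's; the pool size and names here are just placeholders:

    Code:
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;

    public class PooledTickHandler {

        // Fixed-size pool: the threads are created once and reused,
        // instead of spawning a brand new Thread for every tick.
        private final ExecutorService pool =
                Executors.newFixedThreadPool(Runtime.getRuntime().availableProcessors());

        // Called from the API callback thread.
        public void onTickPrice(final int tickerId, final int field, final double price) {
            pool.submit(new Runnable() {
                public void run() {
                    handle(tickerId, field, price);   // work happens off the callback thread
                }
            });
        }

        private void handle(int tickerId, int field, double price) {
            // strategy / bookkeeping work
        }

        public void shutdown() {
            pool.shutdown();
        }
    }

    One caveat: with more than one pool thread, ticks for the same symbol are no longer guaranteed to be processed in arrival order.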
     
    #23     May 6, 2009
  4. chvid

    guys - creating threads like this is not wasteful - give it a go if you don't believe me
     
    #24     May 6, 2009
  5. chvid,

    I am happy your code works well for you, hopefully it stays that way.

    google threading to learn more.

    cheers,

    -josh
     
    #25     May 6, 2009
  6. chvid

    right after you, josh
     
    #26     May 6, 2009
  7. I put all the ticks in a queue (synchronized collection), which gets accessed by another thread that pulls out all the ticks and makes them available to other components of the system, which are also running in separate threads.
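
    Roughly like this (class and listener names are just for illustration):

    Code:
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.BlockingQueue;
    import java.util.concurrent.LinkedBlockingQueue;

    public class TickDispatcher implements Runnable {

        public interface TickListener {
            // each tick passed as {tickerId, field, price}; listeners must copy
            // the batch if they want to keep it, because it gets reused
            void onTicks(List<double[]> batch);
        }

        private final BlockingQueue<double[]> queue = new LinkedBlockingQueue<double[]>();
        private final List<TickListener> listeners = new ArrayList<TickListener>();

        // Producer side: called from the API callback thread.
        public void enqueue(int tickerId, int field, double price) {
            queue.offer(new double[] { tickerId, field, price });
        }

        // Register components before starting the dispatcher thread.
        public void addListener(TickListener l) {
            listeners.add(l);
        }

        // Consumer side: runs in its own thread, pulls out everything queued so far
        // and hands the batch to the other components of the system.
        public void run() {
            List<double[]> batch = new ArrayList<double[]>();
            try {
                while (true) {
                    batch.add(queue.take());   // wait for at least one tick
                    queue.drainTo(batch);      // then grab whatever else is waiting
                    for (TickListener l : listeners) {
                        l.onTicks(batch);
                    }
                    batch.clear();
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }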

    -david
     
    #27     May 6, 2009
  8. chvid,

    1. do you understand that you are creating a new thread every time tickprice is called?

    2. if tickprice is called up to 5 times a second, how many threads is this throughout the course of a trading day?

    3. on your windows task manager, how many threads in total exist in your thread column? (mine is about 300 across entire OS)

    4. how many cores do you have on your machine... do you know what happens when you have more threads than cores?

    5. is it more wasteful to startup and destroy 135,000 threads over the course of a day... or to startup the number of threads that can actually be used at once, and re-use them?

    6. what happens when real code is introduced into your example, such as needing to calculate a moving average across multiple calls to tickprice for the same symbol.... if you have 5 different threads trying to update the MA at the same time... what will happen? (see the sketch after this list)

    7. do you think that threads always run in sequence? what happens if you start 5 threads in a second, but the last one actually finishes first?
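
    to make #6 concrete: a shared moving average needs a guard, or the interleaved read-modify-write from several threads will silently corrupt it. rough sketch (the class is mine, nothing to do with the IB api):

    Code:
    public class MovingAverage {

        private final double[] window;
        private double sum;
        private int next;
        private int count;

        public MovingAverage(int size) {
            window = new double[size];
        }

        // without "synchronized", two callback threads calling add() at the same time
        // can interleave the updates to sum/next/count and the average quietly drifts
        // away from the true value.
        public synchronized void add(double price) {
            sum -= window[next];        // drop the value being overwritten
            window[next] = price;
            sum += price;
            next = (next + 1) % window.length;
            if (count < window.length) {
                count++;
            }
        }

        public synchronized double value() {
            return count == 0 ? Double.NaN : sum / count;
        }
    }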
     
    #28     May 6, 2009
  9. rosy2

    #29     May 6, 2009
  10. This thread isn't about HF trading, but...

    In high-frequency trading, it isn't enough to just have a thread pool. You have to understand realtime schedulers, deal with priority inversion, and work with the standard OS scheduling mechanisms, as well as understand how the scheduler picks processes with similar page-table constructions to schedule next.

    The part of this thread that made me cringe was the idea that someone actually considered creating a new thread just to handle an event asynchronously. The issue at hand is not just context-switch overhead, but heap allocation and fragmentation. To some degree, slab allocators like the one in the Linux kernel will save you some penalty, but you are looking at a baseline cost of 50 microseconds on the brk() system call from user space (on a modern quad-core system @ 2.7GHz), plus some O(n^2)-or-worse algorithmic penalty. System call overhead is enough to make you lose the opportunity to send out that IOC order and remove liquidity when you have to escape.

    I don't know whether the Java VM is intelligent enough to recycle the blocks of memory for the creation of runnable thread objects, ... but in general, you can assume that there is substantial penalty in allocating objects on the heap in response to some sort of tick.
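
    As a rough illustration (the class and names below are made up, not part of the IB API): instead of allocating a new object or Runnable per tick, you can reuse one preallocated holder on the callback thread.

    Code:
    // Mutable holder, allocated once up front and reused for every tick.
    class MutableTick {
        int tickerId;
        int field;
        double price;
    }

    public class AllocationFreeHandler {

        private final MutableTick scratch = new MutableTick();

        // Runs on the API callback thread: no "new", no Runnable, no Thread per tick.
        public void onTickPrice(int tickerId, int field, double price) {
            scratch.tickerId = tickerId;
            scratch.field = field;
            scratch.price = price;
            handle(scratch);   // synchronous handoff; only safe if handled on this thread
        }

        private void handle(MutableTick t) {
            // update indicators / series in place
        }
    }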

    For this guy's case, maybe it was fast enough; however, I would not let thread-per-tick stuff anywhere near my codebase.

    When I worked at the IB, I had to fight with people who believed it was ok to use something like std::vector<> in managing packets off the wire. Guess what happens when the market moves fast? The default allocator will kick in and try to double the allocation of memory at the time when you least want to deal with memory allocation.

    Wrestling with people over these issues is becoming a flat-out nuisance in the industry, because people come out of these MSFE programs with 1 or 2 C++ courses under their belt and no background in architecture or OS development whatsoever.
     
    #30     May 6, 2009