GPU-accelerated high-frequency trading

Discussion in 'Automated Trading' started by alpha_tokyo, Oct 12, 2009.

  1. Occam

    Occam

    As far as I can tell, this discussion has little or nothing to do with chipsets. CUDA-esque computation is all done on discrete GPUs, the main (and by far most profitable) part of NVDA's business, which is not the same as chipsets.

    (FYI, the "chipset" is basically a pair of helper chips -- the northbridge and southbridge -- that let the CPU talk to the rest of the components, such as the memory (on older platforms) and the discrete graphics card, if applicable. Chipsets may also include integrated graphics, but that is a low-end solution, not the kind of part used by CUDA et al.)
     
    #11     Oct 12, 2009
  2. maxpi

    maxpi

    I used to develop chips, actually... it's interesting work... but I know little of what NVIDIA does... I assumed that all their products centered on proprietary stuff they developed...
     
    #12     Oct 12, 2009
  3. Does anyone here actually use this stuff? If so, how in the world would you get started? I actually have a degree in EE, but when all this technology meets financials/high-frequency trading, I feel like a lost puppy. How do you even get started in this?
     
    #13     Oct 12, 2009
  4. rosy2

    rosy2

    http://www.nvidia.com/object/cuda_home.html
     
    #14     Oct 12, 2009
  5. I'm sure this will become something normal to use for what we are doing 5-10 years from now.
    It's just too early until we have proven wrappers to make things easier.
    accelereyes.com is working on this for MATLAB, which IMO would probably be the way to go.
     
    #15     Oct 13, 2009
  6. Actually, it is not so complicated. If you are familiar with C or a comparable programming language, you can write a simple GPU program after a few days ...

    Then, the idea is to connect this power to a trading API ...
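
    A minimal sketch of such a first GPU program, assuming CUDA since that is what this thread is about (all names and sizes here are illustrative): the canonical starting point is a vector add, one GPU thread per element.

        #include <cstdio>
        #include <cuda_runtime.h>

        // One thread per element: the canonical first CUDA kernel.
        __global__ void add(const float *a, const float *b, float *c, int n)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i < n) c[i] = a[i] + b[i];
        }

        int main()
        {
            const int n = 1 << 20;
            const size_t bytes = n * sizeof(float);

            float *ha = new float[n], *hb = new float[n], *hc = new float[n];
            for (int i = 0; i < n; ++i) { ha[i] = float(i); hb[i] = 2.0f * i; }

            float *da, *db, *dc;
            cudaMalloc(&da, bytes); cudaMalloc(&db, bytes); cudaMalloc(&dc, bytes);
            cudaMemcpy(da, ha, bytes, cudaMemcpyHostToDevice);
            cudaMemcpy(db, hb, bytes, cudaMemcpyHostToDevice);

            add<<<(n + 255) / 256, 256>>>(da, db, dc, n);

            cudaMemcpy(hc, dc, bytes, cudaMemcpyDeviceToHost);
            printf("c[42] = %f\n", hc[42]);   // expect 126.0 (42 + 84)

            cudaFree(da); cudaFree(db); cudaFree(dc);
            delete[] ha; delete[] hb; delete[] hc;
            return 0;
        }

    The same skeleton -- copy in, launch the kernel, copy out -- carries over once the kernel does something finance-related.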
     
    #16     Oct 13, 2009
  7. Corey

    Corey

    Its application to trading is fairly obvious, especially on the quantitative side. Matrix transformations, numerical integration, et cetera can be performed on thousands of securities in parallel instead of in sequence.

    Let's say I am doing a Monte Carlo pricing of some options. Instead of having to do each path of each option in sequence, I can do all paths of each option in parallel, or one path for all options in parallel.

    Either way, it's a huge speedup. The power is very impressive, and it is very easy to use.
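
    To make that concrete, here is a minimal sketch of the one-thread-per-path variant in CUDA, pricing a European call under geometric Brownian motion. All parameter values and names are illustrative, not from this thread:

        #include <cstdio>
        #include <cmath>
        #include <cuda_runtime.h>
        #include <curand_kernel.h>

        // Each thread simulates one terminal price (exact one-step GBM)
        // and writes its discounted payoff.
        __global__ void mc_call(float S0, float K, float r, float sigma,
                                float T, int nPaths, float *payoffs)
        {
            int i = blockIdx.x * blockDim.x + threadIdx.x;
            if (i >= nPaths) return;

            curandState rng;
            curand_init(1234ULL, i, 0, &rng);   // independent stream per path

            float z  = curand_normal(&rng);
            float ST = S0 * expf((r - 0.5f * sigma * sigma) * T
                                 + sigma * sqrtf(T) * z);
            payoffs[i] = expf(-r * T) * fmaxf(ST - K, 0.0f);
        }

        int main()
        {
            const int nPaths = 1 << 20;         // ~1M paths in one launch
            float *d_payoffs;
            cudaMalloc(&d_payoffs, nPaths * sizeof(float));

            mc_call<<<(nPaths + 255) / 256, 256>>>(100.0f, 100.0f, 0.05f,
                                                   0.2f, 1.0f, nPaths, d_payoffs);

            // Average on the host for brevity; a reduction kernel is the
            // usual next step.
            float *h = new float[nPaths];
            cudaMemcpy(h, d_payoffs, nPaths * sizeof(float),
                       cudaMemcpyDeviceToHost);
            double sum = 0.0;
            for (int i = 0; i < nPaths; ++i) sum += h[i];
            printf("MC call price: %f\n", sum / nPaths);

            cudaFree(d_payoffs); delete[] h;
            return 0;
        }

    Pricing many options at once is then just a second grid dimension, or an option index folded into the thread index.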
     
    #17     Oct 13, 2009
  8. cashcow

    cashcow

    As another poster suggested, GPU cards have certain limitations. Unless you are doing complex matrix manipulation, it probably is not worth it. In my experience, you can usually optimize most algorithms to run faster on the main CPU than it takes to transfer the data to the GPU, process it, and send it back.
    Unfortunately for many of the banks, I think this technology is a new trendy geek thing that 70%+ of the time gives little advantage. I can think of scenarios where GPUs may be useful for calculating risk on large portfolios very quickly, but as for execution of high-frequency trades, I cannot think of a single example where the GPU approach would be beneficial.
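
    For what it's worth, that round-trip cost is easy to measure. A rough sketch with CUDA events (sizes are illustrative; on a 2009-era PCIe 2.0 x16 link at roughly 6 GB/s effective, copying 64 MB each way costs on the order of 20 ms, which swamps many small kernels):

        #include <cstdio>
        #include <cuda_runtime.h>

        int main()
        {
            const size_t n = 1 << 24;           // 16M floats = 64 MB
            float *h = new float[n](), *d;      // zero-initialized host buffer
            cudaMalloc(&d, n * sizeof(float));

            cudaEvent_t t0, t1;
            cudaEventCreate(&t0); cudaEventCreate(&t1);

            cudaEventRecord(t0);
            cudaMemcpy(d, h, n * sizeof(float), cudaMemcpyHostToDevice);
            cudaMemcpy(h, d, n * sizeof(float), cudaMemcpyDeviceToHost);
            cudaEventRecord(t1);
            cudaEventSynchronize(t1);

            float ms = 0.0f;
            cudaEventElapsedTime(&ms, t0, t1);
            printf("PCIe round trip: %.1f ms\n", ms);

            cudaFree(d); delete[] h;
            return 0;
        }

    If that number is larger than the compute you are offloading, the CPU wins, which is exactly the point above.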
     
    #18     Oct 13, 2009
  9. nitro

    nitro

    That's interesting. Thanks.
     
    #19     Oct 13, 2009
  10. Occam

    Occam

    Interesting observations, thanks. Also, before spending a lot of time learning the relatively restrictive instruction set of current-generation graphics cards, many groups might find it worthwhile to wait for Intel's Larrabee, which is (supposedly) coming out in 2010 and has a far more general instruction set/computational architecture.
     
    #20     Oct 13, 2009