GPU-accelerated high-frequency trading

Discussion in 'Automated Trading' started by alpha_tokyo, Oct 12, 2009.

  1. Recent developments in high-performance computing provide strong evidence that graphics cards can be used successfully as high-performance many-core processors.

    This makes it possible to accelerate a wide range of applications. These workhorses should be an excellent fit for financial markets, in particular for high-frequency trading.

    Is anybody already using graphics cards (GPUs) to accelerate trading algorithms? I'm currently transferring some theoretical results into a GPU implementation for a trading interface.

    Thanks in advance!
     
  2. rosy2

    Yes, they're used. I know of pricing engines and analytics, even at banks now, which are the last to adopt. Firms are also using hardware like Fusion-io now.
     
  3. Very interesting topic! I have already thought about the possibilities and advantages this approach could deliver. How can the user benefit from GPU support? Which functions could make meaningful use of the computing power?

    On the other hand, you have to make sure that the app will also work on machines without a supported GPU. That could become major work for the developer...

    But anyway - here's a link to the NVIDIA site providing more information about what could be implemented and how these applications can work: http://www.nvidia.com/object/tesla_computing_solutions.html

    Daniel
     
  4. maxpi

    The latest news is that NVIDIA is stopping making chipsets; they've been frozen out by Intel...
     
  5. Corey

    Given that NVIDIA just announced Fermi, I don't think they're stopping their chipsets.

    Anyway, for those getting into CUDA, check out the Thrust project. It wraps CUDA in an STL-like interface for C++-style programming. Much easier to wrap your head around at first; see the sketch below.

    I am hoping OpenCL picks up soon -- it isn't bound to just the GPU, which is nice. Niche processors are a cool idea.
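
    To give a feel for that STL-like style, here's a minimal Thrust sketch -- nothing trading-specific, it just sums squared values on the device to show the programming model (assumes the CUDA toolkit with Thrust, built with nvcc; the data are made up for illustration):

    // minimal sketch only -- sums squared values on the GPU
    #include <thrust/host_vector.h>
    #include <thrust/device_vector.h>
    #include <thrust/transform_reduce.h>
    #include <thrust/functional.h>
    #include <cstdio>

    // unary functor applied to every element on the device
    struct square
    {
        __host__ __device__ float operator()(float x) const { return x * x; }
    };

    int main()
    {
        // hypothetical per-tick returns, filled on the host
        thrust::host_vector<float> h_returns(1 << 20, 0.001f);

        // assigning to a device_vector handles cudaMalloc/cudaMemcpy for you
        thrust::device_vector<float> d_returns = h_returns;

        // parallel map + reduce on the GPU, written like an STL algorithm
        float sum_sq = thrust::transform_reduce(d_returns.begin(), d_returns.end(),
                                                square(), 0.0f, thrust::plus<float>());

        std::printf("sum of squared returns: %f\n", sum_sq);
        return 0;
    }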
     
  6. The concept is not new; check out OpenVIDIA.
     
  7. Occam

    GPUs may not be Turing complete, or if so then only by hook or by crook. You may end up banging your head against the wall, depending on what it is you're trying to parallelize. If you're doing very intensive, obviously parallel operations such as matrix multiplies, these may work well for you, though.

    Intel's Larrabee may be easier to program in this regard, but I still don't think it's going to have the branch-prediction wizardry, etc. that a Core 2 or later processor has. These kinds of features make things seem to run a lot faster than they otherwise would, so your speedup from going to GPUs may not be as big as anticipated (unless, of course, you're doing something that maps well to their architecture).
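
    For reference, this is the kind of "obviously parallel" work that maps well: one thread per output element and essentially no branching. A naive, untuned sketch (names and sizes are illustrative; a real version would use shared-memory tiling or simply call cuBLAS):

    // naive n x n matrix multiply, one thread per output element
    __global__ void matmul_naive(const float* A, const float* B, float* C, int n)
    {
        int row = blockIdx.y * blockDim.y + threadIdx.y;
        int col = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < n && col < n) {
            float acc = 0.0f;
            for (int k = 0; k < n; ++k)
                acc += A[row * n + k] * B[k * n + col];
            C[row * n + col] = acc;
        }
    }

    // launch, e.g.:
    //   dim3 block(16, 16), grid((n + 15) / 16, (n + 15) / 16);
    //   matmul_naive<<<grid, block>>>(d_A, d_B, d_C, n);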
     