Cuda questions

Discussion in 'App Development' started by 931, Sep 24, 2016.

  1. OK, OK I'll admit it. I only own AMD-based GPU hardware, I only have experience with OpenCL and I don't do AI :D
    That said I have a personal trove of OpenCL source code for pattern matching, backtesting, etc. CUDA holds no interest for me because of my existing personal code base.
     
    #11     Oct 23, 2016
  2. Zzzz1

    Zzzz1

    Thanks for sharing, and as long as it works for you then all is good, though you definitely bet on the wrong horse there. But if your code accomplishes what you need, you don't have to worry.

     
    #12     Oct 23, 2016
  3. It's not a statistically significant sample, but 2 major crypto applications I know of use or used OpenCL:
    • Bitcoin mining - I mined the bulk of my BTC using GPU-based OpenCL mining software. There were CUDA miners but hardly anyone used them.
    • BitMessage - Anonymous, encrypted, blockchain-based messaging. Uses OpenCL for accelerated proof-of-work
    I'm not sure there's a right or wrong "horse" here as long as I achieve my ends. These arguments about which language is better amount to nothing better than dick measuring contests. I can get from point A to point B faster with OpenCL and C++ than I can with <fill in the blank>. I use what I'm familiar with because it gets the job done. I stopped worrying about "what's the best..." decades ago.

    OP started out saying he was going to use CUDA. I merely pointed out that he should keep an open mind about CUDA vs OpenCL. What if he finds a ton of good examples in OpenCL for what he's trying to accomplish? It's probably better to be agnostic with language selection in this case. Who knows, maybe CUDA would come out on top for him.
     
    Last edited: Oct 23, 2016
    #13     Oct 23, 2016
  4. Zzzz1

    Zzzz1

    Well, in the AI space it very much matters, given that essentially all packages support CUDA outright while hardly any support OpenCL. If you don't work in the AI space, then obviously this is a moot question. But I can assure you (and please independently verify my claim) that nearly every AI framework used by financial practitioners supports CUDA, and that's just not the case for OpenCL.

     
    #14     Oct 23, 2016
  5. 931

    931

    Nice arguments, a lot of info in a short time.
    From the Bitcoin mining days I have also read that AMD GPUs had more but simpler cores, clocked lower, while NVIDIA GPUs have fewer but more complex cores and, if I remember right, usually clocked higher.
    For Bitcoin mining, while GPUs were still worthwhile to use, the AMD ones worked better.
    It may also be application-specific which hardware is best, or which software is best optimized at the time.

    Learning low-level GPU programming does not appear to be as easy or quick as I initially thought.

    I was researching OpenACC, but it has strange licensing, and I'm not sure how open it really is beyond NVIDIA GPUs, or whether AMD has even implemented it yet.

    It works by placing directives (pragmas) in the code; from those the compiler auto-generates everything necessary to offload the parallelizable parts of the code to the GPU, while the base code keeps running on the CPU.

    It cannot be as optimized as lower-level custom approaches, but it should be a much easier way to start.

    I cannot find much info about AMD implementing it, only that AMD joined the standards group a few years ago. NVIDIA is also gaining big market share.
    If GPUs work in similar ways, it should still be possible to have common coding standards.
    AMD appears to take a more open-source approach, while NVIDIA does not appear to share much.

    It is for a genetic algorithm, mostly without external libraries.
    Could there be better performance when parallelizing to the GPU if most of the smaller loops in the code have arrays of structs as outer dependencies?
    The large loops are long and more complex; would parallelizing those even work efficiently on a GPU?
     
    Last edited: Oct 24, 2016
    #15     Oct 24, 2016
  6. Zzzz1

    Zzzz1

    The reason OpenCL and AMD GPUs were used in Bitcoin mining is simply that the relative cost per core of AMD GPUs was, and is, much lower. That significantly lowers the cost of mining bitcoins. Those who made money in this endeavor at some point employed huge GPU farms, so cost really mattered given the thin margins of mining profitability.

    But over the past 10 years NVIDIA has almost always produced the highest-performing cards, albeit at a high cost to the consumer. In AI, performance, in terms of the number of cores and the speed of moving data between GPU memory and main memory, really matters and makes the difference between an optimization or training run taking hours versus days.

     
    #16     Oct 24, 2016