What is the most affordable solution for getting the maximum number of logical cores?

Discussion in 'Hardware' started by fan27, Dec 22, 2019.

  1. fan27

    Running my FasterQuant research solution on my laptop's i7, with its 8 logical cores, is just not cutting it. FasterQuant will consume all available cores, and I have tried running it on an AWS EC2 instance with a high core count, but that is not getting the desired effect; I suspect the instance running on shared hardware is preventing me from fully utilizing all available cores. So I am looking for a solution, be it cloud or purchasing my own hardware. I am not looking for a GPU solution, as I would have to rewrite my software for that. Any advice on a relatively affordable option? It can be Windows or Linux.

    Thanks!
     
  2. What are you doing that needs so much CPU?
     
  3. fan27

    Backtesting strategies via a custom machine learning implementation.
     
  4. gaussian

    Waste of money for OP.

    OP should look into 1U rack-mounted Xeon servers. You can scoop one up that needs some love for less than $500 on eBay, install a solid Linux distribution, and have tons of cores. If OP is particularly adept at IT, he can purchase old Xeon CPU clamshells and run 2x or 4x CPUs in some 1U units.

    That said, for this kind of work he's wasting his time saturating CPU cores instead of GPUs: it's neither easy on the wallet nor efficient. Even assuming OP knows what he's doing and is optimizing his code for SSE/SIMD (I assume it's his own code, since every modern ML framework supports GPU work), he's still running measurably slower than even a single GPU, because a CPU has far fewer floating-point units working in parallel. Optimizing for SIMD is probably OP's best bet to get peak FLOPS out of each CPU in the server.


    As a note, OP should benchmark IPS and fully optimize parallelizable operations via SIMD before going out and purchasing more cores. If you're using C/C++, clang supports auto-vectorization, so some of that optimization is likely already done for you. Hope you have a good disassembler handy!
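    To illustrate the auto-vectorization point: clang at -O3 will vectorize simple, dependency-free loops like the one below on its own, and `-Rpass=loop-vectorize` prints a remark when it succeeds. (A minimal sketch added for illustration; `saxpy` is a hypothetical example function, not part of FasterQuant.)

```cpp
#include <cstddef>

// A fused multiply-add over contiguous arrays: there is no
// loop-carried dependency, so clang's loop vectorizer can turn
// this into SSE/AVX code at -O3. Compile with
//   clang++ -O3 -Rpass=loop-vectorize file.cpp
// to get a remark confirming the loop was vectorized.
void saxpy(float a, const float* x, float* y, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];
}
```

    If the compiler can't prove `x` and `y` don't alias, it guards the vectorized loop with a runtime overlap check; marking the pointers `__restrict` lets it skip that check.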
     
    Last edited: Dec 22, 2019
  5. schizo

    Even though you stated that you're not interested in GPUs, you really should look into dedicated GPU hosting.
     
  6. fan27

    Great info....thanks!