Hi folks, I wanted to find out if anyone has experience renting supercomputer time to perform a large number of calculations in an offline modelling process. It looks like I might need it and would appreciate some recommendations. Thanks.
Amazon AWS. You can spend $1 billion on compute time if you want, or just a few dollars. What sort of problem are you trying to solve?
What kind of application is it? (Integer-only, like for crypto, or mainly floating point, like finance?) Some years ago I ran my biggest simulation yet on 8 networked Linux servers, each with 4 or 6 CPU cores (about 40 cores in total, coordinating the partial jobs over TCP/IP). I did it on retail cloud servers like this one: https://www.hetzner.com/cloud Nowadays CPUs have far more cores/threads (for example, top-end AMD EPYC parts have 64 cores/128 threads, or even more), and one can of course also network many dedicated servers together over fast links. A sketch of the partial-jobs pattern is below.
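In case it helps, here is a minimal sketch of that farm-out-partial-jobs-over-TCP/IP pattern using Python's stock multiprocessing.managers module. The hostname, port, authkey, and the simulate() function are hypothetical placeholders, not anything from the post above; the same script acts as coordinator or worker depending on its first argument.

```python
# Minimal coordinator/worker sketch for farming partial jobs out over
# TCP/IP. Run "python farm.py serve" on one box, then "python farm.py
# work" on each rented server. HOST, PORT, KEY and simulate() are
# placeholders to adapt to the real problem.
import sys
from queue import Queue
from multiprocessing.managers import BaseManager

HOST, PORT, KEY = "coordinator.example.com", 50000, b"change-me"

class JobManager(BaseManager):
    pass

def simulate(seed):
    # Stand-in for the real per-job computation.
    return seed * seed

if __name__ == "__main__":
    if sys.argv[1] == "serve":
        jobs, results = Queue(), Queue()
        JobManager.register("jobs", callable=lambda: jobs)
        JobManager.register("results", callable=lambda: results)
        for seed in range(1000):          # enqueue the partial jobs
            jobs.put(seed)
        server = JobManager(address=("0.0.0.0", PORT), authkey=KEY).get_server()
        server.serve_forever()
    else:
        JobManager.register("jobs")
        JobManager.register("results")
        mgr = JobManager(address=(HOST, PORT), authkey=KEY)
        mgr.connect()
        jobs, results = mgr.jobs(), mgr.results()
        while not jobs.empty():           # drain the shared queue
            seed = jobs.get()
            results.put((seed, simulate(seed)))
```

The nice thing about this pattern is that workers are stateless: you can add or kill rented servers mid-run and the queue just drains more or less quickly.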
Here is an idea, depending on what you call a "supercomputer" and whether you have an ongoing need: a 64-thread / 256 GB RAM Dell PowerEdge R620 off eBay is ~$1k. Renting the equivalent from AWS would cost 100-200% of that for a single month.
For crypto mining, an ASIC fits better than a CPU or GPU. For crypto research, a GPU (i.e. compute units, vector processors, SIMD) could be used. A cheaper but slower alternative is, of course, a CPU.
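As a toy illustration of the vector/SIMD point (not mining code): the same arithmetic done element by element in pure Python versus as one whole-array NumPy operation, which dispatches to compiled, SIMD-capable loops. GPU array libraries such as CuPy expose essentially the same API, so the vectorized style carries over.

```python
# Scalar loop vs. vectorized array op on identical data; the timing
# gap shows why vector/SIMD hardware pays off for bulk numeric work.
import time
import numpy as np

n = 10_000_000
x = np.random.rand(n)

t0 = time.perf_counter()
scalar = [v * v + 1.0 for v in x]   # one element at a time
t1 = time.perf_counter()
vectorized = x * x + 1.0            # whole array per operation
t2 = time.perf_counter()

print(f"scalar loop: {t1 - t0:.2f}s, vectorized: {t2 - t1:.2f}s")
```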