More like $2300 for a prebuilt. I think it's an older version of the Storm Trooper, i.e. something like this but non-transparent: https://www.newegg.com/Product/Product.aspx?Item=N82E16811119297 Bought through a local (non-US) webshop several years ago, though. It's been robust enough and I have no plans to replace it yet. There are fans, but they're adjustable. Decent room to expand, but I haven't needed RAID or much beyond the SSD and external USB HDDs. A decent place should provide a good custom build and warranty. I just upped the memory some and the GPU/CPU at the time.
Check out Micro Center; sometimes they have good deals in the store. It's on 3rd Ave in Brooklyn, so it's not too bad of a trip from Manhattan.
I recently bought a similar machine with 128GB of RAM and a 512GB SSD. This machine is well built, can be upgraded very easily, and is super quiet. https://www.newegg.com/Product/Prod...8376&cm_re=hp_z820-_-9SIA68F5C38376-_-Product Someone previously posted a link to this review on this site.
1TB of RAM? That must be for the whole cluster, not for a single processor? Or are you referring to a PCI solid-state drive? What are you using for backtesting software? You're right that an entire day of ES tick data is around a GB, but why do you need to keep all of that in RAM?
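For the "around a GB" figure, a back-of-the-envelope sketch. All the per-record numbers below are assumptions, not from the thread: trades alone in a packed binary format are small, and it's the full quote/book feed (or text storage) that pushes a day toward a gigabyte:

```python
# Rough estimate of one day of ES tick data in memory (all numbers assumed).
trades = 2_000_000                 # assumption: trade prints on a busy session
trade_bytes = trades * 24          # assumption: 24-byte packed record (ts+px+size+flags)

messages = 10_000_000              # assumption: trades + quotes + book updates
msg_bytes = messages * 100         # assumption: ~100 bytes per CSV/text row

print(trade_bytes / 1e6, "MB compact;", msg_bytes / 1e9, "GB full feed")
# prints "48.0 MB compact; 1.0 GB full feed"
```

So the ~1 GB/day quoted above is plausible for an uncompressed full feed, while a trades-only binary store would be an order of magnitude smaller.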
Nope, it's a single box. NinjaTrader, of course... just kidding. I've got a bunch of different backtesting engines: some are built for higher-frequency stuff and deal with tick data, some are more geared toward minutely data for strategies that are less sensitive to latency. In reality, it's the cross-sectional vol stuff that is RAM-hungry; once you start dealing with hypercubes of minutely option prices across a few hundred names, it gets big very quickly.
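To see how quickly such a hypercube gets big, here's a sketch with made-up but plausible dimensions (every number below is an assumption, not from the post):

```python
# Memory footprint of a dense options-price hypercube, float64.
# All dimensions are illustrative assumptions:
names    = 300    # underlyings
expiries = 10     # expiries per name
strikes  = 40     # strikes per expiry
minutes  = 390    # one US session of minutely bars
days     = 252    # one year

cells = names * expiries * strikes * minutes * days
gb = cells * 8 / 1e9  # 8 bytes per float64
print(f"{gb:.0f} GB")  # prints "94 GB"
```

Roughly 94 GB for a single year of one field (say, mid price); add bids, asks, greeks, and IVs as separate cubes and you're into the hundreds of gigabytes, which is where a 1TB box starts to make sense.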
Damn, I didn't even know that existed. What kind of chip is it? What do y'all use it for? The curse of dimensionality rules us all, unfortunately.
I think it's a 4x4 Xeon (Broadwell?), but I don't remember. It's blindingly fast for everything, including silly things like a 1K x 1K SVD. PS: It's not mine, obviously - too grownup of a toy.
Hmmmm. 1k x 1k of doubles is 8MB. SVD (assuming that means singular value decomposition) on a 1000x1000 matrix is trivial on almost any system. Did you mean 100k x 100k? That would be ~80GB but would still run in seconds on almost any system, and run much faster if, as is likely the case, you are interested in far fewer than 100k singular values/vectors (see the Lanczos algorithm and other online methods). I'm impressed by the 1TB of RAM though. That's a lot of RAM! On my database server (and its clone) I have 384GB, but it's mostly used for Oracle's in-memory column store; I can hardly imagine what it would be like to have 1TB of free memory to play with.
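The sizes above are easy to verify, and a 1k x 1k SVD really is trivial — a quick NumPy sketch (no timing numbers claimed, since that depends on the machine):

```python
import numpy as np

# A 1000x1000 float64 matrix is 8 MB:
n = 1000
a = np.random.default_rng(0).standard_normal((n, n))
print(a.nbytes / 1e6, "MB")  # prints "8.0 MB"

# Full SVD, then verify the reconstruction:
u, s, vt = np.linalg.svd(a, full_matrices=False)
err = np.abs(u @ np.diag(s) @ vt - a).max()
print("max reconstruction error:", err)  # tiny, ~machine precision

# A dense 100k x 100k matrix, by contrast:
print(100_000**2 * 8 / 1e9, "GB")  # prints "80.0 GB"
```

For the truncated case mentioned above (far fewer than n singular values), `scipy.sparse.linalg.svds` exposes a Lanczos-style iterative solver that never needs the full decomposition.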
OK, I'll bite: how do you manipulate a 1M x 1M (= 1T-entry) matrix of doubles in 1TB of RAM (assuming it's not sparse or another special case)?
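To make the mismatch in that question explicit, the arithmetic (plain dense float64 assumed):

```python
# A dense 1M x 1M matrix of doubles does not fit in 1TB of RAM:
entries = 1_000_000 ** 2   # 1e12 entries
tb = entries * 8 / 1e12    # 8 bytes per float64
print(tb, "TB")            # prints "8.0 TB" -- 8x the 1TB of RAM
```

So without sparsity, low-rank structure, or out-of-core tricks, the matrix alone is 8x the machine's memory before any workspace is counted.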