Below is the CPU I have now, an i7 2700 (second gen). Not really looking to upgrade; it's basically brand new, a couple months old.

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    2
Core(s) per socket:    4
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 42
Stepping:              7
CPU MHz:               1600.000
BogoMIPS:              6819.56
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              8192K
NUMA node0 CPU(s):     0-7
Yeah, I hate to talk shit about NT, but it seems like you're right. AmiBroker to me seems like it was a guy who was a developer/trader and ended up selling his platform to the public for a price, which in this case is good for us. It's 4 bills, which seems steep, but that's all relative: if it works and it uses resources better than anything else, it's worth it. Plus you can pull your symbols right out of your own database. I've always wondered how you could incorporate derivatives pricing into indicators, you know, not just your simple implied/historical volatility type of thing, but something looking more specifically at term structure and skew.
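To make that concrete, here's a rough sketch of what a term-structure/skew "indicator" could boil down to once you have an option chain snapshot in hand. The chain layout (rows with expiry_days, strike, iv) and the 90%-of-spot skew point are my own assumptions for illustration, not anyone's actual implementation; in practice you'd pull these rows out of your own options database.

```python
# Rough sketch: turn an option-chain snapshot into two scalar values,
# a term-structure slope and a put-skew measure. The chain format is hypothetical.

def atm_iv(chain, expiry_days, spot):
    """Implied vol of the strike closest to spot for a given expiry."""
    rows = [r for r in chain if r["expiry_days"] == expiry_days]
    return min(rows, key=lambda r: abs(r["strike"] - spot))["iv"]

def otm_put_iv(chain, expiry_days, spot, moneyness=0.90):
    """Implied vol of the strike closest to 90% of spot (a crude skew point)."""
    rows = [r for r in chain if r["expiry_days"] == expiry_days]
    return min(rows, key=lambda r: abs(r["strike"] - moneyness * spot))["iv"]

def term_structure_and_skew(chain, spot, near_days, far_days):
    near_atm = atm_iv(chain, near_days, spot)
    far_atm = atm_iv(chain, far_days, spot)
    skew = otm_put_iv(chain, near_days, spot) - near_atm   # > 0 means downside puts are bid
    slope = far_atm - near_atm                              # > 0 means upward-sloping term structure
    return slope, skew

# Toy example with made-up numbers, just to show the shapes involved.
chain = [
    {"expiry_days": 30, "strike": 95,  "iv": 0.28},
    {"expiry_days": 30, "strike": 100, "iv": 0.22},
    {"expiry_days": 90, "strike": 95,  "iv": 0.26},
    {"expiry_days": 90, "strike": 100, "iv": 0.24},
]
print(term_structure_and_skew(chain, spot=100, near_days=30, far_days=90))
```

Once you have those two numbers per snapshot, they plot on a chart like any other indicator series.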
Hm, this really depends on requirements. Ninja, for example, has a shitty backtest to start with, so any optimization there costs a lot (their stupid "we just see the bar" approach results in bad executions all over the place). Once that is solved, the software must scale. Even with a lot of money, there are limits to how far you can efficiently go in hardware. This is why the software must support a high performance computing cluster setup: take a job, break it into parts, distribute the parts to machines. My own cluster (half a year of development) went live last Friday (and I have been ironing out bugs since then). Now I backtest on 6 computers, around 45 cores total, and I am adding another 44 cores in 4 computers by April (by replacing computers that run dual core now with top-end machines and installing the cluster agent on them). Try beating that; regardless of what you do on one machine, the limit is the software. For a 2-year backtest, I can use around 110 cores; jobs are one-week backtests, Sunday to Saturday. You cannot beat that. And the whole thing is event playback, no arbitrary "we just look at the bars".
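For anyone curious, the job-splitting part of that idea is simple to sketch. Below is a minimal illustration of carving a date range into Sunday-to-Saturday week jobs and fanning them out; the run_week body and the local process pool are placeholders I made up, since the real thing would hand each job to a cluster agent on another machine.

```python
# Sketch: split a 2-year backtest into Sunday-to-Saturday week jobs
# and distribute them to workers. run_week() is a placeholder.

from datetime import date, timedelta
from multiprocessing import Pool

def week_jobs(start: date, end: date):
    """Yield (sunday, saturday) pairs covering [start, end]."""
    # Roll back to the Sunday on or before the start date.
    cur = start - timedelta(days=(start.weekday() + 1) % 7)
    while cur <= end:
        yield cur, cur + timedelta(days=6)
        cur += timedelta(days=7)

def run_week(job):
    sunday, saturday = job
    # Placeholder: replay this week's events and return results to merge later.
    return f"backtested {sunday} .. {saturday}"

if __name__ == "__main__":
    jobs = list(week_jobs(date(2011, 1, 1), date(2012, 12, 31)))
    print(len(jobs), "week jobs")   # on the order of 105 jobs for a 2-year window
    with Pool() as pool:            # stand-in for distributing jobs to machines
        for result in pool.map(run_week, jobs):
            pass
```

The hard part isn't the split, it's the event-playback engine and merging per-week results back into one equity curve, which is where the half year of development goes.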
Yes, I work with a small team. We built everything from scratch, and I contributed to some parts of it (primarily the algorithmic implementations of certain models). There isn't a commercial solution that supports the trading strategies that we run. Otherwise, we'd probably have paid for it. Unfortunately, it's not appropriate for me to be specific in this area on a public forum. I am in the position to say anything I want, but to be fair to my colleagues who closely adhere to the non-disclosures, I have to be tight-lipped. I hope you understand.
So you're virtualizing all your resources? I mean, that's great if you're there financially; I'm just not, just theorizing. What's the marketability of building software to cater to guys, or A guy, like you? It's a niche market to say the least, haha. But I think virtualization of resources can be accomplished pretty easily, and I'm sure the software isn't there yet for utilizing a system that large.
Haha, of course I can understand that. I'm so new that even if you did disclose stuff, I wouldn't know what it was or what to do with it. "Algo" has been like a buzzword for the last 10-15 years.
This is mostly BS from the uninformed. RAM and CPU are very important for Ninja. If you are doing optimizations on intraday data (enough of it), you *will* need 64GB of RAM. Ninja uses all cores in optimization; I personally have an i7-3930K with 64GB RAM and use about 50GB of RAM for any optimization (CL, 6 years at 100-volume + 1-sec bars; ES, 10 years at 1000-volume + 1-sec bars). The number of cores, for me, is way more important than the core frequency itself. Ninja will also benefit from an SSD in backtesting, as it spends quite a bit of time reading the data files. I have seen pretty good improvement going to SSD.
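To put a rough number on why the RAM disappears so fast, here's a back-of-envelope calculation. The ~200 bytes per cached bar is my own assumption (managed-object overhead plus OHLCV and timestamps), not a measured NinjaTrader figure, and session lengths are approximate.

```python
# Back-of-envelope: in-memory size of years of 1-second bars.

SECONDS_PER_SESSION = 23 * 3600      # CL/ES trade nearly around the clock on Globex
SESSIONS_PER_YEAR = 250
BYTES_PER_BAR = 200                  # assumed in-memory cost per 1-sec bar object

def gb_for_one_sec_bars(years):
    bars = years * SESSIONS_PER_YEAR * SECONDS_PER_SESSION
    return bars * BYTES_PER_BAR / 1e9

print(f"6 years of CL 1-sec bars: ~{gb_for_one_sec_bars(6):.0f} GB")
print(f"10 years of ES 1-sec bars: ~{gb_for_one_sec_bars(10):.0f} GB")
```

Under those assumptions the 1-sec series alone come out around 25 GB and 40 GB respectively, so once you add the volume-bar series and optimizer state, the ~50GB figure quoted above is not surprising.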