Discussion in 'Automated Trading' started by sevenlaws, Jul 15, 2010.

  1. anybody know of, or ever have had, a server hooked up at Equinix? any advice on server hardware for black-box trading? and on fiber optics?
  2. no not at equinix - no idea who/what that is.

    Server is a rather general term - there are desktop servers and there are rack-mount servers, most of which aren't even needed for trading. Most ATS can be run on a simple, fast desktop PC, which can be configured as a desktop or as a cheap rack mount.

    Today, most fiber is useless unless you want to spend a lot of money. The latency involved in converting fiber to copper (even at the NIC/motherboard) is usually greater than fast internet over copper, and copper is more durable than fiber. If you are in a datacenter and being offered fiber drops, you will want to spend the money on a decent router so that you don't lose time in that conversion. For 99.9999999999999999% of people, a simple 50/50 symmetrical line will be overkill. A 100-meg line is enough to run a small office on.
  3. Equinix is the biggest data center provider in the States, maybe the world. Bloomberg, Reuters and all the big cats set up camp there, direct into the exchanges.

    the fiber is directly connected to the exchange itself. my question is about the server hardware, which we have to provide. we need the fastest setup, and a reliable one. we have some idea, but are looking for more in-depth details on commercial-grade products.
  4. kolnstyle


    It honestly depends on the needs of your strategies. There is no one universal machine that's perfect for all your strategies. Your hardware needs might be relatively simple, requiring only a lower-end 1U poweredge, if your strategy isn't too intensive. They might be rather elaborate, requiring a monster of a machine with multiple sockets, FPGA boards, ludicrous amounts of memory, and GPGPUs.

    First, analyse your strategies and determine what your needs are. Then evaluate your options and purchase your hardware. Don't buy some "bomb ass machine" if you don't honestly need it. You'll make the same mistakes that many dot-com era startups and companies made and run out of funds pretty quickly.

    You don't see cheap and rackmount juxtaposed together very often, especially when discussing chassis and rack options. ;)
  5. i think we are mostly concerned about ending up with something that is out of date in a year or so. since we don't really mess with servers, we don't know what's what. we want to get at least 4 or 5 years out of it before we need any major updates - and something that can handle a lot of work and remain fast.
  6. Why don't you just have a look at the Dell, HP, IBM, etc. websites and see what's on offer? And make some attempt at specifying requirements, e.g. memory, number of CPUs/cores, storage, etc. What you have written here is meaningless.
  7. RickLong


    I agree, Dcraig! Good suggestion!

  8. Lease, don't buy. Pay the depreciation on the hardware and replace as needed.
  9. Agreed, however there is a huge difference between paying retail for Dell/HP with multiple Xeons and building your own 1U i7 box. You can pick up a rack on Craigslist for $250 and build a 1U i7 box pretty cheaply. Again, it depends on the OP's needs.

    Sevenlaws, I know a guy who does IT finance consulting freelance out of FL. PM me if you want me to put you in touch.

    As an FYI, if you are talking fiber, it's really big money. Lower-grade fiber is slower (latency, not bandwidth) than decent copper, and the hardware that converts light back to copper can have a higher latency than a good copper NIC (and that doesn't even factor in your routers/switches, etc.). Without knowing what your strategies are, it's impossible to recommend hardware to you. What are they running on now? I have a bunch of Dell refurb desktops and workstations that I use, and I'm slowly converting over to rack space - but I'm doing it on the cheap and only as needed. Because I can DIY, I'm not as concerned with lifespan: since I'll use desktop motherboards with i7s, I can pull them out and put them into a desktop if need be at a later date, and then upgrade the mobo & CPU in the 1U cases I have.
  10. I have some servers co-located in Equinix via fiber cross-connect. I assume you are not talking about the other hardware, such as ATM hardware, etc. You will not get 4 to 5 years out of any server config; based on my experience, about 2 years is the max. Figure out a "server rotation" (such that the newest hardware becomes the production failover pair, etc.) and do not get attached to hardware - the cost of using older or non-optimized hardware is worth much more than the $7-10k you paid for it. Figure out a standard build (get your Linux packages and driver images down). Use as good a network card as you can handle (and make sure you optimize all the driver settings). Figure out a storage mechanism (a badly timed memory-to-disk swap will slow you down significantly). Make sure your application can manage its threads across the processors efficiently (and be aware of caching issues).
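    The thread-placement advice above can be sketched in a few lines. This is a minimal illustration, not anyone's production setup: it pins the current process to a single core so the hot path stays on one cache. It assumes a Linux host (`os.sched_setaffinity` is Linux-only), and core 0 is an arbitrary choice.

```python
# Sketch: pin the calling process to one CPU core, per the
# "manage threads across processors" point above. Linux-only API;
# other platforms would need psutil or taskset instead.
import os

def pin_to_core(core):
    """Restrict the current process to a single core, if the OS
    supports it, and return the resulting affinity set."""
    if hasattr(os, "sched_setaffinity"):
        os.sched_setaffinity(0, {core})   # 0 = the calling process
        return os.sched_getaffinity(0)
    return {core}  # non-Linux fallback: nothing changed, report the request

if __name__ == "__main__":
    print(pin_to_core(0))
```

    A real deployment would also isolate that core from the scheduler (e.g. kernel `isolcpus`) and pin interrupt handlers elsewhere, but the call above is the basic building block.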

    Oh yes, manageability: make sure everything you have in the data center can be monitored, from the UPS to the blades, etc., and have your monitoring software hooked up to all of it (do this on day one; do not wait until you have a problem). And perhaps most importantly, test, test and then test some more. Take the time to write some custom applications to test and measure the timing differences. (For instance, my hardware person swapped out a network card once, and the test suite almost immediately determined that the new card had some framing issues - which is bizarre, but the test suite caught it before it went into production.)
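    A timing harness of the kind described above can be quite small. The sketch below is illustrative only (loopback TCP, arbitrary message size and sample count, not the poster's actual test suite): it measures round-trip times through the local stack and reports percentiles, which is the shape of test that catches a misbehaving NIC or driver setting.

```python
# Sketch: measure round-trip latency over a loopback TCP socket and
# report median / 99th-percentile times in microseconds.
import socket
import statistics
import threading
import time

def echo_server(srv):
    """Accept one connection and echo everything back."""
    conn, _ = srv.accept()
    with conn:
        while data := conn.recv(64):
            conn.sendall(data)

def measure_rtt(samples=1000):
    srv = socket.socket()
    srv.bind(("127.0.0.1", 0))        # 0 = let the OS pick a free port
    srv.listen(1)
    threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

    cli = socket.socket()
    cli.connect(srv.getsockname())
    cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable Nagle

    rtts = []
    for _ in range(samples):
        t0 = time.perf_counter_ns()
        cli.sendall(b"ping")
        cli.recv(64)
        rtts.append(time.perf_counter_ns() - t0)
    cli.close()
    return {
        "p50_us": statistics.median(rtts) / 1000,
        "p99_us": sorted(rtts)[int(samples * 0.99)] / 1000,
    }

if __name__ == "__main__":
    print(measure_rtt())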

    All in all, you will be competing against players who have no problem rewriting entire 500k-line modules in assembler and FPGAs just to gain maybe 30-50 nanoseconds of advantage. So any little tweaks you can figure out will be a plus.
    #10     Jul 19, 2010