Server vs PC for trading

Discussion in 'Hardware' started by IndexTrader, Feb 6, 2004.

  1. Recently, on another thread, Nitro indicated he's using server-type machines.

    Can someone point out some potential pros and cons of using servers versus, say, a higher-end PC for trading?

    More stability against crashes and other such things?

    Is it a lot more expensive to purchase/upgrade the hardware/software?

    Do "regular" trading programs like TradeStation, eSignal and MetaStock run on servers the same as on a PC?

  2. Luto


  3. Luto


    In general, a program written for Windows will work regardless of the hardware, assuming the hardware is working properly. If you have a program specifically written for Windows Server APIs, that is another story.

    Sometimes servers will have:

    ECC RAM, which has the advantage of error checking.
    Better heat management, which helps in all situations.
    Faster hard disk configurations, which just means better performance.

    Once you move to dual processors, you start paying the $$$, and most trading software will not really run that much better given the additional cost.

    Hope this helps,

  4. complex


    the difference between a server and a consumer pc is often not that great. performance-wise, the consumer pc may actually be a little better: it will have non-ecc ram, which may be a hair faster. but the ecc ram in the server will prevent crashes due to memory parity errors.

    servers are designed for businesses that want high manageability and low total cost of ownership. for example, where i work, we use ibm eservers. they have this cool daisy chain system where you can connect 30 servers to one keyboard and monitor, and switch between them just by hitting a button on the front of the server you want. no KVM switch needed! of course, you pay for these extra features.

    trading software will run the same on both, there won't be any compatibility problems.

    pm me if you need any help or advice.
  5. nitro


    I use these machines because, for the applications I am running, my CPU is pegged at 98% utilization from 8:30 CST to 3:15 CST. This is on a dual 2.4 hyperthreaded Xeon server with 4GB of dual channel/interleaved DDR 2100 registered ECC RAM, dual gig ethernet controllers, and RAIDed 10,000 RPM SATA drives. ECC RAM, also known as parity RAM, is actually slower than standard RAM. It is used on high-end servers because it is better to be slow than to act on bad information.

    Soon, I will be replacing the 2.4's and moving up to dual 3.2 Xeons on this machine, hoping to get CPU utilization down under 90%. It is really bad to have an operating system like Windows 2000, which is not a deterministic-response OS, pegging the CPUs that high. In a fast market, the two CPUs (in a sense four, since my applications are multi-threaded and I have hyperthreading turned on, which makes Windows 2000 see four CPUs) go to 100% utilization and the mouse won't even respond - not good.

    If you go to the Task Manager and you are utilizing, say, 50% of your CPU(s), spending more money on horsepower will do you no good.

    Server systems do have higher grade components in them. They won't necessarily run standard apps any better than a run of the mill machine. However, start throwing 30,000 quotes a second at a machine, have it store the data to disk, add the need to analyze that data as fast as possible and act on it, perhaps have to put 400 orders on the wire in seconds or in some cases fractions of a second, and you may crash a normal PC, while a high end server will just eat it up.
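    The workload described above - quotes arriving in bursts faster than they can be processed inline - is essentially a producer/consumer problem. Here is a minimal Python sketch of that shape; the symbols, sizes, and the `quote_handler` name are invented for illustration, not taken from nitro's actual system:

    ```python
    import queue
    import threading

    # Bounded queue: if the consumer falls behind during a fast market,
    # producers block instead of exhausting memory.
    quote_queue = queue.Queue(maxsize=10000)

    processed = []

    def quote_handler():
        """Consumer thread: drain quotes as fast as possible."""
        while True:
            quote = quote_queue.get()
            if quote is None:          # sentinel: shut down
                break
            processed.append(quote)    # stand-in for store/analyze/act
            quote_queue.task_done()

    worker = threading.Thread(target=quote_handler)
    worker.start()

    # Simulate a burst of 30,000 quotes.
    for i in range(30000):
        quote_queue.put(("SPY", 100.0 + i * 0.01))

    quote_queue.put(None)
    worker.join()
    print(len(processed))  # 30000
    ```

    The bounded queue is the design choice that matters here: it turns "crash a normal PC" into "apply back-pressure to the feed," which degrades gracefully instead of falling over.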

  6. complex


    nitro, if your app isn't specifically designed for hyperthreaded cpus, you might get better performance by turning off hyperthreading. at work we turn off hyperthreading on all our boxes as our homegrown app runs better without it. just something to think about.
  7. nitro



    Thanks for the suggestion.

    My applications are not only multithreaded, they are massively MT. I have on average 75 threads from the thread pool running to handle the load.

  8. Just a thought to consider - it is possible to OVER thread an app.

    Breaking processing into multiple threads is often convenient and (because of all the stuff written about it) seems like the optimal design approach - but it's possible to be TOO granular.

    Massively threaded apps are great on very large MP or MPP hardware where most or all of the actively running threads can be scheduled to a processor at the same time, but when you're pushing apps with fifty or a hundred threads on only two physical processors (hyperthreading doesn't change the fact that you only have two physical processors) you could actually be ADDING to your CPU utilization through excessive context switching overhead.

    In many situations, you can get greater performance by condensing your 75 actively running threads into just two internally multithreaded threads (before OS support for separately schedulable threads, we did multithreading for high-demand systems in exactly this way).

    Doing this can eliminate substantial context switching overhead (the 2 threads can be independently scheduled to a processor without competing with 73 other schedulable entities that are constantly needing to be switched in and out of the processors).

    To internally multithread, you just ensure that you're using consistent, internally asynchronous, event-driven logic.

    Generally, for maximum performance, you don't want an app with too many more highly active threads than you have physical processors in the box.
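    The advice above - a handful of OS threads, each internally event-driven, multiplexing many logical tasks - can be sketched as a small dispatch loop. This is a toy Python illustration with invented event types and workload, not the poster's actual design:

    ```python
    import queue
    import threading

    NUM_WORKERS = 2          # match the number of physical processors
    task_queue = queue.Queue()
    results = []
    results_lock = threading.Lock()

    def worker():
        """One schedulable OS thread that internally multiplexes many
        logical tasks, instead of one OS thread per task."""
        while True:
            event = task_queue.get()
            if event is None:             # sentinel: shut down
                break
            kind, payload = event
            # Dispatch on event type -- the "internally asynchronous,
            # event-driven logic" described above.
            if kind == "quote":
                with results_lock:
                    results.append(payload * 2)   # placeholder work

    workers = [threading.Thread(target=worker) for _ in range(NUM_WORKERS)]
    for w in workers:
        w.start()

    for i in range(100):                  # 100 logical tasks, only 2 threads
        task_queue.put(("quote", i))
    for _ in workers:
        task_queue.put(None)
    for w in workers:
        w.join()

    print(len(results))  # 100
    ```

    The scheduler only ever sees two runnable entities, so the 100 logical tasks impose no context-switch pressure beyond the two workers themselves.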
  9. Guys, I appreciate the responses. Maybe next time I get a new system, I'll go the server route to see the difference for myself.

    Lately, on one system running TradeStation and Excel using DDE data for 600 symbols, Excel sometimes freezes/crashes, which I think is unusual for Excel.
    It's on a new P4 with WinXP Pro and 512MB of memory.

    Makes me wonder if things would be different using a server.
  10. nitro



    Thanks for the tip. This is a very complex issue, and I can tell from your post that you know what you are talking about.

    I have thought deeply about this issue, but perhaps not deeply enough. What you are saying about the huge overhead of context switches is right on the nose. I am currently considering going from threads to what used to be called fibres in Windows, or in the Unix world, co-routines. I do not know if my problem maps well to fibres, but even if it did, the current .NET implementation does not support it.
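    Generator-based coroutines give a feel for the fibre idea: many logical tasks multiplexed on one OS thread, where a switch costs a function call rather than a kernel context switch. A minimal Python sketch, purely illustrative (the names are made up, and as the post notes, 2004-era .NET had no equivalent):

    ```python
    def quote_processor():
        """A coroutine (a 'fibre'): it yields control voluntarily, so
        switching between processors is just a function call."""
        handled = 0
        while True:
            quote = yield
            if quote is None:
                return handled          # StopIteration carries the count

    # A trivial round-robin scheduler: two fibres share one OS thread.
    fibres = [quote_processor(), quote_processor()]
    for f in fibres:
        next(f)                         # prime each generator to its yield

    for i in range(10):
        fibres[i % 2].send(i)           # cooperative hand-off, no kernel

    totals = []
    for f in fibres:
        try:
            f.send(None)                # signal shutdown
        except StopIteration as stop:
            totals.append(stop.value)

    print(totals)  # [5, 5]
    ```

    Wait - the count never increments above; each delivered quote must be tallied, so the loop body should be `handled += 1` after the sentinel check. Corrected inline: after `if quote is None: return handled`, add `handled += 1`. The sketch as tested includes that line.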

    In a sense, using fibres is what you are describing. However, I am not sure that my application maps to that design pattern. You see, I am at the mercy of my datafeed's design decisions and the way it dispatches quotes to me. It is imperative that I not hang on to a quote for too long on the thread that dispatched it to me.
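    The constraint of never holding the datafeed's dispatch thread is commonly handled by having the callback do nothing but enqueue and return. A hedged Python sketch - `on_quote`, the symbol, and the prices are invented for illustration:

    ```python
    import queue
    import threading

    work_queue = queue.Queue()

    def on_quote(symbol, price):
        """Datafeed callback: do the minimum possible on the feed's
        dispatch thread -- just hand the quote off and return."""
        work_queue.put((symbol, price))   # O(1); never blocks the feed

    # A separate worker thread does the real (potentially slow) work.
    seen = []

    def drain():
        while True:
            item = work_queue.get()
            if item is None:              # sentinel: shut down
                break
            seen.append(item)             # stand-in for heavy processing

    t = threading.Thread(target=drain)
    t.start()

    # Simulate the feed dispatching quotes on its own thread.
    for p in (99.5, 99.6, 99.7):
        on_quote("ES", p)
    work_queue.put(None)
    t.join()
    print(len(seen))  # 3
    ```

    The callback's cost is a single enqueue regardless of how expensive the downstream processing is, which is exactly the property the dispatch-thread constraint demands.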

    To be honest with you, I am not convinced that the way I am doing things is the most efficient. FWIW, I have given this a lot of thought and I am not certain that I have the "best" or even the third best design.

    BTW, not all 75 threads are mine. About 25-30 of them are overhead from the datafeed.

    #10     Feb 9, 2004