AMD 64 X2 3800 for multitask trading

Discussion in 'Hardware' started by Bernard111, Sep 15, 2005.

  1. Yes, Linux and all modern Unices (Solaris, AIX, etc.) support multiple CPUs and have done so for quite a few years.
     
    #31     Sep 26, 2005
  2. cmaxb

    cmaxb

    Please do not confuse multi-tasking with multi-processing. In the former, an OS appears to run processes "at the same time"; in reality it is just switching between them very quickly (imperceptibly). A multi-processor OS can distribute those tasks across several processors, so the individual tasks can in fact run at the same time. NT, 2000, XP, and later can run multiple processes across several CPUs. Most *nix OSes have been able to do this for a while.
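
    A rough sketch of the distinction (assuming a POSIX box with pthreads, compiled with g++ -pthread; the worker function and loop count are just illustrative): on a single CPU the OS time-slices the two workers, while on a dual-core/SMP machine the scheduler can run each one on its own core at genuinely the same time.

        #include <pthread.h>
        #include <cstdio>

        // CPU-bound worker: no I/O, just burning cycles, so the effect of a
        // second core would be visible in wall-clock time.
        static void* spin(void* id) {
            volatile unsigned long counter = 0;
            for (unsigned long i = 0; i < 1000000000UL; ++i)
                ++counter;
            std::printf("worker %ld done\n", (long)id);
            return 0;
        }

        int main() {
            pthread_t t1, t2;
            pthread_create(&t1, 0, spin, (void*)1);  // two schedulable threads...
            pthread_create(&t2, 0, spin, (void*)2);  // ...the kernel decides which CPU runs each
            pthread_join(t1, 0);
            pthread_join(t2, 0);
            return 0;
        }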
     
    #32     Sep 26, 2005
  3. ER9

    ER9

    I am using a bottom-of-the-line Athlon 64, about six months old, with a gig of RAM and a last-generation video card, and have never experienced even the slightest hiccup.

    I have to say, though, I am pretty cautious about allowing any program to update automatically; I set them all to manual. I have found these programs stress the amount of data coming through the cable connection rather than my computer's resources.

    My usual programs running when I trade are the data feed and broker software, eSignal, the ZoneAlarm firewall, antivirus software, usually MSN, and all the normal Windows background services, and I'm usually browsing the net as well.
     
    #33     Sep 26, 2005
  4. Sorry, I don't agree with very much of this, other than that some charting software may not benefit much if there is nothing else running on the machine.

    Floating point is quite irrelevant. Many databases, web servers, etc. use multiple CPUs and benefit greatly from them while performing virtually no floating-point ops.

    If you are running more than one application at a time then dual-core/multiple-CPU boxes will perform better and will feel snappier. For example, rendering a web page in a browser is fairly computationally expensive, and on a dual-core box one core can be doing this while another is dealing with other matters. 'Latency' - the catchphrase of the year - will be lower. The real question is whether it's worth a couple of hundred dollars more.

    There seems to be a bit of talk on PC web sites and presumably PC magazines (which I don't read) that dual core has to prove itself. This is nonsense. SMP technology has been around for many years.

    How effective multiple CPUs are is a complex issue and not limited to the multithreaded-ness or otherwise of any given application. For example I use Linux, and here all graphical operations are handed off to the X server by any application that wants to put anything on the screen. This provides some natural concurrency to any application that writes things to the screen.

    In any case, multicore processors are the immediate future, as chip clock speeds have run up against a wall in terms of power consumption and cooling requirements.
     
    #34     Sep 26, 2005
  5. Do you even know what a multi-threaded application is?

    I'm not talking about running servers, databases, multiple applications or rendering web pages (this is trivial). If you are a single user on your home computer and you are going to do these tasks simultaneously (and believe the marketing ploy), then pay for that extra processor (a waste of money IMO). You might notice a slight improvement under heavy use, but I say that "heavy use" is too small a fraction of the time spent to make the extra cash worth it.

    What I am talking about is a multi-threaded APPLICATION. ONE application, like MatLab or Spice or any other type of algorithm-based program. These are heavily floating-point-oriented applications that need to be multi-threaded (one application spread over multiple processors) because of the days, weeks, even months it can take to run certain programs - the amount of computing time can be tremendous. The trading applications of this are in signal processing (FFTs, DFTs, etc.). This is highly mathematical processing and most likely not very common because of the complexity required. In these cases floating point is extremely relevant, because it takes the largest number of clock cycles per execution.
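
    As an illustrative sketch only (not any particular trading or MatLab code; the array size and the sum-of-squares workload are made up), this is the pattern such applications follow: a floating-point-heavy loop split across two worker threads, each handling an independent half of the data.

        #include <pthread.h>
        #include <cstddef>
        #include <cstdio>
        #include <cmath>
        #include <vector>

        struct Chunk { const double* data; size_t n; double sum; };

        // Each thread runs the same floating-point inner loop over its own
        // half of the array; the halves are independent, so both can be in
        // flight at once on a dual-core box.
        static void* work(void* p) {
            Chunk* c = static_cast<Chunk*>(p);
            c->sum = 0.0;
            for (size_t i = 0; i < c->n; ++i)
                c->sum += c->data[i] * c->data[i];
            return 0;
        }

        int main() {
            std::vector<double> x(10000000);
            for (size_t i = 0; i < x.size(); ++i)
                x[i] = std::sin(0.001 * i);

            Chunk lo = { &x[0],            x.size() / 2,            0.0 };
            Chunk hi = { &x[x.size() / 2], x.size() - x.size() / 2, 0.0 };

            pthread_t t1, t2;
            pthread_create(&t1, 0, work, &lo);
            pthread_create(&t2, 0, work, &hi);
            pthread_join(t1, 0);
            pthread_join(t2, 0);

            std::printf("signal energy = %f\n", lo.sum + hi.sum);
            return 0;
        }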

    I'm not trying to be derisive but I did my master's thesis in parallel processing, specifically in superscalar architectures. I got my PhD in VLSI.

    A better question is what all this has to do with better trading and the bottom line - well... nothing. I doubt spending the money on a dual core will actually provide any real value, but go ahead if it makes you feel good.
     
    #35     Sep 26, 2005
  6. Yes. I have developed both server-side and client-side multithreaded code: server-side on medium-sized SMP boxes (4-16 CPUs) in C++ and pthreads, client-side in Java. And yes, Java apps will distribute threads over multiple CPUs if you are using a modern JVM, assuming the threads are runnable, i.e. not blocked on a lock of some sort.

    I seem to recall reading a discussion with Linus Torvalds somewhere where he was backing the use of SMP on PCs because the reduced latency makes the box more pleasant to use.

    I just don't subscribe to the view you are implying, that SMP is just for (or even mostly for) heavy-duty engineering/scientific computations.
     
    #36     Sep 26, 2005
  7. I have little experience in software development. My knowledge base is from application-specific microprocessor development and general trends in the industry.

    Software developers use "multi-threaded" in a different context - you can create an application with multiple threads and use semaphores to obtain a level of apparent "concurrency". This is about as far as my knowledge of OSes takes me.
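
    For instance (a minimal POSIX sketch assuming <semaphore.h> and pthreads; the producer/consumer names and the value 42 are just illustrative), a semaphore lets one thread signal another that work is ready - the threads merely appear concurrent on a single CPU, but can truly overlap on two:

        #include <pthread.h>
        #include <semaphore.h>
        #include <cstdio>

        static sem_t ready;           // signalled when the producer has data
        static int shared_value = 0;  // the "work" handed between threads

        static void* producer(void*) {
            shared_value = 42;        // produce something
            sem_post(&ready);         // tell the consumer it may proceed
            return 0;
        }

        static void* consumer(void*) {
            sem_wait(&ready);         // block until the producer signals
            std::printf("got %d\n", shared_value);
            return 0;
        }

        int main() {
            sem_init(&ready, 0, 0);   // initial count 0: the consumer must wait
            pthread_t p, c;
            pthread_create(&c, 0, consumer, 0);
            pthread_create(&p, 0, producer, 0);
            pthread_join(p, 0);
            pthread_join(c, 0);
            sem_destroy(&ready);
            return 0;
        }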

    What I can speak to is the number of executions per clock cycle, which is the real mark of performance and the reason people get a dual-processor system. The complexity behind something like this is transparent to software applications because of the level of abstraction. On your average home system, Java and C are too abstract to utilize a concurrent architecture, because this type of concurrency is available only at the machine-language layer.

    This is why this is a gimmick. It is a way to sell more chips. The type of hardware/software that actually utilizes this concurrency is very complex and expensive.

    The systems you refer to are much different from what a home user will get. The hardware architecture gives you those features, and I would have to know more about the system to tell you whether actual concurrency is taking place.
     
    #37     Sep 26, 2005
  8. Not so. There is nothing magical about concurrency on SMP boxes. It's really just a system call to create a new thread (or process). The OS scheduler/dispatcher sorts out the running of them and executes any required privileged instructions. Really.
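
    As a sketch of how little the application itself has to do (assuming a POSIX system; the printed messages are just for illustration): one system call creates the second schedulable unit, and the kernel scheduler decides which CPU each one runs on.

        #include <unistd.h>
        #include <sys/wait.h>
        #include <cstdio>

        int main() {
            pid_t pid = fork();      // one system call: a second schedulable process now exists
            if (pid == 0) {
                std::printf("child  pid=%d\n", (int)getpid());
                _exit(0);
            }
            std::printf("parent pid=%d\n", (int)getpid());
            waitpid(pid, 0, 0);      // the kernel placed both processes on whatever CPUs were free
            return 0;
        }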
     
    #38     Sep 26, 2005
  9. Holmes

    Holmes

    I am not going to boast about credentials, etc.; it serves no purpose.

    But I can assure you that even home use can benefit from multicore / multi-CPU machines. As I have mentioned on ET before: you'll have to figure out where the bottleneck in your setup/environment is, and that can differ from user to user. First you have to measure, and then test modifications, before you know for certain.

    Making blanket statements like you are doing serves no purpose and is a disservice to ET by confusing the readers.

    Sherlock
     
    #39     Sep 26, 2005
  10. In the OS context you are correct - you are utilizing a form of concurrency.

    I am going a bit further, down to the register-transfer level where a compiled instruction is actually executed. By this time it is either in machine/assembly format, or in a proprietary (MIPS) format, or something similar.

    This is where the nitty-gritty of concurrency occurs: say you have an instruction to add 1 plus 2, and then an instruction to increment the result. How do you execute these instructions concurrently when you do not know the result of the first instruction?

    This is true parallel processing, and as you may be able to infer, a compiler would have to recognize such cases and interpret them properly.
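
    A tiny sketch of the dependency in question (illustrative only; the register names in the comments are made up to show roughly how it maps onto instructions):

        int main() {
            int a, b, c, d;

            a = 1 + 2;   // ADD r1, 1, 2    -- first instruction
            b = a + 1;   // ADD r2, r1, 1   -- needs r1, so it cannot issue until the first add completes

            c = 5 + 6;   // ADD r3, 5, 6    -- independent of the pair above,
            d = 7 + 8;   // ADD r4, 7, 8    -- so a superscalar core could issue these in the same cycle

            return a + b + c + d;  // use the results so the compiler keeps them
        }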

    This is a simple case, but it gets to the heart of the matter: what you call concurrency is a sophisticated use of control mechanisms (semaphores, locks, etc.). At the hardware level, concurrency is a completely different issue.

    I can point you to some studies of the performance characteristics in the above case. The end result was that the complexity was just too cumbersome in all but a very few applications. But, given this data, Intel and AMD are still going to sell as many chips as they can and charge a premium for the dual-core model. Dual core doesn't address the executions-per-clock-cycle problem.

    Sorry for the geek-out; it's been a while since I was involved with this stuff, but it still really interests me.
     
    #40     Sep 26, 2005