Mach 3.8 Supercomputer

Discussion in 'Hardware' started by chinook, Dec 8, 2003.

  1. Ninja


    I agree 100% with you on this. I switched from Solaris x86 to Linux last year. Running Linux on x86 PCs and servers is where it's strong.
     
    #41     Dec 11, 2003

  2. "It's all about timing, kid."
     
    #42     Dec 11, 2003
  3. #43     Dec 11, 2003
  4. YYNOTT


    I would love to have that equipment.
     
    #44     Dec 11, 2003
  5. b1tr0t


    1. Jim Clark left SGI to found Netscape and then Healtheon. Not sure what he is up to now.

    2. SGI demanded Microsoft pay huge royalties a few weeks before the XBox was to ship. Microsoft bought out the bulk of SGI's intellectual property instead.

    3. SGI graphics engineers left to do video games at ATI, NVidia, and Nintendo. The Nintendo 64 was a collaborative project between SGI and Nintendo. I guess the engineers saw a brighter future at Nintendo.

    4. Tera Computer bought Cray from SGI, taking much of the real supercomputing talent along with it.

    5. Around 1995-1996, NASA engineers discovered that you could build a supercomputer out of a ton of off-the-shelf PCs for about $30k and get about 110% of the performance of an IBM SP2. At the time, an IBM SP2's list price started at $1.2M. Since then, more and more supercomputers have been built using the NASA "Beowulf" technique. Google's search engine is a good example. Even SGI's new Altix systems are essentially Beowulfs in nice boxes with friendly support.

    6. Apple built OS X on top of Unix and ships powerful desktop and notebook workstations that are highly compatible with popular Linux software. Ouch for SGI and Sun.
     
    #45     Dec 12, 2003
  6. nitro


    Yes,

    All that is bad news for SGI. However, I take exception to point 5 above.

    The SGI superclusters are _way_ more powerful than "standard" (Beowulf or otherwise) clusters. It is well known that the limiting factor in clusters is the interconnect (assuming "embarrassingly parallel" applications.) In the past, people have used things like Myrinet or the like, which is essentially a fibre interconnect offering about 2 Gbit/s of throughput.

    SGI are using NUMA technology as the "interconnect." The bandwidth of this interconnect is two orders of magnitude (100 times) that of the fastest "old technology," including everything you cited above in point 5.

    Although NUMA is hardly new, SGI are the first to bring the technology together in a package made for massive parallelism and "out-of-the-box-don't-need-to-know-anything-to-use-it-and-start-getting-benefits-immediately."

    These machines are being ordered by places that have the kinds of clusters that you are talking about - there is no comparison.

    Point 6 has nothing to do with SGI. No one who is in the market for an Altix is wondering whether they should get an iMac instead.

    nitro
     
    #46     Dec 12, 2003
  7. b1tr0t


    Great analysis, but the class of problems known as "embarrassingly parallel" is exactly those which require little or no communication between processing units.

    An easy example of an embarrassingly parallel problem is 3D rendering. You can send the underlying data for the movie (with M frames) to every node in the (N-node) network, along with instructions to process M/N frames each. Since the renderer only cares about the scene description file and not any of the other frames, there will be no further communication until the work is finished and the data is sent back to the control node.
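
    To make that concrete, here is a rough sketch of the frame-splitting idea. It is Python using mpi4py purely for illustration; render_frame, the scene string, and the frame count are made up rather than taken from any real renderer.

    # Embarrassingly parallel render farm sketch: one broadcast in, one gather out.
    from mpi4py import MPI

    TOTAL_FRAMES = 1024  # hypothetical movie length

    def render_frame(scene, frame_no):
        # Stand-in for the real renderer: needs only the scene and the frame number.
        return ("frame", frame_no)

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    # The control node (rank 0) sends the scene description to everyone once.
    scene = comm.bcast("scene description" if rank == 0 else None, root=0)

    # Each node renders every size-th frame with no further inter-node traffic.
    my_frames = [render_frame(scene, f) for f in range(rank, TOTAL_FRAMES, size)]

    # A single gather at the end returns the finished frames to the control node.
    all_frames = comm.gather(my_frames, root=0)
    if rank == 0:
        print(sum(len(chunk) for chunk in all_frames), "frames collected")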

    As you can see, the embarrassingly parallel problem requires a minimal amount of inter-node communication.

    The next class of cluster problems comprises those which require local communication. Once you get your film back on the control node, you may want to encode it into a compressed MPEG file. Since MPEG compression depends on one or more frames behind and in front of the current frame, some local communication is required. However, when processing frame 1024 the compression algorithm is unlikely to need to reference frame 65535, so more communication is required than in embarrassingly parallel problems, but it stays between neighbouring chunks of frames.
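
    The same toy setup shows what that local communication looks like: each node only has to trade the frames at the very edges of its chunk with its immediate neighbours. Again this is an illustrative mpi4py sketch, and the 64-frame chunk size is invented.

    # Local-communication sketch: exchange boundary frames with adjacent nodes only.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    my_chunk = list(range(rank * 64, (rank + 1) * 64))  # this node's 64 frames

    prev_frame = next_frame = None
    if rank + 1 < size:
        # Send my last frame to the right neighbour, receive its first frame.
        next_frame = comm.sendrecv(my_chunk[-1], dest=rank + 1, source=rank + 1)
    if rank > 0:
        # Send my first frame to the left neighbour, receive its last frame.
        prev_frame = comm.sendrecv(my_chunk[0], dest=rank - 1, source=rank - 1)

    # An MPEG-style encoder can now look one frame past either edge of its chunk
    # without ever talking to a node that is not a direct neighbour.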

    The worst case is when the parallel algorithm operating on node N needs to communicate frequently with ALL of the nodes in the network. These are the problems for which communication becomes critical. For some of these problems, a NUMA (Non-Uniform Memory Access) architecture will work. In the worst cases, a great deal of custom code is required to shoehorn a problem into a NUMA system. But given the limitations on CPU <-> Storage communications in massive systems, there is little else to choose from.
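
    The global case looks roughly like this; the per-node value and the update rule are invented purely to show the communication pattern, not to model any real workload.

    # Global-communication sketch: one allgather per step means traffic into and out
    # of every node grows with the node count, so the interconnect sets the pace.
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    local_value = float(rank)  # hypothetical per-node state

    for step in range(10):
        everyone = comm.allgather(local_value)  # every node hears from every node
        local_value = sum(everyone) / size      # the update depends on all of them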

    So, depending on the problem class, interconnects may or may not be the limiting factor. With dual port GigE cards selling for under $500 and 12 port GigE switches for $999, it is easy to build fast hub and spoke networks and even faster hypercube networks with commodity off the shelf hardware.
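
    On the hypercube idea: in a d-dimensional hypercube each node is wired to the d nodes whose address differs by exactly one bit, so any two of the 2^d nodes are at most d hops apart. A quick plain-Python sketch of the addressing (the 16-node example is arbitrary):

    def hypercube_neighbors(node, dims):
        # Flip each address bit once to get the directly wired neighbours.
        return [node ^ (1 << bit) for bit in range(dims)]

    # In a 16-node (4-dimensional) cluster, node 5 wires to nodes 4, 7, 1 and 13.
    print(hypercube_neighbors(5, 4))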


    Regarding the Mac, I agree that few SGI customers will consider an iMac as an alternative. However, the G5 is a very compelling machine, particularly since SGIs are now using the same ATI chips (http://news.com.com/2100-1010-1025324.html) that are available for Macintosh and PC systems. Some 3D animators are switching from their old multi-million dollar SGI systems to Apple G5s for design, and clusters of G5s or PCs for final rendering.


    I'd love to see SGI take off again; I just don't see it happening. Apple is a highly erratic company, so I wouldn't place a big bet on them taking over the workstation market that is currently being offered to them on a silver platter. I am hearing the same kind of negativity from the traditional Unix world about Apple and OS X that the same group offered towards Linux in the mid-90s. Notice that IBM's current Chairman was an internal Linux advocate...


     
    #47     Dec 12, 2003
  8. nitro


    Whoops, typo! I meant to say "(assuming not "embarrassingly parallel" applications.)"

    There are many "cluster interconnects" that are being considered, some proprietary (such as InfiniBand, Myrinet and Quadrics), others not, e.g., Virtual Interface Architecture (VIA), PCI-X, 10 GigE, TCP Offload Engines, Direct Data Placement (DDP), RDMA over IP, RapidIO, HyperTransport, PCI Express, etc.

    They all have their advantages/disadvantages, but I am betting on NUMA or some NUMA hybrid as the one that ultimately scales to hundreds or thousands of nodes. The three tiers of these massive cluster systems are high performance, scalability, and reliability. NUMA has proven itself in all three.

    Ok.

    Well, with Sun on the brink and SGI not much better off, it looks to me like HP is the one most likely to address this niche. I always thought that HP Unix workstations were a better buy than Sun's. As to SGI vs Apple, I have never really put them in the same category, though I see why Apple would not want to stretch itself too thin and spend more money going after peanuts.

    My guess is that we will continue chugging along, with Sun/SGI/Apple struggling, until some new technology innovation comes along, _if_ they can survive till then. SGI _will_ survive - I cannot say that about Apple or Sun.

    Yeah. I worked with IBM's Unix and couldn't stand it. I am not surprised they did not want to throw more money at it.

    The thing to remember is that these companies are hardware companies (Apple/IBM/Sun/SGI) that coincidentally sell software. The key to who wins is, IMHO, who is able to serve their customers most cheaply without sacrificing quality by providing an integrated run-out-of-the-box solution. That is one thing I have noticed about Intel - their processors are no longer state of the art compared to AMD's, but boy, their support is stellar.

    nitro
     
    #48     Dec 12, 2003
  9. CalTrader


    You can debate this forever ... although I hope not!

    As both of you (should) know, the efficacy of a particular choice of parallel architecture depends upon the structure of the problem being solved.

    When I worked for the government we had the money to build very specialized architectures to optimize the solution of very specialized problems. Most academic, government or corporate entities don't have the luxury to match the architecture exactly to the class of problem and look towards more general purpose parallel systems - who cares if your solution has to run an extra couple of days if the bulk of the jobs get finished in reasonable timeframes relevant to your business or project? In these situations - the bulk of supercomputer sales or build projects that are public - cost is a very important factor, and reducing it by orders of magnitude makes less optimal custom build projects very viable. The perfect system is usually abandoned for more cost effective solutions ...
     
    #49     Dec 12, 2003
  10. nitro


    From that article:

    "...The systems are used for graphically challenging chores such as looking at oil field computer models, reviewing new car designs or simulating military combat. Even Procter&Gamble has used them to visualize airflow over Pringle's potato chips to maximize the speed they can be packed in cans without crumbling. " :eek:

    LMAO.

    nitro :D
     
    #50     Dec 12, 2003