Hi nononsense, I assume you are the resident guru on Gentoo Linux. I am wondering if you have installed the Intel C++ compilers onto a Gentoo installation. I tried to do this but could not get it to work. I think it has to do with RPMs; I emerged the rpm program, but the packages still would not install. When I ran rpm by hand on the Intel-supplied RPM files, it complained that it could not find certain libraries, probably Linux compatibility ones. If anyone has experience with this, I would enjoy hearing how you got it installed and working. nitro
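For anyone hitting the same wall, a hedged sketch of how one might chase down that missing-library complaint. The RPM filename and install path below are placeholders (not from the post), and the ldd output is simulated so the filter itself can actually be run:

```shell
#!/bin/sh
# 1) Ask the package what it declares it needs (placeholder filename):
#    rpm -qpR l_cc_p_8.0.055.rpm
#
# 2) Once the files are unpacked, ldd on the compiler driver shows which
#    shared libraries cannot be resolved. Simulated output here, since the
#    real paths depend on the Intel release:
sample_ldd='libcxa.so.5 => not found
libc.so.6 => /lib/libc.so.6 (0x40000000)
libunwind.so.3 => not found'

# Print just the unresolved libraries, one per line:
printf '%s\n' "$sample_ldd" | awk '/not found/ {print $1}'
```

On a real install you would run `ldd` against the actual icc binary instead of the sample string; the libraries it reports missing are the ones the RPM's dependency check was complaining about.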
Not to interrupt your discussion... but this is one of the reasons I switched to Windows after being a dedicated Linux user for 8 years. Linux was evolving so fast that I ended up spending too much time staying up to date. Tools change. Distributions change. Once I ended up with a nicely customized Linux system, reinstalling was not an option; I had too many installed applications and customizations. So to stay current I was forced to manually update various parts of the system and periodically resolve kernel, driver, installation and library dependencies, especially between binary/source packages developed for a different distribution or directory layout than the one I was using.
I could never see myself porting over to Linux and spending that kind of time (away from improving my systems) experimenting between distros for an extra ounce of performance that is probably at most 20%. There isn't going to be much difference in TCP/IP stack and kernel performance between Linux distros and Windows. Even if the performance difference were 50% to 100%, I'd still put more hardware into the cluster and make porting to Linux a secondary priority. It shouldn't cost you that much; you can get single and 2-way Opteron machines at a significantly better price/performance point than a quad. My dual 242 was probably 1/4 the price of the quad system you bought.

I just can't understand why you aren't more gung-ho about moving to a cluster. Why can't you use the Win XP 64-bit beta? It works great for me. Why spend time squeezing every ounce of performance out of a single 4-way machine when you could be adapting your systems to work on an OS-independent cluster, efficient for N-way systems for any value of N? That seems like the most scalable solution with excellent performance/price right now. Perhaps use one process per CPU and let the OS handle memory affinity. This will pay off big when the dual-core CPUs come out. It will be more effective to buy whole new systems to add to your cluster, as is always the case, especially if motherboard designers continue the trend of increasing memory clock rate and width. This is why you got the quad Opteron in the first place.
J, Everything you are saying is essentially correct as it applies to me. Even so, the Linux SMP kernel is sufficiently superior to the Windows kernel that I would still be motivated to get my system working under Linux, even on a cluster of Linux machines. It is almost trivial for me to go from my current configuration of one or two machines to a cluster when the need arises. Who knows, the need may arise sooner than I think.

You also have to understand that any front-end development I do now is mostly done in an OS-independent way, so if I ever chose to move to Windows from Linux or Mac or whatever, it would not be very difficult. I am in the middle of a port now from C# to C++. Already this experience is making my programs better, not because I am going from C# to C++ for my backend, but because I learned a lot writing the first version and that knowledge/experience will be incorporated into this version.

I cannot use the 64-bit XP beta because I have problems compiling my code to run on it, and it depends on third-party things that won't even install in 64 bits. In spite of that, I am going to be testing the new code (using an alternate third-party data delivery) on Windows EE for Extended Systems (64-bit AMD) to see if that works OK. Then I will make up my mind... nitro
Oh, so you are moving towards an efficient OS-independent cluster implementation. Excellent choice. Now I see why Win XP 64 bit won't work for you. Have you considered developing/executing with the Cygwin libraries+GCC under Win XP 64 bit?
Hi nitro, Sorry it took me a while to get back to your question. I did a few tries with the Intel compiler a few months ago without really using it. My problems right now are not in the high-performance category like yours (see further).

I understand from your post that you tried the RPM way, probably with the download from Intel's site. You may have missed the ebuild that you will find at Gentoo under 'icc'. If you haven't tried this, I would go this way first. Stay within the 'stable' (non-~x86) versions; they keep many online. Normally the stable versions will install like a piece of cake (almost always). Doing a search on the forum I came across this thread, which refers to earlier posts from people running the Intel compiler. I did not look any further, but I remember reading some of these a while ago. http://forums.gentoo.org/viewtopic.php?t=229698&highlight=intel+compile

Coming back to compilers in general, I must say that you find a rather wide range of possibilities at Gentoo's. Many people in the avant-garde segment consider it a challenge to be at the forefront. However, you also quickly find out that many screwed up their systems by installing stuff (compilers, libraries) marked 'unstable'. So I am rather conservative with this. As I am well set up for backups, I will always take an image of my root partition before doing anything like a system upgrade. For a year this went without a problem, but just last week I got caught in the glibc mishap. Luckily it was only minor and I quickly went back to my earlier version. From the forums it appears extremely difficult to manually go back to earlier versions of compilers/libraries.

If you have enough disk space, nitro, you could also make a clone of your running partition and experiment in that one. I do this often too - don't forget to change /etc/fstab and grub.conf. Be good, nononsense
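A minimal sketch of the clone-and-experiment approach nononsense describes. The package name `dev-lang/icc` and the device names `/dev/hda3` and `/dev/hda5` are assumptions; substitute your own partitions. Only the fstab edit is run live here, against a sample line:

```shell
#!/bin/sh
# 1) Prefer the Gentoo ebuild over Intel's raw RPMs, staying on stable:
#    emerge --pretend dev-lang/icc   # preview what would be installed
#    emerge dev-lang/icc
#
# 2) Clone the running root partition onto a spare one, for example:
#    cp -ax / /mnt/clone             # -x stays on one filesystem
#
# 3) Point the clone's /etc/fstab at the new partition before booting it.
#    Shown against a sample line so the edit itself is checkable:
fstab_line='/dev/hda3  /  ext3  noatime  0 1'
printf '%s\n' "$fstab_line" | sed 's|/dev/hda3|/dev/hda5|'
# (make the matching change to the kernel root= line in grub.conf too)
```

On the real clone you would run the same sed in-place against `/mnt/clone/etc/fstab`; the point is that the clone must mount itself, not the original root, when you boot into it.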
It's not an argument, merely an observation from 20+ years of writing massively parallel and not-so-massively parallel systems. Even people who claim to be experts in these areas have produced system designs that were astoundingly bad, with very poor quality control and testing during their design and development phases - essentially rendering their systems salvage. Yes, some of your points are valid, but each system is essentially a custom build for these problems: even slightly different requirements for an optimization can change the problem regime enough that only a very clever design will make multiple problem regimes tractable, and those designs are few and far between ....
... I find it trivial to switch between distributions or migrate between machines, etc. Our stable Linux configurations haven't changed much over the last year except for kernel upgrades. We run our own repository: things just get pushed out within our networks when we deem an upgrade or change worthy of use. With tools like rsync and others you can simply mirror entire configurations .... Really, this stuff is IMHO simple and transparent once you know what you are doing. Like I said, our cost of administering Linux is more than 50% lower than Windows ...
I recently brought up an old Intel OR840 motherboard with dual PIII-800MHz as an auxiliary in my "linux stable". I did some work on this system under Win2K with multithreading, trying to load both processors, but got overtaken by the arrival of the P4s, which made the effort no longer worthwhile. Looking at my brought-back-to-life dual PIII, I now see it hum along with both CPUs sometimes at 95+% load each (190% total) while doing distcc compiles under Linux. This is the only time, except for my own (now obsolete) programming, that I have had the pleasure of seeing it run at this kind of load. ALL OTHER software I ran never got me to a similar 190% total - all I got was typically below 100% aggregate. This is only my personal example, but getting SMP systems loaded up efficiently is a pretty big job. I fully agree with you on this, linuxtrader. nononsense
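For anyone wanting to reproduce that kind of distcc load on Gentoo, a configuration sketch. The hostname `dual-p3` and the job counts are illustrative assumptions, not taken from the post:

```shell
# /etc/distcc/hosts on the machine driving the build
# (the /4 caps jobs sent to the dual PIII at 4):
#
#   localhost dual-p3/4

# Gentoo side, letting portage fan compiles out over distcc:
#
#   emerge distcc
#
# then in /etc/make.conf:
#
#   FEATURES="distcc"
#   MAKEOPTS="-j5"    # rule of thumb: total CPUs across all hosts, plus 1

# Watching both CPUs of the PIII box approach 95% each during an emerge:
#
#   top               # press 1 for per-CPU load
```

The `-j` value is what actually keeps both processors busy: with only one job in flight, an SMP box sits near 100% aggregate no matter how many CPUs it has, which is the sub-100% pattern described above.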