I personally have never used a debugger in my whole programming life... and my programs usually work well ;-)
After more than 14 years of programming, I've found C++ to be really unproductive. Compile errors can be absolute hell, with template expansions sometimes blowing a single method declaration up to 10 lines. Also, GCC is pretty damned slow, and I've yet to find an IDE that can refactor C++ code.

I can't say much about C#. It seems nice, but again, it's locked into MS, so not for me. Mono is out there for Linux, but it is amateur bullshit written by GNOME script kiddies.

I've said it before and I'll say it again: Java plus the Eclipse IDE is absolutely killer. Refactoring, reference searching, code completion, real-time syntax validation and compilation. Just beautiful. My productivity has increased by an order of magnitude since learning all the features.

Speed can be a real issue in Java, especially in performance-sensitive apps, so for the really critical parts of my platform I've used JNI/native calls. I completely bypassed SQL for data storage and am using the Berkeley DB Java wrapper from Sleepycat. My local database easily has several hundred million rows in it. Marshalling to and from the database is simply Java serialization, so there is no data translation layer between it and my app, and database access is completely transparent: it is simply exposed via standard Java collections. For high-performance math I've written bindings to ATLAS, plus some custom code converted from Fortran to C.

Also, the plethora of free packages available for Java are often plug-and-play, which increases productivity even more. And re: debuggers, the Eclipse debugger is absolutely beautiful; it supports remote debugging as well.
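To give a feel for how transparent that is, here is a stripped-down sketch of the kind of thing I mean, assuming Sleepycat's Java Edition and its collections API (the environment path, database names, and the Tick record class are all invented for illustration):

    import java.io.File;
    import java.io.Serializable;
    import com.sleepycat.bind.serial.SerialBinding;
    import com.sleepycat.bind.serial.StoredClassCatalog;
    import com.sleepycat.collections.StoredMap;
    import com.sleepycat.je.Database;
    import com.sleepycat.je.DatabaseConfig;
    import com.sleepycat.je.Environment;
    import com.sleepycat.je.EnvironmentConfig;

    public class TickStore {
        // Any Serializable class can be stored as-is -- no translation layer.
        static class Tick implements Serializable {
            final String symbol;
            final double price;
            Tick(String symbol, double price) { this.symbol = symbol; this.price = price; }
        }

        public static void main(String[] args) throws Exception {
            // Open (or create) the on-disk environment and databases.
            EnvironmentConfig envConfig = new EnvironmentConfig();
            envConfig.setAllowCreate(true);
            Environment env = new Environment(new File("/data/bdb"), envConfig);

            DatabaseConfig dbConfig = new DatabaseConfig();
            dbConfig.setAllowCreate(true);
            Database catalogDb = env.openDatabase(null, "catalog", dbConfig);
            Database tickDb = env.openDatabase(null, "ticks", dbConfig);

            // Keys and values are marshalled by plain Java serialization.
            StoredClassCatalog catalog = new StoredClassCatalog(catalogDb);
            SerialBinding keyBinding = new SerialBinding(catalog, Long.class);
            SerialBinding valueBinding = new SerialBinding(catalog, Tick.class);

            // From here on, the database is just a java.util.Map.
            StoredMap ticks = new StoredMap(tickDb, keyBinding, valueBinding, true);
            ticks.put(new Long(1), new Tick("MSFT", 25.07));
            Tick t = (Tick) ticks.get(new Long(1));
            System.out.println(t.symbol + " @ " + t.price);

            tickDb.close();
            catalogDb.close();
            env.close();
        }
    }

The class catalog is what keeps the records compact: each class description is written to the database once, so individual records carry only the instance data.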
I like Eclipse and we're using it more and more. We use Java when it makes sense... but we still use C++ over JNI for most things.
You can still use C++ from JNI by declaring your methods extern "C" and then, from there, calling the rest of your C++ code. I also use a mixture of C and C++, but most of the native stuff I call is either Fortran or C, so I don't really have a huge need for C++.
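For anyone who hasn't tried it, the Java half looks something like this sketch (the class and library names are invented for illustration). The matching C++ entry point is outlined in the comment; because it is compiled with extern "C", the JVM can resolve the symbol by its plain name, and from inside that function you can call whatever C++ you like:

    // Hypothetical Java side of a JNI binding.  The C++ side defines,
    // with C linkage:
    //
    //   extern "C" JNIEXPORT jdouble JNICALL
    //   Java_NativeMath_dot(JNIEnv *env, jclass cls,
    //                       jdoubleArray a, jdoubleArray b) {
    //       // ...from here, call any C++ (or C/Fortran) code you like...
    //   }
    //
    public class NativeMath {
        static {
            System.loadLibrary("nativemath"); // loads libnativemath.so
        }

        // Resolved against the extern "C" symbol above at load time.
        public static native double dot(double[] a, double[] b);

        public static void main(String[] args) {
            double[] a = { 1, 2, 3 };
            double[] b = { 4, 5, 6 };
            System.out.println(NativeMath.dot(a, b)); // 32.0, once the native lib is built
        }
    }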
I know about extern ... but we still cannot use this method in many cases. Most of our high-performance stuff is well-tested C/C++ libraries that we have no need to rewrite or wrap. Other libraries can indeed be wrapped and used from within Java - or something else. I still think Eclipse is a very nice environment, and you can't beat the price.
My next motherboard, or the future of ATS? (Imagine if they had popped in Opterons.)

Blue Gene/L, already ranked as the fastest supercomputer on the planet, has been doubled in size, according to researchers at Lawrence Livermore National Laboratory in Livermore, California.

Lawrence Livermore has been running a 32,000-processor system since December, but three weeks ago trucks began delivering the components that allowed it to add another 32,000 processors' worth of power to the supercomputer, effectively doubling its processing power. Though there are still some adjustments being made, the system is now operational, said Robin Goldstone, group leader with the Production Linux Group at Lawrence Livermore. "It's mostly functional. They've actually run calculations on the 32,000 nodes," she said Wednesday. "They're shaking out the last few bad nodes."

Perhaps the most remarkable characteristic of Blue Gene/L is how compact it is. When the complete system is assembled into a total of 64 server racks this June, it will be about half the size of a tennis court, which is much smaller than most of today's supercomputers. Blue Gene/L will consume less power too. The final system is expected to draw approximately 1.6 megawatts. To put this in perspective, another supercomputer that Lawrence Livermore will be bringing online this June, the 100-teraflop ASCI Purple system, is expected to require 4.8 megawatts.

72 Trillion Calculations Per Second

Blue Gene/L is made up of approximately 32,000 two-processor nodes, giving it about 64,000 processors in total, Goldstone said. A 33,000-processor prototype of Blue Gene/L, assembled by IBM last November, was ranked the fastest computer on the planet on the Top 500 list of the world's fastest supercomputers. IBM's prototype was benchmarked at 70.72 trillion calculations per second, or teraflops, using the Linpack benchmark, which puts the system through a series of mathematical calculations.

Lawrence Livermore's new system is expected to be capable of approximately twice that performance, making it nearly three times as powerful as the next system on the list, the U.S. National Aeronautics and Space Administration's 10,240-processor Columbia supercomputer. Columbia has been benchmarked at 51.87 teraflops. Goldstone declined to comment on the Livermore system's benchmark performance.

The 32,000-node Blue Gene/L represents the second stage of a three-part build-out of the $100 million supercomputer that is expected to be completed by June. When fully assembled at Lawrence Livermore, Blue Gene/L will be a 130,000-processor system with a theoretical peak performance of 360 teraflops, according to IBM.

The difference between ASCI Purple and Blue Gene/L is that ASCI Purple will be made out of general-purpose servers, similar to IBM's eServer p655, whereas Blue Gene/L's compute nodes contain little more than memory and processors. "We've kind of reached the limit with these commodity clusters," said Goldstone. "They just generate too much heat and too much power."

Going Commercial

IBM is now in the process of commercializing Blue Gene/L and is selling a 5.7-teraflop single-rack version of the system, called the eServer Blue Gene Solution, to high-performance computing customers. The company has also agreed to deliver Blue Gene systems to a number of research institutions, including the San Diego Supercomputer Center and the University of Edinburgh. This month the computer maker plans to operate a 100-teraflop Blue Gene system at its Thomas J. Watson Research Center in Yorktown Heights, New York. This system, which IBM claims will be the world's largest privately owned supercomputer, will be used, in part, for life sciences research.
No Stephen, I didn't say so. I only wanted to liven up that great next-motherboard thread a bit. nono
Well, perhaps I spoke too soon... you don't need this much horsepower in real time. But I have a method of automatic basket search which could easily use up this much power every day. Estimation of models can take a lot of crunch time, but once that is done, model evaluation is usually a piece of cake. As it is, I have to make some assumptions about the relations and tell it to search with the limited resources I have. Dual Opterons are working fine for me.
Let me add this: I picked this up in the context of a discussion about cracking a 128-bit cryptographic key. A participant estimated it would take about 107902830708060144 years to brute-force a 128-bit key with this computer.

nononsense

PS: How many years would it take to find that Holy Grail? Haven't figured that one out.

______________________________
There are two types of encryption: one that will prevent your sister from reading your diary and one that will prevent your government. (Bruce Schneier)
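By the way, that estimate is easy to reproduce. If you assume the machine tests 10^14 keys per second (one key test per floating-point operation at 100 teraflops -- my guess at the participant's assumptions, not something stated in that discussion) and a 365-day year, then working through all 2^128 keys in double-precision arithmetic lands exactly on the figure above:

    public class BruteForce128 {
        public static void main(String[] args) {
            double keys = 0x1.0p128;                   // 2^128 possible 128-bit keys (exact as a double)
            double testsPerSecond = 1e14;              // assumed rate: one key test per flop at 100 teraflops
            double secondsPerYear = 365.0 * 24 * 3600; // 31,536,000 (365-day year)
            double years = keys / (testsPerSecond * secondsPerYear);
            System.out.printf("%.0f years%n", years);  // prints 107902830708060144
        }
    }

And one key test per flop is wildly optimistic for the attacker; even so, the search would take roughly 10^7 times the age of the universe.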