In the past, supercomputing was based on clustering methods, but of late it has taken on a new role as a fast, effective problem solver.
Supercomputers have always been known for mass computing at ultra-high speed: they can solve highly complicated problems within seconds and generate scientific simulations, animated graphics for the analysis of geological data (e.g., in petrochemical prospecting), structural analyses, and computational fluid dynamics models. To do all this, however, a supercomputer must exchange and coordinate data efficiently, and there are various ways of doing so in the least possible time. At any given time there are only a few supercomputers operating at the required speed, though the term is also sometimes applied to far slower (but still impressively fast) computers. Most supercomputers are really multiple computers that perform parallel processing. In general, there are two parallel-processing approaches: symmetric multiprocessing (SMP) and massively parallel processing (MPP).

Perhaps the best-known maker of supercomputers has been Cray Research, now a part of Silicon Graphics. Supercomputers like IBM's "Blue Pacific" sit at the top of the tree. Blue Pacific is reported to operate at 3.9 teraflops (trillion floating-point operations per second), which is 15,000 times faster than the average personal computer. It achieves this with 5,800 processors containing a total of 2.6 trillion bytes of memory, interconnected with five miles of cable, and was built to simulate the physics of a nuclear explosion. IBM is also building an academic supercomputer for the San Diego Supercomputer Center that will operate at 1 teraflop; it is based on IBM's RISC System/6000 and the AIX operating system and will have 1,000 microprocessors using IBM's own POWER3 chip.
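The SMP/MPP distinction can be sketched in miniature. The following is an illustrative Python sketch, not real supercomputer code (production systems use MPI or OpenMP in C or Fortran): the SMP-style version uses threads that share one memory space, while the MPP-style version uses separate processes with private memory that return partial results over an explicit message queue. Note that Python threads are serialized by the interpreter's global lock, so the sketch illustrates the programming model rather than the performance.

```python
import threading
import multiprocessing as mp

# SMP style: threads share one address space and write partial
# results directly into a common list.
def smp_sum(data, n_threads=4):
    chunks = [data[i::n_threads] for i in range(n_threads)]
    results = [0] * n_threads

    def worker(idx, chunk):
        results[idx] = sum(chunk)  # shared memory, no messages

    threads = [threading.Thread(target=worker, args=(i, c))
               for i, c in enumerate(chunks)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return sum(results)

# MPP style: each worker is a separate process with its own memory;
# partial results must travel back as explicit messages.
def _mpp_worker(chunk, queue):
    queue.put(sum(chunk))

def mpp_sum(data, n_procs=4):
    chunks = [data[i::n_procs] for i in range(n_procs)]
    queue = mp.Queue()
    procs = [mp.Process(target=_mpp_worker, args=(c, queue))
             for c in chunks]
    for p in procs:
        p.start()
    total = sum(queue.get() for _ in procs)
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    data = list(range(1000))
    print(smp_sum(data), mpp_sum(data))  # both print 499500
```

In the SMP version coordination is implicit (shared memory); in the MPP version it is explicit (message passing), which is exactly the trade-off the two architectures present at full scale.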
There are several methods that can be employed in programming supercomputers. The modern supercomputer uses parallel processing to coordinate and exchange data efficiently, because parallel processing is cheaper in terms of price/performance, faster than equivalently expensive single-processor machines, more reliable, and able to handle bigger problems. Programming in parallel involves breaking a problem into relatively independent parcels that can each be computed individually by one of the many processing units. However, an easy programming language for supercomputers remains an open research topic in computer science.
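The decomposition into independent parcels can be sketched as follows. This is a minimal illustration in Python, assuming a simple sum-of-squares problem; the parcel boundaries and worker count are arbitrary choices for the example, and on a real supercomputer the same pattern would be expressed with MPI ranks rather than a process pool.

```python
from multiprocessing import Pool

def parcel_work(parcel):
    # Each parcel is computed independently: no coordination is
    # needed until the partial results are combined at the end.
    return sum(x * x for x in parcel)

def parallel_sum_of_squares(values, n_workers=4):
    # Break the problem into relatively independent parcels.
    size = max(1, len(values) // n_workers)
    parcels = [values[i:i + size] for i in range(0, len(values), size)]
    # Hand each parcel to one of the many processing units.
    with Pool(n_workers) as pool:
        partials = pool.map(parcel_work, parcels)
    # Combine the partial results into the final answer.
    return sum(partials)

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(100))))  # prints 328350
```

The design choice that matters is that `parcel_work` needs no data from any other parcel while it runs; problems that decompose this way scale well, while problems whose parcels must constantly exchange data are limited by communication cost.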
Benchmarking is one of the best ways of evaluating parallel-computing codes: testing code on a particular machine helps determine and improve its efficiency, and it also shows whether a specific coding style is suitable for a machine without altering the underlying hardware architecture. In 1994, the Standard Performance Evaluation Corporation's High-Performance Group (SPEC/HPG) was founded with the mission to establish, maintain, and endorse a suite of benchmarks representative of real-world, high-performance computing applications. The latest of the SPEC/HPG benchmark suites is SPEChpc2002 V1.0, which derives its benchmarks from real HPC applications and application practices. It measures the overall performance of high-end computer systems: the computer's processors (CPUs), the interconnection system, the compilers, the MPI and/or OpenMP parallel-library implementation, and the input/output system.
Posted: 10/26/2005