Power Trip – The Challenge of Supercomputing

30th January, 2014


When the word ‘supercomputer’ is uttered, it’s easy to envision a gigantic, alien device built from Back to the Future-style technology. After all, these machines are used in fields such as quantum mechanics, climate research, and oil exploration. But there are a few things you might not know about these specialized appliances.

While modern supercomputers are generally gargantuan and sophisticated, the technologies they use are far from exotic. In fact, they typically employ high-end versions of familiar architectures such as AMD Opteron, Intel Ivy Bridge, Nvidia Tesla, and even Sony’s Cell.

Another surprise is the cost of running a supercomputer. Sure, you wouldn’t expect them to be cheap, but we’re talking really big bucks here. Today’s supercomputers operate at the petaflop level – more than 10^15 floating point operations per second. Floating point operations are the standard yardstick in high-performance computing because they are more demanding than integer operations, and because scientific models require their precision and mathematical sophistication. The first petaflop-range supercomputer, the IBM Roadrunner commissioned by the U.S. Department of Energy, drew 2,345 kilowatts to sustain more than a petaflop, carrying a potential yearly energy cost of around $2.5 million. No wonder it was retired after just five years.
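As a rough sanity check on those figures, a 2,345-kilowatt machine running around the clock does indeed land near the quoted bill. The electricity rate below is an assumption for illustration, not a figure from the article:

```python
# Back-of-the-envelope check of Roadrunner's yearly energy bill.
# The $/kWh rate is an assumed industrial rate, not from the article.
POWER_KW = 2_345            # Roadrunner's draw, as quoted
HOURS_PER_YEAR = 24 * 365   # ~8,760 hours of continuous operation
PRICE_PER_KWH = 0.12        # assumed electricity price, USD per kWh

annual_kwh = POWER_KW * HOURS_PER_YEAR          # ~20.5 million kWh
annual_cost = annual_kwh * PRICE_PER_KWH        # ~$2.5 million
print(f"~{annual_kwh / 1e6:.1f} GWh/year, ~${annual_cost / 1e6:.2f} million/year")
```

At roughly twelve cents per kilowatt-hour, the total comes out within a few percent of the $2.5 million figure.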

One of the highest-performing supercomputer systems in the world, Titan, operates at 17.6 petaflops while delivering 2,143 megaflops (million floating point operations per second) per watt of power consumed. Running this system at the scale needed for some of the hardest problems in scientific computing, such as climate change modeling, can easily incur energy consumption comparable to that of a small town.
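Those two figures, taken together, imply Titan's total power draw. A minimal sketch of the arithmetic, using only the numbers quoted above:

```python
# Infer Titan's power draw from the article's two performance figures.
PFLOPS = 17.6                  # sustained performance, petaflops
FLOPS_PER_WATT = 2_143e6       # efficiency: 2,143 megaflops per watt

watts = PFLOPS * 1e15 / FLOPS_PER_WATT
print(f"~{watts / 1e6:.1f} MW")   # roughly 8.2 megawatts
```

Eight-plus megawatts of continuous draw is indeed on the order of a small town's demand, which is the comparison the article is making.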
