The Worldwide Quest for Energy-Efficient Supercomputing


Outside Boulder, Colorado, construction is finishing up on a new 182,500-ft² (16,954-m²) building that will open this fall as the US Department of Energy National Renewable Energy Laboratory's Energy Systems Integration Facility (ESIF). The ESIF will include a state-of-the-art, high-performance computing data center that will improve and expand capabilities in modeling and simulating renewable energy and energy-efficiency technologies. Across the Pacific Ocean, in Tokyo, Japan, the Tokyo Institute of Technology operates, according to the Green500 list, the world's most energy-efficient production petascale supercomputer, built on HP ProLiant servers. And across the Atlantic, outside Reykjavik, Iceland, one of the first occupants of a data center built by Verne Global, a consortium of Nordic universities, has installed an HP ProLiant-based supercomputer. From the latest energy-efficient CPUs and GPUs to super-efficient data centers, leading supercomputing centers around the world are continually exploring new ways to increase the performance per watt of new supercomputers. A quick look at the projected power usage of exascale systems explains why.


Today's best process technology requires about 70 picojoules per floating-point operation (FLOP). A US Defense Advanced Research Projects Agency (DARPA) study projected that by the end of the decade this will drop to 5-10 picojoules per FLOP. Doing the math, a 2020 exascale system would require 5-10 megawatts to sustain 1 exaFLOP/s of calculations. While that is a lot of power, there are certainly many data centers today that can support 5-10 megawatts. So why all the concern about energy efficiency? Unfortunately, FLOPs alone are not the power driver in modern supercomputers. The energy cost of moving two 64-bit operands in and out of the processor is estimated at 1,000 to 3,000 picojoules per FLOP, and at that rate an exascale system would require 1 to 3 gigawatts, well outside the capabilities of even the world's largest data centers. With HP partners such as Intel, AMD, and NVIDIA focused on driving down energy usage per FLOP at the processor level, HP Labs' exascale efforts include work addressing the data-movement side of the equation, including advanced low-power photonics and research on new non-volatile memory based on memristors.


At the data center level, the focus for much of the last decade has been on decreasing Power Usage Effectiveness (PUE), getting as close as possible to the ideal PUE of 1. Solutions such as HP's EcoPOD and state-of-the-art data centers like the ESIF can already operate at PUEs of 1.1 or lower, well below the 2010 average of 2. As a result, further efforts to approach a "perfect" PUE of 1.0 through incremental advances in power distribution and cooling will yield increasingly diminishing returns. Isn't it time, then, to stop focusing on the "right side" of the PUE decimal point and shift the focus from energy efficiency to sustainability? HP researchers have suggested that a PUE of 1.0 should not be the end goal; instead, new metrics are needed to measure and promote net energy consumption, net carbon footprint, and net water usage, with an objective of reaching "0," not "1." While no one has figured out how to build a processor that uses zero energy, data centers as a whole can begin to approach net-zero efficiency, or NZE, through a variety of sustainability approaches: on-site wind, solar, geothermal, and other energy co-generation; capturing and reusing the heat generated by servers for building heat and other uses; and so-called "warm water" cooling systems that drastically reduce the evaporative losses of the chilled-water systems in today's data centers.


HP once again looks forward to being a sponsor of ISC12, where we will be discussing many of our energy-efficient supercomputing technologies. If you are attending ISC12, you are also invited to attend the HP-CAST user group meeting held immediately prior to the main ISC12 conference.





