ISC HPC Blog
Big Science Requires Big Computers
The high performance computing (HPC) community has reached a remarkable juncture: 23 systems sited in countries around the world are now capable of achieving several quadrillion floating point operations per second (petaFLOP/s). More importantly, these extraordinary resources are making transformational science possible. Equipped with the ability to model enormous systems and generate immense volumes of data, scientists are now conducting simulations on a scale once considered impossible, and consequently are solving incredibly challenging, game-changing scientific problems.
Today’s “big computers” are enabling scientists to explore the interiors of stars, analyze the response of materials to extreme pressures and temperatures, study the consequences of global warming and climate change, understand the mechanisms of genes and proteins, and investigate the effects of seismic forces on structures.
Yet hardware is only part of the story. As computers have become bigger, faster, and more robust and efficient, so too have the visualization capabilities, operating systems, scalable tools, application development environments, mathematical algorithms, and simulation codes. This convergence of big capabilities is helping to usher in a new era of scientific discovery.
Lawrence Livermore National Laboratory’s (LLNL’s) cutting-edge Cardioid simulation, developed in collaboration with scientists at IBM Research, is a prime example of the groundbreaking, globally significant science that can be accomplished on big machines. Run on the 20-petaFLOP/s Sequoia supercomputer (#2 on the TOP500), Cardioid is a highly scalable code that models in exquisite detail the electrophysiology of the human heart, including activation of heart muscle cells and cell-to-cell electrical coupling. Because the code was developed to run with high efficiency in the extreme strong-scaling limit, LLNL scientists were able to model a highly resolved whole heart beating in nearly real time, representing a more than 1,200-fold improvement in time-to-solution over the previous state of the art and coming within 12% of real-time simulation.
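The “extreme strong-scaling limit” mentioned above refers to holding the problem size fixed while adding ever more processors, so that any serial or communication overhead comes to dominate the runtime. A minimal sketch of this effect using Amdahl’s law follows; the numbers are purely illustrative and are not Cardioid’s actual measurements.

```python
def strong_scaling_speedup(serial_fraction: float, n_procs: int) -> float:
    """Amdahl's-law speedup for a fixed-size problem on n_procs processors.

    serial_fraction is the portion of the work that cannot be parallelized.
    """
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / n_procs)

# With no serial work, speedup is ideal: 8 processors give 8x.
print(strong_scaling_speedup(0.0, 8))          # 8.0

# Even a 0.1% serial fraction caps the speedup near 1,000,
# no matter how many processors are added.
print(round(strong_scaling_speedup(0.001, 1_000_000)))  # 999
```

This is why codes designed for the strong-scaling limit, like Cardioid, must drive the serial and communication fractions of each timestep as close to zero as possible.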
Thanks to these code advances and the speed and power of Sequoia, the Cardioid team was able to simulate for the first time the generation of a transmural reentrant activation pattern thought to presage Torsades de Pointes, a type of arrhythmia that can result in sudden cardiac death. The potential to elucidate detailed mechanisms of arrhythmia will have an impact on a multitude of nontrivial applications in medicine, pharmaceuticals, and implantable devices.
Yet, in a field as evolutionary and revolutionary as computing, there will always be a need for bigger computers because there will always be bigger science that beckons us. Even as we test the limits of today’s remarkable machines, we are painfully aware that full simulations still cannot be completed for many problems. For example, we want to understand the chemical evolution of the galaxy and the evolutionary state of a nucleus as it undergoes fission and fusion reactions. We want to add full atmospheric processes to climate models and increase the resolution of climate models to visualize detailed regional impacts of, say, droughts. We want to model cells and organisms and study their evolution at meaningful time and length scales. All of these examples are complex systems containing billions of interacting components, and they all require machines hundreds of times more powerful than current supercomputers.
In the end, we as a community are driven by really big, really fascinating, and really difficult science applications that demand big machines. With big science as our biggest motivator, the HPC community continues to seek out ways to unravel longstanding scientific mysteries and find new opportunities for scientific discovery using our current and future systems. And that is a big challenge that is worth pursuing.