ISC HPC Blog

Breaking The Law

Posted:


As a theoretical physicist, I am fascinated by “fundamental” laws of physics describing basic phenomena of nature. Many of these laws are tested through numerous experiments and are “valid” within the constraints defined by these experiments. In this sense we “believe”, for example, in Newton's law of gravity.


However, any further experiment might challenge our fundamental concepts. Sometimes they can even be broken. For example, Michelson's interferometer experiment questioned some of Newton's basic assumptions on the structure of space and time and paved the way for new ideas. Eventually this led to Einstein's theories of special and general relativity, changing our understanding of the structure of space-time once and for all.


It is experiments that differentiate natural science and engineering from mathematics. Mathematicians can "choose" their axioms – often inspired by real-world phenomena – while successful physicists, chemists and engineers infer relevant axioms from controlled, robust and reproducible experiments and may formulate far-reaching theories from them. However, while theories are often believed to be true laws of nature, experiments might teach us better. Conclusion: challenging fundamental laws by experiment is crucial for progress in science and engineering.

 

In parallel computing, there is a fundamental law stating that the fastest speedup achievable through parallelization is restricted by the part of the program that cannot be parallelized. This law, named after Gene Amdahl, appears to be fundamental for strong scaling. Its generalization governs efficiency in the presence of different concurrency levels. From Gustafson we learned how weak scaling can explain why many problems scale very far, sometimes to hundreds of thousands of cores; he showed us a specific loophole in Amdahl’s law. Still, the exponentially growing number of processors in future supercomputers will entail increasing restrictions on the efficiency of massively parallel computing and will make life very hard in the Exascale era.
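
To make the contrast concrete, here is a small numerical sketch (my own illustration, not part of the project): Amdahl's strong-scaling speedup 1/(s + (1-s)/p) saturates at 1/s no matter how many processors p are used, while Gustafson's scaled speedup s + (1-s)p keeps growing because the parallel work grows with the machine. The serial fraction s = 0.01 is chosen purely for illustration.

def amdahl_speedup(s, p):
    # Strong scaling: fixed problem size, serial fraction s, p processors.
    return 1.0 / (s + (1.0 - s) / p)

def gustafson_speedup(s, p):
    # Weak scaling: the parallel work grows with the number of processors.
    return s + (1.0 - s) * p

s = 0.01  # assumed serial fraction, for illustration only
for p in (10, 100, 10_000, 1_000_000):
    print(f"p = {p:>9,}: Amdahl {amdahl_speedup(s, p):8.1f}   "
          f"Gustafson {gustafson_speedup(s, p):12.1f}")

In this toy example Amdahl's curve never exceeds 100, whereas the scaled speedup grows without bound; that is precisely the loophole Gustafson pointed out.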

 

I believe the time is ripe to challenge Amdahl's generalized law by exposing it to a new class of experiments in parallel computing. With the DEEP project we are about to demonstrate that the pitfalls of Amdahl’s law can be avoided in specific situations.


DEEP keeps the code parts of a simulation that can only be parallelized up to a concurrency of p = L on a Cluster equipped with fast general-purpose processors. The highly parallelizable parts of the simulation run on a massively parallel Booster system with a concurrency of p = H, where H >> L. The Booster is equipped with many-core Xeon Phi processors connected by a 3D torus network with sub-microsecond latency, based on EXTOLL technology.
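
The expected benefit can be sketched with the same kind of back-of-the-envelope model (again my own illustration, with made-up parameters rather than DEEP measurements): suppose a fraction f_L of the work scales only to L processes, while the rest scales to H.

def cluster_booster_speedup(f_L, L, H):
    # Each part runs at its own concurrency: the limited fraction f_L on the
    # Cluster with L processes, the remaining fraction on the Booster with H.
    f_H = 1.0 - f_L
    return 1.0 / (f_L / L + f_H / H)

# Illustrative parameters only (not DEEP project figures):
f_L, L, H = 0.05, 128, 100_000
print(f"Cluster only (everything capped at L):  {L}x")
print(f"Cluster (L={L}) plus Booster (H={H}): "
      f"{cluster_booster_speedup(f_L, L, H):.0f}x")

In this toy model the poorly scaling part no longer caps the whole run at L; it merely adds a small constant cost, which is the effect DEEP aims to demonstrate on real applications.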

 

The DEEP system software distributes tasks dynamically to the most appropriate parts of the hardware in order to achieve the highest computational efficiency. The MPI programming paradigm, combined with an improved version of BSC's OmpSs task-based programming environment, allows application programmers to abstract from the system software by simply requesting the necessary resources; the rest is done dynamically by the system. Hence the name DEEP, the “Dynamical Exascale Entry Platform”.
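
As a rough illustration of the offload pattern only: the sketch below uses standard MPI dynamic process management via mpi4py, not the actual OmpSs syntax used in DEEP, and the script name "booster_kernel.py" and the process count are invented for the example.

from mpi4py import MPI
import numpy as np

# Low-concurrency part of the application: runs here, on the Cluster side.
field = np.random.rand(1_000_000)

# Request highly parallel resources for the scalable kernel; in DEEP this
# placement is handled by the system software, here we spawn by hand.
booster = MPI.COMM_SELF.Spawn("python3", args=["booster_kernel.py"], maxprocs=64)

# Hand the input over to the spawned group and collect the reduced result.
booster.bcast(field, root=MPI.ROOT)
result = booster.recv(source=0)
booster.Disconnect()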

 

The applications adapted to DEEP have been selected to investigate and demonstrate how this combination of hardware, system software and programming model can leave the ground and leap beyond the limits of Amdahl’s law of parallel computing. We are eager to show our first results at ISC’13 in Leipzig.

 

The DEEP project (www.deep-project.eu), comprising 16 partners from 8 different countries and funded by the European Commission, started in December 2011. In the first BoF session of ISC’13 we will present the results achieved since then and demonstrate the hardware that is already up and running at the Jülich Supercomputing Centre.

 

So, learn more about our experiment aimed at breaking the fundamental law of parallel computing! Join BoF 1, “Exascale Research – The European Approach”, held by the three EU-funded exascale computing projects DEEP, CRESTA and Mont-Blanc, on Tuesday, June 18, 2013, from 9 to 10 am in Hall 4.

 

Biography:

Prof. Dr. Dr. Thomas Lippert received his diploma in Theoretical Physics in 1987 from the University of Würzburg. He completed Ph.D. theses in theoretical physics at Wuppertal University, on simulations of lattice quantum chromodynamics, and at Groningen University, in the field of parallel computing with systolic algorithms. He is director of the Jülich Supercomputing Centre at Forschungszentrum Jülich, member of the board of directors of the John von Neumann Institute for Computing (NIC), and he holds the chair for Computational Theoretical Physics at the University of Wuppertal. His research interests include lattice gauge theories, quantum computing, numerical and parallel algorithms, and cluster computing.

 

 

Thomas Lippert
 


Comments

Comment by Ghiora:

Hi,
As far as I have read, Einstein did not know about the Michelson-Morley experiment and derived the fact that light has a constant speed in vacuum from Maxwell's theory of electromagnetism. There are some people who dispute this, but I assume they are American :)

