Hybrid MPI & OpenMP Parallel Programming
Sunday, May 30, 2010, 1:30pm – 6:00pm
Hamburg University, Main Building, Room C
- Dr. Rolf Rabenseifner, Head of Department Parallel Computing – Training & Application Services, High Performance Computing Center, Germany
- Dr. Georg Hager, Senior Research Scientist, Erlangen Regional Computing Center, University of Erlangen-Nuremberg, Germany
- Dr. Gabriele Jost, Research Scientist, Texas Advanced Computing Center, The University of Texas at Austin, USA
Most HPC systems are clusters of shared memory nodes. Such systems can be PC clusters with dual or quad boards and single- or multi-core CPUs, but also "constellation" type systems with large SMP nodes. Parallel programming may combine distributed memory parallelization over the node interconnect with shared memory parallelization within each node. This tutorial analyzes the strengths and weaknesses of several parallel programming models on clusters of SMP nodes. Various hybrid MPI+OpenMP programming models are compared with pure MPI. Benchmark results from several platforms are presented. The thread-safety quality of several existing MPI libraries is also discussed. Case studies will be provided to demonstrate various aspects of hybrid MPI/OpenMP programming. Another option is the use of distributed virtual shared-memory technologies. Application categories that can take advantage of hybrid programming are identified. Multi-socket multi-core systems in highly parallel environments are given special consideration.
Level of tutorial
Participants should already have some knowledge of shared and distributed memory parallelization, e.g., with OpenMP and MPI.