ISC'14

June 22–26, 2014
Leipzig, Germany

Session Details

 
Name: Tutorial 06: Advanced OpenMP: Performance & 4.0 Features
 
Time: Sunday, June 22, 2014
09:00 am - 01:00 pm
 
Room:   Seminar Room 14/15
CCL - Congress Center Leipzig
 
Breaks: 08:00 am - 10:30 am Welcome Coffee
 
Presenter:   Bronis R. de Supinski, LLNL
  Michael Klemm, Intel
  Eric Stotzer, Texas Instruments
  Christian Terboven, RWTH Aachen University
 
Abstract:   With the increasing prevalence of multicore processors, shared-memory programming models are essential. OpenMP is a popular, portable, widely supported and easy-to-use shared-memory model. Developers usually find OpenMP easy to learn. However, they are often disappointed with the performance and scalability of the resulting code. This disappointment stems not from shortcomings of OpenMP but rather from the lack of depth with which it is employed. Our “Advanced OpenMP Programming” tutorial addresses this critical need by exploring the implications of possible OpenMP parallelization strategies, both in terms of correctness and performance.
While we quickly review the basics of OpenMP programming, we assume attendees understand basic parallelization concepts and will easily grasp those basics. We focus on performance aspects, such as data and thread locality on NUMA architectures, false sharing, and exploitation of vector units. We discuss language features in depth, with emphasis on advanced features like tasking and those recently added in OpenMP 4.0, such as cancellation. We close with a presentation of the new directives for attached compute accelerators.

Content Level
  • 10% Introductory; quick overview of OpenMP features
  • 50% Intermediate; new and advanced language features
  • 40% Advanced; correctness pitfalls, performance pitfalls, performance use cases
Prerequisites
  • Common knowledge of general computer architecture concepts (e.g., SMT, multi-core, and NUMA).
  • A basic knowledge of OpenMP, as (for example) taught in A Hands-On Introduction to OpenMP by Mattson et al.
  • Good knowledge of C, C++, or Fortran.
Target Audience
Our primary target is HPC programmers with some knowledge of OpenMP who want to implement efficient shared-memory code.