MPI Tutorials to be presented on

June 26th, 2017, at the Argonne TCS Center

(public event; Argonne gate pass not required)

by

Pavan Balaji, Argonne National Laboratory, USA

Ken Raffenetti, Argonne National Laboratory, USA

Halim Amer, Argonne National Laboratory, USA

Yanfei Guo, Argonne National Laboratory, USA


Please RSVP to Mary Dzielski by June 22nd (Thursday).


Directions: The tutorial will be held in Room 1416 of the TCS Building (Building 240). Attendees from outside Argonne do not need a gate pass. To reach the building, do not go all the way to the guard gate at the main entrance. Instead, just before the guard gate, turn right at the Visitors Center and continue past it to the conference center of the TCS Building.

Lunch: Lunch is on your own. If you do not have an Argonne badge, you will not be able to use the Argonne cafeteria, which is inside the fence. You are welcome to bring your own lunch or to visit one of the many restaurants on 75th Street (a 10-minute drive).

Important Note: This year's tutorials are part of the Scaling to Petascale Institute. From June 26-30, 2017, the free week-long institute will prepare participants to scale simulations and data analytics programs to petascale-class computing systems. Participants must register to attend one of the host sites or to watch the sessions live on YouTube. For details, visit: https://bluewaters.ncsa.illinois.edu/petascale-summer-institute. Only register for the Scaling to Petascale Institute if you plan to attend more than just the MPI tutorials on Monday.


Agenda

1000 - 1045: Keynote by Paul Messina, Argonne National Laboratory

1045 - 1100: Coffee Break

1100 - 1300: Introduction to MPI (part 1)


1300 - 1400: Lunch (on your own)


1400 - 1500: Introduction to MPI (part 2)

1500 - 1545: Advanced Parallel Programming with MPI-3 (part 1)

1545 - 1600: Coffee Break

1600 - 1800: Advanced Parallel Programming with MPI-3 (part 2)


Introduction to MPI

Abstract: The Message Passing Interface (MPI) has been the de facto standard for parallel programming for more than two decades, and knowledge of MPI is considered a prerequisite for most people aiming for a career in parallel programming. This beginner-level tutorial introduces parallel programming with MPI. It will provide an overview of MPI, the features it offers, current MPI implementations, and its suitability for parallel computing environments. The tutorial will also discuss good programming practices and common pitfalls in MPI programming. Finally, several application case studies, with examples from nuclear physics, combustion, and quantum chemistry, will illustrate how real applications use MPI.
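
To give a flavor of the material, below is a minimal sketch of an MPI program in C; it is illustrative only and not part of the tutorial materials. Each process initializes the library, queries its rank and the total number of processes, and prints a greeting.

    /* Minimal MPI "hello world" in C. Compile with an MPI wrapper
       compiler (e.g., mpicc hello.c) and run with a launcher such as
       mpiexec -n 4 ./a.out. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, size;

        MPI_Init(&argc, &argv);               /* initialize the MPI library */
        MPI_Comm_rank(MPI_COMM_WORLD, &rank); /* this process's rank        */
        MPI_Comm_size(MPI_COMM_WORLD, &size); /* total number of processes  */

        printf("Hello from rank %d of %d\n", rank, size);

        MPI_Finalize();                       /* clean up before exiting    */
        return 0;
    }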

Tutorial Goals: MPI is widely recognized as the de facto standard for parallel programming. Even though knowledge of MPI is increasingly becoming a prerequisite for researchers and developers involved in parallel programming at universities, research labs, and in industry, very few institutions offer formal training in MPI. The goal of this tutorial is to teach the fundamentals of MPI to users with basic programming knowledge and equip them to get started with MPI programming. Based on these trends and the associated challenges, the goals of this tutorial are:

  • Making attendees familiar with MPI programming and its benefits (a minimal point-to-point sketch follows this list)
  • Providing an overview of available MPI implementations and the status of their capabilities with respect to the MPI standard
  • Illustrating MPI usage models from various application domains, including nuclear physics, computational chemistry, and combustion
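
For instance, the following sketch in C, which is illustrative and not taken from the tutorial materials, shows the most basic MPI communication pattern: a blocking send and a matching receive between two ranks. It assumes the job is launched with at least two processes.

    /* Illustrative sketch: blocking point-to-point communication.
       Assumes at least two processes (e.g., mpiexec -n 2 ./a.out). */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, value;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        if (rank == 0) {
            value = 100;
            /* send one int to rank 1 with message tag 0 */
            MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
        } else if (rank == 1) {
            /* receive one int from rank 0 with matching tag 0 */
            MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            printf("rank 1 received %d from rank 0\n", value);
        }

        MPI_Finalize();
        return 0;
    }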

Targeted Audience: This tutorial is targeted at people working in the areas of high-performance communication and I/O, storage, networking, middleware, programming models, and applications related to high-end systems. Specific audiences this tutorial is aimed at include:

  • Newcomers to the field of distributed memory programming models who are interested in familiarizing themselves with MPI
  • Managers and administrators responsible for setting up next-generation high-end systems and facilities in their organizations and laboratories
  • Scientists, engineers, and researchers working on the design and development of next-generation high-end systems, including clusters, data centers, and storage centers
  • System administrators of large-scale clusters
  • Developers of next-generation parallel middleware and applications

Advanced Parallel Programming with MPI-3

Abstract: The Message Passing Interface (MPI) has been the de facto standard for parallel programming for more than two decades. However, the vast majority of applications rely only on basic MPI-1 features, without taking advantage of the rich functionality the rest of the standard provides. Further, MPI-3 (released in September 2012) introduced a large number of new features, including efficient one-sided communication, support for external tools, non-blocking collective operations, and improved support for topology-aware data movement. This advanced-level tutorial will provide an overview of these powerful features, with an emphasis on MPI-2 and MPI-3.
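
As one illustration of the MPI-3 one-sided model, the sketch below, which is illustrative and not part of the tutorial materials, has rank 0 write a value directly into a memory window exposed by rank 1 using MPI_Win_allocate, MPI_Put, and fence synchronization. It assumes the job is launched with at least two processes.

    /* Illustrative sketch: MPI-3 one-sided communication.
       Assumes at least two processes. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, *buf;
        MPI_Win win;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        /* MPI_Win_allocate (new in MPI-3) lets the library allocate the
           window memory itself, enabling more efficient remote access. */
        MPI_Win_allocate(sizeof(int), sizeof(int), MPI_INFO_NULL,
                         MPI_COMM_WORLD, &buf, &win);
        *buf = -1;

        MPI_Win_fence(0, win);              /* open an access epoch */
        if (rank == 0) {
            int value = 42;
            /* write "value" into rank 1's window at displacement 0 */
            MPI_Put(&value, 1, MPI_INT, 1, 0, 1, MPI_INT, win);
        }
        MPI_Win_fence(0, win);              /* close the epoch      */

        if (rank == 1)
            printf("rank 1 received %d via MPI_Put\n", *buf);

        MPI_Win_free(&win);
        MPI_Finalize();
        return 0;
    }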

Tutorial Goals: MPI is widely recognized as the de facto standard for parallel programming. Even though knowledge of MPI is increasingly becoming a prerequisite for researchers and developers involved in parallel programming at universities, research labs, and in industry, very few institutions offer formal training in MPI. The goal of this tutorial is to give users who already know the basics of MPI a working knowledge of the powerful techniques available in recent MPI versions, including the MPI-3 standard. Based on these trends and the associated challenges, the goals of this tutorial are:

  • Providing an overview of current large-scale applications and the data movement efficiency issues they face
  • Providing an overview of the advanced features available in MPI-2 and MPI-3 (a non-blocking collective sketch follows this list)
  • Illustrating how scientists, researchers, and developers can use these features to design new applications
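
For example, the sketch below, illustrative only and not part of the tutorial materials, starts a non-blocking MPI_Iallreduce, leaves room for computation to overlap with the communication, and then completes the operation with MPI_Wait.

    /* Illustrative sketch: MPI-3 non-blocking collective. */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char **argv)
    {
        int rank, local, global;
        MPI_Request req;

        MPI_Init(&argc, &argv);
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        local = rank + 1;
        /* start the reduction without blocking */
        MPI_Iallreduce(&local, &global, 1, MPI_INT, MPI_SUM,
                       MPI_COMM_WORLD, &req);

        /* ... independent computation could overlap with the
           communication here ... */

        MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete the collective */
        printf("rank %d: sum over all ranks = %d\n", rank, global);

        MPI_Finalize();
        return 0;
    }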

Targeted Audience: This tutorial is targeted at people working in the areas of high-performance communication and I/O, storage, networking, middleware, programming models, and applications related to high-end systems. Specific audiences this tutorial is aimed at include:

  • Scientists, engineers, and researchers working on the design and development of next-generation high-end systems, including clusters, data centers, and storage centers
  • System administrators of large-scale clusters
  • Developers of next-generation middleware and applications