MPI Tutorials to be presented on
June 21st, 2019, at the Argonne TCS Center
(public event; Argonne gate pass not required)
by
Pavan Balaji, Argonne National Laboratory, USA
Rajeev Thakur, Argonne National Laboratory, USA
Ken Raffenetti, Argonne National Laboratory, USA
Giuseppe Congiu, Argonne National Laboratory, USA
Huansong Fu, Argonne National Laboratory, USA
Registration is free.
Please RSVP to Kathleen Daily by Friday, June 7th to attend.
Directions: The tutorial will be held in Room 1416 of the TCS Building (Building 240). Attendees from outside Argonne do not need a gate pass. To reach the building, do not drive all the way to the guard gate at the main entrance. Instead, just before the guard gate, turn right at the Visitors Center and continue past it to the conference center of the TCS Building.
Lunch: Lunch is on your own. If you do not have an Argonne badge, you will not be able to visit the Argonne cafeteria, since it is inside the fence. You are welcome to bring your own lunch or plan to drive to one of the many nearby restaurants.
Note: Slides and exercises from the tutorial will be made available here prior to the event. Attendees are encouraged to bring laptops and follow along. Computing equipment will not be provided.
Agenda
0900 - 1030: Introduction to MPI (Point-to-Point, Collectives, Datatypes)
1030 - 1100: Coffee Break
1100 - 1230: Advanced MPI (One-Sided Communication)
1230 - 1400: Lunch (on your own)
1400 - 1530: Advanced MPI (MPI+Threads, MPI+Shared Memory)
1530 - 1600: Coffee Break
1600 - 1730: Advanced MPI (MPI+Accelerators)
Introduction to MPI
Abstract: The Message Passing Interface (MPI) has been the de facto standard for parallel programming for nearly two decades, and knowledge of MPI is considered a prerequisite for most people aiming for a career in parallel programming. This beginner-level tutorial introduces parallel programming with MPI. It provides an overview of MPI, the features it offers, current MPI implementations, and its suitability for parallel computing environments. Along the way, the tutorial also discusses good programming practices and pitfalls to watch out for in MPI programming. Finally, it presents several application case studies, including examples from nuclear physics, combustion, and quantum chemistry, and shows how these applications use MPI.
Tutorial Goals: MPI is widely recognized as the de facto standard for parallel programming. Even though knowledge of MPI is increasingly becoming a prerequisite for researchers and developers involved in parallel programming at universities, research labs, and in industry, few institutions offer formal training in MPI. The goal of this tutorial is to give users with basic programming knowledge a grounding in MPI and equip them with the skills to get started with MPI programming. Specifically, the goals of this tutorial are:
- Making the attendees familiar with MPI programming and its associated benefits
- Providing an overview of available MPI implementations and the status of their capabilities with respect to the MPI standard
- Illustrating MPI usage models from various example application domains, including nuclear physics, computational chemistry, and combustion
Targeted Audience: This tutorial is aimed at people working in high-performance communication and I/O, storage, networking, middleware, programming models, and applications for high-end systems. Specific audiences include:
- Newcomers to the field of distributed memory programming models who are interested in familiarizing themselves with MPI
- Managers and administrators responsible for setting up next generation high-end systems and facilities in their organizations/laboratories
- Scientists, engineers, and researchers working on the design and development of next generation high-end systems including clusters, data centers, and storage centers
- System administrators of large-scale clusters
- Developers of next generation parallel middleware and applications
Advanced MPI
Abstract: The Message Passing Interface (MPI) has been the de facto standard for parallel programming for nearly two decades. However, the vast majority of applications rely only on basic MPI-1 features without taking advantage of the rich functionality the rest of the standard provides. Further, MPI-3 (released September 2012) introduced a large number of new features, including efficient one-sided communication, support for external tools, nonblocking collective operations, and improved support for topology-aware data movement. This advanced-level tutorial provides an overview of various powerful features in MPI, especially those in MPI-2 and MPI-3.
Tutorial Goals: MPI is widely recognized as the de facto standard for parallel programming. Even though knowledge of MPI is increasingly becoming a prerequisite for researchers and developers involved in parallel programming at universities, research labs, and in industry, few institutions offer formal training in MPI. The goal of this tutorial is to teach users who already know basic MPI the powerful techniques available in the various MPI versions, including the MPI-3 standard. Specifically, the goals of this tutorial are:
- Providing an overview of current large-scale applications and the data movement efficiency issues they face
- Providing an overview of the advanced powerful features available in MPI-2 and MPI-3
- Illustrating how scientists, researchers, and developers can use these features to design new applications
Targeted Audience: This tutorial is aimed at people working in high-performance communication and I/O, storage, networking, middleware, programming models, and applications for high-end systems. Specific audiences include:
- Scientists, engineers, and researchers working on the design and development of next generation high-end systems, including clusters, data centers, and storage centers
- System administrators of large-scale clusters
- Developers of next generation middleware and applications