Tutorial on MPI: The Message-Passing Interface
William Gropp

Mathematics and Computer Science Division
Argonne National Laboratory
Argonne, IL 60439


Contents
  • Course Outline
  • Background
  • Parallel Computing
  • Types of parallel computing
  • Communicating with other processes
  • Cooperative operations
  • One-sided operations
  • Class Example
  • Hardware models
  • What is MPI?
  • Motivation for a New Design
  • Motivation (cont.)
  • The MPI Process
  • Who Designed MPI?
  • Features of MPI
  • Features of MPI (cont.)
  • Features not in MPI
  • Is MPI Large or Small?
  • Where to use MPI?
  • Why learn MPI?
  • Getting started
  • Writing MPI programs
  • Commentary
  • Compiling and linking
  • Special compilation commands
  • Using Makefiles
  • Sample Makefile.in
  • Sample Makefile.in (cont.)
  • Running MPI programs
  • Finding out about the environment
  • A simple program
  • Caveats
  • Exercise - Getting Started
  • Sending and Receiving messages
  • Current Message-Passing
  • The Buffer
  • Generalizing the Buffer Description
  • Generalizing the Type
  • Sample Program using Library Calls
  • Correct Execution of Library Calls
  • Incorrect Execution of Library Calls
  • Correct Execution of Library Calls with Pending Communication
  • Incorrect Execution of Library Calls with Pending Communication
  • Solution to the type problem
  • Delimiting Scope of Communication
  • Generalizing the Process Identifier
  • MPI Basic Send/Receive
  • Getting information about a message
  • Simple Fortran example
  • Simple Fortran example (cont.)
  • Six Function MPI
  • A taste of things to come
  • Broadcast and Reduction
  • Fortran example: PI
  • Fortran example (cont.)
  • C example: PI
  • C example (cont.)
  • Exercise - PI
  • Exercise - Ring
  • Topologies
  • Cartesian Topologies
  • Defining a Cartesian Topology
  • Finding neighbors
  • Who am I?
  • Partitioning
  • Other Topology Routines
  • Why are these routines in MPI?
  • The periods argument
  • Periodic Grids
  • Nonperiodic Grids
  • Collective Communications in MPI
  • Synchronization
  • Available Collective Patterns
  • Available Collective Computation Patterns
  • MPI Collective Routines
  • Built-in Collective Computation Operations
  • Defining Your Own Collective Operations
  • Sample user function
  • Defining groups
  • Subdividing a communicator
  • Subdividing (cont.)
  • Manipulating Groups
  • Creating Groups
  • Buffering issues
  • Better buffering
  • Blocking and Non-Blocking communication
  • Some Solutions to the "Unsafe" Problem
  • MPI's Non-Blocking Operations
  • Multiple completions
  • Fairness
  • Fairness in message-passing
  • Providing Fairness
  • Providing Fairness (Fortran)
  • Exercise - Fairness
  • More on nonblocking communication
  • Communication Modes
  • Buffered Send
  • Reusing the same buffer
  • Other Point-to-Point Features
  • Datatypes and Heterogeneity
  • Datatypes in MPI
  • Basic Datatypes (Fortran)
  • Basic Datatypes (C)
  • Vectors
  • Structures
  • Example: Structures
  • Strides
  • Vectors revisited
  • Structures revisited
  • Interleaving data
  • An interleaved datatype
  • Scattering a Matrix
  • Exercises - datatypes
  • Tools for writing libraries
  • Private communicators
  • Attributes
  • What is an attribute?
  • Examples of using attributes
  • Sequential Sections
  • Sequential Sections II
  • Sequential Sections III
  • Comments on sequential sections
  • Example: Managing tags
  • Caching tags on communicator
  • Caching tags on communicator II
  • Caching tags on communicator III
  • Caching tags on communicator IV
  • Caching tags on communicator V
  • Caching tags on communicator VI
  • Commentary
  • Exercise - Writing libraries
  • MPI Objects
  • The MPI Objects
  • When should objects be freed?
  • Reference counting
  • Why reference counts
  • Tools for evaluating programs
  • The MPI Timer
  • Profiling
  • Writing profiling routines
  • Another profiling example
  • Another profiling example (cont.)
  • Generating and viewing log files
  • Generating a log file
  • Connecting several programs together
  • Sending messages between different programs
  • Exchanging data between programs
  • Collective operations
  • Final Comments
  • Sharable MPI Resources
  • Sharable MPI Resources, continued
  • MPI-2
  • Summary