In the message-passing library approach to parallel programming, a collection of processes executes programs written in a standard sequential language augmented with calls to a library of functions for sending and receiving messages. In this chapter, we introduce the key concepts of message-passing programming and show how designs developed using the techniques discussed in Part I can be adapted for message-passing execution. For concreteness, we base our presentation on the Message Passing Interface (MPI), the de facto message-passing standard. However, the basic techniques discussed are applicable to other such systems, including p4, PVM, Express, and PARMACS.
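The flavor of this approach can be seen in a minimal sketch: an ordinary sequential C program in which one process sends a message that another receives. The MPI functions used here (`MPI_Init`, `MPI_Comm_rank`, `MPI_Send`, `MPI_Recv`, `MPI_Finalize`) are genuine MPI calls introduced later in the chapter; the payload value is arbitrary.

```c
/* Sketch of the message-passing model: process 0 sends an integer
   to process 1, which receives and prints it. Compile with an MPI
   wrapper (e.g., mpicc) and run with two processes (mpirun -np 2). */
#include <stdio.h>
#include <mpi.h>

int main(int argc, char *argv[]) {
    int rank, value;
    MPI_Status status;

    MPI_Init(&argc, &argv);                /* enter the MPI environment */
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);  /* determine this process's id */

    if (rank == 0) {
        value = 42;                        /* arbitrary example payload */
        /* send 1 int to process 1, message tag 0 */
        MPI_Send(&value, 1, MPI_INT, 1, 0, MPI_COMM_WORLD);
    } else if (rank == 1) {
        /* receive 1 int from process 0, message tag 0 */
        MPI_Recv(&value, 1, MPI_INT, 0, 0, MPI_COMM_WORLD, &status);
        printf("process 1 received %d\n", value);
    }

    MPI_Finalize();                        /* leave the MPI environment */
    return 0;
}
```

Note that all processes execute the same program text; the call to `MPI_Comm_rank` is what lets each process discover its identity and act differently, a pattern that recurs throughout this chapter.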
MPI is a complex system. In its entirety, it comprises 129 functions, many of which have numerous parameters or variants. As our goal is to convey the essential concepts of message-passing programming, not to provide a comprehensive MPI reference manual, we focus here on a set of 24 functions and ignore some of the more esoteric features. These 24 functions provide more than adequate support for a wide range of applications.
After studying this chapter, you should understand the essential features of the message-passing programming model and its realization in MPI, and you should be able to write simple MPI programs. In particular, you should understand how MPI implements local, global, and asynchronous communications. You should also be familiar with the mechanisms that MPI provides to support the development of modular programs and the sequential and parallel composition of program components.
© Copyright 1995 by Ian Foster