In previous chapters, we have focused on the problem of deriving efficient parallel algorithms for individual program components, such as search and finite difference computation. Yet complete programs may need to incorporate multiple parallel algorithms, each operating on different data structures and each requiring different partitioning, communication, and mapping strategies for efficient execution.
Experience shows that the complexity of large programs can be controlled by applying modular design techniques. The key idea is to encapsulate complex or changeable aspects of a design inside separate program components, or modules, with well-defined interfaces indicating how each module interacts with its environment. Complete programs are developed by plugging together, or composing, these modules. Modular design can increase reliability and reduce costs by making it easier to build programs, adapt programs to changing requirements, and reuse components in new programs.
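To make the idea concrete, the following is a minimal sketch (in Python; the module and function names are illustrative, not taken from the text) of components hidden behind well-defined interfaces and composed into a complete program:

```python
# A sketch of modular composition: each component exposes a small,
# well-defined interface and hides its internal strategy. All names
# here are hypothetical, chosen only to illustrate the principle.

def partition(data, num_workers):
    """Partitioning module: splits a data set into roughly equal chunks.
    Callers depend only on this interface, not on the chunking strategy,
    so the strategy can change without affecting the rest of the program."""
    size = max(1, len(data) // num_workers)
    return [data[i:i + size] for i in range(0, len(data), size)]

def compute(chunk):
    """Computation module: processes one chunk independently.
    A stand-in reduction (a sum) represents the real algorithm."""
    return sum(chunk)

def combine(partials):
    """Combination module: merges per-chunk results into a final answer."""
    return sum(partials)

def run(data, num_workers):
    """The complete program, obtained by composing the three modules.
    Replacing any one module (say, a smarter partitioner) leaves the
    others untouched, as long as the interface is preserved."""
    return combine(compute(c) for c in partition(data, num_workers))
```

In a real parallel program the `compute` calls would execute concurrently on different processors; the point of the sketch is only that each concern (partitioning, computation, combination) sits behind its own interface and can be developed, tested, and replaced independently.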
Our goal in this chapter is to introduce some of the design issues that arise when developing large parallel programs. After studying this chapter, you should understand the basic principles and benefits of modular design. You should be familiar with the techniques used to apply modular design in parallel programs: what needs to be encapsulated in parallel program components, and the various ways in which components can be composed. You should also understand the performance issues that arise when composing parallel program components.
© Copyright 1995 by Ian Foster