Kaufmann and Smarr discuss the impact of high-performance computing on science. Levin and several U.S. government reports [232,233,215] describe the so-called Grand Challenge problems that have motivated recent initiatives in high-performance computing. The computational requirements in Table 1.1 are derived from the project plan for the CHAMMP climate modeling program, which has adapted a range of climate models for execution on parallel computers. DeWitt and Gray discuss developments in parallel databases. Lawson discusses industrial real-time applications of parallel computing. Worlton, Meindl, and Hennessy and Jouppi discuss trends in processor design and in sequential and parallel computer architecture. Ullman provides a succinct explanation of the complexity results.
Goldstine and von Neumann provide an early exposition of the von Neumann computer. Bailey explains how this model derived from the automation of algorithms performed previously by ``human computers.'' He argues that highly parallel computers are stimulating not only new algorithmic approaches, but also new ways of thinking about problems. Many researchers have proposed abstract machine models for parallel computing [67,99,288]. Snyder explains why the multicomputer is a good choice. Early parallel computers with a multicomputer-like architecture include the Ultracomputer and the Cosmic Cube. Athas and Seitz and Seitz discuss developments in this area. Almasi and Gottlieb and Hwang provide good introductions to parallel computer architectures and interconnection networks. Hillis describes SIMD computers. Fortune and Wyllie and JáJá discuss the PRAM model. Comer discusses LANs and WANs. Kahn describes the ARPANET, an early WAN. The chapter notes in Chapter 3 provide additional references on parallel computer architecture.
The basic abstractions used to describe parallel algorithms have been developed in the course of many years of research in operating systems, distributed computing, and parallel computation. The use of channels was first explored by Hoare in Communicating Sequential Processes (CSP) and is fundamental to the occam programming language [231,280]. However, in CSP the task and channel structure is static, and both sender and receiver block until a communication has completed. This approach has proven too restrictive for general-purpose parallel programming. The task/channel model introduced in this chapter is described by Chandy and Foster, who also discuss the conditions under which the model can guarantee deterministic execution.
Seitz and Gropp, Lusk, and Skjellum describe the message-passing model (see also the chapter notes in Chapter 8). Ben-Ari and Karp and Babb discuss shared-memory programming. Hillis and Steele and Hatcher and Quinn describe data-parallel programming; the chapter notes in Chapter 7 provide additional references. Other approaches that have generated considerable interest include Actors, concurrent logic programming, functional programming, Linda, and Unity. Bal et al. provide a useful survey of some of these approaches. Pancake and Bergmark emphasize the importance of deterministic execution in parallel computing.
Additional information on parallel applications, parallel computer architecture, and parallel programming models is available online.
© Copyright 1995 by Ian Foster