[DBPP] previous next up contents index [Search]
Next: 5.8 Mapping Up: 5 Compositional C++ Previous: 5.6 Asynchronous Communication

5.7 Determinism

 

We noted in Section 1.3.1 that determinism can greatly simplify program development. CC++ does not provide any guarantees of deterministic execution: indeed, the basic execution model is highly nondeterministic, allowing as it does the interleaved execution of multiple threads in a single address space. Nevertheless, there are simple rules that, if followed, allow us to avoid unwanted nondeterministic interactions. In particular, a CC++ program is easily shown to be deterministic if it uses a task-based concurrency model (one thread per processor object) and if tasks interact only by using the channel library of Program 5.3, with one sender and one receiver per channel.

While a task/channel model ensures determinism, there are also circumstances in which it is advantageous to use CC++ constructs in more flexible ways. For example:

  1. Concurrent threads provide a mechanism for overlapping computation and communication. When one thread is suspended waiting for a communication, another thread can be executing. For example, the following code can perform computation while waiting for the remote datum, value.

    par {
       value = pobj->get_remote_value();  // RPC: fetch the remote datum
       perform_computation();             // executes while the RPC is pending
    }
    use_remote_value(value);              // safe: the par block has completed
    

  2. RPCs that read and write data structures in other processor objects can be used to implement a variety of asynchronous communication mechanisms; see, for example, Section 5.12.

  3. On a shared-memory computer, threads created in the same processor object can execute in parallel (on different processors), communicating by reading and writing shared data rather than sending and receiving data. This shared-memory programming model (Section 1.3.2) can improve performance relative to channel-based communication by reducing data movement and copying.

These more general forms of concurrency and communication introduce the possibility of complex, nondeterministic interactions between concurrently executing threads. However, the risk of nondeterministic interactions can be reduced substantially by avoiding the use of global variables, by declaring shared data structures with the sync attribute, and by ensuring that accesses to nonsync shared data structures occur only within atomic functions.




© Copyright 1995 by Ian Foster