"Optimization of Collective Communication Operations in MPICH"
R. Thakur, R. Rabenseifner, and W. Gropp
We describe our work on optimizing the collective communication operations in MPICH for clusters connected by switched networks. For each collective operation, we use multiple algorithms depending on the message size, with the goal of minimizing latency for short messages and minimizing bandwidth use for long messages. Although we have implemented new algorithms for all MPI collective operations, because of limited space we describe only the algorithms for allgather, broadcast, all-to-all, reduce-scatter, reduce, and allreduce. Performance results on a Myrinet-connected Linux cluster and an IBM SP indicate that, in all cases, the new algorithms significantly outperform the old algorithms used in MPICH on the Myrinet cluster, and, in many cases, they outperform the algorithms used in IBM's MPI on the SP. We also explore in further detail the optimization of two of the most commonly used collective operations, allreduce and reduce, particularly for long messages and non-power-of-two numbers of processes. The optimized algorithms for these operations perform several times better than the native algorithms on a Myrinet cluster, IBM SP, and Cray T3E. This work demonstrates that to achieve the best performance for a collective communication operation, we need to use a number of different algorithms and select the right algorithm for a particular message size and number of processes.
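The selection strategy the abstract describes can be illustrated with the standard latency-bandwidth (alpha-beta) cost model, in which sending an n-byte message costs alpha + n*beta. The sketch below compares the commonly cited cost estimates for binomial-tree broadcast and for the scatter-plus-allgather (van de Geijn) broadcast and picks the cheaper one; the network parameters and the `pick_bcast` helper are illustrative assumptions, not measured values or the actual MPICH selection code.

```python
# Toy algorithm selector under the alpha-beta cost model:
# a point-to-point message of n bytes costs alpha + n*beta.
from math import ceil, log2

def binomial_bcast_cost(n, p, alpha, beta):
    # ceil(lg p) steps, each forwarding the full n-byte message
    return ceil(log2(p)) * (alpha + n * beta)

def scatter_allgather_bcast_cost(n, p, alpha, beta):
    # binomial scatter (lg p steps) followed by a ring allgather
    # (p - 1 steps), each step moving roughly n/p bytes
    return (ceil(log2(p)) + p - 1) * alpha + 2 * ((p - 1) / p) * n * beta

def pick_bcast(n, p, alpha, beta):
    """Return the cheaper broadcast algorithm for this message size."""
    if binomial_bcast_cost(n, p, alpha, beta) <= \
            scatter_allgather_bcast_cost(n, p, alpha, beta):
        return "binomial"
    return "scatter+allgather"

# Illustrative network parameters: 10 us latency, 1 GB/s bandwidth, 64 procs.
alpha, beta, p = 10e-6, 1e-9, 64
print(pick_bcast(1024, p, alpha, beta))       # short message -> "binomial"
print(pick_bcast(1_000_000, p, alpha, beta))  # long message  -> "scatter+allgather"
```

With these parameters the crossover falls in the hundred-kilobyte range: latency terms dominate for short messages, so the low-step-count binomial tree wins, while for long messages the scatter-plus-allgather variant wins because it moves far fewer total bytes per process.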