The Importance of Non-Data-Communication Overheads in MPI

Title: The Importance of Non-Data-Communication Overheads in MPI
Publication Type: Journal Article
Year of Publication: 2010
Authors: Balaji, P., Chan, A., Gropp, W. D., Thakur, R., Lusk, E. L.
Journal: International Journal of High Performance Computing Applications
Volume: 24
Issue: 1
Pagination: 5-15
Date Published: 01/2010
Abstract

With processor speeds no longer doubling every 18-24 months owing to the exponential increase in power consumption and heat dissipation, modern HEC systems tend to rely less on the performance of single processing units. Instead, they achieve high performance through the parallelism of a massive number of low-frequency/low-power processing cores. Using such low-frequency cores, however, puts a premium on the end-host pre- and post-communication processing required within communication stacks, such as the Message Passing Interface (MPI) implementation. Similarly, small amounts of serialization within the communication stack that were acceptable on small/medium systems can be brutal on massively parallel systems. Thus, in this paper, we study the different non-data-communication overheads within the MPI implementation on the IBM Blue Gene/P system. Specifically, we analyze various aspects of MPI, including the overhead of the MPI stack itself, the overhead of allocating and queueing requests, queue searches within the MPI stack, multi-request operations, and various others. Our experiments, which scale up to 131,072 cores of the largest Blue Gene/P system in the world (80% of the total system size), reveal several insights into overheads in the MPI stack that were previously not considered significant but can have a substantial impact on such massive systems.
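To make concrete the kind of overhead the paper measures, below is a minimal sketch of a zero-byte ping-pong microbenchmark in C: with no payload to transfer, the measured latency is dominated by per-message, non-data-communication processing in the MPI stack (request allocation, queue searches, message matching). The iteration count and timing approach are illustrative assumptions, not the paper's actual benchmark code.

/* Zero-byte ping-pong between ranks 0 and 1. Because no data moves,
 * the reported one-way latency approximates per-message MPI stack
 * overhead. ITERATIONS is an arbitrary illustrative choice. */
#include <mpi.h>
#include <stdio.h>

#define ITERATIONS 10000

int main(int argc, char **argv)
{
    int rank;
    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    MPI_Barrier(MPI_COMM_WORLD);
    double start = MPI_Wtime();
    for (int i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            MPI_Send(NULL, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(NULL, 0, MPI_BYTE, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(NULL, 0, MPI_BYTE, 0, 0, MPI_COMM_WORLD);
        }
    }
    double elapsed = MPI_Wtime() - start;

    /* Each iteration is one round trip, so divide by 2 * ITERATIONS. */
    if (rank == 0)
        printf("one-way latency: %.3f us\n",
               elapsed / (2.0 * ITERATIONS) * 1e6);

    MPI_Finalize();
    return 0;
}

On low-frequency cores such as Blue Gene/P's, the fixed per-message cost this kind of loop exposes is exactly the class of overhead the paper argues becomes significant at scale.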

URL: http://hpc.sagepub.com/content/24/1.toc
PDF: http://www.mcs.anl.gov/papers/P1699.pdf