Processing MPI Derived Datatypes on Noncontiguous GPU-Resident Data

Title: Processing MPI Derived Datatypes on Noncontiguous GPU-Resident Data
Publication Type: Journal Article
Year of Publication: 2013
Authors: Jenkins, J, Dinan, J, Balaji, P, Peterka, T, Samatova, NF, Thakur, R
Journal: IEEE Transactions on Parallel and Distributed Systems
Other Numbers: ANL/MCS-P5008-0813
Abstract

Driven by the goal of efficient and generic communication of noncontiguous data layouts in GPU memory, for which no solutions currently exist, we present a parallel, noncontiguous data-processing methodology based on the MPI datatypes specification. Our processing algorithm utilizes a kernel on the GPU to pack arbitrary noncontiguous GPU data by enriching the datatypes encoding to expose a fine-grained, data-point level of parallelism. Additionally, the typically tree-based datatype encoding is preprocessed to enable efficient, cached access across GPU threads.
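To illustrate the data-point level of parallelism described above, the following sketch packs a simple strided (vector-like) layout with one GPU thread per element. This is an illustrative assumption rather than the paper's implementation; the kernel name, element type, and the parameters count, blocklen, and stride are hypothetical.

```cuda
// Illustrative sketch only: a flattened "vector" layout (count blocks of
// blocklen contiguous elements, with consecutive blocks stride elements
// apart) packed by assigning one GPU thread per data point. Names and
// structure are assumptions, not the authors' code.
#include <cuda_runtime.h>

__global__ void pack_vector(const double *src, double *dst,
                            int count, int blocklen, int stride)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;   // global element index
    int total = count * blocklen;
    if (i < total) {
        int block  = i / blocklen;                   // which block
        int offset = i % blocklen;                   // element within block
        dst[i] = src[block * stride + offset];       // gather into packed buffer
    }
}
```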

Using CUDA, we show that the computational method outperforms DMA-based alternatives for several common data layouts as well as more complex data layouts for which reasonable DMA-based processing does not exist. Our method incurs low overhead for data layouts that closely match best-case DMA usage or that can be processed by layout-specific implementations. We additionally investigate usage scenarios for data packing that incur resource contention, identifying potential pitfalls for various packing strategies. We also demonstrate the efficacy of kernel-based packing in various communication scenarios, showing multifold improvement in point-to-point communication and evaluating packing within the context of the SHOC stencil benchmark and HACC mesh analysis.
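For contrast, a DMA-based alternative of the kind the abstract compares against can handle the same strided layout without launching a kernel, for example via cudaMemcpy2D. The sketch below is an assumption for illustration, reusing the hypothetical count, blocklen, and stride parameters from the kernel sketch above.

```cuda
// Illustrative sketch only: packing the same strided layout with a single
// DMA-style 2D copy instead of a packing kernel.
#include <cuda_runtime.h>

void pack_vector_dma(const double *d_src, double *d_dst,
                     int count, int blocklen, int stride)
{
    // Each "row" is one contiguous block; rows are stride elements apart in
    // the source and laid out back-to-back in the packed destination.
    cudaMemcpy2D(d_dst, blocklen * sizeof(double),
                 d_src, stride * sizeof(double),
                 blocklen * sizeof(double), count,
                 cudaMemcpyDeviceToDevice);
}
```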


URL: http://www.mcs.anl.gov/papers/P5008-0813_1.pdf
PDF: http://www.mcs.anl.gov/papers/P5008-0813_2.pdf