MPI Derived Datatypes Processing on Noncontiguous GPU-Resident Data

Title: MPI Derived Datatypes Processing on Noncontiguous GPU-Resident Data
Publication Type: Conference Paper
Year of Publication: 2013
Authors: Jenkins, J, Dinan, J, Balaji, P, Peterka, T, Samatova, NF, Thakur, R
Other Numbers: ANL/MCS-P4042-0313
Abstract

Driven by the goal of efficient and generic communication of noncontiguous data layouts in GPU memory, for which no solutions currently exist, we present a parallel, noncontiguous data-processing methodology based on the MPI datatype specification. Our processing algorithm uses a kernel on the GPU to pack arbitrary noncontiguous GPU data, enriching the datatype encoding to expose a fine-grained, data-point level of parallelism. Additionally, the typically tree-based datatype encoding is preprocessed to enable efficient, cached access across GPU threads.
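
To illustrate the kind of fine-grained, per-data-point packing the abstract describes, the following CUDA sketch packs a simple strided ("vector") layout into a contiguous buffer with one thread per element. The kernel and its parameter names are hypothetical and stand in for the paper's more general datatype-driven kernel; they do not reproduce the actual implementation.

    // Illustrative sketch only (not the paper's implementation): pack a strided
    // "vector" layout (count blocks of blocklen elements, consecutive blocks
    // separated by stride elements) into a contiguous buffer, one thread per
    // data point.
    __global__ void pack_vector(const double *src, double *dst,
                                int count, int blocklen, int stride)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        int total = count * blocklen;
        if (i < total) {
            int block  = i / blocklen;   // which block this element falls in
            int offset = i % blocklen;   // position within that block
            dst[i] = src[block * stride + offset];
        }
    }

A launch such as pack_vector<<<(count * blocklen + 255) / 256, 256>>>(src, dst, count, blocklen, stride) assigns one thread to each data point, which is the granularity of parallelism the abstract refers to.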

Using CUDA, we show that the computational method outperforms DMA-based alternatives for several common data layouts as well as for more complex data layouts for which no reasonable DMA-based processing exists. Our method incurs low overhead for data layouts that closely match best-case DMA usage or that can be processed by layout-specific implementations. We additionally investigate data-packing usage scenarios that incur resource contention, identifying potential pitfalls for various packing strategies. We also demonstrate the efficacy of kernel-based packing in various communication scenarios, showing multifold improvement in point-to-point communication and evaluating packing within the context of the SHOC stencil benchmark and HACC mesh analysis.
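
For context on the communication scenarios mentioned above, the following C/CUDA sketch shows how a noncontiguous GPU-resident layout (here, one column of a row-major matrix) can be described with an MPI derived datatype and used in point-to-point communication. It assumes a CUDA-aware MPI implementation that accepts device pointers, and the variable names are hypothetical; it is not drawn from the paper's evaluation.

    /* Illustrative sketch only: describe one column of an N x N row-major
     * matrix in GPU memory with an MPI derived datatype and send it between
     * two ranks. Run with at least two MPI processes. */
    #include <mpi.h>
    #include <cuda_runtime.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, N = 1024;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        double *d_matrix;
        cudaMalloc((void **)&d_matrix, N * N * sizeof(double));

        /* One column: N blocks of 1 element, successive blocks N elements apart. */
        MPI_Datatype column;
        MPI_Type_vector(N, 1, N, MPI_DOUBLE, &column);
        MPI_Type_commit(&column);

        if (rank == 0)
            MPI_Send(d_matrix, 1, column, 1, 0, MPI_COMM_WORLD);
        else if (rank == 1)
            MPI_Recv(d_matrix, 1, column, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);

        MPI_Type_free(&column);
        cudaFree(d_matrix);
        MPI_Finalize();
        return 0;
    }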

PDF: http://www.mcs.anl.gov/papers/P4042-0313.pdf