%0 Conference Paper %B 25th International Symposium on High-Performance Parallel and Distributed Computing (HPDC'16) %D 2016 %T GPU-Aware Non-contiguous Data Movement In Open MPI %A Wei Wu %A George Bosilca %A Rolf vandeVaart %A Sylvain Jeaugey %A Jack Dongarra %K datatype %K gpu %K hybrid architecture %K MPI %K non-contiguous data %X

Owing to their higher parallel density and power efficiency, GPUs have become increasingly popular in scientific applications. Many of these applications are based on the ubiquitous Message Passing Interface (MPI) programming paradigm and take advantage of non-contiguous memory layouts to exchange data between processes. However, support for efficient non-contiguous data movement of GPU-resident data is still in its infancy, which negatively impacts overall application performance.

To address this shortcoming, we present a solution that takes advantage of the inherent parallelism in the datatype packing and unpacking operations. We developed a close integration between Open MPI's stack-based datatype engine, NVIDIA's Unified Memory Architecture, and GPUDirect capabilities. In this design the datatype packing and unpacking operations are offloaded onto the GPU and handled by specialized GPU kernels, while the CPU remains the driver for data movements between nodes. By incorporating our design into the Open MPI library, we have shown significantly better performance for non-contiguous GPU-resident data transfers on both shared and distributed memory machines.
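From the application's point of view, the design above keeps the standard MPI derived-datatype interface: a non-contiguous layout is described once and applied directly to a GPU-resident buffer. The following minimal sketch illustrates that usage pattern; it is not the paper's implementation, and it assumes a CUDA-aware Open MPI build, at least two ranks, and illustrative matrix dimensions.

/* Sketch: send one column of a 1024x1024 row-major matrix of doubles
   that lives in GPU memory. With CUDA-aware Open MPI, the device
   pointer is passed to MPI directly and the library's datatype engine
   performs the non-contiguous pack/unpack. */
#include <mpi.h>
#include <cuda_runtime.h>

int main(int argc, char **argv) {
    MPI_Init(&argc, &argv);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* A column of a row-major 1024x1024 matrix: 1024 blocks of one
       double, separated by a stride of 1024 elements. */
    MPI_Datatype column;
    MPI_Type_vector(1024, 1, 1024, MPI_DOUBLE, &column);
    MPI_Type_commit(&column);

    double *gpu_matrix;  /* device memory, not host memory */
    cudaMalloc((void **)&gpu_matrix, 1024 * 1024 * sizeof(double));

    if (rank == 0)
        MPI_Send(gpu_matrix, 1, column, 1, 0, MPI_COMM_WORLD);
    else if (rank == 1)
        MPI_Recv(gpu_matrix, 1, column, 0, 0, MPI_COMM_WORLD,
                 MPI_STATUS_IGNORE);

    cudaFree(gpu_matrix);
    MPI_Type_free(&column);
    MPI_Finalize();
    return 0;
}

With the design described in the paper, the pack/unpack implied by the MPI_Type_vector layout is executed by GPU kernels rather than by staging the data through host memory.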

%I ACM %C Kyoto, Japan %8 2016-06 %G eng %R http://dx.doi.org/10.1145/2907294.2907317