On the performance and energy efficiency of sparse linear algebra on GPUs

Title: On the performance and energy efficiency of sparse linear algebra on GPUs
Publication Type: Journal Article
Year of Publication: 2016
Authors: Anzt, H., S. Tomov, and J. Dongarra
Journal: International Journal of High Performance Computing Applications
Date Published: 2016-10
Abstract: In this paper we unveil some performance and energy efficiency frontiers for sparse computations on GPU-based supercomputers. We compare the resource efficiency of different sparse matrix–vector products (SpMV) taken from libraries such as cuSPARSE and MAGMA for GPU and Intel's MKL for multicore CPUs, and develop a GPU sparse matrix–matrix product (SpMM) implementation that handles the simultaneous multiplication of a sparse matrix with a set of vectors in block-wise fashion. While a typical sparse computation such as the SpMV reaches only a fraction of the peak of current GPUs, we show that the SpMM succeeds in exceeding the memory-bound limitations of the SpMV. We integrate this kernel into a GPU-accelerated Locally Optimal Block Preconditioned Conjugate Gradient (LOBPCG) eigensolver. LOBPCG is chosen as a benchmark algorithm for this study as it combines an interesting mix of sparse and dense linear algebra operations that is typical for complex simulation applications, and allows for hardware-aware optimizations. In a detailed analysis we compare the performance and energy efficiency against a multi-threaded CPU counterpart. The reported performance and energy efficiency results are indicative of sparse computations on supercomputers.
URL: http://hpc.sagepub.com/content/early/2016/10/05/1094342016672081.abstract
DOI: 10.1177/1094342016672081
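The block-wise SpMM idea from the abstract can be illustrated with a minimal sketch. This is not the paper's GPU kernel; it uses SciPy's CSR format as a stand-in to show why multiplying a sparse matrix by a block of vectors at once beats repeated SpMV: the matrix's nonzeros are read from memory once per block instead of once per vector, which is what lets SpMM exceed the memory-bound SpMV limit.

```python
import numpy as np
import scipy.sparse as sp

rng = np.random.default_rng(0)
n, k = 1000, 8  # matrix dimension and block width (number of simultaneous vectors)

# Random sparse matrix in CSR format and a dense block of k vectors.
A = sp.random(n, n, density=0.01, format="csr", random_state=0)
X = rng.standard_normal((n, k))

# SpMV approach: k separate products; A's nonzeros are streamed k times.
Y_spmv = np.column_stack([A @ X[:, j] for j in range(k)])

# SpMM approach: one product with the whole block; A's nonzeros are
# streamed once and reused across all k vectors.
Y_spmm = A @ X

assert np.allclose(Y_spmv, Y_spmm)  # same result, fewer passes over A
```

The arithmetic is identical either way; the difference is memory traffic, which is the bottleneck the paper targets for the LOBPCG eigensolver's blocked operations.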