%0 Journal Article
%J ACM Transactions on Parallel Computing
%D 2020
%T Load-Balancing Sparse Matrix Vector Product Kernels on GPUs
%A Hartwig Anzt
%A Terry Cojean
%A Chen Yen-Chen
%A Jack Dongarra
%A Goran Flegar
%A Pratik Nayak
%A Stanimire Tomov
%A Yuhsiang M. Tsai
%A Weichung Wang
%X Efficient processing of irregular matrices on Single Instruction, Multiple Data (SIMD)-type architectures is a persistent challenge. Resolving it requires innovations in the development of data formats, computational techniques, and implementations that strike a balance between thread divergence, which is inherent to irregular matrices, and padding, which alleviates the performance-detrimental thread divergence but introduces artificial overhead. To this end, in this article, we address the challenge of designing high-performance sparse matrix-vector product (SpMV) kernels for NVIDIA Graphics Processing Units (GPUs). We present a compressed sparse row (CSR) format suitable for unbalanced matrices. We also provide a load-balancing kernel for the coordinate (COO) matrix format and extend it to a hybrid algorithm that stores part of the matrix in the SIMD-friendly ELLPACK (ELL) format. The ratio between the ELL and COO parts is determined using a theoretical analysis of the nonzeros-per-row distribution. For the more than 2,800 test matrices available in the SuiteSparse Matrix Collection, we compare the performance against SpMV kernels provided by NVIDIA’s cuSPARSE library and against a heavily tuned sliced ELL (SELL-P) kernel that prevents unnecessary padding by treating the irregular matrices as a combination of matrix blocks stored in ELL format.
%B ACM Transactions on Parallel Computing
%V 7
%8 2020-03
%G eng
%N 1
%R https://doi.org/10.1145/3380930