Accelerating Restarted GMRES with Mixed Precision Arithmetic

Title: Accelerating Restarted GMRES with Mixed Precision Arithmetic
Publication Type: Journal Article
Year of Publication: 2021
Authors: Lindquist, N., P. Luszczek, and J. Dongarra
Journal: IEEE Transactions on Parallel and Distributed Systems
Keywords: Convergence, Error correction, iterative methods, Kernel, Lifting equipment, linear systems, Stability analysis
Abstract:

The generalized minimum residual method (GMRES) is a commonly used iterative Krylov solver for sparse, non-symmetric systems of linear equations. Like other iterative solvers, its runtime is dominated by data movement. To reduce this cost, we propose running GMRES in reduced precision while keeping key operations in full precision. Additionally, we provide theoretical results linking the convergence of finite-precision GMRES using classical Gram-Schmidt with reorthogonalization (CGSR) to that of its exact-arithmetic counterpart, which helps justify the convergence of this method to double-precision accuracy. We tested the mixed-precision approach with a variety of matrices and preconditioners on a GPU-accelerated node. Excluding the incomplete LU factorization without fill-in (ILU(0)) preconditioner, we achieved average speedups ranging from 8 to 61 percent relative to comparable double-precision implementations, with the simpler preconditioners achieving the higher speedups.
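The core idea in the abstract — inner GMRES iterations in reduced precision, with residual computation and solution updates in full precision — can be sketched in NumPy. This is a hedged illustration of the general technique, not the paper's implementation: the function name, parameters, and tolerances are assumptions, the inner Arnoldi process runs in float32 with CGSR orthogonalization, and the outer restart loop refines in float64.

```python
import numpy as np

def mixed_precision_gmres(A, b, restart=30, max_restarts=50, tol=1e-10):
    """Restarted GMRES sketch: inner Arnoldi in float32, residual and
    solution update in float64 (illustrative, not the paper's code)."""
    n = len(b)
    x = np.zeros(n)
    b_norm = np.linalg.norm(b)
    for _ in range(max_restarts):
        r = b - A @ x                       # full-precision residual
        beta = np.linalg.norm(r)
        if beta <= tol * b_norm:
            break
        # Inner Krylov solve in reduced precision.
        A32 = A.astype(np.float32)
        V = np.zeros((n, restart + 1), dtype=np.float32)
        H = np.zeros((restart + 1, restart), dtype=np.float32)
        V[:, 0] = (r / beta).astype(np.float32)
        m = restart
        for j in range(restart):
            w = A32 @ V[:, j]
            # Classical Gram-Schmidt with reorthogonalization (CGSR):
            # two orthogonalization passes against the current basis.
            for _ in range(2):
                h = V[:, :j + 1].T @ w
                w -= V[:, :j + 1] @ h
                H[:j + 1, j] += h
            H[j + 1, j] = np.linalg.norm(w)
            if H[j + 1, j] < 1e-7:          # breakdown / early convergence
                m = j + 1
                break
            V[:, j + 1] = w / H[j + 1, j]
        # Small least-squares problem min_y ||beta*e1 - H y||, in double.
        e1 = np.zeros(m + 1)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:m + 1, :m].astype(np.float64), e1,
                                rcond=None)
        x = x + V[:, :m].astype(np.float64) @ y  # full-precision update
    return x
```

Each restart acts like a step of iterative refinement: the float32 inner solve only needs to reduce the residual by a modest factor, while the float64 residual and update let the outer iteration converge to double-precision accuracy, which is the behavior the paper's analysis justifies.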

DOI: 10.1109/TPDS.2021.3090757