%0 Journal Article
%J Journal of Computational Science
%D 2018
%T Accelerating the SVD Bi-Diagonalization of a Batch of Small Matrices using GPUs
%A Tingxing Dong
%A Azzam Haidar
%A Stanimire Tomov
%A Jack Dongarra
%K Batched
%K Eigenvalue and singular value problems
%K hardware accelerators
%K numerical linear algebra
%K Two-sided factorization algorithms
%X The acceleration of many small-sized linear algebra problems has become extremely challenging for current many-core architectures, and in particular GPUs. Standard interfaces have been proposed for some of these problems, called batched problems, so that they can be targeted for optimization and used in a standard way in applications, which call them directly from highly optimized, standard numerical libraries like (batched) BLAS and LAPACK. While most of the developments have been for one-sided factorizations and solvers, many important applications – from big data analytics to information retrieval, and low-rank approximations for solvers and preconditioners – require two-sided factorizations, most notably the SVD factorization. To address these needs and the related parallelization challenges, we developed a number of new batched computing techniques and designed batched Basic Linear Algebra Subroutines (BLAS) routines, in particular the Level-2 BLAS GEMV and the Level-3 BLAS GEMM routines, to solve them. We propose a device-functions-based methodology and big-tile setting techniques in our batched BLAS design. The different optimization techniques result in many software versions that must be tuned, for which we adopt an auto-tuning strategy to automatically derive the optimized instances of the routines. We illustrate our batched BLAS approach by progressively optimizing the batched SVD bi-diagonalization on GPUs. The progression is illustrated on an NVIDIA K40c GPU, and also ported to and presented on an AMD Fiji Nano GPU using AMD's Heterogeneous-Compute Interface for Portability (HIP) C++ runtime API. We demonstrate achieving 80% of the theoretically achievable peak performance for the overall algorithm, and significant acceleration of the needed Level-2 BLAS GEMV and Level-3 BLAS GEMM routines compared to vendor-optimized libraries on GPUs and multicore CPUs. The optimization techniques in this paper are applicable to other two-sided factorizations as well.
%B Journal of Computational Science
%V 26
%P 237–245
%8 05-2018
%G eng
%R 10.1016/j.jocs.2018.01.007
%0 Conference Paper
%B International Conference on Computational Science (ICCS 2017)
%D 2017
%T Optimizing the SVD Bidiagonalization Process for a Batch of Small Matrices
%A Tingxing Dong
%A Azzam Haidar
%A Stanimire Tomov
%A Jack Dongarra
%X A challenging class of problems arising in many GPU applications, called batched problems, involves linear algebra operations on many small-sized matrices. We designed batched BLAS (Basic Linear Algebra Subroutines) routines, and in particular the Level-2 BLAS GEMV and the Level-3 BLAS GEMM routines, to solve them. We proposed device functions and big-tile settings in our batched BLAS design. We adopted auto-tuning to optimize different instances of GEMV routines. We illustrated our batched BLAS approach to optimize batched bi-diagonalization progressively on a K40c GPU. The optimization techniques in this paper are applicable to the other two-sided factorizations as well.
%B International Conference on Computational Science (ICCS 2017)
%I Procedia Computer Science
%C Zurich, Switzerland
%8 06-2017
%G eng
%U http://www.sciencedirect.com/science/article/pii/S1877050917308645
%0 Journal Article
%J International Journal of High Performance Computing Applications
%D 2015
%T Batched matrix computations on hardware accelerators based on GPUs
%A Azzam Haidar
%A Tingxing Dong
%A Piotr Luszczek
%A Stanimire Tomov
%A Jack Dongarra
%K batched factorization
%K hardware accelerators
%K numerical linear algebra
%K numerical software libraries
%K one-sided factorization algorithms
%X Scientific applications require solvers that work on many small-size problems that are independent from each other. At the same time, high-end hardware evolves rapidly and becomes ever more throughput-oriented, and thus there is an increasing need for an effective approach to develop energy-efficient, high-performance codes for these small matrix problems, which we call batched factorizations. The many applications that need this functionality could especially benefit from the use of GPUs, which currently are four to five times more energy efficient than multicore CPUs on important scientific workloads. This paper, consequently, describes the development of the most common one-sided factorizations – Cholesky, LU, and QR – for a set of small dense matrices. The algorithms we present, together with their implementations, are, by design, inherently parallel. In particular, our approach is based on representing the process as a sequence of batched BLAS routines that are executed entirely on a GPU. Importantly, this is unlike the LAPACK and the hybrid MAGMA factorization algorithms, which work under drastically different assumptions about hardware design and the efficiency of execution of the various computational kernels involved in the implementation. Thus, our approach is more efficient than what works for a combination of multicore CPUs and GPUs for the problem sizes of interest in the application use cases. The paradigm wherein a single chip (a GPU or a CPU) factorizes a single problem at a time is not at all efficient in our applications' context. We illustrate all of these claims through a detailed performance analysis. With the help of profiling and tracing tools, we guide our development of batched factorizations to achieve up to a two-fold speedup and three-fold better energy efficiency compared against our highly optimized batched CPU implementations based on the MKL library. The tested system featured two sockets of Intel Sandy Bridge CPUs. Compared with a batched LU factorization featured in the CUBLAS library for GPUs, we achieve as high as a 2.5× speedup on the NVIDIA K40 GPU.
%B International Journal of High Performance Computing Applications
%8 02-2015
%G eng
%R 10.1177/1094342014567546
%0 Conference Paper
%B ISC High Performance
%D 2015
%T Framework for Batched and GPU-resident Factorization Algorithms to Block Householder Transformations
%A Azzam Haidar
%A Tingxing Dong
%A Stanimire Tomov
%A Piotr Luszczek
%A Jack Dongarra
%B ISC High Performance
%I Springer
%C Frankfurt, Germany
%8 07-2015
%G eng
%0 Conference Paper
%B 8th Workshop on General Purpose Processing Using GPUs (GPGPU 8)
%D 2015
%T Optimization for Performance and Energy for Batched Matrix Computations on GPUs
%A Azzam Haidar
%A Tingxing Dong
%A Piotr Luszczek
%A Stanimire Tomov
%A Jack Dongarra
%K batched factorization
%K hardware accelerators
%K numerical linear algebra
%K numerical software libraries
%K one-sided factorization algorithms
%X As modern hardware keeps evolving, an increasingly effective approach to develop energy-efficient and high-performance solvers is to design them to work on many small-size independent problems. Many applications already need this functionality, especially for GPUs, which are known to be currently about four to five times more energy efficient than multicore CPUs. We describe the development of the main one-sided factorizations that work for a set of small dense matrices in parallel, and we illustrate our techniques on the LU and Cholesky factorizations. We refer to this mode of operation as a batched factorization. Our approach is based on representing the algorithms as a sequence of batched BLAS routines for GPU-only execution. The goal of avoiding multicore CPU use, e.g., as in the hybrid CPU-GPU algorithms, is to exclusively benefit from the GPU's significantly higher energy efficiency, as well as from the removal of the costly CPU-to-GPU communications. Furthermore, we do not use a single symmetric multiprocessor (on the GPU) to factorize a single problem at a time. We illustrate how our performance analysis and the use of profiling and tracing tools guided the development and optimization of batched factorizations to achieve up to a 2-fold speedup and 3-fold better energy efficiency compared to our highly optimized batched CPU implementations based on the MKL library (when using two sockets of Intel Sandy Bridge CPUs). Compared to a batched LU factorization featured in the CUBLAS library for GPUs, we achieved up to a 2.5× speedup on the K40 GPU.
%B 8th Workshop on General Purpose Processing Using GPUs (GPGPU 8)
%I ACM
%C San Francisco, CA
%8 02-2015
%G eng
%R 10.1145/2716282.2716288
%0 Conference Paper
%B International Conference on Parallel Processing (ICPP-2014)
%D 2014
%T A Fast Batched Cholesky Factorization on a GPU
%A Tingxing Dong
%A Azzam Haidar
%A Stanimire Tomov
%A Jack Dongarra
%X Currently, state-of-the-art libraries like MAGMA focus on very large linear algebra problems, while solving many small independent problems, usually referred to as batched problems, is not given adequate attention. In this paper, we propose a batched Cholesky factorization on a GPU. Three algorithms – non-blocked, blocked, and recursive blocked – were examined. The left-looking version of the Cholesky factorization is used to factorize the panel, and the right-looking version is used to update the trailing matrix in the recursive blocked algorithm. Our batched Cholesky achieves up to a 1.8× speedup compared to the optimized parallel implementation in the MKL library on two sockets of Intel Sandy Bridge CPUs. Further, we use the new routines to develop a single Cholesky factorization solver which targets large matrix sizes. Our approach differs from MAGMA by having an entirely GPU implementation where both the panel factorization and the trailing matrix updates are performed on the GPU. Such an implementation does not depend on the speed of the CPU. Compared to the MAGMA library, our full GPU solution achieves 85% of the performance of the hybrid MAGMA implementation, which uses 16 Sandy Bridge cores in addition to an NVIDIA K40 GPU. Moreover, we achieve 80% of the practical dgemm peak of the machine, while MAGMA achieves only 75%, and finally, in terms of energy consumption, we outperform MAGMA by 1.5× in performance-per-watt for large matrices.
%B International Conference on Parallel Processing (ICPP-2014)
%C Minneapolis, MN
%8 09-2014
%G eng
%0 Conference Paper
%B 16th IEEE International Conference on High Performance Computing and Communications (HPCC)
%D 2014
%T LU Factorization of Small Matrices: Accelerating Batched DGETRF on the GPU
%A Tingxing Dong
%A Azzam Haidar
%A Piotr Luszczek
%A James Harris
%A Stanimire Tomov
%A Jack Dongarra
%X Gaussian Elimination is commonly used to solve dense linear systems in scientific models. In a large number of applications, a need arises to solve many small-size problems, instead of a few large linear systems. The size of each of these small linear systems depends, for example, on the number of the ordinary differential equations (ODEs) used in the model, and can be on the order of hundreds of unknowns. To efficiently exploit the computing power of modern accelerator hardware, these linear systems are processed in batches. To improve the numerical stability of Gaussian Elimination, at least partial pivoting is required, most often accomplished with row pivoting. However, row pivoting can result in a severe performance penalty on GPUs because it brings in thread divergence and non-coalesced memory accesses. The state-of-the-art libraries for linear algebra that target GPUs, such as MAGMA, focus on large matrix sizes. They change the data layout by transposing the matrix to avoid these divergence and non-coalescing penalties. However, the data movement associated with transposition is very expensive for small matrices. In this paper, we propose a batched LU factorization for GPUs by using a multi-level blocked right-looking algorithm that preserves the data layout but minimizes the penalty of partial pivoting. Our batched LU achieves up to a 2.5-fold speedup when compared to the alternative CUBLAS solutions on a K40c GPU and a 3.6-fold speedup over MKL on a node of the Titan supercomputer at ORNL in a nuclear reaction network simulation.
%B 16th IEEE International Conference on High Performance Computing and Communications (HPCC)
%I IEEE
%C Paris, France
%8 08-2014
%G eng
%0 Conference Paper
%B VECPAR 2014 (Best Paper)
%D 2014
%T Mixed-precision orthogonalization scheme and adaptive step size for CA-GMRES on GPUs
%A Ichitaro Yamazaki
%A Stanimire Tomov
%A Tingxing Dong
%A Jack Dongarra
%X We propose a mixed-precision orthogonalization scheme that takes the input matrix in a standard 32- or 64-bit floating-point precision, but uses higher-precision arithmetic to accumulate its intermediate results. For the 64-bit precision, our scheme uses software emulation for the higher-precision arithmetic, and requires about 20x more computation but about the same amount of communication as the standard orthogonalization scheme. Since computation is becoming less expensive compared to communication on new and emerging architectures, the relative cost of our mixed-precision scheme is decreasing. Our case studies with CA-GMRES on a GPU demonstrate that using mixed precision for this small but critical segment of CA-GMRES can improve not only its overall numerical stability but also, in some cases, its performance.
%B VECPAR 2014 (Best Paper)
%C Eugene, OR
%8 06-2014
%G eng
%0 Conference Paper
%B IPDPS 2014
%D 2014
%T A Step towards Energy Efficient Computing: Redesigning A Hydrodynamic Application on CPU-GPU
%A Tingxing Dong
%A Veselin Dobrev
%A Tzanio Kolev
%A Robert Rieben
%A Stanimire Tomov
%A Jack Dongarra
%K Computer science
%K CUDA
%K FEM
%K Finite element method
%K linear algebra
%K nVidia
%K Tesla K20
%X Power and energy consumption are becoming an increasing concern in high performance computing. Compared to multi-core CPUs, GPUs have a much better performance per watt. In this paper we discuss efforts to redesign the most computation-intensive parts of BLAST, an application that solves the equations for compressible hydrodynamics with high order finite elements, using GPUs [10, 1]. In order to exploit the hardware parallelism of GPUs and achieve high performance, we implemented custom linear algebra kernels. We intensively optimized our CUDA kernels by exploiting the memory hierarchy, and they substantially exceed the vendor's library routines in performance. We propose an auto-tuning technique to adapt our CUDA kernels to the orders of the finite element method. Compared to a previous base implementation, our redesign and optimization lowered the energy consumption of the GPU in two aspects: 60% less time to solution and 10% less power required. Compared to the CPU-only solution, our GPU-accelerated BLAST obtained a 2.5× overall speedup and 1.42× energy efficiency (greenup) using 4th order (Q4) finite elements, and a 1.9× speedup and 1.27× greenup using 2nd order (Q2) finite elements.
%B IPDPS 2014
%I IEEE
%C Phoenix, AZ
%8 05-2014
%G eng
%0 Generic
%D 2013
%T Hydrodynamic Computation with Hybrid Programming on CPU-GPU Clusters
%A Tingxing Dong
%A Veselin Dobrev
%A Tzanio Kolev
%A Robert Rieben
%A Stanimire Tomov
%A Jack Dongarra
%X The explosion of parallelism and heterogeneity in today's computer architectures has created opportunities as well as challenges for redesigning legacy numerical software to harness the power of new hardware. In this paper we address the main challenges in redesigning BLAST – a numerical library that solves the equations of compressible hydrodynamics using high order finite element methods (FEM) in a moving Lagrangian frame – to support CPU-GPU clusters. We use a hybrid MPI + OpenMP + CUDA programming model that includes two layers: domain-decomposed MPI parallelization and OpenMP + CUDA acceleration within a given domain. To optimize the code, we implemented custom linear algebra kernels and introduced an auto-tuning technique to deal with heterogeneity and load balancing at runtime. Our tests show that 12 Intel Xeon cores and two M2050 GPUs deliver a 24x speedup compared to a single core, and a 2.5x speedup compared to 12 MPI tasks in one node. Further, we achieve perfect weak scaling, demonstrated on a cluster with up to 64 GPUs in 32 nodes. Our choice of programming model and proposed solutions, as related to parallelism and load balancing, specifically targets high order FEM discretizations, and can be used equally successfully for applications beyond hydrodynamics. A major accomplishment is that we further establish the appeal of high order FEMs, which, despite their better approximation properties, are often avoided due to their high computational cost. GPUs, as we show, have the potential to make them the method of choice, as the increased computational cost is also localized, e.g., cast as Level 3 BLAS, and thus can be done very efficiently (close to "free" relative to the usual overheads inherent in sparse computations).
%B University of Tennessee Computer Science Technical Report
%8 07-2013
%G eng
%0 Journal Article
%J Concurrency and Computation: Practice and Experience
%D 2013
%T Tridiagonalization of a dense symmetric matrix on multiple GPUs and its application to symmetric eigenvalue problems
%A Ichitaro Yamazaki
%A Tingxing Dong
%A Raffaele Solcà
%A Stanimire Tomov
%A Jack Dongarra
%A Thomas C. Schulthess
%X For software to fully exploit the computing power of emerging heterogeneous computers, not only must the required computational kernels be optimized for the specific hardware architectures, but an effective scheduling scheme is also needed to utilize the available heterogeneous computational units and to hide the communication between them. As a case study, we develop a static scheduling scheme for the tridiagonalization of a symmetric dense matrix on multicore CPUs with multiple graphics processing units (GPUs) on a single compute node. We then parallelize and optimize the BLAS-2 symmetric matrix-vector multiplication and the BLAS-3 low-rank symmetric matrix updates on the GPUs. We demonstrate the good scalability of these multi-GPU BLAS kernels and the effectiveness of our scheduling scheme on twelve Intel Xeon processors and three NVIDIA GPUs. We then integrate our hybrid CPU-GPU kernel into computational kernels at higher levels of the software stack, that is, a shared-memory dense eigensolver and a distributed-memory sparse eigensolver. Our experimental results show that our kernels greatly improve the performance of these higher-level kernels, not only reducing the solution time but also enabling the solution of larger-scale problems. Because such symmetric eigenvalue problems arise in many scientific and engineering simulations, our kernels could potentially lead to new scientific discoveries. Furthermore, these dense linear algebra algorithms present algorithmic characteristics that can be found in other algorithms. Hence, they are not only important computational kernels on their own but also useful testbeds to study the performance of emerging computers and the effects of various optimization techniques.
%B Concurrency and Computation: Practice and Experience
%8 10-2013
%G eng
%0 Generic
%D 2012
%T Acceleration of the BLAST Hydro Code on GPU
%A Tingxing Dong
%A Tzanio Kolev
%A Robert Rieben
%A Veselin Dobrev
%A Stanimire Tomov
%A Jack Dongarra
%B Supercomputing '12 (poster)
%I SC12
%C Salt Lake City, Utah
%8 11-2012
%G eng
%0 Conference Proceedings
%B ACM/IEEE Conference on Supercomputing (SC’11)
%D 2011
%T Optimizing Symmetric Dense Matrix-Vector Multiplication on GPUs
%A Rajib Nath
%A Stanimire Tomov
%A Tingxing Dong
%A Jack Dongarra
%K magma
%B ACM/IEEE Conference on Supercomputing (SC’11)
%C Seattle, WA
%8 11-2011
%G eng