%0 Conference Paper
%B International Conference on Computational Science (ICCS 2017)
%D 2017
%T The Design and Performance of Batched BLAS on Modern High-Performance Computing Systems
%A Jack Dongarra
%A Sven Hammarling
%A Nicholas J. Higham
%A Samuel Relton
%A Pedro Valero-Lara
%A Mawussi Zounon
%K Batched BLAS
%K BLAS
%K High-performance computing
%K Memory management
%K Parallel processing
%K Scientific computing
%X A current trend in high-performance computing is to decompose a large linear algebra problem into batches containing thousands of smaller problems that can be solved independently, before collating the results. To standardize the interface to these routines, the community is developing an extension to the BLAS standard (the batched BLAS), enabling users to perform thousands of small BLAS operations in parallel whilst making efficient use of their hardware. We discuss the benefits and drawbacks of the current batched BLAS proposals and perform a number of experiments, focusing on general matrix-matrix multiplication (GEMM), to explore their effect on performance. In particular, we analyze the effect of novel data layouts which, for example, interleave the matrices in memory to aid vectorization and prefetching of data. Utilizing these modifications, our code outperforms both MKL and cuBLAS by up to 6 times on the self-hosted Intel KNL (codenamed Knights Landing) and Kepler GPU architectures, for large numbers of double-precision GEMM operations using matrices of size 2 × 2 to 20 × 20.
%I Elsevier
%C Zürich, Switzerland
%8 2017-06
%G eng
%R 10.1016/j.procs.2017.05.138