%0 Conference Paper
%B 2019 International Conference on Parallel Computing (ParCo2019)
%D 2019
%T Characterization of Power Usage and Performance in Data-Intensive Applications using MapReduce over MPI
%A Joshua Davis
%A Tao Gao
%A Sunita Chandrasekaran
%A Heike Jagode
%A Anthony Danalis
%A Pavan Balaji
%A Jack Dongarra
%A Michela Taufer
%C Prague, Czech Republic
%8 2019-09
%G eng

%0 Journal Article
%J IEEE Transactions on Parallel and Distributed Systems
%D 2017
%T Argobots: A Lightweight Low-Level Threading and Tasking Framework
%A Sangmin Seo
%A Abdelhalim Amer
%A Pavan Balaji
%A Cyril Bordage
%A George Bosilca
%A Alex Brooks
%A Philip Carns
%A Adrian Castello
%A Damien Genet
%A Thomas Herault
%A Shintaro Iwasaki
%A Prateek Jindal
%A Sanjay Kale
%A Sriram Krishnamoorthy
%A Jonathan Lifflander
%A Huiwei Lu
%A Esteban Meneses
%A Marc Snir
%A Yanhua Sun
%A Kenjiro Taura
%A Pete Beckman
%K Argobots
%K context switch
%K I/O
%K interoperability
%K lightweight
%K MPI
%K OpenMP
%K stackable scheduler
%K tasklet
%K user-level thread
%X In the past few decades, a number of user-level threading and tasking models have been proposed in the literature to address the shortcomings of OS-level threads, primarily with respect to cost and flexibility. Current state-of-the-art user-level threading and tasking models, however, are either too specific to applications or architectures or are not as powerful or flexible. In this paper, we present Argobots, a lightweight, low-level threading and tasking framework that is designed as a portable and performant substrate for high-level programming models or runtime systems. Argobots offers a carefully designed execution model that balances generality of functionality with providing a rich set of controls to allow specialization by the user or high-level programming model. We describe the design, implementation, and optimization of Argobots and present integrations with three example high-level models: OpenMP, MPI, and a co-located I/O service. Evaluations show that (1) Argobots outperforms existing generic threading runtimes; (2) our OpenMP runtime offers more efficient interoperability capabilities than production OpenMP runtimes do; (3) when MPI interoperates with Argobots instead of Pthreads, it enjoys reduced synchronization costs and better latency-hiding capabilities; and (4) the I/O service with Argobots reduces interference with co-located applications, achieving performance competitive with that of the Pthreads version.
%8 2017-10
%G eng
%U http://ieeexplore.ieee.org/document/8082139/
%R 10.1109/TPDS.2017.2766062

%0 Book Section
%B High Performance Computing: 31st International Conference, ISC High Performance 2016, Frankfurt, Germany, June 19-23, 2016, Proceedings
%D 2016
%T Performance, Design, and Autotuning of Batched GEMM for GPUs
%A Ahmad Abdelfattah
%A Azzam Haidar
%A Stanimire Tomov
%A Jack Dongarra
%E Julian M. Kunkel
%E Pavan Balaji
%E Jack Dongarra
%X The general matrix-matrix multiplication (GEMM) is the most important numerical kernel in dense linear algebra, and is the key component for obtaining high performance in most LAPACK routines. As batched computations on relatively small problems continue to gain interest in many scientific applications, a need arises for a high-performance GEMM kernel for batches of small matrices. Such a kernel should be well designed and tuned to handle small sizes and to maintain high performance for realistic test cases found in higher-level LAPACK routines and in scientific computing applications in general. This paper presents a high-performance batched GEMM kernel on Graphics Processing Units (GPUs). We address batched problems with both fixed and variable sizes, and show that specialized GEMM designs and a comprehensive autotuning process are needed to handle problems of small sizes. For most performance tests reported in this paper, the proposed kernels outperform state-of-the-art approaches using a K40c GPU.
%I Springer International Publishing
%P 21–38
%@ 978-3-319-41321-1
%G eng
%U http://dx.doi.org/10.1007/978-3-319-41321-1_2
%R 10.1007/978-3-319-41321-1_2