%0 Conference Paper %B ISPASS-2014 %D 2014 %T MIAMI: A Framework for Application Performance Diagnosis %A Gabriel Marin %A Jack Dongarra %A Dan Terpstra %X A typical application tuning cycle repeats the following three steps in a loop: performance measurement, analysis of results, and code refactoring. While performance measurement is well covered by existing tools, analysis of results to understand the main sources of inefficiency and to identify opportunities for optimization is generally left to the user. Today's state-of-the-art performance analysis tools use instrumentation or hardware counter sampling to measure the performance of interactions between code and the target architecture during execution. Such measurements are useful to identify hotspots in applications, places where execution time is spent or where cache misses are incurred. However, explanatory understanding of tuning opportunities requires a more detailed, mechanistic modeling approach. This paper presents MIAMI (Machine Independent Application Models for performance Insight), a set of tools for automatic performance diagnosis. MIAMI uses application characterization and models of target architectures to reason about an application's performance. MIAMI uses a modeling approach based on first-order principles to identify performance bottlenecks, pinpoint optimization opportunities, and compute bounds on the potential for improvement. %B ISPASS-2014 %I IEEE %C Monterey, CA %8 2014-03 %@ 978-1-4799-3604-5 %G eng %R 10.1109/ISPASS.2014.6844480

%0 Conference Paper %B International Supercomputing Conference 2013 (ISC'13) %D 2013 %T Beyond the CPU: Hardware Performance Counter Monitoring on Blue Gene/Q %A Heike McCraw %A Dan Terpstra %A Jack Dongarra %A Kris Davis %A Roy Musselman %B International Supercomputing Conference 2013 (ISC'13) %I Springer %C Leipzig, Germany %8 2013-06 %G eng

%0 Conference Paper %B 2013 IEEE International Symposium on Performance Analysis of Systems and Software %D 2013 %T Non-Determinism and Overcount on Modern Hardware Performance Counter Implementations %A Vincent Weaver %A Dan Terpstra %A Shirley Moore %B 2013 IEEE International Symposium on Performance Analysis of Systems and Software %I IEEE %C Austin, TX %8 2013-04 %G eng

%0 Generic %D 2013 %T PAPI 5: Measuring Power, Energy, and the Cloud %A Vincent Weaver %A Dan Terpstra %A Heike McCraw %A Matt Johnson %A Kiran Kasichayanula %A James Ralph %A John Nelson %A Phil Mucci %A Tushar Mohan %A Shirley Moore %I 2013 IEEE International Symposium on Performance Analysis of Systems and Software %C Austin, TX %8 2013-04 %G eng

%0 Conference Proceedings %B International Workshop on Power-Aware Systems and Architectures %D 2012 %T Measuring Energy and Power with PAPI %A Vincent M Weaver %A Matt Johnson %A Kiran Kasichayanula %A James Ralph %A Piotr Luszczek %A Dan Terpstra %A Shirley Moore %K papi %X Energy and power consumption are becoming critical metrics in the design and usage of high performance systems. We have extended the Performance API (PAPI) analysis library to measure and report energy and power values. These values are reported using the existing PAPI API, allowing code previously instrumented for performance counters to also measure power and energy. Higher-level tools that build on PAPI will automatically gain support for power and energy readings when used with the newest version of PAPI. We describe in detail the types of energy and power readings available through PAPI.
We support external power meters, as well as values provided internally by recent CPUs and GPUs. Measurements are provided directly to the instrumented process, allowing immediate code analysis in real time. We provide examples showing results that can be obtained with our infrastructure. %B International Workshop on Power-Aware Systems and Architectures %C Pittsburgh, PA %8 2012-09 %G eng %R 10.1109/ICPPW.2012.39

%0 Journal Article %J CloudTech-HPC 2012 %D 2012 %T PAPI-V: Performance Monitoring for Virtual Machines %A Matt Johnson %A Heike McCraw %A Shirley Moore %A Phil Mucci %A John Nelson %A Dan Terpstra %A Vincent M Weaver %A Tushar Mohan %K papi %X This paper describes extensions to the PAPI hardware counter library for virtual environments, called PAPI-V. The extensions support timing routines, I/O measurements, and processor counters. The PAPI-V extensions will allow application and tool developers to use a familiar interface to obtain relevant hardware performance monitoring information in virtual environments. %B CloudTech-HPC 2012 %C Pittsburgh, PA %8 2012-09 %G eng %R 10.1109/ICPPW.2012.29

%0 Journal Article %J SAAHPC '12 (Best Paper Award) %D 2012 %T Power Aware Computing on GPUs %A Kiran Kasichayanula %A Dan Terpstra %A Piotr Luszczek %A Stanimire Tomov %A Shirley Moore %A Gregory D. Peterson %K magma %B SAAHPC '12 (Best Paper Award) %C Argonne, IL %8 2012-07 %G eng

%0 Conference Proceedings %B 6th Workshop on Virtualization in High-Performance Cloud Computing %D 2011 %T Evaluation of the HPC Challenge Benchmarks in Virtualized Environments %A Piotr Luszczek %A Eric Meek %A Shirley Moore %A Dan Terpstra %A Vincent M Weaver %A Jack Dongarra %K hpcc %B 6th Workshop on Virtualization in High-Performance Cloud Computing %C Bordeaux, France %8 2011-08 %G eng

%0 Journal Article %J Tools for High Performance Computing 2009 %D 2010 %T Collecting Performance Data with PAPI-C %A Dan Terpstra %A Heike Jagode %A Haihang You %A Jack Dongarra %K mumi %K papi %X Modern high performance computer systems continue to increase in size and complexity. Tools to measure application performance in these increasingly complex environments must also increase the richness of their measurements to provide insights into the increasingly intricate ways in which software and hardware interact. PAPI (the Performance API) has provided consistent platform and operating system independent access to CPU hardware performance counters for nearly a decade. Recent trends toward massively parallel multi-core systems with often heterogeneous architectures present new challenges for the measurement of hardware performance information, which is now available not only on the CPU core itself, but scattered across the chip and system. We discuss the evolution of PAPI into Component PAPI, or PAPI-C, in which multiple sources of performance data can be measured simultaneously via a common software interface. Several examples of components and component data measurements are discussed. We explore the challenges to hardware performance measurement in existing multi-core architectures. We conclude with an exploration of future directions for the PAPI interface.
%B Tools for High Performance Computing 2009 %I Springer Berlin / Heidelberg %C 3rd Parallel Tools Workshop, Dresden, Germany %P 157-173 %8 2010-05 %G eng %R 10.1007/978-3-642-11261-4_11

%0 Journal Article %J ISC'09 %D 2009 %T I/O Performance Analysis for the Petascale Simulation Code FLASH %A Heike Jagode %A Shirley Moore %A Dan Terpstra %A Jack Dongarra %A Andreas Knuepfer %A Matthias Jurenz %A Matthias S. Mueller %A Wolfgang E. Nagel %K test %B ISC'09 %C Hamburg, Germany %8 2009-06 %G eng

%0 Conference Paper %B PADTAD Workshop, IPDPS 2003 %D 2003 %T Experiences and Lessons Learned with a Portable Interface to Hardware Performance Counters %A Jack Dongarra %A Kevin London %A Shirley Moore %A Phil Mucci %A Dan Terpstra %A Haihang You %A Min Zhou %K lacsi %K papi %X The PAPI project has defined and implemented a cross-platform interface to the hardware counters available on most modern microprocessors. The interface has gained widespread use and acceptance from hardware vendors, users, and tool developers. This paper reports on experiences with the community-based open-source effort to define the PAPI specification and implement it on a variety of platforms. Collaborations with tool developers who have incorporated support for PAPI are described. Issues related to interpretation and accuracy of hardware counter data and to the overheads of collecting this data are discussed. The paper concludes with implications for the design of the next version of PAPI. %B PADTAD Workshop, IPDPS 2003 %I IEEE %C Nice, France %8 2003-04 %@ 0-7695-1926-1 %G eng

%0 Conference Paper %B Conference on Linux Clusters: The HPC Revolution %D 2001 %T Using PAPI for Hardware Performance Monitoring on Linux Systems %A Jack Dongarra %A Kevin London %A Shirley Moore %A Phil Mucci %A Dan Terpstra %K papi %X PAPI is a specification of a cross-platform interface to hardware performance counters on modern microprocessors. These counters exist as a small set of registers that count events, which are occurrences of specific signals related to a processor's function. Monitoring these events has a variety of uses in application performance analysis and tuning. The PAPI specification consists of a standard set of events deemed most relevant for application performance tuning, as well as high-level and low-level sets of routines for accessing the counters. The high-level interface simply provides the ability to start, stop, and read sets of events, and is intended for the acquisition of simple but accurate measurements by application engineers. The fully programmable low-level interface provides sophisticated options for controlling the counters, such as setting thresholds for interrupt on overflow, as well as access to all native counting modes and events, and is intended for third-party tool writers or users with more sophisticated needs. PAPI has been implemented on a number of platforms, including Linux/x86 and Linux/IA-64. The Linux/x86 implementation requires a kernel patch that provides a driver for the hardware counters. The driver memory-maps the counter registers into user space and allows virtualizing the counters on a per-process or per-thread basis. The kernel patch is being proposed for inclusion in the main Linux tree. The PAPI library provides access on Linux platforms not only to the standard set of events mentioned above but also to all the Linux/x86 and Linux/IA-64 native events.
PAPI has been installed and is in use, either directly or through incorporation into third-party end-user performance analysis tools, on a number of Linux clusters, including the New Mexico LosLobos cluster and Linux clusters at NCSA and the University of Tennessee being used for the GrADS (Grid Application Development Software) project. %B Conference on Linux Clusters: The HPC Revolution %I Linux Clusters Institute %C Urbana, Illinois %8 2001-06 %G eng
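
Several of the entries above describe PAPI's counter interfaces: high-level routines to start, stop, and read sets of events, and a programmable low-level event-set interface. The C fragment below is a minimal sketch of that kind of instrumentation, assuming a current PAPI installation; it uses the low-level routines PAPI_library_init, PAPI_create_eventset, PAPI_add_event, PAPI_start, and PAPI_stop, and the preset events and toy workload are illustrative choices, not code taken from any of the papers listed here.

/* Minimal PAPI low-level sketch: count total instructions and cycles
 * around a toy loop. Event choices and workload are illustrative only.
 * Build with something like: cc -O2 papi_sketch.c -lpapi */
#include <stdio.h>
#include <stdlib.h>
#include <papi.h>

int main(void)
{
    int event_set = PAPI_NULL;
    long long counts[2];   /* results come back in the order events were added */
    double sum = 0.0;

    /* Initialize the library; it returns the current version on success. */
    if (PAPI_library_init(PAPI_VER_CURRENT) != PAPI_VER_CURRENT) {
        fprintf(stderr, "PAPI_library_init failed\n");
        return EXIT_FAILURE;
    }

    /* Build an event set with two standard preset events. */
    if (PAPI_create_eventset(&event_set) != PAPI_OK ||
        PAPI_add_event(event_set, PAPI_TOT_INS) != PAPI_OK ||
        PAPI_add_event(event_set, PAPI_TOT_CYC) != PAPI_OK) {
        fprintf(stderr, "could not set up event set\n");
        return EXIT_FAILURE;
    }

    /* Start counting, run some work, then stop and collect the totals. */
    PAPI_start(event_set);
    for (int i = 0; i < 1000000; i++)
        sum += i * 0.5;
    PAPI_stop(event_set, counts);

    printf("sum=%g instructions=%lld cycles=%lld\n", sum, counts[0], counts[1]);
    return EXIT_SUCCESS;
}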