Local Rollback for Resilient MPI Applications with Application-Level Checkpointing and Message Logging

Title: Local Rollback for Resilient MPI Applications with Application-Level Checkpointing and Message Logging
Publication Type: Journal Article
Year of Publication: 2019
Authors: Losada, N., G. Bosilca, A. Bouteiller, P. González, and M. J. Martín
Journal: Future Generation Computer Systems
Volume: 91
Pagination: 450-464
Date Published: 02-2019
Keywords: Application-level checkpointing, Local rollback, Message logging, MPI, resilience
Abstract

The resilience approach generally used in high-performance computing (HPC) relies on coordinated checkpoint/restart, a global rollback of all the processes that are running the application. However, in many instances, the failure has a more localized scope, and its impact is usually restricted to a subset of the resources being used. Thus, a global rollback results in unnecessary overhead and energy consumption, since all processes, including those unaffected by the failure, discard their state and roll back to the last checkpoint to repeat computations that were already done. The User Level Failure Mitigation (ULFM) interface – the latest proposal for the inclusion of resilience features in the Message Passing Interface (MPI) standard – enables the deployment of more flexible recovery strategies, including localized recovery. This work proposes a local rollback approach that can be generally applied to Single Program, Multiple Data (SPMD) applications by combining ULFM, the ComPiler for Portable Checkpointing (CPPC) tool, and the Open MPI VProtocol system-level message logging component. Only failed processes are recovered from the last checkpoint, while consistency before further progress in the execution is achieved through a two-level message logging process. To further optimize this approach, point-to-point communications are logged by the Open MPI VProtocol component, while collective communications are optimally logged at the application level—thereby decoupling the logging protocol from the particular collective implementation. This spatially coordinated protocol applied by CPPC reduces the log size, the log memory requirements and, overall, the resilience impact on the applications.

DOI: 10.1016/j.future.2018.09.041