|Title|Assessing the impact of ABFT and Checkpoint composite strategies|
|Publication Type|Tech Report|
|Year of Publication|2013|
|Authors|Bosilca, G., A. Bouteiller, T. Herault, Y. Robert, and J. Dongarra|
|Technical Report Series Title|University of Tennessee Computer Science Technical Report|
|Keywords|ABFT, checkpoint, fault-tolerance, High-performance computing, resilience|
Algorithm-specific fault-tolerance approaches promise unparalleled scalability and performance in failure-prone environments. With advances in the theoretical and practical understanding of the algorithmic traits that enable such approaches, a growing number of frequently used algorithms (including all widely used factorization kernels) have been shown to possess these properties. Such algorithms provide a temporal section of the execution during which the data is protected by its own intrinsic properties and can be algorithmically recomputed without the need for checkpoints. However, while typical scientific applications spend a significant fraction of their execution time in library calls that can be ABFT-protected, they interleave these calls with sections that are difficult or even impossible to protect with ABFT. As a consequence, the only fault-tolerance approach currently used for these applications is checkpoint/restart. In this paper we propose a model and a simulator to investigate the behavior of a composite protocol that alternates between ABFT and checkpoint/restart to protect each phase of an iterative application composed of ABFT-aware and ABFT-unaware sections. We show that this approach drastically increases the performance delivered by the system, especially at scale, by allowing checkpoints to be taken less frequently while simultaneously decreasing the volume of data that needs to be checkpointed.
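To give a feel for why the composite strategy pays off at scale, the sketch below contrasts the first-order waste of pure periodic checkpoint/restart with a composite scheme in which only the ABFT-unaware fraction of the execution is checkpoint-protected (with a smaller checkpoint, since ABFT-covered data need not be saved). This is an illustrative back-of-the-envelope model, not the paper's actual model or simulator: the Young approximation of the optimal checkpoint period, the per-node MTBF, the checkpoint costs, the ABFT-unaware fraction `alpha`, and the flat ABFT overhead are all assumed values chosen for illustration.

```python
from math import sqrt

def ckpt_waste(C, mtbf):
    """First-order waste (fraction of lost time) of periodic checkpointing,
    using Young's approximation T = sqrt(2*C*MTBF) for the optimal period."""
    T = sqrt(2.0 * C * mtbf)
    return C / T + T / (2.0 * mtbf)

# Hypothetical parameters (assumptions, not values from the paper):
C_full = 60.0                          # full application checkpoint cost (s)
C_small = 20.0                         # reduced checkpoint: non-ABFT data only (s)
alpha = 0.2                            # fraction of time in ABFT-unaware sections
abft_overhead = 0.03                   # assumed flat overhead of ABFT protection
mtbf_node = 10 * 365 * 24 * 3600.0     # assumed per-node MTBF: 10 years (s)

results = {}
for nodes in (1_000, 10_000, 100_000):
    mtbf = mtbf_node / nodes           # system MTBF shrinks linearly with scale
    pure = ckpt_waste(C_full, mtbf)
    # Composite: checkpoint/restart only protects the ABFT-unaware fraction,
    # with a smaller checkpoint; ABFT covers the rest at a flat overhead.
    composite = alpha * ckpt_waste(C_small, mtbf) + (1 - alpha) * abft_overhead
    results[nodes] = (pure, composite)
    print(f"{nodes:>7} nodes: pure C/R waste = {pure:.3f}, "
          f"composite waste = {composite:.3f}")
```

Even in this crude model the gap widens with the node count: the checkpoint-only waste grows as the system MTBF shrinks, while the composite scheme both rarefies checkpoints (only the ABFT-unaware phases need them) and shrinks each checkpoint.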