Understanding Checkpointing Overheads on Massive-Scale Systems: Analysis of the IBM Blue Gene/P System
| Title | Understanding Checkpointing Overheads on Massive-Scale Systems: Analysis of the IBM Blue Gene/P System |
| Publication Type | Journal Article |
| Year of Publication | 2008 |
| Authors | Gupta, R, Naik, H, Beckman, PH |
| Journal | International Journal of High Performance Computing Applications |
Providing fault tolerance in high-end petascale systems, which consist of millions of hardware components and complex software stacks, is an increasingly challenging task. Checkpointing remains the most prevalent fault-tolerance technique in such high-end systems. Considerable research has focused on optimizing checkpointing; in practice, however, it still imposes a high overhead on users. In this paper, we study the checkpointing overhead seen by various applications running on leadership-class machines such as the IBM Blue Gene/P at Argonne National Laboratory. In addition to studying popular applications, we design a methodology to help users understand and intelligently choose an optimal checkpointing frequency that reduces the overall checkpointing overhead incurred. In particular, we study the Grid-Based Projector-Augmented Wave application, the Carr-Parrinello Molecular Dynamics application, the Nek5000 computational fluid dynamics application, and the Parallel Ocean Program application, and analyze their memory usage and possible checkpointing trends on 65,536 processors of the Blue Gene/P system.
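The "optimal checkpointing frequency" the abstract refers to is commonly estimated with Young's first-order approximation, tau = sqrt(2·C·M), where C is the time to write one checkpoint and M is the system mean time between failures. The paper's exact methodology is not reproduced here; the following is a minimal illustrative sketch, with the function name and example numbers chosen for illustration only:

```python
import math

def optimal_checkpoint_interval(checkpoint_cost_s: float, mtbf_s: float) -> float:
    """Young's first-order approximation for the optimal compute time
    between checkpoints: tau = sqrt(2 * C * M).

    checkpoint_cost_s -- time (seconds) to write one checkpoint (C)
    mtbf_s            -- system mean time between failures (seconds, M)

    Illustrative only; the paper's own model may differ.
    """
    return math.sqrt(2.0 * checkpoint_cost_s * mtbf_s)

# Hypothetical example: a 10-minute checkpoint on a system with a 24-hour MTBF
tau = optimal_checkpoint_interval(600.0, 24 * 3600.0)
print(round(tau / 3600.0, 2))  # optimal checkpoint interval, in hours
```

Note how the approximation captures the trade-off the abstract describes: as checkpoint cost grows (e.g. with larger memory footprints on more processors), the optimal interval grows, so checkpoints should be taken less frequently.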