Computation Mapping for Multi-Level Storage Cache Hierarchies
|Title|Computation Mapping for Multi-Level Storage Cache Hierarchies|
|Publication Type|Conference Paper|
|Year of Publication|2010|
|Authors|Kandemir, M, Muralidhara, SP, Karakoy, M, Son, SW|
|Conference Name|ACM HPDC 2010|
|Conference Location|Chicago, Illinois|
Improving I/O performance is an important issue for many data-intensive, large-scale parallel applications. While storage caches have been one way of improving the I/O latencies of parallel applications, most prior work on storage caches focuses on the management and partitioning of cache space. The compiler's role in taking advantage of storage caches, in particular multi-level storage caches, has been largely unexplored. The main contribution of this paper is a shared-storage-cache-aware loop iteration distribution (iteration-to-processor mapping) scheme for I/O-intensive applications that manipulate disk-resident data sets. The proposed scheme is compiler directed and can be tuned to target any multi-level storage cache hierarchy. At the core of our scheme lies an iterative strategy that clusters loop iterations based on the underlying storage cache hierarchy and on how the different storage caches in that hierarchy are shared by different processors. We tested this mapping scheme using a set of eight I/O-intensive application programs and collected experimental data. The results collected so far are very promising and show that our proposed scheme 1) improves the I/O performance of the original applications by 26.3% on average, which in turn reduces their overall execution latencies by 18.9% on average, and 2) performs significantly better than a state-of-the-art (but storage-cache-hierarchy-agnostic) data locality optimization scheme. We also present an enhancement to our baseline implementation that performs local scheduling once the loop iteration distribution has been carried out.
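To make the core idea concrete, the following is a minimal, hypothetical sketch of cache-sharing-aware iteration-to-processor mapping; the function names, the block-access model, and the load-balancing heuristic are illustrative assumptions, not the paper's actual algorithm. It groups loop iterations by the disk block they touch, then assigns each group of iterations to a set of processors that share a storage cache, so iterations reusing the same block can hit in that shared cache.

```python
# Hypothetical sketch of shared-storage-cache-aware iteration mapping.
# Assumptions (not from the paper): each iteration touches one disk block,
# and the cache hierarchy is summarized as processor groups that share a cache.
from collections import defaultdict


def cluster_iterations(iterations, block_of):
    """Cluster iteration indices by the disk block each one accesses."""
    clusters = defaultdict(list)
    for it in iterations:
        clusters[block_of(it)].append(it)
    return clusters


def map_to_processors(clusters, shared_groups):
    """Assign each cluster to the least-loaded processor group, so that
    iterations reusing the same block run on processors sharing a cache."""
    load = {tuple(g): 0 for g in shared_groups}
    mapping = {}
    # Place the largest clusters first to keep the load roughly balanced.
    for block, its in sorted(clusters.items(), key=lambda kv: -len(kv[1])):
        group = min(load, key=load.get)
        mapping[block] = group
        load[group] += len(its)
    return mapping


# Example: 8 iterations where iteration i touches block i // 2, and two
# 2-processor groups, each sharing one storage cache.
clusters = cluster_iterations(range(8), lambda i: i // 2)
mapping = map_to_processors(clusters, [[0, 1], [2, 3]])
print(mapping)
```

In this toy setup, the four blocks are split evenly across the two cache-sharing groups, and both iterations touching a given block always land in the same group. The paper's actual scheme is iterative and targets arbitrary multi-level hierarchies, which this flat sketch does not capture.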