Argonne National Laboratory

Fast Error-bounded Lossy HPC Data Compression with SZ

Title: Fast Error-bounded Lossy HPC Data Compression with SZ
Publication Type: Conference Paper
Year of Publication: 2015
Authors: Di, S., Cappello, F.
Conference Name: IPDPS 2016
Date Published: 06/2016
Publisher: IEEE
Conference Location: Chicago, IL
Other Numbers: ANL/MCS-P5437-1115
Abstract: Today’s HPC applications are producing extremely large amounts of data, so it is necessary to compress them efficiently before storing them to parallel file systems. In this paper, we optimize error-bounded HPC data compression by proposing a novel compression method that works very effectively on large-scale HPC data sets. The method starts by linearizing multi-dimensional snapshot data. The key idea is to fit/predict the successive data points with a best-fit selection of curve-fitting models. Data points that can be predicted precisely are replaced by the code of the corresponding curve-fitting model. For the unpredictable data that cannot be approximated by curve-fitting models, we perform an optimized lossy compression via a binary representation analysis. We evaluate the proposed solution using 13 real-world HPC applications across different scientific domains and compare it to many other state-of-the-art compression methods (including Gzip, FPC, ISABELA, NUMARCK, ZFP, FPZIP, etc.). Experiments show that the compression ratio of our compressor ranges from 3.3:1 to 436:1, exceeding the second-best solution, ZFP, by at least 2x and by up to an order of magnitude in most cases. The compression time of SZ is comparable to the other solutions’, while its decompression time is 50%-90% lower than that of the second-best one. On an extreme-scale use case, experiments show that the compression ratio of SZ exceeds that of ZFP by 80%.
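The abstract's core idea, selecting the best-fit curve-fitting predictor per point and storing only a model code when the prediction stays within the error bound, can be sketched as follows. This is a minimal illustration, not the SZ implementation: the predictor formulas (constant, linear, and quadratic extrapolation from preceding reconstructed values) and the `best_fit_predict` helper are assumptions made for the sketch, and the real compressor encodes the model codes and unpredictable data far more compactly.

```python
def best_fit_predict(data, error_bound):
    """Sketch of best-fit predictive coding: for each point, try a few
    curve-fitting predictors built from preceding *reconstructed* values
    (what the decompressor will see) and emit a model code when one of
    them falls within the error bound; otherwise keep the raw value."""
    out = []    # per point: ("model", predictor_index) or ("raw", value)
    recon = []  # reconstructed stream, mirroring the decompressor's state
    for x in data:
        preds = []
        if len(recon) >= 1:
            preds.append(recon[-1])                              # constant fit
        if len(recon) >= 2:
            preds.append(2 * recon[-1] - recon[-2])              # linear fit
        if len(recon) >= 3:
            preds.append(3 * recon[-1] - 3 * recon[-2] + recon[-3])  # quadratic fit
        best = None
        for k, p in enumerate(preds):
            if abs(p - x) <= error_bound and (
                best is None or abs(p - x) < abs(preds[best] - x)
            ):
                best = k
        if best is not None:
            out.append(("model", best))
            recon.append(preds[best])  # decompressor reproduces this exactly
        else:
            # Unpredictable point: SZ would apply its lossy binary
            # representation analysis here; the sketch stores it as-is.
            out.append(("raw", x))
            recon.append(x)
    return out
```

On a linearly increasing sequence, for example, the first points are unpredictable but subsequent points collapse to the linear-fit model code, which is what makes the scheme compress smooth snapshot data well.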
URL: http://www.ipdps.org/ipdps2016/2016_advance_program.html
PDF: http://www.mcs.anl.gov/papers/P5437-1115.pdf