Abstract  Today’s HPC applications produce extremely large amounts of data, so efficient compression is necessary before storing them to parallel file systems. In this paper, we optimize error-bounded HPC data compression by proposing a novel compression method that works very effectively on large-scale HPC data sets. The method starts by linearizing multidimensional snapshot data. The key idea is to fit/predict the successive data points with a best-fit selection among a set of curve-fitting models. Data that can be predicted precisely are replaced by the code of the corresponding curve-fitting model. For the unpredictable data that cannot be approximated by the curve-fitting models, we perform an optimized lossy compression via a binary representation analysis. We evaluate our solution (named SZ) on 13 real-world HPC applications across different scientific domains, and compare it to many state-of-the-art compression methods (including Gzip, FPC, ISABELA, NUMARCK, ZFP, and FPZIP). Experiments show that the compression ratio of SZ ranges from 3.3:1 to 436:1, which is higher than that of the second-best solution, ZFP, by at least 2x and by as much as an order of magnitude in most cases. The compression time of SZ is comparable to that of the other solutions, while its decompression time is 50% to 90% lower than that of the second-best solution. On an extreme-scale use case, experiments show that the compression ratio of SZ exceeds that of ZFP by 80%.
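The best-fit curve-fitting idea can be illustrated with a minimal sketch. This is not the authors' implementation; it assumes three simple models (constant, linear, and quadratic extrapolation from the preceding points) and a hypothetical helper `best_fit_code` that returns the 2-bit model code when a model's prediction falls within the error bound, or signals "unpredictable" data otherwise.

```python
# Sketch (assumption, not the paper's code): predict data[i] from its
# three predecessors with three curve-fitting models; if the best
# prediction is within err_bound, only the model code needs storing.

def best_fit_code(data, i, err_bound):
    """Return (model_code, predicted) for data[i], or (None, None)
    when no model predicts within err_bound ("unpredictable" data)."""
    preceding = data[max(0, i - 3):i]
    if len(preceding) < 3:
        return None, None          # not enough history to fit a curve
    a, b, c = preceding            # the three previous values
    candidates = {
        1: c,                      # constant:  X[i] ~ X[i-1]
        2: 2 * c - b,              # linear:    X[i] ~ 2*X[i-1] - X[i-2]
        3: 3 * c - 3 * b + a,      # quadratic: X[i] ~ 3*X[i-1] - 3*X[i-2] + X[i-3]
    }
    # Pick the model whose prediction is closest to the true value.
    code, pred = min(candidates.items(), key=lambda kv: abs(kv[1] - data[i]))
    if abs(pred - data[i]) <= err_bound:
        return code, pred
    return None, None              # falls back to the lossy binary analysis
```

For example, a linearly growing sequence is captured by model 2, while a quadratic sequence such as successive squares is captured by model 3; values that no model predicts within the bound would be handed to the binary-representation compression stage.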
