DIY: Do-it-Yourself Analysis
An open-source package of scalable building blocks for data movement tailored to the needs of large-scale parallel analysis workloads

Installation on Linux, Mac, and IBM and Cray supercomputers (sorry, no Windows):
Download DIY with the following command:
git clone
then follow the instructions in the README file in the top-level directory. An instruction manual is also available in the doc directory.

Scalable, parallel analysis of data-intensive computational science relies on decomposing the analysis problem across a large number of distributed-memory compute nodes, exchanging data efficiently among those nodes, and transporting data between compute nodes and a parallel storage system. These needs map directly to the main components of DIY: configurable data partitioning, scalable data exchange, and efficient parallel I/O. DIY is a library that helps developers parallelize serial analysis algorithms by providing configurable, high-performance data movement algorithms built on top of MPI. Computational scientists, data analysis researchers, and visualization tool builders can all benefit from these tools.
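To give a flavor of the kind of data movement DIY builds on, here is a minimal sketch of a merge-based reduction, one of the exchange patterns mentioned above. This is an illustrative stand-in, not DIY's actual API: the function name `merge_reduce` and the use of summation as the "merge" operation are assumptions for the example. In each round, a surviving block absorbs the data of a partner block at a growing stride, so a global result accumulates in block 0 after a logarithmic number of rounds.

```cpp
#include <vector>
#include <cstddef>

// Illustrative sketch (not the DIY API): merge-based reduction over
// n blocks. In each round, block i (a multiple of 2*stride) absorbs
// its partner at distance `stride`; after ~log2(n) rounds, index 0
// holds the global result. Here "merge" is simply a sum.
std::vector<double> merge_reduce(std::vector<double> vals) {
    std::size_t n = vals.size();
    for (std::size_t stride = 1; stride < n; stride *= 2)
        for (std::size_t i = 0; i + stride < n; i += 2 * stride)
            vals[i] += vals[i + stride];   // partner's data merged in
    return vals;
}
```

In a real distributed setting each "block" lives on a different MPI process and the merge step is a point-to-point message rather than an in-memory addition, but the round structure is the same.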

DIY structure

The figure above shows the overall structure of DIY and its use in higher-level libraries and applications. The I/O module provides efficient parallel algorithms for reading datasets from storage as well as writing analysis results to storage. The decomposition module supports block-structured and unstructured domain decomposition in 2D, 3D, and 4D, with flexible numbers of data blocks assigned to MPI processes. The communication module supports three configurable communication algorithms: nearest neighbor exchange, merge-based reduction, and swap-based reduction. The utilities module includes tools for creating DIY data types, lossless parallel compression, and parallel sorting.
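The block-structured decomposition described above can be made concrete with a short sketch. The code below is a generic illustration, not DIY's interface: the `Extent` struct and `block_extent` function are invented for the example. It computes the half-open cell range owned by one block of a regular 3D grid, distributing any remainder cells one at a time to the lowest-indexed blocks so block sizes differ by at most one.

```cpp
#include <array>
#include <algorithm>

// Hypothetical helper (not the DIY API): regular 3D block decomposition.
// Given global grid dimensions, the number of blocks per dimension, and
// a block's (i, j, k) coordinates, return its [min, max) cell extent.
struct Extent { std::array<int, 3> min, max; };

Extent block_extent(const std::array<int, 3>& grid,
                    const std::array<int, 3>& nblocks,
                    const std::array<int, 3>& coords) {
    Extent e;
    for (int d = 0; d < 3; ++d) {
        int base = grid[d] / nblocks[d];   // cells every block gets
        int rem  = grid[d] % nblocks[d];   // leftover cells
        // the first `rem` blocks along this dimension get one extra cell
        e.min[d] = coords[d] * base + std::min(coords[d], rem);
        e.max[d] = e.min[d] + base + (coords[d] < rem ? 1 : 0);
    }
    return e;
}
```

For example, a 10-cell axis split into 3 blocks yields extents of 4, 3, and 3 cells; assigning such extents to blocks, and blocks to MPI processes, is the kind of bookkeeping a decomposition module handles.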

DIY performance

DIY was tested in three analysis applications (parallel particle tracing, parallel information theory, and parallel topological analysis) drawn from three science domains: fluid dynamics, astrophysics, and combustion. The results above highlight a 2X performance improvement in particle tracing, 59% strong-scaling efficiency in information theory, and 35% end-to-end strong-scaling efficiency in topological analysis. This also marks the first time the information entropy and Morse-Smale algorithms have been parallelized. More information can be found in these slides and in this paper.

Citing DIY:
Please cite our LDAV'11 paper: pdf, bibtex