Software Elements for Transfer and Analysis of Large-Scale Scientific
Data
As science becomes increasingly data-driven, and as data volumes and velocities
continue to grow, scientific advances in many areas will be feasible only if
critical `big-data' problems are addressed and, even more importantly, if
software tools embedding the solutions are readily available to scientists.
In particular, the major challenge facing current data-intensive scientific
research efforts is that while dataset sizes continue to grow rapidly, network
bandwidth, the memory capacity of parallel machines, memory access speed, and
disk bandwidth are not increasing at the same rate.
Building on recent research at Ohio State University, which includes work on
automatic data virtualization, indexing methods for scientific data, and a
novel bit-vector-based sampling method, the goal of this project is to fully
develop, disseminate, deploy, and support robust software elements addressing
challenges in data transfer and analysis.
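The intuition behind bit-vector-based sampling can be conveyed with a small,
purely illustrative sketch. The snippet below is a rough rendering of the
general technique rather than the project's implementation: it assumes a
bitmap index that stores one boolean vector per value bin over the records of
a dataset, and shows how record offsets can be sampled per bin, preserving the
value distribution, before any raw data is read from storage. All names
(sample_offsets, bitmaps) are hypothetical.

    import numpy as np

    def sample_offsets(bitmaps, fraction, seed=0):
        """Pick roughly `fraction` of the records in each value bin,
        consulting only the bitmap index (no raw data is touched)."""
        rng = np.random.default_rng(seed)
        chosen = []
        for bits in bitmaps:                 # one boolean vector per bin
            offsets = np.flatnonzero(bits)   # record offsets in this bin
            if offsets.size == 0:
                continue
            k = max(1, int(offsets.size * fraction))
            chosen.append(rng.choice(offsets, size=k, replace=False))
        return np.sort(np.concatenate(chosen))

    # Toy index: three value bins over twelve records.
    bitmaps = [np.array([1,0,0,1,0,0,0,0,1,0,0,0], dtype=bool),
               np.array([0,1,0,0,1,1,0,0,0,1,0,0], dtype=bool),
               np.array([0,0,1,0,0,0,1,1,0,0,1,1], dtype=bool)]
    print(sample_offsets(bitmaps, 0.5))      # offsets to fetch from storage

Because the sample is drawn bin by bin, each value range of the data remains
represented in proportion to its frequency, which is what makes index-driven
sampling attractive for exploratory analysis of very large files.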
The prototypes already developed at Ohio State are being extended into two
robust software elements: an extension of GridFTP, the standard data transfer
protocol used in grid environments, that allows users to specify a subset of a
file to be transferred, avoiding unnecessary movement of the entire file; and
Parallel Readers for NetCDF and HDF5 for ParaView and VTK, data subsetting and
sampling tools that perform data selection and sampling at the I/O level, and
in parallel, as sketched below.
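To make the I/O-level idea concrete, the following minimal sketch shows how a
parallel reader might partition and sample an HDF5 variable at read time, so
that each process touches only its own slab of the data. It is an
assumption-laden illustration, not the project's reader: the file name
climate.h5 and dataset name pressure are hypothetical, and h5py with mpi4py
stands in for the project's ParaView/VTK readers.

    # Sketch: each MPI rank independently opens the file and reads a
    # disjoint slab of a 3-D variable, striding to sample at read time.
    from mpi4py import MPI
    import h5py

    comm = MPI.COMM_WORLD
    rank, size = comm.Get_rank(), comm.Get_size()

    with h5py.File("climate.h5", "r") as f:   # hypothetical file
        dset = f["pressure"]                   # hypothetical 3-D dataset
        n = dset.shape[0]
        lo = rank * n // size                  # block-partition the slowest
        hi = (rank + 1) * n // size            # dimension across ranks
        stride = 4                             # keep every 4th plane (~25%)
        # Subsetting and sampling are expressed inside the read itself,
        # so only the selected hyperslab is pulled from disk.
        local = dset[lo:hi:stride, :, :]

Because the selection is expressed as a hyperslab, the HDF5 library reads only
the requested portion of the file, mirroring the principle the partial-file
GridFTP extension applies to wide-area transfers.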
This project impacts a number of scientific areas, namely, any area that
involves large (and growing) datasets and a need for data transfer and/or
visualization. It also contributes to computer science research in `big data',
including scientific (array-based) databases and visualization. A further
contribution is preparing the broader science and engineering research
community for big-data handling and analytics.