Friday September 4, 2009
New Orleans, LA, USA
Call for Papers
High-performance computing simulations and large scientific experiments, such as those in high-energy physics, generate tens of terabytes of data, and data volumes grow each year. Existing systems for storing, managing, and analyzing these data are being pushed to their limits, and new techniques are needed to enable efficient data processing for future simulations and experiments.
The purpose of this workshop is to provide a forum for engineers and scientists to present and discuss their most recent work on the storage, management, and analysis of data for scientific workloads. Emphasis is placed on forward-looking approaches that tackle the challenges of extreme-scale storage or provide better abstractions for scientific workloads.
Topics of interest include, but are not limited to:
- parallel file systems
- scientific databases
- active storage
- scientific I/O middleware
- extreme scale storage
Program Committee
Robert Ross, Argonne National Laboratory
Jacek Becla, SLAC National Accelerator Laboratory
Evan Felix, Pacific Northwest National Laboratory
Gary Grider, Los Alamos National Laboratory
Quincey Koziol, The HDF Group
Wei-Keng Liao, Northwestern University
Carlos Maltzahn, University of California Santa Cruz
Doron Rotem, Lawrence Berkeley National Laboratory
Lee Ward, Sandia National Laboratories
Kesheng Wu, Lawrence Berkeley National Laboratory
