September 30, 2011
Austin, Texas, USA
Call for Papers
High-performance computing simulations and large scientific experiments such as those in high energy physics generate tens of terabytes of data, and these data sizes grow each year. Existing systems for storing, managing, and analyzing data are being pushed to their limits by these applications, and new techniques are necessary to enable efficient data processing for future simulations and experiments.
This workshop will provide a forum for engineers and scientists to present and discuss their most recent work related to the storage, management, and analysis of data for scientific workloads. Emphasis will be placed on forward-looking approaches to tackle the challenges of storage at extreme scale or to provide better abstractions for use in scientific workloads.
Topics of interest include, but are not limited to:
- parallel file systems
- scientific databases
- active storage
- scientific I/O middleware
- extreme scale storage
Organizer:
Rob Latham, Mathematics and Computer Science Division, Argonne National Laboratory
Program Committee:
Robert Latham, Argonne National Laboratory
Quincey Koziol, The HDF Group
Pete Wyckoff, NetApp
Wei-Keng Liao, Northwestern University
Florin Isaila, Universidad Carlos III de Madrid
Katie Antypas, NERSC
Anshu Dubey, Flash Center, University of Chicago
Bradley Settlemyer, Oak Ridge National Laboratory
Avery Ching, Yahoo!