Data analysis and visualization can efficiently extract knowledge from scientific data. As computational science approaches exascale, however, managing the scale and complexity of the visualization process can be daunting.
For example, the DOE/ASCR Workshop on Visual Analysis and Data Exploration at Extreme Scale (Salt Lake City, 2007) concluded that "datasets being produced by experiments and simulations are rapidly outstripping our ability to explore and understand them," and the International Exascale Software Project draft roadmap (Dongarra et al. 2009) agreed that "analysis and visualization will be limiting factors in gaining insight from exascale data."
There is a critical need to assist scientists with intelligent algorithms that save the most important data and extract the knowledge contained therein. The challenges are deciding which data are most essential for analysis and transforming those data into visual representations that rapidly convey insight.
To address these needs, our team is investigating, among other techniques, running analysis and visualization tasks in parallel at very large scale, directly on leadership-class machines. In conjunction with parallel analysis algorithms, we are exploring novel workspaces within scientists' everyday work environments as interfaces to scientific discovery.
The topics below represent some of the research our group tackles. Parallel volume rendering, image compositing, and particle tracing are examples of scaling well-known visualization algorithms to run on supercomputers. Interaction with complex, multivariate, time-varying datasets drives our research in immersive 3D display environments for visualization. The benchmark results summarize the outcomes of these research areas, and an open-source software package called DIY supports much of our research in scalable data analysis.
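To make one of these algorithms concrete, the following is a minimal sketch of binary-swap image compositing, a standard parallel compositing technique, written in plain MPI. It is an illustration only, not DIY's API or our production code; the image size, the constant test color, and the assumptions that the rank count is a power of two and that rank order equals front-to-back visibility order are all simplifications for the example.

```cpp
// Minimal binary-swap image compositing sketch (illustrative, not DIY).
// Assumes: power-of-two rank count, premultiplied-alpha RGBA pixels, and
// rank order equal to front-to-back visibility order.
#include <mpi.h>
#include <cstdio>
#include <vector>

// Blend the partner's pixels into the slice this rank keeps, using the
// front-to-back "over" operator on premultiplied RGBA:
//   out = front + (1 - front_alpha) * back
static void composite(float* kept, const float* other, int npix, bool kept_in_front)
{
    for (int i = 0; i < npix; ++i) {
        const float* f = kept_in_front ? &kept[4 * i] : &other[4 * i];
        const float* b = kept_in_front ? &other[4 * i] : &kept[4 * i];
        float t = 1.0f - f[3];                       // transmittance of the front pixel
        for (int c = 0; c < 4; ++c)
            kept[4 * i + c] = f[c] + t * b[c];
    }
}

int main(int argc, char** argv)
{
    MPI_Init(&argc, &argv);
    int rank, size;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    const int npix = 1 << 20;                        // pixels per full image
    // Placeholder "rendered" image: a constant semi-transparent color per rank.
    std::vector<float> img(4 * npix, 0.25f);

    // Binary swap runs log2(P) rounds. In each round a rank pairs with the
    // rank whose id differs in one bit, exchanges half of its current pixel
    // span, and composites the half it keeps. The lower rank of a pair keeps
    // the first half; since rank order is depth order, its pixels are in front.
    int begin = 0, span = npix;
    std::vector<float> recv;
    for (int bit = 1; bit < size; bit <<= 1) {
        int partner = rank ^ bit;
        int half = span / 2;
        bool in_front = rank < partner;
        int keep = in_front ? begin : begin + half;  // span we keep
        int send = in_front ? begin + half : begin;  // span we ship to the partner

        recv.resize(4 * half);
        MPI_Sendrecv(&img[4 * send], 4 * half, MPI_FLOAT, partner, 0,
                     recv.data(),    4 * half, MPI_FLOAT, partner, 0,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);

        composite(&img[4 * keep], recv.data(), half, in_front);
        begin = keep;
        span = half;
    }

    // Each rank now owns a fully composited slice img[4*begin .. 4*(begin+span));
    // a final gather with per-rank displacements would assemble the full image.
    printf("rank %d owns pixels [%d, %d)\n", rank, begin, begin + span);

    MPI_Finalize();
    return 0;
}
```

The appeal of binary swap is that every process stays busy through all log2(P) rounds while the data exchanged shrinks by half each round, which is why it scales far better than gathering full images to a single compositing node.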