Argonne National Laboratory

Science Highlights

Darshan has been integrated into the web portal at NERSC to provide feedback to scientists about application behavior. This screenshot shows an example of the I/O summary data available to users as soon as their job completes. It indicates the amount of data accessed, the percentage of runtime consumed by I/O activity, and the access sizes used by the application.
Characterizing I/O performance on leadership-class systems

Researchers at Argonne National Laboratory have developed Darshan, a scalable I/O characterization tool that collects I/O access pattern information from production HPC applications.

May 30, 2013
This image shows that the "cold cache effect" (whereby the first run of a job takes longer than subsequent trials of the same job; 35 trials in this test) exists and that run times stabilize after the second trial. The new analysis reported here, however, shows that runtime differences attributable to the cold cache effect are not statistically significant relative to the runtime differences produced by applying code optimization strategies.
Dynamic trees can aid in performance tuning of scientific codes

Researchers from the University of Chicago Booth School of Business, together with Stefan Wild, an assistant computational mathematician in the Mathematics and Computer Science Division at Argonne, have recently demonstrated how the dynamic tree model can support both variable selection and sensitivity analysis of inputs.

June 10, 2013
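The significance comparison described in the caption above can be sketched with a Welch-style t statistic. The timings below are synthetic placeholders, not measurements from the study, and the two-sample setup is an illustrative simplification.

```python
# Welch-style t statistic comparing first-trial ("cold cache") run times
# against stabilized later trials. All timings are made-up illustrative
# values; a |t| well above ~2 suggests the difference is detectable.
import math
import statistics

def welch_t(a, b):
    """Welch's t statistic for two independent samples."""
    va, vb = statistics.variance(a), statistics.variance(b)
    return (statistics.mean(a) - statistics.mean(b)) / math.sqrt(va / len(a) + vb / len(b))

first_trials = [12.9, 13.1, 12.7, 13.3, 12.8]   # first run of each replication (s)
later_trials = [11.9, 12.0, 12.1, 11.8, 12.2]   # runs after times stabilize (s)

print(f"t = {welch_t(first_trials, later_trials):.2f}")
```

The highlight's point is that even when such a first-run difference is detectable, it can be small relative to the runtime changes produced by code optimization strategies.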
Dynamic trees can aid in performance tuning of scientific codes

Researchers have developed a data analysis tool that uses dynamic trees to rapidly determine which software and hardware tuning parameters best explain differences in code performance.

July 16, 2013
MCS division researchers help develop new sequencing analysis service

The Argonne/University of Chicago Computation Institute has announced a new sequencing analysis service called Globus Genomics.

July 16, 2013
Argo exascale architecture
Designing a new operating system for exascale architectures

The Argo project will design and develop a platform-neutral prototype of an exascale operating system and runtime software.

August 7, 2013
Using dual decomposition for solving problems involving data uncertainty

Researchers have developed a new parallel formulation based on dual decomposition for solving problems that require decisions under uncertainty.

August 14, 2013
OOI combines data from multiple sensing devices to understand and reason about the ocean. Courtesy of OOI Regional Scale Nodes program and the Center for Environmental Visualization, University of Washington
Toward Observatory Cloud Computing

New inexpensive and reliable sensing devices, such as weather cameras and floats, are allowing us to monitor a variety of phenomena in real time on an unprecedented scale.

August 27, 2013
Transferring data rapidly with Mercury

Mercury is a remote procedure call interface for high-performance computing.

August 28, 2013
Fig. 1. Binary representation of the 64-bit floating-point data of a numerical simulation of the brain. Each value is expressed as a column of 64 small squares: white for a 1 and black for a 0. The leading bits show high regularity (upper part of the figure), while the less significant bits show high irregularity (lower part of the figure).
Using Masks to Improve Compression of Big Data in Scientific Applications

Masks can be used to achieve high compression ratios of big data.

September 23, 2013
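The regularity pattern in Fig. 1 hints at why masking helps: grouping bits of equal significance together lets a generic compressor exploit the regular leading bits. A minimal sketch of that idea, using per-byte planes as a crude stand-in for the masks described above (illustrative only, not the authors' algorithm):

```python
# Split float64 values into per-byte "planes" so that bytes of equal
# significance are stored contiguously, then compress each plane
# separately. For smooth data the leading planes are nearly constant
# and compress dramatically; trailing planes remain close to
# incompressible, mirroring the regular/irregular split in Fig. 1.
import struct
import zlib

def byte_planes(values):
    """Return 8 byte strings, one per byte position of big-endian float64s."""
    raw = b"".join(struct.pack(">d", v) for v in values)
    return [raw[i::8] for i in range(8)]

data = [1.0 + 1e-6 * i for i in range(10_000)]   # smooth synthetic signal
sizes = [len(zlib.compress(plane)) for plane in byte_planes(data)]
print(sizes)   # compressed size per plane, most significant byte first
```

On data like this, the plane holding the sign and high exponent bits collapses to a few dozen bytes, while the lowest mantissa plane barely shrinks at all.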
Simulation of the granular bed with rolling friction. Contact forces are displayed as cylinders whose orientation and size indicate the forces' direction and magnitude.
Modeling rolling friction – a real drag on a rolling body

Rolling friction at the interface between moving parts has long attracted the interest of researchers in applied mechanics. Among the challenges facing model developers, however, has been the larg

September 23, 2013