LANS Informal Seminar
"Student Argonne Summer Symposium - SASSy 2011"
DATE: August 2, 2011
TIME: 14:00:00 - 17:00:00
SPEAKER: 10 LANS Summer Students
LOCATION: Building 240, Conference Center 1416, Argonne National Laboratory
2:00- 2:15 - Hayes Stripling
Uncertainty Quantification for Nuclide Depletion Calculation
2:15- 2:30 - Drew Wicke
Identifying Active Variables to Improve the Performance of Operator Overloading Automatic Differentiation
2:30- 2:45 - Karim Ahmed
Phase-Field Modeling of Void Kinetics in Irradiated Metals
2:45- 3:00 - Break
3:00- 3:15 - Jayash Koshal
Heuristics for Mixed Integer Non-Linear Programs (MINLPs)
3:15- 3:30 - Alex Stovall
Constructing the building blocks of a loop-less code generator
3:30- 3:45 - Edward Nash
The Cost of Using Loop Control
3:45- 4:00 - Break
4:00- 4:15 - Alexandru Cioaca
Improving the accuracy of wind energy prediction
4:15- 4:30 - Brett Robbins
Parallel Newton's Method for Dynamic Simulation of Electrical Networks
4:30- 4:45 - Zhu Wang
Approximating the output of large models with uncertainty using model reduction and Kriging
4:45- 5:00 - Yongjia Song
Solving sample average approximation for stochastic programs
Title: Uncertainty Quantification for Nuclide Depletion Calculation
This summer I worked to support CESAR, the recently-funded exascale co-design center focused on the simulation of nuclear reactors. One major task of the center is to develop efficient software/algorithms for quantification of error and uncertainty at scale, which is especially challenging in the high-dimensional, multi-scale calculations required for reactor simulations. We first developed an asymptotic model for the global time-discretization error in neutron/nuclide depletion calculations. We then developed and tested an adjoint framework for multi-physics systems governed by differential-algebraic equations. This framework was designed to be general and abstract in order to scale and be flexible in HPC environments.
Title: Identifying Active Variables to Improve the Performance of Operator Overloading Automatic Differentiation
Automatic Differentiation (AD) is a means of computing the derivative of a function within a computer program. AD can be performed using the operator-overloading approach, which utilizes features of the programming language to alter the meaning of mathematical operators so that they also compute derivatives. Operator overloading as a means of performing AD allows for maintainable code; however, computation speed is sacrificed. One method to increase the speed is to perform derivative computation for only active variables. Active variables depend on the value of an input variable and are used in the computation of an output variable, and are therefore necessary for the calculation of the derivative. All other variables are considered inactive and are not needed for the derivative calculation. Activity analysis is a technique used to identify active variables in the input source code. The goal of this research was to use activity analysis to improve the performance of derivative calculations using Sacado, a C++ implementation of the operator-overloading approach to AD. The tool created to accomplish this combines the activity analysis of the source-code analysis toolkit OpenAnalysis with the source-to-source transformation tool ROSE. The tool was tested to ensure proper identification of all active variables using test cases that exercise common C and C++ constructs.
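The active/inactive distinction can be illustrated with a minimal forward-mode operator-overloading sketch. This is a toy stand-in for Sacado, not code from the project; the `Dual` class and the function `f` are hypothetical illustrations:

```python
class Dual:
    """Forward-mode AD value: carries f and df/dx via operator overloading."""
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def f(x):
    # x is "active": seeded with derivative 1.  The constant 3.0 is
    # inactive -- no derivative bookkeeping would be needed for it.
    return x * x + 3.0 * x

y = f(Dual(2.0, 1.0))        # seed dx/dx = 1
print(y.val, y.der)          # f(2) = 10, f'(2) = 2x + 3 = 7
```

Every overloaded operation here pays the derivative-propagation cost; activity analysis lets a tool replace `Dual` with a plain `float` wherever the derivative is provably unused.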
Title: Phase-Field Modeling of Void Kinetics in Irradiated Metals
A phase-field model was developed to simulate void kinetics and interactions in irradiated metals. The model captures the growth and shrinkage of voids due to the supersaturated or sub-saturated vacancy content that radiation produces in the solid matrix. The model is spatially resolved, meaning that it accounts for the inhomogeneity of the domain; this makes it more accurate than rate-theory models, since it can capture gradients in external fields such as temperature and stress.
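For intuition, the basic phase-field mechanics can be sketched with a one-dimensional Allen-Cahn evolution. This is a generic toy, not the talk's void-kinetics model; the parameters and the double-well potential are illustrative assumptions:

```python
# Minimal 1D Allen-Cahn phase-field sketch (illustrative toy).
# phi = -1 marks one phase (e.g. solid matrix), phi = +1 the other (e.g. void).
N, steps = 64, 200
dx = 1.0 / N
eps = 0.05            # assumed interface-width parameter
dt = 0.01             # explicit-Euler step, below the stability limit

phi = [-1.0 if i < N // 2 else 1.0 for i in range(N)]   # sharp initial interface
for _ in range(steps):
    new = phi[:]
    for i in range(N):
        left = phi[i - 1] if i > 0 else phi[0]          # zero-flux boundaries
        right = phi[i + 1] if i < N - 1 else phi[-1]
        lap = (left - 2.0 * phi[i] + right) / (dx * dx)
        # Allen-Cahn: phi_t = eps^2 * phi_xx - W'(phi), with W'(phi) = phi^3 - phi
        new[i] = phi[i] + dt * (eps * eps * lap - (phi[i] ** 3 - phi[i]))
    phi = new
# The sharp step relaxes to a smooth tanh-like interface profile.
```

A spatially resolved model of this kind tracks the field everywhere, which is what allows gradients in external conditions to be imposed directly on the domain.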
Title: Heuristics for Mixed Integer Non-Linear Programs (MINLPs)
In this talk, we describe heuristics for global optimization and MINLP. These problems may have nonlinear, nonconvex functions in the objective and constraints. A branch-and-bound algorithm is usually used to solve them. While it finds bounds on the optimal value quickly, finding a good feasible solution takes longer. A good feasible point from a heuristic speeds up the branch-and-bound algorithm by pruning the search tree. It can also serve as a "good enough" solution if the algorithm is terminated before completion.
We use a Multistart heuristic for global optimization of nonconvex continuous NLPs: we iteratively call an NLP solver from different starting points selected on the basis of previous solutions and their objective values. A Diving heuristic works by changing the bounds of some of the fractional variables and re-solving the NLP relaxation; we implement different methods for selecting the variables whose bounds are changed and for backtracking when infeasibility is detected. The Feasibility Pump heuristic generates a sequence of points by alternately solving an NLP or LP relaxation and rounding the fractional solution. Numerical results are presented along with future directions.
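The Multistart idea can be sketched in a few lines. The toy objective, the step size, and the crude gradient-descent "solver" below are all assumptions for the sketch, standing in for a real NLP solver:

```python
import random
random.seed(0)

def f(x):
    # nonconvex toy objective with two local minima (global one near x = -2.03)
    return (x * x - 4.0) ** 2 + x

def grad(x):
    return 4.0 * x * (x * x - 4.0) + 1.0

def local_search(x, step=0.01, iters=500):
    # crude fixed-step gradient descent stands in for the NLP solver
    for _ in range(iters):
        x -= step * grad(x)
    return x

best_x, best_f = None, float("inf")
for _ in range(20):                          # Multistart: many random starts
    x = local_search(random.uniform(-3.0, 3.0))
    if f(x) < best_f:
        best_x, best_f = x, f(x)
print(best_x, best_f)
```

Starts landing in different basins of attraction reach different local minima; keeping the best solution found is what gives Multistart its global flavor. A practical implementation would also bias new starting points away from already-explored regions, as the abstract describes.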
Title: Constructing the building blocks of a loop-less code generator
Small, dense, rectangular matrix-matrix multiplication is used extensively in the computation kernels of DOE simulation applications such as MADNESS and NEK5000. A loop-less code generator has been developed that can produce instruction sequences that compute these kernels at peak performance on a targeted computer platform. This research will show how to design and code macros that use the fewest instructions while maximizing the use of computing resources at the processor level (instruction scheduling, cache memory, and xmm registers).
These building blocks can be used to construct the loop-less code generator, so that there is no more tedious and time-consuming assembly programming to deal with, and peak-performance code can be achieved without much effort.
Title: The Cost of Using Loop Control
In both high-level and low-level programming languages, loops are used to group instructions together and execute them repeatedly. However, the use of loops incurs efficiency costs. This study reviews current research on reducing these costs by using loop-less code. Comparisons are made between the performance of similar code with and without loop control when computing small, dense, rectangular matrix-matrix multiplications on targeted platforms (AMD64 processor-based systems). Metrics such as instruction counts, stalls, cycles, cache accesses, and conditional branches are used to compare the efficiency of code without loops to that of code with loops. Based on our research, we have observed that compilers generate longer instruction sequences for looped code than for loop-less code, which increases the number of stalls and thereby reduces efficiency. We believe this phenomenon holds for any well-defined computational task.
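The looped vs. loop-less contrast can be sketched at a high level. This is a Python toy, not the AMD64 assembly studied in the talk; the machine-level measurements concern instruction counts, branches, and stalls, which an interpreted language only mirrors loosely:

```python
def matmul2_loop(A, B):
    # conventional looped 2x2 multiply: loop counters, bound tests, branches
    n = 2
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            s = 0.0
            for k in range(n):
                s += A[i][k] * B[k][j]
            C[i][j] = s
    return C

def matmul2_unrolled(A, B):
    # fully unrolled ("loop-less") version: straight-line code only,
    # no counters, no comparisons, no conditional branches
    (a00, a01), (a10, a11) = A
    (b00, b01), (b10, b11) = B
    return [[a00 * b00 + a01 * b10, a00 * b01 + a01 * b11],
            [a10 * b00 + a11 * b10, a10 * b01 + a11 * b11]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
assert matmul2_loop(A, B) == matmul2_unrolled(A, B)
```

For small, fixed-size kernels, unrolling removes the loop-control overhead entirely; timing the two versions (e.g. with `timeit`) gives a rough, interpreter-level analogue of the hardware-counter comparisons in the study.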
Title: Improving the accuracy of wind energy prediction
Integrating wind farms into an intelligent electrical grid can be achieved by incorporating weather forecasts into the unit commitment and energy dispatch problems. These forecasts are produced by numerical models that must be run from accurate initial conditions at high spatial resolution. We are currently working on an adjoint sensitivity method that can indicate which areas to target for observation and forecasting.
Title: Parallel Newton's Method for Dynamic Simulation of Electrical Networks
Traditional approaches to dynamic simulation of an electrical network use an implicit method for the numerical integration and Newton's method to solve the resulting system of nonlinear equations sequentially in time. Although effective, this strategy does not take advantage of computer systems with a large number of cores, which can apply parallel processing techniques to reduce the sequential work required for the simulation. A proposed strategy that computes every time step simultaneously will be presented and demonstrated with a numerical example.
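A minimal sketch of the all-at-once idea, assuming a toy scalar ODE y' = -y^3 discretized with backward Euler. The equation, step size, and forward-substitution linear solve are illustrative choices, not the talk's electrical-network model:

```python
def residual(y, y0, dt):
    # stacked backward-Euler residuals for the toy ODE y' = -y^3:
    #   R_n = y_n - y_{n-1} + dt * y_n^3 = 0 for every time step n
    R, prev = [], y0
    for yn in y:
        R.append(yn - prev + dt * yn ** 3)
        prev = yn
    return R

def newton_all_at_once(y0=1.0, dt=0.1, N=10, iters=30, tol=1e-12):
    y = [y0] * N                     # guess every time step simultaneously
    for _ in range(iters):
        R = residual(y, y0, dt)
        if max(abs(r) for r in R) < tol:
            break
        # The stacked Jacobian is lower bidiagonal:
        #   dR_n/dy_n = 1 + 3*dt*y_n^2,   dR_n/dy_{n-1} = -1,
        # so J d = -R is solved here by forward substitution.  (The talk's
        # parallel strategy targets this coupled solve; it is written
        # serially here for clarity.)
        d = [0.0] * N
        for n in range(N):
            prev_d = d[n - 1] if n > 0 else 0.0
            d[n] = (-R[n] + prev_d) / (1.0 + 3.0 * dt * y[n] ** 2)
        y = [yn + dn for yn, dn in zip(y, d)]
    return y

y = newton_all_at_once()
```

Instead of N small Newton solves performed one after another, a single Newton iteration updates the whole trajectory; each iteration's work can then be distributed across cores.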
Title: Approximating the output of large models with uncertainty using model reduction and Kriging
In this talk, we introduce a model-reduction technique for uncertainty quantification in large simulation models. A computationally cheap reduced-order model (ROM) is employed to replace expensive sampling of the full model. However, a limitation of the ROM is that it is empirical: its quality cannot be satisfactorily described a priori. We therefore use Gaussian-process-based Kriging to correct the outputs of the reduced-order model. The method is supported by numerical experiments.
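A small sketch of the ROM-plus-Kriging correction, with toy stand-ins for both models. The functions `full` and `rom`, the kernel length scale, and the sample points are all assumptions for illustration:

```python
import math

def solve(A, b):
    # naive Gaussian elimination with partial pivoting (small systems only)
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            fac = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= fac * M[i][c]
    x = [0.0] * n
    for i in reversed(range(n)):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

def rbf(a, b, ell=0.3):
    # squared-exponential Kriging kernel (length scale is an assumption)
    return math.exp(-(a - b) ** 2 / (2.0 * ell * ell))

full = lambda x: math.sin(6.0 * x) + 0.3 * x   # "expensive" full model (toy)
rom = lambda x: 0.3 * x                        # crude reduced-order surrogate

xs = [0.0, 0.25, 0.5, 0.75, 1.0]               # few full-model evaluations
res = [full(x) - rom(x) for x in xs]           # ROM error at training points
K = [[rbf(a, b) + (1e-10 if i == j else 0.0) for j, b in enumerate(xs)]
     for i, a in enumerate(xs)]
alpha = solve(K, res)                          # fit the GP to the ROM error

def corrected(x):
    # ROM output plus the Kriging prediction of the ROM's error
    return rom(x) + sum(a * rbf(x, xi) for a, xi in zip(alpha, xs))
```

The Kriging model interpolates the ROM error exactly at the few points where the full model was sampled, and predicts a correction in between, so the corrected surrogate inherits the ROM's low cost while tracking the full model more closely.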
Title: Solving sample average approximation for stochastic programs
We will discuss solution methods based on the sample average approximation (SAA) for stochastic programs. Two types of stochastic programs are considered in the talk: two-stage problems with recourse and chance-constrained problems. Decomposition algorithms are used to solve these problems within the SAA scheme.
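A minimal SAA sketch for a two-stage problem with recourse, using a toy newsvendor instance. The costs, demand distribution, and brute-force candidate search below are illustrative assumptions, standing in for the decomposition algorithms of the talk:

```python
import random
random.seed(1)

# Toy two-stage newsvendor: the first stage orders quantity x; the second
# stage (recourse) sells min(x, demand) after demand is revealed.
c, r = 1.0, 2.0                                  # unit cost, unit revenue (toy)
scenarios = [random.gauss(100.0, 15.0) for _ in range(300)]   # SAA samples

def saa_cost(x):
    # sample-average approximation of the expected two-stage cost
    return sum(c * x - r * min(x, d) for d in scenarios) / len(scenarios)

# Brute-force search over the scenario values (an optimal order quantity
# always coincides with some sampled demand in this problem).
best_x = min(scenarios, key=saa_cost)
# Critical-fractile theory puts the optimum at the (r - c)/r = 0.5 quantile
# of demand, so best_x should land near the sample median (about 100).
```

Replacing the expectation with a sample average turns the stochastic program into a deterministic one over the sampled scenarios; for large-scale problems, that deterministic program is what the decomposition algorithms attack.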
Please send questions or suggestions to Jeffrey Larson: jmlarson at anl dot gov.