Seminar Details:

LANS Informal Seminar
"SASSy: Student Argonne Summer Symposium"

DATE: August 4, 2010

TIME: 13:30:00 - 17:00:00
SPEAKER: LANS Summer Students
LOCATION: Building 240, Conference Center, Rm 1407 (changed from Rm 1404), Argonne National Laboratory

The symposium features 11 talks by summer students at LANS.

1:30- 1:45 Jorge Castaton
Computing Projections
1:45- 2:00 Zhu Tao
Filter Methods for Augmented Lagrangian
2:00- 2:15 Anirban Chatterjee
Improving Clustering of High-Dimensional Data through Algebraic Distances
2:15- 2:30 Break
2:30- 2:45 Zhu Wang
Dimensionality reduction for uncertainty quantification of nuclear engineering models
2:45- 3:00 Alexander Stovall and Pierre Robinson
Binary Optimization and Empirical Instruction Scheduling for Autotuning
3:00- 3:15 Grantland Gray and Corie Wilson
A Classified Method Based on Support Vector Machine for Network Intrusion Detection
3:15- 3:30 Break
3:30- 3:45 Mihai Alexe
Monty Python and the Holy Grail of fast uncertainty quantification using magic tricks and automatic differentiation.
3:45- 4:00 Jing Fu
Parallel I/O approaches for checkpointing on massively parallel partitioned solvers
4:00- 4:15 Shankar Prasad Sastry
Preconditioner for Optimization in Power Flow Systems
4:15- 4:25 Break
4:25- 4:40 Chia-chun Tsai
Power Grid Models In Application
4:40- 4:55 Brian Haines
Numerical homogenization approach for Stokesian suspensions


Jorge Castaton
Title: Computing Projections
In this talk we discuss algorithms for computing projections onto convex sets. These projections arise in a wide variety of scientific applications. Specifically, we use a semi-smooth approach, considering both a semi-smooth Newton method and a matrix-free first-order method. Numerical tests indicate that the first-order method performs better as the dimensions of the problem grow, and that a diagonal preconditioner further improves it. These findings are particularly important because matrix-free methods require less memory than interior point methods.
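As one concrete, classical instance of such a projection (not the talk's semi-smooth methods), the Euclidean projection onto the probability simplex can be computed exactly by sorting; the sketch below illustrates what "computing a projection onto a convex set" means:

```python
def project_simplex(y):
    """Euclidean projection of y onto {x : x >= 0, sum(x) = 1}.

    Classical sort-based algorithm, shown only as a simple example of a
    projection onto a convex set; not the methods of the talk.
    """
    u = sorted(y, reverse=True)
    cssv, theta = 0.0, 0.0
    for i, ui in enumerate(u, start=1):
        cssv += ui
        t = (cssv - 1.0) / i
        if ui - t > 0:        # component still active at this threshold
            theta = t
    return [max(yi - theta, 0.0) for yi in y]
```

A point already on the simplex projects to itself; the matrix-free first-order methods discussed in the talk become attractive when the sets are more complicated and the dimensions much larger.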

Mihai Alexe
Title: Monty Python and the Holy Grail of fast uncertainty quantification using magic tricks and automatic differentiation.
A posteriori uncertainty quantification may benefit from the introduction of derivative information for the outputs or parameters of interest. Automatic differentiation (AD) is a natural choice for computing derivatives of program outputs with respect to the control variables, and it can do so at a significantly lower cost than the naive finite difference approach. The talk gives a quick introduction to the principles of AD. We then describe the results obtained with MCS's own OpenAD tool for the nuclear reactor safety simulation code MATWS, developed at Argonne's Nuclear Engineering division. Efficient taping and checkpointing of intermediate program variables enable the fast computation of gradients using the reverse mode of AD.
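To make the reverse mode concrete, here is a toy tape-based AD sketch (supporting only addition and multiplication); OpenAD applies the same principle to full source codes with far more machinery:

```python
class Var:
    """One tape node: a value plus (parent, local-derivative) pairs."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0
    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])
    def __mul__(self, other):
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

def backward(out):
    """Reverse sweep: push adjoint contributions back through the tape."""
    stack = [(out, 1.0)]
    while stack:
        v, adj = stack.pop()
        v.grad += adj
        for parent, local in v.parents:
            stack.append((parent, adj * local))
```

For f(x, y) = x*y + x at x = 3, y = 4, one reverse sweep recovers df/dx = 5 and df/dy = 3 simultaneously, which is the cost advantage over one finite-difference run per input.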

Zhu Tao
Title: Filter Methods for Augmented Lagrangian
The bound-constrained augmented Lagrangian (BCL) method is an appealing way to solve large-scale nonlinearly constrained optimization problems, because the bound-constrained subproblems can usually be solved efficiently and allow large-scale implementation. Unfortunately, classic BCL methods suffer from several deficiencies: 1) progress toward the solution is rigidly prescribed by two forcing sequences that control the feasibility and optimality of the subproblems; 2) convergence near regular minimizers can be slow; and 3) the penalty update may result in slow convergence in early iterations and difficult subproblems in later ones. In this project, we investigated a new class of filter BCL methods for nonlinear optimization that overcome these deficiencies. First, the forcing sequences are replaced by a two-dimensional filter, which is more flexible in accepting trial steps. Second, an equality-constrained quadratic programming (EQP) phase is added to accelerate convergence. Third, a penalty estimate function is used to ensure convergence of the subproblems. Numerical experiments on a subset of the CUTEr test problems demonstrate the effectiveness of this approach.
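For orientation, a bare-bones classical BCL loop (no filter, no EQP phase) on the toy problem min x1^2 + x2^2 s.t. x1 + x2 = 1, x >= 0 might look like the following sketch; the rigid multiplier and penalty updates shown here are exactly what the filter variant is designed to relax:

```python
def bcl_toy():
    """Classical (non-filter) BCL sketch for:
       min x1^2 + x2^2  s.t.  x1 + x2 = 1,  x >= 0.
    Bound-constrained subproblems are solved by projected gradient steps."""
    lam, rho = 0.0, 1.0
    x = [0.0, 0.0]
    for _ in range(20):
        # Solve the bound-constrained augmented Lagrangian subproblem.
        lr = 1.0 / (2.0 + 2.0 * rho)          # step ~ 1/Lipschitz constant
        for _ in range(500):
            c = x[0] + x[1] - 1.0
            g = [2.0 * x[i] + lam + rho * c for i in range(2)]
            x = [max(x[i] - lr * g[i], 0.0) for i in range(2)]  # project on bounds
        lam += rho * (x[0] + x[1] - 1.0)      # first-order multiplier update
        rho *= 2.0                            # monotone penalty increase
    return x, lam
```

The iterates converge to the solution (0.5, 0.5) with multiplier -1; on hard problems, the doubling penalty makes the late subproblems ill-conditioned, which is deficiency 3) above.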

Anirban Chatterjee
Title: Improving Clustering of High-Dimensional Data through Algebraic Distances
Measuring the connection strength between two entities in a high-dimensional space is one of the most vital concerns in data mining. In this project, we adapt a recently introduced measure on simple graphs and hypergraphs, the algebraic distance, to improve unsupervised classification of high-dimensional text data. In particular, we develop a multilevel approach to the noise-elimination problem for the high-dimensional discrete systems obtained in text mining.
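The algebraic distance itself is simple to state: relax a few random test vectors by Jacobi over-relaxation on the graph, then measure how far the endpoints of each edge remain apart. A small sketch on a plain graph (the hypergraph and multilevel machinery of the project are not shown):

```python
import random

def algebraic_distance(adj, sweeps=20, omega=0.5, nvecs=5, seed=0):
    """Return rho(i, j) = max over relaxed random test vectors of |x_i - x_j|.
    adj maps each node to its list of neighbours."""
    rng = random.Random(seed)
    vecs = []
    for _ in range(nvecs):
        x = {v: rng.random() for v in adj}
        for _ in range(sweeps):            # Jacobi over-relaxation sweeps
            x = {v: (1 - omega) * x[v]
                    + omega * sum(x[u] for u in adj[v]) / len(adj[v])
                 for v in adj}
        vecs.append(x)
    return lambda i, j: max(abs(x[i] - x[j]) for x in vecs)
```

On two triangles joined by a single bridge edge, the bridge endpoints stay much farther apart than endpoints inside a triangle, which is what makes the measure useful for clustering.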

Jing Fu
Title: Parallel I/O approaches for checkpointing on massively parallel partitioned solvers
We present several parallel I/O approaches and compare them with the traditional POSIX I/O strategy of one file per processor. We tackle the problem from several angles, including the choice of I/O library, reducing access concurrency, and separating the I/O communicator from the computation communicator. These approaches are especially useful for checkpoint-restart in large-scale parallel partitioned solvers. We applied them to two applications, NEKCEM@MCS and PHASTA@RPI, and we analyze their performance (I/O rates and reduction in application run time) on Intrepid at different scales.
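The simplest concurrency-reduction idea can be sketched without MPI at all: group P writers onto F shared checkpoint files, a middle ground between one file per processor and a single shared file. The file-name pattern and grouping rule below are illustrative only, not the actual NEKCEM or PHASTA layout:

```python
def checkpoint_layout(nranks, nfiles, block_bytes):
    """Map each rank to a (file name, byte offset) for its fixed-size block.
    Illustrative grouping: contiguous ranks share one aggregated file."""
    per_file = (nranks + nfiles - 1) // nfiles   # ranks per aggregated file
    layout = []
    for rank in range(nranks):
        group = rank // per_file
        offset = (rank % per_file) * block_bytes
        layout.append((rank, "ckpt.%04d" % group, offset))
    return layout
```

With 8 ranks and 2 files, only two files hit the file system instead of eight, which is the kind of access-concurrency reduction the abstract refers to.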

Shankar Prasad Sastry
Title: Preconditioner for Optimization in Power Flow Systems
In this talk, we explore the use of preconditioners in power flow system optimization. In a power flow system, we minimize the cost of electricity production subject to demand, network capacity, and other physical constraints. Interior point methods are used for such constrained optimization problems: the constraints are incorporated into the objective function so that violating them is very costly, and the weight on the constraints is lowered after every iteration, so that when the weights are small enough we optimize the true objective. The number of constraints grows with the size of the network, so every iteration requires solving a huge linear system that determines the step toward the optimal solution. In the current implementation of the MATPOWER solver, this linear system is solved by LU decomposition. However, the system is sparse, and iterative techniques with preconditioners can solve it efficiently. In this talk, we introduce power systems, interior point methods, linear solvers, and preconditioners, and we propose a preconditioner applicable to this problem.
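As a generic illustration of the iterative alternative to LU (not MATPOWER's solver or the preconditioner proposed in the talk), here is a Jacobi-preconditioned conjugate gradient sketch for a small symmetric positive definite system:

```python
def pcg(A, b, tol=1e-10, maxit=200):
    """Jacobi (diagonal) preconditioned conjugate gradients for SPD A,
    written dense and in pure Python for clarity; real solvers use
    sparse storage and better preconditioners."""
    n = len(b)
    Minv = [1.0 / A[i][i] for i in range(n)]      # Jacobi preconditioner
    x = [0.0] * n
    r = b[:]                                      # residual b - A x, x = 0
    z = [Minv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [Minv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

For sparse systems only matrix-vector products are needed, so the cost per iteration stays low where a dense LU factorization would not.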

Alexander Stovall and Pierre Robinson
Title: Binary Optimization and Empirical Instruction Scheduling for Autotuning
Empirical performance tuning has emerged as an effective means of improving the performance of application programs. In this presentation, we discuss our experience using low-level optimization techniques to improve application performance on an AMD Phenom processor. We also explored a new approach in which a given instruction schedule is altered to improve performance based solely on performance measurements. This approach does not suffer from the modeling errors found in techniques such as integer linear programming solvers.

Grantland Gray and Corie Wilson
Title: A Classified Method Based on Support Vector Machine for Network Intrusion Detection
Intrusion detection is a critical requirement for enterprise network protection: one of its necessary tasks is to protect the computers responsible for the infrastructure's operational control, and an effective intrusion detection system (IDS) is essential for ensuring network security. Network-based attacks make it difficult for legitimate users to access network services by sabotaging network resources, sending large amounts of network traffic, exploiting well-known flaws in networking services, and overwhelming network hosts. Intrusion detection attempts to detect computer attacks by examining various data records observed in processes on the network, and it splits into two groups: anomaly detection systems, which search for malicious behavior that deviates from established normal patterns, and misuse detection systems, which identify intrusions that match known attack scenarios. In this research effort, we focus on anomaly detection, and our proposed strategy is an efficient and reliable solution for detecting network-based anomalies. We employ a supervised machine learning method, Support Vector Machines (SVM), to classify traffic as normal or abnormal, using LIBSVM and LIBLINEAR as support vector machine tools. These tools provide an effective mechanism for cross-validation, parameter selection, and training on large datasets. Performance evaluation of the proposed method is conducted on diverse publicly available network packet traces. Experimental results show that our method achieves high average detection rates and low average false positive rates in anomaly detection.
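To show the classification step in miniature (with a hand-rolled subgradient solver standing in for LIBSVM/LIBLINEAR, and made-up toy "traffic" feature points), a linear soft-margin SVM can be trained as follows:

```python
def train_linear_svm(data, labels, lam=0.01, lr=0.1, epochs=500):
    """Full-batch subgradient descent on the hinge-loss SVM objective
       (lam/2)*||w||^2 + (1/n) * sum(max(0, 1 - y*(w.x + b)))."""
    d, n = len(data[0]), len(data)
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw = [lam * wj for wj in w]           # gradient of the regularizer
        gb = 0.0
        for x, y in zip(data, labels):
            if y * (sum(w[j] * x[j] for j in range(d)) + b) < 1.0:
                for j in range(d):            # hinge-loss subgradient
                    gw[j] -= y * x[j] / n
                gb -= y / n
        w = [w[j] - lr * gw[j] for j in range(d)]
        b -= lr * gb
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0.0 else -1
```

Labeling "normal" traffic -1 and "anomalous" traffic +1, the learned hyperplane separates the two clusters; the real work in the project lies in feature extraction, parameter selection, and scale, which the tools handle.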

Chia-chun Tsai
Title: Power Grid Models In Application
This project investigates three power system problems: (1) Optimal Power Flow (OPF), (2) Transmission Network Expansion (TNE), and (3) Optimal Transmission Switching (OTS). Among these models, we concentrate on building efficient reformulation techniques for the non-convex power flow equations, such as Kirchhoff's law in the AC problem and Ohm's law in the DC problem. We also investigate the structure and formulation of these models and identify common mathematical components. The DC models in the TNE and OTS problems lead to non-convex nonlinear mixed-integer optimization problems; we can apply the big-M method to obtain a linear mixed-integer model, or a complementarity method to obtain an alternative nonlinear model. Our goal is to develop a number of case studies that illustrate the computational and mathematical challenges and that can be used to benchmark new global optimization solvers.
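For a single switchable DC line, the big-M linearization can be checked by hand; the constants below (B, Fmax, M) are illustrative, not from any of the project's test cases:

```python
def bigM_feasible(f, theta_i, theta_j, z, B=1.0, Fmax=1.0, M=10.0):
    """Check the big-M constraints for one switchable DC line:
       z = 1 (in service): f = B*(theta_i - theta_j) (Ohm's law), |f| <= Fmax
       z = 0 (switched off): f = 0, Ohm's law relaxed.
    Encoded with linear constraints only:
       |f - B*(theta_i - theta_j)| <= M*(1 - z)   and   |f| <= Fmax*z."""
    ohm_violation = f - B * (theta_i - theta_j)
    return abs(ohm_violation) <= M * (1 - z) and abs(f) <= Fmax * z
```

The binary variable z thus turns Ohm's law on and off with linear constraints, which is what makes the model a (linear) mixed-integer program; M must be chosen large enough to make the relaxed constraint vacuous, but no larger, since loose M values weaken the relaxation.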

Zhu Wang
Title: Dimensionality reduction for uncertainty quantification of nuclear engineering models
We address the question of uncertainty quantification for complex nuclear engineering simulation models. Previous research has shown that the propagation of uncertainty in the inputs can be approximated by polynomial regression at the cost of very few computationally expensive model evaluations, provided derivative information is also used. When the dimension of the uncertainty space is truly large (we estimate ~100), sampling it does not extract enough information for a good approximation, even if derivative information is used and the regression setup is optimal; some form of dimensionality reduction is required. We project the high-dimensional uncertainty space onto a reduced representation using a proper orthogonal decomposition (POD) based reduction that is dual-weighted, i.e., individual importance is assigned to each training-set point and to each component of those points. The weighting is based on derivative information from the model; comparisons with different weighting schemes are also performed. Our work indicates that it is possible to perform high-precision uncertainty quantification when the dimension of the uncertainty space is ~100 or more, which has practical applications for currently difficult tasks: uncertainty quantification, parametric dependence analysis, verification and validation for nuclear engineering models with many parameters, and sparse description of their mathematical structure.
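At its core, the weighted POD step is a weighted singular value decomposition of the snapshot (training) matrix. The sketch below shows only that kernel, with arbitrary example weights standing in for the derivative-based dual weights of the talk:

```python
import numpy as np

def weighted_pod(snapshots, weights, k):
    """POD basis of weighted snapshots.
    snapshots: (n, m) array, one training point per column.
    weights:   m positive importance weights (here just given numbers;
               in the talk they come from derivative information).
    Returns the k leading POD basis vectors and singular values."""
    W = snapshots * np.sqrt(np.asarray(weights, dtype=float))[None, :]
    U, s, _ = np.linalg.svd(W, full_matrices=False)
    return U[:, :k], s[:k]
```

A rank-k basis U then replaces the full ~100-dimensional uncertainty space in the regression, with the decay of the singular values indicating how much is lost.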

Brian Haines
Title: Numerical homogenization approach for Stokesian suspensions
Suspensions of rigid particles present numerical difficulties when they contain many particles, largely due to their complicated moving boundaries. We propose a new approach to studying these problems through numerical homogenization. This is done by introducing an FEM basis that is computed once for the suspension in its initial configuration. This basis can be cheaply advected to produce a basis for use at later times (in the evolved geometry) with explicitly controlled error. Furthermore, we present error estimates for the approximate solution and discuss ongoing work on localizing the initial basis so that it is cheaper to compute. With a localized basis, the computational complexity is reduced significantly compared to standard approaches.


Please send questions or suggestions to Jeffrey Larson: jmlarson at anl dot gov.