Argonne National Laboratory Mihai Anitescu, Ph.D., Computational Mathematician
Mathematics and Computer Science (MCS) Division
 
Comp Math 310, Part B: Optimization

Stat 310: Computational Math and Optimization II

Optimization Topics

(This page is frequently updated during February-March 2010; please consult the latest version. The links should allow access to the referenced materials from the University network or via the proxy server.)

0. Introduction MA

  • 0.1 Formulation of the Nonlinear Programming Problem (the general formulation is written out after this list).
  • 0.2 Example Problems.
    • 0.2.1 Constrained Max Likelihood.
    • 0.2.2 Nonlinear Least Squares.
    • 0.2.3 Control Problems after discretization.
      • Example in thermal management of a building (Equation 16).
    • 0.2.4 Stochastic Programming Problems after sample average approximation.
      • Example in thermal management of a building under weather forecast uncertainty. (Equation 17).
  • 0.3 Why and when should you use Matlab?
  • 0.4 Why and when should you use a modeling language to solve optimization problems?
  • 0.5 Special Cases of Nonlinear Programming.
  • 0.6 What does it mean to "solve an NLP"?
    • Goal 1 of NLP algorithms: global convergence to local minima.
  • 0.7 Scope and goals of the course.
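
For reference, the general nonlinear programming problem of item 0.1 can be written in the standard form (the notation below is the usual textbook one, not fixed by this page):

    \min_{x \in \mathbb{R}^n} f(x)
    \quad \text{subject to} \quad
    c_i(x) = 0, \; i \in \mathcal{E}, \qquad
    c_i(x) \ge 0, \; i \in \mathcal{I},

where f and the c_i are smooth real-valued functions and \mathcal{E}, \mathcal{I} index the equality and inequality constraints.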

1. Fundamentals of Unconstrained Optimization. MA

  • 1.1 Taylor's theorem.
  • 1.2 Necessary Optimality Conditions
    • Refresher: PowerPoint slides from Vector Calculus, 5th edition, by Marsden and Tromba; see Chapter 3.
  • 1.3 Sufficient Optimality Conditions.
  • 1.4 How large is the gap between necessary and sufficient optimality conditions?
  • 1.5 Newton's method for nonlinear equations and unconstrained optimization (a Matlab sketch follows this list).
  • 1.6 Rate of convergence of Newton's Method.
    • Rates of convergence of iterative algorithms.
    • Goal A (refined): Achieve fast (superlinear) local convergence.
  • 1.7 What can go wrong with Newton's method?
  • 1.8 Globalization strategies.
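
To make items 1.5-1.6 concrete, here is a minimal Matlab sketch of Newton's method for unconstrained minimization. The function and variable names, the test objective, and the tolerance are illustrative choices, not part of the course materials:

    % newton_min.m -- Newton's method for unconstrained minimization:
    % iterate x <- x - H(x)\g(x), where g is the gradient and H the
    % Hessian of f; stop at approximate first-order stationarity.
    function x = newton_min(grad, hess, x, tol, maxit)
      for k = 1:maxit
        g = grad(x);
        if norm(g) < tol, break; end   % ||grad f(x)|| small: stop
        x = x - hess(x) \ g;           % full Newton step
      end
    end

    % Example use, for f(x) = x1^2 + x2^2 + exp(x1):
    % grad  = @(x) [2*x(1) + exp(x(1)); 2*x(2)];
    % hess  = @(x) [2 + exp(x(1)), 0; 0, 2];
    % xstar = newton_min(grad, hess, [1; 1], 1e-10, 50);

Near a minimizer with a positive definite Hessian the iteration converges quadratically (item 1.6); far from one it may diverge, which motivates the globalization strategies of item 1.8.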

2. Line Search Methods MA

  • 2.1 Principles of Line Search Methods.
  • 2.2 Computing step length. Need for a sufficient decrease condition.
  • 2.3 Wolfe Conditions.
  • 2.4 Backtracking (Armijo Rule); a Matlab sketch follows this list.
  • 2.5 Convergence of line search methods.
    • Why does Armijo's rule fit here?
  • 2.6 Steepest descent method.
  • 2.7 Quasi-Newton methods.
  • 2.8 Newton's method with Hessian modifications.
  • 2.9 (If time permits) Refinements.
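
As a companion to item 2.4, here is a minimal Matlab sketch of the backtracking (Armijo) rule. The constants suggested in the comments are common textbook defaults, and the names are illustrative:

    % armijo.m -- backtracking line search: shrink the step until the
    % sufficient decrease condition f(x+alpha*p) <= f(x) + c*alpha*g'*p
    % holds, where p is a descent direction (g'*p < 0).
    function alpha = armijo(f, x, g, p, alpha0, c, rho)
      alpha = alpha0;                    % e.g. alpha0 = 1, c = 1e-4
      while f(x + alpha*p) > f(x) + c*alpha*(g'*p)
        alpha = rho * alpha;             % e.g. rho = 0.5 halves the step
      end
    end

For steepest descent (item 2.6) one takes p = -g; for (quasi-)Newton directions, alpha0 = 1 should be tried first to preserve fast local convergence.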

3. Trust-Region Methods. MA

  • 3.1 The trust region subproblem.
  • 3.2 Trust region size management.
  • 3.3 The Cauchy Point (a Matlab sketch follows this list).
  • 3.4 (If time permits) The dogleg approach and the two-dimensional subspace minimization approach.
  • 3.5 Convergence to stationary points.
  • 3.6 Iterative resolution of the trust-region subproblem.
    • Lagrange Multiplier Theorem refresher: PowerPoint slides from Vector Calculus, 5th edition, by Marsden and Tromba; see Chapter 3, Section 3.4.
  • 3.7 The trust-region subproblem safeguards.
  • 3.8 Local convergence of trust region methods.
  • 3.9 (If time permits) Refinements.
  • 3.10 Summary: when to use trust region and when to use line search.
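
As a sketch of item 3.3, the Cauchy point minimizes the quadratic model m(p) = f + g'*p + 0.5*p'*B*p along the steepest descent direction, restricted to the trust region ||p|| <= Delta. A minimal Matlab version of the standard formula (variable names mine):

    % cauchy_point.m -- model minimizer along -g within ||p|| <= Delta.
    function p = cauchy_point(g, B, Delta)
      gBg = g' * B * g;
      tau = 1;                                    % step to the boundary
      if gBg > 0
        tau = min(1, norm(g)^3 / (Delta * gBg));  % interior minimizer
      end
      p = -tau * (Delta / norm(g)) * g;
    end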

4. Special Topics MA

  • 4.1 The Elements and Structure of an Unconstrained Optimization Program
  • 4.2 Using Sparsity in Matrix Manipulations
    • Reference: Templates for the Solution of Linear Systems, from Netlib; see Section 4.3.
  • 4.3 Review of the Conjugate Gradient Method.
  • 4.4 Early Termination of the CG Method in Unconstrained Optimization.
  • 4.5 Inexact Newton Methods.
  • 4.6 CG-Newton and CG-Trust Region algorithms.
  • 4.7 Preconditioning CG approaches.
  • 4.8 Finite Difference Methods for Approximating Derivative Information (a Matlab sketch follows this list).
  • 4.9 Automatic Differentiation
    • Siegfried Rump's INTLAB package, which includes automatic differentiation for Matlab. Example usage for Fenton's function: fenton_wrap.m
  • 4.10 (If time permits) Quasi-Newton methods.
  • 4.11 (If time permits) Nonlinear equations and nonlinear least-squares problems.
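
As an illustration of item 4.8, here is a minimal Matlab sketch of a forward-difference gradient approximation; the sqrt(eps) step-size scaling is the usual heuristic, and the names are illustrative:

    % fd_gradient.m -- forward-difference approximation of grad f(x):
    % g(i) = (f(x + h*e_i) - f(x)) / h, with h on the order of sqrt(eps).
    function g = fd_gradient(f, x)
      h  = sqrt(eps) * max(1, norm(x));  % common step-size heuristic
      n  = numel(x);
      g  = zeros(n, 1);
      fx = f(x);
      for i = 1:n
        e = zeros(n, 1);  e(i) = 1;      % i-th coordinate direction
        g(i) = (f(x + h*e) - fx) / h;
      end
    end

Each gradient costs n + 1 function evaluations, which is one motivation for the automatic differentiation tools of item 4.9.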

5. Theory of Constrained Optimization. MA

  • 5.1 Review of the implicit function theorem.
  • 5.2 Optimality conditions for problems with equality constraints.
  • 5.3 The Augmented Lagrangian
  • 5.4 Optimality conditions for problems with inequality constraints. The Karush-Kuhn-Tucker conditions (stated after this list).
  • 5.5 Fundamentals of algorithms for Nonlinear Constrained Optimization.
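
For reference, a sketch of the Karush-Kuhn-Tucker conditions of item 5.4, in the notation of the formulation in Section 0: if x^* is a local solution at which a constraint qualification holds, then there exist multipliers \lambda^* such that

    \nabla f(x^*) - \sum_{i \in \mathcal{E} \cup \mathcal{I}} \lambda_i^* \nabla c_i(x^*) = 0,
    c_i(x^*) = 0, \; i \in \mathcal{E},
    c_i(x^*) \ge 0, \quad \lambda_i^* \ge 0, \quad \lambda_i^* c_i(x^*) = 0, \; i \in \mathcal{I}.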

6. Interior Point Methods for Nonlinear Programming. MA

  • 6.1 Justification and interpretation (homotopy versus barrier); the barrier subproblem is sketched after this list.
  • 6.2 Basic concepts.
  • 6.3 Algorithmic Development.
  • 6.4 Merit Functions and Line Search.
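
As a pointer for item 6.1, the barrier interpretation replaces the inequality constraints by a logarithmic penalty and solves a sequence of equality-constrained subproblems; as the parameter \mu decreases to zero, their solutions trace the central path toward a KKT point. Sketched here in the notation of Section 0, not as the course's exact development:

    \min_{x} \; f(x) - \mu \sum_{i \in \mathcal{I}} \ln c_i(x)
    \quad \text{subject to} \quad
    c_i(x) = 0, \; i \in \mathcal{E}, \qquad \mu > 0.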

Topics that may be covered if time permits.

  • Nonlinear Equations and Nonlinear Least-Squares Problems.

  • Calculating Derivatives, Automatic Differentiation.

  • Quasi-Newton Methods.

  • Nonlinear Conjugate Gradient.

webmaster@mcs.anl.gov
Last modified: February 11, 2010