Nonlinear Constrained Optimization

Basic Concepts

The general form of a nonlinearly-constrained problem or a nonlinear programming problem is to minimize a scalar-valued function \(f\) of several variables \(x\) subject to other functions (constraints) that limit or define the values of the variables. In mathematical terms,
\[ \begin{array}{lllll}
\mbox{minimize} & f(x) & & & \\
\mbox{subject to} & c_i(x) & = & 0 & \forall i \in \mathcal{E} \\
& c_i(x) & \leq & 0 & \forall i \in \mathcal{I}
\end{array}
\] where each \(c_i(x) \,\) is a mapping from \(R^n \,\) to \(R \,\) and \(\mathcal{E}\) and \(\mathcal{I}\) are index sets for equality and inequality constraints, respectively.

An equivalent formulation is
\[ \begin{array}{llll}
\mbox{minimize} & f(x) & & \\
\mbox{subject to} & c(x) & = & 0 \\
& l \leq & x & \leq u
\end{array}
\] where \(c(x) \,\) maps \(R^n \,\) to \(R^m \,\) and the lower-bound and upper-bound vectors, \(l \,\) and \(u \,\), may contain some infinite components. The two formulations are interchangeable; for example, an inequality constraint \(c_i(x) \leq 0\) can be converted into an equality by introducing a slack variable, giving \(c_i(x) + s_i = 0\) with the bound \(s_i \geq 0\).
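As a small, concrete illustration of the second formulation, the sketch below poses a two-variable problem with one nonlinear equality constraint and simple bounds and passes it to a general-purpose local solver. The particular problem data and the use of SciPy's SLSQP method are illustrative assumptions only, not part of the formulation above.

    # Minimal sketch (illustrative): minimize f(x) subject to c(x) = 0 and l <= x <= u,
    # using SciPy's SLSQP method as an example of a general-purpose local NLP solver.
    import numpy as np
    from scipy.optimize import minimize

    def f(x):
        # objective: squared distance from the point (1, 2.5)
        return (x[0] - 1.0)**2 + (x[1] - 2.5)**2

    def c(x):
        # single nonlinear equality constraint c(x) = 0: the circle of radius 2
        return x[0]**2 + x[1]**2 - 4.0

    result = minimize(
        f,
        x0=np.array([1.0, 1.0]),                  # starting point
        method="SLSQP",
        constraints=[{"type": "eq", "fun": c}],   # c(x) = 0
        bounds=[(0.0, None), (0.0, None)],        # l = (0, 0), u = (+inf, +inf)
    )
    print(result.x, result.fun)

Because SLSQP is a local method, the computed point is guaranteed only to be a local solution; for nonconvex problems a global optimization solver may be needed, as discussed below.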

Nonlinear programming is a broad field with a number of well-studied subfields, some of which are listed below. For many general nonlinear programming problems, the objective function has many locally optimal solutions; finding the best of all such minima, the global solution, is often difficult. An important special case of nonlinear programming is convex programming in which all local solutions are global solutions.

  • If there are no constraints at all on the variables \(x\), then the problem is an unconstrained optimization problem.
  • When the objective function \(f\) is linear and all of the constraint functions \(c_i\) are linear, the problem is a linear programming (LP) problem.
  • When the objective function \(f\) is quadratic and the constraint functions \(c_i\) are linear, the problem is a quadratic programming (QP) problem; a standard form is written out just after this list.
  • When the objective function \(f\) is quadratic and the constraint functions \(c_i\) are quadratic, the problem is a quadratically constrained quadratic programming (QCQP) problem.
  • In a second order cone programming (SOCP) problem, a linear function \(f\) is minimized over the intersection of an affine set and the product of second-order (quadratic) cones.
  • In a semidefinite programming (SDP) problem, a linear function \(f\) is minimized subject to a linear matrix inequality.
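For example, a quadratic program fits the general form by taking a quadratic objective and affine constraints; one standard statement (shown here only as an illustration) is
\[ \begin{array}{lll}
\mbox{minimize} & \tfrac{1}{2} x^T Q x + q^T x & \\
\mbox{subject to} & Ax = b, & \\
& x \geq 0, &
\end{array}
\] where \(Q\) is a symmetric matrix. The QP is a convex programming problem exactly when \(Q\) is positive semidefinite.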

Complementarity problems are closely related to nonlinear optimization problems. The most familiar complementarity conditions are the complementary slackness conditions for optimality in linear programming, which specify that, at an optimal primal-dual pair, either a primal variable is zero or the corresponding dual slack is zero (or both), and likewise for the dual variables and primal slacks. Pure complementarity problems consist of these and related either-or conditions. Mathematical programs with equilibrium constraints (MPECs) combine complementarity conditions with ordinary objective functions and constraints.
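As a concrete instance, consider the standard linear programming primal-dual pair \(\min \{ c^T x : Ax \geq b, \; x \geq 0 \}\) and \(\max \{ b^T y : A^T y \leq c, \; y \geq 0 \}\) (written here purely for illustration). Complementary slackness requires
\[ y_i \, (Ax - b)_i = 0 \;\; \forall i \quad \mbox{and} \quad x_j \, (c - A^T y)_j = 0 \;\; \forall j, \]
so each dual variable or its corresponding primal slack must vanish, and each primal variable or its corresponding dual slack must vanish. A pure complementarity problem retains exactly this either-or structure, often written compactly as \(0 \leq y \perp Ax - b \geq 0\) and \(0 \leq x \perp c - A^T y \geq 0\).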

Optimality Conditions

The main techniques that have been proposed for solving constrained optimization problems are reduced-gradient methods, sequential linear and quadratic programming methods, and methods based on augmented Lagrangians and exact penalty functions. Fundamental to the understanding of these algorithms is the Lagrangian function, which for the first formulation is defined as
\[L(x,y) = f(x) + \sum_{i \in \mathcal{E} \cup \mathcal{I}} y_i c_i(x).\] The Lagrangian is used to express first-order and second-order conditions for a local minimizer. We simplify matters by stating just first-order necessary and second-order sufficiency conditions without trying to make the weakest possible assumptions.

The first-order necessary conditions for the existence of a local minimizer \(x^*\) of the constrained optimization problem require the existence of Lagrange multipliers \(y^*\) such that
\[\nabla_x L(x^*,y^*) = \nabla f(x^*) + \sum_{i \in A^*} y_i^* \nabla c_i(x^*) = 0\] where
\[A^* = A(x^*) = \{ i : c_i(x^*) = 0 \} \,\] is the active set at \(x^*\) and \(y_i^* \geq 0\) if \(i \in \mathcal{I}\). This result requires a constraint qualification to ensure that the geometry of the feasible set is adequately captured by a linearization of the constraints about \(x^*\). A standard constraint qualification requires the constraint normals, \(\nabla c_i(x^*)\) for \(i \in A^*\), to be linearly independent.
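As a small worked example (chosen purely for illustration), consider minimizing \(f(x) = x_1 + x_2\) subject to the single inequality constraint \(c_1(x) = x_1^2 + x_2^2 - 2 \leq 0\). At \(x^* = (-1,-1)\) the constraint is active, so \(A^* = \{1\}\), and the first-order condition
\[ \nabla f(x^*) + y_1^* \nabla c_1(x^*) = \left( \begin{array}{c} 1 \\ 1 \end{array} \right) + y_1^* \left( \begin{array}{c} -2 \\ -2 \end{array} \right) = 0 \]
holds with the Lagrange multiplier \(y_1^* = 1/2 \geq 0\). The single constraint normal \(\nabla c_1(x^*)\) is nonzero, so the linear independence constraint qualification is satisfied.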

The second-order sufficiency condition requires that \((x^*,y^*) \) satisfies the first-order condition and that the Hessian of the Lagrangian
\[\nabla^2_{xx} L(x^*,y^*) = \nabla^2 f(x^*) + \sum_{i \in A^*} y_i^* \nabla^2 c_i(x^*)\] satisfies the condition
\[s^T \nabla^2_{xx} L(x^*,y^*) s > 0 \] for all nonzero \(s\) in the set
\[\{ s : \nabla c_i(x^*)^T s = 0 \; \forall i \in I_+^* \cup \mathcal{E} \quad \mbox{and} \quad \nabla c_i(x^*)^T s \leq 0 \; \forall i \in I_0^* \},\] where
\[I_+^* = \{ i \in \mathcal{I} \cap A^* : y_i^* > 0 \} \quad \mbox{and} \quad I_0^* = \{ i \in \mathcal{I} \cap A^* : y_i^* = 0 \}.\] This condition guarantees that the optimization problem is well behaved near \(x^* \); in particular, if the second-order sufficiency condition holds, then \(x^* \) is a strict local minimizer of the first constrained problem. An important ingredient in the convergence analysis of a constrained algorithm is its behavior in the vicinity of a point \((x^*,y^*) \) that satisfies the second-order sufficiency condition.
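Continuing the small example above, the second-order sufficiency condition can be verified directly. Since \(y_1^* = 1/2 > 0\), we have \(I_+^* = \{1\}\) and \(I_0^* = \emptyset\), so the relevant set is \(\{ s : \nabla c_1(x^*)^T s = 0 \} = \{ s : s_1 + s_2 = 0 \}\). The Hessian of the Lagrangian is
\[ \nabla^2_{xx} L(x^*,y^*) = \nabla^2 f(x^*) + y_1^* \nabla^2 c_1(x^*) = 0 + \tfrac{1}{2} \left( \begin{array}{cc} 2 & 0 \\ 0 & 2 \end{array} \right) = I, \]
so \(s^T \nabla^2_{xx} L(x^*,y^*) s = \|s\|^2 > 0\) for every nonzero \(s\) in this set, and \(x^* = (-1,-1)\) is a strict local (in fact global) minimizer.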

Algorithms
Software Resources

No single general-purpose nonlinear programming solver works effectively for every kind of nonlinear programming problem. If your problem fits into one of the special cases listed above, you should select a solver designed for that particular problem type.

Nonlinear Programming Software on the NEOS Server

If you do not have access to an appropriate solver at your institution and you prefer not to download a demo version or a free solver, you can access a number of commercial and freely available nonlinear programming solvers on the NEOS Server at no charge. Several categories of nonlinear programming solvers are available there.

The NEOS Server also offers a number of Global Optimization solvers.

Other Sources of Nonlinear Programming Solvers

Hans Mittelmann’s Decision Tree for Optimization Software lists additional public domain and free-for-research codes for QP problems and general nonlinear programming problems.

  • COIN-OR, COmputational INfrastructure for Operations Research, is an open-source community for the development and deployment of operations research software. There are several nonlinear optimization projects:
    • filterSD is an open-source package written in Fortran for solving nonlinear programming problems and linearly constrained continuous optimization problems.
    • Ipopt uses an interior point method with a filter line-search procedure; it can be accessed from various modeling environments or as a callable library. Ipopt is also available on the NEOS Server.
  • Most spreadsheet programs come with features or add-ins that can do some nonlinear optimization. Frontline Systems offers more powerful upgrades of the Excel solver. Lindo Systems, Inc. offers What’sBest!, an Excel add-on for solving linear, integer, and nonlinear optimization models.
  • If you have MATLAB, there are a number of options for nonlinear optimization.
  • If you have Mathematica, there are a number of options for nonlinear optimization beyond the standard Mathematica capabilities:
    • MathOptimizer is an advanced modeling and optimization system for Mathematica users; it enables the global and local (numerical) solution of a very general class of nonlinear optimization problems defined by a finite number of real-valued, continuous functions over a finite \(n\)-dimensional interval region. It is compatible with the most recent version of Mathematica.
    • GlobalOptimization is a collection of functions for global nonlinear optimization that uses the Mathematica system as an interface for defining the nonlinear system to be solved and for computing function numeric values.
  • The Netlib Repository opt directory contains a number of freely available optimization routines.
  • NLopt is an open-source library for nonlinear optimization that provides a common interface to some algorithms available online as well as original implementations of other nonlinear optimization algorithms.
  • Toolkit for Advanced Optimization (TAO) provides a framework for developing optimization software and solvers for unconstrained and bound constrained optimization, least squares problems, and complementarity problems.
Test Problems
  • AMPL Nonlinear Test Problems: compiled by Robert Vanderbei at Princeton University. They were translated by Andre Savitsky into GAMS and are available as the PrincetonLib.
  • COPS: Large-Scale Optimization Problems: COPS (Constrained Optimization Problem Set) is a collection of large-scale constrained optimization problems intended to provide difficult test cases for optimization software. Models are available in AMPL and GAMS.
  • CUTEst: CUTEst is the latest evolution of CUTE, the constrained and unconstrained testing environment for numerical optimization.
  • Decision Tree for Optimization: Hans Mittelmann’s website has a collection of test cases for general nonlinear programming. The site includes test cases for other problem types as well as benchmark results.
  • GAMS Model Library includes many nonlinear models and there is a GlobalLib collection of nonlinear programming models.
  • LIBOPT: LIBOPT is a methodology and a set of tools that can be used for testing, comparing, and profiling solvers on problems belonging to various collections.
References

Optimization Online Nonlinear Optimization area

Books
  • Bazaraa, M. S., Sherali, H. D., and Shetty, C. M. 2006. Nonlinear Programming: Theory and Algorithms, 3rd ed. Wiley-Interscience, Hoboken, NJ.
  • Bertsekas, D. P. 1999. Nonlinear Programming, 2nd ed. Athena Scientific, Belmont, MA.
  • Boyd, S. and Vandenberghe, L. 2004. Convex Optimization. Cambridge University Press, Cambridge, UK.
  • Dennis, J. E. and Schnabel, R. B. 1983. Numerical Methods for Unconstrained Optimization and Nonlinear Equations. Prentice Hall, Upper Saddle River, NJ.
  • Fletcher, R. 1987. Practical Methods of Optimization, 2nd ed. John Wiley & Sons, Inc., New York.
  • Gill, P. E., Murray, W., Saunders, M. A., and Wright, M. H. 1989. Constrained nonlinear programming, in Optimization, G. L. Nemhauser, A. H. G. Rinnooy Kan, and M. J. Todd, eds., North-Holland, Amsterdam, pp. 171-210.
  • Gill, P. E., Murray, W., and Wright, M. H. 1981. Practical Optimization. Academic Press, New York.
  • Griva, I., Nash, S. G., and Sofer, A. 2009. Linear and Nonlinear Optimization, 2nd ed. Society for Industrial and Applied Mathematics, Philadelphia.
  • Luenberger, D. G. 2003. Linear and Nonlinear Programming, 2nd ed. Kluwer Academic Publishers, Boston.
  • Nocedal, J. and Wright, S. J. 2006. Numerical Optimization, 2nd ed. Springer Series in Operations Research, Springer-Verlag, New York.
  • Ruszczynski, A. P. 2006. Nonlinear Optimization. Princeton University Press, Princeton, NJ.