Optimization Problem Types

As noted in the Introduction to Optimization, an important step in the optimization process is classifying your optimization model, since algorithms for solving optimization problems are tailored to a particular type of problem. Here we provide some guidance to help you classify your optimization model; for each of the optimization problem types below, a linked page provides some basic information, links to algorithms and software, and online and print resources.

  • Convex Optimization versus Nonconvex Optimization
    In a convex optimization problem, the objective function and the feasible region are both convex, so any local minimum is also a global minimum. Nonconvex problems may have many local minima that are not global, and finding or certifying a global minimum is generally much harder.
  • Continuous Optimization versus Discrete Optimization
    Some models only make sense if the variables take on values from a discrete set, often a subset of integers, whereas other models contain variables that can take on any real value. Models with discrete variables are discrete optimization problems; models with continuous variables are continuous optimization problems. Continuous optimization problems tend to be easier to solve than discrete optimization problems: the smoothness of the functions means that the objective function and constraint function values at a point \(x\) can be used to deduce information about points in a neighborhood of \(x\). However, improvements in algorithms coupled with advancements in computing technology have dramatically increased the size and complexity of discrete optimization problems that can be solved efficiently. Continuous optimization algorithms are important in discrete optimization because many discrete optimization algorithms generate a sequence of continuous subproblems; a small code sketch contrasting the two appears after this list.
  • Unconstrained Optimization versus Constrained Optimization
    Another important distinction is between problems in which there are no constraints on the variables and problems in which there are constraints on the variables. Unconstrained optimization problems arise directly in many practical applications; they also arise in the reformulation of constrained optimization problems in which the constraints are replaced by a penalty term in the objective function. Constrained optimization problems arise from applications in which there are explicit constraints on the variables. These constraints can vary widely, from simple bounds to systems of equalities and inequalities that model complex relationships among the variables. Constrained optimization problems can be further classified according to the nature of the constraints (e.g., linear, nonlinear, convex) and the smoothness of the functions (e.g., differentiable or nondifferentiable).
  • Deterministic Optimization versus Stochastic Optimization
    In deterministic optimization, it is assumed that the data for the given problem are known accurately. For many actual problems, however, the data cannot be known accurately for a variety of reasons. The first is simple measurement error. The second and more fundamental reason is that some data represent information about the future (e.g., product demand or price for a future time period) and simply cannot be known with certainty. In optimization under uncertainty, or stochastic optimization, the uncertainty is incorporated into the model. Robust optimization techniques can be used when the parameters are known only within certain bounds; the goal is to find a solution that is feasible for all such data and optimal in some sense. Stochastic optimization models take advantage of the fact that probability distributions governing the data are known or can be estimated; the goal is to find a policy that is feasible for all (or almost all) possible data instances and optimizes the expected performance of the model.
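
To make the continuous/discrete contrast concrete, here is a minimal sketch using SciPy's mixed-integer solver; the tiny model and its data are invented for illustration, and `scipy.optimize.milp` requires SciPy 1.9 or later. Dropping the integrality restriction yields the continuous relaxation, exactly the kind of continuous subproblem many discrete algorithms solve.

```python
# Maximize x + y subject to 2x + 2y <= 3 with x, y >= 0 (milp's default bounds),
# solved once as a continuous problem and once with integer variables.
import numpy as np
from scipy.optimize import milp, LinearConstraint

c = np.array([-1.0, -1.0])                      # milp minimizes, so negate to maximize
budget = LinearConstraint(np.array([[2.0, 2.0]]), ub=3.0)

relaxed = milp(c, constraints=budget, integrality=np.zeros(2))  # continuous variables
integer = milp(c, constraints=budget, integrality=np.ones(2))   # integer variables

# 1.5 versus 1.0: the continuous relaxation bounds the integer optimum
print(-relaxed.fun, -integer.fun)
```
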
Continuous Optimization

In continuous optimization, the variables in the model are allowed to take on any value within a range of values, usually real numbers. This property of the variables is in contrast to discrete optimization, in which some or all of the variables may be binary (restricted to the values 0 and 1), integer (for which only integer values are allowed), or more abstract objects drawn from sets with finitely many elements.

An important distinction in continuous optimization is between problems in which there are no constraints on the variables and problems in which there are constraints on the variables. Unconstrained optimization problems arise directly in many practical applications; they also arise in the reformulation of constrained optimization problems in which the constraints are replaced by a penalty term in the objective function. Constrained optimization problems arise from applications in which there are explicit constraints on the variables. There are many subfields of constrained optimization for which specific algorithms are available.

Discrete Optimization

In discrete optimization, some or all of the variables in a model are required to belong to a discrete set; this is in contrast to continuous optimization in which the variables are allowed to take on any value within a range of values. Here, we consider two branches of discrete optimization. In integer programming, the discrete set is a subset of integers. In combinatorial optimization, the discrete set is a set of objects, or combinatorial structures, such as assignments, combinations, routes, schedules, or sequences.
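
As a small combinatorial illustration, the sketch below solves an assignment problem, one of the combinatorial structures mentioned above, with `scipy.optimize.linear_sum_assignment`; the cost matrix is made up for the example.

```python
# Assign three workers to three tasks, one each, at minimum total cost.
import numpy as np
from scipy.optimize import linear_sum_assignment

cost = np.array([[4, 1, 3],
                 [2, 0, 5],
                 [3, 2, 2]])               # cost[i, j]: worker i doing task j

rows, cols = linear_sum_assignment(cost)   # optimal one-to-one assignment
print(list(zip(rows, cols)), cost[rows, cols].sum())  # pairs and total cost 5
```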

Unconstrained Optimization

Unconstrained optimization considers the problem of minimizing an objective function that depends on real variables, with no restrictions on their values. Mathematically, let \(x \in \mathcal{R}^n\) be a real vector with \(n \geq 1\) components and let \(f : \mathcal{R}^n \rightarrow \mathcal{R}\) be a smooth function. Then the unconstrained optimization problem is \[\min_x \; f(x).\]
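
As a minimal sketch, such a problem can be solved with a quasi-Newton method such as BFGS; the Rosenbrock test function below is our choice for illustration, not prescribed by the text.

```python
# Minimize a smooth function of two real variables with SciPy's BFGS method.
import numpy as np
from scipy.optimize import minimize

def f(x):
    # Rosenbrock function: minimum value 0 at x = (1, 1)
    return (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2

result = minimize(f, x0=np.array([-1.2, 1.0]), method="BFGS")
print(result.x)  # converges to approximately [1. 1.]
```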

Unconstrained optimization problems arise directly in some applications, but they also arise indirectly from reformulations of constrained optimization problems. Often it is practical to replace the constraints of an optimization problem with penalty terms in the objective function and to solve the problem as an unconstrained problem.
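
One common penalty reformulation is the quadratic penalty: replace a constraint \(c(x) = 0\) with a term \(\mu \, c(x)^2\) in the objective and solve a sequence of unconstrained problems with increasing \(\mu\). The sketch below illustrates this on a toy problem of our choosing.

```python
# Quadratic-penalty sketch: minimize x0^2 + x1^2 subject to x0 + x1 = 1
# by solving unconstrained subproblems with a growing penalty weight mu.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return x[0]**2 + x[1]**2          # objective: squared distance from the origin

def c(x):
    return x[0] + x[1] - 1.0          # constraint c(x) = 0, i.e. x0 + x1 = 1

x = np.array([0.0, 0.0])
for mu in [1.0, 10.0, 100.0, 1000.0]:
    # each subproblem is unconstrained; its solution warm-starts the next
    x = minimize(lambda x: f(x) + mu * c(x)**2, x).x
print(x)  # approaches the constrained minimizer [0.5, 0.5]
```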

Constrained Optimization

Constrained optimization considers the problem of optimizing an objective function subject to constraints on the variables. In general terms,
\[ \begin{array}{lllll}
\mbox{minimize} & f(x) & & & \\
\mbox{subject to} & c_i(x) & = & 0 & \forall i \in \mathcal{E} \\
& c_i(x) & \leq & 0 & \forall i \in \mathcal{I}
\end{array}
\] where \(f\) and the constraint functions \(c_i\) are all smooth, real-valued functions on a subset of \(\mathcal{R}^n\), and \(\mathcal{E}\) and \(\mathcal{I}\) are index sets for the equality and inequality constraints, respectively. The feasible set is the set of points \(x\) that satisfy the constraints.
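
As a hedged sketch of this general form (the problem data are invented for illustration), SciPy's SLSQP method accepts equality and inequality constraints directly. Note that its 'ineq' convention is \(c(x) \geq 0\), so a constraint written \(c_i(x) \leq 0\) is passed with its sign flipped.

```python
# Minimize f subject to one equality and one inequality constraint with SLSQP.
import numpy as np
from scipy.optimize import minimize

def f(x):
    return (x[0] - 1)**2 + (x[1] - 2.5)**2

constraints = [
    {"type": "eq",   "fun": lambda x: x[0] + x[1] - 2.0},   # x0 + x1 = 2
    {"type": "ineq", "fun": lambda x: -(x[0]**2 - x[1])},   # x0^2 - x1 <= 0, sign flipped
]

result = minimize(f, x0=np.zeros(2), method="SLSQP", constraints=constraints)
print(result.x)  # a feasible point minimizing f, approximately [0.25 1.75]
```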

Optimization Under Uncertainty

The optimization problem types described in the Continuous Optimization section and the Discrete Optimization section implicitly assume that the data for the given problem are known accurately. For many actual problems, however, the problem data cannot be known accurately for a variety of reasons. The first is simple measurement error. The second and more fundamental reason is that some data represent information about the future (e.g., product demand or price for a future time period) and simply cannot be known with certainty.

Stochastic Programming and Robust Optimization are the most popular frameworks for explicitly incorporating uncertainty into an optimization model. Stochastic programming uses random variables with specified probability distributions to characterize the uncertainty and optimizes the expected value of the objective function. Robust optimization uses set membership to characterize the uncertainty and optimizes against the worst-case realization within that set.
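
The two frameworks can be contrasted on a toy problem with a finite set of demand scenarios (all numbers below are invented for illustration): stochastic programming minimizes the expected cost over the scenario distribution, while robust optimization minimizes the worst-case cost over the scenario set.

```python
# Choose an order quantity x; cost is the purchase price paid minus the
# revenue earned on realized demand d (unsold units earn nothing).
import numpy as np
from scipy.optimize import minimize_scalar

price, revenue = 1.0, 1.5
demands = np.array([3.0, 5.0, 8.0])        # possible demand scenarios
probs   = np.array([0.3, 0.5, 0.2])        # their estimated probabilities

def cost(x, d):
    return price * x - revenue * min(x, d)

# Stochastic programming: minimize the expected cost over the distribution.
stoch = minimize_scalar(lambda x: sum(p * cost(x, d) for p, d in zip(probs, demands)),
                        bounds=(0, 10), method="bounded")

# Robust optimization: minimize the worst-case cost over the scenario set.
robust = minimize_scalar(lambda x: max(cost(x, d) for d in demands),
                         bounds=(0, 10), method="bounded")

print(stoch.x, robust.x)  # about 5 versus 3: the robust order hedges the worst case
```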