# Constrained minimization algorithms
Some of the algorithms and techniques used to find local minima of unconstrained optimization problems can be applied, mutatis mutandis, to the constrained case.
## Sequential Quadratic Programming (SQP)
For the sake of simplicity, only the equality constrained minimization problem is considered:

\[
\min_{x \in \mathbb{R}^{n}} f(x) \quad \text{s.t.} \quad h_{j}(x) = 0, \quad j = 1, \ldots, p,
\]

with optimization variable \(x \in \mathbb{R}^{n}\) and twice continuously differentiable objective and constraint functions \(f, h_{j} \in C^{2}(\mathbb{R}^{n}, \mathbb{R})\), \(h := (h_{1}, \ldots, h_{p})^{T} \in C^{2}(\mathbb{R}^{n}, \mathbb{R}^{p})\).
The corresponding Lagrangian is:

\[
L(x, \mu) := f(x) + \mu^{T} h(x) = f(x) + \sum_{j = 1}^{p} \mu_{j} h_{j}(x),
\]
and the KKT optimality conditions are:

\[
\nabla L(x, \mu) = \begin{pmatrix} \nabla_{x} L(x, \mu) \\ \nabla_{\mu} L(x, \mu) \end{pmatrix} = \begin{pmatrix} \nabla f(x) + \sum_{j = 1}^{p} \mu_{j} \nabla h_{j}(x) \\ h(x) \end{pmatrix} = 0.
\]
Again the goal is to perform an iteration

\[
\mathbf{x}^{k+1} = \mathbf{x}^{k} + \alpha^{k} \mathbf{d}^{k}
\]

with \(\mathbf{x} = (x, \mu)^{T}\), which converges from a starting point \(\mathbf{x}^{0}\) to a local minimum \(\mathbf{x}^{*}\) (a stationary point of the KKT optimality conditions), choosing a descent direction \(\mathbf{d} = (\Delta x, \Delta \mu)^{T}\) and a step size \(\alpha\) in each step (cf. Gradient methods).
Using Newton’s method (a quadratic Taylor approximation of the Lagrangian \(L(x, \mu)\)) to determine the descent direction amounts to applying one Newton step to the non-linear KKT system, i.e. solving the following linear system in each iteration:

\[
\nabla^{2} L(x, \mu) \begin{pmatrix} \Delta x \\ \Delta \mu \end{pmatrix} = -\nabla L(x, \mu).
\]
The Hessian matrix \(\nabla^{2} L(x, \mu)\) has a particular block structure:

\[
\nabla^{2} L(x, \mu) = \begin{pmatrix} \nabla^{2}_{x,x} L(x, \mu) & \nabla^{2}_{x,\mu} L(x, \mu) \\ \nabla^{2}_{\mu,x} L(x, \mu) & \nabla^{2}_{\mu,\mu} L(x, \mu) \end{pmatrix} = \begin{pmatrix} Q & B^{T} \\ B & 0 \end{pmatrix},
\]

where:
- \(Q := \nabla^{2}_{x,x} L(x, \mu) = \nabla^{2} f(x) + \sum_{j = 1}^{p} \mu_{j} \nabla^{2} h_{j}(x)\),
- \(B := \nabla^{2}_{\mu,x} L(x, \mu) = \begin{pmatrix} \nabla^{T} h_{1}(x) \\ \vdots \\ \nabla^{T} h_{p}(x) \end{pmatrix}\) is the Jacobian matrix of \(h(x)\),
- \(B^{T} = \nabla^{2}_{x,\mu} L(x, \mu) = \begin{pmatrix} \nabla h_{1}(x) & \ldots & \nabla h_{p}(x) \end{pmatrix}\) is the transposed Jacobian matrix of \(h(x)\),
- \(\nabla^{2}_{\mu,\mu} L(x, \mu) = 0\) is a zero matrix, as \(L(x, \mu)\) is linear in \(\mu\).
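To make the block structure concrete, the following minimal NumPy sketch assembles the KKT matrix and solves it for a single Newton step. The toy problem, its derivatives, and the starting point are illustrative assumptions, not taken from the text above:

```python
import numpy as np

# Hypothetical toy problem (illustrative assumption):
#   minimize   f(x) = x1^2 + 2*x2^2
#   subject to h(x) = x1 + x2 - 1 = 0
f_grad = lambda x: np.array([2.0 * x[0], 4.0 * x[1]])
f_hess = lambda x: np.diag([2.0, 4.0])
h      = lambda x: np.array([x[0] + x[1] - 1.0])
h_jac  = lambda x: np.array([[1.0, 1.0]])      # B, the Jacobian of h

def kkt_newton_step(x, mu):
    """Assemble the block KKT matrix and solve for one Newton step."""
    Q = f_hess(x)                              # here nabla^2 h_j = 0, so Q = nabla^2 f(x)
    B = h_jac(x)
    p = B.shape[0]
    KKT = np.block([[Q, B.T], [B, np.zeros((p, p))]])
    rhs = -np.concatenate([f_grad(x) + B.T @ mu, h(x)])
    d = np.linalg.solve(KKT, rhs)              # d = (Delta x, Delta mu)
    return d[:x.size], d[x.size:]

x, mu = np.zeros(2), np.zeros(1)
dx, dmu = kkt_newton_step(x, mu)
print(x + dx, mu + dmu)                        # [0.6667 0.3333] [-1.3333]; exact after
                                               # one step, since this problem is quadratic
```

Because both objective and constraint are quadratic/linear in this example, a single full Newton step already lands on the exact KKT point.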
The derived Newton equation happens to be identical to the KKT optimality conditions of the following quadratic optimization problem:

\[
\min_{\Delta x \in \mathbb{R}^{n}} \; \frac{1}{2} \Delta x^{T} Q \, \Delta x + \nabla_{x} L(x, \mu)^{T} \Delta x \quad \text{s.t.} \quad h(x) + B \Delta x = 0.
\]
By choosing the Lagrange multipliers \(\lambda = \Delta \mu\) for the equality constraints, the corresponding Lagrangian is:

\[
\tilde{L}(\Delta x, \lambda) := \frac{1}{2} \Delta x^{T} Q \, \Delta x + \nabla_{x} L(x, \mu)^{T} \Delta x + \lambda^{T} \left( h(x) + B \Delta x \right),
\]
and the KKT optimality conditions are:

\[
\begin{aligned}
\nabla_{\Delta x} \tilde{L}(\Delta x, \lambda) &= Q \Delta x + \nabla_{x} L(x, \mu) + B^{T} \lambda = 0, \\
\nabla_{\lambda} \tilde{L}(\Delta x, \lambda) &= h(x) + B \Delta x = 0,
\end{aligned}
\]
or, written in matrix notation with \(\lambda = \Delta \mu\):

\[
\begin{pmatrix} Q & B^{T} \\ B & 0 \end{pmatrix} \begin{pmatrix} \Delta x \\ \Delta \mu \end{pmatrix} = -\begin{pmatrix} \nabla_{x} L(x, \mu) \\ h(x) \end{pmatrix},
\]

which is exactly the Newton equation derived above.
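Iterating this step with full step size \(\alpha = 1\) gives a basic SQP loop. The sketch below applies it to a hypothetical non-quadratic toy problem; the problem data, starting point, tolerance, and iteration limit are all illustrative assumptions:

```python
import numpy as np

# Hypothetical non-quadratic toy problem (illustrative assumption):
#   minimize   f(x) = x1 + x2
#   subject to h(x) = x1^2 + x2^2 - 2 = 0      (optimum x* = (-1, -1), mu* = 1/2)
f_grad = lambda x: np.array([1.0, 1.0])
f_hess = lambda x: np.zeros((2, 2))
h      = lambda x: np.array([x[0] ** 2 + x[1] ** 2 - 2.0])
h_jac  = lambda x: np.array([[2.0 * x[0], 2.0 * x[1]]])
h_hess = lambda x: 2.0 * np.eye(2)             # Hessian of the single constraint h_1

def sqp(x, mu, tol=1e-10, max_iter=50):
    for _ in range(max_iter):
        B = h_jac(x)
        grad_L = np.concatenate([f_grad(x) + B.T @ mu, h(x)])
        if np.linalg.norm(grad_L) < tol:       # stationary point of the KKT conditions
            return x, mu
        Q = f_hess(x) + mu[0] * h_hess(x)      # Q = nabla^2_{x,x} L(x, mu)
        KKT = np.block([[Q, B.T], [B, np.zeros((1, 1))]])
        d = np.linalg.solve(KKT, -grad_L)
        x, mu = x + d[:2], mu + d[2:]          # full Newton step, alpha = 1
    return x, mu

print(sqp(np.array([-2.0, -1.0]), np.array([1.0])))
# -> (array([-1., -1.]), array([0.5])) after a handful of iterations
```

A production implementation would add a line search or trust region to choose \(\alpha\), and safeguards for an indefinite \(Q\); the sketch omits these for brevity.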
## Summarizing
- SQP methods solve a sequence of quadratic optimization sub-problems (quadratic objective function and linearized constraints).
- In the unconstrained case, SQP reduces to Newton’s method for finding a stationary point \(\nabla f(x) = 0\).
- In the purely equality constrained case, the SQP method is equivalent to applying Newton’s method to the KKT optimality conditions.
- Extensions for equality and inequality constraints exist, but are not discussed in this seminar (one such solver is sketched at the end of this section).
- Similar problems and limitations as discussed in the section about Newton’s method apply to the SQP method:
  - Computing the Hessian matrix is expensive.
  - If the Hessian matrix is not symmetric positive definite or the constraint gradients are not linearly independent (LICQ) during the iteration, convergence cannot be guaranteed.
See exercise RM04 for an application example.
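As a complementary illustration, the non-quadratic toy problem from the sketch above can also be handed to SciPy's SLSQP solver, an SQP-type method that additionally handles inequality constraints; the problem data remains an illustrative assumption:

```python
import numpy as np
from scipy.optimize import minimize

# Same hypothetical toy problem as above, solved with SciPy's SLSQP method.
res = minimize(
    fun=lambda x: x[0] + x[1],
    x0=np.array([-2.0, -1.0]),
    method="SLSQP",
    constraints=[{"type": "eq", "fun": lambda x: x[0] ** 2 + x[1] ** 2 - 2.0}],
)
print(res.x, res.fun)                          # approximately [-1. -1.] and -2.0
```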