Active-set method
In mathematical optimization, the active-set method is an algorithm used to identify the active constraints in a set of inequality constraints. The active constraints are then expressed as equality constraints, thereby transforming an inequality-constrained problem into a simpler equality-constrained subproblem.
An optimization problem is defined using an objective function to minimize or maximize, and a set of constraints

$g_1(x) \geq 0, \ldots, g_k(x) \geq 0$

that define the feasible region, that is, the set of all $x$ to search for the optimal solution. Given a point $x_0$ in the feasible region, a constraint $g_i(x) \geq 0$ is called active at $x_0$ if $g_i(x_0) = 0$, and inactive at $x_0$ if $g_i(x_0) > 0$. Equality constraints are always active. The active set at $x_0$ is made up of those constraints $g_i(x_0)$ that are active at the current point (Nocedal & Wright 2006, p. 308).
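As a concrete illustration of this definition, the short Python sketch below lists which constraints are active at a given point; the constraint functions, the test point, and the numerical tolerance are made-up examples (the definition itself uses exact equality):

import numpy as np

# Hypothetical constraints g_1(x) = x_1, g_2(x) = x_2, g_3(x) = 1 - x_1 - x_2, all required ">= 0".
constraints = [
    lambda x: x[0],
    lambda x: x[1],
    lambda x: 1.0 - x[0] - x[1],
]

def active_set(x0, constraints, tol=1e-9):
    # Indices i with g_i(x0) == 0, up to a floating-point tolerance.
    return [i for i, g in enumerate(constraints) if abs(g(x0)) <= tol]

x0 = np.array([0.0, 1.0])
print(active_set(x0, constraints))   # [0, 2]: g_1 and g_3 are active, g_2 is inactive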
The active set is particularly important in optimization theory, as it determines which constraints will influence the final result of optimization. For example, in solving a linear programming problem, the active set gives the hyperplanes that intersect at the solution point. In quadratic programming, as the solution is not necessarily on one of the edges of the bounding polygon, an estimate of the active set gives a subset of inequalities to watch while searching for the solution, which reduces the complexity of the search.
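For instance, in the small linear program below (an illustrative example, solved with SciPy's linprog, which is assumed to be available), both inequality constraints are tight at the optimum, so the solution is exactly the intersection of the two corresponding hyperplanes:

import numpy as np
from scipy.optimize import linprog

# Maximize x + 2y subject to x + y <= 4, y <= 3, x >= 0, y >= 0.
# linprog minimizes, so the objective is negated.
c = np.array([-1.0, -2.0])
A_ub = np.array([[1.0, 1.0],
                 [0.0, 1.0]])
b_ub = np.array([4.0, 3.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)], method="highs")
slack = b_ub - A_ub @ res.x
print(res.x)                               # [1. 3.]
print(np.where(np.isclose(slack, 0))[0])   # both inequality constraints are active at the optimum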
Active-set methods
In general, an active-set algorithm has the following structure (a concrete sketch in Python follows the outline):
Find a feasible starting point
repeat until "optimal enough"
    solve the equality problem defined by the active set (approximately)
    compute the Lagrange multipliers of the active set
    remove a subset of the constraints with negative Lagrange multipliers
    search for infeasible constraints
end repeat
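The outline can be made concrete for the linearly constrained convex quadratic case. The Python sketch below is one possible rendering in the spirit of the primal active-set method described in Nocedal & Wright, ch. 16; the function names, tolerances, and the small test problem are illustrative assumptions, and it presumes a symmetric positive-definite Hessian, a feasible starting point, and linearly independent working-set constraints:

import numpy as np

def solve_eqp(G, g, A_W):
    # Equality-constrained subproblem: min_p 0.5 p^T G p + g^T p  s.t.  A_W p = 0,
    # solved through its KKT system; returns the step p and the multipliers lam.
    n, m = G.shape[0], A_W.shape[0]
    K = np.block([[G, A_W.T], [A_W, np.zeros((m, m))]])
    rhs = np.concatenate([-g, np.zeros(m)])
    sol = np.linalg.solve(K, rhs)
    return sol[:n], -sol[n:]          # lam satisfies  G p + g = A_W^T lam

def active_set_qp(G, c, A, b, x, max_iter=100, tol=1e-9):
    # Primal active-set sketch for  min 0.5 x^T G x + c^T x  s.t.  A x >= b,
    # starting from a feasible x.  W is the current working set (constraint indices).
    W = [i for i in range(len(b)) if abs(A[i] @ x - b[i]) <= tol]
    for _ in range(max_iter):
        g = G @ x + c
        if W:
            p, lam = solve_eqp(G, g, A[W])
        else:
            p, lam = np.linalg.solve(G, -g), np.array([])
        if np.linalg.norm(p) <= tol:
            # No progress possible with this working set: inspect the multipliers.
            if lam.size == 0 or np.all(lam >= -tol):
                return x, W                      # KKT point reached
            W.pop(int(np.argmin(lam)))           # drop a constraint with negative multiplier
        else:
            # Largest step along p that keeps every ignored constraint feasible.
            alpha, blocking = 1.0, None
            for i in range(len(b)):
                if i not in W and A[i] @ p < -tol:
                    step = (b[i] - A[i] @ x) / (A[i] @ p)
                    if step < alpha:
                        alpha, blocking = step, i
            x = x + alpha * p
            if blocking is not None:
                W.append(blocking)               # blocking constraint becomes active
    return x, W

# Small two-variable test problem (cf. Nocedal & Wright, ch. 16):
# minimize (x1 - 1)^2 + (x2 - 2.5)^2 over a pentagon; the minimizer is (1.4, 1.7).
G = 2.0 * np.eye(2)
c = np.array([-2.0, -5.0])
A = np.array([[1.0, -2.0], [-1.0, -2.0], [-1.0, 2.0], [1.0, 0.0], [0.0, 1.0]])
b = np.array([-2.0, -6.0, -2.0, 0.0, 0.0])
x_opt, W_opt = active_set_qp(G, c, A, b, x=np.array([2.0, 0.0]))
print(x_opt, W_opt)   # [1.4 1.7] with constraint 0 active

Each pass of the loop either drops a working-set constraint whose Lagrange multiplier is negative or moves along the step p until a previously inactive constraint blocks it, mirroring the "remove" and "search for infeasible constraints" steps of the outline above.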
Methods that can be described as active-set methods include:
Successive linear programming (SLP)
Sequential quadratic programming (SQP) (illustrated after this list)
Sequential linear-quadratic programming (SLQP)
Reduced gradient method (RG)
Generalized reduced gradient method (GRG)
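As one practical illustration of the SQP entry above, SciPy's SLSQP solver (assuming SciPy is available) can be applied to the same small quadratic program used in the sketch earlier and recovers the same minimizer:

import numpy as np
from scipy.optimize import minimize

# The same two-variable QP as above, handed to an off-the-shelf SQP solver.
objective = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2
ineqs = [
    {"type": "ineq", "fun": lambda x:  x[0] - 2.0 * x[1] + 2.0},
    {"type": "ineq", "fun": lambda x: -x[0] - 2.0 * x[1] + 6.0},
    {"type": "ineq", "fun": lambda x: -x[0] + 2.0 * x[1] + 2.0},
]
res = minimize(objective, x0=np.array([2.0, 0.0]), method="SLSQP",
               bounds=[(0, None), (0, None)], constraints=ineqs)
print(res.x)   # approximately [1.4, 1.7]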
Performance
Consider the problem of linearly constrained convex quadratic programming. Under reasonable assumptions (the problem is feasible, the system of constraints is regular at every point, and the quadratic objective is strongly convex), the active-set method terminates after finitely many steps and yields a global solution to the problem. Theoretically, the active-set method may perform a number of iterations exponential in the number of constraints m, much like the simplex method. However, its practical behaviour is typically much better (Murty 1988, Sec. 9.1).
References
Bibliography
Murty, K. G. (1988). Linear complementarity, linear and nonlinear programming. Sigma Series in Applied Mathematics. Vol. 3. Berlin: Heldermann Verlag. xlviii+629 pp. ISBN 3-88538-403-5. MR 0949214. Archived from the original on 2010-04-01. Retrieved 2010-04-03.
Nocedal, Jorge; Wright, Stephen J. (2006). Numerical Optimization (2nd ed.). Berlin, New York: Springer-Verlag. ISBN 978-0-387-30303-1.