Wolfe duality

In mathematical optimization, Wolfe duality, named after Philip Wolfe, is a type of dual problem in which the objective function and constraints are all differentiable functions. Using this concept, a lower bound for a minimization problem can be found via the weak duality principle.[1]

Mathematical formulation

For a minimization problem with inequality constraints,

{\displaystyle {\begin{aligned}&{\underset {x}{\operatorname {minimize} }}&&f(x)\\&\operatorname {subject\;to} &&g_{i}(x)\leq 0,\quad i=1,\dots ,m\end{aligned}}}

the Lagrangian dual problem is

{\displaystyle {\begin{aligned}&{\underset {u}{\operatorname {maximize} }}&&\inf _{x}\left(f(x)+\sum _{j=1}^{m}u_{j}g_{j}(x)\right)\\&\operatorname {subject\;to} &&u_{i}\geq 0,\quad i=1,\dots ,m\end{aligned}}}

where the objective function is the Lagrange dual function. Provided that the functions {\displaystyle f} and {\displaystyle g_{1},\ldots ,g_{m}} are convex and continuously differentiable, the infimum of the Lagrangian over {\displaystyle x} occurs where its gradient with respect to {\displaystyle x} is equal to zero. The problem

{\displaystyle {\begin{aligned}&{\underset {x,u}{\operatorname {maximize} }}&&f(x)+\sum _{j=1}^{m}u_{j}g_{j}(x)\\&\operatorname {subject\;to} &&\nabla f(x)+\sum _{j=1}^{m}u_{j}\nabla g_{j}(x)=0\\&&&u_{i}\geq 0,\quad i=1,\dots ,m\end{aligned}}}

is called the Wolfe dual problem.[2] This problem employs the KKT conditions as a constraint. Also, the equality constraint {\displaystyle \nabla f(x)+\sum _{j=1}^{m}u_{j}\nabla g_{j}(x)=0} is nonlinear in general, so the Wolfe dual problem may be a nonconvex optimization problem. In any case, weak duality holds.[3]
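
As an illustrative example (not drawn from the cited references), consider minimizing {\displaystyle f(x)=x^{2}} subject to {\displaystyle g(x)=1-x\leq 0}. The Wolfe dual is

{\displaystyle {\begin{aligned}&{\underset {x,u}{\operatorname {maximize} }}&&x^{2}+u(1-x)\\&\operatorname {subject\;to} &&2x-u=0\\&&&u\geq 0.\end{aligned}}}

Eliminating {\displaystyle x=u/2} via the stationarity constraint reduces the objective to {\displaystyle u-u^{2}/4}, which is maximized at {\displaystyle u=2} with value 1, equal to the primal optimal value attained at {\displaystyle x=1}; every dual-feasible pair yields a value no larger than this, consistent with weak duality.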

See also

  • Lagrangian duality
  • Fenchel duality

References

  1. ^ Philip Wolfe (1961). "A duality theorem for non-linear programming". Quarterly of Applied Mathematics. 19 (3): 239–244. doi:10.1090/qam/135625.
  2. ^ "Chapter 3. Duality in convex optimization" (PDF). October 30, 2011. Retrieved May 20, 2012.
  3. ^ Geoffrion, Arthur M. (1971). "Duality in Nonlinear Programming: A Simplified Applications-Oriented Development". SIAM Review. 13 (1): 1–37. doi:10.1137/1013001. JSTOR 2028848.