# Constrained optimization and Lagrange multipliers

This tutorial presents an introduction to optimization problems that involve finding a maximum or a minimum value of an objective function $f\left(x_1,x_2,\ldots, x_n\right)$ subject to a constraint of the form $g\left(x_1,x_2,\ldots, x_n\right)=k$.

## Maximum and minimum

Finding optimum values of the function $f\left(x_1,x_2,\ldots, x_n\right)$ without a constraint is a well-known problem dealt with in calculus courses. One would normally use the gradient to find stationary points, then check all stationary and boundary points to locate the optimum values.

### Example

* $f\left(x,y\right)=2x^2+y^2$
* $f_x\left(x,y\right)=4x=0$
* $f_y\left(x,y\right)=2y=0$

$f\left(x,y\right)$ has one stationary point at $(0,0)$.
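The stationary point above can be checked numerically. A minimal sketch in plain Python, where the finite-difference helper `grad_f` is illustrative and not from the original text:

```python
# f(x, y) = 2x^2 + y^2; its gradient should vanish at a stationary point.
def f(x, y):
    return 2 * x**2 + y**2

def grad_f(x, y, h=1e-6):
    """Central-difference approximation of (f_x, f_y)."""
    fx = (f(x + h, y) - f(x - h, y)) / (2 * h)
    fy = (f(x, y + h) - f(x, y - h)) / (2 * h)
    return fx, fy

# Both partial derivatives vanish at (0, 0), matching f_x = 4x, f_y = 2y.
print(grad_f(0.0, 0.0))
```

At any other point the gradient is nonzero, e.g. `grad_f(1.0, 1.0)` is approximately `(4, 2)`.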

### The Hessian

A common method of determining whether or not a function has an extreme value at a stationary point is to evaluate the Hessian [3] of the function at that point, where the Hessian is defined as

: $H\left(f\right) = \begin{bmatrix} \frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\ \frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \frac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{bmatrix}.$

### Second derivative test

The second derivative test determines the optimality of a stationary point $x$ according to the following rules [2]:
* If $H\left(f\right)>0$ (positive definite) at the point $x$, then $f$ has a local minimum at $x$.
* If $H\left(f\right)<0$ (negative definite) at the point $x$, then $f$ has a local maximum at $x$.
* If $H\left(f\right)$ has both negative and positive eigenvalues, then $x$ is a saddle point.
* Otherwise the test is inconclusive.

In the above example,

: $H\left(f\right) = \begin{bmatrix} 4 & 0 \\ 0 & 2 \end{bmatrix},$

which is positive definite. Therefore $f\left(x,y\right)$ has a minimum at $(0,0)$.
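The Hessian of $f$ above is constant, so the test can be checked at a glance. A quick sketch in plain Python, using the closed-form eigenvalues of a symmetric $2\times 2$ matrix:

```python
# Hessian of f(x, y) = 2x^2 + y^2 (constant everywhere).
H = [[4.0, 0.0], [0.0, 2.0]]

# Eigenvalues of a symmetric 2x2 matrix [[a, b], [b, c]]:
# mean of the diagonal +/- sqrt((a - c)^2 / 4 + b^2).
a, b, c = H[0][0], H[0][1], H[1][1]
mean = (a + c) / 2
spread = ((a - c) ** 2 / 4 + b * b) ** 0.5
eigs = (mean - spread, mean + spread)
print(eigs)  # (2.0, 4.0) -> both positive, so H is positive definite
```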

## Constrained maximum and minimum

When finding the extreme values of $f\left(x_1,x_2,\ldots, x_n\right)$ subject to a constraint $g\left(x_1,x_2,\ldots, x_n\right)=k$, the stationary points found above will not work in general. This new problem can be thought of as finding extreme values of $f\left(x_1,x_2,\ldots, x_n\right)$ when the point $\left(x_1,x_2,\ldots, x_n\right)$ is restricted to lie on the surface $g\left(x_1,x_2,\ldots, x_n\right)=k$. The value of $f\left(x_1,x_2,\ldots, x_n\right)$ is maximized (or minimized) where a level surface of $f$ touches the constraint surface, i.e., where the two surfaces have a common tangent. This means that the gradient vectors of the two surfaces are parallel at that point, hence

: $\nabla f\left(x_1,x_2,\ldots, x_n\right) = \lambda \nabla g\left(x_1,x_2,\ldots, x_n\right).$

The number $\lambda$ in the equation is known as the Lagrange multiplier.

## Lagrange multiplier method

The Lagrange multiplier method solves the constrained optimization problem by transforming it into an unconstrained optimization problem of the form

: $L\left(x_1,x_2,\ldots, x_n,\lambda\right)= f\left(x_1,x_2,\ldots, x_n\right)+\lambda \left(k-g\left(x_1,x_2,\ldots, x_n\right)\right).$

Then finding the gradient and Hessian as was done above will determine any optimum values of $L\left(x_1,x_2,\ldots, x_n,\lambda\right)$.

Suppose we now want to find the optimum values of $f\left(x,y\right)=2x^2+y^2$ subject to the constraint $x+y=1$ (example from [2]).

The Lagrange multiplier method then yields the unconstrained function
* $L\left(x,y,\lambda\right)= 2x^2+y^2+\lambda \left(1-x-y\right).$
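Note that the multiplier term $\lambda\left(1-x-y\right)$ vanishes whenever the constraint $x+y=1$ holds, so $L$ agrees with $f$ on the constraint set for any value of $\lambda$. A minimal sketch in plain Python (function names are illustrative):

```python
def f(x, y):
    return 2 * x**2 + y**2

def L(x, y, lam):
    # Lagrangian: objective plus multiplier times (k - g), with k = 1, g = x + y.
    return f(x, y) + lam * (1 - x - y)

# On the constraint y = 1 - x the penalty term is zero, so L == f
# regardless of the multiplier value.
x = 0.25
print(L(x, 1 - x, 7.0) == f(x, 1 - x))  # True
```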

The gradient of this new function is
* $\frac{\partial L}{\partial x}\left(x,y,\lambda\right)= 4x-\lambda=0$
* $\frac{\partial L}{\partial y}\left(x,y,\lambda\right)= 2y-\lambda=0$
* $\frac{\partial L}{\partial \lambda}\left(x,y,\lambda\right)=1-x-y=0$

The stationary points of the above equations can be obtained from their matrix form

: $\begin{bmatrix} 4 & 0 & -1 \\ 0 & 2 & -1 \\ 1 & 1 & 0 \end{bmatrix} \begin{bmatrix} x \\ y \\ \lambda \end{bmatrix} = \begin{bmatrix} 0 \\ 0 \\ 1 \end{bmatrix}.$

This results in $x=1/3,\ y=2/3,\ \lambda=4/3$.
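Because the stationarity conditions are linear, they can also be solved programmatically. A minimal sketch using Cramer's rule in plain Python (`det3` is an illustrative helper, not a general solver):

```python
# Stationarity system: 4x - lam = 0, 2y - lam = 0, x + y = 1.
A = [[4, 0, -1],
     [0, 2, -1],
     [1, 1,  0]]
b = [0, 0, 1]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

d = det3(A)
sol = []
for col in range(3):
    # Cramer's rule: replace column `col` of A with b, divide determinants.
    Ai = [row[:] for row in A]
    for r in range(3):
        Ai[r][col] = b[r]
    sol.append(det3(Ai) / d)

x, y, lam = sol
print(x, y, lam)  # 1/3, 2/3, 4/3
```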

Next we can use the Hessian as before to determine the type of this stationary point. The Hessian of $L$ with respect to $x$ and $y$ is

: $H\left(L\right) = \begin{bmatrix} 4 & 0 \\ 0 & 2 \end{bmatrix}.$

Since $H\left(L\right)>0$ (positive definite), the solution $\left(x,y,\lambda\right)=\left(1/3,2/3,4/3\right)$ minimizes $f\left(x,y\right)=2x^2+y^2$ subject to $x+y=1$, with $f\left(x,y\right)=2/3$.
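As a sanity check, the constrained minimum can also be found by brute force: parameterize the constraint as $y=1-x$ and scan $f\left(x,1-x\right)$ over a grid. A plain-Python sketch:

```python
def f(x, y):
    return 2 * x**2 + y**2

# Scan x over [-1, 2] in steps of 1e-4 along the constraint y = 1 - x.
best_x = min((i / 10000 for i in range(-10000, 20001)),
             key=lambda x: f(x, 1 - x))
print(best_x, f(best_x, 1 - best_x))  # x ~ 1/3, f ~ 2/3
```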

## References

: [1] T.K. Moon and W.C. Stirling. "Mathematical Methods and Algorithms for Signal Processing". Prentice Hall, 2000.
: [2] http://mat.gsia.cmu.edu/QUANT/NOTES/chap4/node1.html
: [3] http://www.ece.tamu.edu/~chmbrlnd/Courses/ECEN601/ECEN601-Chap3.pdf

Wikimedia Foundation. 2010.
