1. Optimization using Calculus: Optimization of Functions of Multiple Variables Subject to Equality Constraints
2. Objectives
Optimization of functions of multiple variables subject to equality constraints, using
the method of constrained variation, and
the method of Lagrange multipliers.
3. Constrained optimization
A function of multiple variables, f(X), is to be optimized subject to one or more equality constraints in those variables. These equality constraints, gj(X), may or may not be linear. The problem statement is as follows:

Maximize (or minimize) f(X), subject to gj(X) = 0, j = 1, 2, ..., m

where

$$\mathbf{X} = \begin{Bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{Bmatrix}$$
4. Constrained optimization (contd.)
This is with the condition that m ≤ n; if m > n, the problem becomes overdefined and, in general, there will be no solution. Of the many available methods, the method of constrained variation and the method of Lagrange multipliers are discussed here.
5. Solution by the method of constrained variation
For the optimization problem defined above, let us consider a specific case with n = 2 and m = 1 before we proceed to find the necessary and sufficient conditions for a general problem using Lagrange multipliers. The problem statement is as follows:

Minimize f(x1, x2), subject to g(x1, x2) = 0

For f(x1, x2) to have a minimum at a point X* = [x1*, x2*], a necessary condition is that the total differential of f(x1, x2) must be zero at [x1*, x2*]:

$$df = \frac{\partial f}{\partial x_1}\,dx_1 + \frac{\partial f}{\partial x_2}\,dx_2 = 0 \qquad (1)$$
6. Method of constrained variation (contd.)
Since g(x1*, x2*) = 0 at the minimum point, the variations dx1 and dx2 about the point [x1*, x2*] must be admissible variations, i.e. the perturbed point must still lie on the constraint:

$$g(x_1^* + dx_1,\; x_2^* + dx_2) = 0 \qquad (2)$$

Assuming dx1 and dx2 are small, the Taylor series expansion of this gives us

$$g(x_1^* + dx_1,\; x_2^* + dx_2) = g(x_1^*, x_2^*) + \frac{\partial g}{\partial x_1}(x_1^*, x_2^*)\,dx_1 + \frac{\partial g}{\partial x_2}(x_1^*, x_2^*)\,dx_2 = 0 \qquad (3)$$
7. Method of constrained variation (contd.)
or, since g(x1*, x2*) = 0,

$$dg = \frac{\partial g}{\partial x_1}\,dx_1 + \frac{\partial g}{\partial x_2}\,dx_2 = 0 \quad \text{at } [x_1^*, x_2^*] \qquad (4)$$

which is the condition that must be satisfied for all admissible variations.

Assuming ∂g/∂x2 ≠ 0, (4) can be rewritten as

$$dx_2 = -\frac{\partial g/\partial x_1}{\partial g/\partial x_2}(x_1^*, x_2^*)\,dx_1 \qquad (5)$$
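For instance (an illustration of ours, not taken from the slides): for the linear constraint g(x1, x2) = x1 + x2 − 5 = 0, both partial derivatives equal 1, and (5) reduces to dx2 = −dx1. An admissible variation must decrease x2 by exactly the amount x1 increases, so the perturbed point stays on the constraint line.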
8. Method of constrained variation (contd.)
Equation (5) indicates that once the variation along x1 (dx1) is chosen arbitrarily, the variation along x2 (dx2) is decided automatically so that the variation remains admissible. Substituting equation (5) in (1) we have:

$$df = \left(\frac{\partial f}{\partial x_1} - \frac{\partial g/\partial x_1}{\partial g/\partial x_2}\,\frac{\partial f}{\partial x_2}\right)\Bigg|_{(x_1^*,\,x_2^*)} dx_1 = 0 \qquad (6)$$

The expression on the left-hand side is called the constrained variation of f. Equation (6) has to be satisfied for all dx1; hence we have

$$\left(\frac{\partial f}{\partial x_1}\,\frac{\partial g}{\partial x_2} - \frac{\partial f}{\partial x_2}\,\frac{\partial g}{\partial x_1}\right)\Bigg|_{(x_1^*,\,x_2^*)} = 0 \qquad (7)$$

This gives us the necessary condition for [x1*, x2*] to be an extreme point (maximum or minimum).
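As a quick worked instance of (7) (our illustration, not a problem from the slides), take f(x1, x2) = x1² + x2² and g(x1, x2) = x1 + x2 − 1. Condition (7) gives

$$(2x_1)(1) - (2x_2)(1) = 0 \;\Rightarrow\; x_1 = x_2,$$

which combined with the constraint yields the extreme point [1/2, 1/2].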
9. Solution by the method of Lagrange multipliers
Continuing with the same specific case of the optimization problem with n = 2 and m = 1, we define a quantity λ, called the Lagrange multiplier, as

$$\lambda = -\frac{\partial f/\partial x_2}{\partial g/\partial x_2}\Bigg|_{(x_1^*,\,x_2^*)} \qquad (8)$$

Using this in (6) gives

$$\left(\frac{\partial f}{\partial x_1} + \lambda\,\frac{\partial g}{\partial x_1}\right)\Bigg|_{(x_1^*,\,x_2^*)} = 0 \qquad (9)$$

and (8) can be written as

$$\left(\frac{\partial f}{\partial x_2} + \lambda\,\frac{\partial g}{\partial x_2}\right)\Bigg|_{(x_1^*,\,x_2^*)} = 0 \qquad (10)$$
10. Solution by the method of Lagrange multipliers (contd.)
Also, the constraint equation has to be satisfied at the extreme point:

$$g(x_1, x_2)\Big|_{(x_1^*,\,x_2^*)} = 0 \qquad (11)$$

Hence equations (9) to (11) represent the necessary conditions for the point [x1*, x2*] to be an extreme point.

Note that λ could equally be expressed in terms of ∂f/∂x1 and ∂g/∂x1, in which case ∂g/∂x1 has to be non-zero. Thus, these necessary conditions require that at least one of the partial derivatives of g(x1, x2) be non-zero at an extreme point.
11. Solution by the method of Lagrange multipliers (contd.)
The conditions given by equations (9) to (11) can also be generated by constructing a function L, known as the Lagrangian function, as

$$L(x_1, x_2, \lambda) = f(x_1, x_2) + \lambda\,g(x_1, x_2) \qquad (12)$$

Alternatively, treating L as a function of x1, x2 and λ, the necessary conditions for its extremum are given by

$$\begin{aligned}
\frac{\partial L}{\partial x_1}(x_1, x_2, \lambda) &= \frac{\partial f}{\partial x_1}(x_1, x_2) + \lambda\,\frac{\partial g}{\partial x_1}(x_1, x_2) = 0 \\
\frac{\partial L}{\partial x_2}(x_1, x_2, \lambda) &= \frac{\partial f}{\partial x_2}(x_1, x_2) + \lambda\,\frac{\partial g}{\partial x_2}(x_1, x_2) = 0 \\
\frac{\partial L}{\partial \lambda}(x_1, x_2, \lambda) &= g(x_1, x_2) = 0
\end{aligned} \qquad (13)$$
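The system (13) can also be solved symbolically. Below is a minimal sketch, assuming SymPy is available; the objective and constraint are the illustrative pair used after equation (7), not a problem from the slides:

```python
import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lambda', real=True)

# Illustrative problem (our assumption, not from the slides):
# minimize f = x1^2 + x2^2 subject to g = x1 + x2 - 1 = 0
f = x1**2 + x2**2
g = x1 + x2 - 1

# Lagrangian function, equation (12)
L = f + lam * g

# Necessary conditions, equation (13): all first partials of L vanish
eqs = [sp.diff(L, v) for v in (x1, x2, lam)]
print(sp.solve(eqs, (x1, x2, lam), dict=True))
# -> [{x1: 1/2, x2: 1/2, lambda: -1}]
```

SymPy returns x1 = x2 = 1/2 with λ = −1, matching the hand computation from condition (7).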
12. Necessary conditions for a general problem
For a general problem with n variables and m equality constraints, the problem is defined as shown earlier:

Maximize (or minimize) f(X), subject to gj(X) = 0, j = 1, 2, ..., m

where

$$\mathbf{X} = \begin{Bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{Bmatrix}$$

In this case the Lagrange function, L, will have one Lagrange multiplier λj for each constraint gj(X):

$$L(x_1, x_2, \ldots, x_n, \lambda_1, \lambda_2, \ldots, \lambda_m) = f(\mathbf{X}) + \lambda_1 g_1(\mathbf{X}) + \lambda_2 g_2(\mathbf{X}) + \cdots + \lambda_m g_m(\mathbf{X}) \qquad (14)$$
13. Necessary conditions for a general problem (contd.)
L is now a function of the n + m unknowns x1, x2, ..., xn, λ1, λ2, ..., λm, and the necessary conditions for the problem defined above are given by

$$\begin{aligned}
\frac{\partial L}{\partial x_i} &= \frac{\partial f}{\partial x_i}(\mathbf{X}) + \sum_{j=1}^{m}\lambda_j\,\frac{\partial g_j}{\partial x_i}(\mathbf{X}) = 0, \quad i = 1, 2, \ldots, n \\
\frac{\partial L}{\partial \lambda_j} &= g_j(\mathbf{X}) = 0, \quad j = 1, 2, \ldots, m
\end{aligned} \qquad (15)$$

which represent n + m equations in terms of the n + m unknowns, xi and λj. The solution to this set of equations gives us

$$\mathbf{X}^* = \begin{Bmatrix} x_1^* \\ x_2^* \\ \vdots \\ x_n^* \end{Bmatrix} \quad \text{and} \quad \boldsymbol{\lambda}^* = \begin{Bmatrix} \lambda_1^* \\ \lambda_2^* \\ \vdots \\ \lambda_m^* \end{Bmatrix} \qquad (16)$$
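The system (15) mechanizes directly. The following is a hedged sketch of a general-purpose helper (the function name and interface are our own, not from the lecture) that builds the Lagrangian (14) for arbitrary n and m and solves the stationarity conditions with SymPy:

```python
import sympy as sp

def lagrange_stationary_points(f, constraints, variables):
    """Build L = f + sum_j lambda_j * g_j (equation (14)) and solve the
    n + m necessary conditions (15) for the n + m unknowns."""
    lams = sp.symbols(f'lambda1:{len(constraints) + 1}')
    L = f + sum(l * g for l, g in zip(lams, constraints))
    unknowns = list(variables) + list(lams)
    eqs = [sp.diff(L, u) for u in unknowns]
    return sp.solve(eqs, unknowns, dict=True)

# Usage on the earlier illustrative problem:
x1, x2 = sp.symbols('x1 x2', real=True)
print(lagrange_stationary_points(x1**2 + x2**2, [x1 + x2 - 1], [x1, x2]))
# -> [{x1: 1/2, x2: 1/2, lambda1: -1}]
```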
14. Sufficient conditions for a general problem
A sufficient condition for f(X) to have a relative minimum at X* is that each root of the polynomial in ε, defined by the following determinant equation, be positive:

$$\begin{vmatrix}
L_{11}-\epsilon & L_{12} & \cdots & L_{1n} & g_{11} & g_{21} & \cdots & g_{m1} \\
L_{21} & L_{22}-\epsilon & \cdots & L_{2n} & g_{12} & g_{22} & \cdots & g_{m2} \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
L_{n1} & L_{n2} & \cdots & L_{nn}-\epsilon & g_{1n} & g_{2n} & \cdots & g_{mn} \\
g_{11} & g_{12} & \cdots & g_{1n} & 0 & 0 & \cdots & 0 \\
g_{21} & g_{22} & \cdots & g_{2n} & 0 & 0 & \cdots & 0 \\
\vdots & \vdots & \ddots & \vdots & \vdots & \vdots & \ddots & \vdots \\
g_{m1} & g_{m2} & \cdots & g_{mn} & 0 & 0 & \cdots & 0
\end{vmatrix} = 0 \qquad (17)$$
15. Sufficient conditions for a general problem (contd.)
where

$$\begin{aligned}
L_{ij} &= \frac{\partial^2 L}{\partial x_i\,\partial x_j}(\mathbf{X}^*, \boldsymbol{\lambda}^*), \quad i = 1, 2, \ldots, n; \; j = 1, 2, \ldots, n \\
g_{pq} &= \frac{\partial g_p}{\partial x_q}(\mathbf{X}^*), \quad p = 1, 2, \ldots, m; \; q = 1, 2, \ldots, n
\end{aligned} \qquad (18)$$

Similarly, a sufficient condition for f(X) to have a relative maximum at X* is that each root of the polynomial in ε defined by equation (17) be negative. If solving equation (17) yields roots of which some are positive and others negative, then the point X* is neither a maximum nor a minimum.
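As a computational sketch of this test (again on our illustrative problem, not one from the slides), the bordered determinant of (17) can be assembled at the stationary point and its roots in ε found with SymPy:

```python
import sympy as sp

x1, x2, lam, eps = sp.symbols('x1 x2 lambda epsilon', real=True)

# Illustrative problem again: f = x1^2 + x2^2, g = x1 + x2 - 1 = 0,
# with stationary point x1* = x2* = 1/2, lambda* = -1 (found earlier).
f = x1**2 + x2**2
g = x1 + x2 - 1
L = f + lam * g
star = {x1: sp.Rational(1, 2), x2: sp.Rational(1, 2), lam: -1}

xs = (x1, x2)
# L_ij block of (17)/(18), with -epsilon on the diagonal
Lblock = sp.Matrix(2, 2, lambda i, j: sp.diff(L, xs[i], xs[j]).subs(star)) \
    - eps * sp.eye(2)
# Border g_pq = dg_p/dx_q from (18); here m = 1
border = sp.Matrix([[sp.diff(g, v).subs(star) for v in xs]])

M = sp.BlockMatrix([[Lblock, border.T],
                    [border, sp.zeros(1, 1)]]).as_explicit()
print(sp.solve(sp.det(M), eps))  # -> [2]: the only root is positive
```

The single root ε = 2 is positive, so by the condition on (17) the point [1/2, 1/2] is a relative minimum.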
16. Example
Maximize

$$f(\mathbf{X}) = -3x_1^2 - 6x_1x_2 - 5x_2^2 + 7x_1 + 5x_2$$

subject to

$$x_1 + x_2 = 5, \quad \text{or} \quad g(\mathbf{X}) = x_1 + x_2 - 5 = 0$$
17. Example (contd.)

$$L(x_1, x_2, \lambda) = -3x_1^2 - 6x_1x_2 - 5x_2^2 + 7x_1 + 5x_2 + \lambda(x_1 + x_2 - 5)$$

$$\frac{\partial L}{\partial x_1} = -6x_1 - 6x_2 + 7 + \lambda = 0 \;\Rightarrow\; \lambda = 6(x_1 + x_2) - 7 = 6(5) - 7 = 23$$

$$\frac{\partial L}{\partial x_2} = -6x_1 - 10x_2 + 5 + \lambda = 0 \;\Rightarrow\; 3x_1 + 5x_2 = \frac{1}{2}(5 + \lambda) \;\Rightarrow\; 3(x_1 + x_2) + 2x_2 = \frac{1}{2}(5 + \lambda)$$

With x1 + x2 = 5 and λ = 23, this gives 15 + 2x2 = 14, so

$$x_2 = -\frac{1}{2}, \qquad x_1 = \frac{11}{2}$$

$$\mathbf{X}^* = \left[\frac{11}{2},\; -\frac{1}{2}\right]; \qquad \lambda^* = 23$$

For the sufficiency check, with n = 2 and m = 1, equation (17) reduces to

$$\begin{vmatrix}
L_{11}-\epsilon & L_{12} & g_{11} \\
L_{21} & L_{22}-\epsilon & g_{12} \\
g_{11} & g_{12} & 0
\end{vmatrix} = 0$$
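Completing the check (this final evaluation is our worked continuation; it is not visible in the extracted slide): here L11 = −6, L12 = L21 = −6, L22 = −10 and g11 = g12 = 1, so

$$\begin{vmatrix}
-6-\epsilon & -6 & 1 \\
-6 & -10-\epsilon & 1 \\
1 & 1 & 0
\end{vmatrix} = 4 + 2\epsilon = 0 \;\Rightarrow\; \epsilon = -2$$

The single root is negative, so by the condition on equation (17) the point X* = [11/2, −1/2] is a relative maximum.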