Gujarat University
M.Sc. Applied Mathematical Science
Semester - III
Names: Satish Kubavat (Roll no. 13), Smit Kumbhani (14), Bhavini Lalwani (15), Akshat Merchant (16)
Subject Code AMS-502
Subject Name Numerical Optimization
Topic: Conventional Methods for Constrained Multivariate Optimization and the Kuhn–Tucker Method
AGENDA
01 Substitution Method
02 Lagrange Multiplier Technique
03 The Kuhn–Tucker Conditions
04 Quadratic Programming
INTRODUCTION
Substitution Method
The substitution method is used to solve a constrained optimization problem when the constraint equation is simple and not too complex. For example, it is used to maximize or minimize an objective function subject to a single constraint equation of a very simple nature.
It is particularly useful when the constraint can be explicitly solved for one of the variables, allowing you to eliminate that variable from the objective function.
The simple idea behind this method is: 'make one variable the subject of the constraint, substitute the value of that variable into the objective function, and then solve the resulting unconstrained problem.'
The steps of this method are on the next slide.
Steps / Algorithm
Step 1 : Define the Objective Function: Start by defining the objective function,
denoted as f(x,y,…), where x, y,… are the variables you want to optimize.
Step 2 : Define the Constraint(s): Next, define the constraint(s) that must be satisfied
in the problem. Constraints are typically given as equations or inequalities involving
the same variables as the objective function. A constraint can be represented as
g(x,y,…)=0.
Step 3 : Solve for a Variable: Identify one of the variables in the constraint equation
that you can explicitly solve for in terms of the other variables. This variable, which
we'll call x, should be chosen in a way that makes it relatively easy to substitute into
the objective function. So, you solve g(x,y,…)=0 for x to get x=h(y,…).
Step 4 : Substitute into the Objective Function: Substitute the expression for x obtained
in step 3 into the objective function. This results in a new objective function with one
less variable: f(h(y,…),y,…).
Step 5 : Unconstrained Optimization: Treat the new objective function as an unconstrained optimization problem. Find the critical points of this function by taking its partial derivatives with respect to the remaining variables (y,…) and setting them equal to zero:
∂f(h(y,…),y,…)/∂y = 0, …
Step 6 : Solve for the Remaining Variables: Solve the system of equations obtained in
step 5 to find the values of the remaining variables (y,…) that optimize the modified
objective function.
Step 7 : Find x: Use the expression x=h(y,…) from step 3 to find the value of x
corresponding to the optimal values of y,….
Check the Feasibility: Ensure that the values of x,y,… satisfy the original constraints
g(x,y,…)=0. If they do, you have found a solution to the constrained optimization
problem.
Step 8 : Interpret the Results: The values of the variables that satisfy the constraints
and optimize the original objective function represent the solution to the constrained
optimization problem.
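The steps above can be sketched in code. The toy problem below is my own choice, not from the slides: minimize f(x, y) = x² + y² subject to x + y = 1, eliminating y via the constraint and locating the stationary point of the reduced objective by bisection on its numerical derivative.

```python
# Substitution method sketch on a hypothetical toy problem:
# minimize f(x, y) = x^2 + y^2  subject to  g(x, y) = x + y - 1 = 0.

def f(x, y):
    return x**2 + y**2

# Step 3: solve the constraint for y:  y = h(x) = 1 - x
def h(x):
    return 1 - x

# Steps 4-5: the reduced one-variable objective and its numerical derivative
def reduced(x):
    return f(x, h(x))                      # = 2x^2 - 2x + 1

def d_reduced(x, eps=1e-6):
    return (reduced(x + eps) - reduced(x - eps)) / (2 * eps)

# Step 6: find the stationary point by bisection on the derivative
# (the derivative 4x - 2 is increasing, so bisection applies)
lo, hi = -10.0, 10.0
for _ in range(100):
    mid = (lo + hi) / 2
    if d_reduced(mid) > 0:
        hi = mid
    else:
        lo = mid
x_star = (lo + hi) / 2
y_star = h(x_star)                         # Step 7: recover the eliminated variable

print(round(x_star, 4), round(y_star, 4), round(reduced(x_star), 4))
# x* = y* = 0.5, minimum value 0.5
```

Step 8 (checking feasibility) holds automatically here, since y was recovered from the constraint itself.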
Minimum and Maximum (One Variable)
Checking for Maximum or Minimum for One Variable
• Note that we have converted a constrained problem into a problem without constraints by substitution.
• We had 2 variables in the objective function, but after substituting the constraint into the objective function we have an objective function of only 1 variable, so we can check for a maximum or a minimum by taking the second derivative of the objective function.
• If f″(x) > 0 ⇒ convex function ⇒ global minimum
• If f″(x) < 0 ⇒ concave function ⇒ global maximum
Checking for Maximum or Minimum (Two Variables)
After substitution, if we have two or more variables in the objective function, then we check for a minimum or maximum with the Hessian matrix method.
For two variables, calculate the Hessian
H = [ ∂²f/∂x1²     ∂²f/∂x1∂x2 ]
    [ ∂²f/∂x2∂x1   ∂²f/∂x2²   ]
Now calculate d1 = ∂²f/∂x1².
Then calculate d2, where d2 is the determinant of the above Hessian matrix.
If d1 > 0, d2 > 0 ⇒ positive definite ⇒ convex function ⇒ global minimum
If d1 < 0, d2 > 0 ⇒ negative definite ⇒ concave function ⇒ global maximum
If d2 < 0 ⇒ indefinite ⇒ neither convex nor concave
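The Hessian test can be carried out numerically. The sketch below is an illustration with my own choice of test functions (not from the slides): it estimates the second partials by central differences and applies the d1, d2 rules.

```python
# Numerical version of the two-variable Hessian test (illustrative sketch).

def hessian_classify(f, x, y, eps=1e-4):
    # central-difference estimates of the second partial derivatives
    fxx = (f(x+eps, y) - 2*f(x, y) + f(x-eps, y)) / eps**2
    fyy = (f(x, y+eps) - 2*f(x, y) + f(x, y-eps)) / eps**2
    fxy = (f(x+eps, y+eps) - f(x+eps, y-eps)
           - f(x-eps, y+eps) + f(x-eps, y-eps)) / (4*eps**2)
    d1 = fxx                       # first leading principal minor
    d2 = fxx*fyy - fxy**2          # determinant of the Hessian
    if d1 > 0 and d2 > 0:
        return "convex -> global minimum"
    if d1 < 0 and d2 > 0:
        return "concave -> global maximum"
    if d2 < 0:
        return "indefinite"
    return "inconclusive"

print(hessian_classify(lambda x, y: x**2 + y**2, 0, 0))    # convex case
print(hessian_classify(lambda x, y: -x**2 - y**2, 0, 0))   # concave case
```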
EXAMPLE
Example
If K units of capital and L units of labor are used, a company can produce KL units of a manufactured good. Capital can be purchased at $4/unit and labor can be purchased at $1/unit. A total of $8 is available to purchase capital and labor. How can the firm maximize the quantity of the good that can be manufactured?
Let K units of capital and L units of labor be purchased. Then K and L must satisfy 4K + L ≤ 8, K, L ≥ 0. Thus, the firm wants to solve the following constrained maximization problem:
max z = f(L, K) = KL
s.t. 4K + L ≤ 8
     K, L ≥ 0
Now substitute L = 8 − 4K into the objective function, and we get the new objective function
max z = K(8 − 4K)
max z = f(K) = 8K − 4K²
After substituting the constraint into the objective function we have an objective function of only one variable, so we can check for a maximum or minimum by taking the second derivative of the objective function.
Now f′(K) = 8 − 8K, and f′(K) = 0 ⇒ K = 1
f″(K) = −8 < 0
So f(K) is a concave function; therefore K = 1 is a global maximum point, and the firm can manufacture a maximum of f(1) = 4 units of the good.
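As a quick numerical check of this example (a sketch, not part of the original slides): at K = 1 the first derivative of the reduced objective f(K) = 8K − 4K² vanishes and the second derivative is −8 < 0, confirming the global maximum f(1) = 4 with L = 4.

```python
# Verify the capital/labor example after substituting L = 8 - 4K.

def f(K):
    return 8*K - 4*K**2

K = 1.0                                          # candidate from f'(K) = 8 - 8K = 0
h = 1e-5
d1 = (f(K + h) - f(K - h)) / (2*h)               # ≈ f'(1) = 0
h2 = 1e-4
d2 = (f(K + h2) - 2*f(K) + f(K - h2)) / h2**2    # ≈ f''(1) = -8 < 0 => concave
L = 8 - 4*K                                      # recover the eliminated variable

print(round(d1, 6), round(d2, 2), f(K), L)
```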
Example
A monopolist producing a single product has two types of customers. If q1 units are produced for customer 1, then customer 1 is willing to pay a price of 70 − 4q1 dollars. If q2 units are produced for customer 2, then customer 2 is willing to pay a price of 150 − 15q2 dollars. For q > 0, the cost of manufacturing q units is 100 + 15q dollars. To maximize profit, how much should the monopolist sell to each customer?
Solution:
Let f(q1, q2) be the monopolist's profit if she produces qi units for customer i. Then (assuming some production takes place)
f(q1, q2) = q1(70 − 4q1) + q2(150 − 15q2) − 100 − 15q1 − 15q2
To find the stationary point(s) of f(q1, q2), we set
∂f/∂q1 = 70 − 8q1 − 15 = 0 ⇒ q1 = 55/8
∂f/∂q2 = 150 − 30q2 − 15 = 0 ⇒ q2 = 9/2
Thus, the only stationary point of f(q1, q2) is (55/8, 9/2). Next we find the Hessian of f(q1, q2):
H(q1, q2) = [ −8    0 ]
            [  0  −30 ]
Since the first leading principal minor of H is −8 < 0, and the second leading principal minor of H is (−8)(−30) = 240 > 0,
 we know that if, for k = 1, 2, …, n, Hk(x) is nonzero and has the same sign as (−1)^k, then a stationary point x is a local maximum for the given NLP.
 So x = (55/8, 9/2) is a local maximum, and the function is also concave.
 Thus, (55/8, 9/2) maximizes profit among all production possibilities (with the possible exception of no production). It yields a profit of
f(55/8, 9/2) = (55/8)(70 − 4 × 55/8) + (9/2)(150 − 15 × 9/2) − 100 − 15 × 55/8 − 15 × 9/2 = $392.81
 The monopolist should sell 55/8 ≈ 7 units to customer 1 and 9/2 = 4.5 units to customer 2.
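A short numerical check of this example (a sketch, not from the original slides): both partial derivatives of the profit function vanish at (55/8, 9/2), and the profit there is $392.81.

```python
# Check that q1 = 55/8, q2 = 9/2 is a stationary point of the profit function
# f(q1, q2) = q1(70 - 4q1) + q2(150 - 15q2) - 100 - 15q1 - 15q2.

def profit(q1, q2):
    return q1*(70 - 4*q1) + q2*(150 - 15*q2) - 100 - 15*q1 - 15*q2

q1, q2 = 55/8, 9/2
h = 1e-6
dq1 = (profit(q1 + h, q2) - profit(q1 - h, q2)) / (2*h)   # ≈ ∂f/∂q1 = 0
dq2 = (profit(q1, q2 + h) - profit(q1, q2 - h)) / (2*h)   # ≈ ∂f/∂q2 = 0

print(round(dq1, 3), round(dq2, 3), round(profit(q1, q2), 2))
# profit at the stationary point is 392.81
```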
INTRODUCTION
Lagrange Multiplier Technique
 The substitution method for solving a constrained optimization problem cannot be used easily when the constraint equation is complex and therefore cannot be solved for one of the decision variables. In such cases of constrained optimization we employ the Lagrange multiplier technique. In this technique, a combined equation called the Lagrangian function is formed, which incorporates both the original objective function and the constraint equation.
 The Lagrangian function is formed in a way that ensures that when it is maximized or minimized, the original objective function is also maximized or minimized, and at the same time all the constraint requirements are fulfilled.
 In creating the Lagrangian function, an artificial variable λ (the Greek letter lambda) is used; it multiplies the constraint function, which has been set equal to zero. λ is known as the Lagrange multiplier.
 Since the Lagrangian function incorporates the constraint equation into the objective function, it can be treated as an unconstrained optimization problem and solved accordingly. Let us illustrate the Lagrange multiplier technique with an example.
NECESSARY
CONDITION
Method and Necessary Conditions
 Lagrange's multiplier method can be used to solve NLPs in which all the constraints are of equality type.
Consider an NLP:
max or min z = f(X)
s.t. g1(x1, x2, x3, …, xn) = b1
     g2(x1, x2, x3, …, xn) = b2
     …
     gm(x1, x2, x3, …, xn) = bm
 To solve the given problem we introduce m Lagrange multipliers λi, and the Lagrangian function becomes
L(x1, x2, …, xn, λ1, …, λm) = f(x) + Σᵢ₌₁ᵐ λi (gi(x1, x2, …, xn) − bi)
Therefore,
L = f(x) + λ1(g1 − b1) + λ2(g2 − b2) + … + λm(gm − bm)
The necessary condition for a point (x1, x2, …, xn, λ1, λ2, …, λm) to be an extreme point is
∂L/∂x1 = ∂L/∂x2 = … = ∂L/∂xn = ∂L/∂λ1 = ∂L/∂λ2 = … = ∂L/∂λm = 0
EXAMPLES
Example
A company is planning to spend $10,000 on advertising. It costs $3,000 per minute to advertise on television and $1,000 per minute to advertise on radio. If the firm buys x minutes of television advertising and y minutes of radio advertising, then its revenue in thousands of dollars is given by f(x, y) = −2x² − y² + xy + 8x + 3y. How can the firm maximize its revenue?
Solution:
We want to solve the following NLP:
max z = −2x² − y² + xy + 8x + 3y
s.t. 3x + y = 10
Then L(x, y, λ) = −2x² − y² + xy + 8x + 3y + λ(10 − 3x − y)
Observe that ∂L/∂λ = 10 − 3x − y = 0 reduces to the constraint 3x + y = 10.
From ∂L/∂x = −4x + y + 8 − 3λ = 0 and ∂L/∂y = x − 2y + 3 − λ = 0 we get y = 3λ − 8 + 4x and x = λ − 3 + 2y.
Thus y = 3λ − 8 + 4(λ − 3 + 2y) = 7λ − 20 + 8y, so
y = 20/7 − λ
x = λ − 3 + 2(20/7 − λ) = 19/7 − λ
Substituting the values of x and y into 10 − 3x − y = 0 yields
10 − 3(19/7 − λ) − (20/7 − λ) = 0
4λ − 1 = 0
λ = 1/4
Then (x, y) = (69/28, 73/28).
The Hessian of f(x, y) is
H = [ −4   1 ]
    [  1  −2 ]
Since each first-order principal minor is negative and H2(x, y) = 7 > 0, f(x, y) is concave. The constraint is linear, so the Lagrange multiplier method does yield the optimal solution to the NLP.
Thus the firm should purchase 69/28 minutes of television time and 73/28 minutes of radio time.
Since λ = 1/4, spending an extra δ thousand dollars on advertising (for small δ) would increase the firm's revenue by approximately 0.25δ thousand dollars.
Python code
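The code on this slide was not preserved in the upload. As a stand-in sketch (my own, not the authors' original), the stationarity system of the advertising example is linear in (x, y, λ) and can be solved with plain Gaussian elimination:

```python
# Solve the stationarity system of L(x, y, λ) = -2x^2 - y^2 + xy + 8x + 3y + λ(10 - 3x - y):
#   ∂L/∂x:  -4x +  y - 3λ = -8
#   ∂L/∂y:    x - 2y -  λ = -3
#   ∂L/∂λ:   3x +  y      = 10

def solve3(A, b):
    # Gaussian elimination with partial pivoting on a 3x3 system
    n = 3
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            k = M[r][col] / M[col][col]
            M[r] = [a - k*p for a, p in zip(M[r], M[col])]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c]*x[c] for c in range(r + 1, n))) / M[r][r]
    return x

A = [[-4,  1, -3],
     [ 1, -2, -1],
     [ 3,  1,  0]]
b = [-8, -3, 10]
x, y, lam = solve3(A, b)
print(round(x, 6), round(69/28, 6))   # x = 69/28 minutes of TV time
print(round(y, 6), round(73/28, 6))   # y = 73/28 minutes of radio time
print(lam)                            # λ = 1/4: an extra $1,000 raises revenue by ≈ $250
```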
Usage
 Finance: In portfolio optimization, investors may want to maximize their
expected return while minimizing risk. The Lagrange multiplier method can
help find the optimal allocation of assets under various constraints, such as
risk tolerance and capital limits.
 Operations Research: The Lagrange multiplier method is a fundamental tool
in linear and nonlinear programming used to optimize various aspects of
logistics and supply chain management, such as production scheduling,
inventory management, and transportation planning.
 Economics: In economics, the Lagrange multiplier method is used to find the
optimal allocation of resources subject to constraints. For instance, a firm may
want to maximize its profit while adhering to various production constraints,
such as labor, capital, and material availability.
 Machine Learning: In machine learning, the Lagrange multiplier method can
be used for support vector machines (SVMs) to find the optimal hyperplane that
best separates data points while considering constraints on the margin.
 Aerospace Engineering: In aircraft design, engineers use optimization
techniques, including the Lagrange multiplier method, to minimize fuel
consumption while meeting constraints related to aircraft weight, safety, and
performance.
INTRODUCTION
 The KKT conditions were originally named after Harold W.
Kuhn and Albert W. Tucker, who first published the conditions in
1951. Later scholars discovered that the necessary conditions for this
problem had been stated by William Karush in his master's thesis in 1939.
 The Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–
Tucker conditions, are first derivative tests (sometimes called first-
order necessary conditions) for a solution in nonlinear programming to
be optimal, provided that some regularity conditions are satisfied.
Stationary point → Lagrange multiplier → KKT conditions
KT CONDITIONS
• Consider the nonlinear problem
Maximize f(x)
s.t. gi(x) ≤ bi
• Convert each i-th inequality constraint into an equality by adding a non-negative slack variable Si²:
gi(x) + Si² = bi
hi(x) = gi(x) + Si² − bi
• The Lagrangian is
L(x, Si, λ) = f(x) − Σi λi hi(x)
            = f(x) − Σi λi (gi(x) + Si² − bi)
NECESSARY CONDITION
 The necessary conditions are given by ∇L = 0:
dL/dx = 0 ⇒ df/dx − Σi λi dgi(x)/dx = 0   (1)
dL/dλi = 0 ⇒ gi(x) + Si² − bi = 0          (2)
dL/dSi = 0 ⇒ −2 λi Si = 0                  (3)
From (2), Si² = bi − gi(x).
 Multiplying equation (3) by Si, we get λi = 0 or bi − gi(x) = 0.
 λi measures the rate of variation of f with respect to bi: dL/dbi = λi.
SUFFICIENT CONDITION
 The KT conditions, which are necessary, are also sufficient conditions if:
 • for a maximum function: f is concave and the feasible region is convex;
 • for a minimum function: f is convex and the feasible region is convex.
Conditions:
Max f(x) s.t. gi(x) ≤ bi : f is concave, gi(x) is convex
Max f(x) s.t. gi(x) ≥ bi : f is concave, gi(x) is concave
Min f(x) s.t. gi(x) ≤ bi : f is convex, gi(x) is convex
Min f(x) s.t. gi(x) ≥ bi : f is convex, gi(x) is concave
EXAMPLES
Example 1
 Find the optimum value of the objective function by solving the Kuhn–Tucker conditions:
Max z = x³ − 3x² + 2x − 1
subject to the constraints
−x ≤ 2
 x ≤ 4
 Solution:
Using the KT conditions:
f = x³ − 3x² + 2x − 1
g1 = −x − 2 ≤ 0
g2 = x − 4 ≤ 0
∂f(x)/∂xj − Σi λi ∂gi(x)/∂xj = 0  (j = 1, 2, …, n)
λi [bi − gi(x)] = 0               (i = 1, 2, …, m)
λi ≥ 0                            (i = 1, 2, …, m)
3x² − 6x + 2 − λ1(−1) − λ2(1) = 0
3x² − 6x + λ1 − λ2 + 2 = 0             ……(1)
λ1[0 − (−x − 2)] = 0 ⇒ λ1(x + 2) = 0   ……(2)
λ2[0 − (x − 4)] = 0 ⇒ λ2(4 − x) = 0    ……(3)
λ1 ≥ 0, λ2 ≥ 0
Case 1) λ1 = 0, λ2 = 0:
From eqn (1), 3x² − 6x + 2 = 0, so x = (6 ± √12)/6, i.e. x = 1 + 1/√3 or x = 1 − 1/√3.
Case 2) λ1 = 0, λ2 > 0:
λ2 > 0 ⇒ 4 − x = 0 ⇒ x = 4. Putting x = 4 into eqn (1) gives 26 + λ1 − λ2 = 0; with λ1 = 0 this gives λ2 = 26. So x = 4, λ1 = 0, λ2 = 26 > 0.
Case 3) λ1 > 0, λ2 = 0:
λ1 > 0 ⇒ x + 2 = 0 ⇒ x = −2. Putting this value into eqn (1) gives 26 + λ1 = 0, so λ1 = −26 < 0. But we require λ1 > 0, so this case is not possible.
Case 4) λ1 > 0, λ2 > 0:
λ1 > 0 requires x + 2 = 0, i.e. x = −2, while λ2 > 0 requires 4 − x = 0, i.e. x = 4. Putting x = −2 into eqn (3) gives λ2(6) = 0, so λ2 = 0, which contradicts λ2 > 0. This case is not possible.
Therefore, the candidate values of x are 1 + 1/√3, 1 − 1/√3, and 4.
f = x³ − 3x² + 2x − 1
f(1 + 1/√3) = −1.38
f(1 − 1/√3) = −0.62
f(4) = 23
So the optimal solution is x = 4 with z = 23.
The maximum value of z is 23.
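The case analysis above can be checked in code (a sketch, not part of the original slides): collect the KT candidate points and pick the feasible one with the largest objective value.

```python
# Enumerate the KT candidates of  max z = x^3 - 3x^2 + 2x - 1
# s.t. -x <= 2, x <= 4,  then take the best feasible one.

import math

def z(x):
    return x**3 - 3*x**2 + 2*x - 1

# Case 1 (λ1 = λ2 = 0): interior roots of 3x^2 - 6x + 2 = 0
interior = [1 + 1/math.sqrt(3), 1 - 1/math.sqrt(3)]
# Active-constraint cases: x = -2 was rejected (λ1 = -26 < 0); x = 4 survives
candidates = [x for x in interior + [4.0] if -2 <= x <= 4]

best = max(candidates, key=z)
print(best, z(best))   # x = 4, z = 23
```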
Example 2
 A monopolist can purchase up to 17.25 oz of a chemical for $10/oz. At a cost of $3/oz, the chemical can be processed into an ounce of product 1; or, at a cost of $5/oz, the chemical can be processed into an ounce of product 2. If x1 oz of product 1 are produced, it sells for a price of $(30 − x1) per ounce. If x2 oz of product 2 are produced, it sells for a price of $(50 − 2x2) per ounce. Determine how the monopolist can maximize profits.
 Solution:
Decision variables:
x1 = ounces of product 1 produced
x2 = ounces of product 2 produced
x3 = ounces of chemical purchased
Then we want to solve the following NLP (profit = selling price − costs):
Objective function:
max z = x1(30 − x1) + x2(50 − 2x2) − 3x1 − 5x2 − 10x3
Constraints:
x1 + x2 ≤ x3, or x1 + x2 − x3 ≤ 0
x3 ≤ 17.25
So by the K–T conditions:
30 − 2x1 − 3 − λ1 = 0       ---------(1)
50 − 4x2 − 5 − λ1 = 0       ---------(2)
−10 + λ1 − λ2 = 0           ---------(3)
λ1(−x1 − x2 + x3) = 0       ---------(4)
λ2(17.25 − x3) = 0          ---------(5)
λ1 ≥ 0                      ---------(6)
λ2 ≥ 0                      ---------(7)
• There are four cases to consider:
Case 1: λ1 = 0, λ2 = 0. This case cannot occur, because (3) would be violated.
Case 2: λ1 = 0, λ2 > 0. If λ1 = 0, then (3) implies λ2 = −10. This would violate (7).
Case 3: λ1 > 0, λ2 = 0. From (3) we obtain λ1 = 10. Now (1) yields x1 = 8.5, and (2) yields x2 = 8.75. From (4) we obtain x1 + x2 = x3, so x3 = 17.25. Thus x1 = 8.5, x2 = 8.75, x3 = 17.25, λ1 = 10, λ2 = 0 satisfies the K–T conditions.
Case 4: λ1 > 0, λ2 > 0. Case 3 yields an optimal solution, so we need not consider Case 4.
Result: Thus, the optimal solution to our problem is to buy 17.25 oz of the chemical and produce 8.5 oz of product 1 and 8.75 oz of product 2.
Python code
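The code on these slides was not preserved in the upload. As a stand-in sketch (my own, not the authors' original), the Case 3 solution can be verified directly against the K–T conditions:

```python
# Verify Case 3 of the chemical example: λ1 = 10, λ2 = 0 gives
# x1 = 8.5, x2 = 8.75, x3 = 17.25, and all K–T conditions hold.

x1, x2, x3 = 8.5, 8.75, 17.25
l1, l2 = 10.0, 0.0

kt = [
    30 - 2*x1 - 3 - l1,        # (1) stationarity in x1
    50 - 4*x2 - 5 - l1,        # (2) stationarity in x2
    -10 + l1 - l2,             # (3) stationarity in x3
    l1 * (-x1 - x2 + x3),      # (4) complementary slackness, constraint 1
    l2 * (17.25 - x3),         # (5) complementary slackness, constraint 2
]
profit = x1*(30 - x1) + x2*(50 - 2*x2) - 3*x1 - 5*x2 - 10*x3
print(all(abs(v) < 1e-9 for v in kt), profit)   # all conditions hold; profit = 225.375
```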
Usage
• Support Vector Machines (SVMs): Optimizing hyperplanes for machine
learning classification.
• Economic Equilibrium: Allocating resources efficiently considering
preferences and constraints.
• Engineering Design: Optimizing mechanical systems for performance and
safety.
• Chemical Process Optimization: Achieving optimal chemical reactions
within constraints.
• Energy Generation: Optimizing power distribution while adhering to
demand and limitations.
• Supply Chain Management: Efficiently allocating resources in logistics operations.
• Health Care Planning: Designing optimal medical treatment plans or device
parameters.
• Structural Engineering: Optimizing structural designs for performance and safety.
• Game Theory: Analyzing optimal strategies in competitive situations.
INTRODUCTION
Consider an NLP whose objective function is the sum of terms of the form x1^k1 x2^k2 ⋯ xn^kn. The degree of the term x1^k1 x2^k2 ⋯ xn^kn is k1 + k2 + … + kn. Thus, the degree of the term x1² x2 is 3, and the degree of the term x1x2 is 2. An NLP whose constraints are linear and whose objective is the sum of terms of the form x1^k1 x2^k2 ⋯ xn^kn (with each term having a degree of 2, 1, or 0) is a quadratic programming problem (QPP).
Quadratic Programming
Wolfe's method is used to solve QPPs in which all variables must be nonnegative. We illustrate the method by solving the following QPP:
min z = −x1 − x2 + (1/2)x1² + x2² − x1x2
s.t. x1 + x2 ≤ 3
     −2x1 − 3x2 ≤ −6
     x1, x2 ≥ 0
The objective function may be shown to be convex, so any point satisfying the Kuhn–Tucker conditions (8′)–(11′) will solve this QPP. After employing excess variables e1 and e2 for the x1 and x2 stationarity conditions in (8′), an excess variable e2′ for the constraint −2x1 − 3x2 ≤ −6, and a slack variable s1′ for the constraint x1 + x2 ≤ 3, the K–T conditions may be written as
Wolfe's Method for Solving Quadratic Programming Problems
x1 − 1 − x2 + λ1 − 2λ2 − e1 = 0
2x2 − 1 − x1 + λ1 − 3λ2 − e2 = 0
x1 + x2 + s1′ = 3
2x1 + 3x2 − e2′ = 6
All variables nonnegative
λ1s1′ = 0, λ2e2′ = 0, e1x1 = 0, e2x2 = 0
Observe that, with the exception of the last four equations, the K–T conditions are all linear or nonnegativity constraints. The last four equations are the complementary slackness conditions for this QPP. For a general QPP, the complementary slackness conditions may be verbally expressed as:
the excess variable ei from the xi constraint in (8′) and xi cannot both be positive; and the slack or excess variable for the ith constraint and λi cannot both be positive. …(12)
To find a point satisfying the K–T conditions (except for the complementary slackness conditions), Wolfe's method simply applies a modified version of Phase I of the two-phase simplex method. We first add an artificial variable to each constraint in the K–T conditions that does not have an obvious basic variable, and then we attempt to minimize the sum of the artificial variables. To ensure that the final solution (with all artificial variables equal to zero) satisfies the complementary slackness conditions (12), Wolfe's method modifies the simplex's choice of the entering variable as follows:
1) Never perform a pivot that would make the ei from the ith constraint in (8′) and xi both basic variables.
2) Never perform a pivot that would make the slack (or excess) variable for the ith constraint and λi both basic variables.
To apply Wolfe's method to our example, we must solve the following LP:
min w = a1 + a2 + a2′
s.t. x1 − x2 + λ1 − 2λ2 − e1 + a1 = 1
     −x1 + 2x2 + λ1 − 3λ2 − e2 + a2 = 1
     x1 + x2 + s1′ = 3
     2x1 + 3x2 − e2′ + a2′ = 6
All variables nonnegative.
Initial tableau for Wolfe's method (Table 1):

Cj              0    0    0    0    0    0    0    0    1    1    1
BV    CB  XB    x1   x2   λ1   λ2   e1   e2   s1′  e2′  a1   a2   a2′   RR
a1    1   1     1   -1    1   -2   -1    0    0    0    1    0    0     -
a2    1   1    -1    2    1   -3    0   -1    0    0    0    1    0    1/2
s1′   0   3     1    1    0    0    0    0    1    0    0    0    0    3/1
a2′   1   6     2    3    0    0    0    0    0   -1    0    0    1    6/3
w=8   Zj−Cj     2    4    2   -5   -1   -1    0   -1    0    0    0

After eliminating the artificial variables from row 0, we obtain the tableau in Table 1. The current basic feasible solution is w = 8, a1 = 1, a2 = 1, s1′ = 3, a2′ = 6. Since x2 has the most positive coefficient in row 0, we choose to enter x2 into the basis.
Second tableau (Table 2):

Cj              0     0    0     0     0    0     0    0    1    1     1
BV    CB  XB    x1    x2   λ1    λ2    e1   e2    s1′  e2′  a1   a2    a2′   RR
a1    1   3/2   1/2   0    3/2  -7/2  -1   -1/2   0    0    1    1/2   0     3
x2    0   1/2  -1/2   1    1/2  -3/2   0   -1/2   0    0    0    1/2   0     -
s1′   0   5/2   3/2   0   -1/2   3/2   0    1/2   1    0    0   -1/2   0    5/3
a2′   1   9/2   7/2   0   -3/2   9/2   0    3/2   0   -1    0   -3/2   1    9/7
w=6   Zj−Cj     4     0    0     1    -1    1     0   -1    0   -2     0

The resulting tableau is Table 2. The current basic feasible solution is w = 6, a1 = 3/2, x2 = 1/2, s1′ = 5/2, a2′ = 9/2. Since x1 has the most positive coefficient in row 0, we now enter x1 into the basis.
Third tableau (Table 3):

Cj              0    0    0      0      0    0     0    0     1    1     1
BV    CB  XB    x1   x2   λ1     λ2     e1   e2    s1′  e2′   a1   a2    a2′   RR
a1    1   6/7   0    0    12/7  -29/7  -1   -5/7   0    1/7   1    5/7  -1/7   6
x2    0   8/7   0    1    2/7   -6/7    0   -2/7   0   -1/7   0    2/7   1/7   -
s1′   0   4/7   0    0    1/7   -3/7    0   -1/7   1    3/7   0    1/7  -3/7  4/3
x1    0   9/7   1    0   -3/7    9/7    0    3/7   0   -2/7   0    3/7   2/7   -
w=6/7 Zj−Cj     0    0    12/7  -29/7  -1   -5/7   0    1/7   0   -2/7  -8/7

The resulting tableau is Table 3. The current basic feasible solution is w = 6/7, a1 = 6/7, x2 = 8/7, s1′ = 4/7, x1 = 9/7.
The simplex method recommends that λ1 should enter the basis. However, Wolfe's modification of the entering-variable rule does not allow λ1 and s1′ to both be basic variables. Thus, λ1 cannot enter the basis. Because e2′ is the only other variable with a positive coefficient in row 0, we now enter e2′ into the basis.
Fourth tableau (Table 4):

Cj              0    0    0     0    0    0     0     0    1    1     1
BV    CB  XB    x1   x2   λ1    λ2   e1   e2    s1′   e2′  a1   a2    a2′   RR
a1    1   2/3   0    0    5/3  -4   -1   -2/3  -1/3   0    1    2/3   0    2/5
x2    0   4/3   0    1    1/3  -1    0   -1/3   1/3   0    0    1/3   0    4/1
e2′   0   4/3   0    0    1/3  -1    0   -1/3   7/3   1    0    1/3  -1    4/1
x1    0   5/3   1    0   -1/3   1    0    1/3   2/3   0    0   -1/3   0     -
w=2/3 Zj−Cj     0    0    5/3  -4   -1   -2/3  -1/3   0    0   -1/3  -1

The resulting tableau is Table 4. The current basic feasible solution is w = 2/3, a1 = 2/3, x2 = 4/3, e2′ = 4/3, x1 = 5/3.
Because s1′ is now a non-basic variable, we can enter λ1 into the basis.
Optimal tableau for Wolfe's method (Table 5):

Cj              0    0    0    0      0     0     0     0    1     1     1
BV    CB  XB    x1   x2   λ1   λ2     e1    e2    s1′   e2′  a1    a2    a2′
λ1    0   2/5   0    0    1   -12/5  -3/5  -2/5  -1/5   0    3/5   2/5   0
x2    0   6/5   0    1    0   -1/5    1/5  -1/5   2/5   0   -1/5   1/5   0
e2′   0   6/5   0    0    0   -1/5    1/5  -1/5   12/5  1   -1/5   1/5  -1
x1    0   9/5   1    0    0    1/5   -1/5   1/5   3/5   0    1/5  -1/5   0
w=0   Zj−Cj     0    0    0    0      0     0     0     0   -1    -1    -1

The resulting tableau is Table 5. This is an optimal tableau. Because w = 0, we have found a solution that satisfies the Kuhn–Tucker conditions and is therefore optimal for the QPP. Thus, the optimal solution to the QPP is x1 = 9/5, x2 = 6/5.
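The tableau result can be sanity-checked in code (a sketch, not part of the original slides): the Wolfe-method answer x1 = 9/5, x2 = 6/5 is feasible, and a brute-force scan of a fine feasible grid finds no lower objective value.

```python
# Check the Wolfe-method answer x1 = 9/5, x2 = 6/5 against a feasible grid of
# the QPP  min z = -x1 - x2 + 0.5 x1^2 + x2^2 - x1 x2.

def z(x1, x2):
    return -x1 - x2 + 0.5*x1**2 + x2**2 - x1*x2

def feasible(x1, x2, tol=1e-9):
    # small tolerance guards against floating-point noise on the boundary
    return (x1 + x2 <= 3 + tol and -2*x1 - 3*x2 <= -6 + tol
            and x1 >= -tol and x2 >= -tol)

x_star = (9/5, 6/5)

# brute-force comparison over a 0.01-spaced feasible grid of [0,3] x [0,3]
best = min(z(i/100, j/100)
           for i in range(301) for j in range(301)
           if feasible(i/100, j/100))

print(feasible(*x_star), round(z(*x_star), 4), round(best, 4))
# the candidate matches the grid minimum
```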
Usage
 Finance - Portfolio Optimization:
In finance, investors often use the Quadratic Programming method to optimize their
investment portfolios. The objective is to maximize returns while managing risk.
This involves quadratic optimization to find the optimal asset allocation under
constraints such as budget constraints and risk limits.
 Operations Research - Production Planning:
In manufacturing and production planning, QP can be used to optimize production
schedules. Companies can maximize profit by adjusting production quantities while
considering constraints like capacity limitations, resource availability, and demand
fluctuations.
 Chemical Engineering - Process Optimization:
In chemical engineering, QP methods are used to optimize chemical processes, such
as reactor design and operation. Engineers aim to maximize product yields while
adhering to constraints related to reaction kinetics, heat transfer, and material
balances.
 Economics - Utility Maximization:
Economists may use QP to solve utility maximization problems. Individuals or
firms aim to maximize utility (or profit) subject to various constraints, which can be
nonlinear, such as production functions or utility functions.
 Transportation - Vehicle Routing:
In logistics and transportation, QP can be employed to optimize vehicle routing and
scheduling. Companies aim to minimize transportation costs while ensuring timely
deliveries and considering vehicle capacity constraints.
 Energy - Power System Optimization:
Power system operators use quadratic programming to optimize the dispatch of
power generation resources. The objective is to minimize production costs while
satisfying constraints on power demand, transmission limits, and environmental
regulations.
Thank you
 
How to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptxHow to Make a Pirate ship Primary Education.pptx
How to Make a Pirate ship Primary Education.pptx
 
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
KSHARA STURA .pptx---KSHARA KARMA THERAPY (CAUSTIC THERAPY)————IMP.OF KSHARA ...
 
URLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website AppURLs and Routing in the Odoo 17 Website App
URLs and Routing in the Odoo 17 Website App
 
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
Presentation by Andreas Schleicher Tackling the School Absenteeism Crisis 30 ...
 
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptxPOINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
POINT- BIOCHEMISTRY SEM 2 ENZYMES UNIT 5.pptx
 
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
“Oh GOSH! Reflecting on Hackteria's Collaborative Practices in a Global Do-It...
 
mini mental status format.docx
mini    mental       status     format.docxmini    mental       status     format.docx
mini mental status format.docx
 
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
Model Call Girl in Tilak Nagar Delhi reach out to us at 🔝9953056974🔝
 
Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1Código Creativo y Arte de Software | Unidad 1
Código Creativo y Arte de Software | Unidad 1
 
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptxSOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
SOCIAL AND HISTORICAL CONTEXT - LFTVD.pptx
 
The basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptxThe basics of sentences session 2pptx copy.pptx
The basics of sentences session 2pptx copy.pptx
 
The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13The Most Excellent Way | 1 Corinthians 13
The Most Excellent Way | 1 Corinthians 13
 
Employee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptxEmployee wellbeing at the workplace.pptx
Employee wellbeing at the workplace.pptx
 
Sanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdfSanyam Choudhary Chemistry practical.pdf
Sanyam Choudhary Chemistry practical.pdf
 
How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17How to Configure Email Server in Odoo 17
How to Configure Email Server in Odoo 17
 

Gujarat University M.Sc. Applied Mathematical Science Numerical Optimization Conventional methods and Kuhn Tucker Method

  • 1. Gujarat University M.Sc. Applied Mathematical Science Semester - III Name Satish Kubavat Smit Kumbhani Bhavini Lalwani Akshat Merchant Roll no. 13 14 15 16 Subject Code AMS-502 Subject Name Numerical Optimization Topic Conventional Methods for Constrained Multivariate Optimization and the Kuhn–Tucker Method
  • 5. Substitution Method The substitution method is used to solve a constrained optimization problem when the constraint equation is simple and not too complex. For example, it is used to maximize or minimize the objective function when the problem is subject to only one constraint equation of a very simple nature. It is particularly useful when a constraint can be explicitly solved for one of the variables, allowing you to eliminate that variable from the objective function. The simple idea behind this method is: 'make any one variable the subject of the constraint, substitute the value of that variable into the objective function, and then solve the resulting unconstrained problem.' The steps of this method are on the next slide:
  • 7. Steps : Step 1 : Define the Objective Function: Start by defining the objective function, denoted as f(x,y,…), where x, y,… are the variables you want to optimize. Step 2 : Define the Constraint(s): Next, define the constraint(s) that must be satisfied in the problem. Constraints are typically given as equations or inequalities involving the same variables as the objective function. A constraint can be represented as g(x,y,…)=0. Step 3 : Solve for a Variable: Identify one of the variables in the constraint equation that you can explicitly solve for in terms of the other variables. This variable, which we'll call x, should be chosen in a way that makes it relatively easy to substitute into the objective function. So, you solve g(x,y,…)=0 for x to get x=h(y,…).
  • 8. Step 4 : Substitute into the Objective Function: Substitute the expression for x obtained in step 3 into the objective function. This results in a new objective function with one less variable: f(h(y,…),y,…). Step 5 : Unconstrained Optimization: Treat the new objective function as an unconstrained optimization problem. Find the critical points of this function by taking its partial derivatives with respect to the remaining variables (y,…) and setting them equal to zero: ∂f(h(y,…),y,…)/∂y = 0, …. Step 6 : Solve for the Remaining Variables: Solve the system of equations obtained in step 5 to find the values of the remaining variables (y,…) that optimize the modified objective function.
  • 9. Step 7 : Find x: Use the expression x=h(y,…) from step 3 to find the value of x corresponding to the optimal values of y,…. Check the Feasibility: Ensure that the values of x,y,… satisfy the original constraints g(x,y,…)=0. If they do, you have found a solution to the constrained optimization problem. Step 8 : Interpret the Results: The values of the variables that satisfy the constraints and optimize the original objective function represent the solution to the constrained optimization problem.
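The steps above can be sketched in code. Below is a minimal, illustrative Python sketch on a hypothetical problem not taken from the slides (maximize f(x, y) = xy subject to x + y = 10); all function names are ours:

```python
# Substitution-method sketch: maximize f(x, y) = x*y subject to x + y = 10.
# Step 3: solve the constraint g(x, y) = x + y - 10 = 0 for y:  y = h(x) = 10 - x.
def f(x, y):
    return x * y

def h(x):                 # y expressed from the constraint
    return 10 - x

def reduced(x):           # Step 4: unconstrained objective after substitution
    return f(x, h(x))

# Steps 5-6: locate the critical point where the derivative of the reduced
# objective vanishes, via a numerical derivative and bisection.
def d_reduced(x, eps=1e-6):
    return (reduced(x + eps) - reduced(x - eps)) / (2 * eps)

lo, hi = 0.0, 10.0        # the derivative is positive at lo and negative at hi
for _ in range(60):
    mid = (lo + hi) / 2
    if d_reduced(mid) > 0:
        lo = mid
    else:
        hi = mid
x_star = (lo + hi) / 2
y_star = h(x_star)        # Step 7: recover the eliminated variable
```

For this problem the critical point is x = y = 5 with f = 25, and both variables satisfy the original constraint (step 8).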
  • 11. Checking for Maximum or Minimum (One Variable) • Note that we have converted a constrained problem into a problem without a constraint by substitution • We had 2 variables in the objective function, but after substituting the constraint into it the objective function has only 1 variable, so we can check for a maximum or minimum by taking the second derivative of the objective function • If f′′(x) > 0 ⇒ convex function ⇒ global minimum • If f′′(x) < 0 ⇒ concave function ⇒ global maximum
  • 12. Checking for Maximum or Minimum (Two Variables) After substitution, if we still have two or more variables in the objective function, then to check for a minimum or maximum we use the Hessian matrix method. For two variables, form the Hessian H = [∂²f/∂x1²  ∂²f/∂x1∂x2 ; ∂²f/∂x2∂x1  ∂²f/∂x2²]. Now calculate d1 = ∂²f/∂x1², then calculate d2, the determinant of the above Hessian matrix. If d1 > 0, d2 > 0 ⇒ positive definite ⇒ convex function ⇒ global minimum. If d1 < 0, d2 > 0 ⇒ negative definite ⇒ concave function ⇒ global maximum. If d1 > 0, d2 < 0 ⇒ indefinite ⇒ neither convex nor concave.
  • 14. Example If K units of capital and L units of labor are used, a company can produce KL units of a manufactured good. Capital can be purchased at $4/unit and labor can be purchased at $1/unit. A total of $8 is available to purchase capital and labor. How can the firm maximize the quantity of the good that can be manufactured? Let K be the units of capital purchased and L the units of labor purchased. Then K and L must satisfy 4K + L ≤ 8, K, L ≥ 0. Thus, the firm wants to solve the following constrained maximization problem: max z = f(L, K) = KL s.t. 4K + L ≤ 8, K, L ≥ 0
  • 15. Now, substitute L = 8 − 4K into the objective function (at the optimum the budget constraint is tight), and we get the new objective function max z = K(8 − 4K), i.e., max z = f(K) = 8K − 4K². After substituting the constraint into the objective function, the objective function has only 1 variable, so we can check for a maximum or minimum by taking the second derivative. Now f′(K) = 8 − 8K, so f′(K) = 0 gives K = 1, and f′′(K) = −8 < 0. So f(K) is a concave function, and therefore K = 1 is the global maximum point, with L = 8 − 4(1) = 4. The maximum quantity of the good the firm can manufacture is f(1) = 4 units.
  • 16. Example A monopolist producing a single product has two types of customers. If q1 units are produced for customer 1, then customer 1 is willing to pay a price of 70 − 4q1 dollars. If q2 units are produced for customer 2, then customer 2 is willing to pay a price of 150 − 15q2 dollars. For q > 0, the cost of manufacturing q units is 100 + 15q dollars. To maximize profit, how much should the monopolist sell to each customer? Solution: Let f(q1, q2) be the monopolist's profit if she produces qi units for customer i. Then (assuming some production takes place) f(q1, q2) = q1(70 − 4q1) + q2(150 − 15q2) − 100 − 15q1 − 15q2 To find the stationary point(s) of f(q1, q2), we set
  • 17. ∂f/∂q1 = 70 − 8q1 − 15 = 0 ⇒ q1 = 55/8; ∂f/∂q2 = 150 − 30q2 − 15 = 0 ⇒ q2 = 9/2. Thus, the only stationary point of f(q1, q2) is (55/8, 9/2). Next we find the Hessian of f(q1, q2): H(q1, q2) = [−8 0; 0 −30]  Since the first leading principal minor of H is −8 < 0, and the second leading principal minor of H is (−8)(−30) = 240 > 0,  we know that if, for k = 1, 2, …, n, Hk(x) is nonzero and has the same sign as (−1)^k, then a stationary point x is a local maximum for the given NLP
  • 18.  So, x = (55/8, 9/2) is a local maximum, and the function is also concave.  Thus, (55/8, 9/2) maximizes profit among all production possibilities (with the possible exception of no production). Then (55/8, 9/2) yields a profit of f(55/8, 9/2) = (55/8)(70 − 4 × 55/8) + (9/2)(150 − 15 × 9/2) − 100 − 15 × 55/8 − 15 × 9/2 = $392.81  The monopolist should sell 55/8 ≈ 7 units to customer 1 and 9/2 ≈ 4 units to customer 2.
  • 21. Lagrange Multiplier Technique  The substitution method for solving a constrained optimization problem cannot be used easily when the constraint equation is very complex and therefore cannot be solved for one of the decision variables. In such cases of constrained optimization we employ the Lagrange multiplier technique. In this technique, a combined equation called the Lagrangian function is formed, which incorporates both the original objective function and the constraint equation.  The Lagrangian function is formed in a way which ensures that when it is maximized or minimized, the original objective function is also maximized or minimized while fulfilling all the constraint requirements.
  • 22.  In creating this Lagrangian function, an artificial variable λ (the Greek letter lambda) is used; it multiplies the constraint function after the constraint has been set equal to zero. λ is known as the Lagrange multiplier.  Since the Lagrangian function incorporates the constraint equation into the objective function, it can be treated as an unconstrained optimization problem and solved accordingly. Let us illustrate the Lagrange multiplier technique with the constrained optimization problem solved above by the substitution method.
  • 24. Method and Necessary Conditions  Lagrange's multiplier method can be used to solve NLPs in which all the constraints are of equality type. Consider an NLP: max or min z = f(X) s.t. g1(x1,x2,x3,…,xn) = b1; g2(x1,x2,x3,…,xn) = b2; … ; gm(x1,x2,x3,…,xn) = bm
  • 25. Method and Necessary Conditions  To solve the given problem we consider m Lagrange multipliers, say λi, and the Lagrangian function becomes L(x1,x2,…,xn, λ1,…,λm) = f(x) + Σ_{i=1}^{m} λi(gi(x1, x2, …, xn) − bi) Therefore, L = f(x) + λ1(g1 − b1) + λ2(g2 − b2) + … + λm(gm − bm) The necessary condition for a point (x1, x2, …, xn, λ1, λ2, …, λm) to be an extreme point is ∂L/∂x1 = ∂L/∂x2 = … = ∂L/∂xn = ∂L/∂λ1 = ∂L/∂λ2 = … = ∂L/∂λm = 0
  • 27. Example A company is planning to spend $10,000 on advertising. It costs $3,000 per minute to advertise on television and $1,000 per minute to advertise on radio. If the firm buys x minutes of television advertising and y minutes of radio advertising, then its revenue in thousands of dollars is given by f(x, y) = −2x² − y² + xy + 8x + 3y. How can the firm maximize its revenue? Solution: We want to solve the following NLP: max z = −2x² − y² + xy + 8x + 3y s.t. 3x + y = 10 Then L(x, y, λ) = −2x² − y² + xy + 8x + 3y + λ(10 − 3x − y)
  • 28. Setting ∂L/∂x = −4x + y + 8 − 3λ = 0 and ∂L/∂y = x − 2y + 3 − λ = 0, and observing that ∂L/∂λ = 10 − 3x − y = 0 reduces to the constraint 3x + y = 10, the first two equations give y = 3λ − 8 + 4x, x = λ − 3 + 2y Thus, y = 3λ − 8 + 4(λ − 3 + 2y) = 7λ − 20 + 8y or
  • 29. y = 20/7 − λ and x = λ − 3 + 2(20/7 − λ) = 19/7 − λ. Substituting these values of x and y into 10 − 3x − y = 0 yields 10 − 3(19/7 − λ) − (20/7 − λ) = 0, i.e., 4λ − 1 = 0, so λ = 1/4. Then (x, y) = (69/28, 73/28). The Hessian of f(x, y) is [−4 1; 1 −2]
  • 30. Since each first-order principal minor is negative and H2(x, y) = 7 > 0, f(x, y) is concave. The constraint is linear, so the Lagrange multiplier method does yield the optimal solution to the NLP. Thus the firm should purchase 69/28 minutes of television time and 73/28 minutes of radio time. Since λ = 1/4, spending an extra δ thousand dollars on advertising (for small δ) would increase the firm's revenue by approximately 0.25δ thousand dollars.
  • 34. Usage
  • 35.  Finance: In portfolio optimization, investors may want to maximize their expected return while minimizing risk. The Lagrange multiplier method can help find the optimal allocation of assets under various constraints, such as risk tolerance and capital limits.  Operations Research: The Lagrange multiplier method is a fundamental tool in linear and nonlinear programming used to optimize various aspects of logistics and supply chain management, such as production scheduling, inventory management, and transportation planning.  Economics: In economics, the Lagrange multiplier method is used to find the optimal allocation of resources subject to constraints. For instance, a firm may want to maximize its profit while adhering to various production constraints, such as labor, capital, and material availability.
  • 36.  Machine Learning: In machine learning, the Lagrange multiplier method can be used for support vector machines (SVMs) to find the optimal hyperplane that best separates data points while considering constraints on the margin.  Aerospace Engineering: In aircraft design, engineers use optimization techniques, including the Lagrange multiplier method, to minimize fuel consumption while meeting constraints related to aircraft weight, safety, and performance.
  • 40.  The KKT conditions were originally named after Harold W. Kuhn and Albert W. Tucker, who first published the conditions in 1951. Later scholars discovered that the necessary conditions for this problem had been stated by William Karush in his master's thesis in 1939.  The Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn– Tucker conditions, are first derivative tests (sometimes called first- order necessary conditions) for a solution in nonlinear programming to be optimal, provided that some regularity conditions are satisfied.
  • 43. • Consider the nonlinear problem: Maximize f(x) s.t. gi(x) ≤ bi • Convert each i-th inequality constraint into an equality by adding a non-negative slack variable, say Si²: gi(x) + Si² = bi, and define hi(x) = gi(x) + Si² − bi • The Lagrangian is L(x, Si, λ) = f(x) − Σi λi hi(x) = f(x) − Σi λi(gi(x) + Si² − bi).
  • 44. NECESSARY CONDITION  The necessary conditions are given by ∇L = 0: ∂L/∂x = 0 ⇒ ∂f/∂x − Σi λi ∂gi(x)/∂x = 0 (1); ∂L/∂λi = 0 ⇒ gi(x) + Si² − bi = 0 (2); ∂L/∂Si = 0 ⇒ −2λiSi = 0 (3)
  • 45. From (2), Si² = bi − gi(x).  Multiplying equation (3) by Si gives λiSi² = 0, i.e., λi(bi − gi(x)) = 0, so λi = 0 or bi − gi(x) = 0.  λi measures the rate of variation of f w.r.t. bi: ∂L/∂bi = λi.
  • 46. SUFFICIENT CONDITION  The Kuhn–Tucker conditions, which are necessary, are also sufficient conditions in the following cases:  For a maximization problem: f is concave and the feasible region is convex.  For a minimization problem: f is convex and the feasible region is convex.
  • 47. Conditions for sufficiency:
• Max. f(x) s.t. gi(x) ≤ bi : f is concave, gi(x) is convex
• Max. f(x) s.t. gi(x) ≥ bi : f is concave, gi(x) is concave
• Min. f(x) s.t. gi(x) ≤ bi : f is convex, gi(x) is convex
• Min. f(x) s.t. gi(x) ≥ bi : f is convex, gi(x) is concave
  • 49. Example 1  Find the optimum value of the objective function by solving the Kuhn–Tucker conditions: Max Z = x³ − 3x² + 2x − 1 subject to the constraints −x ≤ 2, x ≤ 4
  • 50.  Solution: By the KT conditions, with f = x³ − 3x² + 2x − 1, g1 = −x − 2 ≤ 0, g2 = x − 4 ≤ 0: ∂f(x)/∂xj − Σi λi ∂gi(x)/∂xj = 0 (j = 1, 2, …, n); λi[bi − gi(x)] = 0 (i = 1, 2, …, m); λi ≥ 0 (i = 1, 2, …, m)
  • 51. 3x² − 6x + 2 − λ1(−1) − λ2(1) = 0 ⇒ 3x² − 6x + 2 + λ1 − λ2 = 0 ……(1); λ1[0 − (−x − 2)] = 0 ⇒ λ1(x + 2) = 0 ……(2); λ2[0 − (x − 4)] = 0 ⇒ λ2(4 − x) = 0 ……(3); λ1 ≥ 0, λ2 ≥ 0. Case 1) λ1 = 0, λ2 = 0: from eqn (1), 3x² − 6x + 2 = 0, so x = (6 ± √12)/6, i.e., x = 1 + 1/√3 or x = 1 − 1/√3. Case 2) λ1 = 0, λ2 > 0: λ2 > 0 ⇒ 4 − x = 0 ⇒ x = 4. Putting x = 4 into eqn (1) gives 26 + λ1 − λ2 = 0; here λ1 = 0, so 26 − λ2 = 0. Thus x = 4, λ1 = 0, λ2 = 26 > 0.
  • 52. Case 3) λ1 > 0, λ2 = 0: λ1 > 0 ⇒ x + 2 = 0 ⇒ x = −2. Putting this value into eqn (1) gives 26 + λ1 = 0, i.e., λ1 = −26 < 0. Since we require λ1 > 0, this case is not possible. Case 4) λ1 > 0, λ2 > 0: x + 2 = 0 gives x = −2 while 4 − x = 0 gives x = 4, a contradiction; moreover, putting x = −2 into eqn (3) gives λ2(6) = 0, so λ2 = 0, and eqn (1) then gives λ1 = −26. Since we require λ1 > 0 and λ2 > 0, this case is not possible.
  • 53. Therefore, the candidate values of x are 1 + 1/√3, 1 − 1/√3 and 4. With f = x³ − 3x² + 2x − 1: f(1 + 1/√3) = −1.38, f(1 − 1/√3) = −0.62, f(4) = 23. So the optimal solution is x = 4 with z = 23; the maximum value of z is 23.
  • 54. Example 2  A monopolist can purchase up to 17.25 oz of a chemical for $10/oz. At a cost of $3/oz, the chemical can be processed into an ounce of product 1; or, at a cost of $5/oz, the chemical can be processed into an ounce of product 2. If x1 oz of product 1 are produced, it sells for a price of $(30 − x1) per ounce. If x2 oz of product 2 are produced, it sells for a price of $(50 − 2x2) per ounce. Determine how the monopolist can maximize profits.
  • 55.  Solution: Decision variables: x1 = ounces of product 1 produced, x2 = ounces of product 2 produced, x3 = ounces of chemical processed. Then we want to solve the following NLP (profit = selling revenue − costs): Objective function: max z = x1(30 − x1) + x2(50 − 2x2) − 3x1 − 5x2 − 10x3 Constraints: x1 + x2 ≤ x3, or x1 + x2 − x3 ≤ 0; x3 ≤ 17.25
  • 56. So by the K-T conditions: 30 − 2x1 − 3 − λ1 = 0 ---------(1); 50 − 4x2 − 5 − λ1 = 0 ---------(2); −10 + λ1 − λ2 = 0 ---------(3); λ1(−x1 − x2 + x3) = 0 ---------(4); λ2(17.25 − x3) = 0 ---------(5); λ1 ≥ 0 ---------(6); λ2 ≥ 0 ---------(7)
  • 57. • There are four cases to consider: Case 1: λ1 = 0, λ2 = 0. This case cannot occur, because (3) would be violated. Case 2: λ1 = 0, λ2 > 0. If λ1 = 0, then (3) implies λ2 = −10. This would violate (7). Case 3: λ1 > 0, λ2 = 0. From (3) we obtain λ1 = 10. Now (1) yields x1 = 8.5, and (2) yields x2 = 8.75. From (4) we obtain x1 + x2 = x3, so x3 = 17.25. Thus x1 = 8.5, x2 = 8.75, x3 = 17.25, λ1 = 10, λ2 = 0 satisfies the K-T conditions. Case 4: λ1 > 0, λ2 > 0. Case 3 yields an optimal solution, so we need not consider Case 4. Result: The optimal solution to our problem is to buy 17.25 oz of the chemical and produce 8.5 oz of product 1 and 8.75 oz of product 2.
  • 61. Usage
  • 62. • Support Vector Machines (SVMs): Optimizing hyperplanes for machine learning classification. • Economic Equilibrium: Allocating resources efficiently considering preferences and constraints. • Engineering Design: Optimizing mechanical systems for performance and safety. • Chemical Process Optimization: Achieving optimal chemical reactions within constraints. • Energy Generation: Optimizing power distribution while adhering to demand and limitations.
  • 63. • Supply Chain Management: Efficiently allocating resources in logistics operations. • Health Care Planning: Designing optimal medical treatment plans or device parameters. • Structural Engineering: Optimizing structural designs for performance and safety. • Game Theory: Analyzing optimal strategies in competitive situations.
  • 64.
  • 66. Quadratic Programming Consider an NLP whose objective function is the sum of terms of the form x1^k1 x2^k2 ⋯ xn^kn. The degree of the term x1^k1 x2^k2 ⋯ xn^kn is k1 + k2 + … + kn. Thus, the degree of the term x1²x2 is 3, and the degree of the term x1x2 is 2. An NLP whose constraints are linear and whose objective function is the sum of terms of the form x1^k1 x2^k2 ⋯ xn^kn (with each term having a degree of 2, 1, or 0) is a quadratic programming problem (QPP).
  • 67. Wolfe's Method for Solving Quadratic Programming Problems Wolfe's method is used to solve QPPs in which all variables must be nonnegative. We illustrate the method by solving the following QPP: min z = −x1 − x2 + (1/2)x1² + x2² − x1x2 s.t. x1 + x2 ≤ 3, −2x1 − 3x2 ≤ −6, x1, x2 ≥ 0 The objective function may be shown to be convex, so any point satisfying the Kuhn–Tucker conditions (8′)–(11′) will solve this QPP. After employing excess variables e1 and e2 for the two stationarity constraints in (8′), an excess variable e2′ for the constraint −2x1 − 3x2 ≤ −6, and a slack variable s1′ for the constraint x1 + x2 ≤ 3, the K–T conditions may be written as
  • 68. x1 − 1 − x2 + λ1 − 2λ2 − e1 = 0 [e1, e2 are the excess variables from the stationarity conditions]; 2x2 − 1 − x1 + λ1 − 3λ2 − e2 = 0 [s1′ is a slack and e2′ an excess variable]; x1 + x2 + s1′ = 3; 2x1 + 3x2 − e2′ = 6; all variables nonnegative; λ1s1′ = 0, λ2e2′ = 0, e1x1 = 0, e2x2 = 0. Observe that, with the exception of the last four equations, the K–T conditions are all linear or nonnegativity constraints. The last four equations are the complementary slackness conditions for this QPP. For a general QPP, the complementary slackness conditions may be verbally expressed as: ei from the ith constraint in (8′) and xi cannot both be positive …(12); the slack or excess variable for the ith constraint and λi cannot both be positive.
  • 69. To find a point satisfying the K–T conditions (except for the complementary slackness conditions), Wolfe's method simply applies a modified version of Phase I of the two-phase simplex method. We first add an artificial variable to each constraint in the K–T conditions that does not have an obvious basic variable, and then we attempt to minimize the sum of the artificial variables. To ensure that the final solution (with all artificial variables equal to zero) satisfies the complementary slackness conditions (12), Wolfe's method modifies the simplex's choice of the entering variable as follows: 1) Never perform a pivot that would make ei from the ith constraint in (8′) and xi both basic variables. 2) Never perform a pivot that would make the slack (or excess) variable for the ith constraint and λi both basic variables.
  • 70. To apply Wolfe's method to our example, we must solve the following LP: min w = a1 + a2 + a2′ s.t. x1 − x2 + λ1 − 2λ2 − e1 + a1 = 1; −x1 + 2x2 + λ1 − 3λ2 − e2 + a2 = 1; x1 + x2 + s1′ = 3; 2x1 + 3x2 − e2′ + a2′ = 6; all variables nonnegative. Initial tableau for Wolfe's method (Table 1), with Cj = 0 for all original variables and Cj = 1 for a1, a2, a2′:
BV  | CB | XB | x1 | x2 | λ1 | λ2 | e1 | e2 | s1′ | e2′ | a1 | a2 | a2′ | RR (XB/col)
a1  | 1  | 1  | 1  | −1 | 1  | −2 | −1 | 0  | 0   | 0   | 1  | 0  | 0   | −
a2  | 1  | 1  | −1 | 2  | 1  | −3 | 0  | −1 | 0   | 0   | 0  | 1  | 0   | 1/2
s1′ | 0  | 3  | 1  | 1  | 0  | 0  | 0  | 0  | 1   | 0   | 0  | 0  | 0   | 3/1
a2′ | 1  | 6  | 2  | 3  | 0  | 0  | 0  | 0  | 0   | −1  | 0  | 0  | 1   | 6/3
w = 8, Zj − Cj: | 2 | 4 | 2 | −5 | −1 | −1 | 0 | −1 | 0 | 0 | 0 |
  • 71. After eliminating the artificial variables from row 0, we obtain the tableau in Table 1. The current basic feasible solution is w = 8, a1 = 1, a2 = 1, s1′ = 3, a2′ = 6. Since x2 has the most positive coefficient in row 0, we choose to enter x2 into the basis (Table 2):
BV  | CB | XB  | x1   | x2 | λ1   | λ2   | e1 | e2   | s1′ | e2′ | a1 | a2   | a2′ | RR
a1  | 1  | 3/2 | 1/2  | 0  | 3/2  | −7/2 | −1 | −1/2 | 0   | 0   | 1  | 1/2  | 0   | 3
x2  | 0  | 1/2 | −1/2 | 1  | 1/2  | −3/2 | 0  | −1/2 | 0   | 0   | 0  | 1/2  | 0   | −
s1′ | 0  | 5/2 | 3/2  | 0  | −1/2 | 3/2  | 0  | 1/2  | 1   | 0   | 0  | −1/2 | 0   | 5/3
a2′ | 1  | 9/2 | 7/2  | 0  | −3/2 | 9/2  | 0  | 3/2  | 0   | −1  | 0  | −3/2 | 1   | 9/7
w = 6, Zj − Cj: | 4 | 0 | 0 | 1 | −1 | 1 | 0 | −1 | 0 | −2 | 0 |
  • 72. The resulting tableau is Table 2. The current basic feasible solution is w = 6, a1 = 3/2, x2 = 1/2, s1′ = 5/2, a2′ = 9/2. Since x1 has the most positive coefficient in row 0, we now enter x1 into the basis (Table 3):
BV  | CB | XB  | x1 | x2 | λ1   | λ2    | e1 | e2   | s1′ | e2′  | a1 | a2   | a2′  | RR
a1  | 1  | 6/7 | 0  | 0  | 12/7 | −29/7 | −1 | −5/7 | 0   | 1/7  | 1  | 5/7  | −1/7 | 6
x2  | 0  | 8/7 | 0  | 1  | 2/7  | −6/7  | 0  | −2/7 | 0   | −1/7 | 0  | 2/7  | 1/7  | −
s1′ | 0  | 4/7 | 0  | 0  | 1/7  | −3/7  | 0  | −1/7 | 1   | 3/7  | 0  | 1/7  | −3/7 | 4/3
x1  | 0  | 9/7 | 1  | 0  | −3/7 | 9/7   | 0  | 3/7  | 0   | −2/7 | 0  | 3/7  | 2/7  | −
w = 6/7, Zj − Cj: | 0 | 0 | 12/7 | −29/7 | −1 | −5/7 | 0 | 1/7 | 0 | −2/7 | −8/7 |
  • 73. The resulting tableau is Table 3. The current basic feasible solution is w = 6/7, a1 = 6/7, x2 = 8/7, s1′ = 4/7, x1 = 9/7. The simplex method recommends that λ1 should enter the basis. However, Wolfe's modification of the simplex method for selecting the entering variable does not allow λ1 and s1′ to both be basic variables; thus, λ1 cannot enter the basis. Because e2′ is the only other variable with a positive coefficient in row 0, we now enter e2′ into the basis (Table 4):
BV  | CB | XB  | x1 | x2 | λ1   | λ2 | e1 | e2   | s1′  | e2′ | a1 | a2   | a2′ | RR
a1  | 1  | 2/3 | 0  | 0  | 5/3  | −4 | −1 | −2/3 | −1/3 | 0   | 1  | 2/3  | 0   | 2/5
x2  | 0  | 4/3 | 0  | 1  | 1/3  | −1 | 0  | −1/3 | 1/3  | 0   | 0  | 1/3  | 0   | 4/1
e2′ | 0  | 4/3 | 0  | 0  | 1/3  | −1 | 0  | −1/3 | 7/3  | 1   | 0  | 1/3  | −1  | 4/1
x1  | 0  | 5/3 | 1  | 0  | −1/3 | 1  | 0  | 1/3  | 2/3  | 0   | 0  | −1/3 | 0   | −
w = 2/3, Zj − Cj: | 0 | 0 | 5/3 | −4 | −1 | −2/3 | −1/3 | 0 | 0 | −1/3 | −1 |
  • 74. The resulting tableau is Table 4. The current basic feasible solution is w = 2/3, a1 = 2/3, x2 = 4/3, e2′ = 4/3, x1 = 5/3. Because s1′ is now a non-basic variable, we can enter λ1 into the basis. Optimal tableau for Wolfe's method (Table 5):
BV  | CB | XB  | x1 | x2 | λ1 | λ2    | e1   | e2   | s1′  | e2′ | a1   | a2   | a2′
λ1  | 0  | 2/5 | 0  | 0  | 1  | −12/5 | −3/5 | −2/5 | −1/5 | 0   | 3/5  | 2/5  | 0
x2  | 0  | 6/5 | 0  | 1  | 0  | −1/5  | 1/5  | −1/5 | 2/5  | 0   | −1/5 | 1/5  | 0
e2′ | 0  | 6/5 | 0  | 0  | 0  | −1/5  | 1/5  | −1/5 | 12/5 | 1   | −1/5 | 1/5  | −1
x1  | 0  | 9/5 | 1  | 0  | 0  | 1/5   | −1/5 | 1/5  | 3/5  | 0   | 1/5  | −1/5 | 0
w = 0, Zj − Cj: | 0 | 0 | 0 | 0 | 0 | 0 | 0 | 0 | −1 | −1 | −1 |
The resulting tableau is Table 5. This is an optimal tableau. Because w = 0, we have found a solution that satisfies the Kuhn–Tucker conditions and is optimal for the QPP. Thus, the optimal solution to the QPP is x1 = 9/5, x2 = 6/5.
  • 75. Usage
  • 76.  Finance - Portfolio Optimization: In finance, investors often use the Quadratic Programming method to optimize their investment portfolios. The objective is to maximize returns while managing risk. This involves quadratic optimization to find the optimal asset allocation under constraints such as budget constraints and risk limits.  Operations Research - Production Planning: In manufacturing and production planning, QP can be used to optimize production schedules. Companies can maximize profit by adjusting production quantities while considering constraints like capacity limitations, resource availability, and demand fluctuations.  Chemical Engineering - Process Optimization: In chemical engineering, QP methods are used to optimize chemical processes, such as reactor design and operation. Engineers aim to maximize product yields while adhering to constraints related to reaction kinetics, heat transfer, and material balances.
  • 77.  Economics - Utility Maximization: Economists may use QP to solve utility maximization problems. Individuals or firms aim to maximize utility (or profit) subject to various constraints, which can be nonlinear, such as production functions or utility functions.  Transportation - Vehicle Routing: In logistics and transportation, QP can be employed to optimize vehicle routing and scheduling. Companies aim to minimize transportation costs while ensuring timely deliveries and considering vehicle capacity constraints.  Energy - Power System Optimization: Power system operators use quadratic programming to optimize the dispatch of power generation resources. The objective is to minimize production costs while satisfying constraints on power demand, transmission limits, and environmental regulations.