5. Substitution Method
The substitution method is used to solve a constrained optimization problem
when the constraint equation is simple. Typically, it is applied to maximize or
minimize an objective function subject to a single constraint equation of a very
simple nature. It is particularly useful when the constraint can be explicitly
solved for one of the variables, allowing you to eliminate that variable from the
objective function.
The simple idea behind this method is: make any one variable the subject of the
constraint, substitute its value into the objective function, and then solve the
resulting unconstrained problem.
The steps of this method are on the next slide:
7. Steps:
Step 1: Define the objective function, denoted f(x,y,…), where x, y, … are the
variables you want to optimize.
Step 2: Define the constraint(s) that must be satisfied in the problem.
Constraints are typically given as equations or inequalities involving the same
variables as the objective function. A constraint can be represented as
g(x,y,…) = 0.
Step 3: Solve for a variable. Identify one of the variables in the constraint
equation that you can explicitly solve for in terms of the others. This variable,
which we'll call x, should be chosen so that it is relatively easy to substitute
into the objective function: solve g(x,y,…) = 0 for x to get x = h(y,…).
8. Step 4: Substitute the expression for x obtained in step 3 into the objective
function. This yields a new objective function with one less variable:
f(h(y,…), y, …).
Step 5: Unconstrained optimization. Treat the new objective function as an
unconstrained optimization problem. Find its critical points by taking the
partial derivatives with respect to the remaining variables (y, …) and setting
them equal to zero:
∂/∂y f(h(y,…), y, …) = 0, …
Step 6 : Solve for the Remaining Variables: Solve the system of equations obtained in
step 5 to find the values of the remaining variables (y,…) that optimize the modified
objective function.
9. Step 7: Find x. Use the expression x = h(y,…) from step 3 to find the value of x
corresponding to the optimal values of y, ….
Check feasibility: ensure that the values x, y, … satisfy the original constraint
g(x,y,…) = 0. If they do, you have found a solution to the constrained
optimization problem.
Step 8: Interpret the results. The values of the variables that satisfy the
constraints and optimize the original objective function are the solution to the
constrained optimization problem.
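The steps above can be sketched with sympy on a small hypothetical problem (maximize f = xy subject to x + 2y = 8; the problem and the variable names are illustrative, not from the slides):

```python
# Sketch of the substitution method with sympy on a hypothetical problem.
import sympy as sp

x, y = sp.symbols("x y", real=True)
f = x * y                      # Step 1: objective function
g = x + 2 * y - 8              # Step 2: constraint g(x, y) = 0

x_expr = sp.solve(g, x)[0]     # Step 3: solve constraint for x -> 8 - 2*y
f_sub = f.subs(x, x_expr)      # Step 4: one-variable objective (8 - 2*y)*y

# Steps 5-6: unconstrained optimization in the remaining variable y
y_star = sp.solve(sp.diff(f_sub, y), y)[0]

x_star = x_expr.subs(y, y_star)   # Step 7: recover x from the constraint

# Second-derivative check: f_sub'' = -4 < 0 -> concave -> global maximum
assert sp.diff(f_sub, y, 2) < 0
print(x_star, y_star, f_sub.subs(y, y_star))  # 4 2 8
```

The same script structure carries over to any problem where step 3 (solving the constraint explicitly) succeeds.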
11. Checking for Maximum or Minimum (One Variable)
• Note that by substitution we have converted a constrained problem into a
problem without constraints.
• We had 2 variables in the objective function, but after substituting the
constraint, the objective function has only 1 variable, so we can check for a
maximum or minimum by taking the second derivative of the objective
function:
• If f''(x) > 0 ⇒ convex function ⇒ global minimum
• If f''(x) < 0 ⇒ concave function ⇒ global maximum
12. Checking for Maximum or Minimum (Two Variables)
If, after substitution, the objective function still has two or more variables,
then we check for a minimum or maximum with the Hessian matrix method.
For two variables, form the Hessian

H = [ ∂²f/∂x1²     ∂²f/∂x1∂x2 ]
    [ ∂²f/∂x2∂x1   ∂²f/∂x2²   ]

Now calculate d1 = ∂²f/∂x1², and then d2 = det H, the determinant of the
Hessian matrix above.
If d1 > 0, d2 > 0 ⇒ positive definite ⇒ convex function ⇒ global minimum
If d1 < 0, d2 > 0 ⇒ negative definite ⇒ concave function ⇒ global maximum
If d2 < 0 ⇒ indefinite ⇒ neither convex nor concave
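A small helper (illustrative, not from the slides) that applies the d1, d2 classification to a numeric 2×2 Hessian:

```python
# Classify a 2x2 Hessian by its leading principal minors d1 and d2.
import numpy as np

def classify_hessian(H):
    d1 = H[0, 0]                 # first leading principal minor
    d2 = np.linalg.det(H)        # second leading principal minor (det H)
    if d1 > 0 and d2 > 0:
        return "positive definite: convex, global minimum"
    if d1 < 0 and d2 > 0:
        return "negative definite: concave, global maximum"
    if d2 < 0:
        return "indefinite: neither convex nor concave"
    return "inconclusive (a minor is zero)"

print(classify_hessian(np.array([[2.0, 0.0], [0.0, 3.0]])))    # positive definite
print(classify_hessian(np.array([[-4.0, 1.0], [1.0, -2.0]])))  # negative definite
```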
14. Example
If K units of capital and L units of labor are used, a company can produce KL units
of a manufactured good. Capital can be purchased at $4/unit and labor can be
purchased at $1/unit. A total of $8 is available to purchase capital and labor. How
can the firm maximize the quantity of the good that can be manufactured?
Let K units of capital and L units of labor be purchased. Then K and L must
satisfy 4K + L ≤ 8 and K, L ≥ 0. Thus, the firm wants to solve the following
constrained maximization problem:
max z = f(L, K) = KL
s.t. 4K + L ≤ 8
K, L ≥ 0
15. Now substitute L = 8 − 4K into the objective function, and we get the new
objective function
max z = K(8 − 4K)
max z = f(K) = 8K − 4K²
After substituting the constraint into the objective function, the objective
function has only 1 variable, so we can check for a maximum or minimum by
taking the second derivative.
Now, f'(K) = 8 − 8K; setting f'(K) = 0 gives K = 1.
f''(K) = −8, so f''(K) < 0.
So f(K) is a concave function; therefore K = 1 is a global maximum point.
With K = 1 and L = 4, the firm can manufacture at most f(1) = 4 units of the good.
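A quick plain-Python check of this example (illustrative):

```python
# After substituting L = 8 - 4K, the output is f(K) = 8K - 4K^2.
def f(K):
    return 8 * K - 4 * K ** 2

# f'(K) = 8 - 8K vanishes at K = 1; f''(K) = -8 < 0, so K = 1 is the maximum.
K_star = 1.0
L_star = 8 - 4 * K_star
print(K_star, L_star, f(K_star))  # 1.0 4.0 4.0
```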
16. Example
A monopolist producing a single product has two types of customers. If q1 units
are produced for customer 1, then customer 1 is willing to pay a price of 70 − 4q1
dollars. If q2 units are produced for customer 2, then customer 2 is willing to pay a
price of 150 − 15q2 dollars. For q > 0, the cost of manufacturing q units is
100 + 15q dollars. To maximize profit, how much should the monopolist sell to
each customer?
Solution:
Let f(q1, q2) be the monopolist's profit if she produces qi units for customer i.
Then (assuming some production takes place)
f(q1, q2) = q1(70 − 4q1) + q2(150 − 15q2) − 100 − 15q1 − 15q2
To find the stationary point(s) of f(q1, q2), we set
17. ∂f/∂q1 = 70 − 8q1 − 15 = 0 ⇒ q1 = 55/8
∂f/∂q2 = 150 − 30q2 − 15 = 0 ⇒ q2 = 9/2
Thus, the only stationary point of f(q1, q2) is (55/8, 9/2). Next we find the
Hessian of f(q1, q2):

H(q1, q2) = [ −8   0 ]
            [  0 −30 ]

Since the first leading principal minor of H is −8 < 0, and the second leading
principal minor of H is (−8)(−30) = 240 > 0, we can apply the test: if, for
k = 1, 2, …, n, Hk(x) is nonzero and has the same sign as (−1)^k, then a
stationary point x is a local maximum for the given NLP.
18. So x = (55/8, 9/2) is a local maximum, and the function is also concave.
Thus (55/8, 9/2) maximizes profit among all production possibilities
(with the possible exception of no production). Then (55/8, 9/2) yields a profit of
f(q1, q2) = (55/8)(70 − 4 × 55/8) + (9/2)(150 − 15 × 9/2) − 100 − 15 × 55/8 − 15 × 9/2
          = $392.81
The monopolist should sell 55/8 ≈ 7 units to customer 1 and 9/2 ≈ 4 units to
customer 2.
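A quick numeric check of this profit calculation (illustrative; the profit function is transcribed from the example):

```python
# Profit with demand prices 70 - 4*q1 and 150 - 15*q2, cost 100 + 15*q.
def profit(q1, q2):
    return q1 * (70 - 4 * q1) + q2 * (150 - 15 * q2) - (100 + 15 * (q1 + q2))

q1, q2 = 55 / 8, 9 / 2
print(round(profit(q1, q2), 2))  # 392.81
# Nearby points give lower profit, consistent with a local maximum:
print(profit(q1, q2) > profit(q1 + 0.1, q2))  # True
print(profit(q1, q2) > profit(q1, q2 - 0.1))  # True
```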
21. Lagrange Multiplier Technique
The substitution method for solving a constrained optimization problem cannot
be used easily when the constraint equation is very complex and therefore
cannot be solved for one of the decision variables. In such cases of constrained
optimization we employ the Lagrange multiplier technique. In this technique, a
combined equation called the Lagrangian function is formed, which
incorporates both the original objective function and the constraint equation.
The Lagrangian function is formed in a way which ensures that when it is
maximized or minimized, the original objective function is also maximized or
minimized, while all the constraint requirements are fulfilled.
22. In creating the Lagrangian function, an artificial variable λ (the Greek letter
lambda) is introduced; it multiplies the constraint function after the constraint
has been set equal to zero. λ is known as the Lagrange multiplier.
Since the Lagrangian function incorporates the constraint equation into the
objective function, the problem can be treated as an unconstrained
optimization problem and solved accordingly. Let us illustrate the Lagrange
multiplier technique using the constrained optimization problem solved above
by the substitution method.
24. Method and Necessary Conditions
Lagrange's multiplier method can be used to solve NLPs in which
all the constraints are of equality type.
Consider an NLP:
max or min z = f(x)
s.t. g1(x1, x2, x3, …, xn) = b1
g2(x1, x2, x3, …, xn) = b2
……
gm(x1, x2, x3, …, xn) = bm
25. Method and Necessary Conditions
To solve the given problem we introduce m Lagrange multipliers λi, and the
Lagrangian function becomes
L(x1, x2, …, xn, λ1, …, λm) = f(x) + Σ_{i=1}^{m} λi (gi(x1, x2, …, xn) − bi)
Therefore,
L = f(x) + λ1(g1 − b1) + λ2(g2 − b2) + ……… + λm(gm − bm)
The necessary condition for a point (x1, x2, …, xn, λ1, λ2, …, λm) to be an
extreme point is
∂L/∂x1 = ∂L/∂x2 = ….. = ∂L/∂xn = ∂L/∂λ1 = ∂L/∂λ2 = …… = ∂L/∂λm = 0
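These conditions can be sketched with sympy on a hypothetical one-constraint problem (max f = xy subject to x + y = 10, so m = 1; the problem is illustrative, not from the slides):

```python
# Lagrangian necessary conditions with sympy on a hypothetical problem.
import sympy as sp

x, y, lam = sp.symbols("x y lam", real=True)
f = x * y
g, b = x + y, 10

L = f + lam * (g - b)  # L = f(x) + sum_i lam_i*(g_i - b_i)

# Necessary condition: dL/dx = dL/dy = dL/dlam = 0
sol = sp.solve([sp.diff(L, v) for v in (x, y, lam)], (x, y, lam), dict=True)[0]
print(sol[x], sol[y])  # 5 5 : the constrained optimum is at x = y = 5
```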
27. Example
A company is planning to spend $10,000 on advertising. It costs $3,000 per minute
to advertise on television and $1,000 per minute to advertise on radio. If the firm
buys x minutes of television advertising and y minutes of radio advertising, then its
revenue in thousands of dollars is given by f(x, y) = −2x² − y² + xy + 8x + 3y.
How can the firm maximize its revenue?
Solution:
We want to solve the following NLP:
max z = −2x² − y² + xy + 8x + 3y
s.t. 3x + y = 10
Then L(x, y, λ) = −2x² − y² + xy + 8x + 3y + λ(10 − 3x − y)
29. Setting ∂L/∂x = −4x + y + 8 − 3λ = 0 and ∂L/∂y = x − 2y + 3 − λ = 0 and
eliminating x gives
y = 20/7 − λ
x = λ − 3 + 2(20/7 − λ) = 19/7 − λ
Substituting the values of x and y into 10 − 3x − y = 0 yields
10 − 3(19/7 − λ) − (20/7 − λ) = 0
4λ − 1 = 0
λ = 1/4
Then (x, y) = (69/28, 73/28).
The Hessian of f(x, y) is

H = [ −4  1 ]
    [  1 −2 ]
30. Since each first-order principal minor is negative, and H2(x, y) = 7 > 0,
f(x, y) is concave. The constraint is linear, so the Lagrange multiplier method
does yield the optimal solution to the NLP.
Thus the firm should purchase 69/28 minutes of television time and 73/28
minutes of radio time.
Since λ = 1/4, spending an extra Δ thousand dollars on advertising (for small Δ)
would increase the firm's revenue by approximately Δ/4 thousand dollars; that is,
each extra advertising dollar adds about $0.25 of revenue.
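Because the first-order conditions of this Lagrangian are linear, they can be cross-checked by solving the 3×3 system numerically (an illustrative sketch with numpy):

```python
# First-order conditions of L = -2x^2 - y^2 + xy + 8x + 3y + lam*(10 - 3x - y):
#   dL/dx   = -4x +  y - 3*lam + 8 = 0
#   dL/dy   =   x - 2y -   lam + 3 = 0
#   dL/dlam = 10 - 3x - y          = 0
import numpy as np

A = np.array([[-4.0,  1.0, -3.0],
              [ 1.0, -2.0, -1.0],
              [ 3.0,  1.0,  0.0]])
b = np.array([-8.0, -3.0, 10.0])
x, y, lam = np.linalg.solve(A, b)
print(28 * x, 28 * y, lam)  # approximately 69, 73, 0.25
```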
35. Finance: In portfolio optimization, investors may want to maximize their
expected return while minimizing risk. The Lagrange multiplier method can
help find the optimal allocation of assets under various constraints, such as
risk tolerance and capital limits.
Operations Research: The Lagrange multiplier method is a fundamental tool
in linear and nonlinear programming used to optimize various aspects of
logistics and supply chain management, such as production scheduling,
inventory management, and transportation planning.
Economics: In economics, the Lagrange multiplier method is used to find the
optimal allocation of resources subject to constraints. For instance, a firm may
want to maximize its profit while adhering to various production constraints,
such as labor, capital, and material availability.
36. Machine Learning: In machine learning, the Lagrange multiplier method can
be used for support vector machines (SVMs) to find the optimal hyperplane that
best separates data points while considering constraints on the margin.
Aerospace Engineering: In aircraft design, engineers use optimization
techniques, including the Lagrange multiplier method, to minimize fuel
consumption while meeting constraints related to aircraft weight, safety, and
performance.
40. The KKT conditions were originally named after Harold W.
Kuhn and Albert W. Tucker, who first published the conditions in
1951. Later scholars discovered that the necessary conditions for this
problem had been stated by William Karush in his master's thesis in 1939.
The Karush–Kuhn–Tucker (KKT) conditions, also known as the Kuhn–
Tucker conditions, are first derivative tests (sometimes called first-
order necessary conditions) for a solution in nonlinear programming to
be optimal, provided that some regularity conditions are satisfied.
45. From (2),
Si² = bi − gi(x)
Multiplying equation (3) by Si gives λi Si² = 0, i.e. λi = 0 or bi − gi(x) = 0.
λi measures the rate of variation of f with respect to bi:
dL/dbi = λi.
46. SUFFICIENT CONDITION
The KT conditions, which are necessary, are also sufficient conditions if:
• Maximization problem: f is concave and the feasible region is convex.
• Minimization problem: f is convex and the feasible region is convex.
47. Conditions
• Max f(x) s.t. gi(x) ≤ bi : f is concave, gi(x) is convex
• Max f(x) s.t. gi(x) ≥ bi : f is concave, gi(x) is concave
• Min f(x) s.t. gi(x) ≤ bi : f is convex, gi(x) is convex
• Min f(x) s.t. gi(x) ≥ bi : f is convex, gi(x) is concave
49. Example 1
Find the optimum value of the objective function by solving the
Kuhn-Tucker conditions:
Max z = x³ − 3x² + 2x − 1
Subject to the constraints
−x ≤ 2
x ≤ 4
52. Case 3) λ1 > 0, λ2 = 0
λ1 > 0 ⇒ x + 2 = 0 ⇒ x = −2
Putting this value in eqn (1), we have
26 + λ1 = 0
λ1 = −26 < 0
Here λ1 = −26 < 0, but we assumed λ1 > 0, so this case is not possible.
Case 4) λ1 > 0, λ2 > 0
x + 2 = 0 ⇒ x = −2, and 4 − x = 0 ⇒ x = 4
These cannot hold simultaneously. With x = −2, eqn (1) gives
26 + λ1 − λ2 = 0, i.e. λ1 − λ2 = −26,
and eqn (3) gives λ2(6) = 0, so λ2 = 0 and λ1 = −26,
but we assumed λ1 > 0 and λ2 > 0, so this case is not possible.
53. Therefore, the candidate values of x are 1 + 1/√3, 1 − 1/√3, and 4.
f(x) = x³ − 3x² + 2x − 1
f(1 + 1/√3) = −1.38
f(1 − 1/√3) = −0.62
f(4) = 23
So the optimal solution is x = 4 with z = 23.
The maximum value of z is 23.
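A plain-Python check of the candidate points (illustrative):

```python
# Evaluate f at the interior stationary points 1 +/- 1/sqrt(3) and at the
# boundary candidates x = -2 and x = 4 (feasible region is -2 <= x <= 4).
import math

def f(x):
    return x ** 3 - 3 * x ** 2 + 2 * x - 1

candidates = [1 + 1 / math.sqrt(3), 1 - 1 / math.sqrt(3), -2, 4]
best = max(candidates, key=f)
print(best, f(best))  # 4 23
print(round(f(candidates[0]), 2), round(f(candidates[1]), 2))  # -1.38 -0.62
```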
54. Example 2
A monopolist can purchase up to 17.25 oz of a chemical for $10/oz. At a cost of
$3/oz, the chemical can be processed into an ounce of product 1; or, at a cost
of $5/oz, the chemical can be processed into an ounce of product 2. If x1 oz of
product 1 are produced, it sells for a price of $(30 − x1) per ounce. If x2 oz of
product 2 are produced, it sells for a price of $(50 − 2x2) per ounce.
Determine how the monopolist can maximize profits.
55. Solution:
Decision variables:
x1 = ounces of product 1 produced
x2 = ounces of product 2 produced
x3 = ounces of chemical purchased
Then we want to solve the following NLP (profit = revenue − costs):
Objective function:
max z = x1(30 − x1) + x2(50 − 2x2) − 3x1 − 5x2 − 10x3
Constraints:
x1 + x2 ≤ x3, or x1 + x2 − x3 ≤ 0
x3 ≤ 17.25
57. • There are four cases to consider:
Case 1: λ1 = 0, λ2 = 0. This case cannot occur, because (3) would be violated.
Case 2: λ1 = 0, λ2 > 0. If λ1 = 0, then (3) implies λ2 = −10. This would violate (7).
Case 3: λ1 > 0, λ2 = 0. From (3) we obtain λ1 = 10. Now (1) yields x1 = 8.5, and (2)
yields x2 = 8.75. From (4), we obtain x1 + x2 = x3, so x3 = 17.25. Thus
x1 = 8.5, x2 = 8.75, x3 = 17.25, λ1 = 10, λ2 = 0 satisfies the K-T conditions.
Case 4: λ1 > 0, λ2 > 0. Case 3 yields an optimal solution, so we need not consider
Case 4.
Result: Thus, the optimal solution to our problem is to buy 17.25 oz of the
chemical and produce 8.5 oz of product 1 and 8.75 oz of product 2.
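As a numeric cross-check of the case analysis, the same NLP can be handed to a general-purpose solver (an illustrative sketch using scipy's SLSQP, which is not the K-T case method used above):

```python
# Maximize profit by minimizing its negative under the two constraints.
import numpy as np
from scipy.optimize import minimize

def neg_profit(v):
    x1, x2, x3 = v
    return -(x1 * (30 - x1) + x2 * (50 - 2 * x2) - 3 * x1 - 5 * x2 - 10 * x3)

cons = [{"type": "ineq", "fun": lambda v: v[2] - v[0] - v[1]},  # x1 + x2 <= x3
        {"type": "ineq", "fun": lambda v: 17.25 - v[2]}]        # x3 <= 17.25
res = minimize(neg_profit, x0=[1.0, 1.0, 3.0], bounds=[(0, None)] * 3,
               constraints=cons, method="SLSQP")
print(np.round(res.x, 2))  # approximately [8.5, 8.75, 17.25]
```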
62. • Support Vector Machines (SVMs): Optimizing hyperplanes for machine
learning classification.
• Economic Equilibrium: Allocating resources efficiently considering
preferences and constraints.
• Engineering Design: Optimizing mechanical systems for performance and
safety.
• Chemical Process Optimization: Achieving optimal chemical reactions
within constraints.
• Energy Generation: Optimizing power distribution while adhering to
demand and limitations.
63. • Supply Chain Management: Efficiently allocating resources in logistics operations.
• Health Care Planning: Designing optimal medical treatment plans or device
parameters.
• Structural Engineering: Optimizing structural designs for performance and safety.
• Game Theory: Analyzing optimal strategies in competitive situations.
66. Quadratic Programming
Consider an NLP whose objective function is the sum of terms of the
form x1^k1 x2^k2 ····· xn^kn. The degree of the term x1^k1 x2^k2 ····· xn^kn is
k1 + k2 + ……… + kn. Thus, the degree of the term x1² x2 is 3, and the
degree of the term x1x2 is 2. An NLP whose constraints are linear
and whose objective function is the sum of terms of the form
x1^k1 x2^k2 ····· xn^kn (with each term having a degree of 2, 1, or 0) is a
quadratic programming problem (QPP).
67. Wolfe's Method for Solving Quadratic Programming Problems
Wolfe's method is used to solve QPPs in which all variables must be
nonnegative. We illustrate the method by solving the following QPP:
min z = −x1 − x2 + (1/2)x1² + x2² − x1x2
s.t. x1 + x2 ≤ 3
−2x1 − 3x2 ≤ −6
x1, x2 ≥ 0
The objective function may be shown to be convex, so any point satisfying the
Kuhn-Tucker conditions (8′)-(11′) will solve this QPP. After employing excess
variables e1 for the x1 condition and e2 for the x2 condition in (8′), an excess
variable e2′ for the constraint −2x1 − 3x2 ≤ −6, and a slack variable s1′ for the
constraint x1 + x2 ≤ 3, the K-T conditions may be written as follows.
68. x1 − 1 − x2 + λ1 − 2λ2 − e1 = 0   [here e1, e2 are excess variables]
2x2 − 1 − x1 + λ1 − 3λ2 − e2 = 0   [here s1′ is a slack and e2′ is a surplus variable]
x1 + x2 + s1′ = 3
2x1 + 3x2 − e2′ = 6
All variables nonnegative
λ1 s1′ = 0,  λ2 e2′ = 0,  e1 x1 = 0,  e2 x2 = 0
Observe that, with the exception of the last four equations, the K-T conditions are all linear or
nonnegativity constraints. The last four equations are the complementary slackness conditions
for this QPP. For a general QPP, the complementary slackness conditions may be verbally
expressed as:
the excess variable ei from the xi condition in (8′) and xi cannot both be positive; and
the slack or excess variable for the ith constraint and λi cannot both be positive. …(12)
69. To find a point satisfying the K-T conditions (except for the complementary slackness
conditions), Wolfe's method simply applies a modified version of Phase I of the two-phase
simplex method. We first add an artificial variable to each constraint in the K-T conditions
that does not have an obvious basic variable, and then we attempt to minimize the sum of the
artificial variables. To ensure that the final solution (with all artificial variables equal to zero)
satisfies the complementary slackness conditions (12), Wolfe's method modifies the simplex
method's choice of the entering variable as follows:
1) Never perform a pivot that would make the ei from the ith condition in (8′) and xi both
basic variables.
2) Never perform a pivot that would make the slack (or excess) variable for the ith constraint
and λi both basic variables.
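Without carrying out Wolfe's tableau by hand, the example QPP can be cross-checked numerically (an illustrative sketch with scipy's SLSQP, not Wolfe's method itself):

```python
# The example QPP: min z = -x1 - x2 + (1/2)x1^2 + x2^2 - x1*x2
# s.t. x1 + x2 <= 3, 2x1 + 3x2 >= 6, x1, x2 >= 0.
import numpy as np
from scipy.optimize import minimize

def z(v):
    x1, x2 = v
    return -x1 - x2 + 0.5 * x1 ** 2 + x2 ** 2 - x1 * x2

cons = [{"type": "ineq", "fun": lambda v: 3 - v[0] - v[1]},
        {"type": "ineq", "fun": lambda v: 2 * v[0] + 3 * v[1] - 6}]
res = minimize(z, x0=[1.0, 1.0], bounds=[(0, None)] * 2,
               constraints=cons, method="SLSQP")
print(np.round(res.x, 3), round(res.fun, 3))  # approximately [1.8, 1.2], -2.1
```

Here the budget-style constraint x1 + x2 ≤ 3 binds at the optimum, consistent with the complementary slackness conditions (λ1 > 0 forces s1′ = 0).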
76. Finance - Portfolio Optimization:
In finance, investors often use the Quadratic Programming method to optimize their
investment portfolios. The objective is to maximize returns while managing risk.
This involves quadratic optimization to find the optimal asset allocation under
constraints such as budget constraints and risk limits.
Operations Research - Production Planning:
In manufacturing and production planning, QP can be used to optimize production
schedules. Companies can maximize profit by adjusting production quantities while
considering constraints like capacity limitations, resource availability, and demand
fluctuations.
Chemical Engineering - Process Optimization:
In chemical engineering, QP methods are used to optimize chemical processes, such
as reactor design and operation. Engineers aim to maximize product yields while
adhering to constraints related to reaction kinetics, heat transfer, and material
balances.
77. Economics - Utility Maximization:
Economists may use QP to solve utility maximization problems. Individuals or
firms aim to maximize utility (or profit) subject to various constraints, which can
be nonlinear, such as production functions or utility functions.
Transportation - Vehicle Routing:
In logistics and transportation, QP can be employed to optimize vehicle routing and
scheduling. Companies aim to minimize transportation costs while ensuring timely
deliveries and considering vehicle capacity constraints.
Energy - Power System Optimization:
Power system operators use quadratic programming to optimize the dispatch of
power generation resources. The objective is to minimize production costs while
satisfying constraints on power demand, transmission limits, and environmental
regulations.