Optimization
Assoc. Prof. Dr. Pelin Gündeş
gundesbakir@yahoo.com
2
Optimization
Basic Information
• Instructor: Assoc. Professor Pelin Gundes
(http://atlas.cc.itu.edu.tr/~gundes/)
• E-mail: gundesbakir@yahoo.com
• Office Hours: TBD by email appointment
• Website:
http://atlas.cc.itu.edu.tr/~gundes/teaching/Optimi
zation.htm
• Lecture Time: Wednesday 13:00 - 16:00
• Lecture Venue: M 2180
3
Optimization literature
Textbooks:
1. Nocedal J. and Wright S.J., Numerical Optimization, Springer Series in
Operations Research, Springer, 636 pp, 1999.
2. Spall J.C., Introduction to Stochastic Search and Optimization,
Estimation, Simulation and Control, Wiley, 595 pp, 2003.
3. Chong E.K.P. and Zak S.H., An Introduction to Optimization, Second
Edition, John Wiley & Sons, New York, 476 pp, 2001.
4. Rao S.S., Engineering Optimization - Theory and Practice, John Wiley &
Sons, New York, 903 pp, 1996.
5. Gill P.E., Murray W. and Wright M.H., Practical Optimization, Elsevier,
401 pp., 2004.
6. Goldberg D.E., Genetic Algorithms in Search, Optimization and Machine
Learning, Addison Wesley, Reading, Mass., 1989.
7. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge
University Press, 2004.(available at
http://www.stanford.edu/~boyd/cvxbook/)
4
Optimization literature
Journals:
1. Engineering Optimization
2. ASME Journal of Mechanical Design
3. AIAA Journal
4. ASCE Journal of Structural Engineering
5. Computers and Structures
6. International Journal for Numerical Methods in Engineering
7. Structural Optimization
8. Journal of Optimization Theory and Applications
9. Computers and Operations Research
10. Operations Research and Management Science
5
Optimization
Course Schedule:
1. Introduction to Optimization
2. Classical Optimization Techniques
3. Linear programming and the Simplex method
4. Nonlinear programming-One Dimensional Minimization Methods
5. Nonlinear programming-Unconstrained Optimization Techniques
6. Nonlinear programming-Constrained Optimization Techniques
7. Global Optimization Methods-Genetic algorithms
8. Global Optimization Methods-Simulated Annealing
9. Global Optimization Methods- Coupled Local Minimizers
6
Optimization
Course Prerequisite:
• Familiarity with MATLAB is assumed; if you are not familiar with MATLAB, please visit
http://www.ece.ust.hk/~palomar/courses/ELEC692Q/lecture%2006%20-%20cvx/matlab_crashcourse.pdf
http://www.ece.ust.hk/~palomar/courses/ELEC692Q/lecture%2006%20-%20cvx/official_getting_started.pdf
7
Optimization
• 70% attendance is required!
• Grading:
Homework assignments: 15%
Mid-term projects: 40%
Final Project: 45%
8
Optimization
• There will also be lab sessions for
MATLAB exercises!
9
1. Introduction
• Optimization is the act of obtaining the best result under given
circumstances.
• Optimization can be defined as the process of finding the conditions
that give the maximum or minimum of a function.
• The optimum seeking methods are also known as mathematical
programming techniques and are generally studied as a part of
operations research.
• Operations research is a branch of mathematics concerned with the
application of scientific methods and techniques to decision making
problems and with establishing the best or optimal solutions.
10
1. Introduction
• Operations research (in the US) or operational research (OR)
(in the UK) or yöneylem araştırması (in Turkish) is an
interdisciplinary branch of mathematics which uses methods like:
– mathematical modeling
– statistics
– algorithms
to arrive at optimal or good decisions in complex problems which are concerned with optimizing the maxima (profit, faster assembly line, greater crop yield, higher bandwidth, etc.) or minima (cost, loss, lowering of risk, etc.) of some objective function.
• The eventual intention behind using operations research is to
elicit a best possible solution to a problem mathematically, which
improves or optimizes the performance of the system.
11
1. Introduction
12
1. Introduction
Historical development
• Isaac Newton (1642-1727)
(The development of differential calculus
methods of optimization)
• Joseph-Louis Lagrange (1736-1813)
(Calculus of variations, minimization of functionals,
method of optimization for constrained problems)
• Augustin-Louis Cauchy (1789-1857)
(Solution by direct substitution, steepest
descent method for unconstrained optimization)
13
1. Introduction
Historical development
• Leonhard Euler (1707-1783)
(Calculus of variations, minimization of
functionals)
• Gottfried Leibniz (1646-1716)
(Differential calculus methods
of optimization)
14
1. Introduction
Historical development
• George Bernard Dantzig (1914-2005)
(Linear programming and Simplex method (1947))
• Richard Bellman (1920-1984)
(Principle of optimality in dynamic
programming problems)
• Harold William Kuhn (1925-)
(Necessary and sufficient conditions for the optimal solution of
programming problems, game theory)
15
1. Introduction
Historical development
• Albert William Tucker (1905-1995)
(Necessary and sufficient conditions
for the optimal solution of programming
problems, nonlinear programming, game
theory: his PhD student
was John Nash)
• John von Neumann (1903-1957)
(game theory)
16
1. Introduction
• Mathematical optimization problem:

minimize f0(x)
subject to gi(x) ≤ bi,  i = 1, …, m

• f0 : Rn → R: objective function
• x = (x1, …, xn): design variables (unknowns of the problem, they must be linearly independent)
• gi : Rn → R, i = 1, …, m: inequality constraints
• The problem is a constrained optimization problem
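As a small concrete illustration of this standard form, the sketch below sets up and solves a two-variable constrained problem numerically. This is an added example, not from the slides: the objective, constraint, and data are made up, and SciPy is assumed to be available (the course itself works in MATLAB, where fmincon plays the same role).

```python
# Minimal sketch of the standard form: minimize f0(x) subject to g1(x) <= b1.
# Illustrative data: f0(x) = (x1 - 1)^2 + (x2 - 2)^2 and g1(x) = x1 + x2 <= 2.
from scipy.optimize import minimize

f0 = lambda x: (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2
# SciPy expects inequality constraints as c(x) >= 0, so g1(x) <= b1 becomes b1 - g1(x) >= 0
cons = [{"type": "ineq", "fun": lambda x: 2.0 - (x[0] + x[1])}]

res = minimize(f0, x0=[0.0, 0.0], constraints=cons)
print(res.x)  # approximately [0.5, 1.5]: the unconstrained minimum (1, 2) is infeasible
```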
17
1. Introduction
• If a point x* corresponds to the minimum value of the function f (x), the
same point also corresponds to the maximum value of the negative of
the function, -f (x). Thus optimization can be taken to mean
minimization since the maximum of a function can be found by seeking
the minimum of the negative of the same function.
18
1. Introduction
Constraints
• Behaviour constraints: Constraints that represent limitations on
the behaviour or performance of the system are termed behaviour or
functional constraints.
• Side constraints: Constraints that represent physical limitations on
design variables such as manufacturing limitations.
19
1. Introduction
Constraint Surface
• For illustration purposes, consider an optimization problem with only
inequality constraints gj(X) ≤ 0. The set of values of X that satisfy
the equation gj(X) = 0 forms a hypersurface in the design space and
is called a constraint surface.
20
1. Introduction
Constraint Surface
• Note that this is an (n−1)-dimensional subspace, where n is the
number of design variables. The constraint surface divides the
design space into two regions: one in which gj(X) < 0 and the other
in which gj(X) > 0.
21
1. Introduction
Constraint Surface
• Thus the points lying on the hypersurface will satisfy the constraint
gj (X) critically whereas the points lying in the region where gj (X) >0
are infeasible or unacceptable, and the points lying in the region
where gj (X) < 0 are feasible or acceptable.
22
1. Introduction
Constraint Surface
• In the below figure, a hypothetical two dimensional design space is
depicted where the infeasible region is indicated by hatched lines. A
design point that lies on one or more than one constraint surface is
called a bound point, and the associated constraint is called an
active constraint.
23
1. Introduction
Constraint Surface
• Design points that do not lie on any constraint surface are known as
free points.
24
1. Introduction
Constraint Surface
Depending on whether a
particular design point belongs to
the acceptable or unacceptable
regions, it can be identified as one
of the following four types:
• Free and acceptable point
• Free and unacceptable point
• Bound and acceptable point
• Bound and unacceptable point
25
1. Introduction
• The conventional design procedures aim at finding an acceptable or
adequate design which merely satisfies the functional and other
requirements of the problem.
• In general, there will be more than one acceptable design, and the
purpose of optimization is to choose the best one of the many
acceptable designs available.
• Thus a criterion has to be chosen for comparing the different
alternative acceptable designs and for selecting the best one.
• The criterion with respect to which the design is optimized, when
expressed as a function of the design variables, is known as the
objective function.
26
1. Introduction
• In civil engineering, the objective is usually taken as the
minimization of the cost.
• In mechanical engineering, the maximization of the mechanical
efficiency is the obvious choice of an objective function.
• In aerospace structural design problems, the objective function for
minimization is generally taken as weight.
• In some situations, there may be more than one criterion to be
satisfied simultaneously. An optimization problem involving multiple
objective functions is known as a multiobjective programming
problem.
27
1. Introduction
• With multiple objectives there arises a possibility of conflict, and one
simple way to handle the problem is to construct an overall objective
function as a linear combination of the conflicting multiple objective
functions.
• Thus, if f1(X) and f2(X) denote two objective functions, construct a new
(overall) objective function for optimization as:

f(X) = α1 f1(X) + α2 f2(X)

where α1 and α2 are constants whose values indicate the relative
importance of one objective function to the other.
28
1. Introduction
• The locus of all points satisfying f (X) = c = constant forms a
hypersurface in the design space, and for each value of c there
corresponds a different member of a family of surfaces. These surfaces,
called objective function surfaces, are shown in a hypothetical two-
dimensional design space in the figure below.
29
1. Introduction
• Once the objective function surfaces are drawn along with the constraint
surfaces, the optimum point can be determined without much difficulty.
• But the main problem is that as the number of design variables exceeds
two or three, the constraint and objective function surfaces become
complex even for visualization and the problem has to be solved purely
as a mathematical problem.
30
Example
Example:
Design a uniform column of tubular section to carry a compressive load P=2500 kgf
for minimum cost. The column is made up of a material that has a yield stress of 500
kgf/cm2, modulus of elasticity (E) of 0.85e6 kgf/cm2, and density (ρ) of 0.0025 kgf/cm3.
The length of the column is 250 cm. The stress induced in this column should be less
than the buckling stress as well as the yield stress. The mean diameter of the column
is restricted to lie between 2 and 14 cm, and columns with thicknesses outside the
range 0.2 to 0.8 cm are not available in the market. The cost of the column includes
material and construction costs and can be taken as 5W + 2d, where W is the weight
in kilograms force and d is the mean diameter of the column in centimeters.
31
Example
Example:
The design variables are the mean diameter (d) and tube thickness (t):

X = [x1, x2]ᵀ = [d, t]ᵀ

The objective function to be minimized is given by:

f(X) = 5W + 2d = 5ρlπdt + 2d = 9.82x1x2 + 2x1
32
Example
• The behaviour constraints can be expressed as:
stress induced ≤ yield stress
stress induced ≤ buckling stress
• The induced stress is given by:
induced stress = σi = P/(πdt) = 2500/(πx1x2)
33
Example
• The buckling stress for a pin-connected column is given by:

buckling stress = σb = (Euler buckling load)/(cross-sectional area) = π²EI/(l² · πdt)

where I is the second moment of area of the cross section of the column, given by:

I = (π/64)(do⁴ − di⁴) = (π/64)[(d + t)⁴ − (d − t)⁴]
  = (π/64)[(d + t)² + (d − t)²][(d + t) + (d − t)][(d + t) − (d − t)]
  = (π/8) dt(d² + t²) = (π/8) x1x2(x1² + x2²)
34
Example
• Thus, the behaviour constraints can be restated as:

g1(X) = 2500/(πx1x2) − 500 ≤ 0
g2(X) = 2500/(πx1x2) − π²(0.85×10⁶)(x1² + x2²)/(8(250)²) ≤ 0

• The side constraints are given by:

2 ≤ d ≤ 14
0.2 ≤ t ≤ 0.8
35
Example
• The side constraints can be expressed in standard form as:

g3(X) = −x1 + 2 ≤ 0
g4(X) = x1 − 14 ≤ 0
g5(X) = −x2 + 0.2 ≤ 0
g6(X) = x2 − 0.8 ≤ 0
36
Example
• For a graphical solution, the constraint surfaces are to be
plotted in a two dimensional design space where the two axes
represent the two design variables x1 and x2. To plot the first
constraint surface, we have:
• Thus the curve x1x2=1.593 represents the constraint surface
g1(X)=0. This curve can be plotted by finding several points on
the curve. The points on the curve can be found by giving a
series of values to x1 and finding the corresponding values of x2
that satisfy the relation x1x2=1.593 as shown in the Table below:
g1(X) = 2500/(πx1x2) − 500 ≤ 0,  that is,  x1x2 ≥ 5/π = 1.593
x1 2 4 6 8 10 12 14
x2 0.7965 0.3983 0.2655 0.199 0.1593 0.1328 0.114
37
Example
• The infeasible region represented by g1(X)>0 or x1x2< 1.593 is
shown by hatched lines. These points are plotted and a curve P1Q1
passing through all these points is drawn as shown:
38
Example
• Similarly, the second constraint g2(X) ≤ 0 can be expressed as:

x1x2(x1² + x2²) ≥ 47.3

• The points lying on the constraint surface g2(X) = 0 can be obtained as follows (these points are plotted as curve P2Q2):
x1 2 4 6 8 10 12 14
x2 2.41 0.716 0.219 0.0926 0.0473 0.0274 0.0172
39
Example
• The plotting of side
constraints is simple
since they represent
straight lines.
• After plotting all the six
constraints, the feasible
region is determined as
the bounded area
ABCDEA
40
Example
• Next, the contours of the objective function are to be plotted before finding the optimum point. For this, we plot the curves given by:

f(X) = 9.82x1x2 + 2x1 = c = constant

for a series of values of c. By giving different values to c, the contours of f can be plotted with the help of the following points.
41
Example
• For f(X) = 9.82x1x2 + 2x1 = 50.0:
x2 0.1 0.2 0.3 0.4 0.5 0.6 0.7
x1 16.77 12.62 10.10 8.44 7.24 6.33 5.64
• For f(X) = 9.82x1x2 + 2x1 = 40.0:
x2 0.1 0.2 0.3 0.4 0.5 0.6 0.7
x1 13.40 10.10 8.08 6.75 5.79 5.06 4.51
• For f(X) = 9.82x1x2 + 2x1 = 31.58 (passing through the corner point C):
x2 0.1 0.2 0.3 0.4 0.5 0.6 0.7
x1 10.57 7.96 6.38 5.33 4.57 4 3.56
• For f(X) = 9.82x1x2 + 2x1 = 26.53 (passing through the corner point B):
x2 0.1 0.2 0.3 0.4 0.5 0.6 0.7
x1 8.88 6.69 5.36 4.48 3.84 3.36 2.99
42
Example
• These contours are shown in the figure below, and it can be seen that the objective function cannot be reduced below a value of 26.53 (corresponding to point B) without violating any of the constraints. Thus, the optimum solution is given by point B with d* = x1* = 5.44 cm and t* = x2* = 0.293 cm, with fmin = 26.53.
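The optimum read off the graph can be checked numerically. The sketch below is an addition (plain Python with the standard math module assumed): it evaluates the cost and the two behaviour constraints at point B.

```python
import math

# Verification of the graphical optimum above: d = x1 = 5.44 cm, t = x2 = 0.293 cm.
x1, x2 = 5.44, 0.293

f  = 9.82 * x1 * x2 + 2.0 * x1                      # cost: 5W + 2d
g1 = 2500.0 / (math.pi * x1 * x2) - 500.0           # yield-stress constraint
g2 = (2500.0 / (math.pi * x1 * x2)
      - math.pi**2 * 0.85e6 * (x1**2 + x2**2) / (8.0 * 250.0**2))  # buckling constraint

print(f)        # about 26.5, matching fmin = 26.53
print(g1, g2)   # both near zero (up to rounding): point B lies on both constraint surfaces
```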
43
Examples
Design of civil engineering structures
• variables: width and height of member cross-sections
• constraints: limit stresses, maximum and minimum dimensions
• objective: minimum cost or minimum weight
Analysis of statistical data and building empirical models
from measurements
• variables: model parameters
• Constraints: physical upper and lower bounds for model parameters
• Objective: prediction error
44
Classification of optimization problems
Classification based on:
• Constraints
– Constrained optimization problem
– Unconstrained optimization problem
• Nature of the design variables
– Static optimization problems
– Dynamic optimization problems
45
Classification of optimization problems
Classification based on:
• Physical structure of the problem
– Optimal control problems
– Non-optimal control problems
• Nature of the equations involved
– Nonlinear programming problem
– Geometric programming problem
– Quadratic programming problem
– Linear programming problem
46
Classification of optimization problems
Classification based on:
• Permissible values of the design variables
– Integer programming problems
– Real valued programming problems
• Deterministic nature of the variables
– Stochastic programming problem
– Deterministic programming problem
47
Classification of optimization problems
Classification based on:
• Separability of the functions
– Separable programming problems
– Non-separable programming problems
• Number of the objective functions
– Single objective programming problem
– Multiobjective programming problem
48
Geometric Programming
• A geometric programming problem (GMP)
is one in which the objective function and
constraints are expressed as posynomials
in X.
49
50
Quadratic Programming Problem
• A quadratic programming problem is a nonlinear programming problem with a
quadratic objective function and linear constraints. It is usually formulated as
follows:

F(X) = c + Σ_{i=1}^{n} qi xi + Σ_{i=1}^{n} Σ_{j=1}^{n} Qij xi xj

subject to

Σ_{i=1}^{n} aij xi = bj,  j = 1, 2, …, m
xi ≥ 0,  i = 1, 2, …, n

where c, qi, Qij, aij, and bj are constants.
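A small numeric sketch of this form is given below. It is an added illustration with made-up data (not from the slides), solved with SciPy's general constrained minimizer rather than a dedicated QP solver.

```python
import numpy as np
from scipy.optimize import minimize

# Tiny quadratic program in the form above: F(x) = c + q.x + x.Q.x
# subject to a.x <= b and x >= 0 (illustrative data only).
c = 0.0
q = np.array([-2.0, -6.0])
Q = np.array([[1.0, -1.0],
              [-1.0, 2.0]])          # positive definite, so the QP is convex
a = np.array([1.0, 1.0])
b = 2.0

F = lambda x: c + q @ x + x @ Q @ x
cons = [{"type": "ineq", "fun": lambda x: b - a @ x}]   # a.x <= b
res = minimize(F, x0=[0.0, 0.0], bounds=[(0, None), (0, None)], constraints=cons)
print(res.x, res.fun)
```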
51
Optimal Control Problem
• An optimal control (OC) problem is
a mathematical programming
problem involving a number of
stages, where each stage evolves
from the preceding stage in a
prescribed manner.
• It is usually described by two
types of variables: the control
(design) and the state variables.
The control variables define the
system and govern the evolution
of the system from one stage to
the next, and the state variables
describe the behaviour or status of
the system in any stage.
52
Optimal Control Problem
• The problem is to find a set of control or design
variables such that the total objective function
(also known as the performance index) over all
stages is minimized subject to a set of
constraints on the control and state variables.
• An OC problem can be stated as follows:

Find X which minimizes f(X) = Σ_{i=1}^{l} fi(xi, yi)

subject to the constraints

qi(xi, yi) + yi = yi+1,  i = 1, 2, …, l
gj(xj) ≤ 0,  j = 1, 2, …, l
hk(yk) ≤ 0,  k = 1, 2, …, l

where xi is the ith control variable, yi is the ith state variable, and fi is the contribution of the ith stage to the total objective function; gj, hk, and qi are functions of xj, yk, and xi and yi, respectively, and l is the total number of stages.
53
Integer Programming Problem
• If some or all of the design variables x1,x2,..,xn of
an optimization problem are restricted to take
on only integer (or discrete) values, the problem
is called an integer programming problem.
• If all the design variables are permitted to take
any real value, the optimization problem is
called a real-valued programming problem.
54
Stochastic Programming Problem
• A stochastic programming problem is an
optimization problem in which some or all of the
parameters (design variables and/or
preassigned parameters) are probabilistic
(nondeterministic or stochastic).
• In other words, stochastic programming deals
with the solution of the optimization problems in
which some of the variables are described by
probability distributions.
55
Separable Programming Problem
• A function f(X) is said to be separable if it can be expressed as the sum of n single-variable functions f1(x1), f2(x2), …, fn(xn), that is,

f(X) = Σ_{i=1}^{n} fi(xi)

• A separable programming problem is one in which the objective function and the constraints are separable and can be expressed in standard form as:

Find X which minimizes f(X) = Σ_{i=1}^{n} fi(xi)

subject to

gj(X) = Σ_{i=1}^{n} gij(xi) ≤ bj,  j = 1, 2, …, m

where bj is a constant.
56
Multiobjective Programming
Problem
• A multiobjective programming problem can be stated as follows:

Find X which minimizes f1(X), f2(X), …, fk(X)

subject to

gj(X) ≤ 0,  j = 1, 2, …, m

where f1, f2, …, fk denote the objective functions to be minimized simultaneously.
57
Review of mathematics
Concepts from linear algebra:
Positive definiteness
• Test 1: A matrix A will be positive definite if all its eigenvalues are positive; that is, all the values of λ that satisfy the determinantal equation

|A − λI| = 0

should be positive. Similarly, the matrix A will be negative definite if its eigenvalues are negative.
58
Review of mathematics
Positive definiteness
• Test 2: Another test that can be used to find the positive definiteness of a matrix A of order n involves evaluation of the determinants

A1 = a11

A2 = | a11  a12 |
     | a21  a22 |

A3 = | a11  a12  a13 |
     | a21  a22  a23 |
     | a31  a32  a33 |

…

An = | a11  a12  …  a1n |
     | a21  a22  …  a2n |
     |  ⋮               |
     | an1  an2  …  ann |

• The matrix A will be positive definite if and only if all the values A1, A2, A3, …, An are positive
• The matrix A will be negative definite if and only if the sign of Aj is (−1)^j for j = 1, 2, …, n
• If some of the Aj are positive and the remaining Aj are zero, the matrix A will be positive semidefinite
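Both tests are easy to carry out numerically. The sketch below is an added illustration (NumPy assumed; the sample matrix is made up) applying the eigenvalue test and the leading-principal-minor test to the same symmetric matrix.

```python
import numpy as np

# Test 1 (eigenvalues) and Test 2 (leading principal minors A1, ..., An)
# applied to a sample symmetric matrix.
A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

print(np.all(np.linalg.eigvalsh(A) > 0))                        # Test 1: all eigenvalues positive?
minors = [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]
print(minors, all(m > 0 for m in minors))                       # Test 2: all minors positive?
```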
59
Review of mathematics
Negative definiteness
• Equivalently, a matrix is negative-definite if all its
eigenvalues are negative
• It is positive-semidefinite if all its eigenvalues are greater than or equal to zero
• It is negative-semidefinite if all its eigenvalues are less than or equal to zero
60
Concepts from linear algebra:
Nonsingular matrix: The determinant of the matrix is not
zero.
Rank: The rank of a matrix A is the order of the largest
nonsingular square submatrix of A, that is, the largest
submatrix with a determinant other than zero.
Review of mathematics
61
Review of mathematics
Solutions of a linear problem
Minimize f(x)=cTx
Subject to g(x): Ax=b
Side constraints: x ≥0
• The existence of a solution to this problem depends on the rows of A.
• If A is square and its rows are linearly independent, then there is a unique solution to the system of equations.
• If det(A) is zero, that is, matrix A is singular, there are either no solutions or infinitely many solutions.
62
Review of mathematics
Suppose

A = [ 1   1   1   ]     b = [ 3   ]     A* = [ 1   1   1    3   ]
    [ −1  1   0.5 ]         [ 1.5 ]          [ −1  1   0.5  1.5 ]

The new matrix A* is called the augmented matrix: the columns of b are added to A. According to the theorems of linear algebra:
• If the augmented matrix A* and the matrix of coefficients A have the same rank r which is less than the number of design variables n (r < n), then there are many solutions.
• If the augmented matrix A* and the matrix of coefficients A do not have the same rank, a solution does not exist.
• If the augmented matrix A* and the matrix of coefficients A have the same rank r = n, where the number of constraints is equal to the number of design variables, then there is a unique solution.
63
Review of mathematics
In the example

A = [ 1   1   1   ]     b = [ 3   ]     A* = [ 1   1   1    3   ]
    [ −1  1   0.5 ]         [ 1.5 ]          [ −1  1   0.5  1.5 ]

the largest square submatrix is a 2 x 2 matrix (since m = 2 and m < n). Taking the submatrix which includes the first two columns of A, the determinant has a value of 2 and therefore is nonsingular. Thus the rank of A is 2 (r = 2). The same columns appear in A*, making its rank also 2. Since r < n, infinitely many solutions exist.
64
Review of mathematics
In the example

A = [ 1   1   1   ]     b = [ 3   ]     A* = [ 1   1   1    3   ]
    [ −1  1   0.5 ]         [ 1.5 ]          [ −1  1   0.5  1.5 ]

one way to determine the solutions is to assign (n−r) variables arbitrary values and use them to determine values for the remaining r variables. The value n−r is often identified as the degree of freedom for the system of equations.
In this example, the degree of freedom is 1 (i.e., 3−2). For instance x3 can be assigned a value of 1, in which case x1 = 0.5 and x2 = 1.5.
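The rank argument above can be reproduced directly. The sketch below is an added check (NumPy assumed) computing the ranks of A and of the augmented matrix A*.

```python
import numpy as np

# Rank check for the example above.
A = np.array([[1.0, 1.0, 1.0],
              [-1.0, 1.0, 0.5]])
b = np.array([[3.0], [1.5]])
A_aug = np.hstack([A, b])                 # the augmented matrix A*

print(np.linalg.matrix_rank(A))           # 2
print(np.linalg.matrix_rank(A_aug))       # 2 -> r < n = 3: infinitely many solutions
```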
65
Homework
What is the solution of the system given below?
Hint: Determine the rank of the matrix of the coefficients and
the augmented matrix.
g1: x1 + x2 = 2
g2: x1 − x2 = 1
g3: x1 + 2x2 = 1
66
2. Classical optimization techniques
Single variable optimization
• Useful in finding the optimum solutions of continuous and differentiable
functions
• These methods are analytical and make use of the techniques of
differential calculus in locating the optimum points.
• Since some of the practical problems involve objective functions that are
not continuous and/or differentiable, the classical optimization techniques
have limited scope in practical applications.
67
2. Classical optimization techniques
Single variable optimization
• A function of one variable f (x)
has a relative or local minimum
at x = x* if f (x*) ≤ f (x*+h)
for all sufficiently small
positive and negative values of
h
• A point x* is called a relative
or local maximum if f (x*) ≥ f
(x*+h) for all values of h
sufficiently close to zero.
(Figure: local minima and global minima of a function of one variable)
68
2. Classical optimization techniques
Single variable optimization
• A function f (x) is said to have a global or absolute
minimum at x* if f (x*) ≤ f (x) for all x, and not just for all
x close to x*, in the domain over which f (x) is defined.
• Similarly, a point x* will be a global maximum of f (x) if f
(x*) ≥ f (x) for all x in the domain.
69
Necessary condition
• If a function f (x) is defined in the
interval a ≤ x ≤ b and has a relative
minimum at x = x*, where a < x* < b,
and if the derivative df (x) / dx = f’(x)
exists as a finite number at x = x*, then
f’(x*)=0
• The theorem does not say that the
function necessarily will have a
minimum or maximum at every point
where the derivative is zero. e.g. f’(x)=0
at x= 0 for the function shown in figure.
However, this point is neither a
minimum nor a maximum. In general, a
point x* at which f’(x*)=0 is called a
stationary point.
70
Necessary condition
• The theorem does not say what
happens if a minimum or a
maximum occurs at a point x*
where the derivative fails to exist.
For example, in the figure, the one-sided limit

lim (h→0) [f(x* + h) − f(x*)]/h

equals m+ (positive) or m− (negative) depending on whether h approaches zero through positive or negative values, respectively. Unless the numbers m+ and m− are equal, the derivative f′(x*) does not exist. If f′(x*) does not exist, the theorem is not applicable. (Figure 2.2)
71
Sufficient condition
• Let f’(x*)=f’’(x*)=…=f (n-1)(x*)=0, but f(n)(x*) ≠ 0. Then f(x*)
is
– A minimum value of f (x) if f (n)(x*) > 0 and n is even
– A maximum value of f (x) if f (n)(x*) < 0 and n is even
– Neither a minimum nor a maximum if n is odd
72
Example
Determine the maximum and minimum values of the function:

f(x) = 12x^5 − 45x^4 + 40x^3 + 5

Solution: Since f′(x) = 60(x^4 − 3x^3 + 2x^2) = 60x^2(x − 1)(x − 2),
f′(x) = 0 at x = 0, x = 1, and x = 2.
The second derivative is:

f″(x) = 60(4x^3 − 9x^2 + 4x)

At x = 1, f″(x) = −60 and hence x = 1 is a relative maximum. Therefore,
fmax = f(x=1) = 12
At x = 2, f″(x) = 240 and hence x = 2 is a relative minimum. Therefore,
fmin = f(x=2) = −11
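This single-variable example can be verified symbolically. The sketch below is an added check, assuming SymPy is available.

```python
import sympy as sp

# Check of the example above: find and classify the stationary points.
x = sp.symbols("x")
f = 12 * x**5 - 45 * x**4 + 40 * x**3 + 5

stationary = sp.solve(sp.diff(f, x), x)       # [0, 1, 2]
for xs in stationary:
    f2 = sp.diff(f, x, 2).subs(x, xs)
    print(xs, f2, f.subs(x, xs))
# x=1: f'' = -60 -> relative maximum, f = 12
# x=2: f'' = 240 -> relative minimum, f = -11
# x=0: f'' = 0   -> test higher derivatives (next slide)
```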
73
Example
Solution cont'd:
At x = 0, f″(x) = 0 and hence we must investigate the next derivative:

f‴(x) = 60(12x^2 − 18x + 4) = 240 at x = 0

Since f‴(x) ≠ 0 at x = 0, x = 0 is neither a maximum nor a minimum; it is an inflection point.
74
Multivariable optimization with no
constraints
• Definition: rth differential of f
If all partial derivatives of the function f through order r ≥ 1 exist and are continuous at a point X*, the polynomial

d^r f(X*) = Σ_{i=1}^{n} Σ_{j=1}^{n} … Σ_{k=1}^{n} (r summations) hi hj … hk ∂^r f(X*)/(∂xi ∂xj … ∂xk)

is called the rth differential of f at X*.
75
Multivariable optimization with no
constraints
• Example: rth differential of f
When r = 2 and n = 3, we have

d²f(X*) = d²f(x1*, x2*, x3*) = Σ_{i=1}^{3} Σ_{j=1}^{3} hi hj ∂²f(X*)/(∂xi ∂xj)
= h1² ∂²f/∂x1² + h2² ∂²f/∂x2² + h3² ∂²f/∂x3² + 2h1h2 ∂²f/∂x1∂x2 + 2h2h3 ∂²f/∂x2∂x3 + 2h1h3 ∂²f/∂x1∂x3

with all derivatives evaluated at X*.
76
Multivariable optimization with no
constraints
• Taylor series expansion
The Taylor series expansion of a function f(X) about a point X* is given by:

f(X) = f(X*) + df(X*) + (1/2!) d²f(X*) + (1/3!) d³f(X*) + … + (1/N!) d^N f(X*) + RN(X*, h)

where the last term, called the remainder, is given by:

RN(X*, h) = (1/(N+1)!) d^{N+1} f(X* + θh),  where 0 < θ < 1 and h = X − X*
77
Example
Find the second-order Taylor's series approximation of the function

f(x1, x2, x3) = x2² x3 + x1 e^{x3}

about the point X* = [1, 0, −2]ᵀ.

Solution: The second-order Taylor's series approximation of the function f about point X* is given by

f(X) ≈ f(X*) + df(X*) + (1/2!) d²f(X*),  evaluated at X* = [1, 0, −2]ᵀ
78
Example cont’d
where

f(X*) = f(1, 0, −2) = e⁻²

df(X*) = [ h1 ∂f/∂x1 + h2 ∂f/∂x2 + h3 ∂f/∂x3 ]|X*
       = [ h1 e^{x3} + 2h2 x2 x3 + h3 (x2² + x1 e^{x3}) ]|(1, 0, −2)
       = h1 e⁻² + h3 e⁻²
79
Example cont’d
where

d²f(X*) = Σ_{i=1}^{3} Σ_{j=1}^{3} hi hj ∂²f(X*)/(∂xi ∂xj)
        = h1² ∂²f/∂x1² + h2² ∂²f/∂x2² + h3² ∂²f/∂x3² + 2h1h2 ∂²f/∂x1∂x2 + 2h2h3 ∂²f/∂x2∂x3 + 2h1h3 ∂²f/∂x1∂x3

With ∂²f/∂x1² = 0, ∂²f/∂x2² = 2x3, ∂²f/∂x3² = x1 e^{x3}, ∂²f/∂x1∂x2 = 0, ∂²f/∂x2∂x3 = 2x2, and ∂²f/∂x1∂x3 = e^{x3}, evaluation at X* = (1, 0, −2) gives:

d²f(X*) = −4h2² + e⁻² h3² + 2e⁻² h1h3
80
Example cont’d
Thus, the Taylor's series approximation is given by:

f(X) ≈ e⁻² + e⁻²(h1 + h3) + (1/2!)(−4h2² + e⁻² h3² + 2e⁻² h1h3)

where h1 = x1 − 1, h2 = x2, and h3 = x3 + 2.
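The approximation can be sanity-checked symbolically. The sketch below is an added verification, assuming SymPy is available and that the function is as reconstructed above (f = x2²x3 + x1 e^{x3}).

```python
import sympy as sp

# Check of the second-order Taylor approximation derived above.
x1, x2, x3 = sp.symbols("x1 x2 x3")
f = x2**2 * x3 + x1 * sp.exp(x3)

h1, h2, h3 = x1 - 1, x2, x3 + 2
approx = (sp.exp(-2) + sp.exp(-2) * (h1 + h3)
          + sp.Rational(1, 2) * (-4 * h2**2 + sp.exp(-2) * h3**2
                                 + 2 * sp.exp(-2) * h1 * h3))

# Exact and approximate values agree at the expansion point X* = (1, 0, -2):
pt = {x1: 1, x2: 0, x3: -2}
print(f.subs(pt), approx.subs(pt))
# And nearby, e.g. at (1.1, 0.1, -1.9), the two values are close:
near = {x1: 1.1, x2: 0.1, x3: -1.9}
print(float(f.subs(near)), float(approx.subs(near)))
```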
81
Multivariable optimization with no
constraints
• Necessary condition
If f(X) has an extreme point (maximum or minimum) at X=X*
and if the first partial derivatives of f (X) exist at X*, then
• Sufficient condition
A sufficient condition for a stationary point X* to be an
extreme point is that the matrix of second partial derivatives
(Hessian matrix) of f (X*) evaluated at X* is
– Positive definite when X* is a relative minimum point
– Negative definite when X* is a relative maximum point
∂f/∂x1 (X*) = ∂f/∂x2 (X*) = … = ∂f/∂xn (X*) = 0
82
Example
Figure shows two frictionless rigid bodies (carts) A and B connected by
three linear elastic springs having spring constants k1, k2, and k3. The
springs are at their natural positions when the applied force P is zero. Find
the displacements x1 and x2 under the force P by using the principle of
minimum potential energy.
83
Example
Solution: According to the principle of minimum potential energy, the
system will be in equilibrium under the load P if the potential energy is
a minimum. The potential energy of the system is given by:
Potential energy (U)
= Strain energy of springs-work done by external forces
U(x1, x2) = [ ½ k1x1² + ½ k2x2² + ½ k3(x2 − x1)² ] − Px2

The necessary conditions for the minimum of U are:

∂U/∂x1 = k1x1 − k3(x2 − x1) = 0
∂U/∂x2 = k2x2 + k3(x2 − x1) − P = 0

The solution of these equations gives the equilibrium displacements:

x1* = P k3 / (k1k2 + k1k3 + k2k3)
x2* = P (k1 + k3) / (k1k2 + k1k3 + k2k3)
84
Example
Solution cont'd: The sufficiency conditions for the minimum at (x1*, x2*) can also be verified by testing the positive definiteness of the Hessian matrix of U. The Hessian matrix of U evaluated at (x1*, x2*) is:

J|(x1*, x2*) = [ ∂²U/∂x1²     ∂²U/∂x1∂x2 ]   = [ k1 + k3   −k3     ]
               [ ∂²U/∂x1∂x2   ∂²U/∂x2²   ]     [ −k3       k2 + k3 ]

The determinants of the square submatrices of J are:

J1 = | k1 + k3 | = k1 + k3 > 0
J2 = | k1 + k3   −k3 ; −k3   k2 + k3 | = k1k2 + k1k3 + k2k3 > 0

since the spring constants are always positive. Thus the matrix J is positive definite and hence (x1*, x2*) corresponds to the minimum of potential energy.
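A numeric check of this example is sketched below (an addition, with made-up spring constants and load; NumPy assumed). The two necessary conditions form a linear system with the Hessian as coefficient matrix.

```python
import numpy as np

# Spring-cart example with illustrative values (kgf/cm and kgf).
k1, k2, k3, P = 1000.0, 2000.0, 3000.0, 100.0

# dU/dx1 = 0 and dU/dx2 = 0 give the linear system J x = [0, P]:
J = np.array([[k1 + k3, -k3],
              [-k3, k2 + k3]])
x = np.linalg.solve(J, np.array([0.0, P]))
D = k1*k2 + k1*k3 + k2*k3
print(x, [P*k3/D, P*(k1 + k3)/D])          # same values both ways
print(np.all(np.linalg.eigvalsh(J) > 0))   # Hessian positive definite -> minimum
```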
85
Semi-definite case
The sufficient conditions for the case when the Hessian
matrix of the given function is semidefinite:
• In case of a function of a single variable, the higher order derivatives
in the Taylor’s series expansion are investigated
86
Semi-definite case
The sufficient conditions for a function of several
variables for the case when the Hessian matrix of the
given function is semidefinite:
• Let the partial derivatives of f of all orders up to the order k ≥ 2 be continuous in the neighborhood of a stationary point X*, and let

d^r f |X=X* = 0 for 1 ≤ r ≤ k − 1  and  d^k f |X=X* ≠ 0

so that d^k f |X=X* is the first nonvanishing higher-order differential of f at X*.
• If k is even:
– X* is a relative minimum if d^k f |X=X* is positive definite
– X* is a relative maximum if d^k f |X=X* is negative definite
– If d^k f |X=X* is semidefinite, no general conclusions can be drawn
• If k is odd, X* is not an extreme point of f(X)
87
Saddle point
• In the case of a function of two variables f (x,y), the
Hessian matrix may be neither positive nor negative
definite at a point (x*,y*) at which
In such a case, the point (x*,y*) is called a saddle point.
• The characteristic of a saddle point is that it corresponds to
a relative minimum or maximum of f (x,y) wrt one
variable, say, x (the other variable being fixed at y=y* )
and a relative maximum or minimum of f (x,y) wrt the
second variable y (the other variable being fixed at x*).
∂f/∂x = ∂f/∂y = 0
88
Saddle point
Example: Consider the function f(x, y) = x² − y². For this function:

∂f/∂x = 2x  and  ∂f/∂y = −2y

These first derivatives are zero at x* = 0 and y* = 0. The Hessian matrix of f at (x*, y*) is given by:

J = [ 2   0  ]
    [ 0   −2 ]

Since this matrix is neither positive definite nor negative definite, the point (x* = 0, y* = 0) is a saddle point.
89
Saddle point
Example cont’d:
It can be seen from the figure that f (x, y*) = f (x, 0) has a relative minimum and f
(x*, y) = f (0, y) has a relative maximum at the saddle point (x*, y*).
90
Example
Find the extreme points of the function

f(x1, x2) = x1³ + x2³ + 2x1² + 4x2² + 6

Solution: The necessary conditions for the existence of an extreme point are:

∂f/∂x1 = 3x1² + 4x1 = x1(3x1 + 4) = 0
∂f/∂x2 = 3x2² + 8x2 = x2(3x2 + 8) = 0

These equations are satisfied at the points: (0, 0), (0, −8/3), (−4/3, 0), and (−4/3, −8/3).
91
Example
Solution cont'd: To find the nature of these extreme points, we have to use the sufficiency conditions. The second-order partial derivatives of f are given by:

∂²f/∂x1² = 6x1 + 4
∂²f/∂x2² = 6x2 + 8
∂²f/∂x1∂x2 = 0

The Hessian matrix of f is given by:

J = [ 6x1 + 4   0       ]
    [ 0         6x2 + 8 ]
92
Example
Solution cont'd:
If J1 = |6x1 + 4| and

J2 = | 6x1 + 4   0       |
     | 0         6x2 + 8 | = (6x1 + 4)(6x2 + 8)

the values of J1 and J2 and the nature of the extreme points are as given in the next slide:
93
Example
Point X       | Value of J1 | Value of J2 | Nature of J       | Nature of X      | f(X)
(0, 0)        | +4          | +32         | Positive definite | Relative minimum | 6
(0, −8/3)     | +4          | −32         | Indefinite        | Saddle point     | 418/27
(−4/3, 0)     | −4          | −32         | Indefinite        | Saddle point     | 194/27
(−4/3, −8/3)  | −4          | +32         | Negative definite | Relative maximum | 50/3
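The table above can be reproduced symbolically. The sketch below is an added check, assuming SymPy is available.

```python
import sympy as sp

# Classify the four stationary points of the example above.
x1, x2 = sp.symbols("x1 x2")
f = x1**3 + x2**3 + 2 * x1**2 + 4 * x2**2 + 6

points = sp.solve([sp.diff(f, x1), sp.diff(f, x2)], [x1, x2], dict=True)
H = sp.hessian(f, (x1, x2))
for p in points:
    Hp = H.subs(p)
    J1, J2 = Hp[0, 0], Hp.det()
    print(p, J1, J2, f.subs(p))
# J1 > 0, J2 > 0 -> relative minimum; J1 < 0, J2 > 0 -> relative maximum;
# J2 < 0 -> saddle point
```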
94
Multivariable optimization with equality
constraints
• Problem statement:
Minimize f = f(X) subject to gj(X) = 0, j = 1, 2, …, m, where

X = [x1, x2, …, xn]ᵀ

Here m is less than or equal to n; otherwise the problem becomes
overdefined and, in general, there will be no solution.
• Solution:
– Solution by direct substitution
– Solution by the method of constrained variation
– Solution by the method of Lagrange multipliers
95
Solution by direct substitution
For a problem with n variables and m equality constraints:
• Solve the m equality constraints and express any set of m
variables in terms of the remaining n-m variables
• Substitute these expressions into the original objective
function, the result is a new objective function involving
only n-m variables
• The new objective function is not subjected to any
constraint, and hence its optimum can be found by using
the unconstrained optimization techniques.
96
Solution by direct substitution
• Simple in theory
• Not convenient from a practical point of view as the
constraint equations will be nonlinear for most of the
problems
• Suitable only for simple problems
97
Example
Find the dimensions of a box of largest volume that can be inscribed in
a sphere of unit radius
Solution: Let the origin of the Cartesian coordinate system x1, x2, x3
be at the center of the sphere and the sides of the box be 2x1, 2x2, and
2x3. The volume of the box is given by:
f(x1, x2, x3) = 8x1x2x3

Since the corners of the box lie on the surface of the sphere of unit radius, x1, x2, and x3 have to satisfy the constraint

x1² + x2² + x3² = 1
98
Example
This problem has three design variables and one equality
constraint. Hence the equality constraint can be used to
eliminate any one of the design variables from the
objective function. If we choose to eliminate x3:
x3 = (1 − x1² − x2²)^(1/2)

Thus, the objective function becomes:

f(x1, x2) = 8x1x2(1 − x1² − x2²)^(1/2)

which can be maximized as an unconstrained function in two variables.
99
Example
The necessary conditions for the maximum of f give:

∂f/∂x1 = 8x2 [ (1 − x1² − x2²)^(1/2) − x1²/(1 − x1² − x2²)^(1/2) ] = 0
∂f/∂x2 = 8x1 [ (1 − x1² − x2²)^(1/2) − x2²/(1 − x1² − x2²)^(1/2) ] = 0

which can be simplified as:

1 − 2x1² − x2² = 0
1 − x1² − 2x2² = 0

From these it follows that x1* = x2* = 1/√3 and hence x3* = 1/√3.
100
Example
This solution gives the maximum volume of the box as:

fmax = 8/(3√3)

To find whether the solution found corresponds to a maximum or a minimum, we apply the sufficiency conditions to f(x1, x2) = 8x1x2(1 − x1² − x2²)^(1/2). The second-order partial derivatives of f at (x1*, x2*) are given by:

∂²f/∂x1² = −32/√3  at (x1*, x2*)
101
Example
The second-order partial derivatives of f at (x1*, x2*) are given by:

∂²f/∂x2² = −32/√3  at (x1*, x2*)
∂²f/∂x1∂x2 = −16/√3  at (x1*, x2*)
102
Example
Since

∂²f/∂x1² < 0  and  (∂²f/∂x1²)(∂²f/∂x2²) − (∂²f/∂x1∂x2)² > 0

the Hessian matrix of f is negative definite at (x1*, x2*). Hence the point (x1*, x2*) corresponds to the maximum of f.
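The direct-substitution result can also be obtained numerically. The sketch below is an added check (NumPy and SciPy assumed), maximizing the unconstrained two-variable function by minimizing its negative.

```python
import numpy as np
from scipy.optimize import minimize

# Maximize f(x1, x2) = 8 x1 x2 (1 - x1^2 - x2^2)^(1/2) by minimizing -f.
def neg_f(x):
    s = max(1.0 - x[0]**2 - x[1]**2, 0.0)   # guard against the square-root domain
    return -8.0 * x[0] * x[1] * np.sqrt(s)

res = minimize(neg_f, x0=[0.5, 0.5])
print(res.x)       # both components near 1/sqrt(3) = 0.577
print(-res.fun)    # near 8/(3*sqrt(3)) = 1.5396
```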
103
Solution by constrained variation
• Minimize f(x1, x2)
subject to g(x1, x2) = 0
• A necessary condition for f to have a minimum at some point (x1*, x2*) is that the total derivative of f(x1, x2) with respect to x1 must be zero at (x1*, x2*):

df = (∂f/∂x1) dx1 + (∂f/∂x2) dx2 = 0

• Since g(x1*, x2*) = 0 at the minimum point, any variations dx1 and dx2 taken about the point (x1*, x2*) are called admissible variations provided that the new point lies on the constraint:

g(x1* + dx1, x2* + dx2) = 0
104
Solution by constrained variation
• Taylor's series expansion of the constraint function about the point (x1*, x2*):

g(x1* + dx1, x2* + dx2) ≈ g(x1*, x2*) + (∂g/∂x1)(x1*, x2*) dx1 + (∂g/∂x2)(x1*, x2*) dx2 = 0

• Since g(x1*, x2*) = 0:

dg = (∂g/∂x1) dx1 + (∂g/∂x2) dx2 = 0  at (x1*, x2*)

• Assuming ∂g/∂x2 ≠ 0:

dx2 = −[ (∂g/∂x1)/(∂g/∂x2) ](x1*, x2*) dx1

• Substituting the above equation into df = (∂f/∂x1) dx1 + (∂f/∂x2) dx2 = 0 gives:

df = [ ∂f/∂x1 − (∂g/∂x1)/(∂g/∂x2) · ∂f/∂x2 ](x1*, x2*) dx1 = 0
105
• The expression on the left-hand side is called the constrained variation of f.
• Since dx1 can be chosen arbitrarily:

[ ∂f/∂x1 − (∂g/∂x1)/(∂g/∂x2) · ∂f/∂x2 ](x1*, x2*) = 0

or, equivalently,

[ (∂f/∂x1)(∂g/∂x2) − (∂f/∂x2)(∂g/∂x1) ](x1*, x2*) = 0

• This equation represents a necessary condition in order to have (x1*, x2*) as an extreme point (minimum or maximum).
Solution by constrained variation
106
A beam of uniform rectangular cross section is to be cut from a log having a circular cross section of diameter 2a. The beam has to be used as a cantilever beam (the length is fixed) to carry a concentrated load at the free end. Find the dimensions of the beam that correspond to the maximum tensile (bending) stress carrying capacity.
Example
107
Solution: From elementary strength of materials, we know that the tensile stress σ induced in a rectangular beam at any fiber located at a distance y from the neutral axis is given by

σ = M y / I

where M is the bending moment acting and I is the moment of inertia of the cross section about the x axis. If the width and the depth of the rectangular beam shown in the figure are 2x and 2y, respectively, the maximum tensile stress induced is given by:

σmax = M y / I = M y / [ (1/12)(2x)(2y)³ ] = (3/4) M / (x y²)

Example
108
Solution cont'd: Thus for any specified bending moment, the beam is said to have maximum tensile stress carrying capacity if the maximum induced stress (σmax) is a minimum. Hence we need to minimize k/(xy²) or maximize K xy², where k = 3M/4 and K = 1/k, subject to the constraint

x² + y² = a²

This problem has two variables and one constraint; hence the equation

[ (∂f/∂x1)(∂g/∂x2) − (∂f/∂x2)(∂g/∂x1) ](x1*, x2*) = 0

can be applied for finding the optimum solution.
Example
109
Solution: Since

f(x, y) = k x⁻¹ y⁻²  and  g(x, y) = x² + y² − a²

we have:

∂f/∂x = −k x⁻² y⁻²,   ∂f/∂y = −2k x⁻¹ y⁻³
∂g/∂x = 2x,   ∂g/∂y = 2y

The equation [ (∂f/∂x)(∂g/∂y) − (∂f/∂y)(∂g/∂x) ] = 0 gives:

−k x⁻² y⁻² (2y) + 2k x⁻¹ y⁻³ (2x) = 0  at (x*, y*)

Example
110
Solution: that is,

y* = √2 x*

Thus the beam of maximum tensile stress carrying capacity has a depth of √2 times its breadth. The optimum values of x and y can be obtained from the above equation and g = x² + y² − a² = 0 as:

x* = a/√3  and  y* = √2 a/√3

Example
111
Necessary conditions for a general problem
• The procedure described can be generalized to a problem with n
variables and m constraints.
• In this case, each constraint equation gj(x)=0, j=1,2,..,m gives rise to a
linear equation in the variations dxi, i=1,2,…,n.
• Thus, there will be in all m linear equations in n variations. Hence any
m variations can be expressed in terms of the remaining n-m
variations.
• These expressions can be used to express the differential of the
objective function, df, in terms of the n-m independent variations.
• By letting the coefficients of the independent variations vanish in the equation df = 0, one obtains the necessary conditions for the constrained optimum of the given function.
Solution by constrained variation
112
Necessary conditions for a general problem
• These conditions can be expressed as:
• It is to be noted that the variations of the first m variables (dx1, dx2,..,
dxm) have been expressed in terms of the variations of the remaining
n-m variables (dxm+1, dxm+2,.., dxn) in deriving the above equation.
Solution by constrained variation
J( f, g1, g2, …, gm / xk, x1, x2, …, xm ) =

| ∂f/∂xk    ∂f/∂x1    ∂f/∂x2    …  ∂f/∂xm  |
| ∂g1/∂xk   ∂g1/∂x1   ∂g1/∂x2   …  ∂g1/∂xm |
| ∂g2/∂xk   ∂g2/∂x1   ∂g2/∂x2   …  ∂g2/∂xm |
|   ⋮                                       |
| ∂gm/∂xk   ∂gm/∂x1   ∂gm/∂x2   …  ∂gm/∂xm | = 0,   k = m+1, m+2, …, n
113
Necessary conditions for a general problem
• This implies that the following relation is satisfied:

J( g1, g2, …, gm / x1, x2, …, xm ) ≠ 0

• The n−m equations

J( f, g1, …, gm / xk, x1, …, xm ) = 0,  k = m+1, m+2, …, n

represent the necessary conditions for the extremum of f(X) under the m equality constraints gj(X) = 0, j = 1, 2, …, m.
Solution by constrained variation
114
Minimize

f(Y) = ½(y1² + y2² + y3² + y4²)

subject to

g1(Y) = y1 + 2y2 + 3y3 + 5y4 − 10 = 0
g2(Y) = y1 + 2y2 + 5y3 + 6y4 − 15 = 0

Solution: This problem can be solved by applying the necessary conditions J( f, g1, g2 / xk, x1, x2 ) = 0, k = m+1, …, n.
Example
115
Solution cont'd: Since n = 4 and m = 2, we have to select two variables as independent variables. First we show that an arbitrary set of variables cannot always be chosen as independent variables, since the remaining (dependent) variables have to satisfy the condition

J( g1, g2 / x1, x2 ) ≠ 0

In terms of the notation of our equations, let us take the independent variables as x3 = y3 and x4 = y4, so that x1 = y1 and x2 = y2. Then the Jacobian becomes:

J( g1, g2 / x1, x2 ) = | ∂g1/∂y1   ∂g1/∂y2 |   | 1   2 |
                       | ∂g2/∂y1   ∂g2/∂y2 | = | 1   2 | = 0

and hence the necessary conditions cannot be applied.
Example
116
Solution cont'd: Next, let us take the independent variables as x3 = y2 and x4 = y4, so that x1 = y1 and x2 = y3. Then the Jacobian becomes:

J( g1, g2 / x1, x2 ) = | ∂g1/∂y1   ∂g1/∂y3 |   | 1   3 |
                       | ∂g2/∂y1   ∂g2/∂y3 | = | 1   5 | = 2 ≠ 0

and hence the necessary conditions J( f, g1, g2 / xk, x1, x2 ) = 0, k = 3, 4 can be applied.
Example
117
Solution cont'd: The equation J( f, g1, g2 / xk, x1, x2 ) = 0 gives, for k = m+1 = 3 (xk = y2):
Example

| ∂f/∂y2    ∂f/∂y1    ∂f/∂y3  |   | y2   y1   y3 |
| ∂g1/∂y2   ∂g1/∂y1   ∂g1/∂y3 | = | 2    1    3  | = 2y2 − 4y1 = 0
| ∂g2/∂y2   ∂g2/∂y1   ∂g2/∂y3 |   | 2    1    5  |
118
Solution cont'd:
For k = m+2 = 4 (xk = y4):

| y4   y1   y3 |
| 5    1    3  | = 2y4 − 7y1 − y3 = 0
| 6    1    5  |

From the two previous equations, the necessary conditions for the minimum or the maximum of f are obtained as:

y1 = y2/2
y3 = 2y4 − (7/2)y2

Example
119
Solution cont'd:
When the relations y1 = y2/2 and y3 = 2y4 − (7/2)y2 are substituted, the constraint equations

g1(Y) = y1 + 2y2 + 3y3 + 5y4 − 10 = 0
g2(Y) = y1 + 2y2 + 5y3 + 6y4 − 15 = 0

take the form:

−8y2 + 11y4 = 10
−15y2 + 16y4 = 15

Example
120
Solution cont'd:
from which the desired optimum solution can be obtained as:

y1* = −5/74,  y2* = −5/37,  y3* = 155/74,  y4* = 30/37

Example
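This example is a minimum-norm problem (minimize ½ Y·Y subject to AY = b), whose closed-form solution is Y* = Aᵀ(AAᵀ)⁻¹b. The sketch below is an added numeric check (NumPy assumed) that this matches the values found by constrained variation.

```python
import numpy as np

# Closed-form minimum-norm solution of the example above.
A = np.array([[1.0, 2.0, 3.0, 5.0],
              [1.0, 2.0, 5.0, 6.0]])
b = np.array([10.0, 15.0])

y = A.T @ np.linalg.solve(A @ A.T, b)
print(y)   # [-5/74, -5/37, 155/74, 30/37] = [-0.0676, -0.1351, 2.0946, 0.8108]
```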
121
Sufficiency conditions for a general problem
• By eliminating the first m variables, using the m equality constraints, the objective function f can be made to depend only on the remaining variables xm+1, xm+2, …, xn. Then the Taylor's series expansion of f, in terms of these variables, about the extreme point X* gives:

f(X* + dX) ≈ f(X*) + Σ_{i=m+1}^{n} (∂f/∂xi)_g dxi + (1/2!) Σ_{i=m+1}^{n} Σ_{j=m+1}^{n} (∂²f/∂xi∂xj)_g dxi dxj

where (∂f/∂xi)_g is used to denote the partial derivative of f with respect to xi (holding all the other variables xm+1, xm+2, …, xi−1, xi+1, xi+2, …, xn constant) when x1, x2, …, xm are allowed to change so that the constraints gj(X* + dX) = 0, j = 1, 2, …, m, are satisfied; the second derivative (∂²f/∂xi∂xj)_g is used to denote a similar meaning.
Solution by constrained variation
122
Example
Consider the problem of minimizing

f(X) = f(x1, x2, x3)

subject to the only constraint

g1(X) = x1² + x2² + x3² − 8 = 0

Since n = 3 and m = 1 in this problem, one can think of any of the m variables, say x1, as dependent and the remaining n−m variables, namely x2 and x3, as independent.
Here the constrained partial derivative (∂f/∂x2)_g means the rate of change of f with respect to x2 (holding the other independent variable x3 constant) and at the same time allowing x1 to change about X* so as to satisfy the constraint g1(X) = 0.
Solution by constrained variation
123
Example
In the present case, this means that dx1 has to be chosen to satisfy the relation

dg1(X*) = (∂g1/∂x1)(X*) dx1 + (∂g1/∂x2)(X*) dx2 + (∂g1/∂x3)(X*) dx3 = 0,  that is,  2x1* dx1 + 2x2* dx2 = 0

since g1(X*) = 0 at the optimum point and dx3 = 0 (x3 is held constant).
Solution by constrained variation
124
Example
Notice that (∂f/∂xi)_g has to be zero for i = m+1, m+2, …, n, since the dxi appearing in the expansion

f(X* + dX) ≈ f(X*) + Σ_{i=m+1}^{n} (∂f/∂xi)_g dxi + (1/2!) Σ_{i=m+1}^{n} Σ_{j=m+1}^{n} (∂²f/∂xi∂xj)_g dxi dxj

are all independent. Thus, the necessary conditions for the existence of a constrained optimum at X* can also be expressed as:

(∂f/∂xi)_g = 0,  i = m+1, m+2, …, n

Solution by constrained variation
125
Example
It can be shown that the conditions

(∂f/∂xi)_g = 0,  i = m+1, m+2, …, n

are nothing but the determinantal conditions

J( f, g1, …, gm / xk, x1, …, xm ) = 0,  k = m+1, m+2, …, n

Solution by constrained variation
126
• A sufficient condition for X* to be a constrained relative minimum (maximum) is that the quadratic form Q defined by

Q = Σ_{i=m+1}^{n} Σ_{j=m+1}^{n} (∂²f/∂xi∂xj)_g dxi dxj

is positive (negative) for all nonvanishing variations dxi, and the matrix of constrained second derivatives

[ (∂²f/∂xi∂xj)_g ],  i, j = m+1, …, n

has to be positive (negative) definite to have Q positive (negative) for all choices of dxi.
Sufficiency conditions for a general problem
127
• The computation of the constrained derivatives in the sufficiency
condition is difficult and may be prohibitive for problems with more
than three constraints
• Simple in theory
• Difficult to apply since the necessary conditions involve evaluation of
determinants of order m+1
Solution by constrained variation
128
Problem with two variables and one constraint:
Minimize f(x1, x2)
subject to g(x1, x2) = 0
For this problem, the necessary condition was found to be:

[ ∂f/∂x1 − (∂g/∂x1)/(∂g/∂x2) · ∂f/∂x2 ](x1*, x2*) = 0

By defining a quantity λ, called the Lagrange multiplier, as

λ = −[ (∂f/∂x2)/(∂g/∂x2) ](x1*, x2*)

Solution by Lagrange multipliers
129
Problem with two variables and one constraint:
Necessary conditions for the point (x1*, x2*) to be an extreme point:
The problem can be rewritten as:

(∂f/∂x1 + λ ∂g/∂x1)(x1*, x2*) = 0
(∂f/∂x2 + λ ∂g/∂x2)(x1*, x2*) = 0

In addition, the constraint equation has to be satisfied at the extreme point:

g(x1, x2)|(x1*, x2*) = 0

Solution by Lagrange multipliers
130
Problem with two variables and one constraint:
• The derivation of the necessary conditions by the method of Lagrange multipliers requires that at least one of the partial derivatives of g(x1, x2) be nonzero at an extreme point.
• The necessary conditions are more commonly generated by constructing a function L, known as the Lagrange function, as

L(x1, x2, λ) = f(x1, x2) + λ g(x1, x2)

Solution by Lagrange multipliers
131
Problem with two variables and one constraint:
• By treating L as a function of the three variables x1, x2, and λ, the necessary conditions for its extremum are given by:

∂L/∂x1 (x1, x2, λ) = ∂f/∂x1 (x1, x2) + λ ∂g/∂x1 (x1, x2) = 0
∂L/∂x2 (x1, x2, λ) = ∂f/∂x2 (x1, x2) + λ ∂g/∂x2 (x1, x2) = 0
∂L/∂λ (x1, x2, λ) = g(x1, x2) = 0

Solution by Lagrange multipliers
132
Example: Find the solution of the following problem using the Lagrange multiplier method:

Minimize f(x, y) = k x⁻¹ y⁻²
subject to g(x, y) = x² + y² − a² = 0

Solution
The Lagrange function is

L(x, y, λ) = f(x, y) + λ g(x, y) = k x⁻¹ y⁻² + λ(x² + y² − a²)

The necessary conditions for the minimum of f(x, y) are:

∂L/∂x = −k x⁻² y⁻² + 2xλ = 0
∂L/∂y = −2k x⁻¹ y⁻³ + 2yλ = 0
∂L/∂λ = x² + y² − a² = 0

Example
133
Solution cont'd
The first two conditions yield

λ = k/(2x³y²) = k/(xy⁴)

from which the relation x* = (1/2)^(1/2) y* can be obtained. This relation, along with ∂L/∂λ = 0, gives the optimum solution as:

x* = a/√3  and  y* = √2 a/√3

Example
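The same result can be reproduced symbolically. The sketch below is an added check, assuming SymPy is available, with k = a = 1 for simplicity.

```python
import sympy as sp

# Lagrange-multiplier solution of the example above, with k = a = 1.
x, y, lam = sp.symbols("x y lambda", positive=True)
k = a = 1
L = k / (x * y**2) + lam * (x**2 + y**2 - a**2)

sol = sp.solve([sp.diff(L, x), sp.diff(L, y), sp.diff(L, lam)],
               [x, y, lam], dict=True)
print(sol)   # x = 1/sqrt(3) = 0.577 and y = sqrt(2/3) = 0.816, as derived above
```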
134
Necessary conditions for a general problem:
Minimize f(X)
subject to
gj(X) = 0, j = 1, 2, …, m
The Lagrange function, L, in this case is defined by introducing one Lagrange multiplier λj for each constraint gj(X) as

L(x1, x2, …, xn, λ1, λ2, …, λm) = f(X) + λ1g1(X) + λ2g2(X) + … + λmgm(X)

Solution by Lagrange multipliers
135
By treating L as a function of the n+m unknowns x1, x2, …, xn, λ1, λ2, …, λm, the necessary conditions for the extremum of L, which also correspond to the solution of the original problem, are given by:

∂L/∂xi = ∂f/∂xi + Σ_{j=1}^{m} λj ∂gj/∂xi = 0,  i = 1, 2, …, n
∂L/∂λj = gj(X) = 0,  j = 1, 2, …, m

The above equations represent n+m equations in terms of the n+m unknowns xi and λj.
Solution by Lagrange multipliers
136
The solution:

X* = [x1*, x2*, …, xn*]ᵀ  and  λ* = [λ1*, λ2*, …, λm*]ᵀ

The vector X* corresponds to the relative constrained minimum of f(X) (sufficient conditions are to be verified) while the vector λ* provides the sensitivity information.
Solution by Lagrange multipliers
137
Sufficient Condition
A sufficient condition for f(X) to have a constrained relative minimum at X* is that the quadratic Q defined by

Q = Σ_{i=1}^{n} Σ_{j=1}^{n} [∂²L/∂xi∂xj](X*, λ*) dxi dxj

evaluated at X = X* must be positive definite for all values of dX for which the constraints are satisfied.
If Q is negative for all choices of the admissible variations dxi, X* will be a constrained maximum of f(X).
Solution by Lagrange multipliers
138
A necessary condition for the quadratic form Q to be positive (negative) definite for all admissible variations dX is that each root z of the polynomial defined by the following determinantal equation be positive (negative):

| L11 − z   L12   …   L1n       g11   g21   …   gm1 |
| L21   L22 − z   …   L2n       g12   g22   …   gm2 |
|   ⋮                                               |
| Ln1   Ln2   …   Lnn − z       g1n   g2n   …   gmn |
| g11   g12   …   g1n           0     0     …   0   |
| g21   g22   …   g2n           0     0     …   0   |
|   ⋮                                               |
| gm1   gm2   …   gmn           0     0     …   0   | = 0

where Lij = ∂²L/∂xi∂xj (X*, λ*) and gij = ∂gi/∂xj (X*).
• The determinantal equation, on expansion, leads to an (n−m)th-order polynomial in z. If some of the roots of this polynomial are positive while the others are negative, the point X* is not an extreme point.
Solution by Lagrange multipliers
139
Find the dimensions of a cylindrical tin (with top and bottom) made up of sheet metal to maximize its volume such that the total surface area is equal to A0 = 24π.
Solution
If x1 and x2 denote the radius of the base and length of the tin, respectively, the problem can be stated as:

Maximize f(x1, x2) = πx1²x2

subject to

g(x1, x2) = 2πx1² + 2πx1x2 = A0 = 24π

Example 1
140
Solution
The Lagrange function is:

L(x1, x2, λ) = πx1²x2 + λ(2πx1² + 2πx1x2 − A0)

and the necessary conditions for the maximum of f give:

∂L/∂x1 = 2πx1x2 + 4πλx1 + 2πλx2 = 0
∂L/∂x2 = πx1² + 2πλx1 = 0
∂L/∂λ = 2πx1² + 2πx1x2 − A0 = 0

Example 1
141
Solution
that is,

λ = −x1x2/(2x1 + x2) = −x1/2,  which gives  x2 = 2x1

The above equations give the desired solution as:

x1* = (A0/6π)^(1/2),  x2* = 2(A0/6π)^(1/2),  and  λ* = −(A0/24π)^(1/2)

Example 1
142
Solution
This gives the maximum value of f as

f* = (A0³/54π)^(1/2)

If A0 = 24π, the optimum solution becomes

x1* = 2,  x2* = 4,  λ* = −1,  and  f* = 16π

To see that this solution really corresponds to the maximum of f, we apply the sufficiency condition of the determinantal equation.
Example 1
143
Example 1
Solution
In this case:
L11 = (∂²L/∂x1²)|(X*, λ*) = 2πx2* + 4πλ* = 4π
L12 = (∂²L/∂x1∂x2)|(X*, λ*) = 2πx1* + 2πλ* = 2π
L22 = (∂²L/∂x2²)|(X*, λ*) = 0
g11 = (∂g/∂x1)|X* = 4πx1* + 2πx2* = 16π
g12 = (∂g/∂x2)|X* = 2πx1* = 4π
144
Example 1
Solution
Thus, the determinantal equation becomes:
| 4π−z   2π    16π |
| 2π     −z    4π  | = 0
| 16π    4π    0   |
145
Example 1
Solution
that is,
272π²z + 192π³ = 0
This gives
z = −12π/17
Since the value of z is negative, the point (x1*, x2*) corresponds to the maximum of f.
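The root of the determinantal equation can be verified mechanically; a minimal sketch with Python/SymPy (an assumption — the slides expand the determinant by hand):
```python
import sympy as sp

z = sp.symbols('z')
pi = sp.pi
D = sp.Matrix([[4*pi - z, 2*pi, 16*pi],
               [2*pi,     -z,   4*pi],
               [16*pi,    4*pi, 0   ]])
print(sp.solve(sp.Eq(D.det(), 0), z))   # [-12*pi/17]
```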
146
Example 2
Find the maximum of the function f(X) = 2x1 + x2 + 10 subject to g(X) = x1 + 2x2² = 3 using the Lagrange multiplier method. Also find the effect of changing the right-hand side of the constraint on the optimum value of f.
Solution
The Lagrange function is given by:
L(X, λ) = 2x1 + x2 + 10 + λ(3 − x1 − 2x2²)
The necessary conditions for the solution of the problem are:
∂L/∂x1 = 2 − λ = 0
∂L/∂x2 = 1 − 4λx2 = 0
∂L/∂λ = 3 − x1 − 2x2² = 0
147
Example 2
Solution
The solution of the equations is:
X* = (x1*, x2*)ᵀ = (2.97, 0.13)ᵀ and λ* = 2.0
The application of the sufficiency condition yields:
| L11−z   L12    g11 |
| L21     L22−z  g12 | = 0
| g11     g12    0   |
that is, with L11 = L12 = L21 = 0, L22 = −4λ* = −8, g11 = 1, and g12 = 4x2* = 0.52:
| −z    0      1    |
| 0     −8−z   0.52 | = 0
| 1     0.52   0    |
148
Example 2
Solution
Expansion of the determinant gives:
0.2704z + z + 8 = 0, that is, z = −6.2972
Since z is negative, X* will be a maximum of f with f* = f(X*) = 16.07.
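A sketch reproducing Example 2 symbolically, including the sensitivity part of the question: keeping the right-hand side as a symbol b shows that df*/db = λ* = 2. Python/SymPy is an assumption here, and the symbol b is introduced for illustration:
```python
import sympy as sp

x1, x2, lam, b = sp.symbols('x1 x2 lam b')
L = 2*x1 + x2 + 10 + lam*(b - x1 - 2*x2**2)
sol = sp.solve([sp.diff(L, v) for v in (x1, x2, lam)], [x1, x2, lam], dict=True)[0]
fstar = (2*x1 + x2 + 10).subs(sol)       # optimum value as a function of b
print(sol[lam], fstar.subs(b, 3))        # 2 and 257/16 = 16.0625 ~ 16.07
print(sp.diff(fstar, b))                 # df*/db = lambda* = 2
```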
149
Multivariable optimization with inequality constraints
Minimize f(X)
subject to
gj(X) ≤ 0, j = 1, 2,…, m
The inequality constraints can be transformed to equality constraints by adding nonnegative slack variables yj², as
gj(X) + yj² = 0, j = 1, 2,…, m
where the values of the slack variables are yet unknown.
150
Multivariable optimization with inequality constraints
Minimize f(X) subject to
Gj(X, Y) = gj(X) + yj² = 0, j = 1, 2,…, m
where Y = (y1, y2,…, ym)ᵀ is the vector of slack variables.
This problem can be solved by the method of Lagrange multipliers. For this, the Lagrange function L is constructed as:
L(X, Y, λ) = f(X) + Σ(j=1 to m) λj Gj(X, Y)
where λ = (λ1, λ2,…, λm)ᵀ is the vector of Lagrange multipliers.
151
Multivariable optimization with inequality constraints
The stationary points of the Lagrange function can be found by solving the following equations (necessary conditions):
∂L/∂xi (X, Y, λ) = ∂f/∂xi (X) + Σ(j=1 to m) λj ∂gj/∂xi (X) = 0, i = 1, 2,…, n
∂L/∂λj (X, Y, λ) = Gj(X, Y) = gj(X) + yj² = 0, j = 1, 2,…, m
∂L/∂yj (X, Y, λ) = 2λjyj = 0, j = 1, 2,…, m
These are (n+2m) equations in the (n+2m) unknowns. The solution gives the optimum solution vector X*, the Lagrange multiplier vector, λ*, and the slack variable vector, Y*.
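To make the (n+2m)-equation structure concrete, the sketch below assembles the slack-variable Lagrangian symbolically for an assumed small problem with n = 2 and m = 2; the objective and constraints are borrowed from the Kuhn-Tucker example that appears later, and Python/SymPy is an assumption:
```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
y1, y2 = sp.symbols('y1 y2')
l1, l2 = sp.symbols('lam1 lam2')

f = (x1 - 1)**2 + x2**2                        # assumed objective
g = [x1**3 - 2*x2, x1**3 + 2*x2]               # assumed constraints g_j <= 0
L = f + l1*(g[0] + y1**2) + l2*(g[1] + y2**2)  # slack-variable Lagrangian

unknowns = (x1, x2, l1, l2, y1, y2)            # n + 2m = 6 unknowns
eqs = [sp.Eq(sp.diff(L, v), 0) for v in unknowns]
print(len(eqs), eqs)                           # the 6 necessary conditions
```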
152
Multivariable optimization with inequality constraints
The equations
∂L/∂λj (X, Y, λ) = Gj(X, Y) = gj(X) + yj² = 0, j = 1, 2,…, m
ensure that the constraints
gj(X) ≤ 0, j = 1, 2,…, m
are satisfied, while the equations
∂L/∂yj (X, Y, λ) = 2λjyj = 0, j = 1, 2,…, m
imply that either λj = 0 or yj = 0.
153
Multivariable optimization with inequality constraints
• If λj = 0, it means that the jth constraint is inactive and hence can be ignored.
• On the other hand, if yj = 0, it means that the constraint is active (gj = 0) at the optimum point.
• Consider the division of the constraints into two subsets, J1 and J2, where J1 ∪ J2 represents the total set of constraints.
• Let the set J1 indicate the indices of those constraints that are active at the optimum point and J2 include the indices of all the inactive constraints.
• Those constraints that are satisfied with an equality sign, gj = 0, at the optimum point are called the active constraints, while those that are satisfied with a strict inequality sign, gj < 0, are termed inactive constraints.
154
Multivariable optimization with inequality constraints
• Thus for j ∈ J1, yj = 0 (constraints are active), and for j ∈ J2, λj = 0 (constraints are inactive), and the equation
∂L/∂xi (X, Y, λ) = ∂f/∂xi (X) + Σ(j=1 to m) λj ∂gj/∂xi (X) = 0, i = 1, 2,…, n
can be simplified as:
∂f/∂xi + Σ(j∈J1) λj ∂gj/∂xi = 0, i = 1, 2,…, n    (1)
155
Multivariable optimization with inequality constraints
Similarly, the equation
∂L/∂λj (X, Y, λ) = Gj(X, Y) = gj(X) + yj² = 0, j = 1, 2,…, m
can be written as:
gj(X) = 0, j ∈ J1
gj(X) + yj² = 0, j ∈ J2    (2)
The equations (1) and (2) represent n+p+(m−p) = n+m equations in the n+m unknowns xi (i = 1, 2,…, n), λj (j ∈ J1), and yj (j ∈ J2), where p denotes the number of active constraints.
156
Multivariable optimization with inequality constraints
Assuming that the first p constraints are active, the equation (1) can be expressed as:
∂f/∂xi = −λ1 ∂g1/∂xi − λ2 ∂g2/∂xi − … − λp ∂gp/∂xi, i = 1, 2,…, n
These equations can be collectively written as
−∇f = λ1∇g1 + λ2∇g2 + … + λp∇gp
where ∇f and ∇gj are the gradients of the objective function and the jth constraint, respectively.
157
Multivariable optimization with inequality constraints
The equation
−∇f = λ1∇g1 + λ2∇g2 + … + λp∇gp
where
∇f = (∂f/∂x1, ∂f/∂x2,…, ∂f/∂xn)ᵀ and ∇gj = (∂gj/∂x1, ∂gj/∂x2,…, ∂gj/∂xn)ᵀ
indicates that the negative of the gradient of the objective function can be expressed as a linear combination of the gradients of the active constraints at the optimum point.
158
Multivariable optimization with inequality constraints-Feasible region
• A vector S is called a feasible direction from a point X if at least a small step can be taken along S that does not immediately leave the feasible region.
• Thus for problems with sufficiently smooth constraint surfaces, a vector S satisfying the relation
Sᵀ∇gj < 0
can be called a feasible direction.
159
Multivariable optimization with inequality constraints-Feasible region
• On the other hand, if the constraint is either linear or concave, any vector satisfying the relation
Sᵀ∇gj ≤ 0
can be called a feasible direction.
• The geometric interpretation of a feasible direction is that the vector S makes an obtuse angle with all the constraint normals.
160
Multivariable optimization with
inequality constraints-Feasible region
161
Multivariable optimization with inequality constraints
• Further, we can show that in the case of a minimization problem, the λj values (j ∈ J1) have to be positive. For simplicity of illustration, suppose that only two constraints (p = 2) are active at the optimum point.
• Then the equation
−∇f = λ1∇g1 + λ2∇g2 + … + λp∇gp
reduces to
−∇f = λ1∇g1 + λ2∇g2
162
Multivariable optimization with inequality constraints
• Let S be a feasible direction at the optimum point. By premultiplying both sides of the equation
−∇f = λ1∇g1 + λ2∇g2
by Sᵀ, we obtain:
−Sᵀ∇f = λ1Sᵀ∇g1 + λ2Sᵀ∇g2
where the superscript T denotes the transpose. Since S is a feasible direction, it should satisfy the relations:
Sᵀ∇g1 < 0
Sᵀ∇g2 < 0
163
Multivariable optimization with inequality constraints
• Thus if λ1 > 0 and λ2 > 0, the quantity Sᵀ∇f is always positive.
• As ∇f indicates the gradient direction, along which the value of the function increases at the maximum rate, Sᵀ∇f represents the component of the increment of f along the direction S.
• If Sᵀ∇f > 0, the function value increases as we move along the direction S.
• Hence if λ1 and λ2 are positive, we will not be able to find any direction in the feasible domain along which the function value can be decreased further.
164
Multivariable optimization with inequality constraints
• Since the point at which the equation
−∇f = λ1∇g1 + λ2∇g2
is valid is assumed to be optimum, λ1 and λ2 have to be positive.
• This reasoning can be extended to cases where there are more than two constraints active. By proceeding in a similar manner, one can show that the λj values have to be negative for a maximization problem.
165
Kuhn-Tucker Conditions
• The conditions to be satisfied at a constrained minimum point, X*, of the problem can be expressed as:
∂f/∂xi + Σ(j∈J1) λj ∂gj/∂xi = 0, i = 1, 2,…, n
λj > 0, j ∈ J1
• These conditions are in general not sufficient to ensure a relative minimum.
• There is only a class of problems, called convex programming problems, for which the Kuhn-Tucker conditions are necessary and sufficient for a global minimum.
166
Kuhn-Tucker Conditions
• Those constraints that are satisfied with an equality sign, gj = 0, at the optimum point are called the active constraints. If the set of active constraints is not known, the Kuhn-Tucker conditions can be stated as follows:
∂f/∂xi + Σ(j=1 to m) λj ∂gj/∂xi = 0, i = 1, 2,…, n
λjgj = 0, j = 1, 2,…, m
gj ≤ 0, j = 1, 2,…, m
λj ≥ 0, j = 1, 2,…, m
167
Kuhn-Tucker Conditions
• Note that if the problem is one of maximization or if the constraints are of the type gj ≥ 0, the λj have to be nonpositive in the equations below:
∂f/∂xi + Σ(j=1 to m) λj ∂gj/∂xi = 0, i = 1, 2,…, n
λjgj = 0, j = 1, 2,…, m
gj ≤ 0, j = 1, 2,…, m
λj ≥ 0, j = 1, 2,…, m
• On the other hand, if the problem is one of maximization with constraints in the form gj ≥ 0, the λj have to be nonnegative in the above equations.
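A rough numerical checker for the four conditions above (minimization with gj ≤ 0) is sketched below in Python/NumPy — an assumption, not a library routine; gradients are approximated by central differences and the tolerances are arbitrary:
```python
import numpy as np

def kt_check(f, gs, x, lams, h=1e-6, tol=1e-4):
    # Central-difference gradient of a scalar function F at point p.
    grad = lambda F, p: np.array([(F(p + h*e) - F(p - h*e)) / (2*h)
                                  for e in np.eye(len(p))])
    stat = grad(f, x) + sum(l * grad(g, x) for l, g in zip(lams, gs))
    return (np.linalg.norm(stat) < tol                             # stationarity
            and all(g(x) <= tol for g in gs)                       # g_j(X) <= 0
            and all(abs(l * g(x)) < tol for l, g in zip(lams, gs)) # lam_j g_j = 0
            and all(l >= -tol for l in lams))                      # lam_j >= 0

# Toy usage: minimize (x1-2)^2 + x2^2 subject to x1 - 1 <= 0; optimum (1, 0), lam = 2.
f = lambda x: (x[0] - 2)**2 + x[1]**2
g = lambda x: x[0] - 1
print(kt_check(f, [g], np.array([1.0, 0.0]), [2.0]))   # True
```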
168
Constraint Qualification
• When the optimization problem is stated as:
Minimize f(X)
subject to
gj(X) ≤ 0, j = 1, 2,…, m
hk(X) = 0, k = 1, 2,…, p
the Kuhn-Tucker conditions become
∇f + Σ(j=1 to m) λj∇gj − Σ(k=1 to p) βk∇hk = 0
λjgj = 0, j = 1, 2,…, m
gj ≤ 0, j = 1, 2,…, m
hk = 0, k = 1, 2,…, p
λj ≥ 0, j = 1, 2,…, m
where λj and βk denote the Lagrange multipliers associated with the constraints gj ≤ 0 and hk = 0, respectively.
169
Constraint Qualification
• Although we found that the Kuhn-Tucker conditions represent the necessary conditions of optimality, the following theorem gives the precise conditions of optimality:
• Theorem: Let X* be a feasible solution to the problem of
Minimize f(X)
subject to
gj(X) ≤ 0, j = 1, 2,…, m
hk(X) = 0, k = 1, 2,…, p
If ∇gj(X*), j ∈ J1, and ∇hk(X*), k = 1, 2,…, p, are linearly independent, there exist λ* and β* such that (X*, λ*, β*) satisfy the equations below:
∇f + Σ(j=1 to m) λj∇gj − Σ(k=1 to p) βk∇hk = 0
λjgj = 0, j = 1, 2,…, m
gj ≤ 0, j = 1, 2,…, m
hk = 0, k = 1, 2,…, p
λj ≥ 0, j = 1, 2,…, m
170
Example 1
Consider the problem:
Minimize f(x1,x2)=(x1-1)2 +x2
2
subject to
g1 (x1,x2) =x1
3-2x2≤ 0
g2 (x1,x2) =x1
3+2x2≤ 0
Determine whether the constraint qualification and the Kuhn-Tucker
conditions are satisfied at the optimum point.
171
Example 1
Solution: The feasible region and the contours of the objective
function are shown in the figure below. It can be seen that the
optimum solution is at (0,0).
172
Example 1
Solution cont'd: Since g1 and g2 are both active at the optimum point (0,0), their gradients can be computed as:
∇g1(X*) = (3x1², −2)ᵀ|(0,0) = (0, −2)ᵀ
∇g2(X*) = (3x1², 2)ᵀ|(0,0) = (0, 2)ᵀ
It is clear that ∇g1(X*) and ∇g2(X*) are not linearly independent. Hence the constraint qualification is not satisfied at the optimum point.
173
Example 1
Solution cont'd: Noting that:
∇f(X*) = (2(x1 − 1), 2x2)ᵀ|(0,0) = (−2, 0)ᵀ
the Kuhn-Tucker conditions can be written using the equations
∂f/∂xi + Σ(j∈J1) λj ∂gj/∂xi = 0, i = 1, 2,…, n and λj > 0, j ∈ J1
as:
−2 + λ1(0) + λ2(0) = 0    (E1)
0 + λ1(−2) + λ2(2) = 0    (E2)
λ1 > 0    (E3)
λ2 > 0    (E4)
Since equation (E1) is not satisfied and equation (E2) can be satisfied for negative values of λ1 = λ2 as well, the Kuhn-Tucker conditions are not satisfied at the optimum point.
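The linear dependence of the two gradients can be confirmed mechanically; a small SymPy sketch (an assumption — the slides argue by inspection):
```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
g1 = x1**3 - 2*x2
g2 = x1**3 + 2*x2
J = sp.Matrix([[sp.diff(g, v) for v in (x1, x2)] for g in (g1, g2)])
J0 = J.subs({x1: 0, x2: 0})
print(J0)          # Matrix([[0, -2], [0, 2]])
print(J0.rank())   # 1 < 2, so the gradients are linearly dependent
```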
174
Example 2
A manufacturing firm producing small refrigerators has entered into
a contract to supply 50 refrigerators at the end of the first month, 50
at the end of the second month, and 50 at the end of the third. The
cost of producing x refrigerators in any month is given by
$(x² + 1000). The firm can produce more refrigerators in any month
and carry them to a subsequent month. However, it costs $20 per
unit for any refrigerator carried over from one month to the next.
Assuming that there is no initial inventory, determine the number of
refrigerators to be produced in each month to minimize the total
cost.
175
Example 2
Solution:
Let x1, x2, x3 represent the number of refrigerators produced in the first, second and third month, respectively. The total cost to be minimized is given by:
total cost = production cost + holding cost
f(x1, x2, x3) = (x1² + 1000) + (x2² + 1000) + (x3² + 1000) + 20(x1 − 50) + 20(x1 + x2 − 100)
             = x1² + x2² + x3² + 40x1 + 20x2
176
Example 2
Solution cont'd:
The constraints can be stated as:
g1(x1, x2, x3) = x1 − 50 ≥ 0
g2(x1, x2, x3) = x1 + x2 − 100 ≥ 0
g3(x1, x2, x3) = x1 + x2 + x3 − 150 ≥ 0
The first Kuhn-Tucker condition is given by:
∂f/∂xi + λ1 ∂g1/∂xi + λ2 ∂g2/∂xi + λ3 ∂g3/∂xi = 0, i = 1, 2, 3
that is,
2x1 + 40 + λ1 + λ2 + λ3 = 0    (E1)
2x2 + 20 + λ2 + λ3 = 0    (E2)
2x3 + λ3 = 0    (E3)
177
Example 2
Solution cont'd:
The second Kuhn-Tucker condition, λjgj = 0, j = 1, 2, 3, gives:
λ1(x1 − 50) = 0    (E4)
λ2(x1 + x2 − 100) = 0    (E5)
λ3(x1 + x2 + x3 − 150) = 0    (E6)
178
Example 2
Solution cont'd:
The third Kuhn-Tucker condition, gj ≥ 0, j = 1, 2, 3, gives:
x1 − 50 ≥ 0    (E7)
x1 + x2 − 100 ≥ 0    (E8)
x1 + x2 + x3 − 150 ≥ 0    (E9)
179
Example 2
Solution cont'd:
The fourth Kuhn-Tucker condition, λj ≤ 0, j = 1, 2, 3 (the problem is one of minimization with constraints of the form gj ≥ 0), gives:
λ1 ≤ 0    (E10)
λ2 ≤ 0    (E11)
λ3 ≤ 0    (E12)
180
Example 2
Solution cont'd:
The solution of Eqs. (E1) to (E12) can be found in several ways. We proceed to solve these equations by first noting that either λ1 = 0 or x1 = 50 according to (E4).
Using this information, we investigate the following cases to identify the optimum solution of the problem:
• Case I: λ1 = 0
• Case II: x1 = 50
181
Example 2
Solution cont'd:
• Case I: λ1 = 0
Equations (E1) to (E3) give:
x3 = −λ3/2
x2 = −10 − (λ2 + λ3)/2    (E13)
x1 = −20 − (λ2 + λ3)/2
182
Example 2
Solution cont'd:
• Case I: λ1 = 0
Substituting Equations (E13) into Eqs. (E5) and (E6) gives:
λ2(−130 − λ2 − λ3) = 0
λ3(−180 − λ2 − (3/2)λ3) = 0    (E14)
183
Example 2
Solution cont'd:
Case I: λ1 = 0
The four possible solutions of Eqs. (E14) are:
1. λ2 = 0, −180 − λ2 − (3/2)λ3 = 0. These equations, along with Eqs. (E13), yield the solution:
λ2 = 0, λ3 = −120, x1 = 40, x2 = 50, x3 = 60
This solution satisfies Eqs. (E10) to (E12) but violates Eqs. (E7) and (E8) and hence cannot be optimum.
184
Example 2
Solution cont'd:
Case I: λ1 = 0
The second possible solution of Eqs. (E14) is:
2. λ3 = 0, −130 − λ2 − λ3 = 0. The solution of these equations leads to:
λ2 = −130, λ3 = 0, x1 = 45, x2 = 55, x3 = 0
This solution satisfies Eqs. (E10) to (E12) but violates Eqs. (E7) and (E9) and hence cannot be optimum.
185
Example 2
Solution cont'd:
Case I: λ1 = 0
The third possible solution of Eqs. (E14) is:
3. λ2 = 0, λ3 = 0. Equations (E13) give:
x1 = −20, x2 = −10, x3 = 0
This solution satisfies Eqs. (E10) to (E12) but violates the constraints Eqs. (E7) to (E9) and hence cannot be optimum.
186
Example 2
Solution cont'd:
Case I: λ1 = 0
The fourth possible solution of Eqs. (E14) is:
4. −130 − λ2 − λ3 = 0, −180 − λ2 − (3/2)λ3 = 0. The solution of these equations and Equations (E13) gives:
λ2 = −30, λ3 = −100, x1 = 45, x2 = 55, x3 = 50
This solution satisfies Eqs. (E10) to (E12) but violates the constraint Eq. (E7) and hence cannot be optimum.
187
Example 2
Solution cont'd:
Case II: x1 = 50. In this case, Eqs. (E1) to (E3) give:
λ3 = −2x3
λ2 = −20 − 2x2 + 2x3    (E15)
λ1 = −40 − 2x1 − λ2 − λ3 = −120 + 2x2
Substitution of Eqs. (E15) in Eqs.
λ2(x1 + x2 − 100) = 0    (E5)
λ3(x1 + x2 + x3 − 150) = 0    (E6)
gives:
(−20 − 2x2 + 2x3)(x1 + x2 − 100) = 0
(−2x3)(x1 + x2 + x3 − 150) = 0    (E16)
188
Example 2
Solution cont'd:
Case II: x1 = 50. Once again, there are four possible solutions to Eq. (E16) as indicated below:
1. −20 − 2x2 + 2x3 = 0, x1 + x2 + x3 − 150 = 0: The solution of these equations yields:
x1 = 50, x2 = 45, x3 = 55
This solution can be seen to violate Eq. (E8), which says:
x1 + x2 − 100 ≥ 0    (E8)
189
Example 2
Solution cont'd:
Case II: x1 = 50. Once again, there are four possible solutions to Eq. (E16) as indicated below:
2. −20 − 2x2 + 2x3 = 0, −2x3 = 0: The solution of these equations yields:
x1 = 50, x2 = −10, x3 = 0
This solution can be seen to violate Eqs. (E8) and (E9), which say:
x1 + x2 − 100 ≥ 0    (E8)
x1 + x2 + x3 − 150 ≥ 0    (E9)
190
Example 2
Solution cont'd:
Case II: x1 = 50. Once again, there are four possible solutions to Eq. (E16) as indicated below:
3. x1 + x2 − 100 = 0, −2x3 = 0: The solution of these equations yields:
x1 = 50, x2 = 50, x3 = 0
This solution can be seen to violate Eq. (E9), which says:
x1 + x2 + x3 − 150 ≥ 0    (E9)
191
Example 2
Solution cont'd:
Case II: x1 = 50. Once again, there are four possible solutions to Eq. (E16) as indicated below:
4. x1 + x2 − 100 = 0, x1 + x2 + x3 − 150 = 0: The solution of these equations yields:
x1 = 50, x2 = 50, x3 = 50
This solution can be seen to satisfy all the constraint Eqs. (E7) to (E9), which say:
x1 − 50 ≥ 0    (E7)
x1 + x2 − 100 ≥ 0    (E8)
x1 + x2 + x3 − 150 ≥ 0    (E9)
192
Example 2
Solution cont'd:
Case II: x1 = 50.
The values of λ1, λ2, and λ3 corresponding to this solution can be obtained from
λ3 = −2x3
λ2 = −20 − 2x2 + 2x3    (E15)
λ1 = −120 + 2x2
as:
λ1 = −20, λ2 = −20, λ3 = −100
193
Example 2
Solution cont'd:
Case II: x1 = 50.
Since these values of λi satisfy the requirements
λ1 ≤ 0    (E10)
λ2 ≤ 0    (E11)
λ3 ≤ 0    (E12)
this solution can be identified as the optimum solution. Thus
x1* = 50, x2* = 50, x3* = 50
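As a cross-check on the case analysis (not part of the original slides), the same problem can be handed to a numerical solver. The sketch below assumes SciPy's minimize; its 'ineq' constraints mean fun(x) ≥ 0, which matches g1, g2, g3 above:
```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: sum(xi**2 + 1000 for xi in x) + 20*(x[0] - 50) + 20*(x[0] + x[1] - 100)
cons = [{'type': 'ineq', 'fun': lambda x: x[0] - 50},                 # g1 >= 0
        {'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 100},         # g2 >= 0
        {'type': 'ineq', 'fun': lambda x: x[0] + x[1] + x[2] - 150}]  # g3 >= 0
res = minimize(f, x0=np.array([60.0, 60.0, 60.0]), constraints=cons)
print(np.round(res.x))   # [50. 50. 50.]
```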
194
Convex functions
• A function f(X) is said to be convex if for any pair of points
X1 = (x1⁽¹⁾, x2⁽¹⁾,…, xn⁽¹⁾)ᵀ and X2 = (x1⁽²⁾, x2⁽²⁾,…, xn⁽²⁾)ᵀ
and all λ, 0 ≤ λ ≤ 1,
f[λX2 + (1 − λ)X1] ≤ λf(X2) + (1 − λ)f(X1)
that is, if the segment joining the two points lies entirely above or on the graph of f(X).
• A convex function is always bending upward and hence it is apparent that the local minimum of a convex function is also a global minimum.
195
Convex functions
• A function f(x) is convex if for any two points x and y, we have
f(y) ≥ f(x) + ∇f(x)ᵀ(y − x)
• A function f(X) is convex if the Hessian matrix
H(X) = [∂²f(X)/∂xi∂xj]
is positive semidefinite.
• Any local minimum of a convex function f(X) is a global minimum.
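A minimal numerical version of the Hessian test above: estimate H(X) by finite differences and check that its eigenvalues are nonnegative at sample points (convexity needs this everywhere). The quadratic f below is an illustrative assumption:
```python
import numpy as np

def hessian_fd(f, x, h=1e-4):
    # Second-order central differences for the Hessian of f at x.
    n = len(x)
    I = np.eye(n) * h
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            H[i, j] = (f(x + I[i] + I[j]) - f(x + I[i] - I[j])
                       - f(x - I[i] + I[j]) + f(x - I[i] - I[j])) / (4 * h * h)
    return H

f = lambda x: x[0]**2 - x[0]*x[1] + x[1]**2          # an assumed convex quadratic
H = hessian_fd(f, np.array([1.0, 2.0]))
print(np.all(np.linalg.eigvalsh(H) >= -1e-6))        # True -> PSD at this point
```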
196
Concave function
• A function f(X) is called a concave function if for any two points X1 and X2, and for all 0 ≤ λ ≤ 1,
f[λX2 + (1 − λ)X1] ≥ λf(X2) + (1 − λ)f(X1)
that is, if the line segment joining the two points lies entirely below or on the graph of f(X).
• It can be seen that a concave function bends downward and hence the local maximum will also be its global maximum.
• It can be seen that the negative of a convex function is a concave function.
197
Concave function
• Convex and concave functions in one variable
198
Concave function
• Convex and concave functions in two variables
199
Example
Determine whether the following function is convex or concave.
f(x) = eˣ
Solution:
H(x) = d²f/dx² = eˣ > 0 for all real values of x. Hence f(x) is strictly convex.
200
Example
Determine whether the following function is convex or concave.
f(x) = −8x²
Solution:
H(x) = d²f/dx² = −16 < 0 for all real values of x. Hence f(x) is strictly concave.
201
Example
Determine whether the following function is convex or concave.
f(x1, x2) = 2x1³ − 6x2²
Solution:
Here
H(X) = | ∂²f/∂x1²    ∂²f/∂x1∂x2 |   | 12x1   0   |
       | ∂²f/∂x2∂x1  ∂²f/∂x2²   | = | 0     −12  |
The principal minors are:
12x1 ≤ 0 for x1 ≤ 0 (and ≥ 0 for x1 ≥ 0)
|H(X)| = −144x1 ≥ 0 for x1 ≤ 0 (and ≤ 0 for x1 ≥ 0)
Hence H(X) will be negative semidefinite and f(X) is concave for x1 ≤ 0.
202
Example
Determine whether the following function is convex or concave.
f(x1, x2, x3) = 4x1² + 3x2² + 5x3² + 6x1x2 + x1x3 − 3x1 − 2x2 + 15
Solution:
H(X) = | ∂²f/∂x1²    ∂²f/∂x1∂x2  ∂²f/∂x1∂x3 |   | 8  6  1  |
       | ∂²f/∂x2∂x1  ∂²f/∂x2²    ∂²f/∂x2∂x3 | = | 6  6  0  |
       | ∂²f/∂x3∂x1  ∂²f/∂x3∂x2  ∂²f/∂x3²   |   | 1  0  10 |
203
Example
Determine whether the following function is convex or concave.
f(x1, x2, x3) = 4x1² + 3x2² + 5x3² + 6x1x2 + x1x3 − 3x1 − 2x2 + 15
Solution cont'd:
Here the principal minors are given by:
|8| = 8 > 0
| 8  6 |
| 6  6 | = 12 > 0
| 8  6  1  |
| 6  6  0  | = 114 > 0
| 1  0  10 |
and hence the matrix H(X) is positive definite for all real values of x1, x2, x3. Therefore f(X) is a strictly convex function.
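The Hessian and its leading principal minors can be checked with SymPy (a sketch, assuming the function as reconstructed above):
```python
import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
f = 4*x1**2 + 3*x2**2 + 5*x3**2 + 6*x1*x2 + x1*x3 - 3*x1 - 2*x2 + 15
H = sp.hessian(f, (x1, x2, x3))
print(H)                                     # Matrix([[8, 6, 1], [6, 6, 0], [1, 0, 10]])
print([H[:k, :k].det() for k in (1, 2, 3)])  # [8, 12, 114] -> positive definite
```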
204
Convex programming problem
• When the optimization problem is stated as:
Minimize f(X)
subject to
gj(X) ≤ 0, j = 1, 2,…, m
it is called a convex programming problem if the objective function f(X) and the constraint functions gj(X) are convex.
• Supposing that f(X) and gj(X), j = 1, 2,…, m, are convex functions, the Lagrange function can be written as:
L(X, Y, λ) = f(X) + Σ(j=1 to m) λj[gj(X) + yj²]
205
Convex programming problem
• If λj ≥ 0, then λjgj(X) is convex, and since λjyj = 0 from
∂L/∂yj (X, Y, λ) = 2λjyj = 0, j = 1, 2,…, m
L(X, Y, λ) will be a convex function.
• A necessary condition for f(X) to be a relative minimum at X* is that L(X, Y, λ) have a stationary point at X*. However, if L(X, Y, λ) is a convex function, its derivative vanishes only at one point, which must be an absolute minimum of the function f(X). Thus the Kuhn-Tucker conditions are both necessary and sufficient for an absolute minimum of f(X) at X*.
206
Convex programming problem
• If the given optimization problem is known to be a convex programming problem, there will be no relative minima or saddle points, and hence the extreme point found by applying the Kuhn-Tucker conditions is guaranteed to be an absolute minimum of f(X). However, it is often very difficult to ascertain whether the objective and constraint functions involved in a practical engineering problem are convex.
207
Linear Programming I:
Simplex method
• Linear programming is an optimization method applicable
for the solution of problems in which the objective function
and the constraints appear as linear functions of the decision
variables.
• Simplex method is the most efficient and popular method
for solving general linear programming problems.
• At least four Nobel prizes were awarded for contributions
related to linear programming (e.g. In 1975, Kantorovich of
the former Soviet Union and T.C. Koopmans of USA were
awarded for application of LP to the economic problem of
allocating resources).
208
Linear Programming I:
Simplex method-Applications
• Petroleum refineries
– choice of buying crude oil from several different sources with
differing compositions and at differing prices
– manufacturing different products such as aviation fuel, diesel fuel,
and gasoline, in varying quantities
– Constraints due to the restrictions on the quantity of the crude oil
from a particular source, the capacity of the refinery to produce a
particular product
– A mix of the purchased crude oil and the manufactured products is
sought that gives the maximum profit
• Optimal production plan in a manufacturing firm
– Pay overtime rates to achieve higher production during periods of
higher demand
• The routing of aircraft and ships can also be decided using
LP
209
Standard Form of a Linear Programming Problem
• Scalar form
Minimize f(x1, x2,…, xn) = c1x1 + c2x2 + … + cnxn
subject to the constraints
a11x1 + a12x2 + … + a1nxn = b1
a21x1 + a22x2 + … + a2nxn = b2
⋮
am1x1 + am2x2 + … + amnxn = bm
x1 ≥ 0, x2 ≥ 0, …, xn ≥ 0
where cj, bi, and aij (i = 1, 2,…, m; j = 1, 2,…, n) are known constants, and xj are the decision variables.
210
Standard Form of a Linear Programming Problem
• Matrix form
Minimize f(X) = cᵀX
subject to the constraints
aX = b
X ≥ 0
where
X = (x1, x2,…, xn)ᵀ, b = (b1, b2,…, bm)ᵀ, c = (c1, c2,…, cn)ᵀ
    | a11  a12  …  a1n |
a = | a21  a22  …  a2n |
    |  ⋮                |
    | am1  am2  …  amn |
211
Characteristic of a Linear
Programming Problem
• The objective function is of the minimization type
• All the constraints are of the equality type
• All the decision variables are nonnegative
• The number of the variables in the problem is n. This
includes the slack and surplus variables.
• The number of constraints is m (m < n).
212
Characteristic of a Linear
Programming Problem
• The number of basic variables is m (same as the number of
constraints).
• The number of nonbasic variables is n-m.
• The column of the right-hand side b is nonnegative (greater than or equal to zero).
• The calculations are organized in a table.
• Only the values of the coefficients are necessary for the
calculations. The table therefore contains only coefficient
values, the matrix A in previous discussions. These are the
coefficients in the constraint equations.
213
Characteristic of a Linear
Programming Problem
• The objective function is the last row in the table. The
constraint coefficients are written first.
• Row operations consist of adding (subtracting) a definite multiple of the pivot row from other rows of the table.
214
Transformation of LP Problems into Standard Form
• The maximization of a function f(x1, x2,…, xn) is equivalent to the minimization of the negative of the same function. For example, the objective function
minimize f = c1x1 + c2x2 + … + cnxn
is equivalent to
maximize f′ = −f = −c1x1 − c2x2 − … − cnxn
Consequently, the objective function can be stated in the minimization form in any linear programming problem.
215
Transformation of LP Problems into Standard Form
• A variable may be unrestricted in sign in some problems. In such cases, an unrestricted variable (which can take a positive, negative or zero value) can be written as the difference of two nonnegative variables.
• Thus if xj is unrestricted in sign, it can be written as xj = xj′ − xj″, where xj′ ≥ 0 and xj″ ≥ 0.
• It can be seen that xj will be negative, zero or positive, depending on whether xj″ is greater than, equal to, or less than xj′.
216
Transformation of LP Problems into Standard Form
If a constraint appears in the form of a "less than or equal to" type of inequality as:
ak1x1 + ak2x2 + … + aknxn ≤ bk
it can be converted into the equality form by adding a nonnegative slack variable xn+1 as follows:
ak1x1 + ak2x2 + … + aknxn + xn+1 = bk
217
Transformation of LP Problems into Standard Form
If a constraint appears in the form of a "greater than or equal to" type of inequality as:
ak1x1 + ak2x2 + … + aknxn ≥ bk
it can be converted into the equality form by subtracting a variable as:
ak1x1 + ak2x2 + … + aknxn − xn+1 = bk
where xn+1 is a nonnegative variable known as a surplus variable.
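A small NumPy illustration of this bookkeeping — appending a +1 slack column for each "≤" row and a −1 surplus column for each "≥" row. The matrix values are assumed here, not from the slides:
```python
import numpy as np

A = np.array([[10.0, 5.0],     # a "<=" constraint row
              [4.0, 10.0]])    # a ">=" constraint row
senses = ['<=', '>=']
# +1 column (slack) for "<=" rows, -1 column (surplus) for ">=" rows:
extra = np.diag([1.0 if s == '<=' else -1.0 for s in senses])
A_std = np.hstack([A, extra])
print(A_std)
```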
218
Geometry of LP Problems
Example: A manufacturing firm produces two machine
parts using lathes, milling machines, and grinding machines.
The different machining times required for each part, the
machining times available for different machines, and the
profit on each machine part are given in the following table.
Determine the number of parts I and II to be manufactured
per week to maximize the profit.
Type of machine      Machine time required (min)   Machine time required (min)   Maximum time available
                     Machine Part I                Machine Part II               per week (min)
Lathes               10                            5                             2500
Milling machines     4                             10                            2000
Grinding machines    1                             1.5                           450
Profit per unit      $50                           $100
219
Geometry of LP Problems
Solution: Let the machine parts I and II manufactured per week be denoted by x and y, respectively. The constraints due to the maximum time limitations on the various machines are given by:
10x + 5y ≤ 2500    (E1)
4x + 10y ≤ 2000    (E2)
x + 1.5y ≤ 450    (E3)
Since the variables x and y cannot take negative values, we have
x ≥ 0, y ≥ 0    (E4)
220
Geometry of LP Problems
Solution: The total profit is given by:
f(x, y) = 50x + 100y    (E5)
Thus the problem is to determine the nonnegative values of x and y that satisfy the constraints stated in Eqs. (E1) to (E3) and maximize the objective function given by (E5). The inequalities (E1) to (E4) can be plotted in the xy plane and the feasible region identified as shown in the figure. Our objective is to find at least one point out of the infinite points in the shaded region in the figure which maximizes the profit function (E5).
221
Geometry of LP Problems
Solution: The contours of the objective function, f, are defined by the linear equation:
50x + 100y = k = constant
As k is varied, the objective function line is moved parallel to itself. The maximum value of f is the largest k whose objective function line has at least one point in common with the feasible region. Such a point can be identified as point G in the figure. The optimum solution corresponds to a value of x* = 187.5, y* = 125, and a profit of $21,875.00.
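The same optimum can be recovered numerically. The sketch below assumes SciPy's linprog, which minimizes, so the profit coefficients are negated:
```python
from scipy.optimize import linprog

c = [-50, -100]                        # maximize 50x + 100y -> minimize the negative
A_ub = [[10, 5], [4, 10], [1, 1.5]]    # lathe, milling, grinding minutes per part
b_ub = [2500, 2000, 450]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)                 # ~[187.5, 125.0] and 21875.0
```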
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt
CN.ppt

More Related Content

Similar to CN.ppt

Optimization Techniques.pdf
Optimization Techniques.pdfOptimization Techniques.pdf
Optimization Techniques.pdfanandsimple
 
LECTUE 2-OT (1).pptx
LECTUE 2-OT (1).pptxLECTUE 2-OT (1).pptx
LECTUE 2-OT (1).pptxProfOAJarali
 
Week1_slides_Mathematical Optimization for Engineers
Week1_slides_Mathematical Optimization for EngineersWeek1_slides_Mathematical Optimization for Engineers
Week1_slides_Mathematical Optimization for EngineersMarcoRavelo2
 
LINEAR PROGRAMMING
LINEAR PROGRAMMINGLINEAR PROGRAMMING
LINEAR PROGRAMMINGrashi9
 
Reading papers - survey on Non-Convex Optimization
Reading papers - survey on Non-Convex OptimizationReading papers - survey on Non-Convex Optimization
Reading papers - survey on Non-Convex OptimizationX 37
 
Optimization Computing Platform for the Construction Industry
Optimization Computing Platform for the Construction IndustryOptimization Computing Platform for the Construction Industry
Optimization Computing Platform for the Construction IndustryKostas Dimitriou
 
Linear Programing.pptx
Linear Programing.pptxLinear Programing.pptx
Linear Programing.pptxAdnanHaleem
 
Design Optimization.ppt
Design Optimization.pptDesign Optimization.ppt
Design Optimization.pptKhalil Alhatab
 
Chapter 1 (1).pdf
Chapter 1 (1).pdfChapter 1 (1).pdf
Chapter 1 (1).pdfrziguiala
 
IRJET- Optimization of Fink and Howe Trusses
IRJET-  	  Optimization of Fink and Howe TrussesIRJET-  	  Optimization of Fink and Howe Trusses
IRJET- Optimization of Fink and Howe TrussesIRJET Journal
 
Deep learning Unit1 BasicsAllllllll.pptx
Deep learning Unit1 BasicsAllllllll.pptxDeep learning Unit1 BasicsAllllllll.pptx
Deep learning Unit1 BasicsAllllllll.pptxFreefireGarena30
 
Linear programming manzoor nabi
Linear programming  manzoor nabiLinear programming  manzoor nabi
Linear programming manzoor nabiManzoor Wani
 
Mba i ot unit-1.1_linear programming
Mba i ot unit-1.1_linear programmingMba i ot unit-1.1_linear programming
Mba i ot unit-1.1_linear programmingRai University
 
Least Square Optimization and Sparse-Linear Solver
Least Square Optimization and Sparse-Linear SolverLeast Square Optimization and Sparse-Linear Solver
Least Square Optimization and Sparse-Linear SolverJi-yong Kwon
 

Similar to CN.ppt (20)

lecture.ppt
lecture.pptlecture.ppt
lecture.ppt
 
Optimization Techniques.pdf
Optimization Techniques.pdfOptimization Techniques.pdf
Optimization Techniques.pdf
 
LECTUE 2-OT (1).pptx
LECTUE 2-OT (1).pptxLECTUE 2-OT (1).pptx
LECTUE 2-OT (1).pptx
 
Week1_slides_Mathematical Optimization for Engineers
Week1_slides_Mathematical Optimization for EngineersWeek1_slides_Mathematical Optimization for Engineers
Week1_slides_Mathematical Optimization for Engineers
 
LINEAR PROGRAMMING
LINEAR PROGRAMMINGLINEAR PROGRAMMING
LINEAR PROGRAMMING
 
Reading papers - survey on Non-Convex Optimization
Reading papers - survey on Non-Convex OptimizationReading papers - survey on Non-Convex Optimization
Reading papers - survey on Non-Convex Optimization
 
Optimization Computing Platform for the Construction Industry
Optimization Computing Platform for the Construction IndustryOptimization Computing Platform for the Construction Industry
Optimization Computing Platform for the Construction Industry
 
Linear Programing.pptx
Linear Programing.pptxLinear Programing.pptx
Linear Programing.pptx
 
OR Ndejje Univ (1).pptx
OR Ndejje Univ (1).pptxOR Ndejje Univ (1).pptx
OR Ndejje Univ (1).pptx
 
Design Optimization.ppt
Design Optimization.pptDesign Optimization.ppt
Design Optimization.ppt
 
Visual Techniques
Visual TechniquesVisual Techniques
Visual Techniques
 
Chapter 1 (1).pdf
Chapter 1 (1).pdfChapter 1 (1).pdf
Chapter 1 (1).pdf
 
IRJET- Optimization of Fink and Howe Trusses
IRJET-  	  Optimization of Fink and Howe TrussesIRJET-  	  Optimization of Fink and Howe Trusses
IRJET- Optimization of Fink and Howe Trusses
 
Introduction to optimization Problems
Introduction to optimization ProblemsIntroduction to optimization Problems
Introduction to optimization Problems
 
OR Ndejje Univ.pptx
OR Ndejje Univ.pptxOR Ndejje Univ.pptx
OR Ndejje Univ.pptx
 
Deep learning Unit1 BasicsAllllllll.pptx
Deep learning Unit1 BasicsAllllllll.pptxDeep learning Unit1 BasicsAllllllll.pptx
Deep learning Unit1 BasicsAllllllll.pptx
 
Linear programming manzoor nabi
Linear programming  manzoor nabiLinear programming  manzoor nabi
Linear programming manzoor nabi
 
D05511625
D05511625D05511625
D05511625
 
Mba i ot unit-1.1_linear programming
Mba i ot unit-1.1_linear programmingMba i ot unit-1.1_linear programming
Mba i ot unit-1.1_linear programming
 
Least Square Optimization and Sparse-Linear Solver
Least Square Optimization and Sparse-Linear SolverLeast Square Optimization and Sparse-Linear Solver
Least Square Optimization and Sparse-Linear Solver
 

More from raj20072

Genet algo.ppt
Genet algo.pptGenet algo.ppt
Genet algo.pptraj20072
 
A Decomposition Aggregation Method for Solving Electrical Power Dispatch Prob...
A Decomposition Aggregation Method for Solving Electrical Power Dispatch Prob...A Decomposition Aggregation Method for Solving Electrical Power Dispatch Prob...
A Decomposition Aggregation Method for Solving Electrical Power Dispatch Prob...raj20072
 

More from raj20072 (7)

PS.pptx
PS.pptxPS.pptx
PS.pptx
 
OI.ppt
OI.pptOI.ppt
OI.ppt
 
pt.pptx
pt.pptxpt.pptx
pt.pptx
 
pso.ppt
pso.pptpso.ppt
pso.ppt
 
nsga.ppt
nsga.pptnsga.ppt
nsga.ppt
 
Genet algo.ppt
Genet algo.pptGenet algo.ppt
Genet algo.ppt
 
A Decomposition Aggregation Method for Solving Electrical Power Dispatch Prob...
A Decomposition Aggregation Method for Solving Electrical Power Dispatch Prob...A Decomposition Aggregation Method for Solving Electrical Power Dispatch Prob...
A Decomposition Aggregation Method for Solving Electrical Power Dispatch Prob...
 

Recently uploaded

ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZTE
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...Soham Mondal
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile servicerehmti665
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AIabhishek36461
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSKurinjimalarL3
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidNikhilNagaraju
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSCAESB
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130Suhani Kapoor
 
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escortsranjana rawat
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024hassan khalil
 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.eptoze12
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxpurnimasatapathy1234
 

Recently uploaded (20)

ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
ZXCTN 5804 / ZTE PTN / ZTE POTN / ZTE 5804 PTN / ZTE POTN 5804 ( 100/200 GE Z...
 
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
★ CALL US 9953330565 ( HOT Young Call Girls In Badarpur delhi NCR
 
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
🔝9953056974🔝!!-YOUNG call girls in Rajendra Nagar Escort rvice Shot 2000 nigh...
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptx
 
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
VICTOR MAESTRE RAMIREZ - Planetary Defender on NASA's Double Asteroid Redirec...
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
OSVC_Meta-Data based Simulation Automation to overcome Verification Challenge...
 
Call Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile serviceCall Girls Delhi {Jodhpur} 9711199012 high profile service
Call Girls Delhi {Jodhpur} 9711199012 high profile service
 
Past, Present and Future of Generative AI
Past, Present and Future of Generative AIPast, Present and Future of Generative AI
Past, Present and Future of Generative AI
 
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICSAPPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
APPLICATIONS-AC/DC DRIVES-OPERATING CHARACTERISTICS
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
 
main PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfidmain PPT.pptx of girls hostel security using rfid
main PPT.pptx of girls hostel security using rfid
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentation
 
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
VIP Call Girls Service Hitech City Hyderabad Call +91-8250192130
 
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
 
Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024Architect Hassan Khalil Portfolio for 2024
Architect Hassan Khalil Portfolio for 2024
 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.
 
Microscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptxMicroscopic Analysis of Ceramic Materials.pptx
Microscopic Analysis of Ceramic Materials.pptx
 

CN.ppt

  • 1. Optimization Assoc. Prof. Dr. Pelin Gündeş gundesbakir@yahoo.com
  • 2. 2 Optimization Basic Information • Instructor: Assoc. Professor Pelin Gundes (http://atlas.cc.itu.edu.tr/~gundes/) • E-mail: gundesbakir@yahoo.com • Office Hours: TBD by email appointment • Website: http://atlas.cc.itu.edu.tr/~gundes/teaching/Optimi zation.htm • Lecture Time: Wednesday 13:00 - 16:00 • Lecture Venue: M 2180
  • 3. 3 Optimization literature Textbooks: 1. Nocedal J. and Wright S.J., Numerical Optimization, Springer Series in Operations Research, Springer, 636 pp, 1999. 2. Spall J.C., Introduction to Stochastic Search and Optimization, Estimation, Simulation and Control, Wiley, 595 pp, 2003. 3. Chong E.K.P. and Zak S.H., An Introduction to Optimization, Second Edition, John Wiley & Sons, New York, 476 pp, 2001. 4. Rao S.S., Engineering Optimization - Theory and Practice, John Wiley & Sons, New York, 903 pp, 1996. 5. Gill P.E., Murray W. and Wright M.H., Practical Optimization, Elsevier, 401 pp., 2004. 6. Goldberg D.E., Genetic Algorithms in Search, Optimization and Machine Learning, Addison Wesley, Reading, Mass., 1989. 7. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge University Press, 2004.(available at http://www.stanford.edu/~boyd/cvxbook/)
  • 4. 4 Optimization literature Journals: 1. Engineering Optimization 2. ASME Journal of Mechnical Design 3. AIAA Journal 4. ASCE Journal of Structural Engineering 5. Computers and Structures 6. International Journal for Numerical Methods in Engineering 7. Structural Optimization 8. Journal of Optimization Theory and Applications 9. Computers and Operations Research 10. Operations Research and Management Science
  • 5. 5 Optimization Course Schedule: 1. Introduction to Optimization 2. Classical Optimization Techniques 3. Linear programming and the Simplex method 4. Nonlinear programming-One Dimensional Minimization Methods 5. Nonlinear programming-Unconstrained Optimization Techniques 6. Nonlinear programming-Constrained Optimization Techniques 7. Global Optimization Methods-Genetic algorithms 8. Global Optimization Methods-Simulated Annealing 9. Global Optimization Methods- Coupled Local Minimizers
  • 6. 6 Optimization Course Prerequisite: • Familiarity with MATLAB, if you are not familiar with MATLAB, please visit http://www.ece.ust.hk/~palomar/courses/ELEC692Q/lecture%2006%20-%20cvx/matlab_crashcourse.pdf http://www.ece.ust.hk/~palomar/courses/ELEC692Q/lecture%2006%20-%20cvx/official_getting_started.pdf
  • 7. 7 Optimization • 70% attendance is required! • Grading: Homeworks: 15% Mid-term projects: 40% Final Project: 45%
  • 8. 8 Optimization • There will also be lab sessions for MATLAB exercises!
  • 9. 9 1. Introduction • Optimization is the act of obtaining the best result under given circumstances. • Optimization can be defined as the process of finding the conditions that give the maximum or minimum of a function. • The optimum seeking methods are also known as mathematical programming techniques and are generally studied as a part of operations research. • Operations research is a branch of mathematics concerned with the application of scientific methods and techniques to decision making problems and with establishing the best or optimal solutions.
  • 10. 10 1. Introduction • Operations research (in the UK) or operational research (OR) (in the US) or yöneylem araştırması (in Turkish) is an interdisciplinary branch of mathematics which uses methods like: – mathematical modeling – statistics – algorithms to arrive at optimal or good decisions in complex problems which are concerned with optimizing the maxima (profit, faster assembly line, greater crop yield, higher bandwidth, etc) or minima (cost loss, lowering of risk, etc) of some objective function. • The eventual intention behind using operations research is to elicit a best possible solution to a problem mathematically, which improves or optimizes the performance of the system.
  • 12. 12 1. Introduction Historical development • Isaac Newton (1642-1727) (The development of differential calculus methods of optimization) • Joseph-Louis Lagrange (1736-1813) (Calculus of variations, minimization of functionals, method of optimization for constrained problems) • Augustin-Louis Cauchy (1789-1857) (Solution by direct substitution, steepest descent method for unconstrained optimization)
  • 13. 13 1. Introduction Historical development • Leonhard Euler (1707-1783) (Calculus of variations, minimization of functionals) • Gottfried Leibnitz (1646-1716) (Differential calculus methods of optimization)
  • 14. 14 1. Introduction Historical development • George Bernard Dantzig (1914-2005) (Linear programming and Simplex method (1947)) • Richard Bellman (1920-1984) (Principle of optimality in dynamic programming problems) • Harold William Kuhn (1925-) (Necessary and sufficient conditions for the optimal solution of programming problems, game theory)
  • 15. 15 1. Introduction Historical development • Albert William Tucker (1905-1995) (Necessary and sufficient conditions for the optimal solution of programming problems, nonlinear programming, game theory: his PhD student was John Nash) • Von Neumann (1903-1957) (game theory)
  • 16. 16 1. Introduction • Mathematical optimization problem: • f0 : Rn R: objective function • x=(x1,…..,xn): design variables (unknowns of the problem, they must be linearly independent) • gi : Rn R: (i=1,…,m): inequality constraints • The problem is a constrained optimization problem m i b x g x f i i ,...., 1 , ) ( subject to ) ( minimize 0  
  • 17. 17 1. Introduction • If a point x* corresponds to the minimum value of the function f (x), the same point also corresponds to the maximum value of the negative of the function, -f (x). Thus optimization can be taken to mean minimization since the maximum of a function can be found by seeking the minimum of the negative of the same function.
  • 18. 18 1. Introduction Constraints • Behaviour constraints: Constraints that represent limitations on the behaviour or performance of the system are termed behaviour or functional constraints. • Side constraints: Constraints that represent physical limitations on design variables such as manufacturing limitations.
  • 19. 19 1. Introduction Constraint Surface • For illustration purposes, consider an optimization problem with only inequality constraints gj (X)  0. The set of values of X that satisfy the equation gj (X) =0 forms a hypersurface in the design space and is called a constraint surface.
  • 20. 20 1. Introduction Constraint Surface • Note that this is a (n-1) dimensional subspace, where n is the number of design variables. The constraint surface divides the design space into two regions: one in which gj (X)  0and the other in which gj (X) 0.
  • 21. 21 1. Introduction Constraint Surface • Thus the points lying on the hypersurface will satisfy the constraint gj (X) critically whereas the points lying in the region where gj (X) >0 are infeasible or unacceptable, and the points lying in the region where gj (X) < 0 are feasible or acceptable.
  • 22. 22 1. Introduction Constraint Surface • In the below figure, a hypothetical two dimensional design space is depicted where the infeasible region is indicated by hatched lines. A design point that lies on one or more than one constraint surface is called a bound point, and the associated constraint is called an active constraint.
  • 23. 23 1. Introduction Constraint Surface • Design points that do not lie on any constraint surface are known as free points.
  • 24. 24 1. Introduction Constraint Surface Depending on whether a particular design point belongs to the acceptable or unacceptable regions, it can be identified as one of the following four types: • Free and acceptable point • Free and unacceptable point • Bound and acceptable point • Bound and unacceptable point
  • 25. 25 1. Introduction • The conventional design procedures aim at finding an acceptable or adequate design which merely satisfies the functional and other requirements of the problem. • In general, there will be more than one acceptable design, and the purpose of optimization is to choose the best one of the many acceptable designs available. • Thus a criterion has to be chosen for comparing the different alternative acceptable designs and for selecting the best one. • The criterion with respect to which the design is optimized, when expressed as a function of the design variables, is known as the objective function.
  • 26. 26 1. Introduction • In civil engineering, the objective is usually taken as the minimization of the cost. • In mechanical engineering, the maximization of the mechanical efficiency is the obvious choice of an objective function. • In aerospace structural design problems, the objective function for minimization is generally taken as weight. • In some situations, there may be more than one criterion to be satisfied simultaneously. An optimization problem involving multiple objective functions is known as a multiobjective programming problem.
  • 27. 27 1. Introduction • With multiple objectives there arises a possibility of conflict, and one simple way to handle the problem is to construct an overall objective function as a linear combination of the conflicting multiple objective functions. • Thus, if f1 (X) and f2 (X) denote two objective functions, construct a new (overall) objective function for optimization as: where 1 and 2 are constants whose values indicate the relative importance of one objective function to the other. ) ( ) ( ) ( 2 2 1 1 X X X f f f    
  • 28. 28 1. Introduction • The locus of all points satisfying f (X) = c = constant forms a hypersurface in the design space, and for each value of c there corresponds a different member of a family of surfaces. These surfaces, called objective function surfaces, are shown in a hypothetical two- dimensional design space in the figure below.
  • 29. 29 1. Introduction • Once the objective function surfaces are drawn along with the constraint surfaces, the optimum point can be determined without much difficulty. • But the main problem is that as the number of design variables exceeds two or three, the constraint and objective function surfaces become complex even for visualization and the problem has to be solved purely as a mathematical problem.
  • 30. 30 Example Example: Design a uniform column of tubular section to carry a compressive load P=2500 kgf for minimum cost. The column is made up of a material that has a yield stress of 500 kgf/cm2, modulus of elasticity (E) of 0.85e6 kgf/cm2, and density () of 0.0025 kgf/cm3. The length of the column is 250 cm. The stress induced in this column should be less than the buckling stress as well as the yield stress. The mean diameter of the column is restricted to lie between 2 and 14 cm, and columns with thicknesses outside the range 0.2 to 0.8 cm are not available in the market. The cost of the column includes material and construction costs and can be taken as 5W + 2d, where W is the weight in kilograms force and d is the mean diameter of the column in centimeters.
  • 31. 31 Example Example: The design variables are the mean diameter (d) and tube thickness (t): X = (x1, x2)ᵀ = (d, t)ᵀ. The objective function to be minimized is given by:
f(X) = 5W + 2d = 5ρlπdt + 2d = 9.82 x1 x2 + 2 x1
  • 32. 32 Example • The behaviour constraints can be expressed as: stress induced ≤ yield stress; stress induced ≤ buckling stress • The induced stress is given by:
induced stress = σi = P/(π d t) = 2500/(π x1 x2)
  • 33. 33 Example • The buckling stress for a pin connected column is given by:
buckling stress = σb = (Euler buckling load)/(cross-sectional area) = (π² E I / l²) · 1/(π d t)
where I is the second moment of area of the cross section of the column given by:
I = (π/64)(do⁴ − di⁴) = (π/64)[(d + t)⁴ − (d − t)⁴] = (π/64)[(d + t)² + (d − t)²][(d + t) + (d − t)][(d + t) − (d − t)]
= (π/8) d t (d² + t²) = (π/8) x1 x2 (x1² + x2²)
  • 34. 34 Example • Thus, the behaviour constraints can be restated as:
g1(X) = 2500/(π x1 x2) − 500 ≤ 0
g2(X) = 2500/(π x1 x2) − π²(0.85×10⁶)(x1² + x2²)/(8(250)²) ≤ 0
• The side constraints are given by: 2 ≤ d ≤ 14 and 0.2 ≤ t ≤ 0.8
  • 35. 35 Example • The side constraints can be expressed in standard form as:
g3(X) = −x1 + 2 ≤ 0
g4(X) = x1 − 14 ≤ 0
g5(X) = −x2 + 0.2 ≤ 0
g6(X) = x2 − 0.8 ≤ 0
  • 36. 36 Example • For a graphical solution, the constraint surfaces are to be plotted in a two dimensional design space where the two axes represent the two design variables x1 and x2. To plot the first constraint surface, we have:
g1(X) = 2500/(π x1 x2) − 500 ≤ 0, that is, x1 x2 ≥ 1.593
• Thus the curve x1 x2 = 1.593 represents the constraint surface g1(X) = 0. This curve can be plotted by finding several points on the curve. The points on the curve can be found by giving a series of values to x1 and finding the corresponding values of x2 that satisfy the relation x1 x2 = 1.593, as shown in the table below:
x1: 2, 4, 6, 8, 10, 12, 14
x2: 0.7965, 0.3983, 0.2655, 0.199, 0.1593, 0.1328, 0.114
  • 37. 37 Example • The infeasible region represented by g1(X) > 0, or x1 x2 < 1.593, is shown by hatched lines. These points are plotted and a curve P1Q1 passing through all these points is drawn as shown:
  • 38. 38 Example • Similarly the second constraint g2(X) ≤ 0 can be expressed as: x1 x2 (x1² + x2²) ≥ 47.3 • The points lying on the constraint surface g2(X) = 0 can be obtained as follows (these points are plotted as curve P2Q2):
x1: 2, 4, 6, 8, 10, 12, 14
x2: 2.41, 0.716, 0.219, 0.0926, 0.0473, 0.0274, 0.0172
  • 39. 39 Example • The plotting of the side constraints is simple since they represent straight lines. • After plotting all six constraints, the feasible region is determined as the bounded area ABCDEA.
  • 40. 40 Example • Next, the contours of the objective function are to be plotted before finding the optimum point. For this, we plot the curves given by:
f(X) = 9.82 x1 x2 + 2 x1 = c = constant
for a series of values of c. By giving different values to c, the contours of f can be plotted with the help of the following points.
  • 41. 41 Example • For f(X) = 9.82 x1 x2 + 2 x1 = 50.0:
x2: 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7
x1: 16.77, 12.62, 10.10, 8.44, 7.24, 6.33, 5.64
• For f(X) = 9.82 x1 x2 + 2 x1 = 40.0:
x2: 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7
x1: 13.40, 10.10, 8.08, 6.75, 5.79, 5.06, 4.51
• For f(X) = 9.82 x1 x2 + 2 x1 = 31.58 (passing through the corner point C):
x2: 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7
x1: 10.57, 7.96, 6.38, 5.33, 4.57, 4.00, 3.56
• For f(X) = 9.82 x1 x2 + 2 x1 = 26.53 (passing through the corner point B):
x2: 0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7
x1: 8.88, 6.69, 5.36, 4.48, 3.84, 3.36, 2.99
  • 42. 42 Example • These contours are shown in the figure below and it can be seen that the objective function cannot be reduced below a value of 26.53 (corresponding to point B) without violating some of the constraints. Thus, the optimum solution is given by point B with d* = x1* = 5.44 cm and t* = x2* = 0.293 cm with fmin = 26.53.
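The optimum found graphically can be cross-checked numerically; the sketch below assumes the Optimization Toolbox function fmincon, with the side constraints g3–g6 passed as bounds and an arbitrary feasible starting point:

   cost = @(x) 9.82*x(1)*x(2) + 2*x(1);       % f(X) = 5W + 2d
   lb = [2; 0.2];  ub = [14; 0.8];            % side constraints g3..g6
   nonlcon = @(x) deal( ...
       [2500/(pi*x(1)*x(2)) - 500;            % g1: yield-stress constraint
        2500/(pi*x(1)*x(2)) ...
          - pi^2*0.85e6*(x(1)^2 + x(2)^2)/(8*250^2)], ...  % g2: buckling
       []);                                   % no equality constraints
   [xopt, fmin] = fmincon(cost, [7; 0.4], [], [], [], [], lb, ub, nonlcon)
   % xopt is approximately [5.44; 0.293] with fmin = 26.53 (point B)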
  • 43. 43 Examples Design of civil engineering structures • variables: width and height of member cross-sections • constraints: limit stresses, maximum and minimum dimensions • objective: minimum cost or minimum weight Analysis of statistical data and building empirical models from measurements • variables: model parameters • constraints: physical upper and lower bounds for model parameters • objective: minimum prediction error
  • 44. 44 Classification of optimization problems Classification based on: • Constraints – Constrained optimization problem – Unconstrained optimization problem • Nature of the design variables – Static optimization problems – Dynamic optimization problems
  • 45. 45 Classification of optimization problems Classification based on: • Physical structure of the problem – Optimal control problems – Non-optimal control problems • Nature of the equations involved – Nonlinear programming problem – Geometric programming problem – Quadratic programming problem – Linear programming problem
  • 46. 46 Classification of optimization problems Classification based on: • Permissible values of the design variables – Integer programming problems – Real valued programming problems • Deterministic nature of the variables – Stochastic programming problem – Deterministic programming problem
  • 47. 47 Classification of optimization problems Classification based on: • Separability of the functions – Separable programming problems – Non-separable programming problems • Number of the objective functions – Single objective programming problem – Multiobjective programming problem
  • 48. 48 Geometric Programming • A geometric programming problem (GMP) is one in which the objective function and constraints are expressed as posynomials in X.
  • 50. 50 Quadratic Programming Problem • A quadratic programming problem is a nonlinear programming problem with a quadratic objective function and linear constraints. It is usually formulated as follows:
F(X) = c + Σi=1..n qi xi + Σi=1..n Σj=1..n Qij xi xj
subject to
Σi=1..n aij xi = bj, j = 1, 2, …, m; xi ≥ 0, i = 1, 2, …, n
where c, qi, Qij, aij, and bj are constants.
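A small instance of this problem class can be solved with the Optimization Toolbox routine quadprog, which expects the objective in the form 0.5·xᵀHx + fᵀx, so H = 2Q when the objective is written as above (the data below are hypothetical):

   H = 2*[2 0; 0 2];          % H = 2*Q for the quadratic term
   f = [-4; -6];              % linear coefficients q
   Aeq = [1 1];  beq = 1;     % equality constraint x1 + x2 = 1
   lb = [0; 0];               % nonnegativity x >= 0
   x = quadprog(H, f, [], [], Aeq, beq, lb)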
  • 51. 51 Optimal Control Problem • An optimal control (OC) problem is a mathematical programming problem involving a number of stages, where each stage evolves from the preceding stage in a prescribed manner. • It is usually described by two types of variables: the control (design) and the state variables. The control variables define the system and govern the evolution of the system from one stage to the next, and the state variables describe the behaviour or status of the system in any stage.
  • 52. 52 Optimal Control Problem • The problem is to find a set of control or design variables such that the total objective function (also known as the performance index) over all stages is minimized subject to a set of constraints on the control and state variables. • An OC problem can be stated as follows: Find X which minimizes
f(X) = Σi=1..l fi(xi, yi)
subject to the constraints
qi(xi, yi) + yi = yi+1, i = 1, 2, …, l
gj(xj) ≤ 0, j = 1, 2, …, l
hk(yk) ≤ 0, k = 1, 2, …, l
where xi is the ith control variable, yi is the ith state variable, and fi is the contribution of the ith stage to the total objective function; gj, hk and qi are functions of xj, yk, and xi and yi, respectively, and l is the total number of stages.
  • 53. 53 Integer Programming Problem • If some or all of the design variables x1,x2,..,xn of an optimization problem are restricted to take on only integer (or discrete) values, the problem is called an integer programming problem. • If all the design variables are permitted to take any real value, the optimization problem is called a real-valued programming problem.
  • 54. 54 Stochastic Programming Problem • A stochastic programming problem is an optimization problem in which some or all of the parameters (design variables and/or preassigned parameters) are probabilistic (nondeterministic or stochastic). • In other words, stochastic programming deals with the solution of the optimization problems in which some of the variables are described by probability distributions.
  • 55. 55 Separable Programming Problem • A function f(X) is said to be separable if it can be expressed as the sum of n single variable functions f1(x1), f2(x2), …, fn(xn), that is, f(X) = Σi=1..n fi(xi) • A separable programming problem is one in which the objective function and the constraints are separable and can be expressed in standard form as: Find X which minimizes f(X) = Σi=1..n fi(xi) subject to gj(X) = Σi=1..n gij(xi) ≤ bj, j = 1, 2, …, m, where bj is a constant.
  • 56. 56 Multiobjective Programming Problem • A multiobjective programming problem can be stated as follows: Find X which minimizes f1(X), f2(X), …, fk(X) subject to gj(X) ≤ 0, j = 1, 2, …, m, where f1, f2, …, fk denote the objective functions to be minimized simultaneously.
  • 57. 57 Review of mathematics Concepts from linear algebra: Positive definiteness • Test 1: A matrix A will be positive definite if all its eigenvalues are positive; that is, all the values of λ that satisfy the determinantal equation |A − λI| = 0 should be positive. Similarly, the matrix A will be negative definite if all its eigenvalues are negative.
  • 58. 58 Review of mathematics Positive definiteness • Test 2: Another test that can be used to find the positive definiteness of a matrix A of order n involves evaluation of the determinants of the leading principal minors:
A1 = a11, A2 = |a11 a12; a21 a22|, A3 = |a11 a12 a13; a21 a22 a23; a31 a32 a33|, …, An = |A|
• The matrix A will be positive definite if and only if all the values A1, A2, A3, …, An are positive • The matrix A will be negative definite if and only if the sign of Aj is (−1)^j for j = 1, 2, …, n • If some of the Aj are positive and the remaining Aj are zero, the matrix A will be positive semidefinite
  • 59. 59 Review of mathematics Negative definiteness • Equivalently, a matrix is negative-definite if all its eigenvalues are negative • It is positive-semidefinite if all its eigenvalues are greater than or equal to zero • It is negative-semidefinite if all its eigenvalues are less than or equal to zero
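Both tests are easy to carry out numerically; a sketch in MATLAB (the matrix A is a hypothetical example):

   A = [4 1 0; 1 3 1; 0 1 2];           % sample symmetric matrix
   lambda = eig(A)                      % Test 1: all eigenvalues positive?
   n = size(A,1);
   minors = zeros(1,n);
   for k = 1:n
       minors(k) = det(A(1:k,1:k));     % Test 2: leading principal minors A1..An
   end
   minors                               % all positive => A is positive definite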
  • 60. 60 Concepts from linear algebra: Nonsingular matrix: The determinant of the matrix is not zero. Rank: The rank of a matrix A is the order of the largest nonsingular square submatrix of A, that is, the largest submatrix with a determinant other than zero. Review of mathematics
  • 61. 61 Review of mathematics Solutions of a linear problem Minimize f(x) = cᵀx Subject to g(x): Ax = b Side constraints: x ≥ 0 • The existence of a solution to this problem depends on the rows of A. • If A is square and its rows are linearly independent, then there is a unique solution to the system of equations. • If det(A) is zero, that is, matrix A is singular, there are either no solutions or infinitely many solutions.
  • 62. 62 Review of mathematics Suppose A = [1 1 1; −1 1 0.5] and b = [3; 1.5]. The new matrix A* = [1 1 1 3; −1 1 0.5 1.5] is called the augmented matrix — the column b is appended to A. According to the theorems of linear algebra: • If the augmented matrix A* and the matrix of coefficients A have the same rank r which is less than the number of design variables n (r < n), then there are many solutions. • If the augmented matrix A* and the matrix of coefficients A do not have the same rank, a solution does not exist. • If the augmented matrix A* and the matrix of coefficients A have the same rank r = n, where the number of constraints is equal to the number of design variables, then there is a unique solution.
  • 63. 63 Review of mathematics In the example A = [1 1 1; −1 1 0.5], b = [3; 1.5]: The largest square submatrix is a 2 × 2 matrix (since m = 2 and m < n). Taking the submatrix which includes the first two columns of A, [1 1; −1 1], the determinant has a value of 2 and therefore is nonsingular. Thus the rank of A is 2 (r = 2). The same columns appear in A*, making its rank also 2. Since r < n, infinitely many solutions exist.
  • 64. 64 Review of mathematics In the example, one way to determine the solutions is to assign (n − r) variables arbitrary values and use them to determine values for the remaining r variables. The value n − r is often identified as the degree of freedom of the system of equations. In this example, the degree of freedom is 1 (i.e., 3 − 2). For instance, x3 can be assigned a value of 1, in which case x1 = 0.5 and x2 = 1.5.
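The same rank test and one particular solution can be reproduced in MATLAB:

   A = [1 1 1; -1 1 0.5];  b = [3; 1.5];
   rank(A)                              % r = 2
   rank([A b])                          % rank of the augmented matrix A*, also 2
   % r < n = 3, so n - r = 1 degree of freedom: fix x3 and solve for x1, x2
   x3 = 1;
   x12 = A(:,1:2) \ (b - A(:,3)*x3)     % gives x1 = 0.5, x2 = 1.5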
  • 65. 65 Homework What is the solution of the system given below?
g1: x1 + x2 = 2
g2: x1 − x2 = 1
g3: 2x1 + x2 = 1
Hint: Determine the rank of the matrix of the coefficients and the augmented matrix.
  • 66. 66 2. Classical optimization techniques Single variable optimization • Useful in finding the optimum solutions of continuous and differentiable functions • These methods are analytical and make use of the techniques of differential calculus in locating the optimum points. • Since some of the practical problems involve objective functions that are not continuous and/or differentiable, the classical optimization techniques have limited scope in practical applications.
  • 67. 67 2. Classical optimization techniques Single variable optimization • A function of one variable f(x) has a relative or local minimum at x = x* if f(x*) ≤ f(x* + h) for all sufficiently small positive and negative values of h • A point x* is called a relative or local maximum if f(x*) ≥ f(x* + h) for all values of h sufficiently close to zero. (Figure: a single-variable function with local minima, a global minimum, and a local maximum)
  • 68. 68 2. Classical optimization techniques Single variable optimization • A function f(x) is said to have a global or absolute minimum at x* if f(x*) ≤ f(x) for all x, and not just for all x close to x*, in the domain over which f(x) is defined. • Similarly, a point x* will be a global maximum of f(x) if f(x*) ≥ f(x) for all x in the domain.
  • 69. 69 Necessary condition • If a function f (x) is defined in the interval a ≤ x ≤ b and has a relative minimum at x = x*, where a < x* < b, and if the derivative df (x) / dx = f’(x) exists as a finite number at x = x*, then f’(x*)=0 • The theorem does not say that the function necessarily will have a minimum or maximum at every point where the derivative is zero. e.g. f’(x)=0 at x= 0 for the function shown in figure. However, this point is neither a minimum nor a maximum. In general, a point x* at which f’(x*)=0 is called a stationary point.
  • 70. 70 Necessary condition • The theorem does not say what happens if a minimum or a maximum occurs at a point x* where the derivative fails to exist. For example, in the figure,
lim(h→0) [f(x* + h) − f(x*)] / h = m⁺ (positive) or m⁻ (negative)
depending on whether h approaches zero through positive or negative values, respectively. Unless the numbers m⁺ and m⁻ are equal, the derivative f’(x*) does not exist. If f’(x*) does not exist, the theorem is not applicable.
  • 71. 71 Sufficient condition • Let f’(x*)=f’’(x*)=…=f (n-1)(x*)=0, but f(n)(x*) ≠ 0. Then f(x*) is – A minimum value of f (x) if f (n)(x*) > 0 and n is even – A maximum value of f (x) if f (n)(x*) < 0 and n is even – Neither a minimum nor a maximum if n is odd
  • 72. 72 Example Determine the maximum and minimum values of the function f(x) = 12x⁵ − 45x⁴ + 40x³ + 5. Solution: Since f’(x) = 60(x⁴ − 3x³ + 2x²) = 60x²(x − 1)(x − 2), f’(x) = 0 at x = 0, x = 1, and x = 2. The second derivative is: f’’(x) = 60(4x³ − 9x² + 4x). At x = 1, f’’(x) = −60 and hence x = 1 is a relative maximum. Therefore, fmax = f(x = 1) = 12. At x = 2, f’’(x) = 240 and hence x = 2 is a relative minimum. Therefore, fmin = f(x = 2) = −11
  • 73. 73 Example Solution cont’d: At x = 0, f’’(x) = 0 and hence we must investigate the next derivative:
f’’’(x) = 60(12x² − 18x + 4) = 240 at x = 0
Since f’’’(x) ≠ 0 at x = 0, x = 0 is neither a maximum nor a minimum, and it is an inflection point.
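The whole computation can be checked with MATLAB's base polynomial utilities:

   p = [12 -45 40 0 0 5];           % coefficients of f, highest power first
   dp = polyder(p);                 % f'
   d2p = polyder(dp);               % f''
   roots(dp)                        % stationary points: x = 0 (twice), 1, 2
   polyval(d2p, [1 2])              % -60 (maximum at x=1), 240 (minimum at x=2)
   polyval(p, [1 2])                % fmax = 12, fmin = -11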
  • 74. 74 Multivariable optimization with no constraints • Definition: rth Differential of f If all partial derivatives of the function f through order r ≥ 1 exist and are continuous at a point X*, the polynomial
d^r f(X*) = Σi=1..n Σj=1..n … Σk=1..n (r summations) hi hj … hk ∂^r f(X*)/∂xi ∂xj … ∂xk
is called the rth differential of f at X*.
  • 75. 75 Multivariable optimization with no constraints • Example: rth differential of f when r = 2 and n = 3:
d²f(x1*, x2*, x3*) = Σi=1..3 Σj=1..3 hi hj ∂²f(X*)/∂xi ∂xj
= h1² ∂²f/∂x1² (X*) + h2² ∂²f/∂x2² (X*) + h3² ∂²f/∂x3² (X*) + 2h1h2 ∂²f/∂x1∂x2 (X*) + 2h2h3 ∂²f/∂x2∂x3 (X*) + 2h1h3 ∂²f/∂x1∂x3 (X*)
  • 76. 76 Multivariable optimization with no constraints • Taylor series expansion: The Taylor series expansion of a function f(X) about a point X* is given by:
f(X) = f(X*) + df(X*) + (1/2!) d²f(X*) + (1/3!) d³f(X*) + … + (1/N!) d^N f(X*) + R_N(X*, h)
where the last term, called the remainder, is given by:
R_N(X*, h) = (1/(N + 1)!) d^(N+1) f(X* + θh), where 0 < θ < 1 and h = X − X*
  • 77. 77 Example Find the second order Taylor’s series approximation of the function f(x1, x2, x3) = x2² x3 + x1 e^x3 about the point X* = (1, 0, −2)ᵀ. Solution: The second order Taylor’s series approximation of the function f about point X* is given by
f(X) ≈ f(X*) + df(X*) + (1/2!) d²f(X*), all evaluated at (1, 0, −2)ᵀ
  • 80. 80 Example cont’d Thus, the Taylor’s series approximation is given by:
f(X) ≈ e⁻² + e⁻² (h1 + h3) + (1/2!)(−4h2² + e⁻² h3² + 2e⁻² h1 h3)
where h1 = x1 − 1, h2 = x2, and h3 = x3 + 2
  • 81. 81 Multivariable optimization with no constraints • Necessary condition If f(X) has an extreme point (maximum or minimum) at X = X* and if the first partial derivatives of f(X) exist at X*, then
∂f/∂x1 (X*) = ∂f/∂x2 (X*) = … = ∂f/∂xn (X*) = 0
• Sufficient condition A sufficient condition for a stationary point X* to be an extreme point is that the matrix of second partial derivatives (Hessian matrix) of f(X) evaluated at X* is – Positive definite when X* is a relative minimum point – Negative definite when X* is a relative maximum point
  • 82. 82 Example Figure shows two frictionless rigid bodies (carts) A and B connected by three linear elastic springs having spring constants k1, k2, and k3. The springs are at their natural positions when the applied force P is zero. Find the displacements x1 and x2 under the force P by using the principle of minimum potential energy.
  • 83. 83 Example Solution: According to the principle of minimum potential energy, the system will be in equilibrium under the load P if the potential energy is a minimum. The potential energy of the system is given by: potential energy (U) = strain energy of springs − work done by external forces:
U = ½ k2 x1² + ½ k3 (x2 − x1)² + ½ k1 x2² − P x2
The necessary conditions for the minimum of U are:
∂U/∂x1 = k2 x1 − k3 (x2 − x1) = 0
∂U/∂x2 = k1 x2 + k3 (x2 − x1) − P = 0
which give the solution:
x1* = P k3 / (k1 k2 + k1 k3 + k2 k3), x2* = P (k2 + k3) / (k1 k2 + k1 k3 + k2 k3)
  • 84. 84 Example Solution cont’d: The sufficiency conditions for the minimum at (x1*, x2*) can also be verified by testing the positive definiteness of the Hessian matrix of U. The Hessian matrix of U evaluated at (x1*, x2*) is:
J = [∂²U/∂x1² ∂²U/∂x1∂x2; ∂²U/∂x1∂x2 ∂²U/∂x2²] = [k2 + k3, −k3; −k3, k1 + k3]
The determinants of the square submatrices of J are:
J1 = k2 + k3 > 0
J2 = (k2 + k3)(k1 + k3) − k3² = k1 k2 + k1 k3 + k2 k3 > 0
since the spring constants are always positive. Thus the matrix J is positive definite and hence (x1*, x2*) corresponds to the minimum of potential energy.
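A quick numerical check of the equilibrium and of the positive definiteness of J, with hypothetical spring constants and load:

   k1 = 100; k2 = 200; k3 = 150; P = 50;    % hypothetical data
   K = [k2+k3, -k3; -k3, k1+k3];            % Hessian of U (stiffness matrix)
   x = K \ [0; P]                           % equilibrium displacements x1*, x2*
   eig(K)                                   % both positive => minimum of U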
  • 85. 85 Semi-definite case The sufficient conditions for the case when the Hessian matrix of the given function is semidefinite: • In case of a function of a single variable, the higher order derivatives in the Taylor’s series expansion are investigated
  • 86. 86 Semi-definite case The sufficient conditions for a function of several variables for the case when the Hessian matrix of the given function is semidefinite: • Let the partial derivatives of f of all orders up to the order k ≥ 2 be continuous in the neighborhood of a stationary point X*, and let
d^r f |X=X* = 0 for 1 ≤ r ≤ k − 1 and d^k f |X=X* ≠ 0
so that d^k f |X=X* is the first nonvanishing higher-order differential of f at X*. • If k is even: – X* is a relative minimum if d^k f |X=X* is positive definite – X* is a relative maximum if d^k f |X=X* is negative definite – If d^k f |X=X* is semidefinite, no general conclusions can be drawn • If k is odd, X* is not an extreme point of f(X*)
  • 87. 87 Saddle point • In the case of a function of two variables f(x, y), the Hessian matrix may be neither positive nor negative definite at a point (x*, y*) at which ∂f/∂x = ∂f/∂y = 0. In such a case, the point (x*, y*) is called a saddle point. • The characteristic of a saddle point is that it corresponds to a relative minimum or maximum of f(x, y) wrt one variable, say, x (the other variable being fixed at y = y*) and a relative maximum or minimum of f(x, y) wrt the second variable y (the other variable being fixed at x = x*).
  • 88. 88 Saddle point Example: Consider the function f(x, y) = x² − y². For this function: ∂f/∂x = 2x and ∂f/∂y = −2y. These first derivatives are zero at x* = 0 and y* = 0. The Hessian matrix of f at (x*, y*) is given by: J = [2 0; 0 −2]. Since this matrix is neither positive definite nor negative definite, the point (x* = 0, y* = 0) is a saddle point.
  • 89. 89 Saddle point Example cont’d: It can be seen from the figure that f (x, y*) = f (x, 0) has a relative minimum and f (x*, y) = f (0, y) has a relative maximum at the saddle point (x*, y*).
  • 90. 90 Example Find the extreme points of the function f(x1, x2) = x1³ + x2³ + 2x1² + 4x2² + 6. Solution: The necessary conditions for the existence of an extreme point are:
∂f/∂x1 = 3x1² + 4x1 = x1(3x1 + 4) = 0
∂f/∂x2 = 3x2² + 8x2 = x2(3x2 + 8) = 0
These equations are satisfied at the points: (0, 0), (0, −8/3), (−4/3, 0), and (−4/3, −8/3)
  • 91. 91 Example Solution cont’d: To find the nature of these extreme points, we have to use the sufficiency conditions. The second order partial derivatives of f are given by:
∂²f/∂x1² = 6x1 + 4, ∂²f/∂x2² = 6x2 + 8, ∂²f/∂x1∂x2 = 0
The Hessian matrix of f is given by: J = [6x1 + 4, 0; 0, 6x2 + 8]
  • 92. 92 Example Solution cont’d: If J1 = |6x1 + 4| and J2 = |6x1 + 4, 0; 0, 6x2 + 8| = (6x1 + 4)(6x2 + 8), the values of J1 and J2 and the nature of the extreme point are as given in the next slide.
  • 93. 93 Example
Point X = (0, 0): J1 = +4, J2 = +32, J positive definite, relative minimum, f(X) = 6
Point X = (0, −8/3): J1 = +4, J2 = −32, J indefinite, saddle point, f(X) = 418/27
Point X = (−4/3, 0): J1 = −4, J2 = −32, J indefinite, saddle point, f(X) = 194/27
Point X = (−4/3, −8/3): J1 = −4, J2 = +32, J negative definite, relative maximum, f(X) = 50/3
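The table can be reproduced by evaluating the Hessian eigenvalues at each stationary point; a short MATLAB sketch:

   f = @(x) x(1)^3 + x(2)^3 + 2*x(1)^2 + 4*x(2)^2 + 6;
   H = @(x) [6*x(1)+4, 0; 0, 6*x(2)+8];           % Hessian of f
   pts = [0 0; 0 -8/3; -4/3 0; -4/3 -8/3];        % stationary points
   for i = 1:size(pts,1)
       e = eig(H(pts(i,:)));                      % eigenvalue signs classify the point
       fprintf('(%7.4f,%7.4f): f = %8.4f, eigs = %7.3f %7.3f\n', ...
               pts(i,1), pts(i,2), f(pts(i,:)), e(1), e(2));
   end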
  • 94. 94 Multivariable optimization with equality constraints • Problem statement: Minimize f = f(X) subject to gj(X) = 0, j = 1, 2, …, m, where X = (x1, x2, …, xn)ᵀ. Here m is less than or equal to n; otherwise the problem becomes overdefined and, in general, there will be no solution. • Solution: – Solution by direct substitution – Solution by the method of constrained variation – Solution by the method of Lagrange multipliers
  • 95. 95 Solution by direct substitution For a problem with n variables and m equality constraints: • Solve the m equality constraints and express any set of m variables in terms of the remaining n-m variables • Substitute these expressions into the original objective function, the result is a new objective function involving only n-m variables • The new objective function is not subjected to any constraint, and hence its optimum can be found by using the unconstrained optimization techniques.
  • 96. 96 Solution by direct substitution • Simple in theory • Not convenient from a practical point of view as the constraint equations will be nonlinear for most of the problems • Suitable only for simple problems
  • 97. 97 Example Find the dimensions of a box of largest volume that can be inscribed in a sphere of unit radius. Solution: Let the origin of the Cartesian coordinate system x1, x2, x3 be at the center of the sphere and the sides of the box be 2x1, 2x2, and 2x3. The volume of the box is given by: f(x1, x2, x3) = 8 x1 x2 x3. Since the corners of the box lie on the surface of the sphere of unit radius, x1, x2 and x3 have to satisfy the constraint x1² + x2² + x3² = 1
  • 98. 98 Example This problem has three design variables and one equality constraint. Hence the equality constraint can be used to eliminate any one of the design variables from the objective function. If we choose to eliminate x3: x3 = (1 − x1² − x2²)^(1/2). Thus, the objective function becomes f(x1, x2) = 8 x1 x2 (1 − x1² − x2²)^(1/2), which can be maximized as an unconstrained function in two variables.
  • 99. 99 Example The necessary conditions for the maximum of f give:
∂f/∂x1 = 8x2 [(1 − x1² − x2²)^(1/2) − x1²/(1 − x1² − x2²)^(1/2)] = 0
∂f/∂x2 = 8x1 [(1 − x1² − x2²)^(1/2) − x2²/(1 − x1² − x2²)^(1/2)] = 0
which can be simplified as:
1 − 2x1² − x2² = 0
1 − x1² − 2x2² = 0
From which it follows that x1* = x2* = 1/√3 and hence x3* = 1/√3
  • 100. 100 Example This solution gives the maximum volume of the box as: fmax = 8/(3√3). To find whether the solution found corresponds to a maximum or a minimum, we apply the sufficiency conditions to f(x1, x2) = 8 x1 x2 (1 − x1² − x2²)^(1/2). The second order partial derivatives of f at (x1*, x2*) are given by:
∂²f/∂x1² = −32/√3 at (x1*, x2*)
  • 101. 101 Example The second order partial derivatives of f at (x1*, x2*) are given by:
∂²f/∂x2² = −32/√3 at (x1*, x2*)
∂²f/∂x1∂x2 = −16/√3 at (x1*, x2*)
  • 102. 102 Example Since ∂²f/∂x1² < 0 and (∂²f/∂x1²)(∂²f/∂x2²) − (∂²f/∂x1∂x2)² > 0, the Hessian matrix of f is negative definite at (x1*, x2*). Hence the point (x1*, x2*) corresponds to the maximum of f.
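As a numerical cross-check, the same constrained maximum can be obtained with fmincon (Optimization Toolbox), maximizing by minimizing −f and passing the sphere condition as an equality constraint; from the starting point below the iterates converge to x1 = x2 = x3 = 1/√3 ≈ 0.5774:

   vol = @(x) -8*x(1)*x(2)*x(3);                           % minimize -volume
   nonlcon = @(x) deal([], x(1)^2 + x(2)^2 + x(3)^2 - 1);  % equality constraint g = 0
   x0 = [0.5; 0.5; 0.5];                                   % arbitrary starting point
   xopt = fmincon(vol, x0, [], [], [], [], [], [], nonlcon)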
  • 103. 103 Solution by constrained variation • Minimize f(x1, x2) subject to g(x1, x2) = 0 • A necessary condition for f to have a minimum at some point (x1*, x2*) is that the total derivative of f(x1, x2) wrt x1 must be zero at (x1*, x2*):
df = (∂f/∂x1) dx1 + (∂f/∂x2) dx2 = 0
• Since g(x1*, x2*) = 0 at the minimum point, any variations dx1 and dx2 taken about the point (x1*, x2*) are called admissible variations provided that the new point lies on the constraint: g(x1* + dx1, x2* + dx2) = 0
  • 104. 104 Solution by constrained variation • Taylor’s series expansion of the function about the point (x1*, x2*):
g(x1* + dx1, x2* + dx2) ≈ g(x1*, x2*) + (∂g/∂x1)(x1*, x2*) dx1 + (∂g/∂x2)(x1*, x2*) dx2 = 0
• Since g(x1*, x2*) = 0: dg = (∂g/∂x1) dx1 + (∂g/∂x2) dx2 = 0 at (x1*, x2*)
• Assuming ∂g/∂x2 ≠ 0: dx2 = −[(∂g/∂x1)/(∂g/∂x2)] dx1
• Substituting this into df = (∂f/∂x1) dx1 + (∂f/∂x2) dx2 = 0:
df = [∂f/∂x1 − (∂g/∂x1)/(∂g/∂x2) (∂f/∂x2)] dx1 = 0 at (x1*, x2*)
  • 105. 105 Solution by constrained variation • The expression on the left hand side is called the constrained variation of f • Since dx1 can be chosen arbitrarily:
(∂f/∂x1)(∂g/∂x2) − (∂f/∂x2)(∂g/∂x1) = 0 at (x1*, x2*)
• This equation represents a necessary condition in order to have (x1*, x2*) as an extreme point (minimum or maximum)
  • 106. 106 Example A beam of uniform rectangular cross section is to be cut from a log having a circular cross section of diameter 2a. The beam has to be used as a cantilever beam (the length is fixed) to carry a concentrated load at the free end. Find the dimensions of the beam that correspond to the maximum tensile (bending) stress carrying capacity.
  • 107. 107 Example Solution: From elementary strength of materials, we know that the tensile stress induced in a rectangular beam (σ) at any fiber located at a distance y from the neutral axis is given by σ = M y / I, where M is the bending moment acting and I is the moment of inertia of the cross-section about the x axis. If the width and the depth of the rectangular beam shown in the figure are 2x and 2y, respectively, the maximum tensile stress induced is given by:
σmax = M y / I = M y / [(1/12)(2x)(2y)³] = 3M/(4 x y²)
  • 108. 108 Example Solution cont’d: Thus for any specified bending moment, the beam is said to have maximum tensile stress carrying capacity if the maximum induced stress (σmax) is a minimum. Hence we need to minimize k/(x y²) or maximize K x y², where k = 3M/4 and K = 1/k, subject to the constraint x² + y² = a². This problem has two variables and one constraint; hence the necessary condition (∂f/∂x)(∂g/∂y) − (∂f/∂y)(∂g/∂x) = 0 can be applied for finding the optimum solution.
  • 109. 109 Example Solution: Since f = k x⁻¹ y⁻² and g = x² + y² − a², we have:
∂f/∂x = −k x⁻² y⁻², ∂f/∂y = −2k x⁻¹ y⁻³
∂g/∂x = 2x, ∂g/∂y = 2y
The condition (∂f/∂x)(∂g/∂y) − (∂f/∂y)(∂g/∂x) = 0 gives:
−k x⁻² y⁻² (2y) + 2k x⁻¹ y⁻³ (2x) = 0 at (x*, y*)
  • 110. 110 Example Solution: that is, y* = √2 x*. Thus the beam of maximum tensile stress carrying capacity has a depth of √2 times its breadth. The optimum values of x and y can be obtained from the above relation and the constraint g = x² + y² − a² = 0 as: x* = a/√3 and y* = √2 a/√3
  • 111. 111 Necessary conditions for a general problem • The procedure described can be generalized to a problem with n variables and m constraints. • In this case, each constraint equation gj(X) = 0, j = 1, 2, …, m gives rise to a linear equation in the variations dxi, i = 1, 2, …, n. • Thus, there will be in all m linear equations in n variations. Hence any m variations can be expressed in terms of the remaining n − m variations. • These expressions can be used to express the differential of the objective function, df, in terms of the n − m independent variations. • By letting the coefficients of the independent variations vanish in the equation df = 0, one obtains the necessary conditions for the constrained optimum of the given function. Solution by constrained variation
  • 112. 112 Necessary conditions for a general problem • These conditions can be expressed as:
J(f, g1, g2, …, gm / xk, x1, x2, …, xm) =
| ∂f/∂xk  ∂f/∂x1  ∂f/∂x2  …  ∂f/∂xm |
| ∂g1/∂xk ∂g1/∂x1 ∂g1/∂x2 … ∂g1/∂xm |
| ∂g2/∂xk ∂g2/∂x1 ∂g2/∂x2 … ∂g2/∂xm |
| ⋮ |
| ∂gm/∂xk ∂gm/∂x1 ∂gm/∂x2 … ∂gm/∂xm | = 0, where k = m+1, m+2, …, n
• It is to be noted that the variations of the first m variables (dx1, dx2, …, dxm) have been expressed in terms of the variations of the remaining n − m variables (dxm+1, dxm+2, …, dxn) in deriving the above equation. Solution by constrained variation
  • 113. 113 Necessary conditions for a general problem • This implies that the following relation is satisfied:
J(g1, g2, …, gm / x1, x2, …, xm) ≠ 0
• The n − m equations given by the determinantal equation of the previous slide represent the necessary conditions for the extremum of f(X) under the m equality constraints gj(X) = 0, j = 1, 2, …, m. Solution by constrained variation
  • 114. 114 Example Minimize f(Y) = ½(y1² + y2² + y3² + y4²) subject to
g1(Y) = y1 + 2y2 + 3y3 + 5y4 − 10 = 0
g2(Y) = y1 + 2y2 + 5y3 + 6y4 − 15 = 0
Solution: This problem can be solved by applying the necessary conditions given by the determinantal equation J(f, g1, g2 / xk, x1, x2) = 0, k = 3, 4.
  • 115. 115 Example Solution cont’d: Since n = 4 and m = 2, we have to select two variables as independent variables. First we show that an arbitrary set of variables cannot always be chosen as independent variables, since the remaining (dependent) variables have to satisfy the condition J(g1, g2 / x1, x2) ≠ 0. In terms of the notation of our equations, let us take the independent variables as x3 = y3 and x4 = y4, so that x1 = y1 and x2 = y2. Then the Jacobian becomes:
J(g1, g2 / y1, y2) = | ∂g1/∂y1 ∂g1/∂y2; ∂g2/∂y1 ∂g2/∂y2 | = | 1 2; 1 2 | = 0
and hence the necessary conditions cannot be applied.
  • 116. 116 Example Solution cont’d: Next, let us take the independent variables as x3 = y2 and x4 = y4, so that x1 = y1 and x2 = y3. Then the Jacobian becomes:
J(g1, g2 / y1, y3) = | ∂g1/∂y1 ∂g1/∂y3; ∂g2/∂y1 ∂g2/∂y3 | = | 1 3; 1 5 | = 2 ≠ 0
and hence the necessary conditions can be applied.
  • 117. 117 Example Solution cont’d: The determinantal equation gives, for k = m + 1 = 3:
| ∂f/∂y2 ∂f/∂y1 ∂f/∂y3; ∂g1/∂y2 ∂g1/∂y1 ∂g1/∂y3; ∂g2/∂y2 ∂g2/∂y1 ∂g2/∂y3 |
= | y2 y1 y3; 2 1 3; 2 1 5 | = y2(5 − 3) − y1(10 − 6) + y3(2 − 2) = 2y2 − 4y1 = 0
  • 118. 118 Example Solution cont’d: For k = m + 2 = 4:
| y4 y1 y3; 5 1 3; 6 1 5 | = y4(5 − 3) − y1(25 − 18) + y3(5 − 6) = 2y4 − 7y1 − y3 = 0
From the two previous equations, the necessary conditions for the minimum or the maximum of f are obtained as: y1 = y2/2 and y3 = 2y4 − (7/2)y2
  • 119. 119 Example Solution cont’d: When these relations are substituted into the constraint equations g1(Y) = 0 and g2(Y) = 0, the equations take the form:
−8y2 + 11y4 = 10
−15y2 + 16y4 = 15
  • 120. 120 Example Solution cont’d: from which the desired optimum solution can be obtained as:
y1* = −5/74, y2* = −5/37, y3* = 155/74, y4* = 30/37
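Because the objective is quadratic and the constraints are linear, the optimum also solves a single linear (Lagrangian stationarity) system, which offers an easy check in base MATLAB:

   % For min 0.5*y'*y s.t. A*y = b, stationarity gives y + A'*lambda = 0,
   % so [I A'; A 0]*[y; lambda] = [0; b].
   A = [1 2 3 5; 1 2 5 6];   b = [10; 15];
   sol = [eye(4), A'; A, zeros(2)] \ [zeros(4,1); b];
   y = sol(1:4)               % [-5/74; -5/37; 155/74; 30/37]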
  • 121. 121 Sufficiency conditions for a general problem • By eliminating the first m variables, using the m equality constraints, the objective function can be made to depend only on the remaining variables xm+1, xm+2, …, xn. Then the Taylor’s series expansion of f, in terms of these variables, about the extreme point X* gives:
f(X* + dX) ≈ f(X*) + Σi=m+1..n (∂f/∂xi)g dxi + ½ Σi=m+1..n Σj=m+1..n (∂²f/∂xi∂xj)g dxi dxj
where (∂f/∂xi)g is used to denote the partial derivative of f wrt xi (holding all the other variables xm+1, xm+2, …, xi−1, xi+1, xi+2, …, xn constant) when x1, x2, …, xm are allowed to change so that the constraints gj(X* + dX) = 0, j = 1, 2, …, m are satisfied; the second derivative (∂²f/∂xi∂xj)g is used with a similar meaning. Solution by constrained variation
  • 122. 122 Example Consider the problem of minimizing f(X) = f(x1, x2, x3) subject to the only constraint g1(X) = x1² + x2² + x3² − 8 = 0. Since n = 3 and m = 1 in this problem, one can think of any of the m variables, say x1, to be dependent and the remaining n − m variables, namely x2 and x3, to be independent. Here the constrained partial derivative (∂f/∂x2)g means the rate of change of f with respect to x2 (holding the other independent variable x3 constant) and at the same time allowing x1 to change about X* so as to satisfy the constraint g1(X) = 0. Solution by constrained variation
  • 123. 123 Example In the present case, this means that dx1 has to be chosen to satisfy the relation
dg1 = (∂g1/∂x1)(X*) dx1 + (∂g1/∂x2)(X*) dx2 + (∂g1/∂x3)(X*) dx3 = 0, that is, 2x1* dx1 + 2x2* dx2 = 0
since g1(X*) = 0 at the optimum point and dx3 = 0 (x3 is held constant). Solution by constrained variation
  • 124. 124 Example Notice that (∂f/∂xi)g has to be zero for i = m+1, m+2, …, n since the dxi appearing in the Taylor expansion of the previous slides are all independent. Thus, the necessary conditions for the existence of a constrained optimum at X* can also be expressed as:
(∂f/∂xi)g = 0, i = m+1, m+2, …, n. Solution by constrained variation
  • 125. 125 Example It can be shown that the equations (∂f/∂xi)g = 0, i = m+1, m+2, …, n are nothing but the determinantal equations J(f, g1, …, gm / xk, x1, …, xm) = 0, k = m+1, …, n derived earlier. Solution by constrained variation
  • 126. 126 Sufficiency conditions for a general problem • A sufficient condition for X* to be a constrained relative minimum (maximum) is that the quadratic form Q defined by
Q = Σi=m+1..n Σj=m+1..n (∂²f/∂xi∂xj)g dxi dxj
is positive (negative) for all nonvanishing variations dxi, and the matrix of constrained second derivatives
| (∂²f/∂x(m+1)²)g … (∂²f/∂x(m+1)∂xn)g |
| ⋮ |
| (∂²f/∂xn∂x(m+1))g … (∂²f/∂xn²)g |
has to be positive (negative) definite to have Q positive (negative) for all choices of dxi
  • 127. 127 • The computation of the constrained derivatives in the sufficiency condition is difficult and may be prohibitive for problems with more than three constraints • Simple in theory • Difficult to apply since the necessary conditions involve evaluation of determinants of order m+1 Solution by constrained variation
  • 128. 128 Solution by Lagrange multipliers Problem with two variables and one constraint: Minimize f(x1, x2) subject to g(x1, x2) = 0. For this problem, the necessary condition was found to be:
∂f/∂x1 − [(∂f/∂x2)/(∂g/∂x2)] (∂g/∂x1) = 0 at (x1*, x2*)
By defining a quantity λ, called the Lagrange multiplier, as:
λ = −(∂f/∂x2)/(∂g/∂x2) at (x1*, x2*)
  • 129. 129 Solution by Lagrange multipliers Problem with two variables and one constraint: The necessary conditions for the point (x1*, x2*) to be an extreme point can then be rewritten as:
∂f/∂x1 + λ ∂g/∂x1 = 0 at (x1*, x2*)
∂f/∂x2 + λ ∂g/∂x2 = 0 at (x1*, x2*)
In addition, the constraint equation has to be satisfied at the extreme point: g(x1, x2) = 0 at (x1*, x2*)
  • 130. 130 Solution by Lagrange multipliers Problem with two variables and one constraint: • The derivation of the necessary conditions by the method of Lagrange multipliers requires that at least one of the partial derivatives of g(x1, x2) be nonzero at an extreme point. • The necessary conditions are more commonly generated by constructing a function L, known as the Lagrange function, as
L(x1, x2, λ) = f(x1, x2) + λ g(x1, x2)
  • 131. 131 Solution by Lagrange multipliers Problem with two variables and one constraint: • By treating L as a function of the three variables x1, x2 and λ, the necessary conditions for its extremum are given by:
∂L/∂x1 (x1, x2, λ) = ∂f/∂x1 (x1, x2) + λ ∂g/∂x1 (x1, x2) = 0
∂L/∂x2 (x1, x2, λ) = ∂f/∂x2 (x1, x2) + λ ∂g/∂x2 (x1, x2) = 0
∂L/∂λ (x1, x2, λ) = g(x1, x2) = 0
  • 132. 132 Example Example: Find the solution of Minimize f(x, y) = k x⁻¹ y⁻² subject to g(x, y) = x² + y² − a² = 0 using the Lagrange multiplier method. Solution: The Lagrange function is
L(x, y, λ) = f(x, y) + λ g(x, y) = k x⁻¹ y⁻² + λ(x² + y² − a²)
The necessary conditions for the minimum of f(x, y) are:
∂L/∂x = −k x⁻² y⁻² + 2xλ = 0
∂L/∂y = −2k x⁻¹ y⁻³ + 2yλ = 0
∂L/∂λ = x² + y² − a² = 0
  • 134. 134 Solution by Lagrange multipliers Necessary conditions for a general problem: Minimize f(X) subject to gj(X) = 0, j = 1, 2, …, m. The Lagrange function, L, in this case is defined by introducing one Lagrange multiplier λj for each constraint gj(X) as
L(x1, x2, …, xn, λ1, λ2, …, λm) = f(X) + λ1 g1(X) + λ2 g2(X) + … + λm gm(X)
  • 135. 135 Solution by Lagrange multipliers By treating L as a function of the n + m unknowns x1, x2, …, xn, λ1, λ2, …, λm, the necessary conditions for the extremum of L, which also correspond to the solution of the original problem, are given by:
∂L/∂xi = ∂f/∂xi + Σj=1..m λj ∂gj/∂xi = 0, i = 1, 2, …, n
∂L/∂λj = gj(X) = 0, j = 1, 2, …, m
The above equations represent n + m equations in terms of the n + m unknowns xi and λj
  • 136. 136 Solution by Lagrange multipliers The solution:
X* = (x1*, x2*, …, xn*)ᵀ and λ* = (λ1*, λ2*, …, λm*)ᵀ
The vector X* corresponds to the relative constrained minimum of f(X) (sufficient conditions are to be verified) while the vector λ* provides the sensitivity information.
  • 137. 137 Solution by Lagrange multipliers Sufficient Condition A sufficient condition for f(X) to have a constrained relative minimum at X* is that the quadratic Q defined by
Q = Σi=1..n Σj=1..n (∂²L/∂xi∂xj) dxi dxj
evaluated at X = X* must be positive definite for all values of dX for which the constraints are satisfied. If
Q = Σi=1..n Σj=1..n (∂²L/∂xi∂xj)(X*, λ*) dxi dxj
is negative for all choices of the admissible variations dxi, X* will be a constrained maximum of f(X)
  • 138. 138 Solution by Lagrange multipliers A necessary condition for the quadratic form Q to be positive (negative) definite for all admissible variations dX is that each root z of the polynomial defined by the following determinantal equation be positive (negative):
| L11 − z L12 … L1n g11 g21 … gm1 |
| L21 L22 − z … L2n g12 g22 … gm2 |
| ⋮ |
| Ln1 Ln2 … Lnn − z g1n g2n … gmn |
| g11 g12 … g1n 0 0 … 0 |
| ⋮ |
| gm1 gm2 … gmn 0 0 … 0 | = 0
where Lij = ∂²L/∂xi∂xj (X*, λ*) and gij = ∂gi/∂xj (X*). • The determinantal equation, on expansion, leads to an (n − m)th-order polynomial in z. If some of the roots of this polynomial are positive while the others are negative, the point X* is not an extreme point.
  • 139. 139 Example 1 Find the dimensions of a cylindrical tin (with top and bottom) made up of sheet metal to maximize its volume such that the total surface area is equal to A0 = 24π. Solution: If x1 and x2 denote the radius of the base and the length of the tin, respectively, the problem can be stated as: Maximize f(x1, x2) = π x1² x2 subject to 2π x1² + 2π x1 x2 = A0 = 24π
  • 140. 140 Example 1 Solution: Maximize f(x1, x2) = π x1² x2 subject to 2π x1² + 2π x1 x2 = A0 = 24π. The Lagrange function is:
L(x1, x2, λ) = π x1² x2 + λ(2π x1² + 2π x1 x2 − A0)
and the necessary conditions for the maximum of f give:
∂L/∂x1 = 2π x1 x2 + 4π λ x1 + 2π λ x2 = 0
∂L/∂x2 = π x1² + 2π λ x1 = 0
∂L/∂λ = 2π x1² + 2π x1 x2 − A0 = 0
  • 141. 141 Example 1 Solution: that is, λ = −x1/2 and x2 = 2x1. The above equations give the desired solution as:
x1* = (A0/6π)^(1/2), x2* = (2A0/3π)^(1/2), and λ* = −(A0/24π)^(1/2)
  • 142. 142 Example 1 Solution: This gives the maximum value of f as f* = (A0³/54π)^(1/2). If A0 = 24π, the optimum solution becomes x1* = 2, x2* = 4, λ* = −1, and f* = 16π. To see that this solution really corresponds to the maximum of f, we apply the sufficiency condition of the determinantal equation.
  • 143. 143 Example 1 Solution: In this case:
L11 = ∂²L/∂x1² (X*, λ*) = 2π x2* + 4π λ* = 4π
L12 = ∂²L/∂x1∂x2 (X*, λ*) = 2π x1* + 2π λ* = 2π
L22 = ∂²L/∂x2² (X*, λ*) = 0
g11 = ∂g1/∂x1 (X*) = 4π x1* + 2π x2* = 16π
g12 = ∂g1/∂x2 (X*) = 2π x1* = 4π
  • 145. 145 Example 1 Solution: that is, 272π² z + 192π³ = 0. This gives z = −12π/17. Since the value of z is negative, the point (x1*, x2*) corresponds to the maximum of f.
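The root z can be checked numerically by applying base-MATLAB fzero to the 3×3 bordered determinant itself:

   D = @(z) det([4*pi-z, 2*pi, 16*pi; ...
                 2*pi,   -z,   4*pi; ...
                 16*pi,  4*pi, 0]);
   z = fzero(D, -2)           % -12*pi/17 = -2.2166 < 0  =>  maximum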
  • 146. 146 Example 2 Find the maximum of the function f(X) = 2x1 + x2 + 10 subject to g(X) = x1 + 2x2² = 3 using the Lagrange multiplier method. Also find the effect of changing the right-hand side of the constraint on the optimum value of f. Solution: The Lagrange function is given by:
L(X, λ) = 2x1 + x2 + 10 + λ(3 − x1 − 2x2²)
The necessary conditions for the solution of the problem are:
∂L/∂x1 = 2 − λ = 0
∂L/∂x2 = 1 − 4λx2 = 0
∂L/∂λ = 3 − x1 − 2x2² = 0
  • 147. 147 Example 2 Solution: The solution of these equations is:
X* = (x1*, x2*) = (2.97, 0.13)ᵀ and λ* = 2.0
The application of the sufficiency condition yields, with L11 = 0, L12 = 0, L22 = −4λ* = −8, g11 = ∂g/∂x1 = 1, and g12 = ∂g/∂x2 = 4x2* = 0.52:
| −z 0 1 |
| 0 −8 − z 0.52 |
| 1 0.52 0 | = 0
  • 148. 148 Example 2 Solution: Expansion gives 0.2704z + z + 8 = 0, that is, z = −6.2972. Hence X* will be a maximum of f with f* = f(X*) = 16.07
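The sensitivity interpretation of λ* can be verified directly: writing the right-hand side as g(X) = x1 + 2x2² = b, the conditions above give x2* = 1/8 independent of b and x1* = b − 1/32, so f*(b) = 2b + 1/16 + 10 and df*/db = 2 = λ*. A one-line numerical check in base MATLAB:

   fstar = @(b) 2*(b - 1/32) + 1/8 + 10;     % optimum value as a function of b
   fstar(3)                                  % 16.0625, i.e. the f* = 16.07 above
   (fstar(3.01) - fstar(3)) / 0.01           % = 2 = lambda*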
  • 149. 149 Multivariable optimization with inequality constraints Minimize f(X) subject to gj(X) ≤ 0, j = 1, 2, …, m. The inequality constraints can be transformed to equality constraints by adding nonnegative slack variables yj² as gj(X) + yj² = 0, j = 1, 2, …, m, where the values of the slack variables are yet unknown.
  • 150. 150 Multivariable optimization with inequality constraints Minimize f(X) subject to Gj(X, Y) = gj(X) + yj² = 0, j = 1, 2, …, m, where Y = (y1, y2, …, ym)ᵀ is the vector of slack variables. This problem can be solved by the method of Lagrange multipliers. For this, the Lagrange function L is constructed as:
L(X, Y, λ) = f(X) + Σj=1..m λj Gj(X, Y)
where λ = (λ1, λ2, …, λm)ᵀ is the vector of Lagrange multipliers.
  • 151. 151 Multivariable optimization with inequality constraints The stationary points of the Lagrange function can be found by solving the following equations (necessary conditions):
∂L/∂xi (X, Y, λ) = ∂f/∂xi (X) + Σj=1..m λj ∂gj/∂xi (X) = 0, i = 1, 2, …, n
∂L/∂λj (X, Y, λ) = Gj(X, Y) = gj(X) + yj² = 0, j = 1, 2, …, m
∂L/∂yj (X, Y, λ) = 2λj yj = 0, j = 1, 2, …, m
These are (n + 2m) equations in (n + 2m) unknowns. The solution gives the optimum solution vector X*, the Lagrange multiplier vector λ*, and the slack variable vector Y*.
  • 152. 152 Multivariable optimization with inequality constraints The equations Gj(X, Y) = gj(X) + yj² = 0, j = 1, 2, …, m ensure that the constraints gj(X) ≤ 0, j = 1, 2, …, m are satisfied, while the equations 2λj yj = 0, j = 1, 2, …, m imply that either λj = 0 or yj = 0
  • 153. 153 • If j=0, it means that the jth constraint is inactive and hence can be ignored. • On the other hand, if yj= 0, it means that the constraint is active (gj = 0) at the optimum point. • Consider the division of the constraints into two subsets, J1 and J2, where J1 + J2 represent the total set of constraints. • Let the set J1 indicate the indices of those constraints that are active at the optimum point and J2 include the indices of all the inactive constraints. • Those constraints that are satisfied with an equality sign, gj= 0, at the optimum point are called the active constraints, while those that are satisfied with a strict inequality sign, gj< 0 are termed inactive constraints. Multivariable optimization with inequality constraints
  • 154. 154 Multivariable optimization with inequality constraints • Thus for j ∈ J1, yj = 0 (constraints are active), and for j ∈ J2, λj = 0 (constraints are inactive), so the stationarity equation ∂L/∂xi = 0 can be simplified as:
∂f/∂xi + Σ over j ∈ J1 of λj ∂gj/∂xi = 0, i = 1, 2, …, n (1)
  • 155. 155 Multivariable optimization with inequality constraints Similarly, the equations Gj(X, Y) = gj(X) + yj² = 0, j = 1, 2, …, m can be written as:
gj(X) = 0, j ∈ J1; gj(X) + yj² = 0, j ∈ J2 (2)
Equations (1) and (2) represent n + p + (m − p) = n + m equations in the n + m unknowns xi (i = 1, 2, …, n), λj (j ∈ J1), and yj (j ∈ J2), where p denotes the number of active constraints.
  • 156. 156 Multivariable optimization with inequality constraints Assuming that the first p constraints are active, equation (1) can be expressed as:
∂f/∂xi = −λ1 ∂g1/∂xi − λ2 ∂g2/∂xi − … − λp ∂gp/∂xi, i = 1, 2, …, n
These equations can be collectively written as
−∇f = λ1 ∇g1 + λ2 ∇g2 + … + λp ∇gp
where ∇f and ∇gj are the gradients of the objective function and of the jth constraint, respectively.
  • 157. 157 Multivariable optimization with inequality constraints The equation −∇f = λ1∇g1 + λ2∇g2 + … + λp∇gp indicates that the negative of the gradient of the objective function can be expressed as a linear combination of the gradients of the active constraints at the optimum point. Here
∇f = (∂f/∂x1, ∂f/∂x2, …, ∂f/∂xn)ᵀ and ∇gj = (∂gj/∂x1, ∂gj/∂x2, …, ∂gj/∂xn)ᵀ
  • 158. 158 Multivariable optimization with inequality constraints-Feasible region • A vector S is called a feasible direction from a point X if at least a small step can be taken along S that does not immediately leave the feasible region. • Thus for problems with sufficiently smooth constraint surfaces, a vector S satisfying the relation Sᵀ∇gj < 0 can be called a feasible direction.
  • 159. 159 Multivariable optimization with inequality constraints-Feasible region • On the other hand, if the constraint is either linear or concave, any vector satisfying the relation Sᵀ∇gj ≤ 0 can be called a feasible direction. • The geometric interpretation of a feasible direction is that the vector S makes an obtuse angle with all the constraint normals.
  • 160. 160 Multivariable optimization with inequality constraints-Feasible region
  • 161. 161 Multivariable optimization with inequality constraints • Further we can show that in the case of a minimization problem, the λj values (j ∈ J1) have to be positive. For simplicity of illustration, suppose that only two constraints (p = 2) are active at the optimum point. • Then the equation −∇f = λ1∇g1 + … + λp∇gp reduces to
−∇f = λ1 ∇g1 + λ2 ∇g2
  • 162. 162 Multivariable optimization with inequality constraints • Let S be a feasible direction at the optimum point. By premultiplying both sides of the equation −∇f = λ1∇g1 + λ2∇g2 by Sᵀ, we obtain:
−Sᵀ∇f = λ1 Sᵀ∇g1 + λ2 Sᵀ∇g2
where the superscript T denotes the transpose. Since S is a feasible direction, it should satisfy the relations: Sᵀ∇g1 ≤ 0 and Sᵀ∇g2 ≤ 0
  • 163. 163 Multivariable optimization with inequality constraints • Thus if λ1 > 0 and λ2 > 0, the quantity Sᵀ∇f is always positive. • As ∇f indicates the gradient direction, along which the value of the function increases at the maximum rate, Sᵀ∇f represents the component of the increment of f along the direction S. • If Sᵀ∇f > 0, the function value increases as we move along the direction S. • Hence if λ1 and λ2 are positive, we will not be able to find any direction in the feasible domain along which the function value can be decreased further.
  • 164. 164 Multivariable optimization with inequality constraints • Since the point at which the equation −∇f = λ1∇g1 + λ2∇g2 is valid is assumed to be optimum, λ1 and λ2 have to be positive. • This reasoning can be extended to cases where there are more than two constraints active. By proceeding in a similar manner, one can show that the λj values have to be negative for a maximization problem.
  • 165. 165 Kuhn-Tucker Conditions • The conditions to be satisfied at a constrained minimum point X* of the problem can be expressed as:
∂f/∂xi + Σ over j ∈ J1 of λj ∂gj/∂xi = 0, i = 1, 2, …, n
λj > 0, j ∈ J1
• These conditions are in general not sufficient to ensure a relative minimum. • There is only a class of problems, called convex programming problems, for which the Kuhn-Tucker conditions are necessary and sufficient for a global minimum.
  • 166. 166 Kuhn-Tucker Conditions • Those constraints that are satisfied with an equality sign, gj = 0, at the optimum point are called the active constraints. If the set of active constraints is not known, the Kuhn-Tucker conditions can be stated as follows:
∂f/∂xi + Σj=1..m λj ∂gj/∂xi = 0, i = 1, 2, …, n
λj gj = 0, j = 1, 2, …, m
gj ≤ 0, j = 1, 2, …, m
λj ≥ 0, j = 1, 2, …, m
  • 167. 167 Kuhn-Tucker Conditions • Note that if the problem is one of maximization or if the constraints are of the type gj ≥ 0, the λj have to be nonpositive in the equations of the previous slide. • On the other hand, if the problem is one of maximization with constraints in the form gj ≥ 0, the λj have to be nonnegative in those equations.
  • 168. 168 Constraint Qualification • When the optimization problem is stated as: Minimize f(X) subject to gj(X) ≤ 0, j = 1, 2, …, m and hk(X) = 0, k = 1, 2, …, p, the Kuhn-Tucker conditions become:
∇f + Σj=1..m λj ∇gj + Σk=1..p βk ∇hk = 0
λj gj = 0, j = 1, 2, …, m
gj ≤ 0, j = 1, 2, …, m
hk = 0, k = 1, 2, …, p
λj ≥ 0, j = 1, 2, …, m
where λj and βk denote the Lagrange multipliers associated with the constraints gj ≤ 0 and hk = 0, respectively.
  • 169. 169 Constraint Qualification • Although we found that the Kuhn-Tucker conditions represent the necessary conditions of optimality, the following theorem gives the precise conditions of optimality: • Theorem: Let X* be a feasible solution to the problem: Minimize f(X) subject to gj(X) ≤ 0, j = 1, 2, …, m and hk(X) = 0, k = 1, 2, …, p. If ∇gj(X*), j ∈ J1, and ∇hk(X*), k = 1, 2, …, p, are linearly independent, there exist λ* and β* such that (X*, λ*, β*) satisfy the Kuhn-Tucker conditions stated on the previous slide.
  • 170. 170 Example 1 Consider the problem: Minimize f(x1, x2) = (x1 − 1)² + x2² subject to g1(x1, x2) = x1³ − 2x2 ≤ 0 and g2(x1, x2) = x1³ + 2x2 ≤ 0. Determine whether the constraint qualification and the Kuhn-Tucker conditions are satisfied at the optimum point.
  • 171. 171 Example 1 Solution: The feasible region and the contours of the objective function are shown in the figure below. It can be seen that the optimum solution is at (0,0).
• 172. 172 Example 1 Solution cont'd: Since g1 and g2 are both active at the optimum point (0,0), their gradients can be computed as:
∇g1(X*) = [3x1², -2]^T at (0,0) = [0, -2]^T (E1)
∇g2(X*) = [3x1², 2]^T at (0,0) = [0, 2]^T (E2)
It is clear that ∇g1(X*) and ∇g2(X*) are not linearly independent. Hence the constraint qualification is not satisfied at the optimum point.
• 173. 173 Example 1 Solution cont'd: Noting that
∇f(X*) = [2(x1 - 1), 2x2]^T at (0,0) = [-2, 0]^T (E3)
the Kuhn-Tucker conditions ∂f/∂xi + Σ_{j∈J1} λj ∂gj/∂xi = 0 with λj > 0, j ∈ J1, can be written as:
-2 + λ1(0) + λ2(0) = 0 (E4)
0 + λ1(-2) + λ2(2) = 0 (E5)
Since equation (E4) cannot be satisfied and equation (E5) can be satisfied for negative values of λ1 = λ2 as well, the Kuhn-Tucker conditions are not satisfied at the optimum point.
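The linear dependence of the active-constraint gradients and the unsatisfiable stationarity equation can be cross-checked numerically; the short sketch below is an illustration, not part of the original example.

import numpy as np

# Gradients for Example 1 at X* = (0, 0)
grad_g1 = np.array([0.0, -2.0])   # [3*x1^2, -2] at x1 = 0
grad_g2 = np.array([0.0,  2.0])   # [3*x1^2,  2] at x1 = 0
grad_f  = np.array([-2.0, 0.0])   # [2*(x1 - 1), 2*x2] at (0, 0)

# Rank 1 confirms the gradients are linearly dependent,
# so the constraint qualification fails at (0, 0).
print(np.linalg.matrix_rank(np.vstack([grad_g1, grad_g2])))  # 1

# Stationarity would need -2 + lambda1*0 + lambda2*0 = 0 in the first
# component, which no choice of multipliers can satisfy.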
• 174. 174 Example 2 A manufacturing firm producing small refrigerators has entered into a contract to supply 50 refrigerators at the end of the first month, 50 at the end of the second month, and 50 at the end of the third month. The cost of producing x refrigerators in any month is given by $(x² + 1000). The firm can produce more refrigerators in any month and carry them over to a subsequent month. However, it costs $20 per unit for any refrigerator carried over from one month to the next. Assuming that there is no initial inventory, determine the number of refrigerators to be produced in each month to minimize the total cost.
• 175. 175 Example 2 Solution: Let x1, x2, x3 represent the number of refrigerators produced in the first, second, and third month, respectively. The total cost to be minimized is given by:
total cost = production cost + holding cost
f(x1, x2, x3) = (x1² + 1000) + (x2² + 1000) + (x3² + 1000) + 20(x1 - 50) + 20(x1 + x2 - 100)
= x1² + x2² + x3² + 40x1 + 20x2
(the constant terms cancel exactly).
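The simplification above, including the cancellation of all constant terms, can be verified symbolically; a minimal sketch assuming sympy is available:

import sympy as sp

x1, x2, x3 = sp.symbols('x1 x2 x3')
production = (x1**2 + 1000) + (x2**2 + 1000) + (x3**2 + 1000)
holding = 20*(x1 - 50) + 20*(x1 + x2 - 100)  # units carried after months 1 and 2
print(sp.expand(production + holding))
# -> x1**2 + 40*x1 + x2**2 + 20*x2 + x3**2 (the constants cancel exactly)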
• 176. 176 Example 2 Solution cont'd: The constraints (the delivery commitments at the end of each month) can be stated as:
g1(x1, x2, x3) = x1 - 50 ≥ 0
g2(x1, x2, x3) = x1 + x2 - 100 ≥ 0
g3(x1, x2, x3) = x1 + x2 + x3 - 150 ≥ 0
The first Kuhn-Tucker condition is given by:
∂f/∂xi + λ1 ∂g1/∂xi + λ2 ∂g2/∂xi + λ3 ∂g3/∂xi = 0, i = 1, 2, 3
that is,
2x1 + 40 + λ1 + λ2 + λ3 = 0 (E1)
2x2 + 20 + λ2 + λ3 = 0 (E2)
2x3 + λ3 = 0 (E3)
• 177. 177 Example 2 Solution cont'd: The second Kuhn-Tucker condition is λj gj = 0, j = 1, 2, 3; that is,
λ1(x1 - 50) = 0 (E4)
λ2(x1 + x2 - 100) = 0 (E5)
λ3(x1 + x2 + x3 - 150) = 0 (E6)
• 178. 178 Example 2 Solution cont'd: The third Kuhn-Tucker condition is gj ≥ 0, j = 1, 2, 3; that is,
x1 - 50 ≥ 0 (E7)
x1 + x2 - 100 ≥ 0 (E8)
x1 + x2 + x3 - 150 ≥ 0 (E9)
• 179. 179 Example 2 Solution cont'd: The fourth Kuhn-Tucker condition is λj ≤ 0, j = 1, 2, 3 (a minimization problem with constraints of the type gj ≥ 0, so the multipliers must be nonpositive); that is,
λ1 ≤ 0 (E10)
λ2 ≤ 0 (E11)
λ3 ≤ 0 (E12)
• 180. 180 Example 2 Solution cont'd: The solution of Eqs. (E1) to (E12) can be found in several ways. We proceed to solve these equations by first noting that either λ1 = 0 or x1 = 50 according to Eq. (E4). Using this information, we investigate the following cases to identify the optimum solution of the problem: • Case I: λ1 = 0 • Case II: x1 = 50
• 181. 181 Example 2 Solution cont'd: • Case I: λ1 = 0. Equations (E1) to (E3) give:
x1 = -20 - (λ2 + λ3)/2
x2 = -10 - (λ2 + λ3)/2 (E13)
x3 = -λ3/2
• 182. 182 Example 2 Solution cont'd: • Case I: λ1 = 0. Substituting Eqs. (E13) into Eqs. (E5) and (E6) gives:
λ2(-130 - λ2 - λ3) = 0
λ3(-180 - λ2 - (3/2)λ3) = 0 (E14)
• 183. 183 Example 2 Solution cont'd: Case I: λ1 = 0. The four possible solutions of Eqs. (E14) are: 1. λ2 = 0, -180 - λ2 - (3/2)λ3 = 0. These equations, along with Eqs. (E13), yield the solution: λ2 = 0, λ3 = -120, x1 = 40, x2 = 50, x3 = 60. This solution satisfies Eqs. (E10) to (E12) but violates Eqs. (E7) and (E8) and hence cannot be optimum.
• 184. 184 Example 2 Solution cont'd: Case I: λ1 = 0. The second possible solution of Eqs. (E14) is: 2. λ3 = 0, -130 - λ2 - λ3 = 0. The solution of these equations leads to: λ2 = -130, λ3 = 0, x1 = 45, x2 = 55, x3 = 0. This solution satisfies Eqs. (E10) to (E12) but violates Eqs. (E7) and (E9) and hence cannot be optimum.
• 185. 185 Example 2 Solution cont'd: Case I: λ1 = 0. The third possible solution of Eqs. (E14) is: 3. λ2 = 0, λ3 = 0. Equations (E13) give: x1 = -20, x2 = -10, x3 = 0. This solution satisfies Eqs. (E10) to (E12) but violates the constraints (E7) to (E9) and hence cannot be optimum.
• 186. 186 Example 2 Solution cont'd: Case I: λ1 = 0. The fourth possible solution of Eqs. (E14) is: 4. -130 - λ2 - λ3 = 0, -180 - λ2 - (3/2)λ3 = 0. The solution of these equations and Eqs. (E13) gives: λ2 = -30, λ3 = -100, x1 = 45, x2 = 55, x3 = 50. This solution satisfies Eqs. (E10) to (E12) but violates the constraint Eq. (E7) and hence cannot be optimum.
• 187. 187 Example 2 Solution cont'd: Case II: x1 = 50. In this case, Eqs. (E1) to (E3) give:
λ3 = -2x3
λ2 = -20 - 2x2 + 2x3 (E15)
λ1 = -40 - 2x1 - λ2 - λ3 = -120 + 2x2
Substitution of Eqs. (E15) into Eqs. (E5) and (E6) gives:
(-20 - 2x2 + 2x3)(x1 + x2 - 100) = 0
(-2x3)(x1 + x2 + x3 - 150) = 0 (E16)
• 188. 188 Example 2 Solution cont'd: Case II: x1 = 50. Once again, there are four possible solutions to Eq. (E16), as indicated below: 1. -20 - 2x2 + 2x3 = 0, x1 + x2 + x3 - 150 = 0: The solution of these equations yields: x1 = 50, x2 = 45, x3 = 55. This solution can be seen to violate Eq. (E8), which requires x1 + x2 - 100 ≥ 0.
• 189. 189 Example 2 Solution cont'd: Case II: x1 = 50. 2. -20 - 2x2 + 2x3 = 0, -2x3 = 0: The solution of these equations yields: x1 = 50, x2 = -10, x3 = 0. This solution can be seen to violate Eqs. (E8) and (E9), which require x1 + x2 - 100 ≥ 0 and x1 + x2 + x3 - 150 ≥ 0.
• 190. 190 Example 2 Solution cont'd: Case II: x1 = 50. 3. x1 + x2 - 100 = 0, -2x3 = 0: The solution of these equations yields: x1 = 50, x2 = 50, x3 = 0. This solution can be seen to violate Eq. (E9), which requires x1 + x2 + x3 - 150 ≥ 0.
• 191. 191 Example 2 Solution cont'd: Case II: x1 = 50. 4. x1 + x2 - 100 = 0, x1 + x2 + x3 - 150 = 0: The solution of these equations yields: x1 = 50, x2 = 50, x3 = 50. This solution can be seen to satisfy all the constraint Eqs. (E7) to (E9): x1 - 50 ≥ 0, x1 + x2 - 100 ≥ 0, x1 + x2 + x3 - 150 ≥ 0.
• 192. 192 Example 2 Solution cont'd: Case II: x1 = 50. The values of λ1, λ2, and λ3 corresponding to this solution can be obtained from Eqs. (E15) as:
λ1 = -20, λ2 = -20, λ3 = -100
• 193. 193 Example 2 Solution cont'd: Case II: x1 = 50. Since these values of λj satisfy the requirements λ1 ≤ 0 (E10), λ2 ≤ 0 (E11), and λ3 ≤ 0 (E12), this solution can be identified as the optimum solution. Thus
x1* = 50, x2* = 50, x3* = 50
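As a sanity check (a sketch, not part of the original solution), the same optimum can be reproduced with a numerical solver. scipy's 'ineq' constraint convention is g(x) ≥ 0, which matches the form of g1, g2, g3 used above.

import numpy as np
from scipy.optimize import minimize

f = lambda x: x[0]**2 + x[1]**2 + x[2]**2 + 40*x[0] + 20*x[1]

cons = [{'type': 'ineq', 'fun': lambda x: x[0] - 50},
        {'type': 'ineq', 'fun': lambda x: x[0] + x[1] - 100},
        {'type': 'ineq', 'fun': lambda x: x[0] + x[1] + x[2] - 150}]

res = minimize(f, x0=np.array([60.0, 60.0, 60.0]),
               method='SLSQP', constraints=cons)
print(np.round(res.x))  # expected: [50. 50. 50.]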
• 194. 194 Convex functions • A function f(X) is said to be convex if, for any pair of points X1 = (x1(1), x2(1), …, xn(1))^T and X2 = (x1(2), x2(2), …, xn(2))^T and all λ, 0 ≤ λ ≤ 1,
f[λX2 + (1 - λ)X1] ≤ λf(X2) + (1 - λ)f(X1)
that is, if the segment joining the two points lies entirely above or on the graph of f(X). • A convex function always bends upward, and hence the local minimum of a convex function is also a global minimum.
• 195. 195 Convex functions • A function f(x) is convex if for any two points x and y, we have
f(y) ≥ f(x) + ∇f(x)^T (y - x)
• A function f(X) is convex if the Hessian matrix H(X) = [∂²f(X)/∂xi∂xj] is positive semidefinite. • Any local minimum of a convex function f(X) is a global minimum.
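One practical way to probe the Hessian condition numerically is to sample points and test that all eigenvalues of H are nonnegative; this suggests, but of course cannot prove, convexity. A minimal sketch with an illustrative function and helper name:

import numpy as np

def hessian_psd_on_samples(hess, samples, tol=1e-10):
    # True if hess(x) is positive semidefinite at every sampled point
    return all(np.linalg.eigvalsh(hess(x)).min() >= -tol for x in samples)

# Illustrative convex function f(x1, x2) = x1^2 + x1*x2 + x2^2,
# whose Hessian is the constant matrix [[2, 1], [1, 2]].
hess = lambda x: np.array([[2.0, 1.0],
                           [1.0, 2.0]])
samples = np.random.uniform(-5, 5, size=(100, 2))
print(hessian_psd_on_samples(hess, samples))  # True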
• 196. 196 Concave function • A function f(X) is called a concave function if, for any two points X1 and X2 and for all λ, 0 ≤ λ ≤ 1,
f[λX2 + (1 - λ)X1] ≥ λf(X2) + (1 - λ)f(X1)
that is, if the line segment joining the two points lies entirely below or on the graph of f(X). • It can be seen that a concave function bends downward, and hence the local maximum will also be its global maximum. • It can be seen that the negative of a convex function is a concave function.
  • 197. 197 Concave function • Convex and concave functions in one variable
  • 198. 198 Concave function • Convex and concave functions in two variables
• 199. 199 Example Determine whether the following function is convex or concave. f(x) = e^x Solution: H(x) = d²f/dx² = e^x > 0 for all real values of x. Hence f(x) is strictly convex.
• 200. 200 Example Determine whether the following function is convex or concave. f(x) = -8x² Solution: H(x) = d²f/dx² = -16 < 0 for all real values of x. Hence f(x) is strictly concave.
• 201. 201 Example Determine whether the following function is convex or concave. f(x1, x2) = 2x1³ - 6x2² Solution: Here
H(X) = | ∂²f/∂x1²     ∂²f/∂x1∂x2 |  =  | 12x1    0 |
       | ∂²f/∂x2∂x1   ∂²f/∂x2²   |     | 0     -12 |
with
∂²f/∂x1² = 12x1 ≤ 0 for x1 ≤ 0 and > 0 for x1 > 0
det H(X) = -144x1 ≥ 0 for x1 ≤ 0 and < 0 for x1 > 0
Hence H(X) will be negative semidefinite and f(X) is concave for x1 ≤ 0.
• 202. 202 Example Determine whether the following function is convex or concave.
f(x1, x2, x3) = 4x1² + 3x2² + 5x3² + 6x1x2 + x1x3 - 3x1 - 2x2 + 15
Solution:
H(X) = [∂²f/∂xi∂xj] = | 8  6   1 |
                      | 6  6   0 |
                      | 1  0  10 |
• 203. 203 Example Solution cont'd: Here the leading principal minors are given by:
|8| = 8 > 0
det [8 6; 6 6] = 48 - 36 = 12 > 0
det H(X) = 114 > 0
and hence the matrix H(X) is positive definite for all real values of x1, x2, x3. Therefore f(X) is a strictly convex function.
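The leading-principal-minor test used here is easy to mechanize; a minimal sketch (the helper name is an illustrative choice):

import numpy as np

def leading_principal_minors(H):
    # Determinants of the upper-left k x k submatrices, k = 1..n
    return [np.linalg.det(H[:k, :k]) for k in range(1, H.shape[0] + 1)]

H = np.array([[8.0, 6.0, 1.0],
              [6.0, 6.0, 0.0],
              [1.0, 0.0, 10.0]])
print(leading_principal_minors(H))  # approx [8.0, 12.0, 114.0], all > 0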
• 204. 204 Convex programming problem • When the optimization problem is stated as: Minimize f(X) subject to gj(X) ≤ 0, j = 1, 2, …, m, it is called a convex programming problem if the objective function f(X) and the constraint functions gj(X) are convex. • Supposing that f(X) and gj(X), j = 1, 2, …, m, are convex functions, the Lagrange function can be written as:
L(X, Y, λ) = f(X) + Σ_{j=1}^{m} λj [gj(X) + yj²]
where yj² is the nonnegative slack quantity that converts gj(X) ≤ 0 into an equality.
• 205. 205 Convex programming problem • If λj ≥ 0, then λj gj(X) is convex, and since λj yj = 0 from
∂L/∂yj (X, Y, λ) = 2λj yj = 0, j = 1, 2, …, m
L(X, Y, λ) will be a convex function. • A necessary condition for f(X) to be a relative minimum at X* is that L(X, Y, λ) have a stationary point at X*. However, if L(X, Y, λ) is a convex function, its derivative vanishes only at one point, which must be an absolute minimum of the function f(X). Thus the Kuhn-Tucker conditions are both necessary and sufficient for an absolute minimum of f(X) at X*.
• 206. 206 Convex programming problem • If the given optimization problem is known to be a convex programming problem, there will be no relative minima or saddle points, and hence the extreme point found by applying the Kuhn-Tucker conditions is guaranteed to be an absolute minimum of f(X). However, it is often very difficult to ascertain whether the objective and constraint functions involved in a practical engineering problem are convex.
• 207. 207 Linear Programming I: Simplex method • Linear programming is an optimization method applicable to the solution of problems in which the objective function and the constraints appear as linear functions of the decision variables. • The simplex method is the most efficient and popular method for solving general linear programming problems. • At least four Nobel prizes were awarded for contributions related to linear programming (e.g., in 1975 L.V. Kantorovich of the former Soviet Union and T.C. Koopmans of the USA were awarded the prize for the application of LP to the economic problem of allocating resources).
  • 208. 208 Linear Programming I: Simplex method-Applications • Petroleum refineries – choice of buying crude oil from several different sources with differing compositions and at differing prices – manufacturing different products such as aviation fuel, diesel fuel, and gasoline, in varying quantities – Constraints due to the restrictions on the quantity of the crude oil from a particular source, the capacity of the refinery to produce a particular product – A mix of the purchased crude oil and the manufactured products is sought that gives the maximum profit • Optimal production plan in a manufacturing firm – Pay overtime rates to achieve higher production during periods of higher demand • The routing of aircraft and ships can also be decided using LP
• 209. 209 Standard Form of a Linear Programming Problem • Scalar form:
Minimize f(x1, x2, …, xn) = c1x1 + c2x2 + … + cnxn
subject to the constraints
a11x1 + a12x2 + … + a1nxn = b1
a21x1 + a22x2 + … + a2nxn = b2
⋮
am1x1 + am2x2 + … + amnxn = bm
x1 ≥ 0, x2 ≥ 0, …, xn ≥ 0
where cj, bi, and aij (i = 1, 2, …, m; j = 1, 2, …, n) are known constants, and xj are the decision variables.
• 210. 210 Standard Form of a Linear Programming Problem • Matrix form:
Minimize f(X) = c^T X
subject to the constraints
aX = b
X ≥ 0
where
X = [x1, x2, …, xn]^T, b = [b1, b2, …, bm]^T, c = [c1, c2, …, cn]^T
a = | a11 a12 … a1n |
    | a21 a22 … a2n |
    | ⋮             |
    | am1 am2 … amn |
• 211. 211 Characteristics of a Linear Programming Problem • The objective function is of the minimization type. • All the constraints are of the equality type. • All the decision variables are nonnegative. • The number of variables in the problem is n. This includes the slack and surplus variables. • The number of constraints is m (m < n).
• 212. 212 Characteristics of a Linear Programming Problem • The number of basic variables is m (same as the number of constraints). • The number of nonbasic variables is n - m. • The right-hand-side column b is nonnegative: each component is greater than or equal to zero. • The calculations are organized in a table. • Only the values of the coefficients are necessary for the calculations. The table therefore contains only coefficient values, the matrix A in the previous discussion. These are the coefficients in the constraint equations.
• 213. 213 Characteristics of a Linear Programming Problem • The objective function is the last row in the table. The constraint coefficients are written first. • Row operations consist of adding (or subtracting) a definite multiple of the pivot row to (or from) other rows of the table.
• 214. 214 Transformation of LP Problems into Standard Form • The maximization of a function f(x1, x2, …, xn) is equivalent to the minimization of the negative of the same function. For example, minimizing
f = c1x1 + c2x2 + … + cnxn
is equivalent to maximizing
f' = -f = -c1x1 - c2x2 - … - cnxn
Consequently, the objective function can be stated in the minimization form in any linear programming problem.
• 215. 215 Transformation of LP Problems into Standard Form • A variable may be unrestricted in sign in some problems. In such cases, an unrestricted variable (which can take a positive, negative, or zero value) can be written as the difference of two nonnegative variables. • Thus if xj is unrestricted in sign, it can be written as xj = xj' - xj", where xj' ≥ 0 and xj" ≥ 0. • It can be seen that xj will be negative, zero, or positive depending on whether xj" is greater than, equal to, or less than xj'.
• 216. 216 Transformation of LP Problems into Standard Form If a constraint appears in the form of a "less than or equal to" type of inequality,
ak1x1 + ak2x2 + … + aknxn ≤ bk
it can be converted into the equality form by adding a nonnegative slack variable xn+1 as follows:
ak1x1 + ak2x2 + … + aknxn + xn+1 = bk
• 217. 217 Transformation of LP Problems into Standard Form If a constraint appears in the form of a "greater than or equal to" type of inequality,
ak1x1 + ak2x2 + … + aknxn ≥ bk
it can be converted into the equality form by subtracting a variable as:
ak1x1 + ak2x2 + … + aknxn - xn+1 = bk
where xn+1 is a nonnegative variable known as a surplus variable.
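Both transformations simply append a column to the coefficient matrix: +1 for a slack variable, -1 for a surplus variable. A minimal sketch (the function name and interface are illustrative assumptions):

import numpy as np

def to_standard_form(A, b, senses):
    # senses[i] is '<=', '>=' or '=' for row i; b is assumed nonnegative
    m, _ = A.shape
    extra = []
    for i, s in enumerate(senses):
        if s in ('<=', '>='):
            col = np.zeros(m)
            col[i] = 1.0 if s == '<=' else -1.0  # slack or surplus
            extra.append(col)
    return (np.hstack([A, np.column_stack(extra)]) if extra else A), b

A = np.array([[10.0, 5.0], [4.0, 10.0]])
b = np.array([2500.0, 2000.0])
A_std, b_std = to_standard_form(A, b, ['<=', '<='])
print(A_std)  # two slack columns appended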
• 218. 218 Geometry of LP Problems Example: A manufacturing firm produces two machine parts using lathes, milling machines, and grinding machines. The different machining times required for each part, the machining times available on the different machines, and the profit on each machine part are given in the following table. Determine the number of parts I and II to be manufactured per week to maximize the profit.
Type of machine        Time for Part I (min)   Time for Part II (min)   Maximum time available per week (min)
Lathes                 10                      5                        2500
Milling machines       4                       10                       2000
Grinding machines      1                       1.5                      450
Profit per unit        $50                     $100
• 219. 219 Geometry of LP Problems Solution: Let the numbers of machine parts I and II manufactured per week be denoted by x and y, respectively. The constraints due to the maximum time limitations on the various machines are given by:
10x + 5y ≤ 2500 (E1)
4x + 10y ≤ 2000 (E2)
x + 1.5y ≤ 450 (E3)
Since the variables x and y cannot take negative values, we have
x ≥ 0, y ≥ 0 (E4)
• 220. 220 Geometry of LP Problems Solution: The total profit is given by:
f(x, y) = 50x + 100y (E5)
Thus the problem is to determine the nonnegative values of x and y that satisfy the constraints stated in Eqs. (E1) to (E3) and maximize the objective function given by Eq. (E5). The inequalities (E1) to (E4) can be plotted in the xy plane and the feasible region identified as shown in the figure. Our objective is to find at least one point out of the infinitely many points in the shaded region of the figure that maximizes the profit function (E5).
• 221. 221 Geometry of LP Problems Solution: The contours of the objective function f are defined by the linear equation
50x + 100y = k = const.
As k is varied, the objective function line is moved parallel to itself. The maximum value of f is the largest k for which the objective function line has at least one point in common with the feasible region. Such a point can be identified as point G in the figure. The optimum solution corresponds to a value of x* = 187.5, y* = 125 and a profit of $21,875.00.
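This graphical optimum can be cross-checked with any LP solver; a sketch using scipy.optimize.linprog, converting the maximization to a minimization by negating the objective as described earlier:

from scipy.optimize import linprog

c = [-50, -100]        # maximize 50x + 100y  ==  minimize -(50x + 100y)
A_ub = [[10, 5],       # lathes
        [4, 10],       # milling machines
        [1, 1.5]]      # grinding machines
b_ub = [2500, 2000, 450]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # approx [187.5 125.] and 21875.0

The solver lands on the same vertex as point G, with the lathe and milling-machine constraints active and slack remaining on the grinding machines.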