OPTIMISATION AND
OPTIMAL CONTROL
AIM
To provide an understanding of the
principles of optimisation
techniques in the static and dynamic
contexts.
LEARNING OBJECTIVES
On completion of the module the student should
be able to demonstrate:
- an understanding of the basic principles of
optimisation and the ability to apply them to linear
and non-linear unconstrained and constrained
static problems,
- an understanding of the fundamentals of
optimal control, system identification and
parameter estimation.
CONTENT
(20 lectures common to BEng and MSc)
Principles of optimisation, essential features and
examples of application.
(1 lecture)
Basic principles of static optimisation theory,
unconstrained and constrained optimisation, Lagrange
multipliers, necessary and sufficient conditions for
optimality, limitations of analytical methods.
(3 lectures)
Numerical solution of static optimisation problems:
unidimensional search methods; unconstrained
multivariable optimisation, direct and indirect methods,
gradient and Newton type approaches; non-linear
programming with constraints.
(4 lectures)
Introduction to genetic optimisation
algorithms.
(1 lecture)
Basic concepts of linear programming.
(1 lecture)
On-line optimisation, integrated system
optimisation and parameter estimation.
(1 lecture)
The optimal control problem for continuous
dynamic systems: calculus of variations, the
maximum principle, and two-point boundary
value problems. The linear quadratic regulator
problem and the matrix Riccati equation.
(4 lectures)
Introduction to system identification, parameter
estimation and self-adaptive control
(3 lectures)
Introduction to the Kalman filter.
(2 lectures)
(10 Lectures MSc Students only)
LABORATORY WORK
The module will be illustrated by laboratory exercises and
demonstrations on the use of MATLAB and the
associated Optimization and Control Tool Boxes for
solving unconstrained and constrained static optimisation
problems and for solving linear quadratic regulator
problems.
ASSESSMENT
Via written examination.
MSc only - also laboratory session and report
READING LIST
P.E. Gill, W. Murray and M.H. Wright: "Practical Optimization" (Academic Press, 1981)
T.E. Edgar, D.M. Himmelblau and L.S. Lasdon: "Optimization of Chemical Processes", 2nd Edition (McGraw-Hill, 2001)
M.S. Bazaraa, H.D. Sherali and C.M. Shetty: "Nonlinear Programming - Theory and Algorithms", 2nd Edition (Wiley Interscience, 1993)
J. Nocedal and S.J. Wright: "Numerical Optimization" (Springer, 1999)
J.E. Dennis, Jr. and R.B. Schnabel: "Numerical Methods for Unconstrained Optimization and Nonlinear Equations" (SIAM Classics in Applied Mathematics, 1996; Prentice Hall, 1983)
K.J. Astrom and B. Wittenmark: "Adaptive Control", 2nd Edition (Prentice Hall, 1993)
F.L. Lewis and V.S. Syrmos: "Optimal Control", 2nd Edition (Wiley, 1995)
PRINCIPLES OF OPTIMISATION
Typical engineering problem: You have a process that
can be represented by a mathematical model. You also
have a performance criterion such as minimum cost.
The goal of optimisation is to find the values of the
variables in the process that yield the best value of the
performance criterion.
Two ingredients of an optimisation problem:
(i) process or model
(ii) performance criterion
Some typical performance criteria:
 maximum profit
 minimum cost
 minimum effort
 minimum error
 minimum waste
 maximum throughput
 best product quality
Note the need to express the performance
criterion in mathematical form.
Static optimisation: variables have
numerical values, fixed with
respect to time.
Dynamic optimisation: variables
are functions of time.
Essential Features
Every optimisation problem contains three
essential categories:
1. At least one objective function to be
optimised
2. Equality constraints
3. Inequality constraints
By a feasible solution we mean a set of variables which satisfy
categories 2 and 3. The region of feasible solutions is called the
feasible region.
[Figure: a feasible region in the (x1, x2) plane, bounded by a linear equality constraint and by linear and nonlinear inequality constraints]
An optimal solution is a set of values of the
variables that are contained in the feasible
region and also provide the best value of the
objective function in category 1.
For a meaningful optimisation problem the
model needs to be underdetermined.
Mathematical Description

Minimize:    f(x)        objective function
Subject to:  h(x) = 0    equality constraints
             g(x) ≥ 0    inequality constraints

where x ∈ ℝⁿ is a vector of n variables (x1, x2, ..., xn),
h(x) is a vector of equalities of dimension m1,
g(x) is a vector of inequalities of dimension m2.
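As a minimal illustration of this standard form, the sketch below encodes a small invented problem (the particular f, h and g are illustrative assumptions, not taken from the lectures) and checks whether candidate points are feasible:

```python
import numpy as np

# Illustrative problem in the standard form: minimise f(x)
# subject to h(x) = 0 and g(x) >= 0 (functions invented for the example).
f = lambda x: x[0] ** 2 + x[1] ** 2
h = lambda x: np.array([x[0] + x[1] - 2.0])  # one equality   (m1 = 1)
g = lambda x: np.array([x[0], x[1]])         # two inequalities (m2 = 2)

def feasible(x, tol=1e-8):
    """A feasible point satisfies every h_i(x) = 0 and every g_j(x) >= 0."""
    return bool(np.all(np.abs(h(x)) <= tol) and np.all(g(x) >= -tol))

print(feasible(np.array([1.0, 1.0])))  # True: on the equality, both g >= 0
print(feasible(np.array([3.0, 1.0])))  # False: h(x) = 2, not 0
```

Any optimal solution must come from the set of points for which `feasible` returns True.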
Steps Used To Solve Optimisation Problems
1. Analyse the process in order to make a list of all the
variables.
2. Determine the optimisation criterion and specify the
objective function.
3. Develop the mathematical model of the process to define
the equality and inequality constraints. Identify the
independent and dependent variables to obtain the number
of degrees of freedom.
4. If the problem formulation is too large or complex simplify
it if possible.
5. Apply a suitable optimisation technique.
6. Check the result and examine its sensitivity to changes in
model parameters and assumptions.
Classification of Optimisation
Problems
Properties of f(x)
 single variable or multivariable
 linear or nonlinear
 sum of squares
 quadratic
 smooth or non-smooth
 sparsity
Properties of h(x) and g(x)
 simple bounds
 smooth or non-smooth
 sparsity
 linear or nonlinear
 no constraints
Properties of variables x
• time variant or invariant
• continuous or discrete
• take only integer values
• mixed
Obstacles and Difficulties
• Objective function and/or the constraint functions
may have finite discontinuities in the continuous
parameter values.
• Objective function and/or the constraint functions
may be non-linear functions of the variables.
• Objective function and/or the constraint functions
may be defined in terms of complicated interactions
of the variables. This may prevent calculation of
unique values of the variables at the optimum.
• Objective function and/or the constraint functions
may exhibit nearly “flat” behaviour for some ranges
of variables or exponential behaviour for other ranges.
This causes the problem to be insensitive, or too
sensitive.
• The problem may exhibit many local optima
whereas the global optimum is sought. A solution may
be obtained that is less satisfactory than another
solution elsewhere.
• Absence of a feasible region.
• Model-reality differences.
Typical Examples of Application
static optimisation
• Plant design (sizing and layout).
• Operation (best steady-state operating
condition).
• Parameter estimation (model fitting).
• Allocation of resources.
• Choice of controller parameters (e.g. gains, time
constants) to minimise a given performance index
(e.g. overshoot, settling time, integral of error
squared).
dynamic optimisation
• Determination of a control signal u(t) to
transfer a dynamic system from an initial
state to a desired final state to satisfy a
given performance index.
• Optimal plant start-up and/or shut down.
• Minimum time problems.
BASIC PRINCIPLES OF STATIC
OPTIMISATION THEORY
Continuity of Functions
Functions containing discontinuities can cause
difficulty in solving optimisation problems.
Definition: A function of a single variable x is
continuous at a point x0 if:

(a) f(x0) exists
(b) lim(x→x0) f(x) exists
(c) lim(x→x0) f(x) = f(x0)
If f(x) is continuous at every point in a region R,
then f(x) is said to be continuous throughout R.
[Figure: two functions of x. In the first, f(x) is discontinuous. In the second, f(x) is continuous, but f'(x) = df/dx is not.]
Unimodal and Multimodal Functions
A unimodal function f(x) (in the range specified for
x) has a single extremum (minimum or maximum).
A multimodal function f(x) has two or more
extrema.
There is a distinction between the global
extremum (the largest or smallest among a set
of extrema) and local extrema (any extremum).

Note: many numerical procedures terminate at a
local extremum.

If f'(x) = 0 at the extremum, the point is
called a stationary point.
[Figure: a multimodal function of x, showing a local max (stationary), a local min (stationary), a global max (not stationary), a saddle point (stationary) and a global min (stationary)]
Multivariate Functions - Surface and
Contour Plots
We shall be concerned with basic properties of a scalar
function f(x) of n variables (x1,...,xn).
If n = 1, f(x) is a univariate function.
If n > 1, f(x) is a multivariate function.

For any multivariate function, the equation
z = f(x) defines a surface in (n + 1)-dimensional
space, where x ∈ ℝⁿ.
In the case n = 2, the points z = f(x1,x2) represent a
three dimensional surface.
Let c be a particular value of f(x1,x2). Then f(x1,x2)
= c defines a curve in x1 and x2 on the plane z = c.
If we consider a selection of different values of c,
we obtain a family of curves which provide a
contour map of the function z = f(x1,x2).
contour map of:

z = exp(x1) (4x1² + 2x2² + 4x1x2 + 2x2 + 1)

[Contour map in the (x1, x2) plane; the contours reveal a local minimum and a saddle point]
Example: Surface and Contour
Plots of "Peaks" Function

z = 3(1 − x1)² exp(−x1² − (x2 + 1)²)
    − 10(0.2x1 − x1³ − x2⁵) exp(−x1² − x2²)
    − (1/3) exp(−(x1 + 1)² − x2²)
34
0
5
10
15
20
0
5
10
15
20
-10
-5
0
5
10
x2
z
x1
multimodal!
10 20 30 40 50 60 70 80 90
10
20
30
40
50
60
70
80
90
100
x1
x2
global max
global min
local min
local max
local max
saddle
35
36
Gradient Vector

The slope of f(x) at a given point x, in the direction
of the ith co-ordinate axis, is ∂f(x)/∂xi.

The n-vector of these partial derivatives is termed the
gradient vector of f, denoted by:

∇f(x) = [∂f(x)/∂x1, ..., ∂f(x)/∂xn]ᵀ   (a column vector)
The gradient vector at a point is normal to the
contour through that point, in the direction of
increasing f.

At a stationary point:

∇f(x) = 0   (a null vector)
Example

f(x) = x1 x2² + x2 cos x1

∇f(x) = [∂f(x)/∂x1, ∂f(x)/∂x2]ᵀ = [x2² − x2 sin x1, 2x1x2 + cos x1]ᵀ

and the stationary point (points) are given by the
simultaneous solution(s) of:

x2² − x2 sin x1 = 0
2x1x2 + cos x1 = 0
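One solution of the stationarity equations x2² − x2 sin x1 = 0 and 2x1x2 + cos x1 = 0 can be written down by inspection: taking x2 = 0 forces cos x1 = 0, so (x1, x2) = (π/2, 0) is a stationary point. A short sketch verifies that the gradient vanishes there:

```python
import numpy as np

# Gradient of f(x) = x1*x2^2 + x2*cos(x1), as derived above.
def grad(x1, x2):
    return np.array([x2 ** 2 - x2 * np.sin(x1),   # df/dx1
                     2 * x1 * x2 + np.cos(x1)])   # df/dx2

print(grad(np.pi / 2, 0.0))  # both components are (numerically) zero
```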
e.g. f(x) = cᵀx  ⇒  ∇f(x) = c

Note: if ∇f(x) is a constant vector, f(x) is then linear.
Hessian Matrix (Curvature Matrix)

The second derivative of an n-variable function is
defined by the n² partial derivatives:

∂/∂xi (∂f(x)/∂xj),   i = 1, ..., n;  j = 1, ..., n

written as:

∂²f(x)/∂xi∂xj (i ≠ j)   or   ∂²f(x)/∂xi² (i = j)
These n² second partial derivatives are usually
represented by a square, symmetric matrix, termed
the Hessian matrix, denoted by:

H(x) = ∇²f(x) = [ ∂²f/∂x1²     ...  ∂²f/∂x1∂xn ]
                [   ...        ...     ...     ]
                [ ∂²f/∂xn∂x1   ...  ∂²f/∂xn²   ]
Example: For the previous example:

∇²f(x) = [ −x2 cos x1     2x2 − sin x1 ]
         [ 2x2 − sin x1   2x1          ]

Note: If the Hessian matrix of f(x) is a constant
matrix H, f(x) is then quadratic, expressed as:

f(x) = ½ xᵀHx + cᵀx,   ∇f(x) = Hx + c,   ∇²f(x) = H
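The quadratic identity f(x) = ½xᵀHx + cᵀx ⇒ ∇f(x) = Hx + c is easy to check numerically; in this sketch H and c are arbitrary illustrative choices, and a central finite difference of f is compared with Hx + c:

```python
import numpy as np

# f(x) = 0.5 x^T H x + c^T x with a constant symmetric H (values arbitrary).
H = np.array([[2.0, 1.0], [1.0, 3.0]])
c = np.array([1.0, -1.0])
f = lambda x: 0.5 * x @ H @ x + c @ x

x = np.array([0.7, -0.4])
eps = 1e-6
fd = np.array([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
               for e in np.eye(2)])       # finite-difference gradient
print(np.allclose(fd, H @ x + c))         # True: grad f(x) = Hx + c
```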
Convex and Concave Functions

A function is called concave over a given region R
if:

f(θxa + (1 − θ)xb) ≥ θ f(xa) + (1 − θ) f(xb)

where xa, xb ∈ R and 0 ≤ θ ≤ 1.

The function is strictly concave if ≥ is replaced
by >.

A function is called convex (strictly convex) if ≥
is replaced by ≤ (<).
[Figure: a concave function, with f''(x) ≤ 0, and a convex function, with f''(x) ≥ 0, each shown between points xa and xb]
If d²f(x)/dx² ≤ 0, then f(x) is concave.
If d²f(x)/dx² ≥ 0, then f(x) is convex.

For a multivariate function f(x) the conditions on the
Hessian matrix H(x) are:

f(x)               H(x)
strictly convex    +ve def
convex             +ve semi def
concave            -ve semi def
strictly concave   -ve def
Tests for Convexity and Concavity

H is +ve def (+ve semi def) iff xᵀHx > 0 (≥ 0) for all x ≠ 0.
H is -ve def (-ve semi def) iff xᵀHx < 0 (≤ 0) for all x ≠ 0.

Convenient tests: f(x) is strictly convex (H +ve def)
(convex (H +ve semi def)) if:

1. all eigenvalues of H(x) are > 0 (≥ 0), or
2. all principal determinants of H(x) are > 0 (≥ 0).
f(x) is strictly concave (H -ve def)
(concave (H -ve semi def)) if:

1. all eigenvalues of H(x) are < 0 (≤ 0), or
2. the principal determinants of H(x) alternate in sign:

Δ1 < 0, Δ2 > 0, Δ3 < 0, ...   (Δ1 ≤ 0, Δ2 ≥ 0, Δ3 ≤ 0, ...)
Example: f(x) = 2x1² + 3x1x2 + 2x2²

∇f(x) = [∂f/∂x1, ∂f/∂x2]ᵀ = [4x1 + 3x2, 3x1 + 4x2]ᵀ

H(x) = ∇²f(x) = [ 4  3 ]
                [ 3  4 ]

Principal determinants: Δ1 = 4 > 0, Δ2 = |H| = 7 > 0.

Eigenvalues: |λI − H| = λ² − 8λ + 7 = 0, giving λ1 = 1, λ2 = 7.

Hence, f(x) is strictly convex.
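The eigenvalue test in this example can be reproduced in a few lines (a sketch using NumPy):

```python
import numpy as np

# Hessian from the example; eigenvalues 1 and 7 are both positive,
# so H is positive definite and f is strictly convex.
H = np.array([[4.0, 3.0], [3.0, 4.0]])
eig = np.linalg.eigvalsh(H)
print(eig)              # [1. 7.]
print(np.all(eig > 0))  # True
# The principal determinants agree: 4 > 0 and det(H) = 7 > 0.
```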
Convex Region

A convex set of points exists if, for any two points xa
and xb in a region, all points:

x = θxa + (1 − θ)xb,   0 ≤ θ ≤ 1

on the straight line joining xa and xb are in the set.

If a region is completely bounded by concave functions
then the functions form a convex region.

[Figure: a non-convex region and a convex region, each with points xa and xb joined by a straight line]
Necessary and Sufficient Conditions for an
Extremum of an Unconstrained Function

A condition N is necessary for a result R if R can be true
only if N is true:   R ⇒ N

A condition S is sufficient for a result R if R is true
whenever S is true:   S ⇒ R

A condition T is necessary and sufficient for a result R
if R is true if and only if T is true:   T ⇔ R
There are two necessary conditions and a single sufficient
condition to guarantee that x* is an extremum of a
function f(x) at x = x*:

1. f(x) is twice continuously differentiable at x*.
2. ∇f(x*) = 0, i.e. a stationary point exists at x*.
3. ∇²f(x*) = H(x*) is +ve def for a minimum to exist
   at x*, or -ve def for a maximum to exist at x*.

1 and 2 are necessary conditions; 3 is a
sufficient condition.

Note: an extremum may exist at x* even though it
is not possible to demonstrate the fact using the
three conditions.
Example: Consider:

f(x) = 4 + 4.5x1 − 4x2 + x1² + 2x2² − 2x1x2 + x1⁴ − 2x1²x2

The gradient vector is:

∇f(x) = [ 4.5 + 2x1 − 2x2 + 4x1³ − 4x1x2 ]
        [ −4 − 2x1 − 2x1² + 4x2          ]

yielding three stationary points, located by setting
∇f(x) = 0 and solving numerically:

x* = (x1, x2)      f(x*)   eigenvalues of ∇²f(x*)   classification
A. (−1.05, 1.03)   −0.51   10.5, 3.5                global min
B. (1.94, 3.85)     0.98   37.0, 0.97               local min
C. (0.61, 1.49)     2.83   7.0, −2.56               saddle

where:

∇²f(x) = [ 2 + 12x1² − 4x2   −2 − 4x1 ]
         [ −2 − 4x1           4       ]
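The "solving numerically" step for this example can be sketched with a plain Newton iteration on ∇f(x) = 0 (the starting guesses below are my own choices near the tabulated stationary points):

```python
import numpy as np

# Gradient and Hessian of the example function, as given above.
def grad(x):
    x1, x2 = x
    return np.array([4.5 + 2 * x1 - 2 * x2 + 4 * x1 ** 3 - 4 * x1 * x2,
                     -4.0 - 2 * x1 - 2 * x1 ** 2 + 4 * x2])

def hess(x):
    x1, x2 = x
    return np.array([[2 + 12 * x1 ** 2 - 4 * x2, -2.0 - 4 * x1],
                     [-2.0 - 4 * x1, 4.0]])

roots = []
for guess in ([-1.0, 1.0], [2.0, 4.0], [0.5, 1.5]):
    x = np.array(guess)
    for _ in range(50):                 # Newton step: x <- x - H^{-1} grad
        x = x - np.linalg.solve(hess(x), grad(x))
    roots.append(x)
    print(np.round(x, 2), np.linalg.eigvalsh(hess(x)))
```

The signs of the Hessian eigenvalues at each converged root reproduce the min/min/saddle classification of the table.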
[Contour map of f(x) in the (x1, x2) plane, showing the two minima and the saddle point]
Interpretation of the Objective Function in Terms
of its Quadratic Approximation

If a function of two variables can be approximated within a
region of a stationary point by a quadratic function:

f(x1, x2) = ½ [x1 x2] [ h11  h12 ] [x1]  +  [c1 c2] [x1]
                      [ h12  h22 ] [x2]             [x2]

          = ½ h11 x1² + ½ h22 x2² + h12 x1x2 + c1 x1 + c2 x2

then the eigenvalues and eigenvectors of:

H(x1*, x2*) = ∇²f(x1*, x2*) = [ h11  h12 ]
                              [ h12  h22 ]

can be used to interpret the nature of f(x1, x2) at
x1 = x1*, x2 = x2*.
They provide information on the shape of f(x1, x2) at
x1 = x1*, x2 = x2*. If H(x1*, x2*) is +ve def, the
eigenvectors are at right angles (orthogonal) and
correspond to the principal axes of the elliptical
contours of f(x1, x2).

A valley or ridge lies in the direction of the eigenvector
associated with a relatively small eigenvalue.

These interpretations can be generalised to the
multivariate quadratic approximation:

f(x) = ½ xᵀHx + cᵀx
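A small numerical illustration (the H below is an arbitrary positive definite choice, not from the slides): the eigenvectors returned for a symmetric H are orthogonal, and they are the principal axes of the contours ½xᵀHx = const.

```python
import numpy as np

# Arbitrary symmetric positive definite H for illustration.
H = np.array([[3.0, 1.0], [1.0, 3.0]])
w, V = np.linalg.eigh(H)                  # eigenvalues and eigenvectors
print(w)                                  # [2. 4.]
print(np.allclose(V.T @ V, np.eye(2)))    # True: eigenvectors orthogonal
# The valley direction is the eigenvector V[:, 0] belonging to the
# smaller eigenvalue w[0]: the quadratic grows most slowly along it.
```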
Case 1: Equal Eigenvalues - circular contours which are
interpreted as a circular hill (max)(-ve eigenvalues) or
circular valley (min) (+ve eigenvalues)
[Contour plot: circular contours in the (x1, x2) plane]
Case 2: Unequal Eigenvalues (same sign) –
elliptical contours which are interpreted as an
elliptical hill (max)(-ve eigenvalues) or elliptical
valley (min)(+ve eigenvalues)
[Contour plot: elliptical contours in the (x1, x2) plane]
Case 3: Eigenvalues of opposite sign but equal in
magnitude - symmetrical saddle.
[Contour plot: contours of a symmetrical saddle in the (x1, x2) plane]
Case 4: Eigenvalues of opposite sign but unequal in
magnitude - asymmetrical saddle.
[Contour plot: contours of an asymmetrical saddle in the (x1, x2) plane]
Optimisation with Equality
Constraints

min f(x),  x ∈ ℝⁿ
subject to: h(x) = 0,  m constraints (m ≤ n)

Elimination of variables:

example:  min f(x) = 4x1² + 5x2²   (a)
          s.t.  2x1 + 3x2 = 6      (b)

Using (b) to eliminate x1 gives:

x1 = (6 − 3x2)/2   (c)

and substituting into (a):

f(x) = (6 − 3x2)² + 5x2²
At a stationary point:

df(x)/dx2 = 0 = −6(6 − 3x2) + 10x2 = 28x2 − 36,  so x2* = 1.286

Then using (c): x1* = (6 − 3x2*)/2 = 1.071

Hence, the stationary point (min) is: (1.071, 1.286)
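The elimination calculation can be checked in a couple of lines (a sketch; F below denotes the reduced one-variable objective (6 − 3x2)² + 5x2²):

```python
# Reduced objective after eliminating x1, and its derivative.
F = lambda x2: (6 - 3 * x2) ** 2 + 5 * x2 ** 2
dF = lambda x2: -6 * (6 - 3 * x2) + 10 * x2   # simplifies to 28*x2 - 36

x2_star = 36 / 28
x1_star = (6 - 3 * x2_star) / 2
print(round(x1_star, 3), round(x2_star, 3))   # 1.071 1.286
print(abs(dF(x2_star)) < 1e-9)                # True: stationary point
```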
The Lagrange Multiplier Method

Consider a two-variable problem with a single
equality constraint:

min f(x1, x2)
s.t. h(x1, x2) = 0

At a stationary point we may write:

df = (∂f/∂x1) dx1 + (∂f/∂x2) dx2 = 0   (a)
dh = (∂h/∂x1) dx1 + (∂h/∂x2) dx2 = 0   (b)

or, in matrix form:

[ ∂f/∂x1  ∂f/∂x2 ] [dx1]   [0]
[ ∂h/∂x1  ∂h/∂x2 ] [dx2] = [0]
If:

| ∂f/∂x1  ∂f/∂x2 |
| ∂h/∂x1  ∂h/∂x2 | = 0

nontrivial, nonunique solutions for dx1 and dx2 will exist.
This is achieved by setting:

∂f/∂x1 = −λ ∂h/∂x1   and   ∂f/∂x2 = −λ ∂h/∂x2

where λ is known as a Lagrange multiplier.
If an augmented objective function, called the
Lagrangian, is defined as:

L(x1, x2, λ) = f(x1, x2) + λ h(x1, x2)

we can solve the constrained optimisation problem by
solving:

∂L/∂x1 = ∂f/∂x1 + λ ∂h/∂x1 = 0  }
∂L/∂x2 = ∂f/∂x2 + λ ∂h/∂x2 = 0  }  provides equations (a) and (b)
∂L/∂λ  = h(x1, x2) = 0             re-statement of the equality constraint
Generalising: to solve the problem:

min f(x),  x ∈ ℝⁿ
subject to: h(x) = 0,  m constraints (m ≤ n)

define the Lagrangian:

L(x, λ) = f(x) + λᵀh(x),  λ ∈ ℝᵐ

and the stationary point (points) are obtained from:

∇x L(x, λ) = ∇f(x) + (∇h(x))ᵀλ = 0
∇λ L(x, λ) = h(x) = 0
Example: Consider the previous example
again. The Lagrangian is:

L = 4x1² + 5x2² + λ(2x1 + 3x2 − 6)

∂L/∂x1 = 8x1 + 2λ = 0        (a)
∂L/∂x2 = 10x2 + 3λ = 0       (b)
∂L/∂λ  = 2x1 + 3x2 − 6 = 0   (c)

Substituting (a) and (b) into (c) gives:
2(−λ/4) + 3(−3λ/10) − 6 = 0  ⇒  −(14/10)λ = 6  ⇒  λ* = −30/7 = −4.286

Hence,  x1* = −λ*/4 = 15/14 = 1.071,   x2* = −3λ*/10 = 90/70 = 1.286

which agrees with the previous result.

[Contour map of f with the constraint line 2x1 + 3x2 = 6 in the (x1, x2) plane]
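Because the Lagrangian conditions (a)-(c) of this example are linear in (x1, x2, λ), the whole system can be solved directly; this sketch confirms the values above:

```python
import numpy as np

# Stationarity and constraint equations from the example:
#   8*x1         + 2*lam = 0
#         10*x2 + 3*lam = 0
#   2*x1 + 3*x2          = 6
A = np.array([[8.0, 0.0, 2.0],
              [0.0, 10.0, 3.0],
              [2.0, 3.0, 0.0]])
b = np.array([0.0, 0.0, 6.0])
x1, x2, lam = np.linalg.solve(A, b)
print(round(x1, 3), round(x2, 3), round(lam, 3))  # 1.071 1.286 -4.286
```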
Necessary Conditions for a Local Extremum of
an Optimisation Problem Subject to Equality and
Inequality Constraints
(Kuhn-Tucker Conditions)

Consider the problem:

min f(x),  x ∈ ℝⁿ
s.t.: h(x) = 0,  m equalities (m ≤ n)
      g(x) ≥ 0,  p inequalities

Here, we define the Lagrangian:

L(x, λ, μ) = f(x) + λᵀh(x) − μᵀg(x),   λ ∈ ℝᵐ, μ ∈ ℝᵖ

The necessary conditions for x* to be a local
extremum of f(x) are:
(a) f(x), hj(x), gj(x) are all twice differentiable at x*.
(b) The Lagrange multipliers exist.
(c) All constraints are satisfied at x*:
    h(x*) = 0,   g(x*) ≥ 0
(d) The Lagrange multipliers μj* for the inequality
    constraints are not negative, i.e. μj* ≥ 0.
(e) The binding (active) inequality constraints are
    zero, the inactive inequality constraints are > 0,
    and the associated μj*'s are 0 at x*, i.e.
    μj* gj(x*) = 0
(f) The Lagrangian function is at a stationary point:
    ∇x L(x*, λ*, μ*) = 0
Notes:
1. Further analysis or investigation is
required to determine if the extremum is a
minimum (or maximum)
2. If f(x) is convex, h(x) are linear and g(x)
are concave:
x* will be a global extremum.
Limitations of Analytical Methods
The computations needed to evaluate the above
conditions can be extensive and intractable.
Furthermore, the resulting simultaneous equations
required for solving x*, λ* and μ* are often nonlinear
and cannot be solved without resorting to numerical
methods. The results may be inconclusive.

For these reasons, we often have to resort to numerical
methods for solving optimisation problems, using
computer codes (e.g. MATLAB).
Example

Determine whether the potential minimum
x* = (1.00, 4.90) satisfies the Kuhn-Tucker
conditions for the problem:

min f(x) = 4x1 − x2² − 12
s.t.  h1(x) = 25 − x1² − x2² = 0
      g1(x) = 10x1 − x1² + 10x2 − x2² − 34 ≥ 0
      g2(x) = (x1 − 3)² + (x2 − 1)² ≥ 0
      g3(x) = x1 ≥ 0
      g4(x) = x2 ≥ 0
[Contours of f(x) with the constraints h1(x) = 0, g1(x) = 0 and x2 = 0, and the point (1.00, 4.90) marked]
We test each Kuhn-Tucker condition in turn:

(a) All functions are seen by inspection to be twice
differentiable.
(b) We assume the Lagrange multipliers exist.
(c) Are the constraints satisfied?

h1: 25 − (1.00)² − (4.90)² = −0.01 ≈ 0                          yes
g1: 10(1.00) − (1.00)² + 10(4.90) − (4.90)² − 34 = −0.01 ≈ 0    yes (binding)
g2: (1.00 − 3)² + (4.90 − 1)² = 19.21 > 0                       yes (not active)
g3: 1.00 > 0                                                    yes (not active)
g4: 4.90 > 0                                                    yes (not active)
To test the rest of the conditions we need to determine the
Lagrange multipliers using the stationarity conditions.
First we note that from condition (e) we require:

μj* gj(x*) = 0,   j = 1, 2, 3, 4

μ1* can have any value, because g1(x*) = 0
μ2* must be zero, because g2(x*) > 0
μ3* must be zero, because g3(x*) > 0
μ4* must be zero, because g4(x*) > 0
Hence, consider the stationarity condition
∇x L(x, λ, μ) = 0 where, since μ2* = μ3* = μ4* = 0:

L = 4x1 − x2² − 12 + λ1(25 − x1² − x2²) − μ1(10x1 − x1² + 10x2 − x2² − 34)

∂L/∂x1 = 4 − 2λ1x1 − μ1(10 − 2x1) = 4 − 2λ1 − 8μ1 = 0
∂L/∂x2 = −2x2 − 2λ1x2 − μ1(10 − 2x2) = −9.8 − 9.8λ1 − 0.2μ1 = 0

(evaluated at x* = (1.00, 4.90))

Solving gives: λ1* = −1.015,  μ1* = 0.754
Now we can check the remaining conditions:

(d) Are μj* ≥ 0, j = 1, 2, 3, 4?
    μ1* = 0.754 ≥ 0,  μ2* = μ3* = μ4* = 0
    Hence, the answer is yes.
(e) Are μj* gj(x*) = 0, j = 1, 2, 3, 4?
    Yes, because we have already used this above.
(f) Is the Lagrangian function at a stationary point?
    Yes, because we have already used this above.

Hence, all the Kuhn-Tucker conditions are
satisfied and we can have confidence in the
solution:

x* = (1.00, 4.90)
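The multiplier calculation above can be replayed numerically; this sketch solves the two stationarity equations at x* = (1.00, 4.90) as a linear system in (λ1, μ1) and re-checks condition (d):

```python
import numpy as np

x1, x2 = 1.00, 4.90
# dL/dx1 = 4     - 2*l1*x1 - m1*(10 - 2*x1) = 0
# dL/dx2 = -2*x2 - 2*l1*x2 - m1*(10 - 2*x2) = 0
# Rearranged as A @ [l1, m1] = b:
A = np.array([[-2 * x1, -(10 - 2 * x1)],
              [-2 * x2, -(10 - 2 * x2)]])
b = np.array([-4.0, 2 * x2])
l1, m1 = np.linalg.solve(A, b)
print(round(l1, 3), round(m1, 3))  # -1.015 0.754
print(m1 >= 0)                     # True: condition (d) holds
```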
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )Tsuyoshi Horigome
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxwendy cai
 
Current Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLCurrent Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLDeelipZope
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort servicejennyeacort
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxJoão Esperancinha
 
Internship report on mechanical engineering
Internship report on mechanical engineeringInternship report on mechanical engineering
Internship report on mechanical engineeringmalavadedarshan25
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerAnamika Sarkar
 
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escortsranjana rawat
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSCAESB
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionDr.Costas Sachpazis
 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.eptoze12
 
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...srsj9000
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVRajaP95
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxPoojaBan
 
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfCCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfAsst.prof M.Gokilavani
 

Recently uploaded (20)

Call Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call GirlsCall Girls Narol 7397865700 Independent Call Girls
Call Girls Narol 7397865700 Independent Call Girls
 
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
VIP Call Girls Service Kondapur Hyderabad Call +91-8250192130
 
SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )SPICE PARK APR2024 ( 6,793 SPICE Models )
SPICE PARK APR2024 ( 6,793 SPICE Models )
 
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCRCall Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
Call Us -/9953056974- Call Girls In Vikaspuri-/- Delhi NCR
 
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
9953056974 Call Girls In South Ex, Escorts (Delhi) NCR.pdf
 
What are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptxWhat are the advantages and disadvantages of membrane structures.pptx
What are the advantages and disadvantages of membrane structures.pptx
 
Current Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCLCurrent Transformer Drawing and GTP for MSETCL
Current Transformer Drawing and GTP for MSETCL
 
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort serviceGurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
Gurgaon ✡️9711147426✨Call In girls Gurgaon Sector 51 escort service
 
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptxDecoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
Decoding Kotlin - Your guide to solving the mysterious in Kotlin.pptx
 
Internship report on mechanical engineering
Internship report on mechanical engineeringInternship report on mechanical engineering
Internship report on mechanical engineering
 
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube ExchangerStudy on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
Study on Air-Water & Water-Water Heat Exchange in a Finned Tube Exchanger
 
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
(MEERA) Dapodi Call Girls Just Call 7001035870 [ Cash on Delivery ] Pune Escorts
 
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptxExploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
Exploring_Network_Security_with_JA3_by_Rakesh Seal.pptx
 
GDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentationGDSC ASEB Gen AI study jams presentation
GDSC ASEB Gen AI study jams presentation
 
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective IntroductionSachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
Sachpazis Costas: Geotechnical Engineering: A student's Perspective Introduction
 
Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.Oxy acetylene welding presentation note.
Oxy acetylene welding presentation note.
 
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
Gfe Mayur Vihar Call Girls Service WhatsApp -> 9999965857 Available 24x7 ^ De...
 
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IVHARMONY IN THE NATURE AND EXISTENCE - Unit-IV
HARMONY IN THE NATURE AND EXISTENCE - Unit-IV
 
Heart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptxHeart Disease Prediction using machine learning.pptx
Heart Disease Prediction using machine learning.pptx
 
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdfCCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
CCS355 Neural Network & Deep Learning UNIT III notes and Question bank .pdf
 

lecture.ppt

  • 7. 7 LABORATORY WORK The module will be illustrated by laboratory exercises and demonstrations on the use of MATLAB and the associated Optimization and Control Tool Boxes for solving unconstrained and constrained static optimisation problems and for solving linear quadratic regulator problems. ASSESSMENT Via written examination. MSc only - also laboratory session and report
  • 8. 8 READING LIST P.E. Gill, W. Murray and M.H. Wright: "Practical Optimization" (Academic Press, 1981) T.E. Edgar, D.M. Himmelblau and L.S. Lasdon: "Optimization of Chemical Processes", 2nd Edition (McGraw-Hill, 2001) M.S. Bazaraa, H.D. Sherali and C.M. Shetty: "Nonlinear Programming - Theory and Algorithms", 2nd Edition (Wiley Interscience, 1993) J. Nocedal and S.J. Wright: "Numerical Optimization" (Springer, 1999) J.E. Dennis, Jr. and R.B. Schnabel: "Numerical Methods for Unconstrained Optimization and Nonlinear Equations" (SIAM Classics in Applied Mathematics, 1996; originally Prentice Hall, 1983) K.J. Astrom and B. Wittenmark: "Adaptive Control", 2nd Edition (Prentice Hall, 1993) F.L. Lewis and V.S. Syrmos: "Optimal Control", 2nd Edition (Wiley, 1995)
  • 9. 9 PRINCIPLES OF OPTIMISATION Typical engineering problem: You have a process that can be represented by a mathematical model. You also have a performance criterion such as minimum cost. The goal of optimisation is to find the values of the variables in the process that yield the best value of the performance criterion. Two ingredients of an optimisation problem: (i) process or model (ii) performance criterion
  • 10. 10 Some typical performance criteria:  maximum profit  minimum cost  minimum effort  minimum error  minimum waste  maximum throughput  best product quality Note the need to express the performance criterion in mathematical form.
  • 11. 11 Static optimisation: variables have numerical values, fixed with respect to time. Dynamic optimisation: variables are functions of time.
  • 12. 12 Essential Features Every optimisation problem contains three essential categories: 1. At least one objective function to be optimised 2. Equality constraints 3. Inequality constraints
  • 13. 13 By a feasible solution we mean a set of variables which satisfy categories 2 and 3. The region of feasible solutions is called the feasible region. [Figure: a feasible region in the (x1, x2) plane bounded by linear equality, linear inequality and nonlinear inequality constraints.]
  • 14. 14 An optimal solution is a set of values of the variables that are contained in the feasible region and also provide the best value of the objective function in category 1. For a meaningful optimisation problem the model needs to be underdetermined.
  • 15. 15 Mathematical Description Minimize: f(x) (objective function) Subject to: h(x) = 0 (equality constraints) and g(x) ≥ 0 (inequality constraints), where x is a vector of n variables (x1, x2, ..., xn), h(x) is a vector of equalities of dimension m1 and g(x) is a vector of inequalities of dimension m2.
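As an illustrative sketch (Python rather than the MATLAB used in the labs; the particular f, h and g below are borrowed from the equality-constrained example solved later in these notes, with simple non-negativity bounds added by me for illustration), the standard form can be encoded as callables plus a feasibility check:

```python
import numpy as np

def f(x):                      # objective function
    return 4.0 * x[0]**2 + 5.0 * x[1]**2

def h(x):                      # equality constraints, h(x) = 0
    return np.array([2.0 * x[0] + 3.0 * x[1] - 6.0])

def g(x):                      # inequality constraints, g(x) >= 0
    return np.array([x[0], x[1]])

def is_feasible(x, tol=1e-6):
    """A point is feasible when it satisfies categories 2 and 3."""
    return bool(np.all(np.abs(h(x)) <= tol) and np.all(g(x) >= -tol))

x = np.array([1.071, 1.286])   # candidate point from the later worked example
print(is_feasible(x), f(x))
```

The optimal solution must, in addition, give the best objective value among all feasible points.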
  • 16. 16 Steps Used To Solve Optimisation Problems 1. Analyse the process in order to make a list of all the variables. 2. Determine the optimisation criterion and specify the objective function. 3. Develop the mathematical model of the process to define the equality and inequality constraints. Identify the independent and dependent variables to obtain the number of degrees of freedom. 4. If the problem formulation is too large or complex, simplify it if possible. 5. Apply a suitable optimisation technique. 6. Check the result and examine its sensitivity to changes in model parameters and assumptions.
  • 17. 17 Classification of Optimisation Problems Properties of f(x)  single variable or multivariable  linear or nonlinear  sum of squares  quadratic  smooth or non-smooth  sparsity
  • 18. 18 Properties of h(x) and g(x)  simple bounds  smooth or non-smooth  sparsity  linear or nonlinear  no constraints
  • 19. 19 Properties of variables x •time variant or invariant •continuous or discrete •take only integer values •mixed
  • 20. 20 Obstacles and Difficulties • Objective function and/or the constraint functions may have finite discontinuities in the continuous parameter values. • Objective function and/or the constraint functions may be non-linear functions of the variables. • Objective function and/or the constraint functions may be defined in terms of complicated interactions of the variables. This may prevent calculation of unique values of the variables at the optimum.
  • 21. 21 • Objective function and/or the constraint functions may exhibit nearly “flat” behaviour for some ranges of variables or exponential behaviour for other ranges. This causes the problem to be insensitive, or too sensitive. • The problem may exhibit many local optima whereas the global optimum is sought. A solution may be obtained that is less satisfactory than another solution elsewhere. • Absence of a feasible region. • Model-reality differences.
  • 22. 22 Typical Examples of Application static optimisation • Plant design (sizing and layout). • Operation (best steady-state operating condition). • Parameter estimation (model fitting). • Allocation of resources. • Choice of controller parameters (e.g. gains, time constants) to minimise a given performance index (e.g. overshoot, settling time, integral of error squared).
  • 23. 23 dynamic optimisation • Determination of a control signal u(t) to transfer a dynamic system from an initial state to a desired final state to satisfy a given performance index. • Optimal plant start-up and/or shut down. • Minimum time problems
  • 24. 24 BASIC PRINCIPLES OF STATIC OPTIMISATION THEORY Continuity of Functions Functions containing discontinuities can cause difficulty in solving optimisation problems. Definition: A function of a single variable x is continuous at a point x0 if: (a) f(x0) exists, (b) lim_{x→x0} f(x) exists, (c) lim_{x→x0} f(x) = f(x0).
  • 25. 25 If f(x) is continuous at every point in a region R, then f(x) is said to be continuous throughout R. [Figure: two plots of f(x) against x; in the first, f(x) is discontinuous; in the second, f(x) is continuous but f′(x) = df(x)/dx is not.]
  • 26. 26 Unimodal and Multimodal Functions A unimodal function f(x) (in the range specified for x) has a single extremum (minimum or maximum). A multimodal function f(x) has two or more extrema. There is a distinction between the global extremum (the biggest or smallest between a set of extrema) and local extrema (any extremum). Note: many numerical procedures terminate at a local extremum. If f′(x) = 0 at the extremum, the point is called a stationary point.
  • 27. 27 [Figure: a multimodal function f(x) with a local maximum (stationary), a local minimum (stationary), a saddle point (stationary), a global minimum (stationary) and a global maximum that is not stationary.]
  • 28. 28 Multivariate Functions - Surface and Contour Plots We shall be concerned with basic properties of a scalar function f(x) of n variables (x1, ..., xn). If n = 1, f(x) is a univariate function; if n > 1, f(x) is a multivariate function. For any multivariate function, the equation z = f(x) defines a surface in (n+1)-dimensional space ℝⁿ⁺¹.
  • 29. 29 In the case n = 2, the points z = f(x1,x2) represent a three dimensional surface. Let c be a particular value of f(x1,x2). Then f(x1,x2) = c defines a curve in x1 and x2 on the plane z = c. If we consider a selection of different values of c, we obtain a family of curves which provide a contour map of the function z = f(x1,x2).
  • 30. 30 [Figure: contour map of z = e^{x1}(4x1² + 2x2² + 4x1x2 + 2x2 + 1), with contour levels from 0.2 up to z = 20 in the (x1, x2) plane, showing a local minimum and a saddle point.]
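A quick numerical sketch (Python; the grid search is my own illustration, not a method from the notes) locates the minimum visible on this contour map. Completing the square gives z = e^{x1}((2x1 + x2)² + (x2 + 1)²), so the surface is non-negative and its minimiser is (0.5, −1):

```python
import numpy as np

# Evaluate z = exp(x1)*(4*x1^2 + 2*x2^2 + 4*x1*x2 + 2*x2 + 1) on a grid
# and locate its minimiser numerically.
def z(x1, x2):
    return np.exp(x1) * (4*x1**2 + 2*x2**2 + 4*x1*x2 + 2*x2 + 1)

x1g, x2g = np.meshgrid(np.arange(-3, 2.01, 0.05), np.arange(-3, 2.01, 0.05))
vals = z(x1g, x2g)
i = np.unravel_index(np.argmin(vals), vals.shape)
xmin = (x1g[i], x2g[i])   # grid minimiser, expected near (0.5, -1)
print(xmin, vals[i])
```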
  • 33. 33 Example: Surface and Contour Plots of "Peaks" Function z = 3(1 − x1)² exp(−x1² − (x2 + 1)²) − 10(0.2x1 − x1³ − x2⁵) exp(−x1² − x2²) − (1/3) exp(−(x1 + 1)² − x2²)
  • 34. 34 [Figure: surface and contour plots of the "peaks" function - it is multimodal, with a global maximum, a global minimum, two local maxima, a local minimum and a saddle point.]
  • 36. 36 Gradient Vector The slope of f(x) at a point x = x̄ in the direction of the i-th co-ordinate axis is ∂f(x)/∂xi evaluated at x = x̄. The n-vector of these partial derivatives is termed the gradient vector of f, denoted by: ∇f(x) = [∂f(x)/∂x1, ..., ∂f(x)/∂xn]ᵀ (a column vector).
  • 37. 37 The gradient vector at a point x = x̄ is normal to the contour through that point, in the direction of increasing f. At a stationary point: ∇f(x) = 0 (a null vector).
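The gradient definition can be checked numerically with central differences (an illustrative Python sketch; the quadratic f below is simply a convenient test function, the same one used in a later convexity example):

```python
import numpy as np

def f(x):
    return 2*x[0]**2 - 3*x[0]*x[1] + 2*x[1]**2

def grad_fd(f, x, eps=1e-6):
    """Gradient vector approximated one co-ordinate axis at a time."""
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = eps
        g[i] = (f(x + e) - f(x - e)) / (2*eps)   # central difference slope
    return g

x = np.array([1.0, 2.0])
g_exact = np.array([4*x[0] - 3*x[1], -3*x[0] + 4*x[1]])   # analytic gradient
print(grad_fd(f, x), g_exact)
```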
  • 38. 38 Example f(x) = x1²x2 + x1 cos x2 ∇f(x) = [∂f(x)/∂x1, ∂f(x)/∂x2]ᵀ = [2x1x2 + cos x2, x1² − x1 sin x2]ᵀ and the stationary point (points) are given by the simultaneous solution(s) of: 2x1x2 + cos x2 = 0 and x1² − x1 sin x2 = 0
  • 39. 39 e.g. f(x) = cᵀx ⇒ ∇f(x) = c Note: if ∇f(x) is a constant vector, f(x) is then linear.
  • 40. 40 Hessian Matrix (Curvature Matrix) The second derivative of an n-variable function is defined by the n² partial derivatives: ∂/∂xi(∂f(x)/∂xj), i = 1, ..., n; j = 1, ..., n, written as: ∂²f(x)/∂xi∂xj for i ≠ j and ∂²f(x)/∂xi² for i = j.
  • 41. 41 These n² second partial derivatives are usually represented by a square, symmetric matrix, termed the Hessian matrix, denoted by: H(x) = ∇²f(x) = [∂²f(x)/∂x1² ... ∂²f(x)/∂x1∂xn; ...; ∂²f(x)/∂xn∂x1 ... ∂²f(x)/∂xn²]
  • 42. 42 Example: For the previous example: ∇²f(x) = [∂²f/∂x1² ∂²f/∂x1∂x2; ∂²f/∂x2∂x1 ∂²f/∂x2²] = [2x2, 2x1 − sin x2; 2x1 − sin x2, −x1 cos x2] Note: If the Hessian matrix of f(x) is a constant matrix, f(x) is then quadratic, expressed as: f(x) = ½xᵀHx + cᵀx, with ∇f(x) = Hx + c and ∇²f(x) = H
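A small Python sketch (illustrative only; the values of H and c are arbitrary choices of mine) confirming that for a quadratic f(x) = ½xᵀHx + cᵀx the gradient is the linear function Hx + c:

```python
import numpy as np

H = np.array([[4.0, -3.0], [-3.0, 4.0]])   # constant, symmetric Hessian
c = np.array([1.0, -2.0])

def f(x):
    return 0.5 * x @ H @ x + c @ x

def grad(x):
    return H @ x + c            # analytic gradient of the quadratic

# finite-difference check of the gradient at an arbitrary point
x = np.array([0.7, -1.2])
eps = 1e-6
fd = np.array([(f(x + eps*np.eye(2)[i]) - f(x - eps*np.eye(2)[i])) / (2*eps)
               for i in range(2)])
print(fd, grad(x))
```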
  • 43. 43 Convex and Concave Functions A function is called concave over a given region R if: f(θxa + (1 − θ)xb) ≥ θf(xa) + (1 − θ)f(xb), where xa, xb ∈ R and 0 ≤ θ ≤ 1. The function is strictly concave if ≥ is replaced by >. A function is called convex (strictly convex) if ≥ is replaced by ≤ (<).
  • 44. 44 [Figure: a concave function, f″(x) ≤ 0, and a convex function, f″(x) ≥ 0, each plotted between points xa and xb.]
  • 45. 45 If ∂²f(x)/∂x² ≤ 0, then f(x) is concave. If ∂²f(x)/∂x² ≥ 0, then f(x) is convex. For a multivariate function f(x) the conditions on the Hessian matrix H(x) are: strictly convex ⇔ +ve def; convex ⇔ +ve semi-def; concave ⇔ −ve semi-def; strictly concave ⇔ −ve def.
  • 46. 46 Tests for Convexity and Concavity H is +ve def (+ve semi-def) iff xᵀHx > 0 (≥ 0) for all x ≠ 0. H is −ve def (−ve semi-def) iff xᵀHx < 0 (≤ 0) for all x ≠ 0. Convenient tests: H(x) is strictly convex (+ve def) (convex (+ve semi-def)) if: 1. all eigenvalues of H(x) are > 0 (≥ 0), or 2. all principal determinants of H(x) are > 0 (≥ 0).
  • 47. 47 H(x) is strictly concave (−ve def) (concave (−ve semi-def)) if: 1. all eigenvalues of H(x) are < 0 (≤ 0), or 2. the principal determinants of H(x) alternate in sign: Δ1 < 0, Δ2 > 0, Δ3 < 0, ... (Δ1 ≤ 0, Δ2 ≥ 0, Δ3 ≤ 0, ...).
  • 48. 48 Example f(x) = 2x1² − 3x1x2 + 2x2² ∇f(x) = [∂f(x)/∂x1, ∂f(x)/∂x2]ᵀ = [4x1 − 3x2, −3x1 + 4x2]ᵀ H(x) = [4 −3; −3 4], with principal determinants Δ1 = 4, Δ2 = 7. Eigenvalues: |λI − H| = λ² − 8λ + 7 = 0 ⇒ λ1 = 1, λ2 = 7. Hence, f(x) is strictly convex.
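Both convexity tests can be reproduced numerically (an illustrative Python/NumPy sketch rather than the module's MATLAB):

```python
import numpy as np

# Hessian of f(x) = 2*x1^2 - 3*x1*x2 + 2*x2^2 from the example
H = np.array([[4.0, -3.0], [-3.0, 4.0]])

eigs = np.linalg.eigvalsh(H)                        # test 1: eigenvalues
dets = [np.linalg.det(H[:k, :k]) for k in (1, 2)]   # test 2: principal determinants

strictly_convex = bool(np.all(eigs > 0))
print(eigs, dets, strictly_convex)
```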
  • 49. 49 Convex Region A convex set of points exists if for any two points, xa and xb, in a region, all points: x = θxa + (1 − θ)xb, 0 ≤ θ ≤ 1, on the straight line joining xa and xb are in the set. If a region is completely bounded by concave functions then the functions form a convex region. [Figure: a non-convex region and a convex region, each with the chord joining xa and xb.]
  • 50. 50 Necessary and Sufficient Conditions for an Extremum of an Unconstrained Function A condition N is necessary for a result R if R can be true only if N is true (R ⇒ N). A condition S is sufficient for a result R if R is true if S is true (S ⇒ R). A condition T is necessary and sufficient for a result R iff R is true iff T is true (T ⇔ R).
  • 51. 51 There are two necessary conditions and a single sufficient condition to guarantee that x* is an extremum of a function f(x) at x = x*: 1. f(x) is twice continuously differentiable at x*. 2. ∇f(x*) = 0, i.e. a stationary point exists at x*. 3. ∇²f(x*) = H(x*) is +ve def for a minimum to exist at x*, or −ve def for a maximum to exist at x*. 1 and 2 are necessary conditions; 3 is a sufficient condition. Note: an extremum may exist at x* even though it is not possible to demonstrate the fact using the three conditions.
  • 52. 52 Example: Consider: f(x) = 4 + 4.5x1 − 4x2 + x1² + 2x2² − 2x1x2 + x1⁴ − 2x1²x2. The gradient vector is: ∇f(x) = [4.5 + 2x1 − 2x2 + 4x1³ − 4x1x2; −4 + 4x2 − 2x1 − 2x1²], yielding three stationary points located by setting ∇f(x) = 0 and solving numerically, where: ∇²f(x) = [2 + 12x1² − 4x2, −2 − 4x1; −2 − 4x1, 4]. Results (x* = (x1, x2); f(x*); eigenvalues of ∇²f(x*); classification): A. (−1.05, 1.03); −0.51; 10.5, 3.5; global min. B. (1.94, 3.85); 0.98; 37.0, 0.97; local min. C. (0.61, 1.49); 2.83; 7.0, −2.56; saddle.
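An illustrative Python sketch (Newton's method is my own choice of solver here; the notes only state the numerical results): starting near point A and iterating x ← x − H(x)⁻¹∇f(x) recovers the global minimum and its classification:

```python
import numpy as np

def grad(x):
    x1, x2 = x
    return np.array([4.5 + 2*x1 - 2*x2 + 4*x1**3 - 4*x1*x2,
                     -4.0 + 4*x2 - 2*x1 - 2*x1**2])

def hess(x):
    x1, x2 = x
    return np.array([[2 + 12*x1**2 - 4*x2, -2 - 4*x1],
                     [-2 - 4*x1, 4.0]])

x = np.array([-1.0, 1.0])            # start near stationary point A
for _ in range(20):
    x = x - np.linalg.solve(hess(x), grad(x))   # Newton step on grad f = 0

eigs = np.linalg.eigvalsh(hess(x))   # both positive => a minimum
print(x, eigs)
```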
  • 53. 53 [Figure: contour map of f(x) in the (x1, x2) plane showing the three stationary points.]
  • 54. 54 Interpretation of the Objective Function in Terms of its Quadratic Approximation If a function of two variables can be approximated within a region of a stationary point by a quadratic function: f(x1, x2) = ½ [x1 x2] [h11 h12; h12 h22] [x1; x2] + [c1 c2] [x1; x2] = ½h11x1² + ½h22x2² + h12x1x2 + c1x1 + c2x2, then the eigenvalues and eigenvectors of: H(x1*, x2*) = ∇²f(x1*, x2*) = [h11 h12; h12 h22] can be used to interpret the nature of f(x1, x2) at: x1 = x1*, x2 = x2*.
  • 55. 55 They provide information on the shape of f(x1, x2) at x1 = x1*, x2 = x2*. If H(x1*, x2*) is +ve def, the eigenvectors are at right angles (orthogonal) and correspond to the principal axes of the elliptical contours of f(x1, x2). A valley or ridge lies in the direction of the eigenvector associated with a relatively small eigenvalue. These interpretations can be generalized to the multivariate quadratic approximation: f(x) = ½xᵀHx + cᵀx
  • 56. 56 Case 1: Equal Eigenvalues - circular contours, which are interpreted as a circular hill (max, −ve eigenvalues) or circular valley (min, +ve eigenvalues). [Figure: circular contours in the (x1, x2) plane.]
  • 58. 58 Case 2: Unequal Eigenvalues (same sign) - elliptical contours, which are interpreted as an elliptical hill (max, −ve eigenvalues) or elliptical valley (min, +ve eigenvalues). [Figure: elliptical contours in the (x1, x2) plane.]
  • 60. 60 Case 3: Eigenvalues of opposite sign but equal in magnitude - symmetrical saddle. [Figure: symmetrical saddle contours in the (x1, x2) plane.]
  • 62. 62 Case 4: Eigenvalues of opposite sign but unequal in magnitude - asymmetrical saddle. [Figure: asymmetrical saddle contours in the (x1, x2) plane.]
  • 64. 64 Optimisation with Equality Constraints min f(x) subject to: h(x) = 0, m constraints (m < n). Elimination of variables, example: min f(x) = 4x1² + 5x2² (a) s.t. 2x1 + 3x2 = 6 (b) Using (b) to eliminate x1 gives: x1 = (6 − 3x2)/2 (c) and substituting into (a): f(x2) = (6 − 3x2)² + 5x2²
  • 65. 65 At a stationary point f′(x2) = 0: f′(x2) = −6(6 − 3x2) + 10x2 = 28x2 − 36 = 0 ⇒ x2* = 36/28 = 1.286. Then using (c): x1* = (6 − 3x2*)/2 = 1.071. Hence, the stationary point (min) is: (1.071, 1.286)
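The elimination result can be verified numerically (an illustrative Python sketch; the brute-force grid check is my own addition, not part of the notes):

```python
import numpy as np

# Reduced single-variable objective after eliminating x1 = (6 - 3*x2)/2
def f_reduced(x2):
    return (6 - 3*x2)**2 + 5*x2**2

# Analytic stationary point: f'(x2) = 28*x2 - 36 = 0
x2_star = 36.0 / 28.0                 # = 1.286 to three decimals
x1_star = (6 - 3*x2_star) / 2.0       # = 1.071 to three decimals

# Crude check that x2_star minimises the reduced function
grid = np.linspace(0, 3, 3001)
x2_grid = grid[np.argmin(f_reduced(grid))]
print(x1_star, x2_star, x2_grid)
```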
  • 66. 66 The Lagrange Multiplier Method Consider a two-variable problem with a single equality constraint: min f(x1, x2) s.t. h(x1, x2) = 0 At a stationary point we may write: (a) df = (∂f/∂x1)dx1 + (∂f/∂x2)dx2 = 0 (b) dh = (∂h/∂x1)dx1 + (∂h/∂x2)dx2 = 0, i.e. [∂f/∂x1 ∂f/∂x2; ∂h/∂x1 ∂h/∂x2][dx1; dx2] = [0; 0]
  • 67. 67 If: (∂f/∂x1)(∂h/∂x2) − (∂f/∂x2)(∂h/∂x1) = 0, nontrivial, nonunique solutions for dx1 and dx2 will exist. This is achieved by setting: ∂f/∂x1 = −λ(∂h/∂x1) and ∂f/∂x2 = −λ(∂h/∂x2), where λ is known as a Lagrange multiplier.
  • 68. 68 If an augmented objective function, called the Lagrangian, is defined as: L(x1, x2, λ) = f(x1, x2) + λh(x1, x2), we can solve the constrained optimisation problem by solving: ∂L/∂x1 = ∂f/∂x1 + λ(∂h/∂x1) = 0 and ∂L/∂x2 = ∂f/∂x2 + λ(∂h/∂x2) = 0 (which provide equations (a) and (b)), together with ∂L/∂λ = h(x1, x2) = 0 (a re-statement of the equality constraint).
  • 69. 69 Generalizing: To solve the problem: min f(x) subject to: h(x) = 0, m constraints (m < n), define the Lagrangian: L(x, λ) = f(x) + λᵀh(x), λ ∈ ℝᵐ, and the stationary point (points) is obtained from: ∂L(x, λ)/∂x = ∇f(x) + (∂h/∂x)ᵀλ = 0 and ∂L(x, λ)/∂λ = h(x) = 0
  • 70. 70 Example Consider the previous example again. The Lagrangian is: L = 4x1² + 5x2² + λ(2x1 + 3x2 − 6), giving: ∂L/∂x1 = 8x1 + 2λ = 0 (a) ∂L/∂x2 = 10x2 + 3λ = 0 (b) ∂L/∂λ = 2x1 + 3x2 − 6 = 0 (c) Substituting (a) and (b) into (c) gives:
  • 71. 71 x1 = −λ/4, x2 = −3λ/10 ⇒ 2(−λ/4) + 3(−3λ/10) = 6 ⇒ −(14/10)λ = 6 ⇒ λ* = −30/7 = −4.286, so x1* = 15/14 = 1.071 and x2* = 90/70 = 1.286, which agrees with the previous result. [Figure: contours of f(x) with the constraint line 2x1 + 3x2 = 6 in the (x1, x2) plane.]
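Because the stationarity conditions (a)-(c) are linear in (x1, x2, λ), they can also be solved as a single linear system (an illustrative Python sketch):

```python
import numpy as np

# Stationarity conditions of L = 4*x1^2 + 5*x2^2 + lam*(2*x1 + 3*x2 - 6):
#   8*x1          + 2*lam = 0
#          10*x2  + 3*lam = 0
#   2*x1 + 3*x2           = 6
A = np.array([[8.0, 0.0, 2.0],
              [0.0, 10.0, 3.0],
              [2.0, 3.0, 0.0]])
b = np.array([0.0, 0.0, 6.0])
x1, x2, lam = np.linalg.solve(A, b)
print(x1, x2, lam)   # expected approximately 1.071, 1.286, -4.286
```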
  • 72. 72 Necessary Conditions for a Local Extremum of an Optimisation Problem Subject to Equality and Inequality Constraints (Kuhn-Tucker Conditions) Consider the problem: min f(x) s.t.: h(x) = 0, m equalities (m < n), and g(x) ≥ 0, p inequalities. Here, we define the Lagrangian: L(x, λ, ω) = f(x) + λᵀh(x) − ωᵀg(x); λ ∈ ℝᵐ, ω ∈ ℝᵖ. The necessary conditions for x* to be a local extremum of f(x) are:
  • 73. 73 (a) f(x), hj(x), gj(x) are all twice differentiable at x*. (b) The Lagrange multipliers λ*, ω* exist. (c) All constraints are satisfied at x*: h(x*) = 0 and g(x*) ≥ 0. (d) The Lagrange multipliers ωj* (at x*) for the inequality constraints are not negative, i.e. ωj* ≥ 0. (e) The binding (active) inequality constraints are zero at x*, the inactive inequality constraints are > 0 and their associated ωj* are 0, i.e. ωj* gj(x*) = 0. (f) The Lagrangian function is at a stationary point: ∂L(x*, λ*, ω*)/∂x = 0
  • 74. 74 Notes: 1. Further analysis or investigation is required to determine if the extremum is a minimum (or maximum) 2. If f(x) is convex, h(x) are linear and g(x) are concave: x* will be a global extremum.
  • 75. 75 Limitations of Analytical Methods The computations needed to evaluate the above conditions can be extensive and intractable. Furthermore, the resulting simultaneous equations required for solving x*, λ* and ω* are often nonlinear and cannot be solved without resorting to numerical methods. The results may be inconclusive. For these reasons, we often have to resort to numerical methods for solving optimisation problems, using computer codes (e.g. MATLAB).
• 76. 76 Example Determine whether the potential minimum x* = (1.00, 4.90) satisfies the Kuhn-Tucker conditions for the problem:

min f(x) = 4x₁ − x₂² − 12
s.t.  h₁(x) = 25 − x₁² − x₂² = 0
      g₁(x) = 10x₁ − x₁² + 10x₂ − x₂² − 34 ≥ 0
      g₂(x) = (x₁ − 3)² + (x₂ − 1)² ≥ 0
      g₃(x) = x₁ ≥ 0
      g₄(x) = x₂ ≥ 0
• 77. 77 [Figure: "contours and constraints" — contours of f(x) (levels 10, 5, 0, −5, −10, −20, −25, −30, −40, −50, −60) together with the constraints h₁(x) = 0, g₁(x) = 0 and x₂ = 0 in the (x₁, x₂) plane, with the point (1.00, 4.90) marked]
• 78. 78 We test each Kuhn-Tucker condition in turn:
(a) All functions are seen by inspection to be twice differentiable.
(b) We assume the Lagrange multipliers exist.
(c) Are the constraints satisfied?
h₁: 25 − (1.00)² − (4.90)² = −0.01 ≈ 0 — yes
g₁: 10(1.00) − (1.00)² + 10(4.90) − (4.90)² − 34 = −0.01 ≈ 0 — yes (binding)
g₂: (1.00 − 3)² + (4.90 − 1)² = 19.21 > 0 — yes (not active)
g₃: 1.00 > 0 — yes (not active)
g₄: 4.90 > 0 — yes (not active)
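The arithmetic in step (c) can be reproduced directly (a quick sketch; the 0.01 slack in h₁ and g₁ simply reflects the rounding of x* to two decimal places):

```python
x1, x2 = 1.00, 4.90  # candidate point, rounded to two decimals

h1 = 25 - x1**2 - x2**2                  # -0.01, approx. 0: satisfied
g1 = 10*x1 - x1**2 + 10*x2 - x2**2 - 34  # -0.01, approx. 0: binding
g2 = (x1 - 3)**2 + (x2 - 1)**2           # 19.21 > 0: not active
g3, g4 = x1, x2                          # 1.00 > 0 and 4.90 > 0: not active
print(h1, g1, g2, g3, g4)
```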
• 79. 79 To test the rest of the conditions we need to determine the Lagrange multipliers using the stationarity condition. First we note that from condition (e) we require:

μⱼ* gⱼ(x*) = 0,  j = 1, 2, 3, 4

μ₁* can have any value because g₁(x*) = 0
μ₂* must be zero because g₂(x*) > 0
μ₃* must be zero because g₃(x*) > 0
μ₄* must be zero because g₄(x*) > 0
• 80. 80 Now consider the stationarity condition ∂L/∂x (x*, λ*, μ*) = 0 where, since μ₂* = μ₃* = μ₄* = 0,

L = 4x₁ − x₂² − 12 − λ₁(25 − x₁² − x₂²) − μ₁(10x₁ − x₁² + 10x₂ − x₂² − 34)

∂L/∂x₁ = 4 + 2λ₁x₁ − μ₁(10 − 2x₁) = 0
∂L/∂x₂ = −2x₂ + 2λ₁x₂ − μ₁(10 − 2x₂) = 0

At x* = (1.00, 4.90) these become:

4 + 2λ₁* − 8μ₁* = 0
−9.8 + 9.8λ₁* − 0.2μ₁* = 0

Hence: λ₁* = 1.015, μ₁* = 0.754
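The two stationarity equations are linear in (λ₁*, μ₁*), so the quoted values follow from a 2×2 solve (a small sketch of the arithmetic using Cramer's rule):

```python
# Stationarity at x* = (1.00, 4.90):
#    2*lam1 - 8.0*mu1 = -4.0   (from dL/dx1 = 0)
#  9.8*lam1 - 0.2*mu1 =  9.8   (from dL/dx2 = 0)
a11, a12, b1 = 2.0, -8.0, -4.0
a21, a22, b2 = 9.8, -0.2, 9.8

det = a11*a22 - a12*a21          # Cramer's rule for the 2x2 system
lam1 = (b1*a22 - a12*b2) / det
mu1 = (a11*b2 - b1*a21) / det
print(lam1, mu1)  # approx. 1.015 and 0.754
```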
• 81. 81 Now we can check the remaining conditions:
(d) Are μⱼ* ≥ 0, j = 1, 2, 3, 4? We have μ₁* = 0.754 ≥ 0 and μ₂* = μ₃* = μ₄* = 0. Hence, the answer is yes.
(e) Are μⱼ* gⱼ(x*) = 0, j = 1, 2, 3, 4? Yes, because we have already used this above.
(f) Is the Lagrangian function at a stationary point? Yes, because we have already used this above.
• 82. 82 Hence, all the Kuhn-Tucker conditions are satisfied and we can have confidence in the solution: x* = (1.00, 4.90)