Basic Calculus (II) Recap
(for MSc & PhD Business, Management & Finance Students)
Lecturer: Farzad Javidanrad
First Draft: Sep. 2013
Revised: Sep. 2014
Multi-Variable Functions
Multi-Variable Functions
• In the case of a one-variable function, of the form $y = f(x)$, the variable $x$ is called the "independent variable" and $y$ the "dependent variable".
• There are many examples of the dependency of $y$ on $x$ (e.g., the boiling of water depends on the amount of heat; or consumption expenditure depends on the level of income), but the concept of function should be understood beyond the concept of dependency. In most cases, dependency is not the issue at all. The modern concept of function is based on the idea of mapping.
Multi-Variable Functions
• When a painter paints a scene on a canvas, she or he uses a correspondence rule (mapping rule): every point in three-dimensional space ($R^3$) is mapped to one and only one point in two-dimensional space ($R^2$).
• Mathematically speaking, the function $f: R^3 \to R^2$ can represent the type of correspondence (mapping) rule that the painter is applying.
The Concept of Function as Mapping
• Transformation of an object is a mapping from $R^2$ to $R^2$; e.g. a rotation:
$$f: R^2 \to R^2, \qquad (a,b) \mapsto (b,-a)$$
• Mathematical operations describe a function from $R^2$ to $R$; e.g. the sum operator:
$$g: R^2 \to R, \qquad (a,b) \mapsto a+b$$
Figure 1-6: Geometrical interpretation of the sum operator as a function; a transformation from space $R^2$ to $R$.
Multi-Variable Functions
• All basic mathematical operators, such as summation, subtraction, division and multiplication, introduce a function from two-dimensional space ($R^2$) to the real number set (one-dimensional space, $R$), that is:
$$f: R^2 \to R$$
For example, for division: $(a,b) \mapsto \frac{a}{b}$ $\;(b \neq 0)$.
• One important family of multi-variable functions is the "real (scalar) multi-variable function", which can be shown as $f: R^n \to R$ or simply $y = f(x_1, x_2, \dots, x_n)$, where $y$ is the dependent variable and $x_1, x_2, \dots, x_n$ are independent variables.
Two-Variable Functions
• A simple form of this function is when we have two independent variables $x, y$ and one dependent variable $z$, in the form $z = f(x,y)$. This is called a "two-variable function" as there are two independent variables.
• E.g. a Cobb-Douglas production function:
$$Y = f(K,L) = AK^{\alpha}L^{\beta}$$
where $Y$ is the level of production, and $K$ and $L$ are the levels of capital and labour employed for production, respectively.
• $A$, $\alpha$ and $\beta$ are constants of the function.
Adapted from http://en.citizendium.org/wiki/File:Cobb-Douglas_with_dimishing_returns_to_scale.png
Two-Variable Functions
• $z = f(x,y)$ represents a functional relationship if for every ordered pair $(x,y)$ in the domain of the function there is one and only one value of $z$ in the range of the function.
o Which graph represents a function?
Ellipsoid: $\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1$
Hyperboloid of Two Sheets: $-\frac{x^2}{a^2} - \frac{y^2}{b^2} + \frac{z^2}{c^2} = 1$
Hyperbolic Paraboloid: $\frac{x^2}{a^2} - \frac{y^2}{b^2} = \frac{z}{c}$
Elliptic Paraboloid: $\frac{x^2}{a^2} + \frac{y^2}{b^2} = \frac{z}{c}$
Adapted from http://tutorial.math.lamar.edu/Classes/CalcIII/QuadricSurfaces.aspx
Derivative of Two-Variable Functions
• Consider the function $z = f(x,y)$; $z$ changes if $x$ or $y$ or both of them change. If we hold $y$ fixed and allow just $x$ to change, then the average change of $z$ in terms of $x$ is $\frac{\Delta z}{\Delta x}$. The limiting state of this ratio as $\Delta x \to 0$ is what is called the "partial derivative of $z$ in terms of $x$" and is shown by:
$$\frac{\partial z}{\partial x}, \quad \frac{\partial f(x,y)}{\partial x}, \quad z'_x, \quad f_x$$
• The cutting plane in the figure shows that the variable $y$ is controlled (fixed) at $y = 1$ but $x$ can change from $-2$ to $+2$, and the movement is on the curve of intersection between the plane and the surface of the function.
Adapted from http://msemac.redwoods.edu/~darnold/math50c/matlab/pderiv/index.xhtml
Partial Differentiation
• If $x$ is controlled (fixed) and $y$ is allowed to change, the partial derivative of $z$ in terms of $y$ can be shown by:
$$\frac{\partial z}{\partial y}, \quad \frac{\partial f(x,y)}{\partial y}, \quad z'_y, \quad f_y$$
• The cutting plane shows that $x$ is controlled (fixed) at $x = 0$ but $y$ can change from $-3$ to $+3$ on the curve of intersection between the plane and the surface of the function.
Adapted from http://www.uwec.edu/math/Calculus/216-Spring2007/assignments.htm
Partial Differentiation
• So, in general, the slope of the function $z = f(x,y)$ on the curve of intersection between the surface of the function and a cutting plane parallel to the x-axis, at any point of the domain, is:
$$\frac{\partial z}{\partial x} = f_x = \lim_{\Delta x \to 0} \frac{f(x+\Delta x,\, y) - f(x,y)}{\Delta x} = \lim_{h \to 0} \frac{f(x+h,\, y) - f(x,y)}{h}$$
This means that when calculating $\frac{\partial z}{\partial x}$ the variable $y$ should be treated as a constant. The same rule applies for multi-variable functions.
(The surface in the figure is $z = 10 - x^2 - y^2$.)
Adapted from http://moodle.capilanou.ca/mod/book/view.php?id=19700&chapterid=240
Partial Differentiation
• And the slope of the function $z = f(x,y)$ on the curve of intersection between the surface of the function and a cutting plane parallel to the y-axis, at any point of the domain, is:
$$\frac{\partial z}{\partial y} = f_y = \lim_{\Delta y \to 0} \frac{f(x,\, y+\Delta y) - f(x,y)}{\Delta y} = \lim_{h \to 0} \frac{f(x,\, y+h) - f(x,y)}{h}$$
This means that when calculating $\frac{\partial z}{\partial y}$ the variable $x$ should be treated as a constant. The same rule applies for multi-variable functions.
(The surface in the figure is $z = 10 - x^2 - y^2$.)
Adapted from http://moodle.capilanou.ca/mod/book/view.php?id=19700&chapterid=240
Partial Differentiation
• To find the partial derivatives (slopes of the tangent lines on the surface) at a specific point $P(a,b,c)$ we have:
$$\left.\frac{\partial f(x,y)}{\partial x}\right|_{P(a,b,c)} = \lim_{h \to 0} \frac{f(a+h,\, b) - f(a,b)}{h}$$
$$\left.\frac{\partial f(x,y)}{\partial y}\right|_{P(a,b,c)} = \lim_{h \to 0} \frac{f(a,\, b+h) - f(a,b)}{h}$$
Example:
o Find the partial derivatives of $z = 10x^2y^3$.
$$\frac{\partial z}{\partial x} = 20xy^3, \qquad \frac{\partial z}{\partial y} = 30x^2y^2$$
Adapted from http://www.solitaryroad.com/c353.html
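A minimal symbolic sketch of this example, also evaluating the slopes at a hypothetical point $(a,b) = (1,2)$:

```python
import sympy as sp

x, y = sp.symbols('x y')
z = 10 * x**2 * y**3

zx = sp.diff(z, x)   # treat y as a constant -> 20*x*y**3
zy = sp.diff(z, y)   # treat x as a constant -> 30*x**2*y**2
print(zx, zy)

# Slopes of the tangent lines at the point (1, 2):
print(zx.subs({x: 1, y: 2}), zy.subs({x: 1, y: 2}))  # 160, 120
```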
Rules of Partial Differentiation
• If $f(x,y)$ and $g(x,y)$ are two differentiable functions with respect to $x$ and $y$:
o $z = f(x,y) \pm g(x,y) \;\to\; \dfrac{\partial z}{\partial x} = \dfrac{\partial f}{\partial x} \pm \dfrac{\partial g}{\partial x} = f_x \pm g_x$ and $\dfrac{\partial z}{\partial y} = \dfrac{\partial f}{\partial y} \pm \dfrac{\partial g}{\partial y} = f_y \pm g_y$
o $z = f(x,y) \times g(x,y) \;\to\; \dfrac{\partial z}{\partial x} = f_x\,g + g_x\,f$ and $\dfrac{\partial z}{\partial y} = f_y\,g + g_y\,f$
o $z = \dfrac{f(x,y)}{g(x,y)} \;\to\; \dfrac{\partial z}{\partial x} = \dfrac{f_x\,g - g_x\,f}{g^2}$ and $\dfrac{\partial z}{\partial y} = \dfrac{f_y\,g - g_y\,f}{g^2}$
Some Examples
o Find the partial derivatives of the function $z = x^2 - xy^3 - 5y^2$.
$$\frac{\partial z}{\partial x} = 2x - y^3, \qquad \frac{\partial z}{\partial y} = -3xy^2 - 10y$$
o Find the partial derivatives of $z = xy\sqrt{x^2 + y^2}$.
$$\frac{\partial z}{\partial x} = y\sqrt{x^2+y^2} + \frac{2x}{2\sqrt{x^2+y^2}}\,xy = y\sqrt{x^2+y^2} + \frac{x^2 y}{\sqrt{x^2+y^2}}$$
$$\frac{\partial z}{\partial y} = x\sqrt{x^2+y^2} + \frac{2y}{2\sqrt{x^2+y^2}}\,xy = x\sqrt{x^2+y^2} + \frac{x y^2}{\sqrt{x^2+y^2}}$$
o Find the partial derivatives of $z = \dfrac{3x^2y^2}{x^4 + y^4}$.
$$\frac{\partial z}{\partial x} = \frac{6xy^2\,(x^4+y^4) - 4x^3 \times 3x^2y^2}{(x^4+y^4)^2}, \qquad \frac{\partial z}{\partial y} = \frac{6yx^2\,(x^4+y^4) - 4y^3 \times 3x^2y^2}{(x^4+y^4)^2}$$
Chain Rule (Different Cases)
• Case 1: If $z = f(u)$ and $u = g(x,y)$, then $z = f(g(x,y))$ and
$$\frac{\partial z}{\partial x} = f' \cdot \frac{\partial g}{\partial x} = \frac{\partial z}{\partial u} \cdot \frac{\partial u}{\partial x} \qquad \text{and} \qquad \frac{\partial z}{\partial y} = f' \cdot \frac{\partial g}{\partial y} = \frac{\partial z}{\partial u} \cdot \frac{\partial u}{\partial y}$$
Examples:
o Find the partial derivatives of $z = e^{xy^2}$.
Suppose $u = xy^2$; then $z = e^u$ and
$$\frac{\partial z}{\partial x} = \frac{\partial z}{\partial u} \cdot \frac{\partial u}{\partial x} = e^u \cdot u_x = e^{xy^2} \cdot y^2 \qquad \text{and} \qquad \frac{\partial z}{\partial y} = \frac{\partial z}{\partial u} \cdot \frac{\partial u}{\partial y} = e^u \cdot u_y = e^{xy^2} \cdot 2xy$$
Chain Rule (Different Cases)
o Find the partial derivatives of the function $z = e^{\frac{x}{y}} + \cos(xy)$.
$$\frac{\partial z}{\partial x} = \frac{1}{y}\,e^{\frac{x}{y}} - y\sin(xy), \qquad \frac{\partial z}{\partial y} = \frac{-x}{y^2}\,e^{\frac{x}{y}} - x\sin(xy)$$
• Case 2: If $z = f(x,y)$ is a differentiable function of $x$ and $y$, and these two variables are differentiable functions of $r$, such that $x = g(r)$ and $y = h(r)$, then:
$$\frac{dz}{dr} = \frac{\partial z}{\partial x} \cdot \frac{dx}{dr} + \frac{\partial z}{\partial y} \cdot \frac{dy}{dr}$$
The same rule applies for multi-variable functions.
o Find the derivative of $z = x - \ln y$ when $x = \sqrt{r}$ and $y = r^2 - 1$:
$$\frac{dz}{dr} = 1 \cdot \frac{1}{2\sqrt{r}} - \frac{1}{y} \cdot 2r = \frac{1}{2\sqrt{r}} - \frac{2r}{r^2 - 1}$$
• Can you suggest another way? (See the sketch below.)
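One "other way" is to substitute $x(r)$ and $y(r)$ into $z$ first and differentiate directly; a minimal sketch comparing both routes for this example:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
x, y = sp.symbols('x y', positive=True)

# z = x - ln(y) with x = sqrt(r), y = r**2 - 1 (assuming r > 1 so that y > 0)
z = x - sp.log(y)
xr, yr = sp.sqrt(r), r**2 - 1

# Chain rule: dz/dr = (dz/dx)(dx/dr) + (dz/dy)(dy/dr)
dz_chain = sp.diff(z, x)*sp.diff(xr, r) + sp.diff(z, y)*sp.diff(yr, r)
dz_chain = dz_chain.subs({x: xr, y: yr})

# The other way: substitute first, then differentiate directly
dz_direct = sp.diff(z.subs({x: xr, y: yr}), r)

print(sp.simplify(dz_chain - dz_direct) == 0)  # True -> both routes agree
```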
Chain Rule (Different Cases)
• Case 3: If $z = f(x,y)$ is a differentiable function of $x$ and $y$, and these two variables are differentiable functions of $r$ and $s$, such that $x = g(r,s)$ and $y = h(r,s)$, and $r$ and $s$ are independent from each other ($\frac{\partial r}{\partial s} = \frac{\partial s}{\partial r} = 0$), then:
$$\frac{\partial z}{\partial r} = \frac{\partial z}{\partial x} \cdot \frac{\partial x}{\partial r} + \frac{\partial z}{\partial y} \cdot \frac{\partial y}{\partial r} \qquad \text{and} \qquad \frac{\partial z}{\partial s} = \frac{\partial z}{\partial x} \cdot \frac{\partial x}{\partial s} + \frac{\partial z}{\partial y} \cdot \frac{\partial y}{\partial s}$$
• These derivatives are called the "total derivatives of $z$ with respect to $r$ and $s$".
o Find the partial derivatives of $z = \sqrt[3]{x^2 - y}$ where $x = r^2 + s^2$ and $y = \dfrac{r}{s}$.
Implicit Differentiation
• The Chain Rule can be used for implicit differentiation, even for one-variable functions:
$$F(x,y) = 0$$
Using the chain rule we have:
$$\frac{dF}{dx} = \frac{\partial F}{\partial x} \cdot \frac{dx}{dx} + \frac{\partial F}{\partial y} \cdot \frac{dy}{dx} = 0$$
As $\frac{dx}{dx} = 1$, we obtain:
$$\frac{dy}{dx} = -\frac{\partial F / \partial x}{\partial F / \partial y} = -\frac{F_x}{F_y}$$
• The same rule can be used for implicit two- or multi-variable functions. For example, for an implicit function $F(x,y,z) = 0$, we have:
$$\frac{\partial z}{\partial x} = -\frac{\partial F / \partial x}{\partial F / \partial z} = -\frac{F_x}{F_z} \qquad \text{and} \qquad \frac{\partial z}{\partial y} = -\frac{\partial F / \partial y}{\partial F / \partial z} = -\frac{F_y}{F_z}$$
Examples of Implicit Functions
o Find the slope of the tangent line on the curve of intersection between the surface $x^2 + y^2 + z^2 = 9$ and the plane $y = 2$ at the point $A(1,2,2)$.
As $y$ is fixed at 2, we are looking for $\frac{\partial z}{\partial x}$ at point $A$:
$$2x + 0 + 2z \cdot \frac{\partial z}{\partial x} = 0 \;\to\; \frac{\partial z}{\partial x} = \frac{-x}{z} = -\frac{1}{2}$$
Or, using implicit differentiation:
$$\frac{\partial z}{\partial x} = -\frac{F_x}{F_z} = -\frac{2x}{2z} = -\frac{x}{z}$$
o Find $\frac{\partial z}{\partial y}$ for $e^{x+y+z} = x^2 - 2y^2 + z^2$.
$$\left(0 + 1 + \frac{\partial z}{\partial y}\right) e^{x+y+z} = 0 - 4y + 2z \cdot \frac{\partial z}{\partial y} \;\to\; \frac{\partial z}{\partial y} = \frac{e^{x+y+z} + 4y}{2z - e^{x+y+z}}$$
Use implicit differentiation for this question as well.
Higher-Order Partial Derivatives
• For the function $z = f(x,y)$, the partial derivatives $\frac{\partial z}{\partial x}$ and $\frac{\partial z}{\partial y}$ are, in general, in turn functions of $x$ and $y$. So we can think of second partial derivatives of $z$; but in this case there are three different second derivatives.
Second-order direct partial derivatives:
$$z_{xx} = f_{xx} = \frac{\partial}{\partial x}\left(\frac{\partial z}{\partial x}\right) = \frac{\partial^2 z}{\partial x^2}, \qquad z_{yy} = f_{yy} = \frac{\partial}{\partial y}\left(\frac{\partial z}{\partial y}\right) = \frac{\partial^2 z}{\partial y^2}$$
Second-order cross partial derivative:
$$z_{xy} = f_{xy} = \frac{\partial}{\partial y}\left(\frac{\partial z}{\partial x}\right) = \frac{\partial^2 z}{\partial y\,\partial x}$$
The Equality of Mixed (Cross) Partial Derivatives
The other second-order cross partial derivative is:
$$z_{yx} = f_{yx} = \frac{\partial}{\partial x}\left(\frac{\partial z}{\partial y}\right) = \frac{\partial^2 z}{\partial x\,\partial y}$$
• If the cross (mixed) partial derivatives $f_{xy}$ and $f_{yx}$ are continuous and finite in their domain, then they are equal to one another; i.e.
$$f_{xy} = f_{yx} \qquad \text{or} \qquad \frac{\partial^2 z}{\partial y\,\partial x} = \frac{\partial^2 z}{\partial x\,\partial y}$$
The tree of derivatives: $z = f(x,y)$ branches to $\frac{\partial z}{\partial x} = f_x$ and $\frac{\partial z}{\partial y} = f_y$, which in turn branch to $f_{xx}$, $f_{xy} = f_{yx}$ and $f_{yy}$.
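This equality is easy to verify symbolically; a minimal sketch using one of the earlier example functions:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 - x*y**3 - 5*y**2   # an earlier example function

fxy = sp.diff(f, x, y)   # differentiate w.r.t. x, then y
fyx = sp.diff(f, y, x)   # differentiate w.r.t. y, then x
print(fxy, fyx, fxy == fyx)   # -3*y**2, -3*y**2, True
```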
Total Differential
• The meaning of the differential for a multi-variable scalar function is no different from that for a one-variable function. The only difference is that the source of change in the dependent variable is the change of all independent variables; that is:
$$z + \Delta z = f(x + \Delta x,\; y + \Delta y)$$
or $\Delta z = f(x + \Delta x,\; y + \Delta y) - f(x,y)$.
But $dz$, which is called the "total differential", is defined as:
$$dz = \frac{\partial z}{\partial x}\,dx + \frac{\partial z}{\partial y}\,dy \qquad \text{or} \qquad dz = f_x\,dx + f_y\,dy$$
Adapted from Calculus: Early Transcendentals, James Stewart, p. 897
Total Differential
• For a multi-variable scalar function the same rule applies:
$$z = f(x_1, x_2, \dots, x_n)$$
$$dz = \frac{\partial z}{\partial x_1}\,dx_1 + \frac{\partial z}{\partial x_2}\,dx_2 + \dots + \frac{\partial z}{\partial x_n}\,dx_n$$
• In the case of a two-variable function $z = f(x,y)$ we assumed $x$ and $y$ are independent, but if they depend on other variables, the differential of each of them can be treated as the total differential of a dependent variable; that is:
$$z = f(x,y) \;\to\; dz = \frac{\partial z}{\partial x}\,dx + \frac{\partial z}{\partial y}\,dy \qquad (A)$$
$$x = h(r,s) \;\to\; dx = \frac{\partial x}{\partial r}\,dr + \frac{\partial x}{\partial s}\,ds \qquad (B)$$
$$y = k(r,s) \;\to\; dy = \frac{\partial y}{\partial r}\,dr + \frac{\partial y}{\partial s}\,ds \qquad (C)$$
Substituting (B) and (C) into (A):
$$dz = \frac{\partial z}{\partial x}\left(\frac{\partial x}{\partial r}\,dr + \frac{\partial x}{\partial s}\,ds\right) + \frac{\partial z}{\partial y}\left(\frac{\partial y}{\partial r}\,dr + \frac{\partial y}{\partial s}\,ds\right)$$
Total Differential
If we are looking for the total derivatives of $z$ with respect to $r$ and $s$, which were introduced before as the chain rule (case 3), we need to suppose that $r$ and $s$ are independent variables, not associated with each other ($\frac{\partial s}{\partial r} = \frac{\partial r}{\partial s} = 0$); then:
$$dz = \left(\frac{\partial z}{\partial x}\cdot\frac{\partial x}{\partial r} + \frac{\partial z}{\partial y}\cdot\frac{\partial y}{\partial r}\right) dr + \left(\frac{\partial z}{\partial x}\cdot\frac{\partial x}{\partial s} + \frac{\partial z}{\partial y}\cdot\frac{\partial y}{\partial s}\right) ds$$
so that:
$$\frac{\partial z}{\partial r} = \frac{\partial z}{\partial x}\cdot\frac{\partial x}{\partial r} + \frac{\partial z}{\partial y}\cdot\frac{\partial y}{\partial r} \qquad \text{and} \qquad \frac{\partial z}{\partial s} = \frac{\partial z}{\partial x}\cdot\frac{\partial x}{\partial s} + \frac{\partial z}{\partial y}\cdot\frac{\partial y}{\partial s}$$
Second-Order Total Differential
• The sign of the second-order total differential $d^2z$ shows the convexity or concavity of the surface with respect to the $xOy$ plane.
• Considering the total differential $dz$, the second-order total differential $d^2z$ can be obtained by applying the differential rules:
$$d^2z = d(dz) = d\!\left(\frac{\partial z}{\partial x}\,dx + \frac{\partial z}{\partial y}\,dy\right) = d(f_x\,dx + f_y\,dy) = df_x\cdot dx + f_x\cdot d(dx) + df_y\cdot dy + f_y\cdot d(dy)$$
As $d(dx) = d^2x = 0$ and $d(dy) = d^2y = 0$, and
$$df_x = f_{xx}\,dx + f_{xy}\,dy, \qquad df_y = f_{yx}\,dx + f_{yy}\,dy,$$
therefore:
$$d^2z = f_{xx}\,dx^2 + 2f_{xy}\,dx\,dy + f_{yy}\,dy^2$$
• Factorising $dy^2$ from the right-hand side, we have:
$$d^2z = dy^2\left[f_{xx}\left(\frac{dx}{dy}\right)^2 + 2f_{xy}\left(\frac{dx}{dy}\right) + f_{yy}\right]$$
Second-Order Differential
• $dy^2 > 0$ (why?); so the sign of $d^2z$ depends on the sign of the expression in the bracket.
• From elementary algebra we know that the quadratic form $aX^2 + bX + c$ has the same sign as the parameter $a$ when $\Delta = b^2 - 4ac < 0$.
• If we set $X = \frac{dx}{dy}$ and $a = f_{xx}$, $b = 2f_{xy}$, $c = f_{yy}$, then $d^2z = dy^2\,(aX^2 + bX + c)$ has the same sign as $a = f_{xx}$ if
$$(2f_{xy})^2 - 4f_{xx}f_{yy} < 0 \;\to\; f_{xx}f_{yy} > f_{xy}^2$$
So:
1. $d^2z > 0$ if $f_{xx} > 0$ and $f_{xx}f_{yy} > f_{xy}^2$.
2. $d^2z < 0$ if $f_{xx} < 0$ and $f_{xx}f_{yy} > f_{xy}^2$.
Adapted from Calculus: Early Transcendentals, James Stewart (various pages)
Optimising Two-Variable Functions
• The two-variable function $z = f(x,y)$ has a relative maximum (relative minimum) at a point in its domain if at that point:
i. $f_x = 0$ and $f_y = 0$, simultaneously (the necessary conditions for differentiable functions);
ii. $f_{xx} < 0$ ($f_{xx} > 0$); and
iii. $f_{xx}\,f_{yy} - f_{xy}^2 > 0$ (ii and iii together are the sufficient conditions).
Note 1: If $f_{xx}\,f_{yy} - f_{xy}^2 < 0$, the critical point is not a maximum or a minimum but a saddle point (it looks like a maximum from one axis but a minimum from another axis); e.g. the surface $z = x^2 - y^2$.
Adapted from http://commons.wikimedia.org/wiki/File:Saddle_point.png
Optimising Two-Variable Functions
• Note 2: If $f_{xx}\,f_{yy} - f_{xy}^2 = 0$ at the critical point, further investigation is needed to find out the nature of the point.
• Example:
o Find the local extrema of the function $f(x,y) = 2x^3 - 6xy + 8y^3$, if any.
$$\begin{cases} f_x = 0 \\ f_y = 0 \end{cases} \;\to\; \begin{cases} 6x^2 - 6y = 0 \\ -6x + 24y^2 = 0 \end{cases} \;\to\; \begin{cases} x^2 = y \\ -x + 4y^2 = 0 \end{cases}$$
After solving these simultaneous equations, two critical points emerge: $A(0, 0, 0)$ and $B\!\left(\sqrt[3]{\tfrac{1}{4}},\; \sqrt[3]{\tfrac{1}{16}},\; -\tfrac{1}{2}\right)$.
Optimising Two-Variable Functions
Now, $f_{xx} = 12x$, $f_{yy} = 48y$ and $f_{xy} = f_{yx} = -6$.
So, $f_{xx}\,f_{yy} - f_{xy}^2 = 12x \cdot 48y - (-6)^2 = 576xy - 36$.
At the point $A(0,0,0)$: $f_{xx}\,f_{yy} - f_{xy}^2 = -36 < 0$, so $A$ is a saddle point.
At the point $B\!\left(\sqrt[3]{\tfrac{1}{4}},\; \sqrt[3]{\tfrac{1}{16}},\; -\tfrac{1}{2}\right)$: $f_{xx}\,f_{yy} - f_{xy}^2 = 144 - 36 = 108 > 0$ and $f_{xx} > 0$, so this point is a local minimum.
The Jacobian & Hessian Determinants
• From matrix algebra we know that for any square matrix $A$, if
$$|A| = 0 \implies A \text{ is a singular matrix},$$
which means there exists linear dependence between at least two rows or two columns of the matrix. And if
$$|A| \neq 0 \implies A \text{ is a non-singular matrix},$$
which means all rows and all columns are linearly independent.
• So, to test for linear dependence between the equations in a simultaneous system, the determinant of the coefficients matrix can be used.
The Jacobian & Hessian Determinants
• To test for functional dependence (both linear and non-linear) between different functions we use the Jacobian determinant, shown by $|J|$.
• The Jacobian matrix is the matrix of all first-order partial derivatives of a vector function $F: R^n \to R^m$, which maps a vector in $n$-dimensional space (real $n$-tuples) to a vector in $m$-dimensional space (real $m$-tuples):
$$y_1 = F_1(x_1, x_2, \dots, x_n), \quad y_2 = F_2(x_1, x_2, \dots, x_n), \quad \dots, \quad y_m = F_m(x_1, x_2, \dots, x_n)$$
So the Jacobian matrix of $F$ is:
$$J = \begin{bmatrix} \frac{\partial F_1}{\partial x_1} & \cdots & \frac{\partial F_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial F_m}{\partial x_1} & \cdots & \frac{\partial F_m}{\partial x_n} \end{bmatrix}$$
Each row contains the partial derivatives of one of the functions (e.g. $F_1$) with respect to all independent variables $x_1, x_2, \dots, x_n$.
The Jacobian & Hessian Determinants
• If $m = n$, the Jacobian matrix is a square matrix and its determinant shows whether there is functional dependence or independence between the functions:
$$|J| = 0 \implies \text{the equations are functionally dependent}$$
(there is a linear or non-linear association between the functions);
$$|J| \neq 0 \implies \text{the equations are functionally independent}$$
(there is no linear or non-linear association between the functions).
Example: Use the Jacobian determinant to test the functional dependency of the following equations:
$$y_1 = 2x_1 - 3x_2, \qquad y_2 = 4x_1^2 - 12x_1x_2 + 9x_2^2$$
The Jacobian & Hessian Determinants
• The Jacobian determinant is:
$$|J| = \begin{vmatrix} \frac{\partial y_1}{\partial x_1} & \frac{\partial y_1}{\partial x_2} \\ \frac{\partial y_2}{\partial x_1} & \frac{\partial y_2}{\partial x_2} \end{vmatrix} = \begin{vmatrix} 2 & -3 \\ 8x_1 - 12x_2 & -12x_1 + 18x_2 \end{vmatrix} = 2(-12x_1 + 18x_2) - (-3)(8x_1 - 12x_2) = 0$$
• So the functions are not independent.
• We expected such a result, as we know that there is a quadratic functional relationship between $y_1$ and $y_2$:
$$y_1^2 = y_2$$
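The same computation with sympy's built-in Jacobian; a minimal sketch:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
y1 = 2*x1 - 3*x2
y2 = 4*x1**2 - 12*x1*x2 + 9*x2**2

J = sp.Matrix([y1, y2]).jacobian([x1, x2])
print(J)                      # Matrix([[2, -3], [8*x1 - 12*x2, -12*x1 + 18*x2]])
print(sp.simplify(J.det()))   # 0 -> the functions are functionally dependent
```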
The Jacobian & Hessian Determinants
• The Hessian matrix is a square matrix composed of the second-order partial derivatives of a real (scalar) multi-variable function ($f: R^n \to R$). For a function $y = f(x_1, x_2, \dots, x_n)$, the Hessian determinant is defined as:
$$|H| = \begin{vmatrix} \frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\ \frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_n} \\ \vdots & \vdots & & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \frac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{vmatrix} = \begin{vmatrix} f_{11} & f_{12} & \dots & f_{1n} \\ f_{21} & f_{22} & \dots & f_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ f_{n1} & f_{n2} & \dots & f_{nn} \end{vmatrix}$$
• In the optimisation of a two-variable function, if the first-order (necessary) conditions $f_x = f_y = 0$ are met, the second-order (sufficient) conditions are:
o $f_{xx}, f_{yy} > 0$ for a minimum, and $f_{xx}, f_{yy} < 0$ for a maximum;
o $f_{xx}\,f_{yy} - f_{xy}^2 > 0$.
The Jacobian & Hessian Determinants
• Using the Hessian determinant, we can simply state the sufficient conditions as:
o The optimal point is a minimum if $|H_1| > 0$ and $|H_2| > 0$, because:
$$|H_1| = f_{xx} > 0, \qquad |H_2| = \begin{vmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{vmatrix} = f_{xx}\,f_{yy} - f_{xy}^2 > 0$$
o And the optimal point is a maximum if $|H_1| < 0$ and $|H_2| > 0$.
• The same applies for a multi-variable function $y = f(x_1, x_2, \dots, x_n)$:
o If $|H_1|, |H_2|, |H_3|, \dots, |H_n| > 0$, the critical point is a local minimum.
o If the principal minors change their signs consecutively, the critical point is a local maximum (e.g. in the case of $y = f(x_1, x_2, x_3)$: $|H_1| < 0$, $|H_2| > 0$ and $|H_3| < 0$).
Optimisation with a Constraint
• In reality, the independent variables in a function are not always fully independent from each other. They might be in a linear or even non-linear relationship with one another, which creates a constraint in the process of optimisation and changes its result.
(Figures: optimisation subject to a linear constraint and a non-linear constraint $g(x,y) = c$. Adapted from http://staff.www.ltu.se/~larserik/applmath/chap7en/part7.html and adapted & altered from http://en.wikipedia.org/wiki/Lagrange_multiplier)
Optimisation with a Constraint
• In each case, the function $z = f(x,y)$ is the target function for optimisation, subject to a constraint $g(x,y) = c$ (where $c$ is a constant). So:
$$\text{Max or Min}: \; z = f(x,y) \qquad \text{Subject to}: \; g(x,y) = c$$
• If the constraint function $g(x,y) = c$ is linear (e.g. $x - 2y = -1$), one way to include the constraint in the optimisation process is to solve for one variable in terms of the other from the constraint function (here, $x = 2y - 1$), substitute it into the target function to make it a function of one independent variable, $z = F(y)$, and follow the optimisation process for a one-variable function.
Example
• Example: Find the maximum of the function $z = xy$ subject to the constraint $x + y = 1$.
From the constraint function we have $y = -x + 1$, and if we substitute this for $y$ in the target function, we will have $z = -x^2 + x$.
$$\frac{dz}{dx} = 0 \;\to\; -2x + 1 = 0 \;\to\; x = 0.5$$
Putting this into the constraint equation to find $y$, and both into the target function to find $z$, the maximum point will be $A(0.5, 0.5, 0.25)$.
o How do we know the point is a maximum point?
The Lagrange Method
• If the constraint function is non-linear, the previous method might become very complicated. Another method, called the "Lagrange Method" or the "Method of Lagrange Multipliers", can help us to find local extremum points.
• In the Lagrange method the constraint function comes into the process of optimisation by introducing a new variable $\lambda$ (the Lagrange multiplier, or Lagrange coefficient) to make the Lagrange function $L$, of the form:
$$L(x, y, \lambda) = f(x,y) + \lambda\,[c - g(x,y)]$$
• By changing $x$ and $y$, a point moves on the surface of the function, but the movement is limited to the constraint $g(x,y) = c$.
• This means $c - g(x,y) = 0$ and $L(x,y,\lambda) = f(x,y)$. So the optimisation of $L$ is equivalent to the optimisation of $f$.
The Lagrange Method
• To find the extremum values we need to take the derivatives of the Lagrange function with respect to its variables and solve the following simultaneous equations, the necessary conditions for having extrema, marked (A):
$$\frac{\partial L}{\partial x} = 0 \;\to\; \frac{\partial f}{\partial x} - \lambda\,\frac{\partial g}{\partial x} = 0$$
$$\frac{\partial L}{\partial y} = 0 \;\to\; \frac{\partial f}{\partial y} - \lambda\,\frac{\partial g}{\partial y} = 0$$
$$\frac{\partial L}{\partial \lambda} = 0 \;\to\; c - g(x,y) = 0$$
• Solving these simultaneous equations gives us the critical values of $x$ and $y$ and a value for $\lambda$.
• $\lambda$ shows the sensitivity of the target (objective) function to a change in the constraint function. (A sketch of this procedure in code follows.)
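A minimal sketch of solving the system (A) with sympy, applied to the earlier example (max $z = xy$ subject to $x + y = 1$):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)

# Earlier example: max z = x*y subject to x + y = 1, via the Lagrange function
f = x*y
g = x + y
L = f + lam*(1 - g)

foc = [sp.diff(L, v) for v in (x, y, lam)]   # the system marked (A)
sol = sp.solve(foc, [x, y, lam], dict=True)
print(sol)   # [{x: 1/2, y: 1/2, lambda: 1/2}] -> the point A(0.5, 0.5, 0.25)
```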
Sufficient Condition
• To make sure that the critical point(s) found by solving the simultaneous equations are extremum(s), we need sufficient evidence, which is the sign of the second-order differential of the Lagrange function, $d^2L$, at the critical point(s).
• If $L = f(x,y) + \lambda\,[c - g(x,y)]$, then
$$dL = df + (c - g)\,d\lambda - \lambda\,dg$$
and, using $c - g = 0$ and $d^2\lambda = 0$ at the critical point,
$$d^2L = d^2f - 2\,dg\,d\lambda - \lambda\,d^2g$$
Since:
$$d^2f = f_{xx}\,dx^2 + 2f_{xy}\,dx\,dy + f_{yy}\,dy^2, \qquad dg = g_x\,dx + g_y\,dy, \qquad d^2g = g_{xx}\,dx^2 + 2g_{xy}\,dx\,dy + g_{yy}\,dy^2,$$
therefore:
$$d^2L = (f_{xx} - \lambda g_{xx})\,dx^2 + 2(f_{xy} - \lambda g_{xy})\,dx\,dy + (f_{yy} - \lambda g_{yy})\,dy^2 - 2g_x\,dx\,d\lambda - 2g_y\,dy\,d\lambda$$
$$= L_{xx}\,dx^2 + 2L_{xy}\,dx\,dy + L_{yy}\,dy^2 - 2g_x\,dx\,d\lambda - 2g_y\,dy\,d\lambda$$
• In matrix form we can use the bordered Hessian matrix to represent the above quadratic form:
$$d^2L = \begin{bmatrix} d\lambda & dx & dy \end{bmatrix} \begin{bmatrix} 0 & -g_x & -g_y \\ -g_x & L_{xx} & L_{xy} \\ -g_y & L_{yx} & L_{yy} \end{bmatrix} \begin{bmatrix} d\lambda \\ dx \\ dy \end{bmatrix}$$
• Where the bordered Hessian matrix is:
$$\bar{H}_3 = \begin{bmatrix} 0 & -g_x & -g_y \\ -g_x & L_{xx} & L_{xy} \\ -g_y & L_{yx} & L_{yy} \end{bmatrix} \qquad \text{or sometimes} \qquad \bar{H}_3 = \begin{bmatrix} L_{xx} & L_{xy} & -g_x \\ L_{yx} & L_{yy} & -g_y \\ -g_x & -g_y & 0 \end{bmatrix}$$
Sufficient Condition
• In the second form, the components of the vectors of the first differentials of the variables need to be re-arranged, i.e.:
$$d^2L = \begin{bmatrix} dx & dy & d\lambda \end{bmatrix} \begin{bmatrix} L_{xx} & L_{xy} & -g_x \\ L_{yx} & L_{yy} & -g_y \\ -g_x & -g_y & 0 \end{bmatrix} \begin{bmatrix} dx \\ dy \\ d\lambda \end{bmatrix}$$
• Note: In some books the constraint function $g$ enters the Lagrange function with a positive sign, so the signs of the first derivatives of $g$ in the bordered Hessian matrix are positive; but there is no difference between their determinants. (Based on the properties of determinants, if just one row or just one column of a matrix is multiplied by $k$, the determinant of the matrix is multiplied by $k$. In this case, the first row and the first column are each multiplied by $-1$, so the determinant is multiplied by $(-1)\times(-1) = 1$.)
Sufficient Condition
So, for the two-variable case (in the numbering used here, where $\bar{H}_3$ is the full $3 \times 3$ bordered Hessian), we have a minimum if:
1. $d^2L > 0$, which requires $|\bar{H}_3| < 0$;
and a maximum if:
2. $d^2L < 0$, which requires $|\bar{H}_3| > 0$.
• For a multi-variable function $y = f(x_1, x_2, \dots, x_n)$, the bordered Hessian matrix is $(n+1) \times (n+1)$, but the rule is analogous:
o For a minimum: $|\bar{H}_3|, |\bar{H}_4|, \dots, |\bar{H}_{n+1}| < 0$.
o For a maximum: the signs of these bordered principal minors change one after another, starting with $|\bar{H}_3| > 0$.
Example
• Find the extrema of the function $f(x,y) = x - y$ subject to $x^2 + y^2 = 100$, if any.
$$L(x,y,\lambda) = x - y + \lambda\,[100 - x^2 - y^2]$$
$$L_x = 1 - 2\lambda x = 0, \qquad L_y = -1 - 2\lambda y = 0, \qquad L_\lambda = 100 - x^2 - y^2 = 0$$
Dividing the first equation by the second, $\frac{1}{-1} = \frac{2\lambda x}{2\lambda y}$, $\lambda$ can be eliminated and we have $x = -y$. Substituting this new equation into the third equation, we will have:
$$100 - y^2 - y^2 = 0 \;\to\; y = \pm 5\sqrt{2}$$
So the critical points are $A(-5\sqrt{2},\, 5\sqrt{2},\, -10\sqrt{2})$ and $B(5\sqrt{2},\, -5\sqrt{2},\, 10\sqrt{2})$, with $\lambda = \mp\frac{\sqrt{2}}{20}$ respectively.
Without any further investigation it can be said that point $A$ is the minimum and point $B$ is the maximum. (Why?)
Example
• Using the bordered Hessian determinant method we have:
$$|\bar{H}_3| = \begin{vmatrix} 0 & -2x & -2y \\ -2x & -2\lambda & 0 \\ -2y & 0 & -2\lambda \end{vmatrix} = 8\lambda(x^2 + y^2)$$
Obviously, the sign of this determinant depends on the sign of $\lambda$.
o At point $A(-5\sqrt{2}, 5\sqrt{2}, -10\sqrt{2})$, $\lambda = -\frac{\sqrt{2}}{20}$, so $|\bar{H}_3| < 0$ and the point is a minimum ($|\bar{H}_2|$ is also negative).
o At point $B(5\sqrt{2}, -5\sqrt{2}, 10\sqrt{2})$, $\lambda = +\frac{\sqrt{2}}{20}$, so $|\bar{H}_3| > 0$ and the point is a maximum.
• If there is more than one constraint, the process of optimisation is the same, but there will be more than one Lagrange multiplier.
• This case is a generalisation of the previous case and will not be discussed here.
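A sketch that rebuilds this bordered Hessian symbolically and confirms the determinant found above:

```python
import sympy as sp

x, y, lam = sp.symbols('x y lambda', real=True)
f = x - y
g = x**2 + y**2
L = f + lam*(100 - g)

# Bordered Hessian for one constraint and two variables
H = sp.Matrix([
    [0,              -sp.diff(g, x),    -sp.diff(g, y)],
    [-sp.diff(g, x),  sp.diff(L, x, 2),  sp.diff(L, x, y)],
    [-sp.diff(g, y),  sp.diff(L, y, x),  sp.diff(L, y, 2)],
])
print(sp.factor(H.det()))   # 8*lambda*(x**2 + y**2), matching the slide

# At A: lambda < 0, so |H3| < 0 (minimum); at B: lambda > 0, so |H3| > 0 (maximum)
```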
Interpretation of the Lagrange Multiplier $\lambda$
• The first-order conditions, in the form of the simultaneous equations (A) (slide 40), provide the critical (and perhaps optimal) values of the independent variables $(x^*, y^*)$ and the corresponding value(s) of the Lagrange multiplier ($\lambda^*$).
• The Lagrange multiplier shows the sensitivity of the optimal value of the target (objective) function ($f^*$) to a change in the constant value of the constraint function ($c$). It is calculated as the derivative:
$$\lambda^* = \frac{\partial f^*(x^*, y^*)}{\partial c}$$
This means that if $\lambda^* = 2$ and $c$ increases by one unit, the value of the target function (calculated at the optimal values $x^*$ and $y^*$) increases by approximately 2 units.
Duality in Optimisation Analysis
• Consider the process of maximisation of the target (objective) function $z = f(x,y)$, subject to the constraint $g(x,y) = c$.
• As we know, the solution is the point of tangency of the two functions, so the process of optimisation can be done through different approaches. The primal approach is what we have discussed and done so far, but the dual approach is when the constraint function $g(x,y)$ becomes the new target function and $z = f(x,y)$ the new constraint.
• The initial idea comes from the mathematical fact that if $f$ reaches its maximum at the point $x = x^*$, the function $-f$ will have a minimum at that point.
• Therefore, instead of finding the maximum of $z = f(x,y)$ subject to the constraint $g(x,y) = c$, we can find the minimum of $g(x,y)$ subject to the constraint $z = f(x,y)$; i.e., if we know that $z$ cannot be bigger than $z^*$, what is the minimum value of $g(x,y)$ which satisfies this constraint?
Duality in Optimisation Analysis
• Let $U = U(x,y)$ be the utility function, subject to the budget constraint $x P_x + y P_y = m$.
• The Lagrange function is:
$$L(x,y,\lambda) = U(x,y) + \lambda\,(m - x P_x - y P_y)$$
The first-order conditions are:
$$L_x = U_x - \lambda P_x = 0, \qquad L_y = U_y - \lambda P_y = 0, \qquad L_\lambda = m - x P_x - y P_y = 0 \qquad (B)$$
• The optimal values for $x$ and $y$, which give the Marshallian demand (consumption) functions for $x$ and $y$, and the optimal value for $\lambda$, are:
$$x^M = x^M(P_x, P_y, m), \qquad y^M = y^M(P_x, P_y, m), \qquad \lambda^M = \lambda^M(P_x, P_y, m)$$
Duality in Optimisation Analysis
• Substituting these solutions into the target function gives the maximum value of utility that can be achieved under the constraint:
$$U^* = U^*\big(x^M(P_x, P_y, m),\; y^M(P_x, P_y, m)\big)$$
We call this the indirect utility function: it is the maximum value of the utility obtained at the optimal values of $x$ and $y$, but it is an indirect function because its value now depends on the parameters $P_x$, $P_y$ and $m$.
• Now, the dual problem is when the expenditure on $x$ and $y$ is minimised subject to maintaining a given level of utility $U^*$. So the new Lagrange function is:
$$L(x,y,\lambda) = x P_x + y P_y + \lambda\,[U^* - U(x,y)]$$
The first-order conditions provide the optimal solutions for $x$, $y$ and $\lambda$.
Duality in Optimisation Analysis
$$L_x = P_x - \lambda U_x = 0, \qquad L_y = P_y - \lambda U_y = 0, \qquad L_\lambda = U^* - U(x,y) = 0 \qquad (C)$$
The optimal solutions represent the demand functions for $x$ and $y$:
$$x^H = x^H(P_x, P_y, U^*), \qquad y^H = y^H(P_x, P_y, U^*), \qquad \lambda^H = \lambda^H(P_x, P_y, U^*)$$
• The first two equations are called the Hicksian demand functions.
• Both systems of simultaneous equations, (B) and (C), give us the same result:
$$\frac{U_x}{P_x} = \frac{U_y}{P_y} \qquad \text{or} \qquad \frac{U_x}{U_y} = \frac{P_x}{P_y}$$
So primal and dual analysis lead us to the same conclusion. The only difference is that:
$$\lambda^H = \frac{1}{\lambda^M}$$
(A sketch comparing the primal and dual problems in code follows.)
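A sketch of the primal and dual problems for a hypothetical utility function $U = xy$ (an illustration, not from the slides), confirming $\lambda^H = 1/\lambda^M$; the solve calls rely on the positivity assumptions to pick the economically meaningful root:

```python
import sympy as sp

x, y, lam, Px, Py, m = sp.symbols('x y lambda P_x P_y m', positive=True)
U = x*y   # a hypothetical utility function for illustration

# Primal: max U subject to the budget x*Px + y*Py = m
L1 = U + lam*(m - x*Px - y*Py)
primal = sp.solve([sp.diff(L1, v) for v in (x, y, lam)], [x, y, lam], dict=True)[0]
print(primal[x], primal[y])   # Marshallian demands: m/(2*P_x), m/(2*P_y)

# Dual: min expenditure x*Px + y*Py subject to U(x, y) = U*
Ustar = sp.symbols('U_star', positive=True)
L2 = x*Px + y*Py + lam*(Ustar - U)
dual = sp.solve([sp.diff(L2, v) for v in (x, y, lam)], [x, y, lam], dict=True)[0]
print(dual[x], dual[y])       # Hicksian demands in terms of P_x, P_y, U_star

# lambda^H should be the reciprocal of lambda^M at the same optimum
lamM = primal[lam]
lamH = dual[lam].subs(Ustar, U.subs({x: primal[x], y: primal[y]}))
print(sp.simplify(lamM * lamH))   # 1
```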
More Related Content

What's hot

Exponential and logrithmic functions
Exponential and logrithmic functionsExponential and logrithmic functions
Exponential and logrithmic functions
Malikahmad105
ย 
4.3 related rates
4.3 related rates4.3 related rates
4.3 related rates
math265
ย 
Differentiation
DifferentiationDifferentiation
Differentiation
timschmitz
ย 
3.4 derivative and graphs
3.4 derivative and graphs3.4 derivative and graphs
3.4 derivative and graphs
math265
ย 
Linear equations inequalities and applications
Linear equations inequalities and applicationsLinear equations inequalities and applications
Linear equations inequalities and applications
vineeta yadav
ย 

What's hot (20)

Exponential and logrithmic functions
Exponential and logrithmic functionsExponential and logrithmic functions
Exponential and logrithmic functions
ย 
Heteroscedasticity Remedial Measures.pptx
Heteroscedasticity Remedial Measures.pptxHeteroscedasticity Remedial Measures.pptx
Heteroscedasticity Remedial Measures.pptx
ย 
4.3 related rates
4.3 related rates4.3 related rates
4.3 related rates
ย 
Lesson 14: Derivatives of Logarithmic and Exponential Functions (slides)
Lesson 14: Derivatives of Logarithmic and Exponential Functions (slides)Lesson 14: Derivatives of Logarithmic and Exponential Functions (slides)
Lesson 14: Derivatives of Logarithmic and Exponential Functions (slides)
ย 
Derivatives and their Applications
Derivatives and their ApplicationsDerivatives and their Applications
Derivatives and their Applications
ย 
Oer 8 the open economy
Oer 8  the open economyOer 8  the open economy
Oer 8 the open economy
ย 
Cubic Spline Interpolation
Cubic Spline InterpolationCubic Spline Interpolation
Cubic Spline Interpolation
ย 
Lesson 21: Partial Derivatives in Economics
Lesson 21: Partial Derivatives in EconomicsLesson 21: Partial Derivatives in Economics
Lesson 21: Partial Derivatives in Economics
ย 
Lesson 15: Gradients and level curves
Lesson 15: Gradients and level curvesLesson 15: Gradients and level curves
Lesson 15: Gradients and level curves
ย 
Differentiation
DifferentiationDifferentiation
Differentiation
ย 
Lesson 5: Continuity (slides)
Lesson 5: Continuity (slides)Lesson 5: Continuity (slides)
Lesson 5: Continuity (slides)
ย 
3.4 derivative and graphs
3.4 derivative and graphs3.4 derivative and graphs
3.4 derivative and graphs
ย 
Linear equations inequalities and applications
Linear equations inequalities and applicationsLinear equations inequalities and applications
Linear equations inequalities and applications
ย 
Mankiw6e chap12
Mankiw6e chap12Mankiw6e chap12
Mankiw6e chap12
ย 
Introduction to Econometrics
Introduction to EconometricsIntroduction to Econometrics
Introduction to Econometrics
ย 
quadratic equations.pptx
quadratic equations.pptxquadratic equations.pptx
quadratic equations.pptx
ย 
Probability Distribution
Probability DistributionProbability Distribution
Probability Distribution
ย 
Functional Forms of Regression Models | Eonomics
Functional Forms of Regression Models | EonomicsFunctional Forms of Regression Models | Eonomics
Functional Forms of Regression Models | Eonomics
ย 
Continuity and differentiability
Continuity and differentiability Continuity and differentiability
Continuity and differentiability
ย 
Application of differentiation
Application of differentiationApplication of differentiation
Application of differentiation
ย 

Viewers also liked (8)

Implicit function and Total derivative
Implicit function and Total derivativeImplicit function and Total derivative
Implicit function and Total derivative
ย 
Basic of Computer component
Basic of Computer componentBasic of Computer component
Basic of Computer component
ย 
Basic Calculus in R.
Basic Calculus in R. Basic Calculus in R.
Basic Calculus in R.
ย 
General mathematics
General mathematicsGeneral mathematics
General mathematics
ย 
Basic calculus (i)
Basic calculus (i)Basic calculus (i)
Basic calculus (i)
ย 
GENERAL MATHEMATICS Module 1: Review on Functions
GENERAL MATHEMATICS Module 1: Review on FunctionsGENERAL MATHEMATICS Module 1: Review on Functions
GENERAL MATHEMATICS Module 1: Review on Functions
ย 
STATISTICS AND PROBABILITY (TEACHING GUIDE)
STATISTICS AND PROBABILITY (TEACHING GUIDE)STATISTICS AND PROBABILITY (TEACHING GUIDE)
STATISTICS AND PROBABILITY (TEACHING GUIDE)
ย 
Pre calculus Grade 11 Learner's Module Senior High School
Pre calculus Grade 11 Learner's Module Senior High SchoolPre calculus Grade 11 Learner's Module Senior High School
Pre calculus Grade 11 Learner's Module Senior High School
ย 

Similar to Basic calculus (ii) recap

Semana 24 funciones iv รกlgebra uni ccesa007
Semana 24 funciones iv รกlgebra uni ccesa007Semana 24 funciones iv รกlgebra uni ccesa007
Semana 24 funciones iv รกlgebra uni ccesa007
Demetrio Ccesa Rayme
ย 
Engineering Analysis -Third Class.ppsx
Engineering Analysis -Third Class.ppsxEngineering Analysis -Third Class.ppsx
Engineering Analysis -Third Class.ppsx
HebaEng
ย 

Similar to Basic calculus (ii) recap (20)

Differential Calculus- differentiation
Differential Calculus- differentiationDifferential Calculus- differentiation
Differential Calculus- differentiation
ย 
MT102 ะ›ะตะบั† 8
MT102 ะ›ะตะบั† 8MT102 ะ›ะตะบั† 8
MT102 ะ›ะตะบั† 8
ย 
Btech_II_ engineering mathematics_unit4
Btech_II_ engineering mathematics_unit4Btech_II_ engineering mathematics_unit4
Btech_II_ engineering mathematics_unit4
ย 
B.tech ii unit-4 material vector differentiation
B.tech ii unit-4 material vector differentiationB.tech ii unit-4 material vector differentiation
B.tech ii unit-4 material vector differentiation
ย 
BSC_COMPUTER _SCIENCE_UNIT-2_DISCRETE MATHEMATICS
BSC_COMPUTER _SCIENCE_UNIT-2_DISCRETE MATHEMATICSBSC_COMPUTER _SCIENCE_UNIT-2_DISCRETE MATHEMATICS
BSC_COMPUTER _SCIENCE_UNIT-2_DISCRETE MATHEMATICS
ย 
MT102 ะ›ะตะบั† 9
MT102 ะ›ะตะบั† 9MT102 ะ›ะตะบั† 9
MT102 ะ›ะตะบั† 9
ย 
Generalized Laplace - Mellin Integral Transformation
Generalized Laplace - Mellin Integral TransformationGeneralized Laplace - Mellin Integral Transformation
Generalized Laplace - Mellin Integral Transformation
ย 
Periodic Solutions for Nonlinear Systems of Integro-Differential Equations of...
Periodic Solutions for Nonlinear Systems of Integro-Differential Equations of...Periodic Solutions for Nonlinear Systems of Integro-Differential Equations of...
Periodic Solutions for Nonlinear Systems of Integro-Differential Equations of...
ย 
Differential Geometry for Machine Learning
Differential Geometry for Machine LearningDifferential Geometry for Machine Learning
Differential Geometry for Machine Learning
ย 
Calculus Review Session Brian Prest Duke University Nicholas School of the En...
Calculus Review Session Brian Prest Duke University Nicholas School of the En...Calculus Review Session Brian Prest Duke University Nicholas School of the En...
Calculus Review Session Brian Prest Duke University Nicholas School of the En...
ย 
Change variablethm
Change variablethmChange variablethm
Change variablethm
ย 
Semana 24 funciones iv รกlgebra uni ccesa007
Semana 24 funciones iv รกlgebra uni ccesa007Semana 24 funciones iv รกlgebra uni ccesa007
Semana 24 funciones iv รกlgebra uni ccesa007
ย 
Specific topics in optimisation
Specific topics in optimisationSpecific topics in optimisation
Specific topics in optimisation
ย 
Integral calculus
Integral calculusIntegral calculus
Integral calculus
ย 
01 FUNCTIONS.pptx
01 FUNCTIONS.pptx01 FUNCTIONS.pptx
01 FUNCTIONS.pptx
ย 
Left and Right Folds - Comparison of a mathematical definition and a programm...
Left and Right Folds- Comparison of a mathematical definition and a programm...Left and Right Folds- Comparison of a mathematical definition and a programm...
Left and Right Folds - Comparison of a mathematical definition and a programm...
ย 
Higher order differential equation
Higher order differential equationHigher order differential equation
Higher order differential equation
ย 
Advanced-Differentiation-Rules.pdf
Advanced-Differentiation-Rules.pdfAdvanced-Differentiation-Rules.pdf
Advanced-Differentiation-Rules.pdf
ย 
B.tech ii unit-5 material vector integration
B.tech ii unit-5 material vector integrationB.tech ii unit-5 material vector integration
B.tech ii unit-5 material vector integration
ย 
Engineering Analysis -Third Class.ppsx
Engineering Analysis -Third Class.ppsxEngineering Analysis -Third Class.ppsx
Engineering Analysis -Third Class.ppsx
ย 

More from Farzad Javidanrad (10)

Lecture 5
Lecture 5Lecture 5
Lecture 5
ย 
Lecture 4
Lecture 4Lecture 4
Lecture 4
ย 
Lecture 3
Lecture 3Lecture 3
Lecture 3
ย 
Lecture 2
Lecture 2Lecture 2
Lecture 2
ย 
Lecture 1
Lecture 1Lecture 1
Lecture 1
ย 
Matrix algebra
Matrix algebraMatrix algebra
Matrix algebra
ย 
Introduction to correlation and regression analysis
Introduction to correlation and regression analysisIntroduction to correlation and regression analysis
Introduction to correlation and regression analysis
ย 
Statistics (recap)
Statistics (recap)Statistics (recap)
Statistics (recap)
ย 
The Dynamic of Business Cycle in Kaleckiโ€™s Theory: Duality in the Nature of I...
The Dynamic of Business Cycle in Kaleckiโ€™s Theory: Duality in the Nature of I...The Dynamic of Business Cycle in Kaleckiโ€™s Theory: Duality in the Nature of I...
The Dynamic of Business Cycle in Kaleckiโ€™s Theory: Duality in the Nature of I...
ย 
Introductory Finance for Economics (Lecture 10)
Introductory Finance for Economics (Lecture 10)Introductory Finance for Economics (Lecture 10)
Introductory Finance for Economics (Lecture 10)
ย 

Recently uploaded

Recently uploaded (20)

Google Gemini An AI Revolution in Education.pptx
Google Gemini An AI Revolution in Education.pptxGoogle Gemini An AI Revolution in Education.pptx
Google Gemini An AI Revolution in Education.pptx
ย 
Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...
Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...
Sensory_Experience_and_Emotional_Resonance_in_Gabriel_Okaras_The_Piano_and_Th...
ย 
ICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptxICT Role in 21st Century Education & its Challenges.pptx
ICT Role in 21st Century Education & its Challenges.pptx
ย 
Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024Mehran University Newsletter Vol-X, Issue-I, 2024
Mehran University Newsletter Vol-X, Issue-I, 2024
ย 
Beyond_Borders_Understanding_Anime_and_Manga_Fandom_A_Comprehensive_Audience_...
Beyond_Borders_Understanding_Anime_and_Manga_Fandom_A_Comprehensive_Audience_...Beyond_Borders_Understanding_Anime_and_Manga_Fandom_A_Comprehensive_Audience_...
Beyond_Borders_Understanding_Anime_and_Manga_Fandom_A_Comprehensive_Audience_...
ย 
Graduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - EnglishGraduate Outcomes Presentation Slides - English
Graduate Outcomes Presentation Slides - English
ย 
Wellbeing inclusion and digital dystopias.pptx
Wellbeing inclusion and digital dystopias.pptxWellbeing inclusion and digital dystopias.pptx
Wellbeing inclusion and digital dystopias.pptx
ย 
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptxBasic Civil Engineering first year Notes- Chapter 4 Building.pptx
Basic Civil Engineering first year Notes- Chapter 4 Building.pptx
ย 
Application orientated numerical on hev.ppt
Application orientated numerical on hev.pptApplication orientated numerical on hev.ppt
Application orientated numerical on hev.ppt
ย 
Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...Making communications land - Are they received and understood as intended? we...
Making communications land - Are they received and understood as intended? we...
ย 
Python Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docxPython Notes for mca i year students osmania university.docx
Python Notes for mca i year students osmania university.docx
ย 
SOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning PresentationSOC 101 Demonstration of Learning Presentation
SOC 101 Demonstration of Learning Presentation
ย 
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...Kodo Millet  PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
Kodo Millet PPT made by Ghanshyam bairwa college of Agriculture kumher bhara...
ย 
Introduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The BasicsIntroduction to Nonprofit Accounting: The Basics
Introduction to Nonprofit Accounting: The Basics
ย 
Unit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptxUnit-V; Pricing (Pharma Marketing Management).pptx
Unit-V; Pricing (Pharma Marketing Management).pptx
ย 
Holdier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdfHoldier Curriculum Vitae (April 2024).pdf
Holdier Curriculum Vitae (April 2024).pdf
ย 
Unit 3 Emotional Intelligence and Spiritual Intelligence.pdf
Unit 3 Emotional Intelligence and Spiritual Intelligence.pdfUnit 3 Emotional Intelligence and Spiritual Intelligence.pdf
Unit 3 Emotional Intelligence and Spiritual Intelligence.pdf
ย 
Jamworks pilot and AI at Jisc (20/03/2024)
Jamworks pilot and AI at Jisc (20/03/2024)Jamworks pilot and AI at Jisc (20/03/2024)
Jamworks pilot and AI at Jisc (20/03/2024)
ย 
How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17How to Give a Domain for a Field in Odoo 17
How to Give a Domain for a Field in Odoo 17
ย 
General Principles of Intellectual Property: Concepts of Intellectual Proper...
General Principles of Intellectual Property: Concepts of Intellectual  Proper...General Principles of Intellectual Property: Concepts of Intellectual  Proper...
General Principles of Intellectual Property: Concepts of Intellectual Proper...
ย 

Basic calculus (ii) recap

  • 1. Basic Calculus (II) Recap (for MSc & PhD Business, Management & Finance Students) Lecturer: Farzad Javidanrad First Draft: Sep. 2013 Revised: Sep. 2014 Multi-Variable Functions
  • 2. Multi-Variable Functions โ€ข In the case of one-variable function, in the form of ๐‘ฆ = ๐‘“(๐‘ฅ) , the variable ๐’™ is called โ€œindependent variableโ€ and ๐’š โ€œdependent variableโ€. โ€ข There are many examples of the dependency of ๐‘ฆ on ๐‘ฅ (e.g, the state of boiling of water depends on the amount of heat; or consumption expenditure depends on the level of income) but the concept of function should be understood beyond the concept of dependency. In most of the cases, dependency is not the issue at all. The modern concept of function is based on the idea of mapping.
  • 3. Multi-Variable Functions โ€ข When a painter paint a scene on a canvas s(he) uses a correspondence rule (mapping rule): every point in three- dimensional space (๐‘…3) is corresponded (mapped) to just one and only one point in two-dimensional space (๐‘…2). โ€ข Mathematically speaking the function ๐‘“: ๐‘…3 โ†’ ๐‘…2 can represent the type of corresponding (mapping) rule that the painter is applying.
  • 4. The Concept of Function as Mapping โ€ข Transformation of an object is a mapping from ๐‘…2 to ๐‘…2; โ€ข Mathematical operations describe a function from ๐‘…2 to ๐‘… x y y -xo ๐‘“: ๐‘…2 โ†’ ๐‘…2 ๐‘Ž, ๐‘ โ†’ (๐‘, โˆ’๐‘Ž) (๐‘Ž, ๐‘) (๐‘, โˆ’๐‘Ž) a b a+boo Figure1-6: Geometrical interpretation of the sum operator as a function. This is a transformation from space to . xx ๐‘”: ๐‘…2 โ†’ ๐‘… ๐‘Ž, ๐‘ โ†’ ๐‘Ž + ๐‘
  • 5. Multi Variables Functions โ€ข All basic mathematical operators such as summation, subtraction, division and multiplication introduce a function from two- dimensional space (๐‘…2) to the real number set (one-dimensional space, ๐‘…), that is: ๐‘“: ๐‘…2 โ†’ ๐‘… For e.g. for division: ๐‘Ž, ๐‘ โ†’ ๐‘Ž ๐‘ (๐‘ โ‰  0) โ€ข One of the important family of the multi-variable functions is the โ€œreal (scalar) multi variables functionโ€, which can be shown as ๐‘“: ๐‘… ๐‘› โ†’ ๐‘… or simply, ๐‘ฆ = ๐‘“(๐‘ฅ1, ๐‘ฅ2, โ€ฆ , ๐‘ฅ ๐‘›), where ๐‘ฆ is the dependent variable and ๐‘ฅ1, ๐‘ฅ2, โ€ฆ , ๐‘ฅ ๐‘› are independent variables.
  • 6. Two Variables Functions โ€ข A simple form of this function is when we have two independent variables ๐‘ฅ, ๐‘ฆ and one dependent variable ๐‘ง, in the form of ๐‘ง = ๐‘“(๐‘ฅ, ๐‘ฆ). This is called โ€œtwo variables functionโ€ as there are two independent variables. โ€ข E.g. a Cobb-Douglas production function : ๐‘Œ = ๐‘“ ๐พ, ๐ฟ = ๐ด๐พ ๐›ผ ๐ฟ ๐›ฝ Where ๐‘Œ is the level of production, ๐พ and ๐ฟ are the levels of capital and labour employed for production, respectively. โ€ข ๐ด, ๐›ผ and ๐›ฝ are constants of the function. Adoptedfrom http://en.citizendium.org/wiki/File:Cobb-Douglas_with_dimishing_returns_to_scale.png Y K L
  • 7. Two Variables Functions โ€ข ๐‘ง = ๐‘“(๐‘ฅ, ๐‘ฆ) represents a functional relationship if for every ordered pair (๐‘ฅ, ๐‘ฆ) in the domain of the function there will be one and only one value of ๐‘ง in the range of the function. o Which graph does represent a function? ๐’™ ๐Ÿ ๐’‚ ๐Ÿ + ๐’š ๐Ÿ ๐’ƒ ๐Ÿ + ๐’› ๐Ÿ ๐’„ ๐Ÿ = ๐Ÿ Ellipsoid Hyperboloid of Two Sheets โˆ’ ๐’™ ๐Ÿ ๐’‚ ๐Ÿ โˆ’ ๐’š ๐Ÿ ๐’ƒ ๐Ÿ + ๐’› ๐Ÿ ๐’„ ๐Ÿ = ๐Ÿ Hyperbolic Paraboloid ๐’™ ๐Ÿ ๐’‚ ๐Ÿ โˆ’ ๐’š ๐Ÿ ๐’ƒ ๐Ÿ = ๐’› ๐’„ Elliptic Paraboloid ๐’™ ๐Ÿ ๐’‚ ๐Ÿ + ๐’š ๐Ÿ ๐’ƒ ๐Ÿ = ๐’› ๐’„ Adoptedfromhttp://tutorial.math.lamar.edu/Classes/CalcIII/QuadricSurfaces.aspx
  • 8. Derivative of Two Variables Functions โ€ข Consider the function ๐‘ง = ๐‘“(๐‘ฅ, ๐‘ฆ); ๐‘ง changes if ๐‘ฅ or ๐‘ฆ or both of them change. If we control the change of ๐‘ฆ and allow just ๐‘ฅ to change then the average change of ๐‘ง in terms of ๐‘ฅ, is ฮ”๐‘ง ฮ”๐‘ฅ . The limiting state of this ratio when โˆ†๐‘ฅ โ†’ 0 is what is called โ€œpartial derivative of ๐’› in terms of ๐’™ โ€ and is shown by: ๐œ•๐‘ง ๐œ•๐‘ฅ , ๐œ•๐‘“(๐‘ฅ,๐‘ฆ) ๐œ•๐‘ฅ , ๐‘ง ๐‘ฅ โ€ฒ , ๐‘“๐‘ฅ โ€ข This cutter plane shows that the variable ๐‘ฆ is controlled (fixed) at ๐‘ฆ = 1 but ๐‘ฅ can change from -2 to +2 and the movement is on the curve of intersection between The plane and the surface of the function. Adoptedfrom http://msemac.redwoods.edu/~darnold/math50c/matlab/pderiv/index.xhtml
  • 9. Partial Differentiation โ€ข If ๐‘ฅ is controlled (fixed) and ๐‘ฆ is allowed to change the partial derivative of ๐’› in terms of ๐’š can be shown by: ๐œ•๐‘ง ๐œ•๐‘ฆ , ๐œ•๐‘“(๐‘ฅ,๐‘ฆ) ๐œ•๐‘ฆ , ๐‘ง ๐‘ฆ โ€ฒ ,๐‘“๐‘ฆ โ€ข The cutter plane shows that ๐‘ฅ is controlled (fixed) at ๐‘ฅ = 0 but ๐‘ฆ can change from -3 to +3 on the curve of intersection between the plane and the surface of the function. z y x Adoptedfrom http://www.uwec.edu/math/Calculus/216-Spring2007/assignments.htm
  • 10. Partial Differentiation โ€ข So, in general, the slope of the function ๐‘ง = ๐‘“(๐‘ฅ, ๐‘ฆ) on the curve of intersection between the surface of the function and the cutting plane parallel to x-axis at any point of the domain is: ๐œ•๐‘ง ๐œ•๐‘ฅ = ๐‘“๐‘ฅ = lim โˆ†๐‘ฅโ†’0 ๐‘“ ๐‘ฅ + โˆ†๐‘ฅ , ๐‘ฆ โˆ’ ๐‘“(๐‘ฅ , ๐‘ฆ) โˆ†๐‘ฅ = ๐‘™๐‘–๐‘š โ„Žโ†’0 ๐‘“ ๐‘ฅ + โ„Ž , ๐‘ฆ โˆ’ ๐‘“(๐‘ฅ , ๐‘ฆ) โ„Ž It means when calculating ๐œ•๐‘ง ๐œ•๐‘ฅ the variable ๐‘ฆ should be treated as a constant. The same rule applies for multi variables functions. Adoptedfrom http://moodle.capilanou.ca/mod/book/view.php?id=19700&chapterid=240 ๐’› = ๐Ÿ๐ŸŽ โˆ’ ๐’™ ๐Ÿ โˆ’ ๐’š ๐Ÿ
  • 11. Partial Differentiation โ€ข And the slope of the function ๐‘ง = ๐‘“(๐‘ฅ, ๐‘ฆ) on the curve of intersection between the surface of the function and the cutting plane parallel to y-axis at any point of the domain is: ๐œ•๐‘ง ๐œ•๐‘ฆ = ๐‘“๐‘ฆ = ๐‘™๐‘–๐‘š โˆ†๐‘ฆโ†’0 ๐‘“ ๐‘ฅ , ๐‘ฆ + โˆ†๐‘ฆ โˆ’ ๐‘“(๐‘ฅ, ๐‘ฆ) โˆ†๐‘ฆ = ๐‘™๐‘–๐‘š โ„Žโ†’0 ๐‘“ ๐‘ฅ , ๐‘ฆ + โ„Ž โˆ’ ๐‘“(๐‘ฅ, ๐‘ฆ) โ„Ž It means when calculating ๐œ•๐‘ง ๐œ•๐‘ฆ the variable ๐‘ฅ should be treated as a constant. The same rule applies for multi variables functions. Adoptedfrom http://moodle.capilanou.ca/mod/book/view.php?id=19700&chapterid=240 ๐’› = ๐Ÿ๐ŸŽ โˆ’ ๐’™ ๐Ÿ โˆ’ ๐’š ๐Ÿ
• 12. Partial Differentiation
• To find the partial derivatives (slopes of the tangent lines on the surface) at a specific point $P(a, b, c)$ we have:
$$\left.\frac{\partial f(x,y)}{\partial x}\right|_{P(a,b,c)} = \lim_{h \to 0} \frac{f(a + h,\, b) - f(a,\, b)}{h}$$
$$\left.\frac{\partial f(x,y)}{\partial y}\right|_{P(a,b,c)} = \lim_{h \to 0} \frac{f(a,\, b + h) - f(a,\, b)}{h}$$
Example:
o Find the partial derivatives of $z = 10x^2y^3$:
$$\frac{\partial z}{\partial x} = 20xy^3, \qquad \frac{\partial z}{\partial y} = 30x^2y^2$$
[Figure: tangent lines to the surface at $P(a, b, c)$, above the points $(a, 0, 0)$, $(0, b, 0)$ and $(a, b, 0)$; adapted from http://www.solitaryroad.com/c353.html]
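These computations are easy to check symbolically. Below is a minimal sketch using Python's sympy library (an addition to this recap, not part of the original slides), reproducing the example above and evaluating the slopes at a hypothetical point $(a, b) = (1, 2)$:

```python
import sympy as sp

x, y = sp.symbols('x y')
z = 10 * x**2 * y**3

# Partial derivatives: differentiate with respect to one variable
# while the other is treated as a constant.
z_x = sp.diff(z, x)
z_y = sp.diff(z, y)
print(z_x, z_y)                  # 20*x*y**3  30*x**2*y**2

# Slopes of the tangent lines at the sample point (1, 2):
print(z_x.subs({x: 1, y: 2}))    # 160
print(z_y.subs({x: 1, y: 2}))    # 120
```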
• 13. Rules of Partial Differentiation
• If $f(x, y)$ and $g(x, y)$ are two differentiable functions with respect to $x$ and $y$:
• $z = f(x,y) \pm g(x,y) \;\Rightarrow\; \frac{\partial z}{\partial x} = \frac{\partial f}{\partial x} \pm \frac{\partial g}{\partial x} = f_x \pm g_x \quad,\quad \frac{\partial z}{\partial y} = \frac{\partial f}{\partial y} \pm \frac{\partial g}{\partial y} = f_y \pm g_y$
• $z = f(x,y) \times g(x,y) \;\Rightarrow\; \frac{\partial z}{\partial x} = f_x\, g + g_x\, f \quad,\quad \frac{\partial z}{\partial y} = f_y\, g + g_y\, f$
• $z = \frac{f(x,y)}{g(x,y)} \;\Rightarrow\; \frac{\partial z}{\partial x} = \frac{f_x\, g - g_x\, f}{g^2} \quad,\quad \frac{\partial z}{\partial y} = \frac{f_y\, g - g_y\, f}{g^2}$
• 14. Some Examples
o Find the partial derivatives of the function $z = x^2 - xy^3 - 5y^2$:
$$\frac{\partial z}{\partial x} = 2x - y^3, \qquad \frac{\partial z}{\partial y} = -3xy^2 - 10y$$
o Find the partial derivatives of $z = xy\sqrt{x^2 + y^2}$:
$$\frac{\partial z}{\partial x} = y\sqrt{x^2 + y^2} + \frac{2x}{2\sqrt{x^2 + y^2}} \cdot xy = y\sqrt{x^2 + y^2} + \frac{x^2 y}{\sqrt{x^2 + y^2}}$$
$$\frac{\partial z}{\partial y} = x\sqrt{x^2 + y^2} + \frac{2y}{2\sqrt{x^2 + y^2}} \cdot xy = x\sqrt{x^2 + y^2} + \frac{y^2 x}{\sqrt{x^2 + y^2}}$$
o Find the partial derivatives of $z = \frac{3x^2y^2}{x^4 + y^4}$:
$$\frac{\partial z}{\partial x} = \frac{6xy^2(x^4 + y^4) - 4x^3 \cdot 3x^2y^2}{(x^4 + y^4)^2}, \qquad \frac{\partial z}{\partial y} = \frac{6yx^2(x^4 + y^4) - 4y^3 \cdot 3x^2y^2}{(x^4 + y^4)^2}$$
• 15. Chain Rule (Different Cases)
Case 1: If $z = f(u)$ and $u = g(x, y)$ then $z = f(g(x, y))$ and
$$\frac{\partial z}{\partial x} = f'(u) \cdot \frac{\partial g}{\partial x} = \frac{\partial z}{\partial u} \cdot \frac{\partial u}{\partial x} \qquad \text{and} \qquad \frac{\partial z}{\partial y} = f'(u) \cdot \frac{\partial g}{\partial y} = \frac{\partial z}{\partial u} \cdot \frac{\partial u}{\partial y}$$
Examples:
o Find the partial derivatives of $z = e^{xy^2}$. Suppose $u = xy^2$; then $z = e^u$ and
$$\frac{\partial z}{\partial x} = \frac{\partial z}{\partial u} \cdot \frac{\partial u}{\partial x} = e^u \cdot u_x = e^{xy^2} \cdot y^2 \qquad \text{and} \qquad \frac{\partial z}{\partial y} = \frac{\partial z}{\partial u} \cdot \frac{\partial u}{\partial y} = e^u \cdot u_y = e^{xy^2} \cdot 2xy$$
• 16. Chain Rule (Different Cases)
o Find the partial derivatives of the function $z = e^{x/y} + \cos(xy)$:
$$\frac{\partial z}{\partial x} = \frac{1}{y} e^{x/y} - y\sin(xy), \qquad \frac{\partial z}{\partial y} = -\frac{x}{y^2} e^{x/y} - x\sin(xy)$$
• Case 2: If $z = f(x, y)$ is a differentiable function of $x$ and $y$, and these two variables are differentiable functions of $r$, such that $x = g(r)$ and $y = h(r)$, then:
$$\frac{dz}{dr} = \frac{\partial z}{\partial x} \cdot \frac{dx}{dr} + \frac{\partial z}{\partial y} \cdot \frac{dy}{dr}$$
The same rule applies for multi-variable functions.
o Find the derivative of $z = x - \ln y$ with respect to $r$ when $x = \sqrt{r}$ and $y = r^2 - 1$:
$$\frac{dz}{dr} = 1 \cdot \frac{1}{2\sqrt{r}} - \frac{1}{y} \cdot 2r = \frac{1}{2\sqrt{r}} - \frac{2r}{r^2 - 1}$$
• Can you suggest another way?
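One answer to the question above: substitute $x = \sqrt{r}$ and $y = r^2 - 1$ into $z$ first, then differentiate directly with respect to $r$. A short sympy sketch (an illustration, not part of the original slides) confirms that both routes agree:

```python
import sympy as sp

r = sp.symbols('r', positive=True)
x, y = sp.symbols('x y')

z = x - sp.log(y)        # z = f(x, y)
xr = sp.sqrt(r)          # x = g(r)
yr = r**2 - 1            # y = h(r)

# Case 2 chain rule: dz/dr = (dz/dx)(dx/dr) + (dz/dy)(dy/dr)
dz_dr = sp.diff(z, x).subs({x: xr, y: yr}) * sp.diff(xr, r) \
      + sp.diff(z, y).subs({x: xr, y: yr}) * sp.diff(yr, r)

# "Another way": substitute first, then differentiate directly.
direct = sp.diff(z.subs({x: xr, y: yr}), r)

print(sp.simplify(dz_dr - direct))   # 0, so both routes agree
```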
• 17. Chain Rules (Different Cases)
• Case 3: If $z = f(x, y)$ is a differentiable function of $x$ and $y$, and these two variables are differentiable functions of $r$ and $s$, such that $x = g(r, s)$ and $y = h(r, s)$, and $r$ and $s$ are independent from each other ($\frac{dr}{ds} = \frac{ds}{dr} = 0$), then:
$$\frac{\partial z}{\partial r} = \frac{\partial z}{\partial x} \cdot \frac{\partial x}{\partial r} + \frac{\partial z}{\partial y} \cdot \frac{\partial y}{\partial r} \qquad \text{and} \qquad \frac{\partial z}{\partial s} = \frac{\partial z}{\partial x} \cdot \frac{\partial x}{\partial s} + \frac{\partial z}{\partial y} \cdot \frac{\partial y}{\partial s}$$
• These derivatives are called "total derivatives of $z$ with respect to $r$ and $s$".
o Find the partial derivatives of $z = \sqrt[3]{x^2 - y}$ where $x = r^2 + s^2$ and $y = \frac{r}{s}$.
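The slide leaves this exercise open. A sympy sketch of the Case 3 recipe (illustrative; the cube root is written as a rational power, which is an implementation choice rather than part of the slides) might run as follows:

```python
import sympy as sp

r, s = sp.symbols('r s', positive=True)
x, y = sp.symbols('x y')

z = (x**2 - y) ** sp.Rational(1, 3)   # z = cbrt(x**2 - y)
xrs = r**2 + s**2                     # x = g(r, s)
yrs = r / s                           # y = h(r, s)

subs = {x: xrs, y: yrs}
# Case 3: total derivatives of z with respect to r and s
dz_dr = sp.diff(z, x).subs(subs) * sp.diff(xrs, r) \
      + sp.diff(z, y).subs(subs) * sp.diff(yrs, r)
dz_ds = sp.diff(z, x).subs(subs) * sp.diff(xrs, s) \
      + sp.diff(z, y).subs(subs) * sp.diff(yrs, s)

print(sp.simplify(dz_dr))
print(sp.simplify(dz_ds))
```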
• 18. Implicit Differentiation
• The Chain Rule can be used for implicit differentiation, even for one-variable functions $F(x, y) = 0$. Using the chain rule we have:
$$\frac{dF}{dx} = \frac{\partial F}{\partial x} \cdot \frac{dx}{dx} + \frac{\partial F}{\partial y} \cdot \frac{dy}{dx} = 0$$
As $\frac{dx}{dx} = 1$, so:
$$\frac{dy}{dx} = -\frac{\partial F / \partial x}{\partial F / \partial y} = -\frac{F_x}{F_y}$$
• The same rule can be used for implicit two- or multi-variable functions. For example, for an implicit function $F(x, y, z) = 0$, we have:
$$\frac{\partial z}{\partial x} = -\frac{\partial F / \partial x}{\partial F / \partial z} = -\frac{F_x}{F_z} \qquad \text{and} \qquad \frac{\partial z}{\partial y} = -\frac{\partial F / \partial y}{\partial F / \partial z} = -\frac{F_y}{F_z}$$
• 19. Examples of Implicit Functions
o Find the slope of the tangent line on the curve of intersection between the surface $x^2 + y^2 + z^2 = 9$ and the plane $y = 2$ at the point $A(1, 2, 2)$.
As $y$ is fixed at 2, we are looking for $\frac{\partial z}{\partial x}$ at point $A$:
$$2x + 0 + 2z \cdot \frac{\partial z}{\partial x} = 0 \;\Rightarrow\; \frac{\partial z}{\partial x} = -\frac{x}{z} = -\frac{1}{2}$$
Or, using implicit differentiation: $\frac{\partial z}{\partial x} = -\frac{F_x}{F_z} = -\frac{2x}{2z} = -\frac{x}{z}$
o Find $\frac{\partial z}{\partial y}$ for $e^{x+y+z} = x^2 - 2y^2 + z^2$:
$$\left(0 + 1 + \frac{\partial z}{\partial y}\right) e^{x+y+z} = 0 - 4y + 2z \cdot \frac{\partial z}{\partial y} \;\Rightarrow\; \frac{\partial z}{\partial y} = \frac{e^{x+y+z} + 4y}{2z - e^{x+y+z}}$$
• Use implicit differentiation for this question.
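Both examples can be verified with the formula $\partial z/\partial x = -F_x/F_z$ and its analogue for $y$. Here is a small sympy check, again added purely for illustration:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Sphere x**2 + y**2 + z**2 = 9, written as F(x, y, z) = 0
F = x**2 + y**2 + z**2 - 9
dz_dx = -sp.diff(F, x) / sp.diff(F, z)     # -Fx/Fz = -x/z
print(dz_dx.subs({x: 1, y: 2, z: 2}))      # -1/2

# Second example: e**(x+y+z) = x**2 - 2*y**2 + z**2
G = sp.exp(x + y + z) - (x**2 - 2*y**2 + z**2)
dz_dy = sp.simplify(-sp.diff(G, y) / sp.diff(G, z))
print(dz_dy)   # (exp(x+y+z) + 4*y)/(2*z - exp(x+y+z)), up to rearrangement
```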
• 20. Higher Orders Partial Derivatives
• For the function $z = f(x, y)$, the partial derivatives $\frac{\partial z}{\partial x}$ and $\frac{\partial z}{\partial y}$ are, in general, themselves functions of $x$ and $y$. So, we can think of second partial derivatives of $z$; in this case there are three different second derivatives:
$$z_{xx} = f_{xx} = \frac{\partial}{\partial x}\left(\frac{\partial z}{\partial x}\right) = \frac{\partial^2 z}{\partial x^2} \qquad \text{(second-order direct partial derivative)}$$
$$z_{yy} = f_{yy} = \frac{\partial}{\partial y}\left(\frac{\partial z}{\partial y}\right) = \frac{\partial^2 z}{\partial y^2} \qquad \text{(second-order direct partial derivative)}$$
$$z_{xy} = f_{xy} = \frac{\partial}{\partial y}\left(\frac{\partial z}{\partial x}\right) = \frac{\partial^2 z}{\partial y\, \partial x} \qquad \text{(second-order cross partial derivative)}$$
• 21. The Equality of Mixed (Cross) Partial Derivatives
$$z_{yx} = f_{yx} = \frac{\partial}{\partial x}\left(\frac{\partial z}{\partial y}\right) = \frac{\partial^2 z}{\partial x\, \partial y} \qquad \text{(second-order cross partial derivative)}$$
• If the cross (mixed) partial derivatives $f_{xy}$ and $f_{yx}$ are continuous and finite in their domain, then they are equal to one another, i.e.
$$f_{xy} = f_{yx} \qquad \text{or} \qquad \frac{\partial^2 z}{\partial y\, \partial x} = \frac{\partial^2 z}{\partial x\, \partial y}$$
• Schematically: $z = f(x, y)$ branches into $f_x$ and $f_y$, which in turn branch into $f_{xx}$, $f_{xy} = f_{yx}$ and $f_{yy}$.
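A quick sympy check of this equality on a sample function (the particular cubic is an arbitrary choice; any smooth function would do):

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x**3 - 6*x*y + 8*y**3         # sample smooth function

f_xy = sp.diff(f, x, y)           # differentiate w.r.t. x, then y
f_yx = sp.diff(f, y, x)           # differentiate w.r.t. y, then x
print(f_xy, f_yx, f_xy == f_yx)   # -6 -6 True
```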
• 22. Total Differential
• The meaning of the differential for a multi-variable scalar function is no different from that for a one-variable function. The only difference is that the source of change in the dependent variable is the change of all independent variables, that is:
$$z + \Delta z = f(x + \Delta x,\, y + \Delta y) \qquad \text{or} \qquad \Delta z = f(x + \Delta x,\, y + \Delta y) - f(x, y)$$
But $dz$, which is called the "total differential", is defined as:
$$dz = \frac{\partial z}{\partial x} \cdot dx + \frac{\partial z}{\partial y} \cdot dy \qquad \text{or} \qquad dz = f_x \cdot dx + f_y \cdot dy$$
[Figure adapted from Calculus: Early Transcendentals, James Stewart, p. 897.]
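The total differential is the linear approximation of the exact change $\Delta z$. A numeric sympy sketch (illustrative; the function and the evaluation point are arbitrary choices, reusing the surface from earlier slides) shows how close $dz$ is to $\Delta z$ for small steps:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = 10 - x**2 - y**2                 # surface used in earlier slides

fx, fy = sp.diff(f, x), sp.diff(f, y)

# Compare the exact change in z with the total differential dz
# at (x, y) = (1, 2) for small steps dx = dy = 0.01.
p = {x: 1, y: 2}
dx = dy = 0.01
delta_z = f.subs({x: 1 + dx, y: 2 + dy}) - f.subs(p)   # exact change
dz = fx.subs(p) * dx + fy.subs(p) * dy                 # total differential
print(float(delta_z), float(dz))     # -0.0602 vs -0.06
```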
• 23. Total Differential
• For a multi-variable scalar function the same rule applies:
$$z = f(x_1, x_2, \ldots, x_n) \;\Rightarrow\; dz = \frac{\partial z}{\partial x_1} \cdot dx_1 + \frac{\partial z}{\partial x_2} \cdot dx_2 + \cdots + \frac{\partial z}{\partial x_n} \cdot dx_n$$
• In the case of the two-variable function $z = f(x, y)$ we assumed $x$ and $y$ are independent, but if they depend on other variables, the differential of each one of them can be treated as the total differential of a dependent variable, that is:
$$z = f(x, y) \;\Rightarrow\; dz = \frac{\partial z}{\partial x} \cdot dx + \frac{\partial z}{\partial y} \cdot dy \qquad (A)$$
$$x = h(r, s) \;\Rightarrow\; dx = \frac{\partial x}{\partial r} \cdot dr + \frac{\partial x}{\partial s} \cdot ds \qquad (B)$$
$$y = k(r, s) \;\Rightarrow\; dy = \frac{\partial y}{\partial r} \cdot dr + \frac{\partial y}{\partial s} \cdot ds \qquad (C)$$
Substituting (B) and (C) into (A):
$$dz = \frac{\partial z}{\partial x} \cdot \left(\frac{\partial x}{\partial r} \cdot dr + \frac{\partial x}{\partial s} \cdot ds\right) + \frac{\partial z}{\partial y} \cdot \left(\frac{\partial y}{\partial r} \cdot dr + \frac{\partial y}{\partial s} \cdot ds\right)$$
• 24. Total Differential
If we are looking for the total derivatives of $z$ with respect to $r$ and $s$, which were introduced before as the chain rule (Case 3), we need to suppose that $r$ and $s$ are independent variables, not associated with each other ($\frac{ds}{dr} = \frac{dr}{ds} = 0$). Collecting terms:
$$dz = \left(\frac{\partial z}{\partial x} \cdot \frac{\partial x}{\partial r} + \frac{\partial z}{\partial y} \cdot \frac{\partial y}{\partial r}\right) dr + \left(\frac{\partial z}{\partial x} \cdot \frac{\partial x}{\partial s} + \frac{\partial z}{\partial y} \cdot \frac{\partial y}{\partial s}\right) ds$$
$$\frac{\partial z}{\partial r} = \frac{\partial z}{\partial x} \cdot \frac{\partial x}{\partial r} + \frac{\partial z}{\partial y} \cdot \frac{\partial y}{\partial r} \qquad \text{and} \qquad \frac{\partial z}{\partial s} = \frac{\partial z}{\partial x} \cdot \frac{\partial x}{\partial s} + \frac{\partial z}{\partial y} \cdot \frac{\partial y}{\partial s}$$
• 25. Second Order Total Differential
• The sign of the second order total differential $d^2z$ shows the convexity or concavity of the surface with respect to the $xOy$ plane.
• Considering the total differential $dz$, the second order total differential $d^2z$ can be obtained by applying the differential rules:
$$d^2z = d(dz) = d\left(\frac{\partial z}{\partial x} \cdot dx + \frac{\partial z}{\partial y} \cdot dy\right) = d(f_x \cdot dx + f_y \cdot dy) = df_x \cdot dx + f_x \cdot d(dx) + df_y \cdot dy + f_y \cdot d(dy)$$
As $d(dx) = d^2x = 0$ and $d(dy) = d^2y = 0$, and $df_x = f_{xx} \cdot dx + f_{xy} \cdot dy$, $df_y = f_{yx} \cdot dx + f_{yy} \cdot dy$, therefore:
$$d^2z = f_{xx} \cdot dx^2 + 2f_{xy} \cdot dx \cdot dy + f_{yy} \cdot dy^2$$
• Factorising $dy^2$ from the right-hand side, we have:
$$d^2z = dy^2 \left[ f_{xx} \left(\frac{dx}{dy}\right)^2 + 2f_{xy} \left(\frac{dx}{dy}\right) + f_{yy} \right]$$
• 26. Second Order Differential
• $dy^2 > 0$ (why?); so the sign of $d^2z$ depends on the sign of the expression in the bracket.
• From elementary algebra we know that the quadratic form $aX^2 + bX + c$ has the same sign as the parameter $a$ when $\Delta = b^2 - 4ac < 0$.
• If we set $X = \frac{dx}{dy}$ and $a = f_{xx}$, $b = 2f_{xy}$, $c = f_{yy}$, then $d^2z = dy^2 \cdot (aX^2 + bX + c)$ has the same sign as $a = f_{xx}$ if
$$(2f_{xy})^2 - 4f_{xx}\, f_{yy} < 0 \;\Rightarrow\; f_{xx}\, f_{yy} > f_{xy}^2$$
So:
1. $d^2z > 0$ if $f_{xx} > 0$ and $f_{xx}\, f_{yy} > f_{xy}^2$.
2. $d^2z < 0$ if $f_{xx} < 0$ and $f_{xx}\, f_{yy} > f_{xy}^2$.
[Adapted from Calculus: Early Transcendentals, James Stewart.]
• 27. Optimising of Two Variables Functions
• The two-variable function $z = f(x, y)$ has a relative maximum (relative minimum) at a point in its domain if at that point:
i. $f_x = 0$ and $f_y = 0$, simultaneously (the necessary conditions for differentiable functions);
ii. $f_{xx} < 0$ ($f_{xx} > 0$);
iii. $f_{xx} \cdot f_{yy} - f_{xy}^2 > 0$ (conditions ii and iii together are the sufficient conditions).
Note 1: If $f_{xx} \cdot f_{yy} - f_{xy}^2 < 0$, the critical point is not a maximum or a minimum but a saddle point (it looks like a maximum from one axis but a minimum from another axis).
[Figure: saddle surface $z = x^2 - y^2$; adapted from http://commons.wikimedia.org/wiki/File:Saddle_point.png]
• 28. Optimising of Two Variables Functions
• Note 2: If $f_{xx} \cdot f_{yy} - f_{xy}^2 = 0$ at the critical point, further investigation is needed to find out the nature of the point.
• Example:
o Find the local extremum of the function $f(x, y) = 2x^3 - 6xy + 8y^3$, if any.
$$f_x = 0,\; f_y = 0 \;\Rightarrow\; 6x^2 - 6y = 0,\; -6x + 24y^2 = 0 \;\Rightarrow\; x^2 = y,\; -x + 4y^2 = 0$$
After solving these simultaneous equations, two critical points emerge: $A(0, 0, 0)$ and $B\left(\sqrt[3]{1/4},\, \sqrt[3]{1/16},\, -1/2\right)$.
• 29. Optimising of Two Variables Functions
Now, $f_{xx} = 12x$, $f_{yy} = 48y$ and $f_{xy} = f_{yx} = -6$. So:
$$f_{xx} \cdot f_{yy} - f_{xy}^2 = 12x \cdot 48y - (-6)^2 = 576xy - 36$$
At the point $A(0, 0, 0)$: $f_{xx} \cdot f_{yy} - f_{xy}^2 = -36 < 0$, so $A$ is a saddle point.
At the point $B\left(\sqrt[3]{1/4},\, \sqrt[3]{1/16},\, -1/2\right)$: $f_{xx} \cdot f_{yy} - f_{xy}^2 = 144 - 36 = 108 > 0$ and $f_{xx} > 0$, so this point is a local minimum.
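The whole example can be reproduced mechanically: solve the first-order conditions, then apply the second-order test at each critical point. A sympy sketch (illustrative; the classification labels simply follow the rules stated above):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = 2*x**3 - 6*x*y + 8*y**3

fx, fy = sp.diff(f, x), sp.diff(f, y)
crit = sp.solve([fx, fy], [x, y], dict=True)   # real critical points

fxx, fyy, fxy = sp.diff(f, x, 2), sp.diff(f, y, 2), sp.diff(f, x, y)
for pt in crit:
    D = sp.simplify((fxx*fyy - fxy**2).subs(pt))  # second-order test
    if D < 0:
        kind = 'saddle point'
    elif D > 0:
        kind = 'local minimum' if fxx.subs(pt) > 0 else 'local maximum'
    else:
        kind = 'inconclusive'
    print(pt, D, kind)
```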
• 30. The Jacobian & Hessian Determinants
• From matrix algebra we know that for any square matrix $A$, if $|A| = 0$ then $A$ is a singular matrix, which means there exists linear dependence between at least two rows or two columns of the matrix. And if $|A| \neq 0$ then $A$ is a non-singular matrix, which means all rows and all columns are linearly independent.
• So, to test for linear dependence between the equations in a simultaneous system, the determinant of the coefficients matrix can be used.
• 31. The Jacobian & Hessian Determinants
• To test for functional dependence (both linear and non-linear) between different functions we use the Jacobian determinant, shown by $|J|$.
• The Jacobian matrix is the matrix of all first-order partial derivatives of a vector function $F: R^n \to R^m$, which maps a vector in $n$-dimensional space (real $n$-tuples) to a vector in $m$-dimensional space (real $m$-tuples):
$$y_1 = F_1(x_1, x_2, \ldots, x_n), \quad y_2 = F_2(x_1, x_2, \ldots, x_n), \quad \ldots, \quad y_m = F_m(x_1, x_2, \ldots, x_n)$$
So, the Jacobian matrix of $F$ is:
$$J = \begin{bmatrix} \frac{\partial F_1}{\partial x_1} & \cdots & \frac{\partial F_1}{\partial x_n} \\ \vdots & \ddots & \vdots \\ \frac{\partial F_m}{\partial x_1} & \cdots & \frac{\partial F_m}{\partial x_n} \end{bmatrix}$$
Each row contains the partial derivatives of one of the functions (e.g. $F_1$) with respect to all independent variables $x_1, x_2, \ldots, x_n$.
• 32. The Jacobian & Hessian Determinants
• If $m = n$, the Jacobian matrix is a square matrix and its determinant shows whether there is functional dependence or independence between the functions:
$|J| = 0$ ⟹ the equations are functionally dependent; there is a linear or non-linear association between the functions.
$|J| \neq 0$ ⟹ the equations are functionally independent; there is no linear or non-linear association between the functions.
Example: Use the Jacobian determinant to test the functional dependency of the following equations:
$$y_1 = 2x_1 - 3x_2, \qquad y_2 = 4x_1^2 - 12x_1x_2 + 9x_2^2$$
• 33. The Jacobian & Hessian Determinants
• The Jacobian determinant is:
$$|J| = \begin{vmatrix} \frac{\partial y_1}{\partial x_1} & \frac{\partial y_1}{\partial x_2} \\ \frac{\partial y_2}{\partial x_1} & \frac{\partial y_2}{\partial x_2} \end{vmatrix} = \begin{vmatrix} 2 & -3 \\ 8x_1 - 12x_2 & -12x_1 + 18x_2 \end{vmatrix} = 2(-12x_1 + 18x_2) - (-3)(8x_1 - 12x_2) = 0$$
• So, the functions are not independent.
• We expected such a result, as we know that there is a quadratic functional relationship between $y_1$ and $y_2$: $y_1^2 = y_2$.
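sympy's Matrix.jacobian makes this test a one-liner; the sketch below (added for illustration) reproduces the example:

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
y1 = 2*x1 - 3*x2
y2 = 4*x1**2 - 12*x1*x2 + 9*x2**2

J = sp.Matrix([y1, y2]).jacobian([x1, x2])
print(J)                    # Matrix([[2, -3], [8*x1 - 12*x2, -12*x1 + 18*x2]])
print(sp.expand(J.det()))   # 0  =>  functionally dependent (y2 = y1**2)
```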
• 34. The Jacobian & Hessian Determinants
• The Hessian matrix is a square matrix composed of the second-order partial derivatives of a real (scalar) multi-variable function ($f: R^n \to R$). For a function $y = f(x_1, x_2, \ldots, x_n)$, the Hessian determinant is defined as:
$$|H| = \begin{vmatrix} \frac{\partial^2 f}{\partial x_1^2} & \frac{\partial^2 f}{\partial x_1 \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_1 \partial x_n} \\ \frac{\partial^2 f}{\partial x_2 \partial x_1} & \frac{\partial^2 f}{\partial x_2^2} & \cdots & \frac{\partial^2 f}{\partial x_2 \partial x_n} \\ \vdots & \vdots & \ddots & \vdots \\ \frac{\partial^2 f}{\partial x_n \partial x_1} & \frac{\partial^2 f}{\partial x_n \partial x_2} & \cdots & \frac{\partial^2 f}{\partial x_n^2} \end{vmatrix} = \begin{vmatrix} f_{11} & f_{12} & \ldots & f_{1n} \\ f_{21} & f_{22} & \ldots & f_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ f_{n1} & f_{n2} & \ldots & f_{nn} \end{vmatrix}$$
• In the optimisation of a two-variable function, if the first-order (necessary) conditions $f_x = f_y = 0$ are met, the second-order (sufficient) conditions are:
• $f_{xx}, f_{yy} > 0$ for a minimum and $f_{xx}, f_{yy} < 0$ for a maximum;
• $f_{xx} \cdot f_{yy} - f_{xy}^2 > 0$.
• 35. The Jacobian & Hessian Determinants
• Using the Hessian determinant, we can simply state the sufficient conditions as:
• The optimal point is a minimum if $|H_1| > 0$ and $|H_2| > 0$, because:
o $|H_1| = f_{xx} > 0$
o $|H_2| = \begin{vmatrix} f_{xx} & f_{xy} \\ f_{yx} & f_{yy} \end{vmatrix} = f_{xx} f_{yy} - f_{xy}^2 > 0$
• And the optimal point is a maximum if $|H_1| < 0$ and $|H_2| > 0$.
• The same story holds for a multi-variable function $y = f(x_1, x_2, \ldots, x_n)$:
• If $|H_1|, |H_2|, |H_3|, \ldots, |H_n| > 0$, the critical point is a local minimum.
• If the principal minors change their signs consecutively, the critical point is a local maximum (e.g. in the case of $y = f(x_1, x_2, x_3)$: $|H_1| < 0$, $|H_2| > 0$ and $|H_3| < 0$).
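sympy also provides a hessian helper; the sketch below (illustrative, reusing the earlier cubic example) checks the leading principal minors at its critical point $B$:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = 2*x**3 - 6*x*y + 8*y**3          # same example as before

H = sp.hessian(f, (x, y))            # [[12*x, -6], [-6, 48*y]]

# Leading principal minors at the critical point B:
B = {x: sp.Rational(1, 4)**sp.Rational(1, 3),
     y: sp.Rational(1, 16)**sp.Rational(1, 3)}
H1 = H[0, 0].subs(B)                 # f_xx at B
H2 = sp.simplify(H.subs(B).det())    # f_xx*f_yy - f_xy**2 at B
print(H1 > 0, H2)                    # True 108  =>  local minimum
```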
• 36. Optimisation with a Constraint
• In reality, the independent variables in a function are not always fully independent from each other. They might be in a linear or even non-linear relationship with one another, which creates a constraint in the process of optimisation and changes its result.
[Figures: optimisation of $z = f(x, y)$ subject to a non-linear constraint $g(x, y) = c$ and a linear constraint; adapted from http://staff.www.ltu.se/~larserik/applmath/chap7en/part7.html and (altered) from http://en.wikipedia.org/wiki/Lagrange_multiplier]
• 37. Optimisation with a Constraint
• In each case, the function $z = f(x, y)$ is the target function for optimisation, subject to a constraint $g(x, y) = c$ (where $c$ is a constant). So:
$$\text{Max or Min}: \; z = f(x, y) \qquad \text{Subject to}: \; g(x, y) = c$$
• If the constraint function $g(x, y) = c$ is linear (e.g. $x - 2y = -1$), one way to include the constraint in the optimisation process is to express one variable in terms of the other from the constraint function (here $x = 2y - 1$) and substitute it into the target function, making it a function of one independent variable, $z = F(y)$; we can then follow the optimisation process for a one-variable function.
• 38. Example
• Example: Find the maximum of the function $z = xy$ subject to the constraint $x + y = 1$.
From the constraint function we have $y = -x + 1$, and if we substitute this for $y$ in the target function, we will have $z = -x^2 + x$.
$$\frac{dz}{dx} = 0 \;\Rightarrow\; -2x + 1 = 0 \;\Rightarrow\; x = 0.5$$
Putting this into the constraint equation to find $y$, and both into the target function to find $z$, the maximum point will be $A(0.5,\, 0.5,\, 0.25)$.
• How do we know the point is a maximum point?
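One way to answer the question: after substitution the problem is one-variable, so the sign of the second derivative decides. A minimal sympy check (added for illustration):

```python
import sympy as sp

x = sp.symbols('x', real=True)
z = x * (1 - x)          # substitute y = 1 - x into z = x*y

xstar = sp.solve(sp.diff(z, x), x)[0]      # first-order condition
ystar, zstar = 1 - xstar, z.subs(x, xstar)
print(xstar, ystar, zstar)                 # 1/2 1/2 1/4
print(sp.diff(z, x, 2))                    # -2 < 0, so A is a maximum
```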
• 39. The Lagrange Method
• If the constraint function is non-linear, the previous method might become very complicated. Another method, called the "Lagrange Method" or the "Method of Lagrange Multipliers", can help us find local extremum points.
• In the Lagrange method the constraint function comes into the process of optimisation through a new variable $\lambda$ (the Lagrange multiplier, or Lagrange coefficient), which is used to form the Lagrange function $L$:
$$L(x, y, \lambda) = f(x, y) + \lambda \cdot [c - g(x, y)]$$
• By changing $x$ and $y$, a point moves on the surface of the function, but the movement is limited to the constraint $g(x, y) = c$.
• This means $c - g(x, y) = 0$ and $L(x, y, \lambda) = f(x, y)$. So, the optimisation of $L$ is equivalent to the optimisation of $f$.
• 40. The Lagrange Method
• To find the extremum values we need to take the derivatives of the Lagrange function with respect to its variables and solve the following simultaneous equations (the necessary conditions for having extremums):
$$\frac{\partial L}{\partial x} = 0 \;\Rightarrow\; \frac{\partial f}{\partial x} - \lambda \cdot \frac{\partial g}{\partial x} = 0$$
$$\frac{\partial L}{\partial y} = 0 \;\Rightarrow\; \frac{\partial f}{\partial y} - \lambda \cdot \frac{\partial g}{\partial y} = 0 \qquad (A)$$
$$\frac{\partial L}{\partial \lambda} = 0 \;\Rightarrow\; c - g(x, y) = 0$$
• Solving these simultaneous equations gives us the critical values of $x$ and $y$ and a value for $\lambda$.
• $\lambda$ shows the sensitivity of the target (objective) function to a change in the constraint function.
• 41. Sufficient Condition
• To make sure that the critical point(s) obtained from solving the simultaneous equations are extremum(s), we need sufficient evidence, which is the sign of the second order differential of the Lagrange function, $d^2L$, at the critical point(s).
• If $L = f(x, y) + \lambda \cdot [c - g(x, y)]$ then $dL = df + (c - g) \cdot d\lambda - \lambda \cdot dg$, and
$$d^2L = d^2f - dg \cdot d\lambda + (c - g) \cdot d^2\lambda - d\lambda \cdot dg - \lambda \cdot d^2g$$
Since:
• $d^2\lambda = 0$ (and $c - g = 0$ on the constraint)
• $d^2f = f_{xx} \cdot dx^2 + 2f_{xy} \cdot dx \cdot dy + f_{yy} \cdot dy^2$
• $dg = g_x \cdot dx + g_y \cdot dy$
• $d^2g = g_{xx} \cdot dx^2 + 2g_{xy} \cdot dx \cdot dy + g_{yy} \cdot dy^2$, therefore:
• 42. Sufficient Condition
$$d^2L = (f_{xx} - \lambda g_{xx})\, dx^2 + 2(f_{xy} - \lambda g_{xy})\, dx\, dy + (f_{yy} - \lambda g_{yy})\, dy^2 - 2g_x\, dx\, d\lambda - 2g_y\, dy\, d\lambda$$
$$= L_{xx}\, dx^2 + 2L_{xy}\, dx\, dy + L_{yy}\, dy^2 - 2g_x\, dx\, d\lambda - 2g_y\, dy\, d\lambda$$
• In matrix form we can use the bordered Hessian matrix to represent the above quadratic form:
$$d^2L = \begin{bmatrix} d\lambda & dx & dy \end{bmatrix} \begin{bmatrix} 0 & -g_x & -g_y \\ -g_x & L_{xx} & L_{xy} \\ -g_y & L_{yx} & L_{yy} \end{bmatrix} \begin{bmatrix} d\lambda \\ dx \\ dy \end{bmatrix}$$
• Where the bordered Hessian matrix is:
$$H_3 = \begin{bmatrix} 0 & -g_x & -g_y \\ -g_x & L_{xx} & L_{xy} \\ -g_y & L_{yx} & L_{yy} \end{bmatrix} \qquad \text{or sometimes} \qquad H_3 = \begin{bmatrix} L_{xx} & L_{xy} & -g_x \\ L_{yx} & L_{yy} & -g_y \\ -g_x & -g_y & 0 \end{bmatrix}$$
• 43. Sufficient Condition
• In the second form, the components of the vectors of first differentials of the variables need to be re-arranged accordingly, i.e.:
$$d^2L = \begin{bmatrix} dx & dy & d\lambda \end{bmatrix} \begin{bmatrix} L_{xx} & L_{xy} & -g_x \\ L_{yx} & L_{yy} & -g_y \\ -g_x & -g_y & 0 \end{bmatrix} \begin{bmatrix} dx \\ dy \\ d\lambda \end{bmatrix}$$
• Note: In some books the constraint function $g$ enters the Lagrange function with a positive sign, so the signs of the first derivatives of $g$ in the bordered Hessian matrix are positive, but there is no difference between their determinants. (Based on the properties of determinants, if a single row or a single column of a matrix is multiplied by $k$, the determinant of the matrix is multiplied by $k$. In this case, the first row and the first column are each multiplied by $-1$, so the determinant is multiplied by $(-1) \times (-1) = 1$.)
• 44. Sufficient Condition
So, we have a minimum if:
1. $d^2L > 0$ at the critical point; in the two-variable case this means the bordered determinant is negative, $|H_3| < 0$ (note that $|H_2| = -g_x^2$ is automatically negative).
And a maximum if:
2. $d^2L < 0$ at the critical point; in the two-variable case, $|H_3| > 0$.
• For a multi-variable function $y = f(x_1, x_2, \ldots, x_n)$ with one constraint, the bordered Hessian matrix is $(n+1) \times (n+1)$ but the rule is the same:
• For a minimum: $|H_3|, |H_4|, \ldots, |H_{n+1}| < 0$.
• For a maximum: the signs of the bordered principal minors change consecutively, starting with $|H_3| > 0$.
• 45. Example
• Find the extremums of the function $f(x, y) = x - y$ subject to $x^2 + y^2 = 100$, if any.
$$L(x, y, \lambda) = x - y + \lambda[100 - x^2 - y^2]$$
$$L_x = 1 - 2\lambda x = 0, \qquad L_y = -1 - 2\lambda y = 0, \qquad L_\lambda = 100 - x^2 - y^2 = 0$$
Dividing the first two equations, $\frac{1}{-1} = \frac{2\lambda x}{2\lambda y}$, $\lambda$ can be eliminated and we have $x = -y$. Substituting this new equation into the third equation we will have:
$$100 - y^2 - y^2 = 0 \;\Rightarrow\; y = \pm 5\sqrt{2}$$
So, the critical points are $A(-5\sqrt{2},\, 5\sqrt{2},\, -10\sqrt{2})$ and $B(5\sqrt{2},\, -5\sqrt{2},\, 10\sqrt{2})$, with $\lambda = \mp\frac{\sqrt{2}}{20}$ respectively. Even without any further investigation it can be said that point $A$ is a minimum and point $B$ is a maximum. (Why?)
• 46. Example
• Using the bordered Hessian determinant method we have:
$$|H_3| = \begin{vmatrix} 0 & -2x & -2y \\ -2x & -2\lambda & 0 \\ -2y & 0 & -2\lambda \end{vmatrix} = 8\lambda(x^2 + y^2)$$
Obviously, the sign of this determinant depends on the sign of $\lambda$.
• At point $A(-5\sqrt{2},\, 5\sqrt{2},\, -10\sqrt{2})$, $\lambda = -\frac{\sqrt{2}}{20}$, so $|H_3| < 0$ and the point is a minimum ($|H_2|$ is also negative).
• At point $B(5\sqrt{2},\, -5\sqrt{2},\, 10\sqrt{2})$, $\lambda = +\frac{\sqrt{2}}{20}$, so $|H_3| > 0$ and the point is a maximum.
• If there is more than one constraint, the process of optimisation is the same but there will be more than one Lagrange multiplier.
• This case is a generalisation of the previous one and will not be discussed here.
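The full example, including the bordered Hessian classification, can be verified symbolically. The sketch below is illustrative (lam stands for $\lambda$, since lambda is a reserved word in Python):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', real=True)
L = (x - y) + lam * (100 - x**2 - y**2)

foc = [sp.diff(L, v) for v in (x, y, lam)]     # first-order conditions
sols = sp.solve(foc, [x, y, lam], dict=True)

# Bordered Hessian for the constraint g(x, y) = x**2 + y**2
gx, gy = 2*x, 2*y
H3 = sp.Matrix([[0,   -gx,               -gy],
                [-gx,  sp.diff(L, x, 2),  sp.diff(L, x, y)],
                [-gy,  sp.diff(L, y, x),  sp.diff(L, y, 2)]])

for s in sols:
    d = H3.det().subs(s)    # equals 8*lam*(x**2 + y**2)
    print(s, 'maximum' if d > 0 else 'minimum')
```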
• 47. Interpretation of the Lagrange Multiplier λ
• The first-order conditions in the form of the simultaneous equations (A) (slide 40) provide the critical (and perhaps optimal) values of the independent variables $(x^*, y^*)$ and the corresponding value(s) of the Lagrange multiplier $(\lambda^*)$.
• The Lagrange multiplier shows the sensitivity of the optimal value of the target (objective) function $(f^*)$ to a change in the constant value of the constraint function $(c)$. It is calculated as the ratio:
$$\lambda^* = \frac{\partial f^*(x^*, y^*)}{\partial c}$$
This means that if $\lambda^* = 2$ and $c$ increases by one unit, the optimal value of the target function (calculated at the optimal values $x^*$ and $y^*$) increases by approximately 2 units.
• 48. Duality in Optimisation Analysis
• Consider the process of maximisation of the target (objective) function $z = f(x, y)$ subject to the constraint $g(x, y) = c$.
• As we know, the solution is the point of tangency between the two curves, so the process of optimisation can be carried out through different approaches. The primal approach is what we have discussed and done so far; the dual approach is when the constraint function becomes the new target function and $z = f(x, y)$ becomes the new constraint.
• The initial idea comes from the mathematical fact that if $f$ reaches its maximum at the point $x = x^*$, the function $-f$ will have a minimum at that point.
• Therefore, instead of finding the maximum of $z = f(x, y)$ subject to the constraint $g(x, y) = c$, we can find the minimum of $g(x, y)$ subject to the constraint $z = f(x, y)$; i.e. if we know that $z$ cannot be bigger than $z^*$, what is the minimum value of $g(x, y)$ which satisfies this constraint?
• 49. Duality in Optimisation Analysis
• Let $U = U(x, y)$ be the utility function, subject to the budget constraint $x P_x + y P_y = m$.
• The Lagrange function is:
$$L(x, y, \lambda) = U(x, y) + \lambda(m - x P_x - y P_y)$$
The first-order conditions are:
$$L_x = U_x - \lambda P_x = 0, \qquad L_y = U_y - \lambda P_y = 0, \qquad L_\lambda = m - x P_x - y P_y = 0 \qquad (B)$$
• The optimal values of $x$ and $y$, which give the Marshallian demand (consumption) functions for $x$ and $y$, and the optimal value of $\lambda$, are:
$$x^M = x^M(P_x, P_y, m), \qquad y^M = y^M(P_x, P_y, m), \qquad \lambda^M = \lambda^M(P_x, P_y, m)$$
• 50. Duality in Optimisation Analysis
• Substituting these solutions into the target function gives the maximum value of utility that can be achieved under the constraint:
$$U^* = U^*\big(x^M(P_x, P_y, m),\; y^M(P_x, P_y, m)\big)$$
We call this the indirect utility function: it is the maximum value of utility, obtained at the optimal values of $x$ and $y$, but it is an indirect function because its value now depends on the parameters $P_x$, $P_y$ and $m$.
• Now, the dual problem is when the expenditure on $x$ and $y$ is minimised subject to maintaining a given level of utility $U^*$. So, the new Lagrange function is:
$$L(x, y, \lambda) = x P_x + y P_y + \lambda[U^* - U(x, y)]$$
The first-order conditions provide optimal solutions for $x$, $y$ and $\lambda$.
• 51. Duality in Optimisation Analysis
$$L_x = P_x - \lambda U_x = 0, \qquad L_y = P_y - \lambda U_y = 0, \qquad L_\lambda = U^* - U(x, y) = 0 \qquad (C)$$
The optimal solutions represent the demand functions for $x$ and $y$:
$$x^H = x^H(P_x, P_y, U^*), \qquad y^H = y^H(P_x, P_y, U^*), \qquad \lambda^H = \lambda^H(P_x, P_y, U^*)$$
• The first two equations are called Hicksian demand functions. Both systems of simultaneous equations, (B) and (C), give us the same result:
$$\frac{U_x}{P_x} = \frac{U_y}{P_y} \qquad \text{or} \qquad \frac{U_x}{U_y} = \frac{P_x}{P_y}$$
So, primal and dual analysis lead us to the same conclusion. The only difference is that $\lambda^H = \frac{1}{\lambda^M}$.
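As a concrete illustration of the primal problem and the tangency condition, here is a sympy sketch with a hypothetical utility function $U = xy$ (this specific $U$ is an assumption chosen only for illustration; it is not part of the slides):

```python
import sympy as sp

x, y, lam = sp.symbols('x y lam', positive=True)
Px, Py, m = sp.symbols('P_x P_y m', positive=True)

# Hypothetical utility function, chosen only for illustration: U = x*y
U = x * y

# Primal problem: maximise U subject to the budget x*Px + y*Py = m
L = U + lam * (m - x*Px - y*Py)
sol = sp.solve([sp.diff(L, x), sp.diff(L, y), sp.diff(L, lam)],
               [x, y, lam], dict=True)[0]
print(sol[x], sol[y])        # Marshallian demands: m/(2*P_x), m/(2*P_y)

# The tangency condition U_x/U_y = Px/Py holds at the optimum:
mrs = (sp.diff(U, x) / sp.diff(U, y)).subs({x: sol[x], y: sol[y]})
print(sp.simplify(mrs - Px/Py))   # 0
```

For this particular $U$, the same demands can be recovered from the dual (expenditure-minimisation) problem, which is a quick way to see the $\lambda^H = 1/\lambda^M$ relationship in practice.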