2. Multi-Variable Functions
• For a one-variable function of the form y = f(x), the variable x is called the "independent variable" and y the "dependent variable".
• There are many examples of y depending on x (e.g. the boiling state of water depends on the amount of heat, or consumption expenditure depends on the level of income), but the concept of a function should be understood beyond the concept of dependency. In most cases dependency is not the issue at all; the modern concept of function is based on the idea of mapping.
3. Multi-Variable Functions
• When a painter paints a scene on a canvas, s(he) uses a correspondence rule (mapping rule): every point in three-dimensional space (R³) is mapped to one and only one point in two-dimensional space (R²).
• Mathematically speaking, the function f: R³ → R² can represent the type of mapping rule that the painter is applying.
4. The Concept of Function as Mapping
• Transformation of an object is a mapping from R² to R²;
• Mathematical operations describe a function from R² to R.
Reflection about the x-axis:
f: R² → R²
(a, b) → (a, −b)
Figure 1-6: Geometrical interpretation of the sum operator as a function; a transformation from the space R² to R:
f: R² → R
(a, b) → a + b
5. Multi-Variable Functions
• All basic mathematical operators, such as summation, subtraction, multiplication and division, introduce a function from two-dimensional space (R²) to the set of real numbers (one-dimensional space, R), that is:
f: R² → R
E.g. for division: (a, b) → a/b  (b ≠ 0)
• One important family of multi-variable functions is the "real (scalar) multi-variable function", which can be written as f: Rⁿ → R or simply y = f(x1, x2, …, xn), where y is the dependent variable and x1, x2, …, xn are the independent variables.
6. Two-Variable Functions
• A simple form of this function is when we have two independent variables x, y and one dependent variable z, in the form z = f(x, y). This is called a "two-variable function" as there are two independent variables.
• E.g. a Cobb-Douglas production function:
Q = f(K, L) = A·K^α·L^β
where Q is the level of production, K and L are the levels of capital and labour employed in production, respectively, and A, α and β are constants of the function.
Adopted from http://en.citizendium.org/wiki/File:Cobb-Douglas_with_dimishing_returns_to_scale.png (figure: the production surface over the (K, L) plane)
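As a quick sketch of the Cobb-Douglas form above (the parameter values and input levels below are illustrative choices, not from the slides):

```python
def cobb_douglas(K, L, A=1.0, alpha=0.5, beta=0.5):
    """Production Q = A * K**alpha * L**beta (illustrative parameters)."""
    return A * K**alpha * L**beta

# With alpha + beta = 1 the function has constant returns to scale:
# doubling both inputs doubles output.
q1 = cobb_douglas(4.0, 9.0)    # 1 * sqrt(4) * sqrt(9) = 6
q2 = cobb_douglas(8.0, 18.0)   # twice q1
```

With α + β < 1 the same doubling would less than double output (diminishing returns to scale), which is what the cited figure illustrates.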
7. Two-Variable Functions
• z = f(x, y) represents a functional relationship if for every ordered pair (x, y) in the domain of the function there is one and only one value of z in the range of the function.
o Which graph represents a function?
Ellipsoid:  x²/a² + y²/b² + z²/c² = 1
Hyperboloid of Two Sheets:  −x²/a² − y²/b² + z²/c² = 1
Hyperbolic Paraboloid:  x²/a² − y²/b² = z/c
Elliptic Paraboloid:  x²/a² + y²/b² = z/c
Adopted from http://tutorial.math.lamar.edu/Classes/CalcIII/QuadricSurfaces.aspx
8. Derivative of Two-Variable Functions
• Consider the function z = f(x, y); z changes if x or y or both of them change. If we control the change of y and allow just x to change, then the average change of z in terms of x is Δz/Δx. The limiting value of this ratio as Δx → 0 is called the "partial derivative of z with respect to x" and is denoted by:
∂z/∂x ,  ∂f(x, y)/∂x ,  z′x ,  fx
• The cutting plane in the figure shows that the variable y is controlled (fixed) at y = 1 while x can change from −2 to +2, and the movement is along the curve of intersection between the plane and the surface of the function.
Adopted from http://msemac.redwoods.edu/~darnold/math50c/matlab/pderiv/index.xhtml
9. Partial Differentiation
• If x is controlled (fixed) and y is allowed to change, the partial derivative of z with respect to y is denoted by:
∂z/∂y ,  ∂f(x, y)/∂y ,  z′y ,  fy
• The cutting plane shows that x is controlled (fixed) at x = 0 while y can change from −3 to +3, along the curve of intersection between the plane and the surface of the function.
Adopted from http://www.uwec.edu/math/Calculus/216-Spring2007/assignments.htm
10. Partial Differentiation
• So, in general, the slope of the function z = f(x, y) along the curve of intersection between the surface of the function and a cutting plane parallel to the x-axis, at any point of the domain, is:
∂z/∂x = fx = lim_{Δx→0} [f(x + Δx, y) − f(x, y)]/Δx = lim_{h→0} [f(x + h, y) − f(x, y)]/h
This means that when calculating ∂z/∂x the variable y should be treated as a constant. The same rule applies to multi-variable functions.
Adopted from http://moodle.capilanou.ca/mod/book/view.php?id=19700&chapterid=240 (figure: a surface z = f(x, y) cut by a plane of constant y)
11. Partial Differentiation
• And the slope of the function z = f(x, y) along the curve of intersection between the surface of the function and a cutting plane parallel to the y-axis, at any point of the domain, is:
∂z/∂y = fy = lim_{Δy→0} [f(x, y + Δy) − f(x, y)]/Δy = lim_{h→0} [f(x, y + h) − f(x, y)]/h
This means that when calculating ∂z/∂y the variable x should be treated as a constant. The same rule applies to multi-variable functions.
Adopted from http://moodle.capilanou.ca/mod/book/view.php?id=19700&chapterid=240 (figure: the same surface cut by a plane of constant x)
12. Partial Differentiation
• To find the partial derivatives (the slopes of the tangent lines on the surface) at a specific point P(a, b, c) we have:
• ∂f(x, y)/∂x at P(a, b, c) = lim_{h→0} [f(a + h, b) − f(a, b)]/h
• ∂f(x, y)/∂y at P(a, b, c) = lim_{h→0} [f(a, b + h) − f(a, b)]/h
Example:
o Find the partial derivatives of z = 10x²y³.
∂z/∂x = 20xy³ ,  ∂z/∂y = 30x²y²
Adopted from http://www.solitaryroad.com/c353.html
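The example above can be sanity-checked numerically with central finite differences (a sketch; the evaluation point and step size h are our own choices):

```python
def f(x, y):
    return 10 * x**2 * y**3

def partial_x(f, x, y, h=1e-6):
    # Central difference in x, holding y constant
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def partial_y(f, x, y, h=1e-6):
    # Central difference in y, holding x constant
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

# Compare with the analytic results 20*x*y**3 and 30*x**2*y**2 at (2, 3):
fx = partial_x(f, 2.0, 3.0)   # analytic: 20*2*27 = 1080
fy = partial_y(f, 2.0, 3.0)   # analytic: 30*4*9  = 1080
```

Holding the other variable constant inside each difference quotient is exactly the limit definition on slide 12, applied with a small but finite h.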
16. Chain Rule (Different Cases)
o Find the partial derivatives of the function z = e^(x/y) + cos(xy).
∂z/∂x = (1/y)·e^(x/y) − y·sin(xy) ,  ∂z/∂y = (−x/y²)·e^(x/y) − x·sin(xy)
• Case 2: If z = f(x, y) is a differentiable function of x and y, and these two variables are differentiable functions of t, such that x = g(t) and y = h(t), then:
dz/dt = (∂z/∂x)·(dx/dt) + (∂z/∂y)·(dy/dt)
The same rule applies to multi-variable functions.
o Find the derivative of z = √x − ln y when x = t and y = t² − 1:
dz/dt = 1·(1/(2√t)) − (1/y)·2t = 1/(2√t) − 2t/(t² − 1)
• Can you suggest another way?
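The "another way" is to substitute x and y into z first and differentiate in t directly; both routes must agree. A minimal numerical check (the point t = 2 and the step size are our own choices):

```python
import math

def z_of_t(t):
    # z = sqrt(x) - ln(y) with x = t and y = t**2 - 1 substituted directly
    return math.sqrt(t) - math.log(t**2 - 1)

def dz_dt_chain(t):
    # Chain-rule result from the slide: 1/(2*sqrt(t)) - 2t/(t**2 - 1)
    return 1 / (2 * math.sqrt(t)) - 2 * t / (t**2 - 1)

t, h = 2.0, 1e-6
numeric = (z_of_t(t + h) - z_of_t(t - h)) / (2 * h)  # derivative of the substituted form
```

Both values coincide, confirming that Case 2 of the chain rule and direct substitution give the same derivative.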
17. Chain Rule (Different Cases)
• Case 3: If z = f(x, y) is a differentiable function of x and y, and these two variables are differentiable functions of u and v, such that x = g(u, v) and y = h(u, v), with u and v independent from each other (∂u/∂v = ∂v/∂u = 0), then:
∂z/∂u = (∂z/∂x)·(∂x/∂u) + (∂z/∂y)·(∂y/∂u)  and  ∂z/∂v = (∂z/∂x)·(∂x/∂v) + (∂z/∂y)·(∂y/∂v)
• These derivatives are called the "total derivatives of z with respect to u and v".
o Find the partial derivatives of z = ∛(x² − y) where x = u² + v² and y = u/v.
18. Implicit Differentiation
• The Chain Rule can be used for implicit differentiation, even for one-variable functions:
F(x, y) = 0
Using the chain rule we have:
dF/dx = (∂F/∂x)·(dx/dx) + (∂F/∂y)·(dy/dx) = 0
As dx/dx = 1, so:
dy/dx = −(∂F/∂x)/(∂F/∂y) = −Fx/Fy
• The same rule can be used for implicit two- or multi-variable functions. For example, for an implicit function F(x, y, z) = 0, we have:
∂z/∂x = −(∂F/∂x)/(∂F/∂z) = −Fx/Fz  and  ∂z/∂y = −(∂F/∂y)/(∂F/∂z) = −Fy/Fz
19. Examples of Implicit Functions
o Find the slope of the tangent line on the curve of intersection between the surface x² + y² + z² = 9 and the plane y = 2 at the point A(1, 2, 2).
As y is fixed at 2, we are looking for ∂z/∂x at point A:
2x + 0 + 2z·(∂z/∂x) = 0 → ∂z/∂x = −x/z = −1/2
Or, using implicit differentiation:
∂z/∂x = −Fx/Fz = −2x/2z = −x/z
o Find ∂z/∂y for e^(x+y+z) = x² − 2y² + z².
(0 + 1 + ∂z/∂y)·e^(x+y+z) = 0 − 4y + 2z·(∂z/∂y) → ∂z/∂y = (e^(x+y+z) + 4y)/(2z − e^(x+y+z))
Use implicit differentiation for this question.
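The sphere example can be verified by solving for z explicitly on the upper sheet and differentiating numerically (a sketch; the step size is an arbitrary choice):

```python
import math

# Sphere x**2 + y**2 + z**2 = 9 with y fixed at 2: on the upper sheet
# z(x) = sqrt(9 - x**2 - 4). Compare its numerical slope at x = 1 with
# the implicit-differentiation result dz/dx = -x/z = -1/2 at A(1, 2, 2).
def z_of_x(x, y=2.0):
    return math.sqrt(9.0 - x**2 - y**2)

h = 1e-6
numeric_slope = (z_of_x(1.0 + h) - z_of_x(1.0 - h)) / (2 * h)
implicit_slope = -1.0 / z_of_x(1.0)   # -x/z at x = 1, where z = 2
```

Both slopes equal −1/2, matching the result obtained on this slide in two ways.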
20. Higher-Order Partial Derivatives
• For the function z = f(x, y) the partial derivatives ∂z/∂x and ∂z/∂y are, in general, themselves functions of x and y. So we can think of second partial derivatives of z, but in this case there are three different second derivatives:
z_xx = fxx = ∂(∂z/∂x)/∂x = ∂²z/∂x²  (second-order direct partial derivative)
z_yy = fyy = ∂(∂z/∂y)/∂y = ∂²z/∂y²  (second-order direct partial derivative)
z_xy = fxy = ∂(∂z/∂x)/∂y = ∂²z/(∂y∂x)  (second-order cross partial derivative)
21. The Equality of Mixed (Cross) Partial Derivatives
z_yx = fyx = ∂(∂z/∂y)/∂x = ∂²z/(∂x∂y)  (second-order cross partial derivative)
• If the cross (mixed) partial derivatives fxy and fyx are continuous and finite in their domain, then they are equal to one another; i.e.
fxy = fyx  or  ∂²z/(∂y∂x) = ∂²z/(∂x∂y)
In diagram form:
z = f(x, y) → ∂z/∂x = fx → fxx , fxy
z = f(x, y) → ∂z/∂y = fy → fyy , fyx   (with fxy = fyx)
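The equality of the mixed partials can be illustrated numerically with a symmetric second difference (a sketch; the example function and evaluation point are our own choices):

```python
def f(x, y):
    # Any smooth example works; this one is illustrative
    return x**3 * y**2 + x * y**4

def mixed_partial(f, x, y, h=1e-4):
    # Symmetric second difference: the same formula approximates both
    # f_xy and f_yx, reflecting the equality of the mixed partials
    return (f(x + h, y + h) - f(x + h, y - h)
            - f(x - h, y + h) + f(x - h, y - h)) / (4 * h * h)

# Analytic mixed partial: f_xy = 6*x**2*y + 4*y**3 = f_yx
val = mixed_partial(f, 1.0, 2.0)   # analytic value at (1, 2): 12 + 32 = 44
```

The symmetry of the difference stencil in x and y is the discrete counterpart of the equality stated on this slide.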
22. Total Differential
• The meaning of the differential for a multi-variable scalar function is no different from that for a one-variable function. The only difference is that the source of change in the dependent variable is the change of all independent variables; that is:
z + Δz = f(x + Δx, y + Δy)
or  Δz = f(x + Δx, y + Δy) − f(x, y)
But dz, which is called the "total differential", is defined as:
dz = (∂z/∂x)·dx + (∂z/∂y)·dy
or  dz = fx·dx + fy·dy
Adopted from Calculus: Early Transcendentals, James Stewart, p. 897
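The total differential approximates the exact change Δz for small dx and dy. A short sketch (the function and increments below are our own illustrative choices):

```python
def f(x, y):
    return x**2 * y          # illustrative smooth function

def fx(x, y):
    return 2 * x * y         # partial derivative in x

def fy(x, y):
    return x**2              # partial derivative in y

x, y = 3.0, 2.0
dx, dy = 0.01, -0.02
delta_z = f(x + dx, y + dy) - f(x, y)   # exact change
dz = fx(x, y) * dx + fy(x, y) * dy      # total differential: 0.12 - 0.18 = -0.06
```

The gap between delta_z and dz shrinks quadratically as the increments shrink, which is why dz is a good linear approximation of Δz.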
23. Total Differential
• For a multi-variable scalar function the same rule applies:
z = f(x1, x2, …, xn)
dz = (∂z/∂x1)·dx1 + (∂z/∂x2)·dx2 + … + (∂z/∂xn)·dxn
• In the case of the two-variable function z = f(x, y) we assumed x and y are independent, but if they depend on other variables, the differential of each of them can be treated as the total differential of a dependent variable; that is:
z = f(x, y) → dz = (∂z/∂x)·dx + (∂z/∂y)·dy   (A)
x = h(u, v) → dx = (∂x/∂u)·du + (∂x/∂v)·dv   (B)
y = g(u, v) → dy = (∂y/∂u)·du + (∂y/∂v)·dv   (C)
Substituting B and C into A:
dz = (∂z/∂x)·[(∂x/∂u)·du + (∂x/∂v)·dv] + (∂z/∂y)·[(∂y/∂u)·du + (∂y/∂v)·dv]
24. Total Differential
If we are looking for the total derivatives of z with respect to u and v, introduced before as the chain rule (case 3), we need to suppose that u and v are independent variables, not associated with each other (∂u/∂v = ∂v/∂u = 0); then:
dz = [(∂z/∂x)·(∂x/∂u) + (∂z/∂y)·(∂y/∂u)]·du + [(∂z/∂x)·(∂x/∂v) + (∂z/∂y)·(∂y/∂v)]·dv
∂z/∂u = (∂z/∂x)·(∂x/∂u) + (∂z/∂y)·(∂y/∂u)  and  ∂z/∂v = (∂z/∂x)·(∂x/∂v) + (∂z/∂y)·(∂y/∂v)
25. Second-Order Total Differential
• The sign of the second-order total differential d²z shows the convexity or concavity of the surface with respect to the xy-plane.
• Considering the total differential dz, the second-order total differential d²z can be obtained by applying the differential rules:
d²z = d(dz) = d[(∂z/∂x)·dx + (∂z/∂y)·dy]
= d(fx·dx + fy·dy)
= dfx·dx + fx·d(dx) + dfy·dy + fy·d(dy)
As d(dx) = d²x = 0 and d(dy) = d²y = 0, and
dfx = fxx·dx + fxy·dy
dfy = fyx·dx + fyy·dy
therefore:
d²z = fxx·dx² + 2fxy·dx·dy + fyy·dy²
• Factorising dy² from the right-hand side, we have:
d²z = dy²·[fxx·(dx/dy)² + 2fxy·(dx/dy) + fyy]
26. Second-Order Differential
• dy² > 0 (why?); so the sign of d²z depends on the sign of the expression in the bracket.
• From elementary algebra we know that the quadratic form aq² + bq + c has the same sign as the parameter a when Δ = b² − 4ac < 0.
• If we set q = dx/dy and a = fxx , b = 2fxy , c = fyy, then d²z = dy²·(aq² + bq + c) has the same sign as a = fxx if:
(2fxy)² − 4fxx·fyy < 0  ⟺  fxx·fyy > fxy²
So:
1. d²z > 0 if fxx > 0 and fxx·fyy > fxy².
2. d²z < 0 if fxx < 0 and fxx·fyy > fxy².
Adopted from Calculus: Early Transcendentals, James Stewart (various pages)
27. Optimising Two-Variable Functions
• The two-variable function z = f(x, y) has a relative maximum (relative minimum) at a point in its domain if at that point:
i. fx = 0 and fy = 0, simultaneously (the necessary conditions for differentiable functions);
ii. fxx < 0 (fxx > 0);
iii. fxx·fyy − fxy² > 0 (conditions ii and iii together are the sufficient conditions).
Note 1: If fxx·fyy − fxy² < 0, the critical point is not a maximum or a minimum but a saddle point (it looks like a maximum from one axis but a minimum from another axis).
Adopted from http://commons.wikimedia.org/wiki/File:Saddle_point.png (figure: the saddle surface z = x² − y²)
28. Optimising Two-Variable Functions
• Note 2: If fxx·fyy − fxy² = 0 at the critical point, further investigation is needed to find out the nature of the point.
• Example:
o Find the local extrema of the function f(x, y) = 2x³ − 6xy + 8y³, if any.
fx = 0 → 6x² − 6y = 0 → x² = y
fy = 0 → −6x + 24y² = 0 → −x + 4y² = 0
After solving these simultaneous equations two critical points emerge:
A(0, 0, 0) and B(∛(1/4), ∛(1/16), −1/2).
29. Optimising Two-Variable Functions
Now, fxx = 12x, fyy = 48y and fxy = fyx = −6.
So, fxx·fyy − fxy² = 12x·48y − (−6)² = 576xy − 36.
At the point A(0, 0, 0): fxx·fyy − fxy² = −36 < 0 → A is a saddle point.
At the point B(∛(1/4), ∛(1/16), −1/2): fxx·fyy − fxy² = 576·∛(1/64) − 36 = 144 − 36 = 108 > 0 and fxx > 0, so this point is a local minimum.
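The classification of both critical points can be checked with a short sketch applying the second-order conditions of this example:

```python
def fxx(x, y): return 12 * x
def fyy(x, y): return 48 * y
def fxy(x, y): return -6.0

def discriminant(x, y):
    # D = fxx*fyy - fxy**2 ; D < 0 -> saddle, D > 0 with fxx > 0 -> minimum
    return fxx(x, y) * fyy(x, y) - fxy(x, y)**2

xB = (1 / 4) ** (1 / 3)    # x-coordinate of B
yB = (1 / 16) ** (1 / 3)   # y-coordinate of B

D_A = discriminant(0.0, 0.0)   # -36 -> saddle point at A
D_B = discriminant(xB, yB)     # 576 * (1/64)**(1/3) - 36 = 108 -> check fxx
```

Since D_B > 0 and fxx(xB, yB) > 0, B is confirmed as a local minimum, while D_A < 0 confirms A as a saddle point.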
30. The Jacobian & Hessian Determinants
• From matrix algebra we know that for any square matrix A, if:
|A| = 0 ⟹ A is a singular matrix,
which means there exists linear dependence between at least two rows or two columns of the matrix.
And if:
|A| ≠ 0 ⟹ A is a non-singular matrix,
which means all rows and all columns are linearly independent.
• So, to test for linear dependence between the equations of a simultaneous system, the determinant of the coefficient matrix can be used.
31. The Jacobian & Hessian Determinants
• To test for functional dependence (both linear and non-linear) between different functions we use the Jacobian determinant, denoted |J|.
• The Jacobian matrix is the matrix of all first-order partial derivatives of a vector function F: Rⁿ → Rᵐ, which maps a vector in n-dimensional space (real n-tuples) to a vector in m-dimensional space (real m-tuples):
y1 = F1(x1, x2, …, xn)
y2 = F2(x1, x2, …, xn)
⋮
ym = Fm(x1, x2, …, xn)
So, the Jacobian matrix of F is:
J = [ ∂F1/∂x1 … ∂F1/∂xn ]
    [    ⋮     ⋱    ⋮    ]
    [ ∂Fm/∂x1 … ∂Fm/∂xn ]
Each row contains the partial derivatives of one of the functions (e.g. F1) with respect to all independent variables x1, x2, …, xn.
32. The Jacobian & Hessian Determinants
• If m = n, the Jacobian matrix is a square matrix and its determinant shows whether there is functional dependence or independence between the functions:
|J| = 0 ⟹ the functions are functionally dependent
(there is a linear or non-linear association between the functions);
|J| ≠ 0 ⟹ the functions are functionally independent
(there is no linear or non-linear association between the functions).
Example: Use the Jacobian determinant to test the functional dependency of the following equations:
y1 = 2x1 − 3x2
y2 = 4x1² − 12x1x2 + 9x2²
33. The Jacobian & Hessian Determinants
• The Jacobian determinant is:
|J| = | ∂y1/∂x1  ∂y1/∂x2 |  =  |     2             −3       |
      | ∂y2/∂x1  ∂y2/∂x2 |     | 8x1 − 12x2   −12x1 + 18x2  |
    = 2·(−12x1 + 18x2) − (−3)·(8x1 − 12x2) = 0
• So, the functions are not independent.
• We expected such a result, as we know there is a quadratic functional relationship between y1 and y2:
y1² = y2
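The vanishing determinant can be confirmed at a few sample points (a sketch; the sample points are arbitrary choices):

```python
def jacobian_det(x1, x2):
    # J = [[dy1/dx1, dy1/dx2], [dy2/dx1, dy2/dx2]] for
    # y1 = 2*x1 - 3*x2 and y2 = 4*x1**2 - 12*x1*x2 + 9*x2**2
    a, b = 2.0, -3.0
    c, d = 8 * x1 - 12 * x2, -12 * x1 + 18 * x2
    return a * d - b * c

# The determinant vanishes everywhere, confirming functional dependence
vals = [jacobian_det(x1, x2) for x1, x2 in [(1, 1), (2, -3), (0.5, 7)]]
```

A determinant that is zero identically (not just at isolated points) is what signals the functional dependence y1² = y2.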
34. The Jacobian & Hessian Determinants
• The Hessian matrix is a square matrix composed of the second-order partial derivatives of a real (scalar) multi-variable function f: Rⁿ → R. For a function y = f(x1, x2, …, xn), the Hessian determinant is defined as:
|H| = | ∂²f/∂x1²     ∂²f/∂x1∂x2  …  ∂²f/∂x1∂xn |   | f11 f12 … f1n |
      | ∂²f/∂x2∂x1   ∂²f/∂x2²    …  ∂²f/∂x2∂xn | = | f21 f22 … f2n |
      |     ⋮             ⋮       ⋱      ⋮      |   |  ⋮   ⋮  ⋱  ⋮  |
      | ∂²f/∂xn∂x1   ∂²f/∂xn∂x2  …  ∂²f/∂xn²   |   | fn1 fn2 … fnn |
• In the optimisation of a two-variable function, if the first-order (necessary) conditions fx = fy = 0 are met, the second-order (sufficient) conditions are:
➢ fxx, fyy > 0 for a minimum and fxx, fyy < 0 for a maximum
➢ fxx·fyy − fxy² > 0
35. The Jacobian & Hessian Determinants
• Using the Hessian determinant, we can simply express the sufficient conditions as:
✓ The optimal point is a minimum if |H1| > 0 and |H2| > 0, because:
o |H1| = fxx > 0
o |H2| = | fxx fxy ; fyx fyy | = fxx·fyy − fxy² > 0
✓ And the optimal point is a maximum if |H1| < 0 and |H2| > 0.
• The same story holds for a multi-variable function y = f(x1, x2, …, xn):
❑ If |H1|, |H2|, |H3|, …, |Hn| > 0, the critical point is a local minimum.
❑ If the leading principal minors change their signs consecutively, the critical point is a local maximum (e.g. in the case of y = f(x1, x2, x3): |H1| < 0, |H2| > 0 and |H3| < 0).
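The leading-principal-minor test above can be sketched in a few lines (the example Hessian is our own; it belongs to the convex function f = x1² + x2² + x3²):

```python
def leading_principal_minors(H):
    """Determinants of the top-left 1x1, 2x2, ... blocks of a square matrix."""
    def det(M):
        n = len(M)
        if n == 1:
            return M[0][0]
        # Laplace expansion along the first row
        total = 0.0
        for j in range(n):
            minor = [row[:j] + row[j + 1:] for row in M[1:]]
            total += ((-1) ** j) * M[0][j] * det(minor)
        return total
    return [det([row[:k] for row in H[:k]]) for k in range(1, len(H) + 1)]

# Hessian of f(x1, x2, x3) = x1**2 + x2**2 + x3**2 (constant and positive definite):
H = [[2.0, 0.0, 0.0],
     [0.0, 2.0, 0.0],
     [0.0, 0.0, 2.0]]
minors = leading_principal_minors(H)   # all positive -> local minimum
```

All three minors are positive, matching the minimum rule |H1|, |H2|, |H3| > 0.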
36. Optimisation with a Constraint
• In reality, the independent variables in a function are not fully independent from each other. They might be in a linear or even non-linear relationship with one another, which creates a constraint in the process of optimisation and changes its result.
Adopted from http://staff.www.ltu.se/~larserik/applmath/chap7en/part7.html
Adopted & altered from http://en.wikipedia.org/wiki/Lagrange_multiplier (figures: a non-linear and a linear constraint g(x, y) = c drawn in the domain of the target function)
37. Optimisation with a Constraint
• In each case, the function z = f(x, y) is the target function for optimisation, subject to a constraint g(x, y) = c (where c is a constant). So:
Max or Min: z = f(x, y)
Subject to: g(x, y) = c
• If the constraint function g(x, y) = c is linear (e.g. x − 2y = −1), one way to include the constraint in the optimisation process is to express one variable in terms of the other from the constraint function (here x = 2y − 1), substitute it into the target function to make it a function of one independent variable, z = F(y), and then follow the optimisation process for a one-variable function.
38. Example
• Example: Find the maximum of the function z = xy subject to the constraint x + y = 1.
From the constraint function we have y = −x + 1, and if we substitute this for the y in the target function we will have z = −x² + x.
dz/dx = 0 → −2x + 1 = 0 → x = 0.5
Putting this into the constraint equation to find y, and both into the target function to find z, the maximum point will be A(0.5, 0.5, 0.25).
✓ How do we know the point is a maximum point?
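A brute-force scan along the constraint gives the same answer and makes the maximum visible (a sketch; the grid resolution is an arbitrary choice):

```python
# Scan points on the constraint x + y = 1 and confirm that z = x*y
# peaks at x = 0.5 with z = 0.25.
best_x, best_z = None, float("-inf")
n = 1000
for i in range(n + 1):
    x = i / n          # x in [0, 1]
    y = 1 - x          # enforce the constraint
    z = x * y
    if z > best_z:
        best_x, best_z = x, z
```

One answer to the question on this slide: d²z/dx² = −2 < 0 for z = −x² + x, so the critical point is a maximum; the scan confirms it.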
39. The Lagrange Method
• If the constraint function is non-linear, the previous method might become very complicated. Another method, called the "Lagrange Method" or the "Method of Lagrange Multipliers", can help us find local extremum points.
• In the Lagrange method the constraint function enters the process of optimisation through a new variable λ (the Lagrange multiplier, or Lagrange coefficient), used to build the Lagrange function L of the form:
L(x, y, λ) = f(x, y) + λ·[c − g(x, y)]
• By changing x and y, a point moves on the surface of the function, but the movement is limited to the constraint g(x, y) = c.
• This means c − g(x, y) = 0 and L(x, y, λ) = f(x, y). So, the optimisation of L is equivalent to the optimisation of f.
40. The Lagrange Method
• To find the extremum values we need to take the derivatives of the Lagrange function with respect to its variables and solve the following simultaneous equations (the necessary conditions for having extrema, system A):
∂L/∂x = 0 → ∂f/∂x − λ·(∂g/∂x) = 0
∂L/∂y = 0 → ∂f/∂y − λ·(∂g/∂y) = 0
∂L/∂λ = 0 → c − g(x, y) = 0
• Solving these simultaneous equations gives us the critical values of x and y and a value for λ.
• λ shows the sensitivity of the target (objective) function to a change in the constraint function.
41. Sufficient Condition
• To make sure that the critical point(s) found by solving the simultaneous equations are extrema, we need sufficient evidence, which is the sign of the second-order differential of the Lagrange function, d²L, at the critical point(s).
• If L = f(x, y) + λ·[c − g(x, y)], then on the constraint (where c − g(x, y) = 0):
dL = df − λ·dg
and
d²L = d²f − dλ·dg − λ·d²g
Since:
➢ dg = gx·dx + gy·dy = 0 on the constraint
➢ d²f = fxx·dx² + 2fxy·dx·dy + fyy·dy²
➢ d²g = gxx·dx² + 2gxy·dx·dy + gyy·dy²
therefore:
d²L = (fxx − λ·gxx)·dx² + 2·(fxy − λ·gxy)·dx·dy + (fyy − λ·gyy)·dy² = Lxx·dx² + 2Lxy·dx·dy + Lyy·dy²
43. Sufficient Condition
• In the second form, the components of the vectors of the first differentials of the variables need to be re-arranged, i.e. d²L is the quadratic form:
              [ Lxx  Lxy  −gx ] [ dx ]
[dx dy dλ] ·  [ Lyx  Lyy  −gy ] [ dy ]
              [ −gx  −gy   0  ] [ dλ ]
• Note: In some books the constraint function g enters the Lagrange function with a positive sign, so the signs of the first derivatives of g in the bordered Hessian matrix are positive; but there is no difference between their determinants. (Based on the properties of determinants, if just one row or just one column of a matrix is multiplied by k, the determinant of the matrix is multiplied by k. In this case the first row and the first column are each multiplied by −1, so the determinant is multiplied by (−1)×(−1) = 1.)
44. Sufficient Condition
So, we have a minimum if:
1. d²L > 0 (i.e. all the bordered principal minors of the bordered Hessian matrix are negative: |H̄2|, |H̄3| < 0);
and a maximum if:
2. d²L < 0 (i.e. the bordered principal minors change their sign one after another: |H̄2| > 0, |H̄3| < 0).
• For a multi-variable function y = f(x1, x2, …, xn) the bordered Hessian matrix is (n+1)×(n+1), but the rule is the same:
• For a minimum: |H̄2|, |H̄3|, …, |H̄n| < 0.
• For a maximum: the signs of the bordered principal minors change consecutively, starting with |H̄2| > 0.
45. Example
• Find the extrema of the function f(x, y) = x − y subject to x² + y² = 100, if any.
L(x, y, λ) = x − y + λ·[100 − x² − y²]
Lx = 1 − 2λx = 0
Ly = −1 − 2λy = 0
Lλ = 100 − x² − y² = 0
From the first two equations λ can be eliminated (1/−1 = 2λx/2λy), and we have x = −y. Substituting this new equation into the third equation we will have:
100 − y² − y² = 0 → y = ±5√2
So, the critical points are A(−5√2, 5√2, −10√2) and B(5√2, −5√2, 10√2), with λ = ∓√2/20.
Without any further investigation it can be said that point A is a minimum and point B is a maximum. (Why?)
46. Example
• Using the bordered Hessian determinant method we have:
|H̄| = | 0    −2x  −2y |
      | −2x  −2λ   0  | = 8λ(x² + y²)
      | −2y   0   −2λ |
Obviously, the sign of this determinant depends on the sign of λ.
✓ At point A(−5√2, 5√2, −10√2), λ = −√2/20, so |H̄| < 0 and the point is a minimum (the 2×2 bordered minor is also negative).
✓ At point B(5√2, −5√2, 10√2), λ = +√2/20, so |H̄| > 0 and the point is a maximum.
• If there is more than one constraint, the process of optimisation is the same but there will be more than one Lagrange multiplier.
• That case is the generalisation of the present one and will not be discussed here.
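The first-order conditions of this example can be verified directly at the reported critical point B (a sketch restating the example's own equations):

```python
import math

# Stationarity of L = x - y + lam*(100 - x**2 - y**2)
# at B(5*sqrt(2), -5*sqrt(2)) with lam = sqrt(2)/20.
x = 5 * math.sqrt(2)
y = -5 * math.sqrt(2)
lam = math.sqrt(2) / 20

Lx = 1 - 2 * lam * x          # should vanish
Ly = -1 - 2 * lam * y         # should vanish
Llam = 100 - x**2 - y**2      # constraint should hold
```

All three residuals vanish, confirming that B satisfies the necessary conditions before the bordered Hessian classifies it as a maximum.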
47. Interpretation of the Lagrange Multiplier λ
• The first-order conditions, in the form of the simultaneous equations (system A, slide 40), provide the critical (and perhaps optimal) values of the independent variables (x*, y*) and the corresponding value(s) of the Lagrange multiplier (λ*).
• The Lagrange multiplier shows the sensitivity of the optimal value of the target (objective) function (f*) to a change in the constant value of the constraint function (c). It is calculated as the ratio:
λ* = df*(x*, y*)/dc
This means that if λ* = 2 and c increases by one unit, the value of the target function (calculated at the optimal values x* and y*) increases by approximately 2 units.
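This interpretation can be checked on the earlier example, generalised to max z = xy subject to x + y = c (the choice of example and step size are ours): the solution is x* = y* = c/2, so f*(c) = c²/4 and the first-order conditions give λ* = c/2.

```python
# Envelope-style check: df*/dc should equal the multiplier lambda* = c/2.
def f_star(c):
    # Optimal value of z = x*y on the constraint x + y = c
    return c**2 / 4

c = 3.0
lam_star = c / 2
h = 1e-6
df_dc = (f_star(c + h) - f_star(c - h)) / (2 * h)   # numerical df*/dc
```

The numerical derivative of the optimal value with respect to c matches λ*, which is exactly the sensitivity reading of the multiplier.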
48. Duality in Optimisation Analysis
• Consider the process of maximisation of the target (objective) function z = f(x, y), subject to the constraint c = g(x, y).
• As we know, the solution is the point of tangency between the two curves, so the process of optimisation can be approached in different ways. The primal approach is what we have discussed and done so far; the dual approach is when the constraint function c = g(x, y) becomes the new target function and z = f(x, y) the new constraint.
• The initial idea comes from the mathematical fact that if f reaches its maximum at the point x = x*, the function −f will have a minimum at that point.
• Therefore, instead of finding the maximum of z = f(x, y) subject to the constraint c = g(x, y), we can find the minimum of c = g(x, y) subject to the constraint z = f(x, y); i.e. if we know that z cannot be bigger than z*, what is the minimum value of g(x, y) which satisfies this constraint?
49. Duality in Optimisation Analysis
• Let U = U(x, y) be the utility function, subject to the budget constraint x·Px + y·Py = B.
• The Lagrange function is:
L(x, y, λ) = U(x, y) + λ·(B − x·Px − y·Py)
The first-order conditions (system B) are:
Lx = Ux − λ·Px = 0
Ly = Uy − λ·Py = 0
Lλ = B − (x·Px + y·Py) = 0
• The optimal values for x and y, which give the Marshallian demand (consumption) functions for x and y, and the optimal value for λ are:
xM = xM(Px, Py, B)
yM = yM(Px, Py, B)
λM = λM(Px, Py, B)
50. Duality in Optimisation Analysis
• Substituting these solutions into the target function gives the maximum value of utility that can be achieved under the constraint:
U* = U*( xM(Px, Py, B), yM(Px, Py, B) )
We call this the indirect utility function: it is the maximum value of utility, obtained at the optimal values of x and y, but it is an indirect function because its value now depends on the parameters Px, Py and B.
• Now, the dual problem is when the expenditure on x and y is minimised subject to maintaining a given level of utility U*. So the new Lagrange function is:
L(x, y, λ) = x·Px + y·Py + λ·[U* − U(x, y)]
The first-order conditions provide optimal solutions for x, y and λ.
51. Duality in Optimisation Analysis
The first-order conditions (system C) are:
Lx = Px − λ·Ux = 0
Ly = Py − λ·Uy = 0
Lλ = U* − U(x, y) = 0
The optimal solutions represent the demand functions for x and y:
xH = xH(Px, Py, U*)
yH = yH(Px, Py, U*)
λH = λH(Px, Py, U*)
• The first two equations are called Hicksian demand functions.
Both simultaneous equation systems B and C give us the same result:
Ux/Px = Uy/Py  or  Ux/Uy = Px/Py
So, primal and dual analysis lead us to the same conclusion. The only difference is that:
λH = 1/λM