1. EENGM0014 Mathematics for Signal Processing and Communications
Tutorial 6
Soon Yau Cheong
University of Bristol
25 Nov 2016
2. Taylor Series Expansion at nominal point x*

\[
F(x) = F(x^*) + \frac{d}{dx}F(x)\Big|_{x=x^*}(x - x^*)
+ \frac{1}{2}\frac{d^2}{dx^2}F(x)\Big|_{x=x^*}(x - x^*)^2 + \dots
+ \frac{1}{n!}\frac{d^n}{dx^n}F(x)\Big|_{x=x^*}(x - x^*)^n
\]

In matrix form:

\[
F(x) = F(x^*) + \nabla F(x)^T\big|_{x=x^*}(x - x^*)
+ \frac{1}{2}(x - x^*)^T\,\nabla^2 F(x)\big|_{x=x^*}(x - x^*) + \dots
\]
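As a numerical sanity check of the scalar expansion (not from the slides), the derivatives can be estimated by central finite differences; `taylor_1d`, the step size `h`, and the choice of exp as the test function are my own illustrative assumptions:

```python
import math

def taylor_1d(F, x_star, x, n):
    """n-th order Taylor polynomial of F about x_star, evaluated at x.
    Derivatives are estimated by central finite differences with step h,
    so this is only a numerical sketch, not an exact expansion."""
    h = 1e-2
    total = 0.0
    for k in range(n + 1):
        # k-th derivative at x_star via the central-difference formula
        dk = sum((-1)**j * math.comb(k, j) * F(x_star + (k / 2 - j) * h)
                 for j in range(k + 1)) / h**k
        total += dk / math.factorial(k) * (x - x_star)**k
    return total

# third-order expansion of exp about x* = 0, evaluated at x = 0.1
approx = taylor_1d(math.exp, 0.0, 0.1, 3)
print(approx, math.exp(0.1))
```

Close to the nominal point the truncated polynomial tracks the true function to within the omitted higher-order terms plus the finite-difference error.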
Soon Yau Cheong (University of Bristol) EENGM0014 Mathematics for Signal Processing and Communications25 Nov 2016 2 / 8
3. Example: Approximate F(x) = sin(x) at x* = 0

\[
F(x) = \sin(0) + \cos(0)(x - 0) - \frac{1}{2}\sin(0)(x - 0)^2
- \frac{1}{6}\cos(0)(x - 0)^3 + \dots
= x - \frac{1}{6}x^3 + \dots
\]
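A quick numerical check of this approximation (an illustrative sketch; the test points are arbitrary):

```python
import math

# Compare sin(x) with its third-order Taylor polynomial x - x^3/6 near x* = 0.
errors = {}
for x in (0.1, 0.3, 0.5):
    poly = x - x**3 / 6
    errors[x] = abs(math.sin(x) - poly)
    print(f"x={x:.1f}  sin(x)={math.sin(x):.6f}  x-x^3/6={poly:.6f}  error={errors[x]:.2e}")
```

The error grows like x^5/120 (the first omitted term), so the approximation is excellent close to the expansion point and degrades as x moves away from it.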
Soon Yau Cheong (University of Bristol) EENGM0014 Mathematics for Signal Processing and Communications25 Nov 2016 3 / 8
4. Substituting \Delta x = x - x^* into

\[
F(x) = F(x^*) + \nabla F(x)^T\big|_{x=x^*}(x - x^*)
+ \frac{1}{2}(x - x^*)^T\,\nabla^2 F(x)\big|_{x=x^*}(x - x^*) + \dots
\]

we get

\[
F(x^* + \Delta x) = F(x^*) + \nabla F(x)^T\big|_{x=x^*}\,\Delta x
+ \frac{1}{2}\Delta x^T\,\nabla^2 F(x)\big|_{x=x^*}\,\Delta x + \dots
\]
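The matrix form can be checked numerically. The scalar field F, its hand-computed gradient and Hessian, and the expansion point below are my own illustrative choices, assuming numpy is available:

```python
import numpy as np

def F(x):
    # example scalar field (my own choice, not from the slides)
    return np.sin(x[0]) + x[0] * x[1]**2

def grad_F(x):
    # analytic gradient of F
    return np.array([np.cos(x[0]) + x[1]**2, 2 * x[0] * x[1]])

def hess_F(x):
    # analytic Hessian of F
    return np.array([[-np.sin(x[0]), 2 * x[1]],
                     [2 * x[1],      2 * x[0]]])

x_star = np.array([0.2, 0.3])
dx = np.array([0.01, -0.02])

# second-order expansion: F(x*+dx) ≈ F(x*) + ∇F(x*)ᵀ dx + ½ dxᵀ ∇²F(x*) dx
approx = F(x_star) + grad_F(x_star) @ dx + 0.5 * dx @ hess_F(x_star) @ dx
exact = F(x_star + dx)
print(approx, exact)
```

For a small step the second-order expansion agrees with the exact value up to third-order terms in |Δx|.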
Soon Yau Cheong (University of Bristol) EENGM0014 Mathematics for Signal Processing and Communications25 Nov 2016 4 / 8
5. Derivative of a Linear Matrix Transform

With Y \in \mathbb{R}^m, X \in \mathbb{R}^n, A \in \mathbb{R}^{m \times n}:

\[
Y = AX, \qquad
\begin{pmatrix} y_1 \\ \vdots \\ y_m \end{pmatrix}
=
\begin{pmatrix}
a_{11} & a_{12} & \dots & a_{1n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \dots & a_{mn}
\end{pmatrix}
\begin{pmatrix} x_1 \\ \vdots \\ x_n \end{pmatrix}
\]

\[
y_i = \sum_{j=1}^{n} a_{ij} x_j
\;\Rightarrow\;
\frac{\partial y_i}{\partial x_k} = a_{ik}
\]

\[
\nabla_x Y =
\begin{pmatrix}
\frac{\partial y_1}{\partial x_1} & \frac{\partial y_1}{\partial x_2} & \dots & \frac{\partial y_1}{\partial x_n} \\
\vdots & \vdots & \ddots & \vdots \\
\frac{\partial y_m}{\partial x_1} & \frac{\partial y_m}{\partial x_2} & \dots & \frac{\partial y_m}{\partial x_n}
\end{pmatrix}
=
\begin{pmatrix}
a_{11} & a_{12} & \dots & a_{1n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m1} & a_{m2} & \dots & a_{mn}
\end{pmatrix}
\]

\[
\nabla_x Y = A
\]
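This result can be confirmed with a finite-difference Jacobian; the random A and x, the dimensions, and the step h below are illustrative assumptions, using numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))   # A ∈ R^{3×4}
x = rng.standard_normal(4)

# finite-difference Jacobian of Y = AX: J[i, k] ≈ ∂y_i/∂x_k
h = 1e-6
J = np.empty_like(A)
for k in range(4):
    e = np.zeros(4)
    e[k] = h
    J[:, k] = (A @ (x + e) - A @ (x - e)) / (2 * h)

# the Jacobian of a linear map is A itself, so this difference is ≈ 0
print(np.max(np.abs(J - A)))
```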
Soon Yau Cheong (University of Bristol) EENGM0014 Mathematics for Signal Processing and Communications25 Nov 2016 5 / 8
6. Derivative of Quadratic Form

Given a square matrix A \in \mathbb{R}^{n \times n} and a vector X \in \mathbb{R}^n, the scalar value X^T A X is called a quadratic form.

\[
X^T A X = \sum_{i=1}^{n} x_i (AX)_i
= \sum_{i=1}^{n} x_i \sum_{j=1}^{n} a_{ij} x_j
= \sum_{i=1}^{n} \sum_{j=1}^{n} a_{ij} x_i x_j
\]

\[
\frac{\partial (X^T A X)}{\partial x_k}
= \sum_{i=1}^{n} a_{ik} x_i + \sum_{j=1}^{n} a_{kj} x_j
= (A^T X)_k + (AX)_k
\]

If A is symmetric:

\[
\nabla (X^T A X) = 2AX, \qquad \nabla^2 (X^T A X) = 2A^T = 2A
\]
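A sketch verifying ∇(XᵀAX) = 2AX for a symmetric A, with a randomly generated matrix and vector (my own assumptions, not from the slides):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))
A = (M + M.T) / 2          # symmetrise so the closed form 2AX applies
x = rng.standard_normal(3)

def q(v):
    # the quadratic form vᵀAv
    return v @ A @ v

# finite-difference gradient vs. the closed form 2AX
h = 1e-6
g_fd = np.array([(q(x + h * e) - q(x - h * e)) / (2 * h)
                 for e in np.eye(3)])
g_closed = 2 * A @ x
print(np.max(np.abs(g_fd - g_closed)))   # ≈ 0
```

For a general (non-symmetric) A the same check would instead match (Aᵀ + A)X, consistent with the per-component derivative above.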
Soon Yau Cheong (University of Bristol) EENGM0014 Mathematics for Signal Processing and Communications25 Nov 2016 6 / 8
7. Hessian and Shape of a Quadratic Function

We can tell the shape of a quadratic function from the Hessian and its eigenvalues and eigenvectors:
1 There is a local minimum if the Hessian is positive definite; conversely, a local maximum if the Hessian is negative definite (see lecture notes).
2 The gradient is steepest in the direction of the eigenvector of the Hessian with the largest eigenvalue (see the solution of Question 2 in the week 9 problem sheet, and Chapter 8 of "Neural Network Design", 2nd ed., by Martin Hagan).
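The definiteness test in point 1 can be sketched as a small classifier over the Hessian's eigenvalues; `classify_stationary_point` is a hypothetical helper name and the example matrices are my own, assuming numpy:

```python
import numpy as np

def classify_stationary_point(H):
    """Classify a stationary point from the eigenvalues of its Hessian H
    (assumed symmetric): all positive -> positive definite -> minimum;
    all negative -> negative definite -> maximum; otherwise neither."""
    eigvals = np.linalg.eigvalsh(H)
    if np.all(eigvals > 0):
        return "local minimum"
    if np.all(eigvals < 0):
        return "local maximum"
    return "saddle or degenerate"

print(classify_stationary_point(np.array([[2.0, 1.0], [1.0, 2.0]])))   # local minimum
print(classify_stationary_point(np.array([[1.0, 0.0], [0.0, -1.0]])))  # saddle or degenerate
```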
8. Examples

\[
F(X) = x_1^2 + x_1 x_2 + x_2^2
= \frac{1}{2} X^T \begin{pmatrix} 2 & 1 \\ 1 & 2 \end{pmatrix} X
\]

Find the eigenvalues and eigenvectors of the Hessian of F(X):

\[
\lambda_1 = 1,\; v_1 = [1 \ {-1}]^T, \qquad
\lambda_2 = 3,\; v_2 = [1 \ 1]^T
\]
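These values can be reproduced with numpy's symmetric eigendecomposition (eigenvector signs and normalisation may differ from the slide's unnormalised vectors):

```python
import numpy as np

H = np.array([[2.0, 1.0],
              [1.0, 2.0]])           # Hessian of F(X) = x1² + x1x2 + x2²
eigvals, eigvecs = np.linalg.eigh(H)  # columns of eigvecs are eigenvectors
print(eigvals)                        # eigenvalues in ascending order: [1. 3.]
for lam, v in zip(eigvals, eigvecs.T):
    print(lam, v)                     # each v is unit-norm, sign may flip
```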